id
stringlengths
3
9
source
stringclasses
1 value
version
stringclasses
1 value
text
stringlengths
1.54k
298k
added
stringdate
1993-11-25 05:05:38
2024-09-20 15:30:25
created
stringdate
1-01-01 00:00:00
2024-07-31 00:00:00
metadata
dict
270720973
pes2o/s2orc
v3-fos-license
Low molecular weight heparin promotes the PPAR pathway by protecting the glycocalyx of cells to delay the progression of diabetic nephropathy Diabetic nephropathy (DN) is one of the most important comorbidities for diabetic patients, which is the main factor leading to end-stage renal disease. Heparin analogs can delay the progression of DN, but the mechanism is not fully understood. In this study, we found that low molecular weight heparin therapy significantly upregulated some downstream proteins of the peroxisome proliferator–activated receptor (PPAR) signaling pathway by label-free quantification of the mouse kidney proteome. Through cell model verification, low molecular weight heparin can protect the heparan sulfate of renal tubular epithelial cells from being degraded by heparanase that is highly expressed in a high-glucose environment, enhance the endocytic recruitment of fatty acid–binding protein 1, a coactivator of the PPAR pathway, and then regulate the activation level of intracellular PPAR. In addition, we have elucidated for the first time the molecular mechanism of heparan sulfate and fatty acid–binding protein 1 interaction. These findings provide new insights into understanding the role of heparin in the pathogenesis of DN and developing corresponding treatments. Diabetic nephropathy (DN) is one of the most important comorbidities for diabetic patients, which is the main factor leading to end-stage renal disease.Heparin analogs can delay the progression of DN, but the mechanism is not fully understood.In this study, we found that low molecular weight heparin therapy significantly upregulated some downstream proteins of the peroxisome proliferator-activated receptor (PPAR) signaling pathway by label-free quantification of the mouse kidney proteome.Through cell model verification, low molecular weight heparin can protect the heparan sulfate of renal tubular epithelial cells from being degraded by heparanase that is highly expressed in a high-glucose environment, enhance the endocytic recruitment of fatty acid-binding protein 1, a coactivator of the PPAR pathway, and then regulate the activation level of intracellular PPAR.In addition, we have elucidated for the first time the molecular mechanism of heparan sulfate and fatty acid-binding protein 1 interaction.These findings provide new insights into understanding the role of heparin in the pathogenesis of DN and developing corresponding treatments. Diabetic nephropathy (DN) is a major microvascular complication of diabetes, occurring in approximately 30% of patients with type 1 diabetes mellitus and approximately 40% of patients with type 2 diabetes mellitus.DN is characterized by persistent proteinuria, elevated arterial blood pressure and a persistent decrease in glomerular filtration rate and is the leading cause of morbidity and mortality in diabetics, leading not only to end-stage renal disease but also an increase in cardiovascular adverse events (1,2).At present, the focus of clinical treatment of DN is mainly antihypertensive and antiproteinuria, but the specific treatment of DN has not been determined (3).Therefore, accelerating the development of novel therapeutics is critical to improving the current disease situation. 
Proteoglycans (PGs) are components of the glycocalyx in the extracellular matrix, and the glycosaminoglycan (GAG) glycan chains on PGs (particularly heparan sulfate (HS) PGs, HSPGs) play a key role in cellular and tissue homeostasis by interacting with a variety of proteins and regulating various processes such as proliferation, differentiation, angiogenesis and inflammation, thereby participating in and intervening in a variety of human diseases, including DN (4,5).Several studies have shown that long-term hyperglycemia in diabetic patients induces the alteration and destruction of HS molecular structure on HSPG through multiple pathways and plays an important role in DN (6,7).The glomerular basement membrane (GBM) is an ordered network composed of proteins and HSPGs, on which the HS determines its ionic charge permeability characteristics, and the upregulation of heparanase expression in diabetic patients leads to a decrease in the content of heparin sulfate in GBM, which in turn leads to changes in the permeability of negatively charged macromolecules (such as albumin), resulting in proteinuria (6,8).In addition, the increased levels of reactive oxygen species induced by hyperglycemia not only degrades GAGs on glycocalyxes but also activates matrix metalloproteinases to induce proteolysis of sugar yeast, thereby leading to glycocalyx shedding and promotes kidney disease (9).Therefore, maintaining or restoring the integrity of the glycocalyx appears to be a promising therapeutic target. In addition to being an anticoagulant, heparin also has unique properties of high negative charge and high heterogeneity, can interact with various proteins, exert a variety of nonanticoagulant activities, and show new and unexpected therapeutic effects in DN (10).Firstly, low molecular weight heparin (LMWH) can improve blood rheology and renal microcirculation in diabetic patients, delaying glomerular sclerosis, and reducing intrarenal circulation resistance (11).Secondly, heparin has certain antiinflammatory and antioxidant functions, which can inhibit inflammatory responses and renal cell damage (12)(13)(14).Furthermore, heparin or heparin analogs can act as heparanase inhibitors in vivo, reducing the degradation of HS on GBM, and protecting the glomerular barrier (8,15).Moreover, LMWH can bind to the receptor for advanced glycation end product (RAGE) as an antagonist of RAGE and improve the various indicators of DN (16).Nevertheless, the cellular and molecular mechanisms involved (functional glycan chain structure and target protein) are not fully understood and need to be further explored and elucidated. 
HS participates in various pathophysiological processes by interacting with a variety of proteins.Therefore, it is necessary to understand how exogenous heparin/heparin analogs affect changes in the proteome for revealing abovementioned mechanism.In this study, we analyzed the renal proteome of mice in the DN and LMWH treatment groups and found that LMWH could significantly promote the high expression of fatty acid-binding protein 1 (FABP1), Acaa1b, Acox2, Hmgcs2, and Pltp downstream of the peroxisome proliferatoractivated receptors (PPARs) pathway, indicating that LMWH could promote the PPAR pathway to protect the kidney.LMWH could protect the HS of renal tubular cells from heparanase destruction and maintain HS-mediated endocytosis of FABP1 in renal tubules and the activation level of PPAR.Furthermore, we also characterize the sequence characteristics and molecular mechanisms of HS and FABP1 interactions. Polymerization distribution analysis of LMWH LMWH is a U.S. Food and Drug Administration-approved drug that contains a mixture of oligosaccharides.We prepared the LMWH from unfractionated as described under "Experimental procedures."The prepared LMWH was analyzed by size-exclusion chromatography (SEC), and its degree of polymerization was mainly distributed in dp4-dp24 (Fig. S1).The molecular weight distribution of the LMWH prepared by us is consistent with the literature report (17). LMWH can alleviate renal pathological changes in DN mice The establishment of the animal DN model and the experimental process of LMWH intervention are shown in Figure 1A.After 8 weeks of high-fat, high-sugar diet, streptozocin (75 mg/kg) was injected for three consecutive days to construct T2DM.The fasting blood glucose was greater than 16 mmol/L 1 week later, suggesting that the diabetes model was successful.After 6 weeks of continued feeding, urine albumin/creatinine ratio (ACR) was significantly higher than that of the normal group, indicating the development of DN (Fig. 1B).We found that LMWH administration for 8 weeks significantly decreased ACR of the mice with DN (Fig. 1C).Kidney sections were stained with periodic acid shiff (PAS) (Fig. 1D and five glomeruli per piece were randomly selected per piece to assess glomerular hypertrophy by cell counting (Fig. 1E).The relative number of pixels (pink or red area) divided by the total glomerular area was measured to assess mesangial interstitial dilation (Fig. 1F).The evaluation results showed that LMWH treatment significantly delayed the progression of DN, which is basically consistent with the results of previous studies (16). Proteomic analysis The protective effects of heparin or heparin analogs on the diabetic kidney have been well studied (8,16,18,19).However, its mechanism has not fully understood, especially, the effects of LMWH intervention on the renal proteome have not been reported.It is well known that proteins play an irreplaceable role in physiology/pathology, and in vivo HS or exogenous LMWH may function by interacting with various target proteins.To further understand the mechanism of LMWH in improving DN, we analyzed it by renal proteomics (Fig. 2A).A total of 4955 proteins were identified using label-free quantification techniques and quantitatively compared with at least two replicates (Fig. 
2B).In the quantitative comparison between the LMWH treatment group and the DN model group, the expression levels of 272 proteins were significantly changed, with fold changes greater than two or less than 0.5, p < 0.05 (Table S1).Among them, 169 proteins were upregulated and 103 proteins were downregulated in the LWMH treatment group (Fig. 2C).It is unclear how many of these altered proteins are closely related to the role of heparin, so it makes sense to look for key proteins and explore the causes of their expression changes and the mechanism of their effects. Bioinformatics analysis and screening of key proteins To screen the key proteins involved in kidney protection among the changed proteins, we performed GO and Kyoto encyclopedia of genes and genomes (KEGG) analysis and visualization of the 272 proteins that produced the changes in https://www.bioinformatics.com.cn, an online platform for data analysis and visualization. GO analysis can define and describe the function of genes and proteins, and the GO database is divided into three categories: cellular component, biological process (BP), and molecular function.The enriched cellular component mainly includes actin cytoskeleton, myocoagulin complex, myofibrils, etc. (Fig. 2D).Actin cytoskeleton provides structural and functional support, generates a framework for cells and their connection with the extracellular matrix, and the complex regulation of podocytes actin cytoskeleton is the basis for maintaining an intact glomerular filtration barrier (20).The main BP involved were the response to oxidative stress, Golgi apparatus-related transport, purine metabolism, amino acid metabolism, etc. (Fig. 2E).Among them, oxidative stress plays a very important role in the course of DN, which directly leads The molecular mechanism of LMWH improves DN to renal interstitium, glomeruli, and renal podocytes damage, and then damages the function of the kidney (21).The enriched molecular function mainly contains NADH dehydrogenase activity and protein binding (Fig. 2F).Among them, NADH dehydrogenase is the main component of mitochondrial respiratory chain complex I. Studies have shown that hyperglycemia can lead to mitochondrial dysfunction, which in turn inhibits respiratory chain complex I, which leads to the production of large amounts of reactive oxygen species to induce oxidative stress (22).GO analysis showed that LMWH may protect diabetic kidneys by participating in maintaining the renal actin backbone, protecting mitochondrial function, regulating oxidative stress, and other functions. The KEGG analysis results are shown in Figure 2G.Among them, the enrichment of pathways is related to neurodegenerative diseases such as Parkinson's and Alzheimer's disease, which is not difficult to understand, because high blood glucose and high blood lipids can lead to mitochondrial dysfunction, which in turn is closely related to neurodegenerative diseases (23).We note that the lipid sensor-PPAR pathway, which regulates whole-body energy metabolism and is involved in diabetes and DN, was enriched.PPAR can reduce hyperglycemia-induced oxidative stress and apoptosis, and improve endothelial and podocyte function (24).In addition, PPAR agonists (such as fibrates for PPARa and glitazone for PPARg) have been used for decades to treat dyslipidemia and diabetes (25).The PPAR signaling pathway and the location of the differentially expressed proteins Fabp1, Acaa1b, Acox2, Hmgcs2, Pltp, and Apoa2 involved in the pathway are shown in Fig. 
S2.These six proteins are the downstream target proteins of PPAR, and all them except Apoa2 were upregulated in the LMWH treatment group, suggesting that LMWH would protect diabetic kidneys by enhancing the PPAR pathway. To further screen for key proteins affected by LMWH, the protein interaction network (medium confidence (0.4)) of 272 proteins was analyzed on STRING, the protein-protein interaction was visualized using Cytoscape software (https:// cytoscape.org/)(Fig. S3), and subnet enrichment was performed using the MCODE plugin.The participating PPAR pathways Acox2, Hmgcs2, Fabp1, and Acaa1b were clustered into the same subnetwork (Fig. 2H), and all five proteins belonged to the downstream target genes of PPARa (26).Among them, intracellular FABP1 concentration was positively correlated with the transactivation of PPARa and PPARg (27)(28)(29).In addition, FABP1 is thought to be a renal endogenous antioxidant that can inhibit tubulointerstitial damage (30).FABP1 is mainly expressed in the proximal renal tubule of the ).E, glomerular cell number (n = 3, five glomeruli were randomly selected from each section for counting).F, mesangium fraction (n = 3, five glomeruli were randomly selected from each section to calculate the ratio of positive staining area to glomerular area).The data from the two cohorts were subjected to analysis using the independent samples t test.For comparisons involving multiple groups, one-way ANOVA followed by Dunnett's post hoc test was employed.Significance levels are denoted as follows: * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001.ACR, albumin/ creatinine ratio; DN, diabetic nephropathy; LMWH, low molecular weight heparin; PAS, periodic acid shiff. The molecular mechanism of LMWH improves DN kidney, and the FABP1 filtered out by the glomeruli in the internal circulation is reabsorbed in the renal tubule, so the increase of FABP1 in the kidney is not necessarily due to the increase in expression, but may also be caused by the endocytosis of the filtered FABP1 by the proximal tubule, so treatment with LMWH may cause endocytosis of FABP1 that affects the renal tubules.Therefore, the protein FABP1 in the PPAR pathway was selected as a key protein in the influence of LMWH to conduct subsequent mechanism studies. Verification of FABP1, Acox2, Hmgcs2, PLTP, and Acaa1b expression changes To further confirm the impact of LMWH treatment on the protein expression levels of FABP1, Acox2, Hmgcs2, Fabp1, The molecular mechanism of LMWH improves DN and Acaa1b, we employed immunohistochemistry (IHC) to assess the actual expression levels of FABP1, Acox2, Hmgcs2, PLTP, and Acaa1b in the kidneys of mice.The results of IHC analysis were calculated with Image-Pro Plus 6.0 to calculate the average absorbance of positive staining of sections, and three fields of view were randomly taken for quantification per section (Fig. 3A).Compared with the DN group, the expressions of FABP1, Acox2, Hmgcs2, PLTP, and Acaa1 in the LMWH treatment group were significantly upregulated, which was close to that in the normal group.The results of IHC analysis were consistent with the results of proteome quantification (Fig. 3B), indicating that LMWH treatment could reverse the expression levels of FABP1, Acox2, Hmgcs2, PLTP, and Acaa1b in DN to some extent. 
Changes in the HS chain on glycocalyx of renal tubular cells affect their endocytosis to FABP1 Since FABP1 is expressed in the proximal tubule of the kidney and FABP1 involved in the internal circulation is also reabsorbed in the renal tubule, the reduced loss of FABP1 protein in the internal circulation is speculated to be one of the mechanisms by which LMWH upregulated its levels.We used human renal cortical proximal convoluted tubular epithelial cells (HK-2) treated with high-glucose-high-fat to establish a cell model. Superpositively charged GFP (scGFP) was used to observe the highly negatively charged fraction on the surface of HK-2 cells.After heparinase treatment, the green fluorescence on the surface of HK-2 cells was significantly attenuated, indicating that the highly negatively charged components on the surface of HK-2 cells were HS (Fig. 4A).The results of quantification of fluorescence intensity showed that the high negatively charged glycan linkages on the surface of HK-2 cells were significantly reduced by the high-glucose-high-fat treatment,and the LMWH treatment group effectively protected the glycan chains on the cell surface from degradation (Fig. 4B).HS consists of eight different repeating disaccharide units with different sulfated and N-acetylated substitutions, so quantification of eight disaccharides can directly reflect the amount of cellular HS.To further reveal the structural changes in HS at the molecular level, eight disaccharides were quantitatively analyzed using LC/MS-multiple reaction monitoring (MRM) with stable isotope internal standards.The results showed that the levels of eight disaccharides in high-glucosehigh-fat group was significantly lower than those in the normal group (Fig. 4C), indicating that the high-glucose-highfat treatment would destroy the HS chain on the cell surface.However, its sulfation modification level is not affected, as the molar percentage of each disaccharide does not change significantly (Fig. 4D).In addition, in the endocytosis experiment, HK-2 in the normal group recruited a considerable amount of recombinant human FABP1 (rhFABP1) through endocytosis, and the rhFABP1 in HK-2 endocytosis in the high-glucose-high-fat group was significantly reduced, while this phenomenon was reversed to a certain extent in the LMWH treatment group (Fig. 4E), indicating that LMWH protects HS from disruption and plays an important role in mediating HK-2 endocytosis FABP1. 
Changes in endocytic FABP1 affect the level of activation of PPAR Endogenous FABP1 is known to be a coactivator of PPAR, which can promote the transport of PPAR ligands such as free fatty acids to PPARs, thereby enhancing transcriptional regulation.To verify the effect of changes in endocytotic FABP1 on cellular PPAR activation, we utilized the PPAR-Luc luciferase reporter containing multiple PPAR binding sites to detect the activation level of PPAR in HK-2 cells.First, the PPAR-Luc luciferase reporter plasmid was transfected into HK-2 cells, and then treated with rhFABP1 with different administrations, and the luciferase activity was detected, and the results are shown in Figure 4F.The results showed that luciferase activity decreased and PPAR activation level decreased after high-glucose-high-fat treatment, and luciferase activity and PPAR activation level were reversed after LMWH combined treatment.The results showed that endocytosed FABP1 could affect the activation level of PPAR.At the same time, we also performed Western blot analysis on five PPAR downstream target gene proteins that were upregulated in the proteome (Fig. 4G), and the results showed that the activation of PPAR upregulated the target gene proteins, which was consistent with the results of animal experiments. It is known that the high glucose environment increases the protein O-GlcNAcylation, and the abnormal O-GlcNAcylation of the protein plays a crucial role in the etiology and progression of diabetes and diabetic complications (31).The O-GlcNAcylation of PPAR may alter its transcriptional activity (32), to further evaluate the direct effect of high-glucose-highfat treatment on PPAR, we did not significantly change the activation level of PPAR in the detection of luciferase activity in high-glucose-high-fat treated HK-2 cells, indicating that the high-glucose-high-fat environment did not directly affect the activation of PPAR (Fig. 4H). Protective effect of FABP1 on HK-2 cells in a high-glucosehigh-fat environment To verify the role of FABP1 in DN, we used a cell model to silence FABP1 and detect changes in apoptosis of HK-2 cells in a high-glucose-high-fat environment, as shown in Figure 5.There was no significant change in the apoptosis rate in the mannitol treatment group with the same osmotic pressure as the high glucose medium, indicating that the osmotic pressure had no effect on the apoptosis of HK-2.The apoptosis of HK-2 cells increased significantly after high-glucose-high-fat stimulation, and the apoptosis rate of HK-2 cells in the high-glucose-high-fat environment after FABP1 silencing further increased, indicating that FABP1 had a protective effect on HK-2 cells in the high-glucosehigh-fat environment. Effects of high-glucose-high-fat on HS synthesis and degrading enzymes in HK-2 We investigated the expression changes of HK-2 cells involved in HS synthase and HS degrading enzymes using quantitative RT-PCR to further understand the possible mechanism of HS shedding on the surface of HK-2 cells exposed to high-glucose-high-fat (Fig. 
6).The findings indicated a significant upregulation in the expression of glycosyltransferases exostosin-like 1 and exostosin-like 2 in HK-2 cells subjected to high-glucose-high-lipid conditions.Conversely, The molecular mechanism of LMWH improves DN The molecular mechanism of LMWH improves DN the expression levels of exostosin-1 and exostosin-2 were downregulated, implying that high-glucose-high-fat conditions influence HS biosynthesis in cells.However, the current data did not exhibit a clear trend, making it difficult to ascertain whether the synthesis is ultimately promoted or inhibited.Nevertheless, the HS-degrading enzyme heparanase (HSPE) exhibited elevated expression, elucidating the primary cause of reduced HS levels on the surface of HK-2 cells within a hyperglycemic and hyperlipidemic milieu.Moreover, we assessed the alterations in the expression of HS synthase and degradative enzymes under the influence of mannitol (which mimics the osmotic pressure of high glucose) as illustrated in Figure 6, to explore the potential impact of osmotic pressure on HS depletion.It was observed that HSPE expression was markedly upregulated, although the effect of osmotic pressure on HSPE expression was comparatively less pronounced than that in the high-glucose-high-fat environment. Molecular mechanism of binding between FABP1 and heparin With advances in analytical methods, the specific veil of GAG-protein interactions is being lifted.To elucidate the sequence characteristics of HS binding to FABP1, we used our previously developed Sep-GAG software combined with offline strong anion exchange chromatography (SAX)-tandem mass spectrometry (MS/MS) methods for characterization (33).Firstly, our biolayer interferometry (BLI) results showed that the affinity between FABP1 and LMWH was very strong (K D = 1.53 × 10 -9 mol/L), indicating that FABP1 belonged to HSBP (Fig. 7A). As shown in Figure 7B, LMWH was first fractionated by FABP1 affinity chromatography, with low salt solutions salts eluting nonaffinity components and high salt solutions eluting as affinity components.SEC was used to compare the changes in the polymerization degree of affinity components, nonaffinity components and LMWH, where the polymerization degree of affinity component oligosaccharide chains favored octasaccharide (dp8) and larger oligosaccharide chains, indicating that FABP1 has a certain length dependence on the binding of oligosaccharide chains.Therefore, we used affinity chromatography to enrich dp8 for population sequencing, in which complete enzymatic hydrolysis and nitrite degradation can provide information on the disaccharide composition units information of affinity dp8.Unfortunately, because enzymatic hydrolysis can well retain sulfate modification information and the hexosamine information well, it destroys the epimeric information of the C-5 uronic acid by forming unsaturated double bonds at positions 4,5 of the uronic acid, and nitrite degradation can compensate for this deficiency (Fig. 7C).The disaccharide information after complete enzymatic hydrolysis is shown in Figure 7D, and the DIS (DUA2S-GlcNS6S) content in FABP1-affinity dp8 exceeds the DIS content in LMWH-dp8 by 18.8%.The results of nitrite degradation showed that the content of IdoA2S-aMan6S in ABP1-affinity dp8 exceeds the content of IdoA2S-aMan6S in LMWH-dp8 by 14.2% (Fig. 
7E).The information of the disaccharide constituent units indicated that FABP1 was more inclined to the IdoA2S-GlcNS6S structure of the junction and LMWH.The top 20 theoretical affinity dp8 sequences revealed by Seq-GAG software were present in Figure 7F.To obtain accurate oligosaccharide sequences, we first used SAX to further isolate affinity-dp8, which was characterized by MS/MS.A total of five major chromatographic peaks (P1-P5) were isolated from SAX (Fig. 7G), and the oligosaccharide sequences characterized by MS/MS are shown in Figure 4H, and the MS/MS mass spectra were shown in Fig. S4.Combined with the Sep-GAG sequencing and MS/MS results, the sequence structure of five affinity-dp8 (Fig. 7I) was obtained, and they had a common sequence "DUA2S-GlcNS6S-IdoA2S-GlcNS6S," where DUA2S could be IdoA2S or GlcA2S in the parent glycan chain.Therefore, we conclude that HS of tubular cells should contain a structure of "IdoA2S/GlcA2S-GlcNS6S-IdoA2S-GlcNS6S-IdoA2S-GlcNS6S" by which mediating FABP1 endocytosis.were selected and the fluorescence of all cells in each field of view was quantified).The scale bar represents 20 mm.F, assess the effect of altered intracellular FABP1 levels on PPAR signaling using luciferase reporter assays.G, Western blot analysis of target gene proteins downstream of PPAR.H, analysis of the effect of high-glucose-high-fat on PPAR activation.The data from the two cohorts were subjected to analysis using the independent samples t test.For comparisons involving multiple groups, one-way ANOVA followed by Dunnett's post hoc test was employed.Significance levels are denoted as follows: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.FAB1, fatty acid-binding protein 1; HS, heparan sulfate; LC, liquid chromatography; LMWH, low molecular weight heparin; MRM, multiple reaction monitoring; PPAR, peroxisome proliferator-activated receptor. The molecular mechanism of LMWH improves DN Furthermore, we employed GlycoTorch Vina to model the interaction between heparin hexasaccharide (PDB: 1FQ9) and FABP1 (PDB: 7FYA) (Fig. 8).The calculated optimal binding free energy was −3.6 kcal/mol, facilitated through both electrostatic and hydrogen bonding interactions.Key lysine residues (K90, K96, and K99) in FABP1 were essential for ligand binding, establishing critical electrostatic contacts.Each monosaccharide unit within the hexasaccharide chain contributed to the binding, with N-sulfation and 2-O-sulfation playing crucial roles. Discussion DN is an important cause of end-stage renal disease, affecting 40% of patients with diabetes.DN may occur in patients with previous hyperglycemic exposure, even if glycemic control is reasonable (16,34).The protective effect of heparin analogs on the diabetic kidney has long been reported clinically, which can restore the barrier function of the GBM by protecting glomerular capillary HS, and can also be used as an antagonist of RAGE to treat DN (8,16,35).Therefore, further exploration of the mechanism of action of heparin analogs is very valuable for the development of drugs to treat the diabetic kidney.HS can mediate many physiological processes by interacting with a variety of HSBPs (proteases, growth factors, cytokines, chemokines, and adhesion molecules) (5).LMWH can protect against the degradation of cellular glycocalyx HS in long-term hyperglycemic environments, thereby affecting HSBPs. 
PPARs are ligand-activated nuclear transcription factors that include three subtypes: PPARa, PPARb, and PPARg.It plays an important role in BPs such as lipid metabolism, glucose homeostasis, cell cycle progression, cell differentiation, inflammation, and extracellular matrix remodeling (36).Many studies have shown that the availability of selective agonists and antagonists of PPARs may provide new avenues for the treatment of DN.Among them, the PPAR-a agonist fenofibrate prevented DN by improving the function of db/db mouse endothelial cells and inhibiting M1 macrophages (37).PPARg agonists such as troglitazone (Rezulin), pioglitazone (ACTOS), and rogradone (Avandia) reduce insulin resistance, hyperinsulinemia, and hyperglycemia in diabetics (36).Therefore, studying the PPAR pathway may provide a new way to regulate DN. In this study, we verified in animal experiments that LMWH treatment can improve DN and revealed for the first time the changes in the renal proteome after LMWH treatment.The downstream proteins FABP1, Acaa1b, Acox2, Hmgcs2, and The molecular mechanism of LMWH improves DN The molecular mechanism of LMWH improves DN PLTP of the PPAR pathway were upregulated, so LMWH may improve DN by enhancing the PPAR pathway.FABP1 is mainly expressed in the proximal renal tubules, filtered through the glomeruli in circulation, and then reabsorbed in the renal tubules.Therefore, the elevated level of FABP1 in urine can be used as a marker of DN (38).Importantly, FABP1 has a protective effect in acute kidney injury and chronic kidney disease and may reduce glomerular injury in the early stages of IgAN (39,40).Therefore, in this study, FABP1 was investigated as a representative HSBP.LMWH has been proved to protect the glomerular endothelial HS and the normal barrier function of the glomerulus, thereby reducing protein loss (8).In this study, we visualized the HS of tubular epithelial cells by fluorescent protein labeling and quantified the changes in HS content by the stable isotope internal standard liquid chromatography-MRM.Immunofluorescence localization analysis was used to elucidate the mechanism by which HS is involved in the endocytic recruitment of FABP1 in renal tubules, and LMWH could protect glycocalyx HS and reduce the degradation induced by high glucose and high lipids.Furthermore, studies have shown that FABP1 is a coactivator of PPAR-mediated gene expression, as FABP1 acts as a cytoplasmic channel for PPARa and PPARg agonists such as fatty acids (27,41), and in this study, we utilized a luciferase reporter gene to monitor the activation level of PPAR, demonstrating that changes in endocytic FABP1 levels affect the activation level of PPAR.Therefore, LMWH indirectly promotes the PPAR pathway by protecting cellular HS and increasing the content of intracellular FABP1 levels to delay the progression of DN.Probably, due to the complexity of the HS structure and the lack of advanced analytical tools, few detailed mechanisms of HSBP and HS interaction have been elucidated.We enriched the oligosaccharide chains of FABP1 interaction by affinity chromatography and used Seq-GAG software combined with off-line SAX-MS/MS to characterize the HS structures participating in the interaction, including IdoA2S/GlcA2S-GlcNS6S-IdoA2S-GlcNS6S sequences.In addition, we also predicted the protein sites involved in the interaction by molecular docking technology, in which K90, K96, and K99 made major contributions to the binding of the two.This study revealed a novel mechanism by which LMWH can delay 
the progression of DN by promoting the PPAR pathway and elucidated the molecular mechanism of FABP1 interaction with HS, providing new insights into understanding the role of heparin in the pathogenesis of DN and the development of appropriate treatments. Preparation of LMWH The preparation of LMWH by b-elimination follows the protocol we previously reported (17).Briefly, we first prepared heparin benzathonium salt; the obtained heparin benzathonium salt was then redissolved in dichloromethane, and heparin benzyl ester was prepared by adding benzyl chloride (40 C, 12 h); next, 1 g of heparin benzyl ester was incubated with 25 ml of 0.1 M sodium hydroxide at 55 C for 2 h for depolymerization.Finally, LMWH was obtained by methanol precipitation and dialysis, and its polymerization distribution was analyzed by SEC.SEC analysis was performed on a Thermo Scientific Vanquish UHPLC System, and the ACQUITY UPLC Protein BSH SEC Columns (300 mm × 4.6 mm and 150 mm × 4.6 mm tandem, Waters) were used to separate the samples.Mobile phase was composed of 50 mM ammonium formate in 20% methanol.The flow rate was set at 75 ml/min for a total analysis time of 75 min.The molecular mechanism of LMWH improves DN DN mouse models and grouping Male mice (C57BL/6J, age 4 weeks, body weight 12-14 g, purchased from Beijing SPF Biotechnology Co., Ltd) were housed in an environmentally controlled room with 35 to 55% humidity at 25 C ± 0.5 C, 12 h: 12 h light/dark cycle and acclimated to this environment 1 week before the experiment.They were then randomized into a T2DM model group (fed a high-fat diet, 10% sucrose, 10% lard, 5% cholesterol) and a normal control group (fed normal chow).After 8 weeks, the model group was intraperitoneally injected with 75 mg/kg streptozotocin (Sigma) for three consecutive days, and the fasting blood glucose >16.0 mmol/L after 1 week was the standard for molding.After the model group continued the high-fat diet for 4 weeks, the measured ACR was significantly different from that of the normal group, and renal lesions were judged.The DN model group was randomly divided into DN group and LMWH treatment group (subcutaneous injection of HP 100 IU/pcs/day).After 8 weeks, ACR was measured and then euthanized for further analysis.The kidney was harvested for proteomics and sectioning (PAS staining).All experiments were approved by the Institutional Animal Care and Use Committee of the Scientific Investigation Committee of Shandong University. Histological evaluation Mice are perfused with 20 ml normal saline immediately after euthanasia, the kidneys were immersed in 4% paraformaldehyde for fixation; paraffin is embedded and cut into 4-mm thick sections.Staining was performed with PAS.The method of analyzing and evaluating glomerular hypertrophy and mesangial expansion with Image software refers to the published literature (42), and in briefly, three stained glomeruli (three mice per group) per slice are randomly selected to analyze the degree of glomerular hypertrophy.Semiquantitative assessment of mesangial stromal dilation is assessed by measuring the relative number of pixels (pink or red areas) divided by the total glomerular area. 
Renal protein extraction, tryptic digestion, and peptide prefractionation Kidney protein extraction, tryptic digestion, and peptide prefractionation steps were performed according to the reported literature method (43).Briefly, the isolated tissue was ground and dissolved in T-PER Tissue Protein Extraction Regent (Thermo Fisher Scientific) containing a protease and phosphatase inhibitor (Thermo Fisher Scientific), followed by ultrasonic fragmentation for 60 s (3-s on and 3-s off, amplitude 25%).Samples were centrifuged at 12,000g to remove residual tissue.The protein concentration was measured by bicinchoninic acid method.An equal amount of protein was denatured with denaturation buffer (8 M urea, 0.1 M Tris-HCl, pH 8.5), then the sample was reduced with 10 mM DTT for 4 h at 37 C, followed by an additional 30 min of 50 mM iodoacetamide alkylation in the dark at room temperature (all on a 30 kDa ultrafiltration membrane to facilitate buffer replacement).Then added trypsin (w/w = 1:50) to each filter tube, incubated at 37 C for 12 hand ultrafiltrated to yield peptides.Finally, the polypeptide was preseparated by high pH reversed-phase, vacuum concentrated and lyophilized, and stored at −80 C for later use. Nano LC-MS/MS analysis for label-free proteomics Peptides were analysed using a nano system (Thermo Fisher Scientific, EASY-nLC1200) coupled to a Nano Orbitrap Fusion Lumos Tribrid mass spectrometer system (Thermo Fisher Scientific).Briefly, the prefractionated peptides were dissolved in 0.1% formic acid (FA) in 2% acetonitrile, first collected on a nanoViper C18 silica gel column Acclaim PepMap RSLC (75 mm × 2 cm, 3 mm, 100 A, Thermo Fisher Scientific), and after washing, the peptide was transferred to a nanoViper C18 silica gel column Acclaim PepMap RSLC (75 mm × 25 cm, 2 mm, Thermo Fisher Scientific) for separation.Mobile phases A (0.1% FA) and B (80% acetonitrile, 0.1% FA) with flow rates of 300 nl/min.Gradient procedure: 0 to 4 min, 2% to 10% B; 4 to 44 min, 10% to 28% B; 44 to 55 min, 28% to 38% B; 55 to 60 min, 38% to 55% B; 60 to 65 min, 55% to 95% B; 65 to 75 min, and 95% B. Finally, the eluted peptide was sprayed into the mass spectrometer by nanospray ion source, and the cycletime data-dependent acquisition mode was adopted.The main scan interval is 3 s; scan range: 200 to 2000 m/z; 400 m/z resolution of 60,000; in the Orbitrap detector with a resolution of 15,000, the precursor ions intensity threshold greater than 4.0e5 in the quadrupole were selected for MS/MS fragmentation analysis with a normalized collision energy of 30%, and the dynamic exclusion time was 25 s to avoid repeated screening of polypeptides. Proteome data analysis All RAW files were processed using Proteome Discoverer (https://www.thermofisher.cn/order/catalog/product/OPTON-31105) (v.2.3, Thermo Fisher Scientific) with the Sequest HT search engine, parameters: trypsin specificity, up to two missed lyses; digested peptide lengths from 4 to 144 Da; oxidation of methionine and acetyl at the N terminus for dynamic modification and carbamidomethyl of cysteines in static modification; after quantification, we define proteins with fold change > 2, p value <0.05 as differentially expressed proteins. 
Bioinformatic analysis GO enrichment analysis of differentially expressed proteins and KEGG pathway enrichment analysis and visualization were performed in https://www.bioinformatics.com.cn, an online platform for data analysis and visualization, with adjusted p < 0.05 was set as the cut-off criteria.The proteinprotein interaction network was constructed from the STRING database and visualized by Cytoscape software (V3.10.0). Immunohistochemistry Paraffin-embedded kidney blocks were cut into 4-mm thick sections for IHC.Rabbit monoclonal FABP1 antibody (diluted 1:4000; Arkham) was used as the primary antibody and The molecular mechanism of LMWH improves DN horseradish peroxidase anti-rabbit IgG (Boster) as the secondary antibody.After the secondary antibody incubation is completed, DAB working solution (Boster) is added drop by drop, counterstained with Meyer's haematoxylin (Boster) and observed under a microscope.The brown-yellow area is positive. Cell culture and treatment HK-2 cells (human proximal tubular epithelial cells) were purchased from Boster Biological Technology co.ltd.HK-2 cells were cultured in Iscove's modified Dulbecco's medium supplemented with 10% fetal bovine serum at 37 C with 5% CO 2 .HK-2 cells were cultured with 45 mM glucose and a gradient concentration (50,100,200,300, and 400 mM) of palmitic acid for 48 h and cell viability was measured using the methyl thiazolyl tetrazolium kit. Visualization of glycocalyx on the surface of HK-2 cells Cell surface negatively charged HS was labeled with the highly positively charged fluorescent protein scGFP for visualization (44).Cells cultured under normal and high-glucosehigh-fat (45 mM glucose, 300 mM PA) for 48 h were incubated with a scGFP incubator for 5 min and then washed three times with PBS to remove free scGFP.After staining nuclei with 4',6-diamidino-2-phenylindole (Solarbio), visualization was performed using super-resolution laser scanning confocal microscopy (LSM900) (Carl Zeiss) and the ZEISS LSM image browser. Analysis of HS component disaccharides Analysis of HS component disaccharides was performed as previously described (45).Briefly, cells were rinsed three times with PBS, then tissue protein extract was added and sonicated in an acoustic bath for 10 min, then added a certain amount of stable isotopically labeled eight heparin disaccharides, added 400 ml of heparin complete digestion buffer and 20 mIU each of heparinase I, II, and III, incubated at 37 C for 12 h, and continued incubation for 12 h after adding an equal amount of enzyme.The stable isotopically labeled disaccharides were synthesized chemoenzymatically as described previously (46).The resulting disaccharides are obtained by a 3 kDa ultrafiltration membrane, freeze-dried, and labeled with 2-aminoacridone reductive amination.LC-MS-MRM was performed using an ExionLC UPLC system connected to a SCIEX Triple Quad 5500+ mass spectrometer.A Kinetex C18 column (2.6 mm, 150 × 2.1 mm, Agela Technologies) was used at a column temperature of 45 C. MRM transitions for 2-aminoacridonelabeled disaccharides refer to reported protocols (47). 
Real-time quantitative PCR Total RNA from HK-2 cells was extracted by Trizol method.Determine the purity and concentration of RNA utilizing an ultramicrovolume spectrophotometer.Reverse transcription of RNA to complementary DNA using HiScript reverse transcriptase for HPSE, exostosin-like 1, exostosin-like 2, exostosin-like 3, exostosin-1, exostosin-2 are shown in Table S1.The data were analyzed using the 2 -DDCt analysis method. Visualization of FABP1 endocytosis in living cells HK-2 cells are seeded in a 35 mm dish with a 14 mm glassbottom well and randomly divided into two groups after 24 h for further incubation for 48 h in 45 mM glucose, 300 mM PA and normal medium, respectively.The cells were then incubated with histidine (His)-tagged human recombinantly expressed FABP1 (10 mg/ml) for 1 h and washed 3 times with PBS to remove excess FABP1.The cells were then fixed with 4% paraformaldehyde for 20 min, washed with PBS, and permeabilized with 0.2% Triton X-100 for 15 min at room temperature.After blocking with 5% bovine serum albumin, the primary antibody (anti-His tag) and the secondary antibody were added sequentially, both were incubated for 1 h at room temperature, and washed four times with PBST after 1 h.Finally, staining with 4',6-diamidino-2-phenylindole was performed.Visualisation was performed using super-resolution laser scanning confocal microscopy (LSM900, Carl Zeiss AG) and the ZEISS LSM image browse. Detection of PPAR activation levels The activation level of PPAR was quantified by the assay of PPAR luciferase reporter plasmid.HK-2 cells were seeded into 24-well culture plates, 7.5*10 4 cells/well and incubated at 37 C for 24 h.PPAR luciferase reporter plasmid was transfected according to the protocol of Polyplus Transfection Reagent.After transfection, the cells were divided into normal group, high-glucose and high-fat group, and high-glucose and high-fat combined with LMWH treatment group, respectively, after 48 h, the cells were incubated with His-labeled rhFABP1 (10 mg/ml) for 24 h, and then the dual luciferase was detected according to the protocol of the TransDetect double-luciferase reporter assay kit. Western blotting Add 3 ml of prechilled PBS to the cell culture flask, wash the cells with gentle shaking for 1 min, and remove the wash, repeat three times.Add 400 ml of T-PER protein extraction reagent containing 1% protease inhibitor and 0.5% phosphatase inhibitor and scrape the cells with a cell scraper to extract the total protein.Centrifuge the cell solution at 12,000g at 4 C for 10 min, collect the supernatant and determine the total protein concentration using the bicinchoninic acid protein assay kit.The sample was mixed with loading buffer and boiled for 5 min.Electrophoresis parameters were 12.5% SDS-PAGE gel, 20 mg sample load, and 200 v constant pressure electrophoresis, protein transfer to polyvinylidene fluoride membrane parameters, 200 mA constant current for 90 min.After blocking with 5% skim dry milk, the primary antibody and secondary antibody incubation were performed sequentially.The VILBER Fusion FX Imaging System was used for imaging, The molecular mechanism of LMWH improves DN and Image J (https://imagej.net/ij/index.html) was used for data processing. 
Apoptosis assay HK-2 cells were seeded into 6-well plates and cultured overnight, 3.5*10 5 cells/well.Si-FABP1 (sense, GGGAAGCA-CUUCAAGUUCATT; antisense, UGAACUUGAAGUGCUU CCCTT) was transfected with jetPRIME buffer and then treated with high glucose and high fat for 48 h.Cells were collected and assayed using flow cytometry. Biolayer interferometry BLI is a well-established method for validating protein-GAG interactions.LMWH is first biotinylated by the method we have reported (48).Biotinylated LMWH was coupled to a streptavidin sensor.FABP1 was diluted with PBS buffer to form a gradient of five concentrations (0, 0.0625 mM, 0.125 mM, 0.25 mM, 0.5 mM).ForteBio Octet Red96e (ForteBio) was used to detect LMWH and FABP1 interactions according to our published protocols (49). FABP1-agarose affinity chromatography enrichment for affinity heparin oligosaccharides The preparation of the protein affinity column and the enrichment method for affinity oligosaccharides followed our previously reported protocol, with appropriate optimizations implemented.(33).Briefly, the heparin-blocked FABP1 was conjugated to the activated cyanogen bromide activated Sepharose 4B (4 C overnight) to prepare affinity columns.LMWH dissolved in loading buffer (10 mM Tris-HCl) was loaded onto affinity columns, nonaffinity oligosaccharides were eluted with low salt buffer (0.15 M NaCl, 1 mM Tris-HCl, pH = 6.5), and affinity oligosaccharides were eluted with high salt buffer (2 M NaCl, 1 mM Tris-HCl, pH = 6.5).Finally, the collected affinity oligosaccharides were separated and collected by polymerization using the Waters 1.7 mm SEC 125 Å Column (4.6 mm × 150 mm and 4.6 mm × 300 mm tandem). Acquisition of the dp8 theoretical sequences The method used by the Seq-GAG software to infer theoretical sequences in mixed oligosaccharide chains uses the protocol we have reported (33).Briefly, the dp8 was subjected to enzymatic digestion and nitrous acid (HONO) degradation, respectively, to analyze its basic building blocks.dp8 was dissolved in 8.75 ml heparin digestion buffer (prepared by mixing sodium acetate/calcium acetate buffer), 5 mIU each of heparinase I, II, and III were added, incubated at 37 C for 12 h, an equal amount of enzyme was added and incubation was continued for 12 h.Enzymatic digestion samples were analyzed using hydrophilic interaction chromatography MS/ MS.HONO degradation of dp8 at pH 1.5 followed the protocol previously reported (50).HONO-degraded samples were analyzed by porous graphitized carbon chromatography-MS.The liquid analysis described above was performed on a Thermo Fisher Scientific Ultimate 3000UHPLC coupled to an LTQ-Orbitrap XL mass spectrometer.Finally, Seq-GAG,a software we developed, was used for sequence prediction. 
Off-line SAX and ESI-MS/MS dp8 was further separated using SAX (Thermo Fisher Scientific, ProPac PA1 SAX column, 4 × 250 mm) and the components were collected by chromatographic peak.Mobile phase A was 0.2 M NaCl (pH 7.0), and mobile phase B was 2 M NaCl (pH 7.0).The gradient change of the mobile phases was 0% B for 5 min, 0% B to 30% B over 50 min, 30% B to 70% B over 100 min, 70% B to 100% B over 0.1 min, and 100% B for 10 min, and the total analysis time was 160 min.The collected components were desalted and analyzed using the Thermo LTQ-Orbitrap XL mass spectrometer for MS and MS/MS using the protocol we previously reported (51).The sample solvent and mobile phase consist of a 50% methanol solution (3 mM NaOH).The MS/MS parameters were set as follows: Iso width (m/z), 3.0; normalized collision energy, 50.0. Molecular docking The docking between heparin hexasaccharide and FABP1 is performed using GlycoTorch Vina software (https://www.glycotorch.com/).The structure of heparin hexasaccharide is derived from the Protein Data Bank code 1FQ9, and FABP1 is derived from the NMR structure in the database (PDB: 7FYA).Before docking, hydrogen atoms were added to the protein structure and the Gasteiger charge was used.The docking range is the whole protein unrestricted docking, and the receptor adopts full flexible docking. Data availability Proteome data are available via ProteomeXchange with identifier PXD045022.All relevant data generated during this study or analyzed in this manuscript (and its Supplement files) are available from the corresponding author on reasonable request. Supporting information-This article contains supporting information. Figure 1 . Figure 1.Evaluation of the therapeutic effects of LMWH in DN.A, schematic diagram of of the establishment of the DN mouse model and drug administration.B, ACR values in mice after 6 weeks of diabetes (n = 3).C, ACR values in mice after 8 weeks of heparin treatment (n = 3).D, PAS staining of thin kidney sections (n = 3).E, glomerular cell number (n = 3, five glomeruli were randomly selected from each section for counting).F, mesangium fraction (n = 3, five glomeruli were randomly selected from each section to calculate the ratio of positive staining area to glomerular area).The data from the two cohorts were subjected to analysis using the independent samples t test.For comparisons involving multiple groups, one-way ANOVA followed by Dunnett's post hoc test was employed.Significance levels are denoted as follows: * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001.ACR, albumin/ creatinine ratio; DN, diabetic nephropathy; LMWH, low molecular weight heparin; PAS, periodic acid shiff. Figure 2 . Figure 2. Renal proteomic analysis and functional enrichment analysis of differentially expressed proteins.A, schematic diagram of renal proteomics.B, number of proteins identified by LC-MS/MS label-free proteomics analysis (n = 3).C, volcano plot of quantitative proteomics results in the DN group and LMWH treatment group (n = 3).D, GO biological process.E, GO molecular function.F, GO cellular component.G, KEGG analysis.H, Subnetworks of PPI.DN, diabetic nephropathy; GO, gene ontology; LMWH, low molecular weight heparin; KEGG, Kyoto encyclopedia of genes and genomes; PPI, protein-protein interaction. Figure 4 . Figure 4. 
Visualization and fluorescence quantification of HK-2 cell surface HS and intracellular FABP1.A, superpositively charged green fluorescent protein (ScGFP) labels highly negatively charged components on the cell surface.The green color diminished after incubation with heparinase (bottom), suggesting that HS is the dominant negatively charged species in HK-2.The scale bar represents 20 mm.B, visualization and fluorescence quantification of HS on the cell surface after high-glucose-high-fat treatment and high-glucose-high-fat + LMWH treatment with HK-2.The scale bar represents 20 mm.C, results of quantification of HS on the surface of HK-2 cells by LC-MRM (n = 3).D, composition of HS disaccharides on the surface of HK-2 cells (n = 3, DIS: DGlcA2S-GlcNS6S, DIIS: DGlcA-GlcNS6S, DIIIS: DGlcA2S-GlcNS, DIVS: DGlcA-GlcNS, DIA: DGlcA2S-GlcNAc6S, DIIA: DGlcA-GlcNAc6S, DIIA: DGlcA2S-GlcNAc6S, DIIIA: DGlcA2S-GlcNAc, DIVA: DGlcA-GlcNAc).E, visualization and fluorescence quantification of HK-2 endocytosis FABP1.The change of red fluorescence revealed that LMWH could reverse the endocytosis of FABP1 by HK-2 in a high-glucose-high-fat environment to a certain extent (n = 3, three fields of view Figure 7 . Figure 7. Validation of FABP1-HS interactions and characterization of oligosaccharide structures involved in the interactions.A, BLI.B, FABP1 affinity chromatography separated affinity LMWH and SEC analysis affinity LMWH polymerization degree changes.C, schematic of complete enzymatic hydrolysis and HONO degradation of LMWH.D, comparison of the relative content of disaccharides after HONO degradation of LMWH-dp8 and affinity-dp8.E, comparison of the relative content of disaccharides after complete enzymatic hydrolysis of LMWH-dp8 and affinity-dp8.F, TOP20 sequence obtained by Sep-GAG software.G, SAX chromatogram of affinity-dp8.H, MS/MS sequencing results of the five major components of affinity-dp8.I, Sep-GAG combined with MS/MS sequencing to obtain sequences of the five components of affinity-dp8.BLI, biolayer interferometry; FAB1, fatty acid-binding protein 1; GAG, glycosaminoglycan; HONO, nitrous acid; HS, heparan sulfate; LMWH, low molecular weight heparin; MS/MS, tandem mass spectrometry; SAX, strong anion exchange chromatography; SEC, size-exclusion chromatography. Figure 8 . Figure 8.Molecular docking of dp6 and FABP1.Left panel shows the complex of the FABP1 (shown in part) and the hexasaccharide, visualized using AutoDock simulation.The right panel demonstrates the contributions of the protein and oligosaccharide binding motifs and the types of interactions (orange dashed arrows represent electrostatic attractions, and green dashed arrows represent hydrogen bonds).FAB1, fatty acid-binding protein 1.
2024-06-26T15:09:43.308Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "9ecaa4a7ec0d903906669a952e79f0bd45c95841", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "eef6e25abb0e8aa43d28023752fb00b62cd21b64", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15387527
pes2o/s2orc
v3-fos-license
Pathogenesis of Myeloproliferative Neoplasms: Role and Mechanisms of Chronic Inflammation Myeloproliferative neoplasms (MPNs) are a heterogeneous group of clonal diseases characterized by the excessive and chronic production of mature cells from one or several of the myeloid lineages. Recent advances in the biology of MPNs have greatly facilitated their molecular diagnosis since most patients present with mutation(s) in the JAK2, MPL, or CALR genes. Yet the roles played by these mutations in the pathogenesis and main complications of the different subtypes of MPNs are not fully elucidated. Importantly, chronic inflammation has long been associated with MPN disease and some of the symptoms and complications can be linked to inflammation. Moreover, the JAK inhibitor clinical trials showed that the reduction of symptoms linked to inflammation was beneficial to patients even in the absence of significant decrease in the JAK2-V617F mutant load. These observations suggested that part of the inflammation observed in patients with JAK2-mutated MPNs may not be the consequence of JAK2 mutation. The aim of this paper is to review the different aspects of inflammation in MPNs, the molecular mechanisms involved, the role of specific genetic defects, and the evidence that increased production of certain cytokines depends or not on MPN-associated mutations, and to discuss possible nongenetic causes of inflammation. Introduction Chronic myeloproliferative neoplasms (MPNs) are rare hematologic diseases characterized by the clonal proliferation of mature blood elements from several myeloid lineages, associated in certain cases with bone marrow fibrosis, splenomegaly, and/or hepatomegaly. They include chronic myelogenous leukemia (CML), three related entities named polycythemia vera (PV), essential thrombocythemia (ET), and primary myelofibrosis (PMF) (called Philadelphia chromosome-negative (Phi-negative) MPNs), chronic eosinophilic leukaemia, mastocytosis, and unclassifiable MPNs [1]. CML and other MPNs are classified based on the presence or the absence of the BCR-ABL fusion gene which is the hallmark of CML [2]. This review focuses solely on Phi-negative MPNs. Three types of molecular markers are associated with Phi-negative MPNs: activating mutations in the JAK2 gene (JAK2-V617F being the most frequent mutation, present in all subtypes of MPNs); activating mutations in the MPL gene (MPL-W515L/K mostly); and alterations of CALR, the gene coding calreticulin (CALR), detected in ET and in PMF [3][4][5][6][7][8][9][10][11]. A small percentage of MPN patients (<15%) do not carry mutations in the JAK2, MPL, or CALR genes. The exact roles played by JAK2, MPL, and CALR mutations in the pathogenesis, phenotype, and complications of the three MPN subtypes are not fully elucidated. None of the JAK2-V617F, MPL-W515L/K, or CALR mutations is specific of a particular MPN subtype. They are detected in patients with very different phenotype and disease evolution, and therefore their presence alone is not sufficient to explain the clinical presentation and complications observed in MPN patients. Mediators of Inflammation Moreover, for subsets of patients, the JAK2-V617F mutation has been shown to be a rather late event, sometimes recurrent, which indicates that other genetic events are responsible for clonality in these patients [14][15][16][17][18]. 
Interestingly, some of the clinical symptoms and complications appear to be linked to the chronic inflammation which almost always accompanies MPN disease, and reduction of symptoms linked to inflammation is beneficial to patients [19,20]. Presently it is unclear whether the inflammation-related biological markers and clinical symptoms observed in MPN patients are consecutive or reactive to, or perhaps even precede, the main mutations harbored by MPN clones. Obviously, a better understanding of the mechanisms that underlie inflammation in the different MPN subtypes should have a significant impact on the design of future protocols tested for the therapy of MPNs. To help address this issue, the present review describes the role played by somatic as well as germline genetic defects in the increased production of inflammatory cytokines and other inflammation markers in MPNs; potential nongenetic causes of chronic inflammation are also discussed. Chronic Inflammation, including Inflammation Associated with Solid Cancer or MPNs Inflammation is a pathological process typically triggered by an external aggression, which may be a physical or chemical injury, irradiation, or infection. In addition, chronic hypoxia (e.g., when cells accumulate in a solid tumor or in the bone marrow in the context of blood malignancy or in any type of tissue in case of venous or arterial thrombosis) can also lead to inflammation [21][22][23]. Chronic inflammation is characterized by the prolonged stimulation of the production of immune blood cells from the lymphoid and myeloid lineages and the release of various mediators, notably inflammatory cytokines, in blood vessels and in tissues. Myelopoiesis is stimulated during inflammation so as to produce sufficient quantities of polyclonal granulocytes, monocytes, and macrophages to ensure the destruction of damaged cells, tissues, or infectious pathogens, adequate phagocytosis, and presentation of antigens to lymphocytes. The production of polyclonal megakaryocytes and platelets is frequently increased, to ensure thrombus formation and hemostasis in case of damaged blood vessels in inflamed tissues. Chronic inflammation may lead to hypoxia of variable severity in the damaged tissues and, accordingly, to increased production of polyclonal erythroid progenitors and red blood cells in an effort to improve cell and tissue oxygenation. Conversely, hypoxia can lead to increased production of inflammatory cytokines: individuals with mountain sickness present with elevated levels of inflammatory cytokines in peripheral blood, and healthy volunteers exposed to a hypoxic environment (three nights in high altitude above 3400 meters) presented with a high level of interleukin-(IL-) 6 [24,25]. Patients with Chuvash polycythemia associated with homozygous germline mutation in the Von Hippel-Lindau (VHL) gene, a major actor of the hypoxia sensing pathway, present with elevated levels of tumor necrosis factor-(TNF-) and interferon-(IFN-) [26]. Inflammatory diseases such as inflammatory bowel disease and rheumatoid arthritis also provide evidence of cross talk between hypoxia and inflammation [27]. In rheumatoid arthritis, hypoxiainducible factor-(HIF-) 2 is the HIF isoform that plays a major role in inflammation, notably by inducing expression of IL-6 and TNF- [28]. Importantly, HIF-1 plays an essential role in survival and function of myeloid cells during inflammation [29]. 
If the initial "injury" persists, the inflammation response and associated chronic stimulation of hematopoiesis are prolonged, and the risk of DNA alteration increases in cells from the damaged tissues or/and in overstimulated hematopoietic progenitors. Over time the acquisition of genetic defects in the inflamed tissues or/and hematopoietic progenitors may eventually lead to the development of solid cancer or/and clonal hematopoiesis and hematological malignancy ( Figure 1). In fact, all types of solid and blood cancers, including MPNs, are accompanied by some degree of chronic inflammation [21,22]. The mechanisms of inflammation in the context of cancer are complex and multiple. Chronic inflammation is an early event in many types of cancers and in certain lymphoma but in MPNs, the possibility that chronic inflammation precedes the acquisition of the main MPN mutations is a new subject of research. Whatever its chronology, chronic inflammation facilitates further DNA alteration in cancer and adjacent cells, and targeting inflammation and its causes should offer new opportunities of cancer treatment and also help reduce complications [21][22][23]. In the context of solid cancer, chronic inflammation may be reactive to a persistent tissue injury (exposure to toxics or to infectious agents) or/and to the tumor itself; it may also be a consequence of tumor-associated mutations or of treatment (radiotherapy or chemotherapy) ( Figure 2). Thus inflammation may precede or/and accompany malignancy, and polyclonal hematopoietic cells of the myeloid and lymphoid lineages participate in the inflammation process. Whatever the cause(s) of inflammation, sustained stimulation of the proliferation of lymphoid or myeloid cells to maintain inflammation over months or years increases the risk of DNA alteration in these cells and the subsequent emergence of a mutated clone (initiation of malignancy) or of additional mutated clones (during or after radio-or chemotherapy). Figure 2 represents progression from chronic inflammation and stimulation of polyclonal myelopoiesis to clonal myelopoiesis, expansion of a mutated myeloid clone, and myeloid malignancy. In MPNs, cells from all myeloid lineages may belong to the malignant clone: erythroid cells, megakaryocytes, neutrophils, and monocytes; B-lymphocytes or/and Tlymphocytes may be mutated too, but only rarely and usually in PMF [30]. In contrast to patients with solid cancer, for whom myelopoiesis is normal and polyclonal, the immune response in patients with MPNs includes the mobilization and activity of mutated (clonal) myeloid cells as well as of healthy myeloid and lymphoid cells. Depending on the small or large size of the MPN clone, the myeloid part of immune and inflammatory responses may be partially or mostly clonal and subsequently mildly or severely defective. This side of Acquisition of genetic alterations Figure 1: Progression from chronic inflammation to solid and blood cancers. A physical, chemical, or infectious injury leads to tissue and cell damage and activation of antiapoptosis signaling pathways in affected cells, which results in the autocrine and paracrine production and consumption of prosurvival, inflammatory cytokines, as well as chemokines, to attract immune cells of the lymphoid and myeloid lineages to the site of injury. 
Over time, established inflammation (chronic inflammation) constantly overstimulates the production of hematopoietic cells and induces more tissue and cell damage, hereby increasing the rate of DNA duplication and risk of defective DNA reparation and mutation, both in cells from affected tissues (increased risk of solid cancer) and in lymphoid and myeloid cells participating in the immune/inflammatory response (increased risk of hematological malignancy). Chronic inflammation may be related to solid cancer or to other causes (infectious, toxic, and physical). In all cases the immune response includes an increased stimulation of the production of myeloid cells, with the associated increased risk of DNA alteration in dividing progenitor cells. Over the years, a myeloid progenitor may acquire a defect in a gene critical for survival or proliferation (MPL, JAK2, and CALR?) and a MPL-, JAK2-, or CALR-mutated malignant clone may expand and lead to a MPN. Other mutations providing a mild growth advantage (TET2, IDH1/2?) may occur before or after the MPL, JAK2, or CALR mutations. In the case of inflammation related to solid cancer, cancer cells and the inflammatory cytokines they produce likely affect immune cells. myeloid malignancy is often neglected but likely important in the pathogenesis and complications of MPNs. One cause of chronic inflammation recognized as increasing the risk of malignant transformation of affected cells and tissues is chronic infection. Indeed it is now well established that latent infection can be associated with various types of solid cancer or/and with lymphoid malignancy [31][32][33][34][35][36][37]. In blood malignancies, two main transforming mechanisms may be at play: direct cell infection and transformation by oncogenic molecules or indirect transformation via chronic antigen stimulation and cell proliferation resulting in increased risk of acquisition of genetic defects. Molecular Pathways Activated in Chronic Inflammation. During inflammation, cytokines are released which signal cells such as T-lymphocytes and monocytes-macrophages to travel to the site of injury. In turn, activated immune cells increase their production of inflammatory cytokines, chemokines, hematopoietic cytokines, and other growth factors, hereby stimulating numerous cell types from their environment (fibroblasts and endothelial cells), which further increases the production of inflammatory cytokines. In this context, the nuclear factor kappa-B (NF-B) and JAK1/STAT1 pathways are the two main molecular pathways activated to enhance the production of inflammation cytokines (Figures 3 and 4) [12,21,38]. In case of inflammation linked to hypoxia, which may occur after thrombosis or because of cell accumulation, the production of inflammatory cytokines and growth factors by the cells exposed to hypoxia is upregulated via the HIF-1 pathway [39,40]. As shown in Figure 3, the NF-B, HIF-1 , and JAK/STAT pathways interact closely. They act in synergy, NF-B activating the HIF-1 pathway, which in turn leads to increased activation of several signaling pathways, including JAK2/STAT5 (via the production of erythropoietin (EPO)), STAT3 (via inflammation cytokines from the IL-6 family or via EPO, hepatocyte growth factor (HGF), plateletderived growth factor (PDGF), and vascular endothelial growth factor (VEGF)), and STAT1 (via type I and type II inflammatory cytokines) ( Figure 4). Moreover, the level of JAK activity affects the expression of transcription factors HIF-1 and HIF-3 [13,41]. 
In the context of malignancy, the genetic mutations associated with the tumor may or may not induce the production of inflammation cytokines in mutated cells. This aspect is particularly important in the context of blood cancers since the mutated cells are involved in the immune response or/and are major sources of production of inflammatory cytokines. Situations where chronic inflammation results from more than one cause are not rare: physical injury and infection, thrombosis and hypoxia, solid cancer and infection, JAK2mutated MPN and thrombosis, and so forth. The degree of activation and overall synergistic action of the three main pathways which control the production of inflammatory cytokines may vary widely, which allows for infinite qualitative and quantitative differences ( Figure 4). Thus the cytokine profile and degree of overproduction of inflammatory cytokines and other mediators of inflammation are expected to vary from patient to patient, according to the cause(s) of inflammation, the cell types being stimulated, and the molecular pathways involved. In MPNs, several inflammatory cytokines and growth factors (IL-6, IL-8, GM-CSF, HGF, VEGF, b-FGF, and TGF-) are found to be significantly overproduced in all subtypes, JAK2 activates STAT5 and STAT3. JAK1 activates mainly STAT1 and to a lesser degree STAT3. The different STAT transcription factors form homodimers as well as heterodimers, which allows for a differential regulation of the expression of inflammatory cytokines. In MPN clonal cells, the JAK2-coupled receptors of EPO, TPO, and G-CSF may form complexes with and activate wild type JAK2 only, V617Fmutated JAK2 only, or wild type and V617F-mutated JAK2, which likely result in different levels of activation of the JAK2/STAT pathways concerned. Moreover, EPO, TPO, and G-CSF activate other molecular pathways besides the JAK/STAT pathways, such as the antiapoptosis, prosurvival PI3K/AKT pathway and the proproliferation RAS/MAPK pathway. Of note, activation of HIF-1 leads to an increased production of inflammatory cytokines in all cell types, but HIF-1 induces EPO expression only in the rare EPO-producing cell types (renal cells, neuronal cells, and certain tumors). LNK loss-of-function mutants result in enhanced activation of JAK2/STAT5. The CBL mutants detected in MPNs also enhance JAK/STAT signaling. Blue arrows represent JAK/STAT pathways, and red arrows represent HIF pathways. yet with a large variability in quantity (Table 1). Of note, TGF-1 inhibits normal hematopoiesis in humans via its receptor II (TGF-RII). In cancer cells, a reduced expression of TGF-RII is frequent, which suggests that malignant MPN progenitors may also acquire resistance to TGF-1 by downregulating TGF-RII expression [47,48]. For certain cytokines, qualitative and quantitative differences in production can be related to the MPN phenotype. Excess production of IL-4, IL-10, and TNF-has been reported in ET; elevation of IL-11 levels has been described only in PV; and in PMF, many cytokines, growth factors, and chemokines are produced at high levels but IFN-levels are usually low (Table 1) [45,[49][50][51][52][53][54]. The cellular sources of production of inflammation cytokines, chemokines, and growth factors are many and of course vary depending on the MPN subtype and associated complications (thrombosis and bone marrow fibrosis). 
However, they usually include most of the cell types which constitute the bone marrow microenvironment or hematopoietic niche, fibroblasts, macrophages, T-lymphocytes, and endothelial cells, as well as healthy or mutated (clonal) hematopoietic progenitors and mature blood elements, platelets, neutrophils, monocytes, and macrophages [55][56][57][58][59]. Macrophages may present with a M1 phenotype, where they produce large amounts of TNF-and IL-12 (both elevated in PV and PMF) as well as IL-23. Macrophages of the M2 phenotype secrete IL-4 or IL-10 (both elevated in ET). In MPNs the EPO level is low and typically undetectable in PV [60]. The presence of the activating JAK2-V617F mutation in >95% of PV cases likely compensates for the low EPO production by rendering erythroid progenitors highly responsive to low doses of EPO, to result in polycythemia. All MPNs IL-6, IL-8, IL-2, soluble IL-2R, HGF, TNF-, TGF-, GM-CSF, VEGF, and bFGF Bone marrow fibroblasts, endothelial cells, monocytes, macrophages, T-lymphocytes, Another intriguing observation is the elevated production of IL-33. IL-33 is an alarmin known to help fight viral infection that is implicated in autoimmunity, and an increased risk of autoimmune disease has been reported in MPN patients [61,62]. Chronic stimulation by the above cytokines also facilitates the survival and expansion of fibroblasts and fibrosis (IL-6 and b-FGF), monocytes-macrophages (IL-6 and GM-GSF), and platelet production (IL-6) and neoangiogenesis (VEGF), whereas IL-12 and IL-33 activate T-lymphocytes and natural killer (NK) cells. In addition, MPN cells accumulate in the bone marrow, which leads to some degree of hypoxia and subsequent activation of the HIF-1 pathway, with upregulation of STAT3 expression and production of cytokines which further promote cell survival (IGF-2, HGF, and IL-6), fibrosis (PDGF, FGF2, and IL-6), and neoangiogenesis (VEGF) [42,63]. Altogether, the qualitative and quantitative differences found in cytokine production in the three MPNs subtypes hint that the causes and mechanisms of chronic inflammation likely differ in ET, PV, and PMF. The JAK2-V617F, MPL-W515L/K, and CALR mutations likely influence clinical symptoms but do not explain differences in inflammation. For instance, JAK2-V617F, MPL, and CALR mutations are detected at similar levels of expression in ET (associated with mild or very mild inflammation) and in PMF (characterized by severe inflammation). Thus it is important to investigate and understand the mechanisms of inflammation at play in each MPN subtype, including those independent of JAK2, MPL, or CALR mutations. Main Clinical and Biological Symptoms. The main clinical symptoms observed in MPNs which are linked to an increased production of inflammation cytokines are fatigue, fever, itching, night sweats, weight loss, and, to some extent, splenomegaly. These symptoms are frequent in PMF; they occur in PV but are mostly absent in ET, which is a good reflection of the degree of production of inflammation cytokines characteristic of PMF (high or very high), PV (moderate to high), and ET (mild). The main biological parameters routinely assessed which are affected in case of inflammation include blood cell counts (in particular leukocyte, neutrophil, and platelet counts), iron levels, and several proteins: the C-reactive protein (CRP), haptoglobin, alpha-1 acid glycoprotein (orosomucoid), ferritin and fibrinogen (increased), and albumin and transferrin (decreased). 
The major stimulus of increased synthesis by liver (hepatocytes) is IL-6. These inflammatory proteins present with different kinetics: inflammatory positive markers which are increased early include CRP, haptoglobin, and alpha-1 acid glycoprotein, whereas fibrinogen, ferritin, and transferrin are late-acting inflammatory proteins. Elevation of the leukocyte and platelet counts is typical of MPNs and thus does not allow distinguishing between inflammation and MPN. CRP is elevated in MPNs, particularly in PMF, and pentraxin 3 has been reported to decrease in MPNs [64]. A high CRP and low pentraxin 3 were linked to a high risk of thrombotic events in PV and in ET, and a high CRP was associated with shortened leukemia-free survival in MPN patients with myelofibrosis [64,65]. The level of IL-6 in serum is almost constantly increased in case of MPN but IL-6 Activation of the Molecular Pathways of Inflammation by MPN-Associated Mutations MPNs are characterized by the activating JAK2-V617F and MPL-W515L/K mutations, the CALR mutations, and also high levels of total Jak2 (wild type and V617F-mutated) in neutrophils and platelets [3][4][5][6][7][8][9][10]. The effect of JAK2-V617F mutation is to activate primarily the STAT5 pathways but the STAT3 pathways are also activated ( Figure 4) [66]. The MPL-W515L/K mutations presumably stabilize Mpl, the Jak2coupled dimeric receptor for TPO [67]. TPO is known to stimulate the JAK/STAT pathways and also PI3K/AKT, ERK, p38, NF-B, and HIF [67][68][69]. Accordingly, in transfected cells expression of MPL-W515L/K mutants resulted in increased activation of ERK (extracellular signal-regulated 8 Mediators of Inflammation kinases) 1 and ERK 2 (ERK1/2) and AKT (protein kinase B) in absence and in presence of TPO [5, unpublished observations]. To our knowledge, the effect of MPL-W515L/K mutations on NF-B and HIF has not been studied. In any case, the JAK2-V617F mutation activates STAT3 and the MPL-W515L/K mutations activate STAT1 and STAT3, which implies that they may stimulate the production of inflammatory cytokines (Figure 4). However, in MPN progenitor cells and platelets, the expression of Mpl receptors at the membrane surface is often very low, which likely attenuates the effect of TPO stimulation and MPL-W515L/K mutation. The molecular pathways possibly activated by CALR mutations remain unclear. Calreticulin is a calcium-binding protein chaperone normally located in the endoplasmic reticulum (ER); the CALR mutations associated with MPNs all result in C-terminal truncated forms of calreticulin located in the cytosol. Thus it is presumed but not formally demonstrated that CALR mutants may affect intracellular calcium flux as well as the trafficking and secretion of glycoproteins, which could potentially lead to altered expression and activation of cytokines, receptors, Jak2, and other signaling molecules. Consistently, the initial papers reported an activation of the JAK2/STAT5 pathway in transfected cells which expressed CALR exon 9 mutants [8,9]. However, the precise molecular mechanisms linking CALR mutants and the JAK2/STAT5 pathway have not been identified. More rarely, in ET or PMF the "driving" mutation may be a loss-of-function mutation in the LNK gene or in the CBL gene [70][71][72][73][74]. LNK is an adaptor protein which acts as a negative regulator of TPO/Mpl-mediated activation of JAK2. Expression of LNK loss-of-function mutants also results in enhanced activation of the JAK2/STAT5 pathway. 
CBL codes for an E3-ubiquitin ligase which promotes the ubiquitination of signaling molecules, including tyrosine kinases. The CBL mutations detected in MPNs cause the loss of E3-ubiquitin ligase activity, thus resulting in increased signaling and cell proliferation. So far there is no evidence that LNK or CBL mutations induce the production of inflammatory cytokines, but they may alter their signaling. Figure 4 summarizes the pathways activated by the main MPN-driving mutations. Mutations in the TET2, IDH1, IDH2, EZH2, ASLX1, or DNMT3A genes may also be found in MPNs. They are not specific of MPNs: they are found also in other blood and solid malignancies. Their main action is to alter the regulation of gene expression [75][76][77][78][79][80][81][82][83]. TET2 and IDH1/2 mutants impair the hydroxylation of methylcytosine and thus affect DNA methylation. More precisely, TET gene products catalyze the conversion of 5-methylcytosine to 5-hydroxy-methylcytosine (5-OH-MeC), a reaction that depends in part on iron and oxygen [80,81]. EZH2 (enhancer of zeste homolog 2) gene codes for a histone methyl transferase, and ASXL1 (additional sex combs like transcriptional regulator 1) gene product belongs to the Polycomb group of proteins and thus is thought to disrupt chromatin and alter gene transcription. DNMT3A codes for a DNA methyltransferase and mutations presumably alter the epigenetic regulation of gene expression [82]. Thus one cannot exclude that these mutations may alter the expression of genes coding for inflammatory cytokines or receptors. Interestingly, some of these mutations have been shown to precede JAK2-V617F [15]. Inflammatory Cytokines Produced as a Consequence of MPN-Associated Mutations Not surprisingly, JAK2-V617F has received most of the attention. Several groups have studied the production of inflammation cytokines in JAK2-V617F-mutated cells or in murine JAK2-V617F-driven MPN models. So far published reports concluded that, in vitro, JAK2-V617F can increase the production of IL-6, IL-8, IL-9, OSM, CCL3, CCL4, and TNF- [53,59,84,85]. However, in MPN patients there is no correlation between the JAK2-V617F burden and the blood or serum levels of these cytokines. In fact, it is highly probable that only a fraction of these cytokines is under the control of JAK2-V617F. Firstly, IL-6, IL-8, and OSM are abundantly produced by nonhematopoietic (nonclonal and nonmutated) cells [51][52][53]. Secondly, certain molecules produced under the control of JAK2-V617F, such as OSM, in turn stimulate the production of other inflammatory cytokines in a JAK2-V617F-independent manner [85]. Thirdly, in the JAK2-V617F +/+ HEL cell line, anti-JAK2 miRNA experiments had only a partial inhibiting effect on IL-6 mRNA expression; in these experiments, anti-JAK2 miRNA experiments had no effect on the expression of IL-11 and HGF [53]. Thus in JAK2-V617F-mutated cells, major inflammatory cytokines may be controlled partially (IL-6) or totally (IL-11 and HGF) by molecular pathways not regulated by JAK2-V617F. Regarding MPL-W515 mutations, only one group reported the analysis of inflammation cytokines produced in MPL-W515L-mutated cells, in a murine bone marrow transplantation model: expression of MPL-W515L was associated with a significant increase in the production of IL-6, IL-10, IL-12 (p40), TNF-, CSF3, and chemokines CCL2, CXCL9, and CXCL10 [84]. Again, MPL-W515L-mutated cells were not the sole source of production of these cytokines. 
Regarding the CALR exon 9 mutations associated with MPNs, their effect on cytokine expression is not known. It is interesting to note that soluble calreticulin has been reported associated with increased production of IL-6 and TNF- [86]. Regarding TET2, IDH1/2, and ASXL1 mutations, it was reported that mutated forms of IDH1/2 were associated with specific DNA hypermethylation profiles, and the list of genes found to be differentially methylated includes several genes linked to inflammation, particularly the IL11-R and TGF-RI receptors [79]. Interestingly, IL-11 and TGF-are secreted at high levels in case of inflammation and both alter myelopoiesis. IL-11R is also differentially methylated in TET2-mutated cells [79]. Hypermethylation of the genes encoding IL11-R and TGF-RI receptors would presumably lower their expression and subsequently make clonal progenitor cells less sensitive to the inhibiting action of TGFand anti-inflammatory action of IL-11. Since TET2 and IDH1/2 mutations are mostly found in PMF, it is possible that these mutants play a role in the aggravation of inflammation observed in severe forms of PMF [87,88]. In myelodysplastic syndromes, ASXL1 mutations combined with SETBP1 mutations were reported to repress the TGF-pathways [89]. However no study of cytokine or receptor protein expression in relation to ASXL1 mutation in MPNs has been published. Mediators of Inflammation 9 Inflammatory Cytokines Produced as a Consequence of Germline Genetic Defects Germline defects, variants, or haplotypes can affect, directly or indirectly, the expression or signaling of inflammatory cytokines and receptors, thus potentially attenuating or aggravating chronic inflammation. The 46/1 (JAK2 GGCC) haplotype and single-nucleotide polymorphisms (SNP) in JAK2, in the telomerase reverse transcriptase (TERT), in the MDS1 and EVI1 complex locus (MECOM), or in HBS1L-MYB have been reported to be associated with a predisposition to mutation in the JAK2 gene on the same allele (JAK2 GGCC haplotype) or a predisposition to the development of a MPN (MECOM, TERT, JAK2, and HBS1L-MYB variants) [90][91][92][93][94]. To this day it remains unclear how these hereditary genetic variants act to facilitate the development of MPNs, but alteration of the transcription of the concerned genes is possible. Regarding germline JAK2 variation, inappropriate expression of JAK2 would clearly disturb myelopoiesis and alter the contribution of myeloid cells to inflammation responses. Consistently, the JAK2 GGCC haplotype was reported to be associated with a defective response to cytokine stimulation, increased risk of inflammation, and impaired defense against infection [95,96]. In CML, cells with short telomere length were found to express a specific "telomere-associated" cytokine and chemokine secretory phenotype [97]. Little is known on the functional effects of MECOM variants on cytokine production but Yasui et al. recently reported that the EVI1 oncoprotein could alter TGF-signaling and TGF-mediated growth inhibition [98,99]. It is established that variations due to SNPs in the promoter region of genes coding for inflammatory cytokines and receptors potentially affect their production. Many groups have published SNPs associated with an altered production of a cytokine or a cytokine receptor, and such SNPs concern all the main cytokines involved in inflammation: IL-1, IL-1R , IL-2R, IL-6, IL-8, IL-10, IL-12, IL-33, TNF-, HGF, and MCP1/CCL2 [100][101][102][103][104][105][106][107][108][109][110][111][112][113][114][115]. 
SNPs have been shown to control the expression of these cytokines in vitro and individuals who carry the SNP are described as high or low producers [116][117][118]. Cytokine polymorphisms have been studied in association with specific diseases, response to infectious agents, or immune response to inflammation. To our knowledge, this type of analyses has never been performed in MPNs. Clonal and Nonclonal Chronic Inflammation in MPNs Chronic inflammation associated with MPN may have several causes, and their recognition should allow offering improved and individualized treatment to MPN patients. Nonclonal, Reactive Inflammation. Any malignant process induces nonclonal immune responses which aim to restrict and eventually destroy the malignant cells. In case of advanced disease, clonal cells accumulate and nonclonal, hypoxia-induced inflammation can develop. Nonclonal inflammation may also be reactive to treatment. In MPNs, the mature myeloid cells which participate in the antitumoral or hypoxia-induced or therapy-related "reactive" inflammation response may be clonal or nonclonal. Depending on the MPN subtype, the size of the clonal population is likely to be large (PV and PMF) or moderate or small (ET), implying that the clonal part of reactive inflammation is probably more significant in PMF and PV than in ET. This should not be overlooked because clonal cells likely mount a less efficient immune response than healthy cells, meaning that the inflammation/immune response could be rather inefficient in PV and PMF. Consistently, an increased risk of a second cancer was reported in MPN patients [121]. Chronic Inflammation and Myeloid Stimulation as Predisposition to MPNs. The observation that major inflammatory cytokines are produced independently of MPN-associated mutations and the demonstration that JAK2-V617F can be a late event in MPN development are consistent with the hypothesis that chronic stimulation of myelopoiesis (via inflammation) may precede the acquisition of mutation in the JAK2 (MPL and CALR?) gene(s) in subsets of MPN patients. A frequent objection is the lack of evidence of inflammation or myeloid stimulation prior to the diagnosis of a MPN. However, it is not rare that routine blood tests of patients, especially older patients, reveal a slight elevation of leukocyte or platelet counts, or hematocrit, with or without biological evidence of mild inflammation. There are dozens of reasons for mild alterations of blood counts, ranging from smoking, stress, obesity, and diverse latent infections to mild forms of chronic inflammatory diseases (intestinal, rheumatoid, skin, type 2 diabetes, atherosclerosis, etc.). Such patients are simply observed; investigation begins when blood counts rise significantly (reach at least one of the WHO criteria of MPN) or when patients present clinical symptoms related to MPN or to the underlying cause of chronic myeloid stimulation or inflammation. Also, it is not rare to detect lymphoid infiltrates in the bone marrow of MPN patients and monocytosis or lymphopenia in peripheral blood, sometimes prior to the diagnosis of MPN; these observations may be considered as evidence of a disturbed immune response. 
Thorough investigation of the stages preceding the diagnosis of overt MPN, similar, for instance, to the studies that established monoclonal gammopathy of undetermined significance (MGUS) as the precancerous stage of multiple myeloma, is needed in MPNs to validate the hypothesis of chronic (antigenmediated or not) stimulation of myelopoiesis preceding the acquisition of JAK2, MPL, or CALR mutation. The chronic inflammation and myeloid stimulation hypothesis is attractive, because it can explain several if not all of the mysteries that persist in MPNs. For instance, chronic myeloid stimulation allows the recurrent acquisition of JAK2-V617F, multiple JAK2 mutations, and combinations of JAK2, MPL, or CALR mutations regularly reported in MPNs. Early chronic inflammation and myeloid stimulation would explain that JAK2-V617F burden and clinical symptoms and disease severity are not correlated. The recent reports that patients under treatment with JAK inhibitors may develop or reactivate viral infection, possibly due to impaired NK cell function, are also consistent with chronic infection contributing to the inflammation associated with MPNs [122]. Importantly, the chronic stimulation hypothesis allows for multiple causes of inflammation, infectious or not, some mild (as observed in ET) and some severe (as typical of PMF). Last but not least, the chronic myeloid stimulation hypothesis allows for many different initial causes and thus would explain why the JAK2-V617F mutation and to a lesser degree the MPL exon 10 and CALR exon 9 mutations are associated with very different diseases (ET, PV, PMF, RARS-T, and splanchnic thrombosis for JAK2-V617F; ET, PMF, and RARS-T for MPL and CALR mutations). For all these reasons, chronic myeloid stimulation and inflammation, and notably latent infection, deserve investigation as initial, early, or complicating events of MPNs. In support of the hypothesis that infection may predispose to chronic hematological malignancy, we showed that, for about 25% of patients with multiple myeloma, the purified mc Ig specifically recognizes an antigen from HCV, EBV, or H. pylori [124][125][126][127]. These important findings suggest that infectious agents may also initiate multiple myeloma, not just certain types of lymphoma, which opens new possibilities of curative treatment, as demonstrated recently by the regression of one case of HCV-associated myeloma following treatment by IFN- [128]. Antigen-driven proliferation as a facilitator of DNA mutation acquisition and cell transformation is rarely investigated in the context of myeloid malignancies but since chronic antigen stimulation also concerns myeloid cells, latent infection as a cause of inflammation in chronic myeloid disorders should not be excluded. Thus a promising research approach for chronic myeloid disorders is to propose that, for subsets of patients, malignancy may result from chronic, polyclonal abnormal immune response by myeloid cells, eventually facilitating excessive myeloid proliferation, acquisition of genetic alterations in genes that are critical for myelopoiesis (JAK2 and MPL; CALR?), and transformation of progenitor cells from the most stimulated lineage(s) and then expansion of a malignant clone. Consequences for the Treatment of MPNs Logically, the JAK2-V617F mutation rapidly became the main target of treatment in MPNs after its discovery in 2005. In contrast, chronic inflammation has so far been neglected in the treatment of these diseases. 
Recognition of the importance of inflammation in the pathogenesis of MPNs offers great opportunities to improve therapy. The JAK inhibitor trials showed that blocking JAK2 function significantly reduced inflammatory cytokine levels and other markers of inflammation in PMF patients, resulting in improved clinical symptoms. Patients benefited from JAK inhibitors even when the JAK2-V617F mutant burden was not reduced or when their disease was not associated with JAK2 mutation. Although the comprehension of the causes and mechanisms of inflammation in MPNs is still very incomplete, accumulated knowledge indicates that NF-B and JAK1 are major pathways for the production or/and signaling of inflammatory cytokines. In certain cases, the HIF-1 pathways may also be activated. The three pathways are closely linked (Figures 3 and 4), and used alone, inhibitors of the JAK/STAT pathways (or inhibitors of NF-B) cannot be expected to completely block cytokine production and signalling in MPNs. In a murine model of JAK2-V617Fmutated MPN, selective STAT blocking resulted in increased inflammation and thrombocytosis [129]. In fact, alteration of STAT3 function (deletion or hyperactivation) is known to lead to altered myelopoiesis and increased expression of STAT1 and inflammatory cytokines, notably IL-6, a strong stimulant of platelet production, fibroblast proliferation, and inflammatory acute phase protein production [130][131][132]. In support of this mechanism, Grisouard et al. reported increased expression of STAT1 and STAT1 target genes in JAK2-V617F mice after STAT3 deletion; IL-6 and other inflammatory cytokines were not measured in this study [129]. Ideally to cure a MPN, one should aim to reduce the effects of the JAK2, MPL, or CALR mutant carried by the MPN clone, as well as the production and signaling of the main inflammation cytokines produced by the patient. This can be achieved by blocking the three main pathways responsible for cytokine production (these include the JAK1/JAK2 pathways) and by suppressing the cause(s) of MPN mutation, when identified. Used alone, JAK1/2 inhibitors have the capacity to block the JAK2-V617F and MPL-W515L/K mutations and a large fraction of the production and signaling of inflammatory cytokines. But for complete treatment of inflammation and mutations in MPNs, the addition of NF-B and HIF-1 inhibitor drugs should benefit patients [133][134][135][136][137][138]. This contrasts with myeloma, a disease not driven by the activation of the JAK2/STAT pathways where NF-B and HIF-1 inhibitors used alone can reduce both disease and inflammatory cytokines [139]. Another advantage of such combination therapies would allow lowering the dose of each drug and hopefully reduce toxicity. Of note, one reason why IFN-is able to induce a complete clinical, biological, and molecular remission (JAK2-V617F-negativation) in PV and in ET patients is that IFN-acts on several JAK/STAT pathways as well as on other pathways critical for the production of inflammatory cytokines [140,141]. In short, as represented in Figure 5, the ideal MPN therapy may combine the following: (1) inhibition of the JAK1 pathway and JAK2-V617F, MPL-W515L/K, or CALR mutations with a JAK1/2 inhibitor and (2) NF-B and HIF inhibitors (note that (1) and (2) may be achieved with IFN-). Whenever an early cause of chronic inflammation is identified, adequate treatment should be added: for instance, antibiotics or antiviral treatment in case of latent infection. 
The complexity of inflammation in MPNs should not discourage attempts to define it biologically at the time of diagnosis, prior to therapeutic decisions, and during treatment monitoring. Knowing the precise inflammation status of each MPN patient would greatly help improve his/her treatment. As described earlier, the inflammation status and cytokine profile of a MPN patient are expected to vary according to the MPN subtype, presence of JAK2, MPL, or CALR mutation, eventual cause of inflammation preceding MPN-driving mutation, and personal genetic background. Yet what matters for therapy is the resulting cytokine profile of the patient, and nowadays establishing the inflammation cytokine profile of an individual is technically simple and not overly expansive and requires only a blood sample. Knowing that a patient is a strong producer of IL-6, HGF, or TNF-, for instance, would allow focusing treatment on the target cytokine(s), perhaps by adding to the patient's combination therapy one of the existing antagonist drugs or neutralizing antibodies that specifically block these cytokines or receptors [119,[142][143][144]. Last but not least, extensive genetic studies and murine models have not succeeded to fully explain most of the chronic hematological malignancies, including MPNs. This suggests that genetic aberrations, although crucial, are probably not sufficient for a lymphoid or myeloid malignancy to develop, and more attention is now given to the hematopoietic niche and cytokine production, the human microbiome and oncogenic infectious pathogens, and the host's immune response [145,146]. There is no reason to limit these important pathogenic mechanisms to lymphoid malignancies and solid cancer, and perhaps the next major research effort in the MPN field should be to investigate the validity of the hypothesis of chronic inflammation/myeloid stimulation preceding mutation acquisition. More specifically, a systematic search for latent infection in MPN patients is feasible and simple, thanks to various tests based on the multiplexed antigen or peptide microarray technology; these assays require only a small blood sample [127,147]. Obviously the identification of an infectious cause of inflammation in subsets of MPN patients would offer additional possibilities of combined treatment (with antibiotics or antiviral therapy) ( Figure 5). Regarding research and animal models of MPNs, it is possible to develop new murine models of chronic myeloid stimulation, antigen-mediated or not [148]. In conclusion, inflammation is very complex yet there are relatively simple laboratory tools to diagnose and characterize inflammation in patients. Several predictive inflammation markers are already identified in MPNs, and potent drugs that target the molecular pathways of inflammation or the inflammatory cytokines detected in excess in patients already exist. Designing new, more complete, and individualized combination treatments that include drugs that block MPN mutations as well as the main inflammation pathways is possible, and such protocols should benefit MPN patients.
2016-05-04T20:20:58.661Z
2015-10-11T00:00:00.000
{ "year": 2015, "sha1": "8912830a98764bebf74bf574c1ddc9a8a7a24a18", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/mi/2015/145293.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9226fe39e1a3753bb4a50d4b1b3b989541e4fad5", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221290270
pes2o/s2orc
v3-fos-license
Analysis of Conventional and Microwave Assisted Technique for the Extraction of Concentrated oils from Citrus Peel Citrus peel belongs to orange and lemon plant that contains concentrated oils. Oil from citrus peel is widely used in foods, perfumes and pharmaceutical industry worldwide. In this study, the advantages of citrus concentrated oil extracted by conventional and novel techniques were studied. Microwave extraction techniques have come out as new alternatives to conventional techniques (hydro distillation) for extraction of oils. This paper reviews the novel separation technique with the conventional techniques in terms of extraction time, yields and energy. Extraction of oils with solvent free microwave extraction (SFME) was comparatively better in terms of extraction time that is 50 minutes while for microwave assisted hydro distillation (MAHD) is 60 minutes and for hydro distillation (HD) it is 3 hours. Yields percentage was almost same for the three processes that are 1.67%. Energy savings were greater in both MAD and SFME that is 0.4 kWh while in Hydro distillation it is 1.3 kWh. Overall MAD was better in performance than other techniques. I. INTRODUCTION Citrus fruits including types of oranges and lemons are largely grown in different regions of the world [1]. It is one of the major crops in Pakistan. The annual production of citrus fruit in Pakistan is about 2000 k. tons [2] while global production of lemon is about 7.3 million tons and that for oranges are 2.4 million tons [3]. Citrus fruits are largely consumed for juice Besides these conventional methods, solvent extraction is also used for extracting essential oils. In the experiment of Lopresto et. al. found that a yield of 0.95% (v/w%) was observed for Lemonene through solvent extraction in Soxhlet apparatus. Hexane was used as solvent [13]. Other solvents used for extraction of oils from citrus peels include: Methyl Alcohol, Methylene Chloride, Ethyl Alcohol and Acetone etc. [14][15][16][17]. These conventional techniques provide good basis for extraction but on expense of energy and time. Here in this research article experiments are done for the comparison and analyzing microwave assisted distillation (MAD) with the conventional techniques. The main difference is in heating mechanism. In microwave heat in the molecules are generated through waves, these waves vibrate the molecules and thus heat is produced so in MAD heat generation takes place from inside of body while in the steam and hydro distillation heat transfer occurs from out to inside of the body [18]. extraction so it leaves a lot of waste in the form of its peels. About 50% of wet fruit waste is of citrus peels [4]. These waste products are used to feed animals or a major portion of it is directly dumped into open-air, which causes adverse effects on the environment. Citrus fruits are rich in vitamin C, carotenoids and bioactive compounds [5]. Citrus peels are rich in essential oils that contain some significant compound good for skin and health and are widely used for flavoring, in perfumery, cosmetics, medicine and pharmaceuticals [6]. Many essential compounds can efficiently be extracted from citrus peels. Citrus peels have bio-active phenols i-e flavonoids and phenolic acids. These phenols have significant properties like antioxidants, antiviral, antimicrobial, anti-inflammatory and neuroprotective, etc. [7]. Therefore, it is very important to use this waste to extract these valuable compounds from it. 
The extraction of essential oils from citrus peels is done conventionally by two methods. 1. Hydro distillation and 2. Steam Distillation. In hydro distillation, dry peels are subjected to heat along with some ratio of water in a specific Clevenger type apparatus for several hours. Water and essential oils evaporate out and then condensed through the condenser in a separate flask where it is left for phasing out from water and thus collected for further analysis. Salma et. al. in their experiments found that after 4 hours extraction of essential oils from clementine the yield obtained was about 0.73% (v/w%) [8]. In one other experiment by Mohamed et. al. obtained yield of 0.24% (v/w%) from 3-hour long hydro distillation [9]. 0.42% (w/w%) yield was obtained in similar experiment performed by Uysal et. al. [10]. International Journal of Engineering Works Vol. 7, Issue 08, PP. 282-285, August 2020 www.ijew.io Steam distillation is still widely used for extraction of oils. In its steam is passed through the material from which oils are to be extracted and the rest of the process is same as of hydro distillation. Some times third solvent is used for layer separation of extracted oil from water and then heat treatment is done to vaporize the concentrated oil leaving behind. In one such experiment performed by Pauline et. al. They extracted oil by steam distillation in two-hour long process and at temperature of just above 100 oC, then sodium chloride and chloroform were added to the separating funnel and shaken back and forth several times and thus layer separation occurs. A yield of 2.475% (w/w%) was observed [11]. Kusuma et. al. obtained a yield of 0.59% (v/w%) by steam distilling citrus peels for 7 hours [12]. II. MATERIALS AND METHODS Citrus Limonum peels were used for extraction of oil along with water. Sodium sulphate was used to remove any water present in essential oils. For hydro distillation Clevenger type apparatus was used. 100 g of citrus peels cut in 3-5 mm in size were taken in 500 ml flask with 200 ml of water. This flask was connected to Clevenger type apparatus through special connector. A temperature was kept at 120 oC. Hydro distillation was performed from 3 hours. Oil obtained was dehydrated with sodium sulphate and then kept in air tight bottle in freezer. Hydro distillation setup is shown in figure 1. For Microwave assisted distillation modification to Whirlpool VIP 34 microwave oven was done. A hole was made at top of the oven to hold a special connector that is then connected with condenser. Oven has a wave frequency of 2.45 GHz. It can deliver a maximum power of 1100 Watts. Flat bottom flask was used with 500 ml capacity. The setup is shown in figure 2. For microwave assisted hydro distillation (MAHD) the time range was 45 to 75 minutes with 15-minute interval and the ratio of peels to water was 1:2 (w:v). While for Solvent free microwave extraction (SFME) dry peels were subjected for extraction with a time range of 40 to 60 minutes with 10-minute interval. Energy calculations were done through digital clamp meter by measuring the drawing current. Experimental results for MAHD show that a yield of 1.2%, 1.7% and 1.8% was observed at 45 to 75 minutes time range with 15-minute interval respectively. A total increase of 34% was observed from initial reading, at 60 minutes while 6% increment occurs in yield from 60 to 75-minute time. Power consumption per gram was 0.42, 0.44 and 0.51 kWh/g for 45-, 60-and 75-minutes time. 
The increase in power consumption is very high for 75-minutes time i-e. about 16% as shown in table 1. SFME results observed a yield of 1.4%, 1.7% and ≈1.7% for 40-, 50-and 60-minutes time with a yield increment of 22% and 4.2% for 50-and 60-minutes respectively. Power consumed per gram for 50-minute was 0.38 kWh/g with 11% increase while for 60-minute an increase of 10% i-e 0.04kWh/g was observed. For hydro distillation a experiment was conducted for 3-hours until a yield of 1.7% was obtained. Power consumed per gram of essential oil was the higher of all which is 1.3 kWh/g. As presented in CONCLUSIONS The aim of this research was to investigate the effectiveness of MAD. It was observed that MAD provides higher yields with less extraction time and low power consumption per gram of essential oil. Yield with connection to time and power consumption for SFME per gram was slightly and significantly better than MAHD and hydro distillation respectively. As shown in figure 3(a) and (b).
2020-08-24T18:02:48.696Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "c29a9154308ddc2dfaad11d564a77715cdecaba4", "oa_license": null, "oa_url": "https://doi.org/10.34259/ijew.20.708282285", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c29a9154308ddc2dfaad11d564a77715cdecaba4", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Materials Science" ] }
14553777
pes2o/s2orc
v3-fos-license
Ergodic theory and visualization. II. Harmonic mesochronic plots visualize (quasi)periodic sets We present a new method of analysis of measure-preserving dynamical systems, based on frequency analysis and ergodic theory, which extends our earlier work [1]. Our method employs the novel concept of harmonic time average [2], and is realized as a computational algorithms for visualization of periodic and quasi-periodic sets or arbitrary periodicity in the phase space. Besides identifying all periodic sets, our method is useful in detecting chaotic phase space regions with a good precision. The range of method's applicability is illustrated using well-known Chirikov standard map, while its full potential is presented by studying higher-dimensional measure-preserving systems, in particular Froeschl\'e map and extended standard map. Introduction Methods of computational investigation of complex dynamical systems are of crucial importance today, given the increasing range of interdisciplinary phenomena currently under examination [4]. As virtually none of the naturally motivated dynamical system can by entirely treated with analytical techniques, the necessity of numerical investigation is beyond doubt. When choosing from a diverse spectrum of available computational approaches, one takes into account the nature of the problem as well as the study direction that is to be followed. Typically, when the dynamical systems are considered, the direct integration of system's trajectories is performed for long integration times, seeking to optimize the precision with respect to the numerical cost. The obtained trajectories are then analyzed, either in the context of specific trajectory-approach to the system, or by looking for their universal properties that report about global nature of the system under investigation. In particular, one is often interested in frequency-analysis of the trajectory done by computing the power spectrum of the trajectory's time-signal. Power (Fourier) spectrum of a time-signal gives a frequency decomposition of the trajectory, showing it in terms of its frequencies [5,6]. The idea of frequency analysis of time series inspired new approaches to study of the dynamical systems [7], specifically in the context of chaotic motion analysis [8,9], high-dimensional dynamical systems [10] and the study of celestial dynamics [11]. Namely, an algorithm can be constructed based on generalization of power spectrum analysis that estimates the system's fundamental frequencies in a decreasing order [8]. Approaches of this sort were very useful, both in the context of continuous-time [7] and discrete-time [12] dynamical systems. More recently, a new approach has been proposed based on wavelet frequency analysis that investigates the properties of specific chaotic orbits and detects resonance trappings and transitions [13,14]. In particular, the method is suited for examination of weakly chaotic orbits. Furthermore, another recent time-frequency analysis method for chaotic time series based on the power spectrum estimator was suggested, that distinguishes between noisy flow and the colored noise [15]. The key downside of the mentioned approaches lies in their locality: the frequency analysis is usually performed on single trajectories/time-signals only. 
While providing a detailed analysis of trajectories under examination, the method remains focused on pre-defined ensembles of trajectories, unable to give conclusions regarding the global characteristics of the system, that would follow from comparison among frequency properties of many trajectories. In the present work, we extend the previously exposed idea of invariant sets visualization method based on ergodic partitioning [1], to the domain of global frequency analysis of the dynamical phase space. We construct harmonic time averages of functions defined over the phase space, and produce color-plots reporting about frequency phase space structure by visualizing periodic subsets of prescribed frequencies. We therefore obtain algorithm that produces detailed analysis of phase space in terms of periodic sets resonating with a single frequency: by using more frequencies we extend the method to full phase space frequency analysis. By employing parallel computing we are able to save numerical time and obtain global phase space plots relatively fast. As opposed to other approaches, our method is global in its nature, as it simultaneously provides insights into the entire phase space. Also, along the lines of our previous study [1], this method is suitable for highdimensional systems in terms of considering phase space sections of smaller dimensionality. Furthermore, as it will be shown, the method is applicable to visualization of the chaotic zone of the dynamical system. We will examine the method using well-known example of standard map, as this systems possesses a large variety of periodic sets and a chaotic zone whose locations are known. This paper is organized as follows: we briefly expose the theoretical background of our method in Section 2. and show its implementation with single function in Section 3. The convergence and precision issues are settled in Section 4. In Section 5. we show the application of multi-functional approach, designing a simple algorithm for visualization of periodic and chaotic partitions. We show the applicational extent of out method in Section 6., and give a few concluding remarks in Section 7. The Visualization Idea In this Section we present the concept of harmonic time averages [3] that relate the ergodic theory ideas to the frequency analysis. We expose the basics of the mathematical background necessary for our study; for more rigorous details we however address the reader to [3,16,17]. Let a discrete-time measure-preserving map x = Tx on a compact phase space A ⊂ R n be denoted as: x n+1 = Tx n or x n = T n x 0 n ≥ 0. Our central aim is to visualize the periodic sets B p ⊂ A with the periodicity p for the map T, defined as: Periodic sets are essentially generalizations of the periodic orbits, with the period-1 set being the usual invariant set. Note that there are always p distinct period-p sets B 1 p , . . . B p p such that a phase space point x 0 makes p jumps among them before completing a whole cycle. Union of a sequence of such sets ∪ k B k p = B is an invariant set composed of p disjoint parts, that we will call a periodic chain. In what follows we will expose a computational method based on ergodic theory designed to graphically visualize such set in the dynamics phase space, based on the methods described in [1]. 
Consider L 2 real-valued functions on A and let the harmonic time average f * ω (x 0 ) of a function f ∈ L 2 (A) for the frequency ω ∈ [0, 1 2 ] corresponding to a phase space point x 0 ∈ A, be defined as: By the Ergodic Theorem this limit exists almost everywhere in the phase space for any measure-preserving map T [16,17,18]. Harmonic time averages are a generalization of the time averages known from the standard ergodic theory , which is easy to see by observing that f * ω=0 = f * [19]. Furthermore, from the definition Eq. (3) it follows: which hence implies: meaning that while f * ω is not an invariant function, its absolute value |f * ω | is. That is to say each trajectory (periodic or not) has a constant |f * ω | for each point. As throughout this work we will be considering only the absolute values of the harmonic time averages, we introduce the notation: We will refer to functions h ω (x) as harmonic time averages, despite really intending their absolute values. Consider now the circle S 1 ≡ [0, 2π[ and the shift map Θ ω mapping the circle onto itself: for some constant angle ω ∈ [0, 1 2 ]. Given a map T : A → A and a set B ⊂ A, a shift map Θ ω is called the factor map to T on B if there is a measure-preserving homeomorphism F : A → S 1 such that: ∀x ∈ B. This means the dynamics T on B ⊂ A is topologically equivalent to a shift map, which implies that the set B is a periodic set or a union of periodic sets with the period 1/ω. The following results hold: Theorem Given a dynamics T : A → A and f ∈ L 2 (A), if the harmonic time average h ω is non-zero on some set B ⊂ A then there exists a factor map for T on the set B given by the shift map with the angle ω. Theorem If T : A → A admits a shift map with the angle ω as a factor map on some set B ⊂ A, then there exists an f ∈ L 2 (A) such that its harmonic time average h ω is non-zero on B. Therefore, the harmonic time averages for the specific frequencies ω can be used for detecting the periodic sets and periodic chains of the period 1/ω. If h ω 's frequency ω resonates with the frequency of a periodic set, its final limit value for the points within this set will be non-zero; otherwise the summation of the complex phases e −i2πωn will average out its limit value to zero. Computing the harmonic time average h ω over the whole phase space will therefore expose only the phase space subsets whose frequency resonates with ω, otherwise h ω will average out to zero (we will define the periodic sets mostly by their frequency that is inverse of their periodicity). Observe the relationship between a harmonic time average and a Fourier transform: while the later gives the entire frequency spectrum of a certain time-signal (a single trajectory), the former gives the phase space partition that resonates with the particular chosen frequency. A harmonic time average value for a given trajectory is simply the Fourier transform's value at that frequency for this trajectory. While the Fourier transform provides us with a full frequency analysis for a single trajectory, the harmonic time average gives a global phase space analysis, but for a single frequency only. Furthermore, a shift map with an angle ω is also a shift map with all the angles that are integer multiples of ω. Therefore, a harmonic time average of frequency ω reveals as non-zero all the periodic sets with the periodicities that are integer multiples of 1/ω: a harmonic time average for ω = 1 2 will not only resonate with all the period-2 set, but also all the even-period sets. 
This reduced the sequence of interesting periods to the numbers that are mutually prime. Throughout this work we will consider periodicities given by the sequence of prime numbers: 2,3,5,7 etc. We will not consider the period-1 sets (frequency ω = 0) as h ω=0 will simultaneously expose periodic sets of all other periodicities (as expected given that h ω=0 = |f * |), which does not allow a detailed analysis. The final value of a harmonic time average over a resonating periodic set will also be determined by the properties of the function f in question: the limit value for a resonating periodic set can happened to be zero due to location of the set itself with respect to the properties of the function. For this reason we will, in a way similar to our previous study [1], employ the harmonic time averages of more functions for the same frequency in order to optimize the visualization of periodic sets of that frequency. The computational algorithm and its implementation are described in the rest of this Section. The computing algorithm and the numerical details We construct the algorithm for the phase space partitioning into a periodic partition of a given frequency using harmonic time averages of multiple functions. For example, the periodic partition for the frequency ω = 1 2 is constructed as a partition of all even-period periodic sets (and of course the non-resonating rest of the phase space). Following [1], we limit the discussion to the case of a two-dimensional map with the step 1 Set up a grid (lattice) of initial grid-points (x 0 , y 0 ) on the phase space A and select the visualization frequency ω in form 1 p with p prime number 2,3,5,7 etc. and calculate their partial harmonic time averages for n final iterations for each initial grid-point, which approximate the real harmonic time averages {h 1 ω , . . . h N ω } step 3 To every initial grid-point (x 0 , y 0 ) associate the harmonic time average vector corresponding to it: Observe the distribution of the harmonic time average vectorsh ω (x 0 , y 0 ) throughout R N and group them optimally into clusters. Divide A into a union of subsets, with each subset being given by those grid-points (x 0 , y 0 ) whose harmonic time average vectorsh ω (x 0 , y 0 ) belong to the same cluster of vectors. This union of subset is the N -order approximation periodic partition of A with frequency ω. The optimal number of iterations n final is to be set in accordance with the harmonic time averages convergence properties that will be studied in the Section 4. Visualization also depends on the clustering procedure, that we will examine in the Section 5. The choice of functions {f 1 , . . . f N } is again done by looking at an orthogonal basis on L 2 (A) as it provides the simplest source of linearly independent functions. Throughout this work we will employ functions from the 2D Fourier orthogonal basis given as: These functions are among them linearly independent for any n = m. In a way identical to [1] we will be using parallel processing to enhance the efficiency of computation. We typically use the grid of 800 × 800 initial grid-points and iterate the dynamics for n final = 30, 000 iterations for each grid-point to approximate the harmonic time average value for each function. It takes about 10-15 minutes to compute one time average on a single processor for a grid of this size, a typical run was however done using 5-10 processors (depending on the availability) and took about 5 minutes for a single function. 
Computing more functions simultaneously insignificantly increases the computation time. The standard map as the testing prototype As in the previous work, we will rely on the Chirikov's standard map [20] for testing the described method, as this map possesses a variety of periodic chains and sets and other dynamical behaviors for a wide range of parameters [21,5,22]. The map is a homeomorphism on a 2D torus given by: where (x, y) ∈ [0, 1] × [0, 1] ≡ [0, 1] 2 (the usual standard map's parameter k is here k = 2πε). Phase Space Plots with a Single Function In this Section we consider the simplest case of N = 1 and show the color-plots of harmonic time averages for a single function under the dynamics of the standard map Eq. (10). As mentioned, we are using a grid of 800 × 800 initial grid-points and n final = 30, 000 iterations. For simplicity reasons, in this Section we will focus only on the perturbation value ε = 0.12 and the function f (x, y) = sin(2πx + 3πy). In Figs. 1a,b&c we show plots for the frequencies ω = 1 2 , 1 3 , 1 5 that are visualizing the periodic sets with the periodicities that are multiples of 2, 3 and 5 respectively. For clarity we are using a log-scale in for the h ω value and stretching the colorbar from the smallest to the biggest value of log h ω . The chains (families) of periodic sets are correctly predicted in all the plots, together with the periodic sets of higher (integer multiple) periodicities. The final harmonic time average value h ω is modulated by the values that the employed function is taking over the the periodic set in question, which explains why are some periodic chains better visible than the others. Recall that the large period-2 island around the elliptic fixed point ( 1 2 , 1 2 ) (cf. Fig. 1a) is actually a nested set of infinitely many quasi-periodic orbits enclosing the fixed point. Each of these quasi-periodic orbits is of course a period-2 set, but the color-difference among them is less visible due to the use of the log-scale. By using the log-scale we are focusing on revealing the periodic chains themselves as phase space subsets, rather then trying to discern their internal sub-structure. Some of the chains of periodic sets are visible in more plots (like period-6 chain in Fig. 1a and Fig. 1b), as their periodicities are common-multiples of various basic frequencies. On the other hand, some periodic chains (e.g. period-10 chain) are visible in one plot but not in the others (Fig. 1a and Fig. 1c) despite resonating with both frequencies: as already mentioned, this is due to the properties of the function used for averaging, and will be taken care of by using multiple functions (see the forthcoming Sections). Also, observe that the chaotic zone (already locally present for this ε-value around the hyperbolic fixed point (0, 0) ≡ (1, 1)) is resonating at all frequencies, although much more weakly. Moreover, the chaotic zone is also weakly resonating in case of an irrational frequency 1/π as visible in Fig. 1d, at which no other periodic set resonates at all (within the limits of the precision of numerically irrational number). As we shall see, this can be used for visualization of the chaotic zone itself using more functions. The Convergence Study In this Section we study the convergence of the harmonic time averages addressing the questions of the precision of their final values and the optimal number of iterations required. 
We will distinguish between the regular (periodic) and chaotic orbits, and between rational and irrational frequencies, justifying the visualization results from the previous Section. Consider the n-th partial harmonic time average for some frequency ω given by: with lim n→∞ |f n ω (x 0 , y 0 )| = h ω (x 0 , y 0 ) assumed to exist for all the grid-points (x 0 , y 0 ). The difference: in function of n is a sequence whose behavior is to be studied in relation to the orbits/frequencies mentioned above. We iterate the dynamics for 10 7 iterations recording the first 10 6 iterations and defining |f n=10 7 ω | = h ω . This gives the sequence ∆(n) whose asymptotic properties are investigated. The Regular Region. As mentioned earlier, for all the regular points (trajectories), the harmonic time average converges to zero in case its frequency mismatches the periodicity of the underlying set, and otherwise converges to some positive value smaller then 1. In analogy to what observed in [1,23], the convergence pattern of ∆(n) in this region neatly follows 1/n regardless of the frequency, as shown in the convergence plot Fig. 2a. Moreover, this holds also in the case of irrational frequency, as illustrated in Fig. 2b (all h ω s converge to zero). These results apply universally for all the regular points/periodic sets independently from the ε-value, periodicity or the function involved, allowing a good precision estimation. The Chaotic Region. In further analogy to the previous studies [1,23], the convergence pattern exhibited by ∆(n) in this region is at best given by 1/ √ n and hence bounded by a − 1 2 slope, which is in general better pronounced at bigger ε-values. At smaller ε-values ∆(n) is at best asymptotically bounded by the weak slopes between − 1 2 and zero (see discussion of weakly chaotic region in [1]). These results hold uniformly for both rational and irrational frequencies, as shown in Figs. 2c&d regardless of the particular grid-point, ε-value or function involved. Note that in analogy with the case of the usual time averages, the precision of (weakly) chaotic trajectories is to determine the lower precision bound of a harmonic time average phase space plot. We therefore set the iteration limit to n final = 30, 000 having the overall precision of O(10 −2 ) or better. Furthermore, observe that in the case of an irrational frequency, a harmonic time average always converges to zero, but with a rate that depends on the nature of the underlying trajectory. For a periodic set of any periodicity, this limit will converge to zero in way identical to the case of a harmonic time average with a non-matching frequency, whereas for any chaotic orbit it will always "weakly resonate" converging at a slower rate of ∆(n) ∼ 1 √ n . This provides a simple way to approximately visualize the chaotic zone as it will be shown later in this work. The Phase Space Plots with Multiple Functions In this Section we consider the harmonic time averages corresponding to more linearly independent functions for the same frequency. In analogy with the construction of the ergodic partition [1], we examine the correspondence given by the Eq. (8) and seek conclusions regarding the periodic sets phase space distribution by investigating clustering of the harmonic time average vectors. We also propose a simple algorithm for visualizing the periodic partition for a given frequency ω and the phase space ergodic regions. 
Two-function scatter plots We start by N = 2 case considering two linearly independent functions for a fixed frequency ω: computed using the same grid and n final as before. We construct a scatter plot by plotting h 1 ω on one and h 2 ω on the other axis, obtaining a 2D plot as the one in Fig. 3. As each h ω takes values from the interval [0, 1], with the values being zero (more precisely ∼ O(10 −8 )) only for non-resonating sets, the plots in Fig. 3 report about qualitative distribution of periodic sets for four computed frequencies. Each branch of the scatter plot vectors for a given frequency (defined by a color) represents one periodic chain of periodicity integer multiple of the basic period. As the frequencies considered in Fig. 3 are mutually prime, the structure of the entire scatter plot approximates the full periodic structure of the phase space at ε = 0.12 (in the approximation of four frequencies). As opposed to scatter plots of usual time averages discussed in [1], these scatter plots are less structured as they show the resonating periodic sets only, disregarding the rest of the phase space. Recall that each point in a scatter plot here has x-coordinate (y-coordinate) given by the value of h 1 ω (h 2 ω ). By clustering the scatter plot points of a given frequency, one can obtain a new correspondence between the (available) colors and the initial grid-points in a way analogous to what discussed in [1], in order to obtain a better approximation of periodic sets locations in phase space. Again, it is of interest to consider as many functions as possible, preferably with both small and large resolution numbers n&m, in order to visualize both local and global phase space features. A phase space partition can be constructed this way, that visualizes all the periodic sets of a chosen frequency, and therefore giving more insight into the phase space structure then a single-function approach discussed earlier. Visualization of the single-frequency periodic partitions Consider N linearly independent functions and their corresponding harmonic time averages {h 1 ω , . . . h N ω }. Their scatter plot will be enclosed in the interval [0, 1] N with a branching structure similar to Fig. 3. Observe that each scatter plot point is actually a harmonic time average vectorh ω (x 0 , y 0 ) corresponding to some grid-point (x 0 , y 0 ). We introduce a norm on the scatter plot-space by considering the Euclidean norm defined on the harmonic time average vectors: There are clearly more scatter plot points, even with many functions, having the same (or very similar) norm h ω . However, note that the correspondence: maintains the key properties of the usual harmonic time average: h ω is zero for non-resonating periodic sets, it is very small for chaotic points and it is O(10 −1 ) order for resonating periodic sets. On the other hand, the scalar field h ω (x 0 , y 0 ) will yield better visualization of the frequency-ω periodic sets as it "sums up" the visualization done by all considered N functions: it approximates the periodic partition of the phase space for the given frequency with the approximation-order given by the number of functions involved. It is enough for a periodic chain to resonate with at least one considered function, and it will be visualized by the h ω (x 0 , y 0 ) . 
The downside of this idea is that separate periodic sets belonging to different periodic chains might have the same (or very similar) colors assigned: as we are primarily interested in visualizing the periodic chains as a whole, this issue will be neglected here. However, by applying phase space zooms (similarly to what discussed in [1]), one can analyze the sub-structure of periodic sets at the desired scale. We therefore construct a straightforward clustering algorithm of scatter plot points: let a new coloringvalue for each grid-point (x 0 , y 0 ) be given by H ω (x 0 , y 0 ) defined as: which is the field h ω (x 0 , y 0 ) normalized by its biggest phase space value, and therefore with the values between 0 and 1. The field H ω (x 0 , y 0 ) approximates the periodic partition for the frequency ω. In Fig. 4 we show approximations of period-2 and period-5 periodic partitions done with four functions of different resolution numbers, and colored (clustered) using the algorithm described above. Harmonic time averages were computed for each function separately with the same grid 800×800 and n final = 30, 000. Observe the improved visualization with respect to the previously shown plots in Fig. 1. By using diverse (a) (b) Figure 4: Period-2 (a) and period-5 (b) partitions visualized using f 1 = sin(2πx+3πy), f 2 = sin(3πx+7πy), f 3 = sin(5πx + 8πy) and f 4 = sin(7πx + 11πy) for ε = 0.12. Log-scale was used for the H ω field. resolution numbers we managed to simultaneously visualize periodic chains at all scales (including the integer multiple periodicities). Given the initial grid-point resolution, the pictures in Fig. 4a&b are rather exhaustive in terms of the quantity of visualized period-2 and period-5 phase space subsets. However, as already mentioned, the price paid amounts to having red color for different unrelated periodic chains (essentially for all of them). But on the other hand, note the clear coloring difference between the resonating periodic chains (red and dark red), chaotic zone (light green -yellow), and non-resonating periodic chains (blue and dark blue). Period-5 partition in Fig. 4b even visualizes the secondary chaotic zone between the elliptic period-2 points in the middle. Visualization of the ergodic regions As observed in the previous Section, harmonic time averages for irrational frequency do not resonate with none of the periodic sets, while still resonate weakly with the chaotic zone points (just as any other harmonic time average resonates weakly within the chaotic zone). As shown in Fig. 1d and examined in the Section 4., this can be used for visualization of the chaotic zone. Furthermore, from the discussion in the previous paragraph it follows that more functions can be used in order to improve the visualization, in a way analogous to the periodic partition approximation. In Fig. 5 we show plots obtained by clustering according to the Eq. (14) for three functions (of diverse resolution numbers). Each harmonic time average was computed separately with the same grid, but for a longer transient of n final = 100, 000 and an irrational frequency. Observe the precision in visualization of the chaotic zone, specially for the larger ε-value in Fig. 5b. We see not only primar (diffusing) and secondary chaotic zones (located around the period-2 chain), but also higher-order ones like around the period-3 chain (Fig. 5a). Also, as opposed to previously shown one-function chaotic zone visualization (Fig. 
1d), in these plots the coloration of the chaotic zone is more uniform (yellow) and hence more visible. This is again due to more functions producing a better approximation for the final field H ω (x 0 , y 0 ). In order to show the visualization properties of harmonic time averages in the context of differing between the periodic sets and the chaotic regions, in Fig. 6 we show a histogram of H ω (x 0 , y 0 ) values for pictures in Figs. 4&5. We use the log-scale for H ω in order to have an analogy with the mentioned pictures. We indicate three clearly visible peaks common to all distributions: non-resonating periodic set always have values ∼ O(10 −9 ), chaotic regions always resonate weakly with values ∼ O(10 −5 ), and the periodic sets in the case of the appropriate frequency resonate having the values ∼ O(10 −1 ). Observe that both rational and irrational frequencies respect the mentioned range of H ω (x 0 , y 0 ) values in relation to the dynamical behaviors (for non-clustered values h ω the same properties hold but less precisely). This proves that by using more functions we improve the quality of visualization in sense of better pronounced peaks. Furthermore, a cut-off can be introduced between these peaks in order to visualize only one preferred dynamical zone, corresponding to the examined peak. For instance, a full picture of the regular phase space portion (all frequencies) could be constructed by cutting out the resonant peaks for many diverse rational frequencies/functions and considering corresponding phase space points. Alternatively, the same phase space portion could be observed if one considers many functions with irrational frequencies, focusing on completely non-resonating part of the phase space. Extended Standard Map In this Section, we show the application of the exposed visualization method on the Extended Standard Map proposed in [24] and already studied in [1]. It is a three-dimensional measure-preserving action-action-angle map (a particular generalization of the standard map) defined as: x = x + ε sin(2πz) + δ sin(2πy) [mod 1] y = y + ε sin(2πz) [mod 1] z = z + x + ε sin(2πz) + δ sin(2πy) [mod 1] It has been suggested that this map is ergodic for any non-zero values of perturbations ε and δ, backed by the observation that no invariant surface persists for any non-zero perturbation [24]. We follow the argumentation from [1] and add a further argument to the mentioned claim. We set a three-dimensional grid of 20 × 20 × 20 points and evaluate harmonic time averages for two functions with the frequencies of ω = 1 2 , 1 3 , 1 5 and the transient of n final = 10 7 . We then cluster the values separately for each frequency following the exposed procedure Eq. (14) and plot the histograms for H ω (x 0 , y 0 , z 0 ) in Fig. 7. Two different examples of perturbation values are considered: there seems to be only a small difference in the profiles that resemble log-normal distributions, while they both tend to zero for all the investigated frequencies. This means that at the considered non-zero perturbation values there appears to be no persisting periodic sets of any periodicity, which confirms the proposed claim within the given range of precision. Moreover, the range of values for the distribution of H ω (x 0 , y 0 , z 0 ) (around 10 −4 ) indicates presence of the chaotic dynamics, in the sense of a weak resonance for all the frequencies. We expect that for a longer time-evolution the distributions shrink towards zero, confirming the initial ergodic hypothesis. 
Conclusions We exposed a new method of frequency analysis of dynamical systems based on harmonic time averages [3], following the ergodic theory visualization technique previously proposed in [1,2]. Using the known properties of standard map, we showed how the method can be implemented in the context of discretetime dynamical systems. Harmonic time averages for various frequencies were computed over the standard map's phase space and their precisions estimated. A simple algorithm suggesting a clustering method for improving visualization by using more functions was constructed and employed. The periodic partitions of different periodicities were correctly visualized, including the corresponding chaotic zones. The distribution of values of H ω was analyzed, and the peaks corresponding to different types of dynamics were identified. The method was applied to the Extended Standard Map, confirming the previous suggestions regarding its dynamical properties. The most clear improvement of the method lies in the optimization of the clustering algorithm discussed in Section 5. A better version should design a parametrization for all the scatter plot branches, in order have a continuous correspondence between the scatter plot vectors and the (available) colors. This would moreover solve the issue of same-coloring for the independent periodic sets. Furthermore, the same discussion from [1] applies in the regard of optimizing the shape/volume of the clustering cells: in cases of more complex dynamical systems (higher dimensionality) where scatter plots are not showing the branching structure, cell division of scatter plot space might be the only choice. In addition, similarly to the case in [1], relationships between the convergence properties and the nature of the underlying orbit is to be investigated. While the convergence slope gives rough estimate of the orbit's type, a detailed analysis of the convergence pattern for rational and irrational frequencies might yield further insights. The method was here exposed using the standard map, as it is a very investigated case of 2D map. However, the full applicational extent of the method lies in high-dimensional systems, where 2D phase space sections can be considered, in analogy with the discussion in [1]. Furthermore, the method is fully applicable to continuous-time dynamical systems as well (ODEs), although the computation of harmonic time averages in these cases might be somewhat more demanding. Finally, it would be interesting to see if the method is applicable to complex systems like multi-dimensional coupled maps on networks [22,25].
2014-07-26T14:19:33.000Z
2008-08-15T00:00:00.000
{ "year": 2008, "sha1": "fd1c2c85568073c64a9a5c2ff3d1251f03fc2792", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "123fc4e829b0f80967198fd167679b26ec72499c", "s2fieldsofstudy": [ "Computer Science", "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
26500076
pes2o/s2orc
v3-fos-license
Oral contraceptives and cancer Dr. Kretzschmar: No, there isn't. There is no evidence that the Pill increases the risk of breast cancer and in fact, it has been shown that oral contraceptives may offer some protection against benign breast disease. For example, when Vessey and his group com pared 345 women admitted to London teaching hospitals with breast lumps (90 malignant and 255 benign) against a matched control group admitted for acute medical or surgical condi tions, it was found that oral contraceptives were in no way related to the risk of breast cancer. It was discovered that the risk of admission to a hospital for a breast biopsy among Pill users was reduced by about 75 percent, compared to those women who had never used the Pill at all. The Boston Collaborative Drug Surveillance Program did another retrospective study, and had similar findings. One hundred twenty-one patients with breast disease (cancer, fi brocystic disease, fibroadenoma and miscellaneous problems such as fibrolipoma or benign duct ectasia) were compared to 842 patient controls. Among the women with newly diagnosed breast cancer, three of 23 (13 percent) had received oral con traceptives, compared to 20 percent who had received oral contraceptives among the controls. Of the patients with benign breast tumors, six percent had received oral contraceptives, compared to the 20 percent among the controls. When these findings were analyzed for the patients' ages, it was revealed that at each age level, Pill usage was less common in those with benign breast tumors. According to their data, hospital admissions in the Boston area for breast diagnos@ were reduced by almost half for those women using oral contraceptives. Dr. Kretzschmar: No, there isn't. There is no evidence that the Pill increases the risk of breast cancer and in fact, it has been shown that oral contraceptives may offer some protection against benign breast disease. For example, when Vessey and his group com pared 345 women admitted to London teaching hospitals with breast lumps (90 malignant and 255 benign) against a matched control group admitted for acute medical or surgical condi tions, it was found that oral contraceptives were in no way related to the risk of breast cancer. It was discovered that the risk of admission to a hospital for a breast biopsy among Pill users was reduced by about 75 percent, compared to those women who had never used the Pill at all. The Boston Collaborative Drug Surveillance Program did another retrospective study, and had similar findings. One hundred twenty-one patients with breast disease (cancer, fi brocystic disease, fibroadenoma and miscellaneous problems such as fibrolipoma or benign duct ectasia) were compared to 842 patient controls. Among the women with newly diagnosed breast cancer, three of 23 (13 percent) had received oral con traceptives, compared to 20 percent who had received oral contraceptives among the controls. Of the patients with benign breast tumors, six percent had received oral contraceptives, compared to the 20 percent among the controls. When these findings were analyzed for the patients' ages, it was revealed that at each age level, Pill usage was less common in those with benign breast tumors. According to their data, hospital admissions in the Boston area for breast diagnos@ were reduced by almost half for those women using oral contraceptives. 
The prospective study being done by the Royal College of General Practitioners in Great Britain has also ruled out an association between breast cancer and oral contraception. Their 1974 interim report showed that of 46,000 womenâ€"SO percent using the Pill continuously, and 50 percent not â€"¿ 11 cases of malignant neoplasm of the breast were found in Pill takers, four cases in ex-takers, and 16 among the non-taking control group. A lower incidence of benign breast neoplasm became apparent after two years of continual Pill usage. Editor: Aside from causing cancer de novo, is there any evidence that the Pill might exacerbate existing tumors? Dr. Kretzschmar: As far as the belief that women with benign breast tumors have a higher risk of developing breast cancer, the apparent protective effect of oral contraceptives against benign breast tumors could be considered a protection against subsequent development of breast malignancy as well. Also, prolonged use of oral contraceptives simulates pregnancy in some ways, and we know that, relative to other women, those who become pregnant early in life are at a lower risk of developing breast cancer. There is some evidence that women who have fibroids may have enlargement of their fibroids related to the Pill. Of course, women with a strong family history of breast cancer should be followed with caution, and should have careful breast exami nations at frequent intervals; women with known or suspected carcinoma of the breast should choose another form of con traception. But I am quite convinced that oral contraceptive intake is in no way responsible for breast carcinoma, de novo or otherwise. Editor: Have oral contraceptivesbeen linked to ovarian cancer? Dr. Kretzschmar: No. There is no relationship. If anything, the Pill has an in hibitory effect on the development of ovarian cancer. In the British study of 46,000 women, there have been fewer deaths from both breast and ovarian cancer among those women using the Pill than among those who did not. Editor: Is there a relationshipbetweenthe Pill and uterinecervical cancer? Dr. Kretzschmar: No. Among women receiving estrogen from a combination type oral contraceptive, no convincing relationship to cancer of the uterine cervix has been established. In a case-control study done at the State University of New York-Downstate Medical Center in Brooklyn between 1969 and 1975, 689 con secutive patients with cervical carcinoma were interviewed and compared with a control group of 1,300 with normal cervical smears. Each case subject was matched with a control subject for age, ethnic origin, age at first coitus, age at first preg nancy, and socioeconomic status. The findings: no significant difference between the case and control subjects in the use of oral contraceptives. This conclusion duplicates the results of many other stud ies as well. Worth and Boyes compared the use of oral contra ceptives among 310 women, 20 through 29 years of age, who had carcinoma in situ of the cervix with 682 control subjects matched for age who had negative smears. Again, there was no significant difference in the use of oral contraceptives. Thomas compared 324 women who had positive cervical smears showing dysplasia or carcinoma in situ of the cervix with 302 women from the same locality. No difference was found.in their use of the Pill. Although there has been a significant increase in recent years in pre-invasive lesions of the uterine cervix, this increase has been found in both users and non-users of the Pill. 
This brings out one of the more positive aspects of the Pill. Just think of the number of women who routinely see their physicians now and obtain a Pap smear because they are on the Pill. Women who are on the Pill have a much better chance for earlier diagnosis of changes in their cervical epithelium than those who are not â€"¿ and that's certainly a plus factor in the relationship of cancer to oral contraception. Editor: You discount a link between oral contraceptives and uterine Dr. Kretzschmar: No, there is no relationship. The sequential tablets, which provided estrogen alone for 14 days, and then estrogen and progestogen in combination for seven more days, were taken off the market when controversial evidence linking estrogen with possible cancer of the uterus came to light. In 1975, 21 cases of endometrial cancer among Pill-taking women under the age of 40 had been recorded, but in eight of the cases, fac tors were found which militated against a close relationship between oral contraceptives and carcinoma, and of the re maining 13 cases, 11 had takeasequential agents. So, there is absolutely no evidence to link endometrial cancer with com bination or progestin-only oral contraceptives. In fact, since only about eight percent of women taking oral contraceptives at that time used sequential agents, the unduly high incidence of sequential agent therapy in these 21 cases might suggest that women who are predestined to develop this tumor are actually protected against it by the combination pills. If you wish to discuss replacement estrogen, exogenous long-term estrogen for menopausal women, that's controver sial. According to an article published in the New England Journal of Medicine in 1975,the useof theseexogenousestro gens in menopausal and post-menopausal women was associ ated with a 4.5 times greater risk of endometrial cancer. How ever, some physicians do not believe that there is a connection, and there are many knowledgeable people on both sides of the fence. Women taking estrogens should have frequent pelvic cancer-screening examinations and physicians should be on guard if abnormal bleeding develops. I would point out, in this respect, that the Pap smear is not a totally effective screen ing device for endometrial cancer; in fact, 40 to 60 percent of patients with adenocarcinoma of the uterus will have negative Pap results. An evaluation of the endometrial cavity with a suction curette, or an endometrial biopsy should be performed on all patients at high risk, or on those with a suspicious his tory. If a physician then feels he has not gotten a sufficient sampling, a dilation and fractional curettage should be done under local or general anesthesia. Our diagnostic procedures in this area should be further developed. Editor: Would you comment on the evidence linking oral contracep tive use to benign liver tumors? Dr. Kretzschmar: Yes, this was looked at carefully, and in April 1977, the Amer ican College of Surgeons released documented evidence on the increased incidence of benign hepatomas related to oral con traceptives. Their survey material consisted of 543 cases of primary liver tumors among both sexes; 378 in females and 165 in males. Among the males, 8.5 percent were benign, while among the females, 56.1 percent were benign. A positive his tory of oral contraceptive use was reported in 49.5 percent of the female patients, and in 29 percent the contraceptive history was unknown. 
However, it is reasonable to assume that a cer tain number of these â€oe¿ unknowns― included Pill-users; there fore, among the female patients in this study, more than SO percent of primary liver tumors occurred in users of oral contraceptives. The majority (73.8 percent) of the liver tumors diagnosed in Pill-users were benign. On the other hand, among non-users, the percentages of benign and malignant tumors were roughly equal. This difference in the proportion of benign to malig nant tumors among users and non-users is substantial, and further supports the association between Pill use and the occurrence of benign liver tumors. Also, the frequency of ma lignant tumors in this study increased with age, and resembled the distribution of malignant liver tumors in the various age groups of the general population. But the distribution of be nign liver tumors peaked in the age group of 26-30 years, and this parallels the age distribution of oral contraceptive use in the general population. Editor: What were the histologic types of these,benign liver tumors? Dr. Kretzschmar: The survey showed that among non-users, the benign tumors were proportionately divided among four histologic types. But VOL.28,NO.2MARCH/APRIL 1978 among users, there was a preponderance of hepatic cell adeno mas and focal nodular hyperplasia; together, these two types represented 82.6 percent of all benign tumors in users. So it appears that the association between oral contraceptives and benign liver tumors applies only to these two types. The in cidence of adenomas peaked significantly in the 26-30 year old group, and then declined sharply; the incidence of focal nodular hyperplasia peaked in the 31-35 year-olds and then remained rather constant in the older groups. Editor: Do the statistics vary with different types of oral contra ceptives? Dr. Kretzschmar: Two synthetic estrogens are used in oral contraceptives: ethinyl estradiol and mestranol. (Mestranol is demethylated in the liver to ethinyl estradiol.) Where information was avail able on the type of synthetic estrogen used, 66.7 percent of the tumors were found in women who had used mestranol. But that correlation should be interpreted rather cautiously, since mestranol was marketed first, and until 1970, was used more frequently than ethinyl estradiol by the general population. Editor: What were the most com@non presenting sympto@ns, and how were these tumors treated? Dr. Kretzschmar: Many presented with symptoms of intraperitoneal bleeding, although masses and pain were generally the most frequent presenting symptoms in the survey. It would appear that con traceptive users had highly vascularized tumors, and this might suggest that oral contraceptives exacerbate clinical symptomatology of these tumors. But it should be noted that a high proportion of these benign liver tumors were asympto matic and were discovered incidentally. I think clinicians should be especially aware of this diagnostic possibility when examining young women who appear otherwise healthy. Most hepatic cell adenomas studied in this survey were treated by surgical resection, but 13 percent were untreated, and of the cases of focal nodular hyperplasia, 14 percent were untreated. There may be a spontaneous regression of these tumors once oral contraceptive use has been discontinued. 
These two types of benign liver tumors have not been shown to be precursors of hepatocellular carcinoma, and there is no evidence that because of different pathogenic mecha nisms, the benign tumors in patients on oral contraceptives have any proclivity for malignant degeneration. But benign hepatic lesions can suddenly and unexpectedly rupture, and hemorrhage into the abdominal cavity. Emergency resection of the tumors has not always prevented fatalities. Clinicians should be aware that oral contraceptive users are at risk in relation to these benign liver tumors, and should follow their patients accordingly. Editor: Do you recommend the use of DES as a morning-after pill, given its proven correlation with vaginal carcinoma in female children of women who received the drug early in pregnancy? Dr. Kretzschmar: I think that for any patient who is fully aware of the contro versies about it, DES is an appropriate management for the morning-after situation, as is menstrual extraction. Editor: What are the major contraindications to the use of oral con traceptives? Dr. Kretzschmar: Women with present or past thrombophlebitis or thromboem bolic disorders should not take the Pill. Similarly, patients with a history of cerebrovascular and coronary artery disease should use another form of contraception. Impaired liver function, known or suspected carcinoma of the breast or es trogen-dependent cancers are other contraindications. The Pill should not be used when pregnancy is suspected, and any undiagnosed abnormal genital bleeding should be investigated and treated before an oral contraceptive is prescribed. Editor: In your experience, what is the most common side effect of the Pill, and how should it be treated? Dr. Kretzschmar: The most common side effect is breakthrough bleeding, and this should be treated with an increased dosage of estrogen. As a general principle, a patient should begin with the lowest level of estrogen that will prevent ovulation. If breakthrough bleeding persists, the estrogen dosage can be gradually in creased. And I'm sure the physician can find another oral contraceptiveâ€"assuming the patient is healthyâ€"that will not cause this side effect. Editor: Should the Pill beprescribedfor uses other than contraception? Dr. Kretzschmar: This is done, and I think it's acceptable. For example, it's effective and relatively safe in the management of severe dys mennorrhea. Oral contraceptives also provide an effective control of prolonged or excessive bleeding. When there is no pathologic basis for the menorrhagiaâ€"such as leiomyomas, polyps, and the likeâ€"combination agents are very successful in reducing the blood flow. The advantage of this is obvious. Editor: Tosum up, for whom is it safe to prescribe the Pill? Dr. Kretzschmar: A safe candidate for the Pill is any healthy young woman who wishes to have temporary control of her fertility. And I em phasize the word temporary. Neither patients nor physicians should avoid the Pill out of fear of carcinogenicity. There is simply no convincing evidence that the Pill causes cancer.
2019-03-07T14:05:15.755Z
1978-03-01T00:00:00.000
{ "year": 1978, "sha1": "3d5fafa1dd628b61201d5277847e953f991cfe5e", "oa_license": null, "oa_url": "https://doi.org/10.3322/canjclin.28.2.118", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "60208bd03541c011ef5d5d21af48c6058909884e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220398059
pes2o/s2orc
v3-fos-license
A database of human gait performance on irregular and uneven surfaces collected by wearable sensors Gait analysis has traditionally relied on laborious and lab-based methods. Data from wearable sensors, such as Inertial Measurement Units (IMU), can be analyzed with machine learning to perform gait analysis in real-world environments. This database provides data from thirty participants (fifteen males and fifteen females, 23.5 ± 4.2 years, 169.3 ± 21.5 cm, 70.9 ± 13.9 kg) who wore six IMUs while walking on nine outdoor surfaces with self-selected speed (16.4 ± 4.2 seconds per trial). This is the first publicly available database focused on capturing gait patterns of typical real-world environments, such as grade (up-, down-, and cross-slopes), regularity (paved, uneven stone, grass), and stair negotiation (up and down). As such, the database contains data with only subtle differences between conditions, allowing for the development of robust analysis techniques capable of detecting small, but significant changes in gait mechanics. With analysis code provided, we anticipate that this database will provide a foundation for research that explores machine learning applications for mobile sensing and real-time recognition of subtle gait adaptations. Background & Summary Gait analysis is the science of functional assessment of human locomotion, and it has been applied in multiple areas such as medicine, sport, and ergonomics with promising results [1][2][3] . One specific successful application of gait analysis is to assess fall risk exposure and prevent falling injuries 4 . Fall risk is associated with multiple factors including human characteristics, health conditions, and the physical environment 5 . In particular, irregular walking surfaces in the outdoor built and natural environment expose people to potential fall injuries 6 . Unfortunately, traditional gait analysis requires expensive engineering technologies that are time and labor intensive, especially when the analysis involves heuristic hand-crafted feature extraction [7][8][9] . To overcome this limitation, machine learning methods are increasingly being integrated into gait and posture related investigations 10-12 . This data descriptor aims to contribute to machine learning research of gait performance when walking in different outdoor environments, which has surprisingly been limited in previous literature. Previous work has shown that gait adaptations utilized when walking on irregular surfaces may reflect reduced stability and increased fall risk [13][14][15] . However, one limitation of such previous studies is that they were conducted in simulated laboratory environments and thus lack real world validity. With the recent development of wearable motion tracking technologies such as Inertial Measurement Units (IMU), we now have the capability to extend gait analysis into outdoor settings to maximize ecological validity. In order to develop accurate, robust and generalizable machine learning algorithms to recognize subtle gait alterations, it is necessary to have sufficient amounts of properly annotated data. Unfortunately, very limited gait related data sets are publicly accessible. Among these, most were primarily generated for human activity 2 Scientific Data | (2020) 7:219 | https://doi.org/10.1038/s41597-020-0563-y www.nature.com/scientificdata www.nature.com/scientificdata/ recognition purposes so the activity tasks included have a very broad spectrum of coverage [16][17][18][19][20][21][22][23][24] . 
For example, gait is usually one category accompanied by other activities that have substantial differences (sitting, lying down, climbing stairs, running, etc.). Subtle gait alterations due to internal/external factors have never been considered or properly annotated in existing public data sets. A second category of data sets are focused on utilizing human gait performance as a biometrics characteristic for human identification 12,[25][26][27][28][29][30] . Therefore, creators of those data sets usually only considered between subject differences and only collected short duration of gait trials from each participant which is not sufficient to train advanced machine learning models. Furthermore, the environmental conditions in which these data were collected are not always reported in sufficient detail. In order to advance machine learning for the recognition of human gait changes caused by walking surface characteristics, there is an urgent need to create large data sets that have an exhaustive set of walking surfaces representative of the real environment outside the laboratory, preferably with wearable and non-intrusive sensors. Therefore, in this descriptor, we present a publicly accessible data set collected with wearable motion sensors where participants walked on different real-world outdoor surfaces. We anticipate that this data set will provide a foundation for subsequent research that explores the application of machine learning to mobile sensing and real-time recognition of subtle gait adaptations. Methods Participants. Thirty young participants with no reported neurological or musculoskeletal conditions that affected their gait or posture and no history of falling injuries in the previous two years volunteered for this study. The sample of participants is in proximity to normal urban US campus. Their anthropometry information is provided in www.nature.com/scientificdata www.nature.com/scientificdata/ centered on both the anterior thighs, 4 & 5) centered 5 cm above the bony processes of both ankles, and 6) posterior level of L5/S1 joint ( Fig. 1). Researchers palpated participant's bones to place the sensors. Participants were instructed to face southwest and perform a sensor calibration procedure three times prior to the experimental trial collection. The calibration procedure was: 1) line up directly centered with experiment computer; 2) forward trunk flexion about 30 degrees 3 times; 3) raise right arm 3 times; 4) raise right leg three times; 5) raise left leg three times. A researcher performed these movements with the participant. The calibration data are also included in this data set. The nine walking surfaces were: 1) flat even (horizontal, 0 grade, paved); 2) up stairs (cement); 3) down stairs (cement); 4) sloped up (cement); and 5) sloped down (cement) 6) grass; 7) banked left (paved); 8) banked right (paved); 9) uneven stone brick (Fig. 2). Participants were instructed to walk at their normal pace and to let their arms swing naturally. Participants stood still at the starting position and waited for the verbal cue from a researcher to start their walking trials. Each walking trial lasted for 16.4 ± 4.2 seconds until stop. Within each trial, walking was performed by participants without changes of direction (i.e. straight walking). Between trials, only walking on flat even, grass, and uneven stone brick were conducted with direction changes every other trial (i.e. walking forward for the first trial and walking back for the next trial). 
Surfaces were presented in a randomized order and adequate rest was provided to prevent fatigue between trials. Participants walked six times on each of these surfaces, and a researcher walked next to them with the experimental data capture machine to ensure a strong signal connection. A summary of the data collection conditions includes weather ('N/A' was filled if weather was not recorded), temperature, and time of day for each participant is provided in Table 2. www.nature.com/scientificdata www.nature.com/scientificdata/ Data processing. Wearable data were collected using the MTw Awinda software (Xsens, Enschede, Netherlands). The sampling frequency was set at 100 Hz. Raw sensors' outputs were synchronized by the software and then exported to a standard txt file format. Subsequently, all the data files were imported and processed under MATLAB (R2019a, The MathWorks, Natick, USA). Trajectories were smoothed using a 2 nd order Butterworth low pass filter with a 6 Hz cut-off frequency. Figure 3 is presented to give an example of the filtered signal pattern of the trunk sensor while walking on different surfaces. Data records raw data. All raw data files exported from MTw are stored as .txt format and have been uploaded into figshare 31 to provide free accessibility to the public. A total of 10,260 (30 participants * 57 trials * 6 sensors) files are available from the database. Files are grouped by folders with labels from 1-30 representing the participant number (30 participants in total). Each file was named systematically as '#-000_00B432**.txt' , where '#' represents the walking surface condition (Table 3) and '**' represents the sensor location (Table 4). For example, file '9-000_00B432CC.txt' stands for the trunk sensor ('CC') data while walking on the flat even surface ('9') for all participants. Furthermore, for each trial there was a .mtb file (i.e. binary motion tracker file). Sensors' outputs (e.g. 3D acceleration, 3D gyroscope data) as well as the recording information (e.g. start time, update rate, filter profile, and firmware version) are stored in each file with labels. The average duration for each surface condition (across all participants) is summarized in Table 3. A comprehensive description of the data structure and variable labels are given in Table 5. Processed data. A processed data file was also provided as a .mat format (data file format of MATLAB) in the repository. Raw sensor data from 30 participants were aggregated into one single file with participant as the first layer and sensor as the second layer. The outline of the MATLAB script is described as following: 1. import the raw txt files; 2. apply Butterworth low-pass filter (2nd order, cutoff frequency: 6 Hz, sampling frequency: 100 Hz); 3. count the missing frames; 4. export processed data into .mat file. Sensor placement. Participants were required to wear tight clothes during the experiment to prevent sensor movement. As described in the procedures (see Data Collection), the wearable sensor placement followed the instructions available in the manufacturer's documentation. In addition, before each experiment, the signal quality of each IMU sensor was manually verified through the system's acquisition software. IMU sensors were positioned by the same researchers (Authors BH and SC) for consistency. Missing data. The trial-wise data missing rate is recorded in the database for each participant (under the second layer of the .mat file). 
Due to transmission errors between the data collection computer and the IMU www.nature.com/scientificdata www.nature.com/scientificdata/ sensors, some data frames/packages were dropped. However, we have confirmed that missing data is not a major issue for this data set, only a small fraction of data packages were dropped (0.23% ± 0.69%). Data missing rate is summarized by sensor location in Table 6 and by walking surface in Table 7. Comparison with published data sets. The age of the participants differed significantly from previously published data sets, which varied from ages 2 to 78 years [18][19][20]22,[24][25][26][27]29 , whereas this data set only included young adults. The number of participants of previous data sets also varied significantly from 8 to 744. Subject number is an important technical component for database selection considering the need for large amounts of data during www.nature.com/scientificdata www.nature.com/scientificdata/ machine learning model training. Nevertheless, it also obscures the merit of data sets that have relatively few participants, but longer recording lengths. For example, although Ravi et al. 23 only recruited 10 participants in their study, a total of 30 hours of data were collected using different models of smartphones with an unconstrained phone placement setting. The data set can be treated as a suitable data resource of models designed for real-world application in which the models and placement of smartphones are always unspecified. Our data set includes 30 participants and each one has a relatively large amount of data collected. The current data set is well aligned with previous similar data sets. When using these data sets for gait-related machine learning model development, we should be aware that the relative homogeneous samples might restrict the generalizability to more heterogeneous data in terms of age distribution. The annotation of the ground truth for recorded activities is also important for publicly accessible data sets because it is needed to validate the predicted outcome. Most of the previous similar data sets have documented the types of activities participants performed. Among them, many include walking records on different surfaces (walking on concrete/grass field, walking upstairs/downstairs, etc.) 16,[18][19][20][21][22]24,26,27 . Compared to them, the current data set provides a larger amount of irregular walking surfaces. Machine learning algorithm developers could benefit from the diversified walking records contained in the present data set. Although some parameters about testing sites (e.g. the grade of the slope and the stair dimensions) were not systematically surveyed during the data collection phase, we believe they represent common public architecture features. To further improve the usability of the data, more details about measurement sites will be provided in the GitHub and publicly accessible data description in the future. Usage Notes Previous literature has shown that IMUs are a valid tool for measuring subtle changes in gait kinematics and the performance is as sensitive as the current standard in kinematic tracking (i.e. optical motion capture) 32 . To support a range of users in accessing the data set, other than raw data, processed data are provided in .mat format in the data repository. The .mat data file is readable by both Python and MATLAB environments. Existing Python and MATLAB open-source tools focused on gait and human motion kinematics could be used to analyze this data set. 
GaitPy provides python functions to read accelerometry data and estimate the clinical characteristics of gait (https://pypi.org/project/gaitpy/). It could be a complementary tool when utilizing this data set. For MATLAB, the Kinematics and Inverse Dynamics toolbox (https://www.mathworks.com/matlabcentral/fileexchange/58021-3d-kinematics-and-inverse-dynamics) can be utilized in investigating joint kinematics and dynamics. Moreover, biomechZoo, which help users analyze, process, and visualize motion data from various sensors 33 could support researchers aiming to explore this data set. Code availability The custom MATLAB script to process data is provided on the following Github repository: https://github.com/ UF-ISE-HSE/UnevenWalkingSurface. A Python script (python_version.py) was also provided for converting the processed data into Python compatible format. The .h5py file can be directly use as a standard file object in Python to process.
2020-07-08T15:07:12.076Z
2020-07-08T00:00:00.000
{ "year": 2020, "sha1": "c8a9dc05f7cea56ed3744c78f6ed5ada272b5187", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41597-020-0563-y.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8a9dc05f7cea56ed3744c78f6ed5ada272b5187", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
238222261
pes2o/s2orc
v3-fos-license
Coherent Anti-Stokes Raman Scattering Microscopy: A Label-Free Method to Compare Spinal Cord Myelin in Different Species Many histological techniques are used to identify and characterize myelin in the mammalian nervous system. Due to the high content of lipids in myelin sheaths, coherent anti-stokes Raman scattering (CARS) microscopy is a label-free method that allows identifying myelin within tissues. CARS excites the CH2 vibrational mode at 2845 cm−1 and CH2 bonds are found in lipids. In this study, we have used CARS for a new biological application in the field of spinal cord analysis. We have indeed compared several parameters of spinal cord myelin sheath in three different species, i.e., mouse, lemur, and human using a label-free method. In all species, we analyzed the dorsal and the lateral funiculi of the adult thoracic spinal cord. We identified g-ratio differences between species. Indeed, in both funiculi, g-ratio was higher in mice than in the two primate species, and the myelin g-ratio in lemurs was higher than in humans. We also detected a difference in g-ratio between the dorsal and the lateral funiculi only in humans. Furthermore, species differences between axon and fiber diameters as well as myelin thickness were observed. These data may reflect species specificities of conduction velocity of myelin fibers. A comparison of data obtained by CARS imaging and fluoromyelin staining, a method that, similar to CARS, does not require resin embedding and dehydration, displays similar results. CARS is, therefore, a label-free alternative to other microscopy techniques to characterize myelin in healthy and neurological disorders affecting the spinal cord. INTRODUCTION Myelin is a lipid-rich protective cover formed by oligodendrocytes that surround and protect axons. Lipids account for about 70% of the myelin and myelin sheaths are characterized by a high lipid-toprotein ratio. Furthermore, myelin displays different lipid compositions when compared to typical plasma membranes [1,2]. Myelin sheaths permit to increase the propagation speed of action potentials along axons [3,4]. Moreover, myelin is a dynamic structure spatially organized in heterogeneous functional domains that provide metabolic support to neurons [5]. Loss and alteration of myelin that results in the reduction of nerve conduction velocity and in the altered transfer of energy metabolites to neurons are reported in various diseases [6,7]. Damage to myelin sheaths in humans is observed in severe neurological conditions such as multiple sclerosis, idiopathic inflammatory demyelinating diseases, acute disseminated encephalomyelitis, and neuromyelitis optica [4,8]. To identify myelin on tissues, on the one hand, classical staining based on the specific lipid composition of myelin, such as Luxol fast blue [9], Sudan Black B [10], Baker's acid hematin method [11], and silver staining [12], had been originally developed. However, these stainings do not always reach a sufficient resolution and contrast to visualize individual fibers [13]. On the other hand, immunochemical methods permit to characterize myelin structure with single fiber resolution and a high reproducibility. Antimyelin protein antibodies most commonly used are myelin basic protein (MBP), proteolipid protein (PLP), myelin oligodendrocyte glycoprotein (MOG), myelin protein zero (MPZ), and myelin-associated glycoprotein [14]. As for all immunohistochemistry approaches, drawbacks are the potential lack of specificity and background noise. 
Moreover, they only permit a semiquantitative quantification. Coherent anti-Stokes Raman scattering (CARS) microscopy is a nonlinear optical technique using the endogenous contrast provided by molecules present in the sample [15][16][17]. The major advantage of this technique is to be done directly on tissues without staining, dehydration, and embedding steps that are detrimental to myelin preservation [18]. Lipid-rich myelinated tissues, such as the spinal cord and brain, appeared to be good samples for CARS imaging [19]. CARS had been used to develop an automated method for the segmentation and morphometric analysis of nerve fibers of spinal cord tissue [20]. CARS had also been used to monitor live myelinated fibers [21], in vivo mouse brain [22], and to carry out a longitudinal in vivo follow-up of demyelination and remyelination in the injured rats' spinal cord [23]. CARS also permitted to characterize demyelination in mouse models of diseases such as amyotrophic lateral sclerosis [24], experimental autoimmune encephalomyelitis [25], and brain tissues of multiple sclerosis patients [26]. This is the first study using a label-free method to compare myelin sheaths of two separated spinal cord tracts in three different species, i.e., mice, lemurs, and humans. Direct comparison of myelin characteristics between species will not only provide basic data on their similarities but also open the way to compare myelin alterations in animal models and human diseases. METHODS Study approval: Experiments were approved by the Veterinary Services Department of Hérault, the regional ethic committee n°36 for animal experimentation, and the French Ministry of National Education, Higher Education and Research (authorizations; mice: n°34118 and non-human primates n°A PAFIS#16177-2018071810113615v3). Experiments followed the European legislative, administrative, and statutory measures for animal experimentation (EU/Directive/2010/63) and the ARRIVE guidelines. Human samples collection was done under the approval of the "Agence de la Biomédecine" (PFS-ssNUM-BAUCHET). Mice: Three C57BL6/6J male mice of 3 months of age (Charles River, Wilmington, United States) were used. Non-human primates: three adult male lemurs (Microcebus murinus, 2 years old) were used. They were born and bred in the animal facility (the University of Montpellier, France (license approval 34-05-026-FS) and housed in cages equipped with wooden nests and an enriched environment. The temperature of the animal facility was constantly kept between 24-26°C with 55% of humidity. All Microcebus murinus were fed 3 times a week with fresh fruits and a mixture of cereal, milk, and eggs. Water was given ad libitum. Human: Low thoracic (T11-T12) spinal cords were obtained from three brain-dead organ-donor patients (2 males 45 and 51 years and 1 female 68 years) under the approval of the French Institution for Organ Transplantation. One patient died from cardiac arrest and two from a ruptured aneurysm. Body temperature was lowered and blood circulation and ventilation were maintained until 4 h before spinal cord removal. This shorttime interval permitted good preservation of the tissue, as already reported [27]. After organs removal for therapeutic purposes, T8-L5 vertebral bloc was isolated and spinal cord segments were removed and immediately fixed in 4% paraformaldehyde. 
Luxol Fast Blue and Neutral Red Staining 14-µm-thick axial spinal cord cryosections (Microm HM550, Thermofisher Scientific, Waltham, United States) were collected on Superfrost Plus© slides. Luxol fast blue staining was done as previously described [28,29]. Briefly, sections were placed 5 min in 95% ethanol and then incubated in 0.1% Luxol fast blue under mild shaking (12 h, room temperature). Slides were then rinsed for 1 min in milli-Q water, then placed for 1 min in lithium carbonate (0.05%), and finally washed in tap water (1 min). Subsequently, slides were incubated for 10 min in 0.5% neutral red solution, 5 min in 100% ethanol, and washed twice for 10 min in xylene. All slides were cover-slipped using Eukitt (Sigma Aldrich, Darmstadt, Germany). Coherent Anti-Stokes Raman Scattering and Quantifications We used LSM 7 MP optical parametric oscillator (OPO) multiphoton microscope (Zeiss, Oberkochen, Germany) with an upright Axio Examiner Z.1 optical microscope associated with a femtosecond Ti: sapphire laser (680-1,080 nm, 80 MHz, 140 fs, Chameleon Ultra II, Coherent, France) pumping a tunable OPOs (1,000-1,500 nm, 80 MHz, 200 fs, Chameleon Compact OPO, Coherent, France) to acquire CARS images. We imaged axial spinal cord sections (14 µm) in all species. A x20 water immersion lens (W Plan Apochromat DIC VIS-IR) with the following characteristics: 1024 x 1024 pixels frame size, scan speed of 6 (zoom x1.2) and 8 (mosaic, zoom x3, PixelDwell 3.15 and 1.27 μs/scan, respectively) and either a zoom x1.2 or x3 was Frontiers in Physics | www.frontiersin.org September 2021 | Volume 9 | Article 665650 used. CARS excites the CH 2 vibrational mode at 2845 cm −1 and CH 2 bonds are found in lipids and thus in myelin sheath [30]. Excitation wavelengths are 836 and 1,097 nm (synchronized Ti-sapphire and OPO, respectively) and the signal is detected at 675 nm (filter from 660-685 nm). The non-resonant background is reduced due to the use of femtoseconds impulsions [31,32] and EPI-detection [33] [for review see [34]]. We collected CARS signaling in the nearinfrared (670 nm) since this wavelength produces rather limited autofluorescence when using biological tissue. Moreover, and as previously reported, before getting a simultaneous scan of both lasers, we switch off sequentially one of the laser beams (either OPO or Ti : sapphire) to confirm a robust intensity decay when compared to the CARS signal [30]. Pictures are a stack of 3 µm (3 slices) and were taken in six locations within the lateral funiculus and three locations in the dorsal funiculus ( Figure 1C). In each picture, a square of 100 µmX100 µm located in the center of the image was quantified. Imaris 9.6.0 software was used (Bitplane AG, Zurich, Switzerland) for quantifications using numeric x3 zoom applied to the original image ( Figure 1G). Only fully identifiable fibers were quantified, and diameters were randomly selected and measured through unidirectional length, without selection criteria (shortest or longest diameter). For some acquisitions, a quick fluoromyelin (20 min, 1: 200, Invitrogen Carlsbad, United States), (rinsed 3 × 10 min in PBS) staining was added to observe eventual co-localization. Fluoromyelin Staining We imaged 14 µm-thick axial spinal cord cryosections of the same individuals as for CARS analysis for all species. Sections were incubated 20 min with fluoromyelin (1:200, Invitrogen, Carlsbad, United States), rinsed 3 × 10 min in PBS and mounted with fluorosave (Dako, Glostrup, Denmark). 
Images were acquired with THUNDER Imager 3D (Leica, Wetzlar, Germany; lens x 63). For all species, one field of 600 µmX400 µm was acquired in the lateral funiculus and one field of 600 µmX200 µm in the dorsal funiculus. In each field, a picture of 200 µmX200 µm located in the center of the image was taken for quantification, and 40 fibers were randomly selected and measured per location and sample. ImageJ software was used (National Institutes of Health, United States) for quantifications using numeric zoom to reach 300% of the original image. Diameter measurements were done for CARS analysis. Statistics For CARS analysis, at least 432 fibers were quantified per anatomical location and species (3 individuals per species) [number of fibers analyzed: mice (DF 432; LF 481); Coherent Anti-Stokes Raman Scattering Imaging Allows Discriminating Myelin Across Species Through G-Ratio Measurement We first compared coherent anti-stokes Raman scattering (CARS) imaging ( Figure 1A, fast scanning mosaic) with standard histological methods to detect myelin, including Luxol fast blue staining ( Figure 1B Figure 1F). We then calculated the g-ratio on numerically zoomed images (ratio of the inner-to-outer myelinated fiber diameter, Figures 1G-H) in the lateral (LF) and in dorsal (DF) funiculi in each species. No significant difference in between funiculi was observed in lemurs (p 0.0565; Figure 1J) nor in mice (p 0.34; Figure 1K). Conversely, in humans, the g-ratio was higher in dorsal than in lateral funiculus (p 0.0029, mean g-ratio DF 0.48 ± 0.005; mean g-ratio LF 0.45 ± 0.005; Figure 1I), that may reflect differences in conduction speed in between funiculi. For each species, no major difference in the distribution of the g-ratio between the lateral and the dorsal funiculi was observed ( Figure 2). However, g-ratio between 0.4 and 0.5 (0.45) was the most prevalent in both funiculi in humans (Figures 2A,B) and lemurs (Figures Figures 2E,F) where the peak was observed for g-ratio between 0.5 and 0.6 (0.55). Spectra of G-Ratio, Fiber Diameter, Axon Diameter, and Myelin Thickness in the Three Species Comparison of g-ratio, fibers, and axons diameters ( Figure 1J) in the lateral and dorsal funiculi (Figure 3) also highlighted species specificities. In both funiculi, g-ratio was lower in humans than in lemurs than in mice ( Figure 3A, p < 0.001 for all comparisons); that may reflect differences in conduction speed in between species. Moreover, in humans, fiber and axon diameters, as well as myelin thickness, were higher in the dorsal (fiber diameter: 6.99 ± 0.09 µm, axon diameter: 3.47 ± 0.07 µm and myelin thickness: (Figures 3B-D). Taken together, these data demonstrate that using CARS to compare fiber and axon diameters as well as myelin thickness allows interspecies discrimination of three healthy mammal spinal cords. Fluoromyelin Analysis Display Similar G-Ratio Values as Coherent Anti-Stokes Raman Scattering Imaging To confirm the accuracy of CARS imaging, we then carried out in the same samples, g-ratio analysis using fluoromyelin staining; another method that, similar to CARS, does not require resin embedding and dehydration and permits to visualize myelin ( Figure 4). In the first step, we acquired simultaneously CARS ( Figure 4A) and fluoromyelin staining ( Figure 4B), both signals partly co-localized ( Figure 4C). 
We then used a Thunder imager with computational clearing to obtain images without out-of-focus (Figures 4F, G); for lemurs, the g-ratio in both funiculi was predominantly at 0.55 (however, the proportion of fibers presenting a g-ratio of 0.45 was almost identical) ( Figures 4J, K); finally, in mice, the peak was observed for g-ratio at 0.55 (Figures 4). . Taken together, these data demonstrate that CARS is a label-free alternative to other microscopy techniques that allow to discrimination of myelinated fibers across species in the mammal spinal cord funiculi. DISCUSSION Here, we present the first CARS analysis of spinal cord myelin in two white matter tracts (lateral and dorsal funiculi) of three different species, i.e., mice, lemurs, and humans. We identified species specificities in particular regarding values of the g-ratio and thus confirmed the accuracy of CARS imaging as an alternative to other microscopy techniques to assess and compare myelin across species. G-Ratio Coincides with Species Evolution and Myelin Fibers Differ within the Same Species According to Their Location Myelin fibers g-ratio of both funiculi was higher in mice than in lemurs than in humans and thus inversely coincides with species evolution. This may partly reflect differences in fiber conduction speed across species. Indeed, a few studies have demonstrated that the g-ratio is not only a key determinant for the conduction velocity of a fiber [3,35,36] but also optimized for speed of signal conduction, cellular energetics, and spatial constraints [37]. The distribution of the g-ratio within lateral or dorsal funiculi in the three species highlighted a similar repartition in humans and in lemurs by opposition to mice. Moreover, we identified a higher g-ratio in dorsal as compared to the lateral funiculus only in humans. Conversely, no difference in g-ratio is observed between the lateral and the dorsal funiculus in mice and lemurs. Fibers and axons diameters, as well as myelin thickness, are higher in the dorsal funiculus in humans. This observation certainly mirrors anatomical differences in fiber tracts displaying sensory motor functions and may reflect species-specificities of conduction velocity of myelin fibers. Taken together, structural similarities between humans and lemurs central nervous system confirms the necessity to develop non-human primate models to study CNS diseases such as demyelinating disease, traumatic brain injury, and spinal cord injury. This is particularly important when studying spinal cord disorders since closer anatomical and functional characteristics of the motor systems, including the corticospinal tract, is observed between human and non-human primate as opposed to rodent [38]. Coherent Anti-Stokes Raman Scattering Microscopy, an Alternative Method to Analyze Myelin A neuroimaging method, termed multi-component-driven equilibrium single-pulse observation of T1 and T2 (mcDESPOT), allows to examine myelin water fraction (MWF) as an in vivo metric of myelin integrity and content [39,40]. It has been demonstrated that a combination of magnetic resonance (MR) markers that are sensitive to the myelin volume fraction (MVF) and to the intra-axonal volume fraction (AVF) is sufficient to compute a g-ratio for each voxel (aggregate g-ratio). However, it does not allow estimating axon diameter, myelin sheath thickness, and pitfalls of g-ratio imaging such as MR artifacts, lack of specificity, low spatial resolution, and long acquisition times remain [37]. 
Thus, as suggested recently [18], the emergence of exhaustive databases of myelin fibers structure using several modalities of investigation tools will facilitate further validation of non-invasive methods such as magnetic resonance imaging. The overall lower g-ratio values (about 0.5) that we obtained using both CARS and fluoromyelin staining as compared to those obtained using electron microscopy may result from variation in factors such as fixative, embedding, and dehydration steps. As recently reviewed, methods that do not require staining, embedding, and dehydration, which are all critical steps for myelin damage, may provide accurate measurement of parameters such as g-ratio and myelin sheath thickness [for review see [18]]. Slight differences in g-ratio repartition that we observed when using CARS imaging and fluoromyelin; in particular with lemurs, may thus also reflect differences in tissue processing. Indeed, even if both techniques do not require resins embedding and dehydration, conversely to CARS, fluoromyelin is not a label-free method and requires mounting. In conclusion, this study compared the first-time spinal cord myelin sheath in three different species using a label-free method and thus represents a new biological application of a label-free method in the field of spinal cord analysis. We identified species differences between axon and fiber diameters, myelin thickness, and g-ratio that may reflect species-specificities of conduction velocity of myelin fibers. The combination of several imaging techniques, including CARS, will thus permit to better characterize myelin structure in healthy conditions and its alterations in diseases. DATA AVAILABILITY STATEMENT The raw data supporting the conclusion of this article will be made available by the authors without undue reservation. ETHICS STATEMENT The studies involving human samples collection were reviewed and approved by the "Agence de la Biomédecine" (PFS-ssNUM-BAUCHET). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. The animal study was reviewed and approved by the Veterinary Services Department of Hérault, the regional ethic committee n°36 for animal experimentation, and the French Ministry of National Education, Higher Education and Research (authorizations; mice: n°34118 and non-human primates n°APAFIS#16177-2018071810113615v3). Experiments followed the European legislative, administrative, and statutory measures for animal experimentation (EU/Directive/2010/63) and the ARRIVE guidelines. AUTHOR CONTRIBUTIONS GP participated in the design of the project and in human samples collection, analyzed the data, and contributed to the writing of the manuscript; YG contributed to the design of the project as well as acquisition and analysis of the data; J-CP participated in the acquisition and analysis of the data; KO participated in the quantification of the data; NL participated in human samples collection; FV-L coordinated human samples collection; HB designed CARS acquisition and participated in the analysis, and FP conceptualized the research, designed the project, participated in the analysis and data interpretation, drafting the work and final approval. FUNDING This work was supported by the patient organizations "Demain Debout Aquitaine" (to YG and FP) and "Verticale" (to YG and FP). 
The funding sources were not involved in study design, collection, analysis, and interpretation of data and the writing of the report and in the decision to submit the article for publication. ACKNOWLEDGMENTS We thank the "Agence de la Biomédecine" for organization of human samples collection. We thank Fabrice Bardin and the "Montpellier Ressources Imagerie" for advices in CARS acquisition. We also thank Nadine Mestre-Frances for her expertise in non-human primates.
2021-09-30T13:26:36.505Z
2021-09-30T00:00:00.000
{ "year": 2021, "sha1": "a1d213d99ea7e97ec83e118dd799bbfdb873943a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2021.665650/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "a1d213d99ea7e97ec83e118dd799bbfdb873943a", "s2fieldsofstudy": [ "Biology", "Medicine", "Materials Science" ], "extfieldsofstudy": [] }
232379885
pes2o/s2orc
v3-fos-license
Research on Hopf Bifurcation and Stability of Heterogeneous Lorenz System with Single Time Delay Time-delay chaotic systems refer to the hyperchaotic systems with multiple positive Lyapunov exponents. It is characterized by more complex dynamics and a wider range of applications as compared to those non-time-delay chaotic systems. In a three-dimensional general Lorenz chaotic system, time delays can be applied at different positions to build multiple heterogeneous Lorenz systems with a single time delay. Despite the same equilibrium point for multiple heterogeneous Lorenz systems with single time delay, their stability and Hopf bifurcation conditions are different due to the difference in time delay position. In this paper, the theory of nonlinear dynamics is applied to investigate the stability of the heterogeneous single-time-delay Lorenz system at the zero equilibrium point and the conditions required for the occurrence of Hopf bifurcation. First of all, the equilibrium point of each heterogeneous Lorenz system is calculated, so as to determine the condition that only zero equilibrium point exists. Then, an analysis is conducted on the distribution of the corresponding characteristic equation roots at the zero equilibrium point of the system to obtain the critical point of time delay at which the system is asymptotically stable at the zero equilibrium point and the Hopf bifurcation. Finally, mathematical software is applied to carry out simulation verification. Heterogeneous Lorenz systems with time delay have potential applications in secure communication and other fields. Introduction In respect of nonlinear dynamic systems, Hopf bifurcation and stability analysis of time-delay chaotic systems has attracted much attention for research. The distinctive characteristics of a timedelay chaotic system are detailed as follows. Firstly, the evolution of the system over time depends not only on the current state of the system but also on its past state. Secondly, it contains an infinite dimensional state space and exhibits extremely complex dynamic behaviors, which makes it different from the non-time-delay chaotic system. For these reasons, it has been widely applied in such fields as natural science, engineering technology, and social science [1][2][3] . As far engineering application is concerned, it is common for time-delay chaotic systems to cause system instability and bifurcation in various forms. Among them, Hopf bifurcation [4][5] is a commonplace and most discussed. For the generation mechanism of Hopf bifurcation, the condition is that the equilibrium point of the system switches stably with the change to a certain system parameter while the nonlinearity of the system restricts the disrupted divergent motion to a narrow range. Therefore, the precondition for the existence of Hopf bifurcation can be determined by analyzing the distribution of characteristic roots, which indicates that the existence of a certain system parameter value makes the system characteristic equation have negative real parts except for a pair of single conjugate pure imaginary roots. Moreover, the parameter value is taken as the Hopf bifurcation point when the corresponding characteristic root curve meets the transversal condition. In recent years, there is still little attention paid to the research on Hopf bifurcation of the Lorenz system with time delay. 
Since 1963 when the meteorologist Lorenz proposed the first classic Lorenz system, researchers have put forward various heterogeneous Lorenz systems [8][9][10][11] , such as Lü system [6] and Liu [7] system and conducted analysis of its chaotic mechanism for application in engineering settings. Using the first Lyapunov coefficient, Mello et al. analyzed the bifurcation characteristics of the three-dimensional Lorenz-like system [13] . Li et al. investigated the bifurcation characteristics of a novel Loren-like chaotic system at different equilibrium points [14] . Wang et al. demonstrated the fractional bifurcation of a five-dimensional Lorenz-like system [15] . Besides, the Routh-Hurwitz criterion and the high-dimensional Hopf bifurcation theory were applied to study the Hopf bifurcation characteristics of the three-dimensional autonomous Lorenz system [16] . In general, the time-delay chaotic system equation is linearized at the singularity to obtain the transcendental equation. In this way, the distribution of the roots of the transcendental equation is relied on to determine the Hopf bifurcation condition of the time-delay chaotic system. Through an in-depth discussion conducted by J.K. Hale [17] , a theoretical foundation is laid for the study of Hopf bifurcation in time-delay chaotic systems. Professor Wei Junjie et al. [18] applied Rouche's theorem to provide the zero-point distribution theorem of exponential polynomials, which promoted the research on Hopf bifurcation theory. Extending and applying the canonical type theory to delay differential equations, T.Faria and Magalhães proposed a canonical type calculation method, which contributed significantly to the development of bifurcation theory [19][20][21] . At present, the bifurcation research on time-delay chaotic systems has been on the rise gradually. By introducing a generalized form of a time-delayed Lorenz system (the Lorenz system has (2n+1) dimensions), Mahmoud Gamal et al. analyzed not only the stability of trivial fixed points and non-trivial fixed points but also the conditions required for the occurrence of Hopf bifurcation [22] . Kun et al. adopted an improved method of undetermined coefficients to verify the homoclinic orbit of the Chen system with linear time-delay feedback, based on which the spiral involute projection method was proposed [23] . Lian et al. [24] conducted research on the Hopf bifurcation of Lorenz-like systems with time delay. Li et al. explored Hopf bifurcation of disturbed Lorenz-like systems with time delay [25] . This paper is structured as follows. In Section one, a brief introduction is made of the timedelay chaotic system and its Hopf bifurcation. In Section two, a general heterogeneous single-timedelay Lorenz system model is proposed, and the condition of only zero equilibrium point is indicated. Section 3 elaborates on the Hopf bifurcation and stability conditions of the three types of heterogeneous Lorenz systems with a single time delay. Besides, mathematical software is adopted to carry out simulation verification, which reveals that the conclusions drawn are consistent with the results of theoretical analysis. Finally, the conclusions are detailed in the concluding section. General Lorenz System Model Proposed by Lü et al. in 2002, the unified chaotic system connects the Lorenz system, Lü system, and Chen system. 
Its system model is expressed as x y x y x xz y z xy z Where, x , y , and z represent state variables, while [0,1]    = , the system is classed as the generalized Lorenz system, the generalized Chen system, and the generalized Lü system, respectively. The unified chaotic system model demonstrates the basic structure of Lorenz using a single parameter. However, the number of its system parameters are too small, thus limiting the parameter range. Then, researchers proposed the corresponding bifurcation laws and stability conditions through continuous updates by forming many variants of Lorenz chaotic systems [26] (such as Lorenz-like systems [27] ). Without any compromise on generality, a general Lorenz system is proposed in this paper, and an investigation is conducted into the bifurcation law of its heterogeneous single-time-delay chaotic system. The dynamic equation of the system is expressed as follows: () x a y x y bx dy xz z cz xy Where, a , b , c , and d are system parameters. Eq. (2) involves 7 terms, among which there are only 2 nonlinear terms. Compared with other chaotic or hyperchaotic systems, the structure of this system is simpler, thus making it easier to implement the circuit. Therefore, the system is applicable in such fields as secure communication. Stability and Hopf bifurcation conditions of heterogeneous singletime-delay Lorenz systems Many researchers have imposed time delay on the state variables to develop the time-delay Lorenz chaotic system. However, the different positions of the added time delay can lead to a functional differential dynamic system with different dynamic behaviors. In Eq. (2), applying a single time delay to the state variable can give rise to nine forms of heterogeneous single-time-delay chaotic systems, which are different from other references. Three of the heterogeneous forms are presented as follows, which are also the stability and bifurcation conditions to be explored later in this study. denotes the amount of time delay, which can be understood as the time it takes for the predator to have the ability to prey, the incubation period of infectious diseases, or the delay time of signal transmission. The three systems ABC have three identical equilibrium points as follows: A chaotic system The linearization equation of the A chaotic system at the equilibrium point (0, 0, 0) O is expressed as: Where, the value range of system parameters a , b , c , and d is 0 Eq. (4) can be reduced to .Thus, the following lemma can be obtained. Proof: when 0  = , the characteristic equation (5) is transformed into pp + , and 35 0 pp + . According to the Routh-Hurwitz theorem, all roots of the characteristic equation (6) are common in having negative real parts. Thus, the equilibrium (  is an undetermined constant greater than zero) is a pure imaginary root of Eq.(5), so that imaginary part  satisfies According to the equality of plural numbers, it can be obtained that: Eq. (8) can be equivalently transformed into 6 2 A conclusion for Eq. (9) can be reached as follows. ( It can be derived from Eq. (11) and Eq. (12) that According to the theorem of the existence of function zeros, there is at least one real number 0 (0, ) u  + that makes 0 ( ) 0 fu = . Thus, Eq. (10) has one positive real root at minimum. 5) has a pair of pure imaginary roots The transversal conditions are presented below. Thus, the lemma is proved. According to Lemma 4 and Hopf bifurcation theory, the following conclusions can be drawn. 
is the Hopf bifurcation value of A system, suggesting that Hopf bifurcation occurs in A system at the equilibrium point (0, 0, 0) O . Considering that the parameters of B chaotic system The linearization equation of the B chaotic system at the equilibrium point (0, 0, 0) O is expressed as: Where, the value range of system parameters a , b , c , and d is 0 When 0   , suppose i  = (  is an undetermined constant greater than zero) is a pure imaginary root of Eq.(24), so that imaginary part  satisfies Thus, the lemma is proved. According to Lemma 8 and Hopf bifurcation theory, the following conclusions can be drawn. C chaotic system The linearization equation of the (2) when 0   , the equilibrium point (0, is the Hopf bifurcation value of A system, suggesting that Hopf bifurcation occurs in C system at the equilibrium point (0, Considering that the parameters of x y z ceases to follow the original law and move into a state of chaos, as shown in Fig. 11. Conclusions Since time delay is a common phenomenon in dynamic systems, it is necessary to explore the stability and bifurcation of functional differential dynamic systems. The position of time delay plays a significant role in the dynamic equation, with different time delay positions leading to different dynamic behaviors for the system. In this paper, a heterogeneous single-time-delay Lorenz system with different structures is constructed by loading time delays at different positions of the general Lorenz system. Three of the structures are selected to study the Hopf bifurcation and stability. According to the results, there is a single zero equilibrium point in the heterogeneous single-time- Conflicts of Interest The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding Statement The author(s) received no financial support for the research, authorship, and/or publication of this article.
2021-03-29T01:16:03.435Z
2021-03-23T00:00:00.000
{ "year": 2021, "sha1": "a83afbf76d4889f44b81eb12b896d8e74d0b1a03", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a83afbf76d4889f44b81eb12b896d8e74d0b1a03", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
111059634
pes2o/s2orc
v3-fos-license
A Fusion of Sensors Information for Autonomous Driving Control of an Electric Vehicle (EV) The study uses the environment of the road as input variables for the main system to control steering wheel, brake and acceleration pedals. A camera is installed on the roof of the Electric Vehicles (EV) and is used to obtain image information of the road. On the other hand, users or drivers do not have to directly contact with the main system because it will autonomously control the devices by using fuzzy information of the road conditions. A fuzzy information means in the preliminary experiments, reasoning of the various environments will be done by using fuzzy approach. At the end of the study, several existing algorithms for controlling motors and image processing technique could be combined into an algorithm that could be used to move EV without assist from human. Introduction Autonomous driving is increasingly attracting public interest due to various research projects over the past years [1][2][3][4][5][6][7]. Usually, conventional cars are converted with significant effort and many different sensors are placed on the roof. The advance of electro-mobility provides the chance for completely new vehicle concepts. By breaking away from classic approaches, it is possible to consider and integrate autonomous driving into the vehicle architecture with respect to Information Technology (IT) and sensor network systems, energy management and design. These kinds of cars are the upgrade version of the EV. Recently a lot of EVs and related vehicles such as a hybrid car have been developed to solve environment and energy problems caused by the use of an internal combustion engine vehicle. Developing such vehicles for solving the environment and energy problems is a great idea. Currently, many researches publish technical papers in journals, which are related to autonomous EV. In their researches, steering wheel, brake and acceleration pedals are control by using computers [8][9][10]. On the other hand, users and drivers do not have a direct contact with them. A touch panel is installed in the EV and it serves a user Graphical User Interface (GUI) for users and drivers interact with controlling devices. Unfortunately, based on current outcomes more effort should be done for making 1 sure that autonomous EV could move with safety. Although mechanism of mechanical could be used to solve safety and reliability issues of an autonomous EV, the computational approach is also very important. The computational approach for example the algorithm for controlling motor device, the capacity of data transmission device, image processing technique and etc. [11][12][13]. Literature Review The focus of current research towards electric, hybrid electrics, and fuel cell vehicles has been on increasing energy efficiency and reducing emissions. Future vehicles will include electric drive-train components that must be capable of performing conventional anti-lock braking, traction control, and active yaw control safety functions. From the viewpoint of electric and control engineering, Hybrid electric vehicles (HEVs) have evident advantages over conventional internal combustion engine vehicles (ICVs). Firstly, torque generation is very quick and accurate, for both accelerating and decelerating. This should be the essential advantage. In Hybrid electric vehicles, motor and traction control system (TCS) should be integrated into Hybrid Traction Control System (HTCS), since a motor can either accelerate or decelerate the wheel. 
Its performance should be advanced one, if we can fully utilize the fast torque response of motor. Secondly, output torque is easily comprehensible. There exists little uncertainty in driving or braking torque inputted by motor, compared to that of combustion engine or hydraulic brake. In recent years fuzzy logic control techniques have been applied to a wide range of systems. Many electronic control systems in the automotive industry such as automatic transmissions, engine control and traction control systems are currently being pursued. These electronically controlled automotive systems realize superior characteristics through the use of fuzzy logic based control rather than traditional control algorithms. Fuzzy Logic Control is a type of control, which is based on Fuzzy set theory and reasoning. David Elting and Mohammed Fennich in their research told that automotive systems realize superior characteristics through the use of fuzzy logic controllers [14] especially in nonlinear cases. The brake system is a challenging control problem because the vehicle-brake dynamics are highly nonlinear with uncertain time-varying parameters [15]. Fuzzy controllers have the benefit of not requiring a mathematical model of the plant [16], while still being highly robust [17]. Also, certain fuzzy control designs can be implemented that have the ability to learn [18] or to adapt [19] themselves to improve its performance. Because of these features, fuzzy controllers have been successfully implemented in the automotive field for controlling both wheel dynamics and vehicle dynamics. Methodologies In this study, a pure differential drive of the electric vehicle is considered as shown in figure 1 and it is assumed that the posture and the orientation of the vehicle are known at each instant. By referring to the figure, L is the base width of the vehicle and R is the distance between the centres of the wheel. The orthonormal inertial basis is {x, 0, y}, which is called the world coordinate system that is fixed to cartesian workspace. The centre of gravity of the mobile vehicle is the point C, and the basis {M 1 , C, M 2 } is attached to the EV, which is called the local coordinate system. M 1 refers to longitudinal direction, while M 2 is the vertical direction, and OC=xI 1 +yI 2 . The location of EV could be determined by three variables, which are the spatial positions x and y of the reference point C, and its orientation. The vector Ps describing the EV posture and is defined as in equation (1). (1) Where t 0 <t<t n and t 0 is the initial moment of motion and t n is a final moment of motion. For example, in the perspective of brake control of the EV, two degree of freedom is considered, which is velocity V C and angular velocity θ c as shown in figure 2. The vehicle kinematic associated with the Jacobean matrix and the velocity vector is defined in equation (2). ( 2 ) Figure 1. A pure differential drive of the EV model Besides brake and acceleration pedals, a steering wheel also needs to be considered in an autonomous EV. Directions of the EV depend on the current degree of the steering wheel. By extracting a rear wheels as shown in figure 2 as the driving wheel, figure. 3 give the description of the steering wheel. In the autonomously control of the EV, the input images from the camera are used to control the degree of the steering wheel. On the other hand, β(t) and α(t) must be adjusted for the EV to move straight in a straight road, and for the EV to turn left or right in the curvature condition. 
Figure 4 shows an electrical configuration of the proposed EV. Two personal computers will be installed in the EV. The first PC will be employed for the motor control included I/0 board, motor control board. The second PC is for image processing included image process board. These two PC is connected through Ethernet hub or a wireless data transmission device. The on-board computer via mouse and touch panel is installed to operate steering wheel, gear shift, brake and acceleration pedals. On the other hand, the computer must control the vehicle brake pedal without having direct access to the steering mechanism, drive motor, and the transmission of the vehicle. The computer must be able to control the vehicle brake in the same manner as a human being does. The accelerator and the brake use stepping motor and the steering wheel part uses AC servo motor. Results and discussions This paper presents the design and implementation of an autonomous Electric Vehicle (EV) with intelligent driving control to provide the driver assistance as well as unmanned driver. It is an automatic guided vehicle and able to move automatically along the tracks in a given region. For the purpose of prototyping, a buggy car will be re-designed and several sensors are installed. The camera will be installed to the EV as a vision system and connects to the personal computer (PC) for the processing of image information. Image processing algorithm will be employed for the detection of the signage and the centre of gravity (COG) of the road. Another PC will be installed for controlling motors to operate acceleration pedal, brake pedal and steering wheel. Information from several sensors is fused to move the EV intelligently without control by the human. In the experiment, the EV will follow the line in the middle of the road, and the line drawn in both left and right of the road. A CCD camera is used to capture images of the road while EV moves. The fuzzy information of the image captures is transferred to the second PC to control stepping and servo motors. The control mechanisms are stepping motors for brake and acceleration pedals, and a servo motor for the steering wheel. Figure 5, shows the image at time t captured by the CCD camera. The CCD camera is controlled by the first PC, and the image information is sent to the second camera for the main system to make a decision and produces command to the motor control board. For the purpose of reducing a computational cost, the image information contains the centre of gravity of the line and the values of θ. The image information captured by the CCD camera is processed by using image processing technique. The centre of the gravity of the line and the angle of the current location of the EV are sent to the second PC. Here, a predictive algorithm is needed to move the EV to the correct position at the time t+1. A Kalman filter technique is proposed to predict the future location of the EV. To move toward correct location, steering wheel, brake and acceleration pedals will be controlled. Figure 5. The road condition with the middle and side lines. Image information captures by a CCD camera. Conclusions The research of the electric vehicle (EV) in the proposal is developed in two fields. One is the control of the accelerator and the brake, the other is the controls of the steering wheel. Both of them are interfaced with stepping and servo motors with controllers. 
Needing the control of the accelerator and the brake is combined with the control of steering wheel is running at a constant speed. This becomes a necessary, indispensable technology in doing the driving support and actually running on the road. Therefore, we will set a speed control as a basic research to combine the control of the accelerator and the brake with the steering control by an image processing technique. The control of the steering wheel is done by referring to input images captured by using a CCD camera equipped on the EV. In this paper we produce the theory and conceptual of further experiment on Autonomous Driving Control of an electric vehicle EV. Hopefully after the future research and experiment, the results will appear as what we discuss on discussion and expected results.
2019-04-13T13:05:22.759Z
2013-12-20T00:00:00.000
{ "year": 2013, "sha1": "4e7305da41abd0cebce0a99ae7594de487fc7d1f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/53/1/012025", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "16f1eba1e96eaf857535050e3a1c058c490d139a", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
268728461
pes2o/s2orc
v3-fos-license
Efficacy of trigger point injection therapy in noncardiac chest pain: A randomized controlled trial Objectives This study aimed to compare the effects of trigger point injections and stretching exercises in patients with noncardiac chest pain (NCCP) associated with myofascial pain syndrome. Patients and methods This prospective randomized controlled trial included 50 patients with noncardiac chest pain and trigger points in the pectoralis muscles between October 2019 and June 2020. The patients were randomly assigned to receive trigger point injections into the pectoralis muscles and exercise (n=25; 15 males, 10 females; mean age: 42.8±9.2 years; range, 25 to 57 years) or only perform exercise (n=25; 11 males, 14 females; mean age: 41.8±11.2 years; range, 18 to 60 years). The primary outcome was pain intensity at the first month and three months after the first treatment session, measured using the Visual Analog Scale from 0 to 100. The secondary outcome was the Nottingham Health Profile score. Results Treatment with stretching exercises and trigger point injection resulted in significant pain reduction compared to stretching exercises alone, and the reduction was persistent at the three-month follow-up (p<0.001). A between-group comparison showed no significant difference in the Nottingham Health Profile (p=0.522). Complications related to the procedure or severe adverse events attributable to treatment were not reported. Conclusion Trigger point injection combined with stretching exercises is an efficient treatment for noncardiac chest pain related to myofascial pain syndrome compared to exercise treatment alone. Chest pain is an important health concern worldwide.Although chest pain accounts for approximately 10% of noninjury-related visits to the emergency department, less than half of these patients receive a definite diagnosis of cardiac chest pain. [1]The rest are usually discharged without a definitive diagnosis, and their pain is labeled as noncardiac chest pain (NCCP). [2]The prevalence of NCCP may reach 70%.It may present at all levels of medical care.There is an unmet need for diagnostic approaches and therapeutic options in these patients.There are many clinical studies on the diagnosis and classification of NCCP, with lack of information on patient management and treatment. [3,4]Patients with NCCP continue to suffer from pain, which is associated with anxiety, fear of undiagnosed heart disease, loss of working capacity, and hospital readmissions. [5,6]e chest wall contains various bone and soft tissue structures.Therefore, it is difficult to pinpoint the exact cause of the pain.Physicians often try to identify the specific causes of NCCP.Pain is apparent in acute trauma or injuries such as rib fracture or contusion and strains in the pectoral or intercostal muscles. [7]In other cases, identifying the source of NCCP is difficult in the absence of standardized criteria for diagnosis and gold standard diagnostic tests. Myofascial pain syndrome (MPS) is a common cause of chest pain, associated with tender points called trigger points in muscles or surrounding connective tissues, presenting with pain, muscle spasms, limitation of range of motion, sensitivity, and weakness. [8]Symptoms generally occur in parts of the body distant from the trigger point. 
[9]Myofascial pain may worsen by muscle overuse, cold, anxiety, and postural imbalance.3] Inactivation of trigger points represents a challenge in the treatment of MPS.Various physical therapy modalities have been suggested for MPS to inactivate trigger points, including exercise. [14]However, these therapeutic modalities should be compared to determine their priorities and superiorities.In this study, we aimed to evaluate the effectiveness of trigger point injection plus exercise versus exercise alone in patients with NCCP. PATIENTS AND METHODS This prospective randomized controlled trial was conducted between October 2019 and June 2020 in the Ankara Gaziler Physical Medicine and Rehabilitation Training and Research Hospital.Sixty-three NCCP patients were assessed for inclusion in the study.Three patients refused to participate in the study, and five failed to meet the inclusion criteria.Furthermore, five patients were unable to complete the study.Thus, 50 consecutive patients from the outpatient clinic were included in the study.Twenty-five patients (15 males, 10 females; mean age: 42.8±9.2years; range, 25 to 57 years) were in the injection group and 25 patients (11 males, 14 females; mean age: 41.8±11.2years; range, 18 to 60 years) were in the exercise group. Myofascial pain syndrome was diagnosed based on the diagnostic criteria of Simons et al. [10] Among the patients who presented to our cardiology outpatient clinic with a complaint of chest pain, those with at least one trigger point or one taut band on the pectoralis muscles, as confirmed by normal electrocardiography, echocardiography, and treadmill exercise testing, were included.Patients diagnosed with a cardiac disease, fibromyalgia, inf lammatory rheumatic disease, cervical radiculopathy, myelopathy, diabetes mellitus, pulmonary-thyroid-gastrointestinal, and hepatobiliary diseases were excluded.Other exclusion criteria were trigger point injection treatment for a diagnosis of MPS in the past six months, neck and shoulder surgery in the past year, allergy to local anesthesia, bleeding disorders, use of anticoagulant drugs, pregnancy, and breastfeeding. Pain level of all patients was evaluated before treatment and at the first and third months after treatment using the Visual Analog Scale (VAS), and the quality of life was evaluated using the Nottingham Health Profile (NHP).On the VAS, 0 mm represented "no pain at all," whereas 100 mm indicated "worst pain imaginable." Each patient underwent physical examination as well as collection of detailed information on personal history, family history, smoking habits, and medications used.Routine tests and examinations (complete blood count, biochemistry, lipid profile, electrocardiography, echocardiography, and treadmill exercise testing) were performed for patients presenting to our cardiology outpatient clinic with chest pain. 
The patients were randomly divided into groups by the sealed envelope method.The injection group received trigger point injections with local anesthetic (lidocaine 1%) into the pectoralis muscles once a week for three weeks and a standard stretching home exercise program for anterior chest muscles, particularly the pectoralis muscles, for 20 min three times a week for a total of three week.The exercise group was only given the same standard stretching home exercise program provided for the injection group.Patients in the exercise group were asked to return to the outpatient clinic once a week for three weeks to monitor their compliance.Patients in both groups did not receive any analgesics during the treatment period.Injections were applied to the trigger points and taut bands under aseptic conditions (Figure 1).The solution was a mixture of 2.5 mL of saline and 2.5 mL of 2% lidocaine.Taut bands were identified by palpation.The same researcher performed all the procedures. Statistical analysis A total of 60 patients (30 in the exercise group and 30 in the injection group) were needed to detect a VAS within-between interaction effect of 0.25 (Cohen's f), with 0.95 power at the 0.05 significance level (number of measurements: 3; correlation among measurements: 0.3). Statistical analysis was performed using the SPSS version 15.0 software (SPSS Inc., Chicago, IL, USA) and RStudio version 2022.02.1 (Build 461 © 2009-2022 RStudio, Inc., Boston, MA, USA).The normality and multivariate normality of the variables were analyzed using visual and analytical methods (Shapiro-Wilk and Mardia's tests).Descriptive statistics were presented as mean ± standard deviation (SD) and median (min-max) for numerical variables and as frequency (percentage) for categorical variables.Comparisons between groups were made using the Pearson chi-square test and the independent samples t-test or the Mann-Whitney U test for categorical and numerical variables, respectively.A robust rank based method for longitudinal data (F1-LD-F1 design) was used to test the effect of group, time, and group-time (Gxt) interaction effect on NHP and VAS levels. [15]Relative treatment effect values with their 95% confidence intervals were used to make inferences.Relative treatment effect is the probability that a randomly selected subject from the treatment group has an observation value as large/larger than a randomly selected subject from the whole dataset, and overlapping confidence intervals indicate that there is no statistically significant difference in the outcome measure between groups or time points being compared.The nparLD (Nonparametric Analysis of Longitudinal Data in Factorial Experiments) package for R [16] was used to implement the F1-LD-F1 design, and due to small sample size, analysis of variance results were presented.A p-value <0.05 was considered statistically significant. 
RESULTS A f lowchart of the study is shown in Figure 2.Both groups had similar baseline characteristics (Table 1).No adverse effects were reported during the treatment and follow-up.Both treatments were well tolerated.The demographic characteristics are provided in Table 1.Age, sex, body mass index, and duration of pain were similar in both groups.The median duration of pain history was six (range, 2 to 13) months versus six (range, 3 to 12) months in the injection and exercise groups, respectively.The descriptive statistics of the study, outcome measures of the groups at baseline, and followup periods are presented in Table 2.There was a significant improvement in VAS levels at baseline, Week 4, and Week 12 in both groups (p<0.001).The group and Gxt interaction effects on VAS levels were found to be statistically significant (both p<0.001, Figure 3).Time had a statistically significant improvement effect on NHP (p<0.001), and the Gxt interaction effect was also statically significant (p<0.001).However, the group effect had no statistically significant impact on NHP scores (p=0.522, Figure 4). DISCUSSION Although musculoskeletal pain is a common cause of chest pain, it is often overlooked.Patients with NCCP are often underdiagnosed and untreated despite their benign nature.Trigger points in the pectoral muscles can be a source of pain referred to the chest wall and may cause ipsilateral chest pain that radiates down the ulnar side of the arm.It can mimic angina pectoris. [12]Therefore, it is important to consider the diagnosis of NCCP to avoid unnecessary high-risk procedures and apply appropriate treatment. We found that trigger point injection plus exercise was superior to exercise alone in pain reduction in both short-term and long-term follow-up.To the best of our knowledge, this study is the first randomized controlled trial to evaluate the effect of trigger point injection plus exercise versus exercise alone in the treatment of NCCP in patients presenting to the cardiology outpatient clinic.Shin et al. [17] administered an ultrasound-guided trigger point injection into the subscapularis and pectoralis muscles in 19 postmastectomy patients who developed chest pain and achieved successful results, which were similar to our results.9] In a case report, Westrick et al. [20] evaluated and treated a 22-year-old male military athlete with anterior chest pain refractory to traditional physical therapy using dry needling.They reported that trigger point dry needling in suitable hands is effective in treating local chest pain.In our study, trigger point injection was also found to be effective in NCCP. In a case series presented by Vargas-Schaffer et al., [21] trigger point injection was applied for chest pain associated with trigger point in the serratus anterior muscle, and it was seen that all patients had experienced a significant reduction in pain.Their results were also similar to our results.Berg et al. [18] found in a randomized controlled trial that treatment with deep friction massage with heat pack was significantly more efficient than heat pack alone to decrease musculoskeletal chest pain.Healthrelated quality of life scores showed no differences between groups.Similar results were observed in our study as well. 
While inactivation of trigger points represents a challenge in treatment, there are various physical therapy modalities that are used to control and loosen taut bands.include trigger point injection, hotpack, cold application, ultrasound, therapeutic massage, dry needling, stretch and spray technique, biofeedback using electromyography), skin conduction techniques, and transcutaneous electrical nerve stimulation. [14]xercise is also effective in reducing pain.In a study by Navarro-Santana et al., [22] a superior effect of TrP injection (wet needling) was suggested for decreasing pain in cervical muscle TrPs in the short term compared to dry needling.Moreover, Allam [23] concluded that the combined use of acupuncture and trigger point injection with lidocain provided promising results for pain relief in poststernotomy syndrome patients. We found that patients who were treated with exercise together with the injection treatment on the taut band and trigger points in the pectoralis muscles showed significantly greater improvement in pain reduction compared to the group who received exercise alone (p<0.001).This study showed that trigger point injection is an effective, rapid, and safe treatment method in patients with NCCP secondary to MPS.Nottingham Health Profile scores increased in both groups, and there were no between-group differences at baseline or at the three-month follow-up (p=0.522).These results showed that exercise therapy improved quality of life as much as injection therapy. Although exercise therapy is also an effective method, patient compliance and sustainability are low in daily practice.Combining exercise with injection therapy may increase patient compliance.Although trigger point injection has been shown to be an effective treatment method in body parts such as the neck and back in many studies, it is not preferred in the treatment of chest pain, probably due to the risk of pneumothorax. [24]No complications occurred in any of the patients in our study.This shows that trigger point injection is a safe and preferable treatment for NCCP secondary to MPS. The limitations of this study are the small sample size and treatment of patients with trigger points and taut bands in the pectoralis muscle only. In conclusion, this study showed that trigger point injection and exercise treatment significantly reduced chest pain compared to exercise alone.The posttreatment effect persisted for up to three months.Trigger point injection is an effective, fast, and safe treatment method in patients with MPS-associated NCCP.Although exercise therapy is also an effective method, its combination with injection therapy provides better results in patient compliance and sustainability.This study demonstrated promising results that require further research with larger sample sizes. Figure 2 . Figure 2. Flowchart of the study. Figure 1 . Figure 1.Trigger point injection to the pectoralis major muscle. TABLE 2 Study outcome data SD: Standard deviation; VAS: Visual Analog Scale; NHP: Nottingham health profile.Figure 3. VAS levels improvement with time in injection+exercise and exercise alone groups.RTE: Relative treatment effect; VAS: Visual analog scale; CI: Confidence interval. TABLE 1 SD: Standard deviation; BMI: Body mass index; a: Independent Samples t-test; b: Pearson Chi-square test; c: Mann-Whitney U test. Common physical therapy modalities NHP levels improvement with time in injection and exercise alone groups.
2024-03-29T05:13:30.844Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "0b50efb686d34cda7acde4b80fe6052f4dc948cf", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0b50efb686d34cda7acde4b80fe6052f4dc948cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266984
pes2o/s2orc
v3-fos-license
Melanotic Oncocytic Metaplasia of the Nasopharynx: A Report of Three Cases and Review of the Literature Melanotic oncocytic metaplasia of the nasopharynx is a rare condition which is characterized by the presence of usually a small, brown to black colored pigmented lesion around the Eustachian tube opening. Although it is a benign lesion, it may be clinically misdiagnosed as malignant melanoma. Microscopically, melanotic oncocytic metaplasia is a combination of oncocytic metaplasia of the epithelium of the gland and melanin pigmentation in its cytoplasm. In our present study, we report three cases of melanotic oncocytic metaplasia of the nasopharynx. All the three cases occurred in men and were presented as multiple black pigmented lesions around the torus tubarius. Microscopically, mucous glands with diffuse oncocytic metaplasia and numerous black pigments were observed. No cellular atypia was observed. Immunohistochemically, the scattering of S-100 protein-positive, and human melanoma black 45-negative dendritic melanocytes was evident. This is the first report of cases of melanotic oncocytic metaplasia of the nasopharynx in Korea. . Nasoscopic appearance of melanotic oncocytic metaplasia of the nasopharynx. Multiple, dark blue colored mucosal lesions around the bilateral torus tubarius are noted. around the right torus tubarius were noted. Clinician's impression of the lesion was that of melanotic oncocytic metaplasia of the nasopharynx. Biopsy was performed, but the symptoms persisted. The contact with the patient was lost and hence followup after two months was not feasible in the third case. The clinical information about each patient is summarized in the Table 1 below. Pathological findings The histology of all the three cases was similar. Microscopi-cally, the lesions were well circumscribed, but were not encapsulated. In case 1, the surface of the lesion was covered by normal respiratory epithelium. All the three lesions were composed of mucous glands with diffuse oncocytic metaplasia. Oncocytes comprised of abundant eosinophilc granular cytoplasm. Brown pigments were also observed in the cytoplasm of most of the oncocytic cells (Fig. 2). The brown pigments stained positive for Fontana-Masson staining and negative for Berlin blue staining, which were indicative of melanin pigmentation. Mitotic figures or atypia were not observed in the epithelial cells of the gland. Upon immunohistochemical study, dendritic cells in the basal layer of glands were found to be positive for S-100 protein (1 :1,000, Dako, Glostrup, Denmark), but were negative for human melanoma black-45 (HMB-45; 1:200, Dako, Carpinteria, CA, USA) (Fig. 3). Based on the above findings, all the lesions were diagnosed as melanotic oncocytic metaplasia of the nasopharynx. DISCUSSION We have described three cases of melanotic oncocytic metaplasia of the nasopharynx. Till date, only 17 cases of melanotic oncocytic metaplasia of the nasopharynx have been reported in the English literature. [1][2][3][4][5][6][7][8][9][10] Melanotic oncocytic metaplasia of the nasopharynx predominantly occurred in males (16/17), with an average age of 68 years (range, 56 to 80 years). 2,3 All the patients were identified to be of Asian origin. Clinically, the diagnosis was incidental in majority of cases, but some melanotic oncocytic metaplasia of the nasopharynx may occasionally cause symptoms such as otitis media, tinnitus, hoarseness, rhinorrhea, epistaxis, discomfort of the throat, and hemoptysis. 
These symptoms may be due to the obstruction of the Eustachian tube opening. However, some of the symptoms were thought to be unrelated with the lesions. Macroscopically, all the lesions were a few millimeters in size, single or multiple, and brown to black in color. Bilateral lesions were also seen sometimes (6/17). Clinician's impression of the lesion ranged from a malignant tumor, such as melanoma or carcinoma, to a benign melanocytic nevus. All the reported cases pursued a benign clinical course. 3 Till date, there has been no case of disease recurrence or progression in the cases. Simple excision is usually a sufficient treat-ment for melanotic oncocytic metaplasia of the nasopharynx. 6 All our three patients were males, between fifty-one to seventy-two years of age. They complained of headache, mild hearing impairment, hoarseness, and tongue pain. All of them showed multiple black pigmented lesions around the torus tubarius, and one of them had bilateral lesions. After biopsy, symptoms such as mild hearing impairment and hoarseness disappeared in two of the three patients, but it was not clear whether the symptoms were related to the lesions. Moreover, some complaints such as headache and tongue pain remained unchanged and were considered to be unrelated with the lesions. Histologically, melanotic oncocytic metaplasia is characterized in terms of coexistence of oncocytic metaplasia and melanin pigmentation in the same gland. Oncocytic metaplasia is most commonly encountered in certain epithelial organs such as the salivary glands, the lacrimal glands, and the thyroid glands. However, occurrence of oncocytic metaplasia in the upper respiratory tract is an uncommon finding. The origin of the melanin pigment in the oncocytic glands is still unclear. Melanocytes as a melanin source, have been reported to exist in the stroma and epithelium of the nasal cavity, paranasal sinus, and larynx. 11 The melanin pigment in the oncocytic glands may be derived from the adjacent melanocytes through their dendrites. 5 This hypothesis is supported by the immunohistochemical findings of our three cases. There was the presence of numerous S-100-positive, and HMB-45-negative melanocytes with dendritic processes stretching between the epithelial cells of the glands. Dong et al. 12 found Platts bacilli and Cryptococcus neoformans in the nasopharyngeal secretions of patients with melanotic oncocytic metaplasia. As these two types of bacteria can produce melanin, they suggested that the nasopharynx bacteria produced melanin, which was phagocytosed by oncocytes, leading to the formation of melanin containing oncocytes. The exact pathogenesis of melanotic oncocytic metaplasia of the nasopharynx is still unknown. However, Sakaki et al. 8 postulated that this lesion could be related to the age, smoking, ethnic, and neuro-immuno-endocrine network of the subject. Moreover, in some cases, stimulation by factors such as smoking may play a role in progression of melanotic oncocytic metaplasia. Neuropeptides may cause the oncocytic metaplasia or melanocyte to proliferate and produce melanin through neuroimmuno-endocrine network. Two of our three patients had a smoking history of 40 cigarettes per day from the past 50 years and 20 cigarettes per day from the past 40 years, respectively. However, smoking history in the third patient was not studied. Therefore, in further studies, it is important to clarify the pathogenesis and histogenesis of melanotic oncocytic metaplasia of the nasopharynx. 
In summary, we have described three cases of melanotic oncocytic metaplasia of the nasopharynx with review of relevant literature. Although this is a rare clinical and pathological entity, it is not so difficult for the pathologists and clinicians to make the diagnosis if they have had some experience in handling such cases. The recognition of this lesion is clinically important, since it might be misdiagnosed as a malignancy, such as malignant melanoma by the unwary.
2016-05-18T14:44:13.258Z
2012-04-01T00:00:00.000
{ "year": 2012, "sha1": "f5b341682a8e888dd49c6f31087160003be1a756", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4132/koreanjpathol.2012.46.2.201", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5b341682a8e888dd49c6f31087160003be1a756", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247763777
pes2o/s2orc
v3-fos-license
Propionic acid regulates immune tolerant properties in B Cells Abstract Interleukin 10 (IL‐10)‐producing B cells (B10 cells) are a canonical cell fraction for regulating other activities of immune cells. Posttranscriptional modification of IL‐10 in B10 cells is not yet fully understood. Short‐chain fatty acids play an important role to regulate the functions of immune cells. This study aims to clarify the role of propionic acid (PA), a short‐chain fatty acid, in regulating the expression of IL‐10 in B10 cells. Blood samples were collected from patients with food allergy (FA) and healthy subjects. Serum and cellular components were prepared with the samples, and analysed by enzyme‐linked immunosorbent assay and flow cytometry, respectively. The results showed that serum PA levels were lower in FA patients. PA concentrations were negatively correlated with serum cytokine Th2 concentrations, specific IgE concentrations in serum and skin prick test results. The peripheral frequency of B10 cells and the production of IL‐10 in B cells were also associated with serum PA concentrations. Activation of B cells by CpG induced the production of IL‐10 and tristetretrprolin (TTP), in which TTP caused the spontaneous decay of IL‐10 mRNA. PA was necessary to stabilize the IL‐10 mRNA in B cells by inducing the production of granzyme B, which resulted in the degradation of the IL‐10 mRNA. Administration of PA attenuated FA response in mice by maintaining homeostasis of B10 cells. In conclusion, PA is needed to stabilize the expression of IL‐10 in B10 cells. PA administration can mitigate experimental FA by maintaining B10 cell functions. | INTRODUC TI ON Immune tolerance is a condition whereby the immune system does It is recognized that short-chain fatty acids (SCFAs), such as acidic acid (AA), butyric acid (BA) and propionic acid (PA), plays an important part in maintaining homeostasis in the body. 4 The intrinsic sources of SCFAs are mainly from intestinal bacterial metabolites. SCFA have immune regulatory functions, such as inhibiting histone acetylase (HDAC), activating G-coupled receptors and serving as energetic substrates. 5 BA is a pan histone deacetylase inhibitor, and protects against cancer and inflammation. 6 AA and PA also have inhibitory effects on HDACs and involve in immune regulation. 5 It is reported that the PA administration effectively prolongs full-thickness skin grafts. 7 GRP40 to GRP43 are the receptors of SCFAs. 5 Interleukin-10 derived from immunoregulatory cells is known to play an essential role in immune regulation. However, IL-10 therapies were not used in the clinic yet. 8 Our previous work showed that the spontaneous IL-10 mRNA decay in B10 cells. 9 Other investigators also pointed to this phenomenon. 10,11 Insufficient supply of SCFA is associated with the pathogenesis of immune disorders. 12 We hypothesize that the short of SCFAs may involve in the instability of IL-10 in B10 cells. Thus, we an- subjects were also recruited. The experimental procedures were approved by the Human Ethical Committee at Shenzhen University and Zhengzhou University. Written informed consent was received from every human subject. The demographic data are presented in Table 1. | SPT Skin pitting was performed for all human subjects. Food allergens (Cow's milk, egg white, egg yolk, wheat flour, soybean, carrot, potato and peanut) were purchased from Allergopharma (Germany) and used in SPT. Saline and histamine (10 mg/ml) were used as negative and positive controls in SPT, respectively. 
The results of SPT were observed and recorded 15 min later. The SPT-positive criterion was set if the mean diameter of wheal was ≥3 mm larger than the negative control. The largest wheal size was recorded for each patient. The SPT results are presented in Table S1. | Assessment of serum-specific IgE The serum-specific IgE (sIgE) levels were determined by ImmunoCap (for human; Allergopharma) or ELISA (for mice) with commercial K E Y W O R D S food allergy, immune regulation, intestine, propionic acid, short-chain fatty acid reagent kits following the manufacturer's instructions. The positive sIgE criterion for human samples was 0.35 kU/L. | ELISA (Enzyme-linked immunosorbent assay) Cytokine levels in the serum or supernatant were determined by ELISA with commercial reagent kits following the manufacturer's instructions. | Preparation of peripheral blood mononuclear cells (PBMCs) Blood samples were collected from each human subject through the ulnar vein puncture. PBMCs were isolated from blood samples by Percoll gradient density centrifugation. Serum samples were collected and stored at −80°C until use. | Cell culture Cells were cultured in RPMI1640 medium supplemented with 10% foetal calf serum, 0.1 mg/ml streptomycin, 100 U/ml penicillin, and 2 mM glutamine. Cell viability was 98%-100% as assessed by the Trypan blue exclusion assay. | Flow cytometry (FACS) In the surface staining, cells were stained with fluorescence labelled antibodies (diluted to 1 ug/ml) or isotype IgG for 30 min at 4°C. Cells were washed with phosphate-buffered saline (PBS) 3 times, and analysed with a FACS device (BD FACSCanto II). In the intracellular staining, cells were fixed with 1% paraformaldehyde (containing 0.05% Triton x-100) for 1 h, washed with PBS 3 times. The cells were then processed with the same procedures of surface staining. The data were analysed with software package Flowjo (TreeStar Inc.). The data obtained from isotype IgG staining were used as gating references. In FACS, CD45 + cells were gated first; followed by gating and harvesting CD19 + cells (used as B cells), or CD19 + CD5 + cells (used as B10 cells). The post-FACS check results showed that the purity of isolated B cells and B10 cells was 96%-98%. | FA mouse model development BALB/c mice (6-8-week-old) were purchased from the Guangzhou Experimental Animal Center, and maintained in a specific pathogenfree facility. The mice were allowed to access food and water freely. The animal experimental procedures were approved by the Animal Ethical Committee at Shenzhen University. To develop the FA model, mice were sensitized by subcutaneous injection with ovalbumin (OVA, 100 µg/mouse in 0.1 ml Alum) on the back skin on day 0 and day 3, respectively. The immunization was boosted by gavage-feeding mice with OVA (1 mg/mouse in 0.3 ml saline) daily from day 9 to day 13. Mice were oral-challenged with OVA (5 mg/mouse in 0.3 ml saline) on day 15 (or started the treatment with PA). Core temperature was recorded 30 min after the challenge with a rectal thermometer. Diarrhoea was recorded in the period of 2 h after the challenge. The truncated blood was collected. The serum was separated from blood samples, and stored at −80°C until use. After the sacrifice, a 15-cm jejunum segment was excised, and rinsed with 3 ml saline in a syringe; the fluid was recovered, centrifuged at 10,000 g for 10 min at 4°C; supernatant was collected, and used as gut lavage fluid (GLF). 
The jejunal segments were then cut into small pieces, incubated with collagenase IV (1 mg/ml) at 37°C for 30 min with mild agitation. Single cells were collected by filtering through a cell strainer (100 µm first, then 40 µm), and used for further experiments. | Immunoprecipitation (IP) Proteins were extracted from cells collected from relevant experiments, and precleared by incubating with protein G agarose beads for 2 h, followed by centrifugation at 10,000 g for 10 min to remove the beads. Supernatant was collected and incubated with an anti-TTP Ab (1 µg/ml) overnight. Immune complexes were precipitated by incubating with protein G agarose beads for 2 h. The beads were collected by centrifugation at 10,000 g for 10 min. Proteins on the beads were eluted with an eluting buffer, and analysed by Western blotting. All the IP procedures were performed at 4°C. | Mass spectrometry (MS) Protein samples precipitated by anti-TTP Ab were sent to the MS cen- | RNA interference (RNAi) The expression of GPR41 and GPR43 in B cells was knocked down by RNAi with commercial shrine reagent kits following the manufacturer's instructions. Effects of RNAi were checked 48 h after the transfection by Western blotting. | Statistics The data are presented as mean ± SEM or median (IQR). The difference between two groups was determined by Student t-test or ANOVA followed by Dunnett's test or Bonferroni test. The Spearman correlation coefficient test was performed to determine correlation between two group data. p < 0.05 was set as the significant criterion. | Serum PA levels are negatively correlated with serum Th2 cytokine levels in AF patients Blood samples were collected from 40 FA patients and 40 healthy controls (HC) subjects. The serum was isolated from the samples and was analysed by HPLC and ELISA. In comparison with HC samples, serum butyric acid (BA), acetic acid (AA) and propionic acid (PA) concentrations were lower in FA samples than in HC samples ( Figure 1A-C). Cytokine analysis showed greater Th2 cytokine levels, a dominant Th2 profile, in FA samples than in HC samples ( Figure 1D-F). A negative correlation was found between serum Th2 cytokine levels, specific IgE (sIgE) levels, SPT size and the PA levels ( Figure 1G), but not either AA or BA ( Figure 1H-I). The results implicate a link between the lower serum PA levels and the Th2 polarization in FA patients. | Serum PA concentrations are associated with the peripheral frequency of B10 cells and the production of IL-10 With CD1d + CD5 + as B10 cell markers, 2 the frequency of B10 cells was found to be lower in the FA group than in the HC group ( Figure 2A-D). Over 80% of CD1d + CD5 + B cells also expressed IL-10, which can be called B10 cells ( Figure 2E,F). While B10 cells of HC subjects and FA patients expressed IL10 mRNA almost equally ( Figure 2G), the production of IL-10 was lower in the B10 cells sampled from the FA group ( Figure 2H). Positive correlation was detected between serum PA levels and the peripheral frequency of B10 cells (Figure 2I,J). The findings suggest that serum PA may be associated with homeostasis in B10 peripheral cells. | PA regulates the expression of IL-10 in B10 cells The data of Figure 1 and Figure 2 suggest that the PA may involve in the regulation of IL-10 expression in B10 cells. To test this, B cells were isolated from blood samples of 10 HC subjects, exposed to Table S2 and Table S3 The data of violin plots and boxplots are presented as median (IQR). 
Each bubble in violin plots and boxplots presents data obtained from one sample. ***p Figure 3A). Tristetraproline (TTP) has been known to destabilize IL-10. 9, 11 We found that the TTP levels in B cells were increased upon exposing to CpG; such an effect was counteracted by the presence of PA. Exposure to PA alone did not alter the TTP levels in B cells ( Figure 3B-D). The results suggest that, in line with previous reports, 9,11 TTP is also responsible for the IL-10 mRNA decay in this experimental setting, and PA can counteract with the effects of TTP. Additionally, by immunoprecipitation assay with anti-TTP antibody as a bait, we observed that TTP protein (extracted from the B cells) bound IL-10 mRNA ( Figure 3E). The results indicate that GPR43 mediates the PA effects on stabilizing IL-10 mRNA in B cells. | PA induces granzyme B (GrB) expression to inhibit TTP in B cells To better understand the mechanism by which TTP regulates the stability of IL-10 mRNA in B cells, B cells were treated with CpG and PA in culture. Protein extracts were prepared with the B cells, and immunoprecipitated (IP) with an anti-TTP antibody as a bait. The IP products were analysed by mass spectrometry (MS). MS results showed that it was GrB ( Figure S1) to form a complex with TTP in B cells ( Figure 5A). Guided by the MS results, the TTP/GrB complex in the IP products was verified by Western blotting ( Figure 5B). We then analysed RNA and protein extracts of B cell by RT-qPCR and Western blotting. The results showed that after exposure to PA in the culture, the expression of GrB was increased in B cells, which did not occur in those exposed to CpG alone ( Figure 5C,D). Because GrB is a protease, the results suggest that the binding of GrB and TTP can lead to degradation of TTP. We indeed colocalized TTP and ubiquitin in the complex of GrB ( Figure 5E). Treating GrB-deficient or GPR43-deficient B cells with PA did not suppress the CpG-increased TTP ( Figure 5F-G). The results demonstrate that PA induces GrB expression to suppress TTP in B cells, and thus, to contribute to IL-10 mRNA stabilization ( Figure S2). | Administration of PA attenuates FA in mice by maintaining the homeostasis of B10 cells Then, we did an in vivo study to look at the role of PA in maintaining homeostasis in B10 cells. An FA mouse model was developed ( Figure S3). Upon challenging with a specific antigen, FA mice showed the FA response, including diarrhoea ( Figure 6A), core temperature drop ( Figure 6B), high serum-specific IgE levels ( Figure 6C), Th2 polarization and high levels of allergic mediators in the gut lavage fluid (GLF) (Figure 6D-H). IL-10 levels were lower and IFNγ levels were not altered ( Figure 6I,J). FA mice also showed lower serum PA levels, but not AA or BA levels ( Figure 6K-M), lower frequency of B10 cells in the intestinal tissues ( Figure 6N,O). Administration of PA (Indole-3-PA) for one week markedly attenuated the FA response, increased B10 cells in the intestine and increased GLF IL-10 levels. The therapeutic effects of PA were abolished by inhibiting IL-10 with AS101 ( Figure 6). The results show that PA plays an important role in the maintenance of B10 cell homeostasis in the intestine. Both GPR41 and GPR43 can be recognized by the PA, AA and BA. 20 GPR43 is not only expressed by epithelial cells, but also expressed by immune cells. 21 This data show that GPR43, but not GPR41, is responsible for mediating the role of PA in stabilizing IL-10 stabilization in B cells. 
The underlying mechanism is that PA ligates GPR43 to induce GrB expression in B cells, of which the signal transduction pathway still needs to be further investigated. GrB forms a complex with TTP to induce TTP degradation, and thus, contribute to the IL-10 stabilization. With an FA mouse model, we verified the role of PA in stabi- There are many phenotypes of B cells have been recognized. These B cell subsets have various immune regulatory activities. 3 Whether PA also regulate the properties of these B cell subsets is of interest, and can be investigated in the future. In summary, current data show that serum PA levels are negatively associated with Th2 polarization. PA induces GrB expression in B cells, which suppresses TTP and stabilizes IL-10 expression. DATA AVA I L A B I L I T Y S TAT E M E N T All the data are included in this paper and the online supplementary materials.
2022-03-29T06:22:59.671Z
2022-03-27T00:00:00.000
{ "year": 2022, "sha1": "6f036f66c3c429b1a1603fa0dcd855e7804d938c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "8160887580aabbd9271fd33d4aa86d1125253cab", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250491711
pes2o/s2orc
v3-fos-license
Prospective surveillance study to detect antimalarial drug resistance, gene deletions of diagnostic relevance and genetic diversity of Plasmodium falciparum in Mozambique: protocol Introduction Genomic data constitute a valuable adjunct to routine surveillance that can guide programmatic decisions to reduce the burden of infectious diseases. However, genomic capacities remain low in Africa. This study aims to operationalise a functional malaria molecular surveillance system in Mozambique for guiding malaria control and elimination. Methods and analyses This prospective surveillance study seeks to generate Plasmodium falciparum genetic data to (1) monitor molecular markers of drug resistance and deletions in rapid diagnostic test targets; (2) characterise transmission sources in low transmission settings and (3) quantify transmission levels and the effectiveness of antimalarial interventions. The study will take place across 19 districts in nine provinces (Maputo city, Maputo, Gaza, Inhambane, Niassa, Manica, Nampula, Zambézia and Sofala) which span a range of transmission strata, geographies and malaria intervention types. Dried blood spot samples and rapid diagnostic tests will be collected across the study districts in 2022 and 2023 through a combination of dense (all malaria clinical cases) and targeted (a selection of malaria clinical cases) sampling. Pregnant women attending their first antenatal care visit will also be included to assess their value for molecular surveillance. We will use a multiplex amplicon-based next-generation sequencing approach targeting informative single nucleotide polymorphisms, gene deletions and microhaplotypes. Genetic data will be incorporated into epidemiological and transmission models to identify the most informative relationship between genetic features, sources of malaria transmission and programmatic effectiveness of new malaria interventions. Strategic genomic information will be ultimately integrated into the national malaria information and surveillance system to improve the use of the genetic information for programmatic decision-making. Ethics and dissemination The protocol was reviewed and approved by the institutional (CISM) and national ethics committees of Mozambique (Comité Nacional de Bioética para Saúde) and Spain (Hospital Clinic of Barcelona). Project results will be presented to all stakeholders and published in open-access journals. Trial registration number NCT05306067. AbstrACt Introduction Genomic data constitute a valuable adjunct to routine surveillance that can guide programmatic decisions to reduce the burden of infectious diseases. However, genomic capacities remain low in Africa. This study aims to operationalise a functional malaria molecular surveillance system in Mozambique for guiding malaria control and elimination. Methods and analyses This prospective surveillance study seeks to generate Plasmodium falciparum genetic data to (1) monitor molecular markers of drug resistance and deletions in rapid diagnostic test targets; (2) characterise transmission sources in low transmission settings and (3) quantify transmission levels and the effectiveness of antimalarial interventions. The study will take place across 19 districts in nine provinces (Maputo city, Maputo, Gaza, Inhambane, Niassa, Manica, Nampula, Zambézia and Sofala) which span a range of transmission strata, geographies and malaria intervention types. 
Dried blood spot samples and rapid diagnostic tests will be collected across the study districts in 2022 and 2023 through a combination of dense (all malaria clinical cases) and targeted (a selection of malaria clinical cases) sampling. Pregnant women attending their first antenatal care visit will also be included to assess their value for molecular surveillance. We will use a multiplex amplicon-based next-generation sequencing approach targeting informative single nucleotide polymorphisms, gene deletions and microhaplotypes. Genetic data will be incorporated into epidemiological and transmission models to identify the most informative relationship between genetic features, sources of malaria transmission and programmatic effectiveness of new malaria interventions. Strategic genomic information will be ultimately integrated into the national malaria information and surveillance system to improve the use of the genetic information for programmatic decision-making. Ethics and dissemination The protocol was reviewed and approved by the institutional (CISM) and national ethics committees of Mozambique (Comité Nacional de Bioética para Saúde) and Spain (Hospital Clinic of Barcelona). Project results will be presented to all stakeholders and published in open-access journals. trial registration number NCT05306067. IntroduCtIon Pathogen genomics has the potential to transform the surveillance, prevention and control landscape of infectious diseases. The rapid innovation in sequencing technologies has led to the development of robust strEngths And lIMItAtIons of thIs study ⇒ Next-generation sequencing will be performed in country through the establishment of technical and computational infrastructure as well as analytical tools. ⇒ The project builds from recent elimination experiences in southern Mozambique and uses a biorepository of already collected Plasmodium falciparum samples to select multiallelic short-range haplotypes (microhaplotypes) that increase the power of biallelic loci for phase inference in polygenomic infections. ⇒ A joint epidemiological-genetic analysis will enable better predictions of the operational efficacy of new interventions. ⇒ We will assess the value of a new surveillance system at antenatal visits to improve the programmatic performance of malaria control and elimination activities. ⇒ More evidence on the association between malaria transmission intensity and genetic data is required for the use of malaria molecular surveillance data to assess the effectiveness of malaria interventions. Open access next-generation sequencing equipment with the ability for high pathogen resolution at increasingly affordable prices. This development has subsequently facilitated the incorporation of pathogen genomics in disease surveillance systems in high-income countries, allowing for targeted and effective control of disease threats through the timely and in-depth pathogen characterisation. 1 Genomics-based surveillance is therefore becoming an integral strategy towards control and elimination of diseases such as COVID-19, tuberculosis, malaria, HIV and food-borne pathogens, among others. 2 The strategic use of genetic variation in Plasmodium falciparum can boost the capacity of malaria control and elimination programmes to deploy the most efficient interventions. 
3 Molecular tools and use cases for decision making are currently being considered by the WHO which, through a technical consultation on the role of parasite and anopheline genetics in malaria surveillance, 4 identified different levels of action based on evidences available. Genetic data can flag the emergence of mutations conferring resistance to antimalarials (ie, artemisinins) 5 or deletions that affect rapid diagnostic test (RDT) sensitivity (ie, P. falciparum histidine-rich protein 2 [pfhrp2]). [6][7][8] Genomic scans for selection 9 can identify other parasite adaptations mediated by single nucleotide polymorphisms (SNPs) and structural variations (gene copy number) 10 that may require a programmatic response. Parasite-relatedness metrics such as identity by descent (IBD) 11 can be used to characterise the key drives of ongoing transmission to identify foci 12 13 and to discriminate between indigenous and imported cases in areas approaching elimination. [14][15][16] Bottlenecks in parasite population driven by control and elimination efforts have been shown to reduce P. falciparum genetic diversity and increase similarity due to inbreeding and recent common ancestry. 17 These evidences provide the basis for modelling efforts to recapitulate features of malaria transmission from genetic data and inform about the effectiveness of antimalarial interventions. [18][19][20][21][22][23] However, further evidence is needed to demonstrate the feasibility and appropriateness of using genetic data as a proxy for transmission intensity and define the conditions under which that feasibility applies. Moreover, standardised approaches for detecting resistance through molecular markers are lacking, and variation in sample type, collection, storage, DNA extraction, marker detection and analysis of results can undermine the comparability of findings, as well as the sensitivity and specificity of methods used. Adequate genotyping methods, sampling frameworks, analytical pipelines and demonstration studies are still required across a range of malaria intensities, programmatic environments and use scenarios. Strategic P. falciparum genetic information can be integrated into innovative cost-efficient surveillance approaches, such as those targeting pregnant women attending antenatal care (ANC) clinics. 24 Women at ANC are a generally healthy, easy-access population, contributing valuable data for infectious disease surveillance (ie, HIV 25 and syphilis 26 ) and wider health metrics at the community level, including a proxy of the malaria burden in the community. [27][28][29][30][31][32] Moreover, ANC-level malaria surveillance can provide a routine measure of the malaria burden in pregnancy, which countries lack, while potentially improving pregnancy outcomes by treating infections at first trimester. Women attending ANC also provide an attractive sampling population for measures of exposure to malaria beyond simply presence or absence of parasite infection. In particular, in addition to measuring complexity of infection or parasite flow-rates between populations, molecular analysis of P. falciparum samples collected from pregnant women may provide a means for the identification of adaptations developed by the parasite to control strategies, such as antimalarial resistance and deletions of antigens targeted by RDT that can compromise diagnosis, treatment and prevention. 
Despite the potential benefits and the greater need to control the high burden of infectious diseases, genomic surveillance capacity remains low for many public health programmes in Africa. 2 In order to reduce inequities in the access to sequencing technologies, this project aims to promote capacities in Mozambique for operationalising a functional malaria molecular surveillance (MMS) system for decision-making. 4 Mozambique is among the 10 countries with the highest burden of malaria worldwide, with an estimated 10.8 million cases in 2020. 33 However, malaria transmission is very heterogeneous in the country, with a high burden in the north and very low transmission in the south. Therefore, the project aims to address National Malaria Control Programme (NMCP) programmatic needs for elimination initiatives in southern Mozambique and burden reduction in the north (figure 1). MEthods And AnAlysIs study design This is a prospective genomic surveillance study of P. falciparum samples to be collected between 2022 and 2023 from a variety of transmission intensities and geographies in Mozambique to inform three use cases: appropriate malaria diagnostics and treatment; characterising transmission sources in low transmission settings and identifying intervention mixes with optimal effectiveness to reduce burden in moderate-to-high transmission areas. To achieve this, three different sampling approaches will be performed. First, all malaria cases will be sampled throughout the year in two low transmission districts of southern Mozambique currently targeted by reactive malaria surveillance activities (dense sampling). Second, a targeted approach will aim to collect a predefined number of samples at selected health facilities in the country. In low transmission settings, sampling will be conducted throughout the year, while two surveys will be conducted in medium-to-high transmission settings: one during the rainy and a second one during the dry season (which extend from November to April and May to October, Open access respectively). During the high transmission (rainy) season, an LDH-based RDT will be added to the standard routine HRP2-based diagnostics to identify potential false negative results due to pfhrp2/3 deletions among clinical cases. 34 And third, ANC sampling of pregnant women at first attendance will be conducted throughout the year at selected health facilities in the country. The overarching sampling strategy for the study will however remain flexible and iterative, informed by sample analysis as the study progresses, and in view of future sampling and research activities being conducted by the Ministry of Health, National Institute of Health and other stakeholders in Mozambique, to avoid sampling overlap and ensure a diversity of sampled locations. 
The project will also leverage from clinical trials and surveillance activities being conducted in Mozambique between 2021 and 2024, namely the Malaria Indicator Survey (2022-2023) in southern Mozambique; the therapeutic efficacy survey (2022) in sentinel sites in the country (Montepuez in Cabo Delgado, Moatize in Tete, Dondo in Sofala, Mopeia in Zambézia and Massinga in Inhambane); 35 36 reactive surveillance activities in Magude and Matutuine (Maputo Province; 2022-2023); a phase III cluster-randomised, open-label, clinical trial in 2022 to study the safety and efficacy of ivermectin mass drug administration to reduce malaria transmission in Mopeia District (Zamb'ézia Province); a large-scale implementation development project aiming at maximising the delivery and uptake of perennial malaria chemoprevention (formerly intermittent preventive treatment in infancy) in Massinga District (Inhambane Province; 2022-2024); a hybrid effectiveness-implementation study to evaluate the feasibility and effectiveness of seasonal malaria chemoprevention with sulfadoxine-pyrimethamine (SP) and amodiaquine inNampula Province (2022) and a programmatic delivery of a population-based mass drug administration with dihydroartemisinin-piperaquine in Manjacaze district (Gaza Province; 2022-2023). study settings and participants Nine provinces were identified through consultation with the NMCP for inclusion in the study: Maputo City, Maputo, Gaza, Inhambane, Niassa, Manica, Nampula, Zamb'ézia and Sofala. Selection of study sites will be stratified by transmission intensity into two major strata: (a) low transmission (Maputo city and Maputo Province, where individual case notification is being implemented to reach interruption of transmission) and (b) medium-to-high transmission areas (Gaza, Inhambane, Niassa, Manica, Nampula, Zamb'ézia and Sofala provinces, targeted by burden-reducing strategies). Overall, a total of 19 districts will be included, which provide a diverse range of epidemiological settings (see table 1 and figure 2). Dense sampling will be conducted in the low transmission districts of Magude and Matutuine (Maputo Province), where all the individuals of any age (>6 months old) with clinical symptoms of malaria (defined as axillary temperature ≥37.5°C or history of fever in the preceding 24 hours) and a parasitologically confirmed malaria diagnosis via RDT or microscopy (table 2) will be invited to donate their RDT for molecular analysis (dense sampling). Targeted sampling will be conducted at selected health facilities in the low transmission districts of Boane and Manhiça (Maputo Province ), and KaMavota, KaMaxaqueni and Nhamankulu Districts (Maputo City), where (C) long-term action. Arrows in colour at the right express the research required for action in the medium-term and long-term (grey, not essential for action; green, immediate evidence; yellow, medium-term evidence). ANC, antenatal care clinics; IPT, intermittent preventive treatment; MDA, mass drug administration; rfMDA, reactive focal MDA; SMC, seasonal malaria chemoprevention. Open access a drop of blood will be collected onto filter paper from consenting individuals of any age (>6 months old) with confirmed clinical malaria. In medium-to-high transmission areas, targeted sampling will focus on children aged 2-10 years of age attending selected health facilities with clinical symptoms of malaria and a parasitologically confirmed malaria diagnosis via RDT (table 2). Ten health facilities will be targeted in each district. 
Pregnant women attending their first antenatal care visit (any trimester) will be invited to participate both in low (Maputo Province) and high transmission provinces (Inhambane, Gaza, Nampula, Niassa, Manica, Sofala and Zambézia; table 1), irrespectively of malaria clinical symptoms. Enrolment of participants Dense sampling in Magude and Matutuine districts will be coordinated with district malaria focal points, community health workers (CHW), malaria volunteers (who provide a link between the CHW and the health facility, and assist the CHW in the follow-up of cases and administration of medication) and health facilities. All P. falciparum positive RDTs (i.e., SD Bioline Malaria Ag Pf, 05FK50, Abbott) will be stored for molecular analysis. RDTs of P. falciparum-confirmed household contacts will also be collected to estimate the rate of within-household transmission. Targeted sampling through health facilitybased surveys (HFS) in low and medium-to-high transmission settings will be carried out by one team comprised of one maternal and child health nurse, a laboratory technician or a medical technician. The number of people to be screened in each health facility and the duration of recruitment to achieve the sample size will be dependent on the RDT-positivity rate among people meeting the eligibility criteria. A second test including a non-HRP2 line (StandardQ Malaria Pf/Pan Ag Test, SD Biosensor) will be carried out in HFS during the rainy season and discrepant results suggestive of pfhrp2/3 deletions will be recorded and further analysed to confirm the deletion. Nurses at the ANC clinics will be in charge of the recruitment of pregnant women at their first visit. Pregnant women will be tested for malaria using a routine RDT and the result will be recorded in a standard questionnaire, together with routine ANC tests. Each enrolled individual will be assigned with a unique identification (UID) number and a barcode. Open access data and sample collection Field workers and nurses will be trained to ask for informed consent (online supplemental appendix 1-4), perform a simple questionnaire (online supplemental appendix 5-8) and collect biological samples for molecular analysis. The survey questionnaire will be administered to all study participants or children's parents/guardians meeting the inclusion criteria and will include inclusion criteria check, characteristics of the participant and malaria-related information. For pregnant participants, data will be collected on parity and gestational age at first ANC visit, as well as information related with malaria and use of preventive measures. A telephone contact number will be collected from pregnant women in low transmission settings in order to locate their residence for spatial analysis. In areas targeted by reactive surveillance activities (Magude and Matutuine in Maputo Province), travels during the previous 30 days to the case notification will be registered, including destinations and dates. A Site Coordinator will be responsible for supervising the work of field workers, nurses and the data entry clerk, and for reviewing and comparing questionnaires and samples for correct matching, completeness and accuracy. Nurses will be trained to collect blood by finger pricking (online supplemental table 1) following standard (online supplemental appendix 9) and COVID-19 safety procedures (online supplemental appendix 10). For each participant, either the P. 
falciparum-positive RDT used for routine malaria diagnosis (dense sampling) or four blood spots onto two filter papers (Whatman Grade CF 12; targeted sampling) will be collected. Specimens will be labelled anonymously (patient UID, study health facility and date), dried for 24 hours and kept in individual plastic bags with desiccants at 4°C. Every 2-6 weeks, the completed questionnaires, informed consents and samples will be sent to the data entry clerk at CISM through a local transportation agency. Informed consents will be received by study investigators. A data manager will be responsible for the receipt of the informed consents and double data entry at CISM, and a laboratory technician will be responsible for receiving the samples and store them at −20°C until analysis. Part of the dried blood spot will be stored in RNA-preserving solution. All samples will be kept in the CISM laboratory for a period of approximately 15 years. For quality control purposes, up to 5% of the samples will be analysed at UCSF (San Open access Francisco, USA) and/or ISGlobal (Barcelona, Spain). In order to identify errors in data or sample collections and take necessary corrective actions, a standardised checklist (online supplemental appendix 11) will be filled in by the monitoring officer during biweekly monitoring visits. Molecular analyses Informative SNPs (including-but not restricted to-markers of resistance to artemisinin [pfkelch13], 37 SP [pfdhfr, pfdhps], 38 or chloroquine (pfcrt) 39 ), microhaplotypes 40 and pfhrp2 and pfhrp3 regions 6-8 will be targeted using multiplexed primers on flanking sequences, with a range of amplicon size of ~225-275 bp (covered by a paired end read). Targeted amplicons obtained by PCR on genomic DNA using Illumina-specific adaptors and sample-specific barcode will be pooled to create a single product library, which will be sequenced (paired-end 150 bp) on a Miseq Illumina sequencer in the country or higher performing equipment when available. Amplicon representation and SNP and haplotype calling will be assessed in demultiplexed and trimmed sequencing reads after filtering sequencing errors. The designed panel will be validated using mixtures of P. falciparum lines to determine precision and repeatability. Genotyping methods, including number of SNPs and microhaplotypes to be characterised, distribution across the parasite's chromosomes, the proportion of putatively neutral versus nonneutral polymorphisms, pooling strategy and criteria for validating sequencing data (ie, minimum sequencing depth and maximum error rate) will be developed as part of this project. Samples will also be used for other molecular analysis of programmatic interest, such as the detection of Plasmodium species, parasite antigens, serological markers of parasite exposure (antibodies) and parasite RNA-based markers (ie, gametocytes). A quality control programme based on the sequencing of an artificially created set of samples (ie, mixtures of known laboratory controls at specific proportions and densities) will be processed at predefined times to guarantee the quality of the processes during the life of the project. data management Data will be collected using paper (targeted sampling) and password-protected electronic devices (dense sampling). Open access Data collected using paper will be double entered into the study database using RedCap. 41 Automatic quality checks will be performed to ensure data completeness. 
Confidentiality and security will be ensured through automatic encryption of sensitive data, storage in passwordprotected computers and locked locations, and data sharing using password-protected, encrypted files. Prior to analysis, data will be deidentified with the exception of geolocation codes, which are necessary for specific analyses. The study will also use data available from the NMCP, including intervention coverage, historical prevalence surveys, travel history or other mobility assessments and entomological data. Sequences generated through the analysis of samples will be integrated into a curated catalogue of genomic data together with relevant anonymised clinical and epidemiological information and will be made publicly available in public repositories such as the European Nucleotide Archive (ENA) and MalariaGen Resource Centre. In order to facilitate data accessibility and use, and to obtain a meaningful integration with other sources of surveillance data, genetic information will be incorporated into the DHIS2-based Integrated malaria information storage system (iMISS), which is currently being rolled out in Mozambique. 42 study outcomes and sample size calculations The primary endpoints are as follows: (a) prevalence of molecular markers of diagnostic and antimalarial resistance by period, study area and population (use case 1); (b) genetic-relatedness indicators between pairs of samples and populations by period, study area and population (use case 2) and (c) genetic diversity indicators by period, study area and population (use case 3). Sample size per sampling domain (Province) has been estimated considering antimalarial and diagnostic resistance as a primary use case, considering the negligible carriage of molecular markers of artemisinin resistance 5 and pfhrp2/3 deletions 6 in Mozambique, and setting 5% as the warning threshold. 43 Assuming a 10% of loss of samples or uninterpretable analysis, a sample size of up to 500 per sampling domain would be adequate to: (a) estimate a proportion of 0.05 (markers of drug resistance or pfhrp2 deletion) with 0.026 absolute precision and 95% confidence and (b) achieve a power of 80% for detecting an increase of genetic marker (resistance or deletion) from 0 ‰ to 5% at a two-sided p-value of 0.01. A flexible and adaptive sampling scheme will be followed, where (a) estimates generated during the first half of the project will inform subsequent sampling schemes and (b) not all the samples collected will be analysed (some of them will be stored as reference materials, for confirmation of findings or future studies on Plasmodium biology). The number of pregnant women to be recruited in order to reach the sample number will depend on the parasite rates in the study areas; assuming an overall RDT positivity rate of 25%, we expect we will be needing to recruit a total of 2000 pregnant women per site to get 500 P. falciparum positive samples, although numbers may differ between sites. Analysis plan Demographic and clinical charateristics of study participants will be described using summary statistics. A userfriendly and locally executable bioinformatic pipeline will be developed for analysis of P. falciparum targeted sequencing data. Highly informative SNPs and microhaplotypes showing geographic structuring will be selected using a supervised machine learning approach trained by genomes from known geographic origin in Mozambique. 
Population-level genetic diversity will be quantified using expected heterozygosity (He), number of alleles per locus, allele frequency, complexity of infection (COI) 23 as well as other genetic metrics. Deletions and copy number variations will be assessed based on sequencing coverage ratios. 10 44 Methods to be used for population genetic analysis (including the genetic connectivity among infections, use of all vs only neutral SNPs, treatment of multiple-clone infections and integration of genetic data with travel history data) will be developed during the project. We will use regression models adjusted by potential confounders (demographic and clinical factors, among others) to compare genetic metrics between seasons, before and after the antimalarial interventions, between pregnant women and community sampling populations and across different intensities of malaria transmission. Finally, we will integrate genomic surveillance data into epidemiological and transmission network models. For the first one, we will leverage two recent models developed at the Institute of Disease Modelling 45 (a malaria genetic model calibrated to a longitudinal genetic study in Senegal 18 and a disease transmission model calibrated with the Magude data) to build an end-to-end malaria transmission and genetics model for Mozambique ( figure 3). The transmission network model will include data for densely sampled in low transmission areas on individual and communitylevel case classification (imported, local and introduced), the extent and duration of sustained local transmission and how these change over space and time. Summary indicators will be visualised in graphical and tabular forms in the iMISS through genetic dashboards. We will establish risk profile algorithms and interpretation components that are capable of generating outputs on (a) countrywide antimalarial resistance profiles (rolling-basis); (b) in very low transmission areas (eg, Magude district), genetic connectivity and case classification (together with travel history and other parameters obtained from case-based notification tools) and (c) high burden to high impact specific analyses (ie, stratification and trend investigation for exploring the potential impact of intervention mixes implemented). EthICs And dIssEMInAtIon The protocol was reviewed and approved by the institutional (CISM) and national ethics committees of Open access Mozambique and the Hospital Clinic of Barcelona. Written informed consent will be sought from all study participants before blood sample collection is conducted (online supplemental appendix 1). Two copies will be signed, one will be kept by participant and the other by the investigators in a locked space. The information sheet and consent form will also include text explaining informed consent for future use of biological specimens to conduct additional analyses of the Plasmodium parasite. In case of minors (less than 18 years of age), consent will be sought from parents, relatives or guardians. Informed consents will specify that the data will be made public. First-line treatment for malaria will be provided to the enrolled participants in line with national treatment guidelines. Considerations related to preventing the risk of SARS-COV-2 transmission are detailed in online supplemental appendix 10. There will not be any economic incentive to participate in the study. 
Transference of data and materials out of Mozambique will be done only when appropriate data and material transfer agreements are signed between participating institutions (online supplemental table 2). Patient and public involvement Patients and the public were not involved in the development of this protocol. dIsCussIon There is a growing acceptance that genomics can play a critical role in policy and programmatic decisions. With the aim of demonstrating the programmatic application and feasibility of malaria genomic surveillance in Mozambique, we will generate parasite genomic data across varying transmission scenarios for supporting strategic decision-making. First, MMS data will inform drug and diagnostic choices through the monitoring of molecular markers of antimalarial and diagnostic resistance. The emergence of pfhrp2/3 deletions, 6-8 resistance to artemisinin 37 and partner drugs, as well as the resistance to SP used for chemoprevention, 38 46 47 threatens the global effort to reduce the burden of malaria. 33 The WHO recommends that countries with reports of pfhrp2/3 deletions, and neighbouring countries, should conduct representative baseline surveys among suspected malaria cases. If the prevalence of molecular markers of antimarial resistance or deletions causing false negative RDT results reaches the threshold of >5%, then there is need to consider alternative antimalarials and RDTs. Second, the project will help to target the reservoirs sustaining transmission by quantifying parasite importation, identifying sources and characterising local transmission in near-elimination settings. 48 49 Genomic surveillance and phylogenetic analyses have enabled the near real-time estimation of transmission chains of non-sexually recombining, rapidly evolving pathogens such as Ebola, 50 influenza 51 and COVID-19. 52 However, molecular and analytic advancements are still required to characterise transmission patterns of pathogens such as P. falciparum with a sexually recombining stage. 49 Third, the project will assess the value of P. falciparum genetic diversity measures to supplement traditional surveillance for improving stratification, monitoring and impact evaluations in different epidemiological contexts, especially where surveillance data are sparse. This use case still requires development of analytical and interpretative to infer malaria burden 18 20 53-58 and effectiveness of interventions, 18-23 53 59-61 as well as Open access validation of sampling frameworks. 4 Finally, the project will test if parasite populations within pregnant women are representative of the general population and expand the usefulness of this approach to inform genomic surveillance indicators. The project will use state-of-the-art sequencing and modelling approaches. Current P. falciparum genetic markers based on biallelic SNPs have limited support for polyclonal samples, which are frequent across all transmission intensities, and have limited resolution to calculate genetic relatedness between parasites, to estimate allele frequencies, 23 62 or to distinguish geographic origin. 21 23 63 Multiallelic short-range haplotypes (microhaplotypes) covered by a single read from high-throughput DNA sequencers allow an accurate statistical inference of phase and have the potential to derive more precise information than biallelic loci, [64][65][66] particularly in polyclonal infections, to tailor the genomic tool to specific transmission and geographic settings. 
In addition to being useful for identification and lineage/ family relationships, microhaplotypes can provide information on biogeographic ancestry and can be useful for strain detection and deconvolution. [64][65][66][67] Methods such as IBD 11 68 69 that can exploit the signal left by recombination on these microhaplotypes may have the power to detect geographic differentiation at small spatial scales relevant for malaria control programmes. Machine learning approaches 70 will be used for the selection of key SNPs and microhaplotypes that allow accurate inference of malaria transmission and geographical origin. Finally, models that integrate genomic and epidemiological data will be developed to assess the programmatic effectiveness of new malaria interventions and characterise sources of malaria transmission (imported vs local). 45 This project, guided by programmatic priorities and based on collaborative efforts, aims to boost the use of the genetic data for decision-making. To successfully achieve this, the project is grounded on three main principles: (a) strengthen sequencing capacities to implement a robust MMS system; (b) strong partnership and coordination to make MMS data sharing common practice for malaria control and elimination and (c) effective operationalisation of MMS implementation activities. Technical capacities will be built by establishing at CISM a sequencing platform and ancillary equipment for library preparation and quality control. Computational infrastructure and analytical tools will also be developed by establishing a user-friendly automated platform to analyse genomic data with simplified interpretation into actionable information. Training activities will target molecular biologists for wet laboratory analysis, a bioinformatician and molecular epidemiologists for data analysis and interpretation and a field epidemiologist for interpretation of the generated data, and public health specialists for adoption of the findings into policy. Genetic datato-action culture and engagement of NMCP on genetic analysis will be promoted by integrating genetic aspects in the NMCP activities (ie, data review meetings) as well as in training and annual meetings, by integrating genetic information with other surveillance data onto the iMISS, and by documenting all the processes, successes and failures to inform future molecular activities. The project will pursue the use of MMS data as an adjunct to traditional surveillance information for elimination initiatives in southern Mozambique and burden reduction in the north through the engagement with regional malaria elimination initiatives (eg, E8 and MOSASWA, a trilateral initiative to eliminate malaria from Mozambique, South Africa and Eswatini 71 72 ) and linking decision-making with the 'high burden to high impact' initiative under the guidance of WHO. We expect that the genomic intelligence developed through this project will complement current and new surveillance systems to drive decision-making for the control and eventual elimination of malaria in Mozambique and other malaria endemic countries. However, further steps are required beyond this 3-year project. Enabling policies and regulatory mechanisms for sample storage and sharing, 73 adequate procurement of materials and infrastructure, as well as local expertise for equipment installation and maintenance, need to be developed for an effective integration of genomic surveillance into public health. 
Countries, with appropriate support from mainstream funding bodies, should also develop sustainability plans as part of national disease control programmes, emergency responses and other surveillance programmes (ie, antimicrobial resistance) to ensure resources for genomic surveillance. Finally, regular assessments of the efficiency and effectiveness of incorporating genomic data in routine public health surveillance systems will be crucial to stimulate the use of genetic data for policy making.
2022-07-14T06:16:13.197Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "3c93ce50a77937f2c127051300cc82099689c28a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "BMJ", "pdf_hash": "bd2278c5f1eb86e9bafe9032fb8a311ee5a62d42", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
109208913
pes2o/s2orc
v3-fos-license
Second and higher harmonics generation with memristive systems We show that memristive systems can be used very efficiently to generate passively both double and higher frequency harmonics. A technique for maximizing the power conversion efficiency into any given harmonic is developed and applied to a single memristive system and memristive bridge circuits. We find much higher rates of power conversion compared to the standard diode bridge, with the memristive bridge more efficient for second and higher harmonics generation compared to the single memristive system. The memristive bridge circuit optimized for second harmonic generation behaves as a two-quarter-wave rectifier. In nonlinear optics, the phenomenon of secondharmonic generation (SHG), demonstrated in 1961 1 , refers to the possibility of creating an outgoing light beam with double the frequency of the impinging beam 2 . This is usually achieved with the help of nonlinear crystals. The smallness of the nonlinearity, however, limits the efficiency of direct SHG. The use of lenses and mirrors improves the efficiency up to 85% by making the light pass repeatedly through a nonlinear medium 3 . A common application of SHG in optics is the creation of laser sources with modified frequencies. In the field of electronics, SHG is termed as frequency doubling. Electronic frequency doublers can be divided into active ones, requiring an external power source, and passive ones, operating exclusively on the power of the input signal (see, e.g., Ref. 4). A straightforward realization of a passive frequency doubler is based on a diode bridge 5 , which produces a full-wave rectified output (mathematically, the absolute value of a sinusoidal input). Using Fourier analysis, it can be shown that the diode bridge transforms 4.50% of the input power into the second harmonic, and 18.9% of the input power into all higher harmonics 5 . In addition, by symmetry its output signal contains only even harmonics. Memristors 6 , which belong to the more general class of memristive systems 7 , are resistive circuit elements with memory. A recent understanding 8 of the memristive nature of resistance switching in memory cells has attracted a lot of attention to the field of memory elements 9 , which comprises also memcapacitive and meminductive systems 10 , namely, capacitors and inductors with memory. In this paper we demonstrate that a passive circuit with a single memristive device generates second and higher harmonics signals on a load resistor with significantly higher efficiency than that of the diode bridge. Moreover, the efficiency of harmonics generation can be further improved employing a memristive bridge introduced in this paper. We note that SHG in large memristor networks was anticipated in Ref. 11 but no detailed analysis was given. We start by showing that memristive systems indeed generate second and higher harmonics using a simple model of resistance switching introduced in Ref. 8 to describe the response of a thin TiO 2 film of a width D sandwiched between two Pt electrodes. The phenomenon of resistance switching in this system is understood 12 as a field-induced migration of oxygen vacancies changing the width w of a doped low-resistance region. Electrical current in the positive direction (towards the thick line in the circuit symbol of the memristor in Fig. 1(a)) increases w, decreasing the resistance. 
If the maximum and minimum resistance values, obtained at w = 0 and w = D, are denoted by R off and R on , respectively, and the charge flow through the device needed to completely switch it from one limiting state into another is q 0 , then the device memristance (memory resistance) can be written as 8 where w/D = q/q 0 , and q is the cumulative charge flowing through the device. We first consider a sine voltage source of period T connected in series to a memristive device R(w) and a load resistor R 1 ( Fig. 1(a)). Kirchhoff's voltage law for this circuit is where R(w) is given by Eq. (1) and ω ≡ 2π/T . Solving Eq. (2) with q(t = 0) = 0, we find the current where I(t) ≡q, and R 2 ≡ V 0 /(q 0 ω) is a constant with dimensions of resistance. First, we see that the current I(t) in Eq. (3) has the same periodicity as the source and, therefore, can be written as a Fourier series. Second, the numerator in Eq. (3) is a static resistivity current expression, while the denominator contains corrections due to memory. These corrections give rise to all higher harmonics. In many memristive devices R on ≪ R off , and it is reasonable to assume R 1 < R off . Under such assumptions, it follows from Eq. (3) that the condition for considerable generation of higher harmonics is having R 2 /R off of order one (while keeping the denominator real). For a given memristor, the ratio R 2 /R off can be increased either by increasing the amplitude of the source, V 0 , or by decreasing its frequency, ω/2π. The memristor model given by Eq. (1) combined with w/D = q/q 0 is convenient for analytical calculations. However, it does not limit R(w) between R on and R off . In order to obtain quantitative results, we then suggest a more realistic model consisting of Eq. (1) anḋ where the θ functions in Eq. (4) constrain w to satisfy 0 ≤ w ≤ D, in agreement with experimental data 13,14 . It follows from Eqs. (4) and (1) thatẇ has the periodicity ofq, and that the current I(t) has the same periodicity as that of the voltage source. However, I(t) cannot be found in a closed analytical form now, and we solve the problem numerically. If we denote the average power dissipated on the load resistor in the i-th harmonic as P i and the average power of the source as P source , three important ratios come to mind: P 2 /P source , P 3 /P source and ( ∞ k=2 P k )/P source , which are the second, third and higher harmonics conversion efficiencies, respectively. Identifying the system parameters maximizing these ratios will enable us to generate, upon application of a suitable band pass filter at the output, the desired harmonics passively and with minimal losses. Considering all possible parameters, we can write the general functional dependence of the above ratios as As noted above, these ratios cannot be written in closed analytical form. Moreover, the numerical maximization is difficult due to the large number of parameters. It is shown below that it is reasonable to select w(t = 0) = 0. The number of parameters can be further reduced utilizing the Buckingham π theorem of dimensional analysis 15 . . I(t) has been obtained using V0 = 1V, q0 = 3 · 10 −12 C and R off = 20kΩ. Its application results in functions of dimensionless variables for the above ratios: The left-hand sides of Eqs. (7) and (8) are maximized numerically considering the following realistic ranges of dimensionless parameters 8 : 10 −5 ≤ R 1 /R off ≤ 10, 10 −4 ≤ R 2 /R off ≤ 10 and 10 ≤ R off /R on ≤ 10 3 . 
We find that the maxima of P 2 /P source and P 3 /P source depend weakly on R off /R on and lie in the range 150 < R off /R on < 1000. Consequently, we set R off /R on = 200. For ∞ k=2 P k /P source it is found that R of f /R on = 1000 gives the best value. Table I presents conversion efficiencies and lists the optimal parameter values. Clearly, in all cases the single memristive device circuit provides significantly higher conversion rates than the diode bridge. Figs. 2(a)-(f) show the current through the load resistor and corresponding harmonics power spectrum. Since the memristive device is a dynamically adaptive system, I(t) is very different from the sine shape of the source. It is also important to check how the initial value of w(t = 0) affects the higher harmonics generation. We have performed extensive numerical simulations and found that the power conversion rates (Eqs. (5) and (6)) always increase with a decrease of w. This observation justifies our choice of w(t = 0) = 0 corresponding to R(w(t = 0)) = R of f . Moreover, we note that once a certain R(w(t = 0)) is selected, the value of R of f becomes not important if R(w(t)) > R on . In order to prove this statement, we notice that the load current has the same period as the source, which implies that the memristance also has such periodicity. The memristance increases/decreases with negative/positive current, which changes sign only at t = nT and (n + 1/2)T , where n is an integer. Since the memristance decreases immediately after t = 0, we conclude the memristance has maxima at t = nT , minima at (n + 1/2)T and no other extrema. Therefore, R on < R(w) ≤ R(w = 0) during the entire period, implying that having w 1 ≡ w(t = 0) > 0 and off resistance R of f is equivalent to having w(t = 0) = 0 and off resistance R(w 1 ). The decrease in the memristance range reduces the circuit nonlinearity, translating into lower efficiency of higher harmonics generation. The efficiency of the second and higher harmonics generation can be increased even further by utilizing a memristive bridge having the geometry of the diode bridge ( Fig. 1(b)). The memristive bridge consists of four identical memristive devices described by Eqs. (1) and (4) rectifying the input signal via the delayed switching effect 16 . This rectification reduces the load current periodicity to T /2, leaving only the even harmonics. Conse-quently, the second and higher harmonic generation efficiencies are improved relative to those of the single device circuit, while third harmonic generation is excluded. In complete analogy with the single device circuit, dimensional analysis for the memristive bridge leads to Eqs. (7) and (8) for the efficiency of harmonics generation. Dimensionless parameters maximizing P 2 /P source are R 2 /R off = 2.82, R 1 /R off = 0.0267 and R of f /R on = 1000 giving P 2 /P source = 40.3%, a substantial improvement over the single device circuit, with same initial conditions. I(t) and the power distribution of harmonics for the memristive bridge are given in Figs. 2(g,h). It is found that with an increase in R of f /R on , the optimized current shape tends towards | sin ωt| in even quarters of each period and zero in odd ones. Such a signal has P 2 /P source = 45.0%, which is indeed the asymptotic efficiency at R of f /R on → ∞. We recognize this signal shape as being two-quarter-wave rectified. 
For higher harmonics generation, we find the value after optimization of ( ∞ k=2 P k )/P source to be 56.4%, a modest improvement over the single device circuit. In conclusion, we have demonstrated the potential of memristive systems for passive second and higher harmonics generation. Using an approach to maximize the rate of power conversion for a specific harmonic, we have shown that memristive circuits are much more efficient for harmonic generation purposes (at optimal operation conditions) than the traditional diode bridge. In addition, the operation voltage in memristive circuits can be lower than that used in diode circuits because of ≈ 0.7V barrier voltage of silicon p-n junctions 5 . We also anticipate that memristive devices can be beneficially used in harmonics generation in combination with active circuits. An example of such a circuit is a memristive bridge operating with a high resistance load followed by an operational amplifier. Finally, memcapacitive and meminductive systems 9,10 can be used for passive (and low-dissipative) higher harmonics generation instead of memristive ones. The results of our investigation can be readily tested experimentally. This work has been partially supported by NSF grant No. DMR-0802830 and the Center for Magnetic Recording Research at UCSD.
2012-03-28T16:51:19.000Z
2012-02-21T00:00:00.000
{ "year": 2012, "sha1": "fc8f1699b1a18b3bfd0ac9d42a93c27fdea5c15e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1202.4727", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fc8f1699b1a18b3bfd0ac9d42a93c27fdea5c15e", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
244437245
pes2o/s2orc
v3-fos-license
How COVID-19 Exposed Water Supply Fragility in Florida, USA : Healthcare demand for liquid oxygen during the COVID-19 pandemic limited the availability of oxygen needed for ozone disinfection of drinking water in several urban areas of Florida. While the situation reduced the state’s capacity to provide normal drinking water treatment for millions of people, calls for water conservation during the emergency period resulted in virtually no change in water consumption. Here, we point out that 38–40% of the potable water produced by one of the major utilities in Florida is not used for drinking water but instead is used for outdoor landscape irrigation. This suggests that emergency-level calls for reduced water use could have been made if outdoor irrigation was limited, but we present data showing that there was little change in public behavior, and the state was unable to meet necessary water use reductions during the emergency. This inability to meet short-term emergency water conservation needs foretells a long-term lack of resilience against other global change scenarios and suggests that much work is still needed to build resilience into Florida’s water future. We conclude this Viewpoint paper by calling for more urgent sociohydrological research to understand the coupled human-natural drivers of how water supplies respond to global change. Introduction Florida, a peninsular state with a subtropical climate, is surrounded by water and receives almost 140 cm of rainfall each year. However, surprisingly, Florida faces a looming water supply shortage. In 2020, the population totaled more than 21.5 million individuals and is projected to increase by 26% to more than 27 million people by 2045 [1]. Regional water supply planners estimate that the state will need an additional 849 million gallons per day (MGD) of potable water to keep up with population growth through 2040 [2]. Not all of that 849 MGD can be met with existing developed drinking water sources [2], leaving the question of where the water needed for the next decades will come from. In the midst of this already-stressed water supply situation, the COVID-19 pandemic further exposed the fragility of Florida's drinking water supply. In short, the liquid oxygen used to make ozone for disinfection at various drinking water treatment plants was scarce because priority was given to COVID-19 patients needing the same oxygen supply. Ozone is a strong oxidizer and is used in water treatment as an alternative to chlorine disinfection to remove pathogens, hydrogen sulfide, and bad tastes and odors. However, as high demand from healthcare systems for liquid oxygen peaked in August and September 2021, Florida utilities that rely on ozone disinfection experienced a 30-50% reduction in oxygen supply shipments [3]. This led to reduced capacity to meet normal drinking water treatment and disinfection standards for millions of Floridians. In August 2021, the mayor of Orlando, Florida (the state's fourth most populous city) joined with a representative from the Orlando Utilities Commission to announce that they would need to reduce water demand by 25-50% in the coming weeks. They explained that liquid oxygen used in the drinking water treatment process was being diverted to area hospitals for critically ill COVID-19 patients [4] and made a call for citizens to cut down on all non-essential water consumption. 
A few days later, Tampa Bay Water, which provides drinking water to more than 2.5 million people in the Tampa Bay region, made a similar plea to its water customers [5]. While disturbances to water treatment processes such as this one associated with a global pandemic cannot always be predicted, this scenario does highlight how a single event can tip us toward drinking water vulnerability and foretell a lack of resilience. The pandemic-driven water treatment crisis in Florida also underscores how we cannot expect to achieve that resilience until we fully address the social dimensions of water supply. "How we got here" rests on one crucial statistic: Under normal operating conditions, 38-40% of the treated water produced by the Orlando Utilities Commission is not actually used for drinking water but instead goes toward landscape irrigation [3]. Reductions in non-essential water use (e.g., landscape irrigation) could have made large contributions in ameliorating the sudden threat to sufficient supplies of properly disinfected water for drinking during the COVID-19 pandemic. However, data collected over the ensuing weeks on how residents responded to the crisis showed that the necessary reductions in water use were not achieved. Below, we provide a brief background on Florida's public water supply and present data on how public use of drinking water during the COVID-19 emergency described above remained largely unchanged. The purpose of presenting this narrative is to highlight how the current pandemic exposed the fragility of our public water supply and the need to more effectively build resilience for an uncertain future in the face of population growth and climate change. Public Water Supply Trends in Florida Approximately 90% of all drinking water in Florida is sourced from the Floridan aquifer [6]. Although a renewable resource, groundwater in Florida experiences seasonal fluctuations due to distinct wet and dry seasons associated with a subtropical climate, and periodic droughts are not uncommon in the state. Florida is currently the third most populous state in the U.S. and continues to experience substantial population growth and urbanization. In the past two decades, the public water supply has surpassed agriculture as the state's largest user of freshwater. Total freshwater withdrawals for public supply increased by 170% between 1970 and 2015 and now account for more than half of all groundwater withdrawals in the state. Domestic water use, both indoor and outdoor, comprises approximately 64% of that demand [7]. Water management in the state is administered through five hydrologically defined water management districts. Florida Statutes require these districts to engage in regional water supply planning where it has been determined that the existing water supply is not adequate for all current and future "reasonable-beneficial uses" ( §373.709, Fla. Stat.). These plans indicate that over a twenty-year planning horizon (2020-2040), due to population growth, there will be a net demand increase of 849 MGD, of which 337 MGD (40%) is not available from currently developed sources [2]. Many of these water supply plans rely on water conservation and alternative water supply sources to meet future demands. Alternative water supply sources include seawater and brackish groundwater desalination, wastewater effluent reuse (including reuse for aquifer recharge), surface water withdrawals, or drawing groundwater from lower-quality aquifers. 
A recent analysis of more than 600 statewide water supply projects estimated the median cost of implementation at $3.54 million per project. The authors project that nearly $2 billion will need to be invested in the development of alternative water supply projects to meet future water demand in Florida by 2035 [8]. Additionally, expanded reuse of wastewater effluent will require overcoming several social and technical barriers, including any public resistance against using former wastewater and any potential human or ecological risks that may be associated with its use. The cost of alternative water supplies in Florida is 6-30 times higher than traditional water supplies and, depending on the source, can cost customers between $1.74 and $7.21 per 1000 gallons, as compared to $0.25 for traditional water supply sources [9]. It has been estimated that water conservation can also reduce the need for alternative water supplies by nearly 40% [8]. Therefore, water conservation is recognized by the state as a more economical and efficient means for meeting water supply needs than alternative supply projects ( §373.227, Fla. Stat.) and is a key strategy being relied upon in long-term water supply planning for the state. Successful water conservation strategies are contingent on identifying how consumers are currently using water and acting upon the opportunities and benefits associated with these efforts. In Florida, more than half of the residential water supply (equal to over 900 MGD) is used for landscape irrigation [10]. For high water users or those homes that fall within the top 25% of all water users in terms of daily water consumption, the proportion of water that is used for outdoor irrigation can be as much as 70% [11]. Given this, targeting landscape irrigation is identified in long-term planning reports as one of the best opportunities for water conservation in the public water supply sector and for building resilience against future threats to drinking water supply and infrastructure. Consumer Response to Emergency Calls for Water Conservation Just as calls for water conservation are part of the state's long-term planning against water scarcity in the face of future population growth, so were there calls for water conservation during the near-term COVID-19 water treatment emergency. The Orlando Utilities Commission (OUC) made direct requests for their customers to limit lawn irrigation and other non-essential uses of drinking water during the crisis. Public requests were made to consumers August-September, during Florida's rainy season. Over those two months, the OUC region received 10.95 inches of rain, which was 60-120% of the average for those months [12]. Although consumer demand for treated drinking water in Orlando fell slightly from the pre-emergency demand of approximately 90 MGD, the demand did not fall enough to meet necessary treatment capacity thresholds (Figure 1). The utility needed a 25-50% reduction in daily water demand, but as of 5 October 2021, the lowest daily usage since the beginning of the emergency was 71.5 MGD, only a 20% reduction. It is noteworthy that the lowest daily usage occurred on a Monday, the only weekday when irrigation is not permitted for any portion of the population under existing county watering restrictions. Other experiences nationally and internationally have demonstrated similar results. For example, calls for voluntary limits on landscape irrigation during a drought in Los Angeles, California, from 2008 to 2010 were ineffective [13]. 
A review of water conservation programs across the US found that voluntary irrigation restrictions achieved only 4-12% of actual water savings [14]. Figure 1. The blue line shows the total daily water pumped (consumer demand) by Orlando Utility Commission (OUC) in August and September 2020. The orange bar marks the initial call for water use reductions during the COVID-19 shortage of liquid oxygen supplies. Horizontal gray lines represent the 25-50% reduction thresholds needed (but not met) by the utility to continue normal disinfection processes. Rethinking Strategies to Ensure Long-Term Water Supply Other experiences nationally and internationally have demonstrated similar results. For example, calls for voluntary limits on landscape irrigation during a drought in Los Angeles, California, from 2008 to 2010 were ineffective [13]. A review of water conservation programs across the US found that voluntary irrigation restrictions achieved only 4-12% of actual water savings [14]. Rethinking Strategies to Ensure Long-Term Water Supply The futility of relying on calls for water conservation during the rainy season in the face of a short-term emergency in Florida demonstrates that there is much work to be done if we are to successfully rely on water conservation as a strategy to build long-term resilience in our water supply. Positioning water conservation as a tool in the toolbox against future water shortages is meaningless unless there is the societal will and action to use the tool, that is, to actually use less water. We have pointed out here that in Florida, more than half of all residential water use goes toward landscape irrigation, highlighting one key area for action. Achieving the necessary improvement in water conservation will require efforts in multiple sectors of society and will come from a combination of public education and engagement, advanced technologies (such as high-efficiency plumbing fixtures and appliances, and smart irrigation controllers), alternative landscaping practices, and best management practices. Other potential areas for change include regulatory actions such as irrigation codes and the enforcement of those codes, landscape efficiency ordinance adoption, and increasing-block rate structures from utilities. The fragility of Florida's drinking water supply system exposed by the COVID-19 pandemic also makes one thing very clear: We cannot fix an already-stressed water supply system at the last minute. Long-term regional water supply planning processes are an acknowledgment of this, but again, relying on plans for water conservation cannot be an effective strategy if plans are not fully executed through public action. Such calls were certainly not effective during Florida's COVID-19 emergency. Looking forward, we need transdisciplinary and socio-hydrological research to better understand the feedbacks between water resources and society and any uncertainty associated with future global change, and how society will respond to that. Importantly, fixing our fragile water supply system and building resilience into it will need to view water supply as a coupled humannatural system, and we call for urgent future research on the underlying mechanisms driving social responses to drinking water vulnerability.
2021-11-21T16:16:53.048Z
2021-11-18T00:00:00.000
{ "year": 2021, "sha1": "a70417206f1e1f36f2c34b32ac5ed453c95a3745", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2413-8851/5/4/90/pdf?version=1637208143", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7afeec54edcba73c8d17c27caf4719f5e01edc0d", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
208644344
pes2o/s2orc
v3-fos-license
Expiratory flow limitation in intensive care: prevalence and risk factors Background Expiratory flow limitation (EFL) is characterised by a markedly reduced expiratory flow insensitive to the expiratory driving pressure. The presence of EFL can influence the respiratory and cardiovascular function and damage the small airways; its occurrence has been demonstrated in different diseases, such as COPD, asthma, obesity, cardiac failure, ARDS, and cystic fibrosis. Our aim was to evaluate the prevalence of EFL in patients requiring mechanical ventilation for acute respiratory failure and to determine the main clinical characteristics, the risk factors and clinical outcome associated with the presence of EFL. Methods Patients admitted to the intensive care unit (ICU) with an expected length of mechanical ventilation of 72 h were enrolled in this prospective, observational study. Patients were evaluated, within 24 h from ICU admission and for at least 72 h, in terms of respiratory mechanics, presence of EFL through the PEEP test, daily fluid balance and followed for outcome measurements. Results Among the 121 patients enrolled, 37 (31%) exhibited EFL upon admission. Flow-limited patients had higher BMI, history of pulmonary or heart disease, worse respiratory dyspnoea score, higher intrinsic positive end-expiratory pressure, flow and additional resistance. Over the course of the initial 72 h of mechanical ventilation, additional 21 patients (17%) developed EFL. New onset EFL was associated with a more positive cumulative fluid balance at day 3 (103.3 ml/kg) compared to that of patients without EFL (65.8 ml/kg). Flow-limited patients had longer duration of mechanical ventilation, longer ICU length of stay and higher in-ICU mortality. Conclusions EFL is common among ICU patients and correlates with adverse outcomes. The major determinant for developing EFL in patients during the first 3 days of their ICU stay is a positive fluid balance. Further studies are needed to assess if a restrictive fluid therapy might be associated with a lower incidence of EFL. Background Expiratory flow limitation (EFL) is a dynamic condition in which expiratory flow has already reached its maximal value [1]. According to Mead et al. [2], once the expiratory flow is limited at a given lung volume, there is a site in the intrathoracic airways where intrabronchial and extrabronchial pressure are equal, the so-called equal pressure point (EPP) [3]. Airways downstream of the EPP would be compressed, the diameter markedly reduced, with the expiratory flow becoming insensitive to increases of expiratory driving pressure or to the contraction of the expiratory muscles [1,[4][5][6][7]. The mechanisms leading to EFL can vary among different pathologies. COPD patients may develop EFL because of increased expiratory resistance [17] that tend to reduce the transmural pressure (i.e. the difference between the pressure inside and outside the airways), leading to the development of the EPP. Incomplete lung emptying is frequently associated with dynamic lung hyperinflation with the generation of intrinsic positive end-expiratory pressure (PEEPi) [18]. The latter can have several adverse effects on haemodynamic (i.e. cardiac output depression, increased pulmonary vessel resistance), respiratory muscle function (i.e. altered length-tension characteristics of the diaphragm, increased work of breathing) and patient-ventilator interaction (i.e. patient-ventilator asynchrony). 
On the other hand, patients with ARDS [19] or those undergoing general anaesthesia [20] can experience a reduction of functional residual capacity (FRC) able both to increase the expiratory resistance and to favour the collapse of the small airways. The ensuing inspiration re-open those airways, and repetitive opening and closure of small airways has been shown to induce histological damage of small airways probably due to the development of high shear forces [21,22]. This should elicit an inflammatory response and increase the risk of low lung volume injury [23,24]. Although EFL seems to represent a relevant pathological condition, surprisingly, only few studies evaluated the prevalence of EFL in critically ill patients. Alvisi et al. [8] demonstrated that almost every COPD patient (93%) is flow limited at intensive care unit (ICU) admission for an acute and chronic respiratory failure, while Koutsoukou et al. [9] found that EFL might be common in patients with ARDS. However, both studies enrolled a small number of selected patients so that it is difficult to derive conclusions on the clinical relevance of EFL and its determinants. The primary aim of the present study is to evaluate the prevalence of EFL in ICU patients requiring mechanical ventilation for acute respiratory failure, and to determine the main clinical characteristics and risk factors associated with the presence of EFL. Secondly, we explored the possible impact of the presence of EFL on patients' clinical outcome. Design, setting and patients We performed a prospective, observational study conducted in the general ICU of the S. Anna University Hospital, Ferrara, Italy. The study was approved by the ethics committee of our institution (Azienda Ospedaliero-Universitaria Ferrara Ethic Committee, protocol number: 74/2016). Informed consent was obtained from each patient or next of kin. Patients were recruited over a 12month period between April 2016 and April 2017. We screened and included all consecutive patients admitted to the ICU older than 18 years with an acute respiratory failure and with an expected length of mechanical ventilation of 72 h or more, as judged by the physician in charge. Exclusion criteria were (1) pregnancy, (2) haemodynamic instability (i.e. heart rate ≥ 120 beats/min or cardiac arrhythmia; systolic blood pressure < 90 or vasopressor use, i.e. dopamine or dobutamine ≥ 5 μg/kg/min or noradrenaline ≥ 0.1 μg/kg/min), (3) presence of laparostomy and (4) active air leakage (i.e. pneumothorax or presence of thoracic drainage) (Additional file 1). The observational period started within 24 h from admission to ICU and continued for at least 72 h. Patients were followed for outcome assessment until hospital discharge. Determination of EFL and respiratory variables All measurements were performed by three investigators (FDC, CR, EM) equally expert in respiratory mechanics and data collection. Patients were studied in a semirecumbent position, with a head of bed angle of 30°. The presence of EFL was determined by the PEEP test. The latter is based on a sudden decrease of PEEP from 3 to 0 cmH 2 O at the end of inspiration in order to increase the expiratory driving pressure and establish whether or not the expiratory flow increases. If the expiratory flow increases after subtraction of PEEP, then the patient is classified as not flow limited. On the contrary, if the expiratory flow does not increase after subtraction of PEEP, the patient is classified as having EFL. 
This approach requires a specific manoeuvre to show two different flow-volume loops in the same display, and it is available on all modern ventilators. The flowvolume curve with 3 cmH 2 O of PEEP is used as a reference and fixed on the screen. The flow-volume curve of the ensuing breath in which PEEP is reduced to 0 cmH 2 O is superimposed to the previous one in order to determine if the two flow-volume curves overlap (i.e. the expiratory flow does not increase), or the flowvolume curve at ZEEP exhibit an increase of the expiratory flow (patient not flow limited) (Additional file 2). The accuracy and the reproducibility of the PEEP test have been compared to the Negative Expiratory Pressure (NEP) test and validated previously [20]. Further, the same PEEP test was used to determine the value of PEEP able to eliminate the presence of EFL, the so-called PEEP-EFL. The latter was calculated as the minimal value of PEEP that, according to the flow-volume curve, allows the expiratory flow to increase during tidal expiration (Fig. 1). This was obtained by an incremental PEEP trial. Respiratory mechanics were performed at zero-PEEP (ZEEP) by the standard airway occlusion technique using a 5-s end-expiratory occlusion followed by a 5-s end-inspiratory occlusion [20]. The flow and additional resistance as well as the static compliance of the respiratory system were computed using standard formulas [20]. During these tests, patients were deeply sedated using continuous intravenous infusion of propofol (1-2 mg/kg) and paralysed with a bolus of rocuronium bromide (0.6 mg/kg) and mechanically ventilated in volumecontrolled mode. Patients with COPD were studied after at least 8 h from the administration of albuterol. As per our clinical practice, patients without COPD were not given bronchodilators. The severity of chronic dyspnoea was rated according to the modified dyspnoea scale proposed by the Medical Research Council (mMRC) [25]. Data collection and outcome data The presence of EFL was determined at the ICU admission (within 12 h) and daily during the first 72 h. Data of respiratory mechanics were assessed at day 1 and at day 3 from ICU admission. Demographics, anthropometrics, comorbidities, information and causes of hospitalisation were recorded into study-specific case report forms and database. COPD was defined according to recent ATS/ERS criteria [26], and COPD severity was assessed by the Global Initiative for COPD (GOLD) criteria [27]. Simplified Acute Physiology Score (SAPS) II and Sequential Organ Failure Assessment (SOFA) were determined during the first 24 h after ICU admission. The diagnosis of ARDS was based on the Berlin definition [28]. The occurrences of acute kidney injury (AKI) and septic shock were diagnosed according to international guidelines statements, Kidney Disease: Improving Global Outcomes (KDIGO) criteria [29] and surviving sepsis campaign (Sepsi-3) criteria [30], respectively. Daily fluid balance was recorded as the algebraic sum of fluid intake and output per day, not including insensible losses, while cumulative fluid balance (CFB) was calculated as the algebraic sum of daily fluid balance during the observational period. We reported CFB as absolute number or divided by the admission weight of the patient (CFB/kg). Cumulative fluid overload (CFO) was calculated by dividing the CFB by the admission weight of each patient and was expressed as a percentage, as previously proposed [31]. We considered a CFO ≥ 10% as severe fluid overload. 
Outcome data such as days of mechanical ventilation, ICU and hospital length of stay and ICU and hospital mortality were retrieved from the hospital's electronic patient chart. Statistical analysis Data are presented as frequencies and percentages and mean ± standard deviation or medians with 25th to 75th percentiles range [interquartile range], depending on the type of data and their distribution. The Shapiro-Wilk test was used to assess the assumption of normality. Categorical data were compared using the χ 2 test or Fisher exact test as appropriate. Unpaired Student's t tests or Mann-Whitney U tests for data with normal or non-normal distribution, respectively, were used to compare continuous variables. Friedman test was used to test differences in CFB, CFB/kg and CFO within groups among three different time points ( The association between the presence of EFL at admission and baseline patient characteristics was modelled using binary logistic regression analysis and reported as estimated odds ratio (OR) and relative 95% confidence interval (CI). Patients' characteristics independently associated with the presence of EFL at ICU admission were assessed in a multivariate logistic regression model. In the same fashion, a univariate logistic approach was used to assess the association between a CFO ≥ 10%, the development of AKI in ICU, AHRF, ARDS or septic shock at admission and the incidence of EFL during the first 72 h of ICU stay. Statistical analyses were performed using SPSS 20.0 statistical software (SPSS Inc., Chicago, IL). In all statistical analyses, a 2-tailed test was performed and the p value ≤ .05 was considered statistically significant. Results A total of 121 patients were enrolled, and their main characteristics at admission are shown in Table 1. The most frequent causes for ICU admission were acute hypoxaemic respiratory failure (AHRF) (43%), sepsis (37%), ARDS (24%) and haemorrhagic shock (9%). Among the 121 patients included, 28 had a diagnosis of COPD and 6 of them were admitted for an acute exacerbation of COPD. Cumulative fluid balance and EFL development Patients who developed EFL during the first 72 h of ICU stay had a higher cumulative fluid balance and cumulative fluid overload compared to patients without EFL and with EFL at ICU admission. The trend of cumulative fluid accumulation is shown in Fig. 2 and in Additional file 6. In patients developing EFL during the ICU stay, a higher cumulative fluid balance was associated with higher values of intrinsic PEEP on the day of EFL development (R 2 = 0.304, p = 0.010) (Fig. 3). Moreover, a CFO ≥ 10% over the first 2 and 3 days of ICU stay was associated with the development of EFL over the first 3 days of stay in ICU (OR 3.9, 95% CI 1.4-10.9, p = 0.011 and OR 3.1 95% CI 1.1-8.5, p = 0.030, respectively) ( Table 4). Discussion The main results of the present study can be summarised as follows: (1) EFL is frequent among ICU patients requiring mechanical ventilation for acute respiratory failure of different origin; (2) patients exhibiting EFL have worse parameters of respiratory mechanics and clinical outcome compared to those who did not; (3) the absence of EFL at ICU admission does not exclude its occurrence during ICU stay since part of the patients (17%) developed EFL after ICU admission; and (4) the development of EFL during the ICU stay was strongly associated with a positive fluid balance. 
The presence of EFL was previously detected in 93% of the COPD patients at ICU admission [8], and their pathophysiological pulmonary characteristics explain why they are prone to develop EFL compared to other categories of patients. However, the presence of EFL has been previously demonstrated in other patients so that it could be hypothesised that an unknown amount of ICU patients other than COPD can exhibit EFL. This could EFL expiratory flow limitation, Cst,rs static compliance of the respiratory system, Rrs,max total resistance of the respiratory system, Rrs,min flow resistance of the respiratory system, ΔRrs additional resistance of the respiratory system, P/F arterial partial oxygen pressure to fraction of inspired oxygen ratio, PEEPi intrinsic positive end-expiratory pressure, PEEP appl positive end-expiratory pressure applied at the ventilator, RR respiratory rate, V T tidal volume, IBW ideal body weight, Ppeak peak inspiratory pressure, Pplat plateau pressure, ΔP driving pressure have relevant clinical consequences since the presence of EFL has numerous side effects [1], such as the presence of PEEPi [32] that might have detrimental effects on respiratory efficiency and cardiovascular function. Further, the reduction of the expiratory flow and the inability to increase it by the expiratory muscle contraction decrease the efficacy of cough and secretion removal [14] favouring the development of atelectasis, bronchitis and pneumonia. Finally, EFL might imply cyclic opening/closure of the small airways [7,33] that can lead to hypoxaemia and ventilation/perfusion mismatch. Interestingly, a large amount of the patients enrolled in the present study (48%) were flow limited within the first 72 h of ICU stay, suggesting that EFL is common in ICU patients. Patients with EFL had higher duration of mechanical ventilation, ICU length of stay and ICU mortality. These outcomes were associated with a more compromised respiratory function since these patients exhibited increased inspiratory and additional resistance and higher PEEPi. However, our study shows another complementary aspect that deserves clinical attention. We were surprised that 21 patients (17%) became flow limited after ICU admission. While it is easily explainable that obese patients or those with COPD or heart disease can exhibit EFL at ICU admission, it could be less clear why patients might develop EFL during the ICU stay. Interestingly, the main determinant of EFL after ICU admission was a positive fluid balance. Patients who developed EFL during the first 72 h of ICU stay had the higher cumulative fluid overload (Table 4 and Fig. 2); further, a CFO ≥ 10% over the first 2 days of ICU stay was independently associated with EFL (OR 3.7, 95%CI 1.2-11.4, p = 0.025). Hence, fluid therapy can have relevant clinical consequences even at the lung level. Physician should pay particular attention to the amount of fluid administered. Excessive fluids administration can lead to EFL. The latter has been demonstrated to be responsible of damage of small airways that elicit an inflammatory response [22,23]. It was previously demonstrated that a positive fluid balance can worsen respiratory function, increase the occurrence of pulmonary complication and have an impact on patients' outcome in patients with acute lung injury and ARDS [34][35][36]. Detecting and abolishing EFL should be part of the lung-protective strategy. Ventilation at low lung volume leading to EFL could be avoided by the use of PEEP. 
In patients with flow limited at the ICU admission, the value of PEEP able to avoid EFL was 8 [6-10] cmH 2 O and then statistically decreased to 6 [5][6][7][8] at day 3, while in those developing EFL during the ICU stay, this value was 5 [5][6] cmH 2 O at day 1 and 5 [4][5][6][7] at day 3. The effects of PEEP on EFL have been previously tested in patients with ARDS. Koutsoukou et al. [37] demonstrated that 10 cmH 2 O of PEEP abolished the presence of EFL during tidal ventilation. The difference between the two studies could be the patients' population, the severity of the underlying disease and the number of patients enrolled. Koutsoukou et al. [37] studied 13 patients while we enrolled 121 patients with ARF of different aetiologies. The present study has some limitations: (1) it is a single-centre design with limited sample size. However, this is the first study aiming at identifying possible causes of EFL occurrence in an unselected cohort of ventilated patients; (2) we did not use other techniques such as extra-vascular lung water measurement or lung ultrasound to quantify lung oedema for confirming the association between cumulative fluid overload and EFL occurrence; (3) application of the PEEP test, as it is for all tests evaluating the presence of EFL, carries the need of having the patients for one breath at ZEEP. This could partially derecruit the lung, although the limited time on ZEEP ventilation should minimise the possible effects of the PEEP test on lung function; and (4) we have reported some data on the association between EFL prevalence/development and ICU mortality. However, these data should be regarded only as descriptive since the observational nature of this study and the small sample size do not allow us to make any conclusion. Future larger studies are needed to prove the potential effect of EFL development on the increase of ICU mortality. Conclusions The presence of EFL is common among ICU patients requiring mechanical ventilation for acute respiratory failure of different aetiologies. Interestingly, the major determinant for developing EFL in patients during the first 3 days of their ICU stay is a positive fluid balance. Further studies are needed to assess if a restrictive fluid therapy might be associated with a lower incidence of EFL.
2019-12-06T14:30:18.694Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "b1e46850aa53f22ce3dcf0f89a651eaab4530a7c", "oa_license": "CCBY", "oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/s13054-019-2682-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e8a41cfeccf4193c69c0e76d3d8d92e964417989", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237605583
pes2o/s2orc
v3-fos-license
Three-dimensionality of mobile electrons at X-ray-irradiated LaAlO$_3$/SrTiO$_3$ interfaces Effects of X-ray irradiation on the electronic structure of LaAlO$_3$/SrTiO$_3$ (LAO/STO) samples, grown at low oxygen pressure and post-annealed ex-situ till recovery of their stoichiometry, were investigated by soft-X-ray ARPES. The irradiation at low sample temperature below ~100K creates oxygen vacancies (VOs) injecting Ti t2g-electrons into the interfacial mobile electron system (MES). At this temperature the oxygen out-diffusion is suppressed, and the VOs are expected to appear mostly in the top STO layer. However, we observe a pronounced three-dimensional (3D) character of the X-ray generated MES in our samples, indicating its large extension into the STO depth, which contrasts to the purely two-dimensional (2D) character of the MES in standard stoichiometric LAO/STO samples. Based on self-interaction-corrected DFT calculations of the MES induced by VOs at the interface and in STO bulk, we discuss possible mechanisms of this puzzling three-dimensionality. They may involve VOs remnant in the deeper STO layers, photoconductivity-induced metallic states as well as more exotic mechanisms such as X-ray induced formation of Frenkel pairs. Introduction Transition-metal oxides (TMOs) presently play one of the forefront roles in theoretical and experimental condensed matter research (for entries see [1]). An involved interplay between the spin, charge, orbital and lattice degrees of freedom in these materials results in a wealth of phenomena interesting from the fundamental point of view and bearing potential for technological applications. These include rich electronic and magnetic phase diagrams, metal-insulator transitions, colossal magnetoresistance, ferroelectricity, multiferroicity, high-T c superconductivity, etc. Interfaces and heterostructures of TMOs can add another dimension to their fascinating properties, giving rise to physical phenomena which cannot be anticipated from the properties of individual constituents, with the new functionalities having great promise for future device applications (see, for example, the reviews [2,3]). The interface between LaAlO 3 (LAO) and SrTiO 3 (STO) is a paradigm example of new functionalities that can be formed by interfacing TMOs. Although bulk LAO and STO are both band insulators, their interface spontaneously forms a mobile electron system (MES) [2,3]. Its high electron mobility co-exists with superconductivity, ferromagnetism, large magnetoresistance and other non-trivial phenomena which can in addition be tuned with field effect. The MES electrons are localized in the interfacial quantum well (QW) on the STO side and populate the Ti t 2g -derived in-plane d xy -states and out-of-plane d xz/yz -states [4][5][6]. The latter, furthermore, can be manipulated through artificial quantum confinement in thin STO layers [7]. Whereas in stoichiometric LAO/STO the MES is localized within a few layers from the interface and is therefore purely 2D, oxygen deficiency of STO can extend the MES to a much larger depth of more than 1000 Å, resulting in its essentially 3D character [8][9][10]. 
Two phenomena bearing key importance for virtually all physical properties of LAO/STO are (1) polaronic nature of the charge carriers, where strong electron-phonon coupling to the LO3 phonon mode reduces their low-temperature mobility by a factor of about 2.5, and coupling to soft phonon modes dramatically reduces mobility with temperature [11,12], and (2) electronic phase separation (EPS) where the conducting MES puddles are embedded in the insulating host phase [12][13][14]. X-ray irradiation can dramatically change electronic and magnetic properties of oxide materials, which is typically connected with creation of oxygen vacancies (V O s) [14][15][16][17][18][19][20][21][22]. In a simplified picture of oxygen-deficient (OD) STO, each V O s releases two electrons from the neighbouring Ti atom. One of them joins the MES formed by delocalized quasiparticles, which are Ti t 2g derived, weakly correlated, non-magnetic and form large polarons [11]. The other electron stays near the Ti ion to form localized in-gap state (IGSs) at binding energy E B~-1.3 eV, which are Ti e g derived, strongly correlated, magnetic and are often viewed as small polarons. In a more elaborate picture, the electron distribution between the MES and IGSs depends on particular configurations of V O s [23] which tend to cluster [14]. The V O s have a dramatic effect on the transport properties of STO, with their concentration of only 0.03% transforming STO into metal [24]. This picture of dichotomic electron system in the bulk as well at the surfaces and interfaces of OD-STO can be described within the combination of density functional theory with explicit electron-correlation schemes, such as e.g. dynamical mean-field theory (DMFT) (see, for example, Refs. [21,[25][26][27][28][29]). Recently, it has been experimentally confirmed by resonant photoemission (ResPE) [22]. The coexistence of the two radically different MES and IGS electron subsystems hugely enriches the physics of the OD-STO systems compared to the stoichiometric ones, critically affecting the conductivity, magnetism and EPS. Here, we use soft-X-ray angle-resolved photoelectron spectroscopy (ARPES) to explore the dimensionality of the MES generated by X-ray irradiation of LAO/STO heterostructures grown in oxygen-deficient conditions and post-annealed (PA-LAO/STO). Our analysis focuses on variations of the experimental k F across the Brillouin zones (BZs), with the k F values determined from the gradient of bandwidth-integrated spectral intensity. Although the V O s generated by X-rays at our low temperature of 12K should be located in the top STO layer, we observe an unexpected 3D-ity of X-ray generated MES, indicating its large extension into the STO depth. This behaviour, identical to the X-ray irradiated bare STO, contrasts with the 2D behavior of the intrinsic MES in standard LAO/STO samples. Based on self-interaction-corrected DFT calculations, we discuss possible scenarios how the 3D-MES is formed. Sample preparation Our LAO/STO samples with a LAO layer of the critical thickness 4 u.c. on top of TiO 2 -terminated STO (100) were grown with Pulsed Laser Deposition. We followed a non-standard growth protocol, where the STO substrate was annealed at 500 o C in vacuum, the O 2 pressure during the LAO deposition at 720 o C reduced to 8×10 −5 mbar, and the post-growth annealing for 12 hrs performed ex-situ at a temperature of 500 o C [14,22]. 
Although such samples are finally (nearly) stoichiometric in terms of the chemical-element ratio, as evidenced by their photoelectron spectroscopy characterization discussed below, their transport properties [30] are quite different from the LAO/STO samples grown and annealed under the standard protocol [11]. Their difference most important in the context of our ARPES study is that under X-ray irradiation at temperatures below~100K the post-annealed samples readily built up V O s in STO. In the ARPES spectra acquired in parallel with the irradiation, this fact is evidenced by gradual development of the initially absent IGS peak in the band gap and Ti 3+ component of the Ti 2p core levels, which are characteristic signatures of the V O s (for detailed analysis of the time evolution see [14]). The exact mechanism of irradiation-induced creation of V O s at the LAO/STO interfaces and even bare STO surfaces is not yet quite understood. There are a few intriguing observations: (1) As unfolded below in this paper, the X-ray generated MES is 3D, suggesting its expansion over a significant depth into STO; (2) The X-ray generated V O s and corresponding MES can be observed only at low sample temperatures below 100K. Once created, they stay stable without further irradiation for at least tens of hours. Upon increase of temperature well above~100K, however, they quench and can not develop even at relatively high synchrotron-radiation power densities (see below). This fact excludes that the classical thermally-activated diffusion of oxygen atoms out of STO be involved in the generation of the V O s, because in this case the temperature dependence would be opposite. Instead, their creation and/or stabilization may in some way be linked to the cubic to tetragonal phase transition and concomitant creation of domains in STO below 105 K [31,32]. The electric field in STO due to the interfacial band bending, separating the photoexcited electrons and holes [33] as well as the repulsion of the mobile photoexcited electrons from the localized ones staying at the V O s [34] may also contribute to the stability of the X-ray generated V O s and MES; (3) The V O s can be quenched by exposure of the LAO/STO samples to X-ray irradiation in O 2 pressure of about 10 -7 mbar on a time scale of seconds, which is much faster than the generation of V O s on a time scale of tens of minutes. The high efficiency of this reaction can be explained by that X-ray irradiation cracks the physisorbed O 2 molecules into atomic oxygen [19,20]. We note that whereas for STO surfaces only absorption of O 2 molecules is sufficient to quench the V O s [17], in our case of the buried LAO/STO interface their X-ray cracking is necessary. This observation can be explained, tentatively, by that oxygen would effectively penetrate through the LAO layer only in its atomic form; (4) The V O -generation rate strongly depends on the sample growth protocol, including temperature and O 2 pressure during the substrate annealing and the LAO deposition, and in-situ vs ex-situ annealing. The LAO/STO samples grown under the standard protocol (ST-LAO/STO), including substrate annealing in O 2 , high O 2 pressure during the LAO deposition and in-situ annealing, are fairly immune to X-ray irradiation [11]. The susceptibility of the post-annealed samples to X-ray irradiation may be connected with certain amount of structural defects remaining in them after the post-annealing despite their complete or nearly complete stoichiometric recovery. 
At the moment, it is difficult to reconcile all these observations in one single mechanism of creation and stabilization of the V O s, and this is not a prime objective of this study. Experimental and theoretical methods Our SX-ARPES experiments used ResPE at the Ti L-edge in order to boost the signal from the Ti derived electron states at the buried LAO/STO interface. The measurements were performed at the soft-X-ray ARPES endstation [35] of the Advanced Resonant Spectroscopies (ADRESS) beamline [36] at the Swiss Light Source, Paul Scherrer Institute, Switzerland. A photon flux of about 10 13 photons/sec was focused in a spot size of 30 x 75 μm 2 on the sample surface, and the combined (beamline and analyzer) energy resolution (ΔE) was 50 meV full width at half maximum (FWHM). The sample was kept at 12 K in order to suppress smearing of the k-dispersive spectral fraction [37] and, most importantly, allow a build-up of the V O s under X-ray irradiation (see above). The X-ray absorption spectroscopy (XAS) data were measured in the total electron yield (TEY). All ARPES and XAS data presented below are acquired at saturation after more than 2 hrs of irradiation time. The XAS and angle-integrated photoemission data were measured with circularly polarized incident X-rays. The in-plane Fermi surface (FS) maps and ARPES band dispersion images presented below were measured with s-polarized X-rays, and the out-of-plane FS map with p-polarized X-rays. Electronic structure overview Fig. 1 (a) shows the Ti 2p 3/2 core-level spectrum of our PA-LAO/STO samples taken at hv = 1000 eV, which is decomposed into the Ti 4+ and Ti 3+ components (linear background removed). The latter is characteristic of the V O s. (b) shows XAS spectrum through the Ti L 3 -and L 2 -edges (excited from the 2p 3/2 and 2p 1/2 core levels, respectively). The pairs of salient XAS peaks at each edge are formed by the Ti 4+ e g and t 2g states. The corresponding map of ResPE intensity as a function of E B and excitation energy hv, identifying the Ti-derived electron states [39][40][41][42][43] is displayed in Fig. 1 (c). Above the broad valence band composed from the O 2p states in STO and LAO, the map shows the resonating broad IGS peak at E B ~-1.3 eV which is a hallmark of the V O s. Recent analysis of its resonant behavior [22] confirms that the IGS-subsystem is derived from e g states of the Ti 3+ ions pushed down in energy by strong electron correlations [25,27,28]. The narrow resonating peak at the Fermi energy (E F ) identifies the t 2g -derived MES. Its intensity maximum is shifted from the Ti 4+ t 2g to Ti 4+ e g peak in the XAS spectrum due to remnant k-conservation between the intermediate and final states coupled in the ResPE process [22]. We note that Ti 3+ e g and t 2g signals in the XAS spectrum in Fig. 1 (b), corresponding to the Ti atoms hosting the V O s, are almost invisible. At the same time, the IGS signal in the ResPE map (c), corresponding to the Ti 3+ e g states of these atoms, is profound. This observation shows that most V O s are located in vicinity of the top STO layer, because the width of this region is much smaller than the probing depth of the TEY-XAS measurements of the order of 50 Å, and comparable with that of SX-ARPES measurements of the order of 11 Å at our excitation energy [20]. The image (d) in Fig. 1 shows the ARPES intensity measured along the ГX direction with s-polarized incident X-rays, selecting predominantly the antisymmetric d xy -and d yz -derived electron states [6,11,22]. 
Furthermore, the choice of hv = 466.4 eV at the L 2 edge enhanced the d yz states in comparison with the d xy ones [22]. The ARPES data is overlaid with a sketch of the d xy and d yz dispersions. We note pronounced spectral intensity variations along the bands, caused by energy-and k-dependent photoemission matrix elements. Particularly remarkable is the spectral intensity asymmetry relative to the Г points, absent in VUV-ARPES on bare STO surfaces [18,44]. With s-polarized X-rays in our experimental geometry, this phenomenon is beyond the conventional dipole approximation for the photoemission process (because the angle between the photoemission direction and electric vector of the light is the same) and should be attributed to the photon momentum that is~0.24 Å -1 for our hv in the soft-X-ray energy range. We observe that for PA-LAO/STO the lowest d xy band is much smeared compared to STO, and manifests itselves as intensity hot spots where it intersects the d yz band and hybridizes with it because of the symmetry breaking caused by the tetragonal lattice distortion and spin-orbit coupling. In view of the location of these d xy states in the top TiO 2 layer, the observed smearing should be connected with a disorder induced in this layer by the LAO overlayer and X-ray generated V O s. As we will see below, the absence of high-order d xy bands in PA-LAO/STO is consistent with the steeper electrostatic band-bending potential V(z) compared to bare STO caused by the additional electron density injected by the V O s. We note that the band order and band dispersions observed in PA-LAO/STO (and STO) and ST-LAO/STO samples [6,11] demonstrate the same d xy <d xz/yz band order. However, the observed band filling in PA-LAO/STO (STO) is somewhat larger than in ST-LAO/STO. Furthermore, the polaronic-hump weight and thus renormalization of the effective mass m* in PA-LAO/STO (STO) reduce to~1.5 compared to~2.5 in ST-LAO/STO [11]. Importantly, as we will see below, these materials demonstrate different dimensionality of the MES. Finally, Fig. 1 in our experimental geometry [35]. The map in Fig. 2 clearly shows FS contours of the d xz -derived interfacial states whose intensity periodically blows up when k z hits the Г-points of the BZ. However, this pattern alone does not necessarily indicate a 3D-ity of these states, because 2D states formed by the out-of-plane d xz -orbitals confined in the interfacial QW would also produce periodic intensity oscillations. The only difference is that the out-of-plane dispersion of the 3D states will manifest itself as rounded FS contours in the (k x ,k z ) coordinates, and the absence of it for the 2D states as straight FS contours (for in-depth analysis of ARPES response of 2D states see [45]). In our case, relatively poor statistics and variations of the photoemission matrix element with hv and k x do not allow reliable discrimination between these 2D and 3D patterns of the k z -dependent ARPES intensity from the MES. As the acquisition of this dataset has already required more than 5 hrs, further increase of the acquisition time for these off-resonance measurements is less practical. In order to confirm the 3D-ity of the X-ray induced MES, we returned to the resonant hv = 466.4 eV delivering maximal intensity to the d yz -states, and investigated the variation of the apparent k F as a function of k // when moving to the next BZ. The measured ARPES image through two BZs is presented in Fig 4 (a). 
As in our case the occupied bandwidth W~50 meV is basically equal to the experimental ΔE, accurate determination of k F is not trivial. Our ARPES simulations, described in the Appendix, have demonstrated that whereas the conventional method to determine k F from the peaks of spectral intensity at E F (intensity method) stays accurate only as long as W is much larger than ΔE, the extremes of the gradient dI W /dk of the bandwidth-integrated spectral intensity (gradient method) yield accurate k F values even when W is of the order of or even smaller than ΔE. The experimental dI W /dk (in our case integrated over an energy window of 100 meV, with the excluded polaronic introducing a negligible correction) is shown in Fig. 3 (b). Relevant for our MES dimensionality analysis is the d yz -band, where the dI W /dk extremes marked k F 0 and k F 1 identify its apparent k F (for the k F -determination method see Appendix and have found in this reference system remarkably similar k F 0~ 0.40 Å -1 and k F 1~ 0.29 Å -1 . This difference of the apparent k F between the two BZs is actually a clear manifestation of the 3D character of the d yz band in both PA-LAO/STO and STO. This situation is sketched in Fig. 3 (c), showing the d yz -derived FS sheet measured in our experiment. In this case the k // -coordinate of k F indeed depends on k z , which decreases as k // increases into the next BZ. The effect of the relatively small change of k z between the two BZs on the apparent k F is amplified by the strong elongation of the FS sheets along the k x -axis, with lightand heavy-electron axes being~0.12 and 0.8 A -1 , respectively, as measured in the previous polarization-dependent studies on for PA-LAO/STO [14,22] and STO [17,18]. We note that our hv = 466. [44] on STO surfaces prepared by annealing in vacuum. In this case, the emergent MES can be related to the V O s only [14][15][16][17][18][19][20][21][22], without any intrinsic polar-discontinuity contribution. In line with our findings, the experimental k z -dispersions measured in an hv range below 100 eV have also demonstrated the 2D-ity of the Ti d xy states and 3D-ity of the d xz/yz ones, with the latter forming closed contours in the out-of-plane FS map [44]. An indirect confirmation of the 3D character of the X-ray generated MES is an observation that such MES has a higher mobility compared to the intrinsic one [33] that can be attributed to its extension from the interface into the defect-free STO bulk. Remarkably, the ST-samples show a qualitatively different pattern of the ARPES variations with k // . Fig. 4 (a) represents an ARPES image from the dataset of Ref. [11]. This dataset was acquired at the same We note that recent results by Soltani et al. [46] suggest that a 2D-MES may also be realized at bare STO surfaces. This has been identified from a splitting of the d xz/yz -derived states, suggesting their quantum confinement and thus 2D-ity. Obviously, the dimensionality of the MES is determined by the depth and extension of the surface/interfacial QW which, in turn, is sensitive to the stoichiometry and preparation of the samples, which was in-situ cleavage in the experiments by Soltani et al. [46] vs in-situ annealing in the experiments by Plumb et al. [44] Discussion Bulk and interfacial oxygen deficiency in STO The than ARPES such as magnetotransport [8] and conducting-tip atomic force microscopy of the interfacial cross-section [9]. It has been noted that their n s can be two-three orders of magnitude larger than n s = 0.5 e per u.c. 
area predicted by the electrostatic arguments. This fact alone necessitates the existence of an extended and thus 3D component of the MES in OD-samples, and it has indeed been observed that sufficiently high concentrations of V O s result in a dimensionality transformation of the MES from 2D to 3D [8][9][10]. Electron mobility in these systems could significantly exceed that of the paradigm 2DES in ST-LAO/STO [8,30,47]; this phenomenon could trace back to stronger electron screening of the polaronic interactions in LAO/STO by larger electron density [11]. However, the V O distribution profile in these OD-samples would extend into STO by a few thousands of Å and more [48], also affected by oxygen trapping and diffusion at domain walls (see, for example, Refs. [49][50][51]). In contrast to these initially-OD samples, the post-annealing of our samples in oxygen, albeit ex-situ, ensures that before the X-ray irradiation they were at most stoichiometric. As discussed above, this fact is evidenced by the initial absence of the IGS peak and Ti 3+ core-level component in the ARPES spectra. The X-ray generated V O s should then locate mostly in the top layer of STO because, while easy out-diffusion of oxygen atoms from STO through the LAO overlayer can be explained by typically relaxed crystallinity of the latter [52], the out-diffusion from deeper layers of STO should be practically prohibited due to vanishing diffusion coefficient of oxygen at our low sample temperature. Our case should therefore be different from creation of V O s by deposition of thin metal films on STO at higher temperature where the V O s could be distributed over a depth of~1 nm [53,54]. This top-layer location of the V O s in our case is consistent with the above observation that the Ti 3+ spectral weight in the TEY-XAS spectra in Fig. 1 (a), which probe a significant depth into STO, is much smaller than in the core-level photoemission spectra in (b), which probe a much narrower vicinity of the interface. Extended-QW scenario In view of the 2D-ity of the X-ray generated V O s, the observed 3D-MES in our PA-LAO/STO samples seems puzzling. Which particular kind of the electron states and V(z) shape is needed to realize such 3D-MES? We will start from the well-known picture of the band bending at the ST-LAO/STO interface [4,6,48] sketched in Fig. 3 (d). The in-plane d xy orbitals from each TiO 2 plane form a sequence of the d xy electron states localized in the out-of-plane direction, whose energies climb V(z) to cross E F at a finite distance from the interface. The out-of-plane d xz/yz orbitals from each TiO 2 plane hybridize into electron states delocalized in the out-of-plane direction, which quantize in V(z) to form a ladder of the the d xz/yz states (where normally only the lowest one is occupied). With no occupied t 2g states propagating into the STO bulk, the MES confined in the interfacial V(z) is purely 2D. One possibility to realize the 3D behaviour (or rather quasi-3D in this particular context) is to increase the depth extension of V(z), which we will call the extended-QW scenario. In this case the 3D states will form as the convergence limit of a ladder of 2D states, quantized in the long-range confining potential, as the extension of this potential increases. Such a 2D-to 3D-dimensionality transformation has previously been demonstrated, for example, for multilayer graphene [55] and anatase TiO 2 [56]. 
A similar analysis for the bare STO surface [57] has suggested that whereas a confining potential with an extension of a few Bulk-metallicity scenario Another possibility to realize 3D-MES would be that under X-ray irradiation the whole STO depth within the light absorption depth becomes metallic, and the depth extension of V(z), other way around, shrinks to enable the ARPES experiment directly access the bulk electronic structure. This bulk-metallicity scenario is sketched in Fig. 3 (d). In our case such short-range V(z) seems more relevant than the long-range one in the previous scenario, because the top-layer V O s release additional MES electrons, as manifested by the observed increase of both k F and lateral conducting fraction [14] under X-ray irradiation. The increased electron density will then more effectively screen the interfacial-potential discontinuity, reducing the depth over which V(z) saturates to below the probing depth in our ARPES experiment. If upon further progression into the STO bulk the CBM stays below E F , the flat V(z) potential in this region forms the 3D-MES. This scenario explains the 3D-ity of the d xz/yz states observed by the ARPES experiment in PA-LAO/STO. Due to the flat potential, in the STO bulk the d xy states degenerate with the d xz/yz . Importantly, the bulk metallicity in STO required for such a scenario can only be realized if the X-ray irradiation causes some effective doping throughout the STO bulk. As we discuss below, such metallicity is essentially the well-known (albeit poorly understood) photoconductivity observed in STO-based systems. The scenario of short-range V(z) combined with the bulk metallicity naturally extends from PA-LAO/STO to bare STO surfaces; the 3D-MES in that case can not be explained only by the V O s [19,20] or any atomic rearrangements [44] in the top STO layer. One possibility for the bulk metallicity of PA-LAO/STO might be that already before the X-ray irradiation the deeper STO layers contained a minute amount of V O s below the sensitivity limit of our ARPES and XAS experiments. Fig. 5 The bulk metallicity in STO suggested by this scenario goes in line with the well-known giant photoconductivity of STO-based systems (see, for example, Refs. [32,58][add: Persistent Photoconductivity in 2D electron gases at different oxide interfaces]) that may exceed that in semiconductors by few orders of magnitude [59]. This phenomenon strongly depends on the sample preparation, with the defects including the V O s as well as the cubic-tetragonal phase transition [32] and lattice relaxation [34] playing the main role, and for some LAO/STO [34,60] and OD-STO [59] Summary We used soft-X-ray ARPES to investigate electronic structure of LAO/STO samples grown at low oxygen pressure and post-annealed ex-situ till their (nearly) complete stoichiometric recovery. Under X-ray irradiation at low temperatures below~100K, the ARPES spectra of these samples show a rapid Appendix: Gradient vs intensity method of the k F determination A crucial element of our further analysis of the ARPES data is accurate determination of the MES' k F . In fact, its signatures in the ARPES intensity can be strongly distorted not only by sharp variations of the photoemission matrix element but also by broadening of the experimental spectra due incoherent electron scattering (including defects) and limited experimental resolution in k and E b . 
Conventionally, one determines from the maxima of the momentum-distribution curve of the spectral intensity at E F (Fermi intensity I F ). However, Straub et al [60] have shown that the applicability of this so-called intensity method is restricted to wide-band systems, where the occupied quasiparticle bandwidth W much exceeds ΔE the experimental resolution, and that in the general case, including narrow-band systems, extremes of the gradient dI W /dk of the spectral intensity integrated through the whole W resolve k F with much higher accuracy. In fact, this so-called gradient method is based on the theoretical many-body definition of the Fermi surface (FS) as restricted by the extremal gradients of the momentum distribution function n(k) [60]. To identify the optimal k F -determination method in our case, we performed ARPES simulations for a 2D (or 2D cross-section of 3D) parabolic band whose parameters W = 50 meV and k F = 0.4 Å -1 closely resembled the d yz band along k x . With the electron lifetime broadening being negligible close to E F , the corresponding spectral function A(E,k) was modelled as the δ-function centered at the band's dispersion E(k) and terminated at E F . Its ARPES response was simulated by Gaussian convolution with FWHM = 50 meV in the E-direction describing the experimental ΔE, and 0.1 Å -1 in the k-direction describing the combined effect of disorder and experimental k-resolution. The results of these simulations, corresponding to the case W = ΔE, are presented in Fig. A1 (b) as the ARPES intensity with the corresponding I F (k) and dI W /dk curves on top. Whereas the intensity method to determine k F from the peaks of I F underestimates its value as much as 14%, the extremes of the gradient dI W /dk yield its value with a remarkable precision of~0.25%. Important for the gradient-method accuracy is that the spectral intensity is integrated over the whole bandwidth, because the gradient dI F /dk of the Fermi intensity only, also shown in Fig. A1 (b), overestimates the actual k F by 2.5%. Therefore, our ARPES simulations clearly demonstrate that the gradient method to determine k F is optimal in our case. Our ARPES simulations extended to smaller and larger ΔE/W values are shown in Fig. A1 (a) and (c), respectively. Obviously, the intensity method improves its accuracy towards small ΔE, whereas the I F gradient does so towards large ΔE compared to W. This trend calculated over a wide range of ΔE/W is presented in the panel (d) which shows the relative deviation of the I F (k), dI W /dk and dI F /dk extremes from the true k F . A reduction of the spectral k-broadening will increase the accuracy of the intensity and I F -gradient methods, in particular of the latter. However, the dI W /dk gradient method stays remarkably accurate through the whole range of ΔE compared to W, with a negligible rms deviation of only 0.3% even with the relatively large k-broadening used in our simulations. The gradient method to determine k F can therefore be deemed the most universal. An added advantage of this method is that the narrower width of the dI W /dk peaks compared to I F (k) makes it less susceptible to the photoemission matrix-element variations with k.
2019-09-09T19:03:27.000Z
2019-09-09T00:00:00.000
{ "year": 2019, "sha1": "62a4d1c94d86d787aee7d3817b13b3020c9bbd60", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "62a4d1c94d86d787aee7d3817b13b3020c9bbd60", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
118070176
pes2o/s2orc
v3-fos-license
The diamond rule for multi-loop Feynman diagrams An important aspect of improving perturbative predictions in high energy physics is efficiently reducing dimensionally regularised Feynman integrals through integration by parts (IBP) relations. The well-known triangle rule has been used to achieve simple reduction schemes. In this work we introduce an extensible, multi-loop version of the triangle rule, which we refer to as the diamond rule. Such a structure appears frequently in higher-loop calculations. We derive an explicit solution for the recursion, which prevents spurious poles in intermediate steps of the computations. Applications for massless propagator type diagrams at three, four, and five loops are discussed. Introduction Reducing complexities of Feynman integrals through integration by parts (IBP) relations [1,2] is an important component of modern multi-loop calculations.Finding more efficient reduction methods allows the computation of higher order terms in perturbative expansions which in turn aids in providing a better quantitative understanding of ongoing experiments.Since the 1980s, the so-called triangle rule [1,2] has been used for removing a propagator line from diagrams corresponding to a certain class of integrals.Any topology that has the following substructure can be simplified using the triangle rule: where D is the dimension which is set to 4 − 2 [3,4], and b, c 1 , c 2 are positive integers.The diagram corresponding to this integral is shown in Fig. 1.We write out the IBP relation ∂ ∂kµ k µ F = 0 (where the derivative must be performed before the integration) to obtain where A + i , B − , and C − i are operators acting on an integral that increase the power a i by one, decrease the power b by one, and decrease the power c i by one, respectively.Numerators that are expressed in dot products of k and an external line, contribute as a constant N to the rule.The rule of the triangle can be recursively applied to remove one of the propagators associated with k, p 1 , or p 2 from the system. The recursion in the triangle rule can be explicitly 'solved' [5], such that the solution is expressed as a linear combination of integrals for which either b, c 1 , or c 2 is 0. The advantage of the summed system over the recursion is that it generates fewer intermediate terms and it cannot have spurious poles: terms in which the factor D + N − a 1 − a 2 − 2b becomes proportional to more than once during the full recursion. In this work we introduce a more general class of diagrams that can be reduced using an extension of the triangle rule.We call this rule the diamond rule.We will show that the diamond rule 1) can be extended to any number of loops, 2) allows for a complete set of irreducible numerators that only contributes as a constant and 3) can be explicitly 'solved' to prevent spurious poles. It should be noted that throughout this paper we mean by reduction the removal of a single line.This does not necessarily mean that the remaining diagram(s) will be trivial.An example of this is the four-loop ladder diagram, which can be reduced twice with the rule of the triangle, after which one of the remaining diagrams involves a master integral and needs a complete reduction scheme of 14 steps in which all numerators are removed and the power of all denominators is lowered to one successively.In Section 2 the diamond rule is derived.Section 3 shows the explicit summation formula and Section 4 shows examples.Finally, Section 5 gives a conclusion and discussion. 
Diamond rule Consider the following family of Feynman integrals in D-dimensions arising from the (L + S)-loop diagram in Fig. 2: The diagram consists of (L + 1) paths from the top vertex T to the bottom vertex B with an external connection in between, and S lines without external connections.The upper, lower, and external lines of the diamond are represented by red with dashed lines, green with double lines, and blue with thick lines, respectively.The lines without external connections, we call spectator lines.In principle any pair of spectator lines can be seen as a two point function which can be reduced to a single line by integration.This line would then have a power that is not an integer.Depending on the complete framework of the reductions this may or may not be desirable.Hence we leave the number of spectators arbitrary.In any case, the contribution of the spectators is a constant (see below), which allows us to characterise integrals in the family only by 2(L + 1) indices a i and b i , and not by s i .Without loss of generality, we assign loop momenta k i to the lower lines of the diamond as well as l i to the spectator lines, except the last diamond line which is fixed by momentum conservation: In contrast, we do not require any constraints on the momentum conservation at the top vertex in the arguments below, hence any number of external lines can be attached to this point.In the middle of the diamond, external lines with momentum p i are attached by three-point vertices.The upper lines in the diamond may have masses m i , whereas the lower lines in the diamond and the spectator lines have to be massless.In addition, we allow arbitrary tensor structures of k i and l i with homogeneous degrees N i and R i , respectively, in the numerator. Constructing the IBP identity corresponding to the operator straightforwardly gives the following operator identity: Here A + i and B − i are understood as operators increasing a i and decreasing b i by one, respectively, when acting on F ({a i }, {b i }).Note that operators changing the spectator indices s i are absent in the identity. For a typical usage of Eq. ( 6), one may identify a diamond structure as a subgraph in a larger graph.If the line with the momentum p i has the same mass m i as the corresponding upper line, the term (p 2 i + m 2 i ) in the identity reads as an operator C − i decreasing the corresponding index c i of the power of the propagator (p 2 i + m 2 i ) −ci in the larger graph by one.Applying the rule where The above diamond rule contains the conventional triangle rule as a special case.For the one-loop case L = 1 and S = 0, the two lower lines may be identified as a single line and the triangle integral in Eq. ( 1) can be reproduced.Correspondingly, the IBP identity (7) becomes Eq.(2). Summation rule We now derive an explicit summation formula for the recursion in the diamond rule.First, we consider the possible connectivities.If we allow for some external lines to be directly connected to each other, we get at least one triangle that can be used for the triangle rule: suppose the external momenta of k i and of k j are connected and identified with p ij , then this triangle is k i , k j , p ij .In this case, the triangle rule generates fewer terms and is preferred to the diamond rule.Thus, we only consider the case where the diamond does not have direct connections of external lines. We follow the same procedure as outlined in [5].First, we rewrite Eq. 
( 7) as: where E is the operator (L + S)D We identify the first class with the label +, since it increases E by 1, and the latter with the label −, since it decreases E by 1.The remaining part of the derivation is analogous to the one in [5]. Finally, we obtain the explicit summation formula: where Because the Pochhammer symbol that depends on E only appears once in each term, powers of 1/ 2 or higher cannot occur.Thus, the explicit summation formula for the diamond rule does not have spurious poles. Examples Several examples of diamond structures are displayed in Fig. 3.The role of each line in the diamond rule is highlighted by different colors and shapes.Red dashed lines, green double lines, and blue thick lines represent upper, lower, and external lines of the diamond, respectively.Label T represents the top vertex, and B the bottom vertex.In Fig. 3a a four-loop diagram is displayed.For this diagram, the line of either p 5 , p 6 , p 7 , p 8 , p 9 , or p 10 can be removed by recursive use of the diamond rule or by the explicit formula given in the previous section.The irreducible numerators of this diagram are selected as Q•p 8 , Q•p 10 , p 5 •p 10 , and p 5 •p 7 , such that they adhere to the tensorial structure in the diamond rule.The last numerator, p 5 •p 7 , lies outside of the diamond and does not interfere with the rule. If, in this figure, we draw an additional line from the top (T) to the bottom (B) vertex, we obtain the simplest nontrivial propagator topology with a spectator line.As a five-loop diagram it is unique. In Fig. 3b the three-loop master topology NO is displayed.Q•p 5 is chosen as irreducible numerator.One of the lines attached to the diamond is actually an off-shell external line.In general, if the line with momentum p L+1 is one of the external momenta of the larger graph, the factor (p constant with respect to the loop integration and has no role for reducing the complexity of the integral.As a result, the rule ( 7) is not applicable to remove one of the internal lines.Even for such cases, one can still find a useful rule by shifting a L+1 → a L+1 − 1: which decreases at least a L+1 or b L+1 by one.Repeated use of this rule from positive integer a L+1 and b L+1 reduces a L+1 or b L+1 to 1.For the NO topology, this variant yields the rule to reduce the line p 1 to 1 in Mincer [6,7]. 2n Fig. 3c we show two five-loop topologies for which the diamond rule can eliminate one line.The first diagram is unique in the sense that it is the simplest diagram for which L = 3, S = 0.The second diagram is a typical representative of the 29 five-loop topologies with L = 2, S = 0 and all three p-momenta of the diamond internal. Conclusion and discussion We have indicated an extensible, multi-loop topology substructure that can be reduced efficiently.We call the corresponding reduction formula the diamond rule.Additionally, we have derived an explicit summation formula for the recursion in the diamond rule, which avoids spurious poles. For parametric reduction applications such as the Mincer program, an implementation of the diamond rule would be faster than automatically generated reduction rules.This is already the case with the summed triangle as it is used in the Mincer program.It allows the program to avoid spurious poles altogether and hence it can run at a fixed precision in powers of .It is currently not clear whether Laporta approaches [8] as used in systems such as AIR [9], Reduze [10,11], and FIRE [12,13,14] benefit from applications of the diamond rule. 
It is important to note that the tensorial structure for the irreducible numerators should be adhered to.If one chooses numerators of the form (p i − p j ) 2 instead of dot products p i •p j , extra terms are introduced to the diamond rule.It is unclear to us whether the performance gain of using the rule is greater than the cost of rewriting the numerators to dot products in software such as LiteRed [15,16].In the Mincer approach with its dot products in the numerators, the diamond rule fits in perfectly. Neither at the four-loop level, nor at the five-loop level we have found structures that can be reduced by a single IBP identity, apart from the diamond rule. Figure 1 : Figure 1: A triangle subtopology where the loop momentum k is assigned to the central line.p 1 and p 2 are external momenta.a 1 , a 2 , b, c 1 , and c 2 represent the powers of their associated propagators. Figure 2 : Figure 2: (L + S)-loop diamond-shaped diagram.(L + 1)-lines have external connections and S-lines do not.Red with dashed lines, green with double lines, and blue with thick lines represent upper, lower, and external lines of the diamond, respectively.Label T represents the top vertex, and B the bottom vertex.k i , p i , and l i are momenta, and a i , b i , and s i are the powers of their associated propagators. L+1i=1 (b i +c i ) of integrals appearing in the right-hand side, at the cost of increasing L+1 i=1 a i .Starting from positive integer indices b i and c i , one can repeatedly use the rule until one of either b i or c i is reduced to zero. 1 and (a) b is the rising Pochhammer symbol Γ(a + b)/Γ(a).The first term decreases the power b r to 0, and the second term decreases c r to 0. The only significant difference between the two terms is the +1 in the Pochhammer symbol. Figure 3 : Figure3: Two topologies with highlighted diamond structures.Red with dashed lines, green with double lines, and blue with thick lines represent upper, lower, external lines of the diamond, respectively.Label T represents the top vertex, and B the bottom vertex.(a) shows a four-loop topology which can be completely reduced.(b) shows the three-loop NO master topology, for which a modified form of the diamond rule can be applied to lower the power of line p 1 to 1. (c) shows five-loop topologies, which the diamond rule can be applied to.
2015-04-30T14:54:26.000Z
2015-04-30T00:00:00.000
{ "year": 2015, "sha1": "33abcf20f74cd133f8cb0978184417c1276878f4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2015.05.015", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "33abcf20f74cd133f8cb0978184417c1276878f4", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
231954093
pes2o/s2orc
v3-fos-license
Exploring the Pivotal Neurophysiologic and Therapeutic Potentials of Vitamin C in Glioma Gliomas represent solely primary brain cancers of glial cell or neuroepithelial origin. Gliomas are still the most lethal human cancers despite modern innovations in both diagnostic techniques as well as therapeutic regimes. Gliomas have the lowest overall survival rate compared to other cancers 5 years after definitive diagnosis. The dietary intake of vitamin C has protective effect on glioma risk. Vitamin C is an essential compound that plays a vital role in the regulation of lysyl and prolyl hydroxylase activity. Neurons store high levels of vitamin C via sodium dependent-vitamin C transporters (SVCTs) to protect them from oxidative ischemia-reperfusion injury. Vitamin C is a water-soluble enzyme, typically seen as a powerful antioxidant in plants as well as animals. The key function of vitamin C is the inhibition of redox imbalance from reactive oxygen species produced via the stimulation of glutamate receptors. Gliomas absorb vitamin C primarily via its oxidized dehydroascorbate form by means of GLUT 1, 3, and 4 and its reduced form, ascorbate, by SVCT2. Vitamin C is able to preserve prosthetic metal ions like Fe2+ and Cu+ in their reduced forms in several enzymatic reactions as well as scavenge free radicals in order to safeguard tissues from oxidative damage. Therapeutic concentrations of vitamin C are able to trigger H2O2 generation in glioma. High-dose combination of vitamin C and radiation has a much more profound cytotoxic effect on primary glioblastoma multiforme cells compared to normal astrocytes. Control trials are needed to validate the use of vitamin C and standardization of the doses of vitamin C in the treatment of patients with glioma. Vitamin C, also referred to as L-ascorbic acid/L-ascorbate, is an essential compound that plays a vital role in the regulation of lysyl and prolyl hydroxylase activity [5].Vitamin C is a general term that describes its oxidized dehydroascorbate (DHA) and its reduced forms (ascorbate) [6].Vitamin C is a water-soluble enzyme, typically seen as a powerful antioxidant in plants as well as animals [7].Vitamin C was capable of preserving prosthetic metal ions like Fe 2+ and Cu + in their reduced forms in several enzymatic reactions as well as scavenge free radicals in order to safeguard tissues from oxidative damage [8,9].Also, vitamin C participates in numerous intracellular as well as extracellular biological processes to efficiently scavenge free radicals [7,10]. Vitamin C intake as a dietary antioxidant was capable of augmenting growth restriction of cancer cells in general and glioma cells to be specific [11,12].Studies have shown that vitamin C was capable of inhibiting cancer via mechanism, such as the argumentation of stromal integrity of normal tissue, activating lymphocytes to a greater level of immunocompetence, stimulating "auspicious modification in the steroid environment," blocking hyaluronidase activity in malignant cells, augmenting antiviral activity, and interfering with the metabolism of malignant cells [13][14][15][16]. 
Studies have demonstrated that vitamin C was selectively concentrated in tumors and may form cytotoxic quantities of hydrogen peroxide (H 2 O 2 ) within the tumor as a byproduct of oxidation [13,15,16].Vitamin C can act as a prodrug to deliver a substantial influx of H 2 O 2 to tumors after intravenous (IV) administration [17,18].A study established that H 2 O 2 was the key mediating factor in cytotoxicity to cancer cells through IV vitamin C [17,19].Vitamin C stimulated intracellular oxidation as well as energy generation resulting in total therapeutic potential.Also, vitamin C stimulated of activities like apoptosis and necrosis [17,20]. Studies have shown that vitamin C precisely eradicated a sizable quantities of cancer cells when plasma concentrations reach 1 mM or more [21][22][23].Another study revealed that vitamin C was capable of decreasing the adverse reactions triggered by chemotherapy during the treatment of cancer in patients [24,25].us, this explicit review explores the pivotal neurophysiologic and therapeutic potentials of vitamin C in glioma.e "Boolean logic" was used to search for article role of vitamin C in glioma.Most of the articles were indexed in PubMed and/or PMC with strict inclusion criteria being the neurophysiologic and therapeutic potentials of vitamin C in glioma.e search terms on PubMed and/or PMC were vitamin C and/or L-ascorbic acid and/or L-ascorbate and glioma. Vitamin C Levels in the Blood, Cerebrospinal Fluid, and Brain. e brain, spinal cord, and adrenal glands had the highest vitamin C levels of all the tissues in the body as well as the highest retention capacity of vitamin C [26,27].Studies have shown that brain tissue concentration of vitamin C was regionally dependent.Higher concentrations were detected in anterior regions like the cerebral cortex as well as hippocampus, with gradually lower concentrations in more posterior regions like the brainstem as well as spinal cord [28,29].Generally, brain tissue vitamin C levels were several millimolars (mM) with the average concentration in neurons likely to be 10 mM and merely 1 mM in glia [28][29][30].Under normal circumstances, turnover of vitamin C in brain is approximately 2% per hour [26,31]. Molecules with low molecular weight as well as passable hydrophilic/hydrophobic balance are allowed to penetrate the central nervous system (CNS) [32,33].It was established that endothelial cells of the brain capillaries, which form the blood-brain barrier (BBB), possess selective transport systems for particular nutrients as well as endogenous biomolecules besides unspecific permeation [32].us, they are conscientious for the transport of glucose, neutral, acidic, and basic amino acids like alanine and taurine, monocarboxylic acids, amines, and neuromediators like choline, vitamins, and nucleosides, as well as the peptide transport system for small neurotropic peptides [32,34,35]. 
e epithelium of the choroid plexus, which is a restricted part of the BBB, is implicated for the maintenance of CNS homeostasis for vitamin C [32,36].IV administration of vitamin C revealed that vitamin C reached the CSF via the choroid plexus and then gradually penetrates the brain substance from the CSF (Figure 1) [32,37].Vitamin C enters the CNS principally via active transport at the choroid plexus (Figure 1).Vitamin C concentration is modulated homeostatically after it diffuses from cerebrospinal fluid (CSF) to brain extracellular fluid (ECF) [32].Vitamin C was capable of entering the ECF via carrier-mediated uptake and via simple diffusion across brain capillaries at the BBB [26,38].Extracellular vitamin C levels are also vigorously regulated via glutamate-mediated activity through glutamate-vitamin C heteroexchange (Figure 1) [26]. It was established that vitamin C from ECF was taken up into brain cells, where its levels augmented up to 20-fold [26].It was further revealed that, in some neurons, vitamin C levels were up to 200-fold higher than the levels in the bloodstream [26,32].Vitamin C is transferred from the blood, where its levels are about 50 μM into the CFS where its levels are maintained at 200 μM via specific physiological mechanisms (Figure 1) [32,39].Vitamin C uptake from the blood into CSF involves active stereospecific Na + -dependent transport at the choroid plexus (Figure 1) [26,40].Furthermore, vitamin C serves as a cofactor in several enzymatic activities associated with the processing of neurotransmitters as well as an antioxidant offering neuroprotection within the CNS [32]. Tsukaguchi et al. indicated that the reduced form of vitamin C is absorbed via a mechanism that involves sodium-dependent vitamin C transporters 2 (SVCT2) [8].SVCT2 RNA was identified in the epithelium of the choroid plexus [7].Precisely, the neuroepithelial cells of the choroid plexus as well as the retinal pigmented epithelium secreted SVCT2 transporter.It was established that SVCT1 as well as SVCT2 each mediate concentrative, high-affinity vitamin C transport that was stereospecific and was driven by the Na + electrochemical gradient (Figure 1) [8].Higher levels of Na +dependent vitamin transporters such as SVCT1 and SVCT2 were detected in the choroid plexus but not in brain capillaries (Figure 1) [8,26]. In situ hybridization in the rat brain revealed that SVCT2 was more concentrated in neurons than glial cells, which was coherent with higher concentrations of vitamin C in neurons than glia [8,26].us, neuronal cells take up vitamin C, because these cells secrete SVCT2 [41].It was affirmed that SVCT2 was present in both glutamatergic as well as GABAergic neurons, including glutamatergic pyramidal cells of the hippocampus, glutamatergic granule cells of the cerebellum, and GABAergic cerebellar Purkinje cells (Figure 1) [8].It was affirmed that astrocytes were capable of 2 Journal of Oncology removing glutamate from the synaptic cleft when synapses are glutamatergically active [8]. Several studies have demonstrated that glutamate transport in these cells was capable of activating glucose transport, to stimulate glycolysis with lactate and vitamin C release (Figure 1) [42,43].Castro et al. demonstrated that intracellular vitamin C blocked glucose transport via direct or indirect blockade of GLUT3 and triggered lactate uptake (Figure 1) [44].us, when GLUT3 was downregulated, glucose utilization was not inhibited by vitamin C and lactate transport was not stimulated (Figure 1) [42]. 
Function of Vitamin C in the Normal Brain.Brain vitamin C concentrations are gender dependent, with lower estrogen-modulated levels in the female brain than in the male brain [26,45].Neurons store high levels of vitamin C via SVCTs to protect them from oxidative ischemia-reperfusion injury [46,47].Studies have demonstrated that the key function of vitamin C was the inhibition of redox imbalance from reactive oxygen species (ROS) produced via the stimulation of glutamate receptors (Figure 2) [48,49].Studies have further exhibited that vitamin C was capable of buffering glutamate-generated ROS and inhibited succeeding cell death in cultured neurons [50,51]. Studies have demonstrated that vitamin C serves as a neuromodulator for both dopamine-and glutamate-mediated neurotransmission besides its functions as an antioxidant in the CNS [52,53].It was further established that the main localization of vitamin C in neurons was coherent with such neuromodulatory functions [52,53].Furthermore, vitamin C was implicated as a fundamental cofactor for noradrenaline synthesis [26,54].Also, vitamin C was essential for the secretion of noradrenaline as well as acetylcholine from synaptic vesicles [26,55].In addition, vitamin C was a crucial cofactor in the synthesis of numerous neuropeptides [26,56].Moreover, at physiological concentrations, vitamin C augmented the secretion of theses neuropeptides [26,57]. e accumulate of vitamin C in the basal lamina triggered myelin formation by Schwann cells [26,58].Studies have shown that variations in vitamin C concentrations were associated with brain activity as well as brain energetics [59,60].Also, vitamin C servers as a metabolic switch in brain, modulating glucose consumption in neuronal cells via the blockade of neuronal GLUT3 [42].Owens and Bunge established that vitamin C was a fundamental in the enhancement of axonal ensheathment in Schwann cell-neuronal coculture [61].ey further revealed that vitamin C was crucial for periphery nervous system myelinogenesis because it was capable of stimulating the P0 protein gene in cultured Schwann cells [61]. Studies have shown that GLUTs mediate the facilitative transport of the DHA form of vitamin C [6,71,72].Also, studies have demonstrated that ascorbate, the reduced form of vitamin C, is transported by SVCT [6,8,73,74].Gliomas absorb vitamin C primarily via its oxidized form (DHA) by means of GLUT 1, 3, and 4 and its reduced form, ascorbate, by SVCT2 (Figure 2) [6].Nevertheless, it was established that SVCT2 had modest capacity in gliomas [75].DHA was reduced to vitamin C via the GSH-consumption enzyme DHA reductase (DHAR) once it gets into the cells (Figure 2 Journal of Oncology [5,79].Prasad demonstrated that sodium ascorbate triggered a cytotoxic stimulus on normal brain cells in culture [15].Benade et al. established that the toxicity of ascorbate was as a result of low catalase levels in tumor cells [80]. Vitamin C was able to inhibit DNA damage and the deterioration of subcellular structures like proteins, lipids, and DNA by scavenging of ROS (Figure 2) [5].Studies have demonstrated that therapeutic concentrations of vitamin C triggered H 2 O 2 generation in solid tumors (Figure 2) [22,81].Furthermore, studies have demonstrated that H 2 O 2 diffuses into cancer cells and overpowers their antioxidant defense system via the depletion of glutathione levels [22,23].Espey et al. 
established that vitamin C stimulated generation of extracellular H 2 O 2 was only partly accountable for cell death [82].Peterkofsky and Prather indicated that H 2 O 2 was either formed intracellularly and excreted in the medium or formed at the cell surface on culture medium [83]. It was further established that H 2 O 2 was not detectable in growth medium containing Na + -ascorbate alone [83].Studies have demonstrated that H 2 O 2 was capable of triggering lipid peroxidation, which resulted in cell death [84][85][86].It is also revealed that Na + -ascorbate was capable of triggering the formation of DHA exogenously or intracellularly or both.Also, sodium ascorbate was capable of blocking catalase activity in vitro [87].Furthermore, the blockade of catalase was capable stimulating the accumulation of H 2 O 2 in tumor cells resulting in cell death [87]. ese esters can easily cross the BBB because of their lipophilic nature [90].e breakdown products of ascorbyl stearate are ascorbate and stearic acids, which are nontoxic to biological system [88,89].Studies have indicated that ascorbyl esters, such as ascorbyl stearate (Asc-S) and ascorbyl palmitate, block the proliferation of mouse as well as human glioma cells [91,92].Furthermore, studies have demonstrated that Asc-S as well as ascorbyl palmitate suppressed the growth of murine (G-26) as well as human glioma (U-373) cells [91,92].Makino et al. established that Asc-S, a lipophilic derivative of vitamin C is a potent inhibitor of cell proliferation as compared to vitamin C [93]. Studies have demonstrated that human gliomas are capable of secreting insulin-like growth factor-(IGF-) I as well as IGF-II.It was further established that IGFs autocrine receptor was capable of stimulating glioma cell growth [94,95].Naidu et al. established that Asc-S was capable of modulating of secretion of IGF-IR as well as triggering of apoptosis in T98G cells (Figure 2) [96].ey revealed that Asc-S inhibited the growth of human GBM T98G cells via the arrest of cells at late S/G2-M phase of cell cycle as well as trigger cell death via apoptosis (Figure 2) [96].ey further indicated that Na + -ascorbate was capable of blocking the growth of T98G cells with an IC50 of 6.0 mM [96].Nevertheless, Asc-S was about 68-fold more potent than Na +ascorbate with an IC50 value of 88.5 μM [93]. It was also established that administration of Asc-S led to a substantial augmentation in the proportion of cells in late S/G2-M phase of cell cycle in comparison with untreated control cells [96].Also, DHA was capable of modulating cell cycle progression as well as trigger cell cycle arrest at G2/M DNA damage checkpoints during oxidative stress (Figure 2) [97].Furthermore, Asc-S stimulated cell cycle arrest at late S/ G2-M phase checkpoints was capable of blocking of cell proliferation as well as apoptosis [96]. us, vitamin C derivatives interfere with cell cycle progression [96].Ryszawy et al. demonstrated that Na + -ascorbate was capable of triggering significant impairment of GBM cell viability as well as invasiveness [5]. 
Also, the blockade influence of Na + -ascorbate on GBM cell motility resulted in heterogeneous viability-associated cell responses [5].Moreover, a rapid necrotic-like death was detected in a proportion of cells with Na + -ascorbate, which resulted in cell swelling, membrane break, and their release from cytoplasm [5].Furthermore, "autoschizis"-associated violent cell responses to elevated Na + -ascorbate doses substituted apoptosis in "hypersensitive" GBM cells [5]. is cell death mechanism was a self-excision of cytoplasm and was detected only in the coexistence of vitamin C and menadione [98,99]. Vitamin C and Glioma Angiogenesis. Angiogenesis is a normal physiological activity, obligatory for normal tissue repair as well as growth [100].Nevertheless, angiogenesis is depicted by the assiduous proliferation of endothelial cells as well as blood vessel formation in pathological situations [100].us, angiogenesis is very critical in tumor growth, invasion, and metastasis [100].Studies have implicated the association of circulating endothelial precursor cells (EPCs) to pathologic angiogenesis [101][102][103].Several studies have demonstrated that nitric oxide (NO) was associated with tumor angiogenesis [104][105][106]. Dulak et al. demonstrated that NO was capable of modulating for the secretion of endogenous angiogenic factors like vascular endothelial growth factor (VEGF) as well as basic fibroblast growth factor (bFGF) [107].Studies established that tumors that produced NO persistently had significantly supplementary vascular network and were more invasive [108,109].us, angiogenesis is determined by the level of NO, which also influence migration as well as precise motivity of the endothelial cells [100,110]. Telang et al. analyzed the effect of vitamin C on tumor development in animals after dietary consumption of low levels [111].Peyman et al. established that the total number of blood vessels were decreased in vitamin C depleted tumors compared to the totally supplemented animals.Contrariwise, high levels of vitamin C administered to cauterized corneas suppressed angiogenesis in a rat prototype [112].Mikirova et al. evaluated the effect of high levels of vitamin C (100 mg/dl-300 mg/dl) on in vitro endothelial cells as well as new blood vessel formation [100].ey observed that IV administration of 25-60 grams of vitamin C affect both endothelial progenitor cells as well as mature endothelial cell functions associated with process of angiogenesis (Figure 2) [100]. Furthermore, the effect of vitamin C on angiogenesis assessed via tube formation assay exhibited blockade of vessel structure after 3-24 h of exposure of the cells to vitamin C (Figure 2) [100]. is appeared as a result of vitamin C ability to block NO in endothelial cells (Figure 2) [100].Duda et al. established that NO is a key stimulus of new blood vessel formation [113].us, vitamin C was capable of inhibiting NO stimulation resulting in the inhibition of angiogenesis as well as vasculogenesis (Figure 2) [113]. 1.6.Signaling Pathways of Vitamin C in Glioma.Vitamin C was implicated in several signal pathways associated with the development of glioma [114][115][116][117][118][119].Vitamin C had much a stronger influence on the crucial stages of tumor cell proliferation as well as differentiation by shifting their epigenome and transcriptome.Naidu et al. 
observed antiproliferative as well as apoptotic effects of vitamin C on T98G glioma cells via modulation of IGF-IR secretion subsequent to the facilitation of programmed cell death [96].Also, vitamin C was capable of upregulating proteolipid protein (PLP) as well as myelin-associated glycoprotein (MAG) genes in glioma C6 cells of rat models (Figure 2) [76]. Nuclear factor erythroid 2-related factor 2 (Nrf2) is a fundamental constituent of cellular defense against a wide range of endogenous as well as exogenous stresses [114].It was observed that vitamin C was capable of influencing Nrf2 in GBM (Figure 2) [114].Hypoxia-inducible factor 1α (HIF-1α) is a transcription factor responsible for the cellular reaction to low O 2 conditions via the modulation of genes regulating various cellular transduction pathways [115].HIF-1α further modulates growth and apoptosis, cell migration, energy metabolism, angiogenesis, and transport of metal ions and glucose [115].HIF-1α is often intensely oversecreted in common cancers, cancer cell lines, and metastases [116]. Several studies have demonstrated that therapeutic levels of vitamin C downregulated cell survival pathways in cancer cells via HIF-1α as well as the nuclear transcription factor (NF-κB) [117][118][119].Vitamin C was capable of regulating HIF-1α in common cancers including glioma [120].Also, vitamin C was able to promote prolyl as well as lysyl hydroxylases in the hydroxylation of HIF-1α (Figure 2) [120].It was established that low vitamin C levels were able to decrease HIF-1α hydroxylation resulting in the promotion of HIF-dependent gene transcription as well as tumor growth [120].Bi et al. established that over secretion of Bcl-2 and blockade of Bax secretion correlated well with antiapoptosis/apoptosis imbalance of glioma cells (Figure 2) [121]. Duan et al. demonstrated that Maitake mushroom (MP)/vitamin C was able to inhibit the proliferation of glioma cells, augmented tumor cell apoptosis, and reduced mRNA/protein secretion of Bcl-2 while augmenting Bax mRNA or protein secretion (Figure 2) [7]. ey further observed augmentation in the secretion of caspase-3 as well as its endogenous substrate, cleavage of PARP [7].Moreover, MP/vitamin C was able activate key mediators of the apoptosis pathway, such as caspase-3, caspase-8, and caspase-9 in M059 K cells (Figure 2) [7].us, the combination of MP and VC triggered M059 K cell apoptosis [7].Holme et al. revealed that vitamin C was capable of decreasing the cytotoxic properties of N-hydroxy-acetylaminofluorene and decrease the covalent binding of N-acetyl-2-aminofluorene (AAF) to cellular protein [122].Further studies are needed to establish the effect of vitamin C on this protein in glioma. Hung established that rat glial tumor cells possess N-acetyltransferase (NAT) properties [123].Furthermore, the rat's brain tissue was able to modulate NAT activity as well as the stimulation of N-acetylation of 2-aminofluorene (AF) (Figure 2) [124].Hung and Lu demonstrated that vitamin C was able to block NAT activity in C6 glioma cells [125].ey also revealed that vitamin C reduced AF-DNA adduct formation in C6 glioma cells, but vitamin C did not influence DNA to transcript NAT mRNA [125].Miller and Miller showed that AF is N-acetylated via NAT and subsequently metabolized via cytochrome P450 (CYP) into a reactive metabolite, which binds to DNA to form DNA-AF metabolite adduct (Figure 2) [126]. 
Vitamin C in Glioma Treatment.Cameron and Pauling in 1976 suggested that IV vitamin C followed by oral maintenance was a beneficial therapy for patients with cancer [127].us, vitamin C, specifically at high therapeutic levels, has a long and widely been used as cancer treatment in history [127,128].IV vitamin C was demonstrated to be toxic to tumor cells, but not to normal cells [129].Furthermore, IV vitamin C was capable of inhibiting angiogenesis and inflammation, boosts the immune system, causes differentiation of cells, and improves quality of life of patients with cancer [100]. Currently, temozolomide is the drug of choice for the management of patients with glioma [24,25,130].It is an orally bioavailable, methylating agent that is able to pass through the BBB and trigger the death of tumor cells [24,25].Nevertheless, some tumor cells are capable of repairing DNA damage triggered by temozolomide and thus lessen the efficiency of the therapy [24,25].Laboratory and clinical studies have demonstrated that temozolomide's anticancer efficiency was augmented when combined with etoposide [24,25]. Gokturk et al. demonstrated that vitamin C alone was capable of triggering oxidative DNA damage in glioma [130]. ey revealed that cytotoxic as well as genotoxic effects of temozolomide and etoposide were reduced by vitamin C, but the utmost cytotoxicity with the least genotoxicity was attained with use of the triple therapy [130].us, vitamin C reduced the cytotoxic as well as genotoxic effect of the etoposide and etoposide-temozolomide combination, but it had no significant effect on temozolomide's toxicity [130]. Mikirova et al. were able to treat neurofibromatosis type 1 (NF1) patient with optic pathway glioma (OPG) with a high dose of IV vitamin C [131].ey suggested that vitamin C treatment may be appropriate for young patients' glioma who are not suitable to receive standard treatments regimes due to their toxicity [131].Studies have demonstrated that 6 Journal of Oncology radiotherapy offers a 6-month survival benefit at a median time frame for glioma patients [132,133].Herst et al. demonstrated that radiation dose of 2-Gy fractions alone for GBM patients and vitamin C alone at concentrations >1 mM was effective for GBM patients [21].Herst et al. indicated further that combination therapy using 0.5 mM vitamin C and lower radiation dose of 1-Gy fraction killed considerably more primary GBM cells and astrocytoma cells compared with single therapy [21].Nevertheless, the combination therapy had a much lesser effect on normal astrocytes, suggesting a certain level of specificity for GBM cells [21].us, they study exhibited that in the clinical situation, combination therapy triggers more specific GBM killing with lower doses of radiation as well as less damage to adjacent, healthy tissues [21]. Herst et al. demonstrated that administration of vitamin C was capable of inhibiting radiation-stimulated G2/M arrest in GBM primary cells, but not in astrocytes, inhibiting homologous recombination and hence DSB repair, which was specifically poor in GBM cells compared with normal astrocytes [21].Furthermore, both vitamin C and radiation therapy were able to trigger cell death associated with autophagy [134].Autophagy is a salvaging mechanism that is stimulated in cells under stress [135].Studies have demonstrated that 5 mM vitamin C, 6-Gy fractions, or combined therapy did not trigger apoptotic cell death in GBM primary cell [134,136]. Herst et al. 
postulated that our cells primarily use autophagy as a survival mechanism after exposure to radiation, vitamin C, or combined treatment [21].ey concluded that high-dose combination of vitamin C and radiation has a much more profound cytotoxic effect on primary GBM cells compared to normal astrocytes, and this combination could be a safe as well as clinically viable alternative for treating aggressive radiation-resistant GBMs [21].Prasad et al. report that vitamin C at nontoxic doses potentiated growth inhibitory capabilities of 5-fluorouracil (5-FUra), bleomycin sulfate, sodium butyrate, cyclic AMP stimulating agents, and X-irradiation on neuroblastoma (NB) cells, but it did not yield analogous capabilities on rat glioma cells in culture [137]. Prasad et al. further postulated that if vitamin C is used arbitrarily in combined therapies, it may reduce the efficiency of some chemotherapeutic agents [137].ey indicated that vitamin C was capable of reducing the cytotoxic effect of methotrexate as well as 5-(3,3-dimethyl-btriazeno)-imidazole-4-carboxamide (DTIC) on NB cells in culture [137]. is was perhaps due to deactivation of these medicines in vitro by vitamin C [137].Prasad et al. in another study demonstrated that vitamin C at nontoxic doses significantly potentiated the effect of methylmercuric chloride (MMC) on NB cells while it did not alter the effect of MMC on glioma cells [137]. e effect of vitamin C was most distinct at a MMC doses of 1 μM [137].Moreover, vitamin C was similarly effective in potentiating the effect of MMC on NB cells, but glutathione did not exhibit similar effect [137].Schoenfeld et al. demonstrate that increased labile Fe 2+ pool levels, triggered by mitochondrial superoxide and H 2 O 2 , expressively participated in cancer cell-selective toxicity of therapeutic vitamin C combined with standard radio-chemotherapy in GBM models [138].ey postulated that augmented labile Fe 2+ in cancer cells triggered an upsurge in oxidation of vitamin C to produce H 2 O 2 capable of further aggravating the differences in labile Fe 2+ in cancer compared to normal cells [138]. e above occurred, at least partly, because of H 2 O 2 -mediated interference of Fe-S cluster-containing proteins [138].e augmented levels of H 2 O 2 , in the company of an augmented labile Fe 2+ pool, triggered an upsurge in Fenton chemistry to produce hydroxyl radicals resulting in oxidative damage [138].Sharma and Khanna showed that vitamin C inhibited etorphine-stimulated compensatory upsurge in the concentrations of cyclic AMP with slight or no influence on the temporary response of NG108-15 hybrid cells to the effector agents, but it had no effect on the temporary blockade response of the cells to the drug [139]. Sharma and Khanna suggest the potential use of vitamin C in the prevention of the development of tolerance in therapeutic usage of narcotics as analgesics [139].Vita et al. demonstrated that menadione alone or in combination with vitamin C exhibited similar concentration-response curves as well as IC50 values [140]. ey indicated that menadione: vitamin C at a ratio of 1 :100 exhibited higher antiproliferative activity when compared to each medicine alone and permitted to decrease each medicine concentration between 2.5 and 5fold [140].Analogous antiproliferative effects were exhibited in 8 patients derived GBM cell cultures [140]. 
Conclusion e dietary intake of vitamin C has protective effect on glioma risk.Neurons store high levels of vitamin C via SVCTs to protect them from oxidative ischemia-reperfusion injury.e key function of vitamin C is the inhibition of redox imbalance from ROS produced via the stimulation of glutamate receptors.Vitamin C is able to inhibit DNA damage and the deterioration of subcellular structures like proteins, lipids, and DNA by scavenging of ROS.Also, therapeutic concentrations of vitamin C are capable of triggering H 2 O 2 generation in solid tumors including glioma.e total number of blood vessels was decreased in vitamin C depleted tumors compared to the totally supplemented animals, which means that vitamin C is capable of inhibiting tumor angiogenesis.High-dose combination of vitamin C and radiation has a much more profound cytotoxic effect on primary GBM cells compared to normal astrocytes, and this combination could be a safe as well as clinically viable alternative for treating aggressive radiationresistant GBMs.Proteolipid protein ROS: Reactive oxygen species SVCTs: Sodium-dependent vitamin C transporters VEGF: Vascular endothelial growth factor. Figure 1 : Figure 1: e neurophysiological mechanisms via which vitamin C and glucose cross the blood brain barrier (BBB) to influence normal brain tissues.VC � vitamin C; GL � glucose.All other abbreviations are indicated in the abbreviation list. )[6].Laszkiewicz et al. exhibited that, vitamin C is a potent modulator of the proteolipid protein as well as the secretion of myelin-associated glycoprotein gene in CNS-derived C6 cells[76].Salmaso et al. established that vitamin C could be utilized as a targeting agent to stimulate the disposition of drug loaded nanosystems in gliomas[32].Conklin et al. indicated that antioxidants are capable of safeguarding normal brain tissues from radiation damage resulting in better survival, because brain tissues possess oxidative milieus and are thus susceptible to radiation damage[77].Lawenda et al. demonstrated that antioxidants are capable of rending glioma more resistant to tumor killing by radiation, resulting in poorer patient survival[78].It was established that, at low doses, vitamin C was capable of protecting cells from oxidative stress, thus inhibiting the advancement of tumors Figure 2 : Figure 2: e mechanisms via which vitamin triggers glioma cell death.VC � vitamin C; ASC � ascorbate.All other abbreviations are indicated in the abbreviation list. 4 Control trials are needed to validate the use of vitamin C and standardization of the doses of vitamin C in the treatment of patients with glioma.
2021-12-17T16:39:59.546Z
2021-12-14T00:00:00.000
{ "year": 2021, "sha1": "d84fd75672b72995de38fbcbd20d8a3838a8a44f", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jo/2021/6141591.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "12880d993e4d82e68c9aa8128d19c50ca7623026", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
250957871
pes2o/s2orc
v3-fos-license
Probabilistic Risk Assessment of Polycyclic Aromatic Hydrocarbons in a Colombian Reservoir The purpose of this research was to evaluate the concentrations, sources and ecological risk assessment of sixteen polycyclic aromatic hydrocarbons (PAHs) present in water from the La Fe reservoir, Colombia in the months of October and November of 2017 and 2018. Concentrations of PAHs in water were measured with semipermeable membrane devices (SPMD) which allow obtaining the dissolved concentrations of the PAHs in the reservoir, emphasizing the reactivity and bioavailability in the environment. The PAHs analyses were carried out by means of gas chromatography, coupled with tandem mass spectrometry (GC/MS–MS) with triple quadrupole (QqQ). The environmental risk assessment using the estimation of risk quotient with deterministic and probabilistic method, the predictive no-effect concentration (PNEC) and environmental exposure concentration (EEC) in water indicate a negligibe risk for probabilistic method for all PAHs evaluated (RQ < 0.1). Supplementary Information The online version contains supplementary material available at 10.1007/s00128-022-03567-7. Polycyclic aromatic hydrocarbons (PAH) are a type of organic compound that has two or more fused benzene rings within its chemical structure (Ehrenhauser 2015), which convert them into non-polar, hydrophobic, highly toxic, highly stable substances in the environment with great resistance to microbial degradation (Nikitha et al. 2017). These compounds can be transported over a long range of distances and enter the water through air-water exchange (Fang et al. 2012), forming suspended particles that are then easily deposited in aquatic sediments due to their hydrophobicity, which causes adsorption in surface sediments (Soliman et al. 2014). PAHs not only affect human health, through the ingestion of water and contact with the skin, but also sensitive species of different taxonomic groups in the aquatic environment (Sun et al. 2015). From these species, dose-response assessment can be developed, and therefore, daily exposure and risk indexes of these organic pollutants can be calculated. These estimates are key to manage the reduction of the emission and impact if the risk levels are harmful to the environment (Soltani et al. 2015). One of the newest methods for sampling trace pollutants in water are semipermeable membrane devices (SPMD) which can be used as in-situ monitors of dissolved pollutant concentrations and permit risk assessment exercises to evaluate the potential toxicity of pollutant mixtures, which is not possible with direct sampling methods (Petty et al. 2000). SPMD allow a passive sampling in situ, containing in their interior triolein that simulates processes of bioaccumulation in fish, and allowing the analysis of persistent organic pollutants (POPs) or other hydrophobic organic pollutants with the octanol/water partition coefficient greater than 3 (log Kow ≥ 3) as Polycyclic aromatic hydrocarbons (PAH), polychlorinated biphenyls (PCB), and organochlorine pesticides (Huckins et al. 2006;Narvaez and Molina 2012). The purpose of this study was to evaluate the presence, sources and environmental risk associated with PAHs found in La Fe reservoir. This study is essential to understand the risk faced by species and people who make use of this water body. 
The results contribute to the monitoring of PAH to determine the transport and destination of pollutants and mitigate the impact of these organic compounds on the environment using the risk quotient determined with probabilistic approach. Materials and Methods La Fe reservoir is located in the municipality of El Retiro, Antioquia (NW Colombia) (Fig. 1). The reservoir has a volume of 15 mm 3 , an area of 1.33 km 2 , an altitude of 2155 m above sea level, and is surrounded by a high traffic road that leads to the eastern municipalities Antioquia. This reservoir supplies water for consumption of two million people in the Metropolitan Area of the Aburra Valley. A pumping system from the Pantanillo River is used to regulate the level of the reservoir, which supplies the metropolitan aqueduct of about one million inhabitants in Medellin City through the Ayurá water potabilization plant (Hernani and Ramirez 2002). Four sampling campaigns were conducted in the months of October and November of 2017 and 2018 to compare the results after one year in six stations: San Luis-Boquerón tributary (TSB), Palmas-Espiritu Santo tributary (TPE), Espiritu Santo Inlet (EES), Catchment tower outlet (EBT), San Luis -Boquerón Inlet (ESB) and Pantanillo River Inlet (EPA). The first two stations are located at the entrance of the tributaries and the rest are located within the reservoir (Fig. 1). Water samples were collected by suspending stainless steel baskets 2 m below water surface at each station, where three SPMD were placed and retrieved after 21 day of exposure. From each station, an extract was obtained (24 extracts in total for all stations). The SPMDs were obtained from Environmental Sampling Technologies EST-Lab, St. Joseph, MO, USA, with the following specifications: 99% triolein, LDPE of 450 cm 2 and thickness of 75 µm. The procedure for the determination of PAH in SPMD was in agreement with Pogorzelec and Piekarska (2018). The collected membranes were first washed with a brush to remove biofilm and residues, then stored in aluminum foil and finally transported to the laboratory in sealed bags to avoid cross-contamination. In the laboratory, the membranes were washed with a 10% HCl solution to remove the inorganic salts and the biofilm adhering to the surface, followed by a wash with deionized water and isopropanol to remove the water residues. To separate the triolein from the organic compounds, the SPMDs were dialyzed in 500 mL Erlenmeyer flasks containing 200 mL of n-hexane. After 24 h of dialysis, the solvent was replaced and dialyzed again for 10 h. The triolein was separated with a Bio-beads SX-3 200-400 mesh column, by gel permeation chromatograhy (GPC) (Klučárová et al. 2013) with dichloromethane. A cleanup was done with acetone-hexane by a column with silica-activated gel. The extract was concentrated by means of nitrogen flow. The concentration of the 16 priority PAHs -Benzo was determined by Gas Chromatograph (GC) (Bruker 451), equipped with a PTV Injector for capillary columns, coupled to a Triple mass spectrometer (Bruker SCION TQ) with triple quadrupole analyzer (QqQ) and electronic Impact Ionization (EI) source. For the GC analysis, the external standard column BR5, splitless mode for PAH in MRM mode was used, with EPA PAH Mix (2000 μg/mL) in dichloromethane. Each calibration point was prepared for triplicate. Calibration curve were performed with R 2 between 0.9904 and 0.9984, LOD between 1.08-9.76 ng/mL and LOQ between 3.62-30.28 ng/L. 
The recovery percentages were determined with the PAH Mix reference material and the values vary between 80% and 120% (Table S1 and S2). Oven temperature was initially set at 110°C, and increased to 310°C at a rate of 10°C (3.5 min hold time). Performance reference compounds (PRC) are used to evaluate the SPMD-water exchange kinetics in situ, which are added to the membrane before exposure. The chosen PRCs cannot be present in the environment, normally Polychlorinated Biphenyls (PCB) 14, 29 and 50 are chosen. The dissipation of the PRC is equal to Eq. (1), where, N PRC is the amount of PRC in the SPMD membranes and N 0PRC is the amount of PRC at t = 0. By measuring N PRC ∕N OPRC we can estimate the elimination constant (ke) using Eq. (2). When, k e t < < 1, C SPMD is equal to Eq. (3), (2) ke = −ln(N PRC ∕N 0PRC) )∕t Because C SPMD increases linearly with time, this phase is called "linear absorption model" or "kinetic sampling". And the sampling is integrated with time, this type of model occurs when there are short periods of exposure or highly hydrophobic compounds. Therefore, the amount of analyte absorbed (N) by the SPMD membranes during kinetic sampling is equal to Eq. 4, where V SPMD is the volume of the SPMD membrane. From Eq. 4, we define the sampling rate (Rs) for kinetic sampling, using Eq. 5, The sampling rate (Rs) provides a conceptual framework between classical batch extraction techniques and passive SPMD sampling, since Rst is equal to the volume of water extracted (Huckins et al. 2006). A MANOVA test was initially conducted to compare the concentrations of the hydrocarbons at the different sampling stations. However, due to lack of compliance in the assumptions, a Kruskal-Wallis test was applied and finally a oneway ANOVA for the hydrocarbon Ant, in order to identify differentiated concentrations of this compound among the stations. For the ANOVA, a residual analysis was carried out to verify the assumptions of normality (Shapiro-Wilk test), homoscedasticity (Levene test) and independence (Autocorrelation function (acf)). The Tukey's post hoc test was to contrast pairwise differences among the locations. All analyzes were performed in statistical software R (R Core Team 2018). To determine the possible sources of PAH in the dialyzed extracts of SPMD, the ratios of PAH's molecular indices were used. The most common isomer ratios are Flu/(Flu + Pyr), Ant/(Phe + Ant), BaA/(BaA + Chr), Ind/ (Ind + BghiP) (Gdara et al. 2017), Fen/Ant, Flu/Pyr and BaP/ BghiP. If the Flu/(Flu + Pyr) ratio is < 0.4, the origin is petrogenic; between 0.4 and 0.5, the origin is a mixture of petrogenic and pyrogenic sources, whereas if it is greater than 0.5, the origin is pyrogenic. If Ant/(Phe + Ant) is < 0.1, the origin is petrogenic, if it is > 0.1, the origin is pyrogenic. If the ratio of Ind/Ind + BghiP is < 0.5, the origin is pyrogenic while values greater than 0.5, the sources are of petrogenic origin (Yilmaz et al. 2014). The ecological risk assessment (ERA) is a tool used in the organization, structuring and collection of scientific data, that allows the identification of potential dangers in order to establish priorities in regulatory control and apply corrective actions (Chen and Liu 2014). ERA is performed by obtaining Predictive No-Effect Concentration (PNEC) (Wu . 2011), which can be calculated from deterministic and probabilistic approaches. 
The deterministic approach uses the lowest value of acute (LC 50 ) or chronic (NOEC) toxicity in relation to an Assessment Factor (AF) for each PAH. The probabilistic method applies the extrapolation of a set of toxicity data in different taxonomic groups of at least 15 species that allow the construction of a species sensitivity curve (SSD), from which the PNECp value is calculated (Puerta et al. 2019;EC 2011). The ETX 2.1 software allows the calculation of the SSD curve for each of the compounds (Fig. 4). To guarantee data quality, the Anderson-Darling, Kolgomorov-Smirnov and Cramer Von Mises normality tests were performed, which are used as criteria for the parametric distribution of the data all levels (0.1, 0.05, 0.025, and 0.01). The figures of the species sensitivity curves were made with the RStudio software version 4.0.1. Results and Discussion Of the 16 parent PAHs analyzed, 14 were detected, and only two (Ace and BaA) were not found in the range of detection of the equipment. The most abundant PAHs were N, Pyr, Flu and Phe, F and Chr (Fig. 2a). The average concentration of PAH of four samples with SPMD in six stations of the water reservoir were in a range of ∑PAHs 5.762 ng/g. SPMD in the TSB station, up to a maximum of 21.491 ng/g. SPMD at the EBT station, being the average 11.447 ng/g. SPMD (Table 1). The Kruskal-Wallis for all the PAHs shows statistically significant differences between the concentration of Anthracene and the sampling stations. The ANOVA for anthracene presents a significant difference (p < 0.05) between the ESB-TPE stations (p = 0.035) and EPA-TPE stations (p = 0.022) (Fig. 2b). The concentrations of PAH in the six sampling stations during the two years ranged from ∑PAHs 219.28 ng/L in the TPE station and 1.002 ng/L at the EBT, being the average 420 ng/L for all stations (Table S3 and S4). The PAHs with higher estimated concentrations (Cw) are those with low molecular weight such as N, Acy, Phe, Flu, and Pyr which have 2 to 4 fused benzene rings in their chemical structure. High molecular weight hydrocarbons with 5 to 6 fused rings present low concentrations between 0.012 and 0.67 ng/L, among them BaP (0.19 ng/L). These PAH are considered carcinogenic by the World Health Organization (WHO), which established an upper limit of 50 ng/L for their presence in water (WHO 1998). Table 1 shows the values of molecular indices that allow the determination of the petrogenic or pyrogenic sources of PAHs. Figure 3a and b show the cross plots for Ind/ (Ind + BghiP) vs. Flu/(Flu + Pyr) and Ant/(Ant + Phe) vs. Flu/(Flu + Pyr), which allow determining if the sources come from petroleum fuel or from coal and wood. Table 2 shows the aquatic PNEC for the three PAHs, obtaining values of 4.21, 3.66 and 0.029 µg/L for Phe, Flu and BaP respectively. These values were calculated from the chronic toxicity data for different reported species in the literature. Figure 4 shows the sensitivity curves of the SSD species for each compound analyzed. The EEC values for the three PAHs are those reported in Table S4. By replacing these values and those of the PNECp. Figure 5 shows the RQs for all sampling stations. RQ < 0.1 indicate negligible ecological risk of these PAHs in the reservoir, which occurs for the three PAHs. Aquatic acute toxicity data evaluated in different ecological groups for Phenanthrene, Fluoranthene, and Benzo(a)pyrene are shown in Tables S6, S7 and S8 respectively. 
The results obtained for both the n-hexane solvent and the blanks indicate no contamination during the deployment and treatment of the membranes. The PAH values found in the SPMD extracts of the TSB and TPE tributaries are lower than those found in the stations within the reservoir, possibly due to the longer residence time of the water in the reservoir, which allows a greater diffusion of the water towards membranes and therefore an accumulation in triolein (Prest and Jacobson 1997). To highlight the importance of the use of passive sampling compared to traditional sampling, the results of this study can be compared with those of (Serna 2012) where they analyzed the 16 PAHs in La Fe reservoir stations and could not report any because they were below the quantification limit of the gas chromatography equipment. The PAH concentrations in the TPE and EPA stations were the highest, possibly due to the fact that TPE collects the water of the reservoir in the outlet to the La Ayura Potabilization Plant. As for the EPA station, the high concentrations may be due to the strong industrial and mining activities from the Pantanillo River and the Buey Stream in El Retiro (Salazar 2017). In this study, none of the samples exceeded the regulatory guidelines for water quality established by the World Health Organization (WHO 1998), which shows that the water from La Fe reservoir can be used for potabilization and human consumption since the concentrations are very low. These results are similar to those reported in the Three Gorges Reservoir in China, where 6 sites were analyzed during 7 and 24 days, where the highest values were obtained for Phenanthrene, Fluorene, Chrysen, Pyrene and Naphthalene (Wang et al. 2009), as well as the study of PAHs in 6 rivers and creeks of the Milwaukee Metropolitan Sewerage District area of Wisconsin in 37 days of passive sampling of July-August, 2007, where 35 and 560 ng/L of total PAHs were found (USGS 2014). From Table 1, the values of the ratio of Flu/(Flu + Pyr) isomers shows that the origin of the PAHs dissolved in water are of pyrogenic origin in the TSB, TPE, EES stations and a mix between petrogenic and pyrogenic origins in the EBT and EPA stations. The values of the Ind/(Ind + BghiP) isomer ratio show that most of the stations are of pyrogenic origin, with values < 0.5, except for the EBT station that has a value > 0.5, indicating petrogenic origin. The Ant/ (Phe + Ant) ratio shows that three stations (EPA, EBT, EES) are of pyrogenic origin since they have values < 0.1, whereas the other stations (TSB, TPE, ESB) are of petrogenic origin, with values > 0.1. Regarding the BaP/BghiP ratio, the source of most of the stations is of pyrogenic origin, since the ratio is > 0.6 due to vehicular emissions, probably due to the proximity to the highway that borders the reservoir, with the exception of the EES station, with a value less < 0.6, indicating sources other than vehicular emissions. According to the residues of the molecular index, most of the stations present PAH of pyrogenic origin, probably due to the incomplete combustion of diesel and gasoline engines, forest fires or the combustion of coal or residues. The cross plots for Ind/(Ind + BghiP) vs. Flu/(Flu + Pyr) and Ant/(Ant + Phe) vs. Flu/(Flu + Pyr) serve to establish that the origin of the PAH in most of the stations is fuel, derived from the incomplete combustion of automobiles that circulate around the reservoir and are transported to the reservoir by means of the winds (Chapman 1996). 
In the same way, Fig. 3 shows the cross plot of Ant/(Ant + Phe) vs. Flu/ (Flu + Pyr), which establishes the pyrogenic origin of the PAHs at all sampling stations in the reservoir. On the other hand, Fig. 3a and b allowed us to establish that PAHs sources in most of the different stations originate from fuel (58.3%), while 20% come from a fuel-oil mixture and 16% from a fuel-combustion mixture of coal and firewood, indicating that 75% of the stations have a pyrogenic origin. The sensitivity curve of the species for the three compounds analyzed present different ecological groups as the most sensitive: the marine zooplankton (ZP-MAR) and mollusca (MOL) For Flu; marine fish (FISH MAR) and zooplankton (ZP) for BaP; and marine microalgae (MIC-MAR) and zooplankton (ZP) for Phe (Fig. 4). The previous ecological groups are the most sensitive to each hydrocarbon. Based on the response of these ecological groups, the protection of the environment against contamination with these three compounds should be managed. The results of the probabilistic PNEC of the PAHs Phe and BaP found in our research with values of 4.21 and 0.029 (Table 2) are approximately double compared to the work of Wang et al. (2014), who calculated the values of the probabilistic PNEC for Phe and BaP in 2.33 and 0.011 µg/L. The RQ values calculated from the probabilistic PNEC for Phe, Flu and BaP indicate that there is no environmental risk due to the presence of these three hydrocarbons (Fig. 5). The PAHs evaluated by the probabilistic approach present low ecological risk, since they show risk ratios < 0.1 in all the stations. The PAH Phenanthrene is the one that exhibits the highest RQ values in the EPA and EBT stations with 0.0087 and 0.014 respectively (Zheng et al. 2016). Fourteen PAHs were detected in the SPMD matrix at the six stations in La Fe reservoir and tributaries during the two-year sampling. Sediments presented higher concentrations than water, possibly due to the fact that PAHs, being organic compounds, have low solubilities and high octanol/ water partition coefficient (log Kow) and due to the short periods of residence of the water in the tributaries and in the reservoir. The SPMD is a reliable and effective tool for the evaluation of contaminants in water, since it allows the estimation of the PAH concentration. The analysis of molecular indices of isomers of PAH, allowed to determine the pyrogenic origin of the PAH in the sampling stations. On the other hand, the probabilistic approach of the distribution of sensitive species (SSD) of three predominant PAHs in the reservoir (Fen, Flu and BaP), established that there is no risk (RQ < 0.1). This approach has advantages over as long as the reliability of the toxicity values used for the construction of the curve is guaranteed. Therefore, the evaluation of the ecological risk carried out through the deterministic and probabilistic approaches, allowed to establish that the ecosystem is not vulnerable to the vast majority of the 16 PAH evaluated. Funding Open Access funding provided by Colombia Consortium. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
2022-07-23T13:20:30.649Z
2022-07-23T00:00:00.000
{ "year": 2022, "sha1": "24a01f1582be228b5518bb487285cb12ae39d6df", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00128-022-03567-7.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "0f337d6131cf9c81a0ad0b3203e476aaf42c1abf", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
265516093
pes2o/s2orc
v3-fos-license
A disengaging property of medial accumbens shell dopamine Electrical stimulation of the medial forebrain bundle vigorously reinforces self-stimulation behaviour, yet rodents perform operant responses to terminate this stimulation. The accumbens shell emerged as a substrate subserving the reinforcing properties of electrical medial forebrain bundle stimulation, whereas disengaging properties were attributed to incidentally recruited substrates near the electrode. Here, we examine whether there are dissociable reinforcing and disengaging properties of medial accumbens shell dopamine and probe the substrates underlying these properties. Using a temporally delimited self-stimulation procedure, transgenic DAT-Cre mice expressing channelrhodopsin-II in ventral tegmental area dopamine neurons were trained to hold-down a lever to engage, and then release the lever to disengage, optogenetic stimulation of dopaminergic inputs to the medial accumbens shell through an implanted optic fiber. The cumulative and mean duration of hold-downs show divergent frequency responses identifying dissociable reinforcing and disengaging properties of medial accumbens shell dopamine. At higher stimulation frequencies the cumulative duration of hold-downs grows, whereas the mean duration of hold-downs wanes. Dopamine agonists reduced the cumulative duration of self-stimulation hold-downs, but only a D1 agonist produced this reduction through decreases in the mean duration of hold-downs, which were lengthened with a D2 antagonist. Thus, reinforcing and disengaging properties of electrical medial forebrain stimulation may arise from the downstream activation of dopamine receptors, uncovering a disengaging property of medial accumbens shell dopamine. Significance Statement Dopamine is thought to promote behaviour by acting as a reinforcer or error signal. Here, we show that mice vigorously self-stimulate dopamine inputs to the medial accumbens shell but control the duration of duration of these stimulations and prefer them to be brief. This disengaging property of medial accumbens shell dopamine depends on downstream neurotransmission at dopamine type 1 and 2 receptors. Thus, a single dopaminergic substrate, inputs to the medial accumbens shell, reinforces and disengages self-stimulation behaviour, highlighting the complexity and regional specificity of striatal dopamine function. Introduction. The discovery that rodents [1] and people [2,3] performed operant responses to electrically stimulate the medial forebrain bundle ignited speculation about the psychological consequences of this stimulation: Did it substitute for natural goals, positive memories, or utility -generate hunger, reward, or drive [4]?A parallel, search for the neural substrates recruited by electrical stimulation of the medial forebrain bundle spurred [5], ever complicated by the swath of cells and transgressing fibers characterizing this large bundle [5].It was soon realized that the quality of this stimulation was not owed to a single psychological process.Rodents [6,7] and people [2] would perform responses to terminate ongoing stimulation.The reinforcing and disengaging properties of electrical medial forebrain bundle stimulation were presumed to arise from distinct neural substrates indiscriminately recruited by the electrode [8].An imprecise but indelible link, between substrate and psychology was established. 
A description of the neural substrates underlying the reinforcing properties of electrical brain stimulation focused on an area of the ventral striatum called the nucleus accumbens.Diverse patterns of electrical medial forebrain bundle stimulation similarly activated ventral tegmental area dopamine neurons [9] and their release of dopamine in the accumbens, but not dorsal striatum [10,11].Within the accumbens, the medial shell subregion aligned with the reinforcing properties of medial forebrain bundle stimulation because dopamine levels here closely tracked maximally reinforcing stimulation parameters [12].Further, pharmacologically elevating dopamine in the medial accumbens shell, but not adjacent striatal subregions, amplified the reinforcing efficacy of medial forebrain bundle stimulation [13].Thus, the reinforcing properties of medial forebrain bundle stimulation are partly attributable to elevations in medial accumbens shell dopamine occurring through transsynaptic activation of ventral tegmental area dopamine neurons [14][15][16]. Optogenetics [17] and transgenic rodents [18,19] confirmed that ventral tegmental area dopamine neurons robustly supported self-stimulation, particularly at high stimulation frequencies, suggesting that these neurons underpinned the reinforcing properties of electrical medial forebrain bundle stimulation [18,20].Once again however, questions were raised about diverse psychological processes engaged by optogenetic stimulation of ventral tegmental area dopamine neurons.When mice could choose between operant responses that delivered brief or prolonged bursts of high frequency optogenetic stimulation of ventral tegmental area dopamine neurons, brief bursts were preferred [21].The preferential activation of ventral tegmental area dopamine neurons for brief durations is akin to behavioural responses that terminate ongoing electrical brain stimulation.Differently, however, the substrate serving to reinforce and disengage optogenetic self-stimulation of ventral tegmental area dopamine neurons is unequivocal. Self-stimulation studies highlight the capacity for dopamine to reinforce behaviour, but the dominant view of dopamine as an error signal nuances the interpretation of such studies.The firing of ventral tegmental area dopamine neurons, and consequent striatal release of dopamine, is thought to encode a prediction error [22,23].Operant responses for stimulation of dopamine neurons creates an error such that the receipt of stimulation is better than expected, and the response is repeated with increasing vigour.Attempts have been made to disentangle the hypotheses that optogenetic stimulation of ventral tegmental area dopamine neurons acts as a reinforcer or error signal [15,24], but these procedures overlook separable roles of dopamine neurons with distinct striatal targets [25][26][27].Notably, dopamine activity in the medial accumbens shell is inconsistent with error coding [28], modulated by the value of natural taste stimuli [29], and directly supports self-stimulation [27], highlighting a role for this substrate as a reinforcer. 
A key feature of how animals engage with natural reinforcers is the capacity to control the duration of behaviour.Mammals [30], including wild mice [31], tightly control the duration of bouts of consummatory behaviour around taste [32].In lab studies mice increase the duration of their bouts of licking a solution as it becomes sweeter and decrease this duration as the solution becomes bitter [33].Changes in bout duration are often divorced from changes in bout frequency which relate more to the postingestive consequences of consumption, like satiety or malaise [34]. That is, the taste of a positively valenced sweet solution can support long bouts of licking but reinforce few bouts in total.It is unclear if, given the requisite control, mice would modulate the duration for which a non-sensory neural substrate was stimulated, much like they do with sensory taste receptors activated by natural reinforcers, like food. We developed a temporally constrained self-stimulation procedure wherein mice are trained to hold-down a lever to engage, and then release the lever to disengage, optogenetic stimulation of dopaminergic inputs to the medial accumbens shell.As such, two principal measures describe self-stimulation: the cumulative and mean duration of lever hold-downs.The cumulative duration of hold-downs reflects the reinforcing properties of stimulation, arising either from changes in the mean duration or number of hold-downs, or both.Differently, the mean duration of hold-downs reflects a quality of the stimulation that arises as it is being experienced, allowing mice to disengage stimulation by releasing the lever.We use the temporally constrained self-stimulation procedure to dissociate reinforcing and disengaging properties of medial accumbens shell dopamine by examining whether the mean and cumulative durations of hold-downs diverge across stimulation frequences.Then, by modulating neurotransmission at dopamine type-1 (D1) and type-2 (D2) receptors with pharmacology we determined whether distinct downstream substrates impact the cumulative or mean duration of self-stimulation hold-downs.Selective recruitment of a genetically defined and input-specific substrate clarifies speculation about separable or overlapping substrates subserving the reinforcing and disengaging properties of electrical brain stimulation which we argue both arise at the level of dopamine inputs to the medial accumbens shell. In the same surgery optic fibers (200 µm) were implanted bilaterally in the medial accumbens shell (10° angle, AP 1.5 mm, ML ±1.4 mm, DV -4.32 mm).Mice were left to recover for 2-3 months in their home-cage allowing sufficient viral expression in dopaminergic terminals. Behavioral apparatus. Behavioral training occurred in six conditioning chambers (ENV-307W-CT; Med-Associates Inc.), enclosed in sound-attenuating, fan-ventilated (ENV-025F) melamine cubicles (42.5 x 62.5 x 42 cm).Each chamber had a stainless-steel bar floor, paneled aluminum sidewalls, and a clear polycarbonate rear wall, ceiling, and front door.The upper left wall of the chamber featured a central white house-light (ENV-215M).The right wall contained two retractable levers (ENV-312-3) located on each side of a central fluid-port (ENV-303LP2-3).All experimental events were controlled and recorded using Med PC-V software. Temporally constrained self-stimulation. 
Mice received 40 self-stimulation training sessions (36 min) consisting of trials (30s; 48 trials/session) initiated and terminated with the extension and retraction, respectively, of active and inactive levers.In each trial, the active lever was pseudorandomly assigned to deliver pulsed laser stimulation (5 ms; 473 nm; 10 mW) at 2.5, 10, or 40 Hz such that one third of all trials represented each frequency.Trials occurred around a variable time inter-trial interval that was drawn from an exponential distribution from 2-18 s.When mice performed hold-downs by depressing the active lever, pulsed laser-light was delivered through an implanted optic fiber to the medial accumbens shell, which continued so long as the lever remained depressed and terminated when the lever was released.Inactive lever hold-downs had no consequence. Retraining sessions without injections intervened each test session to mitigate potential treatment carryover effects. Histology. Mice were deeply anesthetized with sodium pentobarbital (Euthanyl TM , 270 mg/kg) and perfused with a 4% paraformaldehyde 0.1 M phosphate-buffered saline solution.Brains were extracted and cryoprotected in a 4% paraformaldehyde 30% sucrose solution for 1-2 days before being frozen at -80°C.Brain tissue was sliced into 50 µm coronal sections on a Lecia TM cryostat, thaw-mounted on microscope slides, and cover-slipped with a MOWIOL + DAPI solution, then imaged using a Leica TM epifluorescent microscope to identify viral expression in the ventral tegmental area and optical fiber tracts in the medial accumbens shell. Data and Statistical Analysis. MED-PC V software controlled and recorded the timing of all experimental events.The cumulative and mean duration, number and latency of hold-downs were analyzed using repeated measures analysis of variance (RMANOVA) in Prism TM v9.5.1 or SPSS v27.The Kolmogorov-Smirnov test was used to compare distributions.Conditioning sessions were blocked into groups of five (training) or three (test) sessions to ensure that enough hold-downs occurred to calculate mean hold-down duration.All analyses were conducted on raw data, although difference scores are shown to decompose significant interactions.Two mice did not perform a hold-down at one set of frequency-treatment conditions during the test phase and their data was excluded from test analyses.Post-hoc t-tests were Bonferroni-corrected (one family) and multiplicity-adjusted [35]. Temporally constrained self-stimulation of dopamine inputs to the medial accumbens shell. 
DAT-Cre mice were trained (Fig Discussion The medial accumbens shell emerged as a substrate subserving the reinforcing properties of electrical medial forebrain bundle stimulation [12,13] and early studies attributed diverse psychological consequences of this stimulation to the indiscriminate recruitment of neural substrates near the electrode.Here, by allowing mice to control the duration for which they optogenetically self-stimulate a genetically defined and input-specific substrate we describe reinforcing and disengaging properties of medial accumbens shell dopamine.The principal evidence for this claim is that the cumulative and mean duration for which mice hold-down a lever to optogenetically stimulate dopaminergic inputs to the medial accumbens shell show divergent frequency responses.At higher stimulation frequencies the cumulative duration of hold-downs grows, consistent with more reinforcement, whereas the mean duration of hold-downs wanes, consistent with disengagement.Using pharmacology, we begin identifying substrates underpinning the reinforcing and disengaging properties of medial accumbens shell dopamine.Dopamine agonists reduced the cumulative duration of self-stimulation hold-downs, but only a D1 agonist produced this reduction through decreases in the mean duration of hold-downs, which were lengthened by a D2 antagonist.Thus, reinforcing and disengaging properties of electrical medial forebrain stimulation may arise from the downstream activation of dopamine receptors, uncovering a disengaging property of medial accumbens shell dopamine. Across training, mice increased the cumulative duration, mean duration, and number of active relative to inactive lever hold-downs, consistent with the acquisition of temporally constrained selfstimulation.One previous study reported that mice could not learn to hold-down a lever to selfstimulate dopaminergic cell bodies in the substantia nigra [46], which may relate to an impoverished role of nigral, relative to ventral tegmental area dopamine neurons, in reinforcing behaviour [27].Additionally, nigral projection targets in the dorsal striatum [26] differ from ventral tegmental area dopamine inputs to the medial accumbens shell and likely engage circuitries that serve different functions.Also, the previous attempt to use hold-down duration as a selfstimulation operant may have been unsuccessful because of the high frequency of stimulation used, 50 Hz, which generally supported sub half-second hold-downs, consistent with the disengaging property of high frequency dopamine activity discovered in the current work.The effects of stimulation frequency detected here are smaller than in procedures where rodents control only the number, not duration, of trains of dopamine stimulation.Importantly, most previous work directly stimulated dopamine cell bodies in the ventral tegmental area [21,47], rather than accumbal inputs and few studies varied stimulation frequency.Nevertheless, mice demonstrate sophisticated and considerable (~25%) control over the duration with which dopamine inputs to the medial accumbens shell are stimulated. 
We attribute decreases in mean hold-down duration during high frequency stimulation trials to the emergence of a disengaging property of medial accumbens shell dopamine.This interpretation is incongruent with a paradigm in contemporary neuroscience describing a relationship between striatal cells and motivated behaviour.In the striatum, the activity of neurons expressing D1 receptors is thought to encode positive valence and encourage behaviour, whereas neurons expressing D2 receptors oppose this influence, encoding negative valence when active and discouraging behaviour [48,49].Thus, input from the ventral tegmental area to the striatum, including the accumbens, is thought to reinforce behaviour because dopamine elevates the activity of D1 neurons and mutes the activity of D2 neurons, serving as a strong positive valence signal.Certainly, we observed increases in the cumulative time spent self-stimulating dopamine inputs to the medial accumbens shell, consistent with reinforcing and positively valenced properties of driving this substrate.Differently, the mean duration for which mice hold-down a lever to stimulate dopaminergic inputs to the medial accumbens shell wanes at high stimulation frequencies, consistent with the emergence of a disengaging property of this substrate.It is unclear whether this disengaging property is inherently valenced, although mice also reduce the duration with which they interact with negatively valenced taste stimuli [33,34].Ultimately, it is difficult to reconcile a disengaging property of medial accumbens shell dopamine with conventional views of striatum-wide dopamine as a reinforcer or error signal.Rather, the psychological consequences of striatal dopamine likely reflect subregion specificity and the modulation of downstream neuron activity, which is convergently shaped by corticostriatal inputs expressing dopamine receptors [50]. The disengaging property of medial accumbens shell dopamine may arise from the co-release of glutamate by ventral tegmental area dopamine neurons that innervate the accumbens shell.When mice could nose-poke at five concurrently available apertures armed with 40 Hz optogenetic stimulation of different durations (i.e., 1, 5, 20, 40 s), ventral tegmental area dopamine cell bodies predominantly supported responses for 5 s of stimulation.Differently, a subpopulation of ventral tegmental area dopamine neurons that co-release glutamate predominantly supported responses for 1 s of stimulation [21] and mice would perform an operant response to terminate noncontingent stimulation of these inputs to the medial accumbens shell [51].Thus, glutamate co-release from dopaminergic inputs to the medial accumbens shell may contribute to the current behavior, where high-frequency stimulation supported the briefest mean duration of self-stimulation hold-downs; however, the modulation of self-stimulation hold-downs with dopamine receptor agonists and antagonists, and a recent report that dopamine release absent glutamate is aversive [52], disfavours this possibility. 
Dopamine receptor agonists and antagonists produced dissociable effects on the cumulative and mean durations for which mice self-stimulated dopaminergic inputs to the medial accumbens shell.While dopamine receptor agonists reduced the cumulative duration of self-stimulation holddowns, only the D1 agonist SKF38393 did so by reducing the mean duration of hold-downs which was unaffected by the D2 agonist quinpirole.Augmenting neurotransmission at D2 receptors with quinpirole diminished the reinforcing properties of medial accumbens shell dopamine, which likely manifested through subtle coincident reductions in the mean duration and number of hold-downs.Although dopamine antagonists did not affect the cumulative duration of self-stimulation holddowns, blunting neurotransmission at D2 receptors with raclopride lengthened self-stimulation hold-downs, as if to mitigate the disengaging property of medial accumbens shell dopamine.Differently, augmenting neurotransmission at D1 receptors with SKF38393 shortened selfstimulation hold-downs, consistent with amplification of the disengaging property of medial accumbens shell dopamine. In the temporally constrained self-stimulation procedure, the first active hold-down of a trial is a clear measure of the disengaging properties of medial accumbens dopamine because mice cannot predetermine the available stimulation frequency.The first active hold-down on highfrequency trials was briefer than on low-frequency trials -this result is only explained by mice disengaging the lever to terminate stimulation in response to some quality of the stimulation arising in real time.Despite experiencing a disengaging property of medial accumbens shell dopamine on the first high-frequency hold-down, mice were quicker to initiate a second hold-down than during lower frequency trials.A single experience of high-frequency dopamine activity in the medial accumbens shell is sufficient to reinforce, and disengage, self-stimulation behaviour more strongly than lower frequency stimulation of the same substrate.Thus, the effects of stimulation frequency and pharmacological treatments on mean hold-down duration were observed on the first hold-down.Additionally, the D2 agonist quinpirole reduced the duration of the first active holddown, which explains the companion reduction in cumulative active hold-down duration. Neurotransmission at D2 receptors thus partially underpins a disengaging property of medial accumbens shell dopamine, however, this effect diminished on subsequent hold-downs.The modulation of mean, or first, active hold-down duration with changes in neurotransmission at D2 receptors suggests that these neurons may play a principal role in determining whether, or when, animals disengage behavior. 
Conclusion Dopamine has varied and complex actions on inputs to and cells within the ventral striatum, but the consequence of these actions are simple: to reinforce behaviour.This conjecture is supported by decades of preclinical research manipulating and observing dopamine activity during behaviour and particularly the indiscriminate capacity for drugs that elevate dopamine levels to reinforce behaviour [53].However, much of this work relied on changes in behavioural frequency to describe the reinforcing properties of dopamine.Even still, some findings hinted a disengaging property of dopamine.For example, psychostimulants paired with food consumption [54,55], even when administered volitionally [56], reduced food consumption while encouraging food approach [57].Further, the self-administration of psychostimulants is reduced, not amplified, by microinfusion of dopamine agonists into the accumbens [58].These findings point to an inherent role of dopamine receptor activation in disengaging behaviour. Here, using a temporally constrained self-stimulation procedure we report behaviour consistent with dissociable reinforcing and disengaging properties of medial accumbens shell dopamine.Specifically, intense dopamine activity in the medial accumbens shell is powerfully reinforcing but produces short, numerous, self-stimulation hold-downs.The reinforcing and disengaging properties of medial accumbens shell dopamine are differentially affected by neurotransmission at dopamine receptors.Notably, augmenting neurotransmission at D1 receptors serves to enhance the disengaging property, whereas blunting neurotransmission at D2 receptors amplifies the reinforcing property, of medial accumbens shell dopamine -effects which may arise from unintuitive striatal responses to dopamine activity [59,60].The current finding that medial accumbens shell dopamine is both reinforcing and disengaging runs counter to conventional views of striatal dopamine but is well accompanied by recent work diversifying the role of dopamine in psychology [61][62][63]. Fig. 
Fig. 1│ Temporally constrained self-stimulation of dopamine inputs to the medial accumbens shell. a, Active and inactive levers were inserted into the conditioning chamber for 30 s trials (48 trials/session) which were separated by intertrial intervals (ITI; ~10 s). Active lever hold-downs triggered the delivery of patterned laser light (473 nm) to the medial accumbens shell at 2.5, 10, or 40 Hz, which terminated when the lever was released. b, DAT-Cre mice (n = 12, 7F, 5M) received microinfusion of the viral construct AAV2/9-EF1a-fDIO-hChR2(H134R)-eYFP in the ventral tegmental area and c, implantation of an optical fibre in the medial accumbens shell, which are shown on modified panels from the atlas of Franklin and Watson (2007). Representative microscopy (5x) images with overlaid anatomical delineations showing viral expression (eYFP, green) and nuclei (DAPI, blue) in the d, ventral tegmental area and e, medial accumbens shell. f, Across blocks of 5 sessions, mice increased the cumulative time spent holding down the active lever during 30 s trials whereas the cumulative time spent holding down the inactive lever remained low. Higher-frequency trials came to support more cumulative active hold-down time than did lower-frequency trials [Lever x Frequency x Session, F(14, 154) = 2.323, p = .006]. g, The mean duration of active, but not inactive, lever hold-downs increased over sessions [Lever x Session, F(7, 77) = 4.509, p < .001]. h, Mice increased the number of active, but not inactive, lever hold-downs performed during 30 s trials across sessions, particularly so for high-frequency relative to low-frequency trials [Lever x Frequency x Session, F(14, 154) = 2.679, p = .002]. Averaged data are mean ± s.e.m.
2023-12-02T14:13:10.673Z
2024-05-02T00:00:00.000
{ "year": 2024, "sha1": "5d3e862967ce9232e5e2718220340ede7bfb666f", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/11/30/2023.11.29.569116.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "5d3e862967ce9232e5e2718220340ede7bfb666f", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
149026906
pes2o/s2orc
v3-fos-license
THE SCHOOL – FAMILY EDUCATIONAL PARTNERSHIP

The study brings to attention the issue of parenting and the school-family partnership, namely the case of young students, given the fact that their education poses more and more serious problems for their parents. Parents are entirely responsible for educating their children up to school age. When the children start school, the responsibilities for their education are divided between family and school. The school-family partnership means effective communication, clearly defined tasks and homogeneous actions for the benefit of the child; parents must be considered active participants in school education, considering the fact that they know their children best. In recent years, parenting programs have been more successful than ever; by means of these programs, educators teach parents how to raise their children in the spirit of positivity: Appreciative Parenting. Within it, several types of parental support are delivered: individual counselling, coaching, group counselling, parenting courses and support groups. In order to form positive behaviours in children, both parents and educators need four elements: love, availability, direction and dosage.

Introduction

In today's society, the education of children is a real challenge for parents, especially after the children reach school age. From the moment the children start school, the responsibilities for the child's education are divided between the parents (family) and the school. The school needs the family's support to carry out the educational act, just as parents need help from the educators. Some parents believe that they do not need to be educated, in their turn, in how to educate their children, which is true in a few cases. However, most often, there is a need for training for the "job" of parent.

Where does the need for parent education come from? On the one hand, it comes from the discrepancy between what current parents know from their own parents about child education and, on the other hand, from the children's needs regarding their relationship with the parents, the school and the society.

Society has changed in the past decades: every day new information appears about children and about the different characteristics corresponding to the various stages of their biological, psychological or emotional development. Also, the mentality regarding the roles of the parents with respect to their children has changed; for example, in most families both parents work, which leads us to believe that the responsibility for child raising and education goes to both parents, not only to the mother, as is the case in the traditional family. Children, in their turn, tend to adapt very quickly to social changes and the new technologies, and are less obedient to parents with a traditional vision. They do not take anything for granted; in fact, they want explanations for all the aspects of the reality they are faced with. New challenges appear and the parents need to be prepared to "offer" appropriate answers to intelligent questions. From this point of view, the school is the first support of the parents.

Types of school-family cooperation

Parents interact with the school, the educators, school counsellors and education trainers in many ways. An important concept in modern pedagogy is the school-family educational partnership, representing a form of communication and cooperation between the school and the child's family.
The connection between school and family is activated especially during primary school. This is when the problem of the educational partnership appears, which is, according to Rodica Enache, "a form of communication, cooperation, collaboration in support of the child at the level of the educational process. It involves a unity of requests, options, decisions and educational actions among educational factors" (Enache, 2011). The educational partnership is realised between:
- education institutions: family, school and community;
- educational agents: child, parents, teachers, specialists in solving educational issues (psychologists, counsellors, therapists);
- members of the community with an influence on the child's raising, education and development (doctors, representatives of the church and the police);
- programmes of child raising, care and education.

Among all these, the most important is the school-family partnership. This partnership involves parents (students included, too) and the school staff. It is important to communicate the proposed objectives, in the sense that school and family must act together; the education within the family should not contradict the education received at school. Also, nowadays, the child is part of the educational decision-making, according to his possibilities, especially since, through the educational process, the child should be responsible and able to make quick decisions (as a result of positive education). The educational act becomes a partnership within which both the learner and the educator learn; the educator should be flexible, creative, dynamic and willing to find solutions for change.

Thus, parents collaborate with the school for the benefit of the children, for a better awareness of their needs and a better social and school integration. The beginning of school represents a major change, not only for the child but also for the family. The adults in the life of the children (parents, grandparents, teachers/educators) must work together, without prejudgments of the type "I was educated by my mother without being taught this in school". The school organises activities by means of which parents participate actively in the education of their children together with the educators. Through these activities, parents' counselling and training are achieved. The most important are:
- parent-teacher meetings, representing meetings on a certain topic, organised by the teacher with the purpose of offering the parents the possibility of interrelating, exchanging opinions and experiences, asking questions related to education and finding solutions to different problematic situations occurring within the educational process; the meetings should be held every semester or whenever necessary. However, parents tend to be reluctant to attend meetings with all the other parents, during which they would have to ask for guidance or advice. When a parent has a problem to discuss, they usually wait until the meeting is over in order to discuss the matter with the teacher in private;
- parent-teacher conferences, organised by the school with the purpose of informing parents on the latest reports regarding child psycho-pedagogy and education, and also in order to present various methods of teaching; they refer to approaching problems in a positive manner; this is also called positive communication or positive discipline, which seeks to achieve an awareness of actions, excluding negative messages, and accountability. Parents are inclined to prohibit various things to the children, which the children
perceive as negative messages. Parents are recommended to use positive messages (for example, a message like "You haven't tidied up your room" could be replaced with "I'm sure you can tidy up your room");
- counselling for parents towards the acquisition of certain skills which are necessary in child education; this refers to orientation activities and parent supervision, along with individual discussions.

In the work entitled "Consilierea parentală. Ghid metodologic" ("Parenting. A Methodological Guide"), Cuzneţov (2014) presents the activities of psycho-pedagogical counselling as "an educational-formative act centred on exercising the individual competences and availabilities", in this case of the parents. The author presents the characteristics of psycho-pedagogical counselling, referring to the fact that the parent has direct contact with the school representative in order to communicate efficiently and has the opportunity to develop certain abilities and parental competences; thus, parents learn to listen to their children more, to spend more time with them and to assign them different tasks appropriate for their age, with the purpose of educating them as responsible people.

Also, counselling can prevent certain mistakes which occur in child education (physical or verbal violence), and it can teach parents to find solutions on their own in order to solve unforeseen situations:
- "it has a preventive and developmental role": parents "learn how to prevent possible situations of crisis, learn how to find satisfactory solutions to the problems they are faced with";
- "it is a specific educational-formative act" which has the purpose of teaching parents how to help themselves, how to find solutions;
- it implies "conscious, active and responsible involvement from the part of the family and parents" who are training to build the parenthood life (Cuzneţov, 2014).

The people responsible for psycho-pedagogical counselling are "competent and qualified people; they are called counsellors (school counsellors or educational counsellors) who intervene in family problems" (Cuzneţov, 2014).

In the case of young children, the most appropriate person required to offer counselling is the educator or primary school teacher. In order to be able to provide counselling, primary school teachers need specialised training.

In many cases, in Romania, school counselling does not have the expected results, for many reasons: the shortage of qualified personnel, namely counsellors; the counsellors' lack of time; the lack of interest from parents with social problems and financial difficulties. A school counsellor works in several schools, having a schedule that does not allow them to be present in only one school unit throughout the entire week. On the other hand, not all teachers have the required qualifications. This is the reason why the need for an educational partnership is on the agenda.

The evolution of parenting in Romania

The term "parenting" (borrowed from the English language) appeared relatively recently in modern pedagogy and refers to a form of support given to parents or legal representatives in child raising and education, from birth to adult age. Parenting helps parents improve themselves in the educational process in order to raise happy children. In Romania, in the past years, parents have become more and more interested in parenting. From this perspective, parental education programmes have become more successful lately; by means of these programmes, parents learn to educate their children in the spirit of positivity.
Within the Swiss-Romanian Cooperation Programme, Our Children Foundation, in collaboration with UNICEF, FONPC and Formation des parents CH, carried out a study in 2016, as part of the project "Promoting the National Strategy for Parental Education"; it was the second study after one carried out in 2010, on a total of 1173 respondents. The study is "a research on the opinions of parents, professionals and decision-makers regarding the parenting activity" (Preda, 2016). The study covered 16 counties and Sector 5 in Bucharest, in both urban and rural backgrounds.

The conclusions show that the involvement of parents in the education of school children was low after 1989, due to the effects of the communist policy prior to the revolution. The activity of parents' education began to develop in Romania in the late 1990s. Lately, lifelong training modules for accredited teachers and other child development and protection staff have also been proposed, focusing on parenting (inclusive education, socio-emotional education).

The study carried out by Our Children Foundation shows that these programs and projects, developed by the Ministry of Education as a beneficiary or partner, highlight some important aspects, such as identifying the need for support granted to parents with the purpose of reducing school failure, dropout, discrimination and violence.

The study also shows that there is an awareness of the need for parental education, both in urban and rural areas, and that it is very important to know and respect children's rights within parent education. On the other hand, "the parent educator is regarded as a qualified professional, with long duration studies, structured and developed in an appropriate training framework" (Our Children Foundation, 2016), according to The Law of National Education no. 1/2011, section 3, art. 247, point g), with reference to psycho-pedagogical centres and offices. At the same time, the respondents considered the education provided to parents to be very important, in various forms: counselling, more playgrounds and parental education.
When the child starts school, the problem of child development and learning appears, and as a result the school offers "support structures for both the child and the family" (Enache, 2011). At the same time, there are activities that support parents in the educational act. In school, there are offices for psycho-pedagogical assistance, counselling, speech therapy and professional guidance (according to The Law of National Education). There are also family and/or teacher resource centres (opened by the previously mentioned projects with support from UNICEF). It is not only the family that needs help to evolve, but also the school, in order for children to be educated as well as possible. From this point of view, teachers have to give up the self-centred attitude of considering themselves at the heart of the educational act. Teachers need to have a strong desire for their students to learn. For this, they need to know their pupils very well so that they can help them understand what they will learn, taking into account their individual needs. Teachers should be aware of the fact that, in order to be effective factors, they need to emphasize the purpose of learning, be able to provide students with options, constantly assess by various methods and means, use educational resources effectively, and ask for and offer support to their colleagues. On the other hand, the management of the educational institution must provide teachers with safety, trust, support and methods of progress supervision.

Parents, in their turn, get involved in the child's schooling. They are interested in the school program, school progress and how they can help the child. Meetings between parents and school representatives are either formal, in the form of lectures, or informal, as in daily conversations with the teacher. Parents' associations have become active and have started having discussions with school representatives on different aspects of school activity. The school has a partnership with parents regarding student counselling, among other aspects.

At the level of the educational institution, several types of relationships are developed: the most important is the school-family relationship, followed by the individual relationships (between the pupils, between the teachers, between the teachers and the administrative staff), the relationships between parents and teachers, and the relationships between child development professionals and parents/teachers.

The school-family partnership means effective communication, clearly defined requirements and unitary actions in the interest of the child; parents should be considered active participants in school education because they know their children best; therefore, they must participate directly in the decision-making process; the responsibility for educating children is divided equally between parents and teachers.
The relationship between school and family has not always been considered in the form of a partnership. After 2000, studies began to show the influence of the family on school results, especially due to the fact that the family, as an institution, is going through a period of crisis. The most important challenge is the departure of parents to work abroad, leaving children in the care of their relatives. The school is blamed for the students' bad results, and vice versa. In consequence, parents' associations and parents' organizations began to appear, but without any involvement in the decision-making process regarding aspects of education. The providers of new-teacher training programs also considered the idea of introducing topics on the school-family relationship. Mutual mistrust is thus removed along the way. The Teachers' Board includes parents' representatives with a decision-making role in all educational problems; parents' associations are encouraged to participate in school work. Also, teacher-training and parent-training courses are organized.

After the establishment of ARACIP (the Romanian Agency for Quality Assurance in Pre-university Education) in 2005, a special "legitimacy" was found for the school-family partnership, meaning that an important aspect of the evaluation carried out by this body is the consultation of the parents' representatives with reference to the quality of the educational act in school.

Obstacles and solutions in the good cooperation between school and family

The Institute of Educational Sciences (IŞE) in Romania studied the relationship between school and family (Ţibu and Goia, 2014) and observed that there are some obstacles in the school-family relationship at the behavioural level, both on the teachers' and the parents' part, especially in school areas of social and material risk, because the school-family relationship requires effort and time, both on the part of the school and of the parents. A major obstacle is the mentality of parents, teachers and students, and social habits. Due to changing social contexts, such as labour migration, nowadays many children are deprived of parental models. If problems arise in one of the two environments, the effects are noticed in the child's behaviour and attitude, but also in school results. This is the moment when accusations arise: parents are accused of not having the initiative in establishing communication with the school, of not supporting or assisting the child in doing homework, of not managing correctly certain childish, rebellious manifestations, of conservatism (opacity towards new ideas) or of excessive preoccupation with school results; teachers are criticized for their relationship with pupils and parents, for the lack of adequate training to manage their relationship with the family, and for severe strictness or, on the contrary, too little constraint and too much permissiveness.
Conflicts between school and parents may arise, especially due to differences in education, mentality, perceptions, attitudes or values. Other causes can be added too: poor communication between the two institutions, poor information, intolerance towards a certain lifestyle, a small number of meetings between parents and teachers, previous unhappy experiences in similar relationships, and ignorance of the responsibilities and roles assigned to each part. These problems can be solved through knowledge, effective communication, mutual tolerance and acceptance, evaluation of the relations, cooperation in common activities, and creating a positive atmosphere during meetings. Both school and family have serious responsibilities regarding child education. That is why there is a need for a parenting school. Education institutions are called upon to include parents in the school educational act, showing them that school is a place of safety (students are protected from potential external dangers; they have a proper framework for development; they are accepted and relate to other children of the same age). The family also has certain responsibilities (to offer the children affection, support and the best living conditions). Specifically, when there is a heated discussion between teacher and parent, the teacher has to keep calm and postpone the discussion in order to get more information about the situation in question. The parent must be allowed to talk without being interrupted and must be listened to, keeping a calm attitude. The teacher will initiate positive communication without making accusations, without drawing hasty conclusions and without judging, having a positive attitude, showing that he or she understands the role of the parent and the challenges that appear. Also, the teacher will show the parent that he or she is important for the teaching activity and that accusations do not help to establish a positive parent-child relationship.

The school is the institution with the most important role in the relationship with the family, having methods to train parents in common actions for the homogeneous education of children; that is why the family is informed about the purpose of the instructive-educational process and the tasks and requirements given to the children; the school also informs parents of the methods that can be used in the family to fulfil its mission. Thus, teachers educate both children and parents.

As an institution, the school has specialists who contribute to building an authority in the education of both children and adults. Elisabeta Stănciulescu shows that the school identifies four ways of perceiving the family (Henripin, 1976, apud Stănciulescu, 1997), each having distinctive features from the perspective of the importance of the two institutions: if "the family as a client" does not have a say in the relationship with the school, taking only the decisions imposed by the school, "the family as a pressure group" is very active in putting pressure in order to solve parents' claims; "the family as a guarantor" is consulted by the school; and "the family as a partner" is actively involved in the decision-making, since the parents' representative is a member of the School Board.

The same author cites two Canadian authors, J. Comeau and A.
Salomon (1994), and shows that, from the perspective of school-family relationships, there are the following types of school: the "authoritarian school", which offers minimal cooperation with the family and considers that parents do not have any psycho-pedagogical training; the "participatory school", which includes the parent in the educational process; the "community school", which appeals to all the resources of the community; and the "autonomous school", which does not accept external interventions but "exploits the pupils' participatory potential" (Stănciulescu, 1997). In order to have a good partnership between the two institutions, it is preferable to have a participatory and community school.

School is an institution which provides education and, in this capacity, it has resources, methods, goals, etc. that recommend it as a partner in the relationship with the family and the community (Iosifescu, 2001). The school is the initiator of the partnership and, in order to establish a good collaboration with the family, it needs to broaden the participatory character of school management; it also needs to attract the family as the main partner and to expand collaboration to all factors that can help in education. The family is attracted mainly by opening up to social changes and by educating children in a positive manner.

Many parents believe that when their children start school, their role in education is greatly reduced. In fact, their role becomes more important because, on the one hand, at home they have to create a balanced environment based on trust and security and, on the other hand, the family needs to become an active member of the school community to support the educational act, both in school and at home. Teachers must increase parents' awareness of this important role. In fact, research has shown that "in programs where parents are involved, pupils have higher school performance compared to similar programs but where parents are not involved" (UNICEF, 2006). When there is a gap in teacher-parent communication, this is reflected in the child's school results. The responsibility for the education and development of young children should be equally divided between school and family. Thus, there are several benefits: the self-esteem of the children increases, the parents understand better what happens at school and spend more time with the child, the pupils have better results and do their homework every day, and both children and parents develop positive feelings in relation to the school. However, many parents do not have time or are unwilling to work with the school. It is the duty of the teacher to make them aware of the importance of collaborating with the educator for the child's benefit: the child has various tasks both at school and at home, and that is why a similar degree of responsibility should be assigned to both the school and the parents. "The school needs to encourage and promote the involvement of parents as partners" (UNICEF, 2006).

One example of such a program is currently being developed by UNICEF-Romania and the Ministry of Education. The course is held in several schools, including "Mihail Andrei" Secondary School in Buhuşi, Bacău County. Courses are organized for the teaching staff, serving as training to learn methods and techniques for their relationship with parents and students, for a positive education (which deals with the study of the qualities and virtues that help children to make positive improvements, to develop and to lead a happy life).
Appreciative Parenting

Appreciative Parenting starts from the belief that parents are models that their children follow, and that the family should provide a space of balance and should not be the place where daily frustrations come to life. Self-control and positive communication are needed. The aim of the program is therefore to encourage parents to analyse their lives and the relationship they have with their own children, to discover positive alternatives to parenting and to develop a social support network together with other parents. Through this program, parents become aware of the positive alternatives in educating the child, all this leading to an improvement of both their lives and their children's lives.

Within Appreciative Parenting, there are two types of parental support: individual support and group support (HoltIS, UNICEF, 2017). The individual support is accomplished by "individual counselling", "mentoring" and "parental coaching", the latter representing an "informal relationship between two people, one has more experience and expertise than the other, and provides advice and guidance" (pp. 12-22). On the other hand, group interventions are of the following types:
- "group counselling", conducted by a specialized leader who prepares the meeting with thorough attention;
- "Parental Education courses", "a prevention / intervention tool used precisely to support parents to develop healthy parenting practices";
- support groups (peer and self-help support), "created on the premise that supportive interactions with people experiencing similar problems can give the individual a sense of empowerment, an increase in self-efficacy, and increase their adaptive abilities" (HoltIS, UNICEF, pp. 24-31).

In the work entitled "Cum ne antrenăm Parentingul Apreciativ: Manualul Educatorului Parental" ("How to Train Appreciative Parenting: Parent Educator's Handbook", 2015), it is stated that parents are sometimes overcome by day-to-day stress in the context of rapid changes that require a fast adaptation to new situations. This may result in the abandonment of the role of parent or the continuation of this role accompanied by behaviours that may endanger the physical and mental health of their children.

According to L. Yballe and D.
O'Connor (2000, pp. 474-483), when we take into consideration children from grades I to IV, parental education is based on several principles. Those relevant in the context of our article are:
- "focusing on experience", which means starting from the previous experience of both educators and parents "about life, about themselves"; parents become sources of knowledge;
- "focusing on success", which refers to "making the best use of the moments of the greatest success, pride, or glory experienced by parents"; these become the inspiration for future successes if they are understood and amplified. During the parenting course, parents are encouraged to identify the moments of success in educating their own children, then to identify their own and their children's qualities and the way in which these led to success;
- "focusing on the connection between positive vision and positive action", by means of which a positive view of people, institutions, community and children is created and which encourages positive actions in relation to children. "The more positive the questions we ask during meetings are, the faster and more successful the social change becomes";
- "the poetic principle", which refers to the fact that the family environment is under permanent construction and reconstruction, "just as a poem can be interpreted and reinterpreted" with new meanings.

Applications of appreciative parenting education within a school-based training course

At the moment, the Ministry of Education, in partnership with UNICEF, is implementing the Appreciative Parenting Program in several schools where there are students in difficulty due to social or financial reasons, among which there is also "Mihail Andrei" Secondary School, Buhuşi, Bacău County. Within the project, teachers are trained to prepare parents so that they can educate their children on the principle of positivity.

Parents are sometimes overcome by day-to-day stress in the context of rapid changes that require a fast adaptation to new situations. This may result in the abandonment of the role of parent or the continuation of this role accompanied by behaviours that may endanger the physical and mental health of their children. Regarding poor families (as is the case at "Mihail Andrei" Secondary School), they are forced to work hard or to go abroad in order to survive. Thus, parental education has become very important nowadays for all categories of parents, who need to understand their children better, to educate themselves for the benefit of their children and to learn to respect their children, which is the basis of positive education.

The Appreciative Parenting Course involves parents in training sessions which help them become better parents. These sessions are carried out by parent educators (who may be the teachers themselves), who in their turn are trained for this position. The course is based on Kolb's experiential learning theory (learning through exercises), so that parents can apply what they learn during the course in real-life situations.

The first stage is learning to overcome stress and anger, by identifying the sources of stress and anger and their symptoms, and then by learning how to cope with stress and control anger. Parents are taught to help their children cope with anger.

The next stage is efficient, positive communication between parent and child. Through different exercises, parents experience active listening, verbal communication and means of positive communication (for example, messages of the "I" type).
In order to better understand the child, parents learn the characteristics of child development at a young school age, including the concept of emotional intelligence. This helps parents understand certain emotional manifestations of the child and find solutions to various problems or concerns. With regard to these aspects, parents are encouraged to think, communicate and relate positively, so as to be a model for the child: to identify the causes of anger and avoid them, to avoid judging people by labelling them in front of the child, to get involved in the problems of the child by communicating positively, to engage the child in household chores, to restrict the child's access to the media, to discuss family feelings, and to see the child as capable and wonderful.

During the course, parents are trained to become trustworthy partners in the relationship with their own child: they learn about the rights of the child and that these rights must be respected; it is the parent's duty to learn how to strengthen the positive relationship with the children, by spending time with them in a constructive manner, by setting priorities according to emergency situations, and by co-opting the child into the decision-making process.

An important problem is disciplining the child in a positive way. Firstly, parents are invited to understand the concept of "discipline" and to eliminate the concept of punishment from their vocabulary, because it is considered harmful. Discipline is presented as a form of learning and as the need to set a limit; thus, the children will learn to discipline themselves. The punishment applied to children teaches them that this is the right way to treat others. Positive discipline methods are discussed: active listening, positive attention and encouragement, time spent with every child, and affection. Parents' reactions to challenging behaviours must be positive too: hugs, and breaks for both children and parents.

Hitting and beating are presented as violent forms of reaction that prevent the child from having proper feelings of responsibility or guilt for his behaviour and prevent him from solving the problem. The possibility of learning is excluded in this way.

Another important problem is the prevention of abuse and its effects on the child. Parents become aware of the importance of knowing the forms of abuse against children: child neglect, neglect of their needs, verbal or physical abuse, emotional abuse, and the ignoring of abuse. Also, parents learn how to observe any sign of child abuse and its effects on the child's development. This form of learning is experiential, which implies the fact that parents will understand all forms of abuse and its consequences by means of exercises.
Among the styles of parenting, the closest to positivity is the efficient one: the adult has realistic expectations of the child, offers him or her the necessary emotional support, explains the reasons for setting rules and limits, establishes a set of rules in cooperation with the child, has a firm attitude but, at the same time, encourages the child to be independent and involves him or her in the decisions that concern them. These competences and abilities are acquired by the parents by participating in the Appreciative Parenting Course. The positive long-term effects can be observed in the development of the child's personality and in the high quality of their social skills. The children that are educated in this way will assume responsibility for their actions, will have a high degree of self-esteem and respect, and will show initiative; they are also emotionally balanced, have good communication skills, and are creative and positive (Cojocaru, 2015).

In order to form positive behaviours in children, both parents and educators need four elements: love (which implies forgiveness and patience); availability, meaning being caring, spending time with the child and studying at all times; direction towards certain goals; and dosage, as the ratio between what the parents should ask for and what they offer the child in return.

Conclusions

In Romania, there is an acute need for parenting education, taking into account the problems the family is faced with regarding child education in general, and young students' education in particular. The models of parents' education acquired from the parents' own parents are outdated. The phrase "I educate my children just as I was educated" is no longer valid, due to the complexity of the relationship between children and their parents, the school and the society. The school supports the family by offering professional counselling within parents' meetings, which are held either by school counsellors or by teachers, but this is not sufficient. Lately, in Romania, the Ministry of Education has developed parenting programs in cooperation with UNICEF and NGOs: "Save the Children", "Our Children Foundation", "Holt". This kind of activity started in
2018-12-05T13:50:02.966Z
2017-12-31T00:00:00.000
{ "year": 2017, "sha1": "a91eb8b12a13ac3751c8c8d311d828bcbaad6c22", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.26755/revped/2017.2/107", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a91eb8b12a13ac3751c8c8d311d828bcbaad6c22", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
210115823
pes2o/s2orc
v3-fos-license
Electrical Characterization of the Backside Interface on BSI Global Shutter Pixels with Tungsten-Shield Test Structures on CDTI Process

A new methodology is presented, using well-known electrical characterization techniques on dedicated single devices, in order to investigate the backside interface contribution to the measured pixel dark current in BSI CMOS image sensor technologies. Extractions of the densities of interface states and of charges within the dielectric are achieved. The results show that, in our case, the density of states is not directly the source of the dark current excursions. The quality of the passivation of the backside interface appears to be the key factor. Thanks to the presented new test structures, it has been demonstrated that the backside interface contribution to dark current can be investigated separately from other sources of dark current, such as the frontside interface, DTI (deep trench isolation), etc.

Introduction

Backside illuminated (BSI) imager technologies are nowadays widely used thanks to their advantages, such as better fill factor and better light collection, compared to frontside technologies. Like the frontside interface, the backside interface can be a source of dark current. The key factors are the interface states and their passivation, here managed through a negatively charged high-K dielectric to easily accumulate holes and, therefore, passivate the interface [1-5]. The interface state density drives the interface generation, while the passivation modulates the quantity of electrons that could escape from the interface region and reach the photodiode. Therefore, it is crucial to develop the characterization of this interface.

A characterization method named COCOS (corona oxide characterization of semiconductor) [6] already exists, which enables the extraction of the density of interface states and of the charges within the oxide just after the deposition of a material. Here, a new methodology is presented that enables characterizing the backside interface at the very end of the process and thus in the final pixel environment, which cannot be done by COCOS because of the metal shield preventing the light needed for the measurement from reaching the interface. It is based on well-known and relatively simple electrical characterization techniques applied on new dedicated test structures that benefit from the tungsten (W) layer present in the technology for light-shielding purposes. Additionally, repeated measurements are possible with these structures (see Section 4), which cannot be done with COCOS due to the charge injected for the measurement, which cannot be removed correctly. Finally, the test structures are embedded on fully processed wafers, so wafers are not dedicated only to the extraction of interface states and charge within the oxide, unlike COCOS measurements, which are done on unpatterned wafers. Indeed, multiple structures are also present for different tests important for the development of the technology. Thus, with these test structures, the dark current measurements and extractions presented in this paper can be done on the same wafer.

Our use-case detailed in this paper is based on three wafers from a BSI CDTI (capacitive deep trench isolation [7]) technology with Ta2O5/Al2O3 backside dielectric process variants. For the 31 × 863 array of 3.2 µm × 3.2 µm pixels covered by the W shield, the median dark current (Idark) distributions measured on the three wafers are presented in Figure 1.
Dark current is measured at 60 °C, and further in the paper the different measurements on the test structures are conducted at 25 °C, which is more convenient. This does not affect the results because the measured parameters do not vary with the temperature. The temperature during measurements is controlled by chillers and temperature controllers. For the measurements, the pixels' CDTI are biased at −2 V, which is their operating voltage, in order to have an accumulation layer at their interface, which is therefore passivated. The W-shield is not biased because, with the negative charge within the high-K, the backside interface should be passivated. Atypical behaviors can be seen on wafers 2 and 3. For both wafers, there is a higher and non-uniform dark current. In order to investigate the root cause of the dark current, two special test structures are measured on the same dies, physically near the pixel array used for the Idark extraction (Figure 2), in order to investigate whether the Idark excursions come from the backside interface and dielectric properties.
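As a minimal sketch of how the per-die median Idark in Figure 1 could be computed, assuming the dark signal of the 31 × 863 shielded array is available as a per-pixel array (the data below are synthetic, not the measured values):

```python
# Hedged sketch: per-die median dark current for the W-shielded pixel array.
# Array shape comes from the text; everything else is illustrative.
import numpy as np

def median_idark(idark_pixels: np.ndarray) -> float:
    """idark_pixels: dark signal of the 31 x 863 shielded array, one value
    per pixel (arbitrary units). Returns the median dark current of the die."""
    assert idark_pixels.shape == (31, 863)
    return float(np.median(idark_pixels))

# Synthetic data mimicking a well-behaved wafer and a wafer with higher,
# non-uniform dark current (cf. wafers 2 and 3).
rng = np.random.default_rng(0)
wafer1 = rng.lognormal(mean=0.0, sigma=0.2, size=(31, 863))
wafer2 = rng.lognormal(mean=1.0, sigma=0.8, size=(31, 863))
for i, w in enumerate((wafer1, wafer2), start=1):
    print(f"wafer {i}: median Idark = {median_idark(w):.2f} (arb. units)")
```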
In this paper the different test structures are first presented. In Section 2, the density of interface states (Dit) is investigated, with a tentative correlation with Idark. A similar exercise is performed on the passivation quality, detailed in Section 3. Finally, a charging phenomenon present on both structures is presented and discussed.

Test Structures Description

To study the potential correlation between Idark, the density of interface states (Dit) and the quality of passivation achieved through the negative charge density within the oxide (Neff), two test structures have been developed in a BSI global shutter pixel technology with CDTI, using the W-shield as a gate: a backside MOS capacitor (Figure 3) (L = 500 µm, W = 31.6 µm) and a backside W-shield gate pseudo-MOS transistor (Figure 4) (L = 10 µm, W = 27 µm).
Figure 4. Illustration of the transistor test structure used to extract Dit. It is composed of a W-shield used as a gate, a dielectric stack as the MOS oxide, and a p-doped substrate. CDTI on both sides are present to connect the source and drain to the surface controlled by the W-shield gate. An STI (shallow trench isolation) is present between the source and drain to prevent punch-through.

The capacitor is a simple metal-dielectric-semiconductor device and is functional (see Figure 5). In the transistor structure, the CDTI are biased in such a way that there is always an inversion layer at their interface in order to connect the surface source and drain to the bottom interface of interest, that is to say, the channel of the W-shield gate transistor. Figure 6 shows the Id-Vg characteristic of the transistor test structure. To perform this measurement, the CDTI are biased at 4 V to be sure to have V_CDTI > V_TCDTI, where V_TCDTI is the threshold voltage of the CDTI.
In the W-shield transistor test structure, the channels facing the CDTI are in series with the W-shield MOS, so one can wonder whether V_CDTI can have an impact on the measurements done with the transistor test structure. To study this potential effect, Id(Vg) measurements are performed with several values of V_CDTI. The results can be seen in Figure 7. These curves show that V_CDTI does not impact the MOS threshold voltage (Vt). The subthreshold conduction is also identical for the three biases. Only the conduction part looks different, as a higher V_CDTI leads to more current on the drain. To conclude, as long as it is high enough, V_CDTI does not seem to affect most characterizations that can be done on this structure, and pure backside interface information can be extracted.

Interface States Characterization

The dark current generated by an interface may be dominated by interface states if their density is high enough. In order to extract the interface state density (Dit) for each die, the charge pumping method [8-12], usually used for classic transistors, is applied on the W-shield transistor test structure (Figure 4). This method consists of applying a pulse to the gate which controls the studied interface. By doing so, the interface is alternately accumulated and inverted. During the transitions, the interface states emit and capture electrons, which creates a pumped current that can be measured through the bulk contact.
During accumulation or inversion, there is no measurable pumped current because the interface states are almost all occupied. The implementation of the method is illustrated by Figure 8. The CDTI are biased at 4 V in order to always have an inversion layer at their interface so that, as explained above, the CDTI interface does not contribute to the pumped current; only the backside interface is characterized. The drain and the source are grounded. A trapezoidal pulse (Figure 9) with the following characteristics is applied to the W-shield gate: V_g_low is swept from 2 to 12 V, ΔV_gpulse = 6 V, minimum frequency of the signal f = 20 kHz, and rise and fall time T_r,f = 1 µs. The pumped current is measured through the bulk contact. The maximum measured pumped current is expected to be proportional to the mean Dit (cm−2) [8]:

I_pump_max = q · A · f · Dit (1)

Here, I_pump_max represents the maximum pumped current, q is the elementary charge, and A is the gate area. Figure 10 shows the maximum pumped current as a function of the frequency. It can be seen that the maximum measured pumped current is linear with f, which is in agreement with Equation (1). The slope extraction gives Dit; a minimal numerical sketch of this extraction is given after this passage.

However, some cautions have to be taken. First, it must be checked that the device response is not limited by a geometric component [11]. Indeed, the length of the gate is 10 µm, which may be favorable for such a phenomenon. Actually, when the interface goes back from inversion to accumulation, electrons flow back to the source and drain. If the gate is too long, some electrons do not have time to reach the source or the drain before the interface reaches accumulation, and they recombine. This creates an additional current which does not come from the interface states, and the pumped current is overestimated. An easy way to verify the presence of this effect is to perform the experiment on structures with different lengths.
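A minimal numerical sketch of the Dit extraction via Equation (1): the maximum pumped current is fitted as a linear function of frequency, and the slope divided by q·A gives the mean interface state density. Only the gate dimensions come from the text; the current values are hypothetical, chosen to yield a Dit around 1e10 cm−2.

```python
# Hedged sketch of the Dit extraction from Equation (1):
# I_pump_max = q * A * f * Dit, so Dit = slope(I_pump_max vs f) / (q * A).
import numpy as np

q = 1.602e-19            # elementary charge (C)
L, W = 10e-6, 27e-6      # W-shield gate dimensions from the text (m)
A = L * W                # gate area (m^2)

f = np.array([20e3, 50e3, 100e3, 200e3, 500e3])              # frequency (Hz)
icp_max = np.array([0.0864, 0.216, 0.432, 0.864, 2.16]) * 1e-9  # hypothetical (A)

slope, intercept = np.polyfit(f, icp_max, 1)   # I_pump_max = slope*f + intercept
dit_m2 = slope / (q * A)                       # mean Dit in m^-2
print(f"mean Dit ~ {dit_m2 * 1e-4:.2e} cm^-2")  # convert m^-2 -> cm^-2
```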
If the results depend on the gate length, the extraction of the density of interface states is inaccurate. In this study, only one length is available, so in order to find out if this phenomenon is present or not, the method described in [13] can be used. Here, represents the maximum pumped current, is the elementary charge, and is the gate area. Figure 10 shows the maximum pumped current as a function of the frequency. It can be seen that the maximum measured pumped current is linear with which is in agreement with Equation (1). The slope extraction gives Dit. However, some cautions have to be taken. First it must be checked that the device response is not limited by a geometric component [11]. Indeed, the length of the gate is 10 µm which may be favorable for such a phenomenon. Actually, when the interface goes back from inversion to accumulation, electrons flow back to the source and drain. If the gate is too long, some electrons do not have the time to reach the source or the drain before the interface reaches the accumulation and they are recombined. This creates an additional current which does not come from the interface states and the pumped current is overestimated. An easy way to verify the presence of this effect is to perform the experiment on structures with different lengths. If the results depend on the gate length, the extraction of the density of interface states is inaccurate. In this study, only one length is available, so in order to find out if this phenomenon is present or not, the method described in [13] can be used. However, some cautions have to be taken. First it must be checked that the device response is not limited by a geometric component [11]. Indeed, the length of the gate is 10 µm which may be favorable for such a phenomenon. Actually, when the interface goes back from inversion to accumulation, electrons flow back to the source and drain. If the gate is too long, some electrons do not have the time to reach the source or the drain before the interface reaches the accumulation and they are recombined. This creates an additional current which does not come from the interface states and the pumped current is overestimated. An easy way to verify the presence of this effect is to perform the experiment on structures with different lengths. If the results depend on the gate length, the extraction of the density of interface states is inaccurate. In this study, only one length is available, so in order to find Sensors 2020, 20, 287 7 of 14 out if this phenomenon is present or not, the method described in [13] can be used. In this work, the pumped charges (Q CP = I cp f ) is plotted as a function of the rise and fall time which are equals. If there is not geometric component, the pumped charges should vary as follow: where Q CP is the pumped charges (C) and T r, f is the rise and fall time (s). Measurements performed on W-shield test structures are shown in Figure 11. It can be seen that the pumped charge is actually linear with the rise and fall time, which confirms that the measurements are not affected by a geometric component. If it was the case, the measured pumped charge is no longer linear with time and starts to increase for small rise and fall time because, as for a too long gate, electrons do not have time to reach the source and drain and recombine. 
Here, the loss of the linearity and the decreased of the pumped charge measured for small rise and fall time is due to the fact that transition times are too short and the interface states do not have the time to respond, and so the measured pumped charge starts to fall. Sensors 2020, 20, x 7 of 15 In this work, the pumped charges ( = ) is plotted as a function of the rise and fall time which are equals. If there is not geometric component, the pumped charges should vary as follow: where is the pumped charges (C) and , is the rise and fall time (s). Measurements performed on W-shield test structures are shown in Figure 11. It can be seen that the pumped charge is actually linear with the rise and fall time, which confirms that the measurements are not affected by a geometric component. If it was the case, the measured pumped charge is no longer linear with time and starts to increase for small rise and fall time because, as for a too long gate, electrons do not have time to reach the source and drain and recombine. Here, the loss of the linearity and the decreased of the pumped charge measured for small rise and fall time is due to the fact that transition times are too short and the interface states do not have the time to respond, and so the measured pumped charge starts to fall. The second caution which has to be taken is the frequency range of the measurements. The pumped charges should be constant with the frequency. However, when the frequency becomes too low, the structure is in inversion or in accumulation for too long time and tunneling effect may occur [14]. If this is the case the measured pumped charge increases because the charge pumping method starts to characterize traps which are within the high-K [14] and not at the interface. At high frequency, only the interface is characterized. Measurements of pumped charge vs. the frequency are shown in Figure 12. The pumped charge is effectively constant with the frequency from 10 kHz. Under this frequency, starts to increase. To conclude, the measurements have to be done with frequencies higher than 10 kHz. The second caution which has to be taken is the frequency range of the measurements. The pumped charges should be constant with the frequency. However, when the frequency becomes too low, the structure is in inversion or in accumulation for too long time and tunneling effect may occur [14]. If this is the case the measured pumped charge increases because the charge pumping method starts to characterize traps which are within the high-K [14] and not at the interface. At high frequency, only the interface is characterized. Measurements of pumped charge vs. the frequency are shown in Figure 12. The pumped charge is effectively constant with the frequency from 10 kHz. Under this frequency, Q CP starts to increase. To conclude, the measurements have to be done with frequencies higher than 10 kHz. With the IdVg curves in Section 2, it is important to see if the charge pumping measurements are affected by the CDTI bias. Figure 13 shows the maximum pumped current as a function of the frequency for different CDTI bias. It can be seen that the slopes of the different measurements are the same so the interface state density which is extracted is equal for all CDTI biases. It can be concluded that the CDTI bias does not affect the characterization of the interface state density. Knowing these cautions, measurements are performed on the three wafers using the sampling map of Figure 2. 
Figure 14 presents the Idark vs Dit scatter plot for the three wafers. From the scatter plot no clear tendency can be extracted. Idark looks not to be correlated to Dit in this case. Therefore, the measured dark current differences between the three wafers cannot be explained by a difference of Dit, as all the wafers have an equivalent level of interface states (∼5 × 10 11 /cm 2 ). This value is adequate with what can be obtained with the COCOS measurement (∼3.5 × 10 11 /cm 2 ). Sensors 2020, 20, x 8 of 15 With the IdVg curves in Section 2, it is important to see if the charge pumping measurements are affected by the CDTI bias. Figure 13 shows the maximum pumped current as a function of the frequency for different CDTI bias. It can be seen that the slopes of the different measurements are the same so the interface state density which is extracted is equal for all CDTI biases. It can be concluded that the CDTI bias does not affect the characterization of the interface state density. Knowing these cautions, measurements are performed on the three wafers using the sampling map of Figure 2. Figure 14 presents the Idark vs Dit scatter plot for the three wafers. From the scatter plot no clear tendency can be extracted. Idark looks not to be correlated to Dit in this case. Therefore, the measured dark current differences between the three wafers cannot be explained by a difference of Dit, as all the wafers have an equivalent level of interface states (~5 × 10 / ²). This value is adequate with what can be obtained with the COCOS measurement (~3.5 × 10 / ²). With the IdVg curves in Section 2, it is important to see if the charge pumping measurements are affected by the CDTI bias. Figure 13 shows the maximum pumped current as a function of the frequency for different CDTI bias. It can be seen that the slopes of the different measurements are the same so the interface state density which is extracted is equal for all CDTI biases. It can be concluded that the CDTI bias does not affect the characterization of the interface state density. Figure 13. Ipump_max in function of the frequency for several CDTI bias in order to demonstrate that the CDTI voltage does not affect the extraction of the interface state density. Knowing these cautions, measurements are performed on the three wafers using the sampling map of Figure 2. Figure 14 presents the Idark vs Dit scatter plot for the three wafers. From the scatter plot no clear tendency can be extracted. Idark looks not to be correlated to Dit in this case. Therefore, the measured dark current differences between the three wafers cannot be explained by a difference of Dit, as all the wafers have an equivalent level of interface states (~5 × 10 / ²). This value is adequate with what can be obtained with the COCOS measurement (~3.5 × 10 / ²). Quality of the Passivation Characterization As interface states are not responsible for Idark excursions, the quality of the passivation might be the root cause of the Idark response seen in Figure 1. Indeed, with very similar Dit extracted on the wafers, the field effect passivation, that is, having an electric field at the interface to accumulate charges in order to passivate the interface, can have a key contribution. The electric field is induced Quality of the Passivation Characterization As interface states are not responsible for Idark excursions, the quality of the passivation might be the root cause of the Idark response seen in Figure 1. 
Indeed, with very similar Dit extracted on the wafers, the field effect passivation, that is, having an electric field at the interface to accumulate charges in order to passivate the interface, can have a key contribution. The electric field is induced by negative charges within the oxide (N EFF ). For this analysis the Maserjian's function is used on C-V measurements to estimate the effective charge density within the oxide on the capacitor structure ( Figure 3) [15,16]: Thanks to this function, the substrate doping concentration (N a ) and the flat band voltage (V FB ) can be extracted: where Y min is the minimum reached by the Maserjian's function and ε Si the silicon permittivity. Then it is possible to calculate N EFF : where N EFF is the effective number of charges within the oxide, W MS the metal semiconductor work function and C ox the oxide capacitance. For the measurements, the following characteristics are applied: V g is swept from 2 to 15 V, the signal frequency applied is f = 50 kHz, and the modulation of the signal is 0.02 V. An example of an obtained Y function for one capacitor structure can be seen in Figure 15. Figure 16 presents the scatter plots of Idark as a function of the extracted N EFF for the three wafers. On wafer 1, it was difficult to extract an eventual correlation between Idark and the density of charge within the oxide probably because the Idark variation is weak (Figure 1). However, on wafers 2 and 3, a clear tendency can be identified between N EFF and Idark. In order to observe if there is a more general tendency, a global scatter plot with the three wafers together is shown in Figure 17. On wafer 1, it was difficult to extract an eventual correlation between Idark and the density of charge within the oxide probably because the Idark variation is weak (Figure 1). However, on wafers 2 and 3, a clear tendency can be identified between NEFF and Idark. In order to observe if there is a more general tendency, a global scatter plot with the three wafers together is shown in Figure 17. In this figure, two regions of the scatter plot can be distinguished: a first one where the dark current is almost insensible to NEFF and a second one where the dark current increases according to the Neff reduction (in absolute value). According to [1], the plateau is explained by the fact that when a certain number of charges within the oxide is reached, the interface is fully passivated and the Idark from the backside interface becomes negligible. Therefore, the measured dark current comes from other parts of the pixel, such as DTI, frontside interface, silicon volume, etc. On the contrary, for the lowest NEFF values in absolute value, as on some wafer 2 and 3 dies, the passivation is not efficient enough and a significant Idark appears coming from the BS interface. Here, again, the results obtained are consistent with what can be measured with the COCOS technique (same order of magnitude) These new test structures, backside W-shield gate capacitor and transistor, are functional and enable to use very well-known and quite simple techniques in order to characterize the backside interface of a BSI pixel with W-shield at the end of the process. Thanks to these structures, it is possible to investigate if the measured dark current is coming from the backside interface or not and to determine the possible root causes of this dark current (here, the quality of the passivation). The extractions done with these measurements are in good agreement with what can be obtained with COCOS. 
To be complete, it can be determined that the structures with the W-shield represent the In this figure, two regions of the scatter plot can be distinguished: a first one where the dark current is almost insensible to N EFF and a second one where the dark current increases according to the Neff reduction (in absolute value). According to [1], the plateau is explained by the fact that when a certain number of charges within the oxide is reached, the interface is fully passivated and the Idark from the backside interface becomes negligible. Therefore, the measured dark current comes from other parts of the pixel, such as DTI, frontside interface, silicon volume, etc. On the contrary, for the lowest N EFF values in absolute value, as on some wafer 2 and 3 dies, the passivation is not efficient enough and a significant Idark appears coming from the BS interface. Here, again, the results obtained are consistent with what can be measured with the COCOS technique (same order of magnitude) These new test structures, backside W-shield gate capacitor and transistor, are functional and enable to use very well-known and quite simple techniques in order to characterize the backside interface of a BSI pixel with W-shield at the end of the process. Thanks to these structures, it is possible to investigate if the measured dark current is coming from the backside interface or not and to determine the possible root causes of this dark current (here, the quality of the passivation). The extractions done with these measurements are in good agreement with what can be obtained with COCOS. To be complete, it can be determined that the structures with the W-shield represent the dark reference shielded pixels very well. Their response can be different in terms of dark current than usual pixels (without the shield). However, this can be controlled by comparing the dark current from usual pixels with the dark reference pixel (shielded). The next section will present a charging effect that can be observed. This effect does not affect the results presented in this study, but it has to be known in order to perform the best measurements. Charging Effect Measurements on our W-shield MOS or capacitor are performed up to relatively high voltages to achieve extractions. If they are repeated a second time, and more, characteristics look shifted, as illustrated in Figure 18. This is attributed to a charging effect. The same observation can be done on the capacitor structure. On both structures, the curve shift seems to be related to a shift. During the measurement, reaches high voltage (15 V and above) which can induce a negative charge injection within the oxide and results in higher voltage. In addition to this charging effect, a hysteresis phenomenon may be seen on both structures as well. For example, Figure 19 shows successive IdVg measurements operated on the transistor test structure. is swept from 0 to 12 V and then from 12 to 0 V, and finally from 0 to 12 V again. An effect of charging/discharging can be seen. However, after a measurement the curve may come back to the initial state by applying a negative bias of −5 V for 1000 s on the gate. The example of Id(Vg) measurements can be taken. Figure 20 shows the results of this procedure. After making a third measurement, it can be seen that the curve obtained is almost the same as the first one, and the charging effect is recovered. On both structures, the curve shift seems to be related to a V FB shift. 
During the measurement, V g reaches high voltage (15 V and above) which can induce a negative charge injection within the oxide and results in higher V FB voltage. In addition to this charging effect, a hysteresis phenomenon may be seen on both structures as well. For example, Figure 19 shows successive IdVg measurements operated on the transistor test structure. V g is swept from 0 to 12 V and then from 12 to 0 V, and finally from 0 to 12 V again. An effect of charging/discharging can be seen. However, after a measurement the curve may come back to the initial state by applying a negative bias of −5 V for 1000 s on the gate. The example of Id(Vg) measurements can be taken. Figure 20 shows the results of this procedure. After making a third measurement, it can be seen that the curve obtained is almost the same as the first one, and the charging effect is recovered. Some other effects can be seen associated to this charging effect. First, in the case of the charge pumping, between the first measurement and the other ones, the maximum pumped current slightly increases but does not strongly affect the extraction of Dit (Figure 18). In the case of the C-V measurement, Figure 21 shows the hysteresis effect that can be obtained on the capacitor structure. structure. is swept from 0 to 12 V and then from 12 to 0 V, and finally from 0 to 12 V again. An effect of charging/discharging can be seen. However, after a measurement the curve may come back to the initial state by applying a negative bias of −5 V for 1000 s on the gate. The example of Id(Vg) measurements can be taken. Figure 20 shows the results of this procedure. After making a third measurement, it can be seen that the curve obtained is almost the same as the first one, and the charging effect is recovered. Some other effects can be seen associated to this charging effect. First, in the case of the charge pumping, between the first measurement and the other ones, the maximum pumped current slightly increases but does not strongly affect the extraction of Dit (Figure 18). In the case of the C-V measurement, Figure 21 shows the hysteresis effect that can be obtained on the capacitor structure. Some other effects can be seen associated to this charging effect. First, in the case of the charge pumping, between the first measurement and the other ones, the maximum pumped current slightly increases but does not strongly affect the extraction of Dit ( Figure 18). In the case of the C-V measurement, Figure 21 shows the hysteresis effect that can be obtained on the capacitor structure. Here, the different sweeps have been performed between −10 and 20 V. In addition to the hysteresis effect, it can be seen that the bump initially present in the first sweep disappears in the second sweep (from 20 to −10 V). This bump in C-V measurements can be the result of the interface trapped charge [17,18]. When the measurement starts, the structure is in accumulation and holes can Here, the different sweeps have been performed between −10 and 20 V. In addition to the hysteresis effect, it can be seen that the bump initially present in the first sweep disappears in the second sweep (from 20 to −10 V). This bump in C-V measurements can be the result of the interface trapped charge [17,18]. When the measurement starts, the structure is in accumulation and holes can be trapped by interface states leading to bump creation. 
When the high voltage is reached, the interfaces states are discharged, and they capture an electron, in addition to the charging effect, and so the bump disappears. It reappears with the third sweep showing that holes were captured again. Back to pixels operating conditions, it should be also noted that such conditions are not applied and such effects should not happen. The different extractions presented earlier in this paper were made during the first measurement when the structures are not already stressed. Multiple measurements can be made on these structures since the charging effect can be recovered (unlike COCOS measurements). Conclusions With these MOS capacitor and W-shield gate transistor test structures, it is possible to electrically characterize the backside interface of BSI technology at the end of a process using a tungsten shield. By means of two known characterization methods, Dit and N EFF , which are the two important parameters for dark current, can be extracted. It is, therefore, possible to investigate if the dark current mainly comes from the backside interface, and to discriminate the origin of the backside dark current. In the case presented in this study, the difference in Idark behavior is explained by quality passivation differences of the backside interface between wafers. COCOS measurements are useful to characterize the interface just after a material deposit, however, it cannot be used with a fully processed wafer, unlike the methodology used on the new structures presented in this study. A drawback of this method is the presence of a charging effect that forces some caution on the execution of measurements, but this effect can be recovered and is not present in pixel operating conditions. In addition to these Idark contribution studies, these dedicated devices with associated characterizations can be helpful for process monitoring, TCAD calibration, and reliability works. To go further, a comparison with an analytical model of the dark current generated at the backside interface and TCAD simulations are under study to reinforce all the obtained results.
2020-01-09T09:10:37.929Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "e2a276a891a92425306ecb9c20b923383608561d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/20/1/287/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b74163ceec99ee675789f3fe2883c963f37b6a59", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Materials Science", "Medicine" ] }
203132916
pes2o/s2orc
v3-fos-license
Study of turbulence intermittency in linear magnetized plasma The intermittent behavior of a quasi-coherent density fluctuation is observed in a laboratory plasma. The quasi-coherent fluctuation is localized but intermittent events are observed in the whole region of plasma. Conditional averaging shows the intermittent events propagate from the central region of the magnetized plasma column to the peripheral region. Auto-correlation function of fluctuations and Hurst analysis reveal the intermittency is highly auto-correlated and the Hurst parameter reaches to 0.8, indicating the existence of self-similar behavior and long-range time correlation, and self-organized criticality dynamics might be the mechanism. Cross-bicoherence between different radii shows the nonlinear coupling between the quasi-coherent fluctuation and ambient turbulence, which will contribute to the generation of intermittency of turbulence. Introduction A magnetically confined plasma is one of the systems far from equilibrium and thus exhibits dynamical behavior, and plasma turbulence is a key issue for understanding plasma dynamics. Turbulence intermittency, which is the enhanced particle bursts propagating across the open magnetic field region, has been observed in space plasmas [1,2], fusion plasmas [3][4][5][6][7][8][9], and laboratory plasmas [10,11]. The studies on intermittency reveal that it has a significant influence on plasma, which increases the radial non-diffusive transport and leads to flat density and temperature profiles [12,13]. As a result, the 'main chamber recycling regime' can be dominant [14,15] and the divertor efficiency is reduced. Besides, the intermittency can also increase the interaction of plasma with vacuum wall and thus enhance the erosion of the wall [16]. Furthermore, intermittency may also have a relation with the density limit of the discharge [17]. In order to reveal the underlying physics of intermittent turbulence, different statistical methods have been developed, including probability distribution function (PDF), conditional averaging analysis and Hurst parameter calculation [9,11,[18][19][20]. The intermittency is found to behave with self-similar characteristic and long-range correlations, and self-organized criticality (SOC) like mechanism or avalanche dynamics [18,[21][22][23][24] have been proposed to explain intermittent turbulence events. However, the exact origin of turbulence intermittency still remains unclear, and further experimental evidence should be provided to reinforce the picture of it. Laboratory plasma is very useful to study the turbulent intermittency, because it has excellent reproducibility and controllability and allows multi-point simultaneous measurement. This study was made to reveal the generation mechanism of the turbulence intermittency through the laboratory plasma experiment. Recently, a quasi-coherent fluctuation has been excited in the central region of linear magnetized laboratory plasma in the PANTA device through a large density gradient. The fluctuation is nonstationary and its amplitude varies in time intermittently. The impact of the intermittent event is observed in the whole plasma region, including the peripheral region where the density gradient is weak. This paper provides experimental results indicating a role of SOC dynamics in intermittency and correlation between intermittency and quasi-coherent fluctuation, and the paper discusses the generation mechanism of the turbulence intermittency. 
Experimental setup The experiment is performed in a linear device PANTA [25]. The PANTA is a 4-meter-long cylindrical device. The axial magnetic field, which is almost constant along the axis, is 0.09 T. The working gas is argon and the injected neutral argon pressure is 1 mTorr. Plasma is produced by a helicon wave ( = P 6 kW, = f 7 MHz) with a pulse duration of 500 ms. The center electron density, electron temperature and ion temperature of the plasma are approximatelý -1 10 m , 19 3 3 eV and 0.3 eV, respectively. The main diagnostic system used in this experiment is a frequency comb microwave reflectometer [26], which is installed at 1 meter in the axial direction as shown in figure 1(a). The reflectometer operates in ordinary mode (Omode), and has 29 frequency channels, ranging from 12 GHz to 26 GHz with an interval of 0.5 GHz, which means that it can measure the density ranging from´-1.79 10 m 18 3 tó -8.38 10 m . 18 3 The power spectrum density (PSD) of the incident wave is shown in figure 1(b). In this experiment, the microwave reflectometer starts the data acquisition from 100 ms of the discharge. Figure 1(c) shows the signal of the ion saturation current obtained by a 64-channel probe array at the mid-plane of 1875 mm in the axial direction and indicates that the plasma is stationary at the time during which the reflectometer collects data, as the red dashed lines denote. The reflected and incident waves are obtained and the phase delay between them is thus calculated, which indicates the time-offlight of the wave. The equilibrium density profile is reconstructed by using time-of-flight and phase delay of each frequency component. In working with phase, it is necessary to consider the ambiguity of p m 2 because the obtained phase difference is usually restricted in ( p -, p), here m is the arbitrary integer number. To solve this problem, we initialized the location of the center cut-off layer (corresponding to the channel with the highest frequency) based on the density profile measured with the Thomson scattering system, and the density profile is reconstructed by using the assumption that the differences of distance between two adjacent cut-off layers corresponding to comb frequency components are smaller than the wavelength of each comb component ( ( ) O 2 cm ). Besides, a frequency comb sweep microwave reflectometer has been developed to eliminate the half-wavelength ambiguity of the cut-off layers [27] and the density profile is modified according to the one measured with the comb sweep reflectometer at the same experiment condition. The modified density profile along with those measured by the comb sweep reflectometer and Thomson scattering system are shown in figure 2. These three profiles agree well with each other, and all of them reveal an extremely large density gradient at around = r 3.5 cm. Meanwhile, the fluctuation of electron density, or in other words the density cut-off layer, causes perturbation of phase delay AE. Since no radial-coherent fluctuation is expected in the low density gradient region, and the irregular density burst has a radially elongated structure (shown later), the radial wavenumber (k r ) is considered to be close to zero, which satisfies the small radial wavenumber condition ( where k in is the wavenumber of the incident wave and L N is the density gradient length) [28,29]. In this case, the back-scattering from such low-k r structures in this region is weak and has little effect on the reflected wave from the cut-off layer. 
In the large-gradient region where quasicoherent fluctuation is easily excited, the back-scattering occurs and leads to an increase in the error of radial location. However, as such a structure is localized inside the narrow large-gradient region (discussed later), the error bars of the locations are almost the same as those of the density profile, which are shown in figure 2. Therefore, the effect of backscattering can be neglected in our study, and the phase per-turbationAE can be used to study the characteristics of density perturbation. Characteristics of the quasi-coherent fluctuation The quasi-coherent fluctuation observed in the PANTA is first characterized. The time-frequency spectrum and PSD ofAE at = r 3.67 cm, which is proportional to the PSD of electron density fluctuation, are calculated and shown in figure 3. The frequency spectrum varies in time and two different frequency components are visible in the long-time averaged spectrum. These two components repeat growing and dumping irregularly. The low frequency component (f 2 kHz) is drift wave instability. We call the high frequency component (f 11 kHz) quasi-coherent fluctuation hereafter. This quasi-coherent fluctuation is only excited in the case of high power heating (6 kW), and has a higher frequency than the drift wave frequency (which is usually <5 kHz). Moreover, figure 3(c) shows the power spectrum in the whole frequency domain in the log-log plot, which is similar with other experimental observations [30,31]. Different regions of power spectra scale as power lows, indicating correlations on different timescales. The f −2 decay of the power spectrum indicates a temporal process composed of uncorrelated increments (e.g. Brownian motion) [22,32]. Actually, in the magnetized plasma turbulence, the high-frequency components (scaled as f −2 ) in the time series correspond to small-size transport events. Meanwhile the f −1 decay of the power spectrum is the distinctive feature of processes with long-range temporal correlations and often appears in conjunction with avalanche-like dynamics [21,22]. In the PANTA, the f −1 decay of the power spectrum indicates a correlation between large-scale bursts and small-scale fluctuations. Owing to a similarity between the temporal and spatial spectra of turbulence in magnetized plasma, it is considered that spatial spectra of the PANTA plasma also indicate the power laws within the corresponding range. These power laws are important signatures of the SOC systems [22]. The radial structure of the quasi-coherent fluctuation is shown in figure 4. Figure 4(a) is the radial profile of the PSD ofAE at the frequency of 11 kHz. It shows that the strongest fluctuation is located between = r 3 cm and = r 4.5 cm (denoted by the pink region), where the density gradient is largest as well. It is noted that the fluctuation strength decreases rapidly at the boundary of the large density gradient region. In addition, the formation of the strong gradient and excitation of the localized 11 kHz fluctuation in this region are confirmed by the Langmuir probe measurement [33]. Thus, it is reasonable to conclude that the fluctuation is excited by the gradient. Figure 4(b) is the radial wavenumber (k r ) spectrum, which is obtained by calculating the cross phase of two radial channels at = r 3.83 cm and = r 4.09 cm [34]. In this analysis, the time window is 1 ms and 23 ensembles with an overlapping of 0.5 ms are used. The k r is normalized by an ion sound Larmor radius r s (r~0.5 cm s in Figure 2. 
Electron density profiles measured by reflectometers and Thomson scattering system. the PANTA). Figure 4(b) reveals an extremely large wavenumber of the quasi-coherent fluctuation, which is around −5, indicating that the fluctuation has a small scale and is a pure inward propagating mode rather than a standing wave propagating in two opposite directions. This may help to identify the fluctuation, however, what the quasi-coherent fluctuation is still remains unknown now. Evaluation of intermittency An intermittent density burst is also observed in this experiment. The spatiotemporal evolution ofAE is shown in figure 5. In the region of   r 3.0 4.3 cm, the amplitude ofAE is dominated by the quasi-coherent fluctuation. The amplitude of the quasi-coherent fluctuation changes abruptly with a short timescale (∼0.1 ms). It is observed that sudden increases/decreases in the amplitude of theAE are radially synchronized beyond the outward boundary of the quasi-coherent fluctuation. There is a phase jump around = r 4.3 cm close to the outward boundary of the quasi-coherent fluctuation. A ballistic propagation or avalanche-like propagation can be observed at the outer region. A decrease ofAE starts from around 6.1 ms, and propagates from the central region ( = r 4.3 cm) to the peripheral region ( = r 9 cm), indicating the intermittent burst is a global event. It is noted that a decrease in the phase (i.e. the distance between the antenna and cut-off layer shortens) denotes an increase in the local electron density. The density burst originates from the outward boundary of the quasi-coherent fluctuation region, which indicates that the fluctuation may contribute to the generation of intermittency. It is also noted that theAE associated with the intermittent event in the peripheral region (   r 4. 3 9 cm) has an anti-phase to theAE in the quasicoherent fluctuation region. To investigate the transport of intermittency, it is necessary to separate the intermittent bursts from background turbulence. Conditional averaging analysis is an important tool to extract these intermittent features and to reduce the effect of background perturbations, and the process is as follows [19]. A reference signal is selected and the threshold is set (usually several times of the root-mean-square (RMS) of the reference signal). The intermittent events that have a larger value than the threshold are thus discriminated, and the maxima (or minima) peaks of each event are detected. After setting a time window for averaging, the time series data with the length of time window around the maxima (or minima) are detected. Conditional averaging is achieved by accumulating and averaging the detected data. If the maxima (or minima) detection and data averaging are applied on the same signal, then it is called auto-conditional averaging, otherwise it is called cross-conditional averaging [3]. Auto-and crossconditional averaging analysis allows us to simultaneously extract and average the intermittent bursts at different radii at the same timing, thus the spatial-temporal evolution of intermittency can be studied. In this experiment, a threshold of -1.5 RMS ofAE at = r 5.67 cm is used to detect the intermittent events. The time window is set as 160 μs, which is several times the decorrelation time (discussed later). Besides, we have made sure that the peaks of selected events are separated by at least 80 μs to avoid overlapping the two adjacent bursts. 
Auto-or cross-conditional averaging is thus performed to all channels, and the result is shown in figure 6. Similar with figure 5, phase delay decreases, i.e., density increase is observed and formed in the central region and propagating outward in the peripheral region, indicating the global structure of the intermittency. Figure 6 also reveals that the outward propagation velocity of the density bump is approximately 1 km s −1 , which is consistent with the results in other devices [5,6,11]. It is also noted that inside = r 4.3 cm, the phase delay increases before intermittency bursts, indicating that the Figure 7 gives the radial correlations ofAE. Figures 7(a) and (c) show the cross-power spectrum density (CSD) at inner-outer and outer-outer regions respectively, and figures 7(b) and (d) are the corresponding squared radial coherence. The strong radial correlation at the mode frequency of 11 kHz is clearly shown across the phase inversion layer ( figure 7(b)) and also at 2-3 cm far away from the phase inversion layer ( figure 7(d)). The SOC dynamics may play a role in the intermittent transport [3], and long-range time correlation (also called 'long-term storage' or 'persistence') is the key ingredient of the SOC behavior, which can be studied via autocorrelation function (ACF). The ACFs of phase delay perturbations at = r 3.16 cm and = r 5.32 cm are shown in figure 8. The dashed lines are eye-guide lines (exponential decay: y e t and Lorentzian-like long tail: 2 ). It can be seen that the decorrelation time of density fluctuation at = r 3.16 cm, which is the time lag for local ACF decaying lower than /e 1 (see red dashed line), is approximately 40 μs. The ACF at = r 3.16 cm shows a narrow peak when the time lag is smaller than 40 μs and a slow decay when the time lag is longer than 40 μs, indicating the existence of a long-range correlation. In contrast, the ACF measured at = r 5.32 cm drops rapidly with time and shows no long tail. However, it is not easy to accurately determine the longrange correlation via the tail of ACF. A typical parameter for evaluating the long-range correlation is called Hurst exponent (H) [35], which expresses the scale of the long-range increasing of a time series data. According to [34], H ranges from 0 to 1. < < H 0. 5 1 indicates that there is long-range time correlation, while < < H 0 0.5 indicates long-range anticorrelation. If = H 0.5, the process is an uncorrelated random process. The most commonly used method to calculate H is rescaled range ( / R S) analysis [36,37]. Here / R S stands for the cumulated range (R) of a time series data over its standard deviation (S), and H is obtained by calculating the scale of / R S with time. However, it is basic to test the stationarity of the data samples before calculating H [28,38], which cannot be accomplished with / R S analysis. Therefore, we started the test with structure function (SF) analysis. The structure function is defined as follows: where x is a time series data of signal of interest, τ is the time lag, q is the order of structure function, i denotes the ith time point and á ñ denotes the ensemble average. If the signal is stationary, then should be constant. Figure 9(a) shows the SFs for different orders ( = q 0.5, 1.0, 2.0, 3.0) of the phase delay perturbationAE at = r 4.35 cm. It is clear that is approximately constant with zero slope for all orders for t between 50 μs and 3 ms, which indicates that the data is stationary. The / R S method is used to calculate the Hurst component. 
Figure 9 figure 10. It is clear that at the region where the quasi-coherent fluctuation is located (   r 3.0 4.5 cm denoted by the pink region), the Hurst exponent is larger than 0.7, and the largest Hurst exponent reaches around 0.8. In contrast, the Hurst exponents out of = r 4.5 cm are mostly between 0.6 and 0.7. This result indicates that the density bursts at the quasi-coherent fluctuation region have a long-range time correlation and SOC dynamics exist in this region. Since the intermittency originates from the outward boundary of this region (figure 5), it is thus reasonable to speculate that the origin of intermittent density burst propagation is driven by SOC dynamics when the critical threshold is reached due to the quasi-coherent density fluctuation. Nonlinear coupling with ambient turbulence For intermittent events, another important property is the interplay between large-and small-scale fluctuations. Behaviors of the ambient turbulence are thus studied. The spatial and temporal scales of turbulence, which is obtained by a Bessel filter with bandpass frequency from 20 kHz to 50 kHz, are considered to be smaller and shorter than those of quasicoherent fluctuation. Here the frequency range of the turbulence is determined based on the range where the power spectrum is scaled asf 2 ( figure 3(c)), i.e. the micro-turbulence range. Besides, after testing several different ranges between 20 kHz and 100 kHz, we found that the turbulent behaviors are almost the same. The envelope of this microturbulence is thus calculated with Hilbert transform, as shown in figure 11. The power spectra and coherence of envelopes of ambient turbulence at = r 3.83 cm and = r 4.09 cm are given in figures 12(a) and (b), respectively. It is clear that even though the spectra are relatively weak, the cross-coherence of the turbulence envelope is significant at the frequency corresponding to the quasi-coherent fluctuation (f 11 kHz), which suggests that the ambient turbulence closely correlates to the quasi-coherent fluctuation and the radial nonlinear coupling with the quasi-coherent fluctuation is thus worth investigating. Correlation with low frequency waves (f 4 kHz) are also suggested but the low frequency modes seem not to have long radial correlation, as discussed later. To reveal the relationship between ambient turbulence and intermittency, the two-point cross-bicoherence, which is an index of multi-scale coupling at different radii, are calculated [39][40][41]. The two-point cross-bicoherence is defined as where X is the Fourier representation of time series data x and B is the two-point cross-bispectrum defined as where X* denotes the complex conjugate of X. The summed two-point cross-bicoherence is written as where N is the number of the elements satisfying = + f f f . is visible. This means that the ambient turbulence is modulated by the quasi-coherent fluctuation. Although turbulence modulation at lower frequency is suggested in figure 12 and summed two-point cross bicoherence is significant in the low frequency range, there is no clear peak in the low frequency range, as shown in figure 13(b). The turbulence modulation is also observed in the outer region. The summed two-point cross-bicoherence at = f 11 kHz is larger than that observed in the inner region. A small but clear peak is present in figure 13(e). This indicates that nonlinear coupling between ambient turbulence and the quasi-coherent fluctuation across the phase inversion layer exists. 
A plausible conclusion is that the nonlinear three-wave coupling between the microturbulence and the quasi-coherent meso-scale structure correlates global or long-range behavior of the intermittent event. By now, we can have a physical view of the intermittency generation according to the results above. A quasicoherent fluctuation is firstly excited by the large density gradient, which perturbs the background density. The initial positive and negative density bursts are formed and propagate outward and inward, respectively, due to the SOC dynamics when the critical threshold is reached. Because of the nonlinear three-wave coupling with the quasi-coherent fluctuation, the distant turbulence, which means the turbulence in the peripheral region and radially far away from the quasi-coherent fluctuation, is modulated, which enhances the radial correlation of density bumps and extends the ballistic radial propagation. The radial-scale of the burst propagation is thus elongated and the global intermittency is generated. Identification of the quasi-coherent fluctuation and discussion of avalanche-like transport behavior are left for future work. Reflectometry in conjunction with multi-probe systems will give more details of the spatial structure of the quasicoherent mode. In order to identify the avalanche transport [21][22][23][24], simultaneous measurement of temporal evolution of density gradient is required. Fast reconstruction of the density profile by using the microwave frequency comb reflectometer is one of the promising methods. Such a fast reconstruction method is under development. Summary In order to understand intermittency of turbulence and its impact on transport in magnetized plasma, the intermittent behavior of quasi-coherent density fluctuation and associated radial propagation of density bump/hole in the whole measured plasma region are investigated experimentally. A quasicoherent fluctuation (f 11 kHz) is excited in the narrow steep density gradient region ( = r 3.0 4.5 cm) in the PANTA. An abrupt change in the amplitude of the quasicoherent fluctuation accompanied by global radial propagation of the density bump in the peripheral region ( = r 4.3 9 cm) is identified. To observe long-term storage, long-range correlation and self-similarity of fluctuations, signal processing and data analysis are performed and results indicate: (i) the autocorrelation function of fluctuations displays a long tail at the inner region ( < r 4.5 cm) and (ii) the Hurst exponent is much larger than 0.5 (H 0.8) at the quasi-coherent fluctuation region, indicating that the intermittency is highly auto-correlated and has long-range memory, which resembles that in a SOC system. (iii) Finite crossbicoherences between two-distant locations are observed, which means there is nonlinear long-range coupling between ambient turbulence and the quasi-coherent fluctuation. The generation mechanism of intermittency is accordingly revealed. The background density is perturbed by the highfrequency quasi-coherent fluctuation, which is excited by the large density gradient. When the critical threshold is reached, the radial propagation of the density burst can be triggered due to the SOC dynamics. The distant turbulence, which is coupled with the quasi-coherent fluctuation, enhances the radial correlation of density bursts. It can elongate the radial propagation and can form the global intermittency. 
The observed intermittent behavior of turbulence and its characterization will have deep impact on our predictability of the evolution of turbulent plasmas.
2019-09-17T02:59:46.337Z
2019-10-07T00:00:00.000
{ "year": 2019, "sha1": "81cf03f792ca7f7fc2e55fad0bb22078d63f2395", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1361-6587/ab434f", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "b41810495425d476a062acf74c5c34bf844218f6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250275821
pes2o/s2orc
v3-fos-license
Case Report: A Case of Renal Cell Carcinoma Unclassified With Medullary Phenotype Exhibiting a Favorable Response to Combined Immune Checkpoint Blockade Renal cell carcinoma unclassified with medullary phenotype (RCCU-MP) is an extremely rare variant of kidney cancer with poor prognosis. Recently, immune checkpoint inhibitors (ICIs) have been the mainstay of treatment for advanced clear cell renal cell carcinoma (RCC). However, the efficacy of ICI in the treatment of RCCU-MP remains unclear. Here, we report about a 63-year-old Japanese man who was referred to our hospital with a diagnosis of RCC of the left kidney with multiple–lymph node involvement (cT3aN1M1). The patient underwent nephrectomy with lymph node biopsy, which was histopathologically diagnosed as RCCU-MP. Thereafter, he received combined immune checkpoint blockade with nivolumab and ipilimumab. After induction therapy, follow-up computed tomography revealed shrinkage of the metastatic lymph nodes. Moreover, the patient was relieved of his subjective symptoms and his performance status improved. However, after 15 months, maintenance ICI therapy was discontinued because of disease progression, and the patient died 28 months after diagnosis. Longitudinal analysis of peripheral blood mononuclear cells revealed increased stem cell memory and central memory CD8+ T-cell subsets during response to therapy and enhanced expression of exhaustion markers on CD8+ T cells upon treatment resistance. Combined immune checkpoint blockade could be effective in the treatment of metastatic RCCU-MP. INTRODUCTION Renal medullary carcinoma (RMC) is a rare and aggressive type of kidney cancer, typically associated with sickle cell trait. Clinicopathologically, RMC is characterized by the loss of SWI/SNF-related matrix-associated actin-dependent regulator of chromatin subfamily B member (SMARCB1)/integrase interactor 1 (INI1) protein (1). RMC without sickle cell trait, classified as renal cell carcinoma unclassified with medullary phenotype (RCCU-MP), is an extremely rare variant of kidney cancer. A small number of case reports suggest that RCCU-MP is associated with poor prognosis due to an unfavorable response to molecular-targeted therapies and chemotherapy (2,3). In contrast, the effect of immune checkpoint inhibitors (ICIs) on RCCU-MP remains unknown. Here, we report a case of RCCU-MP showing a favorable response to combined immune checkpoint blockade therapy with PD-1 inhibitor nivolumab and CTLA-4 inhibitor ipilimumab. To the best of our knowledge, this is the first report to describe a case of RCCU-MP treated with ICI. In addition to case presentation, we report the results of mass cytometry and multicolor flow cytometry analysis of peripheral blood mononuclear cells (PBMCs) collected from the patient at three different time points during ICI treatment. CASE REPORT A 63-year-old Japanese man presented to the referring hospital with a 1-month history of fever, back pain, and macroscopic hematuria. He had been undergoing hemodialysis for past 15 years for chronic kidney disease of unknown origin. Computed tomography (CT) showed a poorly enhanced tumor within the upper pole of the left kidney measuring 1.4 cm × 3.6 cm × 2.2 cm with multiple-lymph node involvement including para-aortic, retrocrural, and supraclavicular lymph nodes ( Figure 1). Metastasis to distant organ was not detected. A clinical diagnosis of RCC (cT3aN1M1) was made, and the patient was referred to our hospital for treatment. 
As the patient was symptomatic from the local disease and was already on hemodialysis, radical nephrectomy and paraaortic lymph node biopsy were performed. The pathological characteristics of this case has already been published (4). Histologically, the tumor consisted of rhabdoid cells, with notable lymphocyte infiltration and large areas of necrosis. Immunohistochemically, the tumor was negative for SMARCB1, and PD-L1 expression was observed in 20% of cells (Supplementary Data 1). All sampled lymph nodes were positive for malignancy. Because the patient had no evidence of sickle cell trait or sickle cell disease, he was diagnosed as having RCCU-MP (pT3a, pN1). One month after nephrectomy, the baseline CT scan showed para-aortic lymph node measuring 40 mm × 30 mm, retrocrural lymph node measuring 27 mm × 19 mm, and left supraclavicular lymph node measuring 23 mm × 21 mm. We administered a combined immune checkpoint blockade therapy with nivolumab at a dose of 240 mg and ipilimumab at a dose of 1 mg/kg every 3 weeks as induction therapy for the management of residual lesions. After four cycles of induction therapy without grade 3 or 4 immune-related adverse events, follow-up CT revealed partial responses with a substantial decrease in the size of the affected lymph nodes (Supplementary Data 2). The patient became afebrile, and his performance status evidently improved by the induction therapy. In addition, there was a significant reduction in serum C-reactive protein (CRP) levels. Taking the results into consideration, we considered induction therapy to be effective, and the patient continued maintenance therapy with nivolumab at a dose of 240 mg every 2 weeks. After 15 months of maintenance nivolumab therapy without disease progression, the patient presented with left lower limb edema because of left common and external iliac lymphadenopathy. Palliative radiation therapy at a dose of 50 Gy relieved the patient's symptoms. However, after 1 month, his blood test results showed abnormal liver function, and CT revealed obstructive jaundice requiring biliary drainage. We considered this situation as disease progression, discontinued nivolumab, and switched the therapy to chemotherapy consisting of carboplatin and paclitaxel. Chemotherapy was discontinued after two cycles due to declined performance status and side effects of myalgia. The patient received best supportive care and died 2 months later due to disease progression. The total time span between diagnosis and death was 28 months. To explore the longitudinal change in the profile of circulating T lymphocytes during ICI treatment, we performed mass cytometry and multicolor flow cytometry analysis of PBMCs isolated from the patient's blood samples, which were obtained at three different time points: beginning, end of induction therapy, and when the disease had progressed. This analysis was conducted according to the protocol described by Kagamu et al. (5). PBMC samples obtained before treatment and at the completion of induction therapy, when treatment response was evident, were subjected to mass cytometry. Mass cytometry data visualized using t-distributed stochastic neighbor embedding (t-SNE) technique revealed an apparent increase in stem cell memory and central memory CD8 + T cells expressing CD62L, CCR7, and CD27 ( Figure 2A) . 
This finding was supported by multicolor flow cytometry analysis showing an increase in the proportion of central memory (CD45RA − CCR7 + ) and naïve (CD45RA + CCR7 + ) CD8 + T cells after induction therapy, whereas the proportion of effector (CD45RA + CCR7 − ) and effector memory (CD45RA − CCR7 − ) CD8 + T cells decreased (Supplementary Data 3A). A similar increase in the stem-like population was also observed in CD4 + T cells ( Figure 2B). Upon Multicolor flow cytometry analysis also revealed that the population expressing exhaustion markers such as PD-1 + , LAG-3 + , and CTLA-4 + consistently increased upon treatment resistance across all CD8 + T cells and Th1 CD4 + cells (Supplementary Data 3B). In addition, the fraction of CD62L low CD4 + T cells among total CD4 + T cells before treatment, a parameter previously reported to be associated with response to ICI in non-small cell lung cancer, was 39.6%, classifying the present case to the responder group (5). RMC is also a rare, aggressive type of renal cancer, strongly associated with sickle cell trait or sickle cell disease. Most patients present with a metastatic disease. RMC is considered to originate from the collecting duct, and chemotherapy is commonly used for its management. However, several reports have demonstrated poor outcomes. Ezekian et al. reported 159 cases of RMC, wherein majority of the patients underwent surgery (60%) and chemotherapy (65%), with a median survival of 7.7 months (6). Recently, a few case reports describing RMC with a durable response to ICI have been published. Beckermann et al. reported a case of recurring RMC with lymph node metastasis, which showed complete response to PD-1 blockade, lasting for more than 9 months (7). Sodji et al. described two cases of RMC treated with nivolumab (8). In this report, one patient showed durable response with nivolumab for more than 15 months, whereas the other case showed disease progression in 3 months. In our case, the patient showed durable response to ICI for more than 15 months. It is intriguing that immunotherapy is also reported to be effective in soft tissue sarcoma with SMARCB1 loss (9,10). Although the mechanism is unknown, it is possible that immunotherapy could be effective for SMARCB1-deficient tumors regardless of the tumor type. Future accumulation of data is awaited. To the best of our knowledge, this is the first case report describing RCCU-MP treated with ICI. Two studies reported RCCU-MP cases treated with molecular-targeted therapy. Lai et al. reported a case of metastatic RCCU-MP treated with everolimus and bevacizumab, with unknown outcomes (2). Colombo et al. also reported a case of metastatic RCCU-MP treated with sunitinib and sorafenib; however, the patient died within 10 months of follow-up (11). Combination therapy with tyrosine kinase inhibitors (TKIs) predominantly targeting vascular endothelial growth factor and ICI is the standard treatment for ccRCC. Whether TKIs have additive or synergistic effect to ICI in RCCU-MP characterized by SMARCB1 loss remains unknown. Further accretion of RCCU-MP cases treated with a variety of combination therapies including ICI is necessary to determine the optimal treatment for this extremely rare variant of renal cancer. To examine the change in the profile of circulating T lymphocytes during ICI treatment, we performed mass cytometric analysis and multicolor flow cytometry analysis of PBMC. 
The results showed a shift of CD8 + T cell to naïve and central memory subpopulation, which are less exhausted and reported to possess strong anti-tumor activity during response to treatment (12). This phenomenon could be attributable to CTLA-4 blockade that activates and recruits CD8 + T cell subpopulation distinct from those activated by anti-PD-1 monotherapy through activation of CD4 + T cells, especially Th1-like T cells (13,14). Mass cytometry and multicolor flow cytometry also showed decreased expression of PD-1 during response to ICI and exhaustion of T cells upon treatment resistance, as evidenced by an increased population expressing PD-1, LAG-3, and CTLA-4. In addition, CD62L low CD4 + T cells, which are reported to be predictive of response to ICI in nonsmall cell lung cancer, were also enriched in the peripheral blood, favoring response to ICI. Taken together, our data clearly show a dynamic shift in peripheral blood markers following ICI, suggesting the potential for development of peripheral blood markers to predict response to ICI. The patients with 1% or greater PD-L1 expression have been demonstrated to receive significant clinical benefits from ipilimumab plus nivolumab compared with nivolumab alone. In contrast, those with less than 1% PD-L1 expression showed comparable outcomes to both treatment (15,16). Thus, selection of ipilimumab plus nivolumab in the present case was reasonable. In the present case, LAG-3-expressing T cells were increased at the time of disease progression. This may, in turn, imply that anti-LAG-3 therapy could have been effective in our case. In patients with advanced malignant melanoma, anti-LAG-3 antibody relatlimab has shown superior outcome in combination with nivolumab compared with nivolumab alone (17). Further research is required to explore novel ICI combination based on tumor microenvironment and, possibly, on peripheral blood markers as well. The limitation of this study was the absence of sequencing data of tissue-infiltrating lymphocytes (TILs), which would allow analysis of T-cell immunity in the tumor microenvironment, as well as the interaction between T cells in the tumor microenvironment and circulation. The TIL analysis could not be done due to the lack of sufficient viable cells in nephrectomy specimens for sequencing. It has been reported that central memory CD8 + T cells in TIL increase in those responding to ICI (18). Comprehensive analysis involving peripheral T cells and TIL is required to elucidate integrated T-cell immunity in response to ICI. We presented the first report describing RCCU-MP treated with ICI therapy. The patient showed durable response to a combination of ipilimumab and nivolumab. Further research involving larger cohort is needed to develop effective combined ICI treatment for RCCU-MP. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Kyoto University's Institutional Board (approval number G52). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. AUTHOR CONTRIBUTIONS MT collected data and drafted the manuscript. SK collected data and interpreted the results. 
TKa made the pathological diagnosis. YF and TYo cared for the study patient and collected clinical data. TYa edited the manuscript. HK performed the analysis and interpreted the results. TKo edited the manuscript. SA supervised the work, interpreted the results, and edited the manuscript. All authors contributed to the article and approved the submitted version.
2022-07-05T14:30:20.047Z
2022-07-05T00:00:00.000
{ "year": 2022, "sha1": "0d1599f297d5d2a805b6f50f96ea671fc6cb2d71", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "0d1599f297d5d2a805b6f50f96ea671fc6cb2d71", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
35016327
pes2o/s2orc
v3-fos-license
EXTENDING CUSTOMER RELATIONSHIP MANAGEMENT: FROM EMPOWERING FIRMS TO EMPOWERING CUSTOMERS Purpose – The focus of customer relationship management (CRM) literature has been predominantly on the firm perspective and on IT, not on customer or service orientation and value co-creation. This paper explores and analyses contemporary CRM frameworks and suggests future research directions. To achieve this, a thorough literature review on CRM is conducted focusing on recent advances within CRM. This provides a good basis for critically analysing the current status of both CRM theory and practice. Design/methodology/approach – We review CRM literature published 2003–2011. Based on the literature review, we introduce a conceptual framework of the changing role of customer data in the CRM framework. Findings – Literature has not adequately addressed the role of the emerging service orientation, value co-creation and the opportunities provided by new technology and communication channels. Drawing on a thorough CRM literature review, we argue that a fundamental change in CRM thinking is needed to shift the focus of CRM from empowering firms to empowering customers. INTRODUCTION Customer Relationship Management (CRM) is generally defined as the management of mutually beneficial relationships (LaPlaca, 2004), in which customer data often has a major role (see e.g. Verhoef and Langerak, 2002). The diversity of the theoretical, practical, and managerial discussion around CRM is well characterized within its current domain that often applies two classifications: the strategic and the operational perspective (Richards, et al., 2006). From the strategic perspective, the core idea of CRM is to develop strategies to attract (the right) customer s and maximi ze their lif etime value by f osteri ng their loy alty. CRM i s all a bout acquiring, cultivating, managing, and retaining customers, which is why it underlines the importance of relationship strategy and the process used to identify customers, create customer knowledge, build customer relationships, and shape customer perceptions of the firm and its products and solutions. Furthermore, strategic CRM determines how a firm relates to its customers via channels, messages, products, and services (Richards et al., 2006). The operational perspective on CRM, in turn, deals with automating customer-facing processes such as interactions and general front-office processes including sales, marketing and customer service. According to Peppers and Rogers (2011, p. 9), operational CRM 'focuses on the software installations and the changes in process affecting the day-to-day operations of a firm -operations that will produce and deliverer different treatments to different customers'. This definition also reveals the very nature of the customer-focused way of doing business: treat different customers differently. Both practitioners and scholars identify analytical CRM, referring to plans needed to build customer value by managing customer databases to perform data analysis like data mining, as a third angle of CRM (e.g. Peppers and Rogers, 2011, p. 9). Conceptual complexity around CRM is further deepened, for example, by Parvatiyar and Sheth (2001) who underline the importance of CRM in integrating different company functions such as marketing, sales, customer service and supply chain functions to enhance efficiency in delivering value. 
The process-oriented definitions encourage companies to gather customer data, identify the most valuable customers over time, and increase customer loyalty by providing customized products and services (Rigby et al., 2002). In contrast, the managerial meaning of the term CRM refers to the collection of customer data and other activities related to the management of the customer-firm interface (Boulding et al., 2005), and so resembles the definition of operational CRM. Despite the increasing managerial interest in CRM, as well as the scholarly interest in both its operational and strategic perspectives, the CRM activity being undertaken by firms may be inadequate. This is due to many reasons. Companies are increasingly shifting attention from selling goods to supporting customers' value-creating processes, which is related to the current marketing thinking emphasizing intangibility, exchange processes and relationships (e.g. Vargo and Lusch, 2004;2008). As part of their quest to provide a better service, firms are establishing service applications where customer data is used for the benefit of the customer instead of the overarching focus on firm's value creation, as is largely emphasized within the contemporary CRM framework. Moreover, perceptions of the conventional roles of customers and firms are constantly being adjusted and reconfigured. Both customers and firms implement new ways to engage in each other's value-creating processes, often referred to as value co-creation. Certainly, the changes in the operational CRM and communications landscape, such as the evolution of the customer from a passive receiver of marketing communications to an active partner and discussant (Hennig-Thurau et al., 2010), opens up new opportunities for value creation for the firm and the customer, as well as offering a new source of customer data. To avoid acting on a yesterday's logic, CRM needs to evolve and reconfigure its very nature in order to better serve future business purposes. To address this challenge, the purpose of this paper is to explore and analyse the contemporary CRM framework and identify future research directions. To achieve this, a thorough literature review is conducted focusing on recent advances within CRM. This provides a good basis for critically analysing the current status of both CRM theory and practice as well as provide implications for future research. The remainder of this article is structured as follows. First, the research methodology is presented including a detailed description of the literature review process. Second, the results of the literature review are discussed in two phases: i.e. empowering firms and empowering customers, after which implications for the CRM framework are provided before the paper concludes with a discussion section. RESEARCH METHODOLOGY The reasons for conducting the literature review were twofold. First, it provided a systematic way to evaluate the characteristics of the recent developments within the CRM framework. Second, it provided well-grounded insights into emerging CRM trends and shifts in thinking. Consequently, the literature review offered a conceptual and empirical basis for addressing the research purpose of exploring and analysing the contemporary CRM framework and identifying future research directions. Given the diverse nature of the CRM framework, the studies that have contributed to its domain radiate from a number of journals from various disciplines. 
Consequently, online journal databases (ABI/Inform, EBSCO Business Source Premier, Emerald Full-Text, and Web of Science), journal special issues and other literature reviews were used to inform the current literature review. In order to assess the contemporary CRM domain the search was limited to articles published after the review period (1992)(1993)(1994)(1995)(1996)(1997)(1998)(1999)(2000)(2001)(2002) used in a study by Ngai (2005). In the first phase, a search was conducted using the term 'customer relationship management'. This yielded over 2,500 scientific articles prompting the authors to limit the search to articles that contained the search term in the article title, keywords, or abstract. An additional manual search in international, scientific, peer-reviewed, high-quality marketing journals, for example The Journal of Marketing, Journal of the Academy of Marketing Science, Journal of Business Research, was conducted. In a manner similar to Ngai (2005), the articles' overall contribution to t h e c u r r e n t C R M l i t e r a t u r e a n d t h e j o u r n a l ' s r e l e v a n c e i n t e r m s o f b u s i n e s s , m a r k e t i n g , management, IT, and IS research were evaluated. Articles unrelated to CRM were excluded. The process resulted in 154 papers after which emphasis was placed on identifying the most relevant articles in terms of the paper's objective, theoretical background, methodology, results and overall contribution. The intention was to ensure that the literature in the CRM field that appeared after 2002 was identified with a focus on the studies most aligned with the research objectives. Thus, papers that managed to clearly contribute to the CRM domain theoretically, practically or methodologically were included to the final selection of papers. Papers that didn't directly address these issues and characteristics or were based on mixture of different theoretical approaches were excluded. These selection criteria further decreased the number of appropriate papers to 50. Table 1 lists the 50 papers used for the final review and classifies the papers by year of publication (in descending order), author(s), paper title, journal and an indication of whether the paper is conceptual or empirical. Table 1 about here" "Take in Given the research purpose of exploring and analysing contemporary CRM frameworks as well as identifying future research directions, the more detailed analysis of the selected papers were driven by an emphasis on CRM related themes and characteristics addressed by the papers. The researchers avoided taking a too narrow or limited approach to CRM, but concentrated on identifying emerging themes and broader perspectives addressed by the papers aiming at building a condensed analysis of the CRM framework. RESULTS The results of the literature review illuminate the pluralistic nature of CRM research. The discussion around CRM is multifaceted conceptually, empirically and practically. Since 2003, the domain has become even more fragmented and empirically inconsistent, which has impeded the development of a synthesized and common theoretical basis and as a result, can affect its ability to adapt to changes in the external environment. Despite the diversity, an emerging shift in orientation was identified that captures both the potential and challenges of future CRM. This fundamental change in perspective is next described in the form of the two evolutionary phases of CRM. 
From empowering firms CRM with a separate identity evolved in the early 1990s due to the data explosion of the 1980s. Vendors such as Siebel introduced commercial hardware and software solutions to better manage the overwhelming amounts of customer data confronting firms and to automate the sales process through contact management tools. After its introduction, CRM quickly developed into a rather firm-oriented construct. Commercial hardware and software solutions, such as sales force automation (SFA) and customer service and support (Kumar and Reinartz, 2006) were introduced to assist firms to better manage the sales force and customer service and support functions. Although there was a common interest in clarifying the misconception that CRM merely offered technological and software solutions to the rising number of challenges related to the management of customer data (Verhoef and Langerak, 2002), CRM was still centred on using customer data for managing customer relationships for the benefit of the firm. In the mid-1990s the rise of analytical CRM allowed firms to analyse large quantities of customer data and identify relevant behavioural data and relationship development stages (Peacock, 1998). It was argued that there was a natural fit between data mining and CRM (Shankar and Winer, 2006; see also Homburg et al., 2009). This was true especially in contexts where the amount of data from a single customer was substantial and provided opportunities to identify different customer segments such as those with the most potential growth and most profitable customers and also to identify emerging customer trends. Originally, both practitioners' and scholars' interest in CRM was driven by the paradigmatic change from transaction-based marketing to the management of customer relationships. In the early-2000s customer centricity, although having a slightly different meaning than the concept has today, was considered the basic building block of CRM (Bose, 2002; see also Bolton, 2004). At the core of customer-centric orientation was the interest in using CRM software to develop and establish long-term relationships with customers aimed at improving customer service and satisfaction (Stefanou et al., 2003; see also Rigby and Ledingham, 2004). During this phase, there seemed to be a common understanding that information technology would help the management of customer relationships (Dewhurst et al., 1999;Karimi et al., 2001; see also Campbell, 2003). However, gradually the research into CRM evolved under multiple banners resulting in a fragmented set of approaches, definitions, and research results. Firm centricity took over from customer centricity -research centred on firms' CRM activities such as segmentation, identification of the most profitable customers, and cross-selling (see e.g. Ryals, 2005). Firms used customer data instrumentally to serve their own purposes. Customer data was the firm's asset, forfeiting customer centricity and leaving it unguarded. More recently, the strategic nature of CRM has been emphasized over the view that CRM is a process, philosophy, capability or technology (Zablah et al., 2005). Strategic CRM is about treating each customer differently, and consequently, maximizing the lifetime value of each customer to the organization (Peppard, 2000;Reichheld, 1996). According to Peppers and Rogers (2011, p. 7) in today's business environment 'all businesses will be embracing customer strategies sooner or later, with varying degrees of enthusiasm and success'. 
This is mainly due to two factors: customers want to be treated individually and as a strategy it is a more efficient way of doing business. In practice, strategic CRM includes different ways of 'improving shareholder value through the development of appropriate relationships with key customers and customer segments' (Payne and Frow, 2005, p. 268; see also Payne and Frow, 2004). These include the generation of customer data, identifying the most valuable customers over time, enhancing customer loyalty by interacting with the customers, and providing customized services and products, all the while reducing costs (Rigby et al., 2002;Cao and Gruca, 2005). This basic idea of strategic CRM has also been referred to as the IDIC-model (identifying, differentiating, interacting, and customizing) (Peppers and Rogers, 2004;, emphasizing the importance to a successful CRM strategy of selecting the right customers at the very beginning. The lack of a common definition of CRM and the pluralistic nature of the concept have resulted in the research field becoming fragmented and inconsistent. For example, Reinartz et al. (2004) show that the implementation of CRM processes has the strongest effect on relationship maintenance followed by relationship initiation. However, they also argue that CRM technology might even have a negative effect on performance (see also Jayachandran et al., 2005). A recent study confirms this view by showing that CRM does not affect firm performance directly as the link is mediated by cost leadership and differentiation (Reimann et al., 2010). Towards empowering customers Recent research, with few exceptions (e.g., Ernst et al., 2011), has not succeeded in reclaiming customer centricity as the fundamental determinant of the CRM framework despite the increasing number of CRM articles emphasizing the strategic nature of CRM. The current CRM framework has not managed to address the customer perspective adequately, and consequently customers may feel that they receive no benefits from "giving" their data for the use of a firm: the dark side of CRM may become reality (Boulding et al., 2005, p. 159;Frow et al., 2011; for a practical illustration, see Humby et al., 2003, pp. 160-161). Private and public initiatives are increasingly reclaiming ownership of customer data on behalf of the customers. Service applications are developed that refine customer data for the benefit of the customer. For example, Nutrition Code by the Finnish retailer, Kesko Corporation, combines point-of-sale data with the nutritional information of the groceries and gives this information back to customers. As a result, customers are provided with information about the nutritional value of their groceries. In the same vein, the recent MyData initiative by the UK government, in which major businesses provide customers with an opportunity to reclaim their data for their own use, offers yet another sign of the emerging change in thinking in terms of customer data utilization. The MyData initiative attempts to put consumers in charge in a way that they are better able to get the best deals from retailers and pays special attention to the most vulnerable consumers who may not otherwise benefit from the rapidly changing technological and social environment. The British government's recent publication "Better Choices: Better Deals. Consumers Powering Growth" opens up the MyData initiative and finds three main drivers of the changing business environment (BIS, 2011). 
First, the growth of new technologies such as the internet and increasing use of mobile phones and applications have opened up new channels for consumers to find, compare and buy goods and services. Second, the increasing use of data coming from customers' purchasing histories has allowed firms to understand their customers' purchasing behaviour, facilitating personalized recommendations based on transaction history. Third, consumers are now collaborating across the economy for example by sharing products such as cars or bicycles or collectively giving feedback and offering new product development ideas. Current CRM research does not address these new forms of customer data usage, which confirms the need to reconfigure the role of customer data within the CRM framework. As a reaction to the forfeiting of customer centricity, some have recently characterized the CRM framework (as well as the general marketing framework) as a power shift from marketers to customers (e.g. Hennig-Thurau et al., 2010). The private and public initiatives reclaiming ownership of customer data for the customers illuminates this view, as does a more holistic cross-functional view of CRM and its role in managing customer relationships to co-create value (Lambert, 2010). A specific and important aspect of co-creation is the framework of social CRM (formerly called CRM 2.0), which refers to the transformation of communications technologies. This follows the advent of social media, which has changed how companies interact with their customers and prospects, and how customers interact with each other (Greenberg, 2010a). This dates back to 1996 when Stone, Woodcock, and Wilson (1996) proposed that future customers would increasingly manage their firm relationships themselves with the help of new technologies and that 'companies need to prepare themselves for this world ' (p. 675). In this new world, customers would increasingly use new digital communications channels to manage their relationships with firms (see Zheng, 2011). Social CRM highlights the importance of optimizing customer experience by placing greater emphasis on the growing number of customer touch points with the company. The purpose of the new communications channels such as blogs, discussion forums, and social networks in the new CRM framework is simply to engage customers in a regular dialogue. Greenberg (2010b, p. 139) states that the role of the new communication channels is 'to provide communication pipelines with your customers so you can have a conversation with them regularly'. Consequently, value co-creation becomes an increasingly important element of CRM strategies (Maklan et al., 2008;Nambisan and Baron, 2007). Against this backdrop, customers increasingly take part in the company's various processes such as new product development. In a recent study, Ernst et al. (2011) argue that firms should link CRM to new product development as 'CRM puts the customer into the central focus of multiple organizational activities' and 'therefore, CRM could be employed to systematically leverage customer-related information to better align NPD with market requirements' (p. 291). According to Hennig-Thurau et al. (2010), the new communications channels offer organizations novel ways to reach customers, communicate with them, and measure their activity. They state that this is of special value to marketing in general and CRM in particular. 
IMPLICATIONS FOR CRM RESEARCH The CRM framework in its current form is a heavily firm-oriented construct and focuses mainly on supporting firms' value-creating processes instead of identifying ways to harness the potential of customer data for the benefit of the customers. When we factor in the increasing concern over companies' misuse of customer data and other immoral CRM-related behaviour such as financial exploitation, customer lock-in, and invasion of privacy one can conclude that the shift toward "enlightened" CRM strategies (Frow et al., 2011) is a natural consequence of the inadequacy of the current CRM framework to serve customer needs. However, the view of customer data as being solely owned by the firm is being questioned as the private and public sectors launch various service applications and initiatives intended to support customers' value creation through the utilization of customer data. Customer data is not used only internally in support of the company's value creation but also externally for the benefit of the customer. Customer data is used not only as a firm resource, but also as a resource for customer value creation. This opens up a wide variety of business opportunities that go beyond the traditional exchange. In addition to goods, firms can provide customers with information that can support customers' value creation. This shift in attention also has major implications for the further development of the CRM framework. <Take in Figure 1 approximately here> Consequently, and as illustrated in Figure 1, future research should concentrate on identifying CRM activities that capture the potential of customer data that both empowers firms and can be used for the benefit of the customer too. This necessitates understanding customer value creation as a resource integration process, where in addition to goods or services, other resources, such as information resulting from reverse use of customer data, are needed to actualize the value (see Grönroos, 2008;Grönroos and Helle, 2010). Consequently, developing data mining techniques and skills that help firms to refine customer data into information that can be used as an input resource for customer value-creating processes is of the greatest importance to future CRM endeavours. According to Greenberg (cited in Peppers and Rogers, 2011, p. 461) the new media environment has empowered customers and created a new segment called 'social customers'. This group trusts their peer group more than anyone else and wants to actively use companies and brands as problem solvers according to the requirements of their personal agendas. This leads to a new situation in which organizations have to rethink how they interact with this new social customer segment. Good examples of companies that have attempted to become social organizations and are thus tuning their communications model to become a real dialogue are those that offer company-supported forums and communities such as Dell, Ford USA, Salesforce.com, Starbucks, Samsung and Nokia, to mention just a few. These firms have used the web for co-creation purposes and are constantly asking their community members how their products and services could be improved further. DISCUSSION AND CONCLUSIONS CRM is under pressure to adapt to changing market environments such as the increasing service orientation of firms, value co-creation and social media. 
Given these fundamental changes, the purpose of this paper was to explore and analyse the contemporary CRM framework and identify future research directions. To achieve this, a thorough literature review was conducted. It can be argued conclusively that as a management approach, CRM is gradually shifting attention toward customers and identifying ways of harnessing the potential of CRM for the benefit of the customer. Moving the locus of CRM from empowering firms to empowering customers, from internal to external and from using it as a vehicle for firm value creation to become a vehicle for customer value creation offers the key to serving customers rather than selling goods. It readjusts CRM to better meet the paradigmatic change from a goods orientation to a service orientation. This holds major managerial implications too. Firms should extend their view to customer data usage toward using customer data for the benefit of the customer. Hence, future CRM activities could be directed to better serving customers instead of the limited focus on internal customer data usage within the firm. This opens up both new opportunities for service development and building of competitive advantages. The extended role of CRM does not diminish the importance of customer data in a firm's current CRM practices. Customer data is still, and will continue to be, a critically important input resource supporting a firm's processes. However, also using customer data for the benefit of the customer, to serve customers better, is clearly an emerging phenomenon. Extending the CRM framework to address the customer benefits of customer data utilization can offer unique opportunities for firms designing and implementing their customer relationship strategies. CRM must adapt to a business environment where new forms of exchange are emerging and where traditional customer and firm roles quickly become outdated and are recreated. Firms operating within banking, telecommunications, retailing, hospitality, travel, and health care industries, to name but a few, possess large amounts of valuable customer data that, when combined with a holistic understanding of the resources needed in customer value-creating processes and practices, can provide a unique competitive advantage. These industries set the pace for revitalizing CRM in practice. Consequently, firms can establish a true win-win customer relationship that discards the conventional CRM-philosophy of "market to" and embraces "market with" (Lusch, 2007). Furthermore, refining and giving customer data back to customers may represent a future mechanism through which companies deepen their customer relationship management and develop it to a whole new level. This change in perspective opens up a whole new spectrum of ways in which companies can better engage with their customers' everyday lives, one of the most fundamental objectives of contemporary marketing management. Figure 1. From internal to external use of customer data -extending the CRM framework.
2018-01-23T22:42:21.292Z
2013-11-10T00:00:00.000
{ "year": 2013, "sha1": "41354da1d8b938d88899ce2d2cc76a02e4dec9d0", "oa_license": "CC0", "oa_url": "https://jyx.jyu.fi/bitstream/123456789/48159/1/josit.manuscript%20saarijarvi%20et%20alextending%20customer%20relationship%20management.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "35f5ad547feffdf88182e50a6cedc2c9c5f113b8", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science", "Business" ] }
253511234
pes2o/s2orc
v3-fos-license
Odd mobility of a passive tracer in a chiral active fluid Chiral active fluids break both time-reversal and parity symmetry, leading to exotic transport phenomena unobservable in ordinary passive fluids. We develop a generalized Green-Kubo relation for the anomalous lift experienced by a passive tracer suspended in a two-dimensional chiral active fluid subjected to an applied force. This anomalous lift is characterized by a transport coefficient termed the odd mobility. We validate our generalized response theory using molecular dynamics simulations, and we show that the asymmetric tracer mobility may be understood mechanically in terms of asymmetric deformations of the tracer-fluid density distribution function. We show that the even and odd components of the mobility decay at different rates with tracer size, suggesting the possibility of size-based particle separation using a chiral active working fluid. Chiral active fluids break both time-reversal and parity symmetry, leading to exotic transport phenomena unobservable in ordinary passive fluids. We develop a generalized Green-Kubo relation for the anomalous lift experienced by a passive tracer suspended in a two-dimensional chiral active fluid subjected to an applied force. This anomalous lift is characterized by a transport coefficient termed the odd mobility. We validate our generalized response theory using molecular dynamics simulations, and we show that the asymmetric tracer mobility may be understood mechanically in terms of asymmetric deformations of the tracer-fluid density distribution function. We show that the even and odd components of the mobility decay at different rates with tracer size, suggesting the possibility of size-based particle separation using a chiral active working fluid. Chiral active fluids are composed of constituents that convert energy into directed rotational motion [1][2][3]. The quiescent state of such a fluid breaks both parity and time-reversal symmetry and accordingly is not a state of thermodynamic equilibrium. These broken symmetries lead to a variety of exotic transport phenomena [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. For example, an odd viscosity that links dilational flow to shear stress and sheared flow to normal stress can translate forcing into fluid motion in an orthogonal direction [4,9,10]. Describing transport phenomena in such nonequilibrium fluids is challenging because classical linear response theories are valid perturbatively only about an equilibrium steady state [16]. Here, we consider a similar transport process, the motion of a passive tracer suspended in a two-dimensional, odd viscous chiral active fluid and subjected to a constant force, and we develop a response theory based on ensembles of trajectories [17,18]. Recent work has examined the advective [13] and driven [8] transport of passive tracers in odd viscous fluids. These theoretical treatments are phenomenological, however, illustrating the need for a microscopic description of tracer mobility in chiral active fluids. The framework employed here can be extended to arbitrary order in the applied forcing and is valid for a reference state arbitrarily far from equilibrium [19]. In general, the velocity response of a passive tracer is characterized by a second rank tensor called the mobility tensor by the relation V F = µ · F , where V is the tracer velocity, µ is the mobility tensor, F is an external force applied to the tracer, and angled brackets denote a nonequilibrium ensemble average [7,8]. 
In passive fluids, isotropy requires that the mobility tensor be proportional to the unit tensor, introducing a single transport coefficient, µ e , termed the even mobility. However, in two dimensional fluids breaking parity and timereversal symmetry, the isotropy of the second rank twodimensional Levi-Civita tensor permits the emergence of an additional transport coefficient, µ o , termed the odd mobility, analogous to the emergence of the odd viscosity [4]. The general expression for the mobility tensor is given in Cartesian coordinates by where 1 is the unit tensor, and is the Levi-Civita tensor. Under a force F = Fx applied in the positive x-direction, the resulting tracer velocity is then V F = µ e Fx − µ o Fŷ, indicating that the particle experiences an anomalous lift proportional to the odd mobility in the direction orthogonal to the applied force. This lift corresponds to the conversion of directed rotational motion of the underlying fluid into a uniform deflection of the tracer particle. Such a behavior could be observed experimentally by subjecting tracers in an odd viscous fluid [15] to an external force. Here, we consider odd mobility numerically with a simple model containing a passive Weeks-Chandler-Andersen (WCA) particle [20] suspended in a fluid of active dimers [2]. The dimers are composed of monomers linked by stiff harmonic bonds that interact with each other and the passive tracer via a purely repulsive WCA potential characterized by an energy parameter β = 1, where β ≡ 1/k B T is the inverse of the temperature times Boltzmann's constant. Each monomer is also subjected to an active force of constant magnitude F a , directed perpendicular to the instantaneous dimer bond. This active force results in a directed active torque experienced by each dimer of average magnitude F a d 0 , where d 0 = σ is the equilibrium bond length set equal to the monomer diameter σ. The active forces are oriented such that F a > 0 corresponds to a counterclockwise active torque. The monomers undergo underdamped Langevin dynamics with the equation of motion where i is the dimer index and α ∈ {1, 2} indexes the arXiv:2211.07003v2 [cond-mat.soft] 16 Mar 2023 monomers on dimer i, m is monomer mass, F α c,i are the conservative forces acting on monomer i-α, F α a,i is the active force, and γ is a linear friction coefficient describing dissipation from the particles to an underlying quiescent fluid bath. (See Supplemental Materials, including Refs. [21,22].) N i (t) is a delta-correlated Gaussian fluctuating force with mean zero and unit variance. A simulation snapshot is shown in the inset in Fig. 1A, and details of the dimer model can be found in Refs. [2,23]. This model represents a minimal model of a chiral active fluid exhibiting odd viscosity [2]. The Supplemental Materials contain further simulation details. The tracer equation of motion is where V ≡Ẋ is the tracer velocity, X is the tracer position, F WCA is the total WCA force on the tracer from the surrounding monomers, and F is a constant external force. We take the tracer mass and friction coefficient equal to that for the monomers in all simulations. Except in Fig. 2C, we also take the tracer diameter σ tracer to be equal to σ. The dimer activity has a profound influence on the tracer dynamics. This effect is contained in the average structure of F WCA , as we will explore below. 
The Gaussian nature of the noise allows us to write the path probability density as [24] is a tracer trajectory of duration t N → ∞, and the path action under an applied force F is given by the Onsager-Machlup construction [25,26] as In the second equality we have extracted the unbiased path action S 0 [X] obtained when F = 0. The relative path action ∆S F [X] is defined by with where we have defined the time-extensive current J ≡ dt V and frenesy Q ≡ − dt (F WCA /γ) [27]. We have retained terms only to first order in F as we are interested only in the linear response, and we have dropped the temporal boundary term proportional to dt mV because its contribution to the mobility exactly vanishes in the limit t N → ∞. (See Supplemental Materials.) Eq. 6 allows us to reweight path ensemble averages taken in a forced (F = 0) ensemble to those in an unforced one, which we may use to develop a linear response expression for the mobility. Rewriting the trajectory average of J in the presence of F as one in its absence using Eq. 6 and expanding to first order in F , we obtain We insert our definition for the mobility tensor V = µ · F and differentiate with respect to F to obtain a linear response relationship for the mobility tensor directly: This equation represents the key theoretical result of this work. The first term on the right hand side of Eq. 9 contains only the steady state fluctuations in the current and corresponds to the ordinary 'thermodynamic' response obtained about equilibrium steady states. The second term contains the current-frenesy correlation and is nonvanishing only in nonequilibrium reference states. Physically, the frenesy represents microscopic motion unassociated with the current. Though the particular form of the frenesy depends on the nature of the underlying microscopic dynamics [28][29][30], for tracer motion well described by the underdamped Langevin equation given in Eq. 3, the frenesy requires knowledge only of the forces and their time correlations. It therefore in principle may be measured experimentally for such a system. We can make explicit the time-integrals appearing in the definitions of J and Q in Eq. 9. Using timetranslation invariance and averaging the diagonal and off-diagonal components of the mobility tensor, we find and with and In the above, j ≡ V , and q ≡ −F WCA /γ. The contribution of the current autocorrelation to the odd mobility A) The current autocorrelation for the even mobility. B) The current-frenesy cross-correlation for the even mobility. C) and D) The current autocorrelation (C) and current-frenesy cross-correlation for the odd mobility. The inset in panel B shows a simulation snapshot and the decomposition of the velocity response of a passive tracer subjected to an applied force in terms of the even and odd mobility coefficients. The blue particle is the passive tracer, and the orange particles are the actively rotating dimers. Errors are smaller than the line thickness. exactly vanishes. This can be seen clearly from Eq. 9 where the thermodynamic contribution to the mobility is proportional to the explicitly symmetric variance of the time extensive current. Fig. 1 shows the contributions of each correlation function to the mobilities. We plot the results for several different activities, quantified by a Péclet number Pe ≡ 2βF a d 0 . We include the results for equilibrium, Pe = 0, for which the odd mobility must vanish. Indeed, we observe that C o jq is zero. 
For finite Pe, both the current (thermodynamic) and current-frenesy (frenetic) contributions to µ e are invariant under change of sign of Pe. This is a reflection of the fact that µ e enters into the dissipation, V F · F = µ e F 2 , and hence is constrained to remain positive under time-reversal. It therefore must be invariant under parity inversion, F a → −F a , in order to preserve overall parity-time symmetry. The initial values of the thermodynamic and frenetic correlation functions for the even mobility both grow with increasing magnitude Pe, though the current variance grows more rapidly. Adding these curves shows that the growth of the frenetic contribution acts to maintain a constant initial value of the overall correlation function, while the relatively larger frenetic integral relaxation time results in a decreasing overall relaxation time with increasing activity, and hence a decreasing even mobility. (See Supplemental Materials.) This is consistent with the decrease in overall even mobility with increasing activity shown in Fig. 2A. The thermodynamic contribution to the odd mobility, C o jj , exactly vanishes for all activities. As noted above this is predicted analytically and reflects the fact that the odd mobility is a purely nonequilibrium phenomenon. The frenetic correlation functions show a rich structure, possessing an initial shift in their decay rate at short times and a change in sign at longer times. The curves are inverted under a change in sign of F a , reflecting the parity -and hence time-inversion -antisymmetry of the odd mobility. This is permitted because the odd mobility does not contribute directly to the tracer dissipation, analogous to the odd viscosity of the underlying fluid [4]. In Fig. 2, we show the even and odd mobility responses as measured by Eq. 9 as a function of activity for several different densities. As anticipated, both the even and odd mobility responses are suppressed as density is increased. The maximum ratio of odd to even mobilities is obtained for the lowest density examined,n = 0.2 σ −2 , and is roughly 20% for |Pe| = 20. Surprisingly, we find that the even mobility is maximized for Pe = 0 and decreases with increasing activity, demonstrating that a simple understanding of the activity in terms of an effective temperature does not hold in this system, as previous results have reported an increase in the mean squared diffusivity with Pe [7]. At the highest density,n = 0.8 σ −2 , the odd mobility is observed to vary linearly over the range of Péclet numbers examined, with positive values, corresponding to downward deflections under a force applied in the postive xdirection, obtained for counterclockwise dimer rotations and vice verse. However, we find that at lower densities the odd mobility appears to saturate to a constant value as |Pe| is increased. Combined with the observed monotonically decreasing nature of µ e with increasing |Pe|, this indicates that the ratio of odd to even mobility may be made arbitrarily large for high activity, suggesting the possibility of a regime in which the particle is deflected by essentially 90 • from the applied force. We note that in Figs. 2A and B we have compared our simulation results in the reference steady state to runs in which a force is applied directly to the tracer particle and its velocity is directly measured, and the results are in excellent agreement. Finally, in Fig. 2C, we plot the ratio of even to odd mobility as a function of tracer diameter, σ tracer . 
We plot only for a single Pe as we observe that this curve is essentially independent of Pe for finite activity. We find that the ratio of mobilities decreases with increasing σ tracer , indicating that µ o decreases more rapidly than µ e as tracer size is increased. The angle θ between applied force and resultant tracer velocity is given by tan θ = −µ o /µ e [31,32], indicating that the particle deflection angle is therefore size-dependent. This suggests the possibility of size-based particle separation using an odd viscous fluid as a working fluid. We can understand the origin of the trends in mobility mechanically by deriving a relationship between the tracer-monomer pair distribution function (PDF) and the mobility response [29,30,33]. Averaging the tracer equation of motion, Eq. 3, over the noise, we obtain Given the pairwise nature of the repulsive interaction, the average of the components of the velocity parallel and orthogonal to the applied force can be written as and respectively, where g (x |F ) is the tracer-monomer PDF in the presence of an applied force F [34,35], and F WCA (x) is the force field characterizing the WCA interaction between monomers and the tracer particle held fixed at X = 0. We take F to be directed along the positive x-direction. Intuitively, we anticipate that a relative decrease in V || -and hence µ e -with increasing |Pe| will be associated with an increase in monomer density accumulated on the front of the particle. However, we observe the opposite in Figs. 3A and B, where we plot the PDF for Pe = 0 and 20 under an applied force. Instead, we observe that the ridge of monomer density accumulated on the front of the tracer decreases substantially in magnitude with increasing activity, ostensibly suggesting that µ e should increase with increasing magnitude Pe -the opposite of what is observed in Fig. 2. This apparent inconsistency is resolved by considering the relative density distribution. Though the accumulation of monomer density in front of the tracer decreases with increasing activity, a high density ridge is pushed further into the interaction region with the tracer, resulting in a much stronger resistive interaction. This is clearly illustrated in the inset in Fig. 3B, which shows the PDF obtained for Pe = 20 less that obtained for Pe = 0. Direct integration confirms that this shift in particle distribution results in the suppression of the even mobility at finite activity. Interestingly, this shift in the high density ridge is not observed in the PDF calculated using the dimer center of mass coordinates, indicating that the dimers accumulated at the front of the tracer are preferentially oriented with a component of their bond vector orthogonal to the face of the tracer, and that this 'orientational locking' increases with activity. This mechanism is fundamentally linked to the biased nature of the rotational motion of the dimers and is enhanced with greater activity. It therefore cannot be renormalized through an effective temperature that implicitly treats the effect of the activity as enhanced unbiased random noise of the dimer particles. Given the central nature of the WCA interaction, F WCA,⊥ is necessarily antisymmetric in y. Therefore, the deflection associated with the odd mobility must be the result of an asymmetry in the PDF under an applied force for finite Pe such that the integral in Eq. 17 is nonvanishing. Indeed, this is observed in Figs. 
3C and D, where we plot the PDF in the upper half plane less that in the lower half plane, g(x, y) − g(x, −y). This is a direct measure of the asymmetry under the transformation y → −y and hence of a nonvanishing odd mobility. We observe that there is measurable asymmetry only in the case of finite Péclet number, and the odd mobility is therefore due to asymmetric deformations of the particle distribution about the tracer particle for finite activities. In this letter, we have derived a generalized Green-Kubo relation relating current and frenesy fluctuations in a steady state far from equilibrium to the mobility of a passive tracer suspended in a two-dimensional chiral active fluid. We have validated our results using molecular dynamics simulations and shown that, whereas the even mobility counterintuitively decreases with increasing activity, the odd mobility increases until it saturates to a finite value. The former result indicates that an effective temperature description of the mobility response is invalid in this system. We have further shown that the odd mobility decays more rapidly than the even mobility with increasing tracer particle size, independent of activity, indicating the possibility of size-based particle separation using an odd viscous working fluid. Our results provide strong evidence for the generality of the path integral framework for nonequilibrium response and a microscopic picture of the mobility response. Future work will focus on the development of effective hydrodynamic descriptions of the underlying fluid and tracer mobility. where i ∈ {1, N dimer } indexes the dimer, α ∈ {1, 2} indexes the individual monomer on dimer i, m is the monomer mass, v α i ≡ẋ α i is the monomer velocity, x α i is the monomer position, γ is a friction coefficient describing dissipation into the underlying bath, F α c,i is the sum of all conservative forces acting on the monomer, F α a,i is the active force acting on the monomer, and N α i (t) is a Gaussian white noise term satisfying N α i (t) = 0 (2) and This term represents noise from interactions with the underlying bath, and its prefactor √ 2γk B T is related to the friction coefficient γ by the assumption of local detailed balance and the consequent imposition of a fluctuationdissipation relation. Physically, the assumption of Langevin dynamics corresponds to scenario that monomer and tracer particles are suspended on a viscous, quiescent, passive bath at a fixed temperature that injects and dissipates energy such that a local detailed balance is maintained between each degree of freedom and the bath. We are neglecting any hydrodynamics of the underlying bath, including global flows induced by the motion of the suspended dimer fluid, as well as any memory effects or hydrodynamic interactions induced by flows generated around the individual monomers or tracer particles. The model does not conserve momentum as it is lost by dissipation to the fluid bath and would therefore be classified as a "dry model" according to the classification scheme of Ref. [1]. The total conservative force F α c,i acting on a given monomer is the sum of two terms: the harmonic bond between a given monomer and its intra-dimer pair and the Weeks-Chandler-Andersen (WCA) [2] repulsive interaction between the given monomer and any other nonbonded monomers within the WCA cutoff, r WCA = 2 1/6 σ. 
The harmonic and WCA potentials are given by, respectively, and In the above, k is a spring stiffness constant, set equal to 100 σ −2 in all of our simulations, is the instantaneous bond length of dimer i, d i is the instantaneous bond vector, d 0 = σ is the equilibrium bond length, set equal to the monomer diameter σ in all of our simulations, is the WCA energy parameter, and r αβ ij ≡ x α i − x β j is the magnitude of the distance between two non-bonded monomers i-α and j-β. We therefore have F α c,i = −∇ x α i (u harmonic + u WCA ). Each monomer experiences an active force of fixed magnitude F a directed orthogonal to the instantaneous bond vector d i associated with the dimer of which it is a member. This results in an instantaneous active torque of magnitude F a d i (t) acting on dimer i and an average active torque of magnitude F a d 0 , given that the density is not so large that it results in an average bond length d i < d 0 . The direction of the active force is such that F a > 0 corresponds to a counterclockwise active torque. That is, the active forces on monomers 1 and 2 of dimer i are related to the bond vector by the Levi-Civita tensor. Action of on a vector results in clockwise rotation of that vector by an angle π/2. This active dimer fluid model is identical to that introduced in [3], and we employ this model here because it is two-dimensional and breaks parity and time-reversal symmetry and is therefore a minimal model of a chiral active fluid that exhibits odd viscosity [4]. The underlying odd viscosity of the fluid is crucial to observe odd mobility of a passive tracer. Additionally, it is a simpler model than the frictional disk models employed in, for example, Ref. [5], because we only have to consider bonded point particle, rather than rigid body, dynamics. The tracer particles are propagated according to the Langevin equation of motion where I ∈ {1, N tracer } indexes the tracer particles, V I = X I is the velocity of tracer I, X I is the position of tracer I, and N I (t) is again a delta-correlated Gaussian white noise characterizing interactions with the underlying bath. F WCA,I is the sum of all WCA interactions between surrounding monomers and the passive tracer and is given by For simplicity, the mass m and friction coefficient γ are taken to be the same for the tracer particles as they are for the monomers. In all of our simulations except those where we explicitly examine the dependence of the ratio of odd to even mobilities µ o /µ e on tracer diameter σ tracer (Fig. 2C, main paper), we take σ tracer = σ. In this case, since the friction coefficient is anticipated to be a function of the particle radius for spherically symmetric particles, we anticipate γ to be the same for both the tracer and the monomers. For values of σ tracer > σ, the tracer friction coefficient would be larger than the monomer friction coefficient. However, in two-dimensions, the dependence of friction coefficient on particle radius is logarithmic and therefore relatively weak [6]. Any dependence of γ on the tracer diameter is therefore expected to be weak over the range of values of σ tracer examined here and to affect only the quantitative values of mobilities and not the qualitative behavior of the ratio µ o /µ e that we are interested in. All simulations are conducted in LAMMPS. The active forces on the monomers are implemented via a custom code available at [7]. 
We use Lennard-Jones units in all of our simulations, meaning that all quantities are measured in combinations of m, σ, and -respectively, the monomer mass and diameter and the energy parameter characterizing the monomer-monomer WCA interaction. This is equivalent to setting m = σ = = 1 in our simulations. We likewise take γ = 1 √ m σ −2 in all of our simulations. The energy parameter characterizing the monomer-tracer interaction is also set to one, and the length parameter is set to σ tracer , defining precisely what is meant by the tracer diameter. All simulations are conducted at a temperature k B T / = 1. The time steps used in simulation were on the order of 10 −3 √ mσ 2 −1 , and an initial configuration is generated for each value of the density, activity, and tracer size from an initial grid configuration of particles over ∼ O 10 7 time steps. Ensemble averages were evaluated over O (10 − 100) trajectories of lengths ∼ O 10 6 time steps with randomized seeds for the Gaussian noise until sufficient convergence was obtained. TOTAL CORRELATION FUNCTIONS DETERMINING EVEN AND ODD MOBILITY RESPONSES In Figs. 1A and B of the main text, we found that increasing activity resulted in a substantial change in the variances associated with the thermodynamic and frenetic contributions to the even mobility correlation function; this is also indicated in Figs. 1A and B here. We show in Fig. 1C the total even mobility correlation function, obtained as the sum of the thermodynamic and frenetic contributions. Surprisingly, we find that the changes in the thermodynamic and frenetic variances exactly cancel such that the overall variance is unity for all of the activities examined. This is the same value that is guaranteed by equipartition for the variance in equilibrium. The origin of this cancellation and apparent "effective equipartition" is beyond the scope of this article and the subject of future research. ROLE OF TEMPORAL BOUNDARY TERM IN MOBILITY RESPONSE In the following discussion, as in the main text, we note that the dilute tracer concentration (n tracer ≡ N tracer /L 2 ∼ O(10 −4 σ −2 )) allows us to neglect tracertracer interactions and therefore drop the tracer subscript I. In the main text, we define the time-extensive frenesy as Q ≡ − t N 0 dt F WCA /γ; however, from the Onsager-Machlup form of the path action (main text Eq. 5), we find Q = t N 0 dt mV − F WCA /γ. The stated form of the frenesy is obtained by neglecting the temporal boundary term Q boundary ≡ We can show that this boundary term must vanish in the limit t N → ∞, the limit in which our linear response results become valid. Including the additional additive boundary contribution to the frenesy Q boundary in the linear response relationship for the mobility tensor (main text Eq. 9), we find an additional additive contribution to the mobility tensor from the boundary term given by where in the second equality we have inserted the definitions of Q boundary and J ≡ t N 0 dt V and noted that V 0 = 0. We can rewrite the first term in the second equality as The first equality follows from time-translation invariance, and the second from a change of integration variable. The boundary contribution to the mobility tensor then becomes The fact that the thermodynamic contribution to the mobility tensor µ thermo = (β/2t N ) δJ ⊗ δJ 0 is finite implies that δJ ⊗ δJ 0 ∼ t N in the long time limit, and therefore V (0) ⊗ J 0 /t N and J ⊗ V (0) 0 /t N must tend to zero as t N → ∞. From this fact and Eq. 
10 we conclude that µ boundary → 0; that is, calculations of the mobility coefficients must converge to the same value whether they include the unsteady term mV /γ in the frenetic contribution or not. In numerical simulations it is necessary to choose a value of t N that is finite, though much larger than the integral relaxation times associated with the underlying correlation functions. We should therefore also confirm that our results are sufficiently converged such that the unsteady term is also irrelevant for the finite value of t N = 30 √ mσ 2 −1 we have used to calculate our response estimates of the mobility. This is done in Fig. 2, where we compare our estimates of the even and odd mobilities without and with the unsteady frenetic contribution. We see that there is no statistically significant difference in the mobility estimates, and that the only effect of including the unsteady term is to greatly increase the noise in our estimates. Therefore, we conclude that we are justified in neglecting the unsteady frenetic contribution to the mobility response.
2022-11-15T06:42:49.587Z
2022-11-13T00:00:00.000
{ "year": 2022, "sha1": "b448c3ed357c81fbd44ee8f1284e308072d3539e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "13570c159ffadc04a0bead9a02ddc116168664a8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
49214590
pes2o/s2orc
v3-fos-license
Viral respiratory infections in BMT - A common cold is not just a cold in transplant recipients OBJECTIVE To investigate the diagnosis of sexually transmitted infections (STIs) with human papillomavirus (HPV) infection and the presence of cytological changes in the cervix in a cohort of sexually active women in Greece. METHODS Cervical cytology testing and the molecular typing of HPV and other STIs were performed for 345 sexually active women aged between 18 and 45 years (mean 33.2±7.2years) visiting a gynaecology clinic for routine cervical screening. The association of HPV and STI detection with cytological findings was investigated. RESULTS HPV was detected in 61 women (17.7%) and STIs in 82 (23.8%). Ureaplasma spp was the most frequently detected pathogen, which was found in 63 (18.2%) women, followed by Mycoplasma spp (21 women, 25.6%) and Chlamydia trachomatis (five women, 6.1%). HPV positivity only (with no co-presence of STI) was associated with an abnormal cytology (odds ratio 6.9, p<0.001), while women who were negative for both HPV and STIs had a higher probability of a normal cytology (odds ratio 0.36, p<0.01). Sixteen out of the 63 (25.4%) women who tested positive for Ureaplasma spp, harboured a high-risk HPV type (odds ratio 2.3, p=0.02). CONCLUSIONS In a population with a high prevalence of Ureaplasma spp, there was an association of this pathogen with high-risk HPV infection, a finding that needs further elucidation. Transplantation is increasing throughout the world, at the same time immigration and travel to and from developing countries and tropical areas are bringing new challenges for the management of transplant recipients. Parasitic infections (PI) can have a significant impact on donor and candidate screening, donor allocation and recipients infections and prophylaxis. Independent of where the transplant procedure is done, or the location of donors and recipients at the time of transplant, these infections represent a potential risk in the post-transplant period. Additionally, common parasitic infectious diseases in SOT recipients are frequently underestimated, and remain one of the most understudied groups of diseases with few prospective trials and no randomized studies in this setting. The most relevant diseases in this context will depend on the impact on the recipients outcome and the prevalence of the disease in the general population (e.g. Leishmaniasis, Chagas Disease, Malaria and Strongyloidiasis). It is necessary to discuss the main recommendations for screening donors and candidates aiming to achieve balance between minimizing the risk of disease and improving transplant activity with quality, cost-effectiveness and safety. It should be stressed that screening procedures recommendations should evaluate the epidemiological risk, the strengths and limitations of screening tests, and the rates of transmission or reactivation and consequences of these diseases to the recipient. On the other hand, if transmission or reactivation occurs one should provide adequate management warranting for a better outcome. Emerging virus pathogens are defined by newly discovered pathogen or the increase or threatening to increase in incidence of previously known viruses. In this lecture, the most relevant and current emerging and reemerging virus pathogens in the transplantation will be review. 
In transplant scenario, an emerging viral infection may also have a broader impact, causing an atypical or more severe presentation in immunocompromised transplant recipients, as well prolonged viremia and virus shedding sometimes, such as Hepatitis E virus. On the other hand, some of these new viruses identified during the last decades thanks to the new laboratorial techniques are not necessarily related to a disease, even among immunocompromised patients; an example is the newer polyomaviruses, which are not associated with recognized diseases. Target antiviral therapy is not available for most diseases. Therefore, reduction of the immune suppression, combined with supportive measures, remains the cornerstone of therapy for most of the cases of severe viral infection among transplant patients. There are also issues regarding prevention of these infections. Although some are vaccine-preventable illnesses, some vaccines are prepared with live-attenuated viruses, such as the yellow fever vaccine, and, therefore, are not recommended for the immunosuppressed patient. In addition, the epidemic spread of an emerging viral pathogen in the general population may increase the risk of donor derived virus infection, and may cause a negative impact on the transplant activity. Examples of these scenarios are the emergence of West Nile fever in North America or the Chickungunya outbreaks in Italy and Puerto Rico. The prevention strategies, however, should take into account the epidemiologic scenario and need for continuous updated. The strategies may differ among endemic and non-endemic regions. Risk stratification of the potential donor based on clinical data and laboratorial screening may help to mitigate this hazard. Nevertheless, the limitations of the performance of the screening strategy must be considered in order to appropriately balance the mitigation of the risk of donor-transmitted infection and the adverse consequences of organ shortage for patients who are in the waiting list for transplantation. O. de la Cruz Virginia Commonwealth University, Richmond, VA, USA Community acquired respiratory viruses (CARV) are an important cause of morbidity and mortality among Hematopoietic Stem Cell Transplant recipients (HSCT). Reported incidence of CARV in HSCT varies from 4% on early days of antigen testing to ∼40% using PCR based detection. Most commonly detected viruses are Rhinovirus/enterovirus (22-34%), followed by Influenza, Respiratory Syncytial Virus (RSV) and Parainfluenza on similar range. Less frequently, with important morbidity associated are Coronavirus (3-11%), Adenovirus and Human metapneumovirus (HMPV). Influenza pneumonia have attributable mortality in HSCT ∼ 12%. Progression to lower respiratory tract infection (LRTI) can occur in one third of patients. However, perceived less aggressive viruses can progress to LRTI with equally precarious outcomes. For example, RSV have attributable mortality ∼ 15%, with some series describing mortality around 80% in untreated patients. Adenovirus disseminated infection has been reported around 50% in small series, with mortality ranging 23%. Associated risk factors for LRTI progression include age greater than 65, lymphope-nia, neutropenia, unrelated donor and chronic graft versus host disease. Bacterial coinfection, bronchiolitis obliterans and decline in pulmonary function are complications frequently described after CARV infections. Allograft related shortcomings remained an important area of research. 
General preventive measures are recommended to reduce infection related complications. Great example is Influenza vaccination and antiviral prophylaxis in specific scenarios. Immunization for several other CARV remains in development and not commercially available. Impact of contact and respiratory precaution at level of health care has been documented in several studies and should be followed. Other interventions like palivizumab for RSV in adults still lacking enough data and difficult implementation due to cost. Therapeutic options are narrow given limited antiviral agents approved for the wide range of CARV. Influenza therapy is known to improve outcomes. Ribavirin (RBV) with or without IVIG has reported to beneficial for RSV, PIV and anecdotic reports for HMPV. RBV IV or inhaled (teratogenic and only FDA approved) administration poses a logistic challenge and associated to several side effects. Cidofovir for Adenovirus, ALN-RSV01 (RSV), DAS-181 (PIV), specific T-cell immunity therapies, among others, should accumulate more data to be suited for general use. At any time, 50% of patients in intensive care units are receiving antibiotics. Source control and early and appropriate antibiotics administration along with other measures are vital interventions for patients with sepsis. Dose optimization is one critical tool that should be use by clinicians to improve outcomes in critically ill patients. Adequate dosing of these patients not only can improve clinical outcomes but also can impact positively in the emergence of bacterial resistance in the unit. Patients with sepsis and systemic inflammatory response (SIR) suffer from hemodynamic changes such as increased cardiac output, reduced peripheral vascular resistance, changes in the volume of distribution, and fluid shifts. In this setting, other systemic changes frequently take place such as hypoalbuminemia, hepatic impairment, and acute modification of renal function (e.g., augmented renal clearance and acute kidney injury). All these physiological adjustments to SIR lead to changing drug concentrations in serum and at the infection site of the antibiotics prescribed for the underlying infection. In this regard, doses of antibiotics usually administered to non-critically ill patients are probably inadequate in most of those patients with sepsis or SIR. Knowing the pharmacokinetics and pharmacodynamics characteristics of each antibiotic is essential to optimize drug treatment in this setting. Individualizing dosing based on patient clinical status and antibiotic properties should be encouraged. Loading dose, continuous infusion of time-dependent antibiotics, therapeutic drug monitoring, and direct administration at the infection site are among the tools that could improve antimicrobial use in critically ill patients. Discussion of optimization options for most commonly used antibiotics in ICU will be presented. Background: Dermatitis linearis caused by Paederus spp is a distinct type of contact dermatitis, characterized by the presence of erythematous and vesicular lesions on exposed areas of the body, which usually follow a linear pattern of distribution. It is caused by toxins contained in the endolymph of Paederus beetles, which belong to the class Insecta, order Coleoptera (beetles), family Staphylinidae (rove beetles), subfamily Paederinae, tribe Paederini, subtribe Paederina. 
Methods & Materials: We present a series of five selected cases that reflect the clinical and epidemiological spectrum of this clinical entity from a recent outbreak in two departments of Colombia. The aim of the study is also to report the occurrence of the first outbreak ever reported in Colombia. A thorough physical examination was performed obtaining a detailed description of the cutaneous lesions and looking for signs and symptoms of systemic affection. Paederus beetles recovered from patient's dwellings were submitted for entomological identification. Results: Affected patient ages ranged from 32 to 50 years old, with an average of 42 years old and a female predominance. Lesions presented as eczema with latent burn sensation (60%); as erythematous maculopapular lesions (20%) and papular and erythematous lesions (20%). In all cases, lesions were accompanied by burning / stinging sensation (100%) with pruritus (20%). Afterwards the lesions became vesicular and finally squamous (100%) around the fifteenth day of evolution. In Latin America, reports of dermatitis linearis outbreaks are scarce. The present study reports the occurrence of the first confirmed cases in the Atlantic Coast of Colombia, along with a detailed clinical, ento-
2018-07-03T04:47:34.183Z
2018-06-11T00:00:00.000
{ "year": 2018, "sha1": "ecd0a745458b65500009d724ad8dc8ddf6adf43b", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijid.2018.04.3591", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "606c1c26f057b854d8577b080ec807313606c7d8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229503405
pes2o/s2orc
v3-fos-license
Comparison of C:N:P Stoichiometry in the Plant–Litter–Soil System Between Poplar and Elm Plantations in the Horqin Sandy Land, China Afforestation is among the most effective means of preventing and controlling desertification. Silver poplar (Populus alba) is commonly planted tree species for afforestation of the Horqin Sandy Land of China. However, this species has exhibited some drawbacks such as top shoot dieback, premature senescence and mortality, and soil and ecosystems degradation. In contrast, Siberian elm (Ulmus pumila) rarely experiences these problems in the same regions. Ecological stoichiometry plays a vital role in exploring ecological processes and nutrient cycle relationships in plant–litter–soil systems. To explore the differences in the carbon (C), nitrogen (N), and phosphorus (P) balance, the stoichiometry characteristics and stoichiometric homeostasis in elm and poplar plantations in the Horqin Sandy Land, we measured C, N, and P concentrations in leaves, branches, roots, litter, and soils and analyzed N and P resorption efficiencies in the two plantations. The results showed that soil C and N concentrations, C:P, and N:P were greater in the elm plantation than in the poplar plantation. The leaf and root C:P and N:P during summer and litter N and P concentrations were greater, whereas N and P resorption efficiencies were lower, in the elm plantation than in the poplar plantation. Generally, elm exhibited greater N:P homeostasis than poplar. N and N:P homeostasis were greater in roots than in leaves and branches in the elm plantation, but they varied with soil N concentration and N:P in the poplar plantation. These findings indicate that poplar exhibited more developed internal nutrient conservation and allocation strategies but poor nutrient accumulation in soil, which may contribute to degradation of poplar plantation. In contrast, elm tended to return more nutrients to the soil, showing an improved nutrient cycle in the plant–litter–soil system and increased soil C and N accumulation in the elm plantation. Therefore, compared with poplar, elm may be a more suitable afforestation tree species for the Horqin Sandy Land, in terms of promoting the accumulation of soil nutrients and enhancing nutrient cycling in the plant–litter–soil system. INTRODUCTION Desertification is a serious global environmental problem, with substantial effects on the survival and development of some plant and animal species, human wellbeing and society, and ecosystem stability maintenance (Sterk et al., 2016;Capozzi et al., 2018). The Horqin Sandy Land (42 • 41 -45 • 15 N, 118 • 35 -123 • 30 E) is among the most seriously desertified and ecologically fragile regions in China's agro-pastoral ecotone; originally a prairie, this sandy land developed due to climate change and human disturbances such as overgrazing, non-manure cropping, and arbitrary land use and management (Zeng and Jiang, 2006). During the desertification process, an estimated 90% of carbon (C) and 86% of nitrogen (N) were lost from the ecosystem (Li et al., 2006). To combat and control desertification, afforestation programs have been launched since the 1970s, including the Three-North Shelterbelt Program, the Grain for Green Project and the Conversion of Cropland to Forest and Grassland Program (Wang, 2014;Bai et al., 2019;Chu et al., 2019). These afforestation programs selected tree species with drought tolerance, rapid growth, and high timber production traits (Song et al., 2020). 
Silver poplar (Populus alba) is among the most common afforestation tree species due to its relatively high initial growth and seedling survival rates (Lindroth and Clair, 2013), and it has been planted as a monoculture in large areas for wind speed reduction, sand fixation, and soil and water conservation (Zhao et al., 2008;Ahmed et al., 2020). However, these large-scale poplar plantations have many drawbacks including top shoot dieback, tree premature senescence and mortality, and soil and ecosystem degradation (Wang et al., 2017;Zhou et al., 2020). In contrast, Siberian elm (Ulmus pumila) rarely exhibits these problems in either natural or plantation forests in the same regions (Zhao et al., 2010). Nevertheless, elm is seldom planted for afforestation in the sandy land due to its slow growth rate and production of crooked trunks, which limit its economic value (Wang et al., 2017). Vegetation conversions after afforestation often involve tremendous changes in plant and soil nutrient concentrations, biomass production, soil quality, and nutrient cycling processes, which profoundly influence the stability and sustainable development of ecosystems (Zhao et al., 2008;Liu et al., 2018;Luo et al., 2020). Therefore, it is necessary to explore the differences in plant and soil nutrients and their interactions between poplar and elm plantations to determine which is more suitable for afforestation. C, N and phosphorus (P) are major macroelements necessary for life; their cycling in plant-litter-soil systems has substantial effects on the function and stability of ecosystems (Mulder and Elser, 2009). Soil C, N, and P greatly affect plant growth and development and are simultaneously affected by organic matter, litter, and microbes (Sinsabaugh et al., 2008). Litter stores nutrients and returns them to soil; these processes are restricted by nutrient resorption, which contributes to optimal nutrient use efficiency by plants (Deng et al., 2019). Plants adjust their growth rates by coordinating the ratios of C, N, and P and allocating nutrients among different organs to adapt to soil nutrient conditions (Daufresne and Loreau, 2001;Fang et al., 2019). Thus, the balances and interactions of C, N, and P are highly complex in plant-litter-soil systems (Manzoni et al., 2008;. Ecological stoichiometry, which is used to evaluate the balances of energy and chemical elements in ecosystems, is a powerful tool for understanding ecological processes and relationships among element cycles in plant-litter-soil systems . Plant C:N:P stoichiometry reflects the efficiency of plants nutrient use (Niklas and Cobb, 2005) and can be used to determine nutrient limitations for growth (Koerselman and Meuleman, 1996). Soil C:N:P stoichiometry reflects soil fertility and nutrient availability and regulates plant growth and the nutrient state (Bui and Henderson, 2013). Stoichiometric homeostasis, a central concept of ecological stoichiometry, is defined as the ability of plants to maintain a relatively stable nutrient composition, regardless of soil nutrients changes (Sterner and Elser, 2002). Higher stoichiometric homeostasis in plants contributes to sustaining the functions and stability of the ecosystem (Yu et al., 2010). When soil nutrients limit plant growth, plants can respond via multiple physiological mechanisms to improve the internal availability and use efficiency of the limiting nutrient, thereby maintaining stability and its associated functions in the body at the limited nutrient level (Hessen et al., 2004). 
These mechanisms of nutrient conservation in plants include excreting hydrogen ions or enzymes into the soil (Yuan et al., 2019), altering the allocation of photosynthetic products and nutrients among different organs (Peng et al., 2016), and remobilizing nutrients from senescent to other organs before senescence (i.e., nutrient resorption) (Kobe et al., 2005;Wang et al., 2020). Therefore, evaluating C:N:P stoichiometry and stoichiometric homeostasis in a plant-litter-soil system could improve our understanding of plant adaptive mechanisms, nutrient cycles and ecosystem stability. In this study, we examined seasonal variations in C, N, and P concentrations and their ratios in the leaves, branches, roots, and soils of poplar and elm trees in plantations in the Horqin Sandy Land, China, throughout the growing season. We quantified C, N, and P stoichiometry in leaf litter and analyzed the nutrient resorption efficiency (NuRE) and stoichiometric homeostasis of both tree species. One objective of this study was to determine whether the soil C, N, and P concentrations are lower in the poplar plantation than in the elm plantation, since poplar has a higher growth rate and greater biomass (Zhao et al., 2010) and therefore is expected to consume more nutrients than elm. We also aimed to determine whether poplar has lower plant N and P concentrations than those of elm, due to N and P dilution in response to higher poplar growth rates (Zhao et al., 2010), and whether elm, a native tree species, exhibits greater stoichiometric homeostasis than does poplar, an exotic tree species, since native tree species tend to adapt better than exotic species in local environments (Song et al., 2020). Study Site This study was conducted at the Zhanggutai Experimental Base of Liaoning Institute of Sandy Land Control and Utilization, Liaoning Province, China (42 • 32 -42 • 51 N, 121 • 53 -122 • 35 E; average elevation, 226 m), which is located in the southeastern region of the Horqin Sandy Land, China. This region has a semiarid climate, with a mean annual precipitation of 474 mm, largely during June-August, and mean annual potential evaporation of approximately 1,580 mm (Song et al., 2020). The mean annual temperature is approximately 6.8 • C, with minimum and maximum mean temperatures of −29.5 in January and 37.2 • C in July, respectively. The zonal soil in this region is classified in the Semiaripsamment taxonomic group, which develops from sandy parent material through wind; the distributions of soil salinity, texture, and structures were homogeneous (Zhu et al., 2008). The main vegetation type is psammophytes, which are typical Inner Mongolia flora. The Zhanggutai Experimental Base was established in 1978; it covers an area of 2,620 hm 2 and is characterized by flat, stable sand dunes and large Pinus sylvestris var. mongolica and P. alba plantations, interspersed with small patches of degraded grassland and U. pumila and Pinus tabulaeformis plantations. Experimental Design In 2017, three plots were selected among pure elm and poplar plantations, respectively on the Zhanggutai Experimental Base (Figure 1). These plots had similar site conditions and history and land management prior to afforestation. The plantations were established from non-vegetated sandy lands, and previous studies at the study region have shown that soil properties were homogeneous and similar in the non-vegetated sandy lands (Jiao, 1989;Zeng et al., 2009;Zhao et al., 2010). 
Based on investigation in the study region prior to afforestation, the distributions of soil nutrients were homogeneous and the C, N, and P concentrations were 3.15, 0.24, and 0.09 g kg −1 , respectively. Thus, we considered that the initial soil properties were similar prior to afforestation, and the differences in soil C, N, and P stoichiometry after afforestation were induced by the different tree species plantations. All the selected plantations were planted on flat topography, and shared the same soil type (arenosols) and similar elevation ( Table 1). All plots were within 10 km to ensure similar soil type and climatic condition. The selected trees were approximately 20 years old and therefore suitably represented the effects of plant and soil interactions on nutrients following afforestation (Zhao et al., 2010). No management techniques such as fertilization, pruning, or thinning were conducted in any of the plots. Three replicate subplots (20 m × 20 m) were established within each plot. Within each subplot, three healthy individuals with average diameter at breast height were randomly selected for plant sample collection. Leaf, branch, root, and soil samples were collected in mid-May, July, and September (i.e., spring, summer, and autumn). From each selected tree, we collected three branches from the upper, middle, and lower parts of the crown, and we sampled branches of similar diameter (approximately 5 mm) for each tree species during the different seasons. We selected mature leaves without diseases and/or insect pests as leaf samples. The fine roots (<2 mm) of each selected tree were excavated from several locations below the canopy by carefully removing the surrounding soil. Soil samples were simultaneously collected using a soil auger (diameter, 5 cm) at depths of 0-20, 20-40, and 40-60 cm. After removing the understory plants and surface litter, we randomly collected four soil samples within 1 m of the base of each selected tree; these were pooled into a single composite soil sample per tree. In mid-October, we collected newly fallen and undecomposed leaf litter from the litter layer under the canopy of each selected tree. Three replicate samples of leaves, branches, roots, litter, and soil were collected in each subplot. All plant samples were ground using a mechanical grinder after oven drying for 72 h at 60 • C, and soil samples were air-dried after the removal roots and stones. All plant and soil samples were passed through a 0.25 mm sieve and then used to measure C, N, and P concentrations. Chemical Measurements C concentrations in plant and soil samples were measured using the oil bath K 2 Cr 2 O 7 titration method. To measure N and P concentrations, plant and soil samples were initially digested with H 2 SO 4 -H 2 O 2 and H 2 SO 4 -HClO 4 , respectively, and then the total N and P concentrations were determined following the semi-micro Kjeldahl method using a Kjeldahl auto-analyzer (JY-SPD60, Beijing, China) and the colorimetric method using a spectrophotometer (T6, Beijing, China). Soil pH was measured using a soil/water ratio of 1:2.5 suspension. Soil water content determined from mass loss after drying for 12 h at 105 • C (Bao, 2000). Soil particle size was analyzed using a Horiba Master Sizer (LA-300, Japan). Plant and soil C, N, and P concentrations were expressed as in dry mass (g kg −1 ), and C:N, C:P, and N:P ratios were calculated as mass ratios. 
Calculations NuRE was calculated as follows: where N m and N l are the nutrient concentrations in mature leaves (July) and litter leaves (October), respectively , and the MLCF is mass loss correction factor, specifically the dry mass ratio of litter leaves and mature leaves (van Heerwaarden et al., 2003). Using the nutrient stoichiometry of plant organs and soils, the homeostatic regulation coefficient (H) was derived from the following model (Sterner and Elser, 2002): where y is the N or P concentration or N:P for leaves, branches, and roots; x is the corresponding value in the soil layer; and c is a constant. If the regression relationship is not significant (P > 0.05), then 1/H is set at zero, and the organism is considered strictly homeostatic. If the regression relationship is significant (P < 0.05), then species with | 1/H| ≥ 1 are considered not to be homeostatic, where those with 0 < | 1/H| < 1 are classified as follows: 0 < | 1/H| < 0.25, homeostatic; 0.25 < | 1/H| < 0.5, weakly homeostatic; 0.5 < | 1/H| < 0.75, weakly plastic; or | 1/H| > 0.75, plastic (Persson et al., 2010;Bai et al., 2019). Statistical Analyses For all datasets, Kolmogorov-Smirnov and Levene's tests were conducted to test the normality and homogeneity of variances before statistical analysis. Two-way ANOVAs were used to test the effects of tree species and sampling time on C, N, and P concentrations and C:N, C:P, and N:P for each soil layer and tree organ. Duncan's test was conducted for post hoc multiple comparisons. Nutrient stoichiometry of soil, plant, and leaf litter, as well as the NuRE, were compared between elm and poplar using the two-sample t-tests. All figures were prepared using SigmaPlot 10.0 software, and all data were analyzed using SPSS 16.0 software for Windows (SPSS Inc., Chicago, IL, United States). Significance was evaluated at a level of 0.05. Soil C, N, and P Stoichiometry Soil C concentrations showed an upward trend during the growing season and were higher in the elm plantation than in the poplar plantation (Figure 2). Soil N concentrations tended to increase in the elm plantation but remained unchanged in the 0-20 and 20-40 cm soil layers in the poplar plantation over time; soil N concentrations were higher in the elm plantation than in the poplar plantation (Figure 2). Soil P concentrations tended to decrease and then increase in the 0-20 and 40-60 cm soil layers but remained unchanged in the 20-40 cm soil layer in both the elm and poplar plantations over time, and no significant differences were found between plantations (Figure 2). Soil C:N ratio in the 0-20 and 40-60 cm soil layers showed a downward trend but decreased and then increased in the 20-40 cm soil layer in the elm plantation over time, while it showed an upward trend in the 0-20 cm soil layer but initially decreased and then increased in the 20-40 and 40-60 cm soil layers in the poplar plantation over time. Higher soil C:N in the elm plantation than in the poplar plantation was found in spring (Figure 2). Generally, soil C:P showed an upward trend over time and was greater in the elm plantation than in the poplar plantation ( Figure 2). Soil N:P tended to increase in the elm plantation but remained stable in the 0-20 and 20-40 cm soil layers in the poplar plantation during the growing season; soil N:P was higher in the elm plantation than in the poplar plantation (Figure 2). 
Plant C, N, and P Stoichiometry and Nutrient Resorption Tree species did not have a significant effect on C concentrations in all three organs, although sampling time had a significant effect, with an initial decreasing and subsequent increasing trend (Figure 3). N concentrations decreased in leaves but increased and then decreased in branches and roots among elm samples during the growing season, whereas they increased and then decreased in poplar leaves and branches but showed the opposite trend in poplar roots. N concentrations in leaves during spring and autumn, branches during autumn, and roots during spring and summer were higher in elm than in poplar (Figure 3). P concentrations tended to decrease in the leaves, branches, and roots of elm and in the leaves and branches of poplar, whereas they increased in poplar roots during the growing season. Leaf and branch P concentrations during summer and root P concentrations were higher in poplar than in elm (Figure 3). In elm samples, C:N ratio increased in leaves, whereas in branches and roots, it initially decreased and then increased during the growing season; in poplar samples, C:N ratio initially decreased and then increased in leaves and branches but showed the opposite trend in roots. Leaf and branch C:N ratio were lower whereas root C:N ratio were higher in elm than in poplar during autumn (Figure 3). C:P showed an upward trend in all three organs of both elm and poplar samples over time, with the exception of poplar roots, in which it decreased and then increased. Leaf C:P during summer and root C:P during summer and autumn were higher in elm than in poplar (Figure 3). N:P decreased in leaves but initially increased and then decreased in branches and roots in elm samples over time, while it increased and then decreased in leaves but showed the opposite trend in roots in poplar samples over time. Tree species had a significant effect on N:P, with higher in elm than in poplar except for branches during spring and summer and roots during autumn (Figure 3). Leaf litter N and P concentrations were significantly higher, but C:N and C:P were lower, in the elm plantation than in the poplar plantation, and no significant differences were found in C concentrations or N:P between the two plantations ( Table 2). N and P resorption efficiencies were significantly greater in the poplar plantation than in the elm plantation (Figure 4). Stoichiometric Homeostasis We detected the degree of stoichiometric homeostasis of N, P, and N:P in leaves, branches, and roots of elm and poplar, only significant relationships were shown in Figure 5. We found strict N concentration homeostasis in elm branches and roots (P > 0.05) and weak plasticity in elm leaves. However, N concentrations were not homeostatic in poplar leaves or roots and were strictly homeostatic in poplar branches (P > 0.05) ( Table 3). No significant relationships were found FIGURE 2 | Differences in soil C, N, and P concentrations and C:N, C:P, and N:P across the growing season between Siberian elm and Silver poplar plantations (n = 3). in P concentrations between soil and organs (P > 0.05), thus P concentrations were strictly homeostatic in the leaves, branches, and roots of both tree species (Table 3). N:P was weakly homeostatic, weakly plastic, and strictly homeostatic in elm leaves, branches, and roots, respectively, whereas N:P was not homeostatic in poplar leaves, branches, or roots, decreasing in leaves and branches and increasing in roots as soil N:P increased ( Table 3). 
Comparison of Soil C, N, and P Stoichiometry Between Elm and Poplar Plantations Afforestation can improve plant and soil nutrient concentrations and stocks, soil quality, and vegetation structure via more efficient use of resources for primary production (Nosetto et al., 2006). Afforestation increases water-holding capacity and nutrient retention (Evrendilek et al., 2004), increasing the efficacy of C sequestration and enhancing ecosystem biodiversity and resilience in semiarid regions (Hernandez-Ramirez et al., 2011). In this study, compared with nearby wild grassland without afforestation (the C, N and P concentrations were 4.50, 0.31 and 0.11 g kg −1 at 0-20 cm; 2.35, 0.12 and 0.10 g kg −1 at 20-40 cm; and 1.54, 0.10 and 0.10 g kg −1 at 40-60 cm, respectively), both elm and poplar plantations increased the concentrations of soil C and N, especially in deeper soil layers, whereas there was no significant influence on P concentrations (Figure 2). Similar results were reported for P. sylvestris var. mongolica afforestation in the Horqin Sandy Land (Li et al., 2012); however, Zhao et al. (2008) reported that afforestation significantly reduced soil P concentrations but had no significant effect on soil C or N concentrations, perhaps due to differences in tree species or stand density. Thus, the choice of afforestation species and plantation management technique may be key factors for successful afforestation (Zhao et al., 2008;Bai et al., 2019;Ahmed et al., 2020). As predicted, the soil C and N concentrations were greater in the elm plantation than in the poplar plantation (Figure 2), perhaps due to possible nutrient consumption for plant growth and nutrient return in the form of leaf litter. Poplar is a fast-growing, high-yield tree species (Ahmed et al., 2020) and had much larger total biomass than elm (Table 1), while it had similar N concentrations and higher P concentrations in leaves and branches compared with elm during summer. These findings indicated more nutrients were absorbed, assimilated, and sequestrated by poplar, which may contribute to lower soil C and N concentrations in the poplar plantation than in the elm plantation. Additionally, our elm plantation samples had higher N and P concentrations and lower C:N and C:P in leaf litter ( Table 2) and lower N and P resorption efficiencies in leaves (Figure 4) compared with poplar samples, indicating the return of high-quality litter to soil, which accelerated litter decomposition and nutrient mineralization in the elm plantation. Afforestation not only affects soil nutrients but also affects soil texture. Soil granulometric composition significantly correlated with soil nutrient concentrations on the sandy land after afforestation (Deng et al., 2017), thus, more soil clay and silt contents ( Table 1) maybe one of the reasons for higher soil C and N concentrations in the elm plantation (Deng et al., 2017). However, there was no significant difference in soil P concentrations between the poplar and elm plantations (Figure 2), which was inconsistent with predictions, perhaps due to the differences in soil P sources and transformation processes compared with soil C and N. The accumulation of soil C and N is driven mainly by the decomposition of plant litter and dead roots, whereas soil P transformation is driven primarily by phosphate decomposition which requires long periods of time (Deng et al., 2019). 
The N and P cycles can become decoupled under drought stress (Delgado-Baquerizo et al., 2013), and soil P diffusivity is more sensitive to soil water than that of N (Lambers et al., 2008); therefore, cation exchange and P sorption capacity are very low in sandy soil (Leinweber et al., 1999). Thus, the total P in the soil remained at levels too low for adequate absorption and use by plants (Ma et al., 2009). A similar result was reported for Pinus radiata in a temperate Andisol soil, in which the P concentrations remained unchanged following afforestation (Farley and Kelly, 2004). Generally, the soil C concentrations showed an upward trend in both plantations over time, and the increase was greater in the elm plantation than in the poplar plantation (Figure 2). Hu et al. (2016) reported that soil C sequestration was driven mainly by root input rather than leaf litter input after afforestation. Higher fine root biomass was found in the elm plantation than in the poplar plantation , which produced more root litter and exudates to facilitate soil C transformation processes. Soil N concentrations showed an increase trend in the elm plantation, but no significant changes were observed in the 0-20 or 20-40 cm soil layers in the poplar plantation during the growing season (Figure 2), perhaps due to the leaf litter decomposition rate (Wojciech et al., 2019). Elm leaves are small and soft, whereas those of poplar are larger and tougher with a thicker wax layer. Elm had higher litter decomposition and nutrient release rates than does poplar (Zhang et al., 2013). Soil P concentrations tended to decrease and then increase in the 0-20 cm soil layer in both plantations (Figure 2), perhaps because more P in topsoil was absorbed by plants during summer. P cycling is driven mainly by plant P demand and sustained by forest leaf litter inputs (Chen et al., 2008). Soil C:N:P stoichiometry is an important indicator of nutrient cycling and elemental limitations in plants . In our study, the average surface soil C:N, C:P, and N:P of the two plantations were 10.7, 55.6, and 5.1, respectively (Figure 2), which were lower than average values in China (14.4,136.0,and 9.3) and worldwide (14.3, 186.0, and 13.0) (McGroddy et al., 2004;Tian et al., 2010). Compared with sandy grassland and shrubland (Yang and Liu, 2019), we found lower C:N and higher C:P and N:P in the plantations, implying that soil P content is lower in forest plantations in the Horqin Sandy Land. This result may have been caused by greater sensitivity of P than N ion movement to soil moisture conditions (Walbridge, 2000;Smith, 2002;He and Dijkstra, 2014), leading to a greater dependence of soil P availability on soil water availability under the drought conditions at the study site (Yang and Liu, 2019). Furthermore, most of the P absorbed and assimilated by trees is sequestrated within biomass (Kuznetsova et al., 2011;Yan et al., 2017). Soil C:P has negative effects on soil P availability (Li et al., 2016). In our study, soil C:P and N:P were generally greater in the elm plantation than in the poplar plantation (Figure 2) indicating lower soil P availability in the elm plantation relative to the poplar plantation. 
Comparison of Plant C, N, and P Stoichiometry Between Elm and Poplar Plantations C, N, and P concentrations and stoichiometric ratios in different plant organs can reflect adaptive strategies to various regimes in terms of nutrient uptake, allocation, and utilization during If the regression relationship is not significant (P > 0.05), then 1/H is set at zero, and the organism is considered strictly homeostatic. If the regression relationship is significant (P < 0.05), then species with | 1/H| ≥ 1 are considered not to be homeostatic, where those with 0 < | 1/H| < 1 are classified as follows: 0 < | 1/H| < 0.25, homeostatic; 0.25 < | 1/H| < 0.5, weakly homeostatic; 0.5 < | 1/H| < 0.75, weakly plastic; or | 1/H| > 0.75, plastic. plant growth (Niklas and Cobb, 2005). C concentrations did not differ significantly among plantation species, whereas they decreased and then increased over time (Figure 3). Lower C concentrations often lead to higher specific leaf area and photosynthetic and growth rates (Niklas and Cobb, 2005), implying faster growth for both tree species during summer than during spring and autumn. Inconsistent with our second prediction, poplar had higher P concentrations and lower C:P than elm during summer, implying faster growth rate and more biomass accumulation. However, P concentrations and C:P were similar in poplar leaf and branch compared with elm during spring and autumn (Figure 3). Drought stress can induce a decrease in available soil P (He and Dijkstra, 2014), thus limited P absorption from soil leads to low P concentrations in plants to maintain C assimilation in arid and nutrient-poor environments (He and Dijkstra, 2014), which may explain the similar P concentrations observed in the leaves and branches between poplar and elm during spring and autumn. Drought often induces xylem embolism in taller tree species (McDowell et al., 2008), and water transport failure can affect nutrient translocation and allocation (He and Dijkstra, 2014). Poplar is taller and more susceptible to hydraulic failure in sandy regions (Song et al., 2021), which may lead to nutrient accumulation in roots. This process may also explain the higher NuRE of poplar than elm (Figure 4). Inconsistent with our second prediction, N concentrations and C:N in leaves and branches were similar between elm and poplar during summer, although lower N concentrations and higher C:N in leaves and branches were found in poplar than in elm during autumn (Figure 3). These findings may because of growth dilution effects after summer caused by the greater biomass and growth rate of poplar compared with those of elm. In poplar, N concentrations were higher in leaves and branches but lower in roots during summer than those during spring and autumn (Figure 3), implying that more resources are allocated to leaves and branches during the rapid growth season to promote the growth of aerial plant part. N and P concentrations increased in roots but decreased in leaves and branches during autumn (Figure 3), which indicated that most N and P were reabsorbed and transferred to roots for storage, implying a more conservative nutrient use strategy that benefits sprouting and new leaf growth during the following spring. Elm organ N and P concentrations decreased in autumn (Figure 3), whereas N and P resorption efficiencies were lower in elm than in poplar (Figure 4). 
This finding indicates the return of high-quality leaf litter to the soil, leading to greater N and P acquisition by elm via root uptake and implying more efficient nutrient cycles in the plant-litter-soil system. The N concentrations in branches and roots were higher during summer than during spring and autumn, with no significant differences between these organs (Figure 3). This implies that resource allocation between aerial and underground plant parts may be more balanced in elm than in poplar, promoting root growth during the rapid growing season for greater absorption of nutrient and water. The leaf N:P can be used to determine potential N or P limitations for plant growth, and a ratio < 14 indicates N limitation, whereas a ratio > 16 indicates P limitation (Koerselman and Meuleman, 1996). In this study, the leaf N:P of elm was generally > 16 during the entire growing season (Figure 3), indicating P limitation for elm growth. However, the leaf N:P of poplar was > 16 in summer but < 14 in spring and autumn (Figure 3), which indicates that poplar experienced more P limitation during the fast-growing season and more N limitation during the early and late growing seasons. Generally, elm may be more susceptible to P limitation as it had higher N:P than poplar. Comparison of Stoichiometric Homeostasis Between Elm and Poplar Plantations Stoichiometric homeostasis reflects the balance between resource consumption and storage in plants during growth period (Blouin et al., 2012), and it is positively correlated with vegetation stability (Yu et al., 2010). Stoichiometric homeostasis has been used to study the mechanisms of plant adaption to environmental change , and it has been compared among tree species with similar age following afforestation on the Loess Plateau of China (Bai et al., 2019) and in arid mining subsidence areas (Xiao et al., 2021). Although the soil granulometric composition may influence nutrient concentration in tissue, it would not affect homeostatic analysis. Variations in plant nutrient concentrations are the integrated results of soil improvement following afforestation (Bai et al., 2019). The homeostasis analysis is based on the current N and P concentration and N:P in soils and plants (Bai et al., 2019), regardless of the soil granulometric composition changes. In this study, both elm and poplar exhibited N and P concentration and N:P ratio homeostasis to some extent across the growing season (Figure 5), indicating relatively conservative nutrient use in both species, which improves their adaptation to this arid and nutrient-deficient environment. The maintenance of stable elemental composition in the plant body in a changeable environment is beneficial for growth, development and survival (Blouin et al., 2012). To minimize the effects of tree size, we only compared N:P homeostasis between elm and poplar. Consistent with our third hypothesis, elm generally showed greater N:P homeostasis than did poplar (Figure 5), indicating that elm may have more developed nutrient modulation systems than poplar, or that elm contains more functional materials, leading to a faster response to nutrient regime changes (Bai et al., 2019). Native species such as elm have a longer life history in a given local environment, which could allow it to adapt better to adverse environmental conditions, thereby improving ecosystem stability in the elm plantation compared with the poplar plantation. However, stoichiometric homeostasis is coupled to tree growth and development . 
In future studies, the stoichiometric homeostasis is needed to be evaluated in the two tree species of different age and size to better understand the mechanisms of nutrient conservation. Limiting elements in plants with homeostasis generally have low variability and environmental sensitivity (Han et al., 2011); thus, they are the main regulators of homeostasis (Sterner and Elser, 2002). Leaves, branches, and roots in both elm and poplar trees were found to have strict P homeostasis ( Table 3), indicating that P may be the main nutrient limiting factor for the growth of mature elm and poplar plantations. Similar results were found in P. sylvestris var. mongolica plantations (Zhao et al., 2009) and Caragana shrubs (Yang and Liu, 2019) in the same region. The degree of stoichiometric homeostasis appears to vary among organs (Bai et al., 2019), reflecting a fundamental trade-off in nutrient investment and allocation among organs (Gu et al., 2017). In this study, elm branch and root N concentrations and root N:P showed strict homeostasis, whereas the leaf N concentrations and N:P were weakly plastic and weakly homeostatic, respectively (Table 3). These results are inconsistent with those of previous studies demonstrating that leaf homeostasis is often greater than that of other organs such as branches, roots, and fruits (Bai et al., 2019;Wang et al., 2019), perhaps because leaf nutrient contents are constrained within a certain range to provide optimal physiological traits for the maintenance of survival and growth (Aerts and Chapin, 2000). Elm can survive after disastrous weather, insect or disease events, even if all leaves are lost. Therefore, maintaining the nutrient balance in elm roots may be an adaptive strategy in arid and barren environments. In poplar, the N concentrations and N:P among the three organs were not homeostatic, except for the N concentrations in branches (Table 3); the N concentrations and N:P decreased in leaves and branches but increased in roots as soil N concentrations and N:P increased (Figures 5B,D). These findings indicate that poplar coordinates nutrient allocation among organs and nutrient translocation between aerial (leaf and branch) and underground (root) part, which showed opposite trends. When poplar experienced nutrient limitation, it decreased nutrient supply to the aerial parts and increased nutrient storage in underground parts. Poplar produces many root shoots and can sprout from roots in the spring following nutrient limitation, even if the aerial parts have died. CONCLUSION In this study, nutrient conservation, use mechanisms, and stoichiometric homeostasis traits differed between elm and poplar plantations in the Horqin Sandy Land of China. Elm had lower organ N and P concentrations in autumn but greater litter N and P, and soil C and N concentrations, which enhanced nutrient cycling in the plant-litter-soil system. Elm evenly allocated N and P contents between aerial and underground parts. In contrast, poplar had higher root N and P concentrations in autumn and higher N and P resorption efficiencies but lower soil C and N concentrations, implying a more conservative nutrient use strategy and more developed internal nutrient cycles. Poplar had higher P concentrations and lower C:P than elm and allocated more N and P to leaves and branches during summer, implying faster growth rate and greater biomass, which contributed to lower soil nutrient concentrations. 
These traits are beneficial for early poplar growth, although stand degradation is expected to occur once soil nutrients can no longer sustain the nutritive requirements for growth. Generally, elm exhibited greater N:P homeostasis than poplar. Elm showed greater homeostasis in roots than in leaves and branches, whereas poplar coordinated nutrient allocation among organs. P was the main nutrient limiting factor in both elm and poplar plantations. Overall, elm was more adaptable to the arid, nutrient-deficient environment in terms of fostering soil nutrient accumulation and improving nutrient cycles in plant-litter-soil systems of the Horqin Sandy Land. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS KW and RZ conceived and designed the study. KW and EN performed the experiments. KW and TY wrote the manuscript. TY and LS reviewed and edited the manuscript. All authors read and approved the manuscript.
2020-11-26T09:07:16.241Z
2020-11-23T00:00:00.000
{ "year": 2021, "sha1": "7018609da679a7a6d40b648cf445a6502a68c2a5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2021.655517/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9dbfc3842f89552e3a9ba089cbad27a021df99d9", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13324893
pes2o/s2orc
v3-fos-license
A high level interface to SCOP and ASTRAL implemented in Python Background Benchmarking algorithms in structural bioinformatics often involves the construction of datasets of proteins with given sequence and structural properties. The SCOP database is a manually curated structural classification which groups together proteins on the basis of structural similarity. The ASTRAL compendium provides non redundant subsets of SCOP domains on the basis of sequence similarity such that no two domains in a given subset share more than a defined degree of sequence similarity. Taken together these two resources provide a 'ground truth' for assessing structural bioinformatics algorithms. We present a small and easy to use API written in python to enable construction of datasets from these resources. Results We have designed a set of python modules to provide an abstraction of the SCOP and ASTRAL databases. The modules are designed to work as part of the Biopython distribution. Python users can now manipulate and use the SCOP hierarchy from within python programs, and use ASTRAL to return sequences of domains in SCOP, as well as clustered representations of SCOP from ASTRAL. Conclusion The modules make the analysis and generation of datasets for use in structural genomics easier and more principled. Background Bioinformatics tools often attempt to automatically predict the unknown properties of a given dataset. Manually curated data is therefore important both for training and for benchmarking new approaches to prediction. One database providing such a manual curation of data is SCOP (the Structural Classification Of Proteins) [1]. SCOP categorises all known protein domains in a hierarchy based upon the domain structure. This hierarchy is principally described by Class, Fold, Superfamily and Family. Crucially, the relationships between proteins grouped at the superfamily level may not be apparent from sequence considerations alone. This makes SCOP a valuable resource when examining the performance of algorithms that detect remote sequence relationships [2]. The ASTRAL [3] Compendium for Sequence and Structure Analysis complements SCOP. ASTRAL provides sequences and structures for each domain in SCOP, and also provides non-redundant subsets of SCOP with preference given to higher quality structures. The SCOP and ASTRAL databases provide their data in structured files available from the relevant websites. Using this data requires parsing and handling of these files. We present a small and intuitive application programming interface (API) to the SCOP and ASTRAL datasets which allows these databases to be used with a minimum of programming overhead. In the past we have successfully used this API to develop a web-based database of SCOP alignments, S4 [4]. The API described is now distributed as part of the Biopython suite for bioinformatics [5]. Implementation The API provides methods that allow the SCOP tree to be queried. Nodes in the SCOP tree can be found using their identification or their position in the tree. In addition, given a particular node, nodes lying on a different level on the tree (ascendents or descendents) can be found. Each leaf of the tree corresponds to a domain in the SCOP hierarchy. The API uses ASTRAL to provide information on the leaves of the tree, corresponding to domains. For each domain, the API provides its sequence and its membership of non-redundant subsets and sequence as defined by ASTRAL. 
Figure 1 shows a Unified Modelling Language (UML) diagram of the classes and methods involved in the API. Once a Scop object has been instantiated with either a reference to the file or a database, Node objects can be returned by specifying sids (SCOP identifiers that show the pdb identity plus a code that identifies the chain, such as "dlilk --") or sunids (SCOP unique identifiers, numerical identifiers assigned to each node in the tree that are guaranteed to be identical across releases, such as "63336"). These Node objects represent all nodes in the tree, including classes, folds, superfamilies and families. Usage Node objects can be queried for their parent or child objects, and can be queried for relatives further up or down the tree using the getAscendent or getDescendents methods. These methods accept as an argument a string describing the level required, either as the human readable name of the node (e.g. getAscendent ('superfamily')) or using the SCOP conventions for the levels ('cf, 'sf', etc.). Domains are leaves of the SCOP tree and have a special class Domain which stores the sid (e.g. dlh32a2) as well. In addition, each Domain object has a Residues object storing the pdb chain that the domain corresponds to, as well the list of residues from the chain that have been determined to be part of the domain. The Astral class is an abstraction of the ASTRAL database. ASTRAL provides a FASTA formatted file of all domains in the SCOP database based on PDB records. Using the Biopython framework for handling FASTA files, sequences for SCOP domains can be quickly returned. So, by calling getSeqRecord on a domain with an instance of the Astral class we can retrieve the relevant sequence. ASTRAL also provides FASTA files containing SCOP domains clustered at percent id of residues shared between sequences, or BLAST expect values. The Astral class can parse these files and return Domain objects for each domain in the file. Furthermore, a list of domains for a given percent id (e.g. 10%) or E-value (e.g -10) can be returned using getDo-mainsClusteredById or getDomainsClusteredByEv. Examples Having downloaded the SCOP parsable files and the ASTRAL scopseq resources, the Astral and Scop objects are instantiated: A more complex example would be to create a novel dataset to benchmark homology recognition [6]. The authors wished to create a dataset of highly populated sequence diverse superfamilies: those with more than twenty members at less than ten percent sequence identity. Using these modules, such a dataset could be generated in a few lines of code: The advantage of using an SQL approach is that it avoids constructing the entire SCOP tree in memory when the Scop object is created. Instead, database queries are made as and when nodes from SCOP are requested. This avoids the time consuming process of parsing the entire tree, and allows an application using these modules to start quickly. Evaluation The classes have been tested using a unit testing framework, and can correctly parse version 1.61 to 1.67 of the SCOP and ASTRAL databases. Loading and building the SCOP tree from flat files typically takes a few seconds on a modern workstation, although this wait can be avoided by using the MySQL backend.
2014-10-01T00:00:00.000Z
2006-01-10T00:00:00.000
{ "year": 2006, "sha1": "96770783d2784e17ba01ff91cf9d6c55cc2378a8", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-7-10", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96770783d2784e17ba01ff91cf9d6c55cc2378a8", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
59319323
pes2o/s2orc
v3-fos-license
The Impact of Distribution Intensity on Brand Preference and Brand Loyalty Several studies attempted to conceptualize and measure brand equity. Brand equity constructs identified include awareness, associations, perceived quality, and loyalty, among others. Further, brand performance has been operationalized in terms of market share, ability to charge price premium, and distribution coverage. While most studies focused on consumer-based constructs, few researchers tested the effect of distribution intensity on brand performance. This study advances a model that links distribution intensity with brand preference and loyalty, and empirically tests it on the fuel industry in Egypt. First, in-depth interviews with industry experts were conducted to validate research hypotheses. Then, online surveys were distributed to test model relationships on four leading brands. Results revealed that affect, satisfaction, perceived quality, as well as distribution intensity significantly affected brand preference; which in turn was the key driver to brand loyalty. It is recommended that firms consider the role of distribution while developing marketing strategies and brand-building activities. Introduction Several studies attempted to conceptualize brand equity and determine its main constructs (Aaker, 1991;Keller, 1993).Firms are in a continuous need to assess their brand performance in order to develop effective marketing strategies.Keller (1993) posited that evaluating the brand in the minds of consumers is prerequisite to brand market performance.As a result, it is crucial for firms to measure brand equity constructs and identify the ones that significantly impact their performance in the marketplace.Despite several attempts to measure brand equity constructs, there is no one single agreed upon framework in the literature (Washburn and Plank, 2002;Tolba & Hassan, 2009).Aaker (1991) posited that brand equity consists of five main constructs; awareness, associations, perceived quality, brand loyalty and other proprietary assets.Keller (1993) posited that customer-based brand equity could be measured through: brand knowledge, familiarity and consumers' response based on their perception, preferences and behavior towards the brand.Further, Erdem & Swait (2004) classified brand equity measurement models into: 1) component-based models (Aaker, 1991(Aaker, , 1996;;Keller, 1993, Lassar et al., 1995;Keller & Lehmann, 2003); and 2) holistic models (Swait et al., 1993;Park & Srinivasan, 1994, Kamakura & Russell, 1993).Yoo and Donthu (1997) utilized four of Aaker's five brand equity components, and advanced a scale to measure "Overall Brand Equity".Further, Aaker (1996) introduced the Brand Equity Ten model, which operationalized brand equity in terms of ten constructs, covering both consumer and market measures.Besides seven consumer-based constructs, the model recommends three market performance constructs: market share, price premium, and distribution coverage.While most brand equity studies focused on consumer-based constructs, few studies attempted to test the impact of distribution intensity on brand preference, loyalty, and performance. 
Distribution Intensity Despite its importance, distribution intensity has received little attention in academic research.Reibstein and Farris (1995) proposed that there is a convex relationship between distribution coverage and market share for consumer packaged goods.Empirical studies concluded that "distribution is one of the most potent marketing contributors to sales and market share" (Hanssens et al., 2001;Bucklin et al., 2008).Further, Srinivasan et al. (2005) developed a brand equity model that incorporates brand availability as a key brand performance driver.Finally, Bucklin et al. (2008) introduced a model that relates distribution intensity to buyer choice among competing consumer durables; and further applied it on the automotive industry in the United States.It is clear that distribution intensity is an under-researched construct that needs further investigation, and this paper attempts to bridge this gap in the literature.This study utilizes analyzes the factors affecting brand preference and loyalty, including distribution intensity, and empirically tests them on the fuel industry in Egypt.The key constructs selected are driven from Aaker's (1991) model, incorporating awareness, perceived quality, and adds the emotional dimension (affect), the experience dimension (satisfaction); along with distribution intensity (the focus of the study).The next sections detail the links between these five constructs and brand preference. Brand Preference This study utilizes brand preference as the primary factors affected by distribution intensity and other brand equity constructs.Several empirical studies in the literature supported the positive relationship between brand equity constructs and brand preference (Cobb-Walgren et al., 1995;Agarwal & Rao, 1996;Vakratsas & Ambler, 1999;Mackay, 2001b;Myers, 2003;Lavidge, 1961).Further, Agarwal & Rao (1996) developed a model that links brand equity to the hierarchy of effects model.Customer-based brand equity has been thought of as a prerequisite to brand preference, which in turn affects consumers' intention to purchase.Brand equity models assessed the impact of individual measures on market share, and utilized several brand equity constructs: awareness, familiarity, weighted attributes, value for money, and overall quality of the brand (Mackay, 2001b).Therefore, brand equity constructs are expected to affect brand preference; and the challenge is to determine which constructs to prioritize in order to increase preference and improve brand performance. Additionally, several studies differentiated between "Attitudinal Loyalty" and "Purchase Loyalty".(Morgan, 2000;Chaudhuri & Holbrook, 2001) Attitudinal Loyalty was defined as "the level of commitment of the average consumer toward the brand".Purchase (Behavioral) Loyalty was described as "the willingness of the average consumer to repurchase the brand."This study focuses on the effect of brand equity constructs and brand preference on attitudinal loyalty. Country-of-Origin According to Thakor and Lavack (2003), country-of-origin (COO) appears to have a powerful influence on consumers' perception of brand's quality; and therefore emphasizing COO information in marketing activities can help to improve the evaluations of brand image. 
The COO concept has been divided into three components: country of assembly (COA); country of design (COD) (Ahmed and d'Astous, 1996;Chao, 1993) or COM (Chao, 1998;Insch, 1995;Insch and McBride, 2004); and country of manufacturing (COM) (Ulgado and Lee, 1993;Iyer and Kalita, 1997).This study utilizes the country of design, which the only relevant factor to the fuel industry. Research Model This study was conducted in two phases.First, in addition to the literature supporting the model, an industry expert was interviewed in order to understand the dynamics of the fuel market in Egypt.Then, in-depth interviews with three consumers were conducted to validate the proposed model relationships.Findings revealed that distribution intensity is expected to be a very critical factor affecting brand preference and brand loyalty in the Egyptian fuel market.Accordingly, it was added to the proposed model as an independent variable to be tested on a larger sample. Accordingly, five constructs were utilized as independent variables affecting brand preference: 1) Awareness; 2) Perceived Quality; 3) Affect; 4) Satisfaction; and 5) Distribution Intensity.Some key variables, such as Value and Price Premium were not included in the model due to the subsidization of fuel in Egypt.Furthermore, having the price constant enhances the accuracy of the results, since price differences have been a major challenge that faced Bucklin et al. (2008) during their research on distribution intensity and brand preference.Figure 1 details the research model of the study. Research Hypotheses The research model involves eight hypotheses in order to measure the effect of brand equity constructs on brand preference and brand loyalty as well as test for the effect of country-of-origin as a moderating variable. H1: There is a positive relationship between Awareness and Brand Preference. Despite the fact that all brands in the Egyptian market utilize the exact same fuel product, interviewed consumers perceived differences in product quality from one brand to another.One explanation supported by the expert's opinion, is that the quality of other services provided at the stations affects consumers' perception.Therefore, the model considered perceived quality of core-related products/services as well as non-core related ones. H2: There is a positive relationship between Perceived Quality and Brand Preference. Affect and Brand Preference Brand Affect is defined as a brand's potential to elicit a positive emotional response to the average consumer as a result of its use.This study attempts to capture the emotional effect of the brand on preference and loyalty (Percy & Rossitier, 1992;Chaudhuri & Holbrook, 2001). Satisfaction and Brand Preference Satisfaction was identified as a key customer-based brand equity construct (Aaker, 1996;Tolba & Hassan, 2009).Roman (2003) argued that customer satisfaction is a prerequisite to customer loyalty.Further, in several models of customer retention, satisfaction has been explored as a key determinant in customers' decisions to continue or terminate a business relationship (Anderson & Srinivasan, 2003). H4: There is a positive relationship between Satisfaction and Brand Preference. 
Distribution Intensity and Brand Preference Based on expert's interview, a major factor that influences brand choice is the scale of its network (referred to as distribution intensity) as well as the appropriate location selection.This opinion was supported by interviewed consumers who see that the good location selection ensure that the customer will use the same brand at future purchases due to convenience thus leveraging brand equity (Reibstein & Farris, 1995;Yoo et.al, 2000;Hanssens et al., 2001;Yoo & Donthu, 2002;Bucklin et al., 2008). Country-Of-Origin Effect Lin and Kao ( 2004) developed a model that links COO effect to brand equity.Further, a study was conducted by Yassin et.al (2007) stressed on the fact that the image of the brand's COO influences brand equity, either directly or indirectly, through the mediating effects of brand distinctiveness, brand loyalty or brand awareness/associations.Accordingly, COO is included as a moderator for model relationships. Data Collection and Sampling An online survey was administered, targeting consumers in Egypt who own or use cars, using Zoomerang, an internet-based survey tool.All scales utilized to measure model constructs were driven from the literature.Table 1 highlights the sources of all study scales.All scales have been found reliable with Cronbach Alpha (α) ranging from 0.80 to 0.97.According to Nunnally (1994), a scale is considered reliable if Cronbach Alpha (α) is greater than 0.70.Respondents were requested to evaluate model constructs for four brands in the Egyptian market: 1) Mobil (a leading multinational brand); 2) Total (an average multinational brand), COOP (an established local state-owned brand), and Emarat (a new brand from the United Arab Emirates). This study targets car owners from A & B+ social classes in Egypt.Filter questions were included at the beginning of the questionnaire to avoid having non-targeted respondents.A snowball sampling technique was adopted, using the author's extended network.While this is a non-probability technique, leading to a risk of having a non-representative sample, two factors render this risk minimal.First, the fact that the survey was conducted online ensures that only upper-class educated consumers answered the questionnaire.Second, the target market of car owners is not particularly unique.A total of 150 responses were collected, which a relatively small sample.However, each respondent evaluated four brands; leading to a total number of 600 observations, which is an acceptable sample size (Sekaran, 1992). The sample was found adequate to represent the intended target market.More than 70% of respondents were male, which represents the car users in Egypt.Age was fairly distributed, and most respondents were from upper and upper middle classes in terms of income. Results To test research hypotheses, two multiple regressions were conducted to identify the factors that significantly affect brand preference and brand loyalty.Additionally, mean comparison analysis is presented to compare the four brands and drive managerial conclusions. 
Factors Affecting Brand Preference The first regression was performed to measure the effect of the five independent variables: awareness, perceived quality, affect, satisfaction, and distribution intensity on brand preference.Below is the best-fitting regression equation: BP = -1.105 -0.094 (AW) + 0.364 (PQ) + 0.420 (AFF) + 0.146 (DI) + 0.411 (SAT) [R 2 =0.808] It is concluded that all constructs under study significantly affect brand preference.Also, a very high coefficient of determination (R 2 =0.808) indicates a very high level of predictability, whereby the four constructs explain 81% of the variability in brand preference.Affect and satisfaction were found as the strongest predictors of preference, followed by satisfaction and distribution intensity.As for awareness, surprisingly, it had a negative effect on brand preference, which contradicts with the original hypothesis.Therefore, a "per brand" analysis was conducted, and yielded the results in Table 2. Results indicate that awareness had a negative effect on brand preference only in the case of Emarat, the newly introduced brand from the UAE, and is generally unknown to a large number of consumers.Once the level of awareness increases on average, the significant negative effect on brand preference disappears.Similarly, distribution intensity for COOP (the local brand) showed a negative relation with brand preference despite the fact that COOP does have one of the biggest distribution networks among competition, which contradicts with all supporting literature.This could be attributed to the fact that the targeted sample represents classes A to B+ which does not find COOP stations in their classy residential areas.Also, this means that the brand's quality is negative in the minds of consumers; and the more they are aware of COOP's stations, the more negative their perceptions are. Factors Affecting Brand Loyalty Another multiple regression was conducted to measure the effects of all independent variables as well as brand preference on brand loyalty.The regression analysis showed that Brand Preference, Satisfaction, Affect, and Perceived Quality have a positive and significant influence on Loyalty.Below is the regression equation: The above regression has a very high R 2 of 0.899, which indicates a very high level of predictability, whereby the four constructs explain 90% of the variability in Brand Loyalty.It is concluded that brand preference is the major factor that significantly affects brand loyalty, followed by satisfaction, affect and perceived quality.The direct relationship between distribution intensity and brand loyalty was not supported.Therefore, it is concluded that distribution intensity has an indirect effect on brand loyalty through brand preference.Figure 2 highlights the overall results of the research model. A further step was taken to identify the Loyalty regression equation per brand; its results are shown in Table 3.It is concluded that direct relations between Awareness and Distribution Intensity with Brand Loyalty were not supported.In addition, it was noticed that Perceived Quality has no significant effect for each brand separately despite being a significant factor in the overall the regression equation.This could be attributed to the smaller sample size per brand as compared to the total observations.The moderating role of country-of-origin at the consumer level has not been supported in this study.One reason could be the overwhelming differences in brand perceptions. 
Independent samples T-Tests were conducted to assess the effect of other possible moderating factors on the proposed model.There has not been any significant difference between genders.However, results varied significantly across the four brands under study. Conclusions and Managerial Implications This study advances a model that measures the effect of distribution intensity and other brand equity constructs on brand preference and brand loyalty.It was concluded that distribution intensity is indeed a major factor that drives brand preference; and ultimately brand loyalty.While preference is affected by functional an emotional dimensions (perceived quality and affect), as well as consumers' experience with the brand (satisfaction), it is evident that developing a strong distribution network is an important factor that companies should not undermine.Not only does it provide convenience and availability to consumers, it also increases their brand preference and loyalty. In order to analyze the fuel market in Egypt, all brands were ranked for each studied variables as shown in Table 4. It has been concluded that Mobil significantly higher than any other brand in all variables.This puts Mobil in a more comfortable competitive zone.On the other hand, COOP has been rated lowest for all variables except awareness and distribution intensity; two factors that were diluted due to the significantly low quality and image.This could be very alarming to the management of COOP unless they are consciously not interested in this niche market under study. Despite its short existence in the Egyptian market, Emarat shows great positioning especially if compared to Total, which possesses a multinational management that should be more experienced in this field.Further, Scheffe Post-Hoc analysis was conducted among brands out of which, it has been found that there is no significant difference between Total and Emarat for most variables, except awareness and distribution intensity; two factors that are projected to improve over time.This should put a lot of burden on Total, which has been in the market for a longer period of time; yet, it cannot distinguish its internationally-known brand from a new regional entrant brand like Emarat. Furthermore, there is a significant difference in awareness among age groups, particularly age groups of 21-40 and greater than 40.This could be due to the fact that the older group is more resistant to change; accordingly, they do not accept to be acquainted to new brands like Total and Emarat. The model proposes that distribution intensity significantly influences brand preference.It explains why Mobil and COOP brands are leading since they are among the early entrants in the market.However, the selection of the service station location, even for the new entrants, is of high importance as it will attract more of the targeted customers as much as the network is convenient to those customers.Therefore, it is recommended that newly entrants equally distribute their network in cities and suburbs in order to capture as much of their target population in different areas. 
Agenda for Future Research As an attempt to complement and enrich this study, it is recommended that future research would apply the same model targeting different target segments, mainly lower classes.Also, a future research could be conducted to study the effect of sequence of entrance in the market on brand preference and brand loyalty.This will help companies that consider entering this market in identifying the challenges that might hinder their growth, and finding the appropriate tools to overcome them. This study could be replicated on other industries to identify industry-specific factors.In particular fast-moving consumer goods could be significantly affected by distribution intensity, which could play a major role in strengthening their brands.Finally, replicating this study in other countries would be useful in verifying model relationships and comparing results across borders. Table 1 Sirgy et al. (1997)Behavioral LoyaltyThree nine-point Likert-type scale, measuring the likelihood that a person will use some object again. Table 2 . Regression Coefficients per Brand for Brand Preference Table 3 . Regression Coefficients per Brand for Brand Loyalty
2018-12-21T23:29:47.033Z
2011-08-01T00:00:00.000
{ "year": 2011, "sha1": "4d4dea47404a325f2913e98eba9a953fa7405781", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ijms/article/download/9916/8122", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4d4dea47404a325f2913e98eba9a953fa7405781", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
201301515
pes2o/s2orc
v3-fos-license
Investigations of thermal degradation and electrical properties of polyamide materials versus polybismaleimide materials for fire-fighters helmets In this paper, a study on thermal degradation of polybismaleimide materials for fire-fighters helmets compared to polyamide materials from comercial fire-fighters helmet was performed by applying simultaneous mass spectrometry and Fourier transform infrared spectroscopy of gas products from a thermogravimetric analyzer. The results of TG measurement indicate a difference in the mass loss behavior between the two polymers materials. The thermal decomposition of polyamide material takes place in the single stage in the temperature range of 400-460°C with a mass loss of 98% while that of polybismaleimide material takes place in two steps with a residuu at 650°C of 85%. The electrical analyzes consisted in the application of the electric breakdown test and the calculation of the breakdown parameters. Following the experiments, it has been found that the voltage breakdown has approximately 4 times higher values for polybismaleimide material than compared to polyamide materials. Introduction Personal protective equipment (PPE) is the equipment intended to be worn or held by the worker in order to protect him against the risks which might endanger his safety and health at work [1] and/or any additional accessory designed for this purpose, according to GD no. 1048/2006 [2]. PPE for firefighters is made up of protective helmets, masks, boots or other garments designed to protect firefighters from injuries [3]. PPE is necessary to protect against physical, electrical, thermal, chemical, biological hazards, and also to protect against various particles suspended in the air [4]. Firefighter helmets are made of various polymers such as polyethylene, synthetic resins, hybrid glass / jute reinforced epoxy composites [5] intended to be heat-resistant, mechanical-resistant, chemical and electrical shocks resistant. A fire helmet, according to SR EN 443/2008 [6] needs to provide head protection in various cases, such as: when a bulky object falls into the main areas of the head, a collision with a fixed or moving object, the propelling of solid or liquid materials, the fall or the movement of a part of the structure where the intervention takes place, the effect of a violent blow caused by an explosion, a strong and/or very strong combustion, contact with hot substances and contact with electricity or splashing with liquid chemical substances, respectively. Generally, a helmet consists of a cap and harness as well as other various accessories. Among the accessories we include: retaining strap, bracket and cable clip for the attachment of a lamp, eye shield, or face shield, neck flaps for protection against weather, molten metal splash, hot substances, lining for cold conditions or ear muffs [3]. In order to ensure the resistance to all listed, it is important that the material from which the helmet is made to exhibit good thermal, mechanical and electrical properties. A good candidate for achieving the protective helmet cap that possesses a very good thermal and mechanical properties is bismaleimide. This material presents a number of advantages such as: high temperature stability, low volatility and low cost [7]. The purpose of this paper is to present the thermal and electrical properties of a new material made of polybismaleimide, in comparison with polyamide material from the commercial fire-fighter helmets. 
Materials and methods The material analysed in the present study (polybismaleimide), was obtained by a method described in a previous paper [8][9][10][11]. In order to explain the thermal degradation process of polyamide, the most used plastic material from commercial fire-fighters helmet and the new designed polybismaleimide material, TG/ MS/ FTIR technique was used. The results were obtained with a STA 449 F1 Jupiter (Netzsch, Germany) thermo-gravimeter device coupled with a 403C Aëolos QMS mass spectrometer (Netzsch, Germany) and a Vertex-70 FT-IR spectrophotometer (Bruker, Germany). Samples with 8 and 10 mg were analysed using a procedure that involves a heating rate of 10°C / min between 25°C and 680°C temperature range. The electrical puncture test was carried out with the Megger® OTS60SX equipment, which is a semi-automatic dielectric resistance in oil test with a maximum power of 60 kV. For the electrical puncture test, four oil bath tests were carried out on samples of polyamide materials from commercial fire-fighters helmet and the polybismaleimide based material. Figure 1 shows the TG / DTG / Gram Schmidt curves for polyamide material samples from commercial fire-fighters helmet (a) and polybismaleimide material (b). Experimental results Gram Schmidt curve (black line) indicates that the maximum concentration of the gas mixtures resulting from decomposition process is at 430.4°C for the old firefighters helmet sample and in range of 200-600°C for polybismaleimides material. In case of polybismaleimide, the removed gases are mainly formed by solvent traces and water. The loss of mass for the sample of polyamide materials taken from commercial firefighter helmet in temperature range of 400-460°C is 98%, while for polybismaleimide is 14% between 25-680°C ( Figure 1, red line). The two-dimensional FT-IR spectra of the gasses resulted at different temperatures from thermal decomposition of polyamide material are shown in Figure 2 (a) and for polybismaleimide material sample in Figure 2 From the FT-IR charts of the fire-fighter's polyamide samples at 430, 500, 550, and 630°C, it is noted that the absorption band peak decreases with growth of the temperature above 450°C. The FT-IR spectra at 430°C of the polyamide samples from the commercial fire-fighters helmet (temperature at which the resulting gas concentration is the maximum -see Figure 1 a, black line) shows the absorption bands at 3073-2872 cm-1 (specific to amine and aliphatic groups), 2350 (CO2), 1498, 1493, 905, 762 and 695 cm-1, Figure 2 The electric puncture is the destruction phenomena of an electrical insulation, which leads to the direct passage of electric current through the dielectric mass, between two electrodes at a certain potential difference. The phenomena may or may not be accompanied by an electric arc, depending on the power of the power supply, and is characterized by the puncture voltage (Ustr). Dielectric rigidity (Estr) is the minimum value of the electrical field strength, when the material becomes inefficient. For the electrical breakdown test, four oil bath tests were carried out on samples of polyamide materials from commercial fire-fighters helmet and polybismaleimide material. The test results are shown in Table 1. From the above data, it can be seen that the puncture voltage, Ustr, is approximately 4 times higher for the sample of polybismaleimide material compared to the sample of polyamide materials from commercial fire-fighters helmet. 
Also, the same behaviour is observed for dielectric rigidity values (Estr). The experiments were carried out in compliance with the legal provisions on occupational safety [12][13], eliminating the risks which human resource may be exposed. Summary and conclusions The paper aimed at studying the thermal degradation and electrical properties from polybismaleimide materials for fire-fighters helmets compared to polyamide materials from commercial fire-fighters helmet. Given the results of experimental measurements, the following conclusions may be formulated: a) The polyamide material from commercial fire-fighter helmets have a mass loss of 98% for a temperature range of 400-460°C while the polybismaleimide material have a mass loss of 14% between 25-680°C. Furthermore, the quantities of volatile compounds generated in the degradation process are much smaller for polybismaleimide compared with the polyamide material. Taking all into consideration, the material containing polybismaleimide is more suitable for manufacturing firefighter helmets due to its heat-resistant properties and lack of volatile compound generated during temperature decomposition. b) Puncture voltage, Ustr and dielectric rigidity Estr is approximately 4 times higher for polybismaleimide material compared to polyamide materials from commercial fire-fighters helmet, making it a more efficient electrical insulator. This indicates that protection helmets made of polybismaleimide material can behave better to an electric shock during fire-fighters intervention, limiting the risk of accidental electrocution.
2019-08-23T14:28:53.955Z
2019-08-02T00:00:00.000
{ "year": 2019, "sha1": "06c98ab7f45a206433b28b78783329dc2cfa9058", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/572/1/012031", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8c6a3fab075ecf78d3998c8e5b5896cd728affb2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
254154956
pes2o/s2orc
v3-fos-license
A Methodology for Evaluating the Robustness of Anomaly Detectors to Adversarial Attacks in Industrial Scenarios Anomaly Detection systems based on Machine and Deep learning are the most promising solutions to detect cyberattacks in the industry. However, these techniques are vulnerable to adversarial attacks that downgrade prediction performance. Several techniques have been proposed to measure the robustness of Anomaly Detection in the literature. However, they do not consider that, although a small perturbation in an anomalous sample belonging to an attack, i.e., Denial of Service, could cause it to be misclassified as normal while retaining its ability to damage, an excessive perturbation might also transform it into a truly normal sample, with no real impact on the industrial system. This paper presents a methodology to calculate the robustness of Anomaly Detection models in industrial scenarios. The methodology comprises four steps and uses a set of additional models called support models to determine if an adversarial sample remains anomalous. We carried out the validation using the Tennessee Eastman process, a simulated testbed of a chemical process. In such a scenario, we applied the methodology to both a Long-Short Term Memory (LSTM) neural network and 1-dimensional Convolutional Neural Network (1D-CNN) focused on detecting anomalies produced by different cyberattacks. The experiments showed that 1D-CNN is significantly more robust than LSTM for our testbed. Specifically, a perturbation of 60% (empirical robustness of 0.6) of the original sample is needed to generate adversarial samples for LSTM, whereas in 1D-CNN the perturbation required increases up to 111% (empirical robustness of 1.11). I. INTRODUCTION The industry is experiencing its fourth revolution, also known as Industry 4.0, which is mainly driven by the adaptation of industrial processes to new technologies and computational paradigms. Among the most relevant changes affecting current industries we highlight the integration of the The associate editor coordinating the review of this manuscript and approving it for publication was Xianzhi Wang . fifth generation of mobile networks (5G), bringing to reality minimum latency and high bandwidth in communications; the big data, optimizing the analysis of large amounts of data; and the Industrial Internet-of-Things (IIoT), connecting large amounts of heterogeneous and resource-constrained devices to the Internet [1]. However, despite the benefits of Industry 4.0, it is also opening the door to new cyberattacks affecting devices and critical industrial processes [2]. Every year, the number and variety of cyberattacks are growing, making traditional security approaches outdated in short time windows. In this context and due to the number of highly specialized and new (zero-day) attacks affecting heterogeneous industries, the research community is evolving towards the use of semi-supervised or unsupervised Machine Learning (ML) and Deep Learning (DL) techniques to detect cyberattacks [3]. In such a scenario, the current Anomaly Detection (AD) systems relying on ML and DL are the most promising and effective solutions to detect unseen attacks [4]. In contrast to traditional approaches, these systems discriminate between the normal and abnormal behavior of the industrial processes without relying on existing databases that store the cyberattacks patterns. 
However, the current AD solutions based on ML/DL are vulnerable to adversarial attacks, making them inappropriate for real systems. Adversarial attacks consist of manipulative actions to ML/DL models intending to cause model misbehavior or acquire protected information. Among the existing adversarial attacks, evasion attacks are some of the most relevant as they are performed during the evaluation phase of the system, once the model is trained. In industrial scenarios affected by malware, the main goal of evasion attacks is to craft samples modeling the malware behavior (anomalous samples) to misclassify them (as normal samples) and allow the malware to affect industrial devices or processes without being detected. Adversarial attacks raise new trust and security challenges affecting ML/DL in general, and AD-based solutions to detect cyberattacks in particular. In this context, data scientists are already making efforts to provide highly precise and trustworthy AI-based solutions in different application scenarios [5]. Recently, IBM has identified a set of pillars needed to achieve trusted AI [6]. One of these pillars is robustness, whose main goal is to measure how resilient ML/DL models are against adversarial attacks. Once the robustness level is calculated, it can be notified to end-users, in conjunction with classical performance metrics, or even be used to improve the model's robustness using adversarial training, where the network is fine-tuned with adversarial samples. The literature has offered different metrics to measure model's robustness. The three most widespread are Empirical Robustness (ER) [7], Local Loss Sensitivity (LLS) [8], and Cross Lipschitz Extreme Value for nEtwork Robustness (CLEVER) [9]. These metrics are highly effective in different application fields such as computer vision. However, they present limitations when used to evaluate the robustness of AD in industrial scenarios. One of the most relevant limitations is the impossibility of distinguishing between an adversarial sample that deceives the anomaly detector and an adversarial sample converted into a normal sample by an excessive alteration. For example, consider a water distribution process where a denial of service (DoS) cyberattack is launched. This cyberattack aims to stop the water supply for a certain geographical area. The water supply is controlled by valves that can take values between 0 (completely closed) and 1 (completely open). Therefore, the DoS cyberattack can modify such features to close the valves and stop the supply. Besides, an attacker who wants to launch a DoS cyberattack that goes unnoticed by the AD system could modify the DoS samples to make them adversarial. However, these features could take the value 1 (completely open) due to excessive disturbance, leaving the DoS cyberattack without effect. In both cases, the adversarial attack is considered successful, but in the second case, it does not have not a negative impact on the industrial device. For this reason, a mechanism is needed to differentiate these two adversarial versions, thus providing a reliable measurement of the model's robustness. An additional drawback is the heterogeneity of data types used in industrial environments. 
Unlike image recognition and other domains such as audio signals, where values are usually floats, there are discrete values, continuous values or even timestamps, usually with internal consistency constraints, which makes it not always possible to calculate a gradient or generate a valid adversarial sample [10], [11], [12]. In order to face the previous limitations affecting ER, LLS, and CLEVER, the current paper presents the following contributions: • A methodology for estimating the robustness of an AD model based on ML and DL techniques in industrial scenarios, using a set of additional ML models (support models) to determine if an adverse sample remains anomalous. This methodology considers four fundamental steps and proposes a robustness metric that is a modification of the ER metric. It is worth mentioning that the proposed methodology does not focus on training a robust AD model, but on measuring the robustness of AD model already trained. • Validation of the proposed methodology using a dataset generated from the Tennessee Eastman Process [13], an industrial scenario that, although simulated, is realistic. Specifically, we show the robustness calculation for Long Short-Term Memory (LSTM) and 1-Dimensional Convolutional Neural Network (1D-CNN) models, which are well suited to deal with time-series data. The model that achieves the highest robustness should be considered to be deployed in a real scenario. Our experiments show that the 1D-CNN model achieves a robustness of 1.1, approximately twice that of the LSTM model (0.6). The remainder of this paper is structured as follows. Section II reviews the state of the art. Section III shows a motivating example explaining the difficulty of generating adversarial samples in industrial scenarios. In Section IV, we detail the four-step methodology proposed to measure the model's robustness when using the AD paradigm with ML and DL techniques in an industrial scenario. The methodology implementation using the Tennessee dataset and its validation are detailed in Section V. Finally, the conclusions and future work are included in Section VI. VOLUME 10, 2022 II. RELATED WORK In this section, we present a brief review focusing on robustness to adversarial attacks in AD. In addition, we introduce different solutions in the context of AD in industrial environments to fully understand the proposed methodology. A. ANOMALY DETECTION IN INDUSTRIAL SETTINGS Cybersecurity in industrial environments is a field of great interest to the research community. In this context, a wide variety of approaches have been proposed. For example, the authors of [14] propose a collaborative trust-based unbiased control mechanism that performs a dynamic assignment of industrial control to avoid malicious nodes attacking industrial devices. However, the most widely used techniques are those in charge of detecting anomalies. These techniques can be categorized into DL techniques specially designed to work with time-series data and classical ML techniques. In the first category, we highlight the models LSTM and 1D-CNN, which are especially designed to deal with timeseries data. For example, the authors of [15] presented a scalable and efficient solution for real-time AD in industrial settings. In particular, the authors proposed a hybrid statistical-ML model that integrated a SARIMA (seasonal autoregressive integrated moving average)-based dynamic threshold model and an LSTM model to identify the abnormal behavior in a joint way with a low false-positive rate. 
Another example of LSTM usage can be found in [16] where the authors proposed a Variational LSTM learning model for AD based on reconstructed feature representation. The authors designed an encoder-decoder architecture associated with the Variational LSTM in order to learn low-dimensional representation from high-dimensional raw data. Then, the transformed data was fed into a lightweight estimation network to identify anomalies. As an example of using 1D-CNN in AD, we highlight [17]. In this study, the authors proposed an AD method based on measuring the statistical deviation of the predicted value from the observed value. Besides, the authors tested different configurations of 1D-CNN. After detecting 32 out of 36 attacks, the authors claimed the effectiveness of 1D-CNN in AD problems. Another example is presented in [18], where the authors introduced the 1D-CNN to diagnose anomalies from 1D time-series data generated by industrial sensors. To reduce the number of parameters, a 1D global average polling (1D-GAP) layer was designed to replace the fully connected layers. Furthermore, the authors replaced the usual final softmax layer with a nonlinear multi-class Support Vector Machine (SVM). The authors of [19] presented a novel approach that combined 1D-CNN and Gated Recurrent Units (GRU) to learn the spatiotemporal correlation between parameters. In the category of classical ML approaches, we highlight the solution proposed in [20], where the authors presented an adaptive approach for defense against cyber-attacks in the context of industrial systems. In particular, the solutions combined several algorithms such as Artificial Neural Networks (ANN), LSTM, Isolation Forest (IF), and One-Class Support Vector Machine (OCSVM). In [21], the authors performed a study to compare different ML and DL models to detect anomalies in industrial settings. In particular, the models compared were Random Forest (RF), SVM, DNN, OCSVM, and IF. The authors conducted experiments with the traffic of Modbus TCP and S7comm protocols, concluding that SVM and RF were the models with a higher F1-score in both scenarios. Despite the fact that classical ML models cannot deal with time-series data out of the box, several modifications can be made to use such models in time-series data. The most popular approach is to preprocess the dataset to create a lagged dataset as shown in [22]. B. ROBUSTNESS TO ADVERSARIAL ATTACKS IN AD The contributions in the context of adversarial attacks to AD models in industrial systems are relatively recent. In [12], the authors briefly describe the techniques used in the generation of adversarial samples, illustrating the main differences between the cyber-physical domain and the traditional image domain (constraints in the sample perturbation, system knowledge of the attacker, the timing of the attack, and the existence of a human detector). They demonstrate this by performing an attack on the SWaT testbed. Similarly, the authors of [23] describe how to slightly modify sensor values in the Tennessee-Eastman Process Control System so that they remain unnoticed by an anomaly detector. In addition, the authors of [24] present a new adversarial attack especially designed for industrial scenarios. They compare adversarial samples generated with the proposed technique and those generated with existing methods such as Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM). Since an AD model can suffer an adversarial attack, it is necessary to improve its adversarial robustness. 
In this context, a variety of defense mechanisms have been proposed to make the model more robust [25]. However, to the best of our knowledge, there are no proposed methodologies for determining the robustness of an AD model in industrial environments. The most similar approach -although applied to images-is the one presented in [25]. Additionally, when new adversarial defense techniques are presented, the authors tend not to measure the robustness achieved with a metric, but on the contrary, they use indirect methods like plotting the loss of accuracy with each technique. By way of illustration, in [26] the determination of the robustness of a model is based on the plotting of the drop in accuracy experienced in the presence of each adversarial attack. Nevertheless, we can find several robustness metrics [27] that can be used as a starting point to develop a suitable metric for industrial environments. Learning from the limitations of existing approaches, our proposal consists in a methodology to estimate the robustness of a model taking into account the exposed constraints of industrial environments when determining the adversarialness of a sample. III. MOTIVATING EXAMPLE In scenarios such as computer vision, an adversarial attack is successful if it simply causes the modified sample to be misclassified. In industrial scenarios, it is often further required that the adversarial sample be classified as harmless while preserving its anomalous nature. Misclassification can easily be achieved by altering the original anomalous sample until a clearly normal sample is obtained. However, this is not the goal of an adversarial attack. The modification of the adversarial sample has to be such that the model considers it as normal, but without belonging to the distribution of normal samples to have a negative impact on the target industrial process or system. This is achieved by techniques that take advantage of the peculiarities of the class separation boundary established by the trained model. However, there is an additional difficulty, which is how to determine when an adversarial sample has been altered so much that it has become a true and innocuous normal sample. Fig. 1 illustrates a simplified example in an industrial setting where adversarial attacks are applied on a binary classification model. Fig. 1 (left) shows the probability density function (p.d.f) of two classes, from which a set of samples has been extracted. We assume that class 2 is the normal class and take a sample from class 1 that is considered hazardous to the industrial system. After altering the sample, one of the three situations shown could happen. Sample a is clearly adversarial, because it belongs to the anomalous distribution, but the model classifies it as normal. Sample b, on the other hand, has been classified as normal by the model but, actually, it does belong to neither the anomalous nor normal classes. Finally, sample c has become a harmless sample, because it clearly belongs to the normal class. Fig. 1 (right) illustrates the real situation, where the p.d.f. of the classes are unknown and the boundary of the trained model serves as an estimate. Unfeasible areas, whose samples, would be considered corrupt and the industrial system would discard them, are also depicted (d sample). Additionally, the boundaries of two different models trained with the same dataset (support models) have also been plotted. 
The boundary of each model has a different shape, and, therefore, we can distinguish three zones: the region where all the models agree on classifying the samples as class 1, i.e. (a); the region where they agree on classifying the samples as class 2, i.e. (c); and finally the remaining region, where there are discrepancies in the classification, i.e. (b). Our proposal is based on how to use these support models to estimate whether an adversarial sample has reached the p.d.f of the normal class ceasing to be adversarial, e.g. (c). However, some samples retain their adversarial nature after alteration, e.g. (b) and (a), and we call them truly adversarial samples. IV. METHODOLOGY This section describes the proposed methodology to evaluate the robustness of anomaly detectors against adversarial attacks in industrial scenarios. A graphical representation of the methodology can be seen in Fig. 2. It can be divided into the following four steps: 1) Models Preparation: This step guides through the process of selecting and training models. In particular, it considers two types of models: the AD model employed to detect anomalies and whose robustness needs to be evaluated, and the support models that will help to discriminate between non-adversarial and truly adversarial samples. The support models are the core of the methodology and its main novelty. All these models need to be selected considering their suitability to be used with time-series data since most industrial systems produce this type of data. Once the models are selected, they need to be trained following a methodology focused on AD. 2) Adversarial Samples Generation: This step guides through the generation of adversarial samples. In this context, different approaches to performing an adversarial attack based on the AD model selected in the previous step are discussed. Besides, the methodology makes some recommendations about the parameters used together with the adversarial attack selected. 3) Adversarial Dataset Generation: This step draws the guidelines to generate a truly adversarial dataset that will be used later to evaluate robustness. Firstly, this step uses the support models trained in step 1 to discriminate between truly adversarial samples and non-adversarial ones. Finally, the adversarial dataset is generated considering only the truly adversarial samples. 4) Robustness Considerations: This step recommends using a specific metric to evaluate the model's robustness and discuss the considerations that must be taken into account. Specifically, the proposed metric is a slight modification of the original ER metric because it does not depend on the model. In other words, it can be applied to any model whether it is based on a gradient or not. Finally, once the robustness is evaluated, it is necessary to consider if the AD model achieves the desirable robustness. A. MODELS PREPARATION We define the next three tasks to prepare models. 1) AD MODEL SELECTION The first task is to select the proper model that will be implemented in the AD system. When selecting the AD model, it is essential to pay attention to the properties of the dataset used. Different model architectures have different implicit biases. For example, for tabular datasets with temporal dependencies, as in industrial systems, LSTM models might be the best option since they process features over the temporal dimension. In contrast, for datasets with tabular data but without temporal dependencies, Dense Neural Networks (DNN) models should be considered. 
Since, in industrial environments, most of the data have temporal dependence, the methodology recommends the use of models that can deal with time series out of the box, such as LSTM or 1D-CNN models. 2) SUPPORT MODELS SELECTION The second task is to select the proper support models used to identify the truly adversarial samples and avoid the problem explained in Section III. The support models need to be trained using the same dataset employed to train the AD model, and they will be in charge of evaluating each adversarial sample. In further steps, when a particular adversarial sample is evaluated as normal by a majority of the support models, the sample will be considered as belonging to the p.d.f of the normal class and, hence, non-adversarial. However, it is important to highlight that the support models need to be selected following a specific criterion. In particular, we defined three criteria to select such models. • Support models need to be as much deterministic as possible. Otherwise, each time they are trained they may result in a different boundary and, therefore, the robustness of a particular model may vary. With this restriction, those DL models with a large number of hyper-parameters should be discarded. Nevertheless, there are ML models with interesting properties which make them suitable as support models. • Supporting models need to achieve sufficient generalization ability to ensure that the p.d.f. of the normal class lies inside the intersection of their boundaries. This could cause support models to underperform the chosen AD models. • Support models do not need to evaluate samples as quickly as the AD model, and therefore, we can select models that do not achieve a high degree of parallelism, such as classical ML models. In particular, the methodology recommends using ensemble models like RF or gradient boosting models like XGBoost. These models have lesser hyper-parameters than DL models and, since their results are based on the average decision of many estimators, they achieve a high degree of determinism. Besides, both models can deal with time series data [28], [29] which is predominant in industrial scenarios. 3) MODELS TRAINING In this task, all the previously selected models are trained. A difference in the training process of both AD and support models is that AD in industrial scenarios is typically based on a multi-class classifier in order to discriminate between the different types of anomalies. In contrast, support models should be binary classifiers, since we are only interested in detecting if the sample is normal or abnormal. Considering this particularity, both models should be trained using the same dataset, but in the case of support models, the classes should be reduced to normal and abnormal. To train all the models, we recommend the methodology presented in [30]. Besides, to reduce the complexity of the AD model, we propose training such models using as few parameters as possible. We also recommend including regularization techniques such as dropout. Regularization smooths the decision boundary, improving the ability of the support models to distinguish between non-adversarial and adversarial samples. This recommendation is supported by the fact that industrial systems carry out repetitive actions, and therefore, the behavior of such systems should be able to be modeled with less complex models. 
In case the introduction of regularization techniques is not possible, either because we must use a pre-trained AD model or because they reduce the performance of the AD model, it is advisable to increase the number of models in the ensemble to capture the complexity of the AD model. Finally, one more circumstance that can arise in model training is that the support models do not achieve sufficient performance to distinguish between non-adversarial and adversarial samples. In this case, the methodology recommends carrying out an exhaustive grid-search strategy to find the optimal hyper-parameters for RF and XGBoost, paying special attention to the number of estimators, the maximum depth of the trees and the maximum number of leaf nodes. B. ADVERSARIAL SAMPLES GENERATION In this step, we identify three tasks to generate adversarial samples. 1) ADVERSARIAL ATTACK SELECTION The first task consists in selecting the proper adversarial attack to generate adversarial samples. In this task, the AD model previously chosen needs to be taken into account because not all attacks apply to all models. On the one hand, we need to consider what information we have concerning the model and the dataset. If we have full access to the model and the dataset, the methodology suggests using an adversarial attack based on the white-box approach. Similarly, if we do not have access to the model, but we have the dataset, the methodology suggests training a substitute model and using a white-box approach. Finally, if we do not have access to either dataset or model, the methodology recommends a black-box approach. In addition, the data types present in the dataset and the AD model need to be considered. In DL and other differentiable models, we can use attacks exploiting the model gradient. The drawback of these attacks is that they are specifically designed to work on continuous data. However, a slight modification of the adversarial attack can be made to become compatible with categorical data [24]. On the contrary, if the target model is based on certain ML techniques, such as RF, whose gradient cannot be computed, an adversarial attack based on the black-box approach needs to be adopted. Another consideration to keep in mind is whether to use a targeted or untargeted adversarial attack. A targeted attack attempts to deceive the model into predicting a particular class that is specified beforehand. In contrast, untargeted attacks perturb the sample to maximize the model loss, giving no special preference towards a particular class as long as it is not the original label. If an untargeted attack is used, the samples could change their class between different abnormal classes, but not necessarily to the normal class. A sample that changes between the different abnormal classes is not a problem as long as the AD system detects it as abnormal and it cannot reach the industrial target device. However, an abnormal sample classified as a normal sample could affect industrial devices. All in all, this methodology recommends using targeted adversarial attacks whenever possible. The methodology presented in this paper is designed to evaluate the robustness of AD models against a single and several adversarial attacks. In the second case, it would be necessary to select all those attacks we are interested in. 
In addition, in the case of being interested in estimating the robustness against a large number of adversarial attacks, the methodology recommends including adversarial samples generated by different adversarial attacks based on the gradient since they are the most common. 2) PARAMETERS SELECTION This task makes some recommendations when selecting the adversarial attack parameters. These parameters fundamentally influence the model's robustness and the time required to generate the adversarial samples. Let us suppose an attack based on the gradients, which has two parameters frequently shared with other attacks of the same type. These two parameters are the maximum magnitude of the final perturbation, ε, and the number of iterations. If an extremely large ε is chosen, the reported robustness will be wrongly high. This is due to the high distortion introduced in each iteration, greatly increasing the difference between adversarial and original samples. However, this leads to a misleading measure, since introducing such a large distortion may cause the samples to cease to have a physical meaning and, therefore, have no effect on the physical world. Likewise, if a small ε is chosen, the adversarial perturbations will be less prone to leave the anomalous class, and the model will seem overly robust. Concerning the number of iterations, a trade-off is observed. The greater the number of iterations specified, which allows for lower values of ε, the greater the quality of the adversarial samples. However, it comes at the cost of taking a long time to generate them. 3) ADVERSARIAL ATTACK DEPLOYMENT Once the adversarial attack and its parameters are selected, the third task involves its deployment. During this task, the attack will convert the original samples into adversarial ones. To do so, the methodology proposes employing the test dataset used to validate the performance of the AD model. Although it is also possible to use the training or validation dataset, it is more realistic to use the test dataset. Note that in a real adversarial attack, the attacker may not have access to the training dataset, and needs to use samples not previously seen by the AD model. C. ADVERSARIAL DATASET GENERATION The following tasks are suggested to generate a dataset with truly adversarial samples. 1) ADVERSARIAL SAMPLES EVALUATION The first task is to evaluate the adversarial samples previously generated to decide which are actual adversarial samples. Let X be a set of anomalous samples for the model M , and X adv the set of adversarial samples obtained from X . Then, the truly adversarial samples are determined using the support models M 1 , . . . , M n . A sample x adv j ∈ X adv is considered as non-adversarial if (M i (x adv j ) == Normal) for the majority of i values. Otherwise, it will be considered as a truly adversarial sample. There are two reasons why support models can VOLUME 10, 2022 evaluate an adversarial sample as normal. The first is the transferability of the adversarial perturbation between models. This methodology minimizes this possibility by recommending the use of support models whose architecture varies substantially from the model to be evaluated. The second reason is that the variations introduced by the adversarial attack can convert an abnormal sample into a truly normal sample as illustrated in Fig. 1. 2) ADVERSARIAL DATASET GENERATION D. ROBUSTNESS CONSIDERATIONS In this step, we establish the following tasks. 
1) MODEL's ROBUSTNESS COMPUTATION This task consists in quantifying the model's robustness. For this purpose, the original sample dataset and the truly adversarial dataset are required. With these two datasets, the methodology uses the ER metric for the computation of robustness because it is independent of the chosen model. Many of the robustness metrics in the literature are targeted at specific models, e.g., differentiable models. However, the methodology allows the use of any other suitable metric. The formal definition of ER metric is presented in Equation 1. where o(x true i ) ∈ X is the original anomalous sample from which x true i ∈ X true was generated. This gives a measure of how robust the evaluated model is. The higher the ER, the larger the disturbance necessary to convert original samples into truly adversarial samples, and therefore, the greater the robustness of the model. 2) DETECTION PERFORMANCE AND ROBUSTNESS TRADE-OFF In this task, the methodology suggests evaluating both detection performance and robustness to decide which model needs to be deployed in the industrial scenario. Both aspects are crucial in industrial scenarios. On the one hand, if the detection performance is low, the possibility that a non-adversarial attack impacts the physical world is high. On the other hand, if the model's robustness is significantly low, it is easy to deploy adversarial attacks to generate adversarial samples. Therefore, the possibility of adversarial attacks impacting the physical world is high. In general, the methodology recommends selecting a model with the highest robustness as long as the detection performance does not differ significantly from the other candidate AD models. V. METHODOLOGY VALIDATION In this section, the methodology proposed in Section IV is applied to validate it in an industrial scenario. To validate the methodology, we used the dataset generated by Rieth et al. [31] using the Tennessee Eastman (TE) process, which is a simulated testbed of a chemical process where the authors introduced 20 anomalies. The authors published four files: training and test files with anomalies and training and test files free of anomalies. Each file contains 500 simulations for each anomaly type. The training files contain 500 samples in each of these simulations, while the test files contain 960 samples. These files contain 52 features, including 41 measurement variables and 11 manipulated variables sampled every 3 minutes. This amounts to 25 hours for the training datasets and 48 hours for the test datasets. The experiments performed in this work were carried out in a workstation with 94 GB of RAM, an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz, and an NVIDIA GEFORCE GTX 1080. A. TESTBED The TE process is shown in Fig. 3, where five main modules can be observed: the reactor, the condenser, the liquid-vapor separator, the compressor, and the stripper. The process produces two products through the reaction of eight components: A, B, C, D, E, F, G, and H. These reactions are defined by the following equations: During the process, reactants A, D, E, and C are injected into the reactor, where parts of the reactions described above occur. This results in products in the form of vapor and unreacted components that pass to the condenser, where they change from gas to liquid through a cooler. These products and unreacted components pass through the liquid-gas separator, where the unreacted components are recycled and reinjected at the inlet through the compressor. 
Conversely, the products condensed go to a product stripping module where the remaining reactants are removed. Finally, products G and H are generated from the output of the stripper. Furthermore, all the reactions that take place are irreversible and exothermic, and they are approximately first-order with respect to the reactant concentrations. The reaction rates are a function of the temperature over an Arrhenius expression. Among all components, G is the one with the highest sensitivity to the temperature since it has more activation energy. The control objectives for this process are the following: • Maintain process variables at desired values. • Keep process operating conditions within equipment constraints. • Minimize variability of product rate and product quality during disturbances. • Minimize movement of valves which affect other processes. • Recover quickly and smoothly from disturbances, production rate, or product mix changes. B. MODELS PREPARATION This section details the implementation of the first step of the methodology where the AD and support models are trained. 1) AD MODEL SELECTION In this task, and considering the properties of the TE process, we chose two models that can deal with continuous data and manage time series. On the one hand, we selected an LSTM model that accepts time-series sequences as input and predicts the target class as output. On the other hand, we selected a 1D-CNN model that also accepts time-series sequences as input but whose internal architecture substantially differs from the LSTM model. 2) SUPPORT MODELS SELECTION Following the second task, we chose the support models used to determine truly adversarial samples. As suggested by the methodology, we selected two ensemble models that typically achieve a high degree of determinism. In particular, we selected RF and XGBoost models. 3) MODELS TRAINING Following the third task, we carried out the training of both AD and support models. With this aim, we created our training, validation, and test datasets from the training and test files provided by [31], considering the 52 features included in the dataset. To generate our training dataset, we selected all the samples of the first 400 simulations from the original training files. Our validation dataset was created by taking all the samples from the training files corresponding to the last 100 simulations. In both datasets, for each simulation, we ignored the initial 20 samples of the files containing anomalies because these samples were mislabeled as anomalous. Finally, our test dataset was generated by selecting all the simulations included in the original test files but considering only 500 samples of each anomaly, starting from sample 160 because the previous 159 samples were mislabeled as anomalous. Since our proposal does not focus on training the AD model, but measuring the robustness of such models, we did not carry out any feature engineering step except grouping samples, and therefore, the data used were identical to the ones provided by the authors of the dataset in [31]. As the last step, we created sequences with 5 timesteps, resulting in samples of shape (5,52) to train both AD and support models. Subsequently, the samples referring to anomalies 3, 9, and 15 were eliminated since they did not suffer enough variation to be considered as anomalies, as pointed out by [32]. 
In conclusion, taking into account the remaining 17 anomaly types, the size of the training, validation and testing datasets were 3 264 000, 816 000, and 4 250 000 samples, respectively. Additionally, we scaled the three datasets by using the mean and standard deviation of the training dataset. As an additional step to train, validate, and test the support models, we generated a two-class version of the dataset with only the normal and anomalous labels. The number of anomalous samples becomes much higher than the number of normal samples. Therefore, we obtained a balanced version of the dataset by taking all the normal samples and the same number of samples randomly chosen among all the anomalies. Before training the 1D-CNN and LSTM models, we performed a hyper-parameters optimization with training and validation datasets, resulting in the architectures shown in Table 1. A similar procedure was performed with the support models using the balanced binary dataset, and the results are listed in Table 2. To objectively compare the performance of these models, we used accuracy, precision, recall, and F1-score metrics on the test. Since the last three metrics are designed to be used with binary classifiers, we used for multiclass the weighted version of these metrics provided by the library Scikit-Learn. As can be observed in Table 3, the model that achieved the best F1-score was 1D-CNN (0.976) followed by LSTM (0.929). In contrast, the supported models achieved the worst F1-scores (0.781 for XGBoost and 0.844 for RF). C. ADVERSARIAL SAMPLES GENERATION In this step, we selected an adversarial attack to be deployed and to generate adversarial samples from LSTM and 1D-CNN models. 1) ADVERSARIAL ATTACK SELECTION Following the first task, we chose an adversarial attack according to the characteristics of the model and the testbed. To be specific, we selected an attack that handles continuous data and targets DL models. We also assumed that the attacker had access to both the model and the dataset, and, therefore, we applied a white-box approach. The adversarial attack selected was a slight modification of BIM, targeted and unclipped. The method is the same as proposed in [24], except for the mask for categorical features. We did not use such a mask because we wanted to modify all the features and the dataset does not have categorical features. The formal definitions of this method can be seen in Equation 2, where X is the original array of samples, X 0 is the first iteration where the original samples are considered, and X n+1 are the successive iterations. In this step, the gradient, ∇, of the cost function, J , of the previous samples, X n with respect to the target label, y target , is computed and added to the samples, modulated by the perturbation parameter, ε. X 0 = X ; X n+1 = X n + ε · sign(∇J (X n , y target )) (2) This method generates a batch of adversarial samples from a batch of original samples. Since it is a gradient-based attack, the fundamental parameters are the number of iterations and epsilon, ε, which indicates the magnitude of the disturbance introduced in each iteration. In the original BIM, all samples are altered in every iteration. This means that when a sample in the batch is converted to adversarial, i.e., its class changes from abnormal to normal according to the model, it will continue being modified until the algorithm reaches the last iteration. The main consequence is that a significant number of samples will undergo a great and unnecessary change. 
Conversely, our version considers, in each iteration, only those samples that are not adversarial. In other words, when a sample changes from abnormal to normal class, it is excluded from the following iterations. This means that only the minimum disturbance will be applied to change the class of the sample. 2) PARAMETERS SELECTION Following the second task, we selected the parameters used with the adversarial attack. The main goal of this task in our specific attack is to select the right parameter to craft adversarial samples as similar as possible to the original ones. To this end, we chose an ε of 0.005 since we considered this value small enough so that a large disturbance is not introduced at each iteration. Concerning the number of iterations, we selected 100, which may seem to be an excessive value. However, our version of BIM stops modifying the samples that have become adversarial regardless of the number of iterations. 3) ADVERSARIAL ATTACK DEPLOYMENT Following the third task, we generated the adversarial samples. To accomplish this goal, we employed all the anomalies of the test dataset. Then, we applied our version of BIM using the gradients of the LSTM model and the original abnormal samples in the test dataset (4 000 000), creating a new dataset of adversarial samples. This dataset was used to evaluate the robustness of the LSTM model. The same process was performed with the 1D-CNN model. After executing the adversarial attacks we obtained 3 007 479 and 2 652 192 adversarial samples for 1D-CNN and LSTM (see Table 4), respectively. As can be seen in the row labeled non-adversarial samples, the AD models (LSTM and 1D-CNN) generated samples that were subsequently classified as anomalous samples by the support models (RF and XGBoost), considering them as non-adversarial samples. The number of samples considered to compute robustness is presented in the row labeled truly adversarial samples, i.e., 2 545 456 and 2 382 082 for 1D-CNN and LSTM, respectively. D. ADVERSARIAL DATASET GENERATION In the third step, we used the support models to decide which samples were truly adversarial and create the adversarial dataset employed to measure the robustness of the models. 1) ADVERSARIAL SAMPLES EVALUATION Since we were evaluating the robustness of LSTM and 1D-CNN models, we carried out this first task twice. When both support models classified an adversarial sample as normal, it was removed from the dataset in the next task. Specifically, Table 4 shows the number of truly adversarial samples that were preserved, the number of non-adversarial samples that were removed, and the total adversarial samples generated after executing the adversarial attack. As can be seen, the adversarial attack managed to generate more truly adversarial samples using 1D-CNN than LSTM. Similarly, the number of non-adversarial samples was also greater for 1D-CNN than for LSTM. In particular, considering the 1D-CNN model, around 15% of samples were nonadversarial, whereas considering the LSTM model, this number decreases up to 10 %. The final output of this task is two sets. One set contains those adversarial samples that are not truly adversarial but are generated by the LSTM model. Similarly, the second set contains those adversarial samples not truly adversarial by the 1D-CNN model. 2) ADVERSARIAL DATASET GENERATION Following the second task, we generated the final adversarial dataset for each model, which was used to evaluate their respective robustness. 
To be specific, the final dataset is equal to the original dataset but removing the non-adversarial samples. Therefore, this dataset only contained truly adversarial samples, resulting in a dataset with 2 545 456 and 2 382 082 samples for 1D-CNN and LSTM robustness evaluation, respectively. E. ROBUSTNESS CONSIDERATIONS In this step, we followed the two tasks proposed in the methodology. In particular, we computed the robustness using Equation 1 and discuss the robustness and detection performance trade-off. 1) MODEL's ROBUSTNESS COMPUTATION Following the first task, we computed the ER of the LSTM and the 1D-CNN models previously trained. Fig. 4 shows the distribution of the ER (horizontal axis) for each of the considered models after 100 iterations. To clearly show the ER distribution, extremely large values were removed. As the figure also shows, the 1D-CNN model is more robust. To be specific, if we compute the median of ER (the white point in Fig. 4), we see that the 1D-CNN model achieved a robustness of 1.110, while the LSTM achieved a robustness of 0.601. This means that for generating adversarial samples for the LSTM model, it is necessary to modify the sample introducing a perturbation greater than 60.1% of the original sample for more than 50% of samples. In contrast, to generate adversarial samples for 1D-CNN, the perturbation needed is around 111% of the original sample. The model's robustness is related to the perturbation needed to convert original samples into adversarial ones. The larger the perturbation, the more robust the model. In this specific case, it seems that a more complex model (1D-CNN) is more robust than a simpler one (LSTM). In particular, the 1D-CNN model has 12 597 trainable parameters, while the LSTM model has 2 453 trainable parameters. Besides, the 1D-CNN model used the dropout regularization technique during the training phase, while the LSTM did not apply that technique. This implies that the boundary decision of the 1D-CNN model is smoother and, therefore, simpler than the LSTM model. 2) DETECTION PERFORMANCE AND ROBUSTNESS TRADE-OFF Following the second task, we discuss next the trade-off between detection performance and robustness. Table 5 shows a summary of the detection performance and both the ER using our methodology and without using it. Besides, although different metrics are not comparable, we decided to include the CLEVER score since it can tell us if one model is more robust than another. The CLEVER score was computed using the default parameters specified in the ART library. In addition, we set the maximum ball distortion to 10 and used l 2 as the norm. Finally, we selected 5 000 uniformly random samples from the original dataset and computed the median of their CLEVER scores. VOLUME 10, 2022 In this specific case, the 1D-CNN model achieved the best results in both detection performance and robustness. Therefore, this model should be used in the core of an AD system. Furthermore, the robustness varies substantially depending on whether our methodology is used or not. As previously mentioned, this is because calculating robustness without applying our methodology will include non-adversarial samples and may lead to wrong results. For example, in this particular case, both models are apparently twice as robust, and actually they are not. In general, the selection of a model depends on the particular scenario. In our case, due to the malicious intention that attackers could have, it is better to choose the model with the highest robustness. 
Otherwise, the model with the highest evaluation performance should be selected. As can be seen in Table 5, the median of the CLEVER scores also shows that the 1D-CNN model is more robust than the LSTM model. In this case, the value tells us the lower bound l 2 minimum distortion needed to convert the samples into normal ones. In particular, as shown by our approach, the distance required to craft an adversarial sample using the 1D-CNN (0.046) model is twice that using the LSTM (0.023). F. DISCUSSION Our work establishes a clear four-step methodology for computing the robustness of ML and DL models specially designed for AD in industrial scenarios in relation to adversarial attacks. Although we presented a general methodology, we applied it to a specific scenario. In particular, we validated it using the TE process [31], a simulated testbed of a chemical process widely used in works related to AD. The results of the experiments demonstrated that it is not only necessary to take into account the detection performance but also the robustness of the model against adversarial attacks. Thus, our work fills in a gap in the literature regarding methodologies to evaluate the robustness of ML and DL models. As we observed in Section I, there are several metrics to compute the robustness of a model. However, these metrics have important limitations. On the one hand, most metrics are specific to differentiable models, e.g., CLEVER [9] and LLS [8]. On the other hand, other proposals focus on computing the robustness in terms of the minimal perturbation needed to change the sample class [7]. However, in AD, the robustness needs to be computed considering the change from one of the abnormal classes to the normal class. One relevant aspect of the methodology that we propose is that only the truly adversarial samples are considered when computing the robustness. When an adversarial attack is performed, some samples can change their actual class to the normal class. As we discussed in Section III, in industrial scenarios, abnormal samples that change to normal classes are harmless. Furthermore, in contrast to other fields such as computer vision, identifying if an adversarial sample presents a potential threat to the industrial system requires expert knowledge. Therefore, when computing the robustness, we need to discard these harmless samples and only consider the adversarial samples that are misclassified by the AD system but continue being anomalous. In order to filter these samples, which we call non-adversarial samples, we propose using support models. Specifically, to discard non-adversarial samples, we propose a voting process carried out by these support models. Another relevant aspect of our methodology is that it can be applied to all ML and DL models irrespective of whether the model is not differentiable. This is because the metric that we propose computing the robustness is ER, which considers the original samples and the adversarial samples generated. However, unlike the original ER metric proposed in [7], our methodology allows computing the metrics using targeted adversarial attacks. Finally, the most critical limitation of our methodology is the selection of the support models. On the one hand, these models allow discriminating between truly adversarial samples and non-adversarial samples. However, these models also introduce a degree of uncertainty. In fact, the selection of different support models can lead different authors to obtain different robustness results. 
Therefore, as proposed in the methodology, these models need to be selected following specific criteria. For example, those models with a relevant number of hyper-parameters, such as DL models, need to be avoided. In contrast, an ensemble based on a voting process between different models achieves a high degree of certainty. Therefore, they are a convenient choice to be selected as support models. VI. CONCLUSION AND FUTURE WORK In this paper, we proposed a new methodology to measure the robustness of AD models to adversarial attacks in industrial scenarios. Its novelty is the consideration of the possibility that, after applying adversarial attacks, some adversarial samples become truly normal and do not need to be taken into account in the robustness computation. The methodology comprises four steps: models preparation, adversarial samples generation, adversarial dataset generation, and robustness consideration. To be precise, the methodology uses a set of models called support models to discriminate between truly adversarial and non-adversarial samples, and robustness is computed considering only the truly adversarial samples. Besides, we applied this methodology to the TE process, which is a realistic industrial scenario. In this scenario, we evaluated the robustness of two AD models: 1D-CNN and LSTM. The experiments showed that, in this specific scenario, 1D-CNN model achieved higher robustness (1.110) than LSTM (0.601). This means that to generate adversarial samples, the perturbation required in the LSTM is equals 60.1% of the original samples, while the perturbation needed in the 1D-CNN is about double, 110%. As future work, we plan to continue evaluating the robustness of AD systems in other industrial scenarios using different industrial datasets. In addition, we plan to study the properties of different models to be used as support models. Besides, we also plan to study the relationship between robustness, adversarial samples, and interpretability method. One application of this study can be the improvement of the robustness of the AD model by detecting adversarial samples using interpretability techniques. LORENZO FERNÁNDEZ MAIMÓ received the M.Sc. and Ph.D. degrees in computer science from the University of Murcia. He is currently an Associate Professor with the Department of Computer Engineering, University of Murcia. His research interests include machine learning and deep learning applied to cybersecurity and computer vision. His research interests include understanding the factors that influence neural networks robustness to adversarial perturbations, with a particular interest in the effect of the data used for training the networks. ALBERTO HUERTAS CELDRÁN (Member, IEEE) received the M.Sc. and Ph.D. degrees in computer science from the University of Murcia, Spain. He is currently a Senior Researcher at the Communication Systems Group CSG, Department of Informatics (IfI), University of Zurich (UZH). His research interests include cybersecurity, machine and deep learning, continuous authentication, and computer networks. GÉRÔME BOVET received the Ph.D. degree in networks and computer systems from Telecom ParisTech, France. He is currently the Head of Data Science for the Swiss Department of Defense. His work focuses on machine and deep learning, with an emphasis on anomaly detection, adversarial, and collaborative learning in IoT sensors.
2022-12-03T14:41:41.142Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "620c0af27381788c145db5ba30da7419805c3d93", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09964189.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "620c0af27381788c145db5ba30da7419805c3d93", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
16776279
pes2o/s2orc
v3-fos-license
Long-term culture-induced phenotypic difference and efficient cryopreservation of small intestinal organoids by treatment timing of Rho kinase inhibitor AIM To investigate a suitable long-term culture system and optimal cryopreservation of intestinal organoid to improve organoid-based therapy by acquiring large numbers of cells. METHODS Crypts were isolated from jejunum of C57BL/6 mouse. Two hundred crypts were cultured in organoid medium with either epidermal growth factor/Noggin/R-spondin1 (ENR) or ENR/CHIR99021/VPA (ENR-CV). For subculture, organoids cultured on day 7 were passaged using enzyme-free cell dissociation buffer (STEMCELL Technologies). The passage was performed once per week until indicated passage. For cryopreservation, undissociated and dissociated organoids were resuspended in freezing medium with or without Rho kinase inhibitor subjected to different treatment times. The characteristics of intestinal organoids upon extended passage and freeze-thaw were analyzed using EdU staining, methyl thiazolyl tetrazolium assay, qPCR and time-lapse live cell imaging. RESULTS We established a three-dimensional culture system for murine small intestinal organoids using ENR and ENR-CV media. Both conditions yielded organoids with a crypt-villus architecture exhibiting Lgr5+ cells and differentiated intestinal epithelial cells as shown by morphological and biochemical analysis. However, during extended passage (more than 3 mo), a comparative analysis revealed that continuous passaging under ENR-CV conditions, but not ENR conditions induced phenotypic changes as observed by morphological transition, reduced numbers of Lgr5+ cells and inconsistent expression of markers for differentiated intestinal epithelial cell types. We also found that recovery of long-term cryopreserved organoids was significantly affected by the organoid state, i.e., whether dissociation was applied, and the timing of treatment with the Rho-kinase inhibitor Y-27632. Furthermore, the retention of typical morphological characteristics of intestinal organoids such as the crypt-villus structure from freeze-thawed cells was observed by live cell imaging. CONCLUSION The maintenance of the characteristics of intestinal organoids upon extended passage is mediated by ENR condition, but not ENR-CV condition. Identified long-term cryopreservation may contribute to the establishment of standardized cryopreservation protocols for intestinal organoids for use in clinical applications. subculture, organoids cultured on day 7 were passaged using enzyme-free cell dissociation buffer (STEMCELL Technologies). The passage was performed once per week until indicated passage. For cryopreservation, undissociated and dissociated organoids were resuspended in freezing medium with or without Rho kinase inhibitor subjected to different treatment times. The characteristics of intestinal organoids upon extended passage and freeze-thaw were analyzed using EdU staining, methyl thiazolyl tetrazolium assay, qPCR and time-lapse live cell imaging. RESULTS We established a three-dimensional culture system for murine small intestinal organoids using ENR and ENR-CV media. Both conditions yielded organoids with a crypt-villus architecture exhibiting Lgr5 + cells and differentiated intestinal epithelial cells as shown by morphological and biochemical analysis. 
However, during extended passage (more than 3 mo), a comparative analysis revealed that continuous passaging under ENR-CV conditions, but not ENR conditions induced phenotypic changes as observed by morphological transition, reduced numbers of Lgr5 + cells and inconsistent expression of markers for differentiated intestinal epithelial cell types. We also found that recovery of long-term cryopreserved organoids was significantly affected by the organoid state, i.e. , whether dissociation was applied, and the timing of treatment with the Rho-kinase inhibitor Y-27632. Furthermore, the retention of typical morphological characteristics of intestinal organoids such as the crypt-villus structure from freeze-thawed cells was observed by live cell imaging. CONCLUSION The maintenance of the characteristics of intestinal organoids upon extended passage is mediated by ENR condition, but not ENR-CV condition. Identified long-term cryopreservation may contribute to the establishment of standardized cryopreservation protocols for intestinal organoids for use in clinical applications. INTRODUCTION The gastrointestinal (GI) tract is lined by a monolayer of epithelial cells that separates the intestinal lumen and underlying tissues. The epithelial cells of the small intestine are organized into villi and crypt structures. Intestinal stem cells (ISCs), which express leucinerich repeat containing G protein-coupled receptor 5 (Lgr5), and their progenitors are located in crypts. ISCs generate daughter cells called transit-amplifying (TA) cells, which either return to stemness or differentiate into a secretory epithelial cells lineage, such as Paneth cells, goblet cells, enteroendocrine cells, or enterocytes [1,2] . The GI tract is highly vulnerable to the external environment, such as radiation. Exposure to high levels of ionizing radiation induces the clonogenic loss of crypt cells and villus depopulation and leads to malabsorption of nutrients and impaired physical barrier function. The resulting breach in the GI barrier, accompanied by immune suppression, results in a high risk of life-threatening infection [3][4][5] . Many studies have been focused on understanding the mechanisms of radiation-induced gastrointestinal syndrome (RIGS). However, in vitro analysis of RIGS has been hampered by the lack of a suitable culture system. Long-term maintenance of crypts in traditional two-dimensional (2D) cultures of primary intestinal crypts is difficult due to the poor survival of crypts in vitro [6,7] . Based on three-dimensional (3D) culture systems, long-term cultures in which crypts are able to differentiate and recapitulate normal cryptvillus architecture have been established using crypts isolated from the mouse and human intestine using two different media [8,9] . Initial defined factors present in epidermal growth factor (EGF)/Noggin/R-spondin1 (ENR) medium, which are associated with growth requirements of intestinal epithelium, include EGF to enhance intestinal proliferation, bone morphogenic protein antagonists to induce expansion of crypt numbers, and Wnt agonists to increase crypt proliferation [8,10] . Additionally, small molecule screening showed that ENR/CHIR99021/valproic acid (VPA) (ENR-CV) medium, which was associated with enrichment of intestinal stem cells (Lgr5 + ) and growth of the intestinal epithelium, included a combination of ENR components and small molecules, such as CHIR99021 (a glycogen synthase kinase 3 inhibitor) and VPA [a histone deacetylase (HDAC) inhibitor] [9] . 
Although both media can support the formation of organoid containing crypt-villus structures that recapitulate the native intestinal epithelium, there is little comparative study of the characteristics of the resulting cells, particularly after long-term continual passage. In vitro expanded organoids have recently been applied to treat gastrointestinal diseases in preclinical models, supporting the establishment of potential organoid-based therapies for repairing damaged intestine [11,12] . Because clinical applications require large numbers of cells, it may be indispensable to in vitro expansion of organoids in long-term culture with retaining their initial characteristics. In addition, the cells should be capable of being preserved for prolonged periods, while maintaining cell functionality for off-the-shelf use. Cryopreservation may be an attractive technique for maintaining the functional properties and genetic characteristics of cells through long-term storage in order to facilitate the experimental and clinical applications of cell-based therapies [13][14][15] . However, although various methods have been developed for cryopreservation of different types of stem cells, such as mesenchymal, hematopoietic, and pluripotent stem cells [16][17][18] , protocols for cryopreservation of intestinal organoids have not been described. Therefore, it is necessary to develop an efficient method for optimal cryopreservation of cultured organoids. In the present study, we performed quantitative assessments to compare the characteristics (e.g., cell morphological phenotype, proliferation, and composition of differentiated intestinal epithelial cell types) of small intestinal organoids subjected to long-term culture under two different media. We also sought to optimize the cryopreservation method by elucidation of the survival of cryopreserved small intestinal organoids through a combination of dissociation and treatment with a Rho kinase (ROCK) inhibitor during freezing. Our findings provided important insights into our understanding of 3D culture systems with similarities to the intestine and contribute to the establishment of standardized cryopreservation protocols for intestinal organoid for use in clinical applications. Isolation of small intestinal crypts from mice All animal experiments were approved by the Animal Investigation Committee of the Korea Institute of Radiological and Medical Sciences in South Korea and were performed according to institutional guidelines and national animal protection laws. Isolation of small intestinal crypts from mice was conducted as described previously with some modifications [8] . Briefly, the jejunum (10 cm from the stomach) of C57BL/6 male mice (8-10 wk age, n = 4) was opened longitudinally, cut into 5-mm pieces, washed three times with cold phosphate-buffered saline (PBS), and incubated with 2 mmol/L ethylenediaminetetraacetic acid (EDTA) in PBS for 15 min at 37 ℃. After removal of EDTA solution, the supernatant containing villi was replaced with cold PBS. Crypts were isolated from the basal membrane by vigorous hand shaking for 1 min. This procedure was repeated until enriched crypts could be observed in the supernatant using microscopy. After collection of isolated crypts from tubes by centrifugation, the crypts were resuspended in 2% D-sorbitol (Sigma, St. Louis, MO, United States) in PBS, passed through a 70-µm cell strainer (BD Biosciences, Heidelberg, Germany), and centrifuged at 100 × g for 3 min at 4 ℃. 
The pellet was resuspended in 10 mL basic medium [advanced Dulbecco's modified Eagle's medium/F12, 2 mmol/L L-glutamine, 10 mmol/L HEPES, 100 mg/mL streptomycin, 100 U/mL penicillin, 1 mmol/L N-acetylcysteine, 1% B27, and N2 supplement], and crypt numbers were counted using microscopy. 3D culture of crypts and organoid passage The isolated crypts were cultured in organoid medium with either ENR or ENR-CV, as previously reported [8,9] . Two hundred crypts in 50 µL matrigel (BD Biosciences) were seeded in each well of a pre-warmed 24-well flatbottomed plate. Crypts were then incubated for 30 min at 37 ℃, and 500 µL of complete crypt culture medium was added. The ENR medium contained basic medium plus 50 ng/mL murine EGF (Invitrogen, Carlsbad, CA, United States), 100 ng/mL murine Noggin (Peprotech, Hamburg, Germany), and 500 ng/mL human R-spondin-1 (R&D Systems, Minneapolis, MN, United States), whereas the ENR-CV medium contained ENR medium plus 1 mmol/L valproic acid (Invitrogen) and 10 µmol/L CHIR99021 (Invitrogen). The crypts were cultured at 37 ℃ in an atmosphere containing 5% CO2 for the indicated number of days. The medium was changed every 2-3 d. For subculture, the organoids cultured on day 7 were passaged using enzyme-free cell dissociation buffer (STEMCELL Technologies Inc., Vancouver, BC, Canada). Briefly, cultured organoids were washed with cold PBS, and 500 µL cell dissociation buffer was added to the wells and incubated for 5 min at 37 ℃. After washing with 0.1% BSA in PBS, dissociated organoids were passaged (a 1:5 ratio). Freshly prepared medium and Matrigel were then added for organoid culture. The passage of organoids cultured under ENR or ENR-CV medium was performed once per week until the indicated passage. Cell proliferation and crypt viability For analysis of cell proliferation in organoids by 5-ethynyl-2′-deoxyuridine (EdU) staining, the cultured organoids on the indicated day were incubated with fresh medium containing 10 µmol/L EdU (Molecular Probes, Eugene, OR, United States) for 30 min and then fixed in 4% paraformaldehyde in PBS overnight at 4 ℃. The fixed organoids were permeabilized with 0.5% Triton X-100 for 1 h, and following steps were performed using a Click-iT EdU Imaging kit (Molecular Probes) according to the manufacturer's protocol. Hoechst (1:2000) was used for nuclear staining to facilitate cell counting. Images were acquired using an immunofluorescence microscope (Olympus, Shinjuku, Tokyo, Japan). For quantitative analysis of growing organoids, cultured crypts were examined at the indicated time point under bright-field of microscope. Organoids exhibiting at least two budding structures in each group were counted. Experiments were performed in triplicate. The data were expressed as the mean ± SD. For quantitative analysis of crypt viability after freezing and thawing, we performed methyl thiazolyl tetrazolium (MTT) assays as previously reported [19] . Briefly, on the indicated days, cultured organoids were incubated with 10% MTT (AMRESCO, Solon, OH, United States) for 2-3 h at 37 ℃. After cell lysis by treatment with 2% sodium dodecyl sulfate (SDS) and dimethyl sulfoxide (DMSO), the optical density (OD) value of the solution was measured at 562 nm using a Synergy HT (BioTek, Winooski, VT, United States). Experiments were performed in triplicate. The data were expressed as the mean ± SD. 
Quantitative real-time polymerase chain reaction Total RNA was prepared from raw crypts (freshly isolated crypts from mice) and cultured crypts using an RNase mini kit (Qiagen, Valencia, CA, United States) according to the manufacturer's protocol. A total of 1 µg of RNA was reverse transcribed using an AccuPower RT PreMix kit (Bioneer, Seoul, South Korea). Realtime PCR was performed with FastStart Essential DNA Green Master Mix (Roche, Indianapolis, IN, United States). All reactions were performed in triplicate. mRNA expression was normalized to endogenous glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression and expressed relative to ENR-derived cells or raw crypts. The primers sequences are listed in Table 1. Freezing-thawing of in vitro cultured organoids For organoid cryopreservation, organoids cultured under ENR conditions were left intact or dissociated into single crypt-like colonies using enzyme-free cell dissociation buffer (STEMCELL Technologies Inc.). Undissociated and dissociated organoids were resuspended in freezing medium, e.g., 10% DMSO and 10% fetal bovine serum or recovery cell culture freezing medium (RCCFM; Invitrogen). To determine the effects of Y-27632, a specific inhibitor of ROCK (STEMCELL Technologies Inc.) on the recovery of organoids, organoids were treated with the ROCK inhibitor for different times, including pretreatment for 30 min prior to freezing (before freezing), direct addition into freezing medium (during freezing), and postthaw treatment for 3 d (after thawing). After storage in liquid nitrogen for 1-3 mo, vials were quickly thawed, and thawed organoids were then cultured for 7 d. Time-lapse live cell imaging Live cell imaging was performed on a JuLi stage system (NanoEnTek, Seoul, South Korea). A culture Table 1 Primer sequences used in quantitative polymerase chain reaction analysis Gene Primer sequences (5'-3') Annealing temperatures (℃) dish placed on the microscope stage was covered with a chamber in 5% CO2 at 37 ℃. Images for the growth of crypts were an acquired at 60-min intervals. The data were processed using JuLi stage software v1.0 (NanoEnTek). Animal care and use statement All procedures involving were reviewed and approved by the Institutional Animal Care and Use Committee of the South Korea Institute of Radiological and Medical Sciences in Korea, and performed according to the Guidelines for Animal Experimentation of Korea Institute of Radiological and Medical Sciences. The animals were acclimatized to laboratory conditions (23 ℃ ± 1 ℃, 12 h/12 h light/dark, 50% ± 5% humidity and libitum access to food and water) for two, three or four weeks prior to experimentation. All appropriate protocols for study were taken to minimize pain and discomfort of animals. Statistical analysis Data are expressed as the mean ± SD or ± SEM of at least two independent samples. Statistical comparisons between groups were performed with twotailed Student's t-tests or two-way analysis of variance (ANOVA) with Dunnett's T3 tests. Differences with P values of less than 0.05 were considered significant. Establishment of a small intestinal organoid culture system using ENR and ENR-CV media In an attempt to establish a conventional culture for intestinal organoids using two different conditions [8,9] , freshly isolated crypts from the jejunum of C57/B6 mice were cultured in ENR or ENR-CV medium. Representative images of crypt growth into organoids are shown in Figure 1A. 
On day 1, crypts formed a round shape, called an enterosphere, and these structures became larger over time. Budding of enterospheres was observed beginning on day 3, and robust budding was observed on day 10, demonstrating a morphology typical of small intestinal organoids with a crypt-villus structure. We found that organoids cultured under ENR-CV conditions yielded increased budding length and size compared with those of organoids cultured under ENR conditions ( Figure 1A). Consistent with the results of previous reports (Sato et al [8] , 2009; Yin et al [9] , 2014), ENR-based organoids exhibited proliferating cells within the crypt domains, whereas proliferating cells were present throughout the organoids under ENR-CV conditions, as shown by EdU staining ( Figure 1B). We further confirmed the effects of the ENR-CV medium on enhancement of cell proliferation within organoids by counting the numbers of organoids exhibiting at least two budding structures ( Figure 1C). Long-term culture induced phenotypic differences in organoids under ENR-CV culture conditions, but not ENR culture conditions The two different types of medium used in this study have been shown to support long-term culture of intestinal organoids [8][9][10] . Thus, we aimed to confirm the long-term culture of organoids under our experimental conditions. As shown in Figure 2A, during continuous passage, the morphology of ENR-based organoids was constant, whereas the enhanced size and budding length of organoids in ENR-CV culture conditions were gradually diminished after passage 8 (P8). For a more extensive comparative analysis, we classified organoid into early phase (P0-4) and late phase (P8-12) based on morphological criteria, as shown in Figure 2A, and data are presented the mean of the sum of organoids from P0 to P4 or from P8 to P12 after performing two independent experiments with each passage. To evaluate the characteristics of organoids at early and late passages, we analyzed the expression of Lgr5, known marker of ISCs [20] , in organoids cultured in the two different media during continuous passage. At early passages, organoids cultured under ENR-CV conditions showed a dramatic increase in Lgr5 expression compared with that of organoids cultured under ENR conditions. These findings were consistent with a previous study showing that the expression level of Lgr5 was upregulated more than 3-fold in organoids cultured under ENR-CV conditions [9] . However, at later passages, Lgr5 expression under ENR-CV conditions was dramatically decreased to a level similar to that under ENR conditions. In contrast, Lgr5 expression levels in organoids cultured under ENR conditions were similar during both early and late passages ( Figure 2B). Furthermore, reduced numbers of proliferating cells, which were generally positive for Lgr5, were observed in organoids cultured under ENR-CV conditions during the late phase, as observed by EdU staining (Figure Figure 1). Next, we compared the compositions of intestinal epithelial cells in long-term cultured organoids under ENR and ENR-CV conditions. qPCR of intestinal epithelial marker expression showed that the expression levels of Lyz (a paneth cell marker), Muc2 (a goblet cell marker), ChgA (an enteroendocrine cell marker), and ALP (an enterocyte marker) were low and unstable under ENR-CV conditions compared with that under ENR conditions, similar to the expression of epithelial markers in primary raw crypts ( Figure 3A). 
Consistent with this, the result of immunostaining showed the reduced expression of some epithelial markers in organoids under ENR-CV conditions upon continual passage ( Figure 3B). In contrast, no changes in these markers were observed in organoids cultured under ENR conditions (data not shown). Therefore, these findings suggested that ENR-CV culture conditions could be susceptible to phenotypic alterations in organoids upon extended passage and may be less relevant to the in vivo composition of intestine cell types. Direct addition of the ROCK inhibitor Y-27632 into freezing media was superior for the recovery of cryopreserved organoids without dissociation Recent studies have reported the use of continuously cultured intestinal organoids to treat GI disease in mice [11,12] , suggesting that organoid-based therapy may have applications in repairing damaged intestines. In order to improve therapeutic technologies, we have examined the optimal conditions for cryopreservation of organoids. To explore the cryopreservation of cultured intestinal organoids under ENR conditions, we first performed freezing-thawing of undissociated and dissociated organoids using 10% DMSO, a traditional cryopreservative [21] . After 1 mo, cryopreserved organoids were thawed in medium and incubated for 7 d. Organoids with a crypt-villus structure were visible from frozen stock only for undissociated organoids ( Figure 4A), indicating that undissociated organoids showed better recovery from cryopreservation with 10% DMSO compared with that of dissociated organoids. Similar results were obtained from RCCFM (data not shown). We also extended the storage period of cryopreserved organoids up to 3 mo, which has been used for long-term cryopreservation in previous studies [21,22] . The viability of organoids was dramatically decreased, even in commercial freezing medium, in a time-dependent manner, as shown by MTT assays ( Figure 4B). The survival of various types of stem cells, including ISCs, is enhanced by ROCK inhibition during subcul-ture [8,16] . In addition, ROCK activity and cytoskeletal phenotypes are almost completely inhibited by 10 µmol/L Y-27632 [23] . Thus, in this study, we aimed to further optimize the cryopreservation of cultured organoids by examining the effects of Y-27632, a specific inhibitor of ROCK activity, on the recovery of organoids from cryopreserved stocks when added before freezing, during freezing, and after thawing of organoids. By evaluating the densities of grown organoids after freezing-thawing, we found that direct addition of Y-27632 into freezing medium during freezing resulted in superior recovery compared with that of untreated organoids, organoids pretreated with Y-27632, or organoids treated with Y-27632 after thawing ( Figure 5A). Consistent with this, MTT analysis revealed that there was a higher rate of recovery from direct addition of Y-27632 during freezing (> 2.5 fold) upon cryopreservation, compare with that observed under other conditions ( Figure 5B). We also observed similar effects of Y-27632 in commercial freezing medium when the drug was directly added during freezing (data not shown), and the typical organoid morphology with a crypt-villus structure was further confirmed by tracing the growth of organoids for 7 d after freezing-thawing, as shown by live-imaging analysis (Video data). In contrast to undissociated organoids, we did not observe improvements in dissociated organoids following treatment with Y-27632 (Supplementary Figure 2). 
Taken together, these results suggested that the recovery of cryopreserved intestinal organoids was significantly 971 February 14, 2017|Volume 23|Issue 6| WJG|www.wjgnet.com improved when the ROCK inhibitor Y-27632 was used for treatment of undissociated organoids rather than dissociated organoids during freezing. DISCUSSION Previous studies reported that long-term culture of intestinal organoids could be supported through either ENR or ENR-CV medium in a 3D culture system with a Matrigel matrix [8,10,12] , and in vitro cultured intestinal organoids may have applications in organoid-based therapy as shown in studies investigating the repair of damaged intestines in mice [11,12] . Here, we have extended these studies to determine a suitable longterm culture system and optimal cryopreservation of small intestinal organoid. We found that the phenotypes of intestinal organoids under ENR media were maintained over a long duration, whereas organoid under ENR-CV media exhibited morphological alterations, reduced numbers of Lgr5 + cells and inconsistent expression of markers for differentiated intestinal epithelial cell types upon extended passages. We also identified an efficacious cryopreservation method for expansion of undissociated intestinal organoids. For undissociated intestinal organoids, direct addition of the ROCK inhibitor Y-27632 during freezing permitted superior recovery of crypts after long-term cryopreservation. Using established cultures of intestinal organoids under two different media, we confirmed that the characteristics of intestinal organoids under ENR-CV medium in the early passage were consistent with a previous report showing enrichment of Lgr5 + expression, enhanced organoid size and budding length, and rapid proliferating cells [9] . However, upon extended passaging under ENR-CV conditions, but not ENR conditions, we observed phenotypic changes, such as reduced size and budding length of organoid, accompanied by reduced expression of Lgr5, an ISC marker, and upregulation of Muc2, a goblet cell marker ( Figures 2 and 3). Although these findings are contradictory to the report by Yin et al [9] , who showed maintenance of Lgr5 + stem cells during long-term passage, our findings were consistent with other reports demonstrating conversion of proliferating progenitors into secretory cells, along with loss of stem cells expressing Lgr5 in the context of inhibited Notch signaling [24] . The Notch signaling pathway contributes to enhancement of Lgr5 + stem cell proliferation and suppresses the differentiation of these ISCs into secretory cells, such as goblet and enteroendocrine cells. In contrast, Wnt signaling is associated with the formation of paneth cells, which we found to be unaltered as shown by Figure 3A [ [25][26][27] . Thus, we analyzed the expression of Notch signaling-associated molecules, including Notch family members and Hes1, in ENR and ENR-CV cultured organoids upon extended passage. However, the gene expression patterns were similar for organoids cultured under both conditions (data not shown). This suggested that the Notch signaling pathway was not involved in the observed changes under our culture conditions. 
Interestingly, although the expression of Lgr5 in intestinal organoids cultured in ENR-CV medium was reduced to a level similar to that of ENRcultured intestinal organoids during late passages, indicating that these events may result from the reduced effects of small molecules, this relationship did not seem to be causal because the composition ratio of differentiated epithelial cells in long-term cultured intestinal organoids under ENR and ENR-CV conditions was not well correlated ( Figure 3A). It is unclear why enhanced expression of Lgr5 was diminished upon continual passage in this study; however, a recent report demonstrated that loss of Lgr5 + stem cells is often observed as an unexpected side effect in patients treated with HDAC inhibitors [28] . Therefore, it is likely that changes in the phenotype and composition ratio of functionally differentiated cells in intestinal organoids under ENR-CVin long-term culture may be attributed to prolonged treatment with valproic acid, a known HDAC inhibitor [29] . In order to determine the mechanisms underlying RIGS at the cellular level, in-depth characterization of intestinal epithelial cells within in vitro cultured intestinal organoids is necessary. A previous study compared the characteristics of these cells under two different media [9] . Moreover, our current findings further showed that both media could support the long-term culture of intestinal organoids, recapitulating the crypt-villus architecture in vivo with ISCs (Lgr5) and differentiated intestinal epithelial cells, consistent with previous reports [8,9] . Based on the comparative analysis in our study, including analysis of raw crypts, we found that the expression levels of most markers of differentiated intestinal epithelial cells in ENR-cultured organoid were higher than those in ENR-CV-cultured organoids, regardless of whether the organoids were cultured long term. Furthermore, upon continuous passaging, the expression levels of epithelial cell markers in intestinal organoids under ENR conditions were constant and similar to the expression levels of corresponding markers in primary raw crypts, suggesting that ENR conditions may be appropriate for long-term culture of intestinal organoids and that the characteristics of ENR culture were relevant to determining the in vivo composition of small intestine cell types. Given that the specialized cellular niche plays an important role in the maintenance of intestinal homeostasis by creating a unique environment in vivo [30] , our data emphasized that the ENR-based intestinal organoid system may be useful for analysis of the mechanisms of radiation induced-intestinal cell death and that results obtained from the ENR-CV culture system, particularly for longterm culture, should be interpreted cautiously. One of the most important findings in this study was that recovery of cryopreserved intestinal organoids was dependent on the timing of Y-27632 treatment and the absence of dissociation. We found that intact organoids, not dissociated organoids, were efficiently cryopreserved in the presence of 10% DMSO as standard components in slow-freezing protocols [15,21] . Among current cryopreservation methods, including slow or fast freezing (vitrification), conventional slowfreezing protocols are generally effective in presence of DMSO as a cryoprotectant, are less labor intensive, and allow for handling of bulk quantities of cells [15,31] . 
However, DMSO is known to be toxic to tissues and cells and is considered an appropriate cryoprotectant for short-term storage owing to its time-dependent toxicity [31] . Indeed, we observed that low survival rates after freeze-thaw of cryopreserved organoids following extended storage (Figure 4). Importantly, however, addition of Y-27632 at the time of freezing improved the recovery of freeze-thawed intestinal organoids. Although Y-27632 is known to be a potent inhibitor of apoptosis and to facilitate the survival of dissociated stem cells during subculture including ISCs [8,16,32] , we did not observe efficient recovery of cryopreserved intestinal organoids when dissociated organoids were treated with ROCK inhibitor directly into the freezing medium (Supplementary Figure 2). These differences may be explained by the toxicity of DMSO, which varies from cell type to cell type during cryopreservation [31] . Our live-imaging data indicated the characteristics of long-term cryopreserved intestinal organoids by tracing the growth of organoids having a typical intestinal organoid phenotype with a crypt-villus structure. Further studies are required to determine whether subtle genetic alterations can be induced by cryopreservation with the ROCK inhibitor Y-27632. In the present study, undissociated intestinal organoids, but not dissociated organoids, were effectively cryopreserved and propagated after long-term cryopreservation by incorporating the ROCK inhibitor Y-27632 directly into the freezing medium. In conclusion, using a comparative analysis of the characteristics of long-term cultured small intestinal organoids under two different culture conditions, we demonstrated that ENR-CV condition, but not ENR conditions, induced phenotypic transition in in vitro cultured small intestinal organoids upon extended passaging. We also identified an efficacious long-term cryopreservation method for intestinal organoids through optimization of the organoid state and timing of treatment with the ROCK inhibitor Y-27632. This method may contribute to the establishment of standardized cryopreservation protocols for intestinal organoids and subsequent clinical applications of these cell sources. aCKNOWleDgmeNTS The authors would like to thank Songwon Seo, Chief of the laboratory of Low Dose Risk Assessment, National Radiation Emergency Medical Center, Korea Institute of Radiological and Medical Science for his statistical support. Background Recent studies have suggested that in vitro cultured intestinal organoids can be introduced to manage gastrointestinal diseases, supporting the development of promising organoid-based therapies for repair of damaged intestines. To improve organoid-based therapeutic technologies by acquiring large numbers of cells for clinical application, it is essential for long-term maintenance of characteristics and optimal cryopreservation method of intestinal organoid. Research frontiers Two different media [epidermal growth factor/Noggin/R-spondin1 (ENR) and ENR/CHIR99021/VPA (ENR-CV)] can support the formation of organoid containing crypt-villus structures that recapitulate the native intestinal epithelium. However, there is little comparative study of the characteristics of the resulting cells, particularly after long-term continual passage. In addition, it has not been well described for optimal cryopreservation methods for maintaining the functional properties of intestinal organoids in order to facilitate the experimental and clinical applications of organoid-based therapies. 
Innovations and breakthroughs This is the first study to report a continuous passages-induced phenotypic difference of intestinal organoid under ENR-CV condition, but not ENR condition which is suitable to long-term culture. The authors also demonstrate that efficient long-term cryopreservation of organoids is associated with a combination of organoid state and timing of treatment with the Rho kinase (ROCK) inhibitor. Applications This study provide important insights into our understanding of 3D culture systems for intestine-related organs and contribute to the establishment of standardized cryopreservation protocols for intestinal organoids on application of organoid-based therapy. Peer-review The manuscript by Han et al described that phenotypes of mouse intestinal organoids under ENR media were maintained over a long duration, and organoids under ENR-CV media exhibited morphological alterations. They also found that adding the ROCK inhibitor Y-27632 during freezing benefits recovery of undissociated intestinal organoids after long-term cryopreservation. The manuscript is succinct and the conclusions are well supported by the data.
2018-04-03T02:00:10.356Z
2017-02-14T00:00:00.000
{ "year": 2017, "sha1": "e10ed763584262df31a70d02e8036c5353e1d005", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v23.i6.964", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e10ed763584262df31a70d02e8036c5353e1d005", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
213918198
pes2o/s2orc
v3-fos-license
Role of late surgical explantation of device from perimembranous ventricular septal defect for left bundle branch block and left ventricular dysfunction. Fascicular blocks, complete bundle branch blocks, or complete heart block after placement of a device may be late complications, and long-term follow-up Introduction Device closures of perimembranous ventricular septal defects (pm VSD) have been carried out for over 2 decades now and are considered an accepted alternative to surgical closure in selected cases. While the development of heart block or aortic regurgitation had been major concerns in the past, the occurrence of complete left bundle branch block (LBBB) after device closure of pm VSD is quite rare, with few published reports. Development of LBBB is nevertheless significant, leading to septal dyssynchrony and progressive deterioration of left ventricular (LV) function. We report a case of resolution of LBBB after surgical explantation of Amplatzer duct occluder, several months after transcatheter pm VSD closure. is mandatory after device closure of perimembranous ventricular septal defects. Introduction Device closures of perimembranous ventricular septal defects (pm VSD) have been carried out for over 2 decades now and are considered an accepted alternative to surgical closure in selected cases. 1 While the development of heart block or aortic regurgitation had been major concerns in the past, the occurrence of complete left bundle branch block (LBBB) after device closure of pm VSD is quite rare, with few published reports. 2,3 Development of LBBB is nevertheless significant, leading to septal dyssynchrony and progressive deterioration of left ventricular (LV) function. We report a case of resolution of LBBB after surgical explantation of Amplatzer duct occluder, several months after transcatheter pm VSD closure. Case report A 6-year-old, asymptomatic boy weighing 18 kg underwent device closure of a moderate-sized pm VSD with septal aneurysm using 12/10 Amplatzer duct occluder (Abbott, Santa Clara, CA) in April 2018. The left ventriculogram showed 2 separate jets on the LV aspect and opacification of the right ventricle and right atrium through several openings in the ventricular septal aneurysm (Supplementary Video 1). The combined diameter of all defects on the right ventricular aspect was around 15 mm. Device closure of the defect was considered owing to dilated left-sided chambers on echocardiography (LV end-diastolic diameter z-score of 12.0) and a cardiothoracic ratio . 0.5 on chest radiography. The preprocedural electrocardiogram (ECG) was normal, except for prominent "q" waves in left precordial leads, suggestive of LV volume overload ( Figure 1A). The procedure was uneventful. Postprocedural electrocardiogram at 24 and 48 hours after the procedure ( Figure 1B) showed normal sinus rhythm with 1:1 atrioventricular (AV) conduction and normal QRS duration (QRSd) of 40 ms. He was advised follow-up at 1 month post procedure. The ECG at 1 month follow-up showed normal sinus rhythm with complete LBBB and QRSd of 130 ms ( Figure 1C). The echocardiogram showed the device in position, no residual shunt, no aortic regurgitation, and normal biventricular function. Conservative management was preferred at this time, after which he was lost to follow-up for 6 months, when he presented with mild exertional dyspnea and increased precordial activity. An ECG at this point showed sinus rhythm with 1:1 AV conduction and persistent LBBB with QRSd of 130 ms ( Figure 1D). 
The echocardiogram showed ventricular septal KEY TEACHING POINTS Left bundle branch block (LBBB) is a less recognized complication of transcatheter closure of perimembranous ventricular septal defects and is more likely to occur in defects that extend into the trabecular septum. Device-induced LBBB can lead to progressive ventricular dysfunction due to septal dyssynchrony. Fascicular blocks, complete bundle branch blocks, or complete heart block after placement of a device may be late complications, and long-term follow-up is mandatory after device closure of perimembranous ventricular septal defects. While cardiac resynchronization therapy has been conventionally used to improve left ventricular function in this setting, surgical removal of the device (even if carried out much later) could lead to spontaneous resolution of LBBB with normalization of ventricular function. dyssynchrony, dilated left ventricle, and LV dysfunction (LV ejection fraction [LVEF] of 30%). There was no history to suggest an alternative cause for the sudden deterioration in ventricular function. Cardiac magnetic resonance imaging (CMR) showed the device in the perimembranous location with dyssynchronous contraction of the ventricular septum, mild LV dilation (indexed LV end-diastolic volume: 102 mL/m 2 ; indexed LV end-systolic volume: 76 mL/m 2 ), and severe LV dysfunction (LVEF: 24%) (Supplementary Videos 2 and 3). There was no myocardial edema or late gadolinium enhancement (LGE) to suggest inflammation or scarring in the area around the device and there were no findings to suggest an alternate etiology for LV dysfunction (Figure 2A-D). A 24-hour Holter recording showed persistent LBBB, whereupon angiotensin-converting enzyme inhibitors and diuretics were started for symptomatic benefit. A multidisciplinary team consisting of pediatric cardiologists, cardiac surgeons, electrophysiologists, and a cardiac imaging specialist debated on the further course of action. A decision to attempt device retrieval was taken in preference to implantation of a cardiac resynchronization therapy (CRT) device, given the child's age (potential need for multiple pack changes and lead replacement), the nitinol make of the device (persistent tendency to expand post implantation), and the absence of LGE in the peri-device area on CMR (barring the area of susceptibility artifact due to the device), which excluded overt focal fibrosis, thus suggesting possible recovery of LBBB. The decision was also guided by our experience with a similar case, wherein the child succumbed to LV dysfunction several months after the development of LBBB, following pm VSD closure using an Amplatzer duct occluder II device. The surgeons elected to use a standard midline sternotomy and aorto-bicaval cannulation for cardiopulmonary bypass, and cardioplegia was achieved using Del Nido cardioplegic solution at 30 Celsius. The aorta and right atrium were opened. The device was seen adherent to the septal leaflet of the tricuspid valve and its chordae. A fibrous capsule had formed all around the device and the retention skirt of the duct occluder was well apposed to the LV surface of the pm VSD. The screwing end of the device was isolated and the delivery cable was screwed on to the device for possible exteriorization into a loader sheath after collapsing the device. Because of endothelialization within and around the device, the device could not be slenderized within the sheath. 
The device was teased away from the surrounding fibrous capsule on both sides of the septum by meticulous dissection and extracted. The VSD was closed using a Sauvage patch (Bard Inc, Tempe, AZ) with interrupted prolene sutures. The aortic cusps were inspected and found to be intact. Post VSD closure, intraoperative pulmonary artery pressures were one-third of systemic with no step up on oximetry run. Two steroid-eluting, bipolar pacing leads (CapSure Sense 4968 -35 cm and CapSure 4968 -60 cm, Medtronic Inc, Minneapolis, MN) were placed epicardially on the lateral aspect of the left ventricle between 2 obtuse marginal vessels, at a distance of 2 cm from each other and tunneled to a pocket created in the left infraclavicular area, anticipating a need for CRT in the future. The patient came off bypass smoothly, with transient 2:1 AV block, which reverted to sinus rhythm with 1:1 AV conduction with persistence of LBBB and QRSd of 130 ms ( Figure 3A). He was discharged home a week after surgery on appropriate doses of anticongestive medications. A month after device explantation, his LBBB had recovered, with a normal QRSd of 40 ms ( Figure 3B). Echocardiogram (Supplementary Video 4) showed lesser degree of septal dyssynchrony and improved LV dimensions and systolic function (LV fractional shortening of 24% and LVEF of 50%). This resolution continued to be maintained at 6 months follow-up and 24-hour Holter recording confirmed resolution of LBBB. Guideline-mandated anti-heart failure therapy continues, with anticipated 2-year duration of therapy. Although there has been no recurrence of LBBB over the last 6 months, the pacing leads placed on the LV epicardial surface have not been removed. Discussion The proximity of the AV node in relation to a membranous VSD is well known, and surgical and device closures have an inherent risk of developing complete heart block. This most commonly occurs either intraprocedurally or immediately post procedure. The bundle of His traverses a distance before piercing the central fibrous body before bifurcation. In perimembranous defects, the bundle traverses inferoposterior to the VSD and usually bifurcates along its inferior margin, where the muscular septum starts. 4 While the right bundle branch is well defined and descends along the right ventricular aspect of the septum, the extensive left bundle fans out into anterior, septal, and posterior fascicles on the LV aspect of the septum. The specialized cells that form the left bundle, although interconnected, are more widely distributed on the LV aspect of the interventricular septum. 5 Not surprisingly, right bundle branch block and left fascicular blocks have been described more often following pm VSD device closure or even during sheath and catheter manipulation across the defect. 2,6 The left bundle branch would be more vulnerable in defects that extend inferiorly into the muscular septum, rather than true membranous defects. However, all perimembranous defects extend variably into the trabecular portion of the septum and have been closed using devices, with low risk of postprocedural complications. Oversized devices may predispose to conduction disturbances. As there were multiple openings on the right ventricular aspect of the aneurysm, along with an indirect Gerbode defect, complete closure of the VSD depended on the retention skirt, rather than the waist of the device, which would have occluded only the opening where it was deployed. 
The 12/10 mm Amplatzer duct occluder has a retention skirt measuring 18 mm in diameter. The septal aneurysm is prone to stretch, and placing the device entirely within the aneurysm may have resulted in small residual leaks. Intraaneurysmal device deployment is not feasible in all pm VSDs. Hence, the device was placed with the retention skirt apposed to the LV aspect of the septum (Supplementary Video 5). Whether downsizing the device could have prevented the complication while still achieving defect closure is speculative, but the larger device may have played a part. Variations in anatomy and branching pattern of the bundle might be a possibility in the few case reports describing complete LBBB. Peri-procedural LBBB has been erroneously classified as a minor complication in literature. Steroids have been used anecdotally to reduce edema and thereby improve conduction. 7 We do not know if commencement of steroids a month after the procedure would have altered the bundle branch pattern. We postulate mechanical compression of the left bundle branches by the retention skirt and persistent tendency of the nitinol material in the closure device to expand as potential reasons for the delayed appear-ance of LBBB. Over a period of time, this might lead to irreversible damage and fibrosis. CMR in our patient did not show any LGE to suggest fibrosis in any portion of the left ventricle. 8 We assumed a fair chance of recovery, if the device could be safely removed without causing further damage to the surrounding conduction tissue during surgery. As the device was presumed to have endothelialized, surgical dissection during device retrieval had to be meticulous and carried out with finesse to avoid the development of complete heart block. It is rather tempting to attribute such ECG changes to other nonspecific etiologies, such as myocarditis. Unless proven otherwise, bundle branch blocks, nonspecific intraventricular conduction disturbances, and complete heart block should be considered post procedural complications. The fact that LBBB was seen a month after the procedure, when the child still had normal ventricular function and absence of any other changes on CMR to suggest myocarditis, 9 only confirmed our suspicion of device-induced LBBB. LBBB in LV dysfunction secondary to myocarditis or idiopathic dilated cardiomyopathy is usually a late finding, portends poor prognosis, and does not precede the development of ventricular dysfunction. The LBBB in our patient was thus postprocedural and LV dysfunction was a result of chronic interventricular dyssynchrony. 9 CRT has been tried in an elderly woman who developed LBBB. 10 As mentioned earlier, we did not want CRT alone to be a treatment option. While preparing for it, we believed in giving a chance at spontaneous recovery. One cannot overemphasize the advantage of surgical closure of ventricular septal defects, where the patch is placed only on the right side of the septum, virtually eliminating this risk. Current device designs invariably depend on the left-sided disc for device anchor. Device designs need to be reviewed, keeping such possibilities of delayed conduction tissue trauma in mind, and device construction with alternate material needs to be explored. Conclusions To the best of our knowledge, this is the first report of late VSD device explantation from a perimembranous location for LBBB and resultant ventricular dysfunction. 
While transcatheter device closure of pm VSD seems attractive, with lesser morbidity, the procedure is not free of significant complications. Development of complete LBBB after pm VSD device closure is extremely rare and is probably more common with defects extending into the muscular septum or with abnormal bundle branch anatomy. There are currently no electrophysiologic tools to identify who might be at risk for such events. Our experience, though limited to this case, seems to suggest that late device explantation might still reverse LBBB and LV dysfunction, favorably altering the clinical course.
2020-01-09T09:13:58.553Z
2020-01-08T00:00:00.000
{ "year": 2020, "sha1": "f2b19d06f01bce06c09b3e51a1d744ddf700c1b3", "oa_license": "CCBYNCND", "oa_url": "http://www.heartrhythmcasereports.com/article/S2214027119301770/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9090e7cea8618171e460a0b7e03baf75795f6a9e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247514566
pes2o/s2orc
v3-fos-license
EMPLOYABILITY OF 2015 to 2018 GRADUATES OF NEUST SAN LEONARDO OFF-CAMPUS PROGRAM Purpose — This research aimed to determine the employment status of NEUST Off-Campus Program San Leonardo Nueva Ecija graduates from 2015 to 2018 which also identifies the degree of contribution of skills and work-related values to the job placement of the 335 graduate respondents. Design/methodology/approach — Descriptive type of research was utilized in this research. Out of 500 questionnaires administered, 335 graduates returned answered questionnaires representing the three Colleges: Education, Management and Business Technology and Information and Communications Technology, and Associate in Hotel and Restaurant Management. Findings — Findings show that there is 86 percent of the graduates are gainfully employed locally with regular or permanent status who landed a job within 1 to 6 months. Likewise, the academic-acquired skills and competencies of the graduates are relevant to their chosen occupations. Practical implications — NEUST Off-Campus Program-San Leonardo, Nueva Ecija produces marketable and appropriately trained graduates with the majority landing in course-related jobs within a short period after graduation. Originality/value — The graduates possess the skills and competencies necessary to succeed in this competitive world. However, expansion of tieups with private business entities is made to at least maintain the high employability level of the graduates. INTRODUCTION Empowerment in terms of socioeconomic, political, and technical progress relies heavily on education. The economy of a country is fueled by the collective wisdom and expertise of its citizens. External investment, technological advancements, and globalization are all driving changes in the skillsets required. As the world around us evolves, people need to learn new skills in order to remain productive and earn a living. Colleges and universities need to consider this when developing their curriculums for their student's educational needs. A tracer researchbased approach to course program quality assurance can have a significant impact on the long-term professional development of students who have already completed their studies. While many higher education institutions provide instruction to a range of clients, most of them forget about them when they graduate and leave the institution's surroundings without a way of contacting them any longer. The majority of students, naturally, place a high value on their potential to find work after graduation and in the years to come. To help students enhance their employability skills and increase their ability to communicate with them, the campus has recently expanded its range of higher education courses to include a wide range of subjects. These abilities, once learned, must be practiced throughout one's career, not only during job searches and interviews but also in personal development plans and in making the most of work experience possibilities. There is no doubt that a student's ability to study and grow throughout his or her life is increased by the university. Nueva Ecija University of Science and Technology Off-Campus Program-San Leonardo, Nueva Ecija is an Educational institution that responds to the fast-changing demands of industries to competitive individuals. The demand prompted the institution to offer courses BS in Business Administration, Bachelor of This research also aimed to delivered achievements. 
This research aims to deliver on how graduates were in faring in their chosen field and further check on how effective the structure of the curriculum. The findings may be prime to the importance of the colleges they may be able to gain relevant knowledge to update their records of the graduates while at the same time they can assess the performance of their graduates. This tracer study will also to provide new ideas to the alumni with regards to their employability readiness; this will help the teachers to update the standards who will take active participation in the globally competitive world for service and international development of the university. This tracer study has the following specific objectives: 1. Describe the profile of the respondents in terms of: Importance of job creation. Jobs in the private sector not only offer the labor force employment options and the purchasing power they receive as remuneration for their contribution to the product, but they are also responsible for creating the so-called multiplier effect through their employment. For example, the development of a new manufacturing plant will help bring about the usage of indigenous raw resources in the country, which in turn will contribute to the government's ability to fulfill its obligation to the people it strives to serve through taxes they pay. Labor force aggregate purchasing power helps to stimulate the economy by raising production as they are all consumers, resulting in more efficient use of the country's resources when there is a high incidence of employment, but not full employment. With more people working, there is less time for crimes and criminality since more people are putting their time to good use. All of these benefits should be made clear to workers, who are entitled to a portion of them. If only for their own sakes, they have a moral obligation to ensure that businesses continue to operate, even if the owners of the business are not involved. Whenever a business closes or reduces its operations, it has a direct impact on workers and the economy. The knowledge gleaned from tracer studies can also be used to alter educational programs so that they better reflect the needs of the workforce and academics. Surveys have certain drawbacks, such as the difficulty of locating graduates and allowing them to fill out surveys. Graduates may not always be able to identify the link between their studies and their professional lives, and research findings are significant only if they can be used to implement tangible reforms by planners. However, the primary goal of this work was to investigate the employment situation of the graduates and discover how many of them had found their first job following graduation. It was necessary for the researchers to look at the graduates' important characteristics and judge whether or not these profiles had met their expectations so that whatever outcomes they could account for would be a useful forum for curriculum improvement and institutional advancement. Graduate Employability New graduates need more than academic knowledge in today's tough economy; they need skills that will help them land a job. As a result, colleges and universities must adapt to these shifts. Academia has been historically associated with the formation of moral and intellectual virtues and as a hub of civilization. They have become more utilitarian, with a concentration on professional training, as a result of rapid economic growth. 
Priority is to guarantee that education and training are responsive to the demands of the economy's varied sectors. Furthermore, the Philippines' Commission on Higher Education is spearheading an effort to perform GTS among chosen HEIs in order to acquire data that would reveal if HEIs are producing graduates that fit the needs of industry and society. Furthermore, through the GTS, higher education institutions (HEIs) would be able to better connect their efforts with the industry's personnel demands (CHED CMO numbers 38, 2006, 11, 1999. Graduates will be more productive, efficient, and knowledgeable in their current jobs as a result of their undergraduate specialization knowledge, skills, and competencies. Employees are considered to be regular employees if they are engaged to perform activities that are necessary or desirable in the employee's usual business or trade; except where the employment has been fixed for a specific project or undertaking, and the completion or termination of which has been determined at the same time of engagement, the employee is not considered to be regular. More than a majority of Vol. 29, No. 3 December 2021 © Centre for Indonesian Accounting and Management Research Postgraduate Program, Brawijaya University graduates have found work in their chosen field, however, those who didn't get hired cited the following reasons for their lack of success: busy as stay-at-home moms, while some continue their education. Research Design This research used the descriptive research method to analyze Research Locale This research was conducted in San Leonardo Nueva Ecija. This municipality is where the 2015 to 2018 graduates studied their Bachelor's Degree, and the majority of the respondents lived. Sample and Sampling Procedure Total enumeration was utilized to collect data in this investigation. Australian Bureau of Statistics (2013) defined total enumeration as examining every unit, individual, or thing in a population. Additionally, it is referred to as complete enumeration, which simply implies a complete count. The researchers chose this sampling technique because the total number of respondents was sufficient to obtain accurate data and information. Data Collection The researchers collected secondary data and relevant information from the Internet. Before the distribution, the researchers asked permission from the heads of the travel agency associations through a request letter duly signed by the researchers. After the distribution, the answered questionnaires were retrieved, and the data were tallied for interpretation. because most of the respondents are fresh graduates and they can easily find a job if they start in their own province (10%) were not employed and 3% were self-employed. According to the respondents, they want to earn more experience in their hometown and province so that if they want to go abroad or to urbanized cities, they are more prepared and competent. Assistants (4%) are employed to Support the marketing manager, they will be at the heart of driving marketing campaigns for a product or Employment Status of the Graduates service. An important cog in the marketing wheel, they will be expected to be involved at all levels, including drafting press releases, updating clients, and organizing promotional events. Marketing Staff (4%) are employed to perform marketing programs of the company. Sales Specialists (4%) are in charge of establishing sales objectives and handle all sales activities to fulfill these objectives. 
Their main responsibilities include preparing promotional materials, handling sales, and coordinating and supervising sales staff in performing the daily task. Sales Agent (4%) they are employed basic pay and commission basis their duties is to assist the clients and closed sales for products or services. Senior Legal Clark (4%) is the one who administered the employee in different areas they are employed in the office or bank to keep records and accounts and to undertake other routine administrative duties. Teller (3%) is the employee of a bank or similar institution whose job includes the responsibilities of helping the bank customer with their banking needs, such as depositing a check or making a withdrawal. Most bank tellers are located behind a counter or desk at the bank and communicate with the customer across the barrier. Some banks have implemented a driver-through system where tellers can help the customers with their banking needs without the customer having to leave their car. 3. Determine the reasons of the unemployed respondents. Pursue studying 3 9 Total 35 100 According to Table 8, the majority of respondents during the course of this research had just resigned from their former work. According to respondents, they are looking for a more environmentally friendly Pasteur. Others have taken on parental responsibilities and begun their own families. On the other side, other respondents were awaiting a call from a prospective employer. According to respondents, once they receive the call, they will seize the opportunity promptly. 4. Determine the problems encountered by the employed respondents in terms of Job Performance. CONCLUSIONS AND RECOMMENDATIONS This tracer study was conducted to determine the current employment status of the graduates and how far they have gone particularly in their achievement. This research also aims to deliver on how graduate was faring in their chosen field and further checked on how effective the structure of the curriculum of the colleges. The research was composed of 335 respondents. The two-part questionnaire was used for gathering data from graduates. Data were analyzed using descriptive statistics like frequency count and percentage. A greater percentage of the respondents work along their field of specialization while others are working not related to their completed course. Salaries, benefits, and career challenges are some of the reasons for changing their job. They were Vol. 29, No. 3 December 2021 © Centre for Indonesian Accounting and Management Research Postgraduate Program, Brawijaya University looking for other companies where they can apply their knowledge and skills which will give them competitive compensation and benefits. In line with the above discussion, the researchers recommended continuously practicing the high standards of instruction that the school has been provided. This can be done by providing faculty members with the most current information that can assist them to adapt their teaching methods to the needs of their students. Another recommendation is to reevaluate the university's curriculum, methods, and other tactics in light of the industry's lack of requirements. Increasing students' exposure to a variety of training to better prepare them to take advantage of emerging market trends and opportunities. This research can serve as a starting point for curriculum building and syllabus revisions based on the most recent needs of various business, IT, and education sectors. 
Finally, it is suggested that a new tracer study be carried out on the results of graduates' performances as observed by employers. Identifying which fundamental subjects should be reworked as part of the curriculum improvement can help.
2022-03-18T15:14:07.564Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "bb76b8e0ad4f0ac2059920db486586ed04ae3041", "oa_license": "CCBY", "oa_url": "https://ijabs.ub.ac.id/index.php/ijabs/article/download/632/393", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "715a419d256edc3d7cad1d95dfc0b68ce49f7bbc", "s2fieldsofstudy": [ "Education", "Business" ], "extfieldsofstudy": [] }
154987902
pes2o/s2orc
v3-fos-license
Business for peace? The ambiguous role of ‘ethical’ mining companies Multinational companies are increasingly promoted as peacebuilders. Major arguments in support of such a position emphasise both interest-based and norm/socialisation-based factors. This article uses research on large mining MNCs in eastern DRC – those that, arguably, should be most likely to build peace according to the above positions – to engage critically with the business for peace agenda. First it demonstrates the limited peacemaking, as well as active peacebuilding, activities in broader society that companies undertake. Second, it finds that even those companies deemed most likely to build peace continue relying on hybrid (in)security practice. Third, this article calls for more reflexivity concerning the implications of the business for peace research agenda. While the latter might contribute to socialising businesses into contributions to peacebuilding, it also produces companies as legitimate authorities, despite their limitations as peacebuilders. As a result, new conflict and insecurity are produced, especially for/with those displaced from land and artisanal mining pits and left with no alternative livelihood options. Introduction There has been much debate regarding how natural resource extraction fuels conflict. 1 Eastern DRC is one of the prototypical examples used for the argument that precious stones cause rebel groups to fight. 2 When it comes to extractive industries, debates focused on how they sponsor violence or support authoritarian governments. 3 It has been demonstrated, however, that greed for precious minerals does not cause rebels to fight in the DRC, and rather is only implicated in some cases in prolonging the violence in the Kivus and Province Orientale. 4 While some mines in the eastern DRC are militarised and provide income to certain armed groups, others use a much more diverse set of revenue- generating activities, utilising forms of ad hoc taxation and other economic opportunities such as trade in charcoal or wood. 5 Over the last 10 years, however, the idea of 'business for peace' has emerged, powerfully guiding donor expectations and policies in (post-)conflict contexts, such as the eastern DRC. Governments, international organisations (IOs) and non-governmental organisations (NGOs) have come to promote business, including industrial mining companies, as agents of development and peace. 6 Research in International Relations and management studies has concentrated on businesses' contributions to peace ranging from the creation of private standard setting 7 to promoting economic development 8 and security provision 9 in areas of limited statehood and post-conflict settings. This paper takes issue with the paradoxical bifurcation of the two aforementioned literatures. The usual argument in support of attracting large-scale foreign investment to the mineral sector in post-conflict countries supposes that strengthening the 'formal' economy by bringing in 'good' business, and sanctioning 'bad' business, will replace the war economy. This article argues that this story is far too simplistic: such binary opposition between good and bad company does not exist. By examining security and community practices of multinational mining companies in eastern DRC that are committed to ethical business principles, this article demonstrates that there is very little active peacebuilding. 
Although some of the harm business itself does is reduced, insecurity and conflict caused by the very companies deemed most likely to build peace continue to exist. Hence companies are neither socially responsible and good, or bad. Instead, this article demonstrates that companies' violence-reducing and community engagement strategies coexist with practices that lead to insecurity and conflict. As a consequence, this article calls for a critical rethinking of the business for peace agenda. While high expectations might contribute to socialising businesses into more active contributions to peace, the business for peace agenda also produces companies as legitimate authorities, despite their limitations as peacebuilders and their ambiguous effects on local security. Moreover, this article finds that the promotion of large-scale investments in mining in eastern DRC produces immense insecurity for local communities that depend on artisanal miners 10 and thus requires critical reconsideration. The article focuses on multinational companies (MNCs) that are considered as advanced in their uptake of ethical business norms. Arguably such companies should be most likely to live up to the expectations expressed in the business for peace research. Two case studies are used, namely Canadian and South African gold mining companies Banro and Anglogold Ashanti (AGA). Canadian company Banro in South Kivu is the first company to have entered production phase in eastern DRC. AGA in Mongbwalu, Ituri has publicly committed to, and promotes, responsible security and human rights policies since a campaign against its complicity with armed groups in the past. Starting with a review of debates on business and peace/conflict in (post-)conflict contexts, this article then briefly traces the emergence of the business for peace discourse and illustrates it in relation to industrial mining companies in the DRC. The extent to which companies contribute to peace will be examined by concentrating on Anglogold Ashanti in Ituri and Banro in South Kivu, distinguishing active contributions to peacemaking and peacebuilding in broader society from attempts to reduce their own negative externalities. Business for peace: the literature In addition to the resource curse debate and the aforementioned literature on business and conflict, much interest has revolved around a positive relationship between commerce and peace, and more specifically around business contributions to peace. According to the dominant market-liberal 'business for peace' position, peacebuilding missions should work with companies, and most contemporary business actors should be intrinsically interested in peace. 11 As much as conflict was inimical to growth, the argument goes, commerce and peace will reinforce each other. 12 However, empirical evidence for this position is, at best, inconclusive. First, historical evidence demonstrates a close relationship between the capitalist economy and violence, such as in how capitalist modes of production were established in Europe and the (post-)colonies. 13 Capitalism might often require peace, or at least stability, but it also 'thrives on war and instability'. 14 Second, the supposedly positive correlation between commerce and peace is challenged by empirical research. Quantitative studies fail to find an inverse correlation between commerce and conflict. Barbieri and Schneider find foreign direct investment (FDI) fuels conflict, especially in asymmetrical relationships. 
15 Michalache-O'Keefe and Vashchiko's study demonstrates that there is little difference in the level of war and peacetime influx of FDI, and points to peaks in investment during conflict in some cases. 16 These studies examine inter-state trade and conflict, and the conflict-proneness of entire states due to their position within international trade relations. Such a focus does not allow for an examination of the effects of specific corporate investment on peace and conflict in an area. As a result, others have shifted their focus of analysis from the unit of the state to individual companies as a way of exploring how companies contribute to peace. In fact, while only seven years ago there was a negative selection bias towards cases 11 See the Global Compact, http://www.unglobalcompact.org/. 12 For a good summary of this position see Berdal and Mousavizadeh, 2010. 13 Cramer, Civil War Is Not a Stupid Thing. 14 Ibid., 204. in which companies fuel conflict, 17 now the opposite might be the case. By taking the 'bad guys' out of the sample, the liberal idea of commerce for peace is promoted. 18 Based on the assumption that most companies are intrinsically interested in peace and are apolitical, the management literature views companies as external to local conflicts and concentrates on business responses to different conflict settings. 19 Similarly, it is suggested that companies could step in and broker peace between warring factions thanks to their outsider position with no direct stakes in the conflict. 20 Companies would build peace through improving economic conditions, improving access to markets and decreasing inequalities in economic opportunities for the local population. 21 In International Relations, attention has focused on corporate social responsibility as an emerging norm as well as related voluntary standards. Both are seen by liberals as potentially able to embed the market economy, through socialisation into socially responsible behaviour and through regulation. 22 Much has been achieved in this regard and a number of large extractive companies take part in a variety of regulatory initiatives, ranging from the Extractive Industries Transparency Initiative and the Voluntary Principles of Security and Human Rights, to the Kimberly Certification Scheme and various other certification schemes to ensure conflict-free sourcing of minerals. 23 However, 'successful cases' of such embedding -especially companies that take part in corporate social responsibility (CSR) initiatives -have received much more attention than other companies. 24 Many concentrate, furthermore, on policy formulation and how policies translate into specific programmes at firm level without looking into their outcomes or impact. Difficulties with operationalisation and data access partly explain this problem, however the positive selection bias and the focus on regulatory mechanisms tends to sideline the question of how effective these initiatives are. 25 Critical scholarship points to limitations of the CSR and business for peace agenda. 26 Many companies do not engage much in building peace and put little effort into reducing their negative impact on conflict. While this would not be contested by scholars interested in positive cases of corporate efforts in peacemaking and peacebuilding, studies that demonstrate the limitations of CSR efforts and the entanglement in conflict and insecurity by supposedly ethical companies are more concerning. 
It has been demonstrated, for example, for the Nigerian case, that oil companies' collusion with the Nigerian political regime limits and compromises their violence-reducing and conflict preventing efforts. 27 Promoting businesses as peacebuilders is also based on the idea that formal, ethical businesses could be separated from 'bad' business, and would promote long-term stability and peace in line with the market-liberal vision of politics. However, instead of transgressing from unethical to ethical, supposedly ethical companies keep using a hybrid set of practices in (post-)conflict settings. 28 A look at the entirety of security practices that mining companies committed to CSR use brings to light their continued use of heterogeneous strategies that have ambiguous effects on local peace and security. Whilst corporate community engagement and conflict prevention initiatives can have the potential to address root causes of conflict, such as inequality, abuse of public office and limited access to land, they often coexist with stability-oriented political alliances with political authorities, and managerial uses of community engagement that instead only contain local discontent rather than contributing to positive peace. Moreover, concentrating upon the peacebuilding activities that industrial mining companies undertake in eastern DRC might unduly enhance corporate authority and hide their conflict-inciting practices and insecurity-enhancing effects. The liberal business for peace agenda therefore needs critical rethinking. This article contributes to such an endeavour by investigating the most likely 'success cases' for peacemaking and peacebuilding by mining companies in eastern DRC and how they are entangled with, and effect, local peace and (in)security. The whole range of corporate security strategies will be considered in assessing whether they contribute to peace. These may range from withdrawal from a site of investment, or not entering a (post-) conflict setting at all, to fortress protection and engagement strategies with neighbouring communities. 29 They also include clientelist practices, including co-option of politicians and indirect rule 30 as well as 'alliance strategies' 31 to produce stable working conditions. They might also engage in peacemaking or peacebuilding practices in broader society (although this happens rarely). 32 Strategies thus range from those that are conflict-and insecurity-enhancing to peacemaking and peacebuilding practices. Peacemaking refers to attempts to end armed conflict, and peacebuilding to any reforms and institution-building initiatives designed to 27 Ukiwo, 'From "Pirates" to "Militants"'; A. Zalik, 'The Niger Delta: "Petro Violence" and "Partnership Development prevent new conflict and create sustainable peace. 33 It is also important to distinguish between different qualities of contributions to peacemaking and peacebuilding. Narrower contributions refer to attempts to reduce conflict-and insecurity-enhancing effects of corporate actions: in other words, activities to prevent doing harm and reducing the negative externalities of core business practices. Broader contributions are any activities aimed at supporting peace in broader society, and hence go beyond reducing or mitigating conflict and insecurity caused by companies' own activities. These two tend to get mixed up in the promotion of business for peace in policy discourse. 
However, the latter is the crucial one for evaluating the business for peace agenda in relation to extractive industries. Business for peace: the policy discourse In conjunction with some of the academic literature discussed above, donor governments and international organisations promote companies as peacebuilders and partners in (post-) conflict zones. In response to NGO campaigns, some efforts have been made to regulate companies in order to stop them fuelling conflict and negatively affecting local populations. Yet at the same time, the business for peace agenda emerged because governments see it as an opportunity to 'contract out conflict prevention to non-state actors, to reduce the costs of intervention'. 34 Today, governments and NGOs appeal to companies to support the peacebuilding efforts in the DRC. In order to move the Congo forward, so the idea goes, resourceful companies could help to bring peace and development to weakly governed areas in the country. Multinational companies with listings at international stock exchanges are expected to be most likely to meet these expectations. The idea of corporate security responsibility 35 is much younger than that of corporate accountability more generally. 36 The triggering factor was IOs' and NGOs' questioning of mining companies' activities in conflict zones. Extractive industries were accused of committing human rights abuses and of complicity with such abuses by state and commercial security providers. For instance, French oil company Elf Aquitaine was revealed to have supported the Cobra militias of Sassou-Nguesso after he lost elections in 1997, in order to secure its strong position in Congo Brazzaville. The conflicts in the DRC have put transnational mining companies and trading networks under the spotlight for fuelling a war economy, 37 and more than 80 companies from the OECD-world that exploited resources during the wars have been identified. In addition, companies such as Anglogold Ashanti, Anvil Mining and Freeport -to name some of the most prominent examples -have been accused of complicity in war and human rights abuses. 38 Subsequent attempts to regulate multinational companies' security and human rights practices include voluntary initiatives, such as the Voluntary Principles on Security and Human Rights (VPs), which focus on extractive industries. 39 These initiatives developed alongside a growing trend to promote partnerships with the private sector in order to solve public issues more generally. From the privatisation of formerly state-run services in Europe to foreign direct investment in so-called developing countries, market-liberals have argued that business would help. Böge et al. observe that it has become common wisdom that the private sector has to be included in efforts aimed at crisis prevention and conflict management. 40 Governments, international organisations and parts of civil society alike appeal to firms to engage as 'global governors' 41 of security in 'weak governance zones'. 42 This is true for the DRC as well. Developed together with the US and UK governments, the VPs require companies to teach their host state the virtues of anti-corruption and how to prevent human rights abuses. 43 Another illustration is the Responsible Investment Initiative launched in Kinshasa in 2008. Organised by the UN Global Compact and the German development cooperation company GIZ, it seeks to reframe companies as governance actors and as part of the solution to Congo's crisis. 
44 This turn to companies as peacebuilders is remarkable. 45 To some extent, the narrative of conflict minerals and war economies has led to the idea that the exact opposite existed: ethical business that builds peace. The remainder of this paper will look at mining firms in eastern DRC in order to assess to what extent they actually contribute to peace. For the purposes of this article I will use interviews and observations from field research on two gold mining companies, AGA in Ituri and Banro in South Kivu. Ashanti Goldfields acquired shares in the Mongbwalu concession in 1996. It gave the company rights to mining concession 40 which included 2000 square kilometres around Mongbwalu, Ituri. Anglogold Ashanti was created as a merger of Ashanti with Anglogold in 2004 and took over. 48 Since AGA was targeted by a highly visible shaming campaign criticising the companies' support for armed group Front des Nationalistes et Intégrationnistes (FNI), 49 it has become actively engaged in promoting violence-reducing, conflict preventing business practices. 50 Canada-based company Banro, in turn, is committed to the VPs and other global standards. As the first company to have entered into production, it is hoping to develop a peace-oriented economy in the province. The company holds mining licences for a total of more than 2790 km 2 , plus research permits for an area that is even larger than this, roughly 40 km south-west of Bukavu. 51 Making peace? Mining MNCs for peace in eastern DRC? The Anglogold Ashanti and Banro cases While having invested in their respective mining operations in the 1990s already, both Banro and AGA adopted an avoidance strategy during the second Congo War and did not do exploration work during that time. AGA took up explorations in Mongbwalu from 2003, 52 and Banro from 2005. 53 To wait for a de jure sovereign government to be in place and lower levels of violence is what similar companies did as well. Freeport MacMoRan for instance only invested in the now largest copper mine in southern Katanga after the transitional peace agreement in 2003. 54 Hence, instead of contributing to ending conflict and making peace, those companies that are seen as most likely to contribute actively to peacebuilding seem the least likely to move into conflict-ridden areas early. This calls into question the practicality of an important aspect of the business for peace idea. Ashanti Goldfields moved into the DRC early but had a rough start. There was confusion over the initial contract they made with Laurent Kabila in 1996, which was temporarily lost to another company, then re-established in 2001. This was in the middle of the second Congo War and the Mongbwalu area, in which the concession is located, was not under the control of the government. As early as 2002, AGA sent representatives to evaluate the local situation in order to start exploration work. Instead of acting as a mediator and peacemaker, as the business for peace agenda would expect, they made contact with the rebel group Union of Congolese Patriots (UPC), which controlled the area at the time with the support of Rwanda, in order to negotiate access. When the UPC was forced to leave by the FNI, which was created and supported by Kinshasa and the Ugandan government, 55 the company started negotiations with them. 56 Declared president of the FNI, and of Mongbwalu at the time, Floribert Njabu, is quoted in the Human Rights Watch (HRW) report as stating: The government is never going to come to Mongbwalu. 
I am the one who gave Ashanti permission to come to Mongbwalu. I am the boss of Mongbwalu. If I want to chase them away I will. [ . . . ] The contract for Ashanti is with the government but we [the FNI] control Mongbwalu so they need to come to see me if they want to work there. 57 Rather than attempting to make peace, the company did not even strive for neutrality, instead negotiating with whoever controlled the area. The allegations raised against the company by Human Rights Watch are well known. The FNI gained in strength in the area last but not least thanks to the material support provided by AGA. It paid US$8000 to the FNI and allegedly provided accommodation, access to transport and paid levies on cargo flown into the local airport. 58 AGA supported the FNI by recognising them as a negotiating partner and hence providing legitimacy. 59 By doing so it got entangled with a much older local conflict over land and inequality between those identifying as Hema, making up the local auto-defence groups from which the FNI was formed, and those identifying as Lendu, of whom many had sided with the UPC. 60 This story has become one of the best known examples for how companies become entangled in war and prolong fighting. Of course, arguably, this incident happened before AGA turned into an 'ethical company', and actually did trigger the subsequent changes in the company's strategies. In response to the HRW report, AGA condemned that company staff at Mongbwalu had allowed extortion by the FNI and promised that it would never happen again. They argued that in the future, they would only have the 'moral right' to engage in a conflict zone 'if [ . . . ] we can honestly conclude that, on balance, our presence will enhance the pursuit of peace and democracy'. 61 So, has AGA turned from bad to good business practices, and have those significantly contributed to building peace in the area? As regards conflict resolution in broader society -the most important, but also most demanding criterion -the company did not actively engage in such activities after 2005. Rather it reacted to conflict, and safety in the area improved due to political processes that were beyond the company's purview. Specifically, the FNI was dissolved in 2005 and turned into a political party. Its leadership became part of the Kabila government and its fighters were integrated into the Congolese army, the FARDC. 62 Also later on the company reacted to newly increased insecurity rather than taking on a role as peacemaker. When fighting broke out in the area in November 2008, for instance, AGA withdrew from a number of camps for three months. 63 They were not directly affected by the M23 yet had to deal with indirect effects such as overall increased insecurity. 64 No active attempts at peacemaking were reported. Such attempts might well be beyond the options available to the company. Its former alignment with the FNI, and close links with the Kabila government do not suggest AGA as a neutral mediator in the first place. This case illustrates however the, perhaps unreasonably, high expectations on which the business for peace agenda is based. In contrast to AGA, it could be argued that Banro contributed to ending armed conflict in Luhwindja territory. However, it did so in a way that is not referred to in the business for peace agenda. 
While it is difficult to prove the direct involvement of Banro, the FARDC cleared Luhwindja and especially the Twangiza concession of the rebel group Democratic Forces for the Liberation of Rwanda (FDLR) in 2005 just before Banro's arrival in the same year. 65 The company has entertained close relations with the Kabila government and it is at least notable that the Congolese army launched a targeted military operation to clear exactly the territory in which the major part of Banro's concession was located, whilst other surrounding areas were not. The population was also disarmed 66 and subsequently Banro was effectively installed, beginning exploration work there. Since the arrival of Banro and the related high military and police presence in the area, no rebel attacks have taken place. 67 The campaign, however, did not end fighting in the broader area. Instead, it mainly displaced FDLR strongholds. Indeed, the FDLR continues to be present in surrounding areas such as the Itombwe forest. 68 Some also claimed that security from FDLR attacks was achieved much earlier. During the second Congo War, the population had armed themselves and defended settlements and mineral deposits against FDLR attacks. 69 Many of the artisanal miners in the area, who were part of these self-defence groups, complain that Banro deprived them of access to gold. 70 Economically the region depends very much on artisanal gold mining, which generates income for most households. However, Banro is restricting artisanal mining more and more on the concession. Similar to AGA, Banro has not remained neutral but has instead aligned itself with the de facto authority most likely to provide access to the concession. Having acquired shares in the concession in 1996/97 from Laurent Kabila, Banro lost its mining rights when the first Kabila government decided to nationalise mineral extraction. 71 It subsequently aligned itself with the rebel movement Rally for Congolese Democracy (RCD) that seized large parts of South Kivu in 1998. However, it remained unsuccessful in gaining access to the concession. Finally, Banro built ties with Joseph Kabila whilst at the same time bringing litigation against the DRC at the International Centre for the Settlement of Investment Disputes in Washington. As a consequence, its mining licence was reinstalled. The concession remained under the control of rebel groups well after 2003, yet the government's military campaign in 2005 eventually provided access to it. In sum, Banro and AGA do not demonstrate a record of mediating between warring parties in the conflicts in Ituri and South Kivu. They moved into their concessions when fighting reduced in the respective area in which their concessions are located. This access was reached by aligning themselves with the dominant political party. Both companies have hence not been neutral nor have they contributed as much to peace as envisioned by the business for peace agenda. At the same time, their contributions to peacebuilding in wider society remain small. Do no harm? Build peace in broader society? This section turns to peacebuilding, in respect of broader society as well as to narrower efforts to do no harm and hence reduce the negative externalities of their own core business practices. The elevated number of security forces in mining areas is a prominent concern in peacebuilding. In the eastern DRC it often increases insecurity for the population that lives on, and adjacent to, mining concessions. 
A crucial question is therefore whether AGA and Banro manage their security forces in a way that prevents that. The security situation in Mongbwalu is still rated as 'sufficiently elevated to require the inclusion of state military units on a near-permanent basis'. 72 Therefore AGA works directly with both state police and the Congolese military (FARDC). It also employs commercial security company G4 Security. In addition, a wide range of state actors are involved in security governance, such as the Mining Police, Judiciary Police Officers and the National Intelligence Agency. 73 AGA has done relatively well in training private security providers. By 2009, it had trained 86% of its employees and commercial security personnel in human rights. 74 However, this is an average across all its operations, and excludes state security forces. Apart from Congolese security forces, Banro employs private security provider Erynis. It is not a member of the VPs but says that it implements them. Private security forces concentrate on guarding the company's property and protected areas in the concession. They are supported by state police in situations of public protest or confrontations with artisanal miners. 75 The company is overall guided by a narrow vision of fortress protection. Were there ways in which AGA and Banro actively contributed to peacebuilding in broader society in the area? The VPs require companies to engage actively in reducing human rights violations by others and prevent violent conflict in wider society. Some of AGA's engagement with state security forces might be considered as such. The VPs stipulate that the company should discuss insecurity and human rights issues with political actors in the area in regard to preventing conflict and reducing human rights abuses. While no forum for such regular discussions was reported, AGA makes attempts to provide human rights training to the Congolese soldiers and police officers it works with. 76 Since such training addresses FARDC units notoriously known for their abusive behaviour, it could be said that this contributes to reducing insecurity in broader society. However, there are major challenges with such training. The government has been reluctant to allow such involvement by an external company. 77 Furthermore, state security contingents rotate on a regular basis. 78 While it might help to prevent the establishment of networks of corruption it might also support an 'it's my turn' mentality and makes any meaningful impact of training unlikely. Moreover, despite Banro's claim of implementing the VPs, such activities were not identified. The effectiveness of the companies' peacebuilding activities vary. In the AGA case, security forces related to the mines are still met with mixed perceptions. In particular in regard to state security forces, interviewees report that they provided a certain level of security. Similar to the Banro case, their presence keeps out other armed actors. AGA self-reports that no confrontations took place in 2011 and 2012 in Mongbwalu that resulted in injuries or fatalities and that involved state or private security working with them. 79 However, it was reported in focus groups and interviews in the Mongbwalu area that they elevated everyday insecurity for the population living around the concession. 80 FARDC soldiers, for instance, use threats to levy illegal taxes on gold traded by artisanal miners using AGA site control points for that. In the case of Banro, illegal taxation was not reported. 
Since the company has entered the production phase, however, encounters between security forces and disgruntled community members from relocated communities have taken place. They also violently clashed with artisanal miners. 81 For both Banro and AGA, their presence has triggered a major new conflict between industrial and artisanal miners. Besides relocation, local people report that violent encounters with company security forces and denied access to former artisanal gold mining sites constitute a major source of insecurity. Ilunga suggests that around 25,000 of the 120,000 inhabitants of the wider Mongbwalu area are artisanal miners. 82 A CAFOD report talks about an estimated 9500 artisanal miners on the AGA concession. 83 Many more depend on this as a major source of income. Estimates of artisanal miners in Luhwindja range from 6000 to up to 12,000, each providing a living for a family, on top of which come local traders, shop-keepers and service providers who depend on this sector. 84 While still in the exploration phase, AGA has already defined an exclusion zone and cleared up to 3000 miners from the site. More displacement will come the closer the company gets to production. Banro has already closed several of the major artisanal mining sites around its operations. At the same time, no sufficient alternative livelihood opportunities have been created in both areas. The companies rarely provide formal employment opportunities as an alternative. They have certain initiatives in place in order to develop alternative livelihood opportunities for miners. However, such programmes have reached out to a few hundred people at their best. Some of them are for delivering gravel or bricks to corporate constructions and the sustainability of these business models for once the operations are running are in doubt. 85 The perceived failure by the companies to bring about such alternatives and social benefits has fuelled distrust among locals. A community leader from Mongbwalu was recently quoted saying: 'If they don't provide alternatives, there will be a rebellion for sure . . . these miners are ex-fighters and have access to weapons.' 86 Similar voices can be heard in the still existing artisanal mining sites in Luhwindja from artisanal miners who fear being relocated again by Banro. 87 Kapelus et al. conclude a review of AGA in Mongbwalu by saying that there was little doubt that a mining operation such as AGA's proposed Ituri mine can provide crucial employment and income opportunities to the local population, the region and the country as a whole, as long as legitimate decision-making processes are established and supported at various levels of government. 88 The present article does not reject this claim entirely. However, although so far the project has provided income for certain people at various levels of government, this is very different from providing income and employment opportunities for the local population. Conflicts between industrial mining companies and hundreds of thousands of artisanal miners in the Eastern provinces (and other parts of the DRC) cannot be washed away as a necessary transitional phenomenon from a war to a peace economy. Competing development models might be at stake here. So far, the promotion of foreign direct investment, and also the formalisation of the sector supported by donors have been largely at the expense of artisanal mining. 
89 At the very least, the limitations of top -down models that focus on the state and big investment are apparent in eastern DRC and will need to be balanced with local, livelihood-focused and decentralised strategies; including artisanal mining. Such rethinking therefore needs to go beyond the oversimplified narratives of state-focused development and conflict minerals. 90 Positive peace requires opportunities for people to make a living, and artisanal mining is one, together with the promotion of 'inclusive forms of resource ownership, control and access'. 91 In sum, much of the 'peacebuilding' that Banro and AGA undertake in eastern DRC revolves around attempts to mitigate new conflicts created by their very presence. Industrial mining comes with its own problems. While it is important to make companies reduce these, more caution should be applied in promoting companies as peacebuilders outright. Some more reflexivity regarding the 'unintended' consequences of a business for peace agenda is apt, too. Apart from not overestimating most companies' actual contributions to peacebuilding, more awareness is required of unintended effects, such as contributing to crowding small-scale and artisanal mining out of local economies, that will hardly lead to sustainable peace. Moreover, a final cautious note should be sounded. Banro and AGA have become part of an, arguably, highly selective state-building project. Both companies provide the government with substantial revenue. AGA pays $125,000 per month for its exploration activities. 92 Banro's payments went up with the start of exploitation and the company paid over 12 million dollars to the government in 2012, 93 including taxes and paying state agents, such as police and military, on their payroll. In addition, both companies sponsor state capacity building activities and social service delivery in local communities. That could be considered problematic in a context in which the government is one of the warring parties, and in which state security forces do not behave much differently from rebel forces. In addition to the payments above, both companies provide state security forces with privileged access to mining areas alongside the various economic opportunities they provide, such as smuggling of minerals and informal taxation. They sometimes also provide ideational support in terms of recreating the idea of the state as the provider of public goods, even though very little is done in that regard by the current government. The following quote from an interview in Mongbwalu illustrates this point: However, when a partner provides these services, it is because there is an agreement with the government. Q: What makes you believe that these services are provided by AGK because there is an agreement with the government? A: It's because a partner cannot conduct any activities in this country without the prior consent of the government. That explains we believe that activities carried by AGK are carried on behalf of the state. 94 Together with the other points made above, this quote illustrates that companies committed to corporate social responsibility materially and ideationally support a regime that is legal but has limited societal legitimacy. The results of the 2011 presidential elections in the DRC were highly contested and allegations of vote rigging severe and the regime has become more clearly autocratic. Moreover, elsewhere I have argued that companies have become part of a politics of 'indirect discharge'. 
95 In order to recapture rents from resource extraction, the second Kabila government centralises access to concessions by giving them to multinational companies. Amidst competition over 'l'Afrique utile', the Kabila regime stages its claim to sovereign control through a hybrid network/coalition with foreign investors and local power holders. At the same time, it uses the outsourcing of policing and social service provisions to companies in order to consolidate its rule. 96 It is thus possible that the very socially responsible activities promoted by proponents of embedded liberalism might turn out to have counterintuitive effects. Conclusion This paper has critically reviewed the business for peace agenda in relation to eastern DRC. Starting from expectations formulated in the related literature and policies, it examined the extent to which multinational mining companies contribute to peacemaking and peacebuilding in eastern DRC. It has demonstrated that there is limited evidence for corporate peacemaking and active peacebuilding in broader society in both cases. While recognising the complicated contexts in which companies operate, their activities remain very limited compared to the expectations inherent in the business for peace agenda. In addition, even 'CSR firms' produce stable working conditions by hybrid means: while implementing aspects of ethical business standards, such as the VPs, other business practices continue and keep having problematic effects on local peace and security. This is not because they are simply not 'CSR firms', or because they were not in the past. While there is variation in the scope and quality of peacemaking and peacebuilding efforts by so-called ethical companies, it is important to increase awareness and better understand that even MNCs that strongly commit to ethical standards not only play a very limited role as active peacebuilders, but also remain entangled with violence and practices that create insecurity. The business for peace agenda has generated interesting research and a new market for consultants and NGOs. A first implication of the findings of this paper is that within this growing field, more nuanced, more modest and more critical evaluation of large companies' role in peacemaking and peacebuilding is required. Companies are neither socially responsible (and hence 'good') nor 'bad'. Instead, companies' violence-reducing and community engagement strategies need to be investigated in conjunction with their other everyday practices. Ethical business strategies remain entangled with the physical violence implied in traditional 'fortress' protection as well as with clientele strategies used by the same organisation in order to provide stable working conditions. 97 This article has also demonstrated that we need a better understanding of the limitations and unintended consequences of the liberal 'business for peace' agenda and the promotion of industrial mining companies in (post-)conflict settings such as eastern DRC. It is a call for more reflexivity concerning the implications of the business for peace research agenda, arguing that it needs to go beyond its pragmatic stance of identifying best practice in order to incite virtuous learning cycles among peers. 
While that might contribute to socialising businesses into non-violent behaviour and contributions to peacebuilding and public security provision, it also produces companies as legitimate authorities, despite their limitations as peacebuilders and their ambiguous effects on peace and conflict. This carries the danger of promoting large MNCs at the expense of smaller businesses and artisanal mining. The latter begs an even bigger question: how do you build peace without building it from the bottom -up and based on local livelihood opportunities?
2018-12-18T00:17:54.052Z
2014-05-04T00:00:00.000
{ "year": 2014, "sha1": "6044ee67f9df09ddf51bbfa15368028ed940e79d", "oa_license": "CCBYNC", "oa_url": "https://epub.uni-bayreuth.de/5836/1/H%C3%B6nke%202014%20Business%20for%20Peace_AMpre-proofs.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "77946e155f8efba010474ac93013686c3865076f", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
14589304
pes2o/s2orc
v3-fos-license
The Flux Tube Model: Applications, Tests, and Extensions A review and critique of the Isgur-Paton flux tube model of hadronic physics is presented. This entails a detailed comparison with recent lattice gauge theory results which exposes both the successes and shortcomings of the model. Applications to hybrid masses, meson and hybrid meson decays, hadronic charge radii, the spin-orbit force, baryonic hybrids, and hybrid photocouplings are also discussed. Finally, I comment on the issue of adiabatic surface crossing which appears in both the flux tube model and lattice studies. FLUX TUBES The impending upgrade of CEBAF at Jefferson Lab and the advent of a new experimental facility, Hall D, devoted in large part to the exploration of the gluonic excitations of QCD make it a propitious time to review the flux tube model. The flux tube model (FTM) of Isgur and Paton is now nearly 20 years old [1] and much has been learned in the meantime. This note shall review the foundations and predictions of the FTM and compare it, where possible, to lattice gauge or other theoretical tests. Extensions of the original ideas and new applications are also treated. The flux tube model is an attempt to form a tractable model of low energy QCD which is based on longstanding ideas of string-like gluonic degrees of freedom. The central idea is that gluons should rearrange themselves into flux tubes which, in the heavy quark limit, adjust their configuration instantaneously in response to quark motion. Thus quarks are constrained to move on adiabatic gluonic energy surfaces. The lowest such surface is the familiar 'Coulomb+linear' potential of the constituent quark model and lattice gauge theory. States based on this surface thus form the 'conventional' mesons of the quark model. Higher adiabatic surfaces represent gluonic excitations and mesons built on these surfaces correspond to 'hybrids' [2]. Thus the flux tube model is a simple and intuitive way to extend the nonrelativistic constituent quark model to include gluonic degrees of freedom. The model was originally motivated through a series of truncations on the Euclidean time strong coupling QCD Hamiltonian with a lattice regulator (Hamiltonian lattice gauge theory). The degrees of freedom are quarks fields on lattice sites and gluonic 'link variables' U ℓ = exp(−iagA µ (x)) where ℓ represents the link (x,μ). In the strong coupling limit the Hamiltonian is given by (1) where g is the strong coupling, a is the lattice spacing, and n is a lattice site. The velocity variablesU ℓ have been replaced by electric field operators E ℓ . Gauge invariant pure glue states are formed by closed (possibly multiply connected) loops of link operators. The commutation relation [E a ,U ] = T a U then implies that the energy of these states is simply the sum of the quadratic colour charges of each link: where C 2 = 4/3 for a field in the 3 or3 representations, 10/3 for 6 or6, etc. The presence of quarks permits gauge invariant states with open flux strings which terminate on quark colour sources or sinks. Perturbations to these states are provided by subleading quark hopping and magnetic terms. The former allow flux tube breaking via quark pair production or quark motion. The latter can change link colour representations, cause link hopping, or change loop topology. 
Isgur and Paton simplify the dynamics by (i) assuming an adiabatic separation of quark and gluon degrees of freedom (ii) neglecting 'topological mixing' such as loop breaking or loop Euler number changing transitions (iii) working in the nonrelativistic limit. The model is meant to be applied to the 'intermediate regime' where 1/a ∼ √ b. They then model link variable dynamics in terms of spinless colourless particles ('beads') of mass ba where b is the string tension in the static quark potential. Finally these particles are assumed to interact via a linear potential and perform small oscillations about their resting positions. The result is a simple discrete string model for glue described by the Hamiltonian: where y n is the transverse displacement of the nth of N string masses , p n is its momentum, b 0 is a bare string tension, and R = (N + 1)a is the separation between the static quarks. This Hamiltonian may be diagonalised in the usual way yielding where α † nλ creates a phonon in the nth mode with polarization λ . Notice that the string tension has been renormalized by the first term in brackets. The last term in brackets is the Lüscher term of string phenomenology [3]. The mode energies are given by ω n = 2/a sin[πn/2(N + 1)]. Thus ω 1 → π/R as N → ∞ is the splitting between the ground state Coulomb+linear potential and the first gluonic excitation surface at long range. The energy of a given phonon state is approximately where n m± is the number of left (right) handed phonons in the mth mode. HYBRIDS Hybrid mesons are constructed by specifying the gluonic states via phonon operators and combining these with quark operators with a Wigner rotation matrix: The projection of the total angular momentum on the qq axis is denoted by Λ = ∑ m (n m+ − n m− ). The parity and charge parity of these states are given by Table I. Underlined quantum numbers represent quantum number exotic hybrids (mesons with quantum numbers not available to a qq state). Isgur and Paton obtained hybrid meson masses by solving a model Hamiltonian of quark motion on the single-phonon excited surface: The interaction term incorporates several important additional assumptions. Namely the π/r phonon splitting is softened at short range. The parameter f which appears in the softening function was estimated to be roughly unity [1]. Furthermore, it was assumed that the attractive Coulomb (1/r) potential remains valid for hybrid mesons. Note that this is at odds with the expected potential of perturbative one gluon exchange (the colour factor is that appropriate to gluon exchange between quarks in a colour octet state). This is an important assumption which will be examined in more detail below. Finally the quark angular momentum operator is now complicated by the presence of gluonic/string degrees of freedom. One may write where L (L S ) is the total (string) angular momentum. Note that L S ⊥ mixes adiabatic surfaces. Using L S || = Λr and neglecting surface mixing yields the centrifugal term of Eq. 11. This additional assumption will also be examined in the next section. The hybrid masses obtained by solving Eq. 11 are labelled E IP in Table II. Isgur and Paton also estimated the effects of adiabatic surface mixing and used these as their final mass estimates (labelled E ′ IP ). The column labelled KW is explained in the next section. Finally, Table I implies that hybrids with quantum numbers 2 ±∓ ,1 ±∓ ,0 ±∓ ,1 ±± are all degenerate at this order in the FTM. 
The next section will present a survey of possible tests of this 'zeroth order' Flux Tube Model. Small Oscillation and Adiabatic Approximations The small oscillation approximation may be tested by considering a model of transverse beads interacting via a linear potential. Numerically solving such a Hamiltonian [5] reveals that the small oscillation approximation is accurate for long strings but overestimates gluonic energies by an increasing amount as the interquark distance shrinks. Typical energy differences are order 100 MeV at 1 fm. Similarly, the adiabatic approximation can be tested by numerically solving the coupled quark-bead system. One finds [5] that the adiabatic approximation underestimates true energies by roughly 100 MeV, with slow improvement as the quarks get very massive. It thus appears that these approximation errors tend to cancel each other, leaving the IP predictions intact. Gluonic Surfaces Recent advances in computational speed and algorithms make it possible to test some of the assumptions of the FTM against the predictions of lattice gauge theory. Figure 1 shows the assumed ground state Coulomb+linear (solid line) and first excited state (dashed line) potentials of Isgur and Paton. These are compared to lattice Wilson loop computations of the same interactions (points) [6]. It is apparent that the IP potential overestimates the strength of the Coulomb potential and underestimates the string tension. Of course both of these model parameters are obtained by fitting the meson spectrum and therefore include effects which are not in the Wilson loop. Should the Coulomb Potential Appear? More troubling is the first excited potential (bursts) which shows signs of saturating (or perhaps turning repulsive) at small distances, in disagreement with the assumption of Isgur and Paton. Indeed, simply omitting the attractive Coulomb portion of the IP hybrid potential dramatically improves agreement of the model with the lattice. This disagreement represents something of a conundrum because Isgur and Paton had a good physical reason to employ the colour singlet qq Coulomb interaction in their model: if one assumes the repulsive short range interaction of Eq. 12, it becomes energetically favourable for the system to emit a gluon once the interquark separation becomes small enough. This gluon combines with the 'valence' gluon of the hybrid to form a scalar glueball, thereby changing the quark colour configuration to that of a singlet, and the Coulomb potential to the attractive form of Eq. 11. The point at which this should happen is indicated by the arrow in Fig. 2. It is clear that the lattice sees no such behaviour. This may be simply because the minimal relative momentum permitted on the lattices employed in the study were too coarse to permit the decay. A physical reason for the suppression of this coupling is also possible. For example, if one considers the hybrid to be dominated by a Fock space component consisting of a constituent quark, antiquark, and gluon, then the postulated transition occurs through gluon emission from either the valence gluon or a valence quark. In the former case the coupling to a glueball is zero due to the colour overlap while in the latter case the coupling is suppressed by the large (infinite) quark mass. Thus one expects the coupling between the surfaces to be very small, which implies that the surface mixing will not be seen unless lattices with exceptionally large temporal extents are employed. 
Which short distance behaviour should the model use? It seems clear that surface mixing which changes Fock sectors should not be used in a potential model. Rather, such mixing should be incorporated by explicitly including the appropriate transition operator in the formalism. Thus, for example, flux tube breaking is expected to eliminate the linear potential for distances beyond roughly one fermi when light quarks are present. However, such a truncated linear potential should not be used to construct mesons, rather one should employ the linear potential to form a basis of bound states and then include mixing to four quark states in an appropriate form. This approach avoids the unpleasant situation of having a mesonic ionization energy. Similarly, the hybrid potential should not include the mixing term since this mixes hybrid with hybrid+glueball Fock states. Higher Surfaces The FTM predicts an entire tower of hybrid surfaces and it is interesting to examine their form in light of lattice data. Juge, Kuti, and Morningstar have carried out a detailed analysis of the relationship of the hybrid surfaces of Fig. 1 to the string excitations of Eq. 5 [7]. They have found that surface excitations only have π/r splittings for very large source separation (roughly 4 fermi or greater). This is shown in Fig. 3 where the dashed lines are the predicted Nπ/r energy differences of the FTM. This is something of a surprise since one expects a phonon-like excitation spectrum on general grounds. It appears that QCD strings are complex objects at intermediate distance scales. Finally, the figure shows a cross over region at about 1 fermi where the surfaces move from a perturbative behaviour (characterized by the 'gluelump' spectrum) to a more string-like behaviour. Hybrid Masses Revisited In light of these issues it is appropriate to revisit the original hybrid mass estimates of Table I. A number of variants of the IP calculation were made in Ref. [4]. Before presenting some of these, note that there is additional ambiguity in the quark angular momentum term of Eq. 11, namely it is also possible to set L qq = L − J g where J g is the total gluonic angular momentum. Squaring then yields It has been found that setting J g = 2 yields good agreement with lattice data [20](this was also found in a constituent quark model [21]). Notice that this is not numerically the same as the centrifugal term in Eq. 11. Employing the hybrid lattice potential of Fig. 1 and Eq. 14 yields the hybrid masses labelled E KW in Table II. It is seen that these can differ by up to 200 MeV from the IP predictions of the second column. The similarity to the third column is a fluke since adiabatic surface mixing is not included in this computation. Merlin and Paton have noted that the majority of adiabatic mixing effects may be absorbed into the static potential by including the moment of inertia of the string in the centrifugal term (see the discussion below): and with a more important effect which modifies the strength of the π/r splitting to be larger as r becomes larger than m q /b. Thus the final revisited prediction for a light hybrid is roughly 2.1 GeV. THE IKP DECAY MODEL Shortly after its introduction, the flux tube model of meson structure was extended by Isgur, Kokoski, and Paton to provide a description of meson [8] and hybrid [9] decays. The transition operator was envisioned as arising due to the quark hopping term of the lattice QCD Hamiltonian. 
The lowest terms in the expansion of this operator are If one assumes a smooth string then the first term dominates as the lattice spacing gets small and one has a 3 S 1 strong decay operator. Alternatively, if the string is rough then the first term averages to zero upon summing over all local string orientations and the second term dominates, yielding a 3 P 0 strong decay operator. The authors of Ref. [8,9] assume the second scenario since it is supported by experiment [10]. Flux tube degrees of freedom were incorporated by assuming factorization: The first matrix element on the right hand side is a typical 3 P 0 mesonic decay overlap. The second represents the overlap of the gluonic/flux tube degrees of freedom. Assuming that the quark pair creation occurs at a transverse distance y ⊥ from the interquark axis of the parent meson yields the results for hybrid decay. The factor f is a computable constant of order unity. The extra factor of y ⊥ in the hybrid decay vertex forces the decay to pairs of identical S-wave mesons to be zero. This is the origin of the famous 'S+D' selection rule in this model (it occurs in other models for different reasons). Unfortunately, it will be some time before lattice gauge theory has progressed to the point where models such as these can be thoroughly tested. However, preliminary computations reveal that a substantial closed flavour decay mode (such as bbg → χ b η) may exist [11]. In the meantime we hope that experiment will provide some clues. An alternative decay model is discussed in the next section. EXTENSIONS The hadronic decay model of the proceeding section is its most well known extension. However, the model has been extended in several other directions as well; some of these are described here. Charge Radii and Hybrid Decays Several years ago Isgur pointed out that the energy carried by the flux tube will change several features of the naive quark model [12] (see also [16]). For example, zero point oscillation of the flux tube about the interquark axis will induce transverse fluctuations in the quark positions, something which is not present when the flux tube is treated as potential. The additional fluctuations have the effect of increasing the charge radius of a heavy-light meson (qQ): (20) where the second term in the bracket is the new contribution. He estimated this to give rise to a 50% increase in charge radii of light quark hadrons. This observation has subsequently been expanded upon by Close and Dudek who observe that radiative decays of hybrid mesons may proceed because the recoil of the radiating quark affects the string degrees of freedom giving a nonzero overlap of the flux tube wavefunction with the ground state flux tube wavefunctions of ordinary mesons. Preliminary computations with this mechanism have appeared [13]. A similar scheme involving the emission of pointlike pions may be used to compute hybrid decays to final states such as πρ [14]. The most striking result here is that this decay mechanism evades the 'S+D' suppression discussed above. Adiabatic Surface Mixing Merlin and Paton examined the effects of adiabatic surface mixing on the leading order FTM by considering the complete quark-bead system ab initio [15]. 
Although the effects can be quite complicated, with mixing to all surfaces possible, they found that the majority of the effects can be absorbed in a redefinition of the hybrid potential by including the rigid body moment of inertia of the string in the centrifugal term and by modifying the strength of the π/r term (see Eq. 15). An explicit computation revealed found mass shifts of order -100 MeV for conventional S-wave light quark mesons and +200 MeV for light quark hybrids. The resulting mass splittings are quoted in column three of Table I and were used by Isgur and Paton to form their final estimates of the lowest lying hybrid masses. Spin Orbit Forces I Merlin and Paton also examined spin orbit forces in the context of the FTM [16]. The idea was to map the operators of the leading spin orbit term in the heavy quark expansion of QCD, namely V SO = g/2m σ σ σ · B, onto FTM degrees of freedom (phonons). Merlin and Paton did this by identifying the magnetic field with the lattice operator Since the plaquette operator moves flux links in a fixed topological sector, it is natural to identify the magnetic field with the bead kinetic energy. Doing so then allows one to write V SO in terms of phonons. Explicit computations revealed that spin orbit splittings due to V SO are small and that the majority of the splittings arise from Thomas precession, V T h = 1 4 (r r r q ×ṙ r r q ) · σ σ σ. This is modelled by including the effects of phonons on the quark coordinate (see the discussion of the charge radii above). The mass splittings for light hybrids are listed in Table III. One sees that the lowest member of the octet of light hybrids is predicted to be the 2 +− while the heaviest is the 0 +− . This appears to be in conflict with lattice gauge theory which finds that the lightest hybrids are 1 −+ . [16]. 20 20 40 140 280 0 0 Spin Orbit Forces II Spin-dependent forces in the FTM were also taken up by the authors of Ref. [17] in an attempt to resolve a conundrum in the spin-orbit sector of the quark interaction. The issue is that many models of hadrons prefer a vector Dirac structure of confinement, rather than the phenomenologically accepted Dirac scalar interaction. This may be studied by using the heavy quark Foldy-Wouthyusen version of the QCD Hamiltonian in Coulomb gauge. The resulting O(1/m) and O(1/m 2 ) operators, H 1 and H 2 , depend on the chromoelectric and magnetic fields and hence are nonperturbative. In a similar approach to that of Merlin and Paton, the authors of Ref. [17] chose to study these operators by mapping the chromofields to phonons. In this case the mapping was based on the idea that the electric field counts string length and therefore should be mapped as E a λ (x = na) = √ b 0 a 2 y a λ (n + 1) − y a λ (n) where a is a colour index and λ is a polarization index. Imposing the canonical commutation relations gives the magnetic field as a derivative with respect to transverse displacement. Finally both expressions can be mapped to phonon degrees of freedom with the standard Fourier transform. Notice that this approach has the flux tube beads carrying colour charge. The form of the effective spin-dependent interactions were then evaluated by inserting these field operators into H 1 and H 2 . It was found that an effective scalar spin orbit interaction could indeed arise in the flux tube picture of hadronic structure even though the static confining interaction was vector. 
This was the case because nonperturbative mixing of mesons with hybrids contribute to spin-dependent interactions and can change the naive expectations based on the nonrelativistic reduction of a simple interaction kernel. Vector Decay Model The same chromofield-phonon mapping which was used in the study of the spin orbit interaction [17] was used to examine hybrid decays in Refs. [18]. In this case the operator of interest is simply V = −g ψα α α · Aψ. The resulting decay vertex is given by where theê(r) are polarization vectors orthogonal tor. The integral is defined along the QQ axis only. This model also has an 'S+D' decay rule, however in contrast to the IKP decay model this is due to a node along the interquark axis (the cos(πξ )) which causes the selection rule, rather than a node perpendicular to the interquark axis. The phenomenology of this model has been presented in the second of Ref. [18]. One finds that it is similar to a 3 S 1 model of meson decays in that D wave decay modes tend to be suppressed. Recent comparison with experiment have proven surprisingly accurate and lend support to hybrid interpretations for nonexotic 2 −+ (2003) and 1 ++ (2096) states [19]. Hybrid Baryons Capstick and Page have made a detailed study of baryon flux tube dynamics [22]. This is a technically challenging problem due to the multitude of vibrational and rotational modes which are available to a Y-shaped string system. However, they have found that the problem simplifies considerably because the string junction decouples to good accuracy from the rest of the bead motion. Thus a hybrid baryon can be approximated by three quarks coupled via linear potentials to a massive 'junction bead'. The dynamics of this system are completely specified by the FTM and variational calculations indicate that the lowest lying hybrids are J P = 1 2 + and 3 2 + states at approximately 1870 MeV. Happily, lattice investigations of the static baryon interaction have begun [23]. The chief point of interest is whether the expected flux tubes form into a 'Y' shape or a '∆' shape. This may be addressed by carefully examining the baryonic energy in a variety of quark configurations. Current results are mixed, with some groups claiming support for the two-body hypothesis [24] and some for the three-body hypothesis [25]. Finally, a strong operator dependence in the flux tube profiles has been observed [26], which clearly needs to be settled before definitive conclusions can be reached. CONCLUSIONS The flux tube model is now nearly 20 years old. In this time it has been applied to an increasing array of problems and extended in several directions, chiefly by taking seriously the idea that dynamical string-like gluonic degrees of freedom are important in the low lying mesons. A number of additional extensions are: (i) glueballs (glue loops). A preliminary study has been made in the original paper [1], however much remains to be done here. Comparison with lattice gauge theory will provide a crucial test. (ii) FT effects in baryons. The charge radii ideas of Isgur, Close, and Dudek have immediate impact on baryons and should be studied. (iii) adiabatic surfaces. It is worthwhile to attempt to leverage new precise lattice data on the gluelump spectrum and the hybrid adiabatic surfaces to improve the FTM in detail. (iv) the FTM may allow one to improve the semiclassical fragmentation formalism [1]. 
(v) long range spin-spin and spin-orbit forces should be re-examined in an attempt to pin down this difficult aspect of nonperturbative QCD. In summary, the FTM provides a compelling picture of strong QCD dynamics; however, it is a picture only! We have seen that the FTM correctly describes the level orderings and, perhaps, splittings of gluonic adiabatic energy surfaces at large distances (perhaps as large as 4 fm). The model fails to describe the spectrum at small interquark distances (although, of course, it can be amended). Furthermore, the original IP model of hybrids is likely to be incorrect in many details, although its phenomenology may be surprisingly robust. We have seen that it is possible to extend the model in many different ways. Of course these extensions rely on detailed aspects of the FTM which are untested at best. It appears likely, for example, that the spin orbit splitting of Ref. [16] do not agree with lattice data. In the end, the utility of a tractable and appealing model should not be underestimated.
2014-10-01T00:00:00.000Z
2003-11-25T00:00:00.000
{ "year": 2003, "sha1": "5b055a9fdfeb6556ab7c040b8c2cdc394e3fff87", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2b27599fc26f893c4d474bbb2a4bd3491d98ef7f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248379439
pes2o/s2orc
v3-fos-license
In silico analysis of bacterial metabolism of glutamate and GABA in the gut in a rat model of obesity and type 2 diabetes Dysbiosis of gut microbiota has adverse effects on host health. This study aimed to determine the effects of changes of faecal microbiota in obese and diabetic rats on the imputed production of enzymes involved in the metabolism of glutamate, gamma-aminobutyric acid (GABA), and succinate. The levels of glutamate decarboxylase, GABA transaminase, succinate-semialdehyde dehydrogenase, and methylisocitrate lyase were reduced or absent in diabetic rats compared with controls and obese rats. Glutamate decarboxylase (GAD) was significantly reduced in obese rats compared with control rats, while the other enzymes were unaltered; different bacterial taxa are suggested to be involved. Levels of bacterial enzymes were inversely correlated with the blood glucose level. These findings suggest that the absence of GABA and reduced succinate metabolism from gut microbiota contribute to the diabetic state in rats. Gut microbiota play a significant role in the maintenance of animal health [1], and dysbiosis contributes to the development of obesity and several metabolic diseases, particularly type 2 diabetes [2][3][4]. Changes in gut bacterial communities result in changes in bacterial metabolic profile, mainly recognised through in silico analysis, and thus indicate the potential for altered interaction with the host. Examples include changes in the production of short-chain fatty acids along with increases in inflammatory products [3][4][5]. One consequence of these changes is postulated to be reduced production of mucin, and this is significantly associated with the onset of obesity and diabetes [2,6,7]. Neurochemicals such as serotonin, glutamate, and gamma-aminobutyric acid (GABA) [8] are other products of some interest and may have direct effects on host physiology. A previous study indicated that a reduction in the specific activity of the enzyme glutamate decarboxylase (GAD), which is responsible for the conversion of glutamate to GABA, is associated with type 1 diabetes [9], while treatment of nonobese diabetic (NOD) mice with a GAD-derived peptide helped to delay and reduce the incidence of cyclophosphamide-accelerated diabetes [10]. GABA and glutamate are directly linked by interconversion with succinate, a key component of intermediary metabolism (Fig. 1a). Development of software for the prediction of metabolic potential, such as Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt), and Kyoto Encyclopedia of Genes and Genomes (KEGG) orthologs (KOs) provide a theoretical picture of metagenome function from 16S rRNA metagenomics [11,12]. Thus, this study aimed to make predictions about the production of enzymes that form GABA and succinate by the gut bacteria of obese and diabetic rats. We previously described a rat model of obesity and diabetes [13] produced by feeding adult male Wistar rats a high-fat diet (HFD; 22% fat; #821424, SDS, UK) for 12 weeks in combination with an intraperitoneal (I/P) injection of citrate buffer vehicle (pH 4.4; Obese group) or the HFD for 12 weeks in combination with a single dose of streptozotocin (STZ; 30 mg kg-1 injected IP at week 4; Diabetic group). Two control groups were included; both were fed a normal diet (RM1, Rat and Mouse No. 1 Maintenance Diet, SDS, UK) for 12 weeks and injected IP at week 4 with either citrate buffer vehicle (pH 4.4; Control) or STZ (STZ alone). 
Animal weights and blood glucose were measured weekly. Glucose levels were measured using a glucometer (Accu-Chek Aviva System, Roche Diagnostics, Indianapolis, IN, USA). After 12 weeks, animals were housed in clean cages individually overnight, and faecal pellets were collected. Bacterial genomic DNA was isolated within one day of collection and immediately stored at −80°C as described previously [13]. Taxonomic profiles were generated for each group by 16S rRNA gene sequencing, and molecular functions were predicted from them by PICRUSt [11]; the detailed methodology for the *Corresponding author. Khalid S Ibrahim (E-mail: khalid.ibrahim@uoz.edu.krd) Note Bioscience of Microbiota, Food and Health Vol. 41 (4), 195-199, 2022 This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives (by-nc-nd) License. (CC-BY-NC-ND 4.0: https://creativecommons.org/licenses/by-nc-nd/4.0/) doi: 10.12938/bmfh.2021-075 ©2022 BMFH Press bioinformatic analysis has been described previously [13]. The SRA accession number for the sequencing data is "SRP152214". Here, we extended the metabolic analysis to consider the predictions for glutamate metabolic function and thus suggest mechanisms whereby changes of these metabolites could contribute to health or adverse effects. In addition, we analysed the predicted contribution of bacterial taxa to each individual activity with the metagenomic contributions script (https://picrust.github. io/picrust/scripts/metagenome_contributions.html). Statistical analysis of relative abundances, presented as mean differences within and between groups, was performed using GraphPad Prism 8.4.2 by controlling the False Discovery Rate using the method of Benjamini and Hochberg for corrected multiple comparisons after One-way ANOVA. Data distribution was tested by the Anderson-Darling, D'Agostino-Pearson, Shapiro-Wilk, and Kolmogorov-Smirnov (alpha=0.05) tests. In addition, Spearman's method was used for nonparametric correlation analysis. Significance was considered to be established when p≤0.05. Principal Coordinates Analysis (PCoA) analysis was conducted with RStudio Version 1.4.1717 (ggplot2 version 3.3.5,) and R for Windows Version 3.4.1 (https://cran.r-project.org/bin/windows/base/old/3.4.1/) to compare bacterial communities of the four groups. Here we extended our earlier work [13] to focus on the predictions for major bacterial functional activities for glutamate and GABA metabolism. As described previously [13], rats fed a HFD and administered a low dose of STZ displayed hyperglycaemia and insulin insensitivity [13], a model of type 2 diabetes, while rats fed a HFD remained normoglycaemic but exhibited significant body weight gain relative to controls. An analysis of 4 individual orthologous enzymes of this pathway in faecal samples collected from each rat model (Fig. 1a) was carried out using KEGG pathway maps as a reference. Statistical analysis of the enzymes showed them to be significantly higher in the Control, STZ-alone and Obese groups compared with the Diabetic group (Fig. 1b) Only one of these enzymes, glutamate decarboxylase (K01580), was significantly higher in the Control and STZ-alone groups compared with the Obese group and was absent in the Diabetic group, and this enzyme was used for biosynthesis of GABA [8]. Regarding other predictions, bacterial metabolism for the three enzymes involved in succinate synthesis is shown in Fig. 1b. 
These enzymes catalyse the conversion of GABA to succinic semialdehyde (SSA), bypassing the tricarboxylic acid (TCA) cycle and avoiding α-ketoglutarate dehydrogenase, and all three of them were significantly lower in the Diabetic group compared with the Control and STZ-alone groups. Many genera are predicted to contribute to these enzyme activities (Fig. 1b). The predicted bacterial taxa involved in the production of GAD activity suggest that the S24-7 family made the predominant contribution in the Control and STZ-alone groups and was greatly reduced in the Obese group and absent in the Diabetic group. In the Obese group, both Ruminococcus flavefaciens and unclassified species from the RF16 family contributed to the activity, and they were not seen in the Control or STZ-alone group. A previous study reported that both Gram-positive and Gram-negative bacteria have GAD genes [8]. Recently, Medvecky et al. [14] reported that glutamate decarboxylase is commonly encoded in representative members of Bacteroidetes. An animal study found that within the phylum Bacteroidetes, the S24-7 family, now renamed Muribaculaceae, was associated with glutamate metabolism and was predominant in control mice, and its abundance was reduced after treatment with vancomycin [15], which is similar to our data in the Control and STZ-alone groups, though it was significantly reduced in the Obese group and absent in the Diabetic group. The differences in the bacterial communities for GAD between animals or groups were also apparent when data were analysed by PCoA (Fig. 1c), and they revealed spatial separations between the groups and accounted for 61.5% of the variation in PC1 and PC2. Both the Control and STZ-alone groups were close to each other and separated from both the Obese and Diabetic groups. In addition, Fig. 1b shows various bacterial taxa that are associated with the three enzymes and other enzymes involved in succinate metabolism. Potential enzyme activity did not differ in the Obese group compared with the control groups but was reduced (K07250 and K00135) or absent (K03417) in the Diabetic group. In the Control group, the most abundant contribution was from genera in the S24-7 family, while in the Obese group, the most abundant bacteria were Prevotella copri, Bacteroides, [Prevotella], Dorea, and notably unclassified genera from the family Ruminococcaceae and order Clostridiales. In the Diabetic group, unclassified genera from both the family Ruminococcaceae and order Clostridiales contributed to the activity of succinatesemialdehyde dehydrogenase (EC: 1.2.1.16). In agreement with previous studies, K07250 and K00135 were identified in different bacterial taxa [16,17], and prpB was found in bacteria such as Escherichia coli [18] and used for the catabolism of propionate [19]. In the present study, the predicted levels of these bacterial enzymes in the synthesis of succinate were reduced in the Diabetic group compared with the other 3 groups. While the levels of the rats in the Obese group were the same level as those in the Controls group, changes in the taxa due to the HFD meant that different bacteria were involved in the production of succinate. Furthermore, the data from the Obese and Diabetic groups highlighted the significant reduction in S24-7 and associated increase of several genera in the phylum Firmicutes involved in succinate formation. 
In addition, when the Obese group was compared with both the Control and STZ-alone groups, significant enrichment was observed for the genus Bacteroides, while Turicibacter was significantly higher in the Diabetic group compared with the other three groups. It is generally accepted that the succinate produced by primary fermenters is the main precursor for propionate biosynthesis [20] and thus decreased production in type 2 diabetes (T2D) rats may be expected. In fact, a study employing genomes assembled from metagenomes of a large panel of human, mouse, koala, and guinea pig hosts [21] found a positive link between the presence of S24-7 and the production of succinate, propionate, and acetate [21]. Cultures of Bacteroides spp. are able to digest a variety of polysaccharides and are described as succinate-producing bacteria [22] under specific growth conditions, such as high CO 2 [20,23]. In addition, Turicibacter spp. were found to be significantly higher in type-2 diabetic patients with chronic kidney disease [24]. In our experiment a high abundance of P. copri was predicted to be involved in the expression of enzymes that synthesise succinate in the Obese rats. There is evidence indicating that colonizing C57BL/6 mice with succinate-producing bacteria, P. copri, provided metabolic benefits that help to improve glucose homeostasis through increased intestinal gluconeogenesis [25] as well as enhanced bile acid metabolism and farnesoid X receptor (FXR) signalling [26]. Indeed, this improvement may be mediated by action at succinate receptor 1 (Sucnr1, or GPR91) [25,27], which is present in the intestine and liver [28]. A recent study demonstrated that Sucnr1 promotes an anti-inflammatory phenotype in macrophages, and its expression was decreased in obese subjects [29]. A correlation analysis conducted by Spearman's method to analyse the correlation of GABA and succinate biosynthesis with blood glucose (mmol/L) based on the relative abundances of the three enzymes involved in succinate synthesis indicated a significant inverse correlation (Fig. 1d). Furthermore, a significant inverse correlation was also found between the relative abundance of GAD and animal weight (Fig. 1e). These findings suggest that these enzymes enhance the level of insulin and thus reduce the plasma glucose levels. It has been confirmed that oral administration of GABA to obese and type 2 diabetic mice doi: 10.12938/bmfh.2021-075 ©2022 BMFH Press results in decreased fasting blood glucose and improved glucose tolerance and insulin sensitivity [30]. Furthermore, a study of wild-type mice reported that dietary succinate also contributes to improved glucose and insulin tolerance [25]. This suggests that reduction or absence of these bacterial enzymes may have directly impaired the Obese and Diabetic rats in our experiment. Bacterial expression of these enzymes has beneficial effects on health. A possible mechanism by which bacterial GAD contributes to health is by GABA-stimulated expression of MUC1 in colonic mucosal epithelia, which produce mucin that acts as a protective intestinal barrier [31,32], preventing the adhesion of harmful microorganisms to mucosal surfaces [33]. In addition, GABA has a role in quorum-sensing signalling that initiates cell-cell communication between bacterial species [34]. 
This system of co-operation maintains the commensal bacterial population, provides resistance to invasive infectious diseases [35], and stimulates the transport of GABA across the membrane of the intestinal epithelium via Caco-2 [36]. Other benefits of GABA include participation in controlling the cholesterol level and blood pressure and reducing inflammation, and thus it is considered to have anti-diabetic properties [37]. This study indicates that analysing imputed functional contributions provides suggestions for bacterial taxa and orthologous enzymes associated with GABA and succinate production based on the metabolic potential of glutamate (Fig. 1a). The findings suggest that deficiencies of GABA and succinate production contribute to the diabetic state. ETHICAL APPROVAL All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. Rats were treated with full approval of the Institute's Animal Ethics and Welfare Committee; the procedures complied with the UK Animal Scientific Procedures Act (1986) and were approved by the Home Office. CONFLICT OF INTEREST There are no conflicts of interest to disclose.
2022-04-26T15:11:49.589Z
2022-04-25T00:00:00.000
{ "year": 2022, "sha1": "92a92f41e2eda5b87853e128be6bf7f684122437", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/bmfh/advpub/0/advpub_2021-075/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "808a258655858ba5f29595cdcd2685094a1dd1b1", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
231805683
pes2o/s2orc
v3-fos-license
Like a clock in the rabbit's visual cortex Three rules govern the connectivity between neurons in the thalamus and inhibitory neurons in the visual cortex of rabbits. FRÉ DÉ RIC CHAVANE AND DIEGO CONTRERAS T he cerebral cortex receives thousands of inputs from the sensory organs and uses them to create representations of the world and respond appropriately. With the exception of smells, all sensory signals travel through a part of the brain called the thalamus on their way to the cortex. The principal cells of the thalamus then relay this constant stream of information to the cerebral cortex in an orderly, topographical manner. When a neuron in the thalamus relays a signal to a neuron in the input layer of the cortex, the cortical neuron receiving the input can be excitatory or inhibitory, changing the nature of the signal. Figuring out the rules that govern the connections between the thalamus and the cortex is fundamental to understanding the transformation of sensory inputs that travel through this route. For example, how is it that activity of the whole visual cortex is driven by input from the thalamus when fewer than 1-5% of the synapses into the primary visual cortex originate in the thalamus (Douglas and Martin, 2004;Markov et al., 2011)? This is only possible if the thalamocortical synapses act cooperatively (Alonso et al., 1996;Swadlow and Gusev, 2001;Bruno and Sakmann, 2006). Now, in eLife, Yulia Bereshpolova, Xiajuan Hei, Jose-Manuel Alonso and Harvey Swadlowwho are based at the University of Connecticut and the State University of New York College of Optometry -report on how synapses form between the thalamus and the visual cortex in rabbits (Bereshpolova et al., 2020). First, they measured the activity of neurons in the lateral geniculate nucleus (LGN) of the thalamus, which relay visual information, and the response of neurons in the input layer of the cortex ( Figure 1A). Based on the neurons' firing patterns, Bereshpolova et al. established whether specific neurons in the input layer of the cortex were excitatory neurons or inhibitory neurons (which they call suspected interneurons or SINs). Next, they calculated the delay between a neuron firing in the LGN and a neuron responding in the cortex: when a synapse forms between a neuron in the LGN and a neuron in the cortex, this delay should be between one and three milliseconds. With these tools in hand, Bereshpolova et al. moved on to interrogate the cortical circuit to identify other factors that can influence the connectivity between the thalamus and the cortex. They found that, similar to cats, the most important condition for a synapse to form was that the 'receptive field' (the region of space in which a stimulus triggers a response) of a neuron in the LGN overlapped with the receptive field of a neuron in the cortex (Figure 1; Tanaka, 1983;Alonso et al., 2001;Sedigh-Sarvestani et al., 2017 LGN neurons could also form connections with excitatory neurons in the cortex, but in this case only 11% of pairs of cells formed synapses when their receptive fields overlapped. Since most of the studied LGN-SIN cell pairs with overlapping receptive fields were connected, this demonstrates that connections between the LGN and SINs are highly promiscuous (SINs receive inputs from almost all the LGN axons in their vicinity). Why and how are such differences established? The answers are not known, but Bereshpolova et al. were able to explain why the remaining 27% of LGN-SIN pairs with receptive field overlap were not connected. 
In six of the eleven pairs without connections, the axon belonging to the neuron in the LGN did not terminate close enough to the SIN to form a synapse. This was determined using a laminar electrode (which allows measurements across several cortical layers) and current source density (CSD) analysis (which allows an estimation of the source of a current; Figure 1). In the remaining five pairs, there seemed to be no input from the thalamus into the cortical neurons, since the cortical neurons took over three milliseconds to fire after the LGN neuron had fired. The pattern of cortical neurons receiving input from either the thalamus or from other neurons in the cortex resembles what is seen in the cat visual cortex (Finn et al., 2007). Despite these similarities between cats and rabbits, there are also differences between the two species. In rabbits, the connections from the thalamus into cortical SINs seem to be much more promiscuous than in cats (Sedigh-Sarvestani et al., 2017). Furthermore, in cats, the vast majority of putative interneurons have receptive fields with regions that respond exclusively to light increase or decrease and do not found that three rules regulate the connectivity between a neuron in the LGN and an inhibitory neuron in L4: their receptive fields (shown here by an orange circle and a blue square respectively) must overlap; the termination of the LGN neuron's axon (a location estimated by the peak in the CSD signal, grey line) must be located near the cortical neuron (small blue circle); and the delay between the firing of the thalamic and cortical neurons must be less than three milliseconds. All three conditions are met in the top left panel, so a connection is established. Only two of the three conditions are met in the other three panels, so a connection is not established in any of these cases. LGN: lateral geniculate nucleus; CSD: current source density; RF: receptive field. overlap (so-called simple cells); in rabbits, on the other hand, these two regions do overlap (socalled complex cells). A last important difference is the presence in cats of functional cortical columns, which are absent in rabbits and rodents. These columns are groups of neurons in the cortex that have a similar orientation preference and nearly identical receptive fields, leading to the emergence of orientation maps (regions of the cortex that have similar response properties). These distinctions between cats and rabbits may shed light on how species differ when it comes to thalamocortical connectivity. The work of Bereshpolova et al. suggests that, in rabbits, a finely tuned clockwork based on three rules governs the connectivity between neurons in the thalamus and inhibitory neurons in the input layer of the cortex: (i) physical proximity; (ii) receptive field overlap; (iii) a short delay (less than three milliseconds) between neurons firing ( Figure 1B). The same rules have yet to be explored for the excitatory neurons in the input layer of the cortex (Zhuang et al., 2013). Finally, the differences between the thalamocortical synapses in rabbits and in cats underlines the importance of studying neuroscience in different species.
2021-02-05T06:16:07.479Z
2021-02-04T00:00:00.000
{ "year": 2021, "sha1": "95c2c48d64864bf3411678649eb2f8b31cce510d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.65581", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "941ae246415258d3e2b8267c376b75ea75fd3af7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
252769193
pes2o/s2orc
v3-fos-license
People Cheat on Task Performance When They Feel Bored: The Mediating Role of State Self-Efficacy It is unclear whether the state of boredom is related to morality. The present study investigated how state boredom influenced cheating behaviors on task performance. In Study 1 (N = 104), participants were induced to feel bored, and then reported whether they had finished an anagram task (two sentences in the task were unsolvable). The results found that people with higher boredom showed more cheating behaviors than those with lower boredom on task performance. In Study 2 (N = 139), participants completed the Multidimensional State Boredom Scale, and then completed the same anagram task as in Study 1, as well as a state self-efficacy scale. The results revealed that state self-efficacy mediated the effect of state boredom on cheating behaviors on task performance. In other words, a higher level of state boredom leads to a lower level of state self-efficacy, and the lower state self-efficacy then results in more cheating behaviors. The present study provides empirical evidence that state boredom has its moral function through state self-efficacy. Introduction The state of boredom is generally regarded as a concrete and short-lived emotion [1,2], which is "the actual experience of boredom in a given moment" (p. 70) [3]. Retana proposed that boredom is a moral emotion [4]. Similarly, Van Tilburg and Igou examined the positive correlation between state boredom and prosocial intentions, revealing the moral implications for the state of boredom [5]. However, Van Tilburg and Igou also indicated that boredom is perceived as an unrelated emotion to morals compared to other negative emotions (e.g., sadness, anger, disgust, etc.) [6]. Notably, the findings from Van Tilburg and Igou remain a limitation. They asked participants to read the description, "Some emotions are associated with morality (i.e., these emotions relate to the question of what makes a good or bad person) whereas other emotions are not associated with morality" (p. 12) [6]. Participants then evaluated whether the emotions (including boredom) were related to morality. Indeed, the method of measuring whether boredom is related to morality was biased, because the method of measurement was subjective and did not measure the causal relationship between boredom and moral behaviors. Thus, it was not sufficient to conclusively support the relationship between state boredom and morality. Given the inconsistent findings of previous studies and biased measurement, it is unclear whether the state of boredom is related to morality. Importantly, the relationship between state boredom and cheating behaviors so far has received little attention. Taken together, the present study mainly concerns cheating behaviors, extending previous studies on the correlation between state boredom and morality by providing novel empirical evidence. The current work has two primary goals. The first goal is to measure whether state boredom can predict cheating behaviors. The second goal is to find the underlying mechanism, namely the mediating role of state self-efficacy between state boredom and cheating behaviors. The current work would indicate the theoretical implications of linking state boredom to cheating behaviors. We begin with a literature review on state boredom and cheating behaviors. State Boredom Previous researchers have proposed various definitions of state boredom. Pekrun et al. 
indicated the different aspects of state boredom: "specific affective components (unpleasant, aversive feelings), cognitive components (altered perceptions of time), physiological components (reduced arousal), expressive components (facial, vocal, and postural expression), and motivational components (motivation to change the activity or to leave the situation)" (p. 532) [2]. In the present study, we regarded boredom as an activity-related emotion, which is most relevant for one's task performance [7][8][9]. State Boredom vs. Trait Boredom State boredom is related to trait boredom but is a distinct construct from trait boredom [10][11][12]. Specifically, trait boredom is an internal psychological characteristic, which focuses on one's tendency to become bored. In contrast, state boredom is "the actual experience of boredom in a given moment" (p. 70) [3], which is more influenced by external situational factors [3]. The issue of boredom has been discussed by Neu in terms of endogenous boredom (boredom that stems from within) and reactive boredom (boredom that stems from the environment) [11]. Similarly, Todman also has distinguished boredom as situation-independent boredom and situation-dependent boredom [12]. Accordingly, researchers developed the Boredom Proneness Scale (BPS) to measure trait boredom [13] and the Multidimensional State Boredom Scale (MSBS) [3] to measure state boredom [3]. We are interested in state boredom, because boredom sometimes occurs in situations of repetition, meaningless tasks, or too little challenge given one's skills [14][15][16][17][18], which is an unpleasant experience and always has negative consequences as a result. For instance, cheating behavior is one of the moral behaviors that is a widespread phenomenon [19] and has been found to be associated with trait boredom (e.g., organizational misbehaviors). However, the correlation between state boredom and cheating remains unclear. To the best of our knowledge, no prior studies have focused on the correlation. Thus, in the present study, we focused on the relationship between state (rather than trait) boredom and cheating behaviors. Boredom and Cheating Behaviors Some researchers have measured the correlation between trait boredom and moral disengagement at the workplace. For instance, people who are highly bored on the job have significantly higher scores on an objective measure of nonappearance in a job [20]. Similarly, highly bored employees are more likely to engage in organizational misbehaviors (e.g., taking long breaks at work, accepting bribes, misusing company resources, threatening colleagues, etc.) [21,22]. Given that state boredom and trait boredom are related constructs [10], we predicted that people with higher levels of state boredom would be more likely to cheat than those with lower levels of state boredom (Hypothesis 1). State Self-Efficacy vs. Trait Self-Efficacy Bandura identified state self-efficacy as an individual's belief about their capability to accomplish specific performance goals based on an individual's expectations and convictions about what he or she can accomplish under specified circumstances [23][24][25]. Thus, research on state self-efficacy emphasizes the contextual role in assessing efficacy [26]. Meanwhile, trait self-efficacy refers to the way in which one perceives himself or herself in particular domains of activity, which is closer to the general self-concept [26]. 
For instance, when it comes to academic domains, trait self-efficacy refers to how competently a person perceives themselves in academic settings, which is fairly stable [26]. In contrast, state self-efficacy refers to a person's belief that he or she is capable of successfully completing academic tasks at a designated level [27], which is relatively malleable based on the nature of the task. As indicated above, state self-efficacy (vs. trait self-efficacy) is more determined by a specific situation or under a certain task. We were mainly concerned with whether state boredom would decrease state self-efficacy when participants completed an unsolvable task in the present study. Thus, we predicted that the underlying mechanism may be state (rather than trait) self-efficacy. Linking State Boredom and Cheating Behaviors: The Mediating Effect of State Self-Efficacy Boredom is regarded as a "gadfly sting" that causes people to realize that they are incapable of engaging successfully in worthwhile pursuits. Previous research has indicated that boredom is negatively correlated with self-efficacy. For instance, theorists proposed that higher levels of boredom would allow people to process information less efficiently, gain less competence, and succeed less at academic tasks [8,28]. Similarly, research yielded empirical evidence for negative correlations between academic-related boredom and academic self-efficacy [29][30][31][32]. Specifically, repetitive activities might lead to feeling bored. In turn, state boredom decreases the level of activation on the task according to the measurement of heart rate or skin conductance [1], i.e., decreasing people's concentration on the task [7], a sense of control (e.g., self-efficacy) over the task [31], or a fear of failure [33], etc. Moreover, boredom was negatively associated with students' self-perceived competence and their performance [34]. In conclusion, state boredom may be negatively correlated with state self-efficacy. In addition, some evidence suggests that the likelihood of cheating is lower among students with confidence in their academic abilities [35]. Murdock et al. reported that the perception of academic self-efficacy is negatively related to cheating in school [36]. The correlation between self-efficacy and cheating has also been found in college samples [37]. It is because students with low self-efficacy "may doubt their ability to bring about a desired result, which may lead to reliance on other strategies for success" (p. 109) [36], i.e., cheating behaviors [37]. Although previous studies found correlations between boredom, self-efficacy, and cheating behaviors, previous research did not distinguish between state boredom and trait boredom, nor between state self-efficacy and trait self-efficacy. It is unclear whether and how state boredom, state self-efficacy, and cheating behaviors are correlated with each other. We predicted that state self-efficacy might play a mediating role between state boredom and cheating behaviors (Hypothesis 2). Present Study In the current work, we primarily aimed to measure the correlations between state boredom, state self-efficacy, and cheating behaviors on task performance. In Study 1, participants' state boredom was manipulated before they took a cheating behavior test. In a randomized experiment, we tested whether the cheating behavior differed between high-state boredom and low-state boredom. 
In Study 2, we used the MSBS [3] to measure state boredom and included measures of state self-efficacy as a mediating variable. We expected state boredom to be related to cheating behaviors on task performance. We also expected that the effect of state boredom on cheating behaviors would be mediated by state self-efficacy. Study 1 Does Cheating Behavior Differ between High-State Boredom and Low-State Boredom? Study 1 examined whether cheating behavior differed between high-state boredom and low-state boredom on task performance. In this study, participants were induced to Behav. Sci. 2022, 12, 380 4 of 11 feel bored, and then reported whether they had finished an anagram task (two sentences in the task were unsolvable). Participants G*Power software version 3.1 [38] was used to calculate the required sample size. We estimated, based on our theorizing, that the difference in cheating behaviors regarding a high level of boredom would be moderate (Cohen's d = 0.50). Power was set to 0.80, as recommended by Cohen [39]. In addition, the alpha was set at 0.05. The analysis revealed that we needed 102 participants in total. We recruited 104 Chinese students (49 males and 55 females) by advertisement from a university in Sichuan province, China. Ages ranged from 18 to 23 (M = 19.87, SD = 1.28, 3 participants did not report their age). Participants were randomly assigned to the high-boredom (N = 47) or low-boredom condition (N = 57). Participants received a gift for their participation. Measurement and Procedure The study was approved by an ethical panel at the Psychology Department at Southwest University of Science and Technology in Sichuan, China. Participants signed consent forms. Manipulated boredom experience. At the beginning of the study, participants were randomly assigned to the high-boredom or low-boredom condition according to the last number of their student ID. Even numbers were assigned to the high-boredom condition and odd numbers were assigned to the low-boredom condition. Participants in the highboredom condition completed the highly boring task of copying ten references, whereas those in the low-boredom condition completed the less boring task of copying two references. The boredom manipulation was adapted from Van Tilburg and Igou [18]. Then, they reported the extent to which they felt bored (1 = "not at all", 7 = "very much") after the reference copying task. Measures of cheating behaviors. Participants were asked to complete an anagram task, which we adapted from Kilduff et al. [40] and Pierce et al. [41]. We instructed participants to attempt to solve four sentences, in which we broke down each word into a syntactically illogical sequence (i.e., "on the playground, Zhang Liang, after class, played", "some, white things, he, on the ground", "mathematician, self-taught, is a, famous, Hualuo Geng", and "the heat, I am happy, in my hand, to pick"), in three minutes and indicated whether they completed each sentence on a yes/no response scale. Participants were told that they would receive more gifts at the end of the experiment if they solved more sentences correctly. Indeed, the first and third sentences could be solved logically, whereas the second and fourth sentences had no solution. 
Thus, those who reported to have solved neither the second nor the fourth sentence were coded as 0, those reported to have solved either the second or fourth sentence were coded as 1, and those who reported to have solved both the second and fourth sentences were coded as 2, with a higher score indicating more cheating behaviors. At the end of the experiment, participants were told that the amounts of gifts that they received were not related to the anagram task. Results Boredom manipulation check. We first coded the boredom condition as low = 0 and high = 1. Then, we entered the boredom feeling scores as the dependent variable and the boredom condition as the independent variable in an independent-sample T test. The results indicated that participants in the high-boredom condition felt more bored (M = 4.96, SD = 2.04) than those in the low-boredom condition (M = 3.49, SD = 1.76), t (100) = 3.81, p < 0.001, 95% CI = [0.72, 2.21], Cohen d = 0.77, as shown in Table 1. Cheating behaviors. We entered the cheating behaviors as a dependent variable and the boredom condition as an independent variable into an independent-sample t test. The results indicated that participants in the high-boredom condition showed more cheating be- Table 1. Correlation analysis. Age and gender were not significantly correlated with cheating behaviors in the low-(r = −0.18, p = 0.20; r = −0.06, p = 0.68) and high-boredom conditions (r = −0.07, p = 0.66; r = 0.12, p = 0.45). See Table 2. In addition, we obtained the correlations between the manipulation check and cheating behaviors within each condition and across the two conditions. Results showed that the correlation between the manipulation check and cheating behaviors was not significant in the high-boredom condition (r = 0.10, p = 0.50), in the low-boredom condition (r = −0.16, p = 0.25), and across the two conditions (r = 0.07, p = 0.47). A possible explanation is that when participants answered the manipulation check questions, their mental processes may have been affected in ways specific to that particular setting [42]. However, this affection is particularly difficult to identify [43]. Hauser et al. suggested that when manipulation checks are used, mediation analyses may not be able to eliminate confounding variables, since variables unmeasured may still influence the outcome [42]. Thus, the effect of the manipulation check on the outcome would be biased. This could be the reason that we did not find a correlation between the manipulation check and cheating behaviors. Brief Discussion Consistent with Hypothesis 1, Study 1 revealed that participants in the high-boredom condition showed more cheating behaviors than those in the low-boredom condition on task performance. In order to replicate the findings in Study 1, we tried to use different measurements to examine the correlation between state boredom and cheating behaviors. Thus, in Study 2, we applied the MSBS [3] to measure the current boredom state of participants. In addition, we further explored the underlying mechanism for the effect of state boredom on cheating behaviors, namely state self-efficacy. Study 2 Is the Effect of State Boredom on Cheating Behaviors Mediated by State Self-Efficacy? In Study 2, we examined whether the effect of state boredom on cheating behaviors would be mediated by self-efficacy on task performance. Participants in Study 2 completed the MSBS [3], and then completed the same anagram task as in Study 1, as well as a state self-efficacy scale. 
Participants We estimated the sample size based on previous research [33,44]; the effect of boredom on self-efficacy was of a large size (β = 0.50) and the effect of self-efficacy on cheating behaviors was approximately halfway between a small (β = 0.14) and medium (β = 0.39) effect (β = 0.29). In order to achieve 0.8 power to find the mediation effect at α = 0.05, we required 121 participants [45]. In the end, we recruited 139 participants (39 males and 100 females) from a university in Sichuan province, China. Ages ranged from 18 to 22 (M = 19.54, SD = 0.89, 3 participants did not report their age). Participants received a gift for their participation. Procedure The study was approved by the same ethical panel as in Study 1. Participants signed consent forms. Boredom. Participants completed the MSBS, which was developed by Fahlman et al. [3]. The MSBS is a 30-item questionnaire on a 7-point Likert scale ranging from 1 (= strongly disagree) to 7 (= strongly agree). An example statement is 'I am stuck in a situation that I feel is irrelevant'. The MSBS had acceptable reliability (α = 0.95) in the present study. Cheating behaviors. The measurement was identical to that in Study 1. State self-efficacy. Because we aimed to measure state self-efficacy according to the task of cheating behaviors in our study, we adapted the general self-efficacy scale [46] by revising the instruction of self-efficacy according to the task of cheating behaviors. Specifically, we asked participants to indicate their current belief in their capability to accomplish the anagram task above. The scale contains a 10-item scale. An example statement is 'I am confident that I could deal efficiently with unexpected events' (1 = Not at all true, 4 = Exactly true). The reliability in the present research was high (α = 0.91). Results Common method bias test. A common method bias test was conducted using the Harman single-factor test [47]. The results showed that the KMO value was 0.89 (p < 0.001), indicating that the scales were suitable for factor analysis. There were eight factors with eigenvalues greater than 1, and the first factor explained a total variance of 34.10%, which did not reach the critical criterion of 40%. Therefore, the influence of common method bias was not considered to be great in this study. Correlation analysis. As expected, state boredom was negatively related to self-efficacy (r = −0.43, p < 0.001) but positively correlated with cheating behaviors (r = 0.20, p = 0.02). Self-efficacy was negatively related to cheating behaviors (r = −0.30, p < 0.001). Regarding demographics, with the exception of age, which was correlated with state boredom (r = −0.17, p = 0.05), age and gender were not significantly related to any other variables. See Table 3. A mediation analysis using 5000 bootstrapping samples [48] was conducted to examine the proposed mediation model, with multidimensional state boredom as the predictor, self-efficacy as the mediator, and cheating behaviors as the outcome. See Figure 1 Correlation analysis. As expected, state boredom was negatively related to self-efficacy (r = −0.43, p < 0.001) but positively correlated with cheating behaviors (r = 0.20, p = 0.02). Self-efficacy was negatively related to cheating behaviors (r = −0.30, p < 0.001). Regarding demographics, with the exception of age, which was correlated with state boredom (r = −0.17, p = 0.05), age and gender were not significantly related to any other variables. See Table 3. 
A mediation analysis using 5000 bootstrapping samples [48] was conducted to examine the proposed mediation model, with multidimensional state boredom as the predictor, self-efficacy as the mediator, and cheating behaviors as the outcome. See Figure 1 Brief Discussion Consistent with Hypothesis 2, Study 2 revealed that state boredom was indirectly associated with cheating behaviors on task performance through reduced state self-efficacy. Brief Discussion Consistent with Hypothesis 2, Study 2 revealed that state boredom was indirectly associated with cheating behaviors on task performance through reduced state self-efficacy. General Discussion In the present work, we conducted two studies to examine the correlation between state boredom and cheating behavior on task performance and their underlying mechanism, namely state self-efficacy. The studies firstly provided evidence that people with higher state boredom showed more cheating behaviors than those with lower state boredom on task performance (Study 1). The findings were consistent with the existing literature on the correlation between trait boredom and organizational misbehaviors at work [20][21][22]. The present results also found that state boredom was indirectly associated with cheating behaviors on task performance through reduced state self-efficacy (Study 2). The findings were also consistent with the existing literature. For instance, academic-related boredom Behav. Sci. 2022, 12, 380 8 of 11 was negatively correlated with academic self-efficacy [30,37]. In addition, self-efficacy is negatively related to cheating [35,36]. Our correlation analyses in Study 2 showed that gender was not significantly correlated with state self-efficacy, which is inconsistent with some previous studies, in which males had significantly higher general self-efficacy than females [49][50][51] because males are sometimes overconfident of their abilities and performance when self-recognizing, whereas females show the opposite [52]. However, our findings are consistent with other previous studies. According to Hackett and Campbell, gender-neutral tasks did not reveal any differences between men and women, and the possible reason is that the nature of the task may influence the self-efficacy expectations [53]. For instance, there is a greater likelihood of gender differences in self-efficacy where there is more gender-role stereotyping in tasks (such as math examinations) [54][55][56]. Thus, we can presume that the inconsistency between our findings and previous findings may also be related to the nature of the task, because our cheating task was not relevant to traditional gender-role stereotyping. Consequently, we did not find a significant correlation between gender and self-efficacy. Our findings have important theoretical implications. This study first provides empirical evidence for the relationship between state boredom and cheating behaviors, which represents a limitation of previous research [5]. Our investigation extends previous studies on the correlation between state boredom and morality by providing novel empirical evidence. Second, although previous studies found that trait boredom was positively associated with organizational misbehaviors, the present study extends previous research and highlights the moral function of state boredom by studying the relationship between state boredom and cheating behaviors on task performance. Furthermore, Elpidorou proposed that boredom becomes a moral issue when it is regarded as a character trait [57]. 
The present study provided empirical evidence that state boredom also has its moral function, even if it is not a character trait. Thus, our findings help to advance research on boredom. Third, previous studies found correlations between boredom, self-efficacy, and cheating behaviors. However, previous research did not distinguish between state and trait boredom, nor between state self-efficacy and trait self-efficacy. It was unclear whether and how state boredom, state self-efficacy, and cheating behaviors were correlated with each other. In the present study, we fill this gap by making a distinction between state boredom and trait boredom, and between state self-efficacy and trait self-efficacy. We found that state self-efficacy plays a mediating role between state boredom and cheating behaviors. Our findings also carry important practical implications. Individuals may experience boredom for many reasons in everyday life, because life is not always filled with happiness, interest, and excitement; rather, it is often routine, dull or boring. In this situation, when individuals with a high level of boredom are requested to perform a task, their state boredom increases their cheating behavior through reduced state self-efficacy. Thus, our findings could offer an implication for schools or enterprises, i.e., it may be necessary to provide an engaging environment or to inspire individuals to perform tasks that they are interested in during daily work or practice, which would prevent them from acting immorally. The present study had a few limitations. First, the participants were all undergraduate students, well-educated, and mostly under the age of 22. The participants might have had a routine and simple life at university, so they may have possessed a different perception of boredom from other populations. It would be worthwhile to examine whether the correlation between boredom and cheating behaviors exists among the general population. Second, although the anagram task, which we adapted from Kilduff et al. [40] and Pierce et al. [41], is commonly used to measure immoral behaviors, the measurement could be confounded by participants' IQ, interests, attention, etc. Further studies could exclude these confounding variables in order to verify the present findings. Third, the anagram task in the present study was somewhat easy. Indeed, task difficulty is negatively correlated with self-efficacy [58]. Thus, if we increase the difficulty of the task, the correlation between boredom and cheating behaviors might become greater. This is because when a task is more difficult, self-efficacy would be lower, which in turn may lead to an increase in cheating behaviors. Future research could manipulate the difficulty levels of tasks to replicate the present findings. Fourth, in Study 1, we manipulated the high-vs. low-boredom conditions by asking participants to copy different numbers of (ten vs. two) references. Although the manipulation in Study 1 was successful and we replicated the correlation between state boredom and cheating behaviors in Study 2, participants were required to spend more time and use more cognitive resources to finish the task in the high-boredom condition relative to those in the low-boredom condition. The higher cognitive load of the high-vs. low-boredom condition might be a potential factor driving the result. In addition, when participants copied ten (vs. 
two) references, they may have experienced higher ego depletion and had less control in regulating their behaviors, which may also have led to cheating behaviors. Previous studies found that ego depletion was positively associated with unethical behavior [59]. Thus, it would be worthwhile to measure ego depletion and its role in a future study to verify the present study. However, the manipulation of boredom and ego depletion is different; for instance, the task of manipulating ego depletion (e.g., the letter "e" task) is challenging and requires attention to detail. In contrast, the task of copying ten (vs. two) references, which is a classical boredom task, is easier, more repetitive, and less challenging. It has been hypothesized that manipulating ego depletion (letter "e" task) may induce boredom, but less than the task's directly induced boredom [60], suggesting that there might be a confounding effect of boredom on ego depletion [60][61][62]. Notably, so far, it is not clear whether the manipulation of boredom (ten vs. two copying task) would trigger ego depletion. This is an interesting issue, which could be further measured in the future. Finally, we primarily aimed to measure the correlation between state boredom and cheating behaviors on task performance, instead of making broad inferences about the correlation by and large. Thus, the present reanalysis mainly illustrates the correlation in the study area of task performance. Conclusions People with higher boredom showed more cheating behaviors than those who experienced lower boredom on task performance. State boredom was indirectly associated with cheating behaviors on task performance through reduced state self-efficacy. The present study provides empirical evidence that state boredom has its moral function through state self-efficacy. Author Contributions: Conceptualization, methodology, and formal analysis, C.L.; data collection, M.Z.; writing-original draft preparation, writing-review and editing, and funding acquisition, C.F. and C.L. All authors have read and agreed to the published version of the manuscript. Funding: The research was supported by a grant to Chun Feng from the Southwest University of Science and Technology (Grant: 21sx7106) and a grant to Chuanjun Liu from Sichuan University (Grant: 2021CXC05). Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the ethical panel at the Department of Applied Psychology at Southwest University of Science and Technology.
2022-10-10T15:49:18.297Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "9b5347846582b02f332949d9792760f1d5c9d453", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-328X/12/10/380/pdf?version=1664802127", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6a65ba1e971ff7c7a780a5acf6d5df7d672daf13", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
262031326
pes2o/s2orc
v3-fos-license
Comparative Effectiveness of Umeclidinium/Vilanterol versus Indacaterol/Glycopyrronium on Moderate-to-Severe Exacerbations in Patients with Chronic Obstructive Pulmonary Disease in Clinical Practice in England Purpose Chronic obstructive pulmonary disease (COPD) exacerbations are associated with significant morbidity and mortality and increased economic healthcare burden for patients with COPD. Long-acting muscarinic antagonist (LAMA)/long-acting β2-agonist (LABA) dual therapy is recommended for patients receiving mono-bronchodilator therapy who experience exacerbations or ongoing breathlessness. This study compared two single-inhaler LAMA/LABA dual therapies, umeclidinium/vilanterol (UMEC/VI) and indacaterol/glycopyrronium (IND/GLY), on moderate-to-severe exacerbation rates in patients with COPD in England. Patients and Methods This retrospective cohort study used linked primary care electronic health record data (Clinical Practice Research Datalink-Aurum) and secondary care data (Hospital Episode Statistics) to assess outcomes for patients with COPD who had a first prescription for single-inhaler UMEC/VI or IND/GLY (index date) between 1 January 2015 and 30 September 2019 (indexing period). Analyses compared UMEC/VI and IND/GLY on moderate-to-severe, moderate, and severe exacerbations, healthcare resource utilization (HCRU), and direct costs at 6, 12, 18, and 24 months, and time-to-first on-treatment exacerbation up to 24 months post-index date. Following inverse probability of treatment weighting (IPTW), non-inferiority and superiority of UMEC/VI versus IND/GLY were assessed. Results In total, 12,031 patients were included, of whom 8753 (72.8%) were prescribed UMEC/VI and 3278 (27.2%) IND/GLY. After IPTW, for moderate-to-severe exacerbations, weighted rate ratios were <1 at 6, 12, and 18 months and equal to 1 at 24 months for UMEC/VI; around the null value for moderate exacerbations and <1 at all timepoints for severe exacerbations. UMEC/VI showed lower HCRU incidence rates than IND/GLY for all-cause Accident and Emergency visits and COPD-related inpatient stays and associated all-cause costs at 6 months post-indexing. Time-to-triple therapy was similar for both treatments. Conclusion UMEC/VI demonstrated non-inferiority to IND/GLY in moderate-to-severe exacerbation reduction at 6, 12 and 18 months. These results support previous findings demonstrating similarity between UMEC/VI and IND/GLY on reduction of moderate-to-severe exacerbations. Introduction Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of death globally, affecting 5-22% of adults aged 40 and above, and is a leading cause of hospitalizations worldwide. 12][3] In the United Kingdom (UK) alone, COPD is the second most common lung disease after asthma, with an estimated 1.2 million people (2% of the population) having been diagnosed. 2ach year in the UK, COPD costs the National Health Service (NHS) approximately £1.9 billion, which constitutes 29% of the total cost of respiratory illness, second only to asthma (£3 billion). 3Exacerbations are associated with significant morbidity and mortality, as well as increased economic burden of healthcare for patients with COPD. 4,5or patients receiving maintenance therapy with a long-acting muscarinic antagonist (LAMA) or a long-acting β 2 -agonist (LABA), but who still experience exacerbations, the Global Initiative for Chronic Obstructive Lung Disease (GOLD) strategic report recommends escalating to LAMA/LABA or inhaled corticosteroid (ICS)/LABA dual therapy. 
5In the UK, the National Institute for Health and Care Excellence (NICE) guidelines recommend offering LAMA/LABA dual therapy for patients with dyspnea or exacerbations despite the use of a short-acting bronchodilator. 6For patients who develop further exacerbations on dual therapy with eosinophil counts ≥100 cells/µL, the GOLD strategy report recommends escalation to ICS/LAMA/LABA triple therapy. 5NICE guidelines recommend escalation to triple therapy in patients who experience one severe (requiring hospitalization) or two moderate exacerbations over the course of a year. 6 similar benefit in terms of frequency and proportion of patients experiencing exacerbations between umeclidinium/vilanterol (UMEC/VI), indacaterol/glycopyrronium (IND/GLY) and aclidinium/formoterol (ACL/FOR) was observed in the non-interventional DETECT study.7 The study showed clinical benefits in terms of lung function, quality of life, and early morning symptoms of COPD in patients with COPD across multiple sites in Germany, which improved following indexing on UMEC/VI, IND/GLY, and ACL/FOR.7 However, there is little evidence comparing LAMA/LABA dual therapies in a UK patient population. This study compares exacerbation outcomes, healthcare resource utilization (HCRU) and costs in patients with COPD newly initiating single-inhaler LAMA/LABA dual therapy with either UMEC/VI or IND/GLY in England in a routine primary care setting. Study Design The index date was defined as the date of the first single-inhaler UMEC/VI or IND/GLY prescription within the indexing period.The indexing period spanned from 1 January 2015 to 30 September 2019.This ensured that the overall study period did not include the severe acute respiratory syndrome coronavirus 2 (COVID-19) pandemic, as management of patients with COPD would not be representative during this period and the changes in HCRU were not under study.The minimum baseline for assessment of patient characteristics was 12 months before the index date.The follow-up period for assessment of clinical endpoints was variable, spanning from the index date to patient death, study period end date, or end of patient data availability, up to a maximum of 24 months. Study Population Inclusion criteria were applied prior to patient inclusion in the study.Patients needed ≥1 diagnostic code of COPD in primary care as an adult (≥35 years of age aligning with guidance from NICE 6 ), ≥1 prescription of single-inhaler UMEC/VI or IND/GLY within the indexing period, and forced expiratory volume in one second/forced vital capacity (FEV 1 /FVC) <0.7 at any time prior to and including index date for study inclusion.They also had to have ≥12 months of continuous registration with a general practitioner (GP) prior to their index date, and healthcare data which were eligible for linkage to HES. 
Patients were excluded from the study if they had a diagnosis of a medical condition incompatible with a COPD diagnosis at any time prior to and including index date.These included conditions related to lung developmental anomalies, degenerative processes such as cystic fibrosis, and other conditions potentially interfering with clinical COPD diagnosis or changing the natural history of the disease, such as pulmonary resections (Supplementary Table 1).Other exclusion criteria were prescription for both UMEC/VI and IND/GLY at index date, concomitant use of ICS at index date (two ICS prescriptions with ≤30-day gap between end of last supply date to subsequent new supply, overlapping the index date), and ≥1 prescription of any single-inhaler or open combinations of LAMA/LABA prior to index date. Patients were classified by indexed therapy (UMEC/VI or IND/GLY). Outcomes Outcomes were assessed up to a maximum of 24 months after the index date.The primary outcome was rate of moderateto-severe exacerbations in the 6, 12, 18, and 24 months following treatment initiation in patients with COPD newly initiating UMEC/VI versus those initiating treatment with IND/GLY.Secondary outcomes were rate of moderate exacerbations, rate of severe exacerbations, COPD-related and all-cause HCRU and direct healthcare costs at 6, 12, 18, and 24 months, and time-to-first on-treatment exacerbation (including moderate-to-severe, moderate, and severe exacerbations) up to 24 months.An exploratory endpoint of time-to-triple therapy up to 24 months was also investigated. Statistical Analysis Differences in baseline characteristics between UMEC/VI and IND/GLY were calculated using t-test (continuous), chi-squared or Fisher's Exact (for categorical), or Mann-Whitney (for ordinal) statistical comparisons.Rates of exacerbation were calculated by dividing the number of exacerbations observed by the total days at-risk across all patients, reaching the number of exacerbations per person per day.This could then be multiplied to calculate the rate of exacerbation per 1000 persons per day.Non-inferiority (NI) of UMEC/VI versus IND/GLY was assessed for the primary endpoint of moderate-to-severe exacerbations via rate ratio (RR) of exacerbations at 6, 12, 18, and 24 months; RRs were obtained using a negative binomial regression model.An NI margin of 10% was prespecified, as differences of >10% are widely regarded as clinically important in related respiratory studies. 8If NI was met, superiority was also assessed with a margin of <0.A sample size of 998-1342 and 448-499 patients receiving UMEC/VI and IND/GLY, respectively, was determined to be needed to demonstrate NI in the rate of exacerbations between the UMEC/VI and IND/GLY cohorts.A sample size of 1392-1875 and 625-696 patients receiving UMEC/VI and IND/GLY, respectively, was determined to be needed to demonstrate superiority of UMEC/VI in the rate of exacerbations compared with IND/GLY. Moderate-to-severe exacerbations were identified according to an existing algorithm previously validated against physician notes. 9,10Baseline demographics and clinical characteristics were evaluated and compared between treatment cohorts whereby subgroup counts and percentages were calculated for categorical variables, and means and standard deviations (SD) were calculated for continuous variables. 
Inverse probability of treatment weighting (IPTW) was used whereby weights were derived from the propensity score (PS) to create a pseudo-population in which the distribution of covariates in the population is independent of treatment assignment.Covariates considered for inclusion in the IPTW model included baseline sociodemographic and clinical characteristics, exacerbations, treatment use (including ICS use), and HCRU/costs.Selection of covariates was primarily based on background knowledge of the association of pre-treatment variables on the outcomes of interest. The IPTW model allowed estimation of the average treatment effect in the entire population by accounting for the effects of the covariates on the results.Prior to outcomes assessment, covariate balance was assessed using standardized mean differences in the unweighted and weighted cohorts, with a standardized difference of <10% in the weighted cohort being judged as indicative of adequate balance. 11hen evaluating the treatment effect, so as not to increase the probability of a type 1 error, an intention-to-treat (ITT) analysis was conducted whereby patients were classified into treatment groups at index and remained in the cohort for their indexed treatment for the entire follow-up period (maximum 24 months) and were not censored for any reason other than a treatment switch to the comparator treatment. An on-treatment sensitivity analysis was also conducted whereby patients were classified into treatment groups at index and censored at the time of the first prescription for any non-indexed, long-acting maintenance medication, or discontinuation of their indexed therapy. When reporting outcomes, results based on small numbers of patients (n < 5) were suppressed to protect patient confidentiality.Secondary suppression was also implemented, where required, to protect primary suppression. Study Population and Baseline Characteristics In total, 12,031 eligible patients were included in the study, of whom 8753 (72.8%) were indexed on UMEC/VI and 3278 (27.2%) were indexed on IND/GLY.Sample attrition following application of inclusion and exclusion criteria is shown in Figure 2. Demographic and clinical characteristics for both groups are shown in Table 1.Mean (SD) age at index was 69.6 years (10.7) for patients indexed on UMEC/VI and 69.4 years (10.3) for those indexed on IND/GLY.Distribution of patients per region differed between treatments.The UMEC/VI treatment group contained a greater proportion of patients from the South Central, South West, West Midlands, and Yorkshire and the Humber regions compared with the IND/GLY group, while the IND/GLY treatment group contained a greater proportion of patients from the East of England, London, North East, and South East Coast than the UMEC/VI group.The treatments included similar proportions of patients from the East Midlands and North West. 
The UMEC/VI treatment group also contained more patients in the GOLD grade A category compared with the IND/GLY treatment group (43.0%vs 36.7%), and fewer patients across the other grades.A smaller proportion of patients indexed on UMEC/VI had moderate-to-severe exacerbations in the 12 months prior to indexing: 30.6% compared with 34.2% for IND/GLY.Patients indexed on UMEC/VI also had lower mean (SD) total medical costs per patient, at £158.40 (163.80)compared with £190.90 (163.60) for those indexed on IND/GLY.LAMA monotherapy usage during baseline was also less common for patients indexed on UMEC/VI than those on IND/GLY (46.6% vs 61.4%, respectively).Mean (SD) % predicted FEV 1 was similar for both treatments, at 61.7 (16.8) for UMEC/VI and 61.1 (17.1) for IND/GLY. After IPTW, covariates were well balanced for all outcomes with all showing sufficient balance at all timepoints.No weighted analysis was done for the exploratory objective time-to-triple therapy. 2044 met for UMEC/VI at every timepoint apart from 24 months, where the rates were equivalent.Across all timepoints, superiority was not observed for any comparison.On-treatment sensitivity analyses showed similar results (Supplementary Table 2).In the 12 and 24 months post-index, the number of exacerbations for both treatment groups was similar with a mean number (SD) of 0.4 (0.7) and 0.4 (0.8) at 12 months and 0.3 (0.7) for both groups at 24 months.The proportion of patients with exacerbations in the 12 months post index was 26.8% for those indexed on UMEC/VI and 30.2% for those indexed on IND/GLY.At 24 months post index, the proportion of patients with exacerbations was 20.2% for those on UMEC/VI and 23.5% for those on IND/GLY (data not shown). Moderate Exacerbations At all timepoints, unweighted rates of moderate exacerbations for UMEC/VI were consistently lower than for IND/GLY (Supplementary Table 3).Unweighted RRs were below 1 at all timepoints, while weighted RRs were above 1 at 6 and 12 months post-indexing, and equivalent at 18 and 24 months (Figure 4).On-treatment sensitivity analyses showed similar results (Supplementary Table 2). Severe Exacerbations Across all timepoints, the severe exacerbation rates were lower in the UMEC/VI cohort than in the IND/GLY cohort in both unweighted and weighted analyses (Supplementary Table 4).Unweighted and weighted RRs of severe exacerbations were both consistently below 1; it was statistically significant at 6 months (Figure 5).On-treatment sensitivity analyses showed RRs remaining <1 at all timepoints (Supplementary Table 2). Time-to-First On-Treatment Exacerbation Time-to-first on-treatment exacerbation (moderate-to-severe, moderate, and severe exacerbations) occurred at a steady and relatively unchanging rate from indexing until 24 months post-indexing for both unweighted and weighted analyses; median survival (the point at which half of patients had experienced an on-treatment exacerbation) was not reached for 2046 either treatment (Supplementary Figure 1).In general, unweighted and weighted analyses showed no significant difference between UMEC/VI and IND/GLY in time-to-first on-treatment exacerbation, with the only exception being unweighted time-to-severe exacerbation, which was significantly greater for UMEC/VI versus IND/GLY (Figure 6).On-treatment sensitivity analyses showed similar results (Supplementary Table 2). 
HCRU In general, at 6 months post-indexing, the incidence rates of COPD-related and all-cause HCRU were lower among patients indexed on UMEC/VI than on those indexed on IND/GLY, with the exception of all-cause prescriptions (Supplementary Table 5).Statistical significance was met for number of all-cause A&E visits and COPD-related inpatient stays.After IPTW, at 6 months post-indexing, the weighted RRs were largely below 1, but only all-cause A&E visits and COPD-related inpatient stays were statistically significant in favor of UMEC/VI (Figure 7).A similar trend was observed at 12, 18, and 24 months post-indexing (Supplementary Table 5).On-treatment sensitivity analyses showed similar results for all elements except all-cause inpatient stays and A&E visits (Supplementary Table 2). Direct Treatment Costs At 6 months post-indexing, COPD-related and all-cause costs per-patient per-year were lower for all HCRU elements among patients indexed on UMEC/VI than for patients indexed on IND/GLY in both unweighted and weighted analyses (Supplementary Table 6).Patients indexed on UMEC/VI had significantly lower costs for inpatient stays and A&E visits versus IND/GLY in unweighted (COPD-related and all-cause costs) and weighted analyses (all-cause costs) (Figure 8).Lower costs for inpatient stays and A&E visits were also observed at 12, 18, and 24 months post-indexing (Supplementary Table 6).Total COPD-related costs were numerically lower and all-cause costs were significantly lower for patients indexed on UMEC/VI than IND/GLY at all timepoints (Figure 9, Supplementary Table 6). On-treatment sensitivity analyses showed similar results (Supplementary Table 2). Time-to-Triple Therapy Time-to-triple therapy was similar for patients indexed on UMEC/VI and IND/GLY.Approximately 9% of patients had escalated to triple therapy by 6 months post-indexing, 15% by 12 months and 20% by 18 months, irrespective of indexed treatment.Median survival was not reached for either indexed treatment (Supplementary Figure 2). Discussion In this study, we demonstrated NI on the rate of moderate-to-severe exacerbations in patients with COPD newly prescribed UMEC/VI versus IND/GLY in England.Across all timepoints, the rate of moderate-to-severe exacerbations was lower with UMEC/VI than with IND/GLY in both unweighted and weighted analyses, except for the weighted 24-month analysis where rates were equivalent.Treatment with UMEC/VI may result in a greater reduction of severe exacerbations in comparison with IND/GLY, as evidenced by the significantly lower incidence rates of UMEC/VI at 6 months post-indexing.This was possibly due to a reduced delay in accrued long-term benefit of treatment compared with IND/GLY.The demonstration of NI of UMEC/VI in moderate-to-severe exacerbations in this study is in line with the noninterventional DETECT study, which reported no significant difference in the frequency of overall exacerbations between patients indexed on UMEC/VI and those indexed on IND/GLY. 
7wo network meta-analyses including 16 randomized controlled trials and 49 studies, respectively, also found no significant differences in the number of moderate-to-severe exacerbations over at least 48 weeks 12 or annualized moderate-to-severe exacerbation rates over 24 weeks 13 between UMEC/VI and IND/GLY.Our study therefore supports previous findings concerning similarity between UMEC/VI and IND/GLY in moderate-to-severe exacerbations specifically.There were no significant differences in weighted comparisons between cohorts for time-to-first on-treatment exacerbation regardless of exacerbation severity.These results suggest that despite the numerical reduction in rate of moderate-severe exacerbations for UMEC/VI versus IND/GLY in the 2 years following treatment initiation, there is no delay between treatments as to when the exacerbations begin.This is in line with a recent network meta-analysis which found similar results for UMEC/VI, IND/GLY and GLY/formoterol in hazard ratio of time-to-first exacerbation. 13The results presented here also support findings from a recent cohort study that collected data from patients with COPD in Taiwan, which found comparable time-to-first exacerbation rates between UMEC/VI and IND/GLY, along with lower annualized exacerbation rates with UMEC/VI versus IND/GLY. 14ime-to-triple therapy was similar between indexed treatments.Given the similarity in baseline demographic and clinical characteristics between the treatment groups, as well as comparable rates of exacerbations and time-to-first exacerbation, it is unsurprising that there was not a significant difference in time-to-triple therapy between the treatments, as a step up to triple therapy is recommended by both the GOLD strategic report and NICE guidelines for patients on LAMA/LABA dual therapy who continue to experience exacerbations. 5,6In this study, UMEC/VI was associated with a reduced rate of COPD-related inpatient stays and all-cause A&E visits, significantly lower all-cause medical costs, and numerically lower COPD-related medical costs than IND/GLY.The lower costs reported here may suggest that UMEC/VI is more effective than IND/GLY at lowering the burden of healthcare for patients with COPD, particularly regarding inpatient stays, which have been shown to drive COPD-related costs for patients with COPD newly initiating treatment with a LAMA/LABA in primary care in England. 15his study had several strengths.Firstly, an extensive and robust process was used to identify suitable covariates for inclusion into the PS models, ensuring that covariate balance in all models presented was sufficient for effective comparison of the groups.The suitability of the covariates used is demonstrated by the finding that across all objectives, PS models achieved high or satisfactory balance across all covariates once weighting had been applied.Secondly, use of IPTW to adjust for relevant covariates allowed for effective direct comparison between the two cohorts included in this study, despite the use of observational data.Finally, electronic medical records such as CPRD-Aurum represent an observation of effects in the real-world rather than under optimal conditions, providing the findings of this study with relevance in clinical practice. 
16imitations of this study included generalizability of results to the wider UK population being potentially impacted as the linkage to HES data reduces the patient sample to only those registered at a GP practice in England, and CPRD-Aurum data covers <10% of UK practices, although the sample population is considered highly representative of the UK population. 17There was also missing secondary care data due to poor linkage from outpatient HES data to CPRD-Aurum, although as most treatments initially prescribed in secondary care are continued in primary care.As such, this may not have been greatly consequential but may have led to the index date being slightly later than actual initiation of dual therapy in some cases.Also, patients with recorded asthma diagnoses were not excluded from this study so as not to exclude those with asthma-COPD overlap syndrome, possibly increasing the risk of misclassification due to reduced diagnostic accuracy of COPD for patients with asthma. 18It also cannot be known for certain that medication was specifically prescribed for treatment of COPD, as many COPD medications can also be indicated to treat asthma, although a diagnosis of COPD was required for patient inclusion in this study, in line with a definition of COPD validated against patient notes. 18Further, head-to-head clinical studies would provide additional evidence to confirm the findings from retrospective database studies. Conclusions Across all timepoints, the incidence rates of exacerbations were lower in the UMEC/VI cohort than in the IND/GLY cohort in both unweighted and weighted analyses, except for the weighted 24-month analysis where the rates were equivalent.In this study, UMEC/VI demonstrated NI to IND/GLY with regard to moderate-to-severe exacerbation reduction at 6, 12, and 18 months post-indexing.These results support previous findings demonstrating similarity between UMEC/VI and IND/GLY in effectiveness concerning reduction of moderate-to-severe exacerbations after treatment initiation.
2023-09-17T15:16:29.308Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "1a3d5744e1e40b74b5a8a091d595011a6b3e8b12", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=92814", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "766088c971a66435f987e5c3805642c1f0e9fa12", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118257365
pes2o/s2orc
v3-fos-license
A search for B+ -->tau+ nu with Hadronic B tags We present a search for the decay B^+ -->tau^+ nu using $383 \times 10^{6}}$ BBbar pairs collected at the Upsilon(4S) resonance with the BABAR detector at the SLAC PEP-II B Factory. We select a sample of events with one completely reconstructed tag B in a hadronic decay mode ($B^- \to D^{(*)0} X^-$), and examine the rest of the event to search for a B^+ -->tau^+ nu decay. We identify the tau lepton in the following modes: tau^+ -->e^+ nu nu,tau^+ -->mu^+ nu nu, tau^+ -->pi^+ nu and tau^+ -->pi^+ pi^0 nu. We find a 2.2 sigma excess in data and measure a branching fraction of B(B+ -->tau^+ nu) = (1.8^{+0.9}_{-0.8}(stat.) \pm 0.4(bkg. syst.) \pm 0.2 (other syst.)) \times 10^{4}. We calculate the product of the B meson decay constant f_{B} and |V_{ub}| to be f_{B} |V_{ub}| = (10.1^{+2.3}_{-2.5}(stat.)^{+1.2}_{-1.5}(syst.))\times10^{-4} GeV The process B + → τ + ν is also sensitive to extensions of the SM. For instance, in two-Higgs doublet models [6] and in the MSSM [7,8] it could be mediated by charged Higgs bosons. The branching fraction measurement can therefore also be used to constrain the parameter space of extensions to the SM. The B + → µ + ν and B + → e + ν decays are significantly helicity suppressed with respect to the B + → τ + ν channel. However, a search for B + → τ + ν is experimentally more challenging, due to the presence of multiple neutrinos in the final state, which makes the experimental signature less distinctive. In a previously published analysis using a sample of 383 × 10 6 Υ (4S) → BB decays, based on the reconstruction of a semileptonic B decay on the tag side, the BABAR collaboration set an upper limit B(B + → τ + ν) < 1.7 × 10 −4 at the 90% confidence level (CL) [9]. The Belle Collaboration has reported evidence from a search for this decay and the branching fraction was measured to be B(B + → τ + ν) = (1.79 +0.56 −0.49 (stat.) +0.46 −0.51 (syst.)) × 10 −4 [10]. The data used in this analysis were collected with the BABAR detector at the PEP-II storage ring. The sample corresponds to an integrated luminosity of 346 fb −1 at the Υ (4S) resonance (on-resonance) and 36.3 fb −1 taken at 40 MeV below the Υ (4S) resonance (off-resonance). The on-resonance sample contains 383 × 10 6 BB decays. The detector is described in detail elsewhere [11]. Chargedparticle trajectories are measured in the tracking sys- * Deceased † Now at Tel Aviv University, Tel Aviv, 69978, Israel ‡ Also with Università di Perugia, Dipartimento di Fisica, Perugia, Italy § Also with Università della Basilicata, Potenza, Italy ¶ Also with Universitat de Barcelona, Facultat de Fisica, Departament ECM, E-08028 Barcelona, Spain tem composed of a five-layer silicon vertex detector and a 40-layer drift chamber (DCH), operating in a 1.5 T solenoidal magnetic field. A Cherenkov detector is used for π-K discrimination, a CsI calorimeter for photon detection and electron identification, and the flux return of the solenoid, which consists of layers of steel interspersed with resistive plate chambers or limited streamer tubes, for muon and neutral hadron identification. In order to estimate signal selection efficiencies and to study physics backgrounds, we use a BABAR Monte Carlo (MC) simulation based on GEANT4 [12]. In MC simulated signal events one B + meson decays to τ + ν and the other into any final state. The BB and continuum MC samples are, respectively, equivalent to approximatively three times and 1.5 times the accumulated data sample. 
Beam-related background and detector noise are taken from data and overlaid on the simulated events. We reconstruct an exclusive decay of one of the B mesons in the event (tag B) and examine the remaining particle(s) for the experimental signature of B + → τ + ν. In order to avoid experimenter bias, the signal region in data is blinded until the final yield extraction is performed. The tag B candidate is reconstructed in the set of hadronic B decay modes B − → D ( * )0 X − [1], where X − denotes a system of charged and neutral hadrons with total charge −1 composed of n 1 π ± , n 2 K ± , n 3 K 0 S , n 4 π 0 , where n 1 + n 2 ≤ 5, n 3 ≤ 2, and n 4 ≤ 2. We reconstruct D * 0 → D 0 π 0 , D 0 γ; D 0 → K − π + , K − π + π 0 , K − π + π − π + , K 0 S π + π − and K 0 S → π + π − . The kinematic consistency of tag B candidates is checked with the beam energy-substituted mass m ES = s/4 − p 2 B and the en- Here √ s is the total energy in the Υ (4S) center-of-mass (CM) frame, and p B and E B denote, respectively, the momentum and energy of the tag B candidate in the CM frame. The resolution on ∆E is measured to be σ ∆E = 10 − 35 MeV, depending on the decay mode; we require |∆E| < 3σ ∆E . The purity P of each reconstructed B decay mode is estimated, using on-resonance data, as the ratio of the number of peaking events with m ES > 5.27 GeV/c 2 to the total number of events in the same range. If multiple tag B candidates are reconstructed, the one with the highest purity P is selected. If more than one candidate with the same purity is reconstructed, the one with the lowest value of |∆E| is selected. From the dataset obtained as described above, we consider only those events in which the tag B is reconstructed in the decay modes of highest purity P. The set of decay modes used is defined by the requirement that the purity of the resulting sample is not less than 30%. The background consists of e + e − → qq (q = u, d, s, c) events and other Υ (4S) → B 0 B 0 or B + B − decays in which the tag B candidate is mis-reconstructed using particles coming from both B mesons in the event. To reduce the e + e − → qq background, we require | cos θ * T B | < 0.9, where θ * T B is the angle in the CM frame between the thrust axis [13] of the tag B candidate and the thrust axis of the remaining reconstructed charged and neutral candidates. In order to determine the number of correctly reconstructed B + decays, we classify the background events in four categories: e + e − → cc; e + e − → uu, dd, ss; Υ (4S) The m ES shapes of these background distributions are taken from MC simulation. The normalization of the e + e − → cc and e + e − → uu, dd, ss backgrounds is taken from off-resonance data, scaled by the luminosity and corrected for the different selection efficiencies evaluated with the MC. The normalization of the B 0 B 0 , B + B − components are obtained by means of a χ 2 fit to the m ES distribution in the data sideband region (5.22 GeV/c 2 < m ES < 5.26 GeV/c 2 ). The number of background events in the signal region (m ES > 5.27 GeV/c 2 ) is extrapolated from the fit and subtracted from the data. We estimate the total number of tagged B's in the data to be N B = (5.92±0.11(stat))×10 5 . After the reconstruction of the tag B meson, a set of selection criteria is applied to the rest of the event (recoil) in order to enhance the sensitivity to B + → τ + ν decays. We require the presence of only one well-reconstructed charged track (signal track) with charge opposite to that of the tag B. 
The signal track is required to have at least 12 hits in the DCH, momentum transverse to the beam axis, p T , greater than 0.1 GeV/c, and the point of closest approach to the interaction point less than 10 cm along the beam axis and less than 1.5 cm transverse to it. The τ lepton is identified in four decay modes constituting approximately 71% of the total τ decay width: τ + → e + νν, τ + → µ + νν, τ + → π + ν, and τ + → π + π 0 ν. Particle identification criteria on the signal track are used to separate the four categories. The τ + → π + π 0 ν sample is obtained by associating the signal track, identified as pion, with a π 0 reconstructed from a pair of neutral clusters with invariant mass between 0.115 and 0.155 GeV/c 2 and total energy greater than 250 MeV. In case of multiple π + π 0 candidates, the one with largest center-of-mass momentum p * π + π 0 is chosen. We place a mode-dependent cut on | cos θ * T B | to reduce the background due to continuum events and incorrectly reconstructed tag B candidates (combinatorial). The remaining sources of background consists of B + B − events in which the tag B meson was correctly reconstructed and the recoil contains one track and additional particles that are not reconstructed by the tracking detectors and calorimeter. MC simulation shows that most of this background is from semileptonic B decays. We define the discriminating variable E extra as the sum of the energies of the neutral clusters not associated with the tag B or with the signal π 0 from the τ + → π + π 0 ν mode, and passing a minimum energy requirement. The required energy depends on the selected signal mode and on the calorimeter region involved and varies from 50 to 70 MeV. Signal events tend to peak at low E extra values, whereas background events, which contain additional sources of neutral clusters, are distributed toward higher E extra values. Other variables used to discriminate between signal and background are the CM momentum of the signal candidates, the multiplicities of low p T charged tracks and of π 0 candidates in the recoil, and the direction of the missing momentum four-vector in the CM frame. For the τ + → π + π 0 ν mode, we exploit the presence of the π 0 in the final state and the dominance of the decay through the ρ + resonance by means of the combined quantity x ρ = [(m π + π 0 −m ρ )/(Γ ρ )] 2 +[(m γγ −m π 0 )/(σ π 0 )] 2 , where m π + π 0 is the reconstructed invariant mass of the π + π 0 candidate, m γγ is the reconstructed invariant mass of the π 0 candidate, m ρ and Γ ρ are the nominal values [4] for the ρ mass and width, m π 0 is the nominal π 0 mass and σ π 0 = 8 MeV/c 2 is the experimental resolution on the π 0 mass determined from data. We optimize the selection by maximizing s/ √ s + b using the B + B − MC and signal MC, where b is the expected background from B + B − events and s is the expected number of signal events in the hypothesis of a branching fraction of 1 × 10 −4 . The optimization is performed separately for each τ decay mode and with all the cuts applied simultaneously in order to take into account any correlations among the discriminating variables. The optimized signal selection cuts are reported in Table I. We compute the signal selection efficiency as the ratio of the number of signal MC events passing the selection criteria to the number of signal events that have a correctly reconstructed tag B candidate in the signal region m ES > 5.27 GeV/c 2 . 
We evaluate the efficiencies on a signal MC sample which is distinct from the sample used in the optimization procedure. A small cross-feed in some modes is estimated from MC and is taken into account in the computation of the total efficiency. Variable e + µ + π + π + π 0 Eextra ( GeV) < 0.160 < 0.100 < 0.230 < 0.290 π 0 multiplicity 0 0 ≤ 2 -Track multiplicity The total efficiency for each selection is given by: where ε j i is the efficiency of the selection i for the τ decay mode j, n dec = 7 is the number of τ decay modes that can contribute to the reconstructed modes and f j are the fractions of the τ decay mode as estimated from the signal MC sample with a reconstructed tag B. Table II shows the estimated efficiencies. II: Efficiency (in percent) of the most relevant τ decay modes (rows) to be selected in one of the four modes considered in this analysis (column). The All decay row shows the selection efficiency of each reconstruction mode, adding the contribution from the previous rows, weighted by the decay abundance at the tag selection level fj . The last row shows the total signal selection efficiency. The uncertainties are statistical only. Mode To determine the expected number of background events in the data, we use the final selected data samples with E extra between 0 and 2.4 GeV. We first perform an extended unbinned maximum likelihood fit to the m ES distribution in the E extra sideband region 0.4 GeV < E extra < 2.4 GeV of the final sample. For the peaking component of the background we use a probability density function (PDF) which is a Gaussian function joined to an exponential tail (Crystal Ball function) [14]. As a PDF for the non-peaking component, we use a phase space motivated threshold function (ARGUS function) [15]. From this fit, we determine a peaking yield N side,data pk and signal shape parameters, to be used in later fits. We apply the same procedure to B + B − MC events which pass the final selection and determine the peaking yield N side,MC After finalizing the signal selection criteria, we measure the yield of events in each decay mode in on-resonance data. Table III reports the number of observed events together with the expected number of background events, for each τ decay mode. Figure 2 shows the E extra distribution for data and expected background at the end of the selection. The signal MC, normalized to a branching fraction of 3 × 10 −3 for illustrative purposes, is overlaid for comparison. The E extra distribution is also plotted separately for each τ decay mode. We combine the results on the observed number of events n i and on the expected background b i from each of the four signal decay modes (n ch ) using the estimator Q = L(s + b)/L(b), where L(s + b) and L(b) are the likelihood functions for signal plus background and background-only hypotheses, respectively: 2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 2.2 Eextra distribution after all selection criteria have been applied. The upper plot shows the distribution of all the modes combined while lower plots show the (a) τ + → e + νν, (b) τ + → µ + νν, (c) τ + → π + ν, and (d) τ + → π + π 0 ν modes separately. The on-resonance data (black dots) distribution is compared with the total background prediction (continuous histogram). The hatched histrogram represents the combinatorial background component. B + → τ + ν signal MC (dashed histogram), normalized to a branching fraction of 3 × 10 −3 for illustrative purposes, is shown for comparison. 
fraction by: where N tag B + is the number of tag B + mesons correctly reconstructed, ε tag B and ε tag sig are the tag B efficiencies in generic BB and signal events respectively, and ε i are the signal efficiencies defined in equation 2. We fix the ratio ε tag sig /ε tag B = 0.939 ± 0.007(stat.) to the value obtained from MC simulation. We estimate the branching fraction (including statistical uncertainty and uncertainty from the background) by scanning over signal branching fraction hypotheses and computing the value of L(s + b)/L(b) for each hypothesis. The branching fraction is the hypothesis which minimizes the likelihood ratio −2 ln Q = −2 ln(L(s+b)/L(b)), and we determine the statistical uncertainty by finding the points on the likelihood scan that occur at one unit above the minimum. The dominant uncertainty on the background predic-tions b i is due to the finite B + B − MC statistics. We also check possible systematic effects in the estimation of combinatorial background by means of a sample of events with looser selection requirements; we find it to be negligible with respect to the statistical uncertainty. The background uncertainty is incorporated in the likelihood definition used to extract the branching fraction, by convolving it with a Gaussian function with standard deviation equal to the error on b i [16]. The other sources of systematic uncertainty in the determination of the B + → τ + ν branching fraction come from the estimation of the tag yield and efficiency and the reconstruction efficiency of the signal modes. We estimate the systematic uncertainty on the tag B yield and reconstruction efficiency by varying the MC B + B − nonpeaking component of the m ES shape, assigning a systematic uncertainty of 3% on the branching fraction. The systematic uncertainties due to mismodeling of charged particle tracking efficiency, E extra shape, particle identification efficiency, π 0 reconstruction and signal MC statistics depend on the τ decay mode. The uncertainty on the branching fraction is evaluated for each mode separately. We obtain the total contributions due to tracking and E extra systematics by adding linearly the contributions of each decay channel. The total contributions due to MC statistics and particle identification are obtained by adding systematics uncertainties of each reconstruction mode in quadrature. We check the low p T charged track multiplicity distribution agreement between data and MC with a sample enriched in background by loosening the selection criteria. The disagreement, which is mode dependent, is quantified by comparing the MC PDF with the data PDF. We correct the MC to reproduce the distribution in data and apply the correction to the signal MC distribution. We take 100% of the correction as a systematic uncertainty, resulting in a total systematic uncertainty of 5.8% on the branching fraction. The systematic uncertainty due to the E extra mismodeling is determined by means of a data sample containing events with two non-overlapping tag B candidates. The sample is selected by reconstructing a second B meson in a hadronic decay mode B − → D ( * )0 X − on the recoil of the tag B. In addition to the requirements on the tag B described above, we consider only second B candidates satisfying |∆E| < 50 MeV and m ES > 5.27 GeV/c 2 having opposite charge to that of the tag B. If multiple candidates are reconstructed, the one with the highest purity P is selected. 
We compare the distribution of the total energy of the unassigned neutral clusters E extra in data and in MC. We compute the ratio of the number of events in the signal region of each τ mode to the total number of events in the sample. For each τ mode, we evaluate the systematic uncertainty, comparing the ratio estimated from MC to the ratio estimated from data. This procedure results in a 8.8% systematic uncertainty on the branching fraction. Table IV shows the contributions in percent to the systematic uncertainties on the branching fraction. In summary, we measure the branching fraction where the first error is statistical, the second is due to the background uncertainty, and the third is due to other systematic sources. Taking into account the uncertainty on the expected background, as described above, we obtain a significance of 2.2 σ. Using Eq. 1, we calculate the product of the B meson decay constant f B and |V ub | to be f B · |V ub | = (10.1 +2.3 −2.5 (stat.) +1.2 −1.5 (syst.)) × 10 −4 GeV. We also measure the 90% C.L. upper limit using the CL s method [17] to be B(B + → τ + ν) < 3.4 × 10 −4 . (7) The significance of the combined result is 2.6 σ including the uncertainty on the expected background (3.2 σ if this uncertainty is not included). We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality. This work is supported by DOE
2007-08-16T18:08:52.000Z
2007-08-16T00:00:00.000
{ "year": 2007, "sha1": "e124ed3f380f9b8b7049492317fa40c626cff7b5", "oa_license": "CC0", "oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/131547/1/577711.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e124ed3f380f9b8b7049492317fa40c626cff7b5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
201666067
pes2o/s2orc
v3-fos-license
Reversal of Cognitive Impairment in gp120 Transgenic Mice by the Removal of the p75 Neurotrophin Receptor Activation of the p75 neurotrophin receptor (p75NTR), by the proneurotrophin brain-derived neurotrophic factor (proBDNF), triggers loss of synapses and promotes neuronal death. These pathological features are also caused by the human immunodeficiency virus-1 (HIV) envelope protein gp120, which increases the levels of proBDNF. To establish whether p75NTR plays a role in gp120-mediated neurite pruning, we exposed primary cultures of cortical neurons from p75NTR–/– mice to gp120. We found that the lack of p75NTR expression significantly reduced gp120-mediated neuronal cell death. To determine whether knocking down p75NTR is neuroprotective in vivo, we intercrossed gp120 transgenic (tg) mice with p75NTR heterozygous mice to obtain gp120tg mice lacking one or two p75NTR alleles. The removal of p75NTR alleles inhibited gp120-mediated decrease of excitatory synapses in the hippocampus, as measured by the levels of PSD95 and subunits of the N-methyl-D-Aspartate receptor in synaptosomes. Moreover, the deletion of only one copy of the p75NTR gene was sufficient to restore the cognitive impairment observed in gp120tg mice. Our data suggest that activation of p75NTR is one of the mechanisms crucial for the neurotoxic effect of gp120. These data indicate that p75NTR antagonists could provide an adjunct therapy against synaptic simplification caused by HIV. INTRODUCTION Despite the use of combination antiretroviral therapy (cART) (Ellis et al., 2007;Everall et al., 2009), approximately half of HIV-positive individuals are at a high risk for developing mild to severe cognitive impairments, termed HIV-associated neurocognitive disorders (HANDs) (Clifford and Ances, 2013;Saylor et al., 2016). Cognitive alterations seen in HAND subjects correlate with loss of synapses (Masliah et al., 1997;Albright et al., 2003;McArthur, 2004;Everall et al., 2005;Crews et al., 2009). However, our understanding of the mechanisms of HIV-mediated synaptic degeneration is incomplete. A better understanding of the molecular mechanisms underlying HIV neurotoxicity could lead to a new adjunct therapy for HIV positive individuals. The brain serves as a reservoir for ongoing HIV replication (Fois and Brew, 2015); in fact, HAND subjects have detectable levels of HIV RNA in their cerebrospinal fluid (CSF) even when the virus is undetectable in the blood (Di Carlofelice et al., 2018). However, HIV does not infect neurons and thus HAND must result from mechanisms other than neuronal infection. HIV may evoke neuronal injury through indirect mechanisms such as neurotoxins released by infected or immune-stimulated, inflammatory microglia and macrophages (Kaul et al., 2001). Neuronal injury may also result from neurotoxic action of viral proteins such as the activator of transcription Tat (Nath and Steiner, 2013) or the envelope protein gp120 (Meucci and Miller, 1996). The molecular mechanisms whereby gp120 promotes synaptic simplification are still under investigation. The loss of neurons, simplification of neuronal branching, and reduction in dendritic spines can also be triggered by the p75 neurotrophin receptor (p75NTR) (reviewed in Ibanez and Simi, 2012), a member of the tumor necrosis factor receptor family which contains a death domain (Feinstein et al., 1995;Liepinsh et al., 1997). 
Indeed, activation of p75NTR induces neuronal cell death (Teng et al., 2005) as well as axonal and dendritic spine pruning both during development (Singh et al., 2008) as well as in the adult nervous system (Park et al., 2010;Kraemer et al., 2014). There are many ligands for the p75NTR. These include mature as well as unprocessed neurotrophins or proneurotrophins (Chao, 2003), myelin-associated glycoproteins (Wong et al., 2002) and beta amyloid peptide (Perini et al., 2002;Knowles et al., 2013). A p75NTR ligand that promotes neuronal apoptosis and synaptic pruning is the proneurotrophin brain-derived neurotrophic factor (proBDNF) (Pang et al., 2004;Teng et al., 2005;Yang et al., 2014;Guo et al., 2016). Previous work from our laboratory has shown that gp120 increases the levels and release of proBDNF in primary neuronal cultures (Bachis et al., 2012). In these cultures, p75NTR inhibitors block gp120-mediated synaptic simplification (Bachis et al., 2012), suggesting that activation of p75NTR by proBDNF may be a crucial mechanism to underlying the synaptic simplification seen in HAND. This suggestion is supported by evidence showing that postmortem brains of HAND subjects exhibit higher levels of proBDNF than HIV positive subjects without cognitive alterations (Bachis et al., 2012). Consistently with this suggestion, recent data have shown that increased hippocampal proBDNF contributes to memory impairment in aged mice (Buhusi et al., 2017). This present study was undertaken to provide molecular and behavioral evidence of the role that p75NTR plays in gp120mediated loss of synaptic contacts. We utilized gp120 transgenic (gp120tg) mice intercrossed with p75NTR null mice. The gp120tg mice display a multitude of altered neuron-specific processes, including synaptic simplifications (Toggas et al., 1994;Bachis et al., 2016b) and impaired neurogenesis (Lee et al., 2011), as well as cognitive deficits (D'Hooge et al., 1999) and sensorimotor gating impairments (Henry et al., 2014), suggesting that these animals are a suitable model to study HAND (Thaney et al., 2018). We report that the reduction of p75NTR expression significantly decreases the neurotoxic effect of gp120 as well as impairment in memory evoked by gp120. Reagents Human T-lymphotropic virus (HTLV)-IIIB (HIV1 IIIB ) was obtained through the AIDS Research and Reference Reagent Program, Division of AIDS, National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH). Gp120IIIB was obtained for Immunodiagnostics, Inc. (Woburn, MA, United States). Uncleavable proBDNF was purchased from Alomone labs (Jerusalem, Israel). Cell Viability The viability of primary cortical neurons was estimated by Hoechst 33258 and propidium iodide (Hoechst/PI; Sigma-Adrich) co-staining and visualized using a fluorescence microscope Olympus IX71, as previously described (Avdoshina et al., 2016a). Hoechst/PI-positive cells were then counted using ImageJ (National Institutes of Health, Bethesda, MD, United States) and expressed as a percentage of the total number of neurons. Animals Gp120tg breeding mice were obtained from Dr. E. Masliah (University of California, San Diego, San Diego, CA, United States). The characterization of these mice is provided elsewhere (Toggas et al., 1994). Female gp120tg mice were intercrossed with C57BL/6J male p75NTR −/− mice (The Jackson Laboratory, Bar Harbor, ME, United States) to generate males and female p75 −/− and p75 +/− gp120tg mice, as previously described (Bachis et al., 2016b). 
Wild type (WT) littermates (gp120 null/p75 +/+ ) were generated from these colonies and used as controls for our biochemical, behavioral, and histological studies. Animals were housed under standard conditions with food and water ad libitum and maintained on a 12-h light/dark cycle. Mice were maintained in our facility for up to 10 months. 8-10 month old mice (of both sexes) were used for these studies. An animal's genotype was confirmed through an outsourced genotyping service (Transnetyx, Inc., Cordova, TN, United States) from tail snips taken at time of weaning and at sacrifice. All studies were carried out following the Guide for the Care and Use of Laboratory Animals as adopted and promulgated by the U.S. National Institutes of Health and approved by the Georgetown University Animal Care and Use Committee. Behavioral Analysis All rodents in this study were tested during their dark (active) period. For each behavioral test, mice were brought to the testing room and allowed to habituate to the testing conditions for at least 1 h. White noise (50 dB) was played to obscure noises from outside the testing room. After the conclusion of each test, mice were returned to their home cages in the animal facility. The assays were scheduled in an order to minimize the impact of repeated testing on performance and occurred in the same order as they appear below within section "Materials and Methods." Open Field Measures The open field apparatus (Med Associates, Inc., Saint Albans City, VT, United States) measured 27 cm × 27 cm and had transparent walls of 20 cm. The apparatus also contained 16beam IR arrays on both the X and Y axes for positional tracking within the apparatus and on the Z axis for rearing detection. In order to encourage exploration, the open field was dimly lit by overhead room lights at 75 lux. The apparatus was cleaned with a 70% ethanol solution between trials. Mice were placed in the center of the field and exploration was recorded over a single trial of 60 min. Behavior was tracked through the IR beam array and analyzed by the Med Associates Activity Monitor software. The rodents' behavior in this apparatus was analyzed using IR beam breaks for locomotor activity throughout the trial. The center zone was defined as the zone greater than 6 cm from any of the walls. Passive Avoidance The modular passive avoidance chamber (Coulbourn Instruments, Holliston, MA, United States) had two enclosed chambers of equal dimensions separated by a wall. This center wall had a 6 cm by 6 cm guillotine door linked to a computercontrolled AMi-2 interface device (Stoelting, Co., Wood Dale, IL, United States). Each chamber in the apparatus measured 17.0 cm by 17.7 cm and had a height of 30.5 cm. One side of the chamber had opaque walls and provided a dark environment for rodents inside this compartment. The other compartment was brightly illuminated by an overhead light at 300 lux. The chamber was placed in the center of the room with indirect overhead lighting and a side-mounted remote USB camera for viewing mice within the apparatus. The passive avoidance task was conducted over three consecutive days with a single trial on each day (Day 1: habituation, Day 2: acquisition, Day 3: retention probe trial). In the habituation trial, mice were placed in the lighted chamber with the door closed and allowed to explore for 180 s. For the acquisition trial, mice were again placed in the lit compartment at the beginning of the test. 
After 30 s, the door lifted and mice were given access to the dark compartment. When a mouse had entered the dark compartment, the experimenter closed the door with a remote switch, and a computer program (Anymaze, Stoelting, Co., Wood Dale, IL, United States) initiated a 2 s foot shock at 0.2-0.4 mA. After five additional seconds in the dark compartment, the test was ended and the mouse was retrieved. The probe trial followed an identical procedure to the acquisition trial, but the door was closed and the mouse was not shocked when it entered the dark zone. The probe trial was limited to 300 s. If the mouse had not entered within 300 s, the mouse was removed from the apparatus and its probe latency was recorded as 300 s. The latency to enter the dark zone on the acquisition and probe trials was recorded by the Anymaze software via a keystroke from the experimenter. A mouse was judged to have entered the dark compartment when all four paws were completely inside the darkened chamber. Morris Water Maze The Morris water maze (MWM) apparatus consisted of a circular pool (120 cm in diameter), which was filled to a depth of 50 cm with 26 • C water. Habituation, acquisition, and reversal trials included a 6 cm by 6 cm escape platform submerged ∼1 cm below the water's surface. The maze was lit by overhead lights at 75 lux and surrounded by white curtains with large distal cues on each of the four cardinal directions. We conducted our MWM paradigm over 13 consecutive days. Briefly, rodents were given a single 60 s habituation trial in clear water with a submerged, but visible escape platform before training began. Spatial acquisition trials were performed four times per day and conducted over the next 5 days with the water now made opaque by the addition of white acrylic paint. A single probe trial was performed 24 h after the final spatial acquisition trial with the escape platform now absent from the maze. Reversal trials on the next 5 days were conducted in a manner identical to the spatial acquisition trials, but with the escape platform moved 180 • to its initial position within the apparatus. Finally, the reversal probe trial was conducted on the final day in a manner identical to the initial probe trial. The habituation trial and probe trials were limited to 60 s. The acquisition and reversal trials were likewise limited to 60 s, but mice were gently guided to the escape platform if they had not located this platform within 60 s. Animals were allowed to remain on the escape platform at the end of their trial for 15 s in order to examine their location with respect to the distal cues. We used an inter-trial interval of 15 min. The MWM was virtually divided into four equal quadrants and behavior was analyzed by Anymaze for latency to entry onto the escape platform, duration in target quadrant, duration in the center (non-thigmotaxic) area, passes over the former escape platform location, and average swimming speed. A trial was excluded from MWM analysis if the animal demonstrated non-searching behaviors in the maze, which we defined a priori as passive floating for greater than five consecutive seconds or panicked swimming at one location on the maze wall (less than 2% of all trials). An animal was omitted from a testing day if two or more trials were excluded within the same day, but every mouse was allowed to finish the trial and remain on the target platform for each exposure to the MWM. 
Preparation of Synaptosomes and Western Blot Analysis Mice were euthanized by cervical dislocation for the preparation of synaptosomes. Synaptosomes were prepared from brain lysates using Synaptic Protein Extraction Reagent (Thermo Fisher Scientific, Inc., Waltham, MA, United States) according to the manufacturer instructions. Protein content was determined by BCA Protein Assay Reagent Kit (Thermo Fisher Scientific, Inc.) according to the manufacture instructions. Proteins were separated in a NuPAGE 4-12% Bis-Tris Gel and transferred to a nitrocellulose membrane using iBlot device (Thermo Fisher Scientific, Inc.). Membranes were blocked with 5% milk in PBS and 0.1% Tween-20 and probed with antibodies against: PSD95 (1:2000, Thermo Fisher Scientific, Inc.), NMDAR2B (1:1000, Abcam, Inc., Cambridge, United Kingdom), NMDAR2A (1:1000, R&D Systems, Minneapolis, MN, United States), and synaptophysin (1:2000, Sigma-Aldrich, Co., St. Louis, MO, United States). Membranes were stripped with Restore Western Blot Stripping Buffer (Invitrogen) for 30 min at 37 • C and reprobed with and anti-β-actin antibody (1:15000, Sigma-Aldrich, Co.) in blocking buffer to serve as a protein loading control. Immune complexes were detected with the corresponding secondary antibody and chemiluminescence reagent (Fisher Scientific). The intensity of immunoreactive bands was quantified using ImageJ and expressed in arbitrary units (AUs) defined as optical densities of synaptic protein relative to β-actin. Statistical Analysis Data, expressed as the mean ± SEM, were analyzed using one or two-way analysis of variance (ANOVA) with either Tukey's HSD post hoc test for biochemistry, or Kruskal-Wallis for behavior, using GraphPad Prism software v. 7.0 (GraphPad). A p-value < 0.05 was considered statistically significant. RESULTS gp120 Is Not Neurotoxic in p75NTR −/− Neurons We have previously demonstrated that gp120, which induces the releases proBDNF, promotes synaptic pruning in rodent primary neurons (Bachis et al., 2012;Avdoshina et al., 2017). The neurotoxic effect of gp120 is prevented by p75NTR antagonists (Bachis et al., 2012). To further support these data, gp120 was applied to primary cultures of cortical neurons obtained from WT or p75NTR −/− mice for 6 h and neuronal processes were identified by an antibody against neuron-specific cytoskeleton protein tubulin β III (TUBB3). Consistent with our prior findings (Bachis et al., 2012), exposure of neurons from WT mice to gp120 (5 nM) reduced the overall TUBB3 immunoreactivity, suggesting a decrease in the number of neuronal processes (Figure 1). Importantly, we found that neurons lacking p75NTR expression have a more complex TUBB3-positive network than neurons from WT animals exposed to gp120 (Figure 1). To provide a quantitative assessment of the neurotoxic effect of gp120, we exposed WT and p75NTR −/− neurons for 24 h FIGURE 1 | gp120 does not cause neurite pruning in p75NTR -/neurons. Representative images of primary neuronal cultures prepared from E17 cortex of WT or p75NTR -/mice. At 7 days in culture, neurons were exposed to heat inactivated (boiled) gp120IIIB (control) or 5 nM gp120IIIB for 6 h. Cells were fixed and processes visualized by anti TUBB3 antibody (green). DAPI (blue) was used to stain nuclei. Bar = 20 µm. Note that the neuronal network (TUBB3 positive processes) in p75NTR -/neurons exposed to gp120 is preserved when compared to WT neurons exposed to gp120. The experiment was repeated twice. to gp120 (5 nM). 
Hoechst/PI was used to quantify the number of surviving neurons. Furthermore, we examined whether HIV, which shares a similar neurotoxic profile of gp120 in rodent neurons (Bachis et al., 2009), is neurotoxic via p75NTR. As a positive control for p75NTR-mediated loss of neurons, we also exposed both WT and p75NTR null neurons to proBDNF for 24 h (10 nM). WT neurons in the presence of gp120, HIV or proBDNF displayed the expected increase in the number of neurons with Hoechst/PI staining, indicating increased apoptosis (Figure 2); importantly, the lack of p75NTR significantly reduced neuronal loss caused by either gp120, HIV, or proBDNF (Figure 2). Overall, a two-way ANOVA for this set of experiments revealed significance for genotype (F (1,60) = 91.92; p < 0.001), treatment (F (3,60) = 27.71; p < 0.001), and interaction (F (3,60) = 12.88; p < 0.001) factors. Taken together, our data suggest that p75NTR mediates the synaptic pruning effect of gp120, most likely shed from the virus. gp120 Decreases PSD95 and NMDA Receptor Subunit Immunoreactivity We have previously shown that gp120 causes a decrease in the number of dendritic spines in the hippocampus, an effect that is significantly diminished by the removal of one p75NTR allele (Bachis et al., 2016b). Dendritic spines form the post-synaptic density of the majority of excitatory synapses. Thus, to determine whether gp120 affects post-synaptic spines, we prepared synaptic fractions from homogenized mouse brains of 8-10 monthold WT and gp120tg mice and measured the levels of postsynaptic and presynaptic proteins. These include post-synaptic density protein 95 (PSD95), an abundant scaffolding protein that determines the functional integrity of excitatory synapses, N-methyl-D-aspartate (NMDA) receptor (NR) subunit 2A and 2B (Kornau et al., 1995) and synaptophysin, a transmembrane protein that is involved in synaptic formation and exocytosis. We first verified the appropriateness of the method by determining PSD95 and NR2A and 2B subunits in brain lysates from WT mice containing synaptosomal and cytoplasmic preparation. FIGURE 2 | The neurotoxic effect of HIV and gp120 is reduced in p75NTR -/neurons. Cortical neurons from WT and p75NTR -/mice were exposed to gp120 IIIB (5 nM), HIV IIIB (1.5 ng/ml of p24), or proBDNF (10 nM) for 24 h. Boiled inactivated gp120 or HIV (IIIB) were used as controls. Neuronal cell death was determined by counting the number of Hoechst/PI positive cells, as described in Section "Materials and Methods." Data expressed as % of cell viability are the mean ± SEM of eight independent coverslips from four independent culture preparations; coverslip values were averaged from three random fields. * * p < 0.01, * * * p < 0.001 vs. WT control,ˆp < 0.05 vs. WT gp120 IIIB ,ˆˆˆp < 0.01 vs. WT HIV IIIB or proBDNF. Two-way ANOVA and Tukey's HSD. Figure S1 confirm that PSD95, NR2A, and 2B immunoreactivity are only found in synaptosomal preparations. We then examined whether the levels of these synaptic proteins are altered in the hippocampus of gp120tg mice. gp120-Mediated Deficits in Performance on a Passive Avoidance Task Is Inhibited by the Removal of p75NTR Alleles Gp120tg mice develop age-related cognitive abnormalities, which correlate with loss of synaptic plasticity and neuronal degeneration (Toggas et al., 1994;Krucker et al., 1998;Lee et al., 2011), as well as an increased in the levels of proBDNF in the hippocampus (Bachis et al., 2016b). 
These data allowed us to speculate that a reduction of p75NTR expression would avert the impaired performance on hippocampal-dependent memory tasks previously described in gp120tg mice (Krucker et al., 1998;D'Hooge et al., 1999). FIGURE 3 | Analysis of synaptosomal preparations from gp120 and p75NTR mice. (A) Example of a Western blot analysis of synaptosomal preparation from the hippocampus of the indicated mouse genotypes. The blot was probed with antibodies that recognized the indicated synaptic proteins. Beta-actin was used for loading control. Molecular weights of analyzed proteins are listed on the left. (B-E) Levels of synaptic proteins, quantified as described in Section "Materials and Methods" and expressed in AU, are the mean ± SEM of five animal per group. * p < 0.05, * * p < 0.01. In all panels, comparisons of means of WT and p75 -/mice reveal no significant differences. ANOVA and Tukey's HSD. To examine whether loss of hippocampal spines was associated with impaired long-term avoidance memory, we subjected 8-10 month-old WT, gp120tg, p75 +/− gp120tg, and p75 −/− gp120tg mice to a passive avoidance task. Mice of each genotype entered the dark compartment with similar latency on the acquisition trial (one-way ANOVA: F (4,81) = 0.5615, p = 0.6912) (Figure 4), suggesting that the absence of one or both p75NTR alleles does not affect exploratory drive in this apparatus. Across all probe trials, gp120tg mice showed a significant difference in latency to enter the dark compartment FIGURE 4 | Removal of p75NTR rescues gp120-driven impairments on a passive avoidance task in (8-10mo) animals. 3-(3mo) and 8-10 month (8mo)-old mice of the indicated genotypes were subjected to a passive avoidance task to evaluate impairments in avoidance earning. Latency to enter the dark compartment on acquisition (Ac) and probe (Pr) testing days was analyzed. ANOVA with Tukey's HSD for comparisons across genotypes. * * p < 0.001 vs. WT. Data are expressed as mean ± SEM. n = number of animals per group. The observed effect may be due in part to differential locomotor activity, vigilance, or alertness in a novel environment among experimental groups (i.e., more exploratory animals may enter the dark compartment at a greater rate regardless of a formed association). To address these confounds, we assessed the above behaviors in a single 5-min exposure to an open field. Both the cumulative distance traveled (Figure 5A) and the time spent ambulating (Figure 5B) in the trial were equivalent between experimental groups (one-way ANOVA: F (3,70) = 0.0973, p = 0.9613, F (3,69) = 0.6548, p = 0.5827, respectively). The percentage of total active beam breaks occurring in the center of open field was decreased in 8-10mo gp120tg mice when compared to WT, replicating previous findings that older gp120tg mice display a modest anxious phenotype (one-way ANOVA: F (3,69) = 3.748, p = 0.0148) (Henry et al., 2014;Bachis et al., 2016a). Interestingly, this effect was rescued in animals by deleting one or both p75NTR alleles ( Figure 5C). However, the total active beam breaks ( Figure 5D) were equivalent across experimental groups (one-way ANOVA: F (3,71) = 1.163, p = 0.3300). Thus, although gp120tg mice show modest anxiety-like behaviors in a novel environment, all groups have a comparable exploratory drive and activity level in a novel environment within short passive avoidance timeframes. Based on these data, it is unlikely that the deficits seen in the passive avoidance task arise from anxiousness in an open environment. 
Genetic Deletion of p75NTR Rescues gp120-Mediated Impairment in Spatial Memory Previous studies have demonstrated that spatial memory is impaired in gp120tg mice (D'Hooge et al., 1999). We hypothesized that spatial learning and memory, a hippocampaldependent behavior, would be improved in mice lacking one or both p75NTR alleles. To assess impairments in spatial memory, we employed a MWM navigation task over 13 days. WT mice performed significantly better than gp120tg mice in both the acquisition and reversal phase, in which the escape platform was moved 180 • from its original location. Indeed, the gp120tg mice showed impairments on the second and third acquisition and reversal days ( Figure 6A). The removal of one or both p75NTR alleles diminished the effect of gp120. Supplementary Table S1 displays all statistical measures and inter-group comparisons within the two MWM learning phases. A probe trial was administered 24 h after both the final acquisition and reversal trials. These probes revealed differences between gp120tg mice vs. WT controls with respect to the duration of time spent in the target quadrant ( Figure 6B) and passes over the former target platform's location ( Figure 6C). Both p75 +/− gp120tg and p75 −/− gp120tg groups had non-significant differences in these two probe measures vs. WT controls. Similar results were observed within the reversal probe (Figures 6D,E, respectively). To account for possible confounds due to impairments in swimming, we compared swim speeds during the initial habituation to the water maze. Swim speed did not differ significantly (one-way ANOVA: F (3,71) = 0.6611, p = 0.5787) between the four genotypes during this single trial ( Figure 6F). Likewise, the percentage of time spent swimming in the center of the MWM on the first trial following habituation was similarly equivalent across experimental groups (Figure 6G; one-way ANOVA: F (3,71) = 1.964, p = 0.1271), indicating comparable motivation to escape the maze. Taken together, these data indicate that there are differences in spatial learning and memory across genotypes. DISCUSSION Dendritic injury and synaptic dysfunction are believed to cause the cognitive decline in HAND and other neurodegenerative diseases. Loss of synapses, similar to what is seen in HAND, is also reproducible in transgenic mice overexpressing gp120 (reviewed in Thaney et al., 2018). In this work we have used this animal model to characterize molecular/cellular mechanisms underlying the neuropathology of HAND. Our previous studies have shown that gp120tg mice exhibit increased levels of proBDNF in the hippocampus (Bachis et al., 2016b). Moreover, gp120 induces the release of proBDNF from neuronal cultures (Bachis et al., 2012). Here, we show that gp120 neurotoxicity can be attenuated by the removal of p75NTR, a receptor that promotes synaptic pruning (Zagrebelsky et al., 2005;Singh et al., 2008) and neuronal cell death (Bamji et al., 1998;Bhakar et al., 2003). Thus, our results support the suggestion that gp120 promotes synaptodendritic injury by a mechanism that favors the activation of p75NTR. How can gp120 neurotoxicity be linked to p75NTR activation? ProBDNF, like other proneurotrophins, is cleaved into mature BDNF in the endoplasmic reticulum by the proconvertase furin (Seidah et al., 1996) or extracellularly by proteases such as plasmin and matrix metalloproteases (Pang et al., 2004). 
Gp120 decreases the level and activity of furin and plasmin, thus reduces the conversion of proBDNF to mature BDNF (Bachis et al., 2012). Consequently, gp120tg mice exhibit higher levels of proBDNF than WT in the hippocampus and other brain areas (Bachis et al., 2016b). Moreover, gp120 promotes the release of proBDNF from cortical neurons and alters the ratio mature BDNF/proBDNF in the synaptic cleft in favor of proBDNF. This release could compromise synaptic connections and neuronal survival as indicated by the increased neuronal loss in cortical neurons exposed to gp120 (Bachis et al., 2012). Our data obtained in p75NTR −/− neurons, in which the neurotoxic effect of gp120 was significantly attenuated, strongly suggest that gp120mediated synaptodendritic injury and cell loss depend upon an indirect activation of p75NTR. The number and morphology of dendritic spines have emerged as crucial components underlying synaptic plasticity. Dendritic spines express all ionotropic glutamatergic receptors, which play a central role in long-term potentiation (LTP) (Kasai et al., 2010;Rochefort and Konnerth, 2012), a well-studied form of synaptic plasticity that forms the cellular basis of hippocampaldependent learning and memory (Herron et al., 1986;O'Dell et al., 1991). Gp120 has been shown to inhibit LTP (Sanchez-Alavez et al., 2000;Dong and Xiong, 2006), which would be consistent with a reduced spine density in the hippocampus described in gp120tg mice (Bachis et al., 2016b). In this study, we have provided preliminary but complimentary data showing that gp120 decreases the levels of NR2A and 2B subunits. This decrease is particularly important because these subunits play a role in glutamate-mediated synaptic plasticity (Liu et al., 2004;von Engelhardt et al., 2008). Moreover, both PSD95 and NR subunits, are considered markers for excitatory post-synaptic sites (Sheng, 2001). Intriguingly, gp120 failed to change the levels of synaptophysin, a synaptic vesicle membrane protein found predominantly presynaptically (Tarsa and Goda, 2002). Thus, it appears that gp120 may target mostly the post-synaptic membrane. This suggestion, although still speculative, is in line with the fact that neurons release proBDNF (Yang et al., 2009), which then acts on post-synaptic p75NTR to decreases spine density of hippocampal pyramidal neurons (Zagrebelsky et al., 2005;Yang et al., 2014). FIGURE 6 | gp120tg mice show impairments in a task of spatial navigation. 8-10 month-old mice were evaluated with the Morris water maze (MWM) navigation task. (A) Latency to locate the submerged escape platform in the maze over five training days of acquisition and five days of reversal learning. These two learning phases were separated by a probe (P) on day 6 and a reversal probe on day 12. One-way ANOVA (within each TD) with Tukey's HSD. Each data point represents the genotype mean of the average of an animal's trials on a training day. (B) Duration in the target quadrant during the first probe trial. Dotted line indicates chance performance. One-way ANOVA with Tukey's HSD. (C) Passes over the former location of the target platform during the probe trial. Kruskal-Wallis with post hoc Dunn's. (D) Duration in the reversal target quadrant during the first probe trial. Dotted line indicates chance performance. One-way ANOVA with Tukey's HSD. (E) Passes over the former reversal escape during the reversal probe trial. Kruskal-Wallis with post hoc Dunn's. 
(F) Habituation swim speed and (G) percent of first trial in the center of the MWM were taken as control measures for locomotion and motivation to explore the maze, respectively. One-way ANOVA. * p < 0.05, * * p < 0.01, * * * p < 0.001. Data are displayed as mean ± SEM. n = number of animals per group. Frontiers in Cellular Neuroscience | www.frontiersin.org Cognitive impairment and reduced LTP seen in gp120 mice (Krucker et al., 1998;D'Hooge et al., 1999) correlate with loss of synapses in the hippocampus (Toggas et al., 1994;Lee et al., 2013;Bachis et al., 2016b). These effects appear when mice are at least 6 months old. In the present study, we have used a series of behavioral tests that assess loss of hippocampal connections to determine whether memory impairment in gp120tg mice could be abolished by the removal of p75NTR alleles. We have found that the hippocampal-dependent memory deficits observed in 8-10mo gp120tg mice is attenuated when either one or both p75NTR alleles are removed. Thus, reduced expression of p75NTR, which has been shown to slow down cognitive decline in an animal model of Alzheimer's disease (Qian et al., 2018), not only inhibits the loss of hippocampal spines that we have previously described (Bachis et al., 2016b) but also precludes the impairment in memory observed in 8-10mo gp120tg mice. In addition, we observed that the removal of p75NTR alleles reduces the impairments seen in the MWM reversal phase in gp120tg mice. Reversal learning has multiple neural substrates in rodents independent of the hippocampus, including the subnuclei of the basal forebrain and the prefrontal cortex (Ghods-Sharifi et al., 2008;Tait and Brown, 2008). Although we cannot exclude that the ability of gp120 to increase proBDNF in any of these areas may underlie a reversal learning impairment, it is difficult to interpret reversal learning deficits when impairments in general spatial learning are also seen within the MWM test. Therefore, we exert caution in interpretation of this curious finding and recognize that more rigorous assays of reversal learning are needed in future studies with this specific model of HAND. The mechanism(s) whereby proBDNF activation of the p75NTR reduces spine density remains to be established. p75NTR, after binding to sortilin family member SorCS2, activates several signaling pathways that are crucial for neuronal degeneration. These include c-Jun N-terminal kinase (JNK) (Friedman, 2000;Salehi et al., 2002), the RhoA (Park et al., 2010) and the NF-kB pathways (Carter et al., 1996). JNK is also activated by the HIV protein gp120 (Meucci et al., 1998;Bodner et al., 2004;Singh et al., 2005), suggesting a common neurotoxic mechanism between viral proteins and p75NTR. Experimental studies have also shown that p75NTR, destabilizes actin filaments through inactivation of Rac/fascin interaction (Deinhardt et al., 2011). Actin influences spine morphology and stability (Rust et al., 2010). Moreover, p75NTR has been shown to inhibit neurite outgrowth by interacting with the Nogo receptor complex (Barker, 2004) and Ephrin-A (Lim et al., 2008), important components of synapses and promoters of spine morphogenesis (Lai and Ip, 2009). On the other hand, we need to consider that the hippocampus of gp120tg mice as well as HAND subjects exhibits lower levels of BDNF than controls (Bachis et al., 2012(Bachis et al., , 2016b. BDNF has been shown to promote maturation and density of dendritic spines (Orefice et al., 2013). 
Thus, a reduction in BDNF levels in favor to proBDNF levels, as seen in gp120tg mice and HAND subjects, may accelerate synaptic pruning. Moreover, we cannot exclude that gp120-mediated synaptic pruning is linked to the ability of the envelope protein to decrease the levels of BDNF receptor trkB (Bachis et al., 2016b). This receptor modulates synaptic plasticity and spine density in the adult hippocampus (Yacoubian and Lo, 2000;Otal et al., 2005), as well as participates in spine maintenance (Chapleau and Pozzo-Miller, 2012). Higher proBDNF and lower trkB levels have been discovered in the postmortem hippocampus of HAND subjects compared to noncognitive impaired HIV subjects (Bachis et al., 2012(Bachis et al., , 2016b. Thus, HIV, through gp120, may promote synaptic pruning by a combination of increased p75NTR activation and a decreased trkB function. More experiments are needed to fully understand these mechanisms. DATA AVAILABILITY All datasets generated for this study are included in the manuscript and/or the Supplementary Files. ETHICS STATEMENT All studies were in accordance with the Guide for the Care and Use of Laboratory Animals as adopted and promulgated by the U.S. National Institutes of Health. The protocol was approved by the Georgetown University Animal Care and Use Committee. AUTHOR CONTRIBUTIONS IM designed the experiments and wrote the manuscript. AS designed and performed the behavioral studies, analyzed the data, and helped with the writing of the manuscript. GA and SS performed the molecular biology experiments and analyzed the data. VA designed, performed, analyzed the in vitro experiments, and helped with the writing of the manuscript. PF assisted with the interpretation of the behavioral data. All authors reviewed the results and approved the final version of the manuscript. FUNDING This work was supported by the HHS grants NS079172 and NS074916 to IM and T32NS041231 to AS from the National Institute for Neurological Disorders and Stroke.
2019-08-30T16:48:00.284Z
2019-08-30T00:00:00.000
{ "year": 2019, "sha1": "0a5a5ef8f35c0baf5bd596ef7bee42d1716f269c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2019.00398/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0a5a5ef8f35c0baf5bd596ef7bee42d1716f269c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
249191981
pes2o/s2orc
v3-fos-license
Attosecond Entangled Photons from Two-Photon Decay of Metastable Atoms: A Source for Attosecond Experiments and Beyond We propose the generation of attosecond entangled bi-photons in the extreme-ultraviolet regime by two-photon decay of a metastable atomic state as a source similar to spontaneous parametric down-conversion photons. The 1s2s $^1S_0$ metastable state in helium decays to the ground state by emission of two energy-time entangled photons with a photon bandwidth equal to the total energy spacing of 20.62 eV. This results in a pair correlation time in the attosecond regime making these entangled photons a highly suitable source for attosecond pump-probe experiments. The bi-photon generation rate from a direct four photon excitation of helium at 240 nm is calculated and used to assess some feasible schemes to generate these bi-photons. Possible applications of entangled bi-photons in attosecond time scale experiments, and a discussion of their potential to reach the zeptosecond regime are presented. We propose the generation of attosecond entangled bi-photons in the extreme-ultraviolet regime by two-photon decay of a metastable atomic state as a source similar to spontaneous parametric down-conversion photons. The 1s2s 1 S0 metastable state in helium decays to the ground state by emission of two energy-time entangled photons with a photon bandwidth equal to the total energy spacing of 20.62 eV. This results in a pair correlation time in the attosecond regime making these entangled photons a highly suitable source for attosecond pump-probe experiments. The bi-photon generation rate from a direct four photon excitation of helium at 240 nm is calculated and used to assess some feasible schemes to generate these bi-photons. Possible applications of entangled bi-photons in attosecond time scale experiments, and a discussion of their potential to reach the zeptosecond regime are presented. Quantum entanglement is a fascinating quantum phenomenon that has no classical analog [1]. Entanglement is at the heart of quantum information science, quantum sensing, quantum enhanced imaging and spectroscopy, and other emerging quantum technologies. Entanglement of photons has particularly played an important role in many areas of basic and applied research that leverage the quantum advantage. For example, entangled photons have been used in nonlinear spectroscopy [2][3][4] which goes beyond the time-frequency uncertainty limit [5][6][7]; Moreover, a linear (rather than quadratic) scaling of two-photon absorption rate versus intensity is observed with entangled photons [3,[7][8][9], which enhances the process at low intensities. As a light source, entangled photons can collectively excite uncoupled atoms [10,11], and lead to entanglement-induced two-photon transparency [9], which cannot be obtained by a classical laser source. Typical sources of entangled photons use the process of spontaneous parametric down-conversion (SPDC) in nonlinear crystals in the visible and infra-red region of the spectrum [12]. These sources generate energy-time entangled photons with correlation times on the femtosecond time scale which has been only recently directly measured [13]. SPDC has also been demonstrated in the hard X-ray regime where the correlation times are expected to be attoseconds or smaller [14]. Recent experiments using nanophotonic chips for SPDC have demonstrated entangled photon generation with broad bandwidth of 100 THz (0.41 eV ) and a high generation efficiency of 13 GHz/mW [15]. 
Here we propose a method to generate entangled photon pairs in the extreme-ultraviolet (XUV) regime with ultra-broad energy bandwidth (> 20 eV ) large enough to create correlation times on the attosecond scale. It is well known that the 1s2s 1 S 0 metastable state of helium atom, its isoelectronic ions and the 2s 2 S 1/2 metastable state of the helium ion decay predominantly by two-photon emission [16][17][18][19][20]. The emitted photons are energy-time entangled with a correlation time related to the energy spacing between the 2s and 1s levels which is 20.62 eV and 40.81 eV for the helium atom and ion respectively. This large energy bandwidth of the emitted entangled photons corresponds to correlation times in the attosecond domain, thus opening up the possibility of attosecond time scale pump-probe experiments using these photons. We first consider a gedanken experimental set-up in which we have a spheroidal cavity, with two helium atoms placed at its two foci. One of the atoms is prepared in the 1s2s ( 1 S 0 ) excited state, which is used as an emitter (atom 1), and another atom is in 1s 2 ( 1 S 0 ) ground state, which is used as an absorber (atom 2), as shown in Fig. 1. Atom 1 decays to the 1s 2 ground state by a simultaneous emission of two photons, according to selection rules. This decay channel dominates over the magnetic dipole transition to the 1s2s 3 S 1 state. Since the longlived metastable 1s2s ( 1 S 0 ) state has a long lifetime of τ = 0.0197 sec [21], and since the energy gap between 1s2s ( 1 S 0 ) and 1s 2 state is 20.62 eV , the two emitted photons have both a good correlation in frequency and a narrow window in emission time difference; energy-time uncertainty implies that they are good sources of entanglement. The bi-photons should also be correlated in angular momentum, according to the angular momentum conservation rule. However, we do not address that aspect, since inside a spheroid cavity the entangled photonpair will be collected at the absorber with equal distance optical path, irrespective of their angular distribution or momenta. In treating this process, we assume that: 1. The cavity is large enough, that no quantization of photon frequencies or Purcell effect is relevant. 2. Both atoms are deeply trapped, so no recoil effects can be observed. 3. The mirror of the cavity is 100% reflective to FIG. 1. A schematic diagram of entangled-photon generation and absorption in a spheroid cavity. The emission and absorption atoms are placed at the two foci of the spheroid, the photons are reflected by the boundary of the cavity, and propagate along equal paths to reach the absorber. The shape of the cavity will influence the rate of this process, by a geometry factor as discussed in the Supplementary Material. all the frequencies, so no energy loss occurs during reflection of the photons. Further, it is noted that the emitted bi-photons are also polarization entangled but we do not discuss polarization entanglement in this letter. All possible polarization configurations are considered in our calculation. Inside the cavity, there are three stages of photoelectric processes : the population inversion of atom 1, the spontaneous two-photon emission of atom 1, and the photoabsorption of atom 2. In the first stage, we prepare the singlet 1s2s state using four photon absorption with each photon having energy ω 0 = 5.155 eV (240.54 nm). 
With a monochromatic incident electric field E 0ˆ 0 cos (ω 0 t), the four-photon excitation amplitude is, where |g is the 1s 2 ground and the initial state, |e is the 1s2s excited and the final state, |j 1,2,3 are the intermediate states. Since The resulting unnormalized state following the excitation, which is also the initial state for the emission process, is: eg |e . The photon-atom interaction for the second and third stages is: Where r is the space vector of the electron, E is the electric field, and V is the quantization volume, with the electric field generated by a single photon proportional to 1/ √ V . The photon modes s include the frequency ω s , propagation directionk s and polarization direction s . From a second-order perturbation analysis, the amplitude of the emission of two photons (|γ ⊗ |vac → |g ⊗ |1 s , 1 s ) is, |j denotes the intermediate states for the emission process. ∆ ej (∆ eg ) is the energy difference between the initial and intermediate (final) atomic state. From Eq. 3 we obtain the He 1s2s ( 1 S 0 ) lifetime as τ = 0.0197 sec, which agrees with the experimental value [21]. Since no singlet energy level exists between E i and E i +∆ eg for atom 2, the absorption process can only start after both photons have been emitted, with ω s + ω s = ∆ eg . The modes of the photons are not detectable inside the cavity, therefore the entangled photon state can be obtained by summing over all the modes (s, s ) [22]: Based on a second-order perturbation calculation, The entangled-photon absorption amplitude can be written as, |i , |m and |f denotes the initial, intermediate, and final states for atom 2. E(t 1,2 ) are the electric fields of the photons that are bounced back by the cavity (whose frequencies stay the same but propagation and polarization directions have changed), being absorbed at time t 1 and t 2 . The evaluation of Eq. 5 depends on the shape of the cavity, and it turns out that the absorption process can be described by a rank-0 tensor, which is discussed in the Supplementary Material. The time correlation of the entangled photon pair can be found from vac|E(t 2 )E(t 1 )|2ph , which is proportional to the fourier transformation of the spectrum [8,9,23], as The right hand side of Eq. 6 is plotted in Fig. 2, versus the time difference between two absorption events of the two photons. The time scale between the two absorption events is around ±4 a.u. which gives a correlation time [5] around 193 attoseconds. Finally, according to Eq.1, 3 and 5, the rate for the excitation, emission and absorption where an entangled photonpair is transferred coherently, is: Θ is a geometry factor which is introduced in Eq. S3, whose values are shown in Figure S1 in the Supplementary Material. Especially, for a spherical cavity, It is seen that the transition rate is proportional to J 4 . The entangled-photon absorption rate is known to be proportional to the beam intensity (when the beam intensity is not very strong) [2,7,9], and our result can be regarded as a generalization of this linearity. Since our excitation process is a four-photon process, we can consider the four-photon flux as a whole, which is the input of the system, J (4) = J 4 . Therefore it is an expected outcome that R trans ∝ J (4) . The above calculations assume a direct multiphoton excitation from 1s 2 to 1s2s. 
Since the 1s2s 1 S 0 metastable state has a narrow linewidth of ∼ 50 Hz, ideally a multiphoton excitation to this state requires intense lasers with a linewidth smaller than 50 Hz at a wavelength of 240 nm. While multiphoton excitations of metastable states with narrow linewidth lasers have been previously demonstrated [24], achieving the required high intensities with a narrow-band 240 nm laser is currently challenging. However, femtosecond lasers that can achieve peak intensities of ∼ 10 14 W cm −2 are readily available. Using our calculations for the four-photon excitation amplitude with a monochromatic electric field and ignoring loss due to ionization, we estimate the helium 1s2s 1 S 0 multiphoton excitation rate with a femtosecond laser at 240 nm having a typical bandwidth of ∼ 5 THz and obtain a bi-photon generation rate of ∼ 10 11 s −1 (see Supplementary Material and figure 3 (a)). An alternative scheme using a lambda-type transition between 1s 2 , 1s2p, and 1s2s states could be used to achieve significant excitation. The energy levels of the latter two are 21.22 eV and 20.62 eV above the ground state, respectively. A two-step sequential excitation to first excite 1s 2 → 1s2p and then 1s2p → 1s2s could be used. The oscillator strengths for one-photon excitation processes are f a→b = 2∆ ba | b|ˆ 0 · r|a | 2 , which gives f 1s 2 →1s2p = 0.28 and f 1s2p→1s2s = −0.36 for the two steps. To achieve this two-step sequential excitation, a high photon flux helium lamp source can be used in the first step to excite 1s2p and a 2059 nm laser can transfer population to the 1s2s state (see figure 3 (b)). The ∼ 1 GHz linewidth of the 1s2p state makes transitions to the 1s2s state using a broadband laser more feasible in comparison to direct multiphoton excitation. Currently available helium lamp sources are capable of generating ∼ 10 15 photons s −1 . Using a high pressure helium target, nearly all of these photons could be absorbed to generate helium atoms in the 1s2p state. A high repetition-rate pulsed laser source at 2059 nm, could transfer nearly all these excited helium atoms to the 1s2s state. We estimate a bi-photon generation rate of ∼ 10 13 s −1 using this method (see Supplementary Material). Another alternative approach to achieve significant population of the 1s2s singlet metastable state is to use Stark-chirped rapid adiabatic passage (SCRAP), previously proposed to excite the 2s metastable state in a hydrogen atom [25,26]. In this technique, a pump pulse excites the metastable state via a multiphoton transition in the presence of a Stark pulse that Stark shifts the 1s2s state across the bandwidth of the pump pulse (see figure 3 (c)). The combined effect of the two pulses results in a Landau-Zener-type adiabatic passage that can significantly populate the 1s2s state. The SCRAP technique [26] can also suppress ionization loss by laser-induced continuum structure (LICS) [27][28][29]. If we ignore ionization loss, for a typical femtosecond laser pulse-width of 50 fs with a bandwidth of 8.8 THz, rapid-adiabatic passage can excite nearly all atoms in the focal volume. When ionization loss is considered, since LICS can suppress ionization loss, it is reasonable to assume that ∼ 1% of the atoms can be excited using SCRAP for every pair of pump and Stark pulses. With ∼ 10 13 atoms in the focal volume corresponding to a 100 µm spot size and 1 mm path length at 1 bar target pressure, this results in ∼ 10 11 atoms excited per pulse. 
At a femtosecond pulse repetition rate of 100 kHz currently available, this results in an entangled bi-photon generation rate of 10 16 s −1 (see Supplementary Material). Among the three methods discussed here to excite helium to the singlet 1s2s state, the SCRAP method is expected to provide the highest excitation and hence the highest entangled bi-photon generation rate. The bi-photons from the decay of the 1s2s state are emitted in all directions with an approximate distribution given by 1 + cos 2 (θ) [20], where θ is the relative angle between the entangled photons. The photons that are emitted in a direction orthogonal to the excitation laser propagation direction can be collected within a large solid angle and sent along independent time-delayed paths towards a pump-probe target. Figure 3 (d) shows a schematic of a proposed experimental setup for generation of these entangled photons and their utilization in an attosecond pump-probe experiment. In this scheme, a grazing incidence toroidal mirror collimates the emitted photons which are then split into two halves using a grazing incidence split mirror that introduces a controllable time-delay between the two halves of the beam. Collecting bi-photons emitted along the same direction within a large solid angle (as opposed to those emitted in opposite directions), ensures that no time-smearing is introduced in the arrival times of the bi-photons. A 10% collection solid angle will result in 1% collection of biphotons. The split beams are then focused using a second toroidal mirror onto the target gas jet. A pump-probe experiment with attosecond time resolution can be performed by measuring a photo-ion or photo-electron signal arising from the absorption of entangled bi-photons by an atom or molecule. Recent work on entangled twophoton absorption sets upper bounds on the enhancements in two-photon absorption cross section with entangled photons when no intermediate resonances are involved [30,31]. Assuming a bi-photon rate of ∼ 10 12 s −1 at the pump-probe target and a two-photon cross section of 10 −50 cm 4 s, a pump-probe photoionization rate of ∼ 1000 ions per second, which is well-above detection threshold of ion spectrometers, is expected. When intermediate resonances are involved, such as broad absorption resonances in molecules typically studied in attosecond experiments, this photoionization rate can be increased by a few orders of magnitude (see Supplementary Material). Further, measuring a pump-probe photoionization signal as opposed to a photon absorption signal as in previous two-photon absorption experiments allows detection of low absorption rates. Such entangled photon pump-probe experiments will extend the capabilities of attosecond science, where currently attosecond pulses from high-order harmonic generation [32] or free electron laser [33] sources are used. The entangled photon generation scheme discussed here can be extended to the soft X-ray (SXR) regime using helium-like ions. Two-photon decay in helium-like ions has been well studied [16,18,20]. Similar to the 1s2s 1 S 0 state of neutral helium atoms, the 1s2s 1 S 0 states of helium-like ions such as N 5+ , O 6+ and Ne 8+ , predominantly decay by two-photon emission with a rate proportional to Z 6 , where Z is the atomic number. 
The large energy difference between such excited states and the ground state of the ions, which can be in the range of several hundred to thousands of electron-volts, results in entangled photon correlation times of a few attoseconds to zeptoseconds. For example, the 1s2s 1 S 0 state of Ne 8+ is located ∼ 915 eV above the Ne 8+ ground state and this bandwidth corresponds to an entangled photon correlation time of ∼ 5 attoseconds. The two-photon decay rate in this case is ∼ 1 × 10 7 s −1 which is significantly larger than the corresponding rate for neutral helium atoms of ∼ 5 × 10 1 s −1 . Ne 8+ has been previously generated using strong femtosecond laser fields [34,35] as well as using strong femtosecond X-ray pulses from free electron lasers (FEL) [36] both of which can potentially also create Ne 8+ in the 1s2s 1 S 0 excited state. In one possible scheme, strong laser field ionization could generate Ne 8+ ions in the ground state and an FEL could excite them to the 1s2s 1 S 0 state by two-photon excitation which then generate highly broadband entangled bi-photons at SXR energies. It has been previously demonstrated experimentally that the bandwidth required to generate fewattosecond pulses can be obtained from HHG using midinfrared pulses [37]. Further, it has been theoretically shown that zeptosecond pulses can be generated from HHG when suitable filters are used [38]. However, the shortest measured attosecond pulse is currently 43 attoseconds [32]. Our approach of using entangled photons from two-photon decay of helium-like ions offers an alternative path for carrying out ultrafast measurements in these extreme regimes of a few-attoseconds to zeptoseconds. In conclusion, an unconventional approach is presented here for generating attosecond entangled bi-photons in the XUV and SXR regimes using two-photon decay in helium atoms and helium-like ions. Multiple alternative schemes can be used to excite the 1s2s 1 S 0 metastable state in helium for which excitation rates have been estimated and an experimental scheme is suggested to collect and use the emitted XUV bi-photons in attosecond pump-probe experiments. The calculated photoionization rates indicate that attosecond pump-probe experiments with entangled photons are feasible. Potential extension of such metastable excitations to helium-like ions is additionally proposed, whereby SXR bi-photons can be generated with entanglement times in the fewattosecond range with the possibility of reaching the zeptosecond regime. This approach can open doors to using XUV/SXR entangled photons in quantum imaging and attosecond quantum spectroscopy of atomic, molecular and solid-state systems. Abstract This supplementary material provides details on the calculation of the geometry factor for the spheroid cavity, estimates for the entangled bi-photon generation rates in various schemes, and estimates for photoionization rates from entangled two-photon absorption in an attosecond pumpprobe experiment. I. GEOMETRY FACTOR FOR THE SPHEROID CAVITY In this section, we will discuss the influence of the cavity geometry on the photoabsorption process, and evaluate the geometry factor Θ introduced in Eq. 7 in the main article. We start with summing over all the optical modes, to obtain an entangled-photon state: However, in a spheroid cavity we need to reparametrize everything above. 
We assume the spheroid cavity has one major axis length 2a and two minor axes length 2b (a ≥ b), the two foci are aligned along the major axis of the spheroid (which isẑ axis), and the distance between them is 2l (l = √ a 2 − b 2 ). The photon is emitted from one focus, and no matter what direction it propagates, it bounces back and passes through the other focus. * wang3607@purdue.edu † niranjan@purdue.edu The propagation vector before and after the reflection can be parameterized as: where L ± = b 2 sin 2θ + (l ± a cosθ) 2 . The polarization basis fork andk are, (1) is perpendicular to the incident plane and doesn't change upon reflection, butˆ (2) does: (2)( ) =k ( ) ׈ (1)( ) . The angular integral is, We now consider the photon pair in modes s and s , their propagation directions are random and independent with each other. However, constrains are set on their polarization directions. In the spontaneous decay from e = 1s2s ( 1 S 0 ) to g = 1s 2 state, only the isotropic part of the dipole operator can survive: In the absorption process, since the photons are not detectable inside the cavity, all the angular directions are integrated coherently: FIG. S1. A plot of geometry parameter Θ versus the aspect ratio a/b in an log-log scale. When a = b, the cavity is a sphere and the geometry parameter obtained its maximum Θ = 64π 2 27 . Where [...] (k) µ is an rank-k spherical tensor with component µ(|µ| ≤ k), as a result of the tensor product of two vectors [1]. Given the emitted-photon tensor have only rank-0 component, from the orthogonality of spherical tensors, we have k = 0, µ = 0. So only rank-0 transition is allowed in the absorption process. This conclusion will no longer hold once the directions of the photons can be detected, i.e., by a recoil effect of the atoms. The geometry factor Θ is introduced to denote the polarization part of the integration. When the cavity is a prefect sphere, Eq. S3 gives Θ = 64π 2 27 . The change of Θ versus the aspect ratio of the cavity can be found in Figure S1, from the range of a = b to a = 148b. As the cavity becomes prolate spheroid shaped, Θ decrease with the aspect ratio a/b, and become stable at around Θ = 8π 2 9 . II. EXCITATION RATES FOR THE HELIUM 1s2s 1 S 0 STATE In this section, we give estimates for the rate of bi-photon generation from the helium 1s2s state under realistic experimental conditions, for the schemes outlined in the main article. For a helium gas pressure of 1 bar, the number density N ∼ 10 19 atoms/cm 3 . Such gas densities are easily achievable in static gas cells. In the following calculations, we will assume a pump focal spot diameter d ∼ 100 um, and interaction length L ∼ 1 mm. A. Four-photon excitation using 240 nm narrow-band and broad-band lasers The multiphoton absorption coefficient is related to the transition rate by the relation where I is the intensity of the incident radiation, R (n) the n-photon transition rate, and N is the number density in atoms/cm 3 . The variation in the intensity of radiation with distance is then given by for single-photon absorption, and for multiphoton absorption. The per atom four-photon transition rate for the 1s2s state in helium at 240 nm can be calculated as: where D The absorption fraction is given by The number of 240 nm photons per second in the focal volume for the given intensity (narrow-band continuous-wave laser) and focal spot size is ∼ 10 28 . This gives a bi-photon generation rate of ∼ 10 22 per second. 
The above calculation assumes a narrow-bandwidth (< 50 Hz) laser at 240 nm. To estimate the bi-photon generation rate for a broadband femtosecond 240 nm laser, we incoherently integrate over the different frequency components and write the 1s2s excitation rate (Eq. S7) as For a broadband 240 nm laser with a bandwidth of 5 THz, an excitation rate and hence a bi-photon generation rate of ∼ 10 11 s −1 is obtained. B. Sequential excitation using helium lamp and 2059 nm laser We consider a two-step sequential excitation to first excite 1s 2 → 1s2p and then 1s2p → 1s2s. From our calculations, the oscillator strengths for one-photon excitation processes are f 1s 2 →1s2p = 0.28 and f 1s2p→1s2s = −0.36 for the two steps. The corresponding transition rates can be calculated using where µ is the reduced mass, and the delta function is to be replaced by the lifetime of the excited state. Incoherent lamp sources that can generate ∼ 10 15 photons/s, resonant with the 1s 2 → 1s2p transition, and with a spot size of 100 um are commercially available (SPECS GmbH, µSIRIU S). The corresponding intensity for the lamp source is 34 W cm -2 , and we assume an intensity of 10 12 W cm -2 for the 2059 nm excitation laser. Assuming a 100 um spot size for both the excitation beams gives the transition rates R 1s 2 →1s2p ∼ 3.7 × 10 9 and R 1s2p→1s2s ∼ 5 × 10 21 per second. If we denote the lifetime of the 1s2p state by τ , we can calculate the steady-state number density of excited atoms after the first excitation step using [2]: At a pressure of 1 bar, the focal volume contains 7.8 × 10 13 neutral atoms, 47% of which are excited to the 1s2p state. Note that the number of 2059 nm photons for the given intensity and focal spot size is ∼ 8 × 10 28 , which is much larger than the number of excited atoms in the focal volume. It is possible to saturate the 1s2p → 1s2s transition and all the atoms in the 1s2p state could be promoted by the 2059 laser to the 1s2s state. This gives a bi-photon generation rate of ∼ 3.6 × 10 13 per second. Note that for a sufficiently strong pulsed 2059 nm laser at high repetition rates, both the number of photons per second in the focal volume and the excitation rate per atom will satisfy the above mentioned criteria, and thus, give a similar rate of bi-photon generation. C. Four-photon excitation using SCRAP We consider a scenario where rapid adiabatic passage (RAP) is used to transfer population from the 1s 2 to the 1s2s state via a four-photon coupling. Neglecting ionization leakage from the 1s2s state, the transfer efficiency is limited only by non-adiabatic transitions between the adiabatic states, which will be calculated in this section. The single-photon Rabi frequency Ω eg = µ eg · E / can be generalized for a four-photon transition as for linearly polarized light. If we consider the 1s 2 , 1s2s ( 1 S 0 ) subspace as a two-level system, the four-photon Rabi frequency at a 240 nm laser intensity of 10 14 W cm -2 becomes Ω eg = 7.35 × 10 −5 a.u. = 1.9 × 10 13 s −1 (S14) Let us assume that a Stark pulse sweeps the transition energy across the entire bandwidth of a pump pulse of duration τ seconds and bandwidth δ Hz. We will further assume a static detuning δ/2 for the four-photon pump pulse, such that the Rabi frequency Ω = Ω 2 eg + δ 2 /4. The rate of leakage due to non-adiabatic transitions can be calculated using the Landau-Zener formula [3]: where γ ∼ ∆ /4π. Assuming a linear Stark sweep, ∆(t) = t (δ/τ ) Hz (−τ /2 ≤ t ≤ τ /2), and γ = δ/4πτ . 
The corresponding transition rate is Γ(t) = Ω 2 δ/4πτ t 2 (δ/τ ) 2 + δ/16πτ (S16)
2022-05-31T01:15:48.757Z
2022-05-30T00:00:00.000
{ "year": 2022, "sha1": "24344577881365fe911a5989fae02373843e52eb", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.4.L032038", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "ab455f473ff5976592ae4df89a1ce23ce8497dde", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
22473026
pes2o/s2orc
v3-fos-license
The self-esteem and anxiety of children with and without mentally retarded siblings Background: The study was carried out with the aim of determining the factors affecting and to evaluate anxiety situations and self-esteem of children with and without mentally retarded siblings. Materials and Methods: The sampling included 227 healthy children: 108 of them have mental retarded sibling and 119 of them do not have mental retarded sibling. The context of this study consisted of 15-18 year of age healthy children with mentally retarded siblings and 15-18 year of aged healthy children having at least one sibling between the dates February 15st and June 26st 2010. Personal Information Form, Rosenberg Self-Esteem Scale and Trait Anxiety Scale were used. Results: It was found out that trait anxiety of 17-18 aged of children with mental retarded sibling (47.04 ± 7.3) was higher than that of the children without mental retarded siblings (44.05 ± 11.23) (P < 0.05). It was observed that self-esteem of children with mentally retarded sibling was not affected from the handicap of their siblings (P > 0.05). Trait anxiety score averages of children with mentally retarded sibling and experience some difficulties due to his or her siblings’s handicap (47.00 ± 7.76) were found higher than those of those of the children without any problem with the environment (42.61 ± 7.48) (P < 0.05). Conclusion: Although the average score of trait anxiety and self-esteem in both groups were not significant different, score of trait anxiety for children with mentally disabled siblings was higher in comparison. It was concluded that anxiety of children with and without mentally retarded siblings increased as self-esteem of these children decreased. INTRODUCTION Mental retardation is a constant state of regression and inadequacy in effective adaptive behaviours that arises as a result of ongoing senescence, discontinuation and regression in development and functionality due to various reasons before birth, during birth and after birth. [1]According to the World Health Organisation, 10% of the population in developing countries and 12% of the population in developing countries is disabled. [2]ffects on other siblings; deprivation of parental interest, increase in sibling care responsibilities and exposure to pressure caused by the disabled sibling's limitations, being labelled by society, deprivation of normal sibling interaction and role changes within the family. Meyer and Vadasy [10] categorized the concern of children with disabled siblings under eight headings; feeling guilty because of their sibling's disorder, feeling embarrassed and avoiding contact with their sibling because of their behavior or the way they look, feeling frightened because they might get the same illness as their sibling, feeling jealous or angry because they are paid less attention, isolating themselves because they think nobody understands what they are going through, feeling pressurised to achieve more in order to compensate for what their sibling cannot achieve, feeling that they are obligated to look after their sibling even if it coincides with other responsibilities and plans they have made with their friends and feeling the need to learn more about their sibling's disorder. 
Orsmond and Seltzer [11] expressed that, in general, disabled children tend not to think much about their siblings through life.According to Atasoy [12] determining the needs of children with disabled siblings in terms of influence and development is extremely important in self development, family interaction and developing a support system. It is necessary to determine the difference between those with and without disabled siblings and define the issues in order to inform healthy children about their disabled siblings, share their emotions and relieve their psychological pressure.In conclusion, defining problems may help the futures of families, with both disabled and healthy children and help them adapt to society. 0] The purpose of this study is to investigate the self-esteem and anxiety state of children with and without disabled siblings and determine the effective factors. Sample This study is a cross-sectional study.The population of the study comprises of healthy children aged between 15 and 18 that have mentally disabled siblings, registered with Private Education and Rehabilitation Centers of the Directorate of National Education and healthyhildren aged between 15 and 18, attending Directorate of National Education High Schools, having at least one sibling and with no disabled siblings.The sample group comprised of at 15-18 age of siblings of children that were diagnosed with a mental disability at least 6 months ago, registered with Private Education and Rehabilitation Centers of the Directorate of National Education in central Erzurum (City in the east of Turkey) willing to participate in this study, were literate and available (118).In the event that mentally disable children had more than one literate healthy sibling, the sibling with the smallest birth year, within the given age range was chosen to participate in this study.108 children were included in the group as seven children with mentally disabled siblings did not attend the interview and three children left the interview half-way through.( 1 A multistage sampling method was used for children without disabled siblings; By clustered sampling method [21] two high schools were drawn among 36 Directorate of National Education High Schools in central Erzurum and one class was chosen from every year (year 9, year 10, year 11 and year 12).A total of 119 individuals were chosen from these classes using a simple random sampling method.The total number of individuals participating in the study was 227.Children in the control group weren't matched with the case group. Data collection A Personal Information Questionnaire, the Rosenberg Self-Esteem Scale and the Trait Anxiety Scale were used to gather data for the study.The Personal Information Questionnaire, prepared by the researcher in accordance with information in literature, [1,4,7,12,13] comprised of two sections; the first section comprised of common questions based on the sociodemographic characteristics of children with and without mental disabled siblings; the second section comprised of questions that asked those with disabled siblings about their siblings. Rosenberg self-esteem scale Rosenberg developed the Rosenberg Self-Esteem Scale in 1965 as an instrument to measure the self-esteem directed at adolescents.The scale was adapted to Turkish by Çuhadaroğlu in 1985. [22]Answers of the items of the likerttype scale are designed of four options.Adolescents are expected to choose from one of the four points; "strongly agree," "agree," "disagree" and "strongly disagree." 
The reliability-validity coefficient of the scale is 0.71.The Cronbach Alpha for the scale in our study was 0.70.The minimum score for Rosenberg Self-Esteem Scale is 0 and the maximum is 6. Trait anxiety scale [25] ) to measure the anxiety level of individuals aged 14 and above, was used.The Trait Anxiety Scale is a 20-item, four-point likert type scale.The minimum score for the Trait Anxiety Scale is 20 and the maximum is 80.5] The reliability-validity coefficient of the scale is between 0.83 and 0.87. [23,24]The Cronbach Alpha for the Trait Anxiety Scale in this study was 0.84. Ethical considerations Prior to the study, ethical permission was obtained from the Ethical Board Directorate, Clinical Researches, Provincial Directorate of Health, Governorship of Erzurum and official approval was received from the Provincial Directorate for National Education, Governorship of Erzurum.It was key that the children and families participating in the study where doing so voluntarily.Both written and oral permission was obtained for families, after explaining the purpose of the study. Statistical analysis Statistical analyses were performed using the statistical software program SPSS (SPSS Inc., Chicago, IL, USA) for Windows (version 10).Percentage distribution, mean, the t-test for independent groups, the Chi-square test, the Kruskal-Wallis test, the Mann-Whitney U test (nonparametric tests were used with the aim of testing variables of ashame and guilty due to having mental retarded sibling [Mann-Whitney U test] and age when the mental retardation occured and the reason of mental retarded [the Kruskal-Wallis test]) and variance analysis was used to analyse data.For all the analyses, P < 0.05 was considered to be statistically significant. RESULTS 45.4% of children with mentally disabled siblings participating in the study were aged between 17 and 18, of which 47.2% were boys.48.7% of children without mentally disabled siblings participating in the study were aged between 17 and 18, of which 47.1% were girls.No significant difference was found between the groups with mentally disabled siblings and without mentally disabled siblings when compared according to age, gender, occupation of mother, occupation of father, and what number child they were [Table 1]. It was determined that the average age of mentally disabled children was 13.02 ± 6.32 years, 58.3% were boys, the reason for their mental disability was unknown for 31.5% and 62.1% were congenitally-disabled.68.5% of children with mentally disabled siblings feared that they would become mentally disabled, 77.8% were embarrassed of their disabled There was no significant statistical difference between the trait anxiety score mean and the self-esteem score mean among groups determined by the gender of the mentally disabled sibling, the reason behind the mental retardation and age the mental retardation occurred (P > 0.05). 
There was no significant statistical difference between the trait anxiety score mean and the self-esteem score mean among children with mentally disabled siblings according to the fear of becoming disabled like their siblings, being embarrassed by their siblings and their parents showing their mentally disabled siblings more attention [P > 0.05, Table 5].The trait anxiety score mean for children that felt guilty about their sibling's disability was higher in comparison to the trait anxiety score mean for children that felt no guilt about their sibling's disability; there was a significant statistical difference between the two groups (P < 0.05). The trait anxiety score mean for children with mentally disabled siblings that experienced issues in society (47.00 ± 7.76) was higher than those that did not experience issues (42.61 ± 7.48); there was a significant statistical difference between the two groups (P < 0.05). The self-esteem score mean for children experiencing problems with society due to their mentally disabled sibling (2.16 ± 1.64) was lower in comparison to those that did not experience issues (1.59 ± 1.32). In this study, there was a negative relationship (P < 0.01) between score means of the Trait Anxiety Scale and Self-Esteem Scale for those with mentally disabled siblings (r = 0.466), and those without mentally disabled siblings (r = 0.536).Anxiety and self-esteem were inversely proportional for children with and without mentally disabled siblings; as self-esteem decrease, anxietyrised (P < 0.01). DISCUSSION The difference, by age, gender, parents' occupation and what number of the child, have been found insignificant across the significance level (P > 0.05, Table 1).This finding indicates that the children have similar features in terms of the stated variables. In this study, 31.5% of children with mentally disabled siblings did not know the reason as to why they were disabled and the mental retardation was congenital mostly. In another study, disability reasons of 30% of those with severe mental disability, and 50% of those with mild mental disability were not known. [26]In a study conducted by Sarıhan, [27] he indicated that 38% of disabilities occurred congenitally and 37% did not know the reason behind the disability. In this study, more than half of children with mentally disabled siblings were frightened of becoming disabled (68.5%), were embarrassed because of their disabled sibling (77.8%), yet felt no guilt about their sibling's disability (95.4%).Meyer and Vadasy [10] stated that children with mentally disabled siblings had various concerns; feeling guilty because of their sibling's disorder, feeling embarrassed and avoiding contact with their sibling because of their behaviour and the way they look and feeling frightened because they might get the same illness as their sibling. After interviewing children with siblings that had various disabilities, McHugh [28] stated that nearly all of them suffered from guilt, embarrassment, fear and other similar emotional reactions.Findings of this study coincide with those found in studies conducted by Farber and Rychman, cited from Apalaçi [8] and McHale and Gamble. [7]cording to Lobato's [29] compilation, 45% of students with a disabled sibling stated the disadvantages of having a disabled sibling in a study conducted by Grossman, in which he interviewed 83 university students that had mental retardation.These emotions included guilt, embarrassment, neglect, and anger toward their disabled sibling. 
In the study, only 8% of families paid more attention to the disabled sibling.65.7% of children with a mentally disabled sibling stated that they did not experience social issues because of having a mentally disabled sibling.In his study, Sarıhan [27] stated that 68% of parents with disabled children put time aside for their children, while 87% of parents without disabled children put time aside for their children. The trait anxiety score mean for children with mentally disabled siblings was higher in comparison to trait anxiety score mean for children without mentally disabled siblings; however, there was no significant statistical difference among both groups [P > 0.05, Table 2].Şenel [20] found that the trait anxiety mean for children with disabled siblings was significantly higher than those with healthy siblings. There was no significant statistical difference between the self-esteem score means for children with and without disabled siblings [P > 0.05, Table 2].Similar to our findings, Auletta and DeRosa [30] compared the self-concept of 70 adolescents with severe mentally disabled siblings to the self-concept of 70 adolescents with healthy siblings.There was no significant difference between the self-concept of both groups.Dyson [31] compared the self-esteem of 71 brothers and sisters; 37 with siblings suffering from the developmental disorder (physically and mentally handicapped, growth deficiency, speech impediment, learning disability and hyperactivity) and 34 with healthy siblings.He concluded that there was no significant between both groups in terms of self-esteem.Rodrigue et al. [32] compared the self-esteem of three groups; 19 children with severely autistic siblings, 20 children with siblings suffering from Down syndrome and 20 children with healthy siblings.They concluded that there was no significant difference between their self-esteem.These findings are similar to that of this study.In a study conducted by Verté et al., [33] they stated that children with disabled siblings receiving good social support had a high level of self-esteem.Van Riper [34] also stated that the level of self-esteem was high for children with disabled siblings.Furman and Buhrmester [35] emphasised that healthy children living with disabled siblings were more complacent toward personal differences.Aydın [36] stated that having a disabled sibling enables the child to develop empathy, especially increasing helpful behaviors, which ultimately increased the level of selfesteem. The level of trait anxiety for children with mentally disabled siblings aged between 17 and 18 was higher than those without mentally disabled siblings.The ages of children with and without mentally disabled siblings affected the level of trait anxiety.The reason why the level of trait anxiety is high in children with mentally disabled siblings may be because their responsibility toward their disabled sibling increases with age and they become more aware of the adverse results caused by the disability and the healthy children are in constant contact with their disabled sibling. The level of trait anxiety for girls with mentally disabled siblings was higher in comparison to the level of trait anxiety for girls without mentally disabled siblings [P < 0.05, Table 3].According to Breslau et al. 
[37] healthy siblings, older than their disabled sibling, especially girls, experience more difficulties when adapting.In their studies, McHale and Gamble, [7] Gath and Gumley, [38] McHale and Harris [39] and Gold [40] stated that girls took on more responsibilities regarding the houseworks and caring for their disabled sibling in comparison to the boys in the family.Lindsey and Stewart [41] emphasised that extensive responsibility increases adverse behaviour.McHale and Gamble [7] conducted a study, in which they assessed the psychological adaptation and sibling relationships of children with and without mentally disabled siblings.They concluded that sisters (girls) had a higher depression and anxiety score than brothers (boys); sisters were left to attend and look after their siblings more often, sisters were more adversely affected than brothers and that the difference between the score mean of children with mentally disabled siblings and the score mean of children without mentally disabled siblings was significant. Self-esteem showed statistically no significant difference for children with and without mentally disabled siblings based on age, gender, occupation of mother, occupation of father, and what number child they were (Table 4) In this study, the difference between the trait anxiety and self-esteem for children with and without mentally disabled siblings was not significantly statistical based on the gender of the disabled sibling, the reason behind the disability, the age at which the disability occurred, the fear of becoming disabled, being embarrassed by the disabled sibling, and the amount of attention parents paid to the disabled sibling (P > 0.05). Kraemer and Blacher [42] conducted a study on 77 parents with healthy children aged between 7 and 18 and children with Down syndrome.They concluded that having a disabled child in the family had no adverse affect; in fact they had a positive effect on their healthy siblings.Taking on important family roles increases the self-confidence of healthy children, makes them feel responsible and enables them to mature. In the study, the level of trait anxiety was higher for those that felt guilty about their sibling's disability in comparison to those that felt no guilt about their sibling's disability. McHugh [28] stated that growing up with a disabled sibling was extremely distressful for a child, and that the guilt felt adversely affects their lives for many years.Gargiulo [43] emphasised that healthy siblings frequently felt guilty because of their bad feelings towards their disabled sibling, or as a result of being mean to their disable sibling. Children afraid of being disabled and children embarrassed by their disabled sibling had a higher trait anxiety score mean.Gargiulo [43] stated that healthy children embarrassed by their disabled siblings faced fear.They fear that they may become disabled in the future, or that their children will be disabled. There was a significant statistical difference in trait anxiety score means based on whether or not the child with the disabled sibling experienced social issues [P < 0.05, Table 5].The level of trait anxiety was higher in children experiencing social issues.McHale and Gamble [7] emphasise that growing up with a disabled sibling changes the daily life of healthy siblings in many ways and caused psychological adaptation and development difficulties. 
There was a negative relationship between the trait anxiety and self-esteem of children with and without mentally disabled siblings.As self-esteem decreased, trait anxietyincreased.McHale and Gamble, [7] and Apalaçi [8] stated in their study that self-esteem decreased as trait anxiety increased.These findings support our study. CONCLUSION Although the average score of trait anxiety and self-esteem in both group were not significant different, score of trait anxiety for children with mentally disabled siblings was higher in comparison to the trait anxiety score for children without mentally disabled siblings.Self-esteem showed no significant difference for children with and without mentally disabled siblings based on age, gender, occupation of mother, occupation of father, and what number child they were.The trait anxiety of children experiencing social issues because of their sibling's disability was higher in comparison to those that did not experience any social issues. In the study, it was concluded that anxiety of children with and without mentally retarded siblings increased as selfesteem of these children decreased. ) The sibling of children diagnosed with mental retarded 6 months ago.(2) Those who are smaller or older than 15 and 18 years of age.(3) Those who are diagnosed with psychatric.(4) Those who aren't communicated with.(5) Those who are illiterate are excluded from the study content. Table 1 : A comparison of the demographic characteristics for children with and without mentally disabled siblings* Demographic characteristics Children with mentally disabled siblings (n = 108) Children without mentally disabled siblings (n = 119) Total P value n (%) n (%) n (%) 6% felt guilty about their sibling's disability and 34.3% experienced issues with society because of their sibling's disability.The trait anxiety score mean for children with mentally disabled siblings (44.12 ± 7.82) was higher compared with the trait anxiety score mean for children without mentally disabled siblings (43.67 ± 11.15).However, there was no significant statistical difference between their trait anxiety scores [Table2].The trait anxiety score mean for children with mentally disabled siblings (47.04 ± 7.53) aged between 17 and 18 was higher in comparison to the trait anxiety score mean for children without mentally disabled siblings (44.05 ± 11.23) aged between 17 and 18 [P < 0.05, Table3].The trait anxiety score mean for girls with mentally disabled siblings was 47.14 ± 12.90 and the trait anxiety score mean for girls without mentally disabled siblings was 46.05 ± 7.90 (P < 0.05). *Column percent is taken in the table sibling, 4. 
Table 3 : A comparison between score means of the trait anxiety scale for children with and without mentally disabled siblings based on their demographic characteristics Demographic characteristics Trait anxiety Test and significance Children with mentally disabled siblings (n = 108) SD = Standard deviation, MWU = Mann Whitney U test Table 2 : A comparison between score means of the trait anxiety and self-esteem for children with and without mentally disabled siblings SD = Standard deviation Table 5 : A comparison between score means of the trait anxiety scale and self-esteem scale for children with mentally disabled siblings, depending on their views about their mentally disabled sibling (n = 08) SD = Standard deviation; MWU = Mann Whitney U test Table 4 : A comparison between the score means of self-esteem scale for children with and without mentally disabled siblings based on their demographic characteristics SD = Standard deviation, MWU = Mann Whitney U test
2017-06-18T12:40:15.800Z
2013-10-14T00:00:00.000
{ "year": 2013, "sha1": "903078a992762a1302acd7c31249b7df288b95b9", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParseMerged", "pdf_hash": "903078a992762a1302acd7c31249b7df288b95b9", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234887108
pes2o/s2orc
v3-fos-license
A Framework Analysis for Lean Transformation: A Case Study of a Public Utility in Greece Purpose: Nowadays, the worldwide economic recession has created numerous challenges in the Public Sector of Greece, which in turn affect the sustai-nability of public enterprises. Overall, there is a high pressure to increase productivity and reduce operating costs. The need to adapt to the new environment is pushing public organizations to implement Lean Management strategies in accordance with Private Sector standards. However, the limited budget, public deficits and debts, bureaucratic culture, political dependence and lack of transparency present several issues. These pose a serious threat to successful implementation of Lean practices in the long-run. Lack of relevant research and dichotomous role of public service organizations make this study very interesting. This research paper is aiming at exploring and highlighting the impact of the most important Critical Success Factors (CSFs) for the effective integration of Lean Management principles in Public Utility organizations in Greece by combining bibliographic research and empirical—with the quantitative method-research through case studies. Research Methodology: The research method combines bibliographic research and empirical—with the quantitative method. Quantitative method was employed on a sufficient sample of public employees from two of the largest public companies of Greece: Public Power Corporation (PPC S.A.) and Athens Water Supply and Sewe-rage Company (EYDAP S.A.) in order to investigate the impact of the most important CSFs on successful Lean Transformation of them. Findings: Quantitative findings from this research illustrated that the most important CSFs that have a positive impact on successful Lean Transformation on Greek public organizations/utilities are: Effective Communication, Introduction According to Mc Kinsley study conducted in 974 public organizations, the transformation of the public sector, ascertained that only 39% of the sample adopted the Lean Management practices to a viable level [1].The main cause of failure of the aforementioned actions was the absence of frequent control.Although the issue of saving resources in the public sector has been preoccupying academics for many years, Lean Management constitutes a relevantly current issue, given the continuous technological advancements that constantly modify the implemented practices and strategies [2].According to the findings of Gebre et al. [1] the presence of various Critical Success Factors (CSFs) is considered vital for the accomplishment of viable integration.Based on Antony et al. [3] every attempt to intergrade the principles of Lean Management must be developed individually depending on the needs of each enterprise and its employees.Simultaneously, the correlation between these specific principles and the organizational aspects of the corporation is considered of outmost importance.Though the Lean principles are more or less the same for the entire business sector, each attempt of incorporation is unique and specific for every organization [4]. However, taking into account that the foresaid individualization can be achieved by CSFs, their exact specification, as far as it concerns Lean management, is rather imperfect [5].Additionally, there are scarcely any references in international bibliography which adjust CSFs to the public sector and more specifically to the case of Greece [5] [6] [7], as most research conducted are centralized in the private sector. 
Taking advantage this specific research gap, the aim of the current research study, is to highlight the impact of the Critical Success Factors-CSFs, in the successful integration of the Lean Market into public companies.The quantitative findings of this research will lead to the development of various practices and strategies that will assist the senior management of the public corporations and the ones responsible for the formation of public policy to successfully intergrade the principles of Lean Management, in order for the effectiveness and the quality of the services offered to citizens to be improved. Literature Review Critical Success Factors are defined as "the basic factors of multiple sectors of activity, whose results are considered absolutely necessary for a senior executive, in order to accomplish the organizational targets" [8].This definition makes it clear that CSFs do not only attract researchers' interest but also that of the business managers' [9].Additionally, it must be noticed they comprise essential clues for the detection and prioritization of the factors that could affect the successful implementation of the Lean Management principles [10]. The strategic importance of the CSFs is evident, since they can increase the possibility for success, related to the limited use of resources and the reduction of running costs of a company and also they can contribute to the evasion of confusion that might occur in the company, due to the implementation of programs aiming at the constant amelioration [11]. Quite recently, researchers such as Juliani and De-Oliveira [5], Lande et al. [12], Manville et al. [13], Netland [14], Psomas [7] and Yadav et al. [10], are carrying out studies to pinpoint the CSFs in a business that will lead to the successful implementation of Lean Management programs, while preserve the resources.In particular it worth's mentioning, that the number of research examining the importance of CSFs for the successful implementation of Lean management in public enterprises is rather limited. In Table 1 the most vital CSFs tracked down in recent international bibliography are presented. Garg and Garg [19] classified CSFs in Organizational Individual-related and Project-related Factors.The category of Organizational Factors consists of Business Strategy, Communication, Customer Focus, Organizational Culture and Organizational Infrastructure.The category of Human Factors is divided into Selection of Staff, Training & Education, while the category of factors related to the project are referring to the Project Management, Management Commitment & Leadership.At this point it needs to be highlighted, that every researcher describes the failure or the success of the Lean Management implementation from different perspectives, which are classified in two categories: 1) giving emphasis on the work itself 2) to the accomplished results [20] [21].The first defines the success or the failure focusing on specific crucial factors of work, such as the cost or the time needed for its completion.The second focuses on the expected aims of work, like the merging of the organizational data, the most efficient decision taking and the most fruitful in-company communication [20] [22] [23]. According to all the above and the research performed by Finney and Corbett [24] and Psomas [7] The current research is focusing on the impact of these factors on the successful Lean Transformation of public companies. 
Research Question The unsatisfactory implementation and function of the factors that this research characterized as Lean Management CSFs, leads to the failure of the overall attempt aiming at the reshaping of a public corporation.So, the research question of great value which must be answered, is related to the investigation of the role and the effects of CSFs on the implementation of Lean Management on public companies.The severe deficiency of bibliography and worldwide practice makes this research of outmost importance. Research Methodology This paper, combines bibliographic research and empirical -with the quantitative method-research through case studies.More specifically, with extensive and thorough bibliographic research the most important CSFs were determined and via quantitative approach the above question was answered.The sample research with standardized questionnaire, was performed in two of the biggest and most important public companies in Greece, the Public Power Corporation S.A. (PPC S.A.) and the Athens Water Supply and Sewerage Company (EYDAP S.A). The quantitative approach is based on sample research with standardized questionnaire, offering the ability to approach a satisfactory proportion of the population for the investigation of theories and inquiries [25].The questionnaire was created based on surveys of companies in South Africa, Australia, China, Canada and Belgium which investigated the most important impact factors in six sigma and lean six sigma transformations. The questionnaire constitutes the most fundamental tool concerning the data collection in a quantitative research.Planning and conducting this questionnaire, for the specific study, was actualized oriented to the materialization of the goals of the research at the two biggest Greek enterprises of public interest and utility purpose: PPC S.A. and ΕΥDAP S.A.It was based on multiple rules, in order for the questions not to be biased and lead to erroneous outcome [26]. In order for the respondents to remain impartial, there was appropriate combination and composition of questions along with the capitalization of appropriate scales.More specifically, the questions were mixed, so that the participants would find challenging to set apart the individual and the dependent variable of the research.A convenience sample, which is characterized by low cost and time of realization was used, which when performed properly it can reduce the chance of a statistic error [26].This is a method of no-odds where the subjects of the research are chosen based on their proximity and easy accessibility [27] and it is the most popular when concerning workforce since it is feasible to include the vast majority of the population.Simultaneously, the method of snowball sampling was utilized, during which it was asked from the participants to forward the questionnaire to their colleagues working in the same sector.To be more exact, the procedure of sampling used in the current research is the following: Population: The employees in public companies (PPC S.A., ΕΥDAP S.A) occupied in Directorates and Sectors such as: IT, operations, energy, management, strategic planning, research, human resource management, marketing and procurement. Sample size: From the 480 questionnaires being handed out, 343 valid answers were collected.Consequently, the response rate was 71.5%. 
Realization of sampling plan and collection of data: Sampling was performed in the aforementioned Directorates and Sectors from January 2019 till September 2019.The research was being conducted in the morning and at noon, in order to ensure a satisfactory representation of the sample.The delivery of the questionnaire was performed both traditionally (hand by hand) and online.The online distribution of the questionnaire was made according to the lists of the e-mails given to the human resources Directorate of each company.At this point it worth's mentioning that we took advantage of snowball sampling, as it was asked from the participants to hand out the questionnaire to their colleagues. Procedure of codification and insert of data: The variables of the research (dependent variables CSFs, independent variable Lean Transformation) include various scales, such as the Likert scale.The possible alternatives were coded using relevant numbers and headlines with the assistance of "Microsoft Excel" spreadsheet.Moreover, the data from the participants' responses were inserted on a data basis "SPSS Version 23 for Windows", always in accordance with the previously mentioned codification.Finally, the data were analyzed with the aid of the specific statistic package and the "IBM SPSS Amos 21". Findings The normality of the data was checked through normality tests such as the levels skewness and kurtosis of the variables, along with the statistic test Kolmogorov-Smirnov. Skewness is a measure of symmetry, or more precisely, the lack of symmetry.A distribution, or data set, is symmetric if it looks the same to the left and right of the center point.[28].On the contrary, Kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution.That is, data sets with high kurtosis tend to have heavy tails, or outliers.Data sets with low kurtosis tend to have light tails, or lack of outliers.A uniform distribution would be the extreme case. On the other hand, the statistic test, Kolmogorov-Smirnov, was applied to verify if the sample follows the normal distribution of the population or not.This specific test is considered more trustworthy compared to the detection for Skewness and Kurtosis [29].For this reason, they are examined in combination in most research studies [30].In the current thesis it has been noticed that the variables being studied do not comply with the normal distribution, as the p-value is less than 0.05 (p < 0.05). Since convenience sample was applied in the current study, meaning that non-random sampling was performed, non-normal distribution is being noticed in Table 2 below, as expected. 
Confirmatory Factor Analysis The Confirmatory Factor Analysis (CFA) aims at the screening of the prediction rate of a set of suggestions per Critical Success Factor (CSF) for the composition of a uniform scale.The CFA was conducted in the statistic program AMOS, the results of which indicated the existence of an excellent model: χ 2 /d.f.= 1.51 (recommended between 1 and 5), CFI (Comperative Fit Index) = On the other hand, having as an ultimate target the further investigation of the credibility of each factor the indicator Composite Reliability (CR) was applied.This particular indicator is performed based on the tables produced by the CFA and AMOS.More specifically, the scales displayed a score between 0.71 and 0.90.As a result, in any case the CR indicator was above 0.60 which is the least permissible limit [33].Table 3 which follows, presents the results of CFA.Equally, for the multicollinearity check among the CSFs/Scales, a regression analysis was performed taking advantage the Variance Inflation Factor (VIF).This particular indicator tests the existence of inner correlation among the utilized scales.When the indicators VIF > 10, then multicollinearity discrepancies are occurring, consequently the indicator R 2 (R Squared) is over 0.90.In the case of the current research severe R 2 multicollinearity problems were not spotted among the scales since the indicator VIF < 10.However, even if in some cases prices > 3 and <5 were detected, they did not cause serious worries, as the Tolerance > 0.2, revealing the absence of standard errors into the regression analysis.Tables 4-6 that follow present the results of the multicollinearity tests among the Scales/CSFs. The Confirmatory Factor Analysis showed 14 Scales/CSFs in total.For the descriptive analysis of the scales various descriptive measurements were used, such as average, standard deviation, maximum and minimum.The correspondence of the average was performed based on the five-point scale of Likert type (1: I totally disagree -5: I totally agree) and more particularly as follows:  Prices ranging from 0.45 -1.44 reveal that the majority of the respondents stated they totally disagree. Prices ranging from 1.45 -2.44 showed that the majority of the respondents stated that they disagree. Prices ranging from 2.45 -3.44 showed that the majority of the respondents stated that they neither agree nor disagree. Prices from 3.45 -4.44 showed that the majority of the respondents stated that they agree  Prices from 4.45 -5 showed that the majority of the respondents stated that they totally agree In general, the average of the scales in the present research ranged from 2.39 (disagree) to 3.15 (neither disagree/nor agree).For example, the vast majority of respondents stated that they disagree (average = 2.39) as far as it concerns the fact that detailed training and empirical seminars are provided, concerning the resource saving principles.Furthermore, neutral opinions were expressed to the statements of the remaining factors.A characteristic example is the fact that the majority of the respondents said that they neither agree nor disagree (average = 3.07) concerning the Management Commitment for the adoption of resource saving practices.For more evidence with regard to the average of the Scales/ CSFs Table 7 is cited. 
For a complete examination of the Research Question the Structural Equation Modeling (SEM) method in AMOS was used, in combination with the multiple regression analyses in SPSS.In this way more credible and spherical models were produced for the examination of the effect of the CSFs in the fruitful Lean Transformation.The effect of CSFs on the Successful Integration of the Lean Management (Lean Transformation) The first model to be created through SEM is focusing on the connection between the CSFs and the successful incorporation of the Lean Management (Lean Transformation) principles.Specifically, it was considered particularly trustworthy given that the model fit indicators were more than enough: χ 2 /d.f.= 2.01, CFI = 0.96, TLI = 0.90, GFI = 0.94, RMSEA = 0.05, SRMR = 0.04 [33].The degree of influence of the independent variables (factors CSFs) on the dependent (Lean Transformation) is estimated by Standardized Regression Weights indicator (β), for which minimum acceptable levels do not exist, as long as there is statistically notable influence (p < 0.05 or p < 0.01 depending on the results of each type of analysis).In case an unsatisfactory level of statistic notability is failed to be traced, then the independent variable does not affect the dependent variable.On the whole, there is positive or negative connection depending on the sign of indicator β, whereas the specification of the effect is actualized based on the following limits [34]: β > 0.8, strong effect β ≤ 0.6, average to strong effect β ≤ 0.4, average effect β ≤ 0.2, weak effect Additionally, the existence of a satisfactory degree of R 2 indicator is deemed essential.When R 2 reaches 1 it suggests that the effect of the independent variables on the dependent variable is even more perfect [35]. In the overall model of the current research, the R 2 indicator of the dependent variable Lean Transformation came up to 0.86.That means that the 86% of the fickleness of the Successful Lean Transformation depends on the statistically important CSFs.The Organizational Factors which emerged through SEM and have a statistically important level of impact on the independent variable are the following: Project Communication (β = 0.19, p = 0.03 < 0.05), Organizational Infrastructure (β = 0.14, p = 0.01 < 0.05), Supplier Focus (β = 0.15, p = 0.01 < 0.05), Supplier Customer (β = 0.27, p = 0.001 < 0.01), Business Plan & Vision (β = 0.31, p = 0.001 < 0.01) and Change Management (β = 0.88, p = 0.001 < 0.01).From the above factors, the Project Communication, the Organizational Infrastructure and the Supplier Focus, influence positively, statistically importantly and to a miniscule degree the Successful Lean Transformation.On the contrary, the Customer Focus, the Business Plan & Vision influence positively and affect averagely.Additionally, it was found that the Change Management foresees positively leading at a higher level the Lean Transformation.Though, the Business Process Reengineering (p = 0.88 > 0.05) and the Business Strategy (p = 0.12 > 0.05) have been noticed not to affect in a statistically way the level of the dependent variable (Lean Transformation). 
The Project Factors that have a statistically significant result from an average to high level the Successful Incorporation Lean Management principles are Commitment Management & Leadership (β = 0.47, p = 0.001 < 0.01) and Top Management Support (β = 0.52, p = 0.001 < 0.01).Still, Project Management was found not to affect at a statistically important degree, given that the indicator p = 0.95 > 0.05.Nevertheless, all the Human Factors pose a statistically important influence on the dependent variable.More explicitly, Training & Education (β = 0.41, p = 0.001 < 0.01), along with Selection of Staff (β = 0.38, p = 0.001 < 0.01) leverage positively and in average degree in favour of Lean Transformation.What made an impression though, is the negative and statistically important influence of the Οrganizational Culture (β = −0.13,p = 0.001 < 0.01) on the Successful Lean Transformation.As a consequence, this hitches the successful transformation towards the Lean Principles. Table 8 that follows depicts the immediate effects of the CSFs on the Successful Lean Transformation and forms a successful model of installation, taking advantage the SEM method.At this point it is worthwhile to mention that the t-value indicator measures the intensity of the variance in the size of the sample.In other words, it refers to the size of the standard error.The greater this specific indicator is, the larger the possibility of accepting our research hypothesis and not reject it. On the contrary, the multiple regression analysis between the Organizational Principles depicted that all the factors influence statistically importantly.More specifically, as independent variables the Scales (CSFs), "Top Management Support", "Management Commitment & Leadership" and "Project Management" were applied, while as dependent variable the "Lean Transformation".The results of this particular analysis concluded that the Project Management positively affects, from an average to a high degree (β = 0.39, p = 0.00) the Successful Integration of the Lean Management.This finding is contradictory to the SEM analysis, which revealed that this specific factor does not impose a statistically important impact, when combined with all the CSFs.Furthermore, the Management Commitment & Leadership was proven to have a positive impact, statistically important and average (β = 0.34, p = 0.00) to the Lean Transformation. On the contrary, the Top Management Support affects positively and to the minimum (β = 0.16, p = 0.01) the Successful Integration of the Lean Management/Resources Saving Principles (Table 10). The total dependence of Lean Transformation on the aforementioned Project Factors reaches 68%, given that the R 2 = 0.68. Conclusions In conclusion, the present research with the SEM method proves the positive ef- Moreover, the findings of the quantitative research showed that the Business Strategy, the Business Process Reengineering and the Project Management do not influence on a statistically important degree the effective integration of the Lean Management.Consequently, the conclusions of Manville et al. [13], Angelopoulos and Pollalis [36] who demonstrated that the existence of efficient Project Management is ranked last among numerous factors CSFs, while at the same time does not a play a significant role statistically, were confirmed.Simultaneously, we confirmed the conclusions of Agaoglu et al. 
[9], who noticed that the Business Process Reengineering and the Business Strategy do not significantly define the successful integration of the Lean principles.On the opposite end of the spectrum, the conclusion reached by Antony and Desai [37], Juliani and de Oliveira [5], Nah et al. [38], Ramayah [39], who verified that the Business Strategy, the Business Process Reengineering and the Project Management have a favorable impact, at a significant level on the successful integration of Lean Management, were turned down. In the public sector, the lack of strong competition leads to the preservation of obsolete organizational structures, which do not encourage changes and consequently the Business Process Reengineering.According to Weerakkody et al. [40] the public companies do not possess the adequate motivation so as to implement organization alterations and redesign their procedures.More to the point, they experience severe pressures to maintain specific expenditure limits, and as a result they do not pursue extra cost savings since the required budget has been covered [41].At the same time, due to the intense bureaucratic procedures that characterize the public companies, the senior executive members do not retain the ultimate authority/power to proceed into changes of the organizational processes [42].Furthermore, the occurrence of changes is difficult to be performed without the consent of the involved parties (e.g. the government, regulatory body etc.) as well various legal restrictions and rules hamper the successful redesign of the business procedure towards the Lean principles [43].Consequently, it is plausible the Business Process Reengineering not to significantly influence the successful integration of the Lean Management. Additionally, in the current research, it was verified that the Organizational Culture of the public companies, participating in the quantitative research, hinders the successful transformation towards the Lean Management principles.This can be explained from the fact that the majority of the Greek public companies functions according to a bureaucratic culture, which is regulated by lack of efficiency, absence of transparency, scarcity of meritocracy, overloading and obscurity of labor roles [44]. 
Factors and the Successful Integration of the Lean Management Principles indicated that the Organizational Infrastructure (β = 0.11, p = 0.04 < 0.05), the Supplier Focus (β = 0.11, p = 0.02 < 0.05), the Customer Focus (β = 0.14, p = 0.00 < 0.05), the Business Plan & Vision (β = 0.28, p = 0.00 < 0.05) and the Change Management (β = 0.23, p = 0.00 < 0.05) have a positive and statistically substantial influence.The Business Strategy (β = −0.03,p = 0.18 > 0.05), the Business Process Reengineering (β = 0.08, p = 0.14 > 0.05) and the Project Communication (β = 0.01, p = 0.89 > 0.05), however, do not pose a statistically important effect on the Lean Transformation.Even though, Project Communication through the regression analysis failed to show that influences on a statistically important level when combined with the rest organizational factors, however, in SEM something like that is not true (Table9).Regression Model of OrganizationalFactors: Lean Transformation = 0.11 (Organizational Infrastructure) + 0.11 (Supplier Focus) + 0.14 (Customer Focus) + 0.28 (Business Plan & Vision) + 0.23 (Change Management) R 2 = 0.70, verifying that 70% of the Successful Integration of the Lean Management depends on the statistically importance of Organizational Factors.Additionally, the multiple regression analysis between the Project Factors and the Successful integration of the Lean Management (Lean Transformation) The results of the multiple regression analysis concerning the link between the Human Factors and the Successful Integration of the Lean Management Principles unveiled that equally the Training & Education, along with the Organizational Culture, influence at a statistically important level.Contrary to that, it has emerged that the Training & Education (β = 0.50, p = 0.00) and the Selection of Staff (β = 0.49, p = 0.00) have a positive, from average to powerful influence Lean Transformation.On the other hand, it was proven that the Organizational Culture affects negatively and averagely (β = −0.31,p = 0.00) the Successful Integration of the Lean Management.The findings of the multiple regression analysis are shown on Table11 andright afterwards the relevant regression model.Regression Model of Human Factors: Lean Transformation = 0.49 (Selection of Staff) + 0.50 (Training & Education) − 0.31 (Organizational Culture) R 2 = 0.74, revealing that 74% of the overall fluctuation of the Lean Transformation depends on the Training & Education, the Selection of Staff and the Organizational Culture. fect of the CSFs (Management Commitment & Leadership, Training & Education, Selection of Staff) as well as the very positive strong effect of Change Management, and the negative effect of Organizational Culture on Lean Transformation of public enterprises in Greece.On the other hand with Regression Analysis the research confirms that the CSFs that positively (moderately to strongly) influence the Lean Transformation of public companies and lead to long-term success are: 1) Project Communication, 2) the Organizational Infrastructure, 3) the Supplier Focus, 4) the Customer Focus, 5) the Business Plan & Vision, 6) the Change Management, 7) the Management Commitment & Leadership, 8) the Training & Education, and 9) the Selection of Staff while the Organizational Culture has a negative effect Table 2 . Control of sample distribution. Table 4 . Test multicollinearity of the organizational factors*. Table 5 . Test multicollinearity of the Project factors*. Table 6 . Test multicollinearity of the human factors*. Table 8 . 
Direct influence of the CSFs on the lean transformation. Table 9 . Multiple regression analysis-effect of the organizational factors on lean transformation. Table 10 . Multiple regression-impact of the project factors on the lean transformation. Table 11 . Multiple regression-impact of the project factors on the lean transformation.
2021-05-21T16:57:14.412Z
2021-04-12T00:00:00.000
{ "year": 2021, "sha1": "b9b8503c521bf955e0555cc47b2f7ddaee1de6a8", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=108575", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e759173cfed8c44a46d7976c1bd52f07cfca8d41", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
252869638
pes2o/s2orc
v3-fos-license
Unravelling the Differential Host Immuno-Inflammatory Responses to Staphylococcus aureus and Escherichia coli Infections in Sepsis Previous reports from our lab have documented dysregulated host inflammatory reactions in response to bacterial infections in sepsis. Both Gram-negative bacteria (GNB) and Gram-positive bacteria (GPB) play a significant role in the development and progression of sepsis by releasing several virulence factors. During sepsis, host cells produce a range of inflammatory responses including inducible nitric oxide synthase (iNOS) expression, nitrite generation, neutrophil extracellular traps (NETs) release, and pro-inflammatory cytokines production. The current study was conducted to discern the differences in host inflammatory reactions in response to both Escherichia coli and Staphylococcus aureus along with the organ dysfunction parameters in patients of sepsis. We examined 60 ICU sepsis patients identified based on the Acute Physiology and Chronic Health Evaluation II (APACHE II) and Sequential Organ Failure Assessment (SOFA II) scores. Pathogen identification was carried out using culture-based methods and gene-specific primers by real-time polymerase chain reaction (RT-PCR). Samples of blood from healthy volunteers were spiked with E. coli (GNB) and S. aureus (GPB). The incidence of NETs formation, iNOS expression, total nitrite content, and pro-inflammatory cytokine level was estimated. Prevalence of E. coli, A. baumannii (both GNB), S. aureus, and Enterococcus faecalis (both GPB) was found in sepsis patients. Augmented levels of inflammatory mediators including iNOS expression, total nitrite, the incidence of NETs, and proinflammatory cytokines, during spiking, were found in response to S. aureus infections in comparison with E. coli infections. These inflammatory mediators were found to be positively correlated with organ dysfunction in both GN and GP infections in sepsis patients. Augmented host inflammatory response was generated in S. aureus infections as compared with E. coli. Introduction Sepsis is an acute, often fatal syndrome that significantly contributes to fatality in the critical care unit and requires early diagnosis along with proper treatment. An inflammatory condition characterized by a dysregulated immune response to infection, sepsis is considered a life-threatening organ malfunction [1]. Gram-negative bacteria (GNB) and gram-positive bacteria (GPB) that have caused underlying infections both affect the severity of sepsis. Since inflammation is the leading cause of the pathogenesis of sepsis, both GNB and GPB conspire to produce different pathologies in this disease [2]. A range of virulence factors are triggered by these pathogens, empowering them to escape the immune defense and propagate to distant organs, releasing certain toxins interacting with host immune cells with particular receptors on the cell surface and producing a poor immune response [3]. The most prevalent bacteria causing sepsis are Staphylococcus aureus, Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa [4]. During the common course of infection, both GPB and GNB bacteria trigger specific signaling pathways [5]. The pathogenicity of GNB is often correlated with some particular components of their membrane, particularly the lipopolysaccharide (LPS), which binds to LPS-binding protein (LBP), enlisting LPS to CD14 [6]. 
Toll-like receptor-4 (TLR-4) then triggers the signaling pathways of mitogen-activated protein kinase and nuclear factor κB (NF-κB) while communicating with the CD14-LPS complex [7,8]. Similarly, GPB possesses peptidoglycans (PG) and lipoteichoic acids (LTA) as virulence factors that can bind to nucleotide-binding oligomerization domain-containing proteins (NODs), which activate the TLR-2 that recognizes κ-D-glutamyl-meso-diaminopimelic acid, which subsequently activates another signaling pathway by activating NF-κB to induce pro-inflammatory cytokine production [9]. Despite the difference in clinical manifestations of gram-positive (GP) and gram-negative (GN) sepsis, similar therapeutic approaches are used in clinics to treat both pathogens. Bacterial pore-forming toxins evoke pathophysiologic reactions, leading to differential host immune responses failing in multiple organs. As the first line of defense for the immune system against infection, neutrophils are believed to neutralize pathogens. During such a reaction, neutrophils release granular proteins and chromatin, forming extracellular fibril matrices known as NETs through an active process [10]. During conventional degranulation, cytokines are produced by the rapid secretion of interleukin-4 (IL-4) and tumor necrosis factor α (TNF-α) during exocytosis, resulting in a cytokine storm [11][12][13][14]. In macrophages, increased levels of interferon-γ (IFN-γ) promote the production of iNOS [15]. Excessive synthesis of inflammatory mediators causes the formation of reactive oxygen and nitrogen species, such as superoxide anion (O 2 . -) and nitric oxide (NO), resulting in the injury of tissues with increased inflammatory response. The function of oxidative stress [16], nitrosative stress [17], and hyper NETs production [18] in multiple organ dysfunctions during sepsis has been documented in previous papers from our lab. Due to the difference in induction of signaling pathways by GNB and GPB, the amount of inflammatory response would be different [1]. The current study was conducted to discern host inflammatory reactions in response to both S. aureus and E. coli bacterial infections. Herein, we determined the levels of inflammatory mediators including NOS expression, nitrite content, NETs release, and pro-inflammatory cytokines in S. aureus and E. coli bacteria-infected blood. Furthermore, different organ dysfunction parameters were evaluated in both S. aureus and E. coli infections in sepsis patients. Patients Selection Patients fulfilling the criteria of sepsis [19] were enlisted for the survey. Written and informed consent either from patients or their relatives, and ethical permission from SMS Hospital as well as Amity University Rajasthan, Jaipur (Reference number AUR/IEC/2019/01) was taken prior to initiation of the study. Patients were chosen based on (i) medical evidence of infection, (ii) hyperthermia (higher body temperature i.e., >38 • C) or hypothermia (lower body temperature i.e., <35 • C), (iii) tachycardia (elevated heart rate (>100 beats per minute), (iv) tachypnea (accelerated breathing i.e., >30 breaths per minute), and (v) evidence of poor organ function or perfusion within 12 h. Patients aged 80 years or more, also with (i) heart failure (class III or IV), (ii) hepatic insufficiency, (iv) immunosuppression (positive HIV, HBs Ag virus serologic result, malignancy), and (v) chronic antibiotic therapy were all excluded. 
Upon admittance to the ICU (Intensive care Unit), each patient's clinical and demographic characteristics were recorded independently, including temperature ( • C), heart Vaccines 2022, 10, 1648 3 of 11 rate, respiration rate, mean arterial pressure, SOFA (Sequential Organ Failure Assessment), APACHE II (Acute Physiology and Chronic Health Evaluation II), total bilirubin, creatinine, and PaO 2 /FiO 2 ratio. Blood Sample Collection Serial blood specimens (10 mL of blood) were collected from the central venous line in ethylene diamine tetra acetic acid (EDTA) vials at the time of admission. Blood Culture Post blood sample collection, culture was performed using a BACTEC (BD, Heidelberg, Germany) blood culture system at 37 • C. The culture vials that showed growth within five days of culture incubation were separated from those that did not show growth. The growth of bacteria from liquid broth was subcultured on standardized and selected media, including blood agar, chocolate agar, and MacConkey agar, and the results were studied after 24-48 h of incubation. Standard microbiological procedures, such as Gram staining, colony features, and biochemical properties, such as catalase and coagulase, were used to examine the results, which were validated using standard manuals [20]. Bacterial Identification by RT-PCR Identification of the most prevalent GNB (E. coli) and GPB (S. aureus) was done by real-time PCR (RT-PCR). Bacterial DNA was separated from whole blood using a standard kit (Nucleo-Spin ® Blood Quick Pure) as per the manufacturer's instructions and used as a template for RT-PCR, which was done using gene-specific primers for E. coli (16s rRNA gene) and S. aureus (16S rRNA gene). DNA was amplified by PCR (Bio-Rad, Hercules, CA, USA), using Primer3 (http://frodo.wi.mit.edu/, accessed on 1 August 2022), a specialized web-based program, to design the primers. The primers used to amplify the E. coli 16s rRNA gene were F: 5 TGGCGCATACAAAGAGAAGC3 and R: 5 TTTTGCAACCCACTCCCATG3 and to amplify S. aureus 16S rRNA gene were F: 5 GAACCGCATGGTTCAAAAGT3 and R: 5 CGGAAGATTCCCTACTGCTG3 which amplified a 192 bp and 191 bp products, respectively. The iTaq Universal SYBR Green RT-PCR master mix was used for real-time PCR. Reactions of a total volume of 20 µL included 10 µL of iTaq Universal SYBR Green Supermix, 50 ng of DNA template, and 0.5 µL of primers. The three-step PCR procedure consisted of 2 min at 95 • C, 40 cycles of denaturation at 95 • C for 30 s, the ideal annealing 56.4 • C for E. coli and 59.4 • C for S. aureus, extension at 72 • C for 1 min, and a final extension at 72 • C for 2 min. To ensure the specificity of the PCR result as a single peak, a melting curve was run at the end of the PCR. Isolation of Neutrophils from Blood Platelets were removed from the blood by centrifuging at 200× g for 10 min to isolate neutrophils. After that, another 10 min of centrifugation at 700× g was performed. In a separate tube, white blood cells were collected from a buffy coat layer and dextran sedimented for 30 min at room temperature. Platelet-deficient plasma was kept at −20 • C for nitrite and cytokines measurement. The supernatant (4 mL) was transferred to 4 mL Percoll (1065/1080, same volume) and centrifuged at 700× g for 15 min at room temperature. The supernatant was removed, and the neutrophil-containing band was collected and resuspended in RPMI 1640 medium supplemented with bovine serum albumin (2%). 
Further neutrophils were washed with Hanks' Balanced Salt Solution (HBSS) (NaCl 138 mM; KCl 2.7 mM; Na 2 HPO 4 8.1 mM; KH 2 PO 4 1.5 mM; Glucose 10 mM). The purity of the population was ascertained by Giemsa staining and viability by Trypan blue exclusion assay, with the results showing the population as 99% pure and viable. Spiking The pure culture of E. coli (ATCC E. coli 25922) and S. aureus (ATCC S. aureus 29213) have been taken from SMS Hospital, Jaipur. Cultures of S. aureus and E. coli were maintained in Luria-Bertani (LB) broth incubated overnight in 100 mL of nutritional broth at 37 • C. The bacteria were then diluted to a final concentration of 10 8 cells per ml using absorbance at 600 nm wavelength [21]. Neutrophils were infected with both the bacteria with the multiplicity of infection 10 for 1 h [22]. iNOS Expression For iNOS expression analysis, neutrophils were spiked with 200 µL of the culture of E. coli and S. aureus (McFarland 0.5) followed by incubation at 37 • C for 1 h. Total RNA from neutrophils (10 6 cells) was isolated using a Tri reagent (Sigma, St. Louis, MI, USA). First, using a Revert Aid H Minus First Strand cDNA Synthesis Kit (Thermo Scientific, Madison, WI, USA) and oligo (dT) primer as directed by the manufacturer, 1 g of total RNA was reverse transcribed. The cDNA was transcribed with primers for iNOS (F: 5 TGTGCTCTTTGCCTGTATGC3 ; R-5TTGCCAAACGTACTG GTCAC3 ) and βactin (F: 5 AACTGGAACGGTGAAGGTG3 ; R: 5 CTGTGTGGACTTGGGAGAGG3) with the amplified products of 222 bp and 210 bp products, respectively. iNOS mRNA was quantified by RT-PCR (Bio-Rad, Hercules, CA, USA) using an iTaq Universal SYBR Green RT-PCR master mix and the same primers and cDNA as mentioned previously. Following PCR, amplicons were held at 70 • C for 10 s before being melted at 90 • C at a temperature rate of 0.1 • C per second for melting curve analysis. One melting curve cycle included 95 • C for 15 s, 70 • C for 15 s, 95 • C for 10 s, and 40 • C for 3 min of cooling. As a single peak, this indicated the specificity of the PCR result. The experiment contained a control for all reaction components except the template. The reference gene β-actin was used for normalization. The difference in β-actin and iNOS quantification cycle values was used to determine the expression level of iNOS. Total Nitrite Assay The nitrite content in supernatant collected from neutrophils spiked with E. coli and S. aureus was determined by a Griess reagent kit (Thermo Fischer Scientific, Waltham, MA, USA) according to the instructions given in the manufacturer's protocol. NETs Release NETs were generated following the protocol described previously, with modification [18,23,24]. A total of 10 6 cells/mL of neutrophils isolated from healthy volunteers were adhered on poly-L-lysine coated petri plates, followed by washing with HBSS and treating with E. coli and S. aureus for 30 min. 4% Paraformaldehyde was used for 30 min to fix the cells, and stained with elastase antibody (1:250 dilutions) overnight. Cells were washed five times with HBSS and stained with antirabbit alexa fluor 647 antibody (1:5000 dilutions) for 4 h at room temperature in the dark. This was washed five times with HBSS. Sytox green was added and incubated for 15 min before being examined under a fluorescence microscope (Leica). Unstimulated cells were used as the negative control. 
The percentage of NETs was evaluated by counting the number of NETs formed by neutrophils out of the total number of neutrophils, as observed in a field in a fluorescence microscope [18,25]. Cytokines Estimation Levels of TNF-α, IFN-γ, and IL-8 were estimated in supernatant collected from spiked neutrophils with E. coli and S. aureus using an ELISA kit as determined by the protocol of the manufacturer (BD Opt EIA, East Rutherford, New Jersey, USA). Statistical Analysis after the Estimation of Total Nitrite Assay and Cytokine The data were analyzed with the SPSS program (SPSS Inc., Chicago, IL, USA) and reported as means + standard deviation from four independent experiments. The significance of the results was considered at p < 0.05. Statistical Analysis After the Estimation of Total Nitrite Assay and Cytokine The data were analyzed with the SPSS program (SPSS Inc., Chicago, IL, USA) and reported as means + standard deviation from four independent experiments. The significance of the results was considered at p < 0.05. Expression of iNOS Generation and Estimation of Total Nitrite Content Inflammatory reactions in response to E. coli and S. aureus infection were monitored in spiked neutrophils isolated from a healthy volunteer. Augmented expression of iNOS (p < 0.01) was found in S. aureus-spiked neutrophils as compared with E. coli-infected neutrophils, as demonstrated by RT-PCR (Figure 2A). Nitrite, a stable product of NO generated in the supernatant due to induction of inflammatory response, is increased in S. aureus infection (65 ± 2.1 µ mol/L versus 43 ± 1.4 µ mol/L, p < 0.01) ( Figure 2B). Expression of iNOS Generation and Estimation of Total Nitrite Content Inflammatory reactions in response to E. coli and S. aureus infection were monitored in spiked neutrophils isolated from a healthy volunteer. Augmented expression of iNOS (p < 0.01) was found in S. aureus-spiked neutrophils as compared with E. coli-infected neutrophils, as demonstrated by RT-PCR (Figure 2A). Nitrite, a stable product of NO generated in the supernatant due to induction of inflammatory response, is increased in S. aureus infection (65 ± 2.1 µ mol/L versus 43 ± 1.4 µmol/L, p < 0.01) ( Figure 2B). NETs Formation and Estimation of Cytokines In neutrophils, the prevalence of NETs was measured and compared in E. coli and S. aureus using confocal microscopy. A greater increase in the incidence of NETs was found in response to S. aureus than E. coli (81 ± 4.2% versus 64 ± 1.7%, p < 0.01) ( Figure 3A,B). The cell-free content of proinflammatory cytokines was estimated using ELISA in supernatant separated from neutrophils after spiking with S. aureus and E. coli. Levels of TNF-α (164 ± 8.3 pg/mL versus 123 ± 3.4 pg/mL, p < 0.001), IL-1β (65 ± 3.3 pg/mL versus 38 ± 2.7 pg/mL, p < 0.001), IL-8 (173 ± 6.3 pg/mL versus 129 ± 8.4 pg/mL, p < 0.001) were found to be augmented in S. aureus infection as compared with E. coli infection ( Figure 3C). The level of cytokines is less than 10 pg/mL in supernatant without bacteria presented. Our data are in accordance with a previous report which documented an almost two-fold greater amount of TNF-α generated by human monocytes in response to S. aureus infections than E. coli [26]. NETs Formation and Estimation of Cytokines In neutrophils, the prevalence of NETs was measured and compared in E. coli and S. aureus using confocal microscopy. A greater increase in the incidence of NETs was found in response to S. aureus than E. coli (81 ± 4.2% versus 64 ± 1.7%, p < 0.01) ( Figure 3A,B). 
NETs Formation and Estimation of Cytokines In neutrophils, the prevalence of NETs was measured and compared in E. coli and S. aureus using confocal microscopy. A greater increase in the incidence of NETs was found in response to S. aureus than E. coli (81 ± 4.2% versus 64 ± 1.7%, p < 0.01) ( Figure 3A,B). The cell-free content of proinflammatory cytokines was estimated using ELISA in supernatant separated from neutrophils after spiking with S. aureus and E. coli. Levels of TNF-α (164 ± 8.3 pg/mL versus 123 ± 3.4 pg/mL, p < 0.001), IL-1β (65 ± 3.3 pg/mL versus 38 ± 2.7 pg/mL, p < 0.001), IL-8 (173 ± 6.3 pg/mL versus 129 ± 8.4 pg/mL, p < 0.001) were found to be augmented in S. aureus infection as compared with E. coli infection ( Figure 3C). The level of cytokines is less than 10 pg/mL in supernatant without bacteria presented. Our data are in accordance with a previous report which documented an almost two-fold greater amount of TNF-α generated by human monocytes in response to S. aureus infections than E. coli [26]. The cell-free content of proinflammatory cytokines was estimated using ELISA in supernatant separated from neutrophils after spiking with S. aureus and E. coli. Levels of TNF-α (164 ± 8.3 pg/mL versus 123 ± 3.4 pg/mL, p < 0.001), IL-1β (65 ± 3.3 pg/mL versus 38 ± 2.7 pg/mL, p < 0.001), IL-8 (173 ± 6.3 pg/mL versus 129 ± 8.4 pg/mL, p < 0.001) were found to be augmented in S. aureus infection as compared with E. coli infection ( Figure 3C). The level of cytokines is less than 10 pg/mL in supernatant without bacteria presented. Our data are in accordance with a previous report which documented an almost two-fold greater amount of TNF-α generated by human monocytes in response to S. aureus infections than E. coli [26]. Estimation of Organ Dysfunction The level of organ dysfunction parameters and organ-specific dysfunction markers were evaluated and compared in the GPB and GNB group of patients. We found a higher level of SOFA (8.2 ± 1.7 versus 6.1 ± 1.4) ( Figure 4A) and APACHE II score (21.2 ± 3.7 versus 19.1 ± 2.9) ( Figure 4B) in the GPB group in comparison with the GNB group. The total bilirubin, which assesses the hepatotoxicity, was slightly higher in the GPB group compared with the GNB group (31.2 ± 5.7µmol/L versus 28.1 ± 4.4µmol/L) ( Figure 4C). The creatinine level, a potent marker of kidney function, was found to be higher in the GPB group compared with the GNB group (198.5 ± 5.3µmol/Lversus185.2 ± 6.1µmol/L) ( Figure 4D). The PaO 2 /FIO 2 ratio, which measures the lung function, was lower in the GPB group compared with the GNB group (216.4 ± 24.2µmol/L versus 296.5 ± 22.1 µmol/L) ( Figure 4D). representing the APACHE II score in the GPB and GNB groups (C) Bar diagrams representing the total bilirubin content in the GPB and GNB groups groups. (D) Bar diagrams representing the creatinine in the S. aureus and E. coli. (E) Bar diagrams representing the PaO 2 / FIO 2 ratio in the GPB and GNB groups. Discussion The purpose of this study was to investigate the detection of pathogens by RT-PCR and blood culture. We found that S. aureus bacteremia is a frequent clinical problem in most medical centers and is linked with high morbidity and mortality. Data showing a higher prevalence of S. aureus over E. coli are in agreement with a previous study, where 46% of isolates were GPB and 20% were GNB [27]. Similar trends were found in other reports [28], and few studies documented contrasting results with a greater occurrence of E. coli over S. aureus [29]. 
The possible explanation for the difference could be the design, geographic location, nature of difference of the etiological agents, and seasonal variation. S. aureus lives on human body surfaces and colonizes intravenous catheters, which become a source of infection in hospitalized patients, especially those with impaired immune systems, and leads to septicemia. In severe infections, S. aureus and E. coli damage internal organs including the brain, heart, lungs, as well as bones, muscles, and surgically implanted devices such as artificial joints or cardiac pacemakers [30]. We also contemplated nitrosative stress (iNOS expression /NO content/total nitrite) and pro-inflammatory cytokines (TNF-α, IFN-γ, and IL8) related to S. aureus and E. coli bacteria severity. Our PCR analysis and computational studies revealed an enhancement of iNOS in GPB (S. aureus) cases compared to GNB (E. coli). This might be related to the presence of LTA, which causes shock and multiple organ failure by activating tyrosine kinases and NF-kB in the signaling pathway, triggering the induction of iNOS [31]. In addition, macrophage-inducing iNOS is significant in the host defense system, as it stimulates NO production, which adds to bactericidal effects [32]. Similarly, LPS is required for iNOS induction, which activates certain cytokines such as IFN-γ and migration inhibitory factor that can activate macrophages to produce NO [33]. Our evidence of increased NO production adds to our understanding of the underlying cause for higher nitrite content that could be due to the presence of LTA and peptidoglycan over the cell wall of S. aureus, which has been shown to induce the formation of NO, followed by shock and organ injury in rats [9]. When NO, a precursor of nitrite, reacts with superoxide to form peroxynitrite, a potent oxidant, it causes cytotoxicity [34,35], thereby producing nitrous and nitric acids (NO 2 /NO 3 ) in an aqueous solution, which are important signaling molecules with regulatory properties that can influence the course of inflammation and immunological regulation [36]. Our results suggest that the increased frequency of NETs is because neutrophils were already primed while spiking the blood with S. aureus and E. coli. The cell wall component of gram-positive bacteria, LTA, is a known chemo-attractant for neutrophils that helps in priming and recruitment of neutrophils at the site of inflammation [37]. Furthermore, rapid NETs release has been documented in response to S. aureus infection due to TLR2 and complement-mediated opsonization [38]. In sepsis, pro-inflammatory cytokines released by neutrophils and monocytes can better determine the severity of infection [39,40]. Furthermore, the correlation between increased cytokine levels in our study was conducted using a spiked neutrophil model, where the level of IFN-γ, a pro-inflammatory cytokine, was found to be upregulated in S. aureus septicemia as compared with E. coli septicemia. Contrary to our results, increased TNF-α and IL-8 were released with E. coli compared with S. aureus. The major reason behind this discrepancy could be that it includes patients with severe abdominal sepsis who harbor different pathogens involved in the immunological response [41]. We also acknowledge organ dysfunction where the above data suggest that S. aureus infection causes more damage to the organs as compared with E. coli. 
The possible reason could be hyperactivation of inflammatory responses including aberrant oxidative stress [16], nitrosative stress [17], and NETs release [18], which induces multiple organ dysfunction as S. aureus evokes more inflammatory response than E. coli as we found in the current study. Mechanistically, the physical interaction of iNOS and the Rac2 protein plays an important role in hyper inflammation-induced cytotoxicity in lung epithelial cells [42]. Conclusions In summary, we found an increased prevalence of GP infections (S. aureus) as compared with GN (E. coli) during sepsis. In the whole blood spiking model, S. aureus evoked a greater inflammatory response (iNOS expression, total nitrite content, NETs release, and pro-inflammatory cytokine production) than E. coli. Thus, these inflammatory mediators have the potential to discriminate between GP and GN infections, thus bearing prognostic value. Additionally, we also found increased levels of host organ dysfunction in GPB infections than GNB infections during sepsis ( Figure 5). The major limitation in our study is that data obtained from a singly type of bacteria cannot be generalized for all the GNB and GPB groups. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee of SMS Hospital as well as Amity University Rajasthan, Jaipur (Reference number AUR/IEC/2019/01) for studies involving humans. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Raw data included in the manuscript is available and can be shared if required. Conflicts of Interest: Authors declare no conflict of interest. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee of SMS Hospital as well as Amity University Rajasthan, Jaipur (Reference number AUR/IEC/2019/01) for studies involving humans. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Raw data included in the manuscript is available and can be shared if required.
2022-10-13T15:22:59.900Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "861b263c1bd74db791e63d9d5b3074f0b28737b3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/10/10/1648/pdf?version=1664610238", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d27305286c6e9e1194120dc1e4efc38970686132", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
247322494
pes2o/s2orc
v3-fos-license
Long-Term COVID 19 Sequelae in Adolescents: the Overlap with Orthostatic Intolerance and ME/CFS Purpose of Review To discuss emerging understandings of adolescent long COVID or post-COVID-19 conditions, including proposed clinical definitions, common symptoms, epidemiology, overlaps with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) and orthostatic intolerance, and preliminary guidance on management. Recent Findings The recent World Health Organization clinical case definition of post-COVID-19 condition requires a history of probable or confirmed SARS-CoV-2 infection, with symptoms starting within 3 months of the onset of COVID-19. Symptoms must last for at least 2 months and cannot be explained by an alternative diagnosis. Common symptoms of the post-COVID-19 condition include, but are not limited to, fatigue, shortness of breath, and cognitive dysfunction. These symptoms generally have an impact on everyday functioning. The incidence of prolonged symptoms following SARS-CoV-2 infection has proven challenging to define, but it is now clear that those with relatively mild initial infections, without severe initial respiratory disease or end-organ injury, can still develop chronic impairments, with symptoms that overlap with conditions like ME/CFS (profound fatigue, unrefreshing sleep, post-exertional malaise, cognitive dysfunction, and orthostatic intolerance). Summary We do not yet have a clear understanding of the mechanisms by which individuals develop post-COVID-19 conditions. There may be several distinct types of long COVID that require different treatments. At this point, there is no single pharmacologic agent to effectively treat all symptoms. Because some presentations of post-COVID-19 conditions mimic disorders such as ME/CFS, treatment guidelines for this and related conditions can be helpful for managing post-COVID-19 symptoms. Supplementary Information The online version contains supplementary material available at 10.1007/s40124-022-00261-4. Introduction As of January 2022, there had been over 300 million confirmed cases of SARS-CoV-2 globally and over 5.4 million deaths [1]. While the vast majority of surviving patients return to their baseline health [2••], it has been evident from early in the pandemic that a proportion of patients experience chronic health impairments. Some of these conditions are sequelae of more severe acute COVID-19 such as acute respiratory distress syndrome, post-ICU syndrome, myocarditis, thrombosis, renal injury, stroke, and multisystem inflammatory syndrome in children (MIS-C). Sequelae of MIS-C and the more organ-specific complications have been discussed elsewhere [3-5, 6•]. The focus of this review is on adolescents who have developed long-term symptoms, including those with mild respiratory or systemic illnesses in the acute phase. These individuals have been described as having post-COVID-19 conditions, also referred to as long COVID [7•]. In this paper, we review proposed definitions of post-COVID-19 chronic conditions, discuss the epidemiology of pediatric long COVID, and the overlaps with orthostatic intolerance and other post-infectious illnesses like myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). Based largely on our experience treating these overlapping conditions, we offer preliminary recommendations on management. 
Search Strategy In addition to papers and materials known to the authors, we conducted a search of PubMed, Embase, Scopus, World Health Organization (WHO) COVID database (which contains preprints from bioRxiv and medRxiv), and I Love Evidence. The search was conducted in mid-November 2021 and used multiple methods of identifying pediatric and adolescent articles, along with various synonyms for long COVID or post-COVID conditions (described in detail in Supplemental File 1). Definitions Definitions for post-COVID conditions and other disorders discussed in this review can be found in Table 1. We will use the terms post-COVID-19 condition, post-COVID conditions, and long COVID interchangeably throughout. Long COVID Symptoms Long COVID is a spectrum of disease, likely with multifactorial etiologies. Children and adolescents with long COVID are presenting with a variety of complaints, although some patterns are beginning to emerge. Long COVID symptoms can occur in hospitalized [8,9] or non-hospitalized children [10, 11•, 12]. As in adults, many pediatric patients present with symptoms of long COVID after experiencing only mild or asymptomatic acute infections [11•, 12]. The time course and constellation of symptoms may vary in adults and children with long COVID. As acknowledged in the WHO definition, some have persistent symptoms that linger after an acute infection, while others develop new or a relapse of symptoms after complete recovery from the initial infection [13 •, 14]. Some of the fluctuation in symptom frequency and intensity may relate to post-exertional exacerbation in symptoms (discussed below). The literature describing pediatric long COVID remains sparse [15,16], and there is a lack of consistency in how symptoms are elicited. Additional studies are needed to clarify the nature and duration of long COVID symptoms in pediatric populations. Fatigue or low energy is one of the most common symptoms reported in children with long COVID, with recent studies suggesting that up to 87% of affected children report fatigue [10, 11•, 14, 17•, 18-22, 23••]. Fatigue in this population often leads to difficulty with physical and cognitive activity, which can limit participation in school, extracurricular activities, and sports. Excessive sleep, problems initiating or maintaining sleep, or nonrefreshing sleep often accompany fatigue in pediatric long COVID. Fatigue may persist even with an improvement in sleep patterns. Post-exertional malaise (PEM) is also common in long COVID. PEM refers to an exacerbation not just of fatigue, but of many symptoms, including lightheadedness, cognitive fogginess, sensory sensitivity, headaches, and pain, occurring after relative increases in physical activity or cognitive demands. Orthostatic stress and neuromuscular strain are additional triggers of PEM in ME/CFS and may also be capable of causing symptom exacerbations in long COVID [24,25]. Sometimes typical activities of daily life like participation in a full day of school can lead to substantial PEM, thereby contributing to functional impairment and distress in these patients. Cognitive difficulties or "brain fog" are also commonly reported by children with long COVID [10, 11•, 18, 19, 21, 23 ••]. Cognitive difficulties, while inconsistently ascertained in the literature, tend to include problems with concentration, short-term memory, and school performance [10, 11•, 17•, 18, 23••]. 
Similar to physical fatigue and PEM, cognitive difficulties or "brain fog" can also be exacerbated by mental exertion, such as schoolwork or studying for examinations. Headaches are also commonly reported both in the acute and post-acute phase of COVID in children [10, 11•, 17•, 19-22, 23••, 26]. Additional studies are needed to characterize the nature and specific types of headaches experienced, as well as the best treatment options. Patients with a history of headaches prior to COVID infection may develop more severe or more frequent headaches, but these can arise as a new symptom. Headaches usually , the post-COVID-19 condition occurs in individuals with a history of probable or confirmed SARS-CoV-2 infection, usually within 3 months of the onset of COVID-19. Symptoms must last for at least 2 months and cannot be explained by an alternative diagnosis. Common symptoms of the post-COVID-19 condition include, but are not limited to, fatigue, shortness of breath, and cognitive dysfunction. These symptoms may follow an initial recovery period or persist from the initial COVID-19 infection and generally have an impact on everyday functioning. Other groups have used a variable duration of prolonged symptoms, commonly 1 month CDC The CDC presents post-COVID conditions as the failure to return to a previous state of health following a SARS-CoV-2 infection [7 • ]. As of July 2021, the CDC used post-COVID conditions as an umbrella term for a variety of health conditions in which patients of all ages present with new, returning, or ongoing symptoms, at least 4 weeks post-SARS-CoV-2 infection. Symptoms may return after a period of recovery following initial infection and may occur regardless of the severity of acute infection. Characteristic and persistent symptoms include, but are not limited to, dyspnea, fatigue, post-exertional malaise/poor endurance, cognitive impairment ("brain fog"), cough, chest pain, and headache Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) The 2015 Institute of Medicine definition of ME/CFS [56 •• ] requires the following core symptoms to be present at least half the time and with at least moderate severity, with the assumption that other causes of these symptoms have been excluded: 1. A substantial reduction or impairment in the ability to engage in pre-illness levels of occupational, educational, social, or personal activities that persists for more than 6 months and is accompanied by fatigue, which is often profound, is of new or definite onset (not lifelong), is not the result of ongoing exertion, and is not substantially alleviated by rest 2. Post-exertional malaise 3. Unrefreshing sleep Plus at least one of the following: 4a. Cognitive impairment 4b. Orthostatic intolerance Orthostatic intolerance (OI) Orthostatic intolerance refers to a group of circulatory disorders defined by the provocation of symptoms with assuming or maintaining upright posture and the improvement in symptoms with recumbency [108]. Some orthostatic symptoms, such as fatigue and brain fog, can persist after lying down, but symptoms of lightheadedness typically respond promptly Initial orthostatic hypotension (IOH) Transient drop of > 40 mm Hg in SBP or > 20 mm Hg DBP within 15 s of standing, accompanied by lightheadedness and reflex tachycardia, with hypotension lasting for less than 1 min [109]. Syncope is uncommon. This condition is more common in adolescents. 
Pharmacologic treatment is not necessary unless there is impairment in overall function Classical orthostatic hypotension (cOH) A sustained 20 mm Hg drop in systolic or a 10 mm Hg drop in diastolic blood pressure within the first 3 min of standing or head-up tilt [110]. This is more common in adults, but can be seen in children with acute dehydration, anorexia nervosa, or in response to certain medications (e.g., tricyclic antidepressants, prochlorperazine, quetiapine) Delayed orthostatic hypotension (dOH) dOH involves the same drop in blood pressure as in classical OH but occurring after 3 min upright [110] are not attributable to any secondary cause (brain lesion, brain injury, etc.). Orthostatic symptoms: Many patients also report orthostatic symptoms, including lightheadedness or dizziness, syncope, blurred vision, exercise intolerance, dyspnea, chest discomfort, palpitations, tremulousness, anxiety, diaphoresis, and nausea [10, 17•, 22]. In some cases, patients meet criteria for postural tachycardia syndrome (POTS) or other forms of orthostatic intolerance (OI) [27]. Many patients with POTS also have overlapping symptoms with long COVID, including fatigue, cognitive difficulties, headaches, gastrointestinal symptoms, anxiety, and mood concerns. Other patients have heart rate changes that do not meet the threshold for POTS, but still experience significant orthostatic symptoms that may represent a spectrum of dysautonomia [ 28,29]. In the absence of severe acute pulmonary disease, cardiopulmonary work-up in these patients tends to be negative [28,29]. The underlying pathology for these symptoms is unclear at this point. Mental health and behavioral symptoms also are prominent in this population, with anxiety and depression being the most prevalent [17•, 21, 30]. Whether this is directly related to the effects of the virus, effects of physical symptoms of long COVID or effects of the pandemic in general is not completely clear. However, recent studies in both adults and children suggest that mood and cognitive functioning after SARS-CoV-2 infection are impaired when compared to controls with similar pandemic-related experiences [31,32]. Changes in taste and smell including anosmia, ageusia, parosmia, and dysgeusia are reported with acute SARS-CoV-2 infection in children [20] and adults [33]. Some children with long COVID also experience persistent alterations to taste and smell, impacting appetite and potentially leading to weight loss [10, 21, 23••, 26, 34]. These sensory alterations tend to cluster closer to the earlier phase of the illness. Long COVID is a broad umbrella term that encompasses many symptoms. The symptoms described here are a subset of the ones that have been reported. Evidence is beginning to emerge that different phenotypes may exist within long COVID. We have found that adolescents experiencing prolonged symptoms after mild, acute COVID infection often report a phenotype that overlaps with OI or ME/CFS. Epidemiology The reported rates of prolonged symptoms following COVID-19 vary based on the age of the participants, the duration of follow-up (4 weeks vs 3 months vs 6-12 months), and the design of the study. 
Differences in design include whether the study population was clinic based or population based, whether there was a clinical evaluation to exclude other causes of symptoms, whether ascertainment of symptoms and function relied on validated questionnaires or on self-or parent-report, and on the precision with which Neurally mediated hypotension (NMH) Characterized by an abrupt drop in blood pressure and frequently a relative bradycardia at the time of hypotension, often associated with recurrent syncope or pre-syncope in day-to-day life. However, syncope in daily life is not universal in this group, so we refer to this circulatory pattern as NMH, the physiology of which is identical to reflex syncope, often termed vasovagal syncope, neurocardiogenic syncope, and neurally mediated syncope [111][112][113][114] Symptoms can be present soon after assuming an upright posture, but hypotension usually is not detected unless the orthostatic stress is prolonged. Fatigue is common for hours following a vasovagal syncope or NMH episode Postural tachycardia syndrome (POTS) In adolescents, POTS is characterized by a sustained heart rate increment of at least 40 beats/minute within 10 min of standing or head-up tilt testing, in the absence of orthostatic hypotension in the first three minutes upright, and in association with chronic orthostatic symptoms. The onset is insidious for some, but it often appears after infection, immunization, surgery, and trauma [47 •• ] Inappropriate sinus tachycardia Characterized by a sinus rhythm with a heart rate greater than 100 bpm at rest [100]. The symptoms are similar to those of POTS Low orthostatic tolerance Characterized by prominent orthostatic symptoms without the heart rate and blood pressure changes of OH, NMH, or POTS [66 •• ] specific symptoms were investigated. Few studies differentiate the rates of prolonged symptoms after SARS-CoV-2 infection from more general symptoms caused by the pandemic itself [32]. Currently, published studies on long COVID have used various designs to answer different epidemiologic questions. For instance, some ask: in patients who have had COVID- 19 infection, what is the risk of developing long COVID? In hospital and clinic-based studies, 45-70% of patients with SARS-CoV-2 infection report prolonged symptoms for a variable duration after infection, but many of these studies do not have a comparison group, are subject to referral biases, and have relatively small sample sizes (reviewed by Zimmermann et al. [23••]). Larger studies and those with comparison groups have the potential to more accurately estimate symptom prevalence in patients with confirmed versus suspected SARS-CoV-2 infection. For example, the CLoCK study in the United Kingdom (UK) surveyed individuals aged 11-17-year-old 3 months after a positive PCR test and also enrolled controls who had a negative PCR test (performed due to symptoms, anxiety, contact, or other reasons) [22]. The prevalence of at least one symptom at 3 months of follow-up was 66.5% in test-positives versus 53.4% in test-negatives and 30.3% versus 16.2% respectively for three or more symptoms. These results must be interpreted with appropriate caution, as only 13% of the eligible test-positive and test-negative population responded to the survey, a limitation that also complicates interpretation of the incidence data from a nationwide study in Denmark [35]. Other studies have asked: what proportion of the population is experiencing symptoms suggestive of long COVID at this time? 
For example, a study by the National Office for Statistics in the UK reports a much lower prevalence of post-COVID conditions [36 ••]. The study randomly surveyed over 350,000 UK residents living in private households, defining long COVID as still experiencing symptoms 4 or more weeks after infection, including pre-existing symptoms that worsened after COVID. As illustrated in Fig. 1, post-COVID symptoms at 12 weeks and 12 months were least common in those ages 2-11 (0.21% vs 0.12%), increasing in 12-16-year-old (0.82% vs 0.26%) and approximating adult rates in the 17-24-year-old group (1.31% vs 0.48%). Several additional caveats are germane. Ascertainment of the full range of post-COVID symptoms was incomplete in early studies; many early post-COVID studies focused primarily on respiratory or infectious symptoms (e.g., shortness of breath, fever, congestion). Notably, symptoms associated with OI were not initially recognized, described, or assessed as being common [37•]. Moreover, attribution of individual symptoms to specific causes has been problematic in some studies. Labeling problems with attention, processing, and short-term memory as psychiatric symptoms could be misleading given that orthostatic stress can provoke cognitive problems in the absence of classical psychiatric conditions [38]. Similarly, in OI, orthostatic dyspnea can occur [39,40], and the hyperadrenergic response to reductions in cerebral blood flow can be misinterpreted as anxiety [41]. Future large scale longitudinal studies with comparison groups are needed in order to understand the full constellation of symptoms, risk factors, and prevalence of long COVID in children and adolescents. Orthostatic Intolerance After COVID-19 Infection The common forms of OI are listed in Table 1. Beginning with the report of Miglis [42•], and followed by reports from a variety of centers (reviewed by Bisaccia et al.), it became clear that syndromes of OI were common in association with COVID-19 [43•]. Information on pediatric post-COVID OI is more limited. One case report describes a previously healthy 12-year-old girl who contracted COVID-19 in March 2020; orthostatic symptoms progressed until she became bedbound by July 2020 [44]. Testing revealed severely symptomatic OI associated with a drop in blood pressure and resting tachycardia. Another case series describes a 19-year-old male with confirmed COVID-19 who developed orthostatic symptoms within the first 2 weeks of infection [45]. Orthostatic testing 3 months into the illness revealed a striking 70-bpm increase in heart rate (HR) from supine to standing, consistent with POTS. In our pediatric post-COVID clinic, we perform a 10-min passive standing test (Table 2) in all patients. In a case series describing the initial cohort of patients seen at the Kennedy Krieger Institute Pediatric Post-COVID-19 Rehabilitation Clinic, two of eight patients met criteria for POTS, and all but one experienced increased symptoms during the 10 min upright, even though they did not meet formal HR criteria for POTS [17•]. In a case series of 20 adults who developed circulatory dysfunction after COVID-19 infection, many experienced improvement in symptoms with treatment targeted to OI [46], emphasizing the importance of recognizing OI as a treatable complication of COVID-19 infection. Similar to the observed patterns of long COVID in children of different ages, OI affects adolescents more than pre-pubertal children, and females are more likely to be affected than males. 
OI can also be triggered by immunizations, pregnancy, surgery, or trauma [47 ••]. Symptom burden can be significant, resulting in limited ability to participate in school or work. POTS itself is heterogeneous, with several proposed mechanisms including autoimmunity [48], increased sympathetic activity [49], hypovolemia, and peripheral sympathetic noradrenergic denervation [50]. The development of OI following COVID-19 infection is not surprising. Prior to the pandemic, those with POTS specifically or OI in general often reported a history of infection closely preceding the onset of their orthostatic symptoms [51]. Among patients who develop lightheadedness and other orthostatic symptoms in the first 2 weeks of COVID-19 infection, it would be reasonable to postulate a direct effect of the virus on central autonomic networks [52]. The short time frame also argues against deconditioning or inactivity as an etiology of symptoms [45,53]. For those developing symptoms beyond 2 weeks, after the emergence of antibodies directed at SARS-CoV-2, an autoimmune pathogenesis has been proposed, consistent with pre-pandemic observations that POTS may have an autoimmune etiology [54,55]. It remains unclear whether OI is more prevalent or different in some manner after COVID-19 compared to other infections, and how long orthostatic symptoms after COVID-19 will persist [53]. As the number of patients affected by post-COVID OI increases, one challenge will be to meet the clinical demand, as the number of physicians treating OI was insufficient for the existing patient volume prior to the pandemic [46]. Is Long COVID a Unique Illness or Is SARS-CoV-2 Another Trigger for ME/CFS? The illness formerly termed chronic fatigue syndrome is now referred to by the US National Institutes of Health and the Centers for Disease Control and Prevention as ME/ CFS (Table 1) [56••]. While evidence of classical encephalomyelitis is not present, evidence of disturbed cognitive function and autonomic nervous system control are prominent. Early in the COVID-19 pandemic, it became clear that a subset of patients with prolonged symptoms had features consistent with ME/CFS. It remains to be determined whether these patients with long COVID persisting more than 6 months meet the criteria for ME/CFS or if long COVID is in some way distinctive. Our preliminary observations suggest that SARS-CoV-2 is emerging as a common trigger for ME/CFS. As is true for the pathogenesis of ME/CFS, the cause or causes of long COVID remain uncertain. There may be different phenotypes of long COVID, and causes of long COVID are likely to be multifactorial in some patients. For certain patients, acute COVID-19 might exacerbate pre-existing ME/CFS. Prominent hypotheses for ME/CFS pathophysiology include autoimmunity [57][58][59], a physiologic stress response that does not attenuate once the acute infection or stressor has resolved [60], a chronic inflammatory response to an initial infection [61,62] (including glial cell activation [63-65]), viral reactivation, or a hypometabolic cellular response. Any postulated mechanism must also explain the presence of circulatory dysfunction and reduced cerebral blood flow [66••] as a prominent component of the persistence of ME/CFS symptoms. OI has a prevalence of over 95% in pediatric patients with ME/CFS [56••, 67]. 
Recent evidence using extracranial Doppler echography of the vertebral and internal carotid arteries demonstrates that 90% of adults with ME/CFS experience significant reductions in cerebral blood flow during head-up tilt, confirming OI even when heart rate and blood pressure responses might be normal [66 ••]. Our experience in a single center suggests that a subset of adolescents with a moderate or severe burden of long COVID symptoms and impaired health-related quality of life have features consistent with ME/CFS [45]. One recent case-control study in adults shows that cerebral blood flow reductions during upright posture in long COVID patients are at least as severe as the reductions in comparison groups with ME/CFS and POTS and ME/CFS with a normal heart rate and blood pressure response to upright posture [82 •]. In the 10 long COVID patients, all of whom had POTS, cerebral blood flow fell 33% over the 30 min upright, comparable to the 20 with ME/CFS and POTS (29%) and the 20 with ME/CFS and a normal heart rate and blood pressure response (25%), all significantly different than the 4% reduction in cerebral blood flow for the 20 healthy controls. Whether the risk factors and co-morbid clinical conditions in long COVID are similar to those in pediatric ME/ CFS remains to be determined. Prominent biological risk factors previously identified for pediatric ME/CFS include age [83•] (adolescents more affected than pre-pubertal children), sex (females are affected 3-4 times more commonly [84]), and joint hypermobility [85] (seen in 60% with ME/ CFS versus 20-24% of age and sex-matched controls). One trial of IVIG for pediatric ME/CFS identified cutaneous anergy in 21% [86]. Conditions found more commonly in ME/CFS (possibly a consequence of the initial infectious or inflammatory trigger but also possibly preceding the illness and creating a risk of prolonged impairment) include OI in > 95% [67,87], allergic inflammation [88], mast cell activation syndrome (MCAS) in a subset [89], and restrictions [115] Physical therapy screening tests to look for limitations in symptom-free range of motion of the limbs and spine [116] Laboratory tests Complete blood count, with platelet count and differential white blood cell count Serum chemistries including electrolytes, urea, creatinine, total protein, albumin, calcium, alanine aminotransferase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase T4 free, thyroid-stimulating hormone Erythrocyte sedimentation rate or C-reactive protein Ferritin or other measures of iron deficiency Vitamin B12, vitamin D Celiac disease screening Urinalysis Electrocardiogram Orthostatic testing (see below) Other testing is dependent on the history and physical examination (e.g., consider quantitative immunoglobulins in those with a history of recurrent, severe, or persistent infections; consider plasma histamine, and other tests for mast cell activation syndrome in those with a strong history of allergic inflammation or signs and symptoms of facial flushing, pruritis, or urticaria [117]) Questionnaires Supplemental questionnaires can provide more information vital to evaluating the impact of the patient's symptoms on their daily life. 
We recommend the following instruments in children and adolescents, all of which have the advantage of being brief and imposing only a minimal cognitive burden on patients: •Functional Disability Inventory [118] •Pediatric Quality of Life Inventory (Peds QL) [119] (Questionnaires exist for both the patient and an adult proxy, but a direct report from the patient is important) •Peds QL Multidimensional Fatigue Scale [120] •Wood Mental Fatigue Inventory [121] •Hospital Anxiety and Depression Scale [122] or Beck Depression Inventory [123] Our recommended battery for neuropsychologic evaluation that can be performed in person or via telehealth has been published elsewhere [17 • ] Orthostatic testing In all individuals with chronic fatigue, and at this stage of the investigation of long COVID, we recommend orthostatic testing of at least 10 min duration. This can be accomplished using either a passive standing test or a head-up tilt test Passive standing test [95] Laboratory head-up tilt table test [124,125] 5 min supine-> 10 min of quiet standing with the upper back against the wall and heels 2-6 inches away from the wall-> 2 min supine Heart rate and blood pressure were measured during a 70-degree head-up tilt 10-min tests are sufficient for diagnosing POTS and OH Prolonged testing of 40-45 min is usually required to identify neurally mediated hypotension or delayed OH Record Each minute Heart rate and blood pressure *To calculate the HR increment between lowest supine and peak standing, select the lowest supine HR value from either the 5 min pre-test or the 2 min post-test The end of the first supine phase and each minute standing Symptoms on a 0-10 scale (0 = no symptom, 10 = worst severity) Presence of acrocyanosis in symptom-free range of motion in > 80% [24,90]. Further research is needed to determine whether treatment for comorbid conditions in pediatric ME/CFS is relevant to and improves function in pediatric long COVID. Management There may be several distinct types of post-COVID conditions that require different treatments. At this time, there is no single pharmacologic agent for all variations of post-COVID symptoms nor is there a uniform treatment approach. Several groups have authored recommendations for investigation and management [91•, 92, 93, 94•]. The provisional CDC guidance states that some presentations of post-COVID conditions mimic disorders such as ME/CFS, MCAS, and OI [93]. Treatment guidelines for these conditions can be helpful for managing post-COVID symptoms. At present, the management of post-COVID conditions focuses primarily on addressing symptoms. Based on our clinical experience evaluating pediatric OI and ME/CFS, our approach includes the diagnostic testing in Table 2. Below we offer evaluation and management suggestions for specific symptoms: Orthostatic Intolerance Although lightheadedness/dizziness has been ascertained in some long COVID studies [10,26], OI is not ascertained in a consistent manner in the majority of the pediatric long COVID literature. It is important to ask about specific conditions that can provoke orthostatic symptoms, and not simply to ask about lightheadedness, as adolescent patients may not be aware that what they experience is abnormal. Typical situations that provoke symptoms in those with OI include standing in line, standing at a reception or religious service, shopping, showering, and being in hot environments. 
To elicit symptoms of OI, practitioners might ask questions such as "How long can you stand still before having to sit down?" and "Do you fidget and move around when standing, study in a reclining position, or sit with your knees to your chest or with your feet under you?". The CDC guidance recommends testing post-COVID patients with a 3-min standing test [93]. However, a 3-min standing test will miss 43% of adolescent and young adult patients with POTS [95]. While 10 min is insufficient to document NMH, most patients with NMH or low orthostatic tolerance will be quite symptomatic during the first ten minutes upright. An at-home 10-min standing test using a heart rate monitor might be a helpful screening measure, but needs to be supervised because of the potential for developing syncope. Formal tilt-table testing is expensive and not always available, but might be warranted in specific situations or research studies [96]. If the patient presents with OI, non-pharmacological management can be initiated with simple treatments like avoiding aggravating conditions, increasing dietary salt and fluid intake, using cooling garments, and employing postural counter-maneuvers and wearing compression garments to reduce gravitational pooling of blood and improve the blood return to the heart [97,98]. Gradual increases in activity, designed to avoid provoking PEM, are part of the overall approach [99,100]. If non-pharmacologic management alone is insufficient, a variety of medications can be used, such as vasoconstrictors (e.g., midodrine, stimulants), agents that improve blood volume (e.g., hormonal birth control therapy, fludrocortisone, desmopressin acetate), and medicines that control sympathetic tone, heart rate, or the effect and release of catecholamines (e.g., beta-blockers, clonidine, SSRIs/SNRIs, pyridostigmine bromide, ivabradine). Doses are published elsewhere [83•]. Medications can be selected based on a variety of clinical factors, including the resting heart rate and blood pressure, and whether two problems can be treated with a single medication. For example, a beta-blocker might be appropriate in the presence of a relatively high resting heart rate or in someone with headaches. Fludrocortisone might be a better first choice in somebody with a low resting blood pressure or a very high salt appetite. Stimulants or clonidine can treat both OI and cognitive dysfunction. OI is one of the more treatable components of ME/ CFS and may prove to have similar benefits in long COVID. PEM and Managing Activity Exercise intolerance was identified as a common symptom in some pediatric long COVID studies [8, 101•], but few have ascertained for PEM. We recommend utilizing pacing techniques for many post-COVID symptoms. Pacing as a management technique can be helpful to avoid exacerbating symptoms. Gradual return to activities should be approached with caution and modified to accommodate the severity of each patient's condition. For patients who are moderately to severely impaired and cannot tolerate exercise while seated or standing, exercise should occur while lying down. Start with stretching for 1-2 min and increase the duration of activity gradually as long as PEM is not provoked. For the mildly impaired, start with 5-15 min of walking. Manual forms of physical therapy can be a bridge to tolerating exercise. Sometimes exercise will not be tolerated until OI is adequately treated [83•]. 
Cognitive Dysfunction Cognitive impairments can be demonstrated on baseline neuropsychiatric testing, but may also emerge with more complex tasks and in response to upright tilt-table testing [38]. Aside from treating OI as mentioned above, helpful strategies include dividing work into smaller and more manageable sections, performing mental work lying down, snacks and regular fluid intake, and reducing stressors [83•]. In some instances, stimulants may be helpful. A gradual transition back to learning is recommended, with educational and environmental accommodations as needed. Behavioral Symptoms Referral to a psychologist can be helpful for those who are struggling to cope with the effects of the illness or who have true depression or anxiety disorders. A behavioral psychologist can also work with patients to incorporate the lifestyle modifications that are recommended for management of long COVID, OI, or ME/CFS. It remains to be seen whether behavioral symptoms in long COVID will be similar to ME/ CFS. While adolescents with ME/CFS might be frustrated and demoralized by their illness, they are less likely to endorse the primary features of depression such as feelings of worthlessness, guilt, and low self-esteem, or a lack of interest in friendships, relationships, or activities they previously enjoyed [83 •]. ME/CFS differs from depression in that individuals with depression often feel better after exercise, while untreated ME/CFS patients can have a prolonged post-exertional increase in symptoms. Patients with ME/CFS have plans for the future and would like to participate in school and other activities but are often physically limited by their symptoms [83•]. Headaches For those experiencing headaches, lifestyle modifications can be effective for many. These include stress management, adequate hydration, identifying and avoiding headache triggers, and satisfactory sleep patterns [102][103][104]. Neuroimaging may be warranted if abnormalities are present on a neurological examination or if there are red flags on history that are concerning for increase intracranial pressure or secondary causes of headache [103,104]. Medications for preventing headaches can include magnesium, riboflavin, cyproheptadine, beta-adrenergic antagonists, tricyclic antidepressant medications, anti-convulsants, and, for those with migraines, calcitonin gene receptor antagonists [102,104]. Sleep Disturbances Patients experiencing sleep disturbances can benefit from a specified, regular bedtime, avoiding daytime naps where possible, and avoiding caffeine late in the day. Using phones, computers, or other electronics after "lights out" can aggravate fatigue and should be avoided after bedtime. White noise or meditative phone apps can be helpful. Parents may need to awaken those with hypersomnolence after 12 h of sleep to ensure better hydration. Individuals can go back to sleep if needed, but long periods of uninterrupted sleep promote low blood volume and can aggravate OI. If insomnia is impressive and unresponsive to relaxation techniques and standard sleep hygiene measures, pharmacologic treatment may be needed. MCAS and Allergic Phenomenon MCAS has emerged as a co-morbid and potentially causal factor in patients with OI and ME/CFS and has been hypothesized as a pathophysiologic influence on the severity of COVID-19 [105•]. Infections of all types as well as physical and chemical stimuli can activate mast cells, leading to degranulation and release of multiple mediators, including histamine and cytokines [106]. 
Clinical suspicion for MCAS increases in those with recurring rashes, pruritus, urticaria, facial flushing, and an intolerance of multiple foods and medications. Treatment consists of avoidance of triggers, as well as the addition of antihistamines and medications to stabilize mast cell membranes such as cromolyn, leukotriene inhibitors, and others. Some allergic phenomena may be addressed with dietary changes. For example, up to 31% of ME/CFS adolescents in one study met the criteria for a delayed cow's milk protein hypersensitivity, which can be recognized by a triad of upper gastrointestinal symptoms that include epigastric pain, gastroesophageal reflux, and early satiety, sometimes associated with recurrent aphthous ulcers [88]. A diet free of cow's milk protein in those with milk protein intolerance usually improves the local upper gastrointestinal symptoms, and in some can improve overall well-being, fatigue, and orthostatic symptoms. Individualized Approach Mononucleosis can cause persistent fatigue in 13% of adolescents at 6 months post-infection [107]. As with mononucleosis, there is likely to be some spontaneous improvement over time in those with milder post-COVID symptoms. Consequently, some patients may not need intensive intervention and can expand activities as tolerated. Similarly, recommendations cover a broad range of symptoms and may not apply to each patient. Although a patient may meet long COVID diagnostic criteria, we recommend that patients be evaluated on a case-by-case basis. A standing test or pharmacological intervention will not be necessary for every presenting patient. In a setting of limited resources, practitioner discretion will help to avoid inundating long COVID clinics with those who do not require extensive care and will leave resources for individuals with increasingly severe impairments. We recommend that follow-up time and treatment be decided according to the impact long COVID has on the patient's quality of life. Conclusion Emerging data confirm that prolonged symptoms can develop following even mild or asymptomatic initial SARS-CoV-2 infection. The most common symptoms are fatigue, cognitive dysfunction, and headaches. As ascertainment for orthostatic intolerance in these patients improves, lightheadedness is becoming more commonly recognized. A proportion of long COVID patients meet the criteria for ME/ CFS at 6 months. At present, management of post-COVID conditions focuses primarily on addressing symptoms, borrowing management strategies from conditions like OI and ME/CFS. Compliance with Ethical Standards Conflict of Interest The authors declare no competing interests. Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
2022-03-10T14:42:05.904Z
2022-03-09T00:00:00.000
{ "year": 2022, "sha1": "3e24403b34c10dfd046b10b1ca5f8c7dd1632c5f", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s40124-022-00261-4.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "3e24403b34c10dfd046b10b1ca5f8c7dd1632c5f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225645349
pes2o/s2orc
v3-fos-license
Composition of Games as a Model for the Evolution of Social Institutions The evolution of social institutions (e.g. institutions of political decision making or joint resource administration) is an important question in the context of understanding of how societies develop and evolve. In principle, social institutions can be conceptualized as abstract games with multiple players and rules about individual decision making and individual and joint outcomes. Here we propose a formal approach for the composition of games (e.g. Prisoner’s Dilemma – PD) to model the evolution of social institutions. Following a generalized description of the approach, we describe two examples of application for the composition of PD games. We assess the impact of the composed games on the level of cooperation. We discuss the implications of the proposed approach and how it may help to develop effective models of social institution Introduction Social institutions form the underlying decisional structures and processes of societies (Boyd and Richerson, 2008;Fukuyama, 2012). These institutions comprise of sets of rules, roles, norms and values that are organized in a specific pattern and which operate accordingly to produce social outcomes that can be interpreted as decisional outcomes (Boyd and Richerson, 2009;Fukuyama, 2012). For example, communal fishing or forest committees in rural communities manage the common resources for the benefit of every family in the community or judges and judicial support organizations manage the application of law in the case of disputes and deliver justice according to the law to the parties involved. Understanding how institutions change and evolve is important to understand how societies evolve. However, at the moment this understanding is effectively based on particular documented case studies, and most overarching theories are at best partial, often questionable at least in parts, and also often simplistic (Boyd and Richerson, 2008;Fukuyama, 2012;Olson, 1994;Turchin, 2006;Mokyr, 2017;Smaldino, 2019;Pletzer et al, 2018;Powers and Lehmann, 2013;Elster, 1989). One approach to model social institutions is to use inspiration from game theory and games as understood in the context of this theory (Rand and Nowak, 2013;Axelrod, 1997;Pletzer et al, 2018;Powers and Lehmann, 2013). According to this approach, a social institution is conceptualized as a multiplayer game, with a set of interaction rules, decision options and pay-off calculation rules. For example, voting games are used to calculate models of coalition forming (Roth, 1988). The theory of cooperation and the use of game theory and games in this context provide an example of how the games approach to modeling of social institutions can lead to a model of the evolution of individual behavior in the setting of a social institution (Powers and Lehmann, 2013;Rand and Nowak, 2013;Smaldino, 2019;Boyd and Richerson, 2009). Evolutionary game theory models and simulations can be used to show how cooperation emerges and stabilizes in social settings, where the cooperation is represented by sharing or solidarity decisions (Powers and Lehmann, 2013;Rand and Nowak, 2013;Smaldino, 2019;Pletzer et al, 2018). While evolutionary game theory is good to show how cooperation may emerge, it is also limited in the sense that it does not provide further tools to analyze how cooperative behavior may trigger institutional changes, leading to the evolution of the modeled social institutions. 
Here we propose the composition of games to model the evolution of social institutions. We assume that social institutions are modeled by games interpreted in game theoretical setting. We describe how such games can be composed in a consistent manner, using a set of composable components, such that they lead to meaningful new games. We describe how this approach can be used for the investigation of the evolution of social institutions. We describe two particular cases of using the proposed game composition framework to develop new multi-player games from Prisoner's Dilemma games. We analyze the impact of these games on the evolution of cooperative behavior and the stable patterns of cooperative behavior. Our results show that composed games lead to higher levels of cooperation than the one corresponding to the component games, and that considering further factors as well (e.g. available resources, population size) they lead to more successful populations than the ones that rely on the component games only. The rest of the paper is structured as follows. First we review related works. Then we describe institutions as games and the composition of games. This is followed by the description of the cases of game composition that we consider and the analysis of the impact of the new games on the level of cooperation and other features of simulated populations of agents. Then we discuss the implication of the proposed game composition framework and the application examples to the modeling of the evolution of social institutions. Finally, the paper is closed by the conclusions section. Related Works Cooperation theory aims to explain the cooperative behavior among self-interested individuals, such as in the case of humans, animals or cells that appear to act together for a common purpose and benefit (Rand and Nowak, 2013;Axelrod, 1997;Pletzer et al, 2018). The theoretical explanations of cooperation follow a few main lines of reasoning. The argument based on inclusive fitness and kin selection (Rand and Nowak, 2013) assumes that individuals are willing to support others who share their genes, maximizing the spreading of the shred genes. Reciprocal altruism (Rand and Nowak, 2013) assumes that a cooperative action supporting another individual may get reciprocated later and those who are willing to reciprocate will gain an advantage from this in the context of evolutionary selection. The image scoring (Rand and Nowak, 2013) argument assumes that individuals observe the behavior of others and are willing to cooperate with others who have been seen to be engaged in cooperation before. A further approach based on a joint investment argument (Roberts and Sherratt, 1998) assumes that cooperative action is seen as a joint investment which triggers the continuation of cooperative action in order to avoid loss of the joint investment. Further theoretical approaches consider particular circumstances, such as the network structure of interactions or spatial location of individuals (Mitteldorf and Wilson, 2000;Rand and Nowak, 2013). Cooperation theory generally relies on the use of game theory tools for the conceptualization of situations that offer the opportunity of cooperation. Social institutions deliver decision making mechanisms within the society, which are used to allocate resources, resolve conflicts and channel the representation of interests of individuals, groups and communities (Boyd and Richerson, 2009;Fukuyama, 2012;Olson, 1994;Elster, 1989). 
In general, participation of individuals in social institutions has the potential to deliver common benefits (this is true even in competitive cases, where one or some of the participants end up as winners and the others as losers). Social institutions also use incentives (i.e. individual payoffs, which may take the form of punishments and rewards) to nudge and compel individuals to follow the rules of the institution and contribute to the generation of common benefits (Sigmund et al, 2010;Traulsen et al 2012;Balafoutas et al, 2014;Han et al, 2017). Thus, models originating from cooperation theory fit for the purpose of modeling at least the simple cases of behavior in the context of social institutions. Consequently, cooperation theory and the corresponding games have been used to describe and analyze models of social institutions (Powers and Lehmann, 2013;Smaldino, 2019;Boyd and Richerson, 2009). Computational simulations using agent-based models are part of the core cooperation theory research (Rand and Nowak, 2013). Such simulations aim to capture key features of behavior in relatively simple models of agents that represent individuals. The computational simulation of such agents, their decision making and behavior, and of communities of agents in which agents interact according to their behavioral rules, serves as method for study of the evolution and emergence of cooperative behavior. The simplest two participant games are the Prisoner's Dilemma (Andras, 2016) and the Rock-Paper-Scissor (Andras, 2018), which have been implemented in a wide variety of agent-based models of cooperation evolution. The evolution of social institutions has been the subject of many investigations in the context of social sciences (Boyd and Richerson, 2008;Fukuyama, 2012;Olson, 1994;Turchin, 2006;Mokyr, 2017;Elster, 1989). For example, Fukuyama (2012) explored the impact of institution evolution on the potential of historical states and societies to grow and maintain themselves. Turchin (2006) analyzed how the levels of cooperation relate to the presence of successful social institutions in competing historical societies. For the purpose of modeling the evolution of social institutions researchers have used agent-based modeling approaches and analyzed how models of evolution of cooperation can be used to capture aspects of evolution of social institutions (Powers and Lehmann, 2013;Andras, 2018). One of the most advanced analysis of games and game playing has been developed in the context of two player multiturn strategic games, such as chess, go and other similar, but simpler, games (Mellies and Mimram, 2007;Ramanujan and Simon, 2008;Basset et al, 2014;van Benthem, 2002;Clairambault et al, 2012). In the context of the analysis of game playing strategy in such games, researchers have proposed the use of composition of strategies and defined how formally described strategies can be composed (Ramanujan and Simon, 2008;Basset et al, 2014). While this kind of game and game playing analysis is very interesting and has numerous application (e.g. in the case of industrial robotics) and in an abstract sense even resembles the playing of roles in social institutions, it is of limited use for the analysis and modeling of evolution of actual social institutions, due to the highly specific nature of the analyzed games (i.e. multi-turn, two player, with a set of very specific rules on possible moves and situation/outcome assessment). 
A further relevant aspect of the evolution of social institutions is the evolution of the conceptual language used in the context of these institutions (Skyrms, 2014). The development of new concepts that capture aspects of the social and physical environment, which become relevant because of the operation of existing institutions, is required for major innovations in social institutions. There is considerable work on the development of the language to conceptualize novel aspects of the environment (Skyrms, 2014;Barett et al 2019;Barett et al 2018;Lacroix, 2019). However, in this paper we do not address this aspect of evolution of social institutions and we restrict ourselves to modeling institutional evolutions at the level which does not require conceptual innovations in the language used to communicate for the delivery of roles within the social institutions. Composition of Games In this paper we conceptualize social institutions as games with a number of participants and a number of decision stages and outcomes for each participant. The participants in the game follow the rules of the game and choose from a set of decision options, possibly in a single stage or through multiple stages. The participant decisions get aggregated according to the rules of the game and possibly after several stages of aggregation lead to game outcomes for each participant. In A simple example of such games is the Prisoner's Dilemma game, in which there are two participants, each participant can principle, this generic formulation of games captures a wide variety of social institutions, such as community decision making over resources, resolution of conflicting claims over resources, selection of representatives for community decision making and so on. The participants choose from the same two decision options (cooperate and defect), and the game rules define the outcome for the two participants depending on their decision choices according to the following table. This game for example can be seen as a representation of social decision making about commonly used resources (e.g. fishing areas, pasture land, water for irrigation). Participant To define the composition of games we introduce here the key components of games in terms of diagram elements. The elements are Decision, Aggregation, Splitting and Impact blocks. Each of these is described below. Each block has an associated set of rules, which describe how the block operates over its inputs to generate its outputs. The Decision block as shown in Figure 1A. The Decision block has an input line carrying the identification of a participant of the game and has an output line carrying a decision label. The Decision block represents the decision choice made by the participant. For example, in the board of a company the representatives of the shareholders make a decision about supporting or opposing an investment proposal. Formally a Decision block can be defined as a sample generator for a random variable DB, where the random variable is defined over a set of possible decisions DS = {d1, …, dn} with a probability distribution over this set PD = {f1, …, fn} such that f1 + … + fn = 1. The probability distribution PD depends on the participant (e.g. status, resources, location) and also on the context (e.g. the knowledge about other participants and past experience of playing the game). Thus PD is formally set as the value of a function defined over the Cartesian product of sets of possible values for participant features and context factors. 
The Aggregation block is shown in Figure 1B. This block can have multiple decision input lines and has one or more decision output lines. The Aggregation block represents the conversion of a set of decisions into another set of decisions, according to the rules of the game. As a real world example, we may consider the case of decision about the location of a waste incinerator, where the relevant local authorities make decisions about their high level preferences (e.g. they may prioritize creation of jobs over pollution, or they may express their need for a district heating resource, and so on), then these decisions are translated through negotiation into another set of decisions that constrain the possible options for the location of the waste incinerator. The Aggregation block can be defined formally as function from one Cartesian product of decision sets to another Cartesian product of decision sets, A: The Splitter block has a single decision input and a number of decision outputs. This represents the derivation of a set of decisions from a single decision in the context of the rules of the game. For example, when a community association decides about organizing a local parade on a certain date in a certain location, this decision is converted into a number of operative decisions about who is in charge of various aspects of the event, which entertainment and security service providers need to be contacted and engaged and so on. The Splitting block can be defined formally, similarly to the Aggregation block, as function from one decision set to a Cartesian product of decision sets, S: We note that the primary outcomes following from decisions are represented in this approach as decision labels. For example, the decision about the location of waste incinerator, following the above example, is considered as a decision label with the specific location included in it. The final diagrammatic elements are the Impact blocks, which translate the outcome decisions into impact on Table 1. The Impact Block updates the resources of the participants according to the calculated payoffs represented by the outcome decisions. participants in the game. An Impact block has one or more decision inputs and has one output line carrying the identification of a participant. The Impact block alters the features (e.g. resources) of the corresponding participant according to the rules of the game about how outcome decisions are translated into impact on the participant who gets the outcome decision. A real world example is when a contested planning application for an estate development leads to an outcome decision that requires the alteration of the planned development, and as an effect the developer has to change the development budget assigned for the development project. Formally the Impact Block is a function over a Cartesian product of decision sets with values in a Cartesian product of sets that represent possible impacts on participant features. We note that the ordering of input and output lines of the blocks in general may matter, if the corresponding game rules treat inputs and outputs in a non-commutative manner. For a practical example, consider the case decisions over industrial patent disputes, where the timing of the filing of the patent applications determines the priority and the outcome of the decision. 
Similarly, the outcome decisions in the context of a dispute over an industrial contract may imply an inherent ordering of the application of the impacts, for example one company may need to deliver a particular action, before the other company can deliver a further required action, or one company may need to implement a required action in a particular location, while another company may also need to implement the same action, but at a different location. It is also possible that decisions of the same kind carry a weight associated with them. For example, in the case of voting in the board of a company the vote of the shareholders is weighted by their shareholding volume. Using the above introduced diagrammatic elements, we can represent the Prisoner's Dilemma game as shown in Figure 2. A diagram composed of decision, aggregation, splitting and impact blocks, where the blocks are connected by their output and input lines, represents a valid game, if for each participant input line there is a participant impact output line (some of the impact outputs may implicate no change to the respective participant) and the way how the blocks are connected is consistent with the corresponding decision, aggregation, splitting and impact rules that apply to the respective blocks. The diagram of a valid game should be such that the formal functions associated with the diagram blocks are composable in a meaningful way. Diagrammatic composition of games means the combinations of two games using their diagrammatic representation, such that the resulting diagram describes a meaningful game. The outcome decisions of one game may be used as initial decisions for the other game. Or in a more general sense, elements of the two games may be composed in novel ways to generate a meaningful game. As above, the composition of game blocks has to be meaningful in terms of the functions associated with the blocks. Below we provide two examples of composed games in Figures 3 and 4. The game with the diagram in Figure 3 is a combination three Prisoner's Dilemma games, played by three participants in all three combinations of pairs between them, and with the outcomes summed up for games played by each participant. participants (2r, 2r, 2r), (r + s, 2t, r + s), (2s, t + p, t + p), (2p, 2p, 2p). The game with the diagram in Figure 4, similarly is a combination of three Prisoner's Dilemma games, in a similar manner, however with the difference that the payoff decisions are rewritten by an additional aggregation block into a new payoff for each participant. One consistent option for the new payoffs is to have altered multipliers for the summing up of the Prisoner's Dilemma payoffs. In this case the final possible payoffs, for the three participants, are (1r, 1r, 1r), (2r + 2s, 2t, 2r + 2s), (3s , 3t + 3p, 3t + 3p), (4p, 4p, 4p), where 1, 2, 3, 4, 2, 3, 2, 3 are parameters. The first composed game is an aggregation of simpler games, the second composed game adds an additional layer of composition by mapping the aggregation of the direct composition of simpler games onto a new set of context dependent outcome decisions. The diagrammatic representation of games that we introduced here allows to compose games relatively easily in a meaningful way, i.e. such that the game blocks have appropriate inputs and outputs an these are combined in a meaningful manner. The evolution of social institutions can be captured through the composition of games representing social institutions. 
Social institutions evolve by adding further rules to their decision making processes, involving additional decisions, or involving additional participants. These steps of institutional evolution can be formulated in the context of game composition by adding of new aggregation blocks or splitting blocks to the game, or by extending the set of decision blocks and impact blocks, or by increasing the number of input lines to aggregation and impact blocks, within the diagram of the game. The addition of aggregation blocks or addition of extra input lines to aggregation blocks or impact blocks may also mean the integration of novel games as components, through composition with the original game. Thus, by representing social institutions as games and using the diagrammatic representation of games as described above, we can implement models of scenarios of evolution of social institutions. To make the modeling of social institution evolution complete, we need to define some measure of success for social institution. This then allows the simulation of evolution of alternatives of social institutions and the analysis of which one generates more successful outcomes for their social environment. Naturally, the measures of success may vary. The simplest options are to consider the size of the population of modeled societies characterized by different institutions or sets of institutions, or to consider the resource wealth of the simulated societies, or to consider the sustainability of the exploitation of the environmental resources, and so on. Composed Games and the Evolution of Cooperation We present here simulations of agent societies with different kinds of social institutions based on the Prisoner's Dilemma games. We aim to analyze the impact of different social institutions on the level of cooperation as this emerges and varies in the simulated agent societies. This analysis demonstrates the usefulness of the game composition based conceptualization of social institution evolution. We consider three variants of agent societies with different social institutions driving the interaction and joint decision making of the agents. First, we consider an agent society where the agents generate new resources by playing in pairs the Prisoner's Dilemma game represented diagrammatically in Figure 2. Next, we consider an agent society where triplets of agents play the game composed from Prisoner's Dilemma games represented in Figure 3. Finally, we consider an agent society where the resource production is managed by playing the game with the diagram representation in Figure 4, by groups of three agents. The performance of the agent societies is measured in terms of population size, average amount of resources of agents and the level of cooperation within the agent society, i.e. the percentage of agents who are involved in a Cooperate / Cooperate interaction. We note that in the case of games played by triplets of agents, the Cooperate / Cooperate interactions are considered for each pair of agents within the triplet. The general simulation settings follow the settings reported in previous papers (Andras, 2016(Andras, , 2018. The agents exist in a two-dimensional world, where they move by random movements. The boundaries of the world are reflective in terms of movement of agents (i.e. if an agent's move would move it beyond the boundary, it gets bounced back from the boundary by the amount of movement that would go beyond the boundary). 
The agents get involved into playing a game, which is used to generate resources for the participating agents. Depending on the game the agents form pairs or triplets to play the game. Only agents that are located sufficiently close in their two dimensional world can play together a game. Each agent starts with a randomly set age and when it reaches the maximal age (in our simulations this is set to 60 time turns) it reproduces asexually. Agents own resources, additional resources are generated by playing games, and each time turn has a set resource cost (1 resource unit in our simulations). When agents come to the point of reproduction, only agents with sufficient amount of resources can reproduce in our simulations the required amount of resources is set to be the resource amount which is half standard deviation below the mean resource amount for the current population. The number of offspring depends on the amount of resources of the agent that is reproducing, more available resource implying larger number of offspring. The offspring agents divide equally their parent's resources. The initial location of the offspring agents is set by a small random movement added to the position of the parent agent (i.e. the offspring are clustered around the position of their parent, following their generation). Agents that lose all their resources die without offspring. The outcomes of the games in terms of additional resources are set as described earlier. In the case of the Prisoner's Dilemma game, the outcomes are given by Table 1, with the specific setting of the payoff values as r = 3, t = 4, s = -2 and p = -1. In the case of the composed Prisoner's Dilemma game represented by the diagram in Figure 3, the payoffs are as indicated in the previous section, i.e. (6, 6, 6), (1, 8, 1), 3,Figure 5: The steady state level of cooperation in the three simulated agent societies. PDagent societies that play the Prisoner's Dilemma game in pairs. 3PD Combinedagent societies that play the combination of three Prisoner's Dilemma games between a triplet of agents. 3PD Modifiedagent societies that play the combination of three Prisoner's Dilemma games followed by the decision rewriting modification, between a triplet of agents. The data shown is calculated as a moving average over 21 time turns. The standard deviations are not shown to not clutter the figure. 3), (-2, -2, -2). In the case of the composed Prisoner's Dilemma games represented by the diagram in Figure 4, the parameter values are set as 1 = 2.5, 2 =1, 3 = 2, 4 = 2, 2 = 1, 3 =1, 2 = 2.5, 3 = 0.85, thus the payoffs are (7.5, 7.5, 7.5) , (1, 10, 1), (-5, 2.4, 2.4), (-2, -2, -2). The agents play the game in a probabilistic manner. Each agent has an inclination to cooperate, which is represented by a number  in the range of (0,1). The agent makes it decision choice by generating a random number  in the (0,1) range. If  < , the agent decides to cooperate, otherwise to defect. In the case of games played by triplets of agents, each agent makes a single decision, as indicated by the diagrams of the games. The agent's offspring inherit the cooperation inclination of their parent with a small random deviation. The simulations in each case are played for at most 1,200 time turns. In each time turn agents are matched into pairs or triplets, depending on the game they play. It is always possible that some agents are left out from the game playing if they do not get selected into playing pairs or triplets. 
In each time turn the agents move once, following the closing of all played games. We also have considered simulations where we disperse the offspring of the agents, so these do not form clusters after their generation. However, in all three cases of the games that we consider here, the spread-out offspring scenario led often to early die out of the agent populations, so these are not reported in the paper. Our simulations aim to produce long-lived agent communities in which we can measure the performance indicators for the agent society over many time turns (i.e. for the full 1,200 time turns). Thus, the simulations need to start with a sufficient number of initial agents (typically in the range of 1,500 -4,500 agents). Furthermore, we also implemented the use of a general multiplier  that is applied to all payoff values, to make sure that the game playing Figure 6: The relative population size of the three simulated agent societies during the steady state period. The games are as in Figure 5: PD, 3PD Combined, 3PD Modified. The data shown is calculated as a moving average over 21 time turns. The standard deviations are not shown to not clutter the figure. generates sufficient amount of resources that sufficient number of agents survive for their full life time and also that the population does not explode overly quickly beyond a manageable number of agents (the maximum allowed number of agents is set to 68,000). The value of  is set in the range of 0.3 -3.5, depending of the game played. We ran around 20 simulations for each of the three settings with the different games played by the agents. For the purpose of analysis we consider the characteristics of the agent societies during the steady state of their evolution, which in the case of our simulations is the final third of the simulated evolution, i.e. the time period between time turns 800 and 1,200. We calculated the average indicators across all simulations of the same kind and also the standard deviations of these indicators. Figure 5 shows the level of cooperation in the three social settings of the agent societies, where the social institutions are implemented as the three kinds of games played by the agents. The results show that the steady state level of cooperation is the lowest in the case of the agent society where the agents play the basic Prisoner's Dilemma game (diagrammatically represented in Figure 2). The highest level of steady state cooperation is achieved in agent societies that use the combination of three Prisoner's Dilemma games with the added rewriting of the decision outcomes (see the game diagram in Figure 4). The agent societies with a social institution implemented as the game represented in Figure 3, achieve a middling level of steady state cooperation. In our interpretation this result shows that social institutions of increasing complexity can facilitate the increase in the steady state level of cooperation between self-interested individuals. While the examples of representations of social institutions are very simple (i.e. Prisoner's Dilemma game and its combinations and a relatively simple alteration), these examples capture a key aspect of difference between social institutions, which is their decisional complexity, measured by the number of elementary decisions that lead to the final outcome of the interactions between the agents / individuals. 
The most complex social institution that we considered increases the benefit of full triplet cooperation relative to other outcomes, with the exception the outcome for the cheater, who plays with two cooperators. This modification of the outcomes, by rewriting the outcome decisions through the use of the Aggregation Block 2 (see Figure 4) delivers the increase of the steady state level of cooperation (see Figure 5). Next we consider the size of agent populations during the steady state period. Given that different simulations required different starting population sizes, to avoid early die-out and prevent rapid over-growth, we consider for the purpose of comparison relative population sizes. The relative size of a population is calculated as the ratio between the current size and the initial size of the population. The results in terms of population size are shown in Figure 6. The results show that the agent populations with more complex social institutions achieve higher relative population size than the agent society with the simplest social institution. The data indicates that the relative size of populations for agent societies with more a complex social institution is larger than the relative size of populations for agent societies with a less complex social institution. Finally, we considered the amount of resources available to agents in the simulated agent societies. For, this again we looked at relative resource volumes, again to avoid the impact of different initialization conditions and other differences in parameters, which make direct comparison of values difficult to interpret. The relative resources are calculated by dividing the current amount of average resources of agents by the initial amount of average resources of agents. We consider in particular the resources available to agents who participate in cooperation interactions in a given time turn. We note that in general, in a given time turn, the average resource of agents who decide to cheat is higher than the average resource of cooperators, and the average resource of agents, which cooperate, but have a cheating partner, is less than the average resource of cooperators. Figure 7 shows the result of comparison of average resources of cooperators during the steady state period. The periodic variations in the lines correspond to the periodic minor variations in the size of the agent populations, which are induced by the varying of the  value. The data shows that the agent societies with the most complex social institution have the lowest average relative amount of resources for cooperators, while the highest average relative resource amount is in the case of agent societies with the simplest social institution. The interpretation of this result is more complicated than the previous result interpretations. In a sense the lower average resource amount may reflect the larger relative population size and possibly also the higher level of cooperation, which implies lower level of occasional cheating by the agents. Cheating in general leads to higher resource accumulation, however, too much cheating risks to lead to the die out of the population. Thus the results seem to suggest that more complex social institutions require (or induce) higher level of cooperation, which reduces the frequency of occasional cheating and consequently leads to lower relative average resource levels across the population. 
On the other hand, less complex social institutions appear to require lower level of cooperation, allowing more opportunity for occasional cheating, which raises the relative average resource level across the population. Of course, all these are in the context of relative resource levels. Following the investigations of products of the considered social indicators we could not establish any further meaningful interpretation that could be helpful. However, we note that the product of relative resource amount and relative population size and of the difference between the overall resource gain for all cooperation and having one cheaters and the rest cooperators among the playing agents, gives similar values for all three games across the considered simulation time period. This supports the above reasoning in a general sense, i.e. the differential in the benefits of cooperation and cheating and the required level of cooperation are likely to induce the observed difference between the average resource amounts of cooperators across the three simulation scenarios corresponding to different complexity social institutions. Discussion We have introduced above a conceptual framework for the composition of games to model evolution of social institutions. We demonstrated the use of this conceptual framework using two different compositions of Prisoner's Dilemma games and by discussing the interpretation of the results in terms of social institutions of different complexity. However, we have not presented any general approach to derive novel decision, aggregation, splitting and impact calculation blocks that can be used to enhance existing game representations of social institutions or to make the composition of partially matching games meaningful. The two case of composed games that we presented explore the proposed conceptual framework, but both cases are hand-crafted to make the composition of games meaningful. In principle, the hand-crafting applied in the presented composed games can be generalized in the sense of capturing the decisional and environmental space of the games considered for composition using the game blocks. Considering all decisions coming out of the games to be composed, in general, we need to add aggregator blocks or modify aggregator blocks such that all decisions are captured as inputs for the aggregator blocks and the such that the aggregator blocks provide a composite combination of the outputs of the decision blocks. In addition to this, novel game blocks may get added to take into consideration both the decisional environment and the external resource environment. This may happen on the basis of some meaningful analysis of these environments that may reveal previously not considered regularities. For example, in the context of combination or Prisoner's Dilemma games, one such external environment factor may be the variability of pay-offs depending on some environmental uncertainty or risk indicator (e.g. in biological cases a such factor can be predation risk). Similarly, if games with many participants, the distribution of individual decisions or temporal variation of individual decisions may impact the outcome and quantifying and considering this offers the inclusion of additional decision, aggregation, and other game blocks to enhance the game. A further issue is the automated composition of games, which would be required for large scale analysis of models of evolution of social institutions using the proposed approach. 
Having the previously outlined way of considering environmental factors and completing games with component blocks is useful for this, but still leaves the question of how to automate the block completion to make the composed meaningful. The answer to this issue is provided in principle by the use of applied category theory (Fong and Spivak, 2019). This approach provides a way of defining formally what meaningful game composition means and also a way for automated completion of composite games to reach their meaningful composition. So far, this is an answer in principle, since more work is needed on the category theoretical translation of the proposed game composition methodology, which will be done in the future. We note that the proposed methodology allows a coherent and transparent conceptualization and model implementation of incentives (punishments and rewards) used by social institutions (Sigmund et al, 2010;Traulsen et al 2012;Balafoutas et al, 2014;Han et al, 2017). These can be implemented in principle by using splitting blocks that separate different aspects of decisions and appropriate aggregation blocks that apply the reward or punishment in function of the incoming decisions. For example, the aspects of individual decision making and derived decisions, such as the level of fairness, the contribution to the joint effort, the extent of bluffing and lying, can be separated off using decision splitting blocks and then combined using aggregation blocks to determine the due reward or punishment. The proposed approach can also be used for the coherent and transparent composition of models of social institutions with incentive mechanisms. Finally, let us summarize the limitation of the work presented in this paper. To a good extent, these are already highlighted in the previous two discussion points. The work that we present here is limited to two hand-crafted cases of composed games and their comparisons. As we pointed out, given the general conceptual framework that we have introduced here, there are clear ways of moving toward wider range and more general games, by calculated completion of partially complete compositions of games and also in terms of automated composition of games. Conclusions In this paper we have presented a conceptual framework to the modeling of evolution of social institutions using composition of games. We demonstrated the use of this framework by considering two particular compositions of Prisoner's Dilemma games. The results show that the structurally more complex compositions lead to higher levels of cooperation and larger relative size of the simulated agent populations. We have also discussed briefly the calculated completion of partially complete composed games and the principled approach to automated composition of games. The proposed conceptual framework provides a way to derive and analyze complex multi-participant games that can approximate much better real world social institutions than the currently used simple and usually two-participant games such as the Prisoner's Dilemma game and other similar games. This may lead to much better understanding of social institutions evolve and how they support more or less social integration and social optimization of resource distribution to support overall growth. 
Future work will focus on calculated completion of partially complete composed games, on environment-derived enrichment of games by adding in environment analysis based game blocks (including new decision blocks), and on category theory based automated composition of games.
2020-07-16T09:07:23.373Z
2020-07-14T00:00:00.000
{ "year": 2020, "sha1": "65cb988cd660b8647a328c9e9e6ba0ec44ed579e", "oa_license": "CCBY", "oa_url": "https://www.napier.ac.uk/~/media/worktribe/output-2809289/composition-of-games-as-a-model-for-the-evolution-of-social-institutions.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4a58739eed1b01f70f4f90db8e3934b72c09d556", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Political Science" ] }
245782099
pes2o/s2orc
v3-fos-license
Calebin-A prevents HFD-induced obesity in mice by promoting thermogenesis and modulating gut microbiota Background and aim Obesity is one of the complications of sedentary lifestyle and high-calorie food intake which become a global problem. Thermogenesis is a novel way to promote anti-obesity by consuming energy as heat rather than storing it as triacylglycerols. Over the last decade, growing evidence has identified the gut microbiota as a potential factor in the pathophysiology of obesity. Calebin A is a non-curcuminoid novel compound derived from the rhizome of medicinal turmeric with putative anti-obesity effects. However, its ability on promoting thermogenesis and modulating gut microbiota remain unclear. Experimental procedure C57BL/6J mice were fed either normal diet or high-fat diet (HFD) supplement with calebin A (0.1 and 0.5%) diet for 12 weeks. The composition of the gut microbiota was assessed by analyzing 16S rRNA gene sequences. Results and conclusion Mice treated with calebin A shows a remarkable alteration in microbiota composition compared with that of normal diet-fed or HFD-fed mice and is characterized by an enrichment of Akkermansia, Butyricicoccus, Ruminiclostridium_9, and unidentified_Ruminococcaceae. We also explored that calebin A reduce the weight and blood sugar of mice that are induced by HFD, and show a dose-dependent reaction. Moreover, calebin A decreases the weight of white, beige, and brown adipose tissue, and also restores liver weight. In cold exposure experiments, calebin A can better maintain rectal temperature through thermogenesis. In summary, calebin A has a good thermogenesis function and is effective in anti-obesity. It can be used as a novel gut microbiota modulator to prevent HFD-induced obesity. Introduction Obesity is one of the complications of sedentary lifestyle and high-calorie food intake. 1 Nowadays, obesity is considered a global problem, as in the 20th century the prevalence was so high that became a global epidemic. 2 Obesity is the result of long-term impaired energy intake and consumption balance. For the treatment of obesity, manipulating peripheral mechanisms to increase energy expenditure seems to be more valuable than some common reversible drugs that suppress central appetite. 3 Emerging evidence suggests that non-shivering thermogenesis may be a therapeutic target to improve the obese condition. 4 Brown adipose tissue (BAT) is the main organ involved in non-shivering thermogenesis, although white adipose tissue (WAT) adipocytes will browning during long-term cold exposure or exercise. 5 BAT has a high density of mitochondria harboring UCP1 (uncoupling protein 1), a mediator of non-shivering thermogenesis that uncouples oxidative phosphorylation, resulting in heat generation instead of energy production. 6 However, under conditions of enhanced adaptive energy expenditure, brown adipocyte-like cells (bearing UCP1 and perhaps offering other mechanisms of fuel oxidation for heat production) appear at sites of WAT, especially in the subcutaneous WAT depots. This is the so-called browning of WAT, and cells resembling brown adipocytes arising in this process are called "beige" or "brite" (from "brown-in-white"). 7,8 In term of trans-differentiation, beige adipocytes can also come from de novo differentiation from tissueresident progenitors. Activating BAT and inducing browning of WAT can accelerate the intake of glycolipids and reduce the insulin secretion requirement, which may be a new strategy to improve obesity. 
9 Recently, a rapidly expanding field of research has shown that the gut microbiota is an important factor leading to obesity, mainly through the regulation of nutrient acquisition, energy regulation and fat storage. 10 In addition to changes the calories or host metabolism, the composition of the gut microbiota can also respond to the progression of obesity. It has also been reported that the intake of some micronutrients, fatty acids, prebiotics, and probiotics could have an impact on gut microbiota composition and on the regulation of gene expression at the liver, muscle, and adipose tissue site. 11 Phytochemicals are bioactive compounds that are abundantly distributed in fruits and vegetables. 12 A strong correlation between specific classes of phytochemicals and modification of the responding microbiota was observed. 13,14 However, comprehensive understanding of the interactions among phytochemicals and the gut microbiota remains in the early phase. Calebin A (4-[3 methoxy-4 hydroxyphenyl]À2-oxo-3enebutanyl 3-[3-methoxy-4 hydroxyphenyl] propenoate) is a non-curcuminoid novel compound derived from the rhizome of medicinal turmeric (Curcuma longa L., Zingiberaceae). 15 Calebin A has been shown to exert anti-inflammatory and anti-tumor properties by the induction of apoptosis and modulating different signaling pathways [e.g., mitogen-activated protein kinase (MAPK), extracellular signal-regulated kinase (ERK), p38, Jun N-terminal kinase (JNK)]. 16,17 Recent studies have delineated that calebin A possesses immense potential to suppress the tumor progression in colorectal cancer. 18 Our previous study demonstrated that calebin-A can inhibit adipogenesis and hepatic steatosis in high-fat diet(HFD)-induced obesity via activation of AMPK signaling. 19 Although the effect of calebin A on HFD-induced obesity has been reported, the effect of calebin A on inducing thermogenesis and modulating microbiota as a strategy of preventing obesity is not widely discussed yet. Therefore, this study is designed to clarify the result of calebin A promoting thermogenesis and modulating gut microbiota. Materials Calebin-A was obtained from Sabinsa Corp. (East Windsor, NJ). The purity of Calebin-A was determined by a HPLC higher than 99%. Animal experimental design and animal care The animal study was designed with 9 four-week old male C57BL/6 mice in each group, with a total of 36 mice, purchased from the BioLASCO Experimental Animal Center (Taiwan Co., Ltd., Taipei, Taiwan) and housed in a controlled atmosphere (25 ± 1 C at 50% relative humidity) with a 12 h light/12 h dark normal photoperiod cycle. Experimental procedures were approved by the Institutional Animal Care and Use Committee of the National Taiwan University (NTU-107-EL-00152). After acclimation for 1 weeks, the mice were randomly assigned to 4 groups with average body weight without significant difference. (1) normal diet (ND, 15% energy from fat), (2) high-fat diet (HFD, 50% energy from fat, the fat in the feed mainly comes from lard), (3) HFD containing 0.1% Calebin-A (LCA), (4) HFD containing 0.5% Calebin-A (HCA). The composition of the experimental diet was based on the Purina 5001 diet (LabDiet, PMI Nutrition International, St Louis, MO, USA) as described previously 20 and the mice are given free access to food and drinking water, and the food consumption and body weight were recorded every day. Animals were sacrificed after 12 weeks HFD and Calebin-A treatments intervention. 
Organs including liver, kidneys, spleen, gastrocnemius muscle and adipose tissues (perigonadal, retroperitoneal, mesenteric, brown and beige) were photographed and weighed. Plasma and organs were frozen atÀ80 C before being analyzed. Fasting blood glucose measurement The mice after being fasted for 8 h and the glucose levels of tail vein blood samples were measured using a glucose analyzer (OneTouch Ultra, Lifescan, Johnson&Johnson, Milpitas, CA). Acute cold tolerance test Mice were individually housed in precooled cages and exposed to a cold temperature (4 C) for 7 h with free access to food and water. Their rectal temperature was measured at time 0, 2, 4, 5, 7 h using a rectal probe thermometer (Center 301 type K, Australia). 21,22 2.5. 16S rDNA gene sequencing and analysis 2.5.1. Extraction of genome DNA Total DNA was extracted from fresh fecal samples (n ¼ 3) using the innuSPEED Stool DNA kit (Analytik Jena AG, Jena, Germany) according to the manufacturer's protocol. The DNA samples were sent to Biotools Co., Ltd, Taiwan. DNA concentration and purity was monitored on 1% agarose gels. According to the concentration, DNA was diluted to 1 ng/mL using sterile water. PCR products quantification and qualification Mix same volume of 1X loading buffer (contained SYB green) with PCR products and operate electrophoresis on 2% agarose gel for detection. Samples with bright main strip between 400 and 450bp were chosen for further experiments. PCR products mixing and purification PCR products was mixed in equidensity ratios. Then, mixture PCR products was purified with Qiagen Gel Extraction Kit(Qiagen, Germany). Library preparation and sequencing Sequencing libraries were generated usingNEBNext Ultra DNA Library Pre ® Kit for Illumina, following manufacturer 's recommendations and index codes were added. The library quality was assessed on the Qubit@ 2.0 Fluorometer (Thermo Scientific) and Agilent Bioanalyzer 2100 system. At last, the library was sequenced on an Illumina platform and 250 bp paired-end reads were generated. 2.5.6. Paired-end reads assembly and quality control C Data split: Paired-end reads was assigned to samples based on their unique barcode and truncated by cutting off the barcode and primer sequence. C Sequence assembly: Paired-end reads were merged using FLASH (V1.2.7,http://ccb.jhu.edu/software/FLASH/), 23 a very fast and accurate analysis tool, which was designed to merge paired-end reads when at least some of the reads overlap the read generated from the opposite end of the same DNA fragment, and the splicing sequences were called raw tags. C Data Filtration: Quality filtering on the raw tags were performed under specific filtering conditions to obtain the highquality clean tags 24 according to the QIIME(V1.7.0, http:// qiime.org/index.html) 25 quality controlled process. C Chimera removal: The tags were compared with the reference database(Gold database, http://drive5.com/uchime/ uchime_download.html)using UCHIME algorithm (UCHIME Algorithm, http://www.drive5.com/usearch/manual/ uchime_algo.html) 26 to detect chimera sequences, and then the chimera sequences were removed. 27 Then the Effective Tags finally obtained 2.5.7. OTU cluster and species annotation C OTU Production: Sequences analysis were performed by Uparse software (Uparse v7.0.1001,http://drive5.com/uparse/ ). 28 Sequences with !97% similarity were assigned to the same OTUs. Representative sequence for each OTU was screened for further annotation. 
C Species annotation: For each representative sequence, the GreenGene Database (http://greengenes.lbl.gov/cgi-bin/nphindex.cgi) 29 was used based on RDP 3 classifier (Version 2.2, http://sourceforge.net/projects/rdp-classifier/) 30 algorithmto annotate taxonomic information. C Phylogenetic relationship Construction: In order to study phylogenetic relationship of different OTUs, and the difference of the dominant species in different samples (groups), multiple sequence alignment were conducted using the MUSCLE software (Version 3.8.31, http://www.drive5.com/ muscle/). 31 C Data Normalization: OTUs abundance information were normalized using a standard of sequence number corresponding to the sample with the least sequences. Subsequent analysis of alpha diversity and beta diversity were all performed basing on this output normalized data. Alpha diversity Alpha diversity is applied in analyzing complexity of species diversity for a sample through 2 indices, Shannon and ACE. This indices in our samples were calculated with QIIME (Version 1.7.0) and displayed with R software(Version 2.15.3). ACE index was selected to identify Community richness: ACE -the ACE estimator (http://www.mothur.org/wiki/Ace); Shannon index was used to identify Community diversity: Shannon -the Shannon index (http://www.mothur.org/wiki/Shannon). Beta diversity Beta diversity analysis was used to evaluate differences of samples in species complexity, Beta diversity on both weighted and unweighted unifrac were calculated by QIIME software (Version 1.7.0). 4 Cluster analysis was preceded by principal component analysis (PCA), which was applied to reduce the dimension of the original variables using the Facto-MineR package and ggplot2 package in R software (Version 2.15.3). A distance matrix of weighted or unweighted unifrac among samples obtained before was transformed to a new set of orthogonal axes, by which the maximum variation factor is demonstrated by first principal coordinate, and the second maximum one by the second principal coordinate, and so on. Unweighted Pair-group Method with Arithmetic Means (UPGMA) Clustering was performed as a type of hierarchical clustering method to interpret the distance matrix using average linkage and was conducted by QIIME software (Version 1.7.0). Statistical analyses Statistical evaluate was performed by running the one-way analysis of variance (ANOVA) and Duncan's Multiple Range Test. The nonparametric Wilcoxon signed rank test for paired data was used in microbiota analysis. Data are presented as the means ± SE for the indicated number of independently performed experiments. A probability value of P < 0.05 or P < 0.01 was considered statistically significant. Excel was usedd for outlier detection, after using QUARTILE to calculate the quartile, calculate the interquartile range (IQR), and finally define the value beyond the upper and lower bounds as outliers and deleted. Results and discussion 3.1. Calebin A exhibits anti-obesity effects in high-fat diet induced obesity in mice Fig. 1A showed that after 12 weeks of induction, the weight difference between the HFD group and the ND group was about 10 g, and Calebin-A (0.1 and 0.5%) given at different doses can significantly reduce the weight of the mice and has dose-dependent reaction. These results are consistent with previous studies, 19 and confirmed that lower doses can still significantly reduce the weight of mice. 
The body weight of the groups treated with Calebin-A were lower than the HFD group during the whole 12 weeks of the experiment, showing that it can prevent obesity caused by high-fat diet, and the appearance of the mice are relatively small. Fig. 1B indicates the values obtained by measuring fasting blood glucose of mice after 8 h of fasting in the 12 th week. The fasting blood glucose of the HFD group is the highest, which is significantly higher than ND group. Supplementation with Calebin-A can lower fasting blood glucose, and the HCA group (0.5%) significantly reduce the fasting blood glucose that rises after the induction of a high-fat diet. The results of organ weight (Fig. 1C) showed that the weight of liver and spleen increased significantly after being induced by highfat diet. HCA group could significantly reduce liver weight to no difference from the ND group, but could not significantly reduce spleen weight. In the part of the kidney, there was no significant difference between the groups. From the appearance of the organs (Fig. 1D), it can be found that the liver color becomes lighter due to lipid accumulation after the high-fat diet is given, and the 0.5% Calebin-A treatments can improve the liver color and make it closer to the dark red of the ND group, the results of hepatic steatosis are consistent with previous studies. 19 The research points out that fasting blood glucose is one of the indicators for assessing obesity. 32 After fasting in normal mice, the fasting blood glucose will fall around 50e100 mg/dl, while the blood glucose of type 2 diabetic mice will rise to 150e300 mg/dl. 33 Compared with the experimental results, it can be seen that the mice induced by HFD have a tendency towards diabetes. Previous studies have pointed out that curcumin has the ability to improve blood sugar in mice, and this phenomenon is consistent with this experiment. 34 In toxicology experiments, the organ weights of treated animals and untreated animals are often compared to assess the toxic effects of samples. 35 The liver is an important organ responsible for metabolism, detoxification, and protein production, 36 the spleen is the largest lymphatic organ, responsible for the production of antibodies and various immune functions. 37 Previous studies have shown that high-fat diet increased the weight of the liver and spleen, which is consistent with the results of this experiment. 38 Calebin A ameliorates enlargement of white adipose tissue and induces brown fat-like changes in high-fat diet induced obesity in mice In addition to the effect on body weight, the anti-obesity effect of calebin A was also reflected in adipose tissue (Fig. 2). First, perigonadal, retroperitoneal, and mesenteric adipose tissues represent white adipose tissue (WAT) in mice. After a high-fat diet, the weight and volume of WAT increased significantly. LCA group can reduce the weight of WAT but there is no significant difference from the HFD group. HCA group can significantly reduce the weight of perigonadal fat and mesenteric fat. The results of this experiment are also consistent with previous studies. 19 Therefore, we further explored inguinal white adipose tissue (iWAT), and brown adipose tissue (BAT) in mice. The results in Fig. 3 indicated that the weight of iWAT and BAT was induced by high-fat diet, and the weight of iWAT and BAT was slightly decreased by calebin A. Apart from using iWAT and BAT to explore the brown fat-like changes in mice, we also further observed the expression of gastrocnemius muscle. 
The results showed that high-fat diet would cause muscle loss in mice. Similar results were also reported by Chou et al. 39 The study pointed out that obesity not only causes adipose tissues increase, but also causes muscle loss. 40 Therefore, muscle weight is measured in this experiment to assess whether HFD affects muscles. The muscle content of mice is represented by the gastrocnemius muscle used in the previous study. 40 The gastrocnemius is located with the soleus in the posterior compartment of the leg which adipose tissue does not grow and affect. However, the sample in this experiment cannot increase the amount of gastrocnemius by increasing the metabolic energy. Effect of calebin A on acute cold tolerance test The adaptive thermogenesis of mice is evaluated through the acute cold tolerance test. In this experiment, the mice are placed in a cold environment for a short period of time (4 C for 7 h) to observe the colonic temperature which is the core body temperature of the mice. The rectal probing is a more suitable temperature detection method when the mice are during hypothermia. 41 The mice have better adaptive thermogenesis capacity will have a smaller drop in body temperature and can better maintain its core temperature through heat production. It can be seen from the results (Fig. 4A) that mice given high-dose calebin A (0.5%) have less body temperature changes and can better maintain their own temperature, while mice in the HFD group have the sharpest drop at temperature. This result shows that it cannot maintain its own temperature in a low-temperature environment. The temperature change graph is represented by area under the curve (AUC) (Fig. 4B). It can be seen that the HCA group is significantly higher than the HFD group, which proves that calebin A can improve the obesity induced by high-fat diet in mice through enhanced the capacity for adaptive thermogenesis. In the present study, resveratrol treatment mice had a higher body temperature during the cold challenge in HFD-induced obesity, indicating that resveratrol treatment enhanced the capacity for adaptive thermogenesis and also increased metabolic activity in HFD-induced mice. 42 Calebin A alters microbiota composition in HFD-fed C57BL/6 mice The overall composition of the bacterial community in the different groups was assessed by analyzing the degree of bacterial taxonomic similarity between metagenomic samples. The gut microbiota of obese humans and HFD-fed mice is characterized by an increased Firmicutes-to-Bacteroidetes ratio (F/B ratio). 43 The results show that after 12 weeks of high-fat diet treatment, the composition of microbiota and the ratio of Firmicutes and Bacteroidetes were altered. (Fig. 5). Many previous studies have pointed out that high-fat diet increased the F/B ratio, leading to gut dysbiosis, although the F/B ratio of the HFD group was not significantly higher than that of the ND group in the results, the administration of high-dose calebin A could reduce this ratio (from 1.24 ± 0.29 to 1.03 ± 0.42) and increase the ratio of Verrucomicrobia phylum (from 0.002 ± 0.001 to 0.022 ± 0.02, p-value ¼ 0.22). It can also be seen from Fig. 5 that Proteobacteria has an upward trend after calebin A treatment (from 0.03 ± 0.01 to 0.05 ± 0.01, pvalue ¼ 0.10). This is related to the UPGMA trend of the bacteria phyla, the change of HCA group is most similar to the composition of the ND group, followed by the HFD group. 
It is shown that giving an HFD can cause a change in the microbiota composition, and giving a high dose of calebin A can make the change in the composition close to that of the ND group. Alpha diversity is an indicator that can reflect the abundance and diversity of microbiota. Alpha diversity is mainly related to two factors. On the one hand, it is the number of species, that is, richness; on the other hand, it is the diversity of the distribution of individuals in the community. Common indicators of richness include Chao1 estimator and ACE estimator, and common indicators of diversity include Shannon and Simpson index. Fig. 6 A and B indicates the ACE estimator and Shannon index to evaluate the richness and diversity of the microbiota after different treatments. The results show that the richness of the microbiota is reduced after HFD treatment. The richness of the group also tended to decrease after calebin A treatment, but there was no significant difference in Shannon index between the groups. However, in the part of the richness, the high dose of calebin A significantly decreased. It is speculated that this result is related to the significant increase in the proportion of Proteobacteria and Verrucomicrobia after calebin A administration, and the decrease in the proportion of other bacterial phyla. By analyzing the differences in the composition of the microbiota in each group, the principal component analysis (PCA) is used to classify the flora. It is currently a widely used method to analyze the microbiota. PCA can be multi-dimensional data dimensionality reduction, while maintaining a focus on both the data contributed the most characteristic differences, so as to effectively find information on the most important element methods and structures. Using PCA analysis can find the coordinate axis that can reflect the difference between samples to the greatest extent, so that the difference of multi-dimensional data can be reflected on the two-dimensional coordinate in a linear combination, so as to observe the difference between individuals or groups. If the community composition of the sample is more similar, the distance in the PCA diagram is closer. Fig. 6C shows that the ND group is farther away from the group given HFD, the group treated the different dose calebin A were closer because of its similar composition, and the position of the HFD group falls between the two. It shows that feeding the animals with different concentration of calebin-A affects the composition of the microbiota. Previous study has pointed out that curcumin which has similar structure to calebin A show similar performance in alpha diversity and PCA compare with our experiment. After administration of curcumin, the diversity (OUT estimates and Shannon index) decrease, and the result of PCA analysis shows that the distance between HFD group and HFD þ curcumin is closer. 44 Fig. 5 is the analysis of the Phylum level using UPGMA tree, and the PCA in Fig. 6C is the result of the Species level analysis, so the experimental results were inconsistent. At the Phylum level, a lot of studies have pointed out that the ratio of the Firmicutes and Bacteroides phylum is related to obesity, 45 so it is meaningful to analyze the Phylum level. Calebin A manipulated gut microbiota in obese mice after dieting In order to further study the relationship between changes in microbiota and calebin A, a heatmap was used to show the abundance of the first 35 OTUs that significantly affected the HFD and calebin A (Fig. 7A). 
The results showed that the specific unknown genus of Ruminococcaceae and Butyricicoccus genus of mice significantly decreased after HFD treatment, and the administration of different doses of calebin A could increase its content. Many studies point out that Ruminococcaceae and Butyricicoccus are associated with obesity. The main genera contributing to the gut composition among the non-obese individuals were Prevotella, unclassified Lachnospiraceae, and unclassified Ruminococcacea. 46 Rodriguez et al. also found that the levels of Anaerostipes, Akkermansia and Butyricicoccus decreased in obese patients. 47 In the HCA group, the level of Ruminiclostridium_9 and Akkermansia genus increased significantly compared with the HFD group. Akkermansia belongs to the Verrucomicrobia phylum. This result explains the administration of calebin A caused an increase in the Verrucomicrobia phylum. Hou et al. undergo the experiment that fed 60% HFD to the four-week-old C57BL/6J male mice for 12 weeks. The results showed that the relative abundance of Ruminiclostridium_9 was decreased 48 which consistent with the results of our experiment. However, another study also pointed out that giving HFD increased Ruminiclostridium_9, while giving HFD and resveratrol reduce the content of Ruminiclostridium_9, this is contrary to the results of our experiment. In addition, the author compared the microbiota of mice given a normal diet with the mice given a normal diet and resveratrol, and there was an increase in Ruminiclostridium_9. 49 It is speculated that different diet composition will cause different microbiota, and the host's obesity or leanness may also cause differences in the composition of the microbiota. Akkermansia is a gram-negative bacterium that can decompose mucin. It accounts for about 1e3% of human intestinal bacteria. In recent years, many studies have found that the proportion of Akkermansia in obese individuals will drop significantly. Maintaining the basic ratio can reduce the body's fasting blood glucose, waist-to-hip ratio and subcutaneous fat cell size, and improve insulin sensitivity. 50e52 Curcumin, an analog of Calebin-A was proved to improve the obesity in mice and alleviate HFD-induced hepatic steatosis through increase Akkermansia, which correlated with the improvement of gut barrier function and with the improvement of hepatic inflammatory and oxidative stresses in obese mice. This phenomenon is consistent with the results of our experiment. 53,54 Conclusion Based on the above experimental results, calebin A can reduce the weight and blood sugar of mice that are induced by high-fat diet, and show a dose-dependent reaction. Calebin A reduces the weight of white, beige, and brown adipose tissue, and also restores liver weight. In cold exposure experiments, calebin A can better maintain rectal temperature through heat production. In addition, in the gut microbiota, calebin A can increase the intestinal commensal bacteria Akkermansia, as well as Butyricicoccus, Rumi-niclostridium_9, and unidentified_Ruminococcaceae. In summary, calebin A has a good thermogenesis function and is effective in antiobesity. It can be used as a novel gut microbiota modulator to inhibit weight gain, enhance the effect of thermogenesis, and improve obesity. Declaration of competing interest The authors declare that there are no conflicts of interest.
2022-01-07T16:10:24.056Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "2e28b37afddff1436422002c8d4be2ca04deaaa6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jtcme.2022.01.001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d814ed98eae5c1451969baca09b3cb236e78459", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55834003
pes2o/s2orc
v3-fos-license
Galaxy metallicity near and far Metallicity appears to be one the most important tool to study formation and evolution of galaxies. Recently, we have shown that metallicity of local galaxies is tightly related not only to stellar mass, but also to star formation rate (SFR). At low stellar mass, metallicity decreases sharply with increasing SFR, while at high stellar mass, metallicity does not depend on SFR. The residual metallicity dispersion across this Fundamental Metallicity Relation (FMR) is very small, about 0.05dex. High redshift galaxies, up to z~2.5, are found to follow the same FMR defined by local SDSS galaxies, with no indication of evolution. At z>2.5, evolution of about 0.6dex off the FMR is observed, with high-redshift galaxies showing lower metallicities. This result can be combined with our recent discover of metallicity gradients in three high redshift galaxies showing disk dynamics. In these galaxies, the regions with higher SFR also show lower metallicities. Both these evidences can be explained by the effect of smooth infall of gas into previously enriched galaxies, with the star-formation activity triggered by the infalling gas. The mass-metallicity relation has been studied by Erb et al. (2006) at z∼2.2 and by Maiolino et al. (2008) and Mannucci et al. (2009) at z=3-4, finding a strong and monotonic evolution, with metallicity decreasing with redshift at a given mass (see fig.1. The same authors (Erb et al., 2006;Erb, 2008;Mannucci et al., 2009) have also studied the relation between metallicity and gas fraction, i.e., the effective yields, obtaining clear evidence of the importance of infall in high redshift galaxies. If infall is at the origin of the star formation activity, and outflows are produced by ex- Fig. 1. Evolution of the mass-metallicity relation from local to high redshift galaxies from Mannucci et al. (2009) ploding supernovae (SNe), a relation between metallicity and SFR is likely to exist. In other words, SFR is a parameter that should be considered in the scaling relations that include metallicity, such as the mass-metallicity relation. The local Fundamental Metallicity Relation To test the hypothesis of a correlation between SFR and metallicity in the present universe and at high redshift, we have studied several samples of galaxies at different redshifts whose metallicity, M ⋆ , and SFR have been measured. A full description of the data set is given in Mannucci et al. (2010) Local galaxies are well measured by the SDSS project (Abazajian et al., 2009). Among the ∼ 10 6 galaxies with observed spectra, we selected star forming objects with redshift between 0.07 and 0.30, having a signalto-noise ratio (SNR) of Hα of SNR>25 and dust extinction A V < 2.5. Total stellar masses M ⋆ from Kauffmann et al. (2003) were used, scaled to the Chabrier (2003) initial mass function (IMF). SFRs inside the spectroscopic aperture were measured from the Hα emission line flux corrected for dust extinction as estimated from the Balmer decrement. The conversion factor between Hα luminosity and SFR in Kennicutt (1998) was used, corrected to a Chabrier (2003 IMF. Oxygen gas-phase abundances were measured from the emission line ratios as described in Maiolino et al. (2008). An average between the values obtain from [NII]λ6584/Hα and R23=([OII]λ3727+[OIII]λ4958,5007)/Hβ was used. The final galaxy sample contains 141825 galaxies. The grey-shaded area in the left panel of Fig. 2 shows the mass-metallicity relation for our sample of SDSS galaxies. 
Despite the differences in the selection of the sample and in the measure of metallicity, our results are very similar to what has been found by Tremonti et al. (2004). The metallicity dispersion of our sample, ∼0.08 dex, is somewhat smaller to what have been found by these authors, ∼0.10 dex, possibly due to different sample selections and metallicity calibration. The left panel of Fig. 2 also shows, as a function of M ⋆ , the median metallicities of SDSS galaxies having different levels of SFR. It is evident that a systematic segregation in SFR is present in the data. While galaxies with high M ⋆ (log(M ⋆ )>10.9) show no correlation between metallicity and SFR, at low M ⋆ more active galaxies also show lower metallicity. The same systematic dependence of metallicity on SFR can be seen in the right panel of Fig. 2, where metallicity is plotted as a function of SFR for different values of mass. Galaxies with high SFRs show a sharp dependence of metallicity on SFR, while less active galaxies show a less pronounced dependence. The dependence of metallicity on M ⋆ and SFR can be better visualized in a 3D space with these three coordinates, as shown in Figure 3. SDSS galaxies appear to define a tight surface in the space, the Fundamental Metallicity Relation (FMR). The introduction of the FMR results in a significant reduction of residual metallicity scatter with respect to the simple mass-metallicity relation. The dispersion of individual SDSS galaxies around the FMR, is ∼0.06 dex when computed across the full FMR and reduces to ∼0.05 dex i.e, about 12%, in the central part of the relation where most of the galaxies are found. The final scatter is consistent with the intrinsic uncertainties in the measure of metallicity (∼0.03 dex for the calibration, to be added to the uncertainties in the line ratios), on mass (estimated to be 0.09 dex by Tremonti et al. 2004), and on the SFR, which are dominated by the uncertainties on dust extinction. The reduction in scatter with respect to the mass-metallicity relation becomes even more significant when considering that most of the galaxies in the sample cover a small range in SFR, with 64% of the galaxies (±1σ) is contained inside 0.8 dex. The mass-metallicity relation is not an adequate representation of galaxy samples with a larger spread of SFRs, as usually find at intermediate redshifts. Galaxies at all redshifts follow well defined mass-metallicity relations (see, for example, Mannucci et al. 2009, and references therein). For this reason each of these samples, except the one an z∼3.3 that contains 16 objects only, is divided into two equally-numerous samples of low-and high-M ⋆ objects. Median values of M ⋆ , SFR and metallicities are computed for each of these samples. Galaxies up to z∼2.5 follow the FMR defined locally, with no sign of evolution. This is an unexpected result, as simultaneously the mass-metallicity relation is observed to evolve rapidly with redshift (see Fig.1). The solution of this apparent paradox is that distant galaxies licity. Circles without error bars are the median values of metallicity of local SDSS galaxies in bin of M ⋆ and SFR, color-coded with SFR as shown in the colorbar on the right. These galaxies define a tight surface in the 3D space, with dispersion of single galaxies around this surface of ∼0.05 dex. The black dots show a second-order fit to these SDSS data, extrapolated toward higher SFR. Square dots with error bars are the median values of high redshift galaxies, as explained in the text. 
Labels show the corresponding redshifts. The projection in the lower-left panel emphasizes that most of the high-redshift data, except the point at z=3.3, are found on the same surface defined by low-redshift data. The projection in the lower-right panel corresponds to the mass-metallicity relation, as in Fig. 2, showing that the origin of the observed evolution in metallicity up to z=2.5 is due to the progressively increasing SFR. have, on average, larger SFRs, and, therefore, fall in a different part of the same FMR. In the SDSS sample, metallicity changes more with M ⋆ (∼0.5 dex from one extreme to the other at constant SFR, see Fig. 2) than with SFR (∼0.30 dex at constant mass). Therefore mass is the main driver of the level of chemical enrichment of SDSS galaxies. This is related to the fact that galaxies with high SFRs, the objects showing the strongest dependence of metallicity on SFR (see the right panel of fig. 2), are quite rare in the local universe. At high redshifts, mainly active galaxies are selected, and the dependence of metallicity on SFR becomes dominant. Galaxies at z∼3.3 show metallicities lower of about 0.6 dex with respect to both the FMR defined by the SDSS sample and galaxies at 0.5<z<2.5. This is an indication that some evolution of the FMR appears at z>2.5, although its size can be affected several potential biases (see Mannucci et al. 2010 for a full discussion). A larger data set at z>3 is needed to solve this question. What the FMR is telling us The interpretation of these results must take into account several effects. In principle, metallicity is a simple quantity as it is dominated by three processes: star formation, infall, outflow. If the scaling laws of each of these three processes are known, the dependence of metallicity on SFR and M ⋆ can be predicted. In practice, these three processes have a very complex dependence of the properties of the galaxies, and can introduce scaling relations in many different ways. First, it is not known how outflows, due to either SNe or AGNs, depend on the properties of the galaxies. Second, infalls of pristine gas are expected to influence metallicity in two ways: metallicity can be reduced by the direct accretion of metal-poor gas, and can be increased by the star formation activity which is likely to follow accretion. Third, the star formation activity is known to depend on galaxy mass, with heavier galaxies forming a larger fraction of stars at higher redshifts, and this effect produce higher metallicities in more massive galaxies. The dependence of metallicity on SFR can be explained by the dilution effect of the in- falling gas. A simple model can be constructed (see Mannucci et al. 2010) where a variable amount of metal-poor, infalling gas, forming stars according to the Schmidt-Kennicutt law, can explain the dependence of metallicity on SFR. For this scenario to work, the timescales of chemical enrichment must be longer than the dynamical scales of the galaxies, over which the SFR is expected to evolve. In other words, galaxies on the FMR are in a transient phase: after an infall, galaxies first evolve towards higher SFR and lower metallicities. Later, while gas is converted into stars and new metals are produced, either galaxies drop out of the sample because they have faint Hα, or evolve toward higher values of mass and metallicities along the FMR. 
In this scenario, the dependence of metallicity on SFR is due to infall and dominates at high redshifts, where galaxies with massive infalls and large SFRs are found. In contrast, in the local universe such galaxies are rare, most of the galaxies have low level of accretion, and abundances are dominated by the dependence on mass, possibly due to outflow. In many local galaxies, timescales of chemical enrichment can be shorter than the other relevant timescales (e.g., Silk 1993), and galaxies can be in a quasi steady-state situation, in which gas infall, star formation and metal ejection occur simultaneously (Bouche et al., 2009). Assuming this quasi steady-state situation, in which infall and SFR are slowly evolving with respect to the timescale of chemical enrichment, it can be shown ) that our results support a scenario where outflows are inversely proportional to mass and increase with SFR 0.65 . The small scatter of SDSS galaxies around the FMR can be used to constrain the characteristics of gas accretion. For this infall/outflow scenario to work and produce a very small scatter round the FMR, two conditions are simultaneously required: (1) star formation is always associated to the same level of metallicity dilu-tion due to infall of metal-poor gas; (2) there is a relation between the amount of infalling and outflowing gas and the level of star formation. These conditions for the existence of the FMR fits into the smooth accretion models proposed by several groups (Bournaud & Elmegreen, 2009;Dekel et al., 2009), where continuos infall of pristine gas is the main driver of the grow of galaxies. In this case, metal-poor gas is continuously accreted by galaxies and converted in stars, and a long-lasting equilibrium between gas accretion, star formation, and metal ejection is expected to established. Abundance gradients in high-redshift galaxies Recently , we have obtained a direct evidence of the presence of smooth accretion of gas in high redshift galaxies. We selected three Lyman-break galaxies among the AMAZE (Maiolino et al., 2008) and LSD (Mannucci et al., 2009) samples which show a remarkably symmetric velocity field in the [OIII] emission line, which traces the ionized gas kinematics (see Fig. 4). Such kinematics indicates that these are rotationally supported disks (Gnerucci et al., in preparation), with no evidence for more complex merger-induced dynamics. Near-infrared spectroscopic observations of the galaxies were obtained with the integral field spectrometer SINFONI on VLT, and we used the flux ratios between the main rest-frame optical lines to obtain the metallicity map shown in Fig. 4. An unresolved region with lower metallicity is evident in each map, surrounded by a more uniform disk of higher metal content. In one case, CDFa-C9, the lower metallicity region is coincident with the galaxy center, as traced by the continuum peak, while it is offset by ∼ 0.60 ′′ (4.6 kpc) in SS22a-C16 and ∼ 0.45 ′′ (3.4 kpc) in SS22a-M38. On the other hand, in all the galaxies the area of lower metallicity is coincident or closer than 0.25 ′′ (1.9 kpc, half of the PSF FWHM) to the regions of enhanced line emission, tracing the more active star forming regions. The average difference between high and low metallicity re-gions in the three galaxies is 0.55 in units of 12+log(O/H), larger than the ∼ 0.2 − 0.4 dex gradients measured in the Milky Way and other local spirals (van Zee et al., 1998) on the same spatial scales. 
The measured gas phase abundance variations have a significance between 98% and 99.8% . It can be shown ) that variations of ionization parameter across the galaxies cannot explain the observed gradients of line ratios, and that different metallicities are really requested. Current models of chemical enrichment in galaxies (Molla et al., 1997) cannot reproduce our observations at the moment, as they assume radially isotropic gas accretion onto the disk and the instantaneous recycling approximation. Nevertheless, the detected gradients can be explained in the framework of the cold gas accretion scenario (Kereš et al., 2005) recently proposed to explain the properties of gas rich, rotationally supported galaxies observed at high redshift Förster Schreiber et al., 2009). In this scenario, the observed low metallicity regions are created by the local accretion of metal-poor gas in clumpy streams , penetrating deep onto the galaxy following the potential well, and sustaining the observed high star formation rate in the pre-enriched disk. Stream-driven turbulence is then responsible for the fragmentation of the disks into giant clumps, as observed at z ≥ 2 (Genzel et al., 2008;Mannucci et al., 2009), that are the sites of efficient star formation and possibly the progenitors of the central spheroid. This scenario is also in agreement with the dynamical properties of our sample, which appears to be dominated by gas rotation in a disk with no evidence of the dynamical asymmetries typically induced by mergers. The study of the relations between metallicity gas fractions, effective yields, and SFR show that the low-metallicity regions can be well explained by amounts of infalling gas much larger than in the remaining high-metallicity regions. Our observations of low metallicity regions in these three galaxies at z ∼ 3 therefore provide the evidence for the actual presence of accretion of metal-poor gas in massive high- Cresci et al. (2010). Lower metallicity region are surrounded by a more enriched disk. The crosses in each panel mark the position of the continuum peak. z galaxies, capable to sustain high star formation rates without frequent mergers of already evolved and enriched sub-units. This picture was already indirectly suggested by recent observational studies of gas rich disks at z ∼ 1 − 2 (Förster Schreiber et al., 2009;Tacconi et al., 2010), and is in agreement with the FMR describe above.
2010-11-01T08:01:36.000Z
2010-11-01T00:00:00.000
{ "year": 2010, "sha1": "b09c6d9834bf60a335d9e1042649d8a805f50b0c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b09c6d9834bf60a335d9e1042649d8a805f50b0c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248528076
pes2o/s2orc
v3-fos-license
Derivation and Validation of a Score for Predicting Poor Neurocognitive Outcomes in Acute Carbon Monoxide Poisoning Key Points Question Can a novel clinical scoring system predict poor neurocognitive outcomes after acute carbon monoxide poisoning? Findings This prognostic study developed and externally validated a prediction model including 5 risk factors associated with poor neurocognitive outcome at 1 month, creatine kinase level, hyperbaric oxygen therapy, Glasgow Coma Scale score, age, and shock (COGAS score), among patients with carbon monoxide poisoning. COGAS score showed excellent discrimination performance. Meaning These findings suggest that use of a reliable prediction model during the early phase of carbon monoxide poisoning could help identify patients at risk of poor neurocognitive sequelae. eAppendix 1. Definition of the Study Variables Carbon monoxide (CO) exposure duration, as reported by the patient or patients' guardians, was the expected maximum duration of CO exposure, measured from the time of normal state of consciousness to patient rescue. Any state of loss of consciousness was defined as a case of such, regardless of the length of loss of consciousness. Shock was diagnosed when a vasopressor was required to resuscitate the patient and if lactate levels exceeded 2.0 mmol/L. eAppendix 2. Global Deterioration Scale Explanation The Global Deterioration Scale (GDS) is a validated, reliable instrument for describing the clinical progression of dementia. 1 It is also used to determine the prognosis of patients with carbon monoxide (CO) poisoning, 2-4 severe chronic obstructive pulmonary disease, Alzheimer's disease, and vasculopathy-related dementia. 1,[5][6][7] Although the GDS score is not as diverse as a CO battery, it has the advantage of being able to identify neurocognitive functions, such as memory and concentration, as well as activities of daily living, through interviews. Moreover, many neurocognitive function tests may be difficult for patients with sequelae. The Short-Form General Health Survey-36, a commonly used testing tool, has a set of self-reported questions; however, it is limited in evaluating patients with severe neurological impairment as it requires an individual's ability to understand and address the questions. Digit span, trail making, and clock drawing are good evaluation tools but require short-term memory and visuospatial functions. Therefore, the GDS score can be used for all patients with CO poisoning regardless of poisoning severity. The scale consists of seven stages, with higher scores indicating greater severity. Stage Cognitive dysfunction Clinical characteristics 1 No cognitive decline Patients appear clinically normal. No complaints of memory deficits. No evident memory deficit on clinical interview. Very mild cognitive decline Patients complain of memory deficits. Most frequently, patients: (a) forget where they have placed familiar objects (b) forget the name of someone they formerly knew well. No objective evidence of memory deficit on clinical interview. No objective deficits in employment or social situations. Patients display appropriate concern about their symptoms. Mild cognitive decline Earliest clear-cut deficits. Objective evidence of memory deficit was obtained only through an intensive interview conducted by a trained geriatric psychiatrist. Concentration deficit may be evident on clinical testing. 
Patients may demonstrate a reduced ability to: The subtlety of the clinical symptoms may be exacerbated by denial that is often manifested by these patients. Mild-tomoderate anxiety also accompanies the symptoms, typically when the patients are forced to cope with challenging employment and social demands that render them unable to negotiate. Denial is often the dominant defense mechanism. The evident decline in patients' intellectual and cognitive capacities is too overwhelming with a loss of full conscious acceptance and recognition. Flattening of effect and withdrawal from previously challenging situations is observed. Moderately severe cognitive decline Patients can no longer survive without some assistance. During interviews, patients are unable to recall a major relevant aspect of their current lives. Examples include the following: (a) difficulty recalling their address or telephone number, names of close family members, such as grandchildren, or the name of the high school or university from which they graduated (b) some disorientation with time (date, day of the week, season) or location (c) well-educated patients may have difficulty counting backwards from 40 by fours or from 20 by twos. Patients retain the knowledge of many major facts regarding themselves and others. They invariably know their own names and generally know their spouse and children's names. They require no assistance with toileting and eating but may have some difficulty choosing the proper clothing to wear and may occasionally clothe themselves improperly (e.g., put their shoes on the wrong feet). Patients are largely unaware of all recent events and experiences in their lives. They retain some knowledge of their past but is very uncertain. They are generally unaware of their surroundings, the year, or the season and may have difficulty counting backward, and sometimes forward, from 10. Patients require substantial assistance with activities of daily living. These symptoms are quite variable and include the following: (a) delusional behavior (e.g., patients may accuse their spouse of being an impostor, may talk to imaginary figures in the environment, or their own reflection in the mirror) (b) obsessive symptoms (e.g., continual repetition of simple cleaning activities) (c) anxiety symptoms, agitation, and previously nonexistent violent behavior (d) cognitive abulia (i.e., loss of willpower because they cannot dwell on a thought long enough to determine a purposeful course of action). 7 Very severe cognitive decline All verbal abilities are lost. Frequently, there is no speech ability at all; only grunting remains. Patients have urinary incontinence and require assistance with toileting and eating. They lose psychomotor skills (e.g., the ability to walk). The brain may find it difficult to tell the body what to do. Generalized cortical and focal neurologic signs and symptoms are frequently present. Variables included in this model were as follows: older age (>50 years), low GCS (≤12), shock, no use of hyperbaric oxygen therapy, creatine kinase (>320 U/L), hypertension, and serum bicarbonate (≤19.6 mmol/L). ROC = receiver operating characteristic; AUC = area under the ROC curve; GDS = Global Deterioration Scale
2022-05-06T06:23:45.882Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "e9e011445c0db72ca7dc9f047d173086bf5ad914", "oa_license": "CCBY", "oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2791872/kim_2022_oi_220317_1651162468.08277.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c5586d273f2ca476b69e73782764a6b9807d0a9f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252276581
pes2o/s2orc
v3-fos-license
Towards a Climate Neutral Housing Strategy for Egypt – Performance based Living-Action Levels and Responsibilities Egypt’s population has recently exceeded 100 Million inhabitants; a figure that is anticipated to double within the coming 20-30 years. At a current 2% annual growth rate, changing socio economics needs, and anticipated demographic change; more than ever, has housing provision become a challenging proposition. Residential construction in Egypt amounts to 21% of the total construction output by value in 2015 in Egypt (9.548 Billion LE) which is almost equivalent to the construction output by value of powerplants (EGP 9.67Billion LE). In addition, the construction and demolition waste accounts for 44% of the total solid waste of 94 Million tons produced in Egypt. Given the scarcity of resources, the resultant CO2 emissions and pollution, in addition to Egypt’s commitment to the World Sustainable Development Goals (SDGs); this paper, adopts a qualitative approach - literature review, to identify Egypt’s major housing challenges and potential solutions in light of SDG11. In this context, the concept of climate neutral design is investigated, different assessment tools are critically reviewed; and major components for strategy development are concluded in terms of levels and responsibilities. These are then used as a reference to reflect on Egypt housing strategies issued in 2016. This paper is exploratory in nature and is anticipated to furnish the ground for the following phase of the research to devise an action driven climate neutral housing strategy for Egypt. 1. Housing in Egypt Egypt historically, has always been solely dependent on agriculture along the river Nile, the main source for water in Egypt [1]. Thus, the population has been distributed along the Nile banks around 1,500 km in length [2]. Nevertheless, after the 1952 revolution, the country embarked on aggressive heavy industrial development in major urban cities, and particularly in Cairo. This has arguably resulted in surge of rural-urban migration for better job opportunities and improved quality of living [3]. Thus, experiencing increasing need for housing in urban areas. While the Government aimed to provide social housing as means for social justice at that time, it has not been able to meet the increasing demand since [4]. This has further exacerbated in the late 60s and in the 70s when war economy was assumed [3]. Since then, the informal housing has expanded throughout the years largely due it's affordability in comparison to Government provided housing. In addition, inhabitants in Government provided housing embarked on informal adaptation to housing units to increase unit area and adapt it to their needs [5]; [6]. With a current population of 100 Mill inhabitants and around 2% annual growth rate [2]; around 900,000 units are arguably needed annually. This is further exacerbated by additional estimated accumulated gap of 2-4 million units [7]. Furthermore, the increase in housing demand exacerbated the challenge for the Government to accommodate not only in terms of quantity, but also quality, and affordability. In addition, the increasing informal housing that does not abide by any codes or regulations has also resulted in housing units that do not meet the changing needs of the population (social, economic, and demographics). Consequently, overburdening existing infrastructure and consequently negatively affecting the quality of living [8]. 
Thus, rendering these as socially, economically, and environmentally unsustainable. Further to the environmental and economic aspect, the construction and demolition waste in Egypt represent around 44% of the total solid waste in Egypt exceeding both agricultural and industrial waste [9]. Given that construction output for public, private, and government projects by value in Egypt for residential sector and Powerplants is by far the largest in comparison to other sectors [10]; it may be argued that residential sector is contributing to the majority of construction and demolition waste in Egypt. Thus, making the construction sector in general and the housing sector in particular inefficient and consequently unsustainable. Therefore, there is a pressing need for sustainable solutions to the housing sector in Egypt. 2. Sustainable development indicators According to Brundtland report [11], sustainable development arguably implies limitations governed by the present state of technology and social organization on environmental resources and the ability of the biosphere to absorb the effects of human activities. In this context, technology and social organization can arguably be further managed and improved to allow for economic growth. Building on Daly's ends-means spectrum which illustrates the relation between human economy and earth for a steady economic state [12]; Meadows [13] suggested a framework for sustainable development indicators (Figure 1). In this context, the three main measures of sustainable development have been suggested to include: a) sufficiency (through which ultimate ends are realized), b) the efficiency (through which ultimate means are translated into ultimate ends), and c) the sustainability (i.e use of ultimate means). The ultimate means are those means out of which all life and economic transactions are built and sustained. Thus, considered the natural capital such as the sun's energy, the biogeochemical cycles, the ecosystems and the genetic information they bear, and the human being as an organism. These ultimate means are arguably not created, rather are considered the heritage that humans are born into, and out of them. These are then transformed using technology as intermediate means. These are e.g. tools, machines, factories, skilled labour, processed material and energy (i.e built and human capital and raw material). These are also referred to as inputs to the economy. While intermediate means are considered necessary but are arguably not sufficient to accomplish all higher purposes. The intermediate ends are considered e.g. the goals that governments promise, and subsequently economies are expected to deliver in the form of e.g. consumer goods, health, wealth, knowledge, leisure, communication, transportation), also referred to as 'outputs'. It was further noted that intermediate ends should not be considered as ends in themselves, rather are instruments to achieve something higher. Hence, the translation of intermediate ends to ultimate ends depends on e.g. an effective ethic, religion, or philosophy. At the top of the triangle of sustainability; the ultimate end has been located, thus; is desired for itself. Therefore, should not be considered as the means to the achievement of any other end. 
The definition or measurement of the ultimate end is faced with challenges according to Huovilla [12]: "Our perception of the ultimate is always cloudy, but necessary nonetheless, for without a perception of the ultimate it would be impossible to order intermediate ends and to speak of priorities." 3. Climate neutral design In line with Meadow's framework for sustainable development indicators [13], a climate neutral building falls within the intermediate ends. A carbon neutral building is defined as one with significantly reduced energy consumption combined with the increased use of low carbon energy sources to meet the remaining demand [14]. In this context, buildings would arguably only need very little energy where the remaining energy needs will mainly be met by renewable energy sources [15]. To better manage carbon emissions, existing residential buildings are suggested to be subdivided into classes (e.g single-and double-family houses (SDFH), small and medium-sized multi-family houses (SMH/MMH), and large multi-family houses (LMH) [15]. It is further suggested that the building types be subdivided into age groups whose energetic characteristics in their originally built state differ significantly. Whether the building exists or is in the development phase, and according to the United Nations economic commission for Europe, on how to make cities less energy and carbon intensive and more resilient to climatic challenges [16], the full life-cycle assessment including construction materials and end-of-service disposal or re-use should be considered; where, a climate neutral urban waste management should be implemented. Furthermore, it is advised that buildings to be maintained by welldeveloped maintenance industry. In addition, from a planning point of view, planning and development control should be in place to prevent sprawl; and ensure socio-spatial integration avoiding social segregation and social imbalances. This, however, require an evaluation/assessment mechanism to ensure cities and human settlements are inclusive, safe, resilient, and sustainable (SDG11). 4. Assessment versus Certification criteria To ensure energy efficiency in buildings and consequently a climate neutral built environment in general, several certification tools were developed and implemented. The very first certification tool developed was the British BREEAM in 1990, LEED in the US 1998, and the newest was DNGB in Germany 2009. The comparison carried out between the different certification systems [17] demonstrated different weighting criteria for the different components of sustainability, namely environmental, economic, and social quality. Only the DGNB system gave equal weights to the different sustainability components, whereas WELL has given the social quality the most emphasis. This further confirms that there is no consensus in terms of what is a sustainable building/built environment; and consequently, raises concerns in terms of the purpose of building certification schemes, and which scheme should be applied, and why? Further questions arise in terms of the possibility of achieving different alternatives for the same goal, what is/are the determining factor(s), and who makes the decision? In this respect, a building certified under a particular certification scheme, may fail to get certified under another scheme [18]. To help compare between the different certification schemes; Guldager et al. [19] devised 13 aspects to guide the comparison of a selection of certification schemes. 
This has been further investigated to conclude a common definition for the different aspects of the certification schemes for ease cross-referencing the different schemes [20]. It was further concluded that the different certification schemes may not only differ in terms of their weighting criteria but may also differ in terms of the different aspects, and even principles [20]. Hence, raising the concern of the effectiveness of the different schemes; and thus, the call for investigating a more holistic approach that is flexible enough to allow responding and be tailored not only to the needs of the different people, communities, and countries; but also taking into account the changing national and global challenges. The following section attempts to explore such a holistic approach in residential buildings. 5. Performance-based living The inability of housing to respond to the various inhabitants changing needs, further arguably affects the mental, social and physical health and wellbeing of the people who live in the homes [21]. Thus, jeopardizes people's within their community. In this respect, there is a need to not consider housing as merely the provision of stock of houses, rather it should be considered as means to enable a healthy and productive community. In this context, Gibson [22] suggested the concept of 'performance-based design (PBD), which calls for thinking and working considering ends rather than means [23]. This was in line with Vitruvius who referred to it as the art of building. Where the building should consider durability, convenience, and aesthetics [24]. This has been referred to in contemporary terms to identify user needs (UN) and performance requirements (PR) which are then to be translated into attributes for the different building sets and parts. These should have measurable impacts on health and wellbeing; and further relate to the building itself in terms of the fabric/envelope, the internal layout of the unit, systems, interface with the neighborhood, and strategy overall. The neighborhood should also enjoy certain characteristics to allow social interaction, exercise, access to nature, etc. so people would enjoy living in their community; and thus, would have a positive impact on their personal health and wellbeing [21]. The systemization of qualitative user expectations has arguably began in the US in 1970s as part of the framework of the Operation Breakthrough project [25] defining a list performance attributes. This list was later adapted and extended by different scholars to include: -Functionality -which refer to spatial characteristics and accessibility, serviceability, operation and maintenance, and structural serviceability. -Safety -refers to structural safety, fire safety, accident safety, body safety, and security. -Health and well-being -this include indoor air quality, moisture and mould safety, indoor climate, acoustics, visual comfort, hygiene, water quality. -Sustainability -refers to energy efficiency, durability, environmental impact The stakeholders identified as most relevant to performance-based design PBD include users/inhabitants, guests, services personnel, the public; in addition to the regulator who is concerned with addressing the true needs and the building do not directly and/or indirectly affect the environment throughout the life span of the building. Furthermore, the design team is also an important stakeholder who is responsible for ensuring all pertinent PRs including the regulatory framework in the different areas are met. 
This further requires a coordinated design process and teamwork. The manufacturers are also important stakeholders of the building material who are responsible for using well established processes and quality control, e.g., in Europe they use the CE marking referring to the fitness for use [26]. In this context, PBD is argued to: -encourage better apprehension and communication of client/user requirements -allow considerable flexibility with regards to design proposals -support innovation to cater for cost optimized solutions User requirements on the other hand are suggested to be grouped in a coherent fashion to define the performance categories, including: -functional performance -technical performance -economic performance -environmental performance -social performance -process performance Factors varying from the quality of the internal air, how much space and light there is, in addition to the amount of storage space available, can arguably have measurable implications on health and wellbeing. Notwithstanding these issues, even the design of the neighbourhood is critical as it allows opportunities for social interaction, exercise, access to nature, local amenities and schools. Thus, define the extent to which residents will enjoy living in their community and further impact their own personal health and wellbeing [21]. The following ten-steps are considered as the backbone for establishing a PBD process for any building occupancy and in every performance area: -Step-1: define potential User-Activity groups and their UNs. properties. In order to ensure that the 10 steps above are representing a real-life situation of people's needs, and consequently a comprehensive response to these needs, a Sinus-Milieus Models i.e. target group segmentation may be needed [27]. The model should be continuously adapted to socio-cultural changes in the society. These illustrate the everyday reality of societies, people's working and private lives, the changing family structures, the digitalisation of day-to-day living, and the growing polarisation of wealth. This is in line with Schäfers [28], who considered dwelling as part of most established cultural matters. In this respect, housing should not refer to the location where the basic needs are met, rather the spatial manifestation of individual needs, self-esteem, accomplishment, as well as the representation of cultural and civilization standards. Thus, promoting the identification and classification of living requirements (Wohnwünsche). 6. Performance-based building and design evaluation Performance-based building is an approach concerned with building related processes, products and services that is predominantly targeting the required outcomes, the ends, and not necessarily how these outcomes are attained, i.e. the means. This contrasts with the traditional prescriptive approach, that tends to focus on specifying the method or solution for achieving the required outcomes [29]. It calls for better apprehension and communication of client/user requirements, thereby minimising opportunities for IOP Publishing doi:10.1088/1755-1315/1078/1/012075 6 disputes; and thus, ensure satisfied customers; allows considerable flexibility for building practitioners in terms of design solutions; encourages innovation, provides the opportunity for cost-optimised solutions; and further, supports international trade. UNs and PRs convey the demand side of the building chain. Where the supply side provides design solutions as well as the final constructed facility. 
Nevertheless, design tools are needed to provide solutions. Furthermore, to ensure that supply meets demand, reliable assessment methods should be employed. It needs to be noted, however, that both the design tools and the assessment methods should be able to evaluate/simulate the behavior and response of the building to the generalized loads and anticipate the performance indicators specified in the performance criteria. Nevertheless, the tools required during design and for the final assessment of the integrated solution should not necessarily be the same. During the design phase, each professional is expected to seek answers for the set of given PRs under their responsibility. The process starts by identifying various conceptual solutions which should be first verified 'superficially' against other requirements. Those that clearly conflict with requirements are discarded. The architect, then, combines all remaining solutions into the most favourable combination or into several combinations of equivalent solutions. Each of the various members of the design team are then expected to elaborate the details in their area of specialization. It needs to be noted that every single decision made by any of the professionals may affect the performance in other areas that are not directly under his/her responsibility. Therefore, the final chosen combination should be re-assessed by the different design team members to ensure that it still responds to the entire set of requirements [29]. While assessment methods and tools employed by the different stakeholders may not necessarily be identical; those used by the authority having jurisdiction should, however, be specified in the regulatory documents. In addition, those used by the entrepreneur are suggested to be defined in the performancebased program and in the contracts. While there is little agreement in terms of which building performance evaluation criteria and methodologies should be best applied in the different situations; there is consensus, however, that performance-based approach ensures innovation, more open competition, allows transparent procurement, and ensures cost effective building [30]. In this context, there have been calls for evaluating the contribution of single buildings towards sustainable development. In this respect, the functional design, technical, economic, environmental, social, and process aspects should be considered simultaneously ( Figure 2). Nevertheless, to achieve a sustainable development, the performance-based housing design approach should not be implemented by individual projects; rather a performance-based national housing strategy may be needed to guide the national housing development and build consensus among the different stakeholders. Functional Technical Economic Environme ntal Social Process 7. Strategy development A strategy may be defined as "a unified, comprehensive, and integrated plan that relates to the strategic advantages ……. to the challenges of the environment. It is designed to ensure that the basic objectives …… are achieved through proper execution ……." [31]. Strategy formulation is arguably contextually based as it may be understood as a flow of events, values, and actions running through a context. Part of the context is the location of strategy in time. In this context, yesterday's strategies eventually provide some of the pathways to today's strategies; and today's strategies would bear a concept for the future. 
Eight different strategies have been identified [32], namely planned, entrepreneurial, ideological, umbrella, process, unconnected, consensus and imposed. Rao (2010) identified different forms of strategies, namely deliberate, emergent, and realised. A planned strategy takes the form of formal plans, with clear intentions identified by the leadership and supported by formal controls to ensure seamless implementation in controllable or predictable environment. It is further argued that a strategy can best be seen as the product of the political, cognitive, and cultural fabric of an 'organisation'. In order to agree on the strategy, comprehensive answers to the following questions should be determined: a) where are we now? B) what do we think will happen in the future? C) where do we want to go? This should be followed by devising the actions needed to achieve the strategies; and thus, conclude an action plan followed by a budgeting plan and measures of success. 8. Egypt's Housing Strategy Housing challenges in Egypt emerged almost 70 years ago, several attempts to overcome these challenges throughout the years seemingly failed; thus, challenges have compounded. 1948 witnessed the first public housing project in Egypt. Housing initiatives in Egypt took several forms ranging from speculative finished apartment blocks, unfinished apartment blocks, to self-built housing in existing as well as in new cities. Nevertheless, failures were attributed to units being not affordable, small for an average Egyptian family size, requiring long commuting, in addition to failure to monitor selfbuilt/incremental housing. All of which contributed to the informal adaptation of formal housing as well as the expansion of informal housing. In this context, Nadim [33], [8], [34], [35] explored the housing challenges and opportunities for sustainable smart solutions. The current housing strategy was issued in 2020 [36]. Four major challenges identified in the strategy include a) existing urban development areas, b) existing housing stock and vacant units, c) low-income housing, d) dimensions of sustainable development. The strategy is arguably consistent with the sustainable development goals (SDG), The New Urban Agenda [37], the Arab Strategy for housing and Urban Development [38], and Egypt sustainable development strategy (Egypt vision 2030) [10]. This strategy is intended to inform the housing sector for the next 20 years. The strategy includes providing lands suitable for construction with the needed basic services, studying the social and economic aspects of the targeted populations by the housing programs, developing an integrated strategy for new urban communities, in addition to developing existing deteriorated low-income areas. Household income has been and is still considered the common factor for categorising housing to include low, middle, uppermiddle. The strategy acknowledged the lack of information with regards to housing needs due to the lack of accurate data. In terms of sustainability, the concept of 'green building' in the strategy is considered for new developments, with no reference being made to the existing stock which exceeds 40 million units. Thus, calling for revised building codes and established legislative framework to implement green and sustainable building practices. In addition to laws with binding standards to ensure sustainable and environmentally friendly buildings. 
Furthermore, to apply incentive schemes to motivate the private sector to invest in green and sustainable buildings. In terms of affordability, the strategy suggests 'eased standards' for affordable housing with alleviated procedures to support self-built housing, promoting the concept of mixed-use development. In terms of innovation, the strategy calls for energy efficient housing, employing cheaper building materials, and adopting more effective construction technology, (b) allowing diversity in social housing projects, achieving more dense plans for large IOP Publishing doi:10.1088/1755-1315/1078/1/012075 8 residential blocks to ensure social involvement, and maximizing access to public transport, and (c) stimulating innovation to achieve affordable housing. In addition, the strategy calls for housing provisions that takes into account the specificity of the different Egyptian societies. 9. Discussion and Conclusion This paper attempts to pave the road for a climate neutral housing strategy for Egypt. This is of particular importance due to the increasing housing demand, the inability to respond to the changing socioeconomic needs, and the expansion of informal housing to fill the demand-supply gap in terms of quantity but not necessarily quality. The paper further argued that this has resulted in the increasing construction and demolition waste. Considering the above challenges, the paper investigated the different definitions for climate neutral design where socio-spatial integration should be maintained, and life-cycle assessment should be assumed to reduce waste by considering end-of-service disposal and/or re-use. This, however, require a solid assessment mechanism to achieve inclusive cities which are safe, resilient and sustainable according to SDG11. The paper further questioned the reliability of the different existing assessment and certification systems, particularly as each system has a different approach to sustainability. While the paper acknowledges the need for different alternatives for the same goal, this however, should be based on the needs of the people and not be determined by a standard assessment/certification scheme. In this respect, the paper investigated the concept of performance-based living and performance-based buildings to translate the needs of the users which are mostly qualitative into measurable technical solutions. Thus, the paper argues that the different stakeholders need for any design should be first identified, followed by a translation of these into functional, technical, economic, environmental, social and process performance to conclude the optimum solution for achieving the different needs of the different stakeholders. From a sustainability perspective in terms of ensuing performance-based buildings and cities in general, the papers highlighted the different types of strategies; arguing that a planned strategy would be most suitable for the complex housing context in Egypt. The paper finally explored the current housing strategy of Egypt issued in 2020. The strategy is understandably mainly concerned with the provision of affordable housing in terms of provision of adequate financing schemes. Nevertheless, while acknowledging the SDG goals and the provision of green concepts to the housing sector, no clear definition of what 'green' means, in addition these will be dependent on issuing a binding law and incentives to encourage the implementation thereof. 
Furthermore, the strategy promotes mixed-use and self-built housing schemes, and did not refer to the means to avoid previous failures in similar past schemes [8], [33]. Sustainability and innovative solutions are generally very broad which may bear different interpretations, which would make it real difficult to assess the efficiency of the proposed solutions. It is therefore expected that the concept of performance-based housing, may help achieve a holistic approach for sustainable climate neutral housing in Egypt by defining the ultimate needs and attempt to achieve them starting from the ultimate means. This however and overcome require the careful identification of the different stakeholders to help achieve the ultimate ends and avoid failures of previous initiatives.
2022-09-15T20:02:38.794Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "c3f07b60abdb224ec79a91259188b7ea786f49c3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/1078/1/012075", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c3f07b60abdb224ec79a91259188b7ea786f49c3", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Physics" ] }
256938588
pes2o/s2orc
v3-fos-license
The relation between gallstone disease and cardiovascular disease Gallstone disease (GD) is a common digestive disorder that shares many risk factors with cardiovascular disease (CVD). CVD is an important public health issue that encompasses a large percentage of overall mortality. Several recent studies have suggested an association between GD and CVD, while others have not. In this report, we present a meta-analysis of cohort studies to assess the association between GD and CVD. We included eight studies published from 1980 to 2017, including nearly one million participants. The pooled relative risk (RR, 95% confidence interval [CI]) from the random-effects model associates with GD is 1.23 (95% CI: 1.17–1.30) for fatal and nonfatal CVD events. The pooled RR from the random-effects model of CVD events in female patients with GD is 1.24 (95% CI: 1.16–1.32). In male GD patients, the pooled RR from the random-effects model for CVD is 1.18 (95% CI: 1.06–1.31). Our meta-analysis demonstrates a substantially increased risk of fatal and nonfatal CVD events among patients with a medical history of GD. We suggest that interested investigators should further pursue the subject. In addition, both male and female patients with GD have a risk of CVD, and women have a higher risk than men. publications, we included the article with the longest follow-up years or the largest number of incident cases. We qualified articles for further examination by performing an initial screen of identified titles and abstracts, followed by full-text review. Data extraction. The following information was extracted from the included studies: study name, authors, publication year, region, study population, study design, age range, percentage female patients, years of follow-up, sample size, outcomes, data collection, assessment of GD, adjusted relative risk (RR, 95% confidence interval [CI]) and confounder adjustment. The primary clinical outcome of the study was a combined endpoint including fatal and nonfatal CVD events. If the information was unavailable from the report, we attempted to collect relevant data by corresponding with the authors. We utilized the Newcastle -Ottawa Quality Assessment Scale (NOS) 17 to evaluate the quality of included studies with consideration of the following aspects: selection, comparability and exposure. Data synthesis and analysis. The fully adjusted RR was used to estimate the association between GD and CVD. Forest plots were created to visually assess the RRs and corresponding 95% CIs across studies. In the forest plots, each study as well as its summary effect was depicted as a point estimate bounded by a confidence interval. This representation showed whether the effects for all studies were consistent or whether they varied substantially from one study to the next. RR > 1 and 95% CI excluding 0 meant a positive correlation 18 . Heterogeneity across studies was assessed by the Cochrane Q statistic (significance level of p < 0.10) and the I 2 statistic (ranges from 0-100%, with lower values representing less heterogeneity) 19 . The RRs were pooled using random-effects models 20 . Pre-specified subgroup analyses were performed to examine the impacts of various study characteristics, including region, years of follow-up, sample size, rate of CVD events and the degree of adjustment for the most important confounders. A sensitivity analysis was conducted to assess the influence of each individual study on the summary risk estimate using the trim and fill method 21 . 
Remaining studies were reanalysed following the omission of one study at a time. Finally, the potential publication bias was examined by visual inspection of the funnel plot and the result of Egger's test (p < 0.10) 22 . A roughly symmetrical funnel plot suggested no publication bias 23 . All analyses were performed using STATA version 14.1 (Stata Corp, College Station, Texas). A p-value < 0.05 was considered statistically significant, except where otherwise specified 24 . Table 1. Flow chart of the meta-analysis of the relation between gallstones and cardiovascular disease. Results Literature search. A total of 566 articles were retrieved in the initial search. Of these, 20 duplicate articles were excluded. After a first round of screening based on titles and abstracts, 12 articles remained for further review. After comprehensive full-text examination, four articles were excluded as they were reviews. Ultimately, eight articles 16,[25][26][27][28][29][30][31] were eligible for analysis (Table 1). Study characteristics. There were 11 retrospective cohort studies among the eight articles: one article 16 included three cohort studies, and another article 25 included two cohort studies. The characteristics of the 11 studies among the eight articles are displayed in Table 2. Five studies were questionnaire-based, and six studies were reviews of hospital records. Five studies specifically reported results on CHD; two studies reported CVD mortality; one study reported IHD; one study reported stroke; and two studies reported multiple outcomes. The assessment of GD varied across studies: one study used definite hospital diagnosis; two studies relied on ICD GD and risk of CVD. The majority of studies reported a positive association, but the RRs reported by three articles were not statistically significant 25,26,29 . Patients with GD had a 23% higher risk of CVD than the patients in Figure 1. The squares and horizontal lines correspond to the study-specific RR and 95% CIs. The area of the squares reflects the study-specific weight. Weights are from random effects analysis. The diamond represents the pooled RR and 95% CI. Figure 2. The squares and horizontal lines correspond to the study-specific RR and 95% CIs. The area of the squares reflects the study-specific weight. Weights are from random effects analysis. The diamond represents the pooled RR and 95% CI. the control groups [95% CI = 1.17-1.30, Fig. 1]. We detected substantial heterogeneity among studies (I 2 = 74.2%; p < 0.000). Region Subgroup and sensitivity analyses. We conducted subgroup analyses by length of follow-up, sample size, region, rate of CVD events, and the degree of adjustment for the most important confounders ( 39) showed a marked decrease in heterogeneity. We observed a non-significant association between GD and fatal CVD events, but this result was not reliable due to a lack of data (only three studies reported the fatal CVD events). We therefore speculated that heterogeneity might result from years of follow-up, number of participants and the degree of adjustment for the most important confounders. There were five articles with eight studies reporting the relative risk for males and/or females. One study reported a RR < 1.00, but this estimate was not statistically significant. Pooled RR from the random-effects model for women was 1.24 (95% CI: 1.16-1.32, I 2 = 78.5%, Fig. 2). The pooled RR from the random-effects model for men was 1.18 (95% CI: 1.06-1.31, I 2 = 90.7%, Fig. 2). 
Both sexes with GD had a risk of CVD, but the risk for women was higher than that of men. A sensitivity analysis of omitting one study at a time showed no substantial change in the results. The trim and fill method showed no trimming, and the data were unchanged (Fig. 3). Cholecystectomy and risk of CVD. Wirth et al. 31 and Ruhl et al. 29 reported that cholecystectomy increased the risk of CVD, with a rate of surgery of 66.2% and 74.6% among GD patients. The RRs were 1.32 (95% CI: 1.05-1.65) and 1.3 (95% CI: 1.1-1.6), respectively. Olaiya et al. 28 and Zheng et al. 23 suggested a trend towards no differences among groups, but there were insufficient data to perform a statistical analysis. Publication bias. There was no publication bias according to the visual inspection of the funnel plot (Fig. 4) and the result of Egger's test (p = 0.467). Discussion In this meta-analysis comprising approximately one million participants, we demonstrate that a history of GD gives a 1.23-fold increased risk of CVD. We also demonstrate that women may have a higher risk of CVD than men. In addition, patients undergoing cholecystectomy may have a higher risk of CVD than GD patients without surgical treatment, but the data are insufficient to draw a statistically significant conclusion. Most of the studies attribute both GD and CVD to common risk factors. However, the RRs collected from included studies were all adjusted for these common risk factors, such as age, obesity, BMI, diabetes, hypertension, unhealthy diet and physical inactivity. All but two articles 16,27 show a decline in RR after adjustment, but the results still were significant, these two articles suggest that hypertension, obesity and diabetes mellitus are protective factors. Two studies 28,30 suggest that younger patients are at higher risk than older patients, but that the elderly in general tend to have more risk factors. Taken together, these results suggest aetiologies apart from the known common risk factors. Cholesterol accumulation is a major feature of both GD and atherosclerosis. The association between GD and CVD may due to a shared metabolic pathway involving cholesterol and other pathophysiological features. Low HDL level is known to increase risk of CVD morbidity and mortality 32 and has been shown to play a role in the development of GD 33 . One study suggests that insulin-like growth factor one (IGF-1) is involved in gallbladder emptying and may have an anti-atherosclerotic effect, which suggests that low plasma levels of IGF-1 may result in both GD and CHD 34 . Oxidative stress also plays an important role in the development of GD 35 and has been implicated in the pathogenesis of CVD as well 36 . Many studies indicate that the gut microbiota influences host health. A recent study suggests that altered composition of gut microbiota increase the risk of CVD by derived signalling molecules 37 , and GD is related to microbiota dysbiosis in the gut and biliary tract 38 . Mounting evidence suggests that non-alcoholic fatty liver disease (NAFLD) is a risk factor for IHD 39 . Additionally, a recent study shows an association between GD and NAFLD 40 , and preliminary evidence suggests that GD is associated with more severe liver damage in NAFLD patients 41,42 . Although the mechanisms have not been fully elucidated, these studies suggest new avenues for prevention and treatment. Traditionally, CVD has been thought of as a male disease. According to our study, however, women with GD may have a higher risk of CVD than men. 
The explanation for this phenomenon is unknown, but we speculate that it may be related to the following factors. Low HDL levels contribute to the development of GD 33 and CVD, peak total cholesterol levels occur later in men than in women, and HDL levels decrease in postmenopausal women. Diabetes increases the risk of GD 43 and death from CHD 44 , and the incidence of diabetes in women is higher than in men. Elderly women with CHD are more likely to suffer from metabolic syndrome 44 . Low socioeconomic status increases the risk of CVD 45 and GD 46 . There are two distinct points of view regarding whether cholecystectomy increases the risk of CVD in GD patients. Wirth et al. 31 and Ruhl et al. 29 suggest that cholecystectomy increases the risk, while Olaiya et al. 28 and Zheng et al. [23] find no significant difference. We agree with the former viewpoint, though there are not enough data to support this conclusion. Our reasons are as follows: cholecystectomized mice have elevated serum levels of very low-density lipoprotein 47 ; cholecystectomy may impact lipid and glucose metabolism 48,49 ; gallbladder-related hormones have a beneficial effect on metabolic syndrome 50 ; and cholecystectomy changes bile flow to the intestine and therefore alters the microbiota between bile acids and the intestine 51 . More studies are needed to establish a connection more firmly. Several limitations of this meta-analysis should be acknowledged. First, we find substantial heterogeneity across studies, possibly arising from years of follow-up, number of participants and the degree of adjustment for the most important confounders. Second, the meta-analysis is restricted to English-language publications, and the possibility of unpublished reports is not yet identified. Third, although the assessment of GD varies across these cohort studies, most studies include evidence of a cholecystectomy or a definite hospital diagnosis. Therefore, we do not believe that differences in assessments will reverse the results. Fourth, the varying degree of confounder adjustments across the individual studies hampers a systematic assessment of the impact of known risk factors on the outcome of interest. Finally, the observational retrospective design does not allow for establishing causality. The strengths of our study include the following: we performed a comprehensive systematic search for eligible studies; literature eligibility was assessed by two investigators independently; we included sufficient numbers of participants with ample follow-up time; no significant publication bias was found; and the sensitivity analysis showed no substantial change in the results. Conclusions Our meta-analysis demonstrates a substantially increased risk of CVD among patients with a medical history of GD. We suggest that interested investigators should further pursue the subject. We show that the women may have a higher risk of CVD than men and that cholecystectomy may increase the risk of CVD. Further research is warranted.
2023-02-17T14:23:48.473Z
2017-11-08T00:00:00.000
{ "year": 2017, "sha1": "77419193efa7a916e4be1281da8215e535e51137", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-15430-5.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "77419193efa7a916e4be1281da8215e535e51137", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
70677157
pes2o/s2orc
v3-fos-license
Feeding and Fluids in the Premature and Sick Newborn in the Low-Middle Income Countries Start feedings as early as it is safe to do so. This is especially important because most nurseries in low-middle income countries (LMICs) do not have access to total parenteral nutrition (TPN). Although controversial and not consistently practiced in high-income countries (Klingenberg, et al., 2011) infants in LMICs who are stable should begin feedings on day 1 or 2 at the latest. In very low birthweight infants, who are too ill to feed on day 1, it is helpful to give intravenous (IV) fluids on day 1 and then begin an intravenous to enteral titration on day 2. Generally feeds should be advanced as rapidly as tolerated, especially in locations without TPN. Once on full feeds and off intravenous fluids, most infants will require about 160-200 ml/kg/day (and occasionally more) to meet their caloric needs and ultimate weight gain goals of about 15g/kg/day (Gomella, et al., 2004). As stressed, later in the chapter, the first feedings ideally should be colostrum. The exact volume needed for adequate weight gain after the initial expected drop in weight in the first 7-14 days of life (up to 21 days in very small infants) will depend on the caloric content of the mother’s breastmilk or substitute feedings. Mothers’ breastmilk can vary in caloric content from about 14 to 35 calories per ounce depending on the fat content of her breastmilk (Meier et al., 2002). For detailed information on how to determine the caloric content of breastmilk see the Textbook of Global Child Health (AAP) (Slusher T et al, 2012) . Overview of the feeding principles in premature and sick infants There are many appropriate feeding regimes in neonatal nurseries around the world. (Adamkin, 2005;Chan, 2001;Gomella,2004;Eyal, & Zenk, 2004;Klingenberg, Embleton, Jacobs, O'Connell, & Kuschel, 2011;McCormick et al, 2010;WHO, 2007) Exact regimes are not as important as following basic principles of feeding these fragile infants. These principles include the following components below. Start feedings as early as it is safe to do so. This is especially important because most nurseries in low-middle income countries (LMICs) do not have access to total parenteral nutrition (TPN). Although controversial and not consistently practiced in high-income countries (Klingenberg, et al., 2011) infants in LMICs who are stable should begin feedings on day 1 or 2 at the latest. In very low birthweight infants, who are too ill to feed on day 1, it is helpful to give intravenous (IV) fluids on day 1 and then begin an intravenous to enteral titration on day 2. Generally feeds should be advanced as rapidly as tolerated, especially in locations without TPN. Once on full feeds and off intravenous fluids, most infants will require about 160-200 ml/kg/day (and occasionally more) to meet their caloric needs and ultimate weight gain goals of about 15g/kg/day (Gomella, et al., 2004). As stressed, later in the chapter, the first feedings ideally should be colostrum. The exact volume needed for adequate weight gain after the initial expected drop in weight in the first 7-14 days of life (up to 21 days in very small infants) will depend on the caloric content of the mother's breastmilk or substitute feedings. Mothers' breastmilk can vary in caloric content from about 14 to 35 calories per ounce depending on the fat content of her breastmilk (Meier et al., 2002). 
For detailed information on how to determine the caloric content of breastmilk see the Textbook of Global Child Health (AAP) (Slusher T et al, 2012) . Preterm infants less than 1200 grams require gavage feedings every 2 hours; ≥ 1200 g-1500g generally require gavage feedings every 2-3 hours; infants >1500-2000 grams can be given a combination of gavage and oral feedings (cup and spoon or dropper) every 3 hours. Infants www.intechopen.com greater than 2000 grams who are neurologically intact can generally be fed at the breast or via cup and spoon if unable to feed at the breast due to either infant or maternal problems (guidelines adapted from Gomella et al for LMICs) (Gomella, et al., 2004). When first beginning feeding at the breast, most premature infants will require supplementation with cup and spoon feeds because of insufficient milk transfer to support adequate weight gain. Bottle feedings should be strongly discouraged in LMICs because of the difficulty of keeping them clean and their association with diarrheal diseases and death in these environments (Alrifai et al, 2010;Eshete, 2008;Ghosh et al., 1997). Advance feedings as rapidly as is safe to do so. Generally this works best if there is a specific protocol that the nurses can follow without additional orders. Guidelines about when to deviate from these protocols and when to call the physician for feeding problems should also be in place. For infants less than 1000 grams birthweight, begin at 1cc every 1-2 hours and advance by 1-2cc every 24 hours if tolerated; for birth weight 1000g-1500 grams start at 1-2cc every 2 hours if possible (every 3 hours may be required depending on nursing shortages) and advance by 1-2cc every 12 hours; for birth weight greater than 1500 grams to 2000 grams start at 2-3cc/feed every 3 hours and advance every 8-12 hours; for birth weight greater than 2000 grams-2500 grams and unable to feed at the breast start at 5cc and advance by 5cc every 6-8 hours; for birth weight greater than 2500 grams and unable to feed at the breast start at 10cc every 3 hours and advance by 10-15cc every 3-6 hours (adapted from Zlatkin and Perman) (Zlatkin, 1988). All breastfeeding babies should get Vitamin K 1 at birth and be supplemented with Vitamin D (Leung & Sauve, 2005). Additionally, iron supplementation should be started in all breastfed premature infants as soon as they are tolerating feeds and in term breastfed infants by four months of age (Baker & Greer, 2010). Addressing feeding problems Observe for signs of feeding intolerance including signs of necrotizing enterocolitis (NEC). Some of the signs of feeding intolerance include increasing abdominal distention, increasing gastric aspirate (especially if > than 30% the previous feeding), bilious vomiting and bloody stools (Gomella, et al., 2004). Isolated delayed gastric emptying should not be used as the only criteria for initiating, advancing, or withholding feeds (Adamkin, 2005). This practice can lead to excess delays in reaching appropriate caloric goals with consequent poor weight gain. If the infant has only increasing abdominal distention or gastric aspirate, without other signs of NEC, it may be appropriate to hold 1-2 feedings and then resume feedings at a smaller volume and increase slowly to reach caloric goals. 
If the infant has other signs of NEC or a surgical abdomen such as abdominal tenderness, edema of the gut wall, thrombocytopenia, X-ray changes consistent with NEC or a surgical abdomen, feedings will need to be held, IV fluids and/or total parental nutrition (if available) started along with other appropriate care as indicated by the disease process including antibiotics, nasogastric decompression and surgical consults. Recognize the importance of distinguishing between swallowed blood and true gastrointestinal bleeding. Feedings do not need to be held for swallowed blood. Bloody gastric aspirates are not a sign of NEC but may be associated with swallowed maternal blood, gastric irritation, hypothermia, thrombocytopenia, and gastric ulcers. However, if available, it is appropriate to begin ranitidine and give Vitamin K 1 ( if not previously given) in infants with gastrointestinal bleeding not of maternal origin. Consider a second dose of Vitamin K 1 if bleeding is severe. Recognize contraindications for beginning enteral feeds and/or continuing or advancing feeds. Absolute contraindications include a complete obstruction at any level unless the obstruction is caused from a meconium ileus, which may be alleviated non-surgically; severe hemodynamic instability, and confirmed necrotizing enterocolitis. In the absence of TPN, supporting these infants for more than 1-2 weeks on IVF's alone is difficult. Therefore, feedings should be started or re-started as soon as it is safe to do so. Signs that it is safe to attempt feedings include a soft, non-distended, non-tender abdomen with bowel sounds present and minimal gastric drainage. Routes of feeding (gastric tube verses cup verses at the breast) Oral feeding of the preterm infants is a challenge to the provider, the mother and the family. The maturation of feeding skills occurs in the last trimester of pregnancy and therefore, preterm infants are born deficient in the skills necessary for effective feeding including the ability to latch, to suck effectively, and to coordinate sucking, swallowing and breathing. Due to early delivery and complicated by interventions including suctioning, intubation, and ventilation, as well as neurologic, gastroenterological and cardiac status, the development of appropriate skills may be delayed, or significantly affected. The motor activities necessary for feeding, sucking, swallowing and breathing, develop in utero and are observed developing as early as 10-12 weeks gestation with the infant opening the jaw and 3-4 weeks later beginning suckling with fingers in the oral cavity. Breathing movements and swallowing begin as early as the 12th week of life. By the 28th week of gestation the jaw is observed in rhythmic movements with alveolar ridge stimulation. As the infant matures in utero, so do the behaviors necessary to sustain feeding. By the 28th-33rd week of gestation, suckling bursts can appear erratic and non-rhythmical. A mature suck, swallow, pause pattern is not observed until 35 to 36 weeks of gestation. This consistent rhythmic organization of suckling coordinated with swallowing and respirations is often considered a hallmark for neurologic maturation (adapted from Delaney and Anderson) (Delaney & Arvedson, 2008). Even when able to suckle, preterm infants have limited fat suckling pads in the cheeks, which impact the ability to maintain suction and duration of feeding. 
Prior to initiating oral feeding, an evaluation of the infant's feeding skills should be performed by an experienced observer (Nightlinger, 2011). The respiratory rate during rest and sleep in the past several days should be noted. Infants who are tachypneic or have frequent apneic spells are at risk during oral feedings. Presence of excessive oral secretions with drooling or choking spells may be an indication of poor swallowing or anatomical abnormalities that need to be evaluated. Any abnormalities of the tongue, palate or lips should be noted as they may affect the method of feeding. An oral-digital exam may be helpful in assessing readiness for feeding. The examiner presents his or her gloved finger in the mouth of the infant, with the pad of the examiners finger toward the palate. It is not unusual to find the tongue elevated posteriorly pressed against the hard palate. As the jaw opens wide for feeding, this initiates a drop in the tongue www.intechopen.com and one can observe central grooving as the side of the tongue elevates, surrounding the nipple or examiner's finger. The observer should see or feel the tongue move in a peristalsis motion not retracting and protruding. There should be a smooth peristaltic rhythm to the suckle with pausing, while suction is felt on the finger. Persistent retraction of the tongue, lack of seal or suction, sustained milk leakage, during oral feedings, hyperactive gag reflex, jaw gapping with loss of suction, or jaw tightening, and jaw or tongue undulations are adversely affect feeding. If persisting, these may be signs of neurologic deficits or swallowing disorders rather than prematurity (Nyqvist et al, 2001;, Guilleminault et al, 1984). Feeding regimens typically begin with gastric tube feeding in infants under 1500 grams. The determination of when to transition to oral feeds and how to begin oral feeding depends on the clinical status of the infant and the nursery specific protocols. Studies have shown that many neonatal units have no set policy for breastfeeding and that neonatal nurses have not received training in breastfeeding techniques (Cricco-Lizza, 2009;Siddell & Froman, 1994). This lack of training translates into poor and erroneous feeding information, lack of guidance for mothers and families, introduction of artificial feeds, and ultimately the possibility of breastfeeding failure for the mother and infant (Buckley & Charles, 2006;Grossman et al., 2009;Manganaro et al., 2009). Work has been done by Nyqvist and Anderson (Nyqvist et al,2010) and de Aquino (de Aquino & Osorio, 2009) identifying a developmental care approach to feeding. This involves teaching staff and mothers to identify behavioral cues, stress cues, periods of wakefulness, feeding readiness and alertness. Reactions to overstimulation including excessive crying or staying asleep are behavioral cues of disorganization of state. Identifying these cues helps mothers and caregivers to adjust the environment and the feeding as needed for the infant. Thus feeding success is on a continuum of small developmental increments beginning with early tube feeding with gradual introduction of the breast through Kangaroo Mother Care (KMC) ( Nyqvist, 2004) [see Figure 1.]. Skills are acquired slowly and will likely be associated with ups and downs, which should not be regarded as a failure. Mothers should be actively encouraged during this process. It is appropriate to allow the infant to lick or suckle the breast at each feeding even before effective suckling develops. 
This continuum begins with the preterm infant in developmentally appropriate positioning from the day of birth, which affords the infant the best opportunities to develop physiologically appropriate skills for readiness for feeds. (de Aquino & Osorio, 2009) Kangaroo Mother Care [i.e. the infant unclothed except for a hat and skin to skin against the mother's chest with the infants back covered with a blanket] should be initiated as soon as possible. KMC promotes temperature stability, steady growth, early and prolonged duration of breastfeeding, parental ability to respond to infant cues and enhanced attachment. KMC also reduces length of hospital stay, maternal postpartum depression symptoms, pain and incidence of infection. (Hake-Brooks & Anderson, 2008;Nyqvist et al, 2010) KMC allows the mother to spontaneously offer her breast and the infant to readily feed when awake and alert (Kliethermes et al, 1999;. Nyqvist noted that some very preterm infants have the capacity for early development of oral motor competence that it sufficient for establishment of full breastfeeding even at a low post-menstrual age ( Nyqvist, 2008). De Aquino (de Aquino & Osorio, 2009) evaluated feeding retrospectively for infants who were tube fed at breast with expressed breastmilk. At discharge, 100% of the infants who were tube fed at the breast, were exclusively breastfed with appropriate weight gain, supporting breast feedings supplemented with oral gastric tube feedings as an efficient method in the feeding transition of preterm infants (de Aquino & Osorio, 2009). The impact of cup feeding or bottle feeding on weight gain, oxygen saturation, and breastfeeding rates of preterm infants was examined in 34 bottle-fed and 44 cup-fed preterm infants (Rocha et al, 2002). No significant differences between groups were found with regard to time spent feeding, feeding problems, weight gain, or breastfeeding prevalence at discharge or at 3-month follow-up. Possible beneficial effects of cup feeding were lower incidence of desaturation episodes and a higher prevalence of breastfeeding at 3 months of age (Rocha, et al., 2002). In another study Abouelfettoh (Abouelfettoh et al, 2008) evaluated the use of cup feeding as an exclusive method of feeding preterm infants during hospitalization and its impact on breastfeeding outcomes after discharge. Sixty preterm infants averaging 35weeks gestation and birth weight of < 2150 grams participated in the study. Control group infants received only bottle feedings during hospitalization and the experimental group received only cup feedings during hospitalization. At six weeks of life the cup fed infants had significantly more mature breastfeeding behaviors than bottle fed infants and had a significantly higher proportion of breast feedings one week after discharge (Abouelfettoh, et al., 2008). www.intechopen.com Meier reported outcomes for 34 preterm infants whose mothers used silicon nipple shields during breast feedings. The mean milk transfer was significantly greater for feedings with the nipple shield (18 vs. 4 ml), with all 34 infants consuming more milk during breastfeeding. Major factors limiting the use of breast shield in LMIC are availability of shields, costs, concerns about cleanliness, and promoting bottle-feeding. However, as with many technologies used in high-income countries, it may be appropriate to have shields available in the special care baby nurseries for use before discharge of the infant from the nursery. 
Monitoring for appropriate growth and responding appropriately to poor weight gain and growth Observe for adequate weight gain which is generally about 15g/kg/day. However, the smaller the baby the slower the initial weight gain and the longer it is expected to take to get back to birth weight. In LMIC's where TPN is generally not available using the older growth chart from Dancis et al (Dancis et al, 1948) (see Figure 1.) may be more appropriate than using newer growth charts included in current neonatal handbooks and textbooks. Premature infants should ideally be weighed on the same scale daily (or at a minimum of every 2-3 days) and plotted on their individual growth chart. Adjustments to feedings should be made if the infant falls off the growth curve for more than 2 days. Preterm nutritional guidelines and growth goals are currently based on the reference standard of intrauterine growth and fetal nutrient accretion rates (McLeod & Sherriff, 2007). This standard is difficult to achieve for this high-risk population especially in LMICs. If postnatal growth fails, preterm infants are at higher risk for adverse neurological outcomes and compromised health. The nutrient deficit occurring in the early weeks post delivery, when the infant is medically fragile, is difficult to overcome. Weight, length and head circumference measurements remain important clinical indicators of growth, but composition of weight gain is emerging as a necessary measure in determining the adequacy of nutrition intake and growth (McLeod et al, 1994). The need to monitor weight for estimation of fluid balance is a different task than monitoring for growth, and the practitioner needs to be aware of both issues. As noted on the growth chart (Dancis, et al., 1948), most preterm infants have a precipitous drop in weight in the first 7-14 days of life due to diuresis; the smaller the infant, the greater the percent of weight loss and the longer the time to regain birth weight. It is helpful to document this weight loss on a growth chart and establish the rate of growth from the lowest weight's plotting point (Zorlu, 2011). If early weight loss is not plotted, weight gain in the first several weeks of life appears inadequate. Inadequate growth is also identified by a lower growth rate than that required to follow the growth curves on the chart (Pridham et al., 2011). If growth is poor, it is important to evaluate the cause. It may simply be that the infant is getting inadequate calories. This can be addressed as noted below by increasing the volume and/or fat content of the breastmilk, ideally by improved breastmilk expression techniques and/or increasing the fat content of the feeds or lacto-engineering (discussed later in this chapter). Rarely, it may be appropriate, if available, to add breast milk fortifiers and/or supplemental artificial feeds. Additionally, fat malabsorption, chronic lung disease, fluid restrictions and increase in energy expenditure may all contribute to poor growth. In infants feeding at the breast, weighing the infant pre-feed and post-feeds has been discussed in the literature and used in clinical practice for the last decade (Funkquist et al, 2010). Data by Hasse supported the use of pre-and post-feeds weighing as accurate, and an objective assessment of breastmilk intake (Haase et al, 2009), although this requires very accurate scales, not usually available in LMICs. 
Advantages of breastmilk Breastmilk is the best food available for infants and should be strongly supported and encouraged worldwide. No substitute provides the same benefits as breastmilk to the infant. The many benefits are appropriately summarized in this table adapted from a wall hanging in Swaziland (Table 1.). These benefits of breastfeeding are also summarized in an article by Leung et al (Leung & Sauve, 2005) that highlights nutritional, immunological, anti-infective advantages of breastmilk, as well as the enhanced cognitive development and prevention of allergies, obesity, diabetes, and possibly sudden infant death and later hypertension. www.intechopen.com For the premature infant the benefits are even more numerous and as summarized in an article by Meier, et al (Meier , 2010) the use of breastmilk also reduces the risk of a multitude of problems including necrotizing enterocolitis, nosocomial infections and rehospitalizations in this vulnerable population. Breast Feeding is BEST Best for Baby Fresh milk never goes off (never spoils) Reduces Allergies Emotionally bonding Economical Easy, once established Antibodies-greater immunity Digested well Stool inoffensive-rarely constipated Immediately available-no mixing required Temperature ideal Nutritionally optimal Gastroenteritis greatly reduced Table 1. Breast Feeding is BEST When using breastmilk, use the colostrum first. Do not dilute breastmilk. In many cultures, discarding colostrum is common (Okolo et al, 1999;Rogers et al., 2011;Tiwari et al,, 2009). Education regarding the importance of giving the colostrum and not discarding the colostrum should be emphasized. As also noted in these studies (Okolo, et al., 1999;Rogers, et al., 2011;Tiwari, et al., 2009) and many others, pre-lacteal feedings are also common, (Ahmed et al, 1999;Chandrashekhar et al., 2007;Darmstadt et al., 2007;Lakati et al, 2010) are associated with increased morbidity and mortality (Engebretsen et al, 2008;Leach et al., 1999) and should be discouraged. Breastmilk production The challenge of producing sufficient quantities of breastmilk to support the growth of the preterm infant falls largely on the mother of the medically fragile infant. However, health care providers can help educate these mothers in improved expression techniques and thus, relieve some of this burden. Some mothers presented with the challenge of providing expressed breastmilk for their fragile infant have feelings of helplessness, powerlessness and inadequacy arising in an extremely vulnerable period for the mother-baby breastfeeding pair (Boucher et al, 2011;Rossman et al., 2011). The importance of initiating early breastmilk expression by hand and/or pumping cannot be emphasized enough. Creating nursing routines that document breastmilk expression and frequency into the regular record of care for the postpartum mother, legitimizes this cause and empowers the nurse to begin helping the mother down the lactation pathway. This journey of providing milk, a life saving substance for the infant, is one the entire nursery must take part in. Protocols and procedure as well as policies regarding breastfeeding, milk expression including pumping, milk storage and handling, will clarify each participants role in providing this life saving white gold to fragile infants (Dougherty & Luther, 2008). Breastmilk expression regimens that mimic the feeding behaviors of full term infants are best for establishment of lactogenesis (Slusher et al., 2007). 
This includes frequent expression of milk every two hours beginning as soon after delivery as possible, ideally within the first two hours. Recovery from birth can be arduous and pumping is often viewed as something a mother does once she feels healthy. This attitude, thereby, delays breastmilk expression by hours or days and mothers lose the natural window and hormonal levels necessary for ease of transition from colostrum to mature milk (Chen et al, 2001). When breastmilk expression is begun in the second hour of life or as soon thereafter delivery as is possible, mothers experience prolactin surges and prolactin cell receptor proliferation that promotes quick and efficient milk flow . Trophic feeds for sick infants can then be managed with mothers' own milk, thereby avoiding the risk of artificial feeds in the immature gut. As noted by Slusher et al (Slusher, 2011) many mothers can express enough breastmilk by hand expression and this method should be supported and encouraged for most mothers. If a mother's milk supply is inadequate or decreasing over time, high quality hand pumps are also useful, especially in hospitals without consistent electricity. However, breastmilk expression protocols should include procuring a hospital grade pump (double pump) especially in referral hospitals as they do increase total maternal milk volume expressed (Slusher, et al., 2007;Slusher, et al., 2011) and can prove to be invaluable in those mothers who have an inadequate or decreasing milk volume with hand expression and hand pumps. Teaching the mother to combine hand expression and breast massage with pumping helps increase the expression of hind milk and avoids the problem of overly full breasts (Renfrew MJ, 2009). A combination of techniques, hand expression along with pumping may actually prove to be optimal (Morton et al., 2009). Fitting the pump breast shield to the mother's breast is an important part of providing the proper equipment for the mother who is expressing milk for her infant via an electric breast pump. The breast shield, the portion of the pumping kit that actually fits on the breast, must have adequate room for the nipple to move easily in the tunnel portion of the shaft. If this opening is too small or too large, then the mother may experience trauma to the breast or nipple and reduced milk flow. Persistent use of inappropriate breast shields may limit milk production and lead to early weaning. Mothers should be advised to pump regularly every 2-3 hours, as well as given advice that if it appears that they will be traveling or busy during the scheduled pumping time to pump early rather than miss the pumping session. If mother's feel they cannot pump the entire pumping session of 15-20 minutes or two minutes beyond the time she no longer sees milk, then she should pump as long as possible for that pumping and then try to pump more frequently and more efficiently later to make up for the limited pumping session earlier in the day. Mothers who have prolonged pumping sessions beyond 20 minutes typically do not produce more milk and may have more nipple soreness. It is the repeated removal of milk from the breast that creates the stimulus for higher milk yield. There are large variances of how much milk the breast is able to store as well as how much milk the mother is able to pump. 
Mothers who appear to have plenty of milk but pump limited amounts may have mechanical problems with the pump, the wrong size breast shield, may be more successful hand expressing or need a different environment to promote let down. Breastmilk storage Milk storage containers are best when made of polypropylene, hard sided plastic or glass with a solid lid. Avoid open containers, containers closed with a bottle nipple, and polyethylene soft-sided storage bags (Cossey et al, 2011;Manohar, Williamson, & Koppikar, 1997). These bags often sequester nutrients and may split or tear. Concern has been www.intechopen.com expressed about regarding bacterial contaminant in breastmilk and the need for random culturing of milk. A recent analysis by Schandler demonstrated that breastmilk cultures are not predictive of infection in premature infants (Schanler et al., 2011) and therefore, are not recommended. Ideally, storage facilities in the nursery should be readily available for refrigerating freshly pumped milk and freezing extra milk while the infant is nil per os (NPO) or not consuming the amounts the mother is pumping. Even when such facilities are not available, or when her infant is taking only small volumes of milk at each feeding, the mother still needs to express until the breast is emptied. Emptying or nearly emptying the breast at each expression session increases milk production and increases the likelihood that the mothers of these infants will continue to be able to have enough milk to exclusively breastfeed their infants after discharge from the nursery (Chapman & Perez-Escamilla, 2000;Daly et al, 1996;Neville, 1999). Breast feeding mothers know when most of the milk has been extracted as the breast feels soft and lighter in weight. Unrefrigerated breastmilk can be generally be given to infants for up to 3-4 hours (6 hours if very clean conditions) when stored at room temperature (ABM, 2010). Breastmilk may be safely refrigerated however, the suggested times vary widely depending on the study and the conditions of refrigeration from 3-8 days (ABM, 2010). If facilities exist for freezing expressed breast milk and keeping it frozen (consistent power), it may be frozen in the back of the freezer for at least 3 months (ABM, 2010). If fresh milk is not available, then refrigerated mother's milk should be given. Give frozen milk to infants who have exhausted both fresh and refrigerated supplies. Increasing the caloric content of breastmilk One major advantage of having extra milk at each breastmilk expression session is the opportunity to alter the caloric content of the milk by pumping the milk in two or even more aliquots. The first milk that the mother expresses is low fat, low calorie foremilk and can be set aside or stored for later use if those facilities are available. The later milk is higher fat, higher calorie milk or hindmilk and can be fed to the infant preferentially and improves the growth rate of the infant. This process is called "lacto-engineering" and is described in detail in the AAP book "Textbook of Global Child Health" (Slusher T, 2012). If time for teaching and staffing allow, both mothers and health care providers can be taught to determine the caloric content of breastmilk using a simple hematocrit spinner and reader. This process determines the creamatocrit or cream content of the milk and is discussed in detail by Slusher and Lucas (Lucas, 1978;Slusher et al, 2012). 
If this is not possible, mothers can be taught to watch their milk as they express it and to change containers when it begins to thicken and feed the second milk to their infants. For infants feeding at the breast, the first milk extracted from the breast is high in lactose, is sweet to the infant and entices the infant to feed more. The last milk in the breast or expressed near the end of the pumping or feeding session is high in fat and can be over 30 calories per ounce (Bishara et al 2008;Ogechi et al, 2007;Slusher et al., 2003). Feedings of hind milk alone contribute to weight gain in the infant (Ogechi, et al., 2007). Hind milk feeding is often employed for the infant unable to tolerate advancing volumes of feed or who demonstrates inadequate growth (Griffin et al, 2000;Lucas, 1978). Other additives have been utilized to promote growth in breastfed infants. These additives include human milk fortifiers, powdered formula and exogenous oil to improve caloric and nutrient intake. Each of these has disadvantages especially in LMICs. The cost of commercial breastmilk fortifiers is prohibitive in LMICs, artificial milk is expensive and easily contaminated during mixing, and oils may be poorly absorbed and adhere to tubing (Hamosh, 1987;Mehta, Hamosh et al, 1988). Supplementing breast milk involves not only the direct cost of the formula or supplement but also that of training the mothers in techniques for feeding their infants without compromising breastfeeding or increasing the risk of infectious diseases (Griffin, et al., 2000;Lucas, 1978;Ruiz et al, 2002). Increasing breastmilk volumes Bishara (Bishara et al, 2009) determined factors associated with foremilk volume (milk produced in the first 3 minutes of pumping), hindmilk volume (remainder of milk produced), and total milk volume produced by mothers of very preterm infants at 3 weeks p o s t p a r t u m . M i l k v o l u m e s w e r e n o t a s s o c i a t e d w i t h m o t h e r ' s a g e , r a c e o r e t h n i c background, education, parity, reported pre-pregnancy body mass index, previous breastfeeding experience, frequency of milk pumping, longest time between pumps, infant birth weight, or multiple births. However, degree of pre-maturity (<26 weeks vs. 26 to 27 weeks) was significantly related to the relative proportion of foremilk/hindmilk volumes (Bishara, et al., 2009). Increasing breastmilk volume is a challenge to all mothers who provide milk for their infants. The best practice is to prevent low milk supply by expressing breastmilk as soon as possible after delivery preferably within 2 hours. During the early hours after birth this may require the assistance of nursing personnel, depending on the physical condition of the mother. Slusher and her research team physically assisted mothers who were too sick to hold the pumping equipment or physically participate in the pumping (personal communication). Instruction in assisting mothers should be included in training protocols for obstetric and neonatal staff. Some mothers of preterm infants express minimal milk volumes in the first few days of life and will need encouragement to continue expressing until their milk volume increases. Documentation of breastmilk expression, including method, by nursing staff and giving data in report to oncoming staff helps to establish a team approach to pumping and milk collection. Inadequate milk production needs to be investigated. 
Most mothers experience inadequate production due to improper removal of milk or a number of reasons which may include: infrequent pumping, shortened pumping, inadequate removal of hind milk, medications including birth control commencement, uterine hemorrhage, and thyroid conditions. Inadequate breast pumps and mechanical pump problems may also be at fault for inadequate milk removal and subsequent limited production. Correcting these problems may increase milk production (Hill et al., 2009). Mothers need encouragement, support, and an observation of breastmilk expression techniques to establish and build a supply. Advise the mother to eat as nutritious and high caloric diet as feasible, and drink fluids to thirst. Galactogogues (ABM 2011) have been used to help preterm and term mothers create more milk. These include both herbal remedies and prescribed medications that stimulate milk receptor cells or prolactin surges. Galactogogues typically will not produce more milk if the regular removal of milk is not occurring. Establishing a good milk removal routine is paramount in addressing low milk supply. Combining hand expression and pumping www.intechopen.com may increase milk expression (Morton, et al., 2009). Many mothers find that seeing more milk with pumping or hand expression is the best motivator for pumping more. Galactogogues based on herbs and other natural substances include fenugreek, galega (goat's rue) and milk thistle (Zuppa et al., 2010). Mothers with ragweed and peanut allergies should be advised to avoid fenugreek. Principle prescription medications contributing to increased milk supply include metoclopramide (Betzold, 2004) and domperidone (Wan et al., 2008). The latter is more effective, is associated with fewer side effects, and is preferred throughout much of the world. Galactogogues may be started at any time during the pumping process and should not be withheld as a last resort. The use of galactogogues should be limited to those situations in which reduced milk production from treatable causes has been excluded. Occasionally hormonal imbalances in conditions such a polycystic ovarian syndrome, thyroid disorders, insufficient glandular development of the breast, breast reduction surgeries or breast lumpectomy may reduce the ability for the breast to make milk (Andrade et al, 2010). Alternate feedings in premature and sick infants As previously noted, the preferential milk for the preterm infant is, of course, his or her own mother's milk fed fresh to the infant. Options if the mother does not wish to breastfeed or has an absolute or relative contraindication to breastfeeding include primarily donor milk, wet nurses and artificial feeds. Donor breastmilk is considered an important adjunct to infant feeding in many high-income countries where donor milk is both screened carefully and stored properly making it an unlikely source of feedings in LMICs. Likewise, wet nurses should be screened for infections to ensure their breast milk is safe. Wet nurses cannot be recommended in LMICs when this screening is unavailable. Preterm formula, if available, can be offered if breastmilk feedings are not an option. Milk based formula is preferred. Special nutrient and caloric enriched formulas that support good growth and development are available for use in preterm infants in the first six months of life (Jeon et al., 2011). 
In most LMICs, availability and cost mean that powdered artificial feedings designed for term infants are the only alternate infant food available to both premature and term infants for whom breastmilk is not an option. When using powdered formula it is essential that it be prepared hygienically with good hand washing, clean water (ideally boiled) and clean utensils. There is a very small risk of bacterial contamination in powdered milk formula with Enterobacter sakasakii, a rare opportunistic pathogen associated with meningitis, necrotizing enterocolitis and sepsis (Gurtler et al, 2005;Palcich et al., 2009). As pointed out in a study in Tanzania, this is of particular concern in the preterm, but can occur at any age (Gurtler, et al., 2005;Mshana et al., 2011). Therefore, breastmilk is preferred in the neonatal period particularly for preterm infants. Most other homemade formulas, including animal milks, are less ideal for the neonate than formula designed for infant feedings and should only be used as a last resort, especially during the neonatal period. As mentioned earlier, cups and spoons should be encouraged instead of bottles in any LMIC where hygiene is often not ideal. Breastmilk and the HIV positive mother Breastmilk was recognized to be a leading cause of maternal-to-child transmission (MCTC) of HIV early in the epidemic (Kreiss, 1997;Ogundele & Coulter, 2003;Ruff, 1994). Initial www.intechopen.com efforts to curtail this mode of transmission were focused heavily on substitute feedings for breast milk. Breastfeeding was strongly discouraged and heroic efforts were made to get breastmilk substitutes into LMICs. Unexpectedly, this effort was met with at least as many infants dying in the substitute feeding group as were dying in the breastmilk group due to an unacceptably high incidence of illnesses including diarrhea in the substitute feeding group. (Horvath et al., 2009;Rollins, 2007;Thior et al., 2006) In some studies (Shapiro et al., 2007) discontinuing breastfeeding was considered to be the primary risk factor for death. Additionally, mothers may choose to breastfeed because breastfeeding is culturally acceptable: and not breastfeeding can lead to discrimination or stigmatization (Cavarelli & Scarlatti, 2011;Sadoh & Sadoh, 2009) Unfortunately, women given breastmilk substitutes often choose to give mixed feedings with breastmilk and breastmilk substitutes. This combination of mixed feedings is identified as the most risky choice with the highest incidence of maternal to child transmission (MTCT) of HIV (Coutsoudis et al., 2001). Researchers and clinicians alike continue to struggle with the best feeding options and other interventions in low-resource settings to curtail MTCT of HIV (Kuhn et al., 2008). Significant strides have been made in recent years. All involved continuing to emphasize educating mothers on their infant feeding choices and supporting those choices whatever they are. If the mother concurs, the current recommendation is to support breastfeeding unless breastmilk substitutes are acceptable, feasible, affordable, sustainable and safe (AFASS)(WHO, 2010). Because exclusive breastmilk feedings are associated with the lowest incidence of HIV transmission, exclusive breastmilk feedings are recommended for the first six months of life (WHO, 2010). After six months of life complimentary foods should be added as appropriate. Breastmilk feeding should continue along with these complimentary foods unless breastmilk substitutes are now AFASS. 
At any point during the breastfeeding period that breastmilk substitutes become AFASS, the transition to breastmilk substitutes should be supported. This approach still has a significant ongoing risk of MTCT of HIV unless antiretroviral drugs are included in the regime (Cavarelli & Scarlatti, 2011;Horvath, et al., 2009;McIntyre, 2005). Additionally, boiling expressed breastmilk may be an option for some mothers where artificial feeds are not available (Cavarelli & Scarlatti, 2011;Savage & Lhotska, 2000). Recent advances in making antiretroviral drugs available and affordable to HIV+ mothers who choose to breastfeed have decreased the risk of transmission of HIV through breastmilk. If fully implemented, the WHO recommendations could potentially reduce the risk to 5% or less from the background risk of 35% in breastfeeding infants (WHO, 2010 pregnancy and breastfeeding instead of stopping them one week after weaning from breastfeeding (Ciaranello et al., 2011). For many LMICs daily nevirapine is more economical and therefore, more feasible than treating the mothers with ARV's but either regime decreases MCTC and can be supported with current evidence-based studies. For details of these regimes and others consult a pediatric HIV specialist and an up to date source of recommendations of ARV's for the prevention of MCTC. All recommendations regarding MCTC are frequently changing, therefore, all health care providers treating HIV+ mothers and their infants should check current recommendations from WHO, UNICEF and other updated sources. Maternal triple ARV prophylaxis (Option B) Mother Mother Antepartum twice-daily AZT starting from as early as 14 weeks gestation and continued during pregnancy. At onset of labour, sd-NVP and initiation of twice daily AZT + 3TC for 7 days postpartum (Note: If maternal AZT was provided for more than 4 weeks antenatally, omission of the sd-NVP and AZT + 3TC tail can be considered; in this case continue maternal AZT during labor and stop at delivery). Triple ARV prophylaxis starting from as early as 14 weeks of gestation and continued until delivery, or if breastfeeding, continued until 1 week after all infant exposure to breast milk has ended. Recommended regimes include: ATZ + 3TC + LPV/r or AZT + 3TC + ABC or AZT + 3TC + EFV or TDF + 3TC (or FTC) + EFV Infant Infant For breastfeeding infants Daily NVP from birth for a minimum of 4 to 6 weeks, and until 1 week after all exposure to breast milk has ended. Irrespective of mode of infant feeding Daily NVP or twice daily AZT from birth until 4 to 6 weeks of age Infants receiving replacement feedings only Daily NVP or sd-NVP + twice daily AZT from birth until 4 to 6 weeks of age ARV-antiretroviral; AZT-zidovudine; 3TC-lamivudine; NVP-nevirapine; sd-NVP-single dose nevirapine; LPV/r-lopinavir/ritonavir; EFV-efavirenz; FTC-emtricitabine; WHO Contraindications and relative contraindications to breastmilk feeding There are few conditions in which breastfeeding is not recommended for the infant or for the mother. Unless alternate feedings are acceptable, feasible, affordable, sustainable, and safe (AFASS), HIV positive mothers are encouraged to breastfeed. An infant diagnosed with galactosemia, a rare metabolic disease should not breastfeed. Mothers with certain untreated www.intechopen.com infections (e.g. tuberculosis) should not breastfeed at the breast but may be able to use expressed breastmilk depending on the particulars of their disease. 
Infants of mothers with Hepatitis B carriage may breastfeed provided their newborns are immunized against Hepatitis B immediately after birth as soon as dry and stable (IOM, 2011) . For specific situations consult an infectious disease specialist or the Red Book(AAP, 2009). Any mother using illicit drugs, taking cancer chemotherapy agents such as antimetabolites, or undergoing radiation therapy, generally should not breastfeed. Nuclear medicine studies vary in their components and advice should be sought in particular for the radioactive component of the scan (AAP, 2001;Hale, 2010). Team approach and education: Supporting and encouraging breastfeeding Breastfeeding incidence and duration rates are significantly affected by maternal educations and staff education (Brent et al, 1995). It is important to focus on providing evidenced based education and support regarding breastfeeding practices in special care baby units to mothers as well as all staff interfacing with the mother and baby (Meier, 2010). In a USA study, Hallbauer found that infants with a lower weight and gestational age, who had prolonged stays in the neonatal intensive care unit were less likely to be breast-fed after discharge (Hallbauer et al, 2002). This suggests that efforts to promote breast-feeding in the neonatal unit were ineffectual or inadequate. The author suggests that in order to remedy this situation it is necessary to keep the mother-infant pair together, to promote breastfeeding before and immediately after delivery and to train staff in the management of lactation (Hallbauer, et al., 2002). Family-centered care has been implemented in many neonatal intensive care units throughout the U.S. and is invaluable in helping families, whose infants require hospitalization, cope with the stress, fear, and altered parenting roles that may accompany their child's condition and hospitalization (Mulasky, 2005). In highincome countries, a combination of breastfeeding management and family centered training for neonatal intensive care unit staff enhances the care of these medically fragile infants and their families. In LMICs family centered care is the norm and essential in providing care for their family members. Mothers already play a vital role in the care of their premature and sick infants and in many of these LMICs and stay either with the infant in the nursery or nearby. However, education and support in the feeding and care of their infant is crucial to providing optimal growth, development and exclusive breastfeeding for these most vulnerable infants. Statistics specific for infants graduating from special care baby units in low-middle income countries is lacking. However, since breastfeeding is the norm in these countries it is likely that most infants go home with at least partial breastmilk feeds. The larger problem is promoting exclusive breastmilk feedings for the first six months of life. This is challenging especially in HIV positive mothers. Mixed feedings are common and misconceptions about the adequacy of mothers' own milk are widespread. There is an ongoing need to continually promote and reinforce the Baby Friendly Hospital Initiatives (UNICEF, 2009). Abolyan found the Baby Friendly training among staff increases breastfeeding rates as well as maternal satisfaction (Abolyan, 2006). Breastmilk is the life saving choice for infant feeding in high-risk special care baby nurseries and should be the standard of care for these infants. 
H o w e v e r , t h i s m u s t b e b a l a n c e d w i t h t h e needs of the individual infant and, when appropriate, alternative feedings recommended (i.e. some infants of HIV+ mothers, orphaned or abandoned infants, infants who are failing to thrive despite their mothers efforts to produce adequate breastmilk). All policies, procedures and training regarding infant feeding should promote, protect and support breastfeeding for the mother and their infants whenever possible. Indications for and appropriate intravenous fluids in premature and sick infants The goals of IV fluid therapy are to maintain adequate hydration, appropriate electrolyte balance and sufficient carbohydrate intake to support basic metabolic processes and avoid hypoglycemia. Fluid and electrolyte requirements for newborns change with advancing gestational age and postnatal age as total body water decreases, extracellular fluid volume contracts, renal tubular reabsorption of free water, sodium, and bicarbonate improves, and skin matures. Extremely preterm infants are at especially high risk for excessive rate of water loss through their very thin skin. Transepidermal water loss immediately after birth at 26 weeks gestation is 60 g/m 2 /h decreasing to approximately 25 g/m 2 /h by 32 weeks and to 10 g/m 2 /h by 40 weeks post-menstrual age (Seri I, 2005). A 25-27 week gestation infant placed in 50% humidity on the first day after birth may lose 129 ml of water/day through the skin alone (Modi, 2005). The skin matures quite rapidly so that even in the most immature infants born at 26 weeks gestation, the transepidermal fluid loss has decreased to about 43 ml/day by one week after birth. Antenatal steroids accelerate skin maturation in the preterm infants, thereby reducing postnatal transepidermal water loss. In the healthy term infant the immediate post-delivery period is characterized by a negative fluid and sodium balance. Hormonal changes associated with labor and delivery initiate a postnatal diuresis over the first 2-3 days at which time fluids should be limited so as not to impede this process. Normally during the first 1-3 days oral intake via breastfeeding is limited. Consequently, isotonic contraction of extracellular fluid volume occurs. The result in the healthy term newborn is an appropriate weight loss over the first several days after birth of 1-2% per day for a total weight loss of 5-10% (Kalhan , 2001). Birth weight is usually regained by day 7. Preterm infants are born with relatively more total body water to excrete, immature renal function, and greater transepidermal water loss. Therefore, they have a substantially greater initial weight loss, up to 15% over the first week, and regain birth weight between the second and third week after birth. In both term and preterm infants, weight gain over the first few days is abnormal and represents fluid and sodium retention due either to excessive administration of fluid or to neonatal conditions which compromise organ function or increase capillary leak. Fluid requirements Fluid requirements are influenced by clinical and environmental conditions. Insensible, evaporative fluid losses are increased by low humidity environments (<50%), care under radiant warmers, high ambient temperature, phototherapy, non-humidified respiratory gases, skin defects or breakdown (e.g., omphalocele, gastroschisis, burns), fever, and tachypnea. 
Gastrointestinal losses are readily evident when due to diarrhea or nasogastric drainage, but may be unapparent when due to third spacing associated with necrotizing enterocolitis or to large evaporative losses during GI surgery. Environmental factors such as clothing, high www.intechopen.com humidity, care in a double walled incubator, skin ointments, and humidified respiratory gases all decrease insensible losses. Some clinical problems decrease fluid requirements due to organ injury (e.g., birth asphyxia) or the underlying pathophysiology (RDS, PDA). Taking the normal adaptive changes into account, IV fluid administration in the first few days should be limited and then gradually increased to maintenance volumes over the next several days. Urine output should normally be at least 2 ml/kg/hour after the first day. Daily fluid intake will vary depending upon gestational age and medical condition. Extremely preterm infants have very high insensible water loss due to epidermal immaturity and will need a higher fluid intake to compensate for their transepidermal water loss. However, excessive fluid and sodium administration at this time is associated with complications, particularly in preterm infants, such as pulmonary edema, increased respiratory distress, patent ductus arteriosus and a greater risk of bronchopulmonary dysplasia. Because of the contraction of total body water, Na supplementation is not needed until 3-4 days after birth. For term infants begin IV infusion rates at 60-80 ml/kg/day on days 1-3, increasing slowly by 10-20 ml/kg/day to 100 ml/kg/day. On days 3-7, if clinically stable with appropriate weight loss of 1-2% per day, continue increasing total fluid intake (IV + PO) by 20 ml/kg day up to a maximum of 180 ml/kg/day. After day 7, do not exceed a maximum fluid intake (IV plus PO) of 180 ml/kg/day until the infant is off IV fluids and entirely on ad lib oral feeds. The fluid requirement for premature infants varies depending upon the degree of prematurity, as well as environmental factors and clinical problems that increase or decrease insensible fluid loss. In general, on days 1 and 2 after delivery, infants with birth weights < 1000 g require 100-150 ml/kg/day; infants with birth weights 1001-1500 require 60-100 ml/kg/day ; and infants with birth weights >1500 require 60-80 ml/kg/day. For all infants total fluids (IV + PO) are gradually increased by 10-20 ml/kg/day to reach 150-160 ml/kg/day by 7-10 days of age. When growing, and without medical complications, premature infants may tolerate up to 180 ml/kg/day. Providing preterm infants with humidified incubators or placing them under small plastic tents will decrease insensible loss and total fluid requirements. Fluid intake should be based on birth weight until birth weight is regained. In general, too much fluid is more deleterious than too little fluid It is better to err on the side of cautious fluid administration. For all infants, be sure to include the volume of oral feeds when calculating the daily total intake/day. If receiving only intravenous (IV) fluids write the order to indicate how much is to be given each hour. If the infant is also receiving oral feeds, an IV+ PO order will avoid inadvertently giving too much or too little fluid as IV and/or PO volumes are changed (ex. 
if the infant is receiving a total of 360 ml of fluid/day and is being fed every q 3 hours: "Give 20 ml D5W IV + 25 ml breastmilk PO/NG every 3 hours = 45 ml q3 hours= 360 ml/day total.") Rewrite the order to maintain the appropriate interval volumes as IV and PO intakes change. Glucose Begin with a glucose infusion rate of 6 mg/kg/min given as D5W if birth weight is < 1000g and 8 mg/kg/minute as D10W if birth weight is greater than 1000 g. Increase the glucose www.intechopen.com infusion rate by 3-6 gm/kg/day (10-20 kcal/kg/day) to a maximum of 12-14 mg/kg/min. Keep the serum glucose less than 150 mg/dL. It is helpful to calculate the glucose infusion rate (GIR) in mg/kg/minute [(% glucose X ml/kg/day) /24 hours per day/60 minute per hour] and the caloric intake (Cal/kg/day) provided (IV glucose =3.4 Cal/gm). A GIR of 4-6 mg/kg/minute approximates basal hepatic glucose production. About 50 Cal/kg/day are needed for maintenance of weight and basic metabolic function (Kalhan SC, 2001). This level of caloric intake is barely achievable using D10W at maximum fluid volumes. Initiation of enteric feeds is necessary as soon as possible in order to provide enough calories and nutrients for growth. Electrolytes In the first 3 days after birth, electrolytes are not needed and IV fluids containing only 5% or 10% glucose (D5W or D10W) are adequate. Sodium (Na) supplementation (3-4 mEq/kg/day) should be started by day 3 to avoid hyponatremia and help establish the positive sodium balance needed for growth. Nasogastric and ostomy drainage contain a considerable amount of NA (45-140 mmol/L) which can generally be replaced with equal amounts of 1/3-1/2 NS. Potassium (K) can also be added (1-2 mEq/kg/day) on day if urine output is adequate and the infant is not yet taking enteric feeds. Adding 10 mEq K/1000ml of IV fluid provides 1 mEq K/100 ml which is adequate for most infants. Since inadvertent administration of excess potassium may be fatal, potassium should be added to IV fluids only when necessary and the preparation carefully checked by two nurses. Types of IV fluids If stock IV solutions are not available, adding 25 ml of Ringers Solution or Ringers Lactate to 100 ml of D5W or D10W will approximate ¼ NS and deliver 3-4 mEq Na/100ml (Slusher et al, 2011). If only D5W is available, an appropriate amount of D50W can be added to the D5Wsolution to make D10W as described in detail in the AAP book"Textbook of Global Child Health" (Slusher T, 2012). Care must be taken to maintain sterility when mixing IV solutions together. All fluids except for acute volume expansion should contain glucose. Total parenteral nutrition (TPN) containing protein and lipid in addition to glucose, is necessary to achieve adequate intravenous caloric and nutrient intake when enteric feeds are not possible. TPN has dramatically improved the survival following neonatal surgical procedures such as diaphragmatic hernia, tracheoesophageal fistula, omphalocele, gastroschisis and bowel resection associated with prolonged inability to take enteric feeds. However, TPN is associated with a much higher risk of sepsis because the solution itself is an excellent culture medium and infusion lines, especially if centrally placed, are at high risk for contamination. Although used routinely to support nutrition for sick newborns in developed countries, TPN solutions require preparation and administration under strictly controlled, aseptic conditions with appropriate facilities and staff. 
Methods of fluid administration IV fluids are best given continuously by infusion pumps. Alternatively a non-mechanical drip set with a buretrol may be used. Accidental acute fluid overload may be life threatening. It is therefore critical to limit the amount of fluid that can be rapidly infused by www.intechopen.com filling the pump syringe, pump chamber or buretrol with only 2-4 hours of IV fluid. When continuous infusion is not possible, IV fluid can be given intermittently as frequent, small boluses every 2 hours. However, intermittent bolus administration of glucose increases the risk of hyperglycemia and hypoglycemia, acute changes in serum osmolality and risk infection from frequent entry into the IV line. This method of IV fluid administration should be used only if safer methods of IV infusion with pumps or drip sets are unavailable. Routes of fluid administration IV fluids may be given by peripheral or central lines. Do not exceed D12.5W when using a peripheral IV or D20W if using a central venous line. NS, 1/2NS or D5W can be used in central arterial lines; hypertonic solutions should be avoided. Monitoring fluid status and electrolytes Monitoring fluid and electrolyte balance is essential. All fluid intake should be systematically recorded including IV infusions, medications, blood products, and feeds. Passage of urine and stool should also be recorded. Whenever possible urine output should be measured. Any drainage from NG tubes and chest tube should be measured and recorded. Vital signs including heart rate, respiratory rate, temperature and blood pressure should be recorded at least once or twice per day. A careful daily physical exam should include evaluation of capillary refill, skin turgor, mucous membranes, and the anterior fontanel. An accurate, unclothed, daily weight is one of the best ways of assessing overall fluid balance in the first several days after birth. Obtaining a daily weight in preterm infants, especially those who are extremely immature, must be balanced against the difficulty of doing so, the accuracy of the scale and potential complications which may occur during the process of weighing (e.g., hypoxia, hypothermia, exposure to contaminated surfaces). If possible, serum electrolytes should be checked using small capillary blood samples every 24-48 hours during the first week or until stable. At a minimum Na and K should be checked on days 3 and 7. The normal serum sodium level is Na 132-144 mmol/L; the normal K level is 3.8-5.7 mmol/L (Modi, 2005). Complications Common complications of IV fluid therapy include hyponatremia (< 130 mmol/L), hypernatremia (> 150 mmol/L), hyperglycemia (> 150 mg/dl), hypoglycemia (< 40 mg/dl) if fluids are abruptly discontinued, accidental fluid overload, and skin injury due to IV infiltration. Avoiding severe burns due to IV infiltration requires frequent inspection of the IV infusion site. Treat IV infiltration in an extremity by elevating the limb. If circulation to an area on the infiltrated extremity is compromised, warm the opposite extremity which will help reflexly dilate blood vessels in the affected limb without increasing oxygen demand. If the skin is broken down, the area should be treated as a burn. Fluid requirements in common neonatal conditions Birth asphyxia is often associated with renal insufficiency due to acute tubular necrosis resulting in severe oliguria or anuria. In this circumstance, fluid intake should be limited to insensible loss, approximately 30 ml/kg/day on day 1 in term infants. 
Fluids intake is www.intechopen.com liberalized slowly as urine output improves. A tight nuchal cord may result in hypovolemia when venous return from the placenta to the fetus via the umbilical vein is obstructed. Affected infants have poor capillary refill, tachycardia, may have weak pulses, and often have respiratory distress. These infants usually respond promptly with improved perfusion and resolution of respiratory distress and tachycardia after volume expansion with boluses of 10 ml/kg normal saline IV boluses up to a total of 20-30 ml/kg. A symptomatic patent ductus arteriosus is common in preterm infants, especially if fluid administration has been excessive. Indomethacin, used to pharmacologically close the ductus, is associated with transient oliguria, fluid retention and hyponatremia. Extremely premature infants (24-27 weeks gestation) have very high transepidermal free water loss and renal immaturity in the first several days after birth that increase the risk of hypernatremia and hyperkalemia. Changes in weight and electrolytes must be closely monitored and fluid intake adjusted accordingly. Respiratory distress syndrome is associated with increased pulmonary fluid and failure to diurese until 3-4 days after birth at which time the respiratory disease improves. Use of diuretics is ineffective in hastening the spontaneous diuresis. Immediately after gastrointestinal surgery, infants are often oliguric or anuric. Usually this is due to intravascular volume depletion from large, intraoperative insensible fluid losses from the exposed gut and/or post-operative third. The appropriate treatment is volume expansion, not administration of a diuretic. In shock, due to acute blood loss, the best treatment is immediate volume expansion with blood products. Diagnosis and treatment of hypoglycemia in sick and premature infants Glucose production in the fetus is normally very low, and most glucose for fetal energy utilization is obtained from the maternal circulation via facilitated diffusion across the placenta (Hay, 2006). Under normal circumstances, birth represents the beginning of a transition period when the neonate will develop the ability to maintain glucose homeostasis independently. Serum glucose levels fall initially, reaching a nadir around 2-3 hours of age. Catecholamine release, cortisol surge, insulin production, enhanced glycogenolysis and gluconeogenesis are a few of the important hormonal and metabolic events taking place that ultimately lead to activation of independent glucose production in the normal neonate. Serum glucose concentrations will rebound within the first 4 hours after birth from this physiological nadir. With established feedings, the serum glucose level will continue to stabilize over the first 24 hours of life. Routine glucose monitoring is not recommended in the normal, healthy, term neonate (Committee on Fetus and Newborn, 2011). However, there are specific neonatal populations that are at increased risk of developing hypoglycemia and warrant close observation and monitoring. Glucose homeostasis is the result of a balance between energy production and energy utilization. When this balance is skewed, hypoglycemia results. Preterm infants, small for gestational age (SGA) infants, and very low birth weight (VLBW) infants have an impaired ability to produce glucose due to limited hepatic stores available for glycogenolysis. Illness in any neonate increases metabolic demand and energy utilization. 
Infection, asphyxia, and hypothermia thus increase the risk of neonatal hypoglycemia. Infants of diabetic mothers (IDM) and large for gestational age infants have increased metabolic demands secondary to macrosomia. Prolonged hyperglycemia in utero leads to abnormal glucose metabolism in the IDM after birth. IDMs have higher levels of circulating insulin, lower serum glucose levels, lower availability of alternative fuels, and impaired counter-regulatory hormones, all of which contribute to a high risk of neonatal hypoglycemia, often developing immediately after birth, earlier than in other at-risk populations (Martin, 2011; Peace O, 2010). The overall incidence of neonatal hypoglycemia is difficult to define. Studies have reported incidence rates from 0.4% to 29% during the first 24 hours of life, depending on the study population and the level of serum glucose used to define clinically significant hypoglycemia (Burdan DR, 2009; Depuy AM, 2009; Johnson, 2010; Najati N, 2010). The risk of hypoglycemia is higher in resource-poor countries, even in populations not characteristically thought of as at risk. A study in Nepal among term infants born by uncomplicated delivery in a hospital setting found 10% with serum glucose <37 mg/dl (<2.0 mmol/L) and over 50% with at least mild hypoglycemia, <50 mg/dl (<2.8 mmol/L), within the first 24 hours of life (Pal Deb, 2000). Contributing factors that are often more commonplace in resource-poor countries include lack of prenatal care, poor maternal nutrition, and delayed feeding practices. Neonatal hypoglycemia may have long-term sequelae. Hypoglycemia is often associated with hypoxemia, respiratory compromise, prematurity and other problems of the ill or premature infant. This can make it difficult to attribute neurodevelopmental outcomes to hypoglycemia alone. Generally, transient hypoglycemia in an otherwise healthy neonate has a good prognosis, whereas recurrent, severe, or prolonged hypoglycemia has been associated with poor developmental outcomes ranging from attention disorder to cerebral palsy (Martin et al, 2011).

Signs and symptoms of hypoglycemia

Signs and symptoms of neonatal hypoglycemia are subtle and nonspecific. Jitteriness, irritability, feeding difficulty, apnea, hypothermia, cyanosis, tachycardia, lethargy, floppiness, eye rolling or tachypnea could all be signs of symptomatic neonatal hypoglycemia. Diagnosis is confirmed by the identification of a low serum glucose and resolution of symptoms after glucose administration. Because the symptoms are nonspecific, consideration should be given to other diagnoses as well, including sepsis and asphyxia. More severe signs, including seizures and coma, often result from recurrent and/or prolonged hypoglycemia. Seizures and coma are not as quickly or easily reversed with glucose administration (Martin et al, 2011). Serum glucose levels are among the most common laboratory tests performed on neonates. Hypoglycemia is common, easily confirmed by laboratory testing, and the symptoms are reversible if intervention occurs in a timely manner. Any infant with nonspecific signs of illness should raise concern for possible neonatal hypoglycemia.

Glucose values and interpretation

The definition of clinically significant hypoglycemia is not known. There is no current research to indicate a level of serum glucose that consistently leads to permanent neurologic injury.
Instead, general consensus and an operational approach guide management and treatment decisions in neonatal hypoglycemia. Serum glucose levels between 40-50 mg/dl (2.2-2.7 mmol/L) during the first 4 hours of life are generally regarded as normal; less than 35 mg/dl is regarded as abnormal. After reaching a physiologic nadir, infants' serum glucose levels should stabilize. As a result, glucose levels >45 mg/dl are considered normal in infants 4-24 hours of age (Martin et al, 2011).

Treatment options for hypoglycemia

Treatment begins with prevention and screening of at-risk infants. Early breastfeeding, ideally within the first hour following birth, and frequent feedings every 2-3 hours can avoid severe neonatal hypoglycemia. Delaying neonatal feedings for up to several days is sometimes practiced in developing countries. This places all infants, but especially the ill and preterm infant, at risk for significant and prolonged hypoglycemia. Known at-risk infants should be screened for hypoglycemia by 3 hours of age, and sooner if clinical concerns arise. Neonatal hypoglycemia is treated by replenishing the substrate supply. The preferred method is breastfeeding. As mentioned, frequent feeding helps maintain euglycemia. Infant formula, or expressed breastmilk, can also be given by bottle, cup or gavage as necessary. This provides immediately accessible glucose through carbohydrate metabolism, as well as fats and protein for gluconeogenesis and maintenance of glucose stores during times of fasting. Enteral glucose administration via breastmilk, formula, or D5W should be used in mild to moderate neonatal hypoglycemia. The goal is to achieve a balance of energy production with energy utilization through frequent bolus feeds every 2-3 hours, for a total fluid goal of 60 ml/kg on the first day of life. Once feeds become well established, continued persistent hypoglycemia is an indication of an underlying metabolic or endocrine disorder. Intravenous glucose should be used if the neonate is symptomatic or if moderate to severe hypoglycemia is identified, which can be defined as a serum glucose <25 mg/dl during the first 4 hours of life or <35 mg/dl from 4-24 hours of age. Intravenous dextrose should also be given to the infant manifesting late-stage symptoms of coma and/or seizures. Intravenous dextrose is given as a small bolus of 2 ml/kg of D10W followed by a maintenance glucose infusion rate (GIR; see Equation I for the calculation) of 5-7 mg/kg/minute. Although suboptimal, on the rare occasion that intravenous fluids are not available for the treatment of symptomatic or moderate to severe hypoglycemia, D5W at 10 ml/kg can be given orally or by naso-gastric tube. Hyperosmolar glucose solutions (D25 and D50) should not be used in neonates and have been associated with intracranial hemorrhage and rebound hypoglycemia (Marx JA, 2010). Available solutions should be diluted to a concentration of D10.

Summary

Neonatal hypoglycemia is the result of abnormal glucose metabolism after birth. It is a manifestation of the imbalance between energy production and energy utilization. While the incidence of hypoglycemia is unknown, it is potentially more common in resource-poor countries, and it is known to occur at higher rates in susceptible populations such as the ill or preterm neonate. The neonatal brain is the primary consumer of serum glucose and is at risk during periods of energy depletion. As such, prolonged, recurrent, or severe hypoglycemia may result in neurologic sequelae.
Identification of at-risk infants, close monitoring and prevention strategies, and prompt recognition and effective treatment can all be achieved in the developing world.
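Equation I itself is not reproduced in this excerpt; the maintenance GIR it refers to is presumably computed with the standard bedside formula:

\[
\mathrm{GIR}\ (\mathrm{mg/kg/min}) \;=\; \frac{\text{infusion rate (mL/hr)} \times \text{dextrose concentration (\%)}}{6 \times \text{weight (kg)}}
\]

For example, a 3 kg infant receiving D10W at 9 mL/hr has a GIR of \(9 \times 10 / (6 \times 3) = 5\) mg/kg/min, within the 5-7 mg/kg/min maintenance target given above. The dilution of hyperosmolar solutions to D10 follows from \(C_1 V_1 = C_2 V_2\): one part D50 mixed with four parts sterile water yields D10.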
Environmental Determinants of a Country's Food Security in Short-Term and Long-Term Perspectives

About 10% of the world population suffered from hunger in 2018. Accordingly, the main objective of this research is the identification of environmental drivers and inhibitors of a country's food security in the short and long run. The Food Security Index (FSI) was constructed from 19 indicators using Principal Component Analysis. Identification of the short- and long-run relationships between the FSI and environmental factors was realized with the pooled mean-group estimator for 28 post-socialistic countries for 2000-2016. Empirical research results showed that a country's food security in the short run is negatively affected by greenhouse gas emissions but boosted by the increase of renewable energy production. Reduction of carbon dioxide emissions, electrification of rural populations, access to clean fuels, renewable energy production, and arable land and forest area growth might be essential tasks in order to ensure countries' food security in the long run.

Introduction

Global economic development at the end of the twentieth century led to a boost in industrial and technological development. However, these processes also triggered numerous destructive trends, especially for the environment. In turn, the scale of the environmental problems required the cooperation of the global community to solve them, so Agenda 21 and, later, the Millennium Development Goals were developed in order to coordinate the efforts of different countries on the way to the elimination of global damages and the implementation of sustainable development. At the Millennium Summit in 2000, eight Millennium Development Goals were formulated, aimed at poverty, hunger and child mortality reduction, the decrease of different diseases, expansion of education, elimination of gender inequality, fostering of local community cooperation, and promotion of sustainable environmental development. All of these goals have quantitative measures that needed to be achieved in order to fulfill them. Global community cooperation during the last decades allowed the partial fulfilment of these goals. Nevertheless, considering these achievements and newly emerged challenges, the 17 Sustainable Development Goals for 2030 were introduced at the United Nations General Assembly in 2015 [1]. It is worth noting that most of the Sustainable Development Goals focus on food security or environmental issues, which underscores their urgency and importance at both national (local) and supranational levels. Elimination of hunger and different forms of malnutrition in order to overcome food insecurity continues to be an urgent global task because of insufficient economic growth dynamics in different countries, climate change, the existence of war conflicts and political instability zones, etc. Namely, according to the Food and Agriculture Organization of the United Nations (FAO) report [2], in 2000 there were 792 million people in 98 countries who faced food insecurity problems, while in 2018 [3] more than 820 million people were still suffering from hunger. Such a situation proves the extreme urgency of the need for the global community's cooperation in order to fulfill the Zero Hunger goal by 2030. Moreover, it is also essential to continue scientific research aimed at clarifying the factors strengthening or worsening country food security. That might help to develop a more well-thought-out and scientifically grounded economic policy at both national and supranational levels.
Particularly, according to the FAO [4], "food security exists when all people, at all times, have physical and economic access to sufficient, safe, and nutritious food that meets their dietary needs and food preferences for an active and healthy life". Moreover, in terms of the FAO approach, food security has four dimensions, namely food availability, food access, food stability, and food utilization. Food availability concerns the physical existence of foodstuffs of appropriate quality that might be supplied to the population. Food access characterizes the possibility of obtaining food considering legal, political, economic, and social conditions. Food utilization illustrates the rationality and effectiveness of consumption, sanitation, and water access conditions. Food stability is about ensuring foodstuff provision over any time range, even under adverse economic conditions or when other risks materialize [4]. As is evident from the essence of these food security perspectives, some of them depend mostly on economic conditions, but the majority of pillars rely on environmental preconditions. Consequently, environmental determinants play a crucial role in foodstuff production, distribution, and the quality of its consumption. However, the functioning of food-producing enterprises is quite often (especially in developing countries) accompanied by numerous adverse environmental effects (air pollution, soil degradation, elimination of certain species of flora and fauna, reduction of forest area, increase of greenhouse gas emissions, etc.). Conversely, mounting environmental problems would likely lead to an increase in food insecurity and disruption of the sustainability of the national economy. It should be noted that there is plenty of research that specifies the influence of social, economic, and environmental factors on a country's food security as a whole and on its perspectives separately, but the findings sometimes contradict each other. In addition, different groups of scientists focused on various environmental aspects and food security pillars, so it might be hard to see the situation comprehensively. Therefore, from both theoretical and empirical points of view, it is crucial to identify the impact of environmental (ecological) factors on a country's food security in the short-run and long-run perspectives using up-to-date data and scientific approaches.
Specifically, this research aimed at clarifying several important issues:

1) Identification of the relevant environmental factors that influence a country's food security (it is generally expected that some environmental problems might damage foodstuff production, distribution, and consumption value, but the existence of contradictory empirical research results about such an impact reveals the necessity of further theoretical and empirical findings in this direction);

2) Comprehensiveness: as a rule, empirical research is narrow and focused on the clarification of the influence of some certain environmental determinant on a country's food security or its pillars; however, we try to consider the vast majority of potential environmental factors mentioned in previous empirical research; this approach might be useful from the regulatory perspective because it could help to identify the priorities of environmental, economic, and social government policy (to some extent);

3) Clarification of the short- and long-run impacts of environmental factors on a country's food security (basically, most empirical research is based on classical regression analysis and aimed at confirmation or rejection of some empirical hypothesis, but it should be taken into consideration that environmental factors likely have no immediate influence on a country's food security; thus, it is by far more valuable to clarify this impact over different time perspectives).

Moreover, the food security concept originated in 1974 during the World Food Conference but gained its modern features in 1996 at the World Food Summit [4]. Despite this conceptual clarification from the mid-1990s, the possibility of tracking countries' progress in terms of food security appeared only in 2012 with the launch of the Global Food Security Index. Thus, there is no considerable amount of similar research aimed at testing the influence of environmental factors on a proxy of countries' food security, especially over different time perspectives. Consequently, this research might have significant theoretical and empirical value both in terms of the development of countries' environmental and food security policies and the tracking of changes in the environmental determinants of countries' food security.

Literature Review

In order to fulfill the task of comprehensiveness of the research, it is necessary to generalize the potential environmental determinants influencing a country's food security (alternatively, foodstuff production, agribusiness performance, etc.) that were previously mentioned by scientists. Basically, some theoretical and empirical findings confirm the hypothesis about social, economic, or environmental factors' impacts on countries' food security as a whole or its particular perspectives. It should be noted that there is a set of scientific research that, in general terms, supports the hypothesis about the influence of environmental factors on a country's food security or its proxies. Namely, Musová, Musa, and Ludhova [5], Dwikuncoro and Ratajczak [6], and Vasa [7] researched factors influencing food purchasing (food utilization) in the Slovak Republic, Poland, and Hungary. They found out that consumer behavior is mostly driven by economic factors (quality and prices of products, household income). However, environmental factors also matter: 69% of respondents mentioned that they prefer environmentally friendly goods.
Moreover, Jakubowska and Radzymińska [8] found out that the Czech students who participated in the research declared environmental motives as dominant in their consumer choices. Dabija, Bejan, and Dinu [9] also identified that consumers of Generation Z prefer green suppliers. In turn, Gadeikienė, Dovalienė, Grase, and Banytė [10], Arslan [11], Olasiuk and Bhardwaj [12], and Ahmad [13] reveal that environmental preconditions and comprehensive nutrition knowledge play an important role in ensuring sustainable consumption. Thus, this group of scientists supports the idea that environmental image and responsibility are impactful for food consumption (the food utilization proxy of a country's food security). In terms of discussing the impact of environmental determinants on the performance of food producers and foodstuff trade, i.e., food availability and partially food access, Morkūnas, Volkov, and Pazienza [14], Morkūnas et al. [15], and Tomchuk et al. [16] mentioned that economic and environmental factors have an impact on the resilience of agricultural enterprises. Similarly, Handayani, Wahyudi, and Suharnomo [17], Mikhaylova et al. [18], Akhtar [19], Kheyfets and Chernova [20], Stjepanović, Tomić, and Škare [21], Cismas et al. [22], Jayasundera [23], and Harold [24] proved that green innovations positively influence business performance, sustainability of agriculture, and food security. Haninun, Lindrianasari, and Denziana [25] mentioned that environmental performance has an effect on financial performance. Ortikov, Smutka, and Benešová [26] reveal that increases in innovativeness and eco-friendliness might be among the essential preconditions of an increase in the competitiveness of Uzbekistan's agrarian foreign trade. Furthermore, Shuquan [27] empirically proved the existence of a relationship between international trade and countries' environmental performance (the case of China). In turn, Smutka, Maitah, and Svatoš [28], Falkowski [29], and Kadochnikov and Fedyunina [30] pointed out that, in the case of Russian foodstuff imports, it is not environmental but economic and political factors that matter. However, in the case of Russia's exports to EU countries, political and environmental determinants play a more significant role. This block of research supports the idea that eco-friendliness and environmental responsibility not only influence consumers' motives but also drive agricultural enterprises. Nevertheless, these studies also allow us to conclude that environmental factors play a primary role in foodstuff trade in developed countries but a secondary role in developing countries. The previous parts of the literature review supported the hypothesis that environmental (ecological) factors, in general terms, do influence a country's food security and its perspectives. Moreover, they reveal that environmental responsibility is triggered by regulatory and institutional preconditions and is an essential determinant of consumer choice and agricultural business performance. Thus, it creates a background for a more in-depth analysis regarding the identification of specific environmental factors that have impacts on a country's sustainable development and food security. In this perspective, it should be mentioned that Vasylyeva and Pryymenko [46], Mekhum [47], Lu et al. [48], Androniceanu and Popescu [49], Lyeonov et al. [50], Abdimomynova et al. [51], and Mentel et al. [52] identify renewable energy production and consumption as among the key environmental determinants.
Additionally, Aitkazina et al. [53] pointed out that an increase in greenhouse gas emissions by agrarian enterprises and the expansion of the use of chemical fertilizers create threats for sustainable development and, consequently, a country's food security. Similarly, Sibanda and Ndlela [54], Dkhili and Dhiab [55], Mačaitytė and Virbašiūtė [56], and Odermatt [57] also argue that an increase in carbon emissions negatively influences company performance, countries' food security, and sustainability. In turn, Vasylieva [58] mentioned that a country's food security depends on yields, rational land use, development of innovations, and infrastructure. However, Aliyas, Ismail, and Alhadeedy [59] supposed that a country's food security and agricultural sustainability are based on environmental friendliness, a decrease in chemical fertilizers, and effective ecological state policy. Consequently, a comprehensive analysis of the theoretical and empirical research results aimed at clarifying factors affecting countries' food security leads to the conclusion that economic factors are still among the key determinants of foodstuff consumption (it mostly depends on the prices of goods and household income) and agribusiness performance (as a key sphere of food production and distribution). At the same time, there is a considerable block of research proving that the influence of ethical, institutional, and specific environmental factors on a country's food security has become more significant. In turn, among the major environmental determinants affecting a country's food security, scientists mention water and soil usage, energy issues (expansion of renewable and traditional energy production and consumption), greenhouse gas emissions, fertilizer usage, etc. Nevertheless, while the influence of these factors on a country's food security has been revealed, scientists have no unified position about the scale and character of such an impact, so it might be valuable, from both theoretical and practical perspectives, to identify which factors are more influential in the long run and which in the short run.

Materials and Methods

Previous studies [60] were mainly related to primary empirical research. Specifically, they allowed the identification of the potential blocks of environmental determinants affecting a country's food security, such as: 1) measures concerning natural resource availability and usage; 2) energy production and consumption items; 3) fertilizer usage; 4) greenhouse gas emissions by agricultural enterprises; 5) parameters of agribusiness yield. In turn, as a result of this literature review, a set of 37 environmental determinants was collected from the World Bank DataBank [61] and the United Nations Environment Programme Data Explorer [62]. Correlation analysis helped to select the most influential factors and eliminate multicollinearity problems (sketched below). This allowed 14 of the 37 environmental factors to be chosen. Additionally, two of these 14 variables were eliminated because they had negative influences on regression model quality parameters. Therefore, previous research [60] helped to clarify a set of environmental factors that do have an impact on a country's food security. The realization of this research task implied several stages: 1) construction of the comprehensive food security indicator; 2) identification of certain ecological factors influencing food security in the short and long run.
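The correlation-based pre-selection mentioned above can be sketched as follows. The data and the 0.8 cutoff are assumptions for illustration only; the paper does not report its exact threshold or decision rule.

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for the real panel: rows are country-year
# observations, columns are the 37 candidate environmental indicators drawn
# from the World Bank DataBank and the UNEP Data Explorer.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.random((476, 37)),
                  columns=[f"X{i}" for i in range(1, 38)])

# Drop one variable from every highly correlated pair to limit multicollinearity.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.8).any()]
selected = df.drop(columns=to_drop)
```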
In general terms, the research was based on data collected from public sources (the World Bank DataBank [61], the United Nations Environment Programme Data Explorer [62], and the Food and Agriculture Organization of the United Nations database (FAOSTAT) [63]) for 28 post-socialistic countries (Albania, Armenia, Azerbaijan, Belarus, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Estonia, Georgia, Hungary, Kazakhstan, Kyrgyz Republic, Latvia, Lithuania, Macedonia, Moldova, Montenegro, Poland, Romania, Russia, Serbia, Slovak Republic, Slovenia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan) from 2000 to 2016. As for the first stage, it might be noted that The Economist, in cooperation with the FAO, has developed the Global Food Security Index, which consists of 28 measurement indicators of affordability, availability, quality, and safety of food. Nevertheless, this index has been calculated only since 2012, which is too short a period for gaining reliable modeling results. That is why the Food Security Index (FSI) was constructed. The FSI consists of 19 indicators of food availability, food access, food stability, and food utilization. The FAO officially identifies these parameters as measures of food security. The descriptions of the indicators used for the FSI's construction are in Table 1.

Table 1. Indicators of food security measurement, by perspective.

Food availability: Average dietary energy supply adequacy, % (ADESA); Average value of food production, USD per capita (FoodProd); Share of dietary energy supply derived from cereals, roots, and tubers, % (CRT); Average protein supply, gr/capita/day (Protein); Average supply of proteins of animal origin, gr/capita/day (AnProt).

Food access: Rail line density, total route in km per 100 square km of land area (Railway); GDP per capita, USD (GDPpc); Prevalence of undernourishment, % (Under); Depth of the food deficit, kcal/capita/day (FoodDef).

Food utilization: Percentage of population with access to improved drinking water sources, % (ImWater); Percentage of population with access to sanitation facilities, % (Sanit); Prevalence of obesity in the adult population (18 years and older), % (Obesity); Prevalence of anemia among women of reproductive age (15-49 years), % (Anemia).

The FAO does not specify a certain algorithm for the aggregation of the food availability, food access, food stability, and food utilization indicators. Therefore, Principal Component Analysis (PCA) in Stata software was used for this particular task. Namely, the loadings of the first principal component were used as weight coefficients for the FSI's construction. It is worth noting that we use the PCA method rather than the Analytic Hierarchy Process (AHP) because making pairwise judgments to prioritize measures of food security on a scale of 1 to 9 is a rather complicated task. Thus, we decided to apply not a subjective but a more objective method (PCA), aimed at the clarification of data trends and the identification of weight coefficients based on them [64]. In addition, before applying the PCA, all of the above-mentioned indicators were normalized considering their stimulating or inhibiting influence on the state of countries' food security. The normalization process arranges them from 0 to 1. In turn, the second stage of the research is focused on the identification of environmental determinants influencing a country's food security in short- and long-run perspectives.
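To make the first-stage FSI construction concrete, a minimal sketch follows, with hypothetical data in place of the real indicators; the stimulant/inhibitor split is assumed for illustration (the paper classifies each indicator individually, and the sign of the first component may need flipping so that a higher FSI means greater security).

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical panel: 28 countries x 17 years = 476 country-year rows,
# columns standing in for the 19 raw indicators listed in Table 1.
rng = np.random.default_rng(0)
raw = rng.random((476, 19))

def normalize(col, stimulant=True):
    """Min-max normalization to [0, 1]; inhibitors (e.g., prevalence of
    undernourishment) are reversed so higher always means more secure."""
    scaled = (col - col.min()) / (col.max() - col.min())
    return scaled if stimulant else 1.0 - scaled

# Assumed split: first 15 indicators stimulants, the rest inhibitors.
flags = [True] * 15 + [False] * 4
X = np.column_stack([normalize(raw[:, j], s) for j, s in enumerate(flags)])

# First principal component's loadings serve as the weight coefficients.
weights = PCA(n_components=1).fit(X).components_[0]

# Composite Food Security Index: weighted sum of the normalized indicators.
fsi = X @ weights
```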
As the research sample includes a rather large number of observations in terms of periods, countries, and independent variables (a panel data sample), a pooled mean-group (PMG) estimator, developed by Pesaran, Shin, and Smith [65], was used. Traditionally, in research based on panel data with a large number of cross-sections but a small number of time observations, fixed effects, random effects estimators, or the generalized method of moments are applied. However, an increase in the number of time observations might result in non-stationarity. As this research covers a rather large number of both cross-sectional and time observations, it is better to apply the PMG estimator. Moreover, this research method allows us to manage the problem of non-stationarity and better fits heterogeneous panels. In addition, the PMG estimator combines pooling and averaging approaches (it allows short-run coefficients to differ across countries, while long-run coefficients are constrained to be equal for the whole panel). Thus, it mixes technical aspects of the mean group estimator and the fixed effects estimator [66]. The PMG estimator allows testing of the hypothesis about the existence of long-term and short-term influences on food security (specifically, the FSI) of the following environmental indicators:

X1 - access to clean fuels and technologies for cooking (% of population);
X2 - access to electricity in rural areas (% of rural population);
X3 - agricultural methane emissions (% of total);
X4 - agricultural nitrous oxide emissions (% of total);
X5 - arable land (% of land area);
X6 - cereal yield (kg per hectare);
X7 - CO2 emissions (metric tons per capita);
X8 - electric power transmission and distribution losses (% of output);
X9 - electricity production from renewable sources, excluding hydroelectric (% of total);
X10 - fertilizer consumption (kilograms per hectare of arable land);
X11 - forest area (% of land area);
X12 - renewable electricity output (% of total electricity output).

The summative statistics for the set of dependent and independent variables are in Table 2 (its notes repeat the variable definitions above; Obs denotes the number of observations and Std. Dev. the standard deviation). Based on the results presented in Table 2, it should be noted that the number of observations differs for some variables. Nevertheless, the panel is strongly balanced, which allows us to get reliable and significant empirical research results.

Results

Taking into account the weight coefficients (Table 3), the FSI was constructed with the PCA approach. It is also worth noting that the calculated FSI is quite representative.
Its comparison with the Global Food Security Index for the 13 countries that are matched in both samples (including Belarus, Kazakhstan, Poland, Hungary, Russia, Serbia, Slovakia, Tajikistan, Ukraine, and Uzbekistan) for the years 2012-2016 revealed a correlation of 90.20%. Consequently, the FSI characterizes the same trends as those displayed by the Global Food Security Index. Analysis of the FSI level in 2016 shows that the highest level of food security is in the Czech Republic (2.25 out of a maximum of 2.39), and the lowest is in Tajikistan (0.16). It is also worth noting that such countries as Albania, Armenia, Azerbaijan, Bosnia and Herzegovina, Georgia, Kyrgyz Republic, Macedonia, Moldova, Serbia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan have less-than-average levels of national food security. The rest of the countries have higher-than-average levels of national food security. In terms of the dynamics of the FSI level, it might be highlighted that Azerbaijan (566.19%), Tajikistan (520.97%), Uzbekistan (182.79%), Armenia (178.68%), Turkmenistan (97.80%), Georgia (83.93%), and Albania (74.63%) have the best growth dynamics in comparison with 2001, while for the other countries the growth rate fluctuates in almost the same range (about 31.66%). The next step is the identification of the relationship between the relevant environmental determinants and the FSI. It is based on panel data regression analysis (the PMG estimator). Practically, it was implemented with the help of the "xtpmg" add-on of the Stata software. The results of the regression analysis are given in Table 4 (variable definitions as above; * denotes significance at the 10% level, ** at the 5% level, and *** at the 1% level).
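For reference, the error-correction specification underlying the Table 4 estimates is the standard Pesaran-Shin-Smith form (lag orders here are generic, as the paper does not report them):

\[
\Delta y_{it} = \phi_i \left( y_{i,t-1} - \theta' x_{it} \right) + \sum_{j=1}^{p-1} \lambda_{ij}\, \Delta y_{i,t-j} + \sum_{j=0}^{q-1} \delta_{ij}'\, \Delta x_{i,t-j} + \mu_i + \varepsilon_{it},
\]

where \(y_{it}\) is the FSI, \(x_{it}\) collects the environmental regressors X1-X12, the long-run coefficients \(\theta\) are homogeneous across countries, and the error-correction speeds \(\phi_i\) and short-run coefficients are country-specific.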
Additionally, agro-industrial enterprises provide only 10%-12% of the total emissions, while transport, industrial, construction, and energy enterprises have a greater impact on the ecosystem. The reduction in the net carbon dioxide emissions of the agro-industrial sector was largely explained by the decline of deforestation and the increase in forest plantations. However, the increase of carbon dioxide emissions per capita from all sources of pollution (X7) remains a strong factor of the negative impact on countries' food security in the long-run perspective. Namely, an increase of this independent variable by a point results in a decrease of country food security level by 0.0886 (or 3.71% of the maximum possible FSI value). In turn, some factors have a positive impact on the countries' food security, such as: -Access to clean fuels and technologies for cooking (% of population) (X1)-an increase of the environmental factor by a point results in strengthening of a country's food security by 0.0105 (or 0.43% of the maximum possible FSI value); -Access to electricity in rural areas (% of rural population) (X2)-an increase of the environmental factor by a point results in strengthening of a country's food security by 0.0811 (or 3.39% of the maximum possible FSI value), which means that further electrification of rural areas using environmentally friendly technologies should be a priority direction of public policy. This statement is also confirmed by a positive and statistically significant impact of expanding renewable electricity output (% of total electricity output) (X12) on the country's food security in the long-run perspective. Namely, its increase by a point leads to strengthening of a country's food security by 0.0154 (or 0.64% of the maximum possible FSI value). Experts note [67][68][69] that the expansion of land for growing biofuel plants might have some negative consequences. It leads to the elimination of the land from the process of food production and may harm a country's food security. Consequently, this damage might not be offset by the positive environmental impact of using biofuels instead of traditional fuels. In addition, the statistical significance of the long-term effects of arable land growth (X5) and forest area growth (X11) was confirmed at the 10% level. Particularly, an increase by a point of one of these particular environmental factors (X5 and X11) results in an increase in a country's food security by 0.0285 and 0.0948, respectively. Such a trend is quite natural, since the expansion of arable land will increase the volume of food products. However, such a scenario can have negative consequences and requires a well-thought-out and scientifically grounded approach. In particular, an intensive approach to the agricultural sector's development is preferable. It helps to ensure an increase of agricultural production without large-scale use of additional land resources. It is also equally important to use the most environmentally friendly tools for increasing agribusiness productivity and yields. While there is no widespread expansion of an intensive model of agricultural management, extensive technologies still do not lose their relevance. This is also confirmed by the statistically significant impact of the indicator "fertilizer consumption (kilograms per hectare of arable land)" (X10) on a country's food security (at the 5% level). Its increase by a point results in the FSI increase by 0.0020 (0.08% of maximum FSI value). 
It is worth noting that most of the short-run coefficients are not statistically significant. However, the variables "agricultural methane emissions (% of total)" (X3) and "CO2 emissions (metric tons per capita)" (X7) have a statistically significant negative impact on the Food Security Index at the 1% and 5% levels, respectively. In addition, the positive impact of growth in electricity production from renewable sources is confirmed (both excluding hydroelectric power, variable X9, and including it, variable X12). However, in most cases, particular environmental factors are statistically significant only in the short or the long run. Consequently, we cannot compare statistically significant results with insignificant ones. Hence, we mainly focused on the analysis and practical implications of the statistically significant research results only. Nonetheless, it is worth noting that the increase in renewable electricity output (% of total electricity output) (X12) has a positive long-term but negative short-term influence on a country's food security. These findings might be partially explained by the specificity of the sample of countries. Namely, most of the 28 post-socialistic countries have pursued more intensive economic, environmental, and technological development only for the last three decades. That is the main reason for the absence of a highly productive network of renewable energy stations. Consequently, the expansion of renewable electricity output leads to an immediate negative impact on a country's food security because of the partial removal of land and water resources from foodstuff production and the worsening of its quality. Conversely, in the long run, renewable energy outcompetes traditional energy production, which is more harmful to the environment and countries' food security. Similar trends were also mentioned in the FAO report "Impacts of Bioenergy on Food Security" [70]. In turn, the increase of CO2 emissions negatively influences a country's food security both in the short and long run. However, the scale and significance of this factor's effect become more pronounced in the long-term perspective.

Discussion

Aggregation of these empirical research results aimed at the identification of the influence of environmental (ecological) factors on a country's food security in short- and long-run perspectives allows the confirmation of trends and relationships identified by other scientists. Specifically, Sola et al. [71] analyzed 132 articles about the influence of access to clean fuels and technologies for cooking on food security measures. The researchers mentioned that, in general, most scientists argued that this factor has a positive impact on food security and nutrition; however, quantitative empirical evidence of it is scarce. Our research results allow us to quantitatively clarify such an impact: an increase of the factor by a point results in the strengthening of a country's food security by 0.0105 in the long run. Moreover, the FAO [72] also actively supports the idea that access to clean fuels leads to better nutrition and less environmental damage. In addition, our empirical results about the impact of access to electricity in rural areas on food security also correlate with the FAO's findings. Namely, in publication [72], it is mentioned that access to electricity is crucial for a country's food security because electricity is necessary at each stage of foodstuff production.
Moreover, access to electricity in rural areas might become a driver of agricultural productivity, efficiency, and food security. In turn, Wambua, Omoke, and Telesia [73] found empirical evidence that a lack of arable land and other similar resources is a precondition of food insecurity in Kenya. Mbuthia, Kioli, and Wanjala [74] highlighted the importance of other resource factors. Namely, they revealed that the prohibition of cutting trees (forest areas) has a positive influence on household food security. Thereby, our research results form empirical evidence of relationships that were previously identified at a theoretical level. Moreover, Wambua, Omoke, and Telesia [73] also revealed that using animal manure or industrial fertilizers allows an increase in agricultural crops. Hence, the authors pointed out that households using fertilizers for agricultural purposes did not face the problem of food insecurity even in periods of unfavorable weather and climate conditions (based on 66 households' self-assessment). In our research, the hypothesis about the long-run positive impact of fertilizer consumption on a country's food security measures was also confirmed. The research revealed that CO2 emissions have a negative influence on a country's food security, as was also highlighted in other research by Sibanda and Ndlela [54], Dkhili and Dhiab [55], Mačaitytė and Virbašiūtė [56], and Odermatt [57]. Finally, the empirical findings about the positive influence of renewable energy output on a country's food security were also supported by other scientists' and international organizations' reports, such as that of the International Renewable Energy Agency (IRENA) [75]. Namely, it is noted in the report that the increase of renewable energy has crucial importance for several reasons:

- Electricity itself plays an important role in households' everyday life and agricultural business activity, as it is necessary for foodstuff production, storage, and distribution processes;
- Renewable energy and electricity allow a decrease in the consumption of fossil fuels, both for private and business purposes;
- Substitution of traditional electricity production with renewable electricity production might help to solve some environmental problems, especially in terms of the reduction of greenhouse gas emissions;
- The prevalence of renewable energy over traditional energy sources better fits the Sustainable Development Goals, especially Goal #7: "Ensure access to affordable, reliable, sustainable and modern energy for all".

In terms of the practical implications of the empirical research results, they might become a background for the development of states' economic, social, and environmental policies in order to ensure countries' food security. Moreover, they also might be useful for the identification of the strategic and operational priorities of public policy. In terms of further research perspectives, it might be noted that certain environmental determinants may be relevant to the general level of food security but may not have a statistically significant effect on its components. Therefore, it is also important to identify specific environmental stimulants and inhibitors in terms of ensuring food availability, food access, food stability, and food utilization.
Conclusions

Thus, it can be concluded that this empirical research, aimed at the identification of factors affecting countries' food security in short- and long-run perspectives, allows us to confirm previous empirical research results and theoretical findings (especially about the influence of CO2 emissions; the sufficiency of arable lands, forest areas, and other natural resources; access to electricity; and the use of fertilizers). On the other hand, the revealed results allowed us to obtain empirical evidence and quantitatively clarify the kinds of relationships that were previously identified mostly at a theoretical level (about the influence of access to clean fuels and technologies for cooking). Therefore, taking into account the results obtained regarding the impact of environmental determinants on countries' food security in short- and long-run perspectives for the 28 former socialist countries, the following can be noted:

- The main operational target in terms of ensuring a country's food security might be an intensification of efforts to reduce greenhouse gas emissions (both methane and carbon dioxide), as well as a reorientation towards the production and consumption of electricity from renewable sources rather than traditional ones, which are more destructive to the ecosystem (in countries where the use of alternative energy sources is limited, a possible solution may be reducing the number of cogeneration and nuclear power plants in favor of hydroelectric power plants);

- Among the key vectors of mitigating the long-run risks of deterioration of a country's food security, the following can be mentioned: intensification of efforts to reduce carbon dioxide emissions not only in the agricultural sector but also in the industrial sector; continuation of rural electrification and the provision of environmentally friendly fuels and electricity sources to the population, with a reorientation from traditional sources of energy production towards alternative ones; growth of arable land (or more effective usage of the existing land) and increasing forest areas, while moving to intensive rather than extensive agricultural management (using fewer resources in order to ensure bigger yields).

Consideration of these proposals might become a basis for the development of state policies in the field of ensuring national food security. Despite the fact that the obtained empirical results correlate with previous empirical findings and that, on their basis, some practical recommendations were developed that might be used by governmental authorities in ensuring country food security, there are some limitations of this research:

1) The sample consists of only 28 post-socialist countries, so expansion of the sample might help to get more comprehensive, complex, and reliable results;

2) Beyond the expansion of the country sample, it might be valuable to carry out cluster analysis and specify recommendations for certain clusters;

3) As the Global Food Security Index, which is considered a unified proxy of countries' food security, covers only the period starting from 2012, it is too short for reliable empirical results; thus, despite constructing our own index, a better option may be the use of the methodology of the Global Food Security Index in order to get more reliable assessments.
Moreover, this research was aimed at the identification of the specific environmental determinants that influence a country's food security in short- or long-run perspectives; however, in order to develop efficient public policy for ensuring country food security, the lags with which environmental determinants exert their postponed impact on the FSI might be specified.

Conflicts of Interest: The authors declare no conflict of interest.
Insights of the pathophysiology of neurodegenerative diseases and the role of phytochemical compounds in its management

A neurodegenerative disease (ND) is defined as an irreversible disorder in most cases, leading to progressive loss of neurons and intellectual abilities. ND can be fatal in most circumstances, and the elderly above the age of sixty-five (65) constitute the major risk category. The most common types of ND include Alzheimer's disease (AD) and Parkinson's disease (PD). Other NDs are Huntington's disease (HD), motor neuron disease (MND), spinocerebellar ataxia (SCA), spinal muscular atrophy (SMA), and prion disease. ND strikes mainly in middle to late life, with incidence expected to rise as the population ages. The hallmarks of ND are protein aggregation, mitochondrial dysfunction, neuronal loss via apoptosis or necrosis, lysosomal dysfunction, excitotoxicity and metabolic syndrome. Increasing evidence demonstrates that metabolic syndrome is interrelated with many NDs because it affects middle-aged or older adults (Yalcin & Yalcin, 2018). During neurodegeneration, an innate immune response acts as the first line of defence to protect the host against invading pathogens (Medzhitov, 2008). Immune resident cells in the brain, such as microglia, have a vital homeostatic function, including the phagocytosis of fragmented and dying cells (Salter & Stevens, 2017). However, activated microglia produce large amounts of free radicals and contribute significantly to inflammation in AD (Wolozin & Behl, 2000). This inflammatory activity is beneficial for a short period. However, during ageing and in chronic neurodegenerative disease, both the innate and peripheral immune systems are defective and fail to detect or respond to imbalances in homeostasis due to the accumulation of protein aggregates. Subsequently, prolonged neuroinflammation with higher levels of proinflammatory cytokines leads to harmful consequences in the CNS. It is critical to note that the protein abnormalities that define ND can present before the onset of clinical features (Dugger & Dickson, 2017). In ND, abnormal protein conformations and their cellular and neuroanatomical distribution constitute the major histopathologic features essential for disease state diagnosis (Kovacs, 2016). These proteins are considered ND biomarkers. For instance, AD is characterised by the extracellular deposition of Aβ fibrils, abnormally phosphorylated tau protein accumulation, neuritic senile plaques, and neurofibrillary tangles (Marks et al., 2017). In addition, prion diseases involve the cellular form of prion protein (PrPc) and the scrapie isoform of prion protein (PrPSc) (Mehrpour & Codogno, 2012), whereas α-synuclein protein (Dauer & Przedborski, 2003) and Lewy bodies (Licker et al., 2009) have key roles in the neuropathology of PD. In HD, unstable huntingtin (Htt) protein aggregates accumulate in neurones, leading to cell death (Roos, 2010). MND involves glutamate excitotoxicity of neurones (Relja, 2004), whereas loss of motor neurons in the spinal cord is observed in SMA (Coovert et al., 1997). In SCA, proteins with long polyglutamine tracts (PolyQ) aggregate, forming inclusions in the cytoplasm or nucleus of vulnerable neurones and contributing to the progression of the pathology, such as neuronal dysfunction and subsequent neurodegeneration (Pilotto & Saxena, 2018). Many models and ideas have been used to identify the exact pathophysiology and mechanisms of ND.
The most frequently discussed risk amongst the general population with ND is ageing. Human brain ageing can be investigated using aged non-human primates and some other higher-order animal species. However, it is challenging to observe in these models the complete neuropathological or clinical phenotypes seen in humans. Hence, cell models, animal models, and genetically engineered non-mammals (Caenorhabditis elegans, Drosophila melanogaster and zebrafish) are employed to recapitulate the specific disease mechanisms involved in ND, including the screening of therapeutic compounds. Genetically engineered mice have been the most popular and widely used animal model to study ND (Trancikova et al., 2011). The commonly used study models for AD research include transgenic animal models targeting the amyloid-beta precursor protein (APP), presenilin, tau or the human apolipoprotein E (APOE) gene. For PD research, classic neurotoxin-induced animals are usually employed. Several compounds known to be toxic to dopaminergic neurones can be used to produce parkinsonism, such as 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), paraquat, rotenone and 6-hydroxydopamine. Animal models of HD include toxin-induced models (the mitochondrial toxin 3-nitropropionic acid or excitotoxins such as kainate, ibotenate, and quinolinate), genetic models such as transgenic mice (R6/2, R6/1, N171-82Q, and yeast artificial chromosome mice expressing the complete human Htt protein) and knock-in mouse models (HdhQ92 mouse, HdhQ111 mouse, CAG140 mouse, CAG150 mouse) (Ramaswamy et al., 2007). Unfortunately, several clinical trials targeting ND have raised doubts about the translatability of animal disease models to humans. To bridge the gap between animal and human studies, three-dimensional (3D) cell culture models have been developed from human or animal cells. Traditionally, two-dimensional (2D) cell culture was used in vitro, but its efficiency is questionable because the environment is far from mimicking the in vivo state. Hence, 3D cell culture creates an artificial environment that allows biological cells to grow and interact so as to mimic a living organ and its microarchitecture. The 3D model has several advantages: (1) it allows better control of variables that are difficult to regulate in vivo; (2) it offers reproducible cellular and molecular mechanisms; (3) it allows human-based models to be grown using human cells for drug testing, disease modelling and diagnosis; (4) it can overcome graft limitations; (5) it allows faster and more affordable translational studies involving the identification of the mechanism of action together with any associated risks (Bédard et al., 2020; Slanzi et al., 2020). Pathological changes of neurons and loss of synaptic protein are the key features in many NDs, including dementia. The latter seems to be directly linked to cognitive deficits from the early stages of dementia and precedes neuronal degeneration (Bereczki et al., 2018; Kashyap et al., 2019; Sharifi-Rad et al., 2020a). As shown in Figure 1, synaptic loss is driven by activated microglia, which engulf and excessively prune the synapses. In addition, activated microglia also release pro-inflammatory cytokines, which can have direct excitotoxic effects on the synapses (Hong et al., 2016; Wang et al., 2015).
Current therapeutic interventions focus on the treatment of neuronal loss or synaptopathy, targeting the theoretically distinct processes of maintenance, compensation, and recovery of synaptic function, which can significantly impact cognitive function (Sheng et al., 2012). Most of the new drug discoveries attempting to improve the efficiency of remaining synapses in the brain were based on pharmacological classes such as anti-cholinesterases (Anand & Singh, 2013), selective serotonin reuptake inhibitors (Chow et al., 2007) and N-methyl-D-aspartic acid (NMDA) antagonists (Prentice et al., 2015). The most common physiological features of ND are elevated oxidative/nitrosative stress, mitochondrial dysfunction, protein misfolding/aggregation, synapse loss, and decreased neuronal survival. These chronic neurodegenerative disorders lead to progressive dementia and deterioration of cognitive function. Defective innate immune reactions associated with ageing contribute to neurodegenerative diseases. Brain-resident microglia and astrocytes can cross-talk with each other, helping to form protein aggregates and damaging the neuronal network with the help of pro-inflammatory cytokines such as interleukin-1 (IL-1), interleukin-6 (IL-6), tumour necrosis factor-alpha (TNF-α) and C-reactive protein. Alternatively, synaptic pruning and microglia can also drive neuronal degradation independently of protein aggregation (Gan et al., 2018). Herbal active compounds listed in Table 1 can antagonise these signalling responses and protect against neuronal cell damage. Currently available drugs focus primarily on temporary symptomatic relief. Hence, there is a high demand for the discovery of novel therapies and neuroprotective agents to prevent and retard the progression of ND (Sharifi-Rad et al., 2020b). Recently, some convincing evidence has been published regarding the use of traditional herbs and phytochemicals to delay the onset and slow the progression of ND. Most traditional herbal medicines are prepared from crude materials, and there are concerns about their specific medicinal effects and reproducibility, mode of action and active ingredients (Kim et al., 2010). Furthermore, these natural phytochemicals are less toxic than novel synthetic drugs. Hence, many active compounds have been isolated and identified from medicinal plant extracts (Ansari & Khodagholi, 2013). These include lignans, flavonoids, tannins, polyphenols, triterpenes, sterols, and alkaloids, which have shown various beneficial pharmacological activities, such as anti-inflammatory, anti-amyloidogenic, anti-cholinesterase, anti-oxidant, inhibition of protein misfolding, reduction of neuroinflammation, anti-apoptotic, neurotrophic, acetylcholinesterase (AChE) inhibition, monoamine oxidase (MAO) inhibition and anti-thrombotic activities (Howes et al., 2003; Sharifi-Rad et al., 2020b). Phytochemical compounds with anti-oxidant and anti-inflammatory activities have the potential to treat ND. Good examples of these are flavonoids, which possess high anti-oxidant properties. Flavonoids have low molecular weight, and they belong to the polyphenolic antioxidants present in fruits, vegetables, and beverages such as wine and tea (Panche et al., 2016). In addition, these flavonoids can also be found in the roots and leaves of Andrographis paniculata (known as Hempedu bumi in Malay) (Subramaniam et al., 2015). Ficus deltoidea contains at least 25 different flavonoids with high anti-oxidant properties (Azemin et al., 2014; Hakiman & Maziah, 2009).
Flavonoids and tannins derived from Uncaria gambir demonstrated antioxidant properties that prevent damage caused by free radical-mediated processes (Ningsih et al., 2014). Anthocyanins have anti-inflammatory activity as they inhibit cyclooxygenase enzymes. These flavonoids inhibit the expression of vascular cell adhesion molecules (VCAM), thus inhibiting the reaction and adhesion of endothelial cells with leucocytes. These compounds are believed to decrease the levels of interferon necrotic factor-gamma, interleukin-2 and inhibition of mast cell degranulation (Joseph & Jini, 2011;Pataki et al., 2002). Ferulic acid, another phenolic acid, has a broad therapeutic effect against neurodegenerative and inflammatory diseases. It is attributed partly due to the anti-oxidant activity of this phenolic acid. The ferulic acid prevents lipid peroxidation and scavenges superoxide free ion radicals (Joshi et al., 2001). This phenolic acid reduces inflammatory mediators like tumour necrotic factoralpha, prostaglandin E2 (Appendino et al., 2006), protects proteins, DNA and lipids from oxidative stress, thus exerting anticancer properties (Delmas et al., 2006). Terpenoids present in most plants like Andrographis paniculate, Panax ginseng, Gynura procumbens, Labisia pumila, Orthosiphon stamineus, Phyllanthus niruri. Terpenoids possess both anti-oxidant and antiinflammatory activities. Ginsenosides (one of the terpenoids) from Panax ginseng has been shown to reduce Aβ levels by promoting Aβ degradation and enhancing neprilysin gene expression, a rate-limiting enzyme in Aβ degradation (Yang et al., 2009). Ginkgolides, a cyclic diterpene isolated from Gingko biloba has been extensively studied for its neuroprotective effects (Shi et al., 2009). Cannabinoids are monoterpene derived from Cannabis sativa, inhibiting AChE-induced Aβ aggregation and reduce Aβinduced toxicity (Eubanks et al., 2006). Oleanolic acid from Aralia cordata rescued neuronal death induced by Aβ in cultured rat cortical neurons and improves Aβinduced memory deficit in mice (Cho et al., 2009). Another triterpene isolated from Polygala tenuifolia known as tenuifolin reduces Aβ secretion by inhibiting β-secretase, one of the enzymes responsible for cleaving APP to Aβ (Lv et al., 2009). Ursolic acid derived from Origanum majorana exhibits a neuroprotective effect against Aβ. Ursolic acid can effectively inhibit AChE activity and Aβ binding to microglia, reducing the production of pro-inflammatory cytokines and neurotoxic reactive oxygen species (Wilkinson et al., 2011). Table 1 summarises the selected potential herbal plants with the phytochemical components that show promise for NDs treatment. Taken together, herbal medicines containing phytoactive compounds may show a great promise for the future treatment and management of NDs. Therefore, to effectively treat NDs, the natural active compounds need to be evaluated, standardised, explored and learned through pre-clinical research using various ND disease models. On the other hand, synthetic medications can temporarily relieve the symptoms and may not be the permanent solution to cure the NDs completely. Together with the help of clinicians and researchers, natural medicines can be made safer and more effective to treat ND patients. Finally, researchers need to explore biological mechanisms involved in NDs and expand the knowledge of natural compounds by conducting the fundamental research critical to combat ageing-related NDs.
2021-09-25T15:26:18.054Z
2021-08-28T00:00:00.000
{ "year": 2021, "sha1": "af16057c158b69d23b77d3f4230df4ad527fb3dd", "oa_license": "CCBYNC", "oa_url": "https://neuroscirn.org/ojs/index.php/nrnotes/article/download/77/123", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "6c611c205b8cb3e83188ec5e4c36b6e679dffcab", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
55432728
pes2o/s2orc
v3-fos-license
Pregnancy in a uterine anomaly: a case report Unicornuate uterus with rudimentary horn is an anomaly resulting from partial development of one of the Müllerian ducts and an incomplete fusion with its contralateral side. Unicornuate uterus with rudimentary horn is a rare condition affecting 1:10000 to 1:140000 pregnancies. Rudimentary horn pregnancies usually present with acute abdomen when the horn ruptures with advancing gestation. This condition could be dangerous as it may cause significant maternal morbidity and even mortality. In Malaysia, one case was reported in 2011 where the diagnosis was made intra-operatively when patient presented with acute abdomen with hypovolemic shock. Our case was clinically stable and the rudimentary horn pregnancy was only diagnosed intraoperatively. INTRODUCTION Unicornuate uterus with rudimentary horn is an anomaly resulting from partial development of one of the Müllerian ducts and an incomplete fusion with its contralateral side. Unicornuate uterus with rudimentary horn is a rare condition affecting 1:10000 to 1:140000 pregnancies. 1 Rudimentary horn pregnancies usually present with acute abdomen when the horn ruptures with advancing gestation. This condition could be dangerous as it may cause significant maternal morbidity and even mortality. In Malaysia, one case was reported in 2011 where the diagnosis was made intra-operatively when patient presented with acute abdomen with hypovolemic shock. 2 Our case was clinically stable and the rudimentary horn pregnancy was only diagnosed intraoperatively. CASE REPORT 24 year old healthy primigravida at 11 weeks pregnancy with background history of corrected Fallot's Tetralogy was referred from health clinic for molar pregnancy. Clinically she was pink and her vital signs were stable. Abdomen was soft, non tender and the uterus was not palpable. Vaginal examination revealed a single cervix with a closed cervical os. Pelvic ultrasound showed an empty uterus with thick endometrium ( Figure 1). There was a right adnexal mass measuring 5x6cm with a central double trophoblastic ring containing a yolk sac ( Figure 2). No obvious foetal echo was visualised. Myometrial-like tissue was also seen surrounding some parts of the adnexal mass. The mass however was seen away and separated from the uterus. Despite this finding, diagnosis of unruptured right cornual ectopic pregnancy was made. She underwent exploratory laparotomy under general anaesthesia. Intraoperative findings were unruptured right rudimentary horn pregnancy ( Figure 3). Left unicornuate uterus was otherwise normal. There was a fibrous band between the right horn and the left uterus but direct communication between these two structures was absent. Both ovaries and tubes were normal. Excision of the right rudimentary horn was performed and cut section showed myometrium and gestational sac ( Figure 4). Post-operative period was uneventful, and she was discharged well few days operation. Trans-peritoneal migration of the spermatozoa was quoted in many papers as the possible mechanism for the occurrence of pregnancy. This theory however cannot explain the 10% of cases whereby the corpus luteum was observed on the contra-lateral side. It is probable that in such cases, fertilization occurs in the peritoneal cavity with subsequent transmigration and transplantation of the fertilized ovum in the rudimentary horn. 
Latto and Norman in their classic paper published in 1950 on the other hand believe that the theory of trans-peritoneal migration is untenable based on few sound arguments. 4 They believe that pregnancy invariably occurs in communicating rudimentary horn but manifested eventually as non-communicating rudimentary horn pregnancy because tissues reacting to the advancing syncytium in pregnancy will cause occlusion of the communication channel. DISCUSSION Early detection of rudimentary horn pregnancy is crucial to avoid high morbidity and mortality risk. Early detection however is not easy unless there is a high index of suspicion since the sensitivity of ultrasound is only 26%. 5 Tsafrir et al suggests several criteria including: • a pseudo pattern of asymmetrical of bicornuate uterus; • absent visual continuity tissue surrounding the gestation sac/ uterine cervix; and • presence of myometrial tissue surrounding the gestation sac; to suggest pregnancy in the rudimentary horn. 6 As the pregnancy advances, the sensitivity of ultrasound becomes a lot less. Most cases are in fact diagnosed intraoperatively in the 2 nd trimester when rupture occurs. Few reported cases advanced to the 3rd trimester. 3,7 Out of the 10% of cases that reach term, and the fetal salvage rate is only 2%. 8 Rudimentary horn pregnancy was often misdiagnosed as tubal, cornual, abdominal and even intrauterine pregnancy. 9,10 Preoperatively, we diagnosed this case as cornual pregnancy since myometrial tissue was seen surrounding the gestational sac. This is despite documenting that the 'ectopic gestation' was not continuous with the empty uterus. The aetiology behind rupture of the rudimentary horn in pregnancy is due to the underdevelopment of the myometrium. The onset of rupture largely depends on the variable thickness of musculature in addition to the dysfunctional endometrium. 11 In addition, poorly developed musculature can also cause placenta percreta and the reported incidence was quoted to be around 11.9%. 8 Placenta percreta can be confirmed by a histopathology examination from as early as seven weeks. 12 Surgical excision is the definitive treatment of rudimentary horn pregnancy in order to prevent complication such as rupture, recurrence and chronic pelvic pain. 13,14 The choice of surgical approach either via laparoscopy or laparotomy depends on patient's general condition, gestational age, size and vascularity of the pregnant horn. First trimester diagnosis facilitates laparosocopic management with rapid favourable outcome while laparotomy on the other hand is usually reserved for patients with acute abdomen. 6,15 Some authors reported the successful use of systemic methotrexate administration; however this approach does not prevent recurrence. 13 Park et al described combination of both medical and surgical management of rudimentary horn pregnancy. In his paper, feticide with intracardiac potassium chloride and intraplacental methotrexate were given to reduce blood flow to facilitate subsequent interval laparoscopic excision. 16 However, in our case, although diagnosis of ectopic was made at early gestation, laparoscopy was not performed due to her underlying cardiac condition. CONCLUSION Rudimentary horn pregnancy is a rare phenomenon; however this diagnosis should be entertained when the diagnosis of ectopic pregnancy is made. The purpose of this report is to create awareness amongst clinicians of this very rare diagnosis.
2019-03-18T14:02:47.868Z
2018-08-27T00:00:00.000
{ "year": 2018, "sha1": "697a021ed167738d00882bd89c99ce89f62eeaea", "oa_license": null, "oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/5128/3857", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8219a9def2d801b8562ea7be36d7a38b959723fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257287387
pes2o/s2orc
v3-fos-license
Association of ApaI rs7975232 and BsmI rs1544410 in clinical outcomes of COVID-19 patients according to different SARS-CoV-2 variants A growing body of research has shown how important vitamin D is in the prognosis of coronavirus disease 19 (COVID-19). The vitamin D receptor is necessary for vitamin D to perform its effects, and its polymorphisms can help in this regard. Therefore, we aimed to evaluate whether the association of ApaI rs7975232 and BsmI rs1544410 polymorphisms in different severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants were influential in the outcomes of COVID-19. The polymerase chain reaction-restriction fragment length polymorphism method was utilized to determine the different genotypes of ApaI rs7975232 and BsmI rs1544410 in 1734 and 1450 patients who had recovered and deceased, respectively. Our finding revealed that the ApaI rs7975232 AA genotype in the Delta and Omicron BA.5 and the CA genotype in the Delta and Alpha variants were associated with higher mortality rate. Also, the BsmI rs1544410 GG genotype in the Delta and Omicron BA.5 and the GA genotype in the Delta and Alpha variants were related to a higher mortality rate. The A-G haplotype was linked with COVID-19 mortality in both the Alpha and Delta variants. The A-A haplotype for the Omicron BA.5 variants was statistically significant. In conclusion, our research revealed a connection between SARS-CoV-2 variants and the impacts of ApaI rs7975232 and BsmI rs1544410 polymorphisms. However, more research is still needed to substantiate our findings. www.nature.com/scientificreports/ affecting the structure of the VDR, which will impact the transcription of genes regulated by vitamin D that affect immune function 9 . VDR polymorphisms have been linked to an increased risk of acute lower respiratory infections in various contexts [10][11][12] . Several studies have investigated the association between four VDR polymorphisms including, TaqI (rs731236; exon 9; A > G), FokI (rs2228570; exon 2; C > T), ApaI (rs7975232; intron 8; C > A), and BsmI (rs1544410; intron 8; G > A) and the risk of hepatitis B virus infection in different ethnic groups 13,14 . It is important to note that the results of genetic studies investigating the function of the ApaI rs7975232 and BsmI rs1544410 polymorphisms in the pathogeneses of COVID-19 remained controversial. Therefore, this study aimed to examine whether these ApaI rs7975232 and BsmI rs1544410 polymorphisms play a role in the susceptibility to the COVID-19 of different variants of SARS-CoV-2. Materials and methods Sample collection. We confirm that all experimental protocols were approved by an Ilam University of Medical Science ethical committee. Moreover, all methods were performed in accordance with the relevant guidelines and regulations. 
From 14,117 patients who visited a hospital of Ilam University of Medical Sciences between November 2020 to February 2022 during the three peaks (Alpha, Delta, and Omicron BA.5) of the SARS-CoV-2 infection, 3184 patients were selected based on the following criteria: (1) having a positive real-time reverse transcription polymerase chain reaction (rtReal time-PCR) from the pharyngeal swab samples that were selected from a hospital; (2) giving informed consent to participate in the study; (3) having Iranian nationality with the same ethnicity; (4) lack of underlying comorbidities including pulmonary infection (cystic fibrosis, chronic obstructive pulmonary disease, and asthma), liver disease, chronic kidney disease, heart disease (cardiovascular disease, heart failure, and etc.), cancer, immunocompromised disease (transplant patients and human immunodeficiency virus), hypertension, pregnancy, and diabetes. In this study, we examined two groups. One was patients with mild and moderate symptoms (cough, malaise, loss of taste and smell, fever, muscle pain, sore throat, nausea, diarrhea, vomiting, headache, and oxygen saturation (SpO 2 ) above 94% on room air at sea level), which were considered as the control group (recovered patients), and the second group was patients with severe and critical symptoms (SpO 2 of 94% below room air at sea level, PaO 2 /FiO 2 of 300 mm Hg, lung infiltrates less than 50%, septic shock, difficulty breathing during slight movement or even at rest and multiple organ dysfunction) as the case group (deceased patients). All paraclinical information such as lipid profile, liver enzymes, complete blood count (CBC), real-time PCR cycle threshold (Ct) values, 25-hydroxyvitamin D, C-reactive protein (CRP), uric acid, erythrocyte sedimentation rate (ESR), and creatinine were obtained when visiting the hospital. ApaI rs7975232 and BsmI rs1544410 genotyping. After DNA extraction of all patients using the High-pure PCR Template Preparation Kit (Roche Diagnostics Deutschland GmbH, Mannheim, Germany), ApaI rs7975232 and BsmI rs1544410 genotyping was performed using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. The forward and reverse sequence primers for ApaI rs7975232 with the PCR product sizes 242 bp included 5'-CTG CCG TTG AGT GTC TGT GT-3' and 5'-TCG GCT AGC TTC TGG ATC AT-3' , respectively. The forward and reverse sequence primers for BsmI rs1544410 with the PCR product sizes 297 bp were 5'-GGG AGA CGT AGC AAA AGG AG-3' and 5'-CCA TCT CTC AGG CTC CAA AG-3' , respectively. The PCR conditions were following: initial denaturation at 95 °C for 5 min, followed by 35 cycles of 95 °C for 30 s, 57 °C for 30 s, 72 °C for 35 s, and final extension at 72 °C for 10 min was for ApaI rs7975232 and initial denaturation at 95 °C for 5 min, followed by 35 cycles of 95 °C for 30 s, 58 °C for 30 s, 72 °C for 45 s, and final extension at 72 °C for 10 min was for BsmI rs1544410. The PCR products were digested with ApaI and BsmI, according to the manufacturer's instructions, and were visualized by electrophoresis on 2.5% agarose gel. The product sizes for ApaI rs7975232 after digestion were 191 bp and 51 bp for the CC genotype and 242 bp for the AA genotype, and for BsmI rs1544410, were 192 bp and 105 bp for the GG genotype and 297 bp for the AA genotype 15 . 
For the PCR-RFLP result confirmation, several samples were randomly selected and sequenced on an ABI 3500 DX Genetic Analyzer (ABI, Thermo Fisher Scientific, Waltham, MA, USA) by the Sanger sequencing method. Then raw data were analyzed with ChromasPro software. Statistical analyses. SPSS version 22.0 (SPSS, Inc, Chicago, IL, USA) was used for analysis. The Chi-square test was used to evaluate the significance of the relationship between the two qualitative groups. The Shapiro-Wilk test was used to determine the distribution's normality, and Mann-Whitney U test was used for quantitative data. The Chi-square test was used to examine all SNPs for Hardy-Weinberg equilibrium (HWE). Using SNPStats software, the correlation analysis was carried out, including dominant, over-dominant, co-dominant and recessive models. The minor allele frequency (MAF) and linkage disequilibrium (LD) was also determined. The fittingbest model was chosen using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The most effective model was the one with the lowest AIC score (http:// bioin fo. iconc ologia. net/ SNPSt ats). Logistic regression was used to determine odds ratios (ORs) and their respective 95% confidence intervals (CIs) for each model. P-values lower than 0.05 were deemed significant. Relationship between COVID-19 mortality adjusted by SARS-CoV-2 variants and ApaI rs7975232 and BsmI rs1544410 polymorphisms. The COVID-19 death rate was considerably more significant in patients with the ApaI rs7975232 AA genotype than in other genotypes. Patients who have recovered from COVID-19 also had the ApaI rs7975232 CC genotype. Patients with the GG genotype exhibited a higher COVID-19 mortality rate in the BsmI rs1544410 polymorphism. Table 2 tabulates the inheritance model analysis results for ApaI rs7975232 and BsmI rs1544410 polymorphisms in patients. By comparing the deceased and recovered patients, the codominant and dominant inheritance models with the lowest AIC and BIC values were found to be the best-fitting models for ApaI rs7975232 and BsmI rs1544410. The ApaI rs7975232 AA genotype was linked to a higher risk of COVID-19 mortality (P < 0.0001, Table 1. Comparison of laboratory parameters between SARS-CoV-2 variants. ALT alanine aminotransferase, AST aspartate aminotransferase, ALP alkaline phosphatase, TG triglyceride, LDL low density lipoprotein, HDL high density lipoprotein, WBC white blood cells, CRP C-reactive protein, ESR erythrocyte sedimentation rate, FBS fasting blood glucose, SD standard deviation, SARS-CoV-2 Severe Acute Respiratory Syndrome Coronavirus 2. *Statistically significant (< 0.05). The ApaI rs7975232 polymorphism in recovered and deceased patients was compatible with HWE (P > 0.05), while HWE in BsmI rs1544410 was incompatible in both groups (P < 0.001). The MAF for ApaI rs7975232 (A) and BsmI rs1544410 (G) polymorphisms in deceased patients was higher than in recovered patients. Fig. 1. There is significant difference in vitamin D levels between different ApaI rs7975232 (P = 0.041) and BsmI rs1544410 (P = 0.008) genotypes among recovered and deceased patients. The lowest amount of vitamin D was found in ApaI rs7975232 GG and BsmI rs1544410 AA genotypes, while the highest amount in ApaI rs7975232 AA and BsmI rs1544410 CC genotypes. Frequencies of ApaI rs7975232 and BsmI rs1544410 polymorphism between SARS-CoV-2 variants. 
Our findings showed that the death rate was related to the SARS-CoV-2 variants, much higher in the Delta variant than in the Alpha and Omicron BA.5 variants (P < 0.001). Discussion We investigated how the ApaI rs7975232 and BsmI rs1544410 affected the susceptibility to COVID-19 and showed that they might be used as genetic indicators for infection by different SARS-CoV-2 variants. Alleles A (0.37) for the ApaI rs7975232 and G (0.34) for the BsmI rs1544410 polymorphisms as MAF were directly related to mortality in patients with COVID-19. In this study, the levels of vitamin D in COVID-19 patients, especially those infected with the Delta variant with a higher mortality rate, were lower than the other two variants. It has been found that vitamin D can play an antiviral inhibitory role in nasal epithelial cells in SARS-CoV-2 infection 16 . This virus enters the host cells after binding to its receptors on the cell's surface called ACE2 by Spike protein. Type II alveolar cells, in which ACE2 receptors are strongly expressed, are the virus's primary target 17 . Calcitriol, a vitamin D agonist, increases the ACE2 expression and soluble ACE2, which may lead to virus trapping and inactivation. The renin-angiotensin-aldosterone system, altered by SARS-CoV-2 infection, is negatively regulated by calcitriol, inhibiting renin Table 3. ApaI rs7975232 and BsmI rs1544410 genotypes association with SARS-CoV-2 variants. SARS-CoV-2 severe acute respiratory syndrome coronavirus 2, OR odds ratios, CI confidence intervals. www.nature.com/scientificreports/ expression. This increased availability of angiotensin II leads to tissue damage, inflammation, and multi-organ failure 18 . Active forms of vitamin D and lumisterol have been shown to block SARS-CoV-2 replication machinery enzymes (main protease and RNA-dependent RNA polymerase), implying that novel vitamin D and lumisterol metabolites are potential antiviral therapeutic candidates. Moreover, these metabolites may prevent SARS-CoV-2 receptor binding domain from attaching to ACE2 by interacting with transmembrane serine protease 2 (TMPRSS2) and ACE2. The structural and dynamical motion alterations brought on by these interactions could impact TMPRSS2's ability to prime the SARS-CoV-2 spike proteins 19 . As a result, novel CYP11A1-derived vitamin D3 hydroxyderivative, including 20(OH) vitamin D3 and 20,23(OH)2 vitamin D3, and lumisterol hydroxymetabolites can inhibit COVID-19 via both independent and nuclear receptor-dependent mechanisms, making them excellent candidates for antiviral drug research as well as the informed use of their precursors as nutrients or supplements in the prevention and attenuation of COVID-19 disease 20,21 . Vitamin D's active hydroxyl forms have anti-inflammatory and antioxidant effects, and they also boost innate defense to infectious agents. These characteristics are shared by non-calcemic hydroxyderivatives produced by CYP11A1 and calcitriol. They exhibit inverse agonism on the retinoic acid-related orphan receptors-γ (RORγ), suppress the synthesis of pro-inflammatory cytokines, downregulate NF-κΒ, and combat oxidative stress by activating transcription factor NF-E2-related factor 2 (NRF2). As a result, a direct delivery of vitamin D hydroxyderivatives deserves consideration in the therapy of COVID19 of various etiologies 22 . Human VDR has more than 14 distinct identified polymorphisms. These polymorphisms may affect how VDR binds to calcitriol to modulate its response. 
FokI rs2228570, BsmI rs1544410, ApaI rs7975232, and TaqI rs731236 are the four SNPs that are most commonly examined. They were demonstrated independently modifying vitamin D status and in haplotypes 23 . The COVID-19 death rate was considerably more significant in patients with the ApaI rs7975232 AA genotype than in other genotypes. The COVID-19 mortality rate was related to ApaI rs7975232 CA in the Alpha variant and with AA and CA in the Delta variant and with AA in the Omicron BA.5 variant. In agreement with our results, Apaydin et al. showed that the AA genotype was common among patients with severe COVID-19 24 . Cohorts from Nigeria, Egypt, Ethiopia, Pakistan, Saudi Arabia, Lebanon, Turkey, and Italy were found to frequently have the AA genotype, according to the frequencies of the ApaI rs7975232 polymorphism. In contrast, the cohorts from Iran, the US, Poland, Greece, Mexico, India, the Netherlands, Czechia, Croatia, Russia, Spain, Finland, Brazil, and Tunisia frequently had the AC genotype. The CC genotype of the ApaI rs7975232 gene was most common in deceased patients from Korea, Japan, and China 25 . Studies with hepatitis B virus demonstrated that CA/AA genotypes of ApaI rs7975232 polymorphism trigger T helper 2 (Th2) cells proliferation, but there are no studies on ApaI rs7975232 and respiratory system viral infection. On the other hand, AA genotypes result in Th1 proliferation and anti-inflammatory cytokine production, which accelerates the progression of liver disease progression to cirrhosis 14 . The fact that participants with the AA genotype of the ApaI rs7975232 polymorphism in this study had a greater death rate suggests that Th2 can also release Interleukin-6 (IL-6), which is related to COVID-19 prognosis. IL-6 is one of the key factors in the cytokine storm caused by COVID-19. IL-6 induces endothelial dysfunction with expression of tissue factor and adhesion molecules via upregulation of angiotensin converting enzyme-2 receptor. These negative effects of IL-6 were mitigated by vitamin D and VDR polymorphisms. As a result, it is possible that this is one of the putative mechanism(s) by which vitamin D exerts its positive effects in COVID-19 infection 26 . Subjects with the severe and moderate disease who had the "CA" genotype compared to "CC and AA" genotypes demonstrated a more severe risk, according to the study by Abdollahzadeh et al. Contrary to "CA and AA" and "CA" genotypes, symptomatic-asymptomatic and moderately-asymptomatic patients with the CC genotype were more likely to have signs and symptoms. In contrast to our findings, none of the deceased participants had the AA genotype 15 . This study's patients with the GG genotype in this study exhibited a higher COVID-19 mortality rate in the BsmI rs1544410 polymorphism. The COVID-19 mortality rate was related to BsmI rs1544410 GA in the Alpha variant, BsmI rs1544410 AA and GA in the Delta variant, and GG in the Omicron BA.5 variant. It has been demonstrated that the BsmI rs1544410 G allele can be a risk factor for COVID-19 severity 15 , while no such relationship was seen in the study of Apaydin et al. 24 . The BsmI rs1544410 polymorphism's diversity revealed that the cohorts of the US, China, Poland, Turkey, Egypt, Italy, Saudi Arabia, Russia, Czechia, India, Greece, the Netherlands, Croatia, Brazil, Spain, Tunisia, Nigeria, and Lebanon frequently had the BsmI rs1544410 AG genotype. 
In contrast, the cohorts from Iran, Korea, Japan, Finland, Pakistan, and Mexico frequently had the BsmI rs1544410 GG genotype in deceased patients, but this was not significant 25 . The association of BsmI rs1544410 and viral infections such as HIV has been investigated. It has been demonstrated that BsmI rs1544410 A-allele was strongly correlated with the rapid progression of HIV disease. It is unclear exactly how the BsmI rs1544410 G-allele confers protection, while the BsmI rs1544410 A-allele raises the likelihood of disease 27 . The BsmI rs1544410 G to A polymorphism alteration occurs in the 3' untranslated regions (3' UTRs) of the VDR gene and is hypothesized to affect the VDR messenger RNA stability. This polymorphism has been linked to an increased HIV infection susceptibility and faster rate of HIV disease development 28,29 . Strong LD exists between BsmI rs1544410 and another 3′ UTR polymorphism (ApaI rs7975232), which has also been linked to the course of HIV illness. Given that the BsmI rs1544410 polymorphism is a synonymous mutation, the relationships seen may be explained by LD with one or more functional polymorphisms at other locations in the VDR gene 30 . However, synonymous rather than silent mutations could cause alternations in the protein's expression, conformation, and function. Therefore, BsmI rs1544410 polymorphisms might also directly change the VDR 31 . The findings that the BsmI rs1544410 A-allele is more influential in disease progression in the Delta variant than the other two may be explained by the difference in the serum vitamin D level, as these www.nature.com/scientificreports/ levels in patients with the Delta variant were much higher. Also, in this study, there was a strong LD between BsmI rs1544410 and ApaI rs7975232. According to our findings, the C-A haplotype was more common among all SARS-CoV-2 variations. The A-G haplotype was linked with COVID-19 mortality in both the Alpha and Delta variants. The A-A haplotype for the Omicron variants was statistically significant. These two SNPs may likely function differently in distinct SARS-CoV-2 variants. However, the mechanism underlying this divergence remains unknown. There were several limitations in our study that should be considered. We did not have any healthy controls who had not previously suffered from COVID-19. Besides, previous vaccination information of all patients was not available. Moreover, this study was conducted in only one population with the same ethnicity. To generalize the relationship between these two polymorphisms to the whole society, more studies should be done on different races in Iran. In conclusion, our study showed that the serum vitamin D level and BsmI rs1544410 and ApaI rs7975232 polymorphisms were related to the mortality rate of SARS-CoV-2 with different variants. The COVID-19 mortality rate was related to ApaI rs7975232 CA genotype in the Alpha variant and with AA and CA genotypes in the Delta variant and with AA genotype in the Omicron BA.5 variant. Moreover, in BsmI rs1544410 polymorphisms, the mortality rate was correlated with GA genotype in the Alpha variant and with GG and GA genotypes in the Delta variant and with GG genotype in the Omicron BA.5 variant. The A-G haplotype was linked with COVID-19 mortality in both the Alpha and Delta variants. The A-A haplotype for the Omicron BA.5 variants was statistically significant. Further studies in different ethnicities should be done to confirm our results.
2023-03-03T15:25:26.055Z
2023-03-03T00:00:00.000
{ "year": 2023, "sha1": "6182e2c3477df82465f32c0583d02c31c4fae148", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "6182e2c3477df82465f32c0583d02c31c4fae148", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10605070
pes2o/s2orc
v3-fos-license
iDamIDseq and iDEAR: an improved method and computational pipeline to profile chromatin-binding proteins DNA adenine methyltransferase identification (DamID) has emerged as an alternative method to profile protein-DNA interactions; however, critical issues limit its widespread applicability. Here, we present iDamIDseq, a protocol that improves specificity and sensitivity by inverting the steps DpnI-DpnII and adding steps that involve a phosphatase and exonuclease. To determine genome-wide protein-DNA interactions efficiently, we present the analysis tool iDEAR (iDamIDseq Enrichment Analysis with R). The combination of DamID and iDEAR permits the establishment of consistent profiles for transcription factors, even in transient assays, as we exemplify using the small teleost medaka (Oryzias latipes). We report that the bacterial Dam-coding sequence induces aberrant splicing when it is used with different promoters to drive tissue-specific expression. Here, we present an optimization of the sequence to avoid this problem. This and our other improvements will allow researchers to use DamID effectively in any organism, in a general or targeted manner. Summary: Critical improvements to the DamID protocol improve specificity and sensitivity in determining genome-wide protein-DNA interactions in transient or stable transgenic animal lines. INTRODUCTION Animal development is the result of an exquisite orchestration of changes in gene expression in time and space. Transcription factors (TFs) and other chromatin-associated proteins are fundamental elements in these processes and the search for their targets and the logic by which they are regulated in the genome is a central theme in today's research. Two methods are currently used to profile transcription factor-binding regions in the genome: chromatin immunoprecipitation (ChIP) and DNA adenine methyltransferase identification (DamID) (reviewed by Aughey and Southall, 2016;Furey, 2012). ChIP relies on antibody-based capture of protein-DNA complexes on crosslinked and sheared chromatin. Although this technique is solid and robust, its major drawback is its dependence on highly specific precipitating antibodies. In particular, cross-reacting antibodies may simultaneously immunoprecipitate more than one TF in a ChIP experiment. DamID offers a suitable solution to these problems. In DamID, the fusion of a TF to the bacterial gene DNA adenine methyltransferase, Dam, allows a restricted methylation of adenine residues of the GATC target sequences near the TF binding sites. These regions are subsequently enriched by digesting gDNA with the restriction enzyme DpnI and linker-mediated PCR (LM-PCR). The PCR products are hybridized to microarrays or used directly for deep sequencing. In summary, DamID requires relatively low input material and processing time, is cost-effective and accurately reflects ChIP results (Southall et al., 2013). DamID has been used successfully in model organisms, including Drosophila melanogaster (Van Steensel and Henikoff, 2000;Southall et al., 2013), Caenorhabditis elegans (Schuster et al., 2010), Arabidopsis thaliana (Germann et al., 2006) and mammalian cell cultures (Vogel et al., 2007). The current protocols require tight control to ensure low expression levels of the E.coli Dam methylase fused to the protein of interest. 
In the process of implementing DamID to developing medaka and zebrafish embryos using different transcription factors, we faced serious problems such as lack of any DamID product, non-specific amplification (DpnIindependent amplification) and lack of tissue-specific expression of Dam fusion proteins. To overcome these drawbacks and allow a wider, immediate application of the technique, we have made a series of improvements to the original iDamIDseq protocol, resulting in a method that is easily applicable and provides consistent results. This approach permits transcription factor profiling even in transient applications. We complement these experimental improvements with iDEAR (iDamID Enrichment Analysis with R), an analysis pipeline associated with iDam, as a rapid new method for establishing highly reliable profiles of transcription factor-binding sites. RESULTS AND DISCUSSION The iDamIDseq protocol: problems and solutions E. coli Dam (eDam) displays specific methylation activity on its cognate GATC but also minor unspecific methylation on nearcognate sequences (Horton et al., 2005). This means that its expression may trigger unwanted toxic effects. When mRNA coding for the fusion eDam-GFP (eD-f-G) was injected into zygotes, we observed a high number of abnormal embryos at stage 25, 52%, compared with 1% in the control case (Iwamatsu, 2004) (Fig. 1A). To overcome this problem, we used the mutant version DamL122A (Horton et al., 2005) (henceforth referred to as Dam), the activity of which has been shown to increase the specificity of methylation on GATC sites. Interestingly, injecting mRNA coding for the fusion Dam-GFP (D-f-G) produced a much lower number of abnormal embryos, 4%, which was similar to the control (Fig. 1A). Chimeric fusion may compromise the normal functions of a protein due to steric hindrance (Arai et al., 2001). We included a flexible linker between the Dam protein and the transcription factor, and tested different orientations. We observed that the methylation defect of Dam-deficient bacteria could be rescued differentially by the Dam fusions depending on the orientation of the fused protein (N or C terminal), but the presence of the flexilinker always improves the activity ( Fig. 1B and Fig. S1). Accordingly, all chimeric Dam constructs used for the rest of this work carry the flexilinker, indicated by the letter 'f' in the name of the fusions. To address a possible impact of the Dam fusion protein on development, we injected mRNA coding for a nuclear-localized Dam-f-GFP or Dam-f-TF into medaka zygotes and allowed them to develop at 28°C ( Fig. 2A). At stage 22 (Iwamatsu, 2004), embryos did not show any evident abnormality and Dam-f-GFP-injected embryos ubiquitously expressed GFP (Fig. 2B). As we repeatedly obtained linker-mediated amplification (LM-PCR) independently of DpnI (data not shown), we reasoned that this problem is due to the ligation of the adaptors to free phosphorylated 5′ ends, a result of the original genomic DNA preparation rather than DpnI digestion. We enhanced the specificity of the adaptor ligation by switching the order of the DpnI and DpnII digestions, and by adding an alkaline phosphatase step. First, we reduced size complexity by digesting the DNA with DpnII, which cuts GATC sites but is sensitive to adenine methylation. Then we treated these fragments with alkaline phosphatase and proceeded with digestion using DpnI, which only cuts methylated adenine GATC sites. 
LM-PCR amplification products were obtained only in samples treated with DpnI (Fig. 2C). In order to prepare the sample for deep sequencing, any contaminating genomic DNA must be removed. We performed LM-PCR using primers protected with phosphorothioate modifications and then treated the samples with T7 exonuclease (Fig. 2D). The final goal of DamID is to use specific promoters to generate transcription factor-binding profiles in a tissue-specific manner. We cloned Dam-f-GFP using promoters that included ubiquitin (Mosimann et al., 2011), heat shock (Blechinger et al., 2002) and Rx2 (Reinhardt et al., 2015) in plasmids carrying transgenesis markers such as Cmlc2:GFP or RFP. Surprisingly, none of the Dam-f-GFP constructs showed GFP expression, whereas the unfused GFP construct did ( Fig. 3A; data not shown). As the Dam-f-GFP fusion itself can be translated efficiently (see Fig. 2B), we suspected problems at the transcriptional/splicing level. RT-PCR of samples from the different ubiquitin-driven constructs revealed the aberrant splicing of the Dam gene out of the final transcript (Fig. 3A,B; Fig. S2). A customized optimization of the Dam gene (oDam) removed the cryptic splicing regulatory sites and restored the expression of the GFP in the larvae ( Fig. 3C; Fig. S3). Proof of concept validation and data analysis with iDEAR As a proof of concept, and to reveal the specific enrichment of transcription factor DamID products, we applied this technique to medaka using the transient expression of the transcription factor Rx2, which is the homolog of the mammalian Rax homeodomain proteins involved in retina development. We injected mRNA coding for a nuclear localized Dam-f-GFP or Dam-f-Rx2, extracted gDNA and processed the samples as described above with two biological replicates per condition. The correlation of read coverage over the genome is very high between replicates but quite distinct between Rx2 and GFP, showing the consistency and specificity of this method (Fig. 4A). We developed an R package, named iDEAR (iDamID Enrichment Analysis with R, available at https://bitbucket.org/juanlmateo/ idear), to facilitate the straightforward analysis of regions that , Dam-f-GFP and cMyc-Dam-f-GFP cassettes driven by the 3.5 kb ubiquitin promoter (Ubi) were co-injected with Tol2 transposase into medaka zygotes. Successfully injected larvae expressing EGFP in the heart were selected for further studies. Only Ubi::GFP is expressed ubiquitously in the body of the larvae. (B) RNA was isolated from pools of larvae from the experimental groups. RT-PCR was performed using a forward primer (orange arrowhead in A) annealing in the non-coding exon included in the ubiquitin promoter (NoE) and the reverse primer (green arrowhead in A) in the body of the GFP-coding sequence. Proper splicing occurs between NoE and GFP in the Ubi::GFP larvae. In Ubi::Dam-f-GFP larvae, incorrect splicing occurs between the NoE and a cryptic acceptor site in the GFP-coding region (red arrowheads). In the Ubi::cMyc-Dam-f-GFP, NoE is spliced to the proper acceptor upstream of the cMyc sequence, but after that the cMyc sequence is aberrantly spliced, using a cryptic donor site, to the same cryptic acceptor sequence in GFP as for Ubi::Dam-f-GFP (see also Fig. S2). The prokaryotic Dam ORF carries a strong splicing enhancer recognized in the eukaryotic context. (C) Optimization of the Dam ORF removed this potential, facilitating proper expression of the fusion proteins. undergo differential methylation (see Materials and Methods). 
Using iDEAR, we were able to identify 7948 Rx2 target regions (Table S1). Strikingly, we also identified 6255 regions with a significant depletion of the signal in the Rx2 samples compared with GFP. Based on the distance to the closest transcription start site (TSS, Fig. S4A,B), such Rx2-occupied sites tend to be within 10 kb and 50 kb of genes, reflecting enhancers, whereas Rx2-negative sites are mostly in the close vicinity of a TSS, showing a profile similar to promoters. We concluded that Rx2-depleted sites predominantly correspond to promoters of actively transcribed genes that are situated within regions of very accessible chromatin but are not bound by Rx2. Using DREME (Bailey, 2011) as a de novo motif discovery tool to compare Rx2-occupied versus Rx2-negative sites, the top hit was the motif BYAATTA, which is almost identical to the motif identified in vitro by SELEX for the mammalian Rax protein (Jolma et al., 2013) (Fig. 4B). This indicates that Dam-f-Rx2 shows specific binding that recognizes the motif demonstrated for its human ortholog, even in overexpression conditions. To evaluate the performance of iDEAR, we compared it with other tools used for similar purposes: MACS2 (Zhang et al., 2008) and the pipeline proposed by Marshall and Brand (2015). MACS2 produced 40,292 peaks and Marshall and Brand only 1635 sites. Although the number of identified sites by MACS2 is very different from the number of sites identified by iDEAR, their average length is very similar at around 1 kb; but Marshall and Brand's pipeline produced extremely large sites of enrichment (Fig. S4C). Knowing that the RAX motif is the most over-represented motif in the Rx2 sites, we wanted to see whether its presence correlates with the score that each tool assigns to the sites. To check this, we computed the ratio of sites with the motif in their sequence versus random sequences ordered by score. Rx2-occupied sites identified by iDEAR showed a higher ratio than the other tools (always greater than 1) and correlated well with the score, i.e. a higher abundance of motifs was found in sites ranked higher (Fig. 4C). We found the same correlation for MACS2, but half of the peaks this method identified have a lower content of the motif than expected at random. This finding may indicate a high false-positive rate in peak calling, which is also expected by the very large number of peaks that MACS2 finds. We need to note that MACS2 was designed to analyze ChIP-seq data where the enrichment of a TF is clearly identified by the so-called 'peak'. The fact that iDamIDseq data does not necessarily show a clear and single peak per bound region, in addition to the inability of MACS2 to handle replicates, explains the lower performance of this tool with respect to iDEAR. In order to gain insights into the potential functional properties of the sites identified by iDEAR, we looked into their overlap with regions constrained by evolution. The Rx2 sites identified by iDEAR overlapped to a higher degree with conserved sites in fish than the sites identified by the other two tools (Fig. S4D). Although Rx2 was provided as mRNA and was therefore expressed in the whole embryo, a careful inspection of the regions of Rx2 enrichment revealed many players known to be involved in retinal development, including Six3 (Loosli et al., 1998), Otx2 (Zuber et al., 2003), Pax6 (Loosli et al., 1998) and Sox2 (Reinhardt et al., 2015). Interestingly, we also found enrichment of Rx2 on its own proximal upstream locus (Fig. S5A). 
We generated a reporter element with the sequence of the Rx2 enriched region in front of a minimal promoter and GFP. This element drives GFP expression in the photoreceptor cell layer and overlaps completely with the Rx2 expression domain in these cells (Fig. S5B). Future analysis will require a fusion protein specifically expressed in the Rx2 expression domain in the retina. In conclusion, the improvements described above to the DamID protocol preserve the full chromatin profiling capacity of the 'classical' technique, but substantially reduce unwanted background noise and consequently increase sensitivity and specificity. iDamIDseq can be readily applied to determine transcription factor-binding profiles even in transient assays. Our optimization of the Dam-coding sequence facilitates proper tissue-specific expression, making it compatible with any organism that is amenable to transient or stable transgenesis. Fish maintenance Medaka (Oryzias latipes) fish were bred and maintained as previously established (Loosli et al., 2000). The animals used in the present study were from the inbred strain Cab. All experimental procedures were performed according to the guidelines of the German animal welfare law and approved by the local government (Tierschutzgesetz §11, Abs. 1, Nr. 1, husbandry permit number 35-9185.64/BH Wittbrodt). Plasmids The variant DamL122A was created by site-directed mutagenesis of the E. coli Dam gene using mutagenesis primers and flanking primers (Heckman and Pease, 2007). (All primers used in this work are listed in Table S1.) The flexible linker was cloned as a dsOligo that encodes four GGGS amino acid repeats. The repeat sequences are flanked by NheI and SpeI sites. The mmGFP was amplified from plasmid pT2-otpECR6_E1B:: mmGFP (monomeric GFP, see Gutierrez-Triana et al., 2014). All fragments were cloned into the pCS2+ vector (Rupp et al., 1994) as either N-or Cterminal fusions, followed by the SV40_polyadenylation signal of the pCS2 vector. Plasmid integrity was confirmed by sequencing. We used the gene synthesis service of GeneArt (Thermo Fisher Scientific) to obtain the optimized Dam sequence (oDam). In addition to codon optimization, cryptic splice sites were avoided (the DNA sequence is shown as an alignment to the unmodified DamL122A in Fig. S3). We replaced the DamL122A with the optimized Dam in the pCS2+ plasmids described above. The DamL122A or oDam cassettes were excised from the pCS2+ plasmids using AgeI and NotI, and subcloned downstream of the 3.5 kb zebrafish ubiquitin promoter (Mosimann et al., 2011) in a Tol2_based plasmid (Kawakami et al., 1998), with cmlc2::EGFP as the insertional reporter (Rembold et al., 2006). The Rx2 DamID-enriched region (Rx2_DBS) was amplified from medaka genomic DNA and the fragment was used to replace the otpECR6 element of the pT2-otpECR6_E1B:: mmGFP plasmid mentioned above using AscI-SpeI. DpnI protection assay pCS2+ plasmids carrying the cassettes coding for every particular Dam fusion protein were used to transform the E. coli strain C2925, deficient in the dam/dcm methylation system. As control, the pUC19 plasmid was used to transform C2925 cells and One Shot TOP10 cells, which have a normal methylation system. Bacterial genomic DNA was isolated from 3 ml LBamp cultures from individual colonies using the DNeasy Tissue kit (Qiagen, 69504). gDNA (1 µg) was digested with 10 units of DpnI (NEB, R0176S) for 1 h at 37°C. The products were run in a 1% agarose gel. 
iDamIDseq protocol gDNA isolation Embryos (20-30) or tissue were washed with 1× ERM or 1× PBS, respectively, removing as much media as possible and homogenized using a pestle in 400 µl of TEN buffer [100 mM Tris-HCl ( pH 8.5), 10 mM EDTA, 200 mM NaCl, 1% SDS] plus 20 µl of 20 mg/ml Proteinase K. Samples were incubated overnight at 50-60°C then cooled down to room temperature for 5 min. RNase A (20 µl of 10 mg/ml; DNase and Proteinase-free, Thermo Fisher Scientific, EN0531) was added then samples were incubated for 15 min at room temperature. Phenol: chloroform:isoamylalcohol (25:24:1, 600 µl) (Roth, A156.1) was added and mixed by inversion. Samples were then incubated for 10 min at room temperature then centrifuged at 10,000 rpm at room temperature for 20 min. The aqueous phase was transferred to a tube containing 600 µl of chloroform, mixed and centrifuged at 10,000 rpm for 10 min. The resultant aqueous phase was transferred into a tube containing 600 µl of isopropanol, mixed, stored at −20°C for 30 min then centrifuged at 10,000 g at 4°C for 20 min. The supernatant was removed and added to 800 µl of ice-cold 70% ethanol, then centrifuged at 20,000 g at room temperature for 10 min. As much supernatant as possible was removed and the pellet was dried at 60°C for 10 min before adding 50 µl of pre-warmed water (60°C). Tubes were incubated for 10-20 min at 60°C, with gentle flicking of the tube sporadically until the pellet was dissolved. Quality was checked by measuring OD 260 (above 1.80) and gel electrophoresis. DpnII digestion and alkaline phosphatase treatment In a 20 µl reaction, 2 µl 10× NEB3.1 buffer, 1 µg of gDNA and 10 units of DpnII (NEB, R0543S) were mixed and incubated for 6 h at 37°C. The enzyme was inactivated by incubation at 65°C for 20 min. To the inactive DpnII reaction, 23 µl H 2 O, 5 µl 10× AP buffer and 5 units of antarctic phosphatase (NEB, M0289) were added, then the mixture was incubated for 1 h at 37°C and inactivated at 70°C for 10 min. The reaction was cleaned up using an InnuPREP Double EPure Kit (Analytik Jena) and eluted in 12 µl of H 2 O. DpnI digestion In a 10 µl reaction, 1 µl CutSmart buffer, 5 µl of DpnII/AP-treated sample and 10 units of DpnI enzyme (NEB, R0176S) were mixed. DpnI was excluded from the control sample. The reaction was incubated at 37°C for 12 h then inactivated at 80°C for 20 min. T7 exonuclease treatment CutSmart buffer (5 µl, 10×) , 10 units T7 exonuclease (NEB, M0263S) and 14 µl H 2 O were added to 30 µl of clean LM-PCR. The sample was incubated for 1 h at 25°C, cleaned up using an InnuPREP Double EPure Kit (Analytik Jena) and eluted in 20 µl H 2 O ready for library preparation. In order to obtain the minimum amount of DNA for deep sequencing it was sometimes necessary to repeat the PCR step and pool the DNA samples. Sequencing DNA samples were fragmented using the Covaris S2 sonicator in AFA microtubes. The library was then prepared using the NEBNext Ultra DNA Library Prep kit for Illumina (E7370, NEB) with NEBNExt Multiplex Oligos for Illumina (E7500, NEB). Sequencing was performed with the Illumina HiSeq 2500 sequencing system. Sequencing data processing Reads were mapped to the medaka genome (Kasahara et al., 2007) (oryLat2 assembly) using Bowtie2 (Langmead and Salzberg, 2012) with default parameters. The mapped reads were then filtered with SAMtools (Li et al., 2009) to keep only those with a minimum mapping quality of 20. 
Identification of enriched regions First, the set of potential DpnI fragments was built from a BSgenome object for the oryLat2 assembly using the function vMatchPattern with the restriction site 'GATC'. Only fragments spanning adjacent predicted restriction sites with lengths ranging from 200 to 2000 bases were considered. Next, the reads that fell into each predicted fragment were counted for each of the samples, with the function summarizeOverlaps, setting the parameter ignore.strand to TRUE. These counts were used to produce the correlation heatmap in Fig. 4A. In order to discard fragments with spurious mapped reads, only fragments with a minimum number of reads relative to the fragment length were kept. This threshold was computed as three times the total number of reads in all fragments that were considered, divided by their total length. After this selection, fragments that were not further apart than the smallest fragment length were joined together. With the resulting set of genomic regions, the read count was computed again with summarizeOverlaps and the resulting matrix was used to compute significant differences between samples using DESeq2 (Love et al., 2014). The R package iDEAR implements this data analysis pipeline and is available at https://bitbucket.org/juanlmateo/idear. De novo motif discovery DREME (Bailey, 2011) was used to search for the most enriched motifs in the sequence within the coordinates of the Rx2-positive sites ( parameter -p), compared with the sequence within the coordinates of the Rx2-negative sites ( parameter -n). Motif enrichment FIMO (Grant et al., 2011) was used to identify motif matches of the RAXbinding motif (Jolma et al., 2013) (RAX_DBD) in the sequence within the coordinates of each region that had been identified. All the regions from each set were sorted by significance (from highest to lowest) and divided in 50 bins. For each bin, a ratio was computed as the number of original sequences with at least one motif match divided by the number of shuffled sequences with at least one motif. The shuffled sequences were generated by randomly permuting dinucleotides for each individual sequence that was analyzed. Association of sites to genes Each site identified by iDEAR was associated with the gene whose transcription start site is closest, or overlapping, on either side of it. Version 84 of medaka transcripts in ENSEMBL was used for this. Analysis with MACS2 and the Marshall and Brand pipeline For comparison, the mapped reads were also subjected to analysis with other tools, including MACS2 (Zhang et al., 2008) and the method proposed by Marshall and Brand (2015). MACS2 was invoked with the parameters -broad-cutoff 0.01 -broad -nomodel -extsize 300 -gsize 7e8 and -t, which provided the bam files of the two Rx2 replicates, and -c, which provided the bam files for the two GFP replicates. For the other tool, the script damidseq_pipeline was invoked, providing the bam files of the two Rx2 replicates and, as this script cannot handle replicates as controls, only the first replicate of GFP was provided with the parameter -dam. In this case we used the same coordinates for GATC fragments as we had with iDEAR, using the parameter -gatc_frag_file. After this, the script find_peaks was used over the bedgraph output produced by the previous script. Conservation analysis The phastCons 5-Way track for medaka was downloaded through the Table Browser (Karolchik, 2004) from the UCSC Genome Browser as a bed file. 
For each enriched site identified as an Rx2-binding site by iDEAR, MACS2 or the Marshall and Brand tool, the proportion of bases in the site that are also covered by a phastCons element were also computed.
2017-08-04T08:36:41.337Z
2016-11-15T00:00:00.000
{ "year": 2016, "sha1": "940e5cf9c32c3fbf2c8bc25c0755ae78359edb6f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1242/dev.139261", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "0115354503e13d03ab836b3dff237bab2688648a", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
267251558
pes2o/s2orc
v3-fos-license
Clinical, biochemical and molecular characterization of a new case with FDX2 ‐related mitochondrial disorder: Potential biomarkers and treatment options Abstract Ferredoxin‐2 (FDX2) is an electron transport protein required for iron–sulfur clusters biosynthesis. Pathogenic variants in FDX2 have been associated with autosomal recessive FDX2‐related disorder characterized by mitochondrial myopathy with or without optic atrophy and leukoencephalopathy. We described a new case harboring compound heterozygous variants in FDX2 who presented with recurrent rhabdomyolysis with severe episodes affecting respiratory muscle. Biochemical analysis of the patients revealed hyperexcretion of 2‐hydroxyadipic acid, along with previously reported biochemical abnormalities. The proband demonstrated increased lactate and creatine kinase (CK) with increased amount of glucose infusion. Lactate and CK drastically decreased when parenteral nutrition containing high protein and lipid contents with low glucose was initiated. Overall, we described a new case of FDX2‐related disorder and compare clinical, biochemical and molecular findings with previously reported cases. We demonstrated that 2‐hydroxyadipic acid biomarker could be used as an adjunct biomarker for FDX2‐related disorder and the use of parenteral nutrition as a treatment option for the patient with FDX2‐related disorder during rhabdomyolysis episode. Highlights 2‐Hydroxyadipic acid can serve as a potential adjunct biomarker for iron‐sulfur assembly defects and lipoic acid biosynthesis disorders. Parenteral nutrition containing high lipid and protein content could be used to reverse acute rhabdomyolysis episodes in the patients with FDX2‐related disorder. described a new case harboring compound heterozygous variants in FDX2 who presented with recurrent rhabdomyolysis with severe episodes affecting respiratory muscle.Biochemical analysis of the patients revealed hyperexcretion of 2-hydroxyadipic acid, along with previously reported biochemical abnormalities.The proband demonstrated increased lactate and creatine kinase (CK) with increased amount of glucose infusion.Lactate and CK drastically decreased when parenteral nutrition containing high protein and lipid contents with low glucose was initiated.Overall, we described a new case of FDX2-related disorder and compare clinical, biochemical and molecular findings with previously reported cases.We demonstrated that 2-hydroxyadipic acid biomarker could be used as an adjunct biomarker for FDX2-related disorder and the use of parenteral nutrition as a treatment option for the patient with FDX2-related disorder during rhabdomyolysis episode.Highlights 2-Hydroxyadipic acid can serve as a potential adjunct biomarker for iron-sulfur assembly defects and lipoic acid biosynthesis disorders.Parenteral nutrition containing high lipid and protein content could be used to reverse acute rhabdomyolysis episodes in the patients with FDX2-related disorder. K E Y W O R D S ferredoxin-2; iron-sulfur clusters; mitochondrial disorders; mitochondrial myopathy, episodic, with optic atrophy and reversible leukoencephalopathy; urine organic acid analysis Iron-sulfur clusters (Fe-S) are one of the most ubiquitous cofactors for multiple enzymes with various biological function. 1,2Mitochondrial biosynthesis of Fe-S is a complex process with tight regulations. 
3,4Sulfur atoms donated by cysteine via the reaction of cysteine desulfurase complex, iron atoms and electrons combine to form [2Fe-2S] clusters using multiple scaffolding and transport proteins (Figure 1).[2Fe-2S] clusters are transported to target proteins or further converted to [4Fe-4S] clusters. Although very rare, mitochondrial Fe-S biosynthesis defects are an emerging group of metabolic disorders with various phenotype and broad clinical spectrum. 5mong those, the defect in electron transport protein, ferredoxin-2 (FDX2), encoded by FDX2 (formerly known as FDX1L), causes autosomal recessive episodic mitochondrial myopathy with or without optic atrophy and reversible leukoencephalopathy (MEOAL, MIM# 251900). 6][8][9][10][11] Clinical phenotype included progressive myopathy, rhabdomyolysis with variable onset and severity, peripheral neuropathy, and reversible leukoencephalopathy.Optic atrophy, neurodevelopmental abnormalities, microcytic anemia, neutropenia, hypothyroidism have been reported in some patients.Biochemical abnormalities include lactic acidosis, and hyperexcretion of lactate, ketones, tricarboxylic acid (TCA) cycle metabolites, and 3-methylglutaconic acid in urine organic acid analysis. 6,7,10,11Mitochondrial respiratory chain analysis revealed decreased activities of several mitochondrial complexes and mitochondrial aconitase. 6ere, we report a patient with FDX2-related disorder who presented with severe rhabdomyolysis affecting respiratory muscle.Urine organic acid analysis revealed increased 2-hydroxyadipic acid (2-HAA), a potential biomarker for MEOAL and other Fe-S biosynthesis defects.Despite the severe presentation, the patient recovers quickly with parental nutrition containing high protein content, which is a viable treatment option during acute metabolic crisis. | Case description The proband is a 10-year-old male with Eastern European ancestry.He was born full term with uncomplicated pregnancy and delivery.He had normal developmental milestones but was noticed to have decreased exercise tolerance compared to other children. At the age of 6 years, he developed dyspnea and weakness after exercise.His examination at that time was notable for positive Gower's sign and hyperlordosis.He had multiple episodes of rhabdomyolysis presenting with lower extremity weakness and elevated creatine kinase (CK) following initial evaluation.At 9 years of age, he developed severe rhabdomyolysis episode leading to prolonged respiratory failure requiring tracheostomy.Brain magnetic resonance imaging (MRI) was normal.After Plasma growth differentiation factor 15 (GDF15) was significantly elevated at 5914 pg/mL (reference range (RR) 0-750 pg/mL).Clinical exome sequencing (Invitae, San Francisco, California) revealed compound heterozygous variants in FDX2 (NM_001397406.1),designated as c.1A > T (p.Met1?) and c.146-2A > G.The variant p.Met1? has been reported previously in the patients with FDX2-related disorder. 6,8The variant c.146-2A > G is predicted to cause abnormal splicing, disrupting protein function.Both variants have been reported at very low frequency in general population, supporting their pathogenicity.He was subsequently decannulated and discharged with home regimen of sodium bicarbonate and mitochondrial cocktail comprising of n-acetylcysteine, niacin, biotin and coenzyme Q10. Despite continuous intravenous dextrose fluid, his CK continued to be fluctuated between 15 358 and 39 814 U/ L. 
Increased dextrose concentration to 7.5% led to subsequent increase of lactate level.Parenteral nutrition with protein and lipid with 5% dextrose concentration were then initiated.He responded dramatically with decreased CK and lactate levels (Figure 3).Upon follow-up visit 2 weeks after discharged, he achieved resolution of the symptoms and normalization of CK and lactate.Urine organic acid analysis obtained at the follow-up visit revealed persistently increased 3-methylglutaconic acid (40.3 mmol/mol creatinine), 2-HIA (6.3 mmol/mol creatinine), 2-HAA (3.5 mmol/mol creatinine) (Figure 2B).We describe a proband with FDX2-related disorder who presented with profound rhabdomyolysis affecting respiratory function.][8][9][10] Most of the patients presented with progressive myopathy (11/11) and acute rhabdomyolysis (6/11).Extra-muscular manifestations including optic atrophy and hematologic complications have been reported in two Brazilian families harboring homozygous c.422C > T (p.Pro141Leu) but have not been reported elsewhere. 90,11 The proband is the first reported patient with compound heterozygous variants.The patients harboring missense variants affecting initiator codon did not demonstrate extramuscular involvement.It is possible that pathogenic variants in FDX2 may cause two distinct disease entities: a milder phenotype presenting with episodic rhabdomyolysis and progressive myopathy, and a more severe phenotype with multisystemic involvement, described as MEOAL.The former phenotype has been associated with the patients who harbor initiator codon variant in at least one allele, while other pathogenic variants which severely impact protein function may be associated with MEOAL phenotype.This phenomenon has been observed in COQ7related disorder. 12,13Larger cohort of patients with FDX2-related disorder are needed to establish genotypephenotype correlation. T A B L E 1 Clinical, biochemical and molecular characterization of the reported cases of MEOAL.2-KAA and 2-HAA. 23It is also unclear whether the formation of 2-HAA is from the enzymatic reaction similar to the conversion of 2-ketoglutaric to 2-hydroxyglutaric acid, 24 or occurs during analytic process.Interestingly, previous studies showed that lipoylation of PDH and KGDH in the fibroblasts and myoblasts of the patients with MEOAL are normal. 7,8However, recent study showed that reduced FDX2 expression led to the decrease in Fe-S biosynthesis and subsequent abnormal lipoylation. 25Untargeted metabolomic analysis of the blood specimen from the patient with MEOAL revealed branchedchain keto-acids (2-HIA, 2-hydroxy-3-methylvaleric acids), and TCA cycle intermediates, which indicate the disruption of BCKDH and KGDH activities. 102-HAA has been reported in urine organic acid analysis of the patients with DLD deficiency and Fe-S assembly defects due to IBA57 and NFU1. 26-282-HAA has been speculated to be potential biomarker for DLD deficiency, lipoic acid biosynthesis disorders, and Fe-S assembly defects. 
17,27,29We demonstrate that 2-HAA, when presents concurrently with branchedchain keto-acids, should raise the concern for multiple dehydrogenase complexes deficiency caused by DLD deficiency, lipoic acid biosynthesis disorders, and Fe-S assembly defects.In the proband, the levels of 2-HAA are not largely different during decompensated and well periods, which represents its role as a diagnostic marker.The levels of 2-HAA does not seem to correlate with the disease activity, however more studies are needed to establish its potentials as disease monitoring tool. Previous study showed that patient with FDX2-related disorder is sensitive to high dextrose fluid, leading to increased lactate production. 10,11The proband also demonstrated increased blood lactate with the increase of glucose infusion rate.This phenomenon has been known in PDH deficiency, and in fact, the patients with PDH deficiency responded to ketogenic diet. 30,31Intralipid has been shown to improve acute rhabdomyolysis episodes. 10e demonstrated the use of parenteral nutrition containing high lipid and protein content to reverse acute rhabdomyolysis and lactic acidosis.Despite the possibility of BCKDH dysfunction, the proband improved clinically and biochemically without abnormalities in plasma amino acids. Overall, we describe a novel case of FDX2-related disorder who presented with rhabdomyolysis episode and subsequent respiratory failure who responded to high fat and protein parenteral nutrition.We also demonstrated the use of 2-HAA as an adjunct marker for FDX2-related disorder and, possibly, other Fe-S assembly defects and lipoic acid biosynthesis disorders, especially when it presents with BCAA metabolites.We also demonstrated certain degree of genotype-phenotype correlation, although larger cohort of patients are needed to establish 1 Schematic representation of Fe-S cluster assembly.Adapted from.3,4 2 months of initial recovery, he was subsequently transferred to our facility for further investigation.Initial biochemical analysis during recovery state revealed normal CK and plasma amino acids, including normal glycine, and slightly increased plasma acetylcarnitine (C2) at 24.78 μmol/L (RR 4.21-20.60μmol/L). correlation and the performance of diagnostic markers.AUTHOR CONTRIBUTIONSPW and MMD designed and conceptualized the study.CP and MMD performed clinical analysis of the patient.PW, XH and MH performed biochemical analysis of the patient.PW, CP and MMD performed variant analysis.PW drafted the manuscript.All authors were involved with revising the manuscript.CP and MMD obtained consent.PW and MMD supervised the study. Although clinical significance of AAKAD is unclear, the patients with AAKAD exhibited hyperexcretion of 2-aminoadipic acid,
2024-01-26T17:22:10.692Z
2024-01-23T00:00:00.000
{ "year": 2024, "sha1": "3b32a8462ca20e160d5f9b7778b7fe2950df55c4", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jmd2.12408", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab5fc6ad02b7cdefca89c56009e13f0d02c21a00", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235874342
pes2o/s2orc
v3-fos-license
Digitizing and Evaluating Quality Assurance Documents at English Department, Faculty of Humanities This study aims to determine the level of satisfaction and effectiveness of digitizing quality assurance instruments such as the online form for thesis registration, proposal registration form, templates of some forms in English Department, Faculty of Humanities, Andalas University. The survey was conducted to the lecturers and students of English Departments. The survey covered ease of access, ease of understanding the narration or text, time efficiency, officially efficiency and so on. The results of the survey showed that most of the students and the lecturers were satisfied with the digitizing of some documents and forms. The respondents also give some suggestions to digitize any other documents and forms which are not being digitized yet. Besides, the survey also showed that 5 students out of 48 and 3 lecturers out of 23 said that the digitization still lack of socialization. INTRODUCTION Quality Control Circle (GKM) is a group of people who work together and carry out activities periodically to strive for quality control by identifying, analyzing and taking action to solve problems faced at work. Basically, the main tasks and functions of GKM is to carry out the quality assurance process in an agency or institution. A university which is an institution of higher education consists of various faculties and study programs. Quality assurance at the tertiary level or at the university level is carried out by the Lembaga Pengembangan Pendidikan dan Penjaminan Mutu (Institute for Educational Development and Quality Assurance), while at the faculty level it is carried out by Badan Penjaminan Mutu (BAPEM) or the Quality Assurance Agency. BAPEM is responsible to make a planning, implementing, coordinating, monitoring, evaluating, controlling, and improving the Internal Quality Assurance Standards (SPMI) at the faculty level. At the department level, it is known as GKM (Quality Control Circle). GKM is a quality assurance unit which responsible for implementing, coordinating, monitoring, evaluating, and controlling the quality of the implementation of learning at the department/study program level [1]. English Department as one of the departments under the Faculty of Humanities also carries out quality assurance activities. At the study program level, the Quality Control Circle carries out a quality assurance process related to monitoring and evaluation of various student services, lecturer performance, conducting analysis, making reports and providing recommendations for continuous improvement from the implementation of lecture activities and other activities related to students and lecturers. In addition, GKM is also tasked with compiling standard operating procedures at the study program level and ensuring that all SOPs that have been made can be implemented properly. In accordance with the demands of accreditation, both old and new versions, each study program must carry out many surveys related to student's satisfaction, lecturers, graduate user, partners and collaboration partners to find out the extent of services that have been provided by the study program to students such as teaching methods, completeness of teaching materials, amount of face to face, assessment methods and so on to produce output that meets the standards of the world of work. 
In addition, evaluation of the various activities that have been taking place is also very necessary to determine the shortcomings or weaknesses of the program and then plan their repairs. To get maximum results, the study program needs to prepare and compile the survey instruments maximally, so that they can be analyzed to determine the advantages and disadvantages of the study program. In the future, that are deemed unsatisfactory and maintain those that are satisfactory. Related Literatures A survey is research techniques by giving clear boundaries to the data; investigation; review [2]. Structured questions are called questionnaires. The questionnaire contains questions that will be given to respondents to measure variables, the relationship between existing variables, and can be in the form of experiences and opinions of respondents. The survey method is usually used to obtain data from a certain natural place, but the researcher performs treatment in data collection (questionnaires, tests, interviews, and so on), the treatment given is not the same in the experiment [3]. To maximize the work of the Quality Control Circle at the study program level, concrete steps are needed so that it can be more efficient in terms of time, coverage, and implementation. One of the ways to conduct surveys and evaluations quickly and efficiently is by digitizing all questionnaires related to study programs. According to [2], digitization is the process of giving or using a digital system. Meanwhile, according to [4] digitization is the process of converting graphic information available on paper to digital format. In the process of digitizing it requires time, effort, money, and requires experts who master the techniques. According to [5] Online surveys have several advantages compared to non-online surveys. These advantages include:  It can be quickly disseminated to respondents via a link.  It can reach a large number of online respondents.  The summary presentation and display of results can be obtained in real-time through existing features in the software or application used.  It can save printing costs and transportation costs to reach respondents.  It can reduce the error rate in data organization.  The respondents have more time flexibility to respond to surveys.  The respondents feel more secure in their confidentiality because they can respond anonymously and do not need to face researchers directly. Form and Document The word form comes from Dutch, namely formulier, which means a paper containing several formal questions that must be filled in. Form text is a sheet in the form of a card or paper of a certain size which contains data or information that is fixed in nature and also some other non-permanent parts. Another meaning according to the [2] (Indonesian Language Official Dictionary) the form text is a paper that has formatted space and must be filled in by the user. Form has many functions such as to find an information, to collect the data, to record the transaction or as a tool of communication. Through form we can find and collect any information we need for our research or another purposes. In university we use form as the tool for registration, asking our needs or to support our research. As well as form documents also plays an important role in one institution. Documents are needed for every activity we do such as letter of decree, letter of assignment, invitation, etc. 
According to [3] document is written or printed letter that can be used as proof of information (such as birth certificates, marriage certificates, agreement letters); printed articles or essays sent by post; sound recordings, pictures in films, etc. which can be used as evidence for information. METHODS This research applied qualitative and quantitative methods. Qualitative research is an inquiry strategy that emphasizes the search for meaning, understanding, concepts, characteristics, symptoms, symbols and descriptions of a phenomenon; focused and multi method, natural and holistic; prioritizing quality, using several ways, and presented in a narrative. From the other side and in simple terms it can be said that the purpose of qualitative research is to find answers to a phenomenon or question through the application of scientific procedures systematically using a qualitative approach [6]. According to [3] Qualitative research method is called a new method, because of its recent popularity. It is called post-positivistic because it is based on the philosophy of post-positivism. This method is also called an artistic method, because research has less patterned, and is called an interpretive method because the research data is more concerned with the interpretation of the data found in the field. Qualitative research methods are often called naturalistic research methods because the research is carried out in natural conditions; it is also called the ethnographic method, because initially this method was more widely used for research in the field of cultural anthropology; referred to as a qualitative method, because the data collected and analysis is more qualitative in nature. Advances in Social Science, Education and Humanities Research, volume 506 Meanwhile Quantitative research is a process of finding knowledge using data in the form of numbers as a tool to analyze information about what you want to know [7]. This research adopted two methods namely qualitative and quantitative since this research is aimed to find out the effectiveness and satisfaction of digitizing department's documents and forms then find the number of the respondents who response for strongly agree, agree, do not know, disagree and strongly disagree on each item being questioned in the survey. The technique used in collecting the data is an online survey. The researcher conducted a survey using an online system and then the data obtained from the questionnaire was analyzed and narrated. The speakers in this study were all alumni of the English Department, partners and stakeholders from the English Department. In this study, data analysis was carried out by describing the survey results obtained to obtain service satisfaction results for students, lecturer performance and others. The writer used Likert Scale in some questions in the survey [8]. This scale is used to measure the attitude and opinion of the respondents. With this Likert scale, respondents are asked to complete a questionnaire which requires them to indicate their level of agreement with a series of questions. The questions or statements used in this research are usually referred to as research variables and are determined specifically by the researcher Survey of Lecturers The survey related to the advantages of digitalization on department's documents and form was done to the lecturers and the students. The first survey was done to the lecturers as the user of these documents and forms. 
In general, all English Department lecturers who filled out the satisfaction questionnaire of feel satisfied with the document digitization process that ha d been carried out. As can be seen in table 1, the positive answers are in the form of strongly agree and agree to all the questions surveyed points are above 75%. The negative answer in the form of "disagree" was only found on four questions and not more than 15% of the total participants. In addition, there are still doubtful answers to the three survey questions but the number is also small, at no more than 15%. Table 1 also shows that most of the lecturers agree with the first, second, and third questions that investigate the level of ease of accessing these digital documents (Figure 1), the clarity of the document editor (Figure 3), and the ease and clarity of the filling procedure ( Figure 2). Although not all of them gave "very positive" answers, just 'agree' answers which on average are above 50% on the three points have shown recognition that the Department has carried out the digitization procedure properly in an easy-to-understand instruction format and understandable procedures. The success of providing a good digitalization program implementation process has also proven to have implications that are highly appreciated by lecturers, as illustrated by the results of their responses to statements number five to number eight. All of these statements are classified as statements that investigate the success and usefulness of this document digitization program for lecturers in terms Advances in Social Science, Education and Humanities Research, volume 506 of time efficiency, ease of official correspondence, paper savings, ease of work, and the ease of obtaining evidence of their performance. The responses of the lecturers participating in the questionnaire to all of these statements proved positive. Not one of the lecturers gave a negative or hesitant response to the benefits of this program in the various aspects investigated. In contrast to the first classification, the majority of the answers given were on the very positive side, namely "strongly agree". Although the program implementation process and its output is proved successful in satisfying the lecturers the provision of complaint collection mechanisms and the completeness of documents that had been digitized still needed to be improved. The success of the socialization of this program by the Department to the lecturers needs to be accompanied by the provision of a complaint mechanism and troubleshooting/problem anticipation that can be easily accessed by all lecturers so that all problems that may be faced in matters related to digitizing documents can be resolved more effectively. The next survey was conducted to the students of English Departments, especially 2015, 2016, and 2017 generation since these students often take advantage of the documents and online forms provided by the department such as proposal seminar registration, thesis registration, template of some forms and another documents which are needed by the students during their studies in English Department, Faculty of Humanities, Andalas University. The results of the survey which were followed by 47 students of English Department regarding to the digitalize document were not much different when comparing to the results of the survey to the lecturers. 
In general, student responses to each survey item showed more positive results than lecturers with an average of 80% give positive answers on ten items and the last item (number 11) which received 61.7% responses strongly agree and agree. A striking difference is seen in the high number of students' doubts in giving answers. Meanwhile, the negative answer is only in the "disagree" option and on average it is on each of the points surveyed. This is because socialization to students is more frequent than to lecturers. Besides, students often ask if there are things they do not understand regarding these documents and online forms. Table 2, the majority of students gave a more positive assessment than lecturers in terms of the process of implementing this document digitization program. Statements one, two, and three received positive responses above 59%, slightly above the lecturers' answers which were no more than 57.1%. However, students did not seem too sure about the ease of access, editorial clarity, and the ease / clarity of the procedure for filling out the department's digital documents. This doubt is also reinforced by high number of responses for "don't know|" answer to these three question compared to the number "disagree". Therefore, although the student response is considered very positive, it should also be noted that there are students who have difficulty accessing and following instructions for using digital documents in their activities in the department. Survey of Students This doubt also seems to be reflected in students' answers to the effects of digitizing this department's documents. Students 'agree' answers regarding the problems of time efficiency, ease of academic administration, paper saving, ease of carrying out academic activities, and ease of obtaining evidence of their activities, being in the 58-60% range seems to reinforce at first glance any doubt about the benefits of digitizing this department's documents for them. Although time efficiency and paper savings appear to be undeniable positive benefits, there were still students who answered that they did not know or did not agree with these benefits. Similar to the responses given by the lecturers, points regarding socialization and the mechanism for anticipating problems related to digital documents received higher neutral (don't know) and negative (disagree) responses. The highest level of student confidence (strongly agreed) with the socialization process carried out by Head of English Department was at 66% while the majority of students also acknowledged that the complaint mechanism and problem solving related to digital documents had been submitted. Although the majority of students also admitted that the documents digitizing the department were complete, there were also 22.5% "don't know" answers and 12.8% "disagree" answers regarding this completeness. The accumulated number of these two items (35.3%) can be used as a warning sign for departments to re-evaluate the completeness of this digital document as well as to check if the cause is also related to socialization issues that may be inadequate for a small number of students. 
CONCLUSION After doing the survey on the effects of digitalization of quality assurance instruments in English Departments, Faculty of Humanities, Andalas University, the researcher found that 20 lecturers and 43 students give positive response for some items such as the easiness to access the documents and form, easiness in understanding the text, time efficiency, paperless and the socialization. Meanwhile the small numbers of negative response come from the completeness of documents and forms and complaint mechanism. By this result the departments need to re-evaluate the completeness of documents and to socialize it continuously.
2021-07-16T00:05:44.162Z
2021-02-03T00:00:00.000
{ "year": 2021, "sha1": "979d1638c50b8ba370f0de2b7ff048d8252b8260", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125952037.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3caf5ed519917781427bc496888683ec8613c77e", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Engineering" ] }
236714448
pes2o/s2orc
v3-fos-license
Evaluation of risk factors for patients suffering from diabetic foot ulcer Background: Diabetes is one of the main problems in health systems in the world. The present study was conducted to assess the lesions in patients suffering diabetic foot ulcers. Materials & Methods: 104 patients with diabetic foot ulcers of both genders were included. All the cases were managed following conservative and surgical approach. Results: Out of 104 patients, males were 64 and females were 40. Duration of diabetes was <5 years in 14 patients, 5-10 years in 36 and >10 years in 54 patients. The difference was significant ( P< 0.05). The lesion was gangrene seen in 28 patients, cellulitis in 12, ulcer in 50 and abscess in 14 patients. The difference was significant ( P< 0.05). Conclusion: Lesion in diabetic patients were gangrene, cellulitis, ulcer and abscess. Maximum cases occurred in subjects with >10 years of diabetes history. Introduction Diabetes is one of the main problems in health systems in the world. The world prevalence of diabetes among adults was 6.4%, and will increase to 7.7% by 2030 [1,2] . Patients with diabetes are at greater risk of complications, the most important of them are diabetic neuropathy and peripheral vascular disorders that lead to diabetic foot ulcers [3] . Currently the most common cause of neuropathy in western countries is diabetes [4] . Diabetic neuropathy will develop in 50% of type 1 and 2 patients with diabetes. Diabetic foot problems are the most common cause of hospitalization in patients with diabetes and it accounts for 2 million patients with diabetes in the United States annually and often need long-term hospital admission. Diabetes is a major factor in half of all lower extremity amputations [5] . Few of the well-known complications of diabetes are Peripheral neuropathy (PN) and peripheral vascular disease (PVD). Patients with PN and PVVD lack the conventional symptoms, but are still considered to be at high risk for occurrence of foot complications. PN and PVD are the main causes of non-traumatic lower limb amputation [6] . The risk of ulceration and amputation among diabetic patients increases by two to four folds with the progression of age. It has also been proven by many longitudinal epidemiological studies that among diabetic patients, the life time foot ulcer risk is about 25% [7] . The present study was conducted to assess the lesions in patients suffering diabetic foot ulcers. Materials & Methods The present study was conducted at Department of Surgery, Barasat District Hospital (A DNB Teaching Hospital), Barasat, West Bengal. It comprised of 104 patients with diabetic foot ulcers of both genders. All were informed regarding the study and their written consent was obtained. Demographic data such as name, age, gender etc. was recorded. A thorough physical examination was performed. Investigations such as hemoglobin, TLC, DLC, ESR, blood urea, serum creatinine and blood sugar was estimated. In all the diabetic foot patient's pus was sent for culture and sensitivity examination before starting antibiotics. All the cases were managed following conservative and surgical approach. For the management of diabetic foot infection, debridement, drainage, and washing and dressing of wounds were regularly done. Antibiotics used included cefoperazone, linezolid, clindamycin, metronidazole, aminoglycosides, meropenem, and amoxicillin-clavulanic acid. 
Patients with ~ 89 ~ diabetes alone were treated by traditional means mostly by oral anti-diabetic agents but those having very high level of bloodglucose were subjected to take insulin therapy. Sixteen (15.38 %) patients underwent amputation during the course of this study. One underwent major amputation (amputation above the ankle), the remaining underwent minor amputations (below the ankle), mainly of the ray (n-5), toes (n-8) and 1 Syme amputation. Results thus obtained were subjected to statistical analysis. P value less than 0.05 was considered significant. P value less than 0.05 was considered significant. Table III shows that lesion was gangrene seen in 28 patients, cellulitis in 12, ulcer in 50 and abscess in 14 patients. The difference was significant (P< 0.05). Discussion Diabetic mellitus has reached epidemic properties worldwide as we enter the new millennium. The world health organization has commented there is an apparent epidemic over the next decade the projected number will exceed 200 million [8] . Diabetic foot is a serious complication of diabetes mellitus when compared with people without diabetes [9] . Foot ulcers are significant complications of diabetes mellitus and often precede lower extremity amputation [10] . Recurrence of the foot infection was common among India diabetic patients about 52%.6 Infection and gangrene of the lower extremities are the most common lesions requiring hospitalization in diabetes and are a major cause of morbidity [11] . The present study was conducted to assess the lesions in patients suffering diabetic foot ulcers. In present study, out of 104 patients, males were 64 and females were 40. Ravichandran et al. [12] in their study 100 patients admitted to the surgical ward with diagnosis of diabetic foot were selected. The mean age of the subjects was 49.28 + 6.88 years. Out of 100 patients, 23 were females and 77 were males. They observed that 27 patients were undetected at the time of admission at hospital. Majority of patients (n=46) had duration of diabetes from 5-10 years. 19 patients had duration of diabetes less than 4 years, 5 patients had duration of diabetes from 11-15 years. Most of the patients present with more than one lesion. Only major lesion is considered here. Ulcer was the major lesion seen in present series being present in 72 patients. We found that duration of diabetes was <5 years in 14 patients, 5-10 years in 36 and >10 years in 54 patients. We observed that lesion was gangrene seen in 28 patients, cellulitis in 12, ulcer in 50 and abscess in 14 patients. Wu L et al. [13] determined the prevalence of various risk factors responsible for occurrence of diabetic foot in patients with diabetes. They retrospectively evaluated a total of 296 patients who were admitted to the tertiary hospital because of diabetes. A questionnaire was framed and was made to be filled by all the patients. They also assessed their foot along with presence of absence of peripheral sensory neuropathy (PSN) and peripheral arterial disease (PVD). They observed foot deformity in 124 patients with the most prevalent abnormality being hallux valgus, which was observed in 65 percent of the patients of their study. They concluded that risk factors for foot ulceration and lack of fool care knowledge was rather common in a hospital-based diabetic population, emphasizing the importance of implementing simple and affordable screening tools and methods to identify high risk patients and providing foot care education for them. Nyamu PN et al. 
[14] evaluated the prevalence rate of patients with diabetic foot ulcers and the risk factors. They evaluated a total of 1788 diabetic patients and observed that in approximately four to five percent of the patients had diabetic foot ulcer. They observed presence of diabetic foot ulcer in patients with comparatively longer duration of diabetes. From the results, they concluded that the risk factors of diabetic foot ulcers were poor glycemic control, diastolic hypertension, dyslipidaemia, infection and poor self-care. Conclusion Authors found that lesion in diabetic patients were gangrene, cellulitis, ulcer and abscess. Maximum cases occurred in subjects with >10 years of diabetes history.
2021-08-03T00:06:24.845Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "2e6ff9d287917d8370274b67aea3cdc2639a8282", "oa_license": null, "oa_url": "https://www.surgeryscience.com/articles/667/5-2-12-998.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "79a2786f09aab1b6a50caa61ba4d6424b6a5144d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14109149
pes2o/s2orc
v3-fos-license
On Positive Solutions of the Dirichlet Problem Involving the Extrinsic Mean Curvature Operator In this paper, we are concerned with necessary conditions for the existence of positive solutions of the Dirichlet problem for the prescribed mean curvature equation in Minkowski space −div ∇u 1 − |∇u| 2 = f (u) in Ω, u = 0 on ∂Ω, whose supremum norm bears a certain relationship to zeros of the nonlinearity f . Introduction Hypersurfaces of prescribed mean curvature in Minkowski space, with coordinates (x 1 , . . ., x N , t) and metric ∑ N i=1 (dx i ) 2 − (dt) 2 , are of interest in differential geometry and in general relativity (see e.g., [2,14]).In this paper we are concerned with necessary conditions for the existence of such a kind of hypersurfaces which are graphs of solutions of the Dirichlet problem −div ∇u 1 − |∇u| 2 = f (u) in Ω, u = 0 on ∂Ω. (1.1) We assume throughout that Ω is a bounded domain in R N , with a boundary ∂Ω of class C 2 , and f : R → R is a C 1 -function satisfying the assumption (H1) There exist s 0 , s 1 , s 2 ∈ R with 0 < s 0 < s 1 < s 2 such that f (s 0 ) ≤ 0, f (s j ) = 0 for j = 1, 2, f (s) < 0 in (s 0 , s 1 ) and f (s) > 0 in (s 1 , s 2 ). Corresponding author.Email: mary@nwnu.edu.cn Note that relatively little is known about the existence of positive solutions of problem (1.1) when Ω is a general bounded domain, see [11].Yet, for one-dimensional cases and radial cases of (1.1), the existence and multiplicity of positive solutions have been widely considered in recent years, see e.g., [3,4,8,9,21,22] and the references therein.Most of them allowed the nonlinearity f to be positive.It is worthy to point out that the result in [22], Ma and Lu used the quadrature arguments to show that if then the nonlinear Dirichlet problem with one-dimension Minkowski-curvature operator has at least two positive solutions u satisfying for sufficiently large λ > 0. Their result is an analogous of the well-known result due to Brown and Budin [7], who studied the problem (1.3) with κ = 0 by using a generalization of the quadrature technique of Laetsch [18].One is lead to show whether (1.2) is in fact a necessary condition for the existence of any positive solution of problem (1.3) satisfying (1.4).We shall answer this question in the affirmative employing the method of lower and upper solutions. The existence and multiplicity of positive solutions for the analogous problem associated with the Laplacian operator ∆u have been extensively studied in [1,12,13,17] in the case when the nonlinearity f is allowed to change sign.In [17], Hess showed that, for all λ sufficiently large, (1.2) is a sufficient condition for the existence of any positive solution u of (1.5) satisfying If the domain Ω satisfied a certain symmetry condition, it was proved by Cosner and Schmitt [12] that there exist lower bounds on the C( Ω) norm for certain positive solutions of (1.5).And then, Dancer and Schmitt [13] showed that (1.2) is in fact a necessary condition for the existence of any positive solution u of (1.5) satisfying (1.6), and that holds for arbitrary domains, where r ∈ (s 1 , s 2 ) is given by Also recently, the above results are generalized by Loc and Schmitt in [19], who established (1.2) is a necessary and sufficient condition for the existence of positive solutions of quasilinear problem involving the p-Laplace operator. 
Motivated by above papers [12,13,17,19,22], we shall attempt to show that (1.2) is in fact a necessary condition for the existence of any positive solution of problem (1.1) satisfying (1.6) and that (1.7) holds on the general bounded domain.To wit, we have Theorem 1.1.Assume (H1) and (1.9) If problem (1.1) has a positive solution u, u cannot satisfy (1.6). Theorem 1.2.Assume (H1) and (H2).Let r be defined by (1.8).If u is a positive solution of (1.1) So, to get the multiplicity of solutions of (1.1), we only need to work with the function f (s) in the interval 0, The contents of this paper have been distributed as follows.In Section 2, we construct a linking local lower solution defined on different subdomains.Finally, Section 3 is devoted to the proof of our main results by applying the method of lower and upper solutions as developed in [10] and a result of [9] about the radial symmetry of positive solutions of (1.1) if Ω is a ball. For other results concerning the Neumann problems associated with the prescribed mean curvature equation in Minkowski space we refer the reader to [5,20].The basic tools concerning Sobolev spaces and maximum principles which we employ in this paper can be found in [16,23]. Linking local lower solution By a similar argument from [6, Lemma 1.1] with obvious changes, we give the following lemma on linking local lower solution defined on different subdomains.Lemma 2.1.Assume that u is a positive solution of (1.1) which satisfies (1.6).Assume f (0) ≥ 0. Let B denote a ball in R N , centered at the origin such that Ω ⊆ B. We denote by ν the unit outward normal to Ω and ∂u ∂ν ≤ 0 on ∂Ω.Let α(x) be defined by Then α is a lower solution of the problem (2.2) and ∂u ∂ν ≤ 0 on ∂Ω.Then it follows from f (0) ≥ 0 that Thus α is a lower solution of (2.2). Proof of main results We start with the following simple consequence of the strong maximum principle. Lemma 3.1.Let g : R + → R be a C 1 -function, a 0 > 0 a number such that g(a 0 ) ≤ 0, and u a classical positive solution of Then u ∞ = a 0 . Proof.Suppose, to the contrary, that u ∞ = a 0 .Then 0 ≤ u ≤ a 0 for all x ∈ Ω.Note that there exists m ≥ 0 such that g(s) and, since −div Subtracting, we get The maximum principle implies that a 0 − u > 0 in Ω and hence u ∞ < a 0 , a contradiction. The following result is a simple modification of [9, Appendix], but we include the proof for the sake of completeness.Lemma 3.2.Assume that f : R → R is of class C 1 .Then any positive solution u ∈ C 2 ( BR ) of (3.5) is radially symmetric.Moreover, u (r) < 0 for r ∈ (0, R).Proof.To prove u is radially symmetric, we use the result stated in the claim of [9,Appendix].Notice that u ∈ C 2 ( BR ) given positive solution of (3.5) guarantees that there exists a constant L ∈ (0, 1), such that ∇u ∞ < L. (3.6) Now by using the same truncation technique in [9,Appendix] and [15, Corollary 1], we may deduce that u (r) < 0 for r ∈ (0, R).Indeed, let where the functions α 1 , α 2 : R → R and the constant c are such that ā ∈ C ∞ (R), ā is increasing and positive.Thus u is a positive solution of the modified problem It is easy to check that the second order differential operator ∂ x i u∂ x j u∂ x i x j u + f (u) associated with (3.8) is uniformly elliptic and satisfies all the assumptions in [15, Corollary 1], and consequently, u is radially symmetric and u (r) < 0 for r ∈ (0, R). 
Proof of Theorem 1.1.Assume (1.1) has a positive solution u which satisfies (1.6).We assume first that f (0) ≥ 0, this restriction will be removed later.By Lemma 2.1, α defined by (2.1) is a lower solution of (2.2).Obviously, β(x) = s 2 is an upper solution.Hence it follows from [10, Proposition 1] that (2.2) has a solution v(x), such that x ∈ Ω, which together with Lemma 3.1 imply that (2.2) has a positive solution v such that Moreover, it follows from Lemma 3.2 that v is radially symmetric and v (r) < 0 for r ∈ (0, R), where R is the radius of B and Ω ⊂ B. In particular, v has a unique maximum at r = 0. Hence v is a positive solution of the ordinary differential equation Multiplying (3.10) by v and integrating over (0, r), we obtain where Choose r 0 so that v(r 0 ) = s 0 .If N > 1, then we get and so On the other hand, it follows from (3.9) that f is nonnegative in v ∞ , s 2 .Combining this with (1.9) imply that which contradicts (3.14). If N = 1, then it follows from (3.11) that Since v ∞ = v(0) and v (r) < 0 for r ∈ (0, R), then v (r 0 ) = 0 and H(v (r 0 )) > 0. Hence (3.14) holds.Thus, by the same argument to treat the case N > 1, we also get a contradiction.We note that the assumption that f (0) ≥ 0 is needed in order to conclude that α(x) is a lower solution of (2.2). Next assume that f (0) < 0. Again assume that (1.1) has a positive solution v satisfying (1.6).Define f so that Here we use that v ∞ < s 2 .Then and as before β(x) = s 2 is an upper solution.Thus, it follows from [10, Proposition 1] that (3.17) has a solution u satisfying v(x) ≤ u(x) ≤ s 2 , i.e., u satisfies (1.6).Let α(x) be defined by Moreover, it follows from Lemma 3.2 that ṽ is radially symmetric and ṽ (r) < 0 for r ∈ (0, R).In particular, ṽ has a unique maximum at r = 0. Hence ṽ is a positive solution of the ordinary differential equation ṽ Multiplying (3.21) by ṽ and integrating over (0, r), we obtain where H is defined by (3.12).Choose r0 so that ṽ(r 0 ) = s 0 .We now proceed as in the first part of the proof with ṽ in place of v, and so we get a contradiction. According to Theorem 1.1, we show that (1.2) is in fact a necessary condition for the existence of positive solution u of problem (1.3) satisfying (1.4).Since the parameter λ does not play a role in our consideration we shall replace λ f by f .Then, arguing as in the proof of Theorem 1.1, the conclusion can be proved. Next, we give the proof of lower bounds on the C( Ω) norm for certain positive solutions of (1.1). Proof of Theorem 1.2.Assume, to the contrary, that u ∞ < r.Let f be defined as follows: where g(s) is chosen such that f > 0 in (s 1 , s 2 ), f (s 2 ) = 0, and s 0 f (s)ds ≤ 0. This clearly can be done since
2016-11-04T07:44:49.415Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "1d248d0b46a1a2a3bcf59208faa31ce66868bd15", "oa_license": "CCBY", "oa_url": "https://doi.org/10.14232/ejqtde.2016.1.98", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "1d248d0b46a1a2a3bcf59208faa31ce66868bd15", "s2fieldsofstudy": [ "Mathematics", "Philosophy" ], "extfieldsofstudy": [ "Mathematics" ] }
16157331
pes2o/s2orc
v3-fos-license
Factors affecting professional ethics in nursing practice in Iran: a qualitative study Background Professional ethics refers to the use of logical and consistent communication, knowledge, clinical skills, emotions and values in nursing practice. This study aimed to explore and describe factors that affect professional ethics in nursing practice in Iran. Methods This qualitative study was conducted using conventional content analysis approach. Thirty nurses with at least 5 years of experience participated in the study; they were selected using purposive sampling. Data were collected through semi-structured interviews and analyzed using thematic analysis. Results After encoding and classifying the data, five major categories were identified: individual character and responsibility, communication challenges, organizational preconditions, support systems, educational and cultural development. Conclusions Awareness of professional ethics and its contributing factors could help nurses and healthcare professionals provide better services for patients. At the same time, such understanding would be valuable for educational administrators for effective planning and management. Background Nursing mission is to provide high quality healthcare and maintaining and improving community health [1]. Ethics is considered as an essential element of all healthcare professions including nursing. Thus, it has a central role in nurses' moral behavior toward patients, which strongly influences on patients' health improvement [2]. Professional ethics constitutes legitimate norms or standards that govern professional behavior of both client and non-client [3]. Indeed, professional ethics addresses obligations of a profession towards people who are served [4]. An inherent part of nursing is to respect human values, rights and dignity [5]. From a clinical point of view, nursing has three basic principles of caring, namely ethics, clinical judgment, and care [6]. Vinson [7] points to five elements that are epistemological and fundamental to nursing, which include the following: knowledge of nursing, art of nursing, individual knowledge, ethics of nursing, and sociopolitical knowledge. From moral and philosophical perspective, nursing ethics incorporates using of critical thinking and logical reasoning in clinical practice on the basis of values [7]. Nursing ethics might also be considered as competency in nurses without any direct impact on their clinical activities, which could be separated from practical duties of nursing. However, such ethics are highly interwoven with clinical practices that cannot be alienated from them [8]. Lemonidou et al. [9] suggest that ethical commitment to care is an integral part of nursing practice in nurse-patient relationship. Nowadays, health care settings are changing rapidly. Thus, nurses are facing ethical challenges in healthcare that put them at risk of ethical conflict [10]. Although meeting the requirements of professional ethics in patients' care is essential, studies revealed that standards of professional ethics are not observed in nursing practices. Indeed, standards and criteria of professional ethics are not considered based on patients' preferences and culture [11]. According to previously conducted studies, nurses had poor attachment to professional ethics. Sokhanvar [12] reported that nursing awareness and application of ethical principles in patient's care and clinical decisions were not desirable in Fars, Iran. 
Additionally, nurses were not interested in applying ethical knowledge in their work [12]. Tefagh et al. [13] found that safe medication administration by Iranian nurses was significantly poor and lacked adherence to the professional ethics. A comparative study on nurses' perceptions of ethical problems in China and Switzerland revealed that there were differences in some ethical concepts including culture and faith. Chinese nurses were more nervous, sad and dissatisfied during and after the work compared to nurses from Switzerland. However, both groups experienced ethical problems of poor communication with patients due to heavy workload [14,15]. Another study reported that nurses might confront with various problems during their works [15]. Thus, ethical issues should be taken seriously as a basic requirement. On the other hand, the most comprehensive and complete approach to observe ethical standards is qualitative approach in which participants share their experiences [16,17]. Such information helps administrators promote professional ethics. This study aimed to explore and describe factors affecting professional ethics in nursing practice in Iran. Methods This qualitative study was conducted using conventional approach of content analysis. It has been intended to explore and describe factors affecting professional ethics in clinical practice. In general, content analysis is used when the objective of a study is to describe a phenomenon, and there are limited ideas [18] or fragmented knowledge about it [19]. Additionally, the phenomenon of professional ethics for nursing and affecting factors has vague aspects, which should be clarified through content analysis. Participating nurses were selected by purposive sampling from hospitals affiliated to Jahrom University of Medical Sciences in Jahrom, Fars, Iran. A total of 30 nurses including 25 female and five male nurses with at least 5 years of experience participated in the study. The sample size was chosen based on the data saturation. Data were collected using individual face to face and semi-structured in-depth interviews. Each interview took between 60 and 100 min. All interviews were conducted in the participant's workplace in a quiet setting. Firstly, interviews were started with main questions in accordance with participants' statements. Then, it was continued by probing questions. All interviews were initiated with this question: "as a nurse please tells me about the ethical issues you have faced in your workplace". As the interview progress, these questions were asked: what factors affect professional ethics in your clinical care? All interviews were recorded and transcribed immediately. Conventional approach for data analysis was implemented; no structure was used for categorizing data. This approach was carried out over three phases including: preparation, organizing and writing the report. In the preparation phase, each interview was treated as a unit of analysis. The recorded interviews were transcribed precisely and read several times to gain general impression. In the organizing phase, unites of meaning for each interview was highlighted, condensed, and openly coded. Then, codes with similar meanings were arranged into subcategories and main categories. Finally, the latent meaning of the data was reported in the reporting phase [19]. Conformability of findings was evaluated to achieve the reliability of collected data [20]. 
To achieve credibility of findings, content analysis, selecting appropriate units of meanings, way of categorizing data, and making judgment about similarities and differences of categories are very important [21]. Accordingly, the credibility of findings of this study was evaluated through spending enough time for data collection and analysis. Member check was also performed; data analysis was carried out by the second author for peer check. The approval of study was obtained from the Ethics Committee of Jahrom University of Medical Science. The participants were asked to sign a consent form; they were assured that they can withdraw from the study at any time. Results The findings highlighted two main themes: internal factors that deal with individual characters, responsibility and communication challenges; external factors that were reflected on organizational preconditions, support systems, educational and cultural development (Fig. 1). Internal factors: individual character and responsibility In this category, were extracted accountability, work conscience, positive energy associated with others, and self-control skills in conflicting situations. Most nurses pointed to professional ethics and accountability as important features that contributed to making a background for the ethical context of healthcare setting. Accountability: one of the participants with 11 years of experience mentioned that if nurses devoted themselves to their work sincerely, accepted their responsibility, and acted accordingly, the patients would receive medical care appropriately. For example, a nurse might use his/her hand (i.e. by putting it on the patient forehead) to see if the body temperature was normal instead of using a thermometer. Such inappropriate methods could jeopardize patient's health, because there was high possibility of making mistake. Another participant with 10 years of experience about work conscience said: "I think that work conscience is a personal issue, and no one can be forced to accept it." She continued "there are specific times when my shift at work is over, but I am still in the middle of the work. I take the responsibility of patients care and continue until I finish my duty, even if I have to stay more." However, some nurses explain their shift is over and they have to wait for the next shift. In the interim, work conscience is another factor that is important in work discipline and generates sense of duty in the individual. By character, we mean a set of behavior and manners of thinking that individuals use in their everyday life situations. Character is also determined by other characteristics that are unique to that individual; it has been established in him/her and is predictable. Regarding the character of an individual, the participants believed that personality is formed during childhood in both family and society. However, they added that the environment is responsible for changing 40-50 % after the formative years. Manager can adapt to the environment; for example, with a little encouragement can be a very good influence. Regarding the possession of positive energy, a nurse said: "in my opinion, a nurse should work in an environment in which marginal issues are excluded. In addition, it is brimmed with positive attitudes and energy. Because patients admitted to hospital are often suffering from disruption in health condition, and this is uncomfortable for them and their families. 
Therefore, nurses should provide care with positive energy for patients and their relatives, reinforce their spirit to recovery and create a sense of hope. Regarding the internal control skills, a participant expressed: "A professional nurse has to be able to be considerate in different situations. For example, I had a patient suffering from multiple fractures. He was using bad words when he has severe pain. While I was doing his treatment, he even pushed me hard so that I felt down on the floor, but it did not make me upset. I did my best in spite of the patient's behavior until I finally could reduce his pain. Internal factors: communication challenges In this category, the following themes were extracted: communication between doctors and nurses, professional relationship among staff, nurse-patient relationship, and effective communication and interaction in the workplace. In this regard, a participating nurse with 8 years of experience said: "a doctor found an error in a patient's medical records. Although it was not my fault, he insulted me verbally." she added: "Unfortunately, reporting such cases is not beneficial. Because authorities do not attend or respond to such instances in the healthcare system." Nurse-patient relationship: A participant with 12 years of experience stated: "I had an end stage patient. Even though the medical team was disappointed to him, and his level of consciousness was not at full stage, whenever I went to take care of the patient, I talked to him without receiving any response. The patients' family told me that he opened his eyes when the nurse was giving medication and nursing care. I feel that the patient was waiting for someone who cared about him; the patient even in the absence of any communication could realize that someone had sympathy for him." Nurse-patient communication: A participant with 8 years of experience said: "I believe that good communication with patients has miraculous results. For example, I had a patient with cardiac and respiratory diseases. He had an excruciating pain that made him scream. While I was doing my job as a nurse, I talked to him in a soothing manner and kept telling that he would be alright. I thought, based on his behavior, talking to the patient was effective enough to somehow make him relaxed." External factors: organizational preconditions In this category, the following themes were chosen: facilities and equipment; observance of nurse-patient ratio; heavy work load and shortage of staff; nurses' right to choose an appropriate ward. Hospital facilities and equipment play an important role in establishing professional ethics. Non-standard equipment can interfere with providing proper care and may even mislead medical staff judgment about patients' conditions. For example, we had a patient with kidney problems who had undergone surgery. While the patient was suffering from infection, temperature control device showed his fever lower than the actual degree. This seemingly simple incident could increase the length of hospitalization and hospital costs. Inappropriate nurse-patient ratio and heavy work load: a participant with 11 years of experience stated that once we had 40 patients, while only 5 nurses were available for care. Suppose from the above number, only 25 patients were in need of special care every few minutes, how could such a limited number of nurses be able to respond to required demands of the patients? It was absolutely infuriating. 
To make the situation still worse, add 25 more people to the list of the patients, those who accompany patients to the hospital and frequently go to the stations at hospital wards and expect to receive proper answers for their questions whenever they wish. Another participant with 14 years of nursing experience stressed on nurses' rights to choose their own working places at hospitals. According to this person, this opportunity could affect the application of the professional ethics. He said: "I was supposed to work for 2 years as an obligatory practice after graduation. I worked in the emergency ward, in spite of my will." External factors: support systems In this category, the following themes were extracted: appropriate support system, flexibility and effective reward and punishment. A participant with 12 years of experience said: "I believe that an effective support system should encourage us to observe the professional ethics. Whenever I face a problem, supervisors should support me. Also, a proper system of reward and punishment could help enhance experience of the professional ethics." Another participant highlighted flexibility as a necessity for nursing. He said: "nursing practice requires even if a patient is too much demanding or he has had challenges with us; we should never deprive him from our services. Another aspect of supportive system, according to a participant with 15 years of experience was an efficient way of punishment and reward system. He said "It would be helpful for nurses to get a positive or negative feedback based on their professional behavior. If the monitoring system rewarded me when I did my duties efficiently, I would be encouraged to work 10 times more than I supposed to; otherwise, I lose my motivation. Just try it for six months and see the results". External factors: educational and cultural development In this category, these themes were extracted: model educators and attention to practical ethics through modeling the environment, re-thinking about behavioral processes, cultural development focusing on ethics, and specialized practical and theoretical training courses in ethics. Regarding educator modeling and its impact on the development of morality, a participant with 13 years of experience stated that nursing instructors should be aware of the effects of training methods on trainees. He added "our instructor once forced male students to empty patients' urine bags and change the bed sheets in the presence of patients' companions and cleaning staff of the hospital. Meanwhile, he put the nurse under more pressure by repeating his order again and again. Such behaviors make negative impacts on our views of the job as a nurse." For the issue of re-thinking about behavioral processes, a participant with 10 years of experience commented "I had a patient with ventricular fibrillation. As physicians were not available at that time, I started resuscitation. It was successfully performed, and the patient is still alive. As I reflect on my deeds, I found it very important." The nurse continued, "In another instance, I found an error in a physician's prescription and since I was sure about the exact medication dosage, I made the correction." Regarding cultural development and ethics, a participant with 16 years of experience stated: "I had a critically ill patient who were supposed to be transferred to another hospital. I stayed with him until 4 pm, after my shift was over at noon; I had lunch after arriving home." 
Such devotions or commitment to a profession can be strengthened by means of cultural development." A participant emphasized on the importance of ethical courses for nurses. He said "training nurses in services expected from them is necessary. Every year, CPR training is repeated for us in accordance with new protocols to keep us updated." However, another participant's talk was focused more on educating nurses in professional ethics. He mentioned, "Such a course should be taught on location to make nursing students familiar with accepted patterns of morality in their interactions with patients. Discussion Factors effecting professional ethics in nursing practice have been identified in this study. The first main category of the findings was focused on the individual character and responsibility. It was emphasized on developing a sense of responsibility in nurses as a significant factor that influences professional behavior. Also, nursing literature indicated that creating professional commitment should be regarded as a necessary quality for nursing practice; nurses should be accountable for their decisions and outcomes. Such characteristics lead to better observance of professional ethics by nurses [22]. Indeed, most nurses believed that individual character and responsibility play an important role in sensitivity to the professional ethics compliance and moral development. Abbaszadeh et al. [23] emphasized that students who desire to enter into nursing profession should be checked for metacognitive features (e. g. personality) and be coordinated with nursing profession. The second category is communication challenges among health care members. The participants highlighted effective relationship as the element of professional ethics. The researchers also believe that effective nursing is highly related to developing proper relationships among members of the health care system. In the absence of such attitudes, patient care will be adversely affected. This study also indicates that patient's assessment is one of the important measures in establishing rapport between nurse and patient [24]. In recent years, it has been emphasized on professionalism in nursing. Thus, health care system requires nurses who are able to develop relationships with the multidisciplinary professionals as well as patients and their families [25]. According to Doran et al. [26], nurses do not work solely. In other words, they should try to expand connections with other health care teams in order to enhance patients' quality care [26]. In addition, Weaver and Morse [27] stated that interpersonal relationship is a vital factor in ethical sensitivity, and ignoring it may decrease the sensitivity. Also, participating students in the study conducted by Borhani et al. [28] expressed poor interpersonal communication as one of the barriers in achieving professional ethics. Sadeghi and Ashktorab [29] reported that poor communication between doctors and nurses and patients is a main part of the most raised ethical problems, which could lead to the violation of patients' rights. Organizational preconditions are the third category affecting professional ethics. Adib Haghbaghery et al. [30] stated that organizational structure should be compatible with nursing professional knowledge. When there are inappropriate organizational structures in health care systems, nurses cannot use professional knowledge properly [31]. 
In fact, it is a reasonable expectation that in an environment, which is consistent with organized standard of care, basic ethical working conditions are met. Although patient care is important for nurses, deficiency of clinical standards negatively affect nurses' performance [32]. This study showed that the effects of environmental factors including facilities and equipment on professional ethics have not been widely reported in the literature [31]. This indicates that deficiencies in clinical settings, such as lack of efficient organization, control and supervision are acutely felt in Iran. Based on the participants' perspective, another important aspect in compliance of professional ethics is the existence of human resources. Bennett et al. [33] reported that both time and staff shortage and/or in some cases the presence of too many patients are major barriers that challenge nurses in using research evidence and observance of professional ethics in health care. Merakou et al. [34] stated that nurses have in close contact with patients and have a good situation to support them; however, such a role is ignored in Greek hospitals due to staff shortage, lack of enough time and proper training regarding these subjects. Participants In a study conducted by Borhani et al. [32] mentioned that excessive work and staff shortage are two important factors that reduce the quality of care and ethical issues. They also stressed that even if the nurses wish to do so, it is not possible to provide adequate ethical nursing care [35]. The fourth main categories in this study are support systems. Studies showed that elements of supportive environment in nursing contain an appropriate team work, accepting sense of personal identity, freedom to ask questions, and having a suitable working relationship. These factors can enhance professionalism and autonomy in nursing. From practical point of view, however, most nurses have not experienced such a supportive working environment; too much effort is needed to get support [36]. In other studies, inappropriate feedback and insufficient support from both managers and organizations were mentioned by the participants as factors that decrease ethical sensitivity [37]. In this regard, participants in the study of Borhani et al. [32] reported that inadequate support systems were major causes of moral sensibility reduction. Participants of this study believed that when a person was sensitive to an issue, receiving support from others could compensate the inabilities and deficiencies and empower this sensitivity, while inadequate support could suppress this sensitivity [35]. Weaver and Mors [27] stated that inadequate support shared among the managers and colleagues can cause decreased job satisfaction resulting in decreased ethical sensitivity and increased moral distress. The fifth main category of this study was educational and cultural development. Experts have explained that establishing bonds of commitment to nursing profession depends on cultural considerations [24,38]. This, in turn, will lead to the enhancement of professional ethics in clinical practices. In doing so, the need for cultural understanding and establishing effective relationships with patients is widely expected to be inserted in the curriculums designed for nursing. Another external factor influencing professional ethics as reported by participants of this study was their desire for an efficient educational system. 
Nurses, as significant agents of human resources in health care services, play a major role in health promotion of society. Therefore, training programs of nursing should contain materials that incorporate boarder needs of society. Also such programs should be modified according to the changes and advancements in the medical care [39]. Teachers who have theoretical and professional knowledge in the field of ethics can be considered as role models; in fact, they could assist the development of professional ethics [32]. Woods [40] emphasized that although the role of instructors as role models in creation of student's ethical behavior is important, student's philosophical readiness and knowledge development in ethical field are the responsibilities of nursing instructors. A wide range of studies are emphasized the effects education on increasing compliance andethical sensitivity. In a conducted review study by Borhani et al. [41] were mentioned that education and training methods could effect on ethical sensitivity. Grundstein-Amado [42] reported that doctors and nurses were not able to properly make an ethical decision and follow a consistent pattern, mainly due to their lack of education in ethical issues. In addition, Wehrwein [43] believed that ethics education improves student's awareness from ethical issues and their application in the workplace is effective. Moreover, students attending ethics courses were more able in decision making for ethical issues compare to those who did not attend such courses [43]. Rodmell [44] suggests that curriculum is an effective factor in shaping peoples' attitude and increasing their knowledge, and also a framework to discuss and criticize the ethical issues. Furthermore, he claims that ethical knowledge is an important issue in nursing. In fact, including ethical issues in the curriculum is an appropriate way to be assured of increased ability in solving the ethical dilemmas as well as improved ethical judgment [44]. The research findings have shown that both internal and external factors affect professional ethics in clinical practice. Therefore, professional ethics is not limited to the internal factors. External factors including instructors, administrators, health care providers, education, and culture can be applied in workplace in order to assist nurses in moral development. Conclusions The acquisition of professional ethics is facilitated by internal and external factors. These factors could lead to legitimate norms and standards govern professional behavior of nurses in their relationships with patients. Furthermore, good communication among health care members, improvement of organizational preconditions, appropriate supportive system, and development of education and culture could lead to observing professional ethics in clinical practice.
2015-09-18T23:22:04.000Z
2015-09-09T00:00:00.000
{ "year": 2015, "sha1": "9656ff3d6cecb9af4808781f09fec5482b3d4218", "oa_license": "CCBY", "oa_url": "https://bmcmedethics.biomedcentral.com/track/pdf/10.1186/s12910-015-0048-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "51d48002c8ba7299d52ea76a526b6e2649302788", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232147281
pes2o/s2orc
v3-fos-license
Dynamic Message Scheduling With Activity-Aware Residual Belief Propagation for Asynchronous mMTC Systems In this letter, we propose a joint active device detection and channel estimation framework based on factor graphs for asynchronous uplink grant-free massive multiple-antenna systems. We then develop the message-scheduling GAMP (MSGAMP) algorithm to perform joint active device detection and channel estimation. In MSGAMP we apply scheduling techniques based on the residual belief propagation (RBP) and the activity user detection (AUD) in which messages are generated using the latest available information. MSGAMP-type schemes show a good performance in terms of activity error rate and normalized mean squared error, requiring a smaller number of iterations for convergence and lower complexity than state-of-the-art techniques. I. INTRODUCTION C Overing different industries as healthcare, logistics, process automation and utilities, it is believed that machinetype communications (MTC) will correspond to half of the global connected devices by 2023. Concentrated on the uplink, MTC traffic is typified by small packets transmitted sporadically, often with low data-rate and loose delay constraints [1]. With these characteristics and the expected huge number of machine-type devices (MTDs), conventional scheduling-based orthogonal multiple access schemes are not suitable. A solution proposed in recent years is based on grant-free non-orthogonal multiple access (NOMA) [2], where active devices transmit frames without previous scheduling, in order to eliminate the need for round-trip signaling. With the massive number of MTDs requiring access without coordination, a time-slotted transmission would cause significant overhead. In a time-slotted transmission scenario, where devices can change their activity state only at the beginning of each timeslot, any device that fails to align its time slots properly may disturb the whole detection and estimation process. In this way, the study of a non-time-slotted or asynchronous transmission is promising for mMTC since it has advantages such as reduced transmission latency, smaller signalling overhead due to the simplification of the scheduling procedure and improved energy efficiency (battery life) of MTCDs with the reduction in signalling [3], [4]. As all MTCDs simply transmit, the work of the BS is increased [5], [6], this scenario renders the activity and data detection [7], [8], [9] and channel estimation [10] even more challenging tasks. Despite the focus of many works on the joint user activity and data detection problem [11], [12], [13], [14], most of these studies considered that the uplink channel state information The authors are with the Centre for Telecommunications Studies (CETUC), Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro 22453-900, Brazil (e-mail: {robertobrauer, delamare}@cetuc.puc-rio.br). This work was supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). (CSI) is perfectly known to the base station (BS). However, in practice, the uplink CSI should be estimated before data detection. Exploiting the a priori distribution of the channel sparse vector to be recovered, the works in [15], [16], [17] use compressed sensing (CS)-based techniques in order to assess the channel estimation and the activity error rate (AER) performance. 
As an extension of the generalized approximate message passing (GAMP) algorithm [18], the hybrid GAMP (HyGAMP) [19] exploits the sparsity in the exchange of messages. HyGAMP outperforms other existing algorithms in terms of mean square error (MSE), since it combines a loopy belief propagation (LBP) part for user activity detection and a GAMP-type strategy for channel estimation. However, HyGAMP considers a completely parallel update of the messages, where each iteration performs exactly one update of all edges. In this work, we present a joint active device detection and channel estimation framework based on factor graphs for asynchronous uplink grant-free massive multiple-antenna systems. We also devise the message-scheduling GAMP (MS-GAMP) algorithm that uses the factor graph approach and aims to find the best sequence of message updates, improving the convergence and error rates by focusing on the part of the graph that has not converged. Unlike dynamic scheduling techniques [20], [21] used for decoding Low-Density Parity-Check (LDPC) decoders, MSGAMP is applied to factor graphs and performs novel sequential scheduling schemes for message updates in mMTC. In particular, MSGAMP updates messages according to the activity user detection (AUD) and the residual belief propagation (RBP). Since only a few very recent works [22], [23], [24] have studied the asynchronous mMTC scenario, MSGAMP addresses the problem of joint active device detection and channel estimation without requiring frame-level synchronization. MSGAMP exploits the a priori distribution of the sparse channel matrix and use the number of antennas in the BS to improve the activity detection. Simulations show that MSGAMP results in an improved performance over HyGAMP in terms of normalized MSE (NMSE) with fast convergence and a lower computational cost than existing techniques. This paper is structured as follows. Section II introduces the asynchronous system model. The problem of joint channel and user activity estimation along with the proposed MSGAMP is detailed in Section III. Section IV presents the results of simulations, whereas the conclusions are drawn in Section V. II. SYSTEM MODEL In this section we describe the considered asynchronous grant-free uplink NOMA scenario, where symbol-level synchronization is assumed but not frame-level synchronization. In the uplink, we have N single-antenna MTDs communicating with a BS equipped with M antennas [25], [26]. In arXiv:2103.04486v1 [cs.IT] 7 Mar 2021 the grant-free system model, each frame consists of pilot and data parts [1]. Since the goal of this work is to jointly detect the activity of devices and estimate their channels, we only consider the part of the frame with pilots. However, the use of the system model for data detection is straightforward. As depicted in Fig. 1, at the beginning of any symbol interval, each device is allowed to transmit L pilot symbols, which form a frame. Since mMTC results in sparse systems, we designate the Boolean variable ξ n,t = 1 that indicates the activity of the n-th device in the t-th symbol interval and ξ n,t = 0, otherwise. Thus, considering ρ n the probability of being active of the n-th device, P (ξ n,t = 1) = 1 − P (ξ n,t = 0) = ρ n , where all ξ n,t are considered i.i.d. in relation to n and each device has its own activity probability. When an MTD is active, it transmits one of the independent pilot sequences previously provided by the BS. 
The frame of the n-th device is composed by φ n = exp (jπα), where each element of vector α ∈ R L is drawn uniformly at random in [−1, 1]. Despite the intermittent pattern of transmissions, each device should wait, at least, to the guard period interval to transmit again. Let h t ∈ C N ×1 be the vector that models the channels between the BS and N devices in the t-th symbol interval. Considering t n as the symbol interval in which the n-th device initiates its transmission, each component is modeled as where h t gathers independent fast fading, geometric attenuation and log-normal shadow fading. The vector a t contains the fading coefficients modeled as circularly symmetric complex Gaussian random variables with zero mean and unit variance, while β n represents the path-loss and shadowing component of each device, which depends on the location of the devices and remains the same for all frames of the n-th device. As depicted in Fig. 1, in the asynchronous scenario, it is possible that just part of the transmitted frame falls within the observation window. Since the problem of interest here is to jointly estimate the channels and the activity of devices, the BS is only able to deal with the type-1 frames. Thus, type-2 and type-3 frames should have their channels estimated and activity detected in another observation window. Accordingly, the BS generates a sequence of observation windows otherwise. This sequence can be seen as a sliding window with window size T and step size ∆t. Since T > L, any consecutive observation windows have an intersection of T − ∆t symbol intervals, enabling BS to estimate the channels of all frames. Considering the M BS antennas, for an arbitrary observation window [t x , t x + T ) and omitting the subscript t x to simplify the notation, the received signals are described by the model where W m ∈ C L×T is the independent complex-Gaussian noise matrix with CN 0, σ 2 w , Y m ∈ C L×T is the matrix that gathers the received signals and H m ∈ C N ×T the channels. The subscript m indicates which BS antenna received the signals. For each new window, the values of W m , Y m and H m change, while Φ ∈ C L×N keeps the pilot sequence of each device. As in this scenario we have a massive number of devices, the size of the window T is smaller than N thus, the system is overloaded. However, as seen in (1), H is sparse, which makes its recovery possible through the theory of compressed sensing (CS) [27]. III. ACTIVITY DETECTION AND CHANNEL ESTIMATION In order to present the message updating rules of the MS-GAMP algorithm, we introduce some statistical properties of the system model. Assuming a BS with one antenna (M = 1), the subscript m is omitted in the following formulation. A. Factor Graph approach As reported in the literature [19], [28], it is possible to estimate the channels exploiting the statistical properties of the system model approximating the marginal posterior density by a product of the prior distribution of h t , p (h t |ξ t ), and the likelihood, p (Y|H, ξ). Thus, the minimum MSE (MMSE) where H \nt denotes all elements except h nt and the posterior distribution, denoted by p (H, ξ|Y) = 1 p(Y) p (Y|H, ξ) p (H|ξ) p (ξ) given by the Bayes' rules where P (h nt |ξ nt ) is the conditional density for the random variable in (1). In order to apply the proposed message scheduling techniques, the first step is to marginalize the problem. As seen in GAMP [18] and HyGAMP [19], one approach is to employ an approximation of the sum-product loopy belief propagation (BP). 
For each t-th symbol interval, the factor graph (FG) in Fig. 2 represents the problem, wherein factor nodes that represents the density functions, prior and likelihood, are depicted as cubes and the variable nodes ξ nt and h nt are seen as spheres. As Φ is a dense matrix, the FG in Fig. 2 is fully connected. Computing the messages in fully connected graphs is tricky as the messages themselves are functions. Thus, a common method is to approximate the messages by prototype functions that resemble Gaussian density functions which can be described by two parameters only. So, message passing reduces to the exchange of the parameters of a function instead of the function itself. Therefore, it is possible to iteratively approximate, for a FG with cycles as in Fig. 2, the marginal posteriors passing messages between different nodes. Thus, we can define the messages from p y lt · to h nt and to the opposite direction as and, considering ∝ as proportional, the messages from P (h nt |ξ nt ) to h nt and to the opposite direction are Thus, the belief distribution that provides an approximation to marginal posterior distribution p (h nt |Y, ξ) is given by where, defining Z = ΦH and p h r we can compute E ν . Specifically, each iteration of Algorithm 1 has three stages. The first one, labelled as "GAMP approximation" contains the updates of the GAMP based on expectation propagation (EP) algorithm, which treats the components h nt as independent with the estimated probability of being activeρ nt . As well as [29], [28], the EP is incorporated in the process of LBP to the relaxed belief propagation and then to GAMP. At iteration i, MSGAMP produces estimatesĥ (i) andẑ (i) of the vectors h and z. Several other intermediate vectors,p (i) ,r (i) andŝ (i) , are also produced. Associated with each of these vectors are matrices like Q h(i) and Q z(i) that represent covariances. Thus, in order to reduce the complexity of O (LN T ) to O (N T ), the message in (5) is firstly mapped to a Gaussian distribution based on the central limit theorem and Taylor expansions. So, ν (i) n←lt (h nt ) is updated by the Gaussian reproduction property (GRP) [28]. Following the same procedure in the messages of (6), (7) and (8), relaxed BP is obtained by combination of the approximated messages. Since many of these messages slightly differ from each other, in order to fill out those differences, new variables are produced and, ignoring the infinitesimals, GAMP based on EP is obtained. The second stage of Algorithm 1, labelled as "sparsity-rate update", refers to the "box" part of the FG in Fig. 2 and updates the estimates of each probability of being activeρ ntm . In order to use the diversity of the antennas in the BS to refine the activity detection, from this point we include the subscript m into the formulation. Computed using Gaussian approximations of likelihood functions, these estimates are then used to define the message scheduling proposed in this work. The messages in the "sparsity-rate update" stage are given by where (12) refers to the message from P (h ntm |ξ nt ) to ξ nt while (13) denotes the message in opposite direction and each belief at ξ nt is given by ν Defining X = R + W as a scalar random variable with the same density as H, the message in (12) can be approximated as a likelihood function given by ν Similarly to (14), we have LLR ntm log . 
Substituting (14) in (13) and in each belief, LLR (i) n→ntm is given by Thereby, the message in (6) is described by With the message passing established, the next step is to use the estimates obtained in (17) in message scheduling. B. Message-scheduling schemes Since it is expected up to 300, 000 devices per cell [30] in future mobile communication systems a technique with low computational cost is fundamental. We develop three different message scheduling criteria that reduce the computational repeat % GAMP approximation 2: for n = 1, . . . , |S (i−1) | ∀n ∈ S (i−1) 3: for (t = 1, . . . , T ) 4: for (m = 1, . . . , M ) 5:ĥ % Sparsity-rate update with (14), (15) and (17) MSGAMP determines a group of nodes S (i) to update based on two different criterion, the AUD and the RBP. The goal is to update, at every iteration i, only the nodes of the group and not all of them, as in HyGAMP. MSGAMP proceeds until i reaches the maximum number of iterations I or tol/M < 10 −4 , where tol is given by tm is a |S (i) | × 1 vector that corresponds to the estimated channel gains between the |S (i) | devices and the m-th BS antenna. As this stopping criterion takes into account only the devices in the group, unlike the parallel message update of HyGAMP that, in each iteration, O(N T M ) messages must be computed, MSGAMP needs only O(|S (i) |T M ) . Considering that we have a crowded scenario of MTCDs in future mobile communication systems and the sporadic transmission pattern of each device, the computational cost gain using scheduling schemes is evident, since |S (i) | << N . With the stopping criterion defined, we present the first message scheduling scheme. 1) MSGAMP-AUD: The message scheduling based on activity user detection (MSGAMP-AUD) sequentially updates the messages of devices detected as active and repeats the previous values of other devices. The criterion based on AUD uses the estimates of each BS antenna, asρ nt is higher than a threshold, the device is considered as active and is included in the set S (i) . In the first iteration, all messages of all nodes are updated. When i = 2, we have the first values ofρ tm , thus enabling the set S (i) . In this iteration, all messages that belong to S (i) , except for s (i) 1 will be updated. Then, the index that refers to the messages that had been updated is removed of S (i) as in Therefore, we exclude a group of messages that belong to a specific device to be updated, one at a time. In summary, we reduce the set S (i) that is updated in parallel until there is no message to update. When S (i) is empty, MSGAMP updates all messages, including the ones that do not belong to the older set, i.e., the new set is S (i) = [1, . . . , N ]. In the next iteration, a new update of the set S (i) , using the newρ ntm is performed. 2) MSGAMP-RBP: In this variation, MSGAMP updates the messages according to an ordering metric called residual belief propagation (RBP). A residual is the norm (defined over the message space) of the difference between the values of a message before and after an update. In our scheme, we define the residual with the beliefs described in (9). Thus, the residual for the belief distribution at h nt , is given by The intuitive justification of this method is that as the factor graph approach converges, the differences between the messages before and after an update diminish. Therefore, if a message has a large residual, it means that it is located in a part of the graph that has not converged yet. 
Thus, propagating that message first should speed up the convergence. Using the residual values computed in (20), we compute the set S (i) of messages to be updated in the next iteration. Since the probability of being active of each MTD is typically around 5% [1], S (i) has the 0.05 N nodes with highest residual. The update sequence of MSGAMP-RBP is the same of MSGAMP-AUD, the difference is how both groups are formed. 3) MSGAMP-ARBP: This dynamic scheduling strategy combines the AUD and RBP criterion. The main idea is use AUD criterion to create S (i) and the RBP criterion to compute the updating sequence of it. MSGAMP-ARBP updates the messages of one node per iteration, starting with the one with highest residual. After the group being fully updated, MSGAMP-ARBP proceeds as in previous strategies, updating the messages of the nodes that does not belong to S (i) and compute a new set. When a stop criterion is met, the activity detection and the channel estimation are given by lines 19 and 5 in Algorithm 1. IV. SIMULATION RESULTS In order to verify the performance of the proposed MS-GAMP schemes, we simulate an mMTC system with N = 128 devices, M = 2 BS antennas, L = 32 symbols per frame and T = 3 L as the size of the observation window. The threshold to detect the activity of devices considered is 0.9, the average SNR is set to 1/σ 2 w , while the activity probabilities p n are drawn uniformly at random in [0.01, 0.05]. The variations of MSGAMP are compared to the well-known generalized approximate message passing (GAMP) [18] and the stateof-the-art HyGAMP algorithm [19]. Versions of MSGAMP-ARBP and of HyGAMP with perfect activity knowledge (OMSGAMP-ARBP and OHyGAMP) are used as lower bounds. Figs. 3 and 4 show results of NMSE and AER per frame, respectively. In terms of NMSE, Fig. 3 shows that the message scheduling schemes have a competitive performance, where MSGAMP-AUD and MSGAMP-RBP slightly outperform HyGAMP, requiring less computational cost. MSGAMP-ARBP surpasses not only HyGAMP and the other MSGAMP algorithms but also OHyGAMP. One can see that the use of the BS antennas in order to refine the activity detection improved the AER performance of MSGAMP-ARBP since the AER curves have lower values as M increases. Fig. 5 depicts the convergence rate of MSGAMP-type techniques and HyGAMP. One can notice that for different values of SNR, our solutions converge faster and to lower values of NMSE than HyGAMP. We note that the proposed techniques will be examined with LDPC codes [31] in future works. V. CONCLUSION In this paper, we have presented a framework for joint activity detection and channel estimation for mMTC and developed the MSGAMP algorithm. We have developed three scheduling techniques for MSGAMP that update the messages based on the AUD and the RBP. The results indicate that MSGAMPtype techniques outperform other solutions in terms of NMSE and AER, with fast convergence and low computational cost.
2021-03-09T02:16:14.306Z
2021-03-07T00:00:00.000
{ "year": 2021, "sha1": "650e41a5834767f61f1cb44032a26bed0109432b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a7723e8ee1711394526f2a0d0957e88ae32806ec", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
209164582
pes2o/s2orc
v3-fos-license
SWATH Differential Abundance Proteomics and Cellular Assays Show In Vitro Anticancer Activity of Arachidonic Acid- and Docosahexaenoic Acid-Based Monoacylglycerols in HT-29 Colorectal Cancer Cells Colorectal cancer (CRC) is one of the most common and mortal types of cancer. There is increasing evidence that some polyunsaturated fatty acids (PUFAs) exercise specific inhibitory actions on cancer cells through different mechanisms, as a previous study on CRC cells demonstrated for two very long-chain PUFA. These were docosahexaenoic acid (DHA, 22:6n3) and arachidonic acid (ARA, 20:4n6) in the free fatty acid (FFA) form. In this work, similar design and technology have been used to investigate the actions of both DHA and ARA as monoacylglycerol (MAG) molecules, and results have been compared with those obtained using the corresponding FFA. Cell assays revealed that ARA- and DHA-MAG exercised dose- and time-dependent antiproliferative actions, with DHA-MAG acting on cancer cells more efficiently than ARA-MAG. Sequential window acquisition of all theoretical mass spectra (SWATH)—mass spectrometry massive quantitative proteomics, validated by parallel reaction monitoring and followed by pathway analysis, revealed that DHA-MAG had a massive effect in the proteasome complex, while the ARA-MAG main effect was related to DNA replication. Prostaglandin synthesis also resulted as inhibited by DHA-MAG. Results clearly demonstrated the ability of both ARA- and DHA-MAG to induce cell death in colon cancer cells, which suggests a direct relationship between chemical structure and antitumoral actions. Introduction The fat content of a normal diet consists mainly of triacylglycerols (TAG) (about 90% of total ingested lipids) and small amounts of sterols and phospholipid esters, as well as fat-soluble vitamins (A, D, E, and K) [1]. The fatty acid (FA) distribution on the glycerol backbone of TAG influences their absorption, distribution, and tissue uptake [2]. Free FA (FFA) and sn-2-monoacylglycerol (sn-2 MAG), the two hydrolysis products of dietary TAG, are absorbed from the lumen into polarized enterocytes in the small intestine. Polyunsaturated FAs (PUFAs) are better absorbed when they are esterified at the sn-2 position of the glycerol molecule, while the type of FA at the remaining locations also influences their intestinal absorption [2][3][4][5]. affected [21]. Here, we used the same cellular model, study design, and technology to investigate the actions of DHA and ARA-MAG in colorectal cancer (CRC) cells, and the results are compared with those reported for the corresponding FFA. For this objective, cell viability and cell cytotoxicity assays have been performed, and the biological pathways that are affected by these two MAGs have also been studied by means of sequential window acquisition of all theoretical mass spectra (SWATH)-mass spectrometry (MS) global protein quantitation followed by pathway analysis. Oil Samples and Purification of MAG DHASCO ® (40% DHA, a mixture of the oil extracted from the unicellular alga Crypthecodinium cohnii and high oleic sunflower oil) and ARASCO ® (40% ARA, a mixture of an oil extracted from the unicellular fungi Mortierella alpina and high oleic sunflower oil) oils were supplied by Martek Bioscience Corporation (Columbia, MD, USA). Purification of ARA-and DHA-MAG was carried out according to the methodology described by González-Fernández et al. [22] based on a chromatography process. 
Briefly, both DHASCO ® and ARASCO ® oils were subjected to an enzymatic hydrolysis with porcine lipase. Then, MAGs were separated from the remaining hydrolysis products (mainly FFA and glycerol) using an open chromatography column with silica gel as stationary phase and a hexane/acetone mixture as mobile phase. Once MAG mixtures were obtained, another open chromatography column with silver nitrate as stationary phase was used to purify either DHA-MAG or ARA-MAG. Liquid chromatographic fractions were collected in test tubes and analyzed by gas chromatography (GC) to determine the purity grade according to Rodríguez-Ruiz et al. [23]. To this end, about 1 mg of each fraction was weighed into test tubes next to 1 mL of n-hexane and 1 mL of freshly prepared transesterification reagent (methanol and acetyl chloride 20:1 v/v). Then, the tubes were placed in a thermoblock at 100 • C for 30 min. After that, the mixtures were cooled at room temperature and 1 mL of distilled water was added. Samples were shaken and centrifuged (2500 rpm, 3 min) and the upper n-hexane layer collected and stored in numbered vials at −20 • C until GC analysis. The equipment used for FA methyl esters (FAME) analysis was a GC device (Focus, Thermo Electron, Cambridge, UK) equipped with flame ionization detector (FID) and an Omegawax 250 capillary column (30 m × 0.25 mm ID × 0.25 µm film thickness) (Supelco, Bellefonte, PA, USA). The oven temperature program was 90 • C (1 min), 10 • C/min to 100 • C (3 min), 6 • C /min to 260 • C (5 min). The injector temperature was 250 • C with split ratio 50:1, and injector volume was 4 µL. The detector temperature was 260 • C. The flow of carrier gas (N 2 ) was 1 mL/min. Peaks were identified by retention times obtained for known FAME standards (PUFA No. 1, 47033; methyl linoleate 98.5% purity, L6503; and methyl stearidonate 97% purity, 43959 FLUKA) from Sigma, (St. Louis, USA), and FA contents were estimated by using methyl pentadecanoate (15:0; 99.5% purity; 76560 Fluka) from Sigma as internal standard. SWATH-MS Differential Abundance Proteomics Analysis HT-29 cells, cultured in media supplemented with 600 µM of either DHA-(n = 6) or ARA-MAG (n = 6) at 24 h, and the same cells with no acyl species added (control group, n = 6) were recovered and lysed. Protein extracts were obtained, and protein was precipitated with TCA/acetone for removing contaminants. Then, 40 µg protein for each sample were subjected to trypsin digestion, and massive protein relative quantitation was assessed following a SWATH approach as described in Ortea et al. [21]. Briefly, this approach consisted on three steps: (i) an MS/MS peptide library was built from the peptides and proteins identified in data-dependent acquisition (DDA) shotgun nanoLC-MS/MS runs from the samples, using Protein Pilot software (v5.0.1, Sciex) with a human Swiss-Prot protein database (20,200 protein entries, appended with the RePliCal iRT peptides (PolyQuant GmbH, Bad Abbach, Germany) and downloaded from UniProt on March 2017). Main settings used in Protein Pilot were iodoacetamide as Cys alkylation, trypsin as enzyme, TripleTOF 5600 as instrument, and thorough ID as search effort. 
The false discovery rate (FDR) was set to 0.01 for both peptides and proteins; (ii) each sample was analyzed with a variable SWATH LC-MS method; and (iii) protein quantitative data for the proteins contained in the peptide library were obtained from the SWATH runs by extracting the corresponding fragment ion chromatograms using the MS/MS ALL with SWATH Acquisition MicroApp (v.2.0, Sciex). Peptide retention times were calibrated in all the SWATH runs using the RePliCal iRT peptides, spiked into each sample according to manufacturer's instructions. To be confident on the proteins being identified and quantified, only those showing confidence scores above 99% and FDR below 1% were included in the analysis. For both kinds of LC-MS analysis, DDA and SWATH, a hybrid Q-TOF mass spectrometer (Triple TOF 5600+, Sciex, Redwood City, CA, USA) coupled on-line to nano-HPLC (Ekspert nLC415, Eksigent, Dublin, CA, USA) was used. For higher sensitivity, both DDA and SWATH runs were performed at nano-flow (300 nL/min) in a 25 cm long × 75 µm internal diameter column (Acclaim PepMap 100, Thermo Scientific, Waltham, MA, USA) using a 120 min gradient from 5% to 30% B (A: 0.1% FA in water; B: 0.1% in ACN). Pathway and Gene Ontology (GO) Analysis Advaita Bio's iPathwayGuide (Advaita Corporation, Plymouth, MI, USA) was used for analyzing the significantly impacted pathways and for GO analysis. We considered a restrictive scenario, namely a differential expression threshold of log (fold change) 1.0 (that is, fold change 2.0) and adjusted p-value 0.01, in order to have more confidence in selecting the proteins that presented real expression changes. Data were analyzed in the context of pathways obtained from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (Release 78.0+/06-02, Jun 16). Validation by Parallel Reaction Monitoring (PRM) Analysis Several protein changes corresponding to the main significant affected pathways were subjected to validation using targeted quantitation by micro-HPLC PRM on a Triple TOF 5600+ (Sciex). Skyline software (v4.2.0) [25] was used for designing and optimizing a PRM acquisition method for all of the targeted proteins. Two to six proteotypic peptides for each protein were selected according to the following criteria: (i) enzyme: trypsin [KR/P] with 0 missed cleavages; (ii) 7-16 amino acid residues; (iii) carbomidomethylation of cysteines as structural modification; (iv) excluding peptides containing methionine; and (v) excluding the N-terminal amino acids. Transitions were filtered according to the following criteria: (i) +2 and +3 precursor charges; (ii) y and b product ion types; (iii) product ions from (m/z > precursor)-1 to last ion; (iv) method match tolerance 0.055 m/z; (v) a maximum of 10 product ions; and (vi) resolving power of 15,000 for MS/MS filtering. The HPLC gradient consisted of 5-22% buffer B (A: 0.1% FA in water; B: 0.1% FA in ACN) at 5 µL/min for 45 min, plus 5 min at 95% B and another 6 min at 5% B for re-equilibration. The column used was a 15 cm long × 300 µm internal diameter C18 column (Dionex Benelux B.V., Amsterdam, Netherland). Five to six individual samples (4 µg per sample) from each of the groups, DHA-MAG, ARA-MAG, and control, and two blank samples, were analyzed with the developed PRM, and the resulting chromatograms for all the monitored peptides were imported into Skyline and manually curated. Three injection replicates were used for calculation of the coefficients of variation (CV) for each of the monitored peptides. 
Statistical Analysis For cell assay results, statistical significance was determined using generalized linear models (GZLMs) using Statgraphics Plus 4.0 (Statistical Graphics Corp., Rockville, MD, USA). For SWATH-MS protein quantitation, data were analyzed following Ortea et al. [21]. Briefly, quantitative data were normalized for inter-run variability, and differences in protein abundance were assessed by applying a Student's t-test, checking for multiple testing underestimation of p-values by obtaining a q-value estimation for FDR using the qvalue R package [26]. For impact pathway and GO analysis, iPathwayGuide software calculated a p-value using a hypergeometric distribution. The p-values were adjusted using FDR correction for pathways and Bonferroni correction for GO analysis. For PRM validation of protein changes, statistical analysis was performed using the 'group comparison' function in Skyline. Briefly, Skyline performed pairwise group comparisons for each protein using a Student's t-test on the log2 transformed summed transition peak area for all the peptides from that protein, adjusting the p-values for multiple testing with the Benjamini-Hochberg correction. DHA-and ARA-MAG Showed Differential and Concentration-Dependent Effects on HT-29 Cell Viability, Cell Membrane Integrity, and Apoptosis The present study was conducted using the well-established HT-29 human colon cancer cell line. The purities of the assayed MAGs were 98.0% and 98.7% for DHA and ARA, respectively. According to González-Fernández et al. [22], during MAG purification process an acyl migration occurs quickly, so the concentrations of 1(3)-MAG balance with 2-MAG at one point that depends on several causes, e.g., solvent type, pH, and temperature. Although the purified forms are mainly 2-MAG and these are those added to cell cultures, in the culture medium the acyl-migration continues [27]. For this reason, we generically refer to MAG instead of 2-MAG. The actions of ARA-and DHA-MAG on cell membrane integrity measured by the LDH assay after 48 and 72 h of treatment, are shown in Figure 1b. The test assesses the release of the LDH enzyme into the culture medium after cell membrane damage caused by MAG. The tested concentrations ranged from 50 to 600 µM. No effect of DHA-MAG on the amount of LDH release was noted, while a 40% increase in LDH activity after 72 h treatment was detected at the highest ARA-MAG concentrations. To clarify whether ARA-and DHA-MAG were able to reduce cancer cell viability by promoting apoptotic cell death, a classical marker of apoptosis, caspase-3, was determined. In this study, caspase activation was evaluated in cells treated with ARA-and DHA-MAG at 300 and 600 µM for 90 min (Figure 1c). Caspase-3 activity is expressed as the percentage of activity compared to that of the untreated samples. As shown, a significant increase (up to 361%) of caspase-3 activity in the HT-29 cells was observed after 90 min exposure to DHA-MAG, while ARA-MAG did not show remarkable effects as compared to the respective untreated controls. Dose-dependent lactate dehydrogenase (LDH) release from HT-29 colon cancer cells after exposure to DHA-and ARA-MAG. (c) Dose-dependent caspase-3 activity from HT-29 colon cancer cells in comparison with untreated cells (control). Data represent the mean of three complete independent experiments ± SD (error bars). Data were analyzed using generalized linear models (GZLMs). There are no significant differences (p < 0.05) among series sharing the same letter. 
The actions of ARA-and DHA-MAG on cell membrane integrity measured by the LDH assay after 48 and 72 h of treatment, are shown in Figure 1b. The test assesses the release of the LDH enzyme into the culture medium after cell membrane damage caused by MAG. The tested concentrations ranged from 50 to 600 μM. No effect of DHA-MAG on the amount of LDH release was noted, while a 40% increase in LDH activity after 72 h treatment was detected at the highest ARA-MAG concentrations. To clarify whether ARA-and DHA-MAG were able to reduce cancer cell viability by promoting apoptotic cell death, a classical marker of apoptosis, caspase-3, was determined. In this study, caspase activation was evaluated in cells treated with ARA-and DHA-MAG at 300 and 600 μM for 90 min (Figure 1c). Caspase-3 activity is expressed as the percentage of activity compared to that of the untreated samples. As shown, a significant increase (up to 361%) of caspase-3 activity in the HT-29 cells was observed after 90 min exposure to DHA-MAG, while ARA-MAG did not show remarkable effects as compared to the respective untreated controls. . Data represent the mean of three complete independent experiments ± SD (error bars). Data were analyzed using generalized linear models (GZLMs). There are no significant differences (p < 0.05) among series sharing the same letter. SWATH Quantitation of 1882 proteins Showed That DHA-and ARA-MAG Differentially Affect the Whole Proteome of HT-29 Cells Samples were analyzed by DDA nanoLC-MS/MS, and runs were searched against a human protein database using Protein Pilot software. As a result, after integrating all three data sets, 2140 proteins and 15,406 peptides were identified (FDR < 1% at both protein and peptide levels); the list of identified proteins is shown in Table S1. The identified MS/MS spectra were compiled into a spectral library containing 2002 proteins. Using this library, chromatographic traces were extracted from the SWATH runs for 7653 peptides, corresponding to 1882 proteins. SWATH-based quantification normalized data for these 1882 proteins in all the samples is shown in Table S2. Tables S3 and S4 show the results for the differential abundance tests for DHA-MAG vs. control and for ARA-MAG vs. control, respectively. For subsequent analyses, we considered a restrictive scenario, namely p-value < 0.01 and two-fold change (FC), in order to have more confidence in selecting the proteins presenting actual expression changes. When addressing the changes in HT-29 cell proteome caused by DHA-MAG, a total of 896 proteins showed changes in expression (189 up-regulated and 707 down-regulated) (Figure 2a). Applying the same p-value and FC thresholds, only 70 proteins revealed a differential abundance as a consequence of the exposure to the ARA-MAG supplemented medium, 21 proteins being up-regulated and 49 down-regulated ( Figure 2b). When looking for the largest effects, extreme FCs (above 5.0) were found in 119 and six proteins (p-value < 0.01) for DHAand ARA-MAG, respectively (Tables S3 and S4, respectively). Therefore, it is clear than DHA-MAG produces a deeper effect than ARA-MAG on HT-29 cancer cells. When comparing the differentially abundant proteins (p-value < 0.01 and FC ≥ 2.0) in both tested groups, only 45 proteins were commonly being affected. Multivariate analyses of SWATH-based data including all 1882 quantitated proteins showed a complete separation of all groups: DHA-MAG, ARA-MAG, and control ( Figure 2c and Figure S1). 
Pathway and GO Analysis Showed Different Mechanisms of Action of DHA-and ARA-MAG on HT-29 cells The affected pathways and GO components were analyzed using iPathwayGuide software. Significantly impacted pathways and over-represented GO groups according to this analysis are shown in Figure 3. After FDR correction, only one pathway was found to be significantly impacted by DHA-MAG, namely the proteasome pathway (KEGG: 03050), with 30 proteins down-regulated ( Figure 4). On the other hand, two pathways resulted as significantly affected by ARA-MAG, DNA replication (KEGG: 03030), with a DNA polymerase and three helicase subunits down-regulated ( Figure 5); and pyrimidine metabolism (KEGG: 00240). The biological processes GO component most affected by DHA-MAG was nucleobasecontaining compound biosynthetic process (adjusted p-value 2.66 × 10 -4 ) (Figure 3b), with 239 proteins presenting differential abundance (Table S5). For ARA-MAG, the only significantly overrepresented biological processes according to our GO analysis were those related to the G1/S The affected pathways and GO components were analyzed using iPathwayGuide software. Significantly impacted pathways and over-represented GO groups according to this analysis are shown in Figure 3. After FDR correction, only one pathway was found to be significantly impacted by DHA-MAG, namely the proteasome pathway (KEGG: 03050), with 30 proteins down-regulated (Table S5). For ARA-MAG, the only significantly over-represented biological processes according to our GO analysis were those related to the G1/S transition of the cell cycle (Figure 3b), presenting eight proteins being regulated (Table S5). Regarding GO analysis of cellular components, DHA-MAG regulated proteins were mainly related to cytosol (adjusted p-value 2.05 × 10 −20 ), but also from extracellular and nuclear origin (Figure 3c and Table S6), while no cellular component resulted as over-represented as a consequence of ARA-MAG regulation. SWATH Proteomics Analysis was validated by PRM We developed a micro-HPLC PRM method to validate several of the protein changes previously found with the SWATH quantitative analysis workflow. Specifically, nine proteins were included in the PRM assay (Table 1): the most relevant pathways, as found in the pathway analysis, were represented by proteins MCM2 and MCM7 (helicase proteins from DNA replication pathway) and PSMF1, PSME3, and PSA3 (proteasome pathway); AIFM1 (apoptosis-inducing factor 1), as an apoptotic marker which we had found up-regulated in the SWATH analysis only in DHA-MAG treated cells but not in ARA-MAG, was also included in the validation assay; three other proteins showing SWATH-measured differences according to the compound used (ABHD2, up-regulated in ARA-MAG and not-significant in DHA-MAG; SRXN1, up-regulated in ARA-MAG and downregulated in DHA-MAG; and TEBP, down-regulated in DHA-MAG and not-significant in ARA-MAG) were also included for validation of the SWATH results. After optimization of the method for these nine proteins, a total of 27 proteotypic peptides were used for targeted PRM validation (Table S7). As an indication of the precision of the PRM assay, measured CVs were below 20% for all the peptides except for peptide TFVDYAQK (CV of 22%); median CV for the entire peptide set was 9.0% ( Figure S2 and Table S7). The effect of both compounds, DHA-MAG and ARA-MAG, on the nine targeted proteins, as measured by PRM, is shown in Figure 6. 
SWATH Proteomics Analysis was validated by PRM We developed a micro-HPLC PRM method to validate several of the protein changes previously found with the SWATH quantitative analysis workflow. Specifically, nine proteins were included in the PRM assay (Table 1): the most relevant pathways, as found in the pathway analysis, were represented by proteins MCM2 and MCM7 (helicase proteins from DNA replication pathway) and PSMF1, PSME3, and PSA3 (proteasome pathway); AIFM1 (apoptosis-inducing factor 1), as an apoptotic marker which we had found up-regulated in the SWATH analysis only in DHA-MAG treated cells but not in ARA-MAG, was also included in the validation assay; three other proteins showing SWATH-measured differences according to the compound used (ABHD2, up-regulated in ARA-MAG and not-significant in DHA-MAG; SRXN1, up-regulated in ARA-MAG and down-regulated in DHA-MAG; and TEBP, down-regulated in DHA-MAG and not-significant in ARA-MAG) were also included for validation of the SWATH results. After optimization of the method for these nine proteins, a total of 27 proteotypic peptides were used for targeted PRM validation (Table S7). As an indication of the precision of the PRM assay, measured CVs were below 20% for all the peptides except for peptide TFVDYAQK (CV of 22%); median CV for the entire peptide set was 9.0% ( Figure S2 and Table S7). The effect of both compounds, DHA-MAG and ARA-MAG, on the nine targeted proteins, as measured by PRM, is shown in Figure 6. Discussion IC50 values obtained by the MTT assay for ARA-and DHA-MAG are lower than those obtained for both LCPUFAs in the FFA form applied to the same cell line [21]. These differences may be related to the existence of a protein-mediated transport for PUFA across cell membranes. In this regard, a specific protein-mediated process has been reported for intestinal Caco-2 cells, which allows the entry of LCPUFA into such cells [5], and both acyl forms, i.e., FFA and MAG, establish competition for the same membrane-associated protein transporters (FAT, FATP, CD36, FABP). The findings of this work outline other ones from Ramos Bueno et al. [24], who demonstrated that unspecific oil-derived ARASCO ® -and DHASCO ® -MAG induced noticeable in vitro antitumor activity on HT-29 cells. Previous studies demonstrated that DHA significantly decreases cancer cells proliferation [28]; while Pompeia et al. [12] found dose-and time-dependent ARA-induced cytotoxicity in leukocytes, i.e., ARA at 10-400 μM induced apoptosis, while at concentrations above 400 μM the noted effect was necrosis. Several studies have discussed the relationship between FAs and mitochondrial permeability transition. Such relation is modulated by a variety of effectors of cell death, including reactive oxygen species (ROS), which are important messengers in normal-cell signal transduction and cell cycle being produced by mitochondria after stimulation of the TNFα receptor [29,30]. Moreover, Scorrano et al. [29] showed that ARA is a powerful permeability-transition inducer in MH1C1 cells, causing a release of cytochrome c followed by cell death. The LDH test is a colorimetric method suitable for the measurement of cell membrane integrity. It is based on the measurement of LDH enzyme activity, whose increase in the culture supernatant is proportional to the number of lysed cells [31]. 
Until now, several authors have studied cell membrane permeability to determine non-cytotoxic concentrations of MAG on different cell lines, such as Caco-2 and HT-29 cells, and no significant toxicity measured by the LDH assay was observed [15,24,32,33]. In this study, toxicity was found only after 72-h treatment at the highest ARA-MAG concentrations. Discussion IC 50 values obtained by the MTT assay for ARA-and DHA-MAG are lower than those obtained for both LCPUFAs in the FFA form applied to the same cell line [21]. These differences may be related to the existence of a protein-mediated transport for PUFA across cell membranes. In this regard, a specific protein-mediated process has been reported for intestinal Caco-2 cells, which allows the entry of LCPUFA into such cells [5], and both acyl forms, i.e., FFA and MAG, establish competition for the same membrane-associated protein transporters (FAT, FATP, CD36, FABP). The findings of this work outline other ones from Ramos Bueno et al. [24], who demonstrated that unspecific oil-derived ARASCO ®and DHASCO ® -MAG induced noticeable in vitro antitumor activity on HT-29 cells. Previous studies demonstrated that DHA significantly decreases cancer cells proliferation [28]; while Pompeia et al. [12] found dose-and time-dependent ARA-induced cytotoxicity in leukocytes, i.e., ARA at 10-400 µM induced apoptosis, while at concentrations above 400 µM the noted effect was necrosis. Several studies have discussed the relationship between FAs and mitochondrial permeability transition. Such relation is modulated by a variety of effectors of cell death, including reactive oxygen species (ROS), which are important messengers in normal-cell signal transduction and cell cycle being produced by mitochondria after stimulation of the TNFα receptor [29,30]. Moreover, Scorrano et al. [29] showed that ARA is a powerful permeability-transition inducer in MH1C1 cells, causing a release of cytochrome c followed by cell death. The LDH test is a colorimetric method suitable for the measurement of cell membrane integrity. It is based on the measurement of LDH enzyme activity, whose increase in the culture supernatant is proportional to the number of lysed cells [31]. Until now, several authors have studied cell membrane permeability to determine non-cytotoxic concentrations of MAG on different cell lines, such as Caco-2 and HT-29 cells, and no significant toxicity measured by the LDH assay was observed [15,24,32,33]. In this study, toxicity was found only after 72-h treatment at the highest ARA-MAG concentrations. This might be linked to the differential spatial configuration of both ARA-and DHA-MAG. The activities of both ARA and DHA were previously checked in the FFA form, with a higher activity being noted for ARA than for DHA [21,34], thus, after the intracellular hydrolysis of both MAGs, the action of the ARA-MAG would still prevail. The low activity could be due to the fact that both ARA-and DHA-MAG cannot be integrated effectively into cell membranes, since MAGs are low-polarity compounds and therefore they cannot establish effective chemical links with membrane proteins on these structures [35]. In this regard, Dommels et al. [36] stated that the cytotoxic effects mediated by some PUFAs, e.g., ARA and EPA, are due to the peroxidation products generated during lipid peroxidation and cyclooxygenase activity. However, MAGs are known to be surface-active compounds, so they might show minor cytotoxic effects on cells by disrupting cell membranes [32]. 
Caspase-3 is considered to be the most important effector of apoptosis and a marker for both intrinsic and extrinsic pathways [37]. An important aspect to consider is the chemical structure of the different MAGs, which affects the potency to induce apoptosis; in this regard, Philippoussis et al. [3] concluded that SFA-based MAG had no effect on such phenomenon, while PUFA-MAGs were highly potent to induce apoptosis in T-cells. The apoptotic activity noted here for DHA agrees with previous findings [3,38,39], and the potency of both LCPUFA-MAGs was higher than that previously reported for both FFA-based LCPUFAs [21]. Our findings demonstrated that DHA-MAG induces apoptotic cell death via activation of caspase-3. The non-activation of caspase by ARA-MAG may be due to the chemical structure of MAG, since in the FFA form such activity was detected [21]. These results completely agree with those from Ho and Storch [5], who suggested the existence of a protein-mediated process for MAG transport through cell membranes. Accordingly, LCPUFA-based MAGs could reach high concentrations inside cells and would be able to perform effective apoptosis actions, as reported here. SWATH, one of the recently developed data-independent acquisition (DIA) MS strategies [40], shows outstanding precision and accuracy even when used for proteome-wide quantitation [41]. SWATH performance is comparable to that of selected reaction monitoring (SRM), the golden standard for protein and small molecule quantitation [41]. SWATH consists of acquiring MS/MS data in stepped m/z fragmentation windows, and then matching the resulting fragment ions to peptides and proteins using a previously generated MS/MS spectral library, so fragment ion chromatograms can be in-silico extracted and used for label-free protein quantitation. Here we have used a SWATH v2.0 method, with variable Q1 isolation windows according to the ion density found in previous DDA runs, which has been described to improve peptide identification and quantification [42]. The results found in our SWATH analysis, with DHA-MAG producing a deeper effect than ARA-MAG on the HT-29 cancer cell proteome, suggest that the decrease of cell viability and increase of apoptosis observed should be produced by means of different mechanisms depending on the MAG tested. For comparing these results to those found in our previous work [21], it has to be noted that both experiments (cell cultures and cell assays) were run in parallel, and the analytical LC-MS methods and bioinformatics analysis performed have been exactly the same. All LC-MS runs were combined in one dataset, normalized for inter-run variability, and analyzed all together (SWATH extraction, pathways, and GO analysis). When comparing the results with those previously obtained for DHA-and ARA-FFA [21], 284 (45 up-regulated and 239 down-regulated) and 73 (27 up-regulated and 46 down-regulated) proteins, respectively, were reported for the FFA forms. Therefore, these previous results also showed a deeper effect for DHA than for ARA. Within the DHA-derived molecules, MAG affected the HT-29 cell proteome more globally than FFA did (896 vs. 284 differentially expressed proteins, respectively), although there is a protein core of 237 proteins that are common to both DHA forms ( Figure S3). 
Taking all these figures together, these results indicates that (i) DHA and ARA (in both forms, FFA and MAG) differentially affect the whole proteome of HT-29 cells, suggesting that the decrease of cell viability and increase of apoptosis observed should be produced by means of different mechanisms depending on the molecule tested; and (ii) MAG have a much deeper effect than FFA only for DHA forms, not for ARA. As an additional interesting result, we found PTGES3 (prostaglandin E synthase 3, TEBP, accession Q15185) was strongly down-regulated in DHA-MAG (fold-change 0.14, that is, seven times less abundant in DHA-MAG than in the control group) (Table S3). This protein, also down-regulated as an effect of DHA-FFA (fold-change 0.5, that is, two-fold less abundant in DHA-FFA than in control) [21], belongs to the prostaglandin biosynthesis pathway, and catalyzes the conversion of prostaglandin H2 to prostaglandin E, that is, the next step to the transformation of arachidonic acid to prostaglandin H2, which is catalyzed by COX-2. COX-2 is involved in regulation of apoptosis and proliferation of colorectal, liver, pancreatic, breast, and lung cancer cells [43], and although we do not have quantitative data for COX-2 in our results, the strong down-regulation of PTGES3 we found could mean that one of the antitumor activities of DHA could be effected by means of the prostaglandin cascade, since it plays an important role in antigen presentation and immune activation in cancer [44]. The significantly affected pathways were analyzed using iPathwayGuide software, which implements an 'impact analysis' approach, taking into consideration not only the over-representation of differentially expressed (DE) genes in a given pathway (i.e., enrichment analysis), but also topological information such as the direction and type of all signals in a pathway, and the position, role, and type of each protein [45]. Only the proteasome pathway resulted as significantly impacted by DHA-MAG in our analysis. The proteasome is a large protein complex which main action is degrading ubiquitinated-labeled proteins [46], and plays an important role in the regulation of many cellular processes such as cell cycle, cell differentiation, signal transduction, inflammatory response, and antigen processing. In Figure 4, a diagram of the different particles and proteins constituting the proteasome are shown, highlighting the proteins that we have found to be significantly regulated as an effect of DHA-MAG on HT-29 cancer cells. In our results, we found 30 proteins from the proteasome pathways to be significantly down-regulated, comprising all the main particles: 11 proteins from the 20S core particle and 14 proteins from the 19S regulatory particle, in both lid and base subunits. In addition, the PA28-αβ and PA28-γ are also down-regulated. PA28-γ has been found in the nucleus and plays an important role in the regulation of apoptosis and cell cycle progression [47]. Interestingly, proteasome was the main pathway that was found to be affected by DHA-FFA in our previous study using the same workflow [21]. In that case, 18 proteins from the proteasome complex resulted as down-regulated. Since the number of DE proteins reached 30 in the case of DHA-MAG, we can conclude that both forms of DHA affect the proteasome, but the MAG form induces its massive switch-off. 
Interestingly, proteasome inhibitors, such as natural polyphenol compounds, have been tested in clinical trials as drug candidates for treating different cancers, due to their ability to induce apoptosis and reduce cell proliferation [48,49]. According to the strong down-regulation of the proteasome particles we have found in our study, we suggest DHA-derived MAG, in addition to DHA-FFA as previously reported, as one of these candidates that deserve further studies as an anticancer effector. Two pathways resulted as significantly affected by ARA-MAG: DNA replication and pyrimidine metabolism. Regarding the DNA replication pathway, POLE3, one of the proteins conforming the DNA polymerase E, and three proteins from the helicase (MCM2, MCM3, and MCM7) were found to be significantly down-regulated ( Figure 5). In addition, the remaining helicase proteins, MCM4, MCM5, and MCM6, which are not significant according to our threshold (above two-fold change), are also affected to a certain extent, since corresponding fold-changes (case to control) found are 0.51, 0.55, and 0.51, respectively. The DNA helicase protein complex is responsible for unwinding the duplex DNA helix ahead of the DNA synthetic machinery at the replication fork [50]. Since DNA replication is linked to cell cycle progression and to DNA repair processes, it would be expected that the down-regulation of the helicase-constituting proteins and POLE3 would have an inhibitory effect on these other related processes. Actually, our results show that cell cycle pathway (KEGG: 04110), even though is not significant in our pathway analysis (adjusted p-value of 0.214), is perturbed by the addition of the ARA-MAG extract, since several proteins belonging to this pathway are regulated by it ( Figure S4a). The other pathway that resulted as significantly impacted according to our pathway analysis was the pyrimidine metabolism pathway (KEGG: 00240) (adjusted p-value 0.041), where ARA-MAG produces the under-expression of several proteins ( Figure S4b). When comparing to the results obtained in the previous study using ARA-FFA, these two pathways, DNA replication and pyrimidine metabolism, together with cell cycle, also resulted as significantly impacted in the pathway analysis [21]. Interestingly, DHA-MAG also affects proteins from the DNA replication pathway ( Figure S5), showing four helicase proteins being down-regulated, although our pathway analysis does not report this pathway as statistically significant due to the high global number of affected proteins. DNA replication turned out to be significantly impacted in our previous study using DHA-FFA, and therefore we can say that both forms of DHA induce a down-regulation of helicase proteins in addition to the deep effect on the proteasome. In contrast, ARA-MAG, as was the case also for ARA-FFA [21], does not induce a strong effect on the proteasome pathway (p-value of 0.661) since only two of the proteins included in this pathway were found to be regulated, PSMF1 and PSME3, with fold-changes (case to control) of 0.44 and 0.47, respectively ( Figure S6). For validating the SWATH-derived results, we developed a micro-HPLC PRM method which included 27 proteotypic peptides from a total of nine proteins. Interestingly, these nine proteins covered all the possible differences (up-regulated, down-regulated, or non-significant changes) for both comparisons (DHA-MAG to control and ARA-MAG to control) as found in the SWATH analysis. 
PRM [51], being a high-resolution (HR) MS targeted proteomics approach, has significantly greater sensitivity, accuracy, and precision than non-targeted discovery measurements, and more specificity than non-HR targeted methods, and therefore it is commonly used as a tool for validating protein abundance changes in all kinds of quantitative proteomics applications [52]. The results from the PRM assay were consistent with those previously found with the SWATH discovery approach (Table 1) (Table 1). Only protein PSMF1 showed a different behavior when comparing SWATH and PRM analyses, and only for the ARA-MAG to control comparison (Table 1). While it had been found to be down-regulated by SWATH, PRM analysis found a fold-change of 1.0 (and, accordingly, no statistically significant difference). When inspecting the PRM results for this protein, we found that this issue could be explained by a discordance between the measures for the two peptides that were monitored for this protein: one of the peptides (ALIDPSSGLPNR) showed a fold-change of 0.56, while the fold-change for the other peptide (LPPGAVPPGAR) was 1.41 (Table S7). On the other hand, this discordance between the two peptides was not found in the comparison DHA-MAG to control, where the protein was found to be significantly down-regulated in both PRM and SWATH measures. An interesting finding of the PRM analysis was the validation of the strong down-regulation of protein TEBP, prostaglandin E synthase 3, by DHA-MAG: measured fold-changes were 0.14 and 0.08 (SWATH and PRM, respectively), with adjusted p-values close to zero (Table 1), while ARA-MAG did not affect this protein. Prostaglandin E synthase 3 is one of the main proteins in the prostaglandin biosynthesis, converting prostaglandin H2 in prostaglandin E. Prostaglandin H2, the rate-limiting step in the formation of prostaglandins, is the product of prostaglandin G/H synthase 2, or COX-2, which has been related to colorectal cancer and whose inhibition (e.g., by nonsteroidal anti-inflammatory drugs) has been linked to tumor cell apoptosis, inhibition of proliferation, and reduction of colorectal cancer risk [53,54]. Therefore, it could be proposed that, apart from the proteasome pathway, one of the mechanisms contributing to the anticancer activity of DHA-MAG in HT-29 cells is carried out through the inhibition of prostaglandin synthesis, counteracting COX-2. It should be noted here that an increase in lipid droplet accumulation, as a consequence of a potential saturation of the cells by lipids, could have a role in some of the effects found. However, we have demonstrated, and this is the main finding of this work, that, depending on the MAG, the molecular mechanisms working in the background are different, with DHA-MAG deeply affecting the proteasome, with 30 proteins being strongly regulated, while ARA-MAG, with only two proteins affected, does not. Actually, the conclusion of our study is that DHA-and ARA-MAG, while differentially affecting the whole proteome of HT-29 cancer cells by means of different mechanisms are not affected by the possibility of lipid droplets playing a role on the effect seen. In summary, we have demonstrated that both ARA-and DHA-MAG showed concentrationdependent inhibitory effects on HT-29 cell viability, with a clear ability of DHA-MAG to induce cell death. 
The biological interpretation of SWATH-MS-generated proteomics data, validated by the quantification of nine relevant proteins by PRM, revealed that DHA-MAG outperforms the effect previously described for DHA-FFA, having a massive effect on the proteome of HT-29 cancer cells, with the proteasome complex being completely shut down. The strong down-regulation of prostaglandin E synthase 3, validated by PRM, also suggest a significant role of the prostaglandin synthesis in the anticancer activity of DHA-MAG in these colorectal cancer cells. On the other hand, although the effect of ARA-MAG is reduced in comparison to that of DHA-MAG, mainly in terms of inducing cell death, it still produces concentration-dependent inhibitory effects on HT-29 cell viability, as revealed by the MTT test. According to the proteomics experiments, this decrease of cell viability could be effected through inhibition of DNA replication and G1/S cell cycle transition, as it was previously described for ARA-FFA. Even though the MAG concentration used in the proteomics experiments (600 µM) could be considered relatively high, previous studies have reported higher (above 1700 µM) FA concentrations reached in human volunteers [55] and therefore, in the case of developing drugs based on MAG (and the corresponding FFA) for the treatment of cancer, 600 µM would be physiologically achievable, and it even would not represent negative effects on health, although, of course, further research in preclinical models and also in the clinical setting should be needed. Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6643/11/12/2984/s1, Table S1: List of identified proteins in the integrated data set. FDR was set to 1% at both peptide and protein levels. Table S2: SWATH-based protein quantification data for all the samples. 1882 proteins were quantified by SWATH-MS and measures were normalized for inter-run variability as described in Material and Methods section. Table S3: Differential abundance test for DHA-MAG vs. control. Fold changes, resulting p-values, and FDR analysis q-values for each of the 1882 quantified proteins are shown. Table S4: Differential abundance test for ARA-MAG vs. control. Fold changes, resulting p-values, and FDR analysis q-values for each of the 1882 quantified proteins are shown. Table S5: GO analysis: biological processes. Differential expressed proteins found in each biological process, together with all the proteins in that process, and the adjusted p-value, are shown for both comparisons, DHA-MAG vs. control and ARA-MAG vs. control. Table S6: GO analysis: cellular components. Differential expressed proteins found in each cellular component group, together with all the proteins in that component, and the adjusted p-value, are shown for both comparisons, DHA-MAG vs. control and ARA-MAG vs control. Table S7: Peptides selected for PRM validation. Fold-changes and adjusted p-values are shown for the comparisons DHA-MAG to control and ARA-MAG to control, together with the coefficient of variation (CV) measured using three technical replicates for each of the peptides. Figure S1: Multivariate analysis including all 1882 quantified proteins, showing the complete separation of the samples from each group (DHA-MAG, ARA-MAG, and control). (a) Group-averaged heat map; (b) partial least-squares discriminant analysis (PLS-DA); and (c) hierarchical cluster analysis (Spearman distance). 
Figure S2: Coefficient of variation (CV) for the 27 peptides monitored in the PRM validation, calculated using three injection replicates. Median CV for the PRM assay was 9%, with all peptides but one showing a CV below 20%. Figure S3: Differentially expressed proteins found as effect of DHA-and ARA-MAG, together with the previous results for DHA-and ARA-FFA [21]. Figure S4
2019-12-11T14:01:43.679Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "940cae081dabdb210f11c3afced6558187ffaaf0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/11/12/2984/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a6060041b3605b2a7d0173e197cc412a9a3f9187", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
256702329
pes2o/s2orc
v3-fos-license
Serological testing of an equal-volume milk sample – a new method to estimate the seroprevalence of small ruminant lentivirus infection? Background In cattle attempts to evaluate within-herd prevalence of various infectious and parasitic diseases by bulk-tank milk (BTM) testing with ELISA have been made with moderate success. The fact that BTM is composed of variable and unknown volumes of milk from individual lactating animals weakens the relationship between numerical result of the ELISA and the within-herd prevalence. We carried out a laboratory experimental study to evaluate if a pooled milk sample created by mixing an equal volume of individual milk samples from seropositive and seronegative goats, henceforth referred to as an equal-volume milk sample (EVMS), would allow for accurate estimation of within-herd seroprevalence of caprine arthritis-encephalitis (CAE) using 3 different commercial ELISAs. By mixing randomly selected milk samples from seronegative and seropositive goats, 193 EVMS were created – 93 made of seronegative samples and 100 with the proportion of seropositive individual milk samples (EVMS%POS) ranging from 1 to 100%. EVMS%POS could be considered as a proxy for the within-herd seroprevalence. Then, OD of EVMS (ODEVMS) of the 193 EVMS was measured using 3 commercial ELISAs for CAE – 2 indirect and 1 competitive. Results The cut-off values of ODEVMS indicating SRLV infection were determined. The regression functions were developed to link ODEVMS with EVMS%POS. A significant monotonic relationship between ODEVMS measured with 2 commercial indirect ELISAs and EVMS%POS was identified. Two regression models developed on this basis described approximately 90% of variability and allowed to estimate EVMS%POS, when it was below 50%. High ODEVMS indicated EVMS%POS of > 50%. Conclusion Our study introduces the concept of serological testing of EVMS as a method of detecting SRLV-infected herds and estimating the proportion of strongly seropositive goats. Further field studies are warranted to assess practical benefits of EVMS serological testing. Supplementary Information The online version contains supplementary material available at 10.1186/s12917-023-03599-z. Background Caprine arthritis-encephalitis (CAE) caused by small ruminant lentivirus (SRLV) is one of the major health problems of goat population worldwide. SRLV infection is lifelong and in a proportion of goats progresses into a symptomatic form, with chronic arthritis being the most common clinical manifestation. Antibodies specific to SRLV are usually produced 4 to 12 weeks post infection, rarely later [1,2], and remain detectable for life although their levels are known to fluctuate [3][4][5]. It makes serology the mainstay of CAE diagnostics [6,7]. Serum testing is considered the optimal diagnostic modality for CAE control programs [8]. Testing of milk samples instead of blood eliminates animal stress as well as reduces costs associated with specimen collection. Therefore, several studies have evaluated diagnostic accuracy of serological milk testing [8][9][10][11][12][13][14][15]. Their results indicate high diagnostic accuracy and high agreement between qualitative results obtained on milk and serum. This makes milk testing a screening method worthy of consideration in dairy goat herds. Regardless of the disease in question and the biological material used, serological screening of the herd is an expensive procedure as it requires a representative group of animals be properly selected and sampled [16]. 
This fact has given rise to the idea of serological testing of a bulk tank milk (BTM) sample. This approach is currently used in cattle for screening herds for various viral [17][18][19][20], bacterial [21,22], and parasitic diseases [23][24][25][26][27][28]. In goats, serological BTM testing has so far been evaluated in terms of CAE, caseous lymphadenitis [29], toxoplasmosis [30], paratuberculosis [31], and Q fever [32,33]. Estimations of diagnostic accuracy of serological BTM testing vastly depend on the reference standard used. Generally, in the aforementioned studies diagnostic specificity usually outraced sensitivity [29] and higher sensitivities were observed when serological rather than molecular reference standards were used [31]. Serological BTM testing for CAE was shown to be moderately accurate in detecting herds with at least 2% within-herd seroprevalence with the area under ROC curve of 80%. At an optimal cut-off value the procedure was roughly 73% sensitive and 84% specific [29]. The intensity of color reaction in ELISA performed on BTM sample, both raw and corrected optical density (OD), is known to correlate with the within-herd prevalence of infections in lactating individuals. As a consequence, quantitative ELISA results obtained on BTM samples have been used to estimate the within-herd prevalence of infections with bovine viral diarrhea virus (BVDV) [34,35], bovine herpesvirus type 1 (BHV-1) [36][37][38], bovine leukemia virus (BLV) [39], and F. hepatica infection in cattle [23,24], and T. gondii infection in goats [30]. However, the contribution of individual animals to the seroreactivity of a BTM sample depends on the volume of milk they yield and the concentration of antibodies in individual milk samples. As these two variables are not only beyond examiner's influence but usually also remain unknown, estimations made on BTM samples have been shown to be rather imprecise. The former source of variability can be easily eliminated by creating an artificial milk sample by mixing an equal volume of milk from each animal. A recent study has shown that serological testing of this type of pooled milk samples from only 10 first-lactation cows in a herd yields more accurate result than BTM testing in terms of lungworm diagnosis [40]. This study, however, evaluated qualitative test results. On the other hand, Mazzei et al. [41] showed that the correlation between corrected OD of a pooled milk sample and the share of sheep milk seropositive for maedi-visna disease in this sample was almost perfect (coefficient of determination (R 2 ) = 0.98). However, in this study the second source of variability was also removed by constant diluting a milk sample from the same seropositive sheep in the same seronegative milk sample (made by pooling BTM samples from 3 seronegative herds). Such perfect conditions preclude using the regression formula derived in their article to estimate the within-herd prevalences in field conditions. Therefore, we carried out a laboratory experimental study to evaluate if testing a pooled milk sample created by mixing an equal volume of individual milk samples selected randomly from seropositive and seronegative goats (henceforth referred to as an equal-volume milk sample, EVMS) using three different commercial ELISAs would accurately estimate the proportion of milk samples coming from SRLV-seropositive goats. 
Results The list of 193 EVMS along with their optical density (OD EVMS ) and the proportion of seropositive individual milk samples in EVMS (EVMS %POS ) are presented in Table S1. Negative EVMS OD EVMS of sp-iELISA ranged from 0.052 to 0.143 with the arithmetic mean (SD) of 0.070 (0.021) and significantly non-normal (p < 0.001) and right-hand skewed distribution (coefficient of skewness [CoS] = 2.07, CI 95%: 1.58 -2.56). The cut-off value of OD EVMS above which EVMS should be classified as positive was set at 0.15. One positive EVMS (with EVMS %POS = 1%) overlapped with negative EVMS and therefore 99 EVMS with OD EVMS ≥ 0.15 were further investigated for the relationship between OD EVMS and EVMS %POS . OD EVMS of TM/CA-iELISA ranged from 0.118 to 0.928 with the arithmetic mean (SD) of 0.290 (0.141) and significantly non-normal (p < 0.001) and right-hand skewed distribution (CoS = 2.09, CI 95%: 1.60 -2.58). The cut-off value of OD EVMS above which EVMS should be classified as positive was set at 0.93. No positive EVMS overlapped with negative EVMS and therefore all 100 EVMS with OD EVMS ≥ 0.93 were further investigated for the relationship between OD EVMS and EVMS %POS . OD EVMS of SU-cELISA ranged from 0.476 to 0.855 with the arithmetic mean (SD) of 0.719 (0.092) and significantly non-normal (p < 0.001) and left-hand skewed distribution (CoS = -1.04, CI 95%: -1.53 --0.55). The cut-off value of OD EVMS below which EVMS should be classified as positive was set at 0.47. Six positive EVMS (with EVMS %POS from 1 to 6%) overlapped with negative EVMS and therefore 94 EVMS with OD EVMS ≤ 0.47 were further investigated for the relationship between OD EVMS and EVMS %POS . The aforementioned cut-off values of OD EVMS ensured 100% diagnostic specificity (95% confidence interval [CI 95%]: 96.0% -100%) of EVMS testing. However, diagnostic sensitivity of EVMS testing certainly was lower and was mainly affected by the analytical sensitivity of the ELISA which was not evaluated in this study. Therefore, EVMS testing yields results with high positive predictive value and positive result is highly trustworthy. However, EVMS testing should never be used for ruling SRLV infection out as the negative predictive value remains unknown. The study showed that low EVMS %POS had OD EVMS overlapping with OD EVMS from negative EVMS. The EVMS model based on sp-iELISA and applied to OD EVMS between 0.15 and 3.0 fit the data well (F 1,46 = 390.8, p < 0.001) and was described by the following equation: Its parameters are given in Table 1 and R 2 was 0.895. Standardized residuals were normally distributed (p = 0.213) and homoscedasticity was retained (p = 0.097). The relationship between OD EVMS measured using sp-iELISA and EVMS %POS in the entire range of possible OD values was as follows: TM/CA-iELISA on positive EVMS Also, in terms of this ELISA, the scatter plot showed a distinction in OD EVMS between samples with different EVMS %POS (Fig. 2). No positive EVMS overlapped with negative ones. Then, OD EVMS showed a gradual logarithmic increase that was linked to increasing EVMS %POS (after linearization using a logarithmic transformation of EVMS %POS : r = 0.964, CI 95%: 0.937 -0.980). The logarithmic relationship could be observed up to the EVMS %POS of 50% where the trend disappeared and the function became constant with OD EVMS > 4.0. To illustrate the relationship between OD EVMS and EVMS %POS , EVMS %POS from the range of 1% to 49% was logarithmically transformed (natural logarithm). 
The EVMS model based on TM/CA-iELISA and applied to OD EVMS between 0.93 and 4.0 fit the data well (F 1,47 = 610.4, p < 0.001) and was described by the following equation: Its parameters are given in Table 1 and R 2 was 0.927. Standardized residuals were normally distributed (p = 0.361) and homoscedasticity was retained (p = 0.425). The relationship between OD EVMS measured using TM/CA-iELISA and EVMS %POS in the entire range of possible OD values was as follows: SU-cELISA on positive EVMS As in the case of the other ELISAs, the scatter plot showed a distinction in OD EVMS between samples with different EVMS %POS (Fig. 3). Then, OD EVMS showed a gradual decrease that was linearly linked to increasing EVMS %POS (r = -0.871, CI 95%: -0.973 --0.491; p = 0.002), which, however, could be observed only up to the EVMS %POS of 15%. Then, the linear trend turned significantly weaker (r = -0.454, CI 95%: -0.687 --0.137; p = 0.007). Such a trend was observed up to the EVMS %POS of 49%, and then it slightly changed but remained similarly weak (r = -0.584, CI 95%: -0.740 --0.368; p < 0.001). A very narrow range of EVMS %POS (up to 15%) in which the relationship between OD EVMS and EVMS %POS followed a measurable trend indicated that SU-cELISA was unsuitable for estimation of EVMS %POS . As a result, it could not be used for estimation of the within-herd seroprevalence on the basis of EVMS. Discussion Our study shows that a milk sample prepared by pooling equal volume of individual milk samples from all lactating animals in a herd, called by us an equal-volume milk sample (EVMS), is characterized by the presence of a very strong monotonic relationship between the proportion of individual milk samples from goats seropositive for SRLV infection used for preparation of this EVMS (EVMS %POS ) and its OD value measured using a commercial indirect ELISA for CAE (OD EVMS ). This relationship was observed despite the fact that milk samples were selected based on serological testing of respective serum samples not milk samples themselves and the agreement between OD values in these two materials is moderate, not only in CAE [8] but also in other infections [24,42]. The strong monotonic relationship applies, however, only to a part of the range the two variables can take. Our results offer a chance to use OD EVMS not only to classify a herd as seropositive or seronegative but also to estimate the proportion of seropositive lactating goats in Fig. 3 The relationship between the proportion of seropositive individual milk samples in the equal-volume milk sample (EVMS %POS ) and the optical density of EVMS (OD EVMS ) in competitive ELISA based on surface glycoprotein antigen (SU-cELISA). Black broken line is a cut-off separating positive and negative EVMS a herd. This concept is very tempting as practical benefits from quantitative evaluation of a herd status based on testing only one sample are undeniable. Hope for this has encouraged many scientific teams to attain this goal by testing a BTM sample [34][35][36][37][38][39]. However, despite significant correlations and R 2 ranging from 0.7 to 0.85 a considerable spread of OD values from the regression line could be observed in all these studies. Moreover, the higher within-herd prevalence was the further OD values tended to be located from the regression line. In our study we observed the same trend as well as differences in the character of the relationship in subsequent ranges of the dependent variable (i.e. EVMS %POS ). 
In fact, the significant and strong correlation between the two variables was present in our study only for EVMS %POS < 50%. Within this range approximately 90% of variability could be explained by EVMS models. Above this value, the regression line became more or even completely horizontal in the case of sp-iELISA and TM/CA-ELISA, respectively. Therefore, the relationship between OD EVMS and EVMS %POS could not be described using a single function for the entire range of values of EVMS %POS (0% to 100%). Interestingly, a similar phenomenon could be noticed after careful inspection of scatter plots presented in many previous studies on BTM in bovine viral infections [34,[36][37][38][39]. In these studies, exponential, square root, and logarithmic functions were used to link the two variables in the whole range of their possible values. However, our observations suggest that OD values correlate significantly only with a limited range of values the analyzed variables may take. Therefore, we think it is pointless to try to construct a single model predicting the entire range of EVMS %POS . Above a particular value, which in our study seemed to be 50%, the EVMS %POS cannot be precisely estimated which is certainly a limitation of EVMS serological testing. In our opinion, the same limitation applies to most of so far published studies aiming to predict within-herd prevalence on the basis of OD value of BTM sample. Interestingly, this effect was absent in studies investigating this BTM testing in parasitic diseases [23,24]. The regression equations we derived are only an illustration of the relationships observable in certain predefined and controlled laboratory conditions. It is unlikely that they can be directly used to estimate within-herd prevalences of CAE in field conditions. This is because EVMS %POS is only a very simplified proxy for the withinherd prevalence. First of all, it was made by mixing milk samples from seronegative and strongly seropositive goats while not all SRLV infected goats are strongly seropositive. In fact, the levels of antibodies against SRLV are known to fluctuate [3][4][5] which is likely to blur the differences between situations in which 1 strongly seropositive goats or several weakly seropositive goats are present in a herd. The study of Mazzei et al. [41] shows that when the variability coming from different antibody concentrations in individual milk samples is eliminated by using still the same positive and negative milk sample to prepare milk dilutions, the correlation between OD value and proportion of seropositive parts of milk in the pooled sample is virtually perfect. However, such a situation is purely theoretical and does not correspond to field conditions. Further studies should answer the question about the magnitude of influence this factor has on the estimations based on OD EVMS . Secondly, we artificially created EVMS based on 100 individual milk samples. The influence of a single goat's seroreactivity on the seroreactivity of EVMS was probably relatively smaller than if EVMS was based on 10 milk samples. However, the magnitude of this effect remains unknown and cannot be reliably predicted by extrapolation of our results. Another drawback to our study is the unknown analytical sensitivity of the method. Commercial indirect ELI-SAs used in our study allowed to detect the presence of 1 (TM/CA-iELISA) or 2 (sp-iELISA) seropositive individual milk samples out of 100. 
This could falsely suggest that EVMS serological testing might serve as a screening method. It has to be emphasized that the minimum amount of antibodies that can be detected by the ELISAs in EVMS (limit of detection) is unknown. Therefore, no conclusions can be drawn from the fact only 1 or 2 positive EVMS of the lowest EVMS %POS overlapped with negative EVMS. EVMS with e.g. 10% of positive individual milk samples could have also overlapped with negative EVMS if these positive milk samples had contained less antibodies. Testing EVMS may only indicate the seropositive status of the herd but not the seronegative status. The same applies to BTM testing. The freedom from disease can only be confirmed in properly-designed and properly-conducted disease surveys [16]. Our observation that SU-cELISA performed worse than the two indirect ELISAs is probably at least partly associated with lower analytical sensitivity of EVMS testing with this ELISA. SU-cELISA is based on a single SRLV antigen so it is able to capture only antibodies against SU, leaving anti-CA and anti-TM antibodies unattended. Therefore, is likely to detect fewer positive EVMS as it has also been shown in terms of individual milk samples [15] and serum [43]. The fact that a significant correlation between OD value of BTM samples and within-herd prevalence has been observed in many previous studies acts in favor of the concept of EVMS testing. EVMS overcomes one crucial weakness of BTM i.e. an inequality of milk sample volumes contributed by individual lactating females, which is likely to considerably lower the variability in predictive models. A concept close to EVMS was proposed and evaluated by McCarthy et al. [40]. They created pooled milk samples from 10 randomly selected heifers in a cattle herd. A diagnostic accuracy of serological test for lungworm infection carried out on this sample proved to be significantly higher than on BTM sample. Their study shows highly probable advantage of pooling equal volumes of milk over BTM in which contribution of individual animals to the total volume is unknown. Unfortunately, an attempt to estimate the within-herd seroprevalence was not made in this study. EVMS combines advantages of the pooled milk sample from the representative number of animals in a herd (equal representation of animals tested) and the BTM sample (representation of all lactating animals). Obviously, like BTM, EVMS does not include males, kids, and females before the first parturition, as well as goats in a dry off period. However, especially in goats, it is unlikely to considerably interfere with the assessment of the herd status as males constitute only a small part of a dairy herd (usually a few bucks are kept separately from females and can be tested individually) and goats usually are bred seasonally. As a result, most of lactating does are in a similar stage of lactation which eliminates a factor considered as a potential source of variability in BTM testing of dairy cattle herds [37,44]. Moreover, the optimal moment for herd screening may easily be chosen. Modern milking parlors enable rapid and simple collection of an individual milk sample from the bucket (recording jar). As EVMS is prepared just by mixing such individual milk samples at equal volume, it adds very little work to farmer's daily schedule. Compared to collecting individual blood or milk samples directly from individual animal's vein or udder it not only saves money, time, and labor, but also spares animals additional stress. 
Conclusions Our study introduces the concept of serological testing of an equal-volume milk sample (EVMS) as a method of detecting SRLV-infected herds and estimating the proportion of strongly seropositive goats. Analogically to BTM, EVMS ensures the representation of all lactating animals in the herd, meanwhile eliminating the variability associated with different volumes of milk yielded by each animal. Our study demonstrates that in terms of SRLV infection a significant monotonic relationship between OD EVMS and EVMS %POS is observed only for a part of the range of values these variables may take and only when indirect ELISAs are used. Further field studies are warranted to assess practical benefits of EVMS serological testing not only in terms of CAE but also other infectious and parasitic diseases in dairy animals. Equal-volume milk samples Numerical results (raw and corrected OD values) of 200 individual serum and milk paired samples were purposively selected from the database used in the previous study regarding performance of 3 commercial immunoenzymatic assays on individual milk samples [14]. Serum and milk (lactoserum) samples from individual goats had been collected for this study between April and August 2019 (the first half of lactation) and screened using two indirect ELISAs -ID Screen MVV-CAEV Indirect Screening test (ID.vet Innovative Diagnostics, Grabels, France) containing the panel of peptides from SRLV structural proteins -surface glycoprotein (gp135, SU), transmembrane glycoprotein (gp46, TM), and capsid protein (p25/p28, CA) (henceforth referred to as sp-iELISA), and IDEXX MVV/CAEV p28 Ab Screening (IDEXX Laboratories, Westbrook, ME, USA) based on recombinant TM and CA antigen (TM/CA-iELISA), and one competitive ELISA -Small Ruminant Lentivirus Antibody Test Kit, cELISA (VMRD, Pullman, WA, USA) coated with SU (SU-cELISA). The following criteria of sample selection were applied: One hundred paired serum and milk samples came from 100 goats that tested positive for the presence of antibodies to SRLV (seropositive goats) in the 3 ELISAs, and their corrected OD was at least twofold higher than the manufacturers' cut-off values which were as follows: serum-to-positive control ratio (S/P%) = 50% in sp-iELISA, S/P% = 110% in TM/ CA-iELISA, and percentage of inhibition (PI) = 35% in SU-cELISA. Another 100 paired serum and milk samples came from 100 goats that tested negative for the presence of antibodies to SRLV (seronegative goats) in the 3 ELISAs, and their corrected OD was at least twofold lower than the manufacturers' cut-off values. Raw and corrected OD of paired 200 serum and milk samples are summarized in Table 2 and individual results can be found in Table S2. By mixing randomly selected milk samples from seronegative goats with randomly selected milk samples from seropositive goats, 193 equal-volume milk samples (EVMS) were created. Simple random method of selection with returning was performed using the RAND. BETWEEN() function in Microsoft Excel. For each EVMS, 10 μl of 100 individual milk samples were mixed to amount to the volume of 1 ml. Ninety three EVMS were made of individual milk samples from seronegative goats (negative EVMS). One hundred EVMS were made of individual milk samples from seropositive and seronegative goats combined at a ratio from 1:99 to 100:0 (positive EVMS). The proportion of seropositive individual milk samples out of 100 individual milk samples (EVMS %POS ) increased by 1% from 1% to 100%. 
EVMS serological testing Then, OD of EVMS (OD EVMS ) of the 193 EVMS was measured using the 3 ELISAs. The ELISAs were performed according to manufacturers' protocols. EVMS were diluted 1/2 in sp-iELISA and TM/CA-iELISA, and remained undiluted in SU-cELISA. These dilutions were chosen based on our previous study [15]. All other steps of the protocol remained unchanged compared to regular serum testing. OD EVMS was measured at a wavelength of 450 nm (sp-iELISA and TM/CA-iELISA) or 620 nm (SU-cELISA) using the scanner (Epoch Microplate Spectrophotometer, BioTek, USA). OD EVMS values exceeding the upper measuring limit of the scanner (OD > 4.0, OVER-FLOW) were replaced with the value of 4.2. Statistical analysis OD values were presented as the arithmetic mean, standard deviation (± SD), and range. Normality of distribution was assessed using the Quantile-Quantile scatter plots and the Shapiro-Wilk test, and the skewness of distribution was expressed using coefficient of skewness (CoS) with CI 95%. The cut-off value separating negative and positive EVMS was set at the maximum (in the case of indirect ELISAs) or minimum (in the case of SU-cELISA) OD EVMS observed in negative EVMS rounded up or down, respectively, to the closest second decimal digit. Diagnostic specificity of EVMS testing was calculated as the number of negative EVMS below the cut-off value divided by 93 negative EVMS and the CI 95% was calculated using Wilson score method [45]. Linear correlation between OD EVMS above the cut-off value and EVMS %POS was determined using the Pearson product-moment correlation coefficient (r) and the CI 95% was calculated with the method of Altman et al. [45]. Strength of correlation was classified as follows: r = 0.00 to 0.19 -very weak; 0.20 to 0.49 -weak; 0.50 to 0.69moderate, 0.70 to 0.89 -strong, and 0.90 to 1.00 -very strong [46]. The relationship between EVMS %POS (explanatory, independent variable, x) and OD EVMS above the cutoff value (outcome, dependent variable, y) was investigated using the scatter plot and simple linear regression according to a general formula: where β 0 was an intercept (an y value at which x = 0), β 1 was a slope of the linear regression line, and ε was a residual (error). EVMS %POS was considered an independent variable in this model because its values were fixed by the study design, and they had an influence on OD EVMS which were, therefore, considered a dependent variable. Hence, the equation was as follows: Depending on the shape of relationship, x variable was either entered in raw or logarithmically transformed (natural logarithm with e basis) form. To predict EVMS %POS based on OD EVMS the equation was inversed as follows: Assumptions of the linear regression were evaluated as follows [47]: 1) shape of the relationship between x and y was assessed by visual inspection of the scatter plot and it was linearized by x transformation if necessary; 2) normality of residual distribution was assessed by visual inspection of the Quantile-Quantile scatter plots and using the Shapiro-Wilk test; 3)
2023-02-10T15:03:09.536Z
2023-02-10T00:00:00.000
{ "year": 2023, "sha1": "20fe17a9c5e0e4061f3c14261bd0fc5dafe97248", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "20fe17a9c5e0e4061f3c14261bd0fc5dafe97248", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
258686771
pes2o/s2orc
v3-fos-license
On the Decoherence of Primordial Gravitons It is well-known that the primordial scalar curvature and tensor perturbations, $\zeta$ and $\gamma_{ij}$, are conserved on super-horizon scales in minimal inflation models. However, their wave functional has a rapidly oscillating phase which is slow-roll unsuppressed, as can be seen either from boundary (total-derivative) terms of cosmological perturbations, or the WKB approximation of the Wheeler-DeWitt equation. Such an oscillatory phase involves gravitational non-linearity between scalar and tensor perturbations. By tracing out unobserved modes, the oscillatory phase causes faster decoherence of primordial gravitons compared to those by bulk interactions. Our results put a stronger lower bound of decoherence effect to the recent proposals probing squeezed primordial gravitons. In particular, the quantum noise produced by the primordial gravitational wave can be largely enhanced by the inflationary squeezed states, providing a chance to make the non-classicality of gravitons detectable. However, it is well-known that the cosmological perturbations, including scalar and tensor, can experience the quantum-to-classical transition through the interaction with environment during inflation, described by the environment-induced decoherence. As a first step to analyze the potential obstruction of the mentioned proposals, we study the decoherence of the primordial gravitons during the simplest single-field inflation. As noted recently in [41], the slow-roll unsuppressed boundary (total-derivative) term which dominates for the super-horizon scalar curvature perturbation ζ, can introduce rapidly oscillating non-Gaussian phase to the wave functional of cosmological perturbations, leading to much larger decoherence than the bulk interactions. Such boundary terms without any time derivativeζ (similarly for tensor perturbationγ ij ) cannot contribute to correlators ζ m γ n , so they are often neglected in the literature, whereas they also cannot be removed by field redefinitions [48][49][50]. In this paper, we extend the boundary-term decoherence to 1 See also the related framework developed earlier in the semi-classical stochastic gravity [13][14][15] and the bound of squeezing from the current LIGO-Virgo data [16]. 2 We also compare with the results in [44,45], and the related decoherence quantities are estimated in Sec. IV D. the case with tensor perturbation, which relies on the slow-roll unsuppressed scalar-tensor cubic boundary terms and we will show that these boundary terms also lead to larger decoherence than the bulk terms. There is a constant interest in discussing the Wheeler-DeWitt (WDW) equation [51,52] and quantum gravity [53][54][55][56][57], which motivates us to discuss the relationship between these boundary terms and the wave functional obtained by the WDW equation. In the large volume limit of inflation [58][59][60] and dS space [61][62][63][64] or similarly the asymptotic infinity in AdS [54,65,66], the wave functional shares the same form, consisting of a real local action W (h ij , φ) and a part Z(h ij , φ) including non-local terms where h ij is the spatial metric induced on a hypersurface, and φ is the inflaton in our case. 
With the form of the wave functional (3), it is clear that the rapidly oscillating phase W (h ij , φ) cannot contribute to the expectation value of any observable defined by the spatial metric where we assume that Ψ(h ij , φ) is normalized and defined on the hypersurface with δφ = 0 (the ζ-gauge), so all the quantum degrees of freedom are in the metric. The local action (or WKB phase) W (h ij , φ) can be calculated by applying the WKB approximation to the WDW equation [58][59][60], and we will show that it matches the non-Gaussian phase obtained from the boundary terms in the action of cosmological perturbations, thus contributing to the decoherence. It is noteworthy that the slow-roll suppressed scalar bulk interaction ( + η)a (∂ i ζ) 2 ζ studied in [34] also contributes a rapidly oscillating non-Gaussian phase at late time, thus the state is considered as the WKB type by the author, causing decoherence of ζ by tracing out unobserved modes. However, we will show that the WKB phase in the WDW state includes slow-roll unsuppressed parts involving both scalar ζ and tensor perturbations γ ij . The paper is organized as follows. In Sec. II, we review the setup of the simplest singlefield inflation with a brief discussion of the choice of tensor perturbation and re-derive the splitting of bulk and boundary cubic terms. In Sec. III, we first discuss the non-Gaussian phase obtained from the slow-roll unsuppressed boundary terms in both the interaction and We set some notations for convenience. We label the comoving momenta of system modes with q and environment modes with k, and the comoving momentum p can be used to label arbitrary modes. The integral modes with momentum conservation is denoted by , and integrating over two environment modes with a fixed system mode is k+k =−q = . We use a simpler notation s 1 ,··· ,sn = s 1 ,··· ,sn=+,− to represent the sum of n circular polarization indices. II. The bulk and boundary terms in the cubic order In this section, we review the setup for deriving the action of cosmological perturbations up to the cubic order [67] with a brief comment on the two common choices of tensor perturbation in the literature. Since the neglected temporal boundary terms obtained by integration by parts are not clearly shown in [67], we also derive the splitting of bulk and boundary terms for both scalar and tensor perturbations. A. The setup We start with the ADM decomposition of the metric where N and N i are the lapse and shift respectively, and h ij is the spatial metric on the hypersurface which has the extrinsic curvature where n µ is the normal of the hypersurface. We consider the simplest single-field inflation with the action where the second term in the first line is the Gibbons-Hawking-York (GHY) boundary term for the manifold M [68][69][70] which cancels with the covariant derivative term when we decompose the 4-dimensional Ricci scalar with (3) R the three-dimensional Ricci scalar. The Hubble parameter is determined by the uniform background of inflaton field as with which the slow-roll parameters are defined satisfying the slow-roll conditions 1 and |η| 1. In the literature, there are two definitions of tensor perturbations in the spatial metric h ij which are applied in the decoherence problems [44,45], and here we briefly discuss the difference. One is γ ij defined in the comoving gauge (ζ-gauge with δφ = 0), e.g. in [67] h ij = a 2 e 2ζ (e γ ) ij , where ζ is the scalar curvature perturbation, and γ ij satisfies γ ii = ∂ i γ ij = 0. 
Another is defined by the scalar-vector-tensor decomposition [71] which includes the scalar part h S and the transverse traceless part h T T ij . The curvature perturbation ζ is related to (h S , h T T ij ) on the hypersurface with δφ = 0 by comparing the same det(h ij ) calculated in (11) and (12) showing the dependence between ζ and h T T ij , so h T T ij may not be a good choice when ζ is considered as the environment of decoherence of gravitons. Therefore, under the condition δφ = 0, one may either choose (ζ, γ ij ) or (h S , h T T ij ) to be the scalar and tensor perturbations to make the calculation convenient and avoid some ambiguities. On the other hand, γ ij defined in the first case conserves outside the horizon, as the cubic interactions related to γ ij all include derivatives which will be shown explicitly in (27)- (29). However, h T T ij does not conserve outside the horizon as the cubic interactions include a term without any derivative [44] suggesting that γ ij is more convenient to the describe the super-horizon evolution of tensor mode. As also discussed in [72], gravitational wave should not perturb the spatial volume, and the property det(e γ ) ij = 1 also suggests that γ ij is more appropriate. Due to the mentioned reasons, we adopt the spatial metric defined with (ζ, γ ij ) (11), consistent with the choice in [34,41,45]. Varying the action (7) with respect to N and N i gives the constraint equations, solving them gives With these, expanding the action (7) to the second order gives the free actions of scalar and tensor perturbations where in the second line the following mode decomposition is applied [73] and the symmetric polarization tensors satisfy By quantizing ζ and γ ij , the free actions (16) and (17) where the normalization factor G,p is separable, and the coefficients 3 where τ = t dt a(t ) is the conformal time with = d dτ , and the mode functions are For the Gaussian wave functional (20), the scalar and tensor power spectra are related to the two coefficients (21) and (22) as At the quadratic level, all the modes evolve independently, and thus we need the cubic order action for the interactions between observed and unobserved modes. 3 The time derivative of is neglected when we calculate A B. Splitting the bulk and boundary cubic terms We want to find out all the scalar and tensor temporal boundary terms neglected in [67], as they will be shown to be important to the inflationary decoherence. The guideline of splitting the bulk and boundary terms in the ζ-gauge is to make the former matches the one derived in the δφ-gauge (spatially flat) (see also [50,74,75] by arranging the bulk terms in the gauge-invariant manner), and it is expected that all the cubic bulk terms involving ζ is slow-roll suppressed since the transformation δφ ≈ −φ H ζ ∼ O( √ )ζ introduces slow-roll parameters. On the other hand, since the tensor perturbations in the two gauges are on the same slow-roll order, bulk terms include some slow-roll unsuppressed γγγ interactions. The splitting of the bulk and boundary interactions can be found in the literature [49,50,76], and here we derive again with the notations used in this paper. 
With the Mathematica package MathGR [77] doing integration by parts, we obtain the following splitting of bulk and boundary cubic terms from the action (7) (with spatial total derivative neglected) where the four bulk terms the two EOM terms 4 bulk/boundary type leading interaction of each type order bulk ζζζ and the boundary terms Using the size of power spectra of scalar and tensor perturbations we expect that the tensor perturbation is slow-roll suppressed compared to the scalar perturbation γ ∼ O( √ )ζ. Table I 5 is the summary of the size of interaction terms which do 5 In the choice of h T T ij , the leading three-tensor interaction is a 3 h T T ij h T T jl h T T li , but we have chosen γ ij . not involve any time derivative, 6 and only the most dominated terms of each type are shown. It is clear that boundary terms are less slow-roll suppressed compared to the bulk terms. III. The oscillating phase of inflationary wave functional It has been shown in the literature [48,49,76] that total time derivative terms in the Lagrangian L int ⊃ −∂ t K generally contribute to the in-in correlators as 7 Since decoherence is mainly contributed by terms without time derivatives [45], we focus on the boundary terms involving ζ, γ ij and their spatial derivatives, denoted by K (ζ, γ, t) with the explicit forms shown in (34)- (36). K (ζ, γ, t) commutes with normal correlators of the form O(t) = ζ m (t)γ n (t), and the expectation value in (39) vanishes, so such boundary terms are often neglected in the literature. In this section, we will show how such boundary terms contribute a non-Gaussian phase to the wavefunctional of cosmological perturbations with three methods. Sec. III A and III B calculate the phase with the interaction and Schrödinger pictures respectively, and the results will be shown to be consistent in (41) and (47). In Sec. III C, we show that such a phase also matches the WKB approximation of the WDW (3), and the explicit result will be shown in (59). 6 Terms with time derivatives are usually neglected for decoherence. For bulk terms, time derivatives are mainly contributed by sub-horizon modes which cause sub-dominated decoherence [45]. For boundary terms with time derivatives, they can be removed by field redefinitions [49]. 7 The relation H int = −L int holds in the cubic order with the interaction picture [78]. A. The interaction picture It is straightforward to generalize the interaction picture approach in [41] to the case with tensor perturbation γ ij , and the evolution operator is is the free evolution operator, and the labels I and S are the interaction and Schrödinger pictures respectively. Applying U (τ, τ i ) (40) to the initial Gaussian state |Ψ G (τ i ) gives a phase to the Schrödinger wave functional B. The Schrödinger picture We can also canonically quantize the theory with the boundary terms 8 with where we use a more compact notation α = α a to denote the dynamic fields ζ and γ ij for making the derivation simpler and more generic, so j aa and F abc can include comoving 8 We thank Haipeng An's suggestion of discussing the boundary term with the Hamiltonian formalism when CM Sou gave a seminar talk of [41] at Tsinghua University. momenta and polarization tensors expressed in the Fourier space. With these notations, the total time derivative has the form and the conjugate momenta are defined as leading to the Hamiltonian density whereF abc = F abc + F bac + F bca . 
We can easily check that the wavefunctional with the non-Gaussian phase (41), rewritten with the compact notations where denotes integrals in the Fourier space, satisfies the Schrödinger equation For the first two lines in (48), we applied the conditions obtained from the free theory and the following relations C. The WKB approximation of the Wheeler-DeWitt equation The boundary terms can also be obtained by applying the WKB approximation to the wave function of universe Ψ(h ij , φ), obtained with the WDW equation [51,52]. In [59,60], the WDW equation of gravity with a scalar field has been applied to analyze the consistency relation and bispectrum, and we follow the formalism. We start with the Hamiltonian corresponding to the action (7), defined on the hypersurface Σ inducing h ij , where κ = M 2 p 2 , the conjugate momenta are and the DeWitt metric is By promoting the conjugate momenta (52) to functional derivatives −i δ δh ij , −i δ δφ ( is restored for the moment) and varying (51) with respect to (N, N i ), we obtain the Hamiltonian and momentum constraints for the wave functional Ψ(h ij , φ) where the first line is the WDW equation. Here we adopt this formalism to analyze the dominated non-Gaussian phase of the wave functional, contributing to the decoherence. By applying the WKB approximation to the Hamiltonian constraint (54) with the ansatz Ψ(h ij , φ) = exp (iW (h ij , φ)/ ), the phase W (h ij , φ) satisfies the Hamilton-Jacobi (HJ) equation [58,61] suggesting that the functional derivatives match the classical conjugate momenta (52) [60] δW The solution of the HJ equation (55) has been constructed in the literature [58,61,79,80], and its form up to terms with two spatial derivatives is where , the phase includes the cubic terms where the first line matches all the slow-roll unsuppressed boundary terms (34)-(36) up to two spatial derivatives. 9 The γγγ terms in the second line has the same form with the bulk terms L γγγ (29) which come from (3) R [73] since the bulk terms mainly contribute a real phase to the wave functional at late time, and this can be shown as follows. As a supplement to the first two methods, we consider the remaining slow-roll unsuppressed bulk term L γγγ (29), which contributes a non-Gaussian part to the Schrödinger wave functional as [34,67,73] 9 Note that the boundary terms withζ orγ ij can be removed by field redefinitions [48,49], and thus we do not consider them. whereH (γγγ)s 1 ,s 2 ,s 3 and u (γ) p (τ ) is defined in (23). The time integral in (60) is proportional to and plugging the leading imaginary part into (60) gives the growing phase for the wave functional Ψ (γγγ) which agrees with the second line of (59). It is noteworthy that the integral (62) is also discussed in [34] for the scalar cubic interaction proportional to aζ (∂ i ζ) 2 , 10 in which the corresponding oscillating phase is the main contribution to the decoherence of ζ, and we will calculate the similar decoherence for L γγγ in Sec. IV C. On the other hand, the last boundary term with four spatial derivatives in the first line of L bd,ζζζ (34) is included in the next order solution of the HJ equation, which has been derived in [80] 11 10 Except the overall factor involving polarization tensors and comoving momenta, the bulk interactions with the form aζ (∂ζ) 2 and aγ (∂γ) 2 leads the same time integral (62). where we only keep the slow-roll unsuppressed terms with the ζ-gauge (∂ i φ = 0) in the first line. 
Since the terms with four spatial derivatives are suppressed by the scale factor as a 3 ∂ 4 a 4 = ∂ 4 a , we expect that their contribution to decoherence is negligible at late time. We emphasize that the cubic WKB phase (59) is independent to how the integration by parts is chosen to split bulk and boundary interaction terms in (25), as supported by the appearance of γγγ terms, and it relies on the hypersurface Σ for evaluating the WDW wave functional, as discussed in [41] by comparing the boundary terms on different hypersurfaces. Therefore, the wave functional of cosmological perturbations is expected to have slow-roll unsuppressed non-Gaussian phase regardless of the way of doing integration by parts in the action (7), and all the three methods presented here give consistent results as long as the slow-roll unsuppressed terms are identified. IV. The decoherence of primordial gravitons In this section, we calculate the decoherence with the wave functional of cosmological perturbations Ψ(ζ, γ ij ) and the formalism used in [34,41,81]. We consider the primordial gravitons to be observed form a system {ξ q } = {γ s q }, and other unobserved degrees of freedom interacting with the system form the environment {E k }, which includes unobserved modes of scalar ζ k and tensor perturbations γ s k . With the cubic scalar-tensor and three-tensor interactions, the states of gravitons' modes cannot evolve independently, and they entangle with the environment, represented by the non-Gaussian part of the wave functional where σ i denotes some discrete degrees of freedom of the environment modes, two polarizations for tensor and one mode for scalar perturbations, and we focus on the non-Gaussian part involving two environment and one system modes since this dominates the decoherence [45]. As the state of the environment cannot be accessed, the system is described by the reduced density matrix obtained by tracing out unobserved degrees of freedom where (· · · ) (similar notation · · · ξ for the system). In the cases when the non-Gaussian part is a rapidly oscillating phase (with F σ 1 ,σ 2 ,s k,k ,q has a dominated imaginary part), we expect that the expectation value over environment modes is highly suppressed if ξ =ξ, characterizing the loss of interference (decoherence). Such a suppression of off-diagonal terms of ρ R is calculated as the decoherence factor and for a particular system mode with comoving momentum q, the leading contribution is a one-loop integral 12 where the prime · · · E means ignoring factors like (2π) 3 δ 3 (k 1 + k 2 ), and the volume V = (2π) 3 δ 3 (0) is for discretizing the integral We only keep the imaginary part of F σ 1 ,σ 2 ,s k,k ,q which dominates in all the cases studied in this paper, and the expectation value of the minus exponent defines the dimensionless "decoherence exponent" (initially called the "decoherence rate" in [34]) 13 Γ(q, τ ) = P (γ) q σ 1 ,··· ,σ 4 ,s k+k =−q ImF σ 1 ,σ 2 ,s k,k ,q ImF σ 3 ,σ 4 ,s where we use (ξ s q V δ s,s , and the decoherence of the system mode ξ q happens when Γ(q, τ ) ≈ 1. We will study the decoherence of primordial gravitons 12 Here we mean the integral can be interpreted as a one-loop diagram, and one should not be confused with the one-loop quantum correction discussed in Sec. V. 13 Some papers [44,45] define the physical decoherence rate in the usual sense, so it is better to use another name for avoiding ambiguity while comparing some results. 
Note that we can also define the physical decoherence rate d dt Γ(q, τ ) = O(1)HΓ(q, τ ) since Γ(q, τ ) ∝ a n , and the decoherence moment with Γ(q, τ ) ≈ 1 means the physical decoherence rate is comparable to the Hubble rate. by the two boundary terms in L bd,ζ−γ (2) and the bulk term L γγγ (29), which include three leading terms decohering γ ij by the orders of magnitudes in Table I, 14 and the corresponding Γ(q, τ ) are computed as the diagrams in Fig. 1. A. ζζγ boundary interaction With the ζζγ boundary term in (2), the wavefunctional has a non-Gaussian phase leading to the decoherence factor for the tensor mode with comoving momentum q and the decoherence exponent Here we want to study the decoherence of the tensor mode by both the sub-and superhorizon scalar environments, the integral over k, k should be all the scalar modes except the observable super-horizon modes denoted as q ζ,min ≤ k, k ≤ q ζ,max . We will show later with the explicit result that such an exclusion of the observable region only modifies the sub-dominated part of the decoherence, so we first calculate (72) with all 0 < k, k < +∞. To calculate (72), we can choose q on the z-axis, and the polarization tensors along the q-axis are with e ± ij (−q) = e ± ij (q) * , so the multiplication of comoving momenta and polarization tensors where here k z = k cos θ, and we applied the fact that |k x | = |k x | and |k y | = |k y | since k + k = −qẑ. Therefore, the conserved-momentum integral in (71) is calculated with the spherical coordinates with a UV cutoff aΛ. The result has a few power-law UV divergences but without logarithmic type, so using the dimensional regularization (dim. reg.) only keeps the last term in (75), which is negative. We follow the method used in [41] resolving the UV divergence for the ζ boundary term (1), and this involves a field redefinition of ζ which dominantly redefines the sub-horizon modes, whereas the change of super-horizon modes is suppressed for preserving the correct nearly scale-invariant power spectrum. Note that the similar idea of resolving the UV divergence by field redefinition is also demonstrated in [82] for the decoherence of background scale factor a(t) by scalar field (with the setup in a closed spacetime), where the failure of using local counterterms and the dim. reg. is discussed. With (76), the scalar power spectrum is changed to and the integrand of (75) scales as k −6 when k → +∞, causing the integral to converge: where the full analytical expression of function J ζζγ (Q) is shown in Appendix (A1). Finally, we justify the previous claim that the exclusion of observable super-horizon region q ζ,min ≤ k, k ≤ q ζ,max is sub-dominated as follows. The power spectrum of these modes converges as P (ζ) , so their contribution to the integrand in (75) converges to a constant at late time, corresponding to the O(a 2 ) contribution in the decoherence exponent. Similar fact is also reported in [45] with the Lindblad equation approach that the dependence on the partition of system and environment is in the sub-dominated order at late time. B. ζγγ boundary interaction Now we consider the second term of (2), and we split the tensor modes into observable (system ξ ij ) and unobservable (environment E ij ) by their comoving momenta: The decoherence factor and exponent are and respectively. To compute the product of the four polarization tensors, we express the polarization tensor with k = k (sin θ cos φ, sin θ sin φ, − cos θ) in the coordinate form, and this has been done, e.g. 
in the appendix of [44] 15 and plugging in to (82) becomes 15 We set the coordinates such that k = −q when θ = 0, and our convention of the polarization tensors e ± ij is related to the one used in [44]ẽ +,× ij by the linear combinations: which has UV and IR divergences. The UV divergence can be resolved by the field redefinition (76) and a similar one for γ ij , making the integral (86) converges at k → +∞ where the analytical expression of J ζγγ (Q) is shown in Appendix (A3). The IR divergence can be regularized by putting the finite duration of inflation as a cutoff with log q k min = N q − N IR = ∆N , counting the e-folds from the onset of inflation to the horizon crossing, and this cutoff has been applied in [36]. Similar to the case of ζζγ, the exclusion of some observable super-horizon modes also contributes to O(a 2 ) term to (87), which is negligible compared to the logarithmic factor. On the hand, the lack of O(a 2 ) IR divergence in the case of ζζγ is manifested by comparing the products of polarization tensors in the limit k → −q (or θ → 0) in (74) and (85), and the former has vanishing contribution to the decoherence exponent, whereas the latter contributes. C. γγγ bulk interaction We note that the calculations of three-tensor decoherence in the literature are done with h T T ij [44], 16 and we want to see if the difference between this and γ ij discussed in Sec. II A leads to deviations of decoherence exponent. As shown in (62), the non-Gaussian phase contributed by L γγγ has the same time-dependent structure as the three-scalar interaction aζ (∂ i ζ) 2 , so we can generalize the calculation in [34] to the three-tensor case. We apply the same squeezed limit k ≈ k q and k ≈ −k to estimate the dominated part of the decoherence, and it turns out can greatly simplify the product of polarization tensors: where in the last line we use the facts k m e s 2 jm (k ) ≈ −k m e s 2 jm (k ) = 0 and e s 1 lm (k)e s 2 lm (k ) ≈ e s 1 lm (k)e s 2 lm (−k) = 4δ s 1 ,s 2 , and the factor of polarization tensors agrees with the graviton's consistency relation [67]. With (60), the dominated phase part of (62) and (88), the decoherence factor is approximated as and the decoherence exponent is where the product of polarization tensors is evaluated in (74) with k ⊥ q, so the decoherence exponent is similar to the scalar case in [34] except different prefactors. On the other hand, it is possible to have IR divergence when k → −q or k → −q, leading to terms with log k min q = ∆N regularized by the IR cutoff, and this can be shown from the bulk interaction where the factor of polarization tensor is similar to (88) with k ↔ q, so it contributes to a factor of q 4 to the one-loop integral in this limit. With this, the leading IR logarithmic factor in the decoherence exponent can be estimated as which agrees with the scalar case [34] that the IR-divergent part is proportional to a 2 , and adding the dominated part given by the squeezed limit (90) is the total decoherence exponent which is sub-dominated compared to the those by boundary terms, as expected in Table I. D. Compare the decoherence by different interaction terms We are ready to compare the decoherence by different interactions terms, including those by bulk interactions studied in [45] with γ ij and scalar environment, and [44] with h T T ij and tensor environment. Since these papers use different quantities to indicate the decoherence, we first convert these to the equivalent decoherence exponent Γ(q, τ ) for doing comparison. 
For a Gaussian mixed state, the reduced density matrix has the form which has the purity where Ξ = 2F 2 ReA . In [45], the decoherence is determined as Ξ = O(1), and the relation to the decoherence exponent is For the ζζγ bulk interaction, the decoherence of tensor perturbation is determined by [45] Ξ bulk ζζγ (q, τ ) = 1 36π which is equivalent to On the other hand, [44] studies the reduced density matrix ρ R in the particle basis of the system ξ, U 0,ξ |0 ξ , U 0,ξ a † 1,ξ |0 ξ , U 0,ξ a † 1,ξ a † 2,ξ |0 ξ where U 0,ξ and a † i,ξ are the free evolution and creation operators of the system defined with h TT ij , and the evolution of reduced density matrix ρ R has the form The decoherence exponent of the mode q illustrated in Fig. 1, is comparable to the one with two external 1-particle states and tracing out the sub-horizon tensor environment, which is the value used in [36]. For the decoherence happens with Γ(q, τ ) ≈ 1, the cases with bulk interactions require 7-9 e-folds after the horizon crossing, where such a 1-2 e-fold difference is partly attributed to different number prefactors obtained by various methods of calculating decoherence, 18 and it is also attributed to the IR part (92) by the bulk interaction L γγγ . On the other hand, the decoherence by the boundary terms happens around 5-6 e-folds after the horizon crossing, which is faster than the cases with the bulk interactions, expected by counting the order of slow-roll suppression in Table I. 17 The authors in [44] also comment about this comparison. We should also note that the authors use ρ red,00 , calculated with E 00 , to indicate the decoherence, but it is obtained by summing over all system modes {ξ q } and has IR divergence (as their system is defined as all super-horizon modes), thus not representing a single mode with comoving momentum q. 18 Note that (90) [83]. Two values of the IR cutoff are chosen: ∆N = 2 for the minimal inclusion of the IR environment, and ∆N = 10 4 as the value used in [36]. V. Comments on the one-loop quantum correction Previously, our treatment is limited to the tree-level analysis, and it remains to be seen how quantum fluctuation will influence the result. Since gravity can be viewed as a non-abelian gauge theory with diffeomorphism invariance, we need to also include ghost fields. In this section, we give an argument that the effect of one-loop correction is neglectable compared to the leading order contribution. To begin with, following (7), the corresponding Euclidean action reads 19 and the wave functional can be expressed with the path integral where h and φ 0 simply denote the boundary values of the metric and inflation field. The metric and inflation field can be expanded as where g c and φ c are the solutions to the classical equations of motion and under the diffeomorphism x µ → x µ + ξ µ the perturbations transform as Following [53,85], the boundary condition here is subtle. Here we mainly focus on metric field because it is the quantum gravity that cause the subtleties instead of scalar field. Usually there are two boundary conditions for metric field, one is the Dirichlet boundary condition, which fixes the metric at the boundary and requiresh| ∂M = 0, ξ| ∂M = 0, so the transverse component of metric is unrestricted and needs gauge fixing. The other is called conformal boundary condition which admits the boundary metric up to a Weyl rescaling. 
In [85], the author points out that mathematically, the Dirichlet boundary condition will be problematic because this boundary condition breaks the elliptic properties of propagator, and therefore, 19 In the following calculation of effective action, we follow the metric signature in [84]. may not lead to a well-defined perturbation theory of quantum gravity. The conformal boundary condition will satisfy the elliptic properties. However, the author also said that, a sufficient condition for the Dirichlet boundary continues to work is that the extrinsic curvature K ij is either positive or negative definite. For de Sitter it is not hard to prove that the metric satisfies this condition, and therefore both the Dirichlet and conformal boundary condition will work for the analysis. Without loss of generality, we take the Dirichlet boundary condition, and we need to fix the transverse component of the metric. Following [84], the gauge condition reads: where t µ α t αν = g µν c . It is worthy to note that some references [86,87] also take the Landau gauge. For the on-shell effective action, different choices of C α are equivalent, whereas for the off-shell effective action, the one-loop correction depends on the choice of gauge. However, we will show that in our case, higher loop corrections can be neglected, and therefore it is reasonable to only consider the on-shell effective action, in which the choice of gauge will not influence our conclusion. By expanding (106) around an infinitesimal gauge transformation, following [84], the ghost Lagrangian density for one loop is given as: where c andc are ghost and anti-ghost fields respectively, which obey the Dirichlet boundary condition and (106). Then we expand (101) to the second order of the metric and inflation field and adding the gauge-fixing term, following [84] L gf = L − 1 2 where the definitions of X, Y, Z are given in [84]: and we used the facts that: and Following [65,88], the one-loop wave functional is then formally written as: where J is defined in (107), and G is defined as: and the effective action is written as: The explicit calculation of the determinants needs to use the zeta function regulator and is model dependent, because here we need to know the explicit form of inflaton potential V (φ). The results have been shown in various literature, for reference, see [69,86,87,89,90]. Here we take the example in [86]: the effective action for a pure gravity with de Sitter background has the form where the first term is tree-level, and the second term is the one-loop correction with a energy scale µ. For the de Sitter space with R = 4Λ = 12H 2 , the ratio of the one-loop to tree-level contributions is: where the energy scale is chosen as µ ∼ H, and the Planck data [83] is used for the estimation. Similar form of the one-loop correction (115) is also derived in explicit inflation models such as [87], so the order estimation (116) should be generic. Such a small ratio implies that the one-loop and higher-order corrections to the phase of wave functional and the corresponding decoherence effect are neglectable. Following [91][92][93][94], it is worthy to mention that the symmetry also allows some other one-loop corrections, take the two-point function of scalar curvature perturbation as an example, there are two possible terms that allowed by symmetry. 
One is: where L being the comoving size, and the other is: The first term might contribute large in the IR limit, however, [94] shows that this will not affect the observable quantities and is purely a projection effect. The second term is only logarithmic type with the suppression of ∆ 2 ζ that it will not influence the result significantly as well. Similarly, the one-loop correction of gravitons' two-point function also includes a logarithmic term like [95] h TT which also does not correct significantly to the result, following the reason of the scalar case. VI. Conclusion Studying the decoherence of primordial gravitons is not only for explaining the quantum-toclassical transition of the primordial gravitational wave, but it is also useful to set constraints and identify potential obstacles for probing the non-classicality of squeezed gravitons. In this paper, we proceeded the discussion of the slow-roll unsuppressed boundary-term decoherence in [41] and calculated the case with primordial gravitons. Starting from the standard procedure of splitting the bulk and boundary interaction terms with integration by parts, we confirmed that there exists unsuppressed scalar-tensor coupling boundary terms of ζ and γ ij . Such boundary terms were shown to contribute a non-Gaussian phase to the wave functional of cosmological perturbations with either the interaction and Schrödinger picture approaches. As the phase grows with the scale factor a(t), the corresponding quantum state is close to the WKB type, as expected in [34], but now we have showed that it is slow-roll unsuppressed. To gain insight into the unsuppressed phase of the wave functional, we studied the Wheeler-DeWitt formalism of quantum gravity with the WKB approximation valid in the large a(t) limit. The WKB phase evaluated on the hypersurface with the given (h ij , φ) was shown to include unsuppressed cubic terms, agreeing with those obtained from the boundary terms. This result suggests that the leading phase of the inflationary wave functional is independent to the bulk-boundary splitting of the action, but such a splitting is certainly helpful to identify the unsuppressed terms. The agreement of these approaches also suggests that full considerations of either the boundary terms of cosmological perturbations or the WKB limit of the WDW wave functional are needed to discuss the classicalization of the perturbations. We thus calculated the environment-induced decoherence of primordial gravitons with the wave functional method presented in [34], addressing the influence by the unsuppressed non-Gaussian phase. The gravitons were shown to decohere around 5-6 e-folds after crossing the horizon, faster than the process by the bulk interactions which takes 7-9 e-folds. We have also estimated the size of the one-loop quantum correction to the wave functional including the Faddeev-Popov ghost determinant, showing that it is suppressed by a factor of graviton's power spectrum H 2 M 2 p compared to the three-level part, so the correction to our results should be negligible. During inflation, the squeezing of primordial gravitons happens after the horizon crossing with the parameter grows as R q ∼ log aH q [3], so their purity should also be considered in the proposals testing their non-classicality. How exactly the cosmological decoherence affects the proposals deserves further studies. Finally, we comment on the relationship between the physical hypersurface Σ and decoherence. 
As demonstrated in [41] with the well-defined variational principle, the boundary terms on the two hypersurfaces naturally defined in the ζ-gauge and δφ-gauge respectively are different, whereas they become equal when the same Σ is chosen even if the calculation is done with the fields defined in different gauges. It is noteworthy that similar results have been reported in [50] by constructing gauge-invariant quantities for all the scalar and tensor interactions, and boundary terms on different hypersurfaces do not match even the non-linear field redefinition is considered. Here we make another point of view by studying the phase of the WDW wave functional, and the leading WKB phase W (h ij , φ) is evaluated by choosing Σ. By the relationship between the phase of wave functional and boundary terms, it is easy to verify the facts in [41,50]. This also implies the same conclusion in [41] that the quantities naturally defined on various Σ have different decoherence rates, such as ζ and γ ij which conserve on super-horizon scales decohere faster. Better understanding the physical meanings of these conclusions is also worthy for further studies. Following [67,96], it will also be interesting to understand the decoherence effect in this paper from the perspective of dS/CFT and AdS/CFT, and in the context of string cosmology [97]. We leave this for future work.
2023-05-16T01:16:04.816Z
2023-05-14T00:00:00.000
{ "year": 2023, "sha1": "fdad9e2a2f84bde6dcf807c435a23a58023917cc", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP06(2023)101.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "2cca208c3dbee5809950b35124bc4fd5fb0c4327", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271522927
pes2o/s2orc
v3-fos-license
Ferroptosis-based advanced therapies as treatment approaches for metabolic and cardiovascular diseases Ferroptosis has attracted attention throughout the last decade because of its tremendous clinical importance. Here, we review the rapidly growing body of literature on how inhibition of ferroptosis may be harnessed for the treatment of common diseases, and we focus on metabolic and cardiovascular unmet medical needs. We introduce four classes of preclinically established ferroptosis inhibitors (ferrostatins) such as iron chelators, radical trapping agents that function in the cytoplasmic compartment, lipophilic radical trapping antioxidants and ninjurin-1 (NINJ1) specific monoclonal antibodies. In contrast to ferroptosis inducers that cause serious untoward effects such as acute kidney tubular necrosis, the side effect profile of ferrostatins appears to be limited. We also consider ferroptosis as a potential side effect itself when several advanced therapies harnessing small-interfering RNA (siRNA)-based treatment approaches are tested. Importantly, clinical trial design is impeded by the lack of an appropriate biomarker for ferroptosis detection in serum samples or tissue biopsies. However, we discuss favorable clinical scenarios suited for the design of anti-ferroptosis clinical trials to test such first-in-class compounds. We conclude that targeting ferroptosis exhibits outstanding treatment options for metabolic and cardiovascular diseases, but we have only begun to translate this knowledge into clinically relevant applications. INTRODUCTION Cell death exhibits a hallmark of many diseases.Clinically relevant regulated cell death encompasses apoptosis [1], necroptosis [2], pyroptosis [3] and as an entirely different entity, iron-catalyzed necrosis [4,5], referred to as ferroptosis [6].All of these pathways result in a cataclysmic burst [7] mediated at least partially by oligomerization of the plasma membrane protein ninjurin-1 (NINJ1) [8][9][10].This ultimate rupture of the plasma membrane defines "necrosis" and is inevitably associated with the release of intracellular content referred to as damage associated-molecular patterns (DAMPs) [11][12][13] which result in the activation of immune cells in an event defined as necroinflammation [14,15].It is beyond the scope of this review to discuss the details of these terms apart from the definition of ferroptosis, and the interested reader is referred to the above cited review articles.However, the potential applications for inhibitors of ferroptosis (ferrostatins) are of particular importance in metabolic and cardiovascular diseases and their complications.Endocrine disorders, with diabetes mellitus as the most prominent example, are particularly susceptible to ferroptosis, and steroid hormones [16,17] as well as cholesterol metabolites [18,19] are emerging as important regulators of ferroptosis [20].Cardiovascular complications, such as myocardial infarction, acute kidney injury and stroke are particularly common in diabetic patients [21], and the associated ischemia-reperfusion injury (IRI) has been a prototype disease model for ferroptosis [5,[22][23][24].With lipid peroxidation representing a typical feature of ferroptosis, it is not surprising that fatty liver diseases and IRI in the liver are driven by ferroptosis as well [25,26].Please note that while intercellular ferroptosis propagation has been described [27,28], it remains entirely unclear how cell death propagation between cells is regulated.It is unclear until today to which 
extent cell death propagation is a specific feature of ferroptosis.We will highlight the potential of advanced therapies targeting ferroptosis and start by defining ferroptosis as iron-catalyzed necrosis. PART 1-THE DEFINITION OF FERROPTOSIS For the purpose of this review article, we define ferroptosis as ironcatalyzed plasma membrane rupture.Fenton reactions lead to the generation of reactive oxygen species that may be controlled by cytosolic redox systems such as the thioredoxin reductase reaction.The thioredoxin-mediated redox signaling represents an antient NAD(P)H-dependent biological reaction pattern, sometimes referred to as the redox metabolome [29], and has been described to be involved in plant immunity [30].Clearly, the thioredoxin (TRX)-system is involved in the pathophysiology of diabetes as well [31,32].Failure of such NAD(P)H-dependent systems to mitigate the cytosolic ROS-concentration triggers the lipid peroxidation and the typical chemical reactions of ferroptosis [33] that are opposed by ferroptosis surveillance enzymes.The best studied system is the glutathione peroxidase 4, a GSHmetabolizing selenocysteine essential for vertebrate life [34,35].GPX4 requires direct contact with the plasma membrane to fully function, and mutations in the lipid bilayer anchoring loop of GPX4 result in remarkable dysfunction.Besides GPX4, GSHindependent enzymes have been described to be capable of replacing GPX4 function, at least at the cellular level.Ferroptosissuppressor protein 1 (FSP1) relies on CoQ10 to prevent lipid peroxidation [36,37], while other systems, such as membranebound O-acyltransferase domain-containing 1 and 2 (MBOAT1/2) are less well understood [38].With all the chemistry and the ferroptosis surveillance systems studied in detail, the events that connect lipid peroxidation with subsequent rupture of the plasma membrane (necrosis) are almost entirely elusive.Although not all forms of ferroptosis (e.g., RSL3-induced cell death) appear to require ninjurin-1 (NINJ1) [39], recent data have suggested a critical involvement of this molecule to execute the cataclysmic burst of the plasma membrane in ferroptosis through NINJ1 oligomerization [10].The details of the regulated mechanisms of NINJ1 membrane organization and oligomerization are lacking any concept until today, although it has recently been demonstrated to involve the cutting and releasing of membrane discs [40].However, inhibition of NINJ1 oligomerization using a monoclonal antibody has been recently demonstrated to prevent other forms of regulated necrosis, such as necroptosis and pyroptosis from their ultimate execution and at least parts of their immunogenicity [9]. Figure 1 demonstrates our current understanding of the ferroptosis-defining cellular reactions.Based on the definition of ferroptosis introduced in Fig. 1, ferroptosis can be interfered with at various levels.Similarly, all conditions that shift the balance toward a higher ratio of lipid peroxidation to ferroptosis surveillance capacity will decrease the threshold for ferroptosis. PART 2-APPROACHES TO INTERFERE WITH FERROPTOSIS According to the definition introduced in Fig. 
1, ferroptosis may be classified into four subcellular stages.We have chosen to allocate inhibitors of ferroptosis (ferrostatins) into four according classes.Ferroptosis inhibitors (ferrostatins) may be subclassified in at least four classes according to their mechanisms of action.Figure 2 demonstrates how these classes relate to the ferroptotic cellular chemical reaction patterns and Table 1 lists several but not all prominent examples according to their mechanism of action.As one example of an innovative approach, the use of selenium-containing Tat-proteins have claimed to protect (e.g., in a stroke model [41]), but the transition of this selenium containing Tat SelPep to other ferroptosis models remains to be demonstrated [41], so we did not add this approach as an individual class of ferroptosis inhibitors here.Other advanced therapies aim at harnessing small non-coding RNAs [42], particularly for cardiovascular diseases, but RNA interference may even sensitize to ferroptosis [43]. Class 1 ferrostatins: iron chelators The requirement of iron is part of the definition of ferroptosis.Erastin was already demonstrated in 2003 to induce rapid cell death in cancer cells in cell culture experiments [44].Iron chelation using 100 μM deferoxamine (DFO) was demonstrated to protect HT1080 cells from erastin-induced cell death in the first figure of the first ever publication ferroptosis [45].In the same set of experiments, addition of exogenous free iron potentiated erastininduced cell death while addition of other divalent metals such as Cu 2+ , Mn 2+ , Ni 2+ , and Co 2+ did not [45].However, long before the term ferroptosis was coined, a body of literature on the role of iron and iron chelation in cell death had accumulated as recently reviewed [46].Importantly, the Fenton chemistry might occur in highly compartmentalized areas of the cell, and iron chelators, chemically, might not reach such areas (e.g., the complex I and III in mitochondria), and therefore might fail to inhibit ferroptosis even though it is mediated by Fenton reactions.While some literature exists that demonstrates iron chelation to protect in disease models, e.g., of traumatic brain injury [47], clinical trials using iron chelators failed to provide clear protective effects e.g., in acute kidney injury [48,49], potentially because of the specific pharmacodynamics of DFO and derivatives.It will be challenging to design tissue-penetrant iron chelators to target the very upstream reaction of ferroptosis without significant side effect. 
Class 2 ferrostatins: radical trapping agents that function in the cytosolic compartment Radicals as a result of Fenton reactions may be scavenged by nonlipophilic radical trapping antioxidants.One endogenous example of this class may be hydropersulfide [50].Additionally, I3P is generated by the secreted amino acid oxidase interleukin-4-induced-1 (IL4i1) the activity of which therefore creates an anti-ferroptosis environment [51,52].Pharmacologically, the compound UAMC-3203 [53] was demonstrated to have a favorable PK profile because it is less lipophilic, and therefore appears to function as a particularly potent ferrostatin.Along similar lines, the tissue PK profile of the dual inhibitor of necroptosis and ferroptosis Nec-1f suggests hydrophilic properties while functioning as a low-potency ferrostatin [28].Several other compounds that have entered clinical routine, such as omeprazole, rifampicin, promethazine, carvedilol and propranolol have been demonstrated to function as ferroptosis inhibitors, while all of these drugs exhibit favorable tissue distribution and therefore might be repurposed as ferrostatins [54].In this context, it is worth mentioning that rifampicin resistance is a paramount problem in the treatment of mycobacteria [55] (see below). Class 3 ferrostatins Lipophilic radical trapping antioxidants.Lipophilic radical trapping antioxidants are by far the best studied class of ferrostatins, and the expanding list of lipophilic RTAs is too long to be listed here.However, ferrostatin-1 (Fer-1) is the first-in-class compound but was ascribed an unfavorable half-life and poor tissue PK [45].Despite these properties, this small molecule, against expectations, protected mice in several preclinical models including the kidney IRI model which may prolong the half-life of Fer-1 by reduction of its renal excretion [27].Liproxstatin-1 (Lip-1) protected GPX4-deficient mice from death by acute renal tubular necrosis and therefore is considered a particularly suitable ferrostatin for in vivo research [34].Vitamin E [56] and Vitamin K [57] are the best studied endogenous representatives of this class of compounds, and at least for Vitamin K, protection from ischemia-reperfusion injury has been reported [57].Therapeutic supplementation of Vitamin E was tested in patients with nonalcoholic steatohepatitis and was associated with significant improvement compared to placebo or pioglitazone, which was also tested in that trial [58].While a role for ferroptosis may be questioned in nonalcoholic steatohepatitis, this trial is valuable as it carefully assessed the potential side effect profile of a 96-week episode of ferroptosis inhibition in humans without major untoward effects reported [58].Vitamin K2 (menaquinone-7) was tested at a dose of 360 μg/day to improve the serum calcification propensity and arterial stiffness in kidney transplant recipients in a single center, randomized, double-blind trial, a 12-week supplementation period, all three severe adverse events were reported to have occurred unrelated to the study medication [59].Metabolites, such as 7-dehydrocholesterol (7-DHC) [18,19] can also function as radical trapping agents that oppose ferroptosis.In the case of 7-Fig. 
2 The classes of ferroptosis inhibitors (ferrostatins).Four stages have been defined that characterize the ferroptotic reaction cascade, all of which potentially can be interfered with.In the initial step, Fenton reactions can be targeted by iron chelators, provided that specific cellular compartments, such as the lysosome or the ER, can be accessed by the compound.The default example of this class is deferoxamine (DFO).As soon as free radicals have formed, radical trapping agents (RTAs) may interfere with these short-lived highly reactive products as long as they are available in direct proximity to the radicals.Lipid peroxidation is competed with by lipophilic radical trapping antioxidants (lipophilic RTAs).Prominent examples of this most commonly studied class of ferrostatins are ferrostatin-1 (Fer-1), liproxstatin-1 (Lip-1), 16-86 etc.The ultimate step of plasma membrane rupture can be interfered with by monoclonal antibodies against ninjurin-1 (NINJ1).This step is not specific for ferroptosis, but was initially demonstrated to be required for necroptosis, pyroptosis and even secondary necrosis following apoptosis. Class 4 ferrostatins: inhibitors of the cataclysmic burst of the plasma membrane Until today, only NINJ1-oligomerization inhibitors can be allocated to this class [10].However, it remains to be determined if this approach would inhibit ferroptotic cell death alongside with necroptosis and pyroptosis, but given the overlapping final steps of these pathways before plasma membrane rupture [60], we consider it likely that NINJ1-interference functions as ferroptosis inhibition.Apart from NINJ1, it is known that glycine protects isolated kidney tubules from LDH release [61,62], and that this effect at least partially contains a ferroptotic component [63].At least partially, glycine suppresses necrotic cell death by inhibition of NINJ1 membrane clustering [64].Since glycine does not affect lipid peroxidation directly, it might function in a NINJ1-related way, potentially preventing plasma membrane discs from being shed off the membrane, relating to a recently published concept [40].Finally, glycine protects kidney tubules from plasma membrane rupture, but the energetic function of these tubular cells is severely compromised, indicating that the cells may be "metabolically dead" while the membrane remains intact [62]. Conditions that sensitize to ferroptosis It is beyond the scope of this review to mention the long list of ferroptosis inducers (FINs) that are developed mainly with the intention to drive cancers into ferroptosis.Some commonly used drugs, however, sensitize to ferroptosis by affecting the endogenous surveillance systems of ferroptosis (compare Fig. 1).As an example, dipeptidase-1 (DPEP-1) activity decreases the intracellular GSH pool, thereby decreasing the activity of GPX4. 
Commonly used steroids, such as dexamethasone and cortisol, through the glucocorticoid receptor, increase DPEP-1 expression and thereby sensitize to ferroptosis and deteriorate ischemiareperfusion injury [17,65].Another recently discovered mechanism of sensitization to ferroptosis involves the emerging treatment with siRNAs.While the approach offers great opportunities to directly target specific proteins by in vivo post-transcriptional gene silencing [66], siRNAs, just like viral RNAs, can be sensed by the mitochondrial protein MAVS [67] and functionally sensitize to ferroptosis independent of the knockdown of the target protein [43].Most likely by yet another independent mechanism, drugs that induce cell cycle arrest sensitize to ferroptosis and therefore my contribute to its success in tumor therapy [68].With a perspective to cardiovascular diseases, an oxygen enriched environment, such as it occurs during the postnatal phase, itself induces a cell cycle arrest [69] and thereby may contribute to sensitizing cardiomyocytes to ferroptosis.Finally, iron-selective prodrugs can activate ferroptosis [70], and iron addition of cancers and persister cells in particular [71], can be interfered with in many pharmacological ways and defines a therapeutic approach [72]. PART 3-METABOLIC AND CARDIOVASCULAR DISEASES DRIVEN BY FERROPTOSIS The oxygen burst that occurs at the birth of vertebrates creates an environment of hyperoxia in cardiomyocytes which results in cell cycle arrest [69].It is currently unclear to which extent this potentially priming metabolic event contributes to the outstanding sensitivity of the heart to ferroptosis [73] in diseases such as myocardial infarction [74] and cardiomyopathy [75,76].However, the cardiovascular system and its complications emerged as a prime target for treatments with ferrostatins.This also involves the common complications of atherosclerosis, many of which share the common pathophysiological principle of ischemia-reperfusion injury (IRI).Along these lines, kidney IRI has become a classical in vivo setting to study ferroptosis inhibitors [22,[77][78][79], but liver IRI [34,57,[80][81][82][83], stroke models [84][85][86][87] and myocardial infarction and heart failure models [23,[88][89][90][91][92][93][94][95][96] have been demonstrated to involve ferroptosis and can be improved by ferroptosis inhibition.Consequences of cardiovascular disease-induced necrosis may affect the cardiovascular system itself, as exemplified by cardiac arrythmias.While indeed one study indicated that ferroptosis inhibition may reduce the frequency of atrial fibrillation [97], this topic needs to be studied in much more detail.Most of the existing literature in this field that we will review in the following paragraphs, however, focused on atherosclerosis and patients at risk for cardiovascular complications, such as individuals suffering from diabetes mellitus and/or chronic hemodialysis treatment.The major risk factor for cardiovascular complications, besides cigarette smoking and uncontrolled elevated blood pressure, is diabetes mellitus.As illustrated in Fig. 3, the pathophysiology of disease progression during type 1 diabetes mellitus (T1DM) involves ferroptosis at several different stages.First, pancreatic beta cells, like other hormone producing cells [98,99], are known to be extraordinarily sensitive ferroptosis, potentially further driven by viral infections [100][101][102][103]. 
Second, atherosclerosis as a major hallmark of diabetic organ complications, involves a necrotic plaque formation the origin of which may comprise of ferroptotic cell death [104][105][106][107], potentially driven by cholesterol crystals.Finally, all mentioned ferroptosis-driven IRI complications (Fig. 3c) apply to the classical cardiovascular end points of diabetic patients.T1DM patients are commonly subjected to combined pancreas-kidney transplantation during the process of which ferroptosis and IRI can occur again (see below).Even though the specific literature on ferroptosis in cardiovascular diseases in the setting of diabetes mellitus is limited, some evidence suggests that endoplasmatic reticulum stress and associated ferroptosis are particularly important in myocardial IRI [108]. Prospective cohort studies have investigated cardiovascular diseases and iron uptake in the patient´s diets since the HPFS [109] and NHANES-I [110] studies in 1994.The results of these observational trials have indicated that individuals with relatively high heme iron intake exhibit an increased cardiovascular risk, most prominently represented by increased hazard ratios for coronary heart disease [111].As it is beyond the scope of this article, the interested reader may be referred to recent reviews on iron uptake in cardiovascular disease [112].We argue that given the potential deterioration of ferroptosis and additional preclinical experimental data on iron toxicity [53], additional iron supplementation should be used with caution in patients with high risk of cardiovascular complications.This appears to be particularly important for chronic dialysis patients who have lost every trace of renal function, including the production of erythropoietin as the cause of renal anemia.Guidelines for these patients still recommend iron supplementation [113], even though it is well known that cardiovascular complications in dialysis patients are the leading cause of death [114][115][116].Given the importance of cardiovascular complications for this patient cohort, we suggest that intravenous iron supplementation must be tested in longitudinal prospective randomized controlled clinical trials designed for cardiovascular complications as the primary end point.In our opinion, without such clinical trials, supplementation of iron to supranormal levels cannot be justified in dialysis patients as long as translational scientific data on ferroptosis are taken into consideration. 
Finally, a condition associated with cardiovascular diseases is chronic kidney disease (CKD).It is beyond the scope of this review to list details of CKD pathophysiology.However, most kidney researchers and nephrologists have accepted the general model of acute kidney injury (AKI)-to-CKD transition, the central hypothesis of which interprets CKD progression as repeatedly occurring episodes of AKI which lead to acute tubular necrosis and nephron loss [117].While it is clear that AKI is mediated by ferroptosis in many scenarios, CKD is commonly associated with Vitamin K deficiency [118] which might further sensitize CKD patients to additional episodes of AKI, thereby driving a vicious circle.Outside the ferroptosis research field, exhaustion of Vitamin K is mostly discussed as a shortage of Vitamin K dependent protein (CKDP) expression that contributes to vascular calcification in CKD patients.These findings are based on the "Rotterdam Study" that demonstrated high menaquinone intake in the diet to be associated with reduced risk of coronary heart disease [119,120].In conclusion, all these studies point to a superiority upon ferroptosis inhibition in cardiovascular diseases, and clinical trials with a clearer focus on ferroptosis rather than general vascular outcomes are indicated to assess the clinical applicability of antiferroptosis agents.In the following section, we will discuss potential clinical trials to address this question. PART 4-CONSIDERATIONS ON POTENTIAL CLINICAL TRIAL DESIGNS FOR FIRST-IN-CLASS FERROPTOSIS INHIBITORS As outlined in the previous sections, many clinical conditions may benefit from treatment with ferrostatins.However, clinical trial design may be limited because of the lack of a specific biomarker for ferroptosis in tissues.To consider possible clinical trials despite these circumstances, we recommend to consider the following examples: The ideal clinically emerging scenario requires a situation in which ferroptosis can be predicted.Solid organ transplantation exhibits one such example (Fig. 4a).This situation offers two possibilities for the application of ferrostatin.First, the brain-dead donor could be treated.Second, the graft, once removed from the donor and perfused in an isolated perfusion device, could be applied with ferrostatins.In both cases, the organ recipient would need to be investigated most carefully for potential side effects.As mentioned above, kidney-transplant recipient patients were already treated with supplementation of Vitamin K, but this was later after transplantation with an entirely different question on progression of calcification [59].Of course, a stably kidney transplanted patient regularly is on an entirely different and much less aggressive immunosuppressive medication compared to a freshly transplanted individual.And even though the first 3 months following solid organ transplantation may appear arguably different compared with the later life of a successfully transplanted patient, that trial, however, included a kidney transplant cohort without reporting medication-specific side effects [59].This trial included patients with an estimated glomerular filtration rate (eGFR) as low as 20 ml/min/1.73m².This also demonstrates that it may be considered safe to treat patients suffering from acute and chronic kidney injury. Another example of iatrogenic ferroptosis induction is ischemiareperfusion injury as a consequence of cardiac surgery (Fig. 4b). Fig. 
3 The role of ferroptosis in the pathophysiology of diabetes mellitus.a Insulin-producing pancreatic beta cells are highly sensitive to ferroptosis, and their loss is considered the origin of type 1 diabetes mellitus (T1DM), a classical example of a metabolic disease.Diabetes mellitus, not restricted to T1DM, is frequently associated with cardiovascular complications many of which originate from progressive atherosclerotic plaque formation.b Cholesterol crystals and cells of both the innate and the adaptive immune systems are involved in atherosclerotic plaque formation, and ferroptosis may be amongst the many pathways that contribute to necrotic debris formation in these plaques.c Upon atherosclerotic plaque rupture, commonly observed in patients suffering from metabolic syndrome which includes diabetes mellitus, myocardial infarction, stroke and other disorders associated with a perfusion-deficit or ischemia-reperfusion injury may occur.The necrosis observed in such tissues, particularly its cell death propagation, is known to be driven by ferroptosis.Treatment with various classes of ferrostatins was demonstrated to protect end organ damage in respective experimental models. Upon clinical trial conditions, ferrostatins can be applied before the onset of surgery which precedes the onset of ferroptosis upon reperfusion by as many hours as the surgical procedure may take.As many cardiac patients exhibit compromised renal function, the inclusion of patients with an eGFR or 20 ml/min/1.73m² in a previous study may also be helpful [59] in the design for this trial, as the common exclusion criterion of an eGFR >30 ml/min/1.73m² should not apply.This allows to employ the eGFR slope, e.g., assessed over an episode of 6 weeks, alongside with the diagnosis of end stage renal disease (ESRD) and the need for dialysis as secondary and primary endpoints, respectively. Acute kidney injury is amongst the best studied conditions in ferroptosis research.However, AKI on intensive care units is pathophysiologically different from experimental IRI [121] in its nature as it represents a progressive disease in which nephron by nephron may undergo synchronized regulated necrosis by ferroptosis over a period of several days or weeks [122].It is an option to treat freshly resuscitated individuals with ferrostatins and enroll them into a clinical trial as soon as they enter the emergency room on an ICU (Fig. 4c).Ferrostatins could be applied in the potentially intubated patients intravenously for a defined period of days (e.g., 5 days) until blood pressure can be sufficiently controlled the situation of cardiovascular shock can be overcome. Primary and secondary endpoints could be defined as composite of death by any cause and requirement for renal replacement therapy (RRT).Alternatively, if no RRT is required, simple serum concentrations of urea and creatinine alongside the estimated glomerular filtration rate (eGFR) could serve as readout systems e.g., 7 or 10 days after discharge from the ICU and 6 months following the enrollment into the trial. Finally, ferroptosis induction is a pathogenic factor to bacteria [123][124][125].One of the most prominent examples causing global disease burden is mycobacterium tuberculosis [55].During granuloma formation and tissue necrosis upon tuberculosis infection of the lungs, ferrostatins might oppose a bacterial virulence factor [125].In such conditions, ferrostatins would have to be applied for several weeks to potentially unfold their beneficial effects (Fig. 
4d).However, upon severe infection, this condition requires weeks of antibiotic treatment in the hospital in almost all cases, since only intravenous antibiotics are available.During such conditions, the enrollment of critically infected patients appears manageable to test the contribution of ferroptotic tissue damage to the overall disease progression. OUTLOOK Advancing therapies to create novel first-in-class drugs for clinical routine requires sequences of mechanistic basic research, biomedical research, translational preclinical research and stateof-the-art drug design.It further requires the definition of clear endpoint studies once the disease patterns are sufficiently understood, and it involves the careful assessment of side effect profiles, especially when novel therapeutic routes are tested in humans.If the clinical readout systems are unequivocally defined, does advancing therapies require a specific biomarker for a sematic term such as "ferroptosis", or is it sufficient to improve the outcome directly?In our opinion, it is the latter. For decades, antioxidant treatments have been considered without a clear endpoint to test."Non-specificity" was a principle of such approaches.Amongst stakeholders in the pharmaceutical industry, this has supported the preconception that radical trapping antioxidants (lipophilic or not), in contrast to kinase inhibitors or monoclonal antibodies, cannot be modern drugs and would be too "unconventional".A similar preconception was associated with mRNA-based vaccines until the pandemic thought us differently.In our opinion, we should let the ongoing cardiovascular pandemic stimulate our courage to test "unconventional" approaches.Along the same line, the ongoing M. tuberculosis pandemic is another paramount reason to take courage for advanced anti-ferroptosis treatments. Fig. 1 Fig.1The definition of ferroptosis.By definition, ferroptosis requires an iron-catalyzed reaction with oxygen (Fenton reaction) which is the origin of the ferroptosis reaction cascade.In an intermediate step, reactive oxygen radicals, such as H 2 O 2 and higher order radicals form as direct and indirect consequences of Fenton reactions.In most cells, thioredoxin (TRX) scavenges such radicals and is immediately regenerated by the potent selenoprotein thioredoxin reductase (TRXRD1) in an NAD(P)H consuming reaction.If such reactive oxygen radicals cannot be controlled, lipids in the plasma membrane (and other intracellular membranes) become peroxidized (Lipid peroxidation).Peroxyl versions of plasma membrane lipids are converted to alcohols by the classical ferroptosis surveillance systems, such as glutathione peroxidase 4 (GPX4) and ferroptosis suppressor protein 1 (FSP1) and others.Lipophilic radical trapping agents compete with lipids for peroxidation, thereby shifting the balance toward lower concentrations of lipid peroxides.Through entirely unknown mechanisms, and potentially involving several undefined intermediate steps, plasma membrane lipid peroxidation results in the oligomerization of pore forming ninjurin-1 (NINJ1) molecules that are required for the cataclysmic burst of the plasma membrane.The rupture of the plasma membrane defines ferroptosis as a necrotic event. Fig. 4 Fig. 
4 Manifest clinical trials for ferroptosis inhibitors.a Ferroptosis inhibition employing a kidney perfusion device upon solid organ transplantation.b Ferroptosis inhibitor application before cardiac surgery.c Ferroptosis inhibition as a strategy to prevent acute kidney injury (AKI) and nephron loss in intensive care unit patients to improve the AKI-to-chronic kidney disease (CKD) progression.d Ferroptosis inhibition as an add on treatment option to anti-tuberculosis antibiotic therapy in critically ill patients may improve respiratory outcomes and limit long-term complications following tuberculosis infection. Table 1 . Prominent but incomplete members of the four classes of ferroptosis inhibitors.
2024-07-29T15:05:51.072Z
2024-07-27T00:00:00.000
{ "year": 2024, "sha1": "b456f945f70d62d9dbf2a4955290fa11f2d4c1a0", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6579a5cf7691b9d8cb8aa81b074716e9e2e029b6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2269562
pes2o/s2orc
v3-fos-license
Emerging understanding of the \Delta I = 1/2 Rule from Lattice QCD There has been much speculation as to the origin of the \Delta I = 1/2 rule (Re A_0/Re A_2 \simeq 22.5). We find that the two dominant contributions to the \Delta I=3/2, K \to \pi \pi{} correlation functions have opposite signs leading to a significant cancellation. This partial cancellation occurs in our computation of Re A_2 with physical quark masses and kinematics (where we reproduce the experimental value of A_2) and also for heavier pions at threshold. For Re A_0, although we do not have results at physical kinematics, we do have results for pions at zero-momentum with m_\pi{} \simeq 420 MeV (Re A_0/Re A_2=9.1(2.1)) and m_\pi{} \simeq 330 MeV (Re A_0/Re A_2=12.0(1.7)). The contributions which partially cancel in Re A_2 are also the largest ones in Re A_0, but now they have the same sign and so enhance this amplitude. The emerging explanation of the \Delta I=1/2 rule is a combination of the perturbative running to scales of O(2 GeV), a relative suppression of Re A_2 through the cancellation of the two dominant contributions and the corresponding enhancement of Re A_0. QCD and EWP penguin operators make only very small contributions at such scales. Introduction The "∆I = 1/2 rule" remains one of the longest-standing puzzles in particle physics.It refers to the surprising feature that in K → ππ decays the final state is about 450 times more likely to have total isospin I=0 than I=2.In terms of the (predominantly real) K → ππ amplitudes A 0 and A 2 , where the suffix denotes I, this corresponds to ReA 0 /ReA 2 22.5.Perturbative running from the electroweak scale to about 1.5 -2 GeV contributes a factor of approximately 2 to this ratio [1,2]; the remaining factor of about 10 should come from non-perturbative QCD or, just possibly, from new physics.Lattice QCD provides the opportunity for the non-perturbative evaluation of A 0 and A 2 , although it is only very recently that such direct K → ππ calculations have become feasible.In this letter we summarise the emerging explanation of the ∆I = 1/2 rule from computations of A 0 and A 2 by the RBC-UKQCD collaboration. The first results from direct simulations of a kaon decaying into two pions were presented in [3][4][5].The determination of A 0 , where the two pions have vacuum quantum numbers, is particularly challenging and so far it has not been calculated with physical masses and momenta.We are striving to overcome technical issues such as the efficient evaluation of disconnected diagrams and the projection of the physical state through the use of Gparity boundary conditions [6][7][8][9] in order to evaluate A 0 at physical kinematics in the near future.In the meantime we have evaluated A 0 and A 2 for pions with masses of approximately 420 MeV [3] and 330 MeV [10] at thresh-old, i.e. with the pions at rest.For these unphysical masses we do find a significant enhancement of the ratio ReA 0 /ReA 2 , albeit a smaller one than 22.5 (see the first two rows of Table I).While investigating the origin of this enhancement we found a surprising cancellation in the evaluation of ReA 2 , which significantly increases the ratio ReA 0 /ReA 2 .This suppression of ReA 2 is the main result presented here. 
We have also evaluated A 2 with physical masses and momenta, obtaining a result for ReA 2 which agrees with the physical value and determining ImA 2 for the first time [4,5] (see the third row of Table 1).In the evaluation of ReA 2 at physical kinematics there is a similar cancellation; indeed it is even more pronounced than at the unphysical masses in the first two rows of Tab.I. In the next section we summarize the simulations we have performed, highlighting features of immediate relevance for the ∆I = 1/2 rule and referring to earlier publications for other details.We then explain the partial cancellation of the two contributions to ReA 2 , which contradicts naïve expectations from the factorization (vacuum insertion) hypothesis.We also show that these two contributions have the same sign in ReA 0 .We conclude by explaining how these features combine to provide an emerging understanding of the ∆I = 1/2 rule.Of course a full quantitative explanation will require a calculation of ReA 0 at physical kinematics which is underway. Calculation of the Decay Amplitudes Our evidence is based on calculations from three Domain Wall Fermion (DWF) ensembles with 2+1 sea-arXiv:1212.1474v2[hep-lat] 21 May 2013 TABLE I: Summary of simulation parameters and results obtained on three DWF ensembles.The errors with the Iwasaki action are statistical only, the second error for ReA 2 at physical kinematics from the IDSDR simulation is systematic and is dominated by an estimated 15% discretization uncertainty as explained in [5]. quark flavours (see Tab. I).Papers [4,5] describe a complete calculation of A 2 on a 32 3 spacial lattice using the IDSDR (Iwasaki + Dislocation Suppressing Determinant Ratio) gauge action [11] for (almost) physical pion and kaon masses and realistic kinematics.The ensemble was generated at a single lattice spacing a (a −1 1.4 GeV) chosen so that the volume is sufficiently large to accommodate the propagation of physical pions.In [3] a complete calculation of both A 0 and A 2 was carried out with the Iwasaki gauge action at a −1 1.7 GeV for m π 422 MeV and m K 737, 878 and 1117 MeV (here we present results for m K 878 MeV which corresponds to almost energy-conserving decays).Although the calculation was performed at threshold, this was the first time a signal for ReA 0 had been obtained in the direct evaluation of the K → ππ matrix elements.A similar threshold calculation was presented in [10] on a larger volume (24 3 ) with m π = 329 MeV.The increased time extent of this lattice suppresses "around-the-world" effects in which one of the pions from the sink propagates in the forward time direction, crossing the periodic boundary and reaching the weak operator with the kaon.The calculation also used two-pion sources in which the single-pion wall sources are separated in time by a small number of time slices δ (the results presented here are for δ = 4).We find that this suppresses the (unphysical) vacuum contributions in the I = 0 channel, significantly reducing the noise.In this way ReA 0 was resolved using only 138 configurations, compared to 800 in [3].With the actions used here, lattice artefacts scale parametrically as O(a 2 ), although at present we are not in a position to take the continuum limit. 
The amplitudes A 0 and A 2 can be expressed in terms of the "master formula" are the matrix elements calculated on the lattice.They are determined by fitting three-point correlation functions composed of a kaon source at t = 0, a two-pion sink at t = ∆, and one of the operators Q lat i in the weak Hamiltonian inserted at all times 0 < t < ∆.We fit the correlation functions C I,i (∆, t), (2) for 0 t ∆, using a one parameter exponential fit to determine the matrix elements M ∆I,lat i .E (ππ) I is the energy of the two-pion channel with isospin I.All these correlation functions can be expressed in terms of the 48 contractions enumerated in Section IV of [3] and labelled 1 through 48 .The contractions are functions of ∆ and t, but we leave this dependence implicit, writing for example The renormalization factors Z lat→MS ij provide the connection between the bare lattice operators and those renormalized in the MS-NDR scheme at the scale µ, The operators Q i on the left of (3) correspond to the conventional 10-operator "physical" basis, which is overcomplete (see e.g.[12]).When calculating the renormalization factors, it is convenient to work in an equivalent "chiral" basis of 7 linearly independent operators Q j with definite SU (3) L × SU (3) R transformation properties (see eqs.( 172)-(175) in [12]).z i (µ) + τ y i (µ) are Wilson coefficient functions.F I is the Lellouch-Lüscher factor relating the finite-volume Euclidean-space matrix element to the physical decay amplitude [13]. Evaluation of ReA 2 : A 2 receives contributions from the Electroweak Penguin (EWP) operators Q 7 and Q 8 as well as a single operator ) where the superscript 3/2 denotes ∆I and the subscript (27, 1) denotes how the operator transforms under SU (3) L × SU (3) R chiral symmetry.i, j are color labels and the spinor indices are contracted within each pair of parentheses.The subscript L denotes left, so that e.g. 1) .From all our simulations we confirm that the contribution from the EWP operators to ReA 2 is about 1%; e.g. for physical kinematics FIG. 1: The two contractions contributing to ReA 2 .They are distinguished by the color summation (i, j denote color).s denotes the strange quark and L that the currents are left-handed. we find ReA 2 = (1.381±0.046±0.258) 10 −8 GeV to which the EWP operators contribute −0.0171 10 −8 GeV [4,5] (the physical value is ReA 2 = 1.479(4)10 −8 GeV).We therefore neglect the EWP operators in the following discussion.Chiral symmetry implies that Q 3/2 (27,1) does not mix with the EWP operators so that ReA 2 is proportional to its lattice matrix element; the constant of proportionality is the product of the Wilson coefficient, the renormalization constant, finite-volume effects and kinematical factors (see [5] for a detailed discussion, including an explicit demonstration that the mixing is indeed negligible in the DWF simulation). Fierz transformations allow the K → ππ correlation function of Q 3/2 (27,1) to be reduced to the sum of the two contractions illustrated in Fig. 1, labeled by 1 and 2 .The two contractions are identical except for the way that the color indices are summed.A 2 is proportional to the matrix element extracted from the sum 1 + 2 .The main message of this letter is our observation from all three simulations that 1 and 2 have opposite signs and are comparable in size.This is illustrated in Fig. 
2 for the results at physical kinematics from [4,5], where we plot 1 , -2 and 1 + 2 as functions of t.We extract A 2 by fitting 1 + 2 in the interval t ∈ [5,19] where there is a significant cancellation between the two terms.A similar, although not quite so pronounced cancellation occurs at threshold for physical masses and for the heavier masses studied in [3,10], see Fig. 3 for example. We stress that it is only the correlation function 1 + 2 which has a time behaviour corresponding to E (ππ)2 .Because the calculation is performed in a finite-volume E (ππ)2 = E (ππ)0 and 1 and 2 individually have an isospin 0 component.If E (ππ)2 = m K then 1 + 2 is independent of t away from the kaon and two-pion sources, and this is what we observe, particularly in Fig. 2 where the energies are matched most precisely. It has been argued that the factorisation hypothesis [14] works reasonably well in reproducing the experimental value of A 2 (see e.g.Sec.VIII-4 in [15]).In this approach, the gluonic interactions between the quarks combining into different pions are neglected and A 2 is related to the decay constant f π and the K 3 form factor close to zero momentum transfer.On the basis of color counting, one might therefore expect that 2 1/3 1 , whereas, for physical kinematics, we find 2 −0.7 1 and that nevertheless 1 + 2 leads to the correct result for A 2 .Thus the expectation based on the factorisation hypothesis proves to be unreliable here. Following the discovery that 1 and 2 have opposite signs we examined separately the two contributions to the matrix element K0 |(sd) L (sd) L |K 0 which contains the non-perturbative QCD effects in neutral kaon mixing [11].The two contributions correspond to Wick contractions in which the two quark fields in the K 0 interpolating operator are contracted i) with fields from the same current in (sd) L (sd) L and ii) with one field from each of the two currents.Color counting and the vacuum inser-tion hypothesis suggest that the two contributions come in the ratio 1:1/3, whereas we find that in QCD they have the opposite sign.This had been noticed earlier; see e.g.[16] and references therein. We postpone a discussion of the implications of these results to the ∆I = 1/2 rule until the next section, but we believe that the partial cancellation observed in the evaluation of A 2 is a significant component. Evaluation of ReA 0 : The evaluation of A 0 at physical kinematics has not yet been completed.The results presented here are obtained at threshold, with the two pions in their zero-momentum ground state with each pion at rest up to finite-volume effects.Even at threshold we have had to overcome many theoretical and technical problems, including the evaluation of the 48 contractions contributing to the correlation functions, the renormalization of the operators in the effective Hamiltonian, the subtraction of power divergences and the evaluation of the finite-volume corrections.The threshold calculations do not require however, the isolation of an excited state.The pions in a physical decay each have a non-zero momentum in the center-of-mass frame, which corresponds to an state in lattice calculations.the poor statistical signals after the subtraction of power divergences and the evaluation of disconnected diagrams, the evaluation of A 0 at physical kinematics is currently impracticable with standard techniques and is the main motivation for our development of G-parity boundary conditions [6][7][8][9]. 
(5) While these results differ significantly from the observed value of 22.5, because the calculations are not performed at physical kinematics, there is nevertheless already a significant enhancement in the ratio and it is interesting to understand its origin.In Tab.II we present the contributions to ReA 0 from each of the lattice operators in the 24 3 simulation with a −1 = 1.73(3)GeV and from each MS-NDR operator at renormalization scale 2.15 GeV.In both cases, the dominant contribution comes from the current-current operators Q 2 . The dominant contribution from the lattice operator Q 2 to the ∆I = 1/2 correlation function is proportional to the contractions 2• 1 − 2 and corresponds to type1 diagrams in the language of [3] (see Fig. 3 in [3]).In Fig. 4 we show the total contribution of Q 2 to the correlation function, as well as the total connected contribution and that of type1 diagrams given by i √ 3 {2• 1 − 2 }.The errors on the total contribution are dominated by the disconnected diagrams.The observation that 1 and 2 have opposite signs leads to an enhancement between the two terms rather than the suppression in the factorization approximation 2 = 1 3 1 .Similarly, in the case of Q 1 , the type1 combination i √ 3 {2• 2 − 1 } is dominant.In this case both the correlation function and the Wilson coefficient z 1 (µ) + τ y 1 (µ) are negative, so that the overall contribution adds to that from the correlation function of Q 2 . Finally we note that in our data ReA 2 shows a much stronger mass dependence than ReA 0 , which was also expected in SU(2) chiral perturbation theory [17].We attribute this to the partial cancellation between 1 and 2 in ReA 2 .Our results for ReA 2 and ReA 0 are given in Tab.I. Conclusions From our recent computations of K → ππ decay amplitudes a likely explanation of the ∆I = 1/2 rule is emerging.In particular, we find that in the evaluation of ReA 2 , which is proportional to the sum of two contractions 1 + 2 , there is a significant cancellation between the two terms.The naïve expectation based on the factorization hypothesis suggests that 2 ≈ 1 The evaluation of A 0 at physical kinematics has not yet been performed.Our simulations at threshold with m π = 329 MeV and 422 MeV show that the dominant contributions to A 0 comes from the current-current operators, with only small corrections from the penguin operators.This is true whether we express the results in terms of the bare lattice operators at a −1 = 1.73 GeV or the MS-NDR renormalized operators at µ = 2.15 GeV (see Tab. II).Although 48 contractions contribute to the I = 0 correlation function, in our simulations the largest contributions again come from contractions 1 and 2 with relative signs which enhance ReA 0 . References to estimates of the amplitudes using analytic or model approximations are presented in the reviews [18,19].We note that a suppression of ReA 2 and an enhancement of ReA 0 was found in [20] using the 1/N expansion with a particular ansatz for matching the short and long-distance factors at scales 0.6-0.8GeV. 
The results presented above indicate that ReA 2 is very sensitive to the choice of quark masses and momenta; a sensitivity we attribute to the partial cancellation of the two contributing contractions.On the other hand, there is no such cancellation in ReA 0 and indeed the results depend much less on the masses and the values we find are already close to the experimental result.Of course before we can claim to understand the ∆I = 1/2 rule quantitatively, we need to reproduce ReA 0 /ReA 2 =22.5 at physical quark masses and kinematics and in the continuum limit and we are currently undertaking this challenge.Nevertheless, from the results and discussion of this paper it appears that, in addition to the well known perturbative enhancement of ReA 0 /ReA 2 , the explanation is a combination of a significant relative suppression of ReA 2 as well as some enhancement of ReA 0 with penguin operators contributing very little. 2(0.5) 10 −7 3.2(0.5)10 −7 TABLE II: Contributions from each operator ReA 0 for m K = 662 MeV and m π = 329 MeV.The second column contains the contributions from the 7 linearly independent lattice operators with 1/a = 1.73(3)GeV and the third column those in the 10-operator basis in the MS-NDR scheme at µ = 2.15 GeV.Numbers in parentheses represent the statistical errors.
2013-05-21T09:03:55.000Z
2012-12-06T00:00:00.000
{ "year": 2012, "sha1": "05de2985af1064a69172e5d929dcf59ca88410c6", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.110.152001", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "05de2985af1064a69172e5d929dcf59ca88410c6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
237342205
pes2o/s2orc
v3-fos-license
Who Is Willing to Get Vaccinated? A Study into the Psychological, Socio-Demographic, and Cultural Determinants of COVID-19 Vaccination Intentions Crucial to the success of the COVID-19 vaccination campaign is the rate of people who adhere to it. This study aimed to investigate the reasons underlying people’s willingness to get vaccinated in a sample of Italian adults, considering the effects of different individual characteristics and psychological variables upon positive vs. negative/hesitant vaccination intentions, as well as subjects’ self-reported motivations for such intentions. An anonymous cross-sectional survey was distributed online in February 2021. The results showed that trust in science, number of vaccinations received in 2019, and belief that COVID-19 is more severe than the common flu, were associated with positive vaccination intentions. “Chance externality” health locus of control showed both direct and indirect effects upon positive vaccination intentions. Anxiety symptoms and participants’ perceived psychological status also showed indirect positive effects. Subjects’ self-reported motivations varied interestingly across positive vs. negative/hesitant intentions. Implications of these findings for identifying effective pro-vaccination messages are discussed in the final section of the paper. Introduction During the last year, scientific laboratories worldwide have worked on an unprecedented timeline [1] to create effective vaccines against the new coronavirus Sars-CoV-2, which is causing the current pandemic [2,3]. In Italy and Europe, 27 December 2020 marked the beginning of the vaccination campaign. This "Vaccine Day" was considered as a symbolic turning point in the management of the state of emergency. Just as vaccines' availability and supplies are crucial to obtain high vaccination rates, however, so is the population's willingness to get vaccinated. For this reason, vaccine hesitancy (i.e., the delay in acceptance or refusal of vaccination despite availability of vaccination services) [4,5] was identified by the World Health Organization as one of the top ten global health threats in 2019. To date, the potential acceptance rates of a generic COVID-19 vaccine as well as the factors influencing acceptance have been investigated in 33 different countries, revealing self-reported acceptance rates that range from almost 90% in China to less than 55% in the Middle East, Russia, Africa, and several European countries [6][7][8][9][10]. Considering that the proportion of the population sufficient to reach herd immunity for COVID-19 is estimated around 70% [11], these data indicate that, in some countries at least, specific interventions are certainly needed to improve vaccine acceptance. Thus, it is crucial and of primary interest for all governments and health institutions to identify factors underlining COVID-19 vaccine hesitancy in order to develop effective strategies to deal with them. Studies investigating COVID-19 vaccine hesitancy so far have focused primarily on the relationship between sociodemographic factors and cultural and personal beliefs, on the one hand, and the willingness to get vaccinated, on the other. 
In particular, positive intentions with respect to the COVID-19 vaccination have been shown to be associated with factors such as being male, married, an older adult, having a higher education, being a healthcare professional, having been vaccinated against influenza in the previous season, as well as perceiving a high risk of COVID-19 infection, trusting information from institutional sources, and believing in vaccinations' efficacy and medical advice more generally [6,9,12,13]. We argue that, in addition to these factors, other variables more closely related to individual psychological characteristics should be taken into account when dealing with vaccine hesitancy, as has been suggested by Rieger [14] and Murphy [15], who found a relation between hesitant/negative vaccination intentions and low altruism, high selfinterest, impulsiveness, and a personality characterized by being more disagreeable, more emotionally unstable, and less conscientious. Building on this, the purpose of the present study was to investigate the effects of different individual characteristics (i.e., sociodemographic status, health condition, and previous decisions about vaccinations), and different psychological variables (such as perceived psychological status, locus of control, and anxiety symptoms) on COVID-19 vaccine acceptance or hesitancy/resistance in a sample of adults in Italy. Moreover, self-reported reasons motivating positive or hesitant/negative vaccination intentions were investigated with the aim of providing insights for public health authorities about what types of incentives or messages are likely to be most effective to increase vaccines community uptake. Sample From 6 January to 28 February 2021, an anonymous cross-sectional survey was developed and distributed online across Italy using the Qualtrics software (Provo, UT, USA). The sampling procedure employed was the so-called "Exponential Non-Discriminative Snowball Sampling" [16]. Participants were reached via various social media platforms (i.e., Facebook, LinkedIn, Twitter, and Instagram) and mailing lists, and they were invited to share the survey with their acquaintances. Each participant had the opportunity to "stop and save" the survey and to continue it later. However, once the survey was completed, the link expired, preventing participants from responding more than one time. Inclusion criteria were: (a) age ≥ 18 years and (b) being a native Italian speaker. A total of 1256 surveys were collected. Of these, 1074 (85.5%) were deemed suitable for the analyses. Of the 182 (14.5%) remaining ones, 166 (91.2%) were excluded because participants completed less than 90% of the survey, and 16 (8.8%) because they did not give their consent to participate (see Figure 1). The study was approved by the ethical committee of the University of Milan and all respondents included in the analysis signed an online informed consent form before completing the survey. Perceived physical and psychological statuses were evaluated on a five-point Likert scale (from 1 = "Poor" to 5 = "Excellent"). Subjective probability of contracting COVID-19 was evaluated through a visual analogue scale ranging from 0 (i.e., "Not likely at all") to 10 (i.e., "Very likely"). Participants' fear of contracting COVID-19; their trust in government, health institutions, and science; as well as their opinions about COVID-19's dangerousness were all evaluated through a five-point Likert scale (from 1 = "Strongly disagree" to 5 = "Strongly agree"). 
Willingness to get the COVID-19 vaccination was assessed through the following question: "As soon as a COVID-19 vaccine becomes available to you, do you intend to get vaccinated?". Respondents could answer "Yes", "No", or "I do not know" and were then asked to explain their responses (open question). Differently from previous studies that asked subjects to explain their answers only in the case of "No" or "I don't know" responses (e.g., Fischer et al., 2020), the present study evaluated subjects' explanations also in the case of "Yes" responses, since the authors were interested in investigating the extent to which the kinds of explanations provided varied across the three different answers. See the Appendix A for the complete version of the Survey and COVID-19 specific questions. Finally, the present study investigated subjects' opinions about whether there are any "culprits" to blame for the pandemic. This was done through the following question: "Do you think there is anyone who can be held responsible for the pandemic?". Subjects who answered "Yes" were then asked to indicate who they did hold responsible. The aim of this question was to identify conspiracist beliefs about the (non-natural-i.e., human, artificial) origins of the virus. Symptoms of Anxiety Assessment: The 7-Item Generalized Anxiety Disorder Questionnaire (GAD-7) All participants were screened for symptoms of anxiety using the GAD-7 [17]. It is a self-report questionnaire involving seven items that adssess the core symptoms of generalized anxiety disorder (GAD) following DSM-IV-TR criteria. Each item score ranges from 0 ("Not at all") to 3 ("Nearly every day"), with 10 used as a cut-off for clinically relevant symptoms of anxiety. Psychometric evaluations of the GAD-7 suggest that it is a reliable and valid measure of GAD symptoms in both the psychiatric [18,19] and in the general population [20] samples. The GAD-7 has demonstrated good psychometric properties [17]. See Appendix A. Health locus of control refers to the belief that health is in one's control (namely, "internal control") or is not in one's control (namely, "external control"). Among adults, external locus of control is associated with negative health outcomes, whereas internal locus of control is associated with favorable outcomes [21]. The MHLCS is a self-report questionnaire that evaluates how a person tends to exhibit an "internal" or "external" health locus of control [22]. Specifically, the MHLCS is made of 18 items scored on a fivepoint Likert scale (1 = "Strongly disagree", 5 = "Strongly agree"). The MHLCS provides three main subscales: (1) "Internality", which reflects how much a person believes that his or her own health depends on his or her choices and behaviors; (2) "Powerful Others Externality", which indicates how much a person believes his or her health depends on other significant people (e.g., family, doctors, partners); (3) "Chance Externality", which indicates how much a person believes his or her health depends on chance or fate. The MHLCS subscales do not have cut-off points, thus the higher the score, the higher the dimension represented by each subscale (i.e., higher points in the "Internality" subscale indicates higher internal health locus of control). In the present study, only the "Chance externality" subscale was used. See Appendix A. 
Willingness to Get COVID-19 Vaccination Motivation: The Categorization Process Subjects' self-reported motivations for their willingness/unwillingness to get the COVID-19 vaccination passed through a three-step categorization process. In the first step, independent categorizations were made by three different experimenters. In the second step, the three sets of categories that thereby emerged were compared, and only those categories that had been identified by at least two experimenters were kept. In the third step, those categories went through the external revision of a fourth experimenter. The categories that resulted from this three-step procedure, together with their frequencies and percentages, are listed in Table 1. Answers containing more than one explanation were sorted into multiple categories (for example, the answer: "(I intend to get the COVID-19 vaccination) because I want to protect myself and the community I live in" was sorted both in the "Self-protection" and in the "Moral/social duty" categories). Willingness to Get COVID-19 Vaccination: Motivations Motivations provided for the willingness/unwillingness to get vaccinated against COVID-19 are reported in Table 1. Positive intention to get vaccinated was motivated by the following five main categories of reasons: (1) a social/moral duty to protect one's community (326 responses); (2) a desire for self-protection (284 responses); (3) a belief in the vac-cine's efficacy (258 responses); (4) a desire to come back to a "normal" (i.e., pre-pandemic) life (127 responses); (5) a general attitude of trust in medical science (126 responses). Negative or hesitant answers, on the other hand, were motivated by five main sorts of reasons, appealing, respectively, to: (1) concerns about the vaccine's safety (22 of the "no" responses; 51 of the "I don't know" responses); (2) concerns about the vaccine's efficacy (10 of the "no" responses; 14 of the "I don't know" responses); (3) skepticism about the vaccine's necessity in relation to the subject's condition (8 of the "no" responses; 7 of the "I don't know" responses); (4) personal health issues that make the vaccine specifically contraindicated for the subject (7 of the "no" responses; 11 of the "I don't know" responses); (5) the existence of effective vaccine alternatives (4 of the "no" responses; 4 of the "I don't know" responses). In the case of "I don't know" responses, a further kind of reason provided was: (6) insufficient and confusing information available on the vaccine's costs and benefits (23 responses). Finally, three respondents reporting a negative intention toward the vaccine explained it by expressing general no-vax attitudes. Frequencies, percentages and representative examples of the responses belonging to each category are reported in Table 1. Statistical Analysis Continuous variables are presented as mean ± standard deviation, and they were compared using a t test for independent samples. Variables not normally distributed are presented as median and interquartile range and were compared with the Wilcoxon rank sum test. Categorical data are reported as frequency and percentage and were compared using an χ2 test or Fisher exact test, as appropriate. Willingness to get a COVID-19 vaccination was defined as 0 (i.e., "Yes, I will get the vaccination") and 1 (i.e., "No, I do not intend to get the vaccination" + "I am not sure about my vaccination intentions"). 
Independent predictors of willingness to get a COVID-19 vaccination were identified via multiple logistic regression analysis with stepwise selection of the variables. The consistency and reliability of the identified subset of predictors were tested by a cross-validation iteration procedure. At each step the dataset was randomly split into two halves. The independent predictors were selected in the first half (training set) and the resulting model was tested for significance in the second half (testing set). The procedure was repeated 200 times with different random splits. The predictor was considered as reliable if it was selected and confirmed at least 75% of the time. The assessment of direct and indirect effects of psychological variables upon the willingness to get the COVID-19 vaccination was made using path analysis. Health locus of control, anxiety symptoms, and perceived psychological status were used as predictors, and those variables that had been resulting significantly from the cross-validation iteration procedure were used as mediators. The path analysis was performed by using the SAS Proc CALIS procedure (SAS Institute Inc., Cary, NC, USA) based on structural equation modelling. The strength of direct and indirect relationships between variables was quantified by standardized β coefficients. p-values below 0.05 were considered as significant and all tests were two-sided. All analyses were performed using SAS statistical package V. 9.13 (SAS Institute, Inc., Cary, NC, USA). Descriptive Statistics All the variables considered in the study are showed in Table 2. The mean ages of both groups (respondents who intended to get vaccinated against COVID-19 as well as those who did not or were in doubt) were close to middle age-with a range that varied from 18 to 88 years old. Male prevalence was significantly higher among those who intended to get vaccinated. Marital status did not differ between the two groups. Almost all participants were Italian (i.e., 1069, 99.6%) and most of them were located in Northern Italy. The majority of participants were employed, and the number of healthcare professionals was significantly higher among those who were willing to get vaccinated. Those who intended to receive the COVID-19 vaccine reported a better physical and psychological state compared to the others. Interestingly, those who reported negative or hesitant vaccination intentions had a higher chance externality health locus of control compared to the others. Respondents who intended to receive the COVID-19 vaccine estimated to have a greater chance of contracting the disease and reported greater fear for themselves, their families, and their friends in relation to that possibility. Furthermore, they also reported a higher trust in government, medical institutions, and science. Is There Anyone Responsible for the COVID-19 Pandemic? Forty-seven percent of subjects expressing negative/hesitant vaccination intentions said that there is someone responsible for the pandemic, whilst only 27.3% of those with positive vaccination intentions did so. 
In response to the follow-up open question about who can be held responsible, two main categories of "culprits" were indicated: (1) culprits responsible for the origin of the virus (generally identified as Chinese scientists who supposedly created it in a lab), and culprits responsible for the spread of the virus (identified either as politicians who did not implement effective preventive measures or with the general public who did not respect social distancing). Among respondents with negative/hesitant vaccination intentions, 73.7% identified the "culprits" as those responsible for the virus origins, while the remaining 26.3% identified the "culprits" as subjects responsible for the virus spread. Among respondents with positive vaccination intentions, 49.5% identified the "culprits" as subjects responsible for the virus origins, and 50.5% as those responsible for the virus spread. As shown in Figure 3, the estimated β coefficients showed that MHLCS (chance externality) had both a direct and an indirect effect on the willingness to be vaccinated. Furthermore, it had a direct effect on: (1) the belief that COVID-19 is more severe than the common flu; (2) trust in health institutions; (3) trust in science (Panel A). Notably, the indirect effect of MHLCS (chance externality) on the willingness to be vaccinated passed through trust in health institutions for 68% and through believing that COVID-19 is more severe than the flu for the remaining 32% (Panel B). Generalized Anxiety Disorder-7 (Symptoms of Anxiety) As reported in Figure 4 (Panel A), the estimated β coefficients showed that GAD-7 had a direct effect upon: (1) the number of vaccinations received in 2019 by the respondent and (2) significant others' willingness to be vaccinated against COVID-19, but not upon willingness to be vaccinated. In turn, both the number of vaccinations received in 2019 and the significant others' willingness to be vaccinated had a direct effect on willingness to be vaccinated. Moreover, we observed an indirect effect of GAD-7 on the willingness to be vaccinated (Panel B). Interestingly, this relationship passed through the number of vaccinations received in 2019 for 50% and significant others' willingness to be vaccinated for the remaining 50%. Perceived Psychological Status As reported in Figure 5 (Panel A), the estimated β coefficients showed that subjects' perceived psychological status had a direct effect upon trust in science, number of vaccinations received in 2019, and significant others' willingness to be vaccinated, but not upon actual willingness to be vaccinated. In turn, the number of vaccinations in 2019 and significant others' willingness to be vaccinated had a direct effect on the actual willingness to be vaccinated. There was an indirect effect of perceived psychological status on subjects' willingness to be vaccinated (Panel B). This effect passed through significant others' willingness to be vaccinated for 57% and the number of vaccinations performed in 2019 for the remaining 43%. Discussion The main aim of the present study was to investigate which variables may influence the decision to get vaccinated against COVID-19. Results of the cross-validation process showed that the number of vaccinations received in 2019 was the strongest predictor associated with positive COVID-19 vaccination intentions. This result is in line with previous studies showing that a positive intention toward receiving the COVID-19 vaccination is strongly associated with a general tendency to get vaccinated [8,9,13]. 
Willingness to get vaccinated against COVID-19 was also associated with trust in science and healthcare institutions, again in line with findings from previous studies [23,24]. The third significant factor predicting positive COVID-19 vaccination intentions was the belief that this new virus is more dangerous than the common flu-as indeed it actually is. This result is particularly important, as it indicates that relevant knowledge about a specific disease (in this case, COVID-19) influences subjects' willingness to be vaccinated against it. This finding also highlights the necessity of clear communication strategies from health and political authorities, especially under pandemic conditions that risks being dominated by confusing or misleading information leading people to engage in risky behavior that can compromise their own others' health [25]. Interestingly, significant others' willingness to get COVID-19 vaccination is associated with subjects' unwillingness to be vaccinated. This finding is quite complex to be explained, but it could be perhaps justified by the fact that the percentage of people who do not intend to get the COVID-19 vaccination is lower than the one of those who want to. Differently from what was observed in previous studies, which found that older, male, married, and employed subjects with a high income were more favorable toward getting a COVID-19 vaccination [6,7,10,13], no significant association was found between these sociodemographic variables and vaccination preferences. However, the present study results may be explained by the type of analyses performed. In fact, other than running multiple logistic regressions, the authors chose a cross-validation iteration procedure, which has a very strict statistic method. This may have influenced the results, showing only those predictors strongly associated with the study main outcome (i.e., the willingness to get COVID-19 vaccination). Self-Reported Reasons to Get the COVID-19 Vaccine The reasons that subjects provided to explain their negative vaccination intentions ("no" responses) turned out to be by and large overlapping with those given to explain hesitant intentions ("I don't know" responses)-thereby suggesting that, in most cases, the hesitation does not amount to a neutral stance but is closer to a negative one. Both in the case of negative and hesitant vaccination intentions, the most commonly cited reason was a concern about the safety of the vaccine (generally due to lack of sufficient testing) followed by reasons mentioning concerns about the vaccine efficacy ("it is still possible to contract COVID-19 after being vaccinated") and about the vaccine necessity ("I don't need the vaccine since I'm not at risk of getting the virus"). In the case of hesitant intentions, another commonly cited reason was the lack of sufficient and clear information on the vaccine's costs and benefits. In cases of positive vaccination intentions, on the other hand, the most commonly cited reason was a social/ethical duty to protect one's community, followed by reasons referring to a desire for self-protection, a belief in the efficacy of the vaccine, a desire to "have one's life back", and a general attitude of trust in science. Importantly, many of these self-reported reasons for positive vs. negative/hesitant vaccination intentions can be seen as the opposite sides of the same coin. 
Concerns about the vaccine's safety are clearly the flipside of a desire for self-protection; and skepticism about the vaccine's efficacy is the flipside of the belief that the vaccine is effective. More generally, the majority of the reasons provided to explain negative and hesitant vaccination intentions presuppose a mistrust in the information from scientific/medical sources, which is the flipside of the trust in science cited by subjects who reported positive vaccination intentions. This suggests that at least some of the basic desires and needs that are at the root of positive and negative/hesitant vaccination intentions are the same, although subjects have radically different perceptions and beliefs about the world and notably about how such desires and needs can be satisfied (for some subjects, the vaccine, and more generally the solutions indicated by medical authorities, are not suitable means to satisfy their desire to be safe and to find effective ways out of the emergency). That said, there were also some distinctive reasons that were cited to explain only the positive vaccination intentions and not the negative/hesitant ones (as well as, vice versa, the only negative/hesitant intentions, and not the positive ones). Most notably, the social/ethical considerations that were the most commonly cited reason with which subjects explained their positive intentions to get vaccinated were never mentioned to explain negative/hesitant intentions. This suggests that subjects with positive vaccination intentions perceive themselves as "ethical"-and their own actions as driven by "pro-social" attitudes-much more than subjects with negative/hesitant vaccination intentions do. Though it is important to note that what is at stake here are indeed self-perceptions: considering that what was collected were self-reported reasons-which might well be influenced by post hoc confabulation ad desirability bias. The findings do not in themselves prove that subjects with positive vaccination intentions are actually driven by ethical and pro-social motivations more than hesitant/no-vaxxers are (also because, of course, insofar as one takes vaccinations to be unsafe and ineffective, refusing them is not unethical from their point of view). Indeed, Rieger (2020) found that when altruistic motivations are triggered in vaccination hesitant/resistant subjects, such motivations do have the potential to influence subjects' decision-making, eventually leading many of them to shift toward more positive vaccination intentions. The Role of Beliefs about Human Responsibilities in the Pandemic Although few participants explicitly mentioned belief in conspiracy theories among the reasons for their negative/hesitant vaccination intentions, previous studies suggest that such beliefs are often associated with (if not directly responsible for) the said intentions. Hornsey et al. found a general connection between conspiratorial thinking and antivaccination attitudes [26], and Salali and Uysal found specific associations between beliefs in the human (rather than natural) origin of Sars-CoV-2 and hesitant attitudes toward COVID-19 vaccines [27]. In line with these results, the present study found that positive answers to the question whether there is anyone who can be held responsible for the pandemic were significantly more common among subjects with negative/hesitant vaccination intentions (47%), than among those with positive vaccination intentions (27%). 
Moreover, even when subjects with positive vaccination intentions said that they thought there were those responsible for the pandemic, often they did not refer to people responsible for the original creation of the virus in a lab, but rather to people responsible for the virus spread (due to failures to adopt adequate preventive behaviors)-which is a rather reasonable view that arguably does not involve any conspiracist claim. Overall, then, the present study solidly replicates previous findings according to which conspiracist attitudes (and in particular, in this case, conspiracist beliefs about the human origin of the virus) are associated with negative or hesitant vaccination intentions. This suggests that effective information campaigns about the nonhuman origins of the Sars-CoV-2 virus might have positive effects on overcoming vaccine hesitancy. The Role of Psychological Variables on COVID-19 Vaccine-Related Decisions The current pandemic is having a significant impact on public mental health. Numerous studies in the last year have pointed out the deleterious effects of the pandemic and the consequent containment measures (i.e., quarantine and social isolation) [28] upon public psychological wellbeing. As has happened in previous epidemics across history [28], an increase in psychological symptoms and disorders in the general population were registered, including, though not limited to, anxiety and depression, stress, feelings of helplessness, anger, and frustration. For this reason, the authors deemed it important to analyze whether and how people's psychological condition affected their attitudes toward the COVID-19 vaccine. Quite surprisingly, neither perceived psychological status nor self-reported anxiety had a direct effect on the willingness to be vaccinated. Nevertheless, respondents who reported more anxiety symptoms (in the previous two weeks) and who did not receive any vaccination in 2019 were less willing to receive the COVID-19 vaccine. On the other hand, subjects who reported a generally better psychological status and got a vaccine in 2019 showed more positive intentions to be vaccinated. It might well be that anxiety amplifies doubts and fears about the COVID-19 vaccine, especially in those people who are not that used to getting vaccinations in general, although further studies are needed to test this hypothesis. Particularly interesting are the results concerning health locus of control. In line with Olagoke et al. [29], higher "chance externality" health locus of control (i.e., assuming that one's health depends on fate or case) was directly and associated with hesitant or negative vaccination intentions. This is arguably explained by the fact that believing that one's health does not depend on one's actions and behaviors is likely to lead one to dismiss COVID-19 vaccination as a useful resource. This result is also in line with another study showing that a higher "chance externality" locus of control was associated with vaccine hesitant/resistant parental attitudes toward child vaccinations [15]. "Chance externality" locus of control had also negative indirect effects on the willingness to get the COVID-19 vaccination, mediated by: (1) trust in science and healthcare institutions, and (2) the belief that COVID-19 can be more severe than the common flu. Again, this might be due to the fact that believing that one's health depends more on fate than on one's own actions will lead one to consider healthcare institutions' advice as irrelevant to deal with COVID-19. 
The negative association between chance externality locus of control and the belief that COVID-19 can be more severe than the common flu, on the other hand, may be related to a poor health literacy, as suggested by a previous study [30]. Finally, the fact that negative/hesitant vaccination intentions were associated both with conspiracist beliefs about the human origin of the pandemic and with a higher "chance externality" health locus of control is in line with many previous studies that highlighted a close connection between conspiratorial thinking and high external locus of control in general [31,32]. Study Limitations This study had the following main limitations. First, despite the attempt to collect data from an Italian geographically distributed sample, the majority of the respondents ended up being from the North of Italy (and in particular from the Lombardy region), so the study sample is not really representative of the entire country and may be biased by the sampling method (i.e., selection bias). Second, most of the respondents turned out to be young and well-educated adults, so the results must be interpreted with caution. Third, the use of a cross-sectional study design makes it hard to establish causality and requires a careful interpretation of results. Fourth, data were collected at the very beginning of the vaccination campaign, so only vaccination intentions and not the actual vaccination uptake could be tested. Moreover, only one vaccine was available at that time, and data about potential lethal side effects related to the COVID-19 vaccine were much more limited than they are now. Fifth, the present study investigated subjects' self-reported reasons to get vaccinated, which might not (fully) match with the actual reasons that motivated subjects' responses (which in turn might not always be introspectively available to them). Finally, the motivations to get COVID-19 vaccination categorization process did not employ automated sentiment analysis software, and future studies should implement this qualitative type of analysis. Conclusions To date, evidence about psychological factors and personal motivations implicated in the willingness or unwillingness to get the COVID-19 vaccination is limited. The present study started filling this gap based on a sample of Italian respondents, paying close attention to psychological variables-including anxiety symptoms, perceived psychological status, and health locus of control-as well as personal motivations underlying vaccination intentions. Moreover, the data collected lay the basis for important suggestions about which interventions might prove helpful to increase vaccination acceptance rates. For one thing, the finding concerning the relation between high "chance externality" health locus of control and negative/hesitant vaccination intentions suggests that interventions aimed at boosting people's sense of control and feelings of empowerment might be effective. Of course, implementing such interventions is not easy. As noted by Van Prooijen (Van Prooijen, 2018), the most promising strategy to boost empowerment is arguably education, the effects of which can only be appreciated in the long-term. 
But other shorter-term strategies to boost empowerment and perceived control are also possible-such as improving transparency in public decision-making, providing detailed information about the decisions that are en-forced upon citizens, as well as giving citizens themselves the opportunity to "voice" their opinions about such decisions, thereby increasing their sense of control about them [33,34]. The findings concerning subjects' self-reported reasons for their vaccination intentions, on the other hand, provide insights about the sorts of messages that might positively influence such intentions. The fact that concerns about the vaccine's safety, efficacy, and necessity (together with a lack of sufficient information about the vaccine itself) were the most commonly cited reasons for negative/hesitant vaccination intentions suggests that pro-vaccination messages should seek to address those three sorts of concerns at length, providing extensive and accessible information about each of them. The idea that messages of that sort might positively influence vaccination intentions gets preliminary support from the study by Rieger (Rieger, 2020), which found that a message highlighting that COVID might be dangerous also for young and healthy people, hence that the vaccination might be necessary also for them if they want to avoid health troubles, led a significant amount of hesitant and resistant subjects to change their mind, expressing more favorable vaccination intentions. In fact, it is worth noting that Rieger's study showed that even more effective in influencing hesitant and resistant subjects were messages highlighting altruistic reasons in favor of the vaccine-i.e., the sort of "ethical/social" reasons that in the present study were mentioned by respondents with positive vaccination intentions. The fact that those very reasons proved effective to influence also hesitant and resistant subjects makes sense in the light of the present study observation that all subjects, irrespectively of their vaccination intentions, seem to be moved by the same basic desires and needs-whilst they differ in the ways in which they believe those needs can be satisfied. So, perceiving oneself as ethical and altruistic is arguably desirable for everyone, but the point is that some subjects do not see the vaccine as a means to behave altruistically. If it were possible to persuade them that it actually is such a means, this might well make them willing to get vaccinated. 16. A series of statements will be proposed to you below. We ask you to mark for each statement its corresponding degree of agreement, on a scale ranging from "strongly disagree" to "strongly agree"
2021-07-30T13:26:12.198Z
2021-07-21T00:00:00.000
{ "year": 2021, "sha1": "d8a6c5a097c4a00193493cda2ae687a0a1c3a212", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402303", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f1e3af360182043f13ef0ff208919bb1f1539d85", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
41834536
pes2o/s2orc
v3-fos-license
Nsw Annual Report Describing Adverse Events following Immunisation, 2010 Nsw Annual Report Describing Adverse Events Aim: This report summarises Australian passive surveillance data for adverse events following immunisation in NSW for 2010. Methods: Analysis of de-identified information on all adverse events following immunisation reported to the Therapeutic Goods Administration. Results: 424 adverse events following immunisa-tion were reported for vaccines administered in 2010; this is 6% lower than 2009 but 24% higher than 2008 and the second highest number since 2003. A total of 274 (65%) adverse events involved seasonal or pandemic influenza vaccines. Reports were predominantly of mild transient events: the most commonly reported reactions were fever, allergic reaction, injection site reaction , malaise and headache. Only 9% of the reported adverse events were serious in nature, including eight reports of febrile convulsions in children following seasonal influenza vaccine. Conclusion: The large number of reports in 2010 is attributable to the high rates of fever and febrile convulsions in children after vaccination with 2010 seasonal trivalent influenza vaccine, as well as pandemic (H1N1) 2009 influenza vaccine. Adverse events following immunisation are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s). They may be caused by a vaccine(s) or may be coincidental. Adverse events may also include conditions that occur following the incorrect handling and/or administration of a vaccine(s). Post-licensure surveillance – the practice of monitoring the safety of a vaccine after it has been licensed and released in the market – is important to detect rare, late onset and unexpected events which are difficult to detect in pre-licensure vaccine trials. This is the second annual report for adverse events following immunisation in New South Wales (NSW). It sum-marises passive surveillance data reported from NSW in 2010 and describes reporting trends over the 11-year period 2000–2010. To assist readers, a glossary of the abbreviations of the vaccines referred to in this report is provided in Box 1. Trends in reported adverse events following immunisation are influenced by changes to vaccines provided through the National Immunisation Program. Changes in previous years have been reported elsewhere. 
1–12 Two recent changes to vaccine funding and availability influenced the adverse events surveillance data presented in this report: (i) For the first time, in 2010 annual vaccination with seasonal trivalent influenza vaccine (containing three influenza strains A/H1N1, A/H3N2 and B) was funded under the National Immunisation Program for people aged 6 months and over with medical risk factors (previously subsidised Pharmaceutical Benefits Scheme) … Adverse events following immunisation are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s).They may be caused by a vaccine(s) or may be coincidental.Adverse events may also include conditions that occur following the incorrect handling and/or administration of a vaccine(s).Post-licensure surveillance -the practice of monitoring the safety of a vaccine after it has been licensed and released in the market -is important to detect rare, late onset and unexpected events which are difficult to detect in pre-licensure vaccine trials.This is the second annual report for adverse events following immunisation in New South Wales (NSW).It summarises passive surveillance data reported from NSW in 2010 and describes reporting trends over the 11-year period 2000-2010.To assist readers, a glossary of the abbreviations of the vaccines referred to in this report is provided in Box 1. Trends in reported adverse events following immunisation are influenced by changes to vaccines provided through the National Immunisation Program.][3][4][5][6][7][8][9][10][11][12] Two recent changes to vaccine funding and availability influenced the adverse events surveillance data presented in this report: (i) For the first time, in 2010 annual vaccination with seasonal trivalent influenza vaccine (containing three influenza strains A/H1N1, A/H3N2 and B) was funded under the National Immunisation Program for people aged 6 months and over with medical risk factors (previously subsidised Pharmaceutical Benefits Scheme) and for all Aboriginal people aged 15 years and over (previously all Aboriginal adults aged 50 years and over and 15-49 years with medical risk factors). 13ii) Pandemic H1N1 (pH1N1) influenza vaccine (Panvax Ò ) was introduced in Australia from 30 September 2009 for people aged 10 years and over, and from December 2009 for children aged 6 months to 10 years.14 Methods Adverse events following immunisation are notifiable to public health units by medical practitioners and hospital CEOs under the NSW Public Health Act 1991.Cases with outstanding information and all serious adverse events are followed up by public health units and NSW Health, and all notifications are forwarded to the Therapeutic Goods Administration.The Therapeutic Goods Administration also receives reports directly from vaccine manufacturers, members of the public and other sources. 15,16During the pH1N1 vaccine program, reports of adverse events following the administration of this vaccine were required to be notified directly to the Therapeutic Goods Administration rather than to a public health unit and reporting by the general public to the Therapeutic Goods Administration was actively promoted.All reports are assessed by the Therapeutic Goods Administration (TGA) using internationally-consistent criteria 17 and entered into the Australian Adverse Drug Reactions System database. 
Adverse events following immunisation data De-identified information on adverse events following immunisation (AEFI) reports from the Australian Adverse Drug Reactions System database was released to the National Centre for Immunisation Research and Surveillance for analysis and reporting.AEFI records contained in the Australian Adverse Drug Reactions System database were eligible for inclusion in the analysis if: a vaccine was recorded as 'suspected' of involvement in the reported adverse event; the vaccination occurred between 1 January 2000 and 31 December 2010; and the residential address of the individual was recorded as NSW.If the vaccination date was not recorded the date of onset of symptoms or signs was taken as the date of vaccination. The term 'AEFI record' is used throughout this report because a single adverse event notification to the TGA can generate more than one record in the Australian Adverse Drug Reactions System database.This may occur if there is a time sequence of separate adverse reactions in a single patient. AEFI records are classified as 'suspected' by the TGA.An AEFI record is classified as 'not suspected' and excluded from the Adverse Drug Reactions System database if: there is no reasonable temporal association between the use of a drug and the clinical event (generally defined as onset of symptoms within 28 days following vaccination); the record does not contain enough information for an adequate assessment or the information is contradictory; or if a clinical event is explained as likely to have arisen from other causes. Study definitions of AEFI outcomes and reactions AEFIs were defined as 'serious' or 'non-serious' based on information recorded in the Australian Adverse Drug Reactions System database and using criteria similar to those used elsewhere. 17,18In this report, an AEFI is defined as 'serious' if the record indicated that the person had recovered with sequelae, been admitted to a hospital, experienced a life-threatening event, or died. Because children generally receive several vaccines at the same time, all administered vaccines are usually listed as 'suspected' of involvement in a systemic adverse event as it is usually not possible to attribute the event to a single vaccine. Typically, each AEFI record listed several symptoms, signs and diagnoses that had been re-coded by TGA staff from the description provided by the reporter into standardised terms using the Medical Dictionary for Regulatory Activities (MedDRA Ò ). 19For the 23vPPV vaccine, as a single booster is recommended 5 years after the first dose, the number of respondents who declared being vaccinated within 5 years was divided by five to get an estimate of the average number of doses for a single year. 
Notes on interpretation The data reported here are provisional only, particularly for the fourth quarter of 2010, because of reporting delays and the late onset of some AEFIs.The information collated in the Australian Adverse Drug Reactions System database is intended primarily to detect signals of adverse events and to inform hypothesis generation.4][5][6][7][8][9][10][11][12]23 It is important to note that this annual report is based on vaccine and reaction term information collated in the Australian Adverse Drug Reactions System database and not on comprehensive clinical notes.Individual records in the database list symptoms, signs and diagnoses that were used to define a set of reaction categories based on the case definitions provided in the 9th edition of The Australian Immunisation Handbook. 16These reaction categories are similar, but not identical, to the AEFI case definitions. The reported symptoms, signs and diagnoses in each AEFI record in the Australian Adverse Drug Reactions System database are temporally associated with vaccination but are not necessarily causally associated with one or more vaccines.Adverse events following immunisation are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s).For reports where the date of vaccination was not recorded, the date of onset was used as a proxy for the vaccination date. Source: Adverse Drug Reactions System database, Therapeutic Goods Administration. Results There was a total of 424 AEFI records for NSW in the Australian Adverse Drug Reactions System database with a date of vaccination (or onset of an adverse event if vaccination date was not reported) in 2010.This was a 6% decrease on the 450 records in 2009 and a 24% increase on the 340 records in 2008.Eighty percent (n ¼ 338) of the AEFI records during 2010 were reported in the first two quarters of the year, a substantial increase (68%) from the corresponding period in 2009 (24%, n ¼ 107).Fifty-seven percent (n ¼ 243) were for children aged less than 7 years.Thirty-six percent (n ¼ 154) were reported to the TGA by NSW Health and the remainder were reported directly to the TGA; 21% (n ¼ 88) by members of the public, 38% (n ¼ 161) by doctors/other health care providers, and 5% (n ¼ 21) by hospitals.The number of AEFI reports by members of the public was much greater in 2009 and 2010 than in 2008 (2%, n ¼ 7) with 95% of reports by members of the public relating to seasonal influenza and pH1N1 influenza vaccines. Reporting trends The AEFI reporting rate for 2010 was 5.8 per 100 000 population, compared with 6.2 per 100 000 population in 2009 (Figure 1).This is the third highest reporting rate for the period 2000-2010, after the first peak in 2003 that coincided with the national program for meningococcal C conjugate vaccine and high rates of reporting from the 18-month dose of DTPa; and the second peak in 2009 following the commencement of the pH1N1 program (Figure 1). 
Figure 1 shows the increase in reporting by the general public directly to the TGA in 2009 and 2010, and that the majority of reported events (from all reporter types) were of a non-serious nature.Figures 2b and 3 show that the rise in the reporting rate in 2009 and 2010 was due to reports following receipt of pH1N1 vaccine and seasonal influenza vaccines, and that in 2010 this was predominantly in children.Figures 2 and 3 The usual seasonal pattern of AEFI reporting from older Australians receiving 23vPPV and influenza vaccines during the autumn months (March-June) is evident in Figure 3, with a higher reporting rate for influenza in 2010. Age distribution In 2010, the highest population-based AEFI reporting rate occurred in infants aged less than 1 year, the age group that received the highest number of vaccines (Figure 4).Compared with 2009, there was a four-fold increase in AEFI Adverse events following immunisation (AEFI) are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s).reporting rates among children aged 7 years and under (9.9 to 37.9 per 100 000 population), related to the increase in reports following the administration of seasonal and pH1N1 influenza vaccines.There were also increases in the reporting rates of most other individual vaccines given to this age group in 2010 (Table 1) compared to 2009. 24In adults there were also substantial increases in the number of reports following seasonal influenza vaccines, but about a three-fold decrease in AEFI reporting rates in this age group overall (6.3 to 2.4 per 100 000 population), due to the decline in reports following pH1N1 influenza vaccine in this age group. Severity of outcomes Nine percent (n ¼ 37) of events were defined as 'serious' (i.e.recovery with sequelae, requiring hospitalisation, experiencing a life-threatening event or death); higher than observed in 2009.Numbers of reported events and events with outcomes defined as 'serious' are shown in Table 1. Fifteen percent of records were recorded as 'not fully recovered' at the time of reporting (Table 3); 59% of these were following receipt of pH1N1 and seasonal influenza Adverse events following immunisation (AEFI) are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s). Of the 29 cases of convulsion, 27 were children aged less than 7 years.The most commonly suspected vaccines were pH1N1 vaccine (n ¼ 18) and seasonal influenza vaccine (n ¼ 11), either given alone or co-administered with other vaccines. All the reports of hypotonic-hyporesponsive episodes were from children aged less than 7 years.Seven reports were following administration of hexavalent/pneumococcal and rotavirus vaccines while one case report was following administration of hexavalent and pneumococcal vaccines only. All four cases of Guillain-Barre `syndrome were in adults aged 65 years and over and following seasonal flu vaccine (Fluvax Ò ) (CSL Biotherapies) with onset within 24 hours of vaccination. AEFI reports not including influenza vaccines There were 150 AEFI records in 2010 which did not include influenza vaccines, either alone or co-administered with other vaccines.Only one (0.7%) was reported by a member of the public. 
Eleven percent (n ¼ 17) of the 150 AEFI records had outcomes defined as 'serious' (i.e.recovery with sequelae, hospitalisation, life-threatening event or death).Serious AEFIs reported included hypotonic-hyporesponsive episodes (n ¼ 6), diarrhoea (n ¼ 4), allergic reactions (n ¼ 3), injection site reactions (n ¼ 2), Idiopathic Thrombocytopenic Purpura (n ¼ 1) and intussusception (n ¼ 1).There were no reports of life-threatening events and all but one of the children coded as 'serious' was admitted to hospital.The report of intussusception followed administration of hexavalent (DTPa-IPV-HepB-Hib), pneumococcal (PCV7) and rotavirus vaccines in an infant and occurred 2 months post-vaccination.However, due to the length of latency, causality is unlikely to be related to the vaccine.The case of Idiopathic Thrombocytopenic Purpura followed administration of varicella vaccine.However, due to an alternate cause (febrile intercurrent viral infection), the causality is less likely to be related to the vaccine.The distribution of more commonly reported AEFIs is listed in Figure 5. seasonal influenza vaccine The majority of reports for seasonal influenza vaccine were for Fluvax Ò (CSL Biotherapies) (n ¼ 101, 67%) while another 20% did not specify the vaccine brand and were coded only as influenza vaccine.There were 13 adverse event reports following vaccination with Influvac Ò (Solvay Biosciences) and six with Vaxigrip Ò (Sanofi Pasteur).All reports following seasonal influenza A large proportion of the AEFIs following seasonal influenza vaccine were reported directly to the TGA by general practitioners and specialists (41%) and members of the public (20%), while 29% were reported to the TGA by NSW Health.Fifty-nine percent (n ¼ 89) of the reports following seasonal influenza vaccine were defined as 'non-serious', 7% (n ¼ 11) were defined as 'serious', 15% (n ¼ 22) were defined as not recovered, and an additional 19% were not categorised because of the nonavailability of data on hospitalisation and outcome.The distribution of reaction types for seasonal influenza vaccine is presented in Figure 5.The spectrum of reactions for seasonal influenza vaccine was different to that for non-influenza vaccines with a substantially higher proportion of fever (66% compared with 27% for non-influenza vaccines) and allergic reaction (33% vs. 22%) and a lower proportion of injection site reactions (13% vs. 38%).There were 11 reports (7%) of convulsions including eight febrile convulsions, four (3%) of Guillain-Barre `syndrome and two (1%) of syncope following seasonal influenza vaccine.A higher proportion of reports following seasonal influenza vaccine came from members of the public (20% compared with 0.7% for non-influenza vaccines). Monovalent pH1N1 vaccine For pH1N1 pandemic influenza vaccine events, 68% (n ¼ 86) were for children aged 7 years and under.Adverse events following immunisation are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s).pH1N1 (percentage of 126 AEFI records); seasonal influenza vaccine (percentage of 150 AEFI records); and vaccines excluding influenza vaccines (percentage of 150 AEFI records) where the corresponding vaccines were listed as suspected of involvement in the reported adverse event following immunisation.GBS: Guillain-Barre ´syndrome HHE: hypertonic-hyporesponsive episodes pH1N1: pandemic (H1N1) 2009 influenza Source: Adverse Drug Reactions System database, Therapeutic Goods Administration. 
Forty-seven percent (n ¼ 59) were reported by members of the public directly to the TGA and only 17% were reported by NSW Health to the TGA.Seven percent of the reports following pH1N1 influenza vaccine were coded as serious. The distribution of reaction types for pH1N1 influenza vaccine is presented in Figure 5.The spectrum of reactions for the pH1N1 influenza vaccine was similar to that for seasonal influenza vaccine, showing higher rates for fever (58%), allergic reaction (38%), malaise (17%) and convulsion (14%). Discussion There has been an increase in both the number of AEFI reports and population-based reporting rates in both 2009 and 2010.This is due to the substantial increase in reports in children following vaccination with the two available influenza vaccines: the 2010 seasonal trivalent influenza and pandemic (pH1N1) influenza vaccines. The pH1N1 program for adults that commenced in September 2009 resulted in a large peak in reports for that age group in the last quarter of that year, followed by substantially lower levels in adults in 2010.Reports in children peaked in 2010 following the roll-out of vaccination to children aged 6 months to 10 years from December 2009.The safety of the pH1N1 vaccine has been examined closely both nationally and internationally.The World Health Organization reports that approximately 30 different pH1N1 vaccines have been developed using a range of methods. 25All progressed successfully through vaccine trials to licensure, showing satisfactory safety profiles, with the most common reactions being severe to moderate fever (1.2%; 95% CI, 0.2%-6.6%)and irritability in infants and younger children following the first dose of pH1N1 vaccine. 26However, these clinical trials were not large enough to detect rare adverse vaccine reactions which occur with a frequency of less than one in 1000.In general, the safety profile, including that for the Australian vaccine, has been similar to those of other vaccines, with predominantly mild transient events and a small number of serious reactions reported. 27In Australia, reports of febrile convulsions following Panvax administration were found to be between 10 and 100 per 100 000 doses; this is consistent with the definition of a rare event, and substantially less than that for Fluvax Ò . 28Active surveillance for Guillain-Barre `syndrome has resulted in no evidence of an increased incidence, and reports of anaphylaxis are also rare and within expectations. 29e large number of reports following the administration of the pH1N1 vaccine may be attributable to the active promotion of reporting by the TGA and may reflect the fact that immunisation providers are more likely to report milder, less serious AEFIs for vaccines they are not familiar with.This tendency to report an AEFI for newer vaccines increases the sensitivity of the system to detect signals of serious, rare or previously unknown events, but also complicates the interpretation of trends. 
While seasonal influenza vaccines have been used in Australia for decades, a vaccine safety concern emerged in children in 2010.Epidemiological studies determined that the 2010 seasonal influenza vaccine produced by CSL Biotherapies (Fluvax Ò and Fluvax junior Ò ) was associated with a rate of febrile convulsions within 24 hours of administration of 500-700 per 100 000 doses, 28 or between 5 and 20 times higher than in other seasonal influenza vaccines and pH1N1 vaccine.Very high rates of fever were also found in a follow-up study: 46% following administration of Fluvax compared to 16% following pH1N1, and 7% following Influvac. 30The use of the 2010 seasonal trivalent influenza vaccine in children aged under 5 years was suspended in April 2010, 31 after which reporting of AEFIs from seasonal influenza vaccine declined.The recommendation to resume the use of seasonal influenza vaccine in children aged 6 months to 5 years, using brands other than Fluvax Ò and Fluvax junior Ò , was subsequently made in August 2010. 32This issue was initially detected in Western Australia, where vaccine was provided for a larger proportion of children aged 6 months to 5 years through a state-based influenza vaccination program.In other jurisdictions including NSW, only children with medical risk factors were provided with free vaccine.Therefore, the ability of surveillance systems to detect an AEFI 'signal' was limited in these other jurisdictions.However, when results from these jurisdictions were subsequently combined, analyses found a similar rate of febrile convulsions following Fluvax Ò compared to that in Western Australia. 30imulated reporting associated with a new vaccine (pH1N1 influenza vaccine) and a vaccine safety issue (Fluvax) is likely to have resulted in increased reporting of milder AEFIs and for other vaccines.AEFI reporting rates for non-influenza vaccines in children were higher in 2010 compared with 2009.After excluding reports of influenza vaccines, the population-based AEFI reporting rate in children aged less than 7 years was three times lower (2.1 per 100 000 population) than the overall reporting rate per 100 000 population for 2010 in that age group (5.8).This is consistent with levels of reporting in 2004-2008 after the removal of the 18-month dose of DTPa that resulted in a reduction of injection site reactions.The majority of these (60%) were reported by the NSW Department of Health and only 0.7% were reported by members of the public. The recent increase in reports from members of the public (seven in 2008 compared with 88 in 2010) indicates a high level of public interest in both the pH1N1 and seasonal influenza vaccines.This is likely to be due at least in part to the active promotion of the reporting of events following pH1N1 vaccination directly to the TGA, 27 as well as the issues mentioned above. Conclusion There was a 24% higher rate of AEFIs per 100 000 population reported from NSW in 2010 compared with 2008, and a 6% decrease compared with 2009.The high rate in 2010 was attributable to a large number of reports following receipt of the pH1N1 and seasonal influenza vaccines in children.A higher proportion of these events were reported directly to the TGA by members of the public following promotion of this for pH1N1 also demonstrate marked variations of reporting levels in association with previous changes to the National Immunisation Program from 2000 onwards. 
Meningococcal C conjugate vaccine (MenCCV) was introduced into the National Immunisation Program schedule on 1 January 2003; and 7-valent pneumococcal conjugate vaccine (7vPCV) on 1 January 2005.MMR: measles-mumps-rubella Source: Adverse Drug Reactions System database, Therapeutic Goods Administration. Figure 5 . Figure 5.Most frequently reported adverse events following pH1N1 and seasonal influenza immunisation, 2010, by number of vaccines suspected of involvement in the reported adverse event. 20,21Box 1. List of abbreviations of vaccine types used in this report Table 1 . Vaccine types listed as 'suspected' in records of adverse events following immunisation for four age groups (,7, 12]17, 18]64 and $65 years), NSW, 2010 Records where at least one of the vaccines shown in the table was suspected of involvement in the reported adverse event.A 'serious' outcome is defined as recovery with sequelae, hospitalisation, life-threatening event or death. 17 a b Number of AEFI records in which the vaccine was coded as 'suspected' of involvement in the reported adverse event and the vaccination was administered between 1 January and 31 December 2010.More than one vaccine may be coded as 'suspected' if several were administered at the same time.c 'Serious' outcomes are defined in the Methods section.d Number of vaccine doses recorded and administered between 1 January and 31 December 2010.e The estimated AEFI reporting rate per 100 000 vaccine doses recorded.f Number of AEFI records excluding influenza vaccines administered alone.Most reports include more than one vaccine.g School-based only.h Seasonal influenza and 23vPPV only.Source: Adverse Drug Reactions System database, Therapeutic Goods Administration. Table 2 . Reaction categories of interest mentioned in records of adverse events following immunisation for two age groups (,7 and $7 years), NSW, 2010 14verse events following immunisation (AEFI) are generally regarded as any serious or unexpected adverse events that occur after the administration of a vaccine(s).aReactioncategorieswere created for the AEFI of interest listed and defined in The Australian Immunisation Handbook (9th edition, pp.58-65 and 360-3)14as described in Methods section.The bottom part of the table shows reaction terms not listed in The Australian Immunisation Handbook 10 but included in AEFI records in the Adverse Drug Reactions System database.bReaction categories like gastrointestinal related to rotavirus and heart rate/rhythm change had 11 reports each; tremor had 10 reports; oedema had eight reports; increased sweating and pallor each had seven reports; flushing had three reports and parotitis had one report.cThere were no reports for reaction categories like acute flaccid paralysis, irritability, meningitis, orchitis, osteitis, osteomyelitis, sepsis, toxic shock syndrome and abscess.dNot shown if neither age nor date of birth were recorded.eAEFI records where only one reaction was reported.fIncludes skin reactions including pruritus, urticaria, periorbital oedema, facial oedema, erythema multiforme etc. Table 3 . Outcomes of adverse events following immunisation for two age groups (,7 and $7 years), NSW, 2010 Unknown' outcome relates to the number of AEFI records which are not serious and with unknown outcome.Source: Adverse Drug Reactions System database, Therapeutic Goods Administration.vaccination in 2010 were received by the TGA on or after the date of announcement of the withdrawal of seasonal influenza vaccine from use in children (23 April 2010). 
a Percentages relate to the total number of AEFI records (N ¼ 424).b Percentages relate to the number of AEFI records with the specific outcome (e.g. of 214 AEFI records with a 'non-serious' outcome, 63% were for children aged less than 7 years).c ' . The majority of reports were of mild transient events.Increases in reporting following the introduction of a new vaccine (pH1N1) are expected.However, high rates of febrile convulsions and fever following seasonal influenza vaccine, predominantly in Western Australia where the vaccine was offered to all children aged 6 months to 5 years, ultimately resulted in the removal of the indication for the use of Fluvax Ò and Fluvax junior Ò in children of that age, nationally.31InNSW, where the influenza vaccine was provided only for children in this age group with medical risk factors, there were eight cases of febrile convulsions.These cases contributed to the finding that febrile convulsion reporting rates following Fluvax Ò were elevated across Australia.23.Centre for Epidemiology and Research.Summary report on adult health from the NSW Population Health Survey, 2009.Australian Government Department of Health and Ageing, Therapeutic Goods Administration.Australian Technical Advisory Group on Immunisation (ATAGI) and Therapeutic Goods Administration (TGA) Joint Working Group.Analysis of febrile convulsions following immunisation in children following monovalent pandemic H1N1 vaccine (Panvax/Panvax Junior, CSL).Available from: http://www.tga.gov.au/safety/alerts-medicine-seasonalflu-100928.htm(Cited 30 May 2011.)29.Australian Government Department of Health and Ageing, Therapeutic Goods Administration.Suspected adverse reactions to Panvax Ò reported to the TGA.30 September 2009 to 17 September 2010.Available from: http://www.tga.gov.au/safety/alerts-medicine-panvax-091120.htm(Cited 30 May 2011.) 30.Australian Government Department of Health and Ageing, Therapeutic Goods Administration.Investigation into febrile reactions in young children following 2010 seasonal trivalent influenza vaccination.Status report as at 2 July 2010 (updated 24 September 2010).Available from: http://www.tga.gov.au/safety/alerts-medicine-seasonal-flu-100702.htm(Cited 30 May 2011.) 
31.Australian Government Department of Health and Ageing, Therapeutic Goods Administration.Departmental Media Releases 23 April 2010.Seasonal Flu Vaccine and young children.Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/mr-yr10-dept-dept230410.htm(Cited 30 May 2011.)32.Australian Government Department of Health and Ageing, Therapeutic Goods Administration.Departmental Media Releases 30 July 2010.Seasonal flu vaccination for young children can be resumed -Updated advice from the Chief Medical Officer.Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/mr-yr10-dept-dept300710.htm(Cited 30 May 2011.)New Senior Hospitalist Initiative: a new medical career pathway for NSW Health New South Wales (NSW) is developing a new medical career pathway for hospitalists.Hospitalists will provide a range of clinical services and promote coordinated patient care across disciplines.The establishment of the hospitalist role and development of an education program for experienced non-specialist doctors was recommended by the Garling Special Commission of Inquiry into Acute Care Services in NSW public hospitals.The report recognised that ''Hospitalists have an important role in coordinating the care of a patient who has needs which cross boundaries of individual specialities''.The hospitalist pathway offers a flexible, interesting and attractive career to non-specialists keen to remain involved in acute patient care, while leading improvements in the coordination of hospital services.The pathway will be supported by the Masters of Clinical Medicine (Leadership and Management) which will focus on the range of skills required for senior hospitalist roles within NSW Health.The Masters, or equivalent, will be a requirement for eligibility for NSW Health Senior Hospitalist positions as an alternative to the Senior Career Medical Officer Grading Committee.The 2-year part-time Masters, endorsed by NSW Health, will begin in 2012.The program is open to non-specialist doctors with 3 years full-time postgraduate medical experience.It will be accessible statewide through flexible delivery options and will have a substantial workplace component.To support eligible NSW Health doctors to participate in the Masters, the Department will sponsor 15 places in 2012.Those interested in finding out more about the Senior Hospitalist Initiative, including sponsorship and enrolment information, should check the NSW Health website at: http://www.health.nsw.gov.au/training/hospitalist/Cathie Hull, Senior Policy Officer, NSW Ministry of Health Catherine Ellis, Principal Policy Analyst, NSW Ministry of Health
2017-04-02T17:54:52.998Z
2011-11-29T00:00:00.000
{ "year": 2011, "sha1": "5bc3d02080f2fe370ccbb67d6a579ed989c68042", "oa_license": "CCBYNCSA", "oa_url": "https://www.phrp.com.au/wp-content/uploads/2014/10/NB11024.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5bc3d02080f2fe370ccbb67d6a579ed989c68042", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267742793
pes2o/s2orc
v3-fos-license
Development of a Trusted Third Party at a Large University Hospital: Design and Implementation Study Background: Pseudonymization has become a best practice to securely manage the identities of patients and study participants in medical research projects and data sharing initiatives. This method offers the advantage of not requiring the direct identification of data to support various research processes while still allowing for advanced processing activities, such as data linkage. Often, pseudonymization and related functionalities are bundled in specific technical and organization units known as trusted third parties (TTPs). However, pseudonymization can significantly increase the complexity of data management and research workflows, necessitating adequate tool support. Common tasks of TTPs include supporting the secure registration and pseudonymization of patient and sample identities as well as managing consent. Objective: Despite the challenges involved, little has been published about successful architectures and functional tools for implementing TTPs in large university hospitals. The aim of this paper is to fill this research gap by Background Medical research relies on the effective collection, management, and analysis of biomedical data [1].However, the complexity of associated data flows is increasing constantly due to the rising importance of data-driven approaches from the areas of data science and artificial intelligence [2,3].These typically require data to be reused and shared to generate the necessary large data sets, for example in neuroscience [4].At the same time, relevant data are often highly sensitive and require protection against unauthorized use and disclosure [5].In alignment with this need, various laws, regulations, guidelines, and best practices suggest pseudonymization as a central data protection mechanism, especially in biomedical research [6].Pseudonymization refers to a process in which data that directly identifies individuals (henceforth denoted as identifying data), such as names and addresses, are stored separately from data and biosamples needed for scientific analyses, and research assets are identified using protected identifiers, known as pseudonyms [7].This protects the identity of patients or study participants while still allowing the implementation of complex research workflows, for example, data linkage.It is frequently suggested to bundle pseudonymization with other functionalities relevant to data protection and compliance, such as consent management, and that those should be carried out by particularly trusted units, knwon as trusted third parties (TTPs).One example of a concept recommending TTPs is the Guideline for Data Protection in Medical Research Projects by Technology, Methods, and Infrastructure for Networked Medical Research (TMF), the German umbrella organization for networked medical research [8]. 
Although the general functionalities required by medical research projects may be similar, the way they are combined into workflows often differs significantly.The reason is that due to varying study schedules and (data) modalities, studies often have different requirements concerning the necessary number and types of pseudonyms as well as the research assets that have to be registered.The timing of consent collection can also vary, for example, if reconsenting is required.Another factor that can contribute to heterogeneity is the need for integration of or linkage with data from external systems or institutions.As a result, studies often develop study-or project-specific solutions to fulfill specific registration, pseudonymization, linkage, and consenting requirements [9].Some open tools, such as Enterprise Identifier Cross-Referencing (E-PIX) [10], Generic Pseudonym Administration Service (gPAS) [11], Generic Informed Consent Service (gICS) [12], or Mainzelliste [13], have been developed and are in widespread use; however, they are usually not integrated with each other, making the implementation of more complex workflows involving different TTP operations challenging and potentially lead to systematic limitations (explained further in the Discussion section).Although research exists on the components mentioned above, the literature lacks insights into the design of more comprehensive architectures that support complex research workflows that are actually in production use [14,15]. Objectives This paper presents the design of a comprehensive architecture for a TTP that aims to support a wide range of different research projects and studies using a unified system.As a first step, we present requirements elicited for this structure and then describe the implementation of a corresponding solution that reuses existing open components.These components are extended with a common application programming interface (API) and a common graphical user interface (GUI).We then present insights into our experiences with piloting this structure and describe our plans for future developments. Requirements TTPs typically offer a range of core functionalities based on their role in supporting research projects and clinical studies with data protection services.Three key functionalities provided are as follows: (1) identity management, through which patients and study participants are registered and their identities are managed across different systems using record linkage; (2) pseudonym management, which provides and manages pseudonyms for different research contexts and is thus critical for data protection compliance; and (3) consent management, to obtain and manage patient and participant consent for various research activities.Further components are usually included to make these core functionalities accessible.An API is necessary for the systematic retrieval of information, the implementation of complex workflows, and integration with further health care and research systems.Moreover, a well-designed GUI is necessary to enable TTP staff and study personnel to perform common tasks efficiently.An audit trail is required to ensure transparency and traceability.Furthermore, data import and export functions are necessary for transferring data from legacy systems and archiving in study-specific contexts. Finally, platform independence is an important nonfunctional requirement to support wide adoption. 
A common set of tools providing these core functionalities and features (Table 1) are E-PIX [10], gPAS [11], and gICS [12], which are provided as free web-based software by the MOSAIC project from the University of Greifswald (explained in the following section).They are successfully used in a range of research projects and infrastructures [16].Table 1 illustrates which of the above-mentioned core requirements are fulfilled by which of the MOSAIC tools. Programmatic Interfaces and Workflows Representational state transfer (REST) services have become a de facto standard for modern applications over the last couple of years, as they are stateless, lean, and based on open web standards.Hence, we considered a REST API to be an important requirement for all 3 areas-identity management, pseudonym management, and consent management.Together with other common technologies, such asJavaScript Object Notation, this makes the services offered by the TTP accessible to other systems and processes.It also fosters effective information exchange with other systems, for example, to automatically generate primary identifiers and pseudonyms in case a patient is registered in the electronic health record (EHR) system.Moreover, a common API across all services also enables cross-service workflows, which we consider particularly important.An example of this is the automatic creation of pseudonyms linked to the primary identifier when registering a patient or study participant. User Interfaces and Services We considered an integrated user interface (UI) together with a shared authentication and authorization mechanism to be central for our TTP infrastructure.Important functionalities that the UI needs to support include depseudonymization, patient and participant registration, consent management and configuration, as well as administration.A tighter integration of the different components also facilitates sending status messages to users in case actions are required on their side. Specific Features We further identified requirements in regard to specific management functionalities.For example, representing pseudonyms as QR codes is important for seamless workflows across different media; this includes printing the codes on accompanying documents or biospecimen tubes and then reading them using QR code readers.This is particularly important for biospecimen management.Moreover, we identified a need for versioning of managed consent documents.In the event of updates to consents, for example, due to wrong information on the consent form, versioning of the various consents in the system is important for traceability.This also requires the system to be able to assign consents or withdrawals to other participants (eg, if a wrong identifier has been used when originally collecting the form).In addition, a kiosk mode that locks the user into the application is needed for the secure collection of consents from patients using tablets. Nonfunctional Requirements The most important nonfunctional requirements are as follows: (1) scalability, particularly when executing crossservice operations, and (2) documentation of administration functions. Building Blocks In this section, we will describe basic building blocks of the developed application stack. 
MOSAIC Tools As mentioned previously, the application has been developed around the MOSAIC tools [17] as core components.Although these tools do not fulfill all our requirements, they provide a solid basis for implementing the core functionalities.The MOSAIC tools have been positively evaluated by the data protection authority of Mecklenburg-Vorpommern in Germany [18] and have been successfully used in several research projects, for example, the BeLOVE (Berlin Longterm Observation of Vascular Events) [19,20] and NAKO (German National Cohort) studies [21]. The MOSAIC suite consists of 3 tools [22]: E-PIX provides a master patient index following the Integrating the Healthcare Enterprise (IHE) profiles, Patient Identifier Cross-Reference (PIX), and Patient Demographics Query [23,24]; gPAS provides associated pseudonymization functionalities; and gICS supports integrated consent management.More specifically, E-PIX enables the central management of directly identifying master data and supports probabilistic record linkage.The resolution of potential matches between identifying data is supported through the UI.gPAS supports the generation and management of pseudonyms on top of the identities managed by E-PIX using different pseudonym domains that can refer to different systems, locations, or contexts.Finally, gICS supports digitally managing informed consent and supports different consent templates and associated use policies. Following our requirements, we implemented an authentication and authorization model as well as programmatic interfaces and graphical UIs around E-PIX, gPAS, and gICS to enable integrated workflows across all 3 tools and to improve their interfaces. Authorization and Authentication We designed a simple, yet flexible 3-stage authorization model, which combines permissions for basic object access with permissions regarding the domain of the object to be accessed (with create, read, write, or delete permissions) by a machine or human user of the infrastructure.An overview is provided in Figure 1. A domain defines the scope of the data managed by the TTP (eg, a research process, a study, a project, or an institute).Multiple domains can be created within a project (eg, to store pseudonyms used in specific subprojects or contexts).Additionally, in gPAS, a domain can have parent and child domains.This results in a tree structure that can be used to tailor permissions to different scopes within individual projects [25]. On the implementation side, we mapped this model to OpenID Connect (OIDC), which is based on OAuth 2.0 [26].The JavaScript Object Notation Web Token generated in this process contains role names as attributes, which are platform independent and can also be processed on mobile devices.This is important for the additional UIs that we had to develop.As an identity and access management solution, we chose Keycloak, which is in widespread use, has a native administration interface, and is published as open-source software under the Apache License 2.0.Importantly, it can also be connected to a range of directory services usually maintained by hospitals for account and permission management. 
Programmatic Interface We decided to implement a REST API to extend the programmatic interfaces of E-PIX, gPAS, and gICS and support cross-tool workflows.Due to its stateless nature, this design enables the management and sharing of data across different systems, combined workflows, and calls by external components.One important application of the unified REST API is to combine participant registration with automatic consent checking in gICS, indexing the participant in E-PIX, and generating pseudonyms in gPAS.Furthermore, the REST API can easily be integrated with the developed authentication and authorization model as well as logging and audit trail functionalities.Existing interfaces of MOSAIC tools can also be integrated with the permission model by wrapping them behind REST interfaces. Graphical Interfaces Web Interface Based on the integrated programmatic API that supports all services, we have also implemented an integrated GUI, which allows accessing all TTP services in a unified manner.Analogously to the programmatic API, the UIs are integrated with the described authentication and authorization model.Users can log into the platform with their account from the connected directory service, which is abstracted way using OIDC with Keycloak.The token generated at log-in contains all assigned permissions, which are used in the UI and sent as a bearer token with each request to the REST services.A strict content-security-policy workflow blocks the execution of foreign scripts outside the origin domain, thus increasing the level of security.Actions such as participant administration, depseudonymization, or consent administration can be performed through wizards.Users can request essential documents, such as copies of consent, directly from the web application. Mobile App The final building block is provided by a mobile app that serves as a direct channel from the TTP services to the participants.The most important application is collecting consent and handling withdrawals.A typical deployment consists of installing the appl on a tablet, which is then configured by study personnel and handed over to the participants (Figure 2).The study personnel can log into the app using the same log-in data as for the TTP web interface.After the project staff member enters a participant identification code and selects either a consent or a withdrawal form, the selected participant fills out the form.To prevent participants from accessing unauthorized information, the app will be started in kiosk mode.The identification code is either a temporary pseudonym or an already existing pseudonym for the participant, providing direct linkage to the research project managed by the TTP.In the latter case, the app automatically opens the associated consent template.After filling out the form, the participants can enter their name and place of residence, and then, they can put their signature in a designated field.Afterwards, the staff member provides their signature, confirming that the form has been completed with them as the assigned project staff member. 
Supported Pseudonym Algorithms In our system, generated random numbers are used as pseudonyms.The length is configurable, with a minimum of 6 digits, and is chosen based on the number of pseudonyms that are needed for the respective project.Additionally, we use the Damm algorithm to detect single-digit errors and all adjacent transposition errors with a simple checksum [27].Moreover, pseudonyms are combined with studyand context-specific prefixes.For example, the pseudonym "BLV-US-123456" could represent an ultrasound ("US") measurement for a study participant in a study called BeLOVE ("BLV").Finally, our system can also import and manage existing pseudonyms.As those are usually generated using different algorithms and often do not contain a checksum, we mark them as "external" within the system. Ethical Considerations This paper covers the design and implementation of a generic research service, which requires no ethics committee approval according to local policies.However, the individual studies that use the service have to apply for ethics approval.For example, the BeLOVE study, which is described as a case study in this paper, was approved by Charité's ethics committee (vote number EA1/066/17). Results In this section, we will first describe the general architecture of our solution, then cover important implementation details, and finally report on real-world experiences with the platform. Architecture The overall architecture is divided into the API, which wraps around the MOSAIC tools, the graphical interfaces oriented toward users, as well as the access and identity management component (Figure 3 presents more details).As illustrated, the core components are provided with an interface to the EHR system to support the pseudonymization of patient identities for direct reuse in the respective research context.Other systems that can access the TTP services via the REST API are, for example, electronic data capture systems, such as Research Electronic Data Capture (REDCap), or biobank information systems.All components of the respective interfaces are containerized with Docker [28] and deployed on a Docker swarm [29].By using OIDC based on OAuth 2.0 as the standard, we were able to integrate other systems via existing packages (eg, Spring-Boot-Security) and allow other applications to access the systems.When modeling the interfaces, we ensured that anything that could be done graphically could also be done programmatically.This keeps the platform open and supports other information systems with the integration of TTP services. 
Implementation The REST API was implemented using Java 13 with the Spring Boot framework [30] by focusing on stable packages, including Spring Security for OIDC, and relying on an established framework.The resulting platform is robust, maintainable, extensible, and flexible.We have implemented 35 generic interfaces so far, most of which are Create-Read-Update-Delete (CRUD) interfaces for the key information objects Domain, Participant, Identifier, Pseudonym, Consent, and Consent Template (Figure 4), as well as additional directory and search functions for pseudonyms and consents.The web-based interface (Figures 5 and 6) is implemented using the PHP-based lightweight enterprise web framework Laravel [31].Laravel uses a Model-View-Controller pattern [32], has a template engine named Blade, and supports agile development processes.By integrating the open-source framework Bootstrap, we were able to implement a responsive front end that could be displayed in browsers on multiple types of devices.The web application directly interfaces with the REST API and does not manage any participant data in a separate database.The app front end (see Figures 7-9) was developed in React Native [33] and then significantly extended to work on tablets integrated into our mobile device management.The application does not permanently store any data on the device, and processing is carried out exclusively via React Native state management. Core Functionalities for Research Projects As a result of our development efforts, the TTP software stack provides a wide range of functionalities that research projects need.Table 3 provides an overview of frequently used common features.On the API level, these features include integration with other systems to manage pseudonymization, depseudonymization, and data linkage.The app specializes in electronic consent management, specifically viewing, completing, and saving of consent templates.The web-based UI permits registration of participant details; provides an overview of participants, consents, and pseudonyms; supports depseudonymization as well as the retrieval of use permissions based on consent information.CRUD operations for major participant properties and printing consents are also supported. Experiences in Real-World Operational Settings The TTP has already supported more than 10 research projects since it was launched in December 2019.As of December 2022, our TTP system manages data of 3610 registered participants with 384,813 pseudonyms and 1762 consent documents.The pseudonyms fall into 2 categories: 40,867 pseudonyms have been assigned to individual participants managed by the TTP and 343,946 pseudonyms to other identifiers (eg, health insurance numbers that are managed by the TTP as part of its support for data linkage).On average, the TTP manages about 11 pseudonyms for each individual participant.As many as 153 research personnel actively engage with the software on a daily basis.Backups of our databases are created every night.These backups are stored for 90 days along with all log files. 
As a case study, we will describe how the TTP services are being used by the large-scale BeLOVE study [20], which is carried out as a cooperation between several sites and departments at Charité.BeLOVE uses all services provided, from patient as well as participant registration and consent management, to pseudonym generation for the various diagnostics and phenotyping activities performed during hospitalizations or study visits (about 12 pseudonyms per participant).Compared to the initial planning of the study, which required 2 study staff for the administrative tasks, these staff requirements were in the meantime reduced to zero due to the functionality of our TTP and the associated secure outsourcing of tasks to all study staff.The use of central TTP services has also significantly reduced the efforts required for coordinating BeLOVE and its substudies with the data protection and information security officers.Within Charité's internal data integration platform, consistent pseudonyms and API access to mapping rules are frequently used to link data collected about BeLOVE participants with routine health care data collected during inpatient and outpatient encounters for various types of analyses.Secondary pseudonyms have already been generated for 10 projects in which the data have been analyzed or shared with others. Principal Results In this paper, we have presented a software stack to support a TTP with its core tasks at a large German academic medical center.Our architecture extends existing systems for key functionalities, identity management, pseudonymization, and consent management with a fine-grained authentication and authorization model, a modern REST API, two types of UIs, and connections to third-party systems.These extensions were necessary to support cross-service workflows on the programmatic as well as the user level and to meet further functional and nonfunctional requirements.Our application is built using various open-source enterprise frameworks and standards (eg, OIDC) to ensure sustainability and integration with important institutional services (eg, our user directory and leading master patient index).Our experiences with supporting a wide range of research projects with TTP services over a longer period have shown that our approach works and provides functionalities that are generic enough to support a wide range of applications. 
Comparison With Prior Work Our architecture and implementation are based on the MOASIC tools [16], which we have extended with additional components to overcome functional and nonfunctional shortcomings.Most importantly, the publicly available basic versions of the MOSAIC tools are not suitable for handling more complex and flexible workflows with finegrained authorization.For example, supporting cross-service workflows, like registering a patient, generating pseudonyms, and preparing a consent form as an integrated operation, cannot be implemented without an additional dispatcher component that is currently not publicly available.We solved this by implementing a cross-service REST API.Although the MOSAIC tools already come with an API, it is provided individually for each service and is based on the Simple Object Access Protocol [34], which originates from the IHE web service standards [35] and is complex and slow, requiring managing server-side state.Analogously to an API, the MOSAIC tools also offer GUIs.However, they are provided individually for each service and hence do not enable users to seamlessly perform operations that require interactions with multiple core services.For this reason, we developed a cross-service UI that is based on our API.Additionally, we added functionalities for generating QR codes, versioning consent documents, and starting the system in kiosk mode.Finally, our extensions also improve the system's scalability when executing cross-service operations, such as querying for links between pseudonyms and identifiers, which can be slow when using the MOSAIC tools [36].We also added comprehensive documentation of administration functions, which is not fully available for the current open-source versions without registration with the vendor [37]. Prior work on TTP-related services usually focused on individual components or algorithms that could support TTP operations, deployments in specific research projects, or high-level architecture overviews.One well-known example is the one-way hash approach employed by Vanderbilt University Medical Center as part of the ingest process into their deidentified layer within a research data warehouse [38].Pommering et al [39] describe strategies for how pseudonymization could be used in different contexts, for example, in the secondary use of EHR data or in medical research networks and biobanks.They introduced two models that support repeated depseudonymization as well as one-time use [40].The former model was later integrated into a concept for sharing large data sets in medical research networks and biobanks [39]. Building on this, Lo Iacono [41] investigated a cryptographic approach for generating consistent pseudonyms in multicentric studies but without describing a specific implementation within a concrete project.Dangl et al [42] describe concepts and requirements for TTP services for a specific biobank of a clinical research group.Heinze et al [43] developed two services based on IHE profiles that have been implemented into the Heidelberg Personal Electronic Health Record.One service is used to capture patient consent, while the other provides a GUI to manage consents.Further components (eg, for pseudonym or identity management) were not described in detail. 
Lablans et al [13] introduce the Mainzeliste, which supports managing patient identities and pseudonyms through a web-based front end.Bialke et al [10] introduce the MOSAIC tools, which we also use in our work, as a set of tools supporting central data management for studies or research networks.They also introduce the "dispatcher" as an additional component for building complex workflows [22], which is, as we described above, unfortunately not publicly available. Aamot et al [44] compare different strategies for depseudonymization in which, among others, the strategy of Pommering et al [39] is compared with alternative approaches.Based on this comparison, they develop a pseudonymization approach using deterministic one-way mappings based on cryptographic protocols.Lautenschläger et al [45] implement and describe a generic and tightly coupled architecture and component for pseudonymization that has been used in several research projects.On the application side, Bahls et al [14] describe a TTP architecture using the MOSAIC tools for the Routine Anonymized Data for Advanced Health Services Research project.Hampf et al [17] benchmark parts of the MOSAIC tools and conclude that it would take several days to register 2 million patients with the hardware setup utilized. Limitations and Future Work As the most recent versions of the MOSAIC tools are not distributed as open-source software in a public repository [37], it was not possible for us to make changes to the core tools used.Instead, workarounds had to be implemented at the API or UI level, which is not ideal from an architecture perspective.Moreover, our TTP platform is currently focused on providing intra-institutional services only.In future work, we plan to extend our platform with external interfaces, enabling the TTP to act as a central trustee for multicentric projects.We also aim to implement additional programmatic interfaces following international interoperability standards, in particular, Health Level 7 Fast Healthcare Interoperability Resources [46] and enable study personnel to directly manage the permissions of associated staff.Finally, we plan to introduce a unified pool of consent policy keys to harmonize the permission information that can be queried from our system to enable automated downstream processing that considers consent information. Conclusions Scalable and comprehensive TTP services are central to modern data-driven medical research.However, communitybased comprehensive platforms that can be used to implement such services are still lacking.We believe that our description of key requirements as well as the insights provided into our flexible architecture that combines core tools with userand application-oriented workflows and interfaces, including third-party applications, can help other institutions setting up comparable services. Figure 1 . Figure 1.Stages of the functional authorization model. Figure 2 . Figure 2. Workflow of actions in the app. Figure 3 . Figure 3. Architecture overview, including wrapped MOSAIC stack (core components); systems maintained by the trusted third party (TTP; graphical components as well as access and identity components); systems queried by the TTP (electronic health record [EHR] system and directory services); and systems from which the TTP is queried (Research Electronic Data Capture [REDCap]).E-PIX: Enterprise Identifier Cross-Referencing; gICS: Generic Informed Consent Service; gPAS: Generic Pseudonym Administration Service. Figure 4 . Figure 4. 
Key information objects and their relationships. Figure 5 . Figure 5. Screenshots of the user interface: editing consent information. Figure 6 . Figure 6.Screenshot of the user interface: overview of consent status. Figure 7 . Figure 7. Screenshot of the consent app: entering or scanning an ID. Figure 8 . Figure 8. Screenshot of the consent app: filling out consent forms. Figure 9 . Figure 9. Screenshot of the consent app: sign and submit. Table 1 . Core functional requirements and MOSAIC tools that fulfill them. Table 2 . Additional functional requirements and core services for which they are relevant. c Not applicable. Table 3 . Essential functionalities provided to research projects. b UI: user interface.c EHR: electronic health record.
2024-02-19T16:09:34.275Z
2023-09-25T00:00:00.000
{ "year": 2024, "sha1": "38dde4af7a380515ea4ef14ca857feb844e52655", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "92adf837da9ee205bd75e9933c9de5d2556bc015", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
12580335
pes2o/s2orc
v3-fos-license
Thermodynamics of the two-dimensional black hole in the Teitelboim-Jackiw theory The two-dimensional theory of Teitelboim and Jackiw has constant and negative curvature. In spite of this, the theory admits a black hole solution with no singularities. In this work we study the thermodynamics of this black hole using York's formalism. Introduction Thermal radiation from black holes via the Hawking process hints that gravity, quantum mechanics and thermodynamics are linked together. The analysis of quantum fields in a black hole background has first appeared in four dimensional (4D) general relativity. It was then extended to lower dimensions and other theories, following indications from string theory that these are important and useful to study. Two dimensions (2D) has been of particular interest after a black hole in string theory has appeared [1,2]. Hawking radiation and thermodynamics of this black hole has been analysed by several authors (e.g., [3,4,5,6]). Another 2D theory which has been studied in some detail is the Teitelboim-Jackiw theory [7,8]. Although in this theory the curvature is constant and negative, it has a black hole solution [9,10,11,12,13,14]. The existence of a black hole implies a non-trivial causal strucuture which in turn generates interesting non-trivial thermodynamics. Hawking radiation of this black hole has been analysed in [12], and thermodynamics of a black hole in a version of the theory with electromagnetic fields has been studied in [15]. Here we study the black hole of the original Teitelboim-Jackiw theory using York's formalism [16,17]. In 2D this formalism has already been used in [5] to study the 2D black hole in string theroy. The formalism uses the fact that for a system of fixed size and fixed temperature the canonical partition function Z c characterizes thermodynamic equilibrium in the canonical ensemble. The free energy F and the partition function are linked through −βF = log Z c , where β is the inverse temperature. On the other hand Z c can be represented by a path integral, through a relation with the Euclidean action I E given by I E = βF = − log Z c . As a path integral, the partition function depends on quantities that are fixed in the functional integration such as the boundary data chosen from the fields of the system. The Lorentzian Black Hole Solution In the Teitelboim-Jackiw 2D theory the action is where g is the determinant of the metric, R is the curvature scalar, Λ is the cosmological constant (sometimes written as Λ = −2λ 2 ), and I B is a boundary term to specify later. A 2D metric can always be written as where x 0 and x 1 are the time and spatial coordinates, respectively, and A and P are metric functions. The action (1) has got a black hole solution given in the unitary gauge by where the range of x is −∞ < x < +∞, Φ 0 is a constant, and α 2 ≡ −Λ. Transforming to the Schwarzschild gauge, through r = √ b α cosh αx, with b constant greater than zero, one obtains where c is a constant. The maximal analytical extension of (5) (or (3)) is represented in the Penrose diagram of figure 1. It is clear from that diagram and the metric given in equation (5) that the radius r = √ b α (or x = 0) can represent a horizon. However, it can also be a coordinate trick. Indeed the curvature scalar of the solution is R = −α 2 which is a constant. Therefore, spacetime has constant negative curvature and, in principle, is anti-de Sitter spacetime. 
Now, anti-de Sitter spacetime has, in the unitary gauge, a metric given by and dilaton, To transform to the Schwarzschild gauge we put r = √ b α sinh αx and obtain where c is a constant, and b > 0 is also a constant which in this case can always be set to one, b = 1. The maximal analytical extension of (9) (or (7)) is given by the usual anti-de Sitter extension [18]. It is then clear that r and r are totally different coordinates. However, a set of transformations can indeed be found [12] as one might expect, since spacetime in both coordinates has constant negative curvature. Thus, in what sense can (5)-(6) be interpreted as a black hole? Or, in other words, in what sense are (5)- (6) and (9)-(10) different physical solutions? The interpretation of (5)-(6) as a black hole comes from theories in 3D and 4D. It was shown in [10] that action (1) comes from dimensional reduction of 3D general relativity. 3D general relativity admits a static black hole solution with circular symmetry [19]. Solution (5)- (6) gives the corresponding 2D black hole. In the 3D theory e Φ is the circumference radius. On the other hand, it was also shown in [11] that (1) comes from dimensional reduction of a low energy 4D action of heterotic string theory, but now e Φ represents instead the string coupling. This 4D action admits near-extremal magnetic black holes which in turn generate the ansatz for the dimensional reduction process. In both 3D and 4D theories it does not make sense in physical terms to have a negative e Φ . Thus, when (1) is used to model 3D and 4D black holes (as, for instance, s-wave scattering models in quantum evaporation of black holes), one has to cut the 2D spacetime at r = 0. In both cases, it is the dilaton the field which sets this boundary condition. Therefore, solutions with the same local metric properties as in (5)-(6) and (9) There is also the possibility of interpreting the solution (5)-(6) as a black hole without having to resort to higher dimensions. The idea in [13] is that There is a horizon at r = √ b α , i.e., x = 0. Observers at each end of the line x → ±∞ can only communicate if they enter through x = 0. The x = 0 segment is a null line, and test particles in timelike geodesics in one of the ends of the world (x → ±∞) will cross this horizon in a finite time. There is a problem in this interpretation. As figure 4 indicates, there is a cusp (i.e., a singularity) at the junction x = 0. Observers (or particles) when entering a new world have to decide which end (positive or negative x) they will join. Another 2D interpretation can be given to (5)- (6). One can notice that metric (5) represents a portion of the 2D anti-de Sitter spacetime in accelerated coordinates. Indeed, a stationary observer with r =constant in spacetime given by (5) has four acceleration a µ with magnitude a = √ a µ a µ given by where the acceleration is infinite, corresponds to the trajectory of a light ray. Thus, observers held at r =constant see this light ray as a horizon, they will never see events beyond this ray. They are accelerated observers and can see only a portion of anti-de Sitter spacetime. In this sense, region II in figure 1, can be considered a black hole for region I accelerated observers. Note that for anti-de Sitter, r=constant trajectories are straight vertical lines in the corresponding Penrose diagram [18]. In these coordinates the acceleration is a = α 2 r √ There is no infinite acceleration for such observers. 
The situation is analogous to the relation that Rindler and Minkowski 2D spacetimes bear with each other. However, here, there is an extra field, the dilaton. Thus, equations (5)-(6) represent a black hole in several different physical interpretations. In view of this it is interesting to show that this black hole solution has non-trivial thermodynamics. We use here the formalism developped by York [16,17] to understand the thermal behavior of the black hole, (for other types of formalism see [12,15]). The mass of the black hole of equation (5) can be calculated by the standard procedures [14] and is given by, The Euclidean Black Hole and its Reduced Action We now follow [17,5] to find the reduced action of the system. We assume that there is a black hole inside a cavity with boundary B. Now, the Euclideanized form of the metric (2) can be written as (η = ix 0 , ρ = x 1 ), Here η is a periodic coordinate running from 0 to 2π and ρ runs from 0 at the horizon to ρ B at the boundary. The values of the metric function A and dilaton Φ at the boundary are denoted by A B and Φ B . The inverse temperature β at the boundary is related to the proper length of the boundary circle S 1 through the relation, The regularity conditions of the metric and dilaton fields at the horizon imply, and where ′ ≡ ∂ ∂ρ . The Euclidean action can be obtained from (1), where the surface term is required to make the variational procedure selfconsistent, which is important in analysing the thermodynamics, h is the induced metric on the boundary, K is the extrinsic curvature and K 0 is a term necessary to choose the background (the zero point energy). As before, α 2 = 2λ 2 = −Λ. The equations of motion derived from (17) are, Then the T 00 constraint, T 00 = 0, gives, whose solution is where we have chosen the constant of integration as −α 2 b appropriately. Now, using, we can transform (17) into the following: where I 0 ≡ ∂V dρ √ he Φ K 0 is an important term for choosing the background. Then, integrating (22) and using the constraints and boundary conditions we find, where Φ H is the value of Φ at the horizon and I 0 ≡ βe Φ B α was chosen appropriately. In (23) we have put back Newton's constant G and Planck's constant h (still puting Boltzmann's constant and the velocity of the light equal to one). Note that in 2D we use the following units for the constants: [G] = LM −1 T −1 and [h] = MT −1 . As in 4D [17], one sees that a quantum term has appeared in the action, namely the term 2πe Φ , which is associated with the entropy of the system. Equation (23) is thus the reduced action I = I(β, Φ B ; Φ H ) which yields the important thermodynamic quantities. Temperature and the Canonical Boundary Conditions To find the temperature we have to obtain the stationary point of the reduced action, by differentiating I(β, Φ B ; Φ H ) with respect to Φ H . Setting the resulting equation to zero, i. e., ∂I ∂Φ H = 0, we find, where, Equation (24) gives the inverse of the temperature(β = 1 T ) of the 2D black hole. Now, a thermal equilibrium configuration in the canonical ensemble, has to yield Φ H as a function of β. Indeed, inverting (24) gives or in terms of the Schwarzschild gauge of equation (5) Thus as T → 0 we have M → 0. As T → ∞ we have a maximum mass M max = 1 2 α 3 cr B 2 for the BH in the thermal bath. That is, for a given r B the mass of the hole cannot be larger than the one which gives a horizon radius equal to r B . There is nothing like the instanton solution of the Schwarzschild bath in 4D. 
In figure 5 we draw the graph, r H as a function of r B . We see that, at equilibrium, for T → ∞ one has r H = r B for any r B , while for T → 0 one has that r H is very small in relation to r B . This means that for very high temperatures, the boundary is located at the horizon, precisely. At low temperatures the boundary has to be far from the horizon radius. We now study some thermodynamic quantities in this canonical ensemble formulation. We also analyse thermodynamic stability. Thermodynamical Quantities The entropy is defined through the equation Using (23) we find, which has the same functional expression as the one found in [5]. In the Schwarzschild gauge it gives, It is interesting to note that the functional dependence given in (29) is the same for all black holes having a simple 2D Brans-Dicke action [20]. Note also that the extreme case (M = 0) has zero entropy. The thermodynamic energy E is defined by Then from (23) we obtain which, in the Schwarzschild gauge, can be put in the form We see here that the zero point was chosen so that when there is no mass (r H = 0) the thermal energy is zero. Since r H 2 = 2M α 3 c we can invert expression (33) to yield 1 which relates the ADM mass and the thermal energy. The ADM mass (the mass at infinity) is equal to the termal energy times the length (in intrinsic units) of the reservoir minus a self-energy thermal term. Expression (34) is the closest one can get to the Schwarzschild expression found in [16] for the Schwarzschild mass, i.e., M = E − 1 2 E 2 r B . Now, we want to find the Euler relation for this thermodynamic system. From (24) we obtain the temperature T = 1 β , We define a linear pressure by Then, using (30), (34), (35) and (36) we obtain After integration we obtain the Euler relation Upon scaling, r B → lr B and r H → lr H or (S → lS) one has E → lE. Thus, E is homogeneous of degree 1 in S and r B . To analyse thermodynamic stability we first find the heat capacity. For 2D black holes it is defined by Using the expressions (29) for S H we find Thus the heat capacity is positive always, since r B ≥ r H . Therefore, one has thermal stability always. The root-mean-square energy fluctuations ∆E are given by When r B → r H we have ∆E finite and given by < (∆E) 2 > = α 3 c 2π r H . Free Energies and the Ground State of the Canonical Ensemble The Helmholtz free energy function for black holes, F BH , can be deduced from the action by the relation This free energy applies to the equilibrium value of the mass (or r H ) given in (27). From (23) we have in the Schwarzschild gauge the following free energy for the black hole, which is non-positive for all r B . Then, the action at equilibrium is given by the equation, But from (24) the inverse temperature is given by β = 2π . Then (44) can be put in the form We now find the free energy for hot anti-de Sitter space (HADS) in 2D. The local energy density, ρ 0 , of radiation can be found to be where g is the number of massless spin sates and where T local is the locally measured temperature. The energy-momentum tensor of a perfect fluid is A perfect radiation fluid in 2D obeys the following equation of state Thus in 2D the energy-momentum tensor of radiation becomes, Therefore, By the Tolman formula we have where T is the temperature measured at infinity. Thus, (48) yields . (52) The Tolman energy for HADS can also be found, where V is the proper volume of the energy one wants to measure. Now, in the Schwarzschild gauge, anti-de Sitter spacetime has metric given by (9). Then, √ −g = 1. 
Thus Here, V is the optical volume of radius r B , defined by, We see here that ADS spacetime behaves as an enclosure of finite volume. From (55) we have, For αr B → ∞ one has, E HADS (r B → ∞) = π 2 12α gT 2 , which is the energy for the whole spacetime. The action for HADS can be taken from the expression, I HADS = E HADS dβ. Using (56) one obtains, The ground state is the state of least free energy. Since I = βF , and β ≥ 0, we can compare directly the reduced actions for HADS and the black hole. We find that HADS dominates whenever Then using equations (45) and (57) one obtains, Whenever the number of particle species is relatively large then HADS is favoured for sufficiently small r B . Indeed, if g > 12πc, then the quantity inside square brackets is negative up to some boundary radius given implicitly by αr B arctan(αr B ) = 12πc g . This means that up to this radius HADS dominates and for larger r B HADS dominates if T obeys (59) (see figure 5, line (a)). If g < 12πc then HADS is favoured only if T obeys (59) (see figure 5, line(c)). The case g = 12πc says that for r B → 0 HADS is dominant (see figure 5, line (b)). Note that when the boundary r B → ∞ one obtains that, for finite temperature, the black hole is the ground state. It is also interesting to find the density of states, ν(E). Following [16], one finds Thus the density of states is proportional to the entropy. Conclusions The Teitelboim-Jackiw theory has, in absence of matter, constant curvature spacetime solutions. Therefore the black hole solution of the theory has no singularities. In the first studies exploring this theory it was thought that such a black hole did not exist. However, solutions containing point particles and horizons were found [21] which also had some interesting thermodynamic properties. To establish the existence of the black hole in this theory one has to invoke topological arguments. This solution is special in the sense that to have a black hole one needs to add features which are not contained in the metric, i.e., one has to add boundary conditions. We have then showed that this black hole yields non-trivial thermodynamics in York's scheme. Through an analysis of the free energies of both the black hole solution and hot anti-de Sitter spacetime it was possible to infer that for small enough ambient temperature the black hole is the ground state.
2014-10-01T00:00:00.000Z
1996-08-06T00:00:00.000
{ "year": 1996, "sha1": "11db7231c689ca05cc87d5e3601c3d31266f75d9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/9608016", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c0a4968e276a0c7b7adf815de3894811110193cb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
3557976
pes2o/s2orc
v3-fos-license
Phosphorylated and Non-phosphorylated Leucine Rich Amelogenin Peptide Differentially Affect Ameloblast Mineralization The Leucine Rich Amelogenin Peptide (LRAP) is a product of alternative splicing of the amelogenin gene. As full length amelogenin, LRAP has been shown, in precipitation experiments, to regulate hydroxyapatite (HAP) crystal formation depending on its phosphorylation status. However, very few studies have questioned the impact of its phosphorylation status on enamel mineralization in biological models. Therefore, we have analyzed the effect of phosphorylated (+P) or non-phosphorylated (−P) LRAP on enamel formation in ameloblast-like cell lines and ex vivo cultures of murine postnatal day 1 molar germs. To this end, the mineral formed was analyzed by micro-computed tomography, Field Emission Scanning Electron Microscopy, Transmission Electron Microscopy, Selected Area Electon Diffraction imaging. Amelogenin gene transcription was evaluated by qPCR analysis. Our data show that, in both cells and germ cultures, LRAP is able to induce an up-regulation of amelogenin transcription independently of its phosphorylation status. Mineral formation is promoted by LRAP(+P) in all models, while LRAP(–P) essentially affects HAP crystal formation through an increase in crystal length and organization in ameloblast-like cells. Altogether, these data suggest a differential effect of LRAP depending on its phosphorylation status and on the ameloblast stage at the time of treatment. Therefore, LRAP isoforms can be envisioned as potential candidates for treatment of enamel lesions or defects and their action should be further evaluated in pathological models. INTRODUCTION Dental Enamel is the outermost layer of the teeth and the most mineralized structure in the vertebrates, since it is constituted of at least 95% minerals. Its microstructure is composed of nanorod-like hydroxyapatite (HA) crystals arranged in a highly organized unit called the enamel prism or rod. Prism high organization leads to enamel robust mechanical properties for tissue protection against cariogenic bacteria and mechanical force upon tooth function. Enamel is formed through synthesis, growth, and organization of these rods by specialized cells, the ameloblasts, throughout the process of amelogenesis. In contrast to bone or dentin, it is acellular in its mature form. Indeed, ameloblasts are once and for all, degraded during the process of tooth eruption and consequently, they cannot regenerate and actively repair by themselves. In view of the high prevalence of dental caries and enamel defects, enamel regeneration, and repair has become a target for developing biomimetic therapeutic approaches (Cao et al., 2014;Ruan and Moradian-Oldak, 2015;Snead, 2015). The biological processes involved in enamel formation are well characterized (Li et al., 2006). During amelogenesis, ameloblasts undergo a maturation process with a change in appearance from early, elongated secretory cells actively involved in organic extracellular matrix synthesis, to more round, mature cells involved in the degradation of this matrix and deposition of the mineral. Ameloblast extracellular matrix is known to be key for controlling growth and organization of enamel crystals during mineralization (Robinson et al., 1989;Iijima and Moradian-Oldak, 2004). It is essentially synthesized by the secretory stage ameloblasts and is composed of various structural proteins such as amelogenin, ameloblastin, enamelin, and MMP20. 
Among these proteins, amelogenins are the most abundant (Fincham et al., 1999;Moradian-Oldak, 2012). Native amelogenin, has been shown, in porcine teeth, to be synthesized mostly under a form phosphorylated on the single Serine-16 site. Phosphorylation affects amelogenin function since phosphorylated native porcine amelogenin (P173) inhibits calcium phosphate crystallization and stabilizes amorphous calcium phosphate while its recombinant un-phosphorylated counterpart guides the formation and organization of aligned enamel crystals (Beniash et al., 2005;Wang et al., 2007;Kwak et al., 2009;Wiedemann-Bidlack et al., 2011;Margolis et al., 2014). Different isoforms of amelogenin, mostly resulting from alternative splicing, have been evidenced in bovine and rodent enamel (Shimokawa et al., 1989;Lau et al., 1992). They are translated into amelogenin proteins that vary in length and relative abundance (Bartlett et al., 2006;Yamakoshi, 2011). Among these alternative isoforms, the Leucine Rich Amelogenin Peptide (LRAP) is the second most abundant amelogenin protein (Shimokawa et al., 1989). LRAP was observed in secretory stage ameloblasts (Iacob and Veis, 2008) and shown to be produced throughout amelogenesis (Yuan et al., 1996;Veis et al., 2000). It is a short peptide (56-59 amino acids, depending on the species) identical to the full-length amelogenin except for the majority of the exon-6 coded region that is lacking (Bonass et al., 1994). It contains the two self-assembly domains of the full-length amelogenin form (Paine and Snead, 1997;Pugach et al., 2010) and has been evidenced in mouse, porcine, bovine, and human (Goldberg, 2010). LRAP has been demonstrated to display both signaling and structural properties on dental cells. It is able to promote ameloblast or odontoblast in vitro differentiation (Tompkins and Veis, 2002;Sarkar et al., 2014) and can affect in vitro calcium phosphate formation in a very similar fashion to the full-length amelogenin (Beniash et al., 2005;Kwak et al., 2009Kwak et al., , 2014Wiedemann-Bidlack et al., 2011). Remarkably, the phosphorylated form of the peptide on serine 16 [LRAP(+P)] stabilized amorphous calcium phosphate (ACP) whereas the non-phosphorylated form [LRAP(-P)] was shown to guide the formation of bundles of well-aligned needle-like apatitic crystals (Le Norcy et al., 2011b). LRAP(-P) has also been recently shown to act as a surface treatment agent to enhance remineralization of altered enamel (Shafiei et al., 2015) and guide the regeneration of acid-etched enamel structure (Kwak et al., 2017). Despite these recent observations, few studies have addressed the direct role of LRAP, on enamel mineralization in biological models, in relation to its phosphorylation status. Namely, nothing is still known on the ratio of un-phosphorylated to phosphorylated LRAP forms and whether this ratio changes during tooth development and maturation. Indeed, up to now, most researches have been performed with a recombinant non-phosphorylated LRAP peptide although amelogenins are detected in vivo under their phosphorylated form (Fincham and Moradian-Oldak, 1993). In a context of future therapeutic applications, the aim of this work was therefore to determine whether the phosphorylation status of LRAP impacts the nature of the mineral formed in biological systems as it does in vitro. 
To this end, the effect of the LRAP (+P) or (-P) on mineral formation was analyzed in two ameloblast-like cellular models mimicking secretory (LS8) and maturation (ALC) stage ameloblasts and in a model of ex vivo tooth germ culture (Chen et al., 1992;Nakata et al., 2003;Sarkar et al., 2014). Preparation of LRAP Variations of the porcine LRAP (MPLPPHPGHPGYINFS P YEV LTPLKWYQNMIR HPSLLDLPLEAWPATDKTKREEVD) with and without the phosphate group on Serine-16, were synthesized commercially (NEO Peptide, Cambridge, MA, USA) and re-purified, as previously described (Nagano et al., 2009). Lyophilized peptides were weighed and dissolved in distilled deionized water at room temperature to yield a stock solution of 2 mg/mL. Complete solubilization of both peptides in water was verified by dynamic light scattering analyses. LRAP concentrations were confirmed by nanodrop analyses at 280 nm. Molar Germ Culture First mandibular and maxillary molar germs (n = 125) were extracted from post-natal day 1 Swiss Webster mice (PND1) after euthanasia. This procedure was carried out in accordance with the French regulations on animal testing (Decree n r 2013-118 of February 1st 2013 on animal protection used for scientific purposes NOR: AGRG1231951D). Germs were cultured in a mineralizing medium composed of Minimum Essential Medium α (MEM α) supplemented with 10% FBS, 0.18 mg/mL ascorbic acid, 1x Glutamine, 1% Penicillin/streptomycin, and 5 mM β-glycerophosphate. A quantity equivalent to one third of the medium volume of agar was added to each well. Thirty three ng/mL LRAP(+P) or of LRAP(-P) were added to the medium before agar addition (Tompkins et al., 2005). Five mandibular and five maxillary molar germs were separately cultured for 9 days (D9) under each condition and time point and experiments were repeated 4 times (n = 4). Germs were fixed on the day of extraction (D0) or after 9 days of culture, by immersion in 4% paraformaldehyde (PFA) for 30 min then rinsed with PBS and stored in 70% ethanol. RNA Extraction and Quantitative PCR Total RNAs were extracted from cells at D0, D2, and D7, and also at D14 for ALC cells and from tooth germs at D0 and D9 using respectively RNeasy Mini Kits for the cells and RNeasy Micro Kits (Qiagen) for the molars. Five hundred and fifty nanograms of total RNA were respectively reverse transcribed to first strand cDNA using a Verso cDNA Synthesis Kit (Thermo Fisher Scientific). For quantitative PCR, mouse specific primers for Amelx (F: GATGGCTGCACCACCAAATC, R: CTGAAG GGTGTGACTCGGG), Actin (F: GTGGCATCCATGAAACTA CAT, R: GGCATAGAGGTCTTTACGG), GAPDH (F: TGTGTC CGTCGTGGATCTGA, R: TTGCTGTTGAAGTCGCAGGAG) were used. PCR was accomplished in a Lightcycler thermocycler 480R with SYBR R Green Supermix (Bio-Rad) according to the manufacturer's instructions. Values were calculated with the LightCycler R 480 software 1.5.0 (Roche, Applied Science). Results were analyzed by the method of Ct. All data points were normalized to Actin and/or GAPDH and all samples were run in triplicate. Statistical analyses were conducted with Microsoft Excel 2011 software (Microsoft, Redmond WA, USA). A two-tailed unpaired Student T comparison test was performed (α = 0.05, * p < 0.05; * * * p < 10 −4 ). LRAP(-P) and LRAP(+P) treated samples were compared to the control. Micro-Computed Tomography (Micro-CT) Imaging and Analyses Germ mineralization was quantified by X-ray Micro Computed Tomography imaging (Micro-CT, Quantum FX Caliper, Life Sciences, Perkin Elmer, Waltham, MA, USA) at 90 kV and 160 µA. 
Tridimensional images were acquired with an isotropic voxel size of 20 µm and a rotation step of 0.1 • (scan time = 3 min). Before each micro-CT acquisition, the lead citrate calibrator was scanned with an HAP phantom to assign an HAP value for each gray level of lead citrate solutions. Reconstructed files were converted into eight-bit images with fixed lower and upper brightness limits using the "CT analyzer" software (Skyscan, release 1.15.4.0, Kontich, Belgium). A binary segmentation process was applied uniformly on each data stack to separate the mineralized and non-mineralized material inside the whole germ volume. The threshold value used for binarization was manually set so that every voxel with an equal or higher value was represented as solid material, and lower values represented as space. Similar gray level values for global germ density and mineral density were set and used for analysis in all samples. In the quantifications, the mineral density corresponded to a mean of the total germ mineral content (addition of dentin and enamel) whereas the enamel volume is reflected by the ratio between the volume occupied by the enamel layer and the whole germ volume. Transmission Electron Microscope (TEM) Analyses Ten microliters of aliquots were taken from scraped regions of alizarin red stained cell cultures observed under the light microscope and placed on carbon-coated Cu grids (Electron Microscopy Sciences, Hatfield, PA, USA). Duplicate grids were prepared from a minimum of three different experiments. Images were obtained in bright field and Selected Area Electron Diffraction (SAED) modes with a Tecnai 12BT Transmission Electron Microscope (TEM) at 80 kV. Field Emission-Scanning Electron Microscopy (FE-SEM) Analyses PFA fixed germs were analyzed using a Field Emission-Scanning Electron Microscope (Zeiss SUPRA 40). They were air dried and placed on an SEM holder without any preparation. Lateral faces of molar cuspids were observed. Acquisitions were made using the Everhart-Thornley type Secondary Electron detector (SE2) for the first three magnifications (177-226x, 10 kx, 20 kx) and using the In-lens detector for the largest magnification (40 kx). Effect of LRAP Phosphorylation Status on Ameloblast Cell Line Mineralization To analyze the effect of LRAP and its phosphorylation status on ameloblast mineralization, we used two murine ameloblastlike cell lines (LS8 and ALC) mimicking different stages of enamel formation as well as control murine embryonic fibroblast NIH3T3 cells. LS8 cells appear to correspond to secretory stage ameloblasts characterized by high expression of Amelx, Ambn, Enam, and Mmp20 transcripts while ALC cells behave as maturation stage ameloblasts with high expression of Odam and Klk4 transcripts (Sarkar et al., 2014). Culture in mineralizing medium promoted the formation of macroscopically visible mineralization nodules after alizarin red staining, at day 7 in the LS8 cells but only very scarce and light nodules in the ALC cell cultures. After 2 weeks, ALC cells exhibited small squared mineralization nodules while the LS8 cells started to degenerate (data not shown). In contrast, control culture of NIH3T3 cells in the same medium did not lead to any mineralization even after 3 weeks of culture (data not shown). Therefore, both ameloblastic cell lines were able to mineralize but with a different kinetics (Supplementary material). To characterize the structure of the mineral formed in the various conditions, SAED analyses were performed. 
They showed that the mineral formed under all conditions was HAP (Figures 1, 2). Furthermore, TEM observations revealed that the mineral formed by untreated LS8 cells was composed of dispersed needle shaped HAP crystals (mean length of 43.9 ± 7.8 nm; n = 24) (Figures 1A,B). Upon LRAP(+P) addition, similarly dispersed but slightly longer needle shaped HAP crystals, (mean length of 56.7 ± 9.2 nm; n = 35) were observed (Figures 1C,D) while, bundles of fine elongated HAP crystals (mean length of 103 ± 17.8 nm; n = 34) were formed with LRAP(-P) (Figures 1E,F). Crystal length to width ratio (L/W) was similar in the control and LRAP(+P) treated cells (4.44 ± 0.89 and 4.4 ± 0.73, respectively) whereas it was increased in the presence of LRAP(-P) (7.38 ± 1.26; Table 1). In untreated ALC cells (Figures 2A,B and Table 1), a mixture of large round mineral particles and few very large elongated HAP crystals (mean length of 342.7 ± 49.5 nm) were predominantly observed with TEM and characterized by SAED although a small quantity of shorter needle-shaped HAP crystals was also present (mean length of 74.9 ± 40.9 nm; n = 16). On the whole, mineral structures were much larger than those found in the LS8 control cells. Upon LRAP(+P) treatment, only needle shaped HAP crystals (mean length of 84 ± 33.1 nm; n = 34) were observed (Figures 2C,D and Table 1). LRAP(-P) treatment promoted the formation of bundles of elongated HAP crystals (mean length of 76.2 ± 57.1 nm; n = 42) similar to those formed by the LS8 cell (Figures 2E,F and Table 1). While length to width ratios were similar in untreated, and LRAP(+P) or LRAP(-P) treated cells (9.08 ± 3.46, 9.13 ± 2.97, and 9.15 ± 2.89, respectively; Table 1), the mineral organization was very different between LRAP-treated and untreated cultures: large crystals were predominant in controls but absent in the peptide-treated cultures. In addition, needle-shaped crystallites were organized in bundles whereas they were randomly dispersed in the untreated cultures. This organization was more particularly evident after LRAP(-P) treatment (Figures 2E,F). Since the un-phosphorylated form of LRAP had been previously shown to impact Amelx gene transcription (Iacob and Veis, 2008), the relative effect of both peptides on Amelx expression by the cells was evaluated by qPCR. We observed that both forms of LRAP induced an early (D2) statistically significant up-regulation of Amelx transcription in the LS8 cells although more pronounced with LRAP(-P) ( Figure 3A). In ALC, a similar up-regulation (D7) in Amelx transcription was observed with the two peptides, although only statistically significant with LRAP(-P) ( Figure 3B). This up-regulation was however delayed as compared to LS8 cells in agreement with the mineralization kinetics (Figures 3A,B). Therefore, both peptides affected amelogenin expression and presented an effect on crystal organization. Effect of LRAP Peptides on Germ Mineralization To determine the effect of LRAP phosphorylation on mineral formation, in a more integrated biological context, we tested the peptide effect on PND1 molar tooth germs cultured ex vivo over a 9 day period (Bègue-Kirn et al., 1992;Tompkins et al., 2005). Growth of the first molar germs was observed in all conditions upon 9 days of ex vivo culture on semi-solid medium ( Figure 4A). Micro-CT imaging allowed quantifying germ mineralization in all samples, as well as determining the mineral density (enamel and dentin combined) and the enamel volume (Figures 4B,C). 
An increase in mineral density was detected in all cultured samples, i.e., treated and untreated, as compared to uncultured D0 germs confirming the germ growth in culture (Figure 4B). Peptide treatment did not appear to impact this value. In contrast, germ culture in the presence of LRAP(+P) led to an increase in enamel volume (>50%) as compared to untreated germs, in contrast to LRAP(-P) treatment which did not ( Figure 4C). The mineral formed in the PND1 molar tooth germs was further characterized by FE-SEM (Figure 5). Ameloblast pits typical of immature enamel were observed in all samples, confirming enamel formation in the culture process (Figures 5A,D,G). LRAP(+P) treated germs displayed smaller and more spaced pits (Figures 5E,F) than untreated controls (Figures 5B,C), while those of LRAP(-P) germs appeared slightly wider (Figures 5H,I) than the LRAP+P-treated or control germs. These observations suggested an increased mineralization process in the presence of LRAP(+P) confirming the micro-CT analysis. The effect of the LRAP peptides on Amelx transcription was evaluated in the PND1 molar germs cultures. A statistically significant up-regulation of Amelx transcription was observed with both LRAP(+P) and LRAP(-P) treatment relative to untreated germs, with a stronger effect of LRAP(+P) than LRAP(-P) (respectively 3-vs. 1.5-fold relative to the control) ( Figure 3C). DISCUSSION This study shows a differential effect of the LRAP peptide on enamel formation, depending on its phosphorylation status in in vitro and ex vivo culture models. Mature enamel, in contrast to other mineralized tissues like bone or dentin, cannot be repaired. When the tooth erupts in the oral cavity, ameloblasts are degraded and, consequently, the enamel cannot be re-grown or regenerated. The search for molecules able to restore enamel defects is therefore ongoing. In this context, the LRAP peptide has proven of interest thanks to its signaling properties as well as its apparent effect on crystal growth and structure (Shaw et al., 2004;Beniash et al., 2009;Le Norcy et al., 2011a,b;Wiedemann-Bidlack et al., 2011;Moradian-Oldak, 2012;Kwak et al., 2016Kwak et al., , 2017. In the present study, we first show that both forms of peptide have a differential effect on the mineral formed by ameloblast-like cells in culture. Untreated LS8 cells, a model for secretory stage ameloblast, thus actively expressing Amelx, Ambn, and Enam, and Mmp20 mRNAs (Sarkar et al., 2014), synthesize crystals with a very similar structure to those observed in secretory stage tooth enamel. LRAP(-P) treatment potentiates the crystal lengthening and bundle formation whereas LRAP (+P) has little effect on the general crystal shape. Therefore, in the LS8 model, despite the up-regulation of Amelx expression promoted by both forms of the peptide, the structure of the mineral appears mainly affected by the LRAP(-P) form, likely through a direct action of the peptide on HAP crystals as previously observed in precipitation experiments (Le Norcy et al., 2011b). In the ALC cell line, a model for maturation stage ameloblast, characteristically expressing Amelx, Odam, and Klk4 transcripts, treatment by both peptides affects crystal formation, favoring bundle formation, as in LS8 cells. This effect is, however, again most evident with LRAP(-P). 
Remarkably, when measuring the length to width ratio, no significant difference is found between the crystals formed by control or LRAP-treated cells which might be related to the fact that this maturation stage is characterized by a crystal growth and no longer elongation (Sarkar et al., 2014). The fact that both peptides stimulate Amelx expression in the ALC cell, where it is usually low, could explain the change in crystal morphology and organization through guidance by the potentially newly induced amelogenin protein. The observed LRAP(-P) action could then result from a direct effect of the peptide on crystal shape as observed in LS8 cells and in precipitation experiments (Le Norcy et al., 2011a,b) and recently on acid etched enamel surfaces of human teeth (Kwak et al., 2017). Molar germs can be cultured ex vivo and were shown to develop well-organized layers of polarized ameloblasts and odontoblasts (Tompkins and Veis, 2002). Micro-CT and FE-SEM analyses of the mineral formed by PND1 molar germs after a 9-day culture revealed an increase in enamel volume with LRAP(+P) while it was not increased by LRAP(-P) treatment. Rescue experiments with recombinant plasmid encoding LRAP in amelogenin KO mice, have shown that LRAP contributes to final enamel thickness and prism organization (Gibson et al., 2011;Xia et al., 2016). It can be speculated from our data, that in these mice LRAP is present under its phosphorylated form. Altogether our results obtained in cell and germ cultures claimed for a differential effect of the phosphorylated and nonphosphorylated LRAP on crystal formation. It is not clear however at this point why LRAP(-P) did not significantly impact the mineral volume in the germ culture while it did so in the cell lines. This observation might be related to the 3D vs. 2D cell organization in both system and to the homogeneous differentiation stage present in the cell cultures as compared to the cultured germs where secretory and mature ameloblasts co-exist. Our results on the peptide action on ameloblast cell-like culture, strongly suggest that peptide phosphorylation is not essential to achieve an impact on amelogenin gene expression, and likely on differentiation, since LRAP(+P) as well as LRAP(−P) were both able to stimulate amelogenin transcripts in the LS8 and ALC cells. This stimulation is restricted in Mean length of HAP crystals and length to width ratio were measured in TEM images in the control, LRAP(+P) and LRAP(-P) treated cells at D7 for LS8 and D14 for ALC. A statistically significant increase in crystal length was observed under both treatment conditions in the LS8 cells (*p < 0.05 and ***p < 10 -6 ). For the ALC cells, due to the large heterogeneity in crystals observed, statistical analyses of crystal length were not relevant. Similar length to width ratio were observed in the control and the LRAP(+P) treated cells with both cell lines; an increase in the ratio was observed when LS8 cells were treated with LRAP(-P). a time frame since for both cell lines and peptides, it is followed by a decrease in amelogenin expression in agreement with what is observed during the process of differentiation of tooth ameloblasts. Our results with LRAP(-P) are concordant with previous studies showing its action on ameloblastic differentiation (Tompkins and Veis, 2002;Tompkins et al., 2005;Ravindranath et al., 2007). Notably, the two cell lines reacted to the peptide treatment with a different kinetics. 
The LS8 cell line responded very quickly to LRAP treatment (48 h) by increasing the number of Amelx transcripts whereas the ALC cell response was delayed (7 days). This variation in response kinetics is likely linked to the different stage of ameloblastic differentiation mimicked by these cell lines. Amelogenin secretion is very active during the secretory stage (Aoba et al., 1987), but then drops as the cells mature. In the LS8 cells, the peptides appear to potentiate the already active expression of amelogenin and this process appears direct as shown previously for LRAP(-P) (Iacob and Veis, 2008) while in the ALC cells they likely act through indirect more complex processes. Understanding the potential complementary action of LRAP(+P) and LRAP(-P) on cell mineralization and metabolism is a next step in our analysis. This may further lead to the establishment of differential treatments by . At D2, both peptides induced a statistically significant increase in amelogenin transcripts relative to the control in the LS8 [*p < 0.05 for LRAP(+P) and ***p < 10 −4 for LRAP(-P)]. At D7, both peptides induced a similar increase for the ALC, statistically significant for LRAP(-P) (*p < 0.05) relative to the control. (C) Amelx transcripts levels were compared between D0 (PND1 germ) and cultured D9 germs. Inhibition of Amelx expression was observed in all conditions relative to the D0 germ. LRAP(+P) and LRAP(-P) treatment induced a statistically significant (*p < 0.05) increase in Amelx expression relative to the control. FIGURE 4 | Macroscopic views and mineral density and enamel volume of first molar germs cultured in the absence and presence of added peptide. (A) Photographs of first molar germs at D0 and after 9 days of culture in the absence (CONTROL) or presence of LRAP(+P) and LRAP(-P). Scale bar 500 µm. Mean mineral density (B) and enamel volume (C) were calculated from Micro-CT scans of first molar germs at D0 and after 9 days of culture. Addition of LRAP(+P) peptide lead to a statistically significant increase (*p < 0.05) in enamel volume relative to the D9 control, no difference could be observed between LRAP(-P) treated and control samples. All samples presented an increased mineral density (B) and enamel volume (C) at D9 relative to D0 confirming tooth germ growth and maturation. FIGURE 5 | FE-SEM analyses of D9 first molar germs cultured in the absence and presence of added peptide (A-C) CONTROL, (D-F) LRAP(+P), and (G-I) LRAP(-P). Ameloblast pits and mineral organization were clearly observed for all three samples, confirming enamel growth in culture. Ameloblast pits appeared smaller and more spaced in molar germs treated with LRAP(+P) (E,F) relative to the control (B,C) and slightly wider in molar germs treated with LRAP(-P) (H,I) relative to the control. selected peptide(s) according to the tooth developmental stage. The present data obtained in biological models parallel those recently described in vitro claiming that LRAP(-P) is involved in the modulation of crystal maturation (length, width) (Shafiei et al., 2015;Kwak et al., 2017). Therefore, its topical application can be envisioned for a future repair of enamel lesions. Furthermore, our ex vivo observations on LRAP(+P) correlate with the recent in vitro findings by Yamazaki and colleagues on the native phosphylated amelogenins during the early stages of enamel formation (Yamazaki et al., 2017). 
Unraveling the signaling pathways underlying this action is therefore mandatory for a potential use of this peptide as early treatment of inborn disorders of enamel.
2018-02-13T22:18:03.826Z
2018-02-08T00:00:00.000
{ "year": 2018, "sha1": "9cb5eaa2450df1adc361ffd1abcbd3f67caf2936", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2018.00055/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9cb5eaa2450df1adc361ffd1abcbd3f67caf2936", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
1302256
pes2o/s2orc
v3-fos-license
The estimated economic burden of genital herpes in the United States. An analysis using two costing approaches Background Only limited data exist on the costs of genital herpes (GH) in the USA. We estimated the economic burden of GH in the USA using two different costing approaches. Methods The first approach was a cross-sectional survey of a sample of primary and secondary care physicians, analyzing health care resource utilization. The second approach was based on the analysis of a large administrative claims data set. Both approaches were used to generate the number of patients with symptomatic GH seeking medical treatment, the average medical expenditures and estimated national costs. Costs were valued from a societal and a third party payer's perspective in 1996 US dollars. Results In the cross-sectional study, based on an estimated 3.1 million symptomatic episodes per year in the USA, the annual direct medical costs were estimated at a maximum of $984 million. Of these costs, 49.7% were caused by drug expenditures, 47.7% by outpatient medical care and 2.6% by hospital costs. Indirect costs accounted for further $214 million. The analysis of 1,565 GH cases from the claims database yielded a minimum national estimate of $283 million direct medical costs. Conclusions GH appears to be an important public health problem from the health economic point of view. The observed difference in direct medical costs may be explained with the influence of compliance to treatment and possible undersampling of subpopulations in the claims data set. The present study demonstrates the validity of using different approaches in estimating the economic burden of a specific disease to the health care system. Introduction Herpes simplex virus type 2 (HSV-2) is the most frequent causative organism of genital herpes (GH) in the United States, while HSV type 1 is felt responsible for this recurrent infection in only 20 to 30 percent of cases [1]. GH is contagious both in the symptomatic and in the asymptomatic phase of the disease and causes painful genital ulcers. The management of the patient involves consultations, laboratory exams and drug treatment for the disease and its complications. GH is one of the three most widespread sexually transmitted diseases in the USA [2]. Published data show that 45 million persons aged 12 years or older have HSV-2 an-tibodies [3], that up to 70% of patients attending sexual transmitted disease clinics have HSV-2 infection [4] and that the majority of patients with initially symptomatic GH will develop recurrent disease [5]. Cost of illness studies represent one of the applications of economic science to medicine. The aim of these studies is to assess the economic burden of a disease and to help decision-makers in targeting preventive efforts and allocating resources [6,7]. Despite its widespread and increasing transmission, there is still poor understanding of the economic impact of GH in the USA, which makes it difficult to evaluate societal costs and the cost-effectiveness of preventive efforts [8]. Therefore, the objective of the present study was to estimate the economic burden of GH in the USA, using two different costing approaches. Methods We conducted a population-based study on the costs of GH. 
In order to give a better estimate of the disease burden, we retrieved economic information using two different approaches: In the first instance, direct interviews were conducted with a random sample of office-based physicians using a structured, comprehensive questionnaire. Secondly, information was retrieved from a large, longitudinal administrative database. For this purpose, data from Diversified Pharmaceutical Services (DPS), which is a pharmaceutical benefit management firm, were used. For both approaches, the time span was one year (1996). Costs were referred to on a yearly basis and computed in 1996 US dollars. Data were collected and analyzed separately. Approach using expert interviews A questionnaire was administered to 30 randomly selected primary and secondary care physicians practicing in the North-East of the USA. Physicians were asked about the annual number of patients with GH and the total number of episodes of symptomatic GH, seeking medical care and treatment. Data collection included the following variables: (1) demographic characteristics of the physicians' practices; (2) epidemiological figures: number of patients with GH per year; stratification of patients according to age, gender, severity of disease recurrence rates; duration of episodes; and (3) resource utilization: frequency of consultations and laboratory tests for GH patients; prescription of drugs in first/recurrent episodes and duration of treatment; frequency of hospitalizations due to GH complications; employment of patients; proportion of patients unable to work. The perspective of this analysis was societal; thus, both direct and indirect medical costs were based on the costs born by society. The US population in 1996 (279 million) served as a reference. Regarding medical costs, the categories considered were the consultations and medical procedures performed, laboratory tests, pharmacological treatment and hospitalizations. Unit costs are illustrated in Table 1. For the purpose of this analysis, average monetary values from all respondents were considered [9]. Values used came directly from interviews with physicians involved in the study. Indirect costs were calculated using the human capital approach [7,10,11]. These costs were based on an average U.S. hourly wage of $14 [12], and were valued according to estimates of time consumed by hospitalization, time lost from work due to illness, and travel and waiting-room time resulting from physician visits. We estimated that individuals with a primary GH syndrome would miss two days of work if symptoms were not severe enough to require hospitalization, and one week of work if hospitalization was required. Patient travel and waiting room time was estimated to be two hours per physician visit. We did not use gender-specific wage rates, as females are underpaid relative to men [12], so that gender specific wage-rates would implicitly undervalue the costs of GH in females. Approach using claims database DPS is a pharmaceutical benefit service collecting and processing claims on consultations, laboratory tests, drug utilization and hospital treatment from Health Maintenance Organizations (HMOs) and Independent Practice Associations from different areas in the USA. For the present study, all patients diagnosed with GH who have been enrolled in the plans of four HMOs at any time during 1996 were included. The four health plans included in the analysis were located in the Southwest, East, Mid-West, and West of the USA. 
The approximately 0.5 million members enrolled in the health plans were generally employed or dependents of employed members. The majority of members reside in an urban rather than rural environment, representing an urban, working-class population. Less well represented in this population were the elderly (age over 65), the unemployed and those entitled to state medical assistance (6% of the plan members had Medicaid as their principal insurance). Members were included if they had at least one claim for GH during the study period. We used the following International Classification of Disease-9 codes for data collection: 054.1 (GH); 054.10 (GH, unspecified); 054.11 (herpetic vulvovaginitis); 054.12 (herpetic ulceration of vulva); 054.13 (herpetic infection of penis) and 054.19 (other GH). For this analysis, the following definitions were used: "prevalent case" = members who had a GH diagnosis claim between the dates of 1/1/96 and 12/31/96; "active case" = members who had a GH diagnosis claim between the dates of 1/1/96 and 12/31/96, and had a drug claim for treating GH or were treated in the hospital for GH; "incident case" = members who had a GH diagnosis or drug claim in 1996 and no GH claim prior to 1996; "recurrent case" = members who were continuously enrolled in 1996, had a GH diagnosis or drug claim in 1996 and a recurrent claim 15 days or more after the first claim (incident cases excluded). Direct medical costs were based on actual pharmacy, outpatient and hospital claims processed by DPS. Component costs were based on drug costs, outpatient costs, emergency room costs, inpatient costs, laboratory costs and home visit costs. Drugs used for treatment of GH were identified in members who also had a diagnosis claim for GH at some time during the study period. This provides some assurance that the drug was used to treat GH and not some other form of herpetic infection. Mean, median, and standard deviation were calculated to describe the population statistics for costs. The perspective of this analysis was third party payer. The annual costs attributable to GH infection in the United States were estimated as the product of the number of incident and prevalent infections and the average present value of the costs attributable to a single GH infection. We also evaluated the impact of other GH-associated complications such as neonatal herpes and excess cesarean sections. A crude estimate of the burden of these complications was calculated, based on data extracted from the DPS database and published data [1,13,14,15,16]. Results Approach using expert interviews In the sample of 30 interviewed physicians, 15 were general practitioners (GPs), while 15 were specialists (6 dermatologists, 4 gynecologists, 3 infectious disease specialists and 2 urologists). Based on our data, GPs see on average 34 GH patients a year; among these patients 13 (38%) were reported to be primary cases. Specialists see 56 GH patients per year, including 12 (21%) incident cases. Sixty-five percent of patients were in the 18-30 age group; 52% of patients were female. Almost half of the patients (49%) suffered from mild GH, whereas moderate and severe cases represented 37% and 14% of patients, respectively. Forty-eight percent of patients experienced fewer than 2 relapses a year, 36% 2 to 5 relapses and 16% more than 5 per year. A typical first GH episode was reported to last on average 10.8 days, slightly longer than a typical recurrence (8.4 days).
From the data available, the incidence of clinically manifest GH can be estimated at 423,000 cases and the number of recurrent cases at about 698,000 patients in 1996. These estimates correspond to an occurrence rate of 3,139,000 symptomatic episodes. Table 3 gives a breakdown of the costs at the single patient level. Approach using claims database Among 1,565 patients with GH, 65% of patients were in the 21-40 age group; 74% were female. Table 4 summarizes the main epidemiologic findings from this analysis. We extrapolated the collected data to the US population in 1996 (279 million) and calculated an annual incidence of 131,130 symptomatic GH cases in 1996, corresponding to a crude incidence of 0.47 per 1,000. The total number of persons with prevalent, active or recurrent symptomatic GH was estimated to be 806,310. This produces a crude prevalence of 2.89 cases per 1,000 in 1996 (a short arithmetic check of these rates is given below). Discussion This report presents crude estimates rather than precise measures of the economic costs of GH in the USA. Using two different approaches, we estimated the total direct medical costs of GH to range from a minimum of $283 million to a maximum of $984 million in 1996. Indirect costs to society amounted to $214 million due to production losses. Office-based medical care and drug treatment were the major sources of direct costs. Moreover, this study showed that the average GH patient seeking treatment is likely to be younger than 40, to develop about 2 recurrent episodes, to undergo several laboratory exams, and to be treated with antiviral drugs. Relapses tended to be shorter than first episodes, though they had a higher cost per episode. Our results are likely to be conservative estimates due to the great number of asymptomatic and shedding patients not seeking medical care and transmitting the disease to other individuals. As recently shown, the great majority of people with serologic evidence of HSV-2 infection have no history of recognized GH [3]. However, many seropositive persons shed HSV-2 that is detectable by culture from the genital tract, and many have symptoms that are directly referable to HSV-2 detectable by culture [17]. From an economic point of view, it must be borne in mind that GH and other sexually transmitted infectious diseases have negative externalities, in the sense that the consequences of the disease are not limited to people who have the disease but extend to other people who can potentially be infected. This stems from the fact that the consequences of risky sexual behavior are borne both by the subjects themselves and by others via the transmission of the disease. In addition, it must be underlined that GH constitutes a risk factor for the spread of other sexually transmitted diseases (e.g. human immunodeficiency virus), which can be interpreted as a negative consequence of GH. The estimates of the total direct medical costs obtained with the two different approaches are discrepant, which can be explained, at least in part, by the influence of treatment compliance. It must be noted that the higher figure is obtained with data collected via questionnaire and is likely to represent the monetary value of the amount of treatment prescribed by physicians. Moreover, the reported duration of primary and recurrent GH episodes in this analysis was longer than that commonly cited in the literature [5,8,17], which may have artificially increased the cost estimates.
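As referenced above, the crude rates in the claims-based extrapolation follow directly from the case counts and the 1996 US population; the short check below reproduces that arithmetic.

```python
# Reproducing the crude-rate arithmetic from the claims-based
# extrapolation above (1996 US population of 279 million).
US_POP_1996 = 279_000_000

def crude_rate_per_1000(cases: int) -> float:
    return cases / US_POP_1996 * 1000

print(round(crude_rate_per_1000(131_130), 2))   # 0.47 (incidence)
print(round(crude_rate_per_1000(806_310), 2))   # 2.89 (prevalence)
```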
The lower figure, on the other hand, is an estimate based on claims and represents the minimum amount of medical care and treatment actually consumed by patients. This difference can be expressed as the difference between what is prescribed and what is actually consumed, i.e. compliance. Thus, the difference between the two global figures may be attributable to different utilization rates or different levels of compliance. A lower level of compliance probably means lower short-term direct costs, but probably higher indirect and long-term medical costs. The hypothesis of different utilization rates is also consistent with the psychological aspects of GH, which is perceived as a potential source of shame by patients [18] and is a plausible reason for lower levels of compliance to treatment. In a recently published article by Tao et al. [16], the national direct medical costs of GH were estimated at $166 million annually for 1992-1994 ($207 million in 1999 dollars), based on claims data from several sources. These numbers may be underestimates. Tao and colleagues [16] estimated that less than 30% of acyclovir claims not associated with a specific diagnostic code were provided for the treatment of GH. As drugs account for over half of the costs attributed to GH, underestimation of drug costs substantially decreases the estimated annual costs of GH. Based on our estimates using the DPS claims database, GH seems to be a public health problem of important economic relevance. The direct health care costs attributable to GH can be estimated at a minimum of $283 million, corresponding to 0.1% of the US health care expenditure ($1,007,300 million). However, when computing for indirect costs and long-term complications (e.g. neonatal herpes, enhanced HIV transmission), the true costs may be greater than $1.0 billion. In addition, this study has not been designed to give a monetary estimate of the intangible costs of the disease. Psychological stress related to GH is well documented in the literature [18,19,20], and should be considered a relevant factor in the total burden of illness, though it is not easily quantifiable. As with any research study, limitations must be placed on the ability to generalize the results beyond the sample and setting employed. First, treatment for GH in the USA can be obtained at neighborhood health clinics, which offer confidential, low-cost treatment. Since a stigma is attached to the diagnosis of GH, patients may choose treatment at these clinics. The database had no information on these visits, since no claims were generated. Second, claims databases are collected for the purpose of payment to providers for the medical services rendered on behalf of enrolled members and rely on the coding of numerous medical events. Because of variations and incompleteness of coding, errors in identification and classification may occur. Coding is dependent on the diagnostic process, which is related to a clinician's training. Thus, the decision to diagnose GH by individual clinicians with different levels of expertise cannot be controlled within the boundaries of claims data. Third, the health plans included in this study allowed for geographical representation on a large regional basis. However, undetected patient, provider and practice differences may still exist. Caution should therefore be exercised in generalizing to other regions.
Finally, cases selected may not necessarily be indicative of minority, low socioeconomic status, or indigent populations, since claims data can only provide data on those individuals who access the system. Therefore, the demographics of the database, in combination with the use of neighborhood clinics for GH treatment, make our calculated GH rates and costs lower bound estimates of the true GH prevalence and associated costs in the USA. In conclusion, GH appears to be an important public health problem in the USA from the health economic point of view. The present study demonstrates the validity of using different approaches in analyzing the economic burden of a specific disease to the health care system.
2014-10-01T00:00:00.000Z
2001-06-28T00:00:00.000
{ "year": 2001, "sha1": "b83c212e9d16fbe2ca405fbf512ed9edb535f5bc", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-1-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5701722a4d449d1f1ef1d13d0d5828a13f2fa08", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244596186
pes2o/s2orc
v3-fos-license
Evaluation of the instrumented Timed Up and Go test as a tool to measure exercise intervention effects in nursing home residents: results from a PROCARE substudy To achieve independence in activities of daily living, a certain level of functional ability is necessary. The instrumented Timed Up and Go (iTUG) test provides guidance for appropriate interventions, for example, when considering the subphases within the TUG. Therefore, we evaluated the iTUG as a tool to measure the effects of a multicomponent exercise intervention on the iTUG subphases in nursing home residents. Fifty long-term nursing home residents (34 women, 82.7 ± 6.46 [65-91] years; 16 men, 78.6 ± 7.0 [62-90] years) performed the iTUG test before and after a 16-week intervention period (2 × 45-60 min/week). According to the attendance rates, participants were divided into three groups. The total iTUG duration decreased from baseline to posttest, F(2,46) = 3.50, p = 0.038, η²p = 0.132. We observed significant correlations between the attendance rates and the total iTUG duration (r(50) = 0.328, p = 0.010). However, we did not observe significant group × time interaction effects in the subphases. The Barthel Index moderated the effect between attendance rate and the total duration of the iTUG test, ΔR² = 8.34%, F(1,44) = 4.69, p = 0.036, 95% CI [0.001, 0.027]. We confirmed the effectiveness of the iTUG as a tool to measure exercise intervention effects in nursing home residents, especially when participants exhibit high attendance rates. That said, mobility needs to be considered in a more differentiated way, taking into account parameters in the subphases to detect changes more sensitively and to derive recommendations in a more individualized way. Introduction The ability to navigate safely and efficiently through the environment is a critical aspect of an individual's level of independence. Maintaining physical mobility such as gait (locomotor task) is essential for social participation and maintaining quality of life (Metz, 2000; Shafrin, Sullivan, Goldman, & Gill, 2017). However, a decline in an individual's level of independence can negatively affect personal safety and quality of life (Stubbs, Schofield, & Patchay, 2016; Johnen, 2017). Due to diminished independent mobility, it is becoming increasingly difficult for older adults to visit grocery stores, the doctor's office, the hairdresser, or participate in the choir, craft group, or other sociocultural activities (Giannouli, Bock, Mellone, & Zijlstra, 2016). Gait disturbances represent a functional limitation in elderly people with a predictive potential for the development of comorbidities. Independent mobility is routinely assessed with field tests such as the Timed Up and Go (TUG) test (Podsiadlo & Richardson, 1991). This test can be used diagnostically to guide appropriate interventions, especially considering the subphases within the TUG. Traditionally, only the total duration of the TUG test is assessed using a stopwatch. In contrast, using an instrumented TUG (iTUG) with sensors (e.g., Cimolin et al., 2019; Zarzeczny et al., 2017) provides the opportunity to examine the subphases more closely, including getting up from a chair, walking, turning around, and sitting down again. This helps to identify individual weaknesses and provides the opportunity for targeted training to improve functional mobility (Schoene et al., 2013).
While some studies used the iTUG in different populations and clinical conditions (e.g., Zampieri et al., 2010 in Parkinson's disease; Mirelman et al., 2014 in mild cognitive impairment), to our knowledge, there is only one study with nursing home residents (Zarzeczny et al., 2017). This study investigated the subphases of the iTUG in nursing home residents based on quantitative wearable sensors. The authors demonstrated that vertical sit-to-stand acceleration correlated best with subject age (r² = 0.430, p < 0.05), suggesting that age-related decreases in TUG performance are primarily associated with decreases in "explosive" lower extremity muscle strength. However, Zarzeczny et al. (2017) only considered cross-sectional findings; studies on intervention effects are not available. To determine the generalizability of these findings to larger cohorts of institutionalized older adults, the purpose of the study was to (1) examine the iTUG as an instrument to measure the effects of a multicomponent exercise intervention on physical function and balance in nursing home residents and to identify subphases of the iTUG that are more responsive to intervention effects than others. We hypothesized that older adults in long-term care show positive effects in all subphases of the iTUG and, in particular, show improvements in the walking phase because the intervention focused heavily on that aspect. We also wanted to (2) evaluate the impact of the attendance rate on iTUG improvement, since little is known about the dose-response ratio concerning actual attendance. In some cases, the attendance rate is reported in intervention studies, but a defined adherence rate above which participation can be described as successful is hard to find. For a more differentiated consideration of the intervention effects, it is therefore necessary to take the attendance rate into account. In a comparable setting, Fairhall et al. (2012) were able to show that, in a multifactorial interdisciplinary intervention, higher adherence was significantly associated with better performance for most mobility outcomes. Thus, we expected the intervention effects to be significant only at higher attendance rates. Ethics This study took place as part of the PROCARE project (Prevention and occupational health in long-term care study; Cordes et al., 2019). The ethics committee of the Hamburg Chamber of Physicians, Germany, approved the study protocol of the PROCARE project (PV5762). Participants A total of 50 long-term nursing home residents (34 women, 82.7 ± 6.46 [range: 65-91] years; 16 men, 78.6 ± 7.0 [range: 62-90] years; cf. participant characteristics in Table 1) were recruited in six different nursing homes located in the city of Stuttgart, Germany. All nursing home residents or their legal guardians gave written informed consent before enrolling in the study. Inclusion criteria included: (i) willingness to participate, (ii) the ability to understand and carry out simple instructions, (iii) the ability to walk 10 m with or without a walking aid, and (iv) the ability to participate in group activities (Bischoff, Cordes, Meixner, Schoene, Voelcker-Rehage, & Wollesen, 2021). The nursing staff and principal investigators assessed the eligibility criteria. The experimental procedure was explained in detail to the participants. In the general training literature, adherence is defined as successful when participants complete at least two-thirds of the training program (Hawley-Hague, Horne, Skelton, & Todd, 2016; King et al., 1997).
Other papers specify a minimum level of participation, defining a low adherence rate as < 30% of exercise classes attended (Tiedemann, Sherrington, & Lord, 2011). For this reason, participants were divided into three groups based on their attendance rates in the multicomponent exercise intervention: group 1 with attendance rates up to 33.2% (low), group 2 with attendance rates between 33.3 and 66.6% (moderate), and group 3 with attendance rates higher than 66.6% (high; Table 1). Instrumented Timed Up and Go test. The Timed Up and Go (TUG) test is one of the most common tests used to examine balance, gait speed, and functional ability related to the performance of basic activities of daily living (ADL) in older populations (Herman et al., 2011; Podsiadlo & Richardson, 1991). It can also help track clinical changes over time (Podsiadlo & Richardson, 1991). The TUG measures the time it takes a participant to stand up from a chair, walk 3 m at a comfortable speed, walk around a cone, walk back, and sit down on the chair. If individuals require less than 10 s, they are considered to have free mobility. A time between 10 and 20 s indicates independent mobility. If the task is completed in 20-29 s, the individual has variable mobility, and if it takes the individual more than 29 s, the individual has impaired mobility (Podsiadlo & Richardson, 1991).
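The interpretation bands above can be summarized in a few lines. The sketch below is a minimal illustration with hypothetical function and label names; the handling of the exact band boundaries (e.g., exactly 20 s) is our own choice, since the text leaves them ambiguous.

```python
# Sketch of the TUG interpretation bands described above
# (Podsiadlo & Richardson, 1991). Boundary cases are assigned
# to the lower band by choice; the source text is ambiguous.

def mobility_category(duration_s: float) -> str:
    if duration_s < 10:
        return "free"
    if duration_s <= 20:
        return "independent"
    if duration_s <= 29:
        return "variable"
    return "impaired"

print(mobility_category(23.1))   # 'variable'
print(mobility_category(150.0))  # 'impaired'
```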
With a "cut-off " value of 14 s or more, the TUG is considered a good predictor for identifying healthy individuals at risk of falling (Shumway-Cook, Brauer, & Woollacott, 2000;Allison, Painter, Emory, Whitehurst, & Raby, 2013). Internal consistency, reliability, validity, and responsiveness are excellent, as reported by Galhardas, Raimundo, and Marmeleira (2020), with an intraclass correlation coefficient (ICC) of 0.99 in older nursing home residents. Recently, new emerging technologies allow for the recording of gait and postural transitions with wearable devices. In the present study, Opal™ sensor modules and Mobility Lab™ system (APDM Mobility Lab, APDM Inc., Portland, OR, USA) were used to measure the iTUG test and its subphases. A total of six inertial sensors (Opal™) were attached to the trunk, lower back, left and right foot, and left and right wrist (. Fig. 1). The sensors were placed with a velcro belt and straps. The Mobility Lab™ software analyzed the raw data from all sensors using an integrated and automatic algorithm to calculate the durations of the iTUG subphases. Upon completion of the analysis, Mobility Lab™ displayed the data in all subphases in a full report that includes all tested trials and parameters. The iTUG test can be divided into four major subphases: sit-to-stand, walk, turn, and stand-to-sit. The duration of eachofthese subphaseswasautomatically calculated (see Supplementary Figure). Sit-to-Stand is the time required to stand up at the beginning of the task. Walk is the time required to walk at a normal walking pace to the cone at a distance of 3 m, plus the time required to return back to the chair. Turn is the time required to perform the 180°turn. Stand-to-sit is the time required to sit down at the end of the task (. Fig. 2). The total duration required to complete the test was also recorded. In addition to demographic variables, we recorded body composition (weight and height). The participant's independence in basic ADL was assessed with the Barthel Index (Barthel, 1965) and global cognitive functioning was measured with the Montreal Cognitive Assessment (MoCA, Nasreddine, 2005). The Barthel Index is a scale that measures ten basic aspects of activity related to self-care and mobility (the highest score is 100, and lower scores indicate greater dependency; Barthel, 1965). Bouwsta and colleagues (2019) demonstrated that the interrater reliability is sufficient to measure and interpret changes in phys-ical function in geriatric patients, with an ICC of 0.96 (95% confidence interval [0.93, 0.98]). The MoCA includes measures of executive functions, language, memory, attention, orientation, calculation, and visuospatial ability. The score ranges between 0-30, with scores below 26 indicate mild cognitive impairment with 100% sensitivity and 87% specificity (Nasreddine et al., 2005). The MoCA also demonstrated a high test-retest reliability. Using the ICC values ranging from 0.75 to 0.92, indicate a fairly high to a high reliability over time periods ranging from 4 weeks to 18 months (Ozer, Young, Champ, & Burke, 2016). Multicomponent exercise intervention The intervention was developed by Wollesen (2018;Bischoff et al., 2021) for the "Prevention and occupational health in long-term care" (PROCARE) project. The study protocol was published in Cordes et al. (2019). The duration of the intervention was 16 weeks with 2 sessions per week (32 sessions). 
Each session lasted between 45 and 60 min and was conducted by a certified exercise scientist or physiotherapist with a maximum group size of 10 people. The program combines published exercises that are beneficial for cognitive-motor performance in older adults (community-dwelling as well as institutionalized) (Liu & Latham, 2009; Fiatarone, 2019; Thomas, Mackintosh, & Halbert, 2010; Wollesen & Voelcker-Rehage, 2014; Wollesen et al., 2017). In addition, the exercise program was continuously adapted to the residents' capacity and, hence, is organized as a progressive challenge to expand participants' resources according to the F.I.T.T. (Frequency, Intensity, Time, and Type of exercise) principle (Garber et al., 2011) and the recommendations of the Global Aging Research Network (IAGG-GARN) and the IAGG European Region Clinical Section for physical activity in older persons (de Souto Barreto et al., 2016). Further information on the multicomponent exercise program can be found in Cordes et al. (2019). Data acquisition and procedure The assessment took place upon entry to the study (pretest) and was repeated after 16 weeks (posttest). Thus, the iTUG was administered twice, one trial at baseline and one trial at posttest. The participants were required to walk at a self-selected and comfortable walking speed. After preparation (attachment of the sensors), participants sat on a standard chair with their arms at their sides. They were instructed as follows: "When I say 'Go', I want you to get up from the chair, walk straight ahead at your comfortable walking pace, turn around, and then walk back to the chair and sit down". The test was completed when the participant was seated again. The chair was positioned against a wall to ensure that the chair was stable when standing up and sitting down. A researcher with previous experience in this procedure administered the iTUG. Participants were instructed not to use their hands, neither during the sit-to-stand phase nor during the stand-to-sit phase. No specification was made as to which leg the test person should start off with or in which direction he or she should turn. The main parameters were the durations of the subphases and the total duration of the iTUG test in seconds. Statistical analysis Data were analyzed using SPSS version 25.0 (SPSS Inc., Chicago, IL, USA). First, we examined the duration in each subphase for missing values, normality of distributions (tested by Kolmogorov-Smirnov tests), and the presence of outliers. An alpha (α) level of 0.05 was used for all statistical tests. Group comparisons for continuous variables (such as age, body mass index [BMI], MoCA) were assessed using analysis of variance (ANOVA); sex as a categorical demographic variable was compared using chi-square (χ²) tests. To analyze the effect of the intervention on each subphase of the iTUG, a 3 (groups) × 2 (time) analysis of covariance (ANCOVA) with repeated measures was calculated for the duration in each subphase, with pretest control as a covariate and a priori contrasts. Due to baseline adjustment, any interaction effect would produce the same results as the main effect of group. Therefore, the main effects for group were not reported. Effect sizes for all ANOVAs were reported using partial eta squared (η²p) (Lakens, 2013), with a small effect defined as 0.01, a medium effect as 0.06, and a large effect as 0.14 (Cohen, 1988).
There were different numbers of missing values in the subphases because the iTUG algorithm in the Mobility Lab™ software could not reliably detect these parameters (sit-to-stand: 22; walking: 26; turning: 2; stand-to-sit: 10). For the iTUG total duration, the dataset was complete. In addition, the percentage changes between pre- and posttest were calculated for each participant, ((pretest − posttest)/pretest) × 100, and correlated with the attendance rates (Table 2). Linear mixed-effects modeling was utilized to determine the moderation effect of cognitive performance (MoCA), age, and independence/need of care (Barthel Index) on the relationship between attendance rate and intervention effect in the iTUG total duration, using the PROCESS macro in SPSS (Hayes, 2017). The simple moderation model we used in this study was Model #1. Thus, three moderation analyses were conducted to determine whether the interaction between the independent variables (MoCA, age, and Barthel Index) and the attendance rate significantly predicted the intervention effect. Significant transition points within the observed range of the moderator were analyzed using an application in the PROCESS macro known as the Johnson-Neyman method (Johnson & Neyman, 1936). The relationship of all variables involved in the moderation analysis was approximately linear, as visually shown in the scatterplots after LOESS smoothing (Jacoby, 2000). Results Participants Table 1 shows the characteristics of the sample. In the group with low attendance an attendance rate of 3.09% (±5.26) was recorded, a rate of 50.1% (±10.1) in the group with moderate attendance, and a rate of 88.3% (±10.3) in the group with high attendance. Distributions of age, sex, BMI, and MoCA scores did not differ between groups. The MoCA total score was 16.7 (±0.86) points for all residents, which is below the cutoff value (19 points) for discriminating between mild cognitive impairment and Alzheimer's disease (Roalf et al., 2013). In all, 63.3% of the participants were thus screened as showing signs of dementia. German reference data for the prevalence of dementia among nursing home residents are 51.8% (Hoffmann, Kaduszkiewicz, Glaeske, van den Bussche, & Koller, 2014), which is lower than the prevalence value of the present sample, based on the MoCA results in this study (63.3% < 19 points). Most participants were of normal weight in all groups following the BMI guidelines from several expert panels (Villareal et al., 2005; NHLBI Expert Panel, 1998). Participants were classified as moderately dependent with a Barthel Index mean score of 75 (±6.03) points (Shah, Vanclay, & Cooper, 1989). Relationship between attendance rate and percentage change between pretest and posttest of the iTUG We observed a significant correlation between the attendance rate and the iTUG total duration (r(50) = 0.328, p = 0.010). The higher the attendance rate, the greater the percentage change from pre- to posttest. A significant relationship with the attendance rate was observed for the iTUG stand-to-sit duration (r(40) = 0.301, p = 0.029). Thus, an increased percentage reduction in duration was found with an increased rate of attendance. The correlations with the other subphases were not significant (r = −0.124 to 0.66, p = 0.195-0.440; Table 2 and Fig. 6). Regarding the iTUG total duration, there was a moderating effect of functional independence (Barthel Index) on the relationship between attendance rate and the intervention effect (Fig. 4).
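A minimal sketch of the two computations just described, using hypothetical data and column names: the per-participant percentage change, ((pretest − posttest)/pretest) × 100, and a moderation analysis expressed as an ordinary regression with an attendance × Barthel interaction term (PROCESS Model 1 is, at its core, such a model). This is an illustration in Python with statsmodels, not the study's SPSS script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical example data: pre/post iTUG total duration in seconds,
# attendance as percent of sessions, Barthel Index (0-100).
df = pd.DataFrame({
    "pre":        [24.0, 31.5, 18.2, 40.1, 27.3, 22.8, 35.0, 29.4],
    "post":       [21.5, 30.0, 17.0, 42.3, 24.9, 21.0, 36.2, 26.1],
    "attendance": [88.0, 50.0, 90.0,  3.0, 70.0, 33.0, 10.0, 95.0],
    "barthel":    [80,   70,   85,   55,   75,   60,   65,   90],
})

# Percentage change; positive values indicate improvement (shorter time).
df["pct_change"] = (df["pre"] - df["post"]) / df["pre"] * 100

# Simple moderation: is the attendance effect conditional on the Barthel
# Index? The attendance:barthel coefficient carries the moderation.
model = smf.ols("pct_change ~ attendance * barthel", data=df).fit()
print(model.params)
```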
The overall model showed that 21.9% of the variance in the intervention effects was significantly (p = 0.012) explained by the model. Functional independence (Barthel Index) significantly moderated the effect between attendance rate and the iTUG total duration, ΔR² = 8.34%, F(1, 44) = 4.69, p = 0.036, 95% CI [0.001, 0.027]. The moderator value defining the Johnson-Neyman significance was 70.07 (41.7% of the participants were below this value and 58.3% of participants were above it). [Fig. 5 caption: a total duration, b sit-to-stand duration, c walk duration, d turn duration, e stand-to-sit duration. Covariates in the model are evaluated at the following values: iTUG total duration = 23.1; iTUG walk duration = 13.8; iTUG sit-to-stand duration = 1.11; iTUG stand-to-sit duration = 0.965; iTUG turn duration = 3.24. A significant time × group interaction effect was observed only for total duration. A priori contrasts showed significant differences between the low attendance group and the high and moderate attendance groups.] Figure 6 shows that 30% of nursing home residents (n = 3) in the group with a low attendance rate, 25% (n = 2) in the group with a moderate attendance rate, and 62.5% (n = 20) in the group with a high attendance rate improved in total iTUG performance (% change in iTUG total duration > 0%). Following Masciocchi, Maltais, Rolland, Vellas, and de Souto Barreto (2019) and assuming an 11.2% decrease in TUG total performance over 4 months, 40% (n = 4) in the low attendance group, 62.5% (n = 5) in the moderate attendance group, and 78.1% (n = 25) in the high attendance group showed a positive effect of the multicomponent exercise intervention. Discussion This study aimed to evaluate the iTUG as a tool to measure the effects of a multicomponent exercise intervention in nursing home residents, particularly concerning the subphases, and to evaluate the impact of the attendance rate on iTUG changes. One may assume that the nursing home residents participating in our study would be among the fitter individuals, since participation required specific physical abilities. This should be considered when assessing the representativity of the sample. Indeed, the range of TUG performance in nursing home residents was extensive (< 10 up to > 150 s). Some studies reported longer durations in a similar sample (Baum et al., 2003; Johnen & Schott, 2018; Henskens, Nauta, Van Eekeren, & Scherder, 2018), although some examined nursing home residents with dementia. Other studies reported shorter TUG total durations at baseline (Arrieta et al., 2018; Benavent-Caballer, Rosado-Calatayud, Segura-Ortí, Amer-Cuenca, & Lisón, 2014; Cadore et al., 2014; Meng et al., 2017; Kocic et al., 2018); however, some of them studied cognitively unimpaired individuals or older adults in assisted living environments. Other findings in this setting and age group are similar to our results at baseline (Cancela, Ayán, Varela, & Seijo, 2016; Mouton et al., 2017; Zarzeczny et al., 2017; Holmerová et al., 2010). A significant interaction of pretest performance × time with a concurrent interaction effect of time × group for iTUG total duration suggests that residents with high iTUG performance at baseline benefit more from the intervention than residents who started at lower iTUG performance levels. Our results contradict the findings of Fairhall et al. (2012), who found a greater effect of the intervention on gait speed among frail older people.
It is not surprising, as mobile residents were less dependent on caregivers and were able to come to the intervention sessions independently. This could have led to lower attendance rates for less mobile residents, as it was not always possible to ensure that they were ready on time or that the caregivers always reliably brought them to the intervention. The moderating effect of a person's functional independence (above a Barthel Index of 70.07) on the relationship between attendance rate and intervention effect in the iTUG total duration also confirmed this. [Fig. 6 caption: Percentage changes in the instrumented Timed Up and Go (iTUG) total duration for each group and as a function of the attendance rate.] The moderation was able to explain an additional 8.34% of the variance, which can be interpreted as moderate according to Cohen (1988). The significant interaction effect of time × group for the iTUG total duration indicated that a high attendance rate positively affected the iTUG performance and its subphases. With increasing attendance, we saw larger effects for the total duration and the stand-to-sit subphase, indicating a dose-response effect of the intervention. This is consistent with Fairhall et al. (2012), who showed that higher adherence was significantly associated with better performance for most outcomes. Nevertheless, the absence or slowing of a decline in physical performance can, in principle, be interpreted as a sign of the effectiveness of the intervention, since a natural decline in physical function is considered normal in nursing home residents. Masciocchi et al. (2019) reported in their narrative review that performance in the TUG test declined by an average of 2.8% (range 0.7-6.2%) per month when nursing home residents did not attend any additional physical exercise therapy. This natural decline can be explained by the high sedentary times among nursing home residents (Harvey, Chastin, & Skelton, 2015; Healy et al., 2011; McArthur, 2019; Jansen, Diegelmann, Schnabel, Wahl, & Hauer, 2017). Applied to our intervention duration of 4 months, this would predict a decline of 11.2% if a linear decline is assumed. In our study, we observed even higher declines of 22.7% in the group with a low attendance rate; however, in the group with a high attendance rate, we saw a positive effect of the intervention in 78.1% (n = 25) of the residents. Regarding the subphases, we observed that residents in the group with a high attendance rate improved or maintained their TUG performance in all subphases compared to the other groups. However, these group differences were not significant. A possible explanation could be the relatively small number of participants and values that were not provided by the system because the Mobility Lab™ algorithm could not detect them. The sit-to-stand subphase, for example, was the least reliable component (with 22 missing values), probably due to the large degrees of freedom available to nursing home residents, who can use a variety of strategies to perform this activity (Janssen, Bussmann, & Stam, 2002). As seen, the acceleration patterns in these subphases of the iTUG can be very heterogeneous, which makes detection based on acceleration peaks more difficult. In addition, the training program focused on improving walking performance, coordination, balance, dual-task performance, mobility and cognitive performance.
Strength exercises, e.g., for the lower extremities, which appeared to be important for the sit-to-stand subphase, were addressed only secondarily. In previous studies, lower extremity training has been shown to affect standing up and mobilization in general. For example, Johnen and Schott (2018) showed that nursing home residents significantly improved their physical performance in the TUG and 30-second Chair Stand test after resistance training for the upper and lower extremities with both free weights and machines. In that study, however, the subphases were not considered. Regarding the sit-to-stand subphase, a meta-analysis on intervention effects in stroke patients indicated a significant overall effect estimate in favor of the intervention group (standardized mean difference [SMD] −0.34; 95% CI [−0.62, −0.06], seven studies; Pollock, Gray, Culham, Durward, & Langhorne, 2014), and a recently published study by Kasch (2021) showed that 12 weeks of progressive strength training decreased the duration of the sit-to-stand subphase by up to 22% in patients with multiple sclerosis. The improvements in the sit-to-stand subphase in our study could be explained by the strength training and range of motion exercises for the hip and trunk within the intervention program (Cordes et al., 2019). This apparently led to increased strength in the lower extremities and a better lean angle in the sit-to-stand phase, and thus to a shorter duration in the iTUG. There are nevertheless some limitations that need to be addressed. In addition to cognitive performance, which may influence performance in the iTUG and the intervention effect, there are other factors that we did not examine in this study. These include depression, fear of falling, and other emotional factors that play a crucial role and affect one another (Kose, Cuvalci, Ekici, Otman, & Karakaya, 2005). Unlike the PROCARE study (Cordes et al., 2019), we did not conduct a retention test to examine the persistence of effects on iTUG performance. A retention test is mandatory but quite difficult in the nursing home setting given the high mortality rate in this age range, making it hard to provide suggestions on the sustained effects of a specific intervention. Moreover, it would have been useful to compare the intervention group with a control group that did not receive this intervention. Since we did not have a traditional control group, we divided the groups according to their attendance rate. This did allow for a better illustration of the intervention effects as a function of visit frequency. We decided to separate out participants who attended at least two-thirds of the sessions (Hawley-Hague et al., 2016) and compared this group with those with lower attendance rates. Studies reporting mean attendance rates should provide more details, such as the range of attended sessions, at least in studies with small samples (e.g., Henskens et al., 2018, p. 69: "Mean attendance to the intended 72 exercise sessions was 55% [mean = 39.5, SD = 20.8; range = 0-64]."). Besides, it is important to consider how lower attendance rates occurred. The outcome effects may differ for someone who had to stop attending the intervention sessions for several weeks (perhaps for personal reasons) compared to someone who regularly attended the intervention sessions. We examined irregularities related to the attendance rate and factored in unpredictable circumstances (such as people suffering from a stroke or a disease), but this did not justify excluding this group of participants.
Overall, we had a relatively small number of participants, so the differences in the subphases between these groups did not reach significance. Furthermore, an a priori power analysis was not performed. Studies with a higher number of participants and additional measures to assess TUG performance (such as the number of steps in the turning phase, turning strategies, and lean angle in the sit-to-stand and stand-to-sit subphases) could lead to a more differentiated interpretation of the intervention effects. These additional parameters allow obvious impairments or changes to be detected and subtle differences to be captured, and thus provide a better description of motor processes. Sensor-based analysis systems and the associated algorithms (Caldas, Mundt, Potthast, de Lima Neto, & Markert, 2017), which can sensitively capture different measures (biomarkers), play a crucial role in long-term observations and for documenting intervention successes. The downside is that these systems are cost-intensive and can only be used in the care setting with considerable effort. In this regard, modern smartphones have a growing number of inertial and location sensors, such as accelerometers, GPS, gyroscopes, and magnetometers, and are comparably user-friendly. To what extent sensor-based systems will be used in the nursing home setting to investigate alternative motion parameters remains to be seen. Ponciano, Pires, Ribeiro, and Spinsante (2020) conducted a systematic review of how inertial sensors embedded in mobile devices were used to measure various parameters of the iTUG test in older people. The authors stated that, together with mobile devices using open-source technologies, the iTUG is very accessible to all. Persons without experience with nursing home residents and the application of the TUG should be alert to potential accidents. For safety reasons, the resident should be accompanied during the iTUG. Also, an alternative and more secure realization of the iTUG is to use two chairs: one chair with the seat facing the wall and another placed against its backrest. This prevents the chair from tipping over and avoids subjects injuring their heads on the wall if they lose their balance and fall backwards while sitting down. This alternative was not applied, although it was considered the safer set-up during the course of data collection; for comparability reasons we did not change the setup. Our findings have potential implications for assessing intervention effects in nursing home residents. We have confirmed the iTUG test as a potential tool for measuring the effects of a multicomponent exercise intervention on physical function and balance in nursing home residents. We observed changes in iTUG performance especially in the group with high attendance rates. Therefore, the iTUG can be highly recommended as an evaluation tool for intervention effects. In addition to the total TUG duration, other parameters should be considered in the different subphases. The exercises in the intervention programs could be adjusted accordingly to induce significant differences in these subphases. For this to work, however, gait analysis systems must measure these subphases reliably and sensitively. Factors emanating from the individual, such as fear of discomfort or pain, anxiety or depression, and limitations due to neuromuscular or musculoskeletal impairment, may influence iTUG performance and the subphases. External factors include, for example, forced rest for therapeutic purposes (Herdman et al., 2021).
These factors must also be considered if we want to examine the effects on physical function and balance in nursing home residents. A more detailed view of the intervention effects on mobility will be provided by the results of the multicenter PROCARE project using different evaluation criteria (Cordes et al., 2019). Conclusion Overall, we strongly believe that the iTUG test can be recommended as a vital tool to measure the effects of a multicomponent exercise intervention on physical function and balance in nursing home residents. However, individuals need to attend a sufficient number of sessions to observe a positive effect on iTUG performance. Our study showed that especially mobile, independent residents frequently participated in the training and thus were able to benefit the most. Due to the low number of participants, we cannot make any definite statements, particularly regarding the subphases of the iTUG. The algorithms included in the different measurement systems do not seem to be developed enough to provide reliable and sensitive parameters for intervention effects, especially for this specific group of people. Future studies should focus on making adaptations to the algorithms, especially for participants who shuffle when walking and hardly lift their feet. Funding. This study was supported by the health insurance Techniker Krankenkasse. The views expressed in this paper are those of the authors and may not be shared by the funding bodies. The study is part of the project "Prevention and occupational health in long-term care" (PROCARE; Head of the consortium: Prof. Dr. Bettina Wollesen, University of Hamburg). Trial data were analyzed independently of the trial sponsors. This funder did not play any role in the design of the study, data collection and analysis, reporting of results, or the decision to present the manuscript for publication. Open Access funding was enabled and organized by Projekt DEAL. Author Contribution. We confirm that all authors were fully involved in the study, prepared the manuscript and provided the material within it. Availability of data and material. Data can be obtained from the corresponding author upon reasonable request. Declarations Conflict of interest. The authors have no financial or personal relationships with any other person or organization that could improperly influence or otherwise influence their work in this study. T. J. Klotzbier, H. Korbus, B. Johnen and N. Schott declare that they have no competing interests. All procedures performed in studies involving human participants or on human tissue were in accordance with the ethical standards of the institutional and/or national research committee and with the 1975 Helsinki Declaration and its later amendments (World Medical Association, Fortaleza, 2013) or comparable ethical standards. The study was approved by the Ethics Committee of the Hamburg Chamber of Physicians (registration number PV5762). All nursing home residents or their legal guardians received written and verbal information about the study and signed informed consent prior to their participation. Informed consent was obtained from all individual participants included in the study. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-10-17T15:08:10.342Z
2021-10-15T00:00:00.000
{ "year": 2021, "sha1": "7ce3e1f82069ee125003a289ece390221695cd44", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12662-021-00764-0.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fa8af2da6864871ff80a81c59e567d2b8739e41c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
22694503
pes2o/s2orc
v3-fos-license
A Naturalistic Database of Thermal Emotional Facial Expressions and Effects of Induced Emotions on Memory This work defines a procedure for collecting naturally induced emotional facial expressions through the vision of movie excerpts with high emotional contents and reports experimental data ascertaining the effects of emotions on memory word recognition tasks. The induced emotional states include the four basic emotions of sadness, disgust, happiness, and fear, as well as the neutral emotional state. The resulting database contains both thermal and visible emotional facial expressions, portrayed by forty Italian subjects and simultaneously acquired by appropriately synchronizing a thermal and a standard visible camera. Each subject's recording session lasted 45 minutes, allowing for each mode (thermal or visible) to collect a minimum of 2000 facial expressions, from which a minimum of 400 were selected as highly expressive of each emotion category. The database is available to the scientific community and can be obtained by contacting one of the authors. For this pilot study, it was found that emotions and/or emotion categories do not affect individual performance on memory word recognition tasks, and that temperature changes in the face or in some regions of it do not discriminate among emotional states. Introduction The testing of competitive algorithms on data shared by dozens of research laboratories is a milestone for achieving significant technological advances [9]. Shared databases make it possible to validate and develop new algorithms, as well as to assess their performance in order to select the best-performing ones for a given application. The advancement of the pattern recognition research community is measured by the performance obtained by the proposed pattern recognition systems on benchmark databases in fields such as biometrics, optical character recognition, medical images, object recognition, etc. A challenging research topic in the field of Human-Machine Interaction is the analysis and recognition of emotional facial expressions. This is because recognizing faces (and in particular emotional faces) under gross environmental variations (such as the quality of the camera, light variations, etc.) and in real time remains a largely unsolved problem [3]. The collection and distribution of databases is a time- and resource-consuming task, requiring experience and care both in the content design and the acquisition protocol. After the data collection, additional efforts are typically dedicated to supervising, annotating, labelling, error-correcting and documenting the collected data. In addition, a set of legal requirements has to be addressed, including consent forms to be signed by the donors and operational security measures as instructed by the data protection authorities. Finally, the distribution of the database involves intellectual property rights and maintenance issues. When it comes to emotional facial expressions, according to the classical literature (largely debated but not yet superseded [7]) there are only six emotional categories, labelled as happiness, sadness, anger, fear, surprise, and disgust. However, the limited number of classes does not simplify the collection of a database of emotional facial expressions, due to the intrinsic difficulty of obtaining natural and spontaneous emotional samples from a significant number of people. In order to collect such data there are mainly three procedures: a) Recordings of spontaneous manifestations of emotional feelings.
Generally this can be done by collecting video-recordings of subjects in their everyday activities, such as shopping, meetings, etc. The main drawback in such scenarios is the lack of control and, therefore, a high amount of variability in the data, as well as the presence of few strikingly clear instances of episodic emotions. Cowie et al. [4], after the analysis of the Belfast naturalistic database, containing highly emotional talk-show recordings, showed that clear-cut emotional episodes were unexpectedly rare in such scenarios. b) Recordings of subjects asked to simulate a specific facial emotional expression. Generally these are professional actors. However, although skilled actors can be convincing, it could be argued that they are not really experiencing the portrayed emotion but a stylized version of the natural one, and therefore a different set of facial features may be needed for their description. Batliner et al. [2] demonstrated that vocal signs of emotionality used by an actor simulating a particular human-machine interaction were different from, and much simpler than, those produced by people genuinely engaged in it. c) Recordings of induced emotional states: to this aim there exist various emotion induction techniques. Some include listening to emotional musical expressions, watching pictures and movies with highly emotional contents, as well as playing specially designed games. The advantage achieved in such scenarios is higher situational control and thus greater reliability of the collected data and the associated measurements. An excellent overview of the different existing databases for the automatic modelling of emotional states is reported in [4]. According to the acquisition procedure, emotional databases can be split into four modalities: audio (typical measurements over speech signals are prosody, voice quality, timing, etc.), photos and video-sequences (eye-brow and lip movements), gestures (hand and body movement) and physiological measures (temperature, humidity, heart rate, skin conductance, etc.). Some databases are collected accounting for several modalities simultaneously. There is a considerable number of image and audio emotional databases and only a token presence of physiological ones. Physiological measures of emotional states mainly refer to heart rate, skin temperature variations and electro-dermal activity. Such measurements always require the involved subject to wear a sensor which, no matter how comfortable it may be, can affect the physiological measurement. It is worth mentioning in this respect the works of Kataoka et al. [12] and Shusterman et al. [19], as well as Aubergé et al. [1] and Kim et al. [13], who implemented a 24-hour wearable ring or a wristwatch-type sensor to measure natural skin temperature (SKT) variations due to emotional stimuli. The first to hypothesize that emotional feelings or stress may change the distribution of face temperature was Fumishiro [10], who used a thermal imager (with a resolution higher than 0.01 °C) to show that under emotional feelings the radiance temperature of the eyes, nose and brows can vary in the range of ±0.2 °C. However, to date, there are no systematic studies linking face temperature and emotions. This work aims to scientifically test such a relationship by collecting a database of thermal and visible facial emotional expressions. The collected data will allow assessing whether such changes can be considered an emotional feature and whether different emotions can be discriminated by different temperature values of the face or of regions of it. To collect such data, emotions were induced through a carefully assessed experimental set-up (described below), and the acquired database consisted of appropriately synchronized thermal and visible facial expressions. A selection of what were considered the most significantly emotional faces was also made using a custom-developed Matlab software program. All the data, including those selected as best representatives of a given facial emotional expression, are available to the scientific community as part of the COST Action 2102 (http://cost2102.cs.stir.ac.uk/) activities. In addition, the present paper reports experimental data ascertaining the effects of emotions on memory word recognition tasks by measuring individual recognition performance.
The collected data will allow to assess if such changes can be considered an emotional feature and whether different emotions can be discriminated by different temperature values of the face or of regions of it. To collect such data, emotions were induced though a carefully assessed experimental set-up (describe below) and the acquired database consisted of appropriately synchronized thermal and visible facial expressions. A selection of what were considered the most significantly emotional faces was also made using a custom developed Matlab software program. All the data, including those selected as best representatives of a given facial emotional expression, are available to the scientific community as part of the COST Action 2102 (http://cost2102.cs.stir.ac.uk/) activities. In addition, the present paper reports experimental data ascertaining the effects of emotions on memory word recognition tasks by measuring the individual recognition performance. Database design The aim of this work was to define a database of emotional facial expressions which much spontaneous as possible . The original idea was to identify video stimuli that could be used to elicit emotional states. Four emotions were selected among the six listed by Ekman [6] as basic emotions: fear, happiness, sadness, and disgust. A neutral state was also considered 2 , intended here as a state where no emotion is induced. This was done for the practical reason to separate series of facial video sequences recorded under a given induced emotion from another one as well as, to control the effects of an emotional stimulus on the other. Surprise and anger were not considered, due to the difficulty to elicit such emotional states through video stimuli. The definition of the spontaneous emotional facial expression database passed through three steps: 1) The identification of video stimuli to elicit the emotions under consideration i.e. how the video-clips were selected and assessed; 2) The identification of a memory word recognition task, acting as a distractive task for ; 3) The acquisition protocol. Identification of video stimuli with high emotional content A total of 60 video-clips 3 , 10 for each of the abovementioned emotional states were downloaded from YouTube (www.youtube.it) using the emotion labels as keyword. The original audio-track was kept. These stimuli were assessed by 20 naïve Italian subjects (9 males and 11 females) asked to watch the video-clips (randomly presented through a PPT Presentation) and label them by using the most appropriate of the 5 abovementioned emotional categories or any other emotional label. In addition, subjects were asked to rate the intensity of the portrayed emotion by using a Likert scale [14] varying from 1 (very weak) to 5 (very strong) through the intermediate values of 2 (weak), 3 (medium), and 4 (quite strong). The result of this assessment was the identification of 5 video-clips for each emotion category (happy, sad, disgust, fear), plus 5 short neutral video-clips (30 sec.) separating an emotional video from another in the same emotional category, and 3 long neutral (2 minutes) video-clips separating sequences of different emotional category. This amounted to a total of 28 selected video-clips constrained to an average intensity rate value no lower than 3. Identification of the Memory Word Recognition Task In order to avoid overlaps among the induced emotional categories, a word memory recognition task was defined. 
In literature [15] such tasks are tasks and consist of: a) A learning phase, where the subject memorizes a list of words (in our case 8 Italian words); b) A retention phase, where the subject is involved in an activity that has nothing to do with the task (in our case she/he was watching a sequence of 5 emotional video-clips belonging to the same emotional category interlived with short neutral stimuli); c) A re-enactment phase in which the subject is presented with a new list of words and she/he must provide a YES (if the word was already in the word list previously seen) or NOT (otherwise) answer . To this aim 8 word lists were created, 4 named Memory Lists (ML) and 4 named Recognition Lists (RL) each containing 8 Italian words. Both the word lists were shown on a computer screen. In each RLi there were 4 words already presented in the associated MLi , i:=1, .., 4. Before the induction of any of the 4 abovementioned emotional states, the subject was asked to read and memorizes the words in an MLi. Then, she/he was asked to watch a sequence of 5 emotional video-clips all belonging to the same emotional category. Finally, the RLi associated to the previously presented MLi was presented to the subject and she/he was asked to indicate on a paper grid, whether or not the words in the RLi list were already in the previously seen MLi one. Acquisition procedure: The experimental set up The subject was invited to sit in front of a computer screen in order to perform the task which consisted of the following steps: 1. Read and memorize an MLi list in 30s; 2. Watch a set of 5 video-clips belonging to a given emotional category, each interleaved by a short neutral stimulus (N); 3. Read the RLi list associated to the previously seen MLi list; 4. Using a pencil and a YES or NOT answer signs the words in the RLi list seen in the MLi list; 5. Watch a Long Neutral (LN) stimulus; 6. Go back to step 2 until the end of the stimuli. The stimuli presentation was randomized among the subjects according to the 4 different condition schemes reported in Table 2, where the letters indicate emotional categories, with S=sad, H=happy, F=fear, D=disgust. The facial expressions recorded from each subject were taken at 1 sec. sampling rate. Table 2. Stimuli sequencing in each of the 4 identified CONDITIONS (A, B, C, and D). The letters indicate the emotional categories, with S=sad, H=happy, F=fear, D=disgust, N=Neutral. The number after the letter identifies the stimulus inside the category. For example, F3 indicates the third stimulus used for Fear. Note that the Neutral stimuli were always the same, but were associated randomly to the categories. 2.4 Hardware and software configuration 7 requirements and temporal-spatial resolution, since typical changes of muscular activities lasts for a few seconds [8]. Acquisition scenario and timing The data collection was made in a quiet laboratory. Neither the acquisition computers, or the operators, or other people were visible to the participants. She/he watched the stimuli on a third laptop while wearing headphones to listen to the original video-clip audio-tracks, seated on a comfortable chair, with a black background and fluorescent room illumination, as illustrated in Figure 1. The acquisition took place from the 15 th to the 19 th of March 2010, between 9 a.m. and 6 p.m All the donors were Italian psychology undergraduate students, aged from 21 to 28 years. Such a population was deliberately chosen in order to reduce age and cultural background variability. 
A consent form was filled and signed by each participant allowing the use of the collected data for scientific scopes. The acquisition timing for each subject is reported in Table 3 Database description More than 120.000 images for each camera (both the visual and thermal one) were collected during the experiment. Quantitative results obtained by human inspection are beyond the aims of this paper and may be tackled in future works. A snapshot in the visible and thermal domain of each induced emotional facial expression is displayed in Figure 2. It is worth noting that in the sad state the subject is crying and tears can clearly be seen in the thermal but not in the visible image. Summary of the main characteristics Using a custom Matlab software program the authors selected a total of 479 thermal and 479 visual images as the most significant facial emotional expressions elicited in the subjects. An example is displayed in Figure 3. This work was necessary in order to eliminate, amidst all the captured images, those which, according to a couple of expert judges, did not belong to the emotional categories selected for the experiment. The software is able to name each selected image, showing the type of camera used (thermal or normal), the number assigned to the participant (1 to 49), the timestamp of the collected image (expressed in minutes, sec., and msec.) and the temperature in Celsius degrees reported by the thermal camera. Results on the word memory recognition task The effects of the emotional states on the word memory recognition task were assessed considering the averaged error committed by each subject on the RL lists, after watching a given sequence of emotional stimuli, all belonging to the same emotional category. The original scores are reported in Table 4 for each of the four experimental conditions and for each emotion category. The numbers indicate how many words were wrongly listed in the RLi lists by the subjects involved in a given experimental condition (A, B, C, D). Table 5 reports their transformation into zscores with standard devia Table 5. Z-score transformation of the data reported in Table 4 memory performance that could be attributed to a given induced emotional state or to a given experimental condition. None of the Z-scores falls outside the average score distribution in the real interval of [-2, +2]. Even the C condition, where both the best ere gathered, does not show any significant deviation. The average word error in the word memory recognition task performed by the subjects on each emotional video sequence and for each of the 4 random elicited conditions is graphically displayed in Figure 4 and it varies in the real interval [0 2]. The average total error is illustrated in Figure 5. The data suggests that none of the induced emotional categories affects the word memory performance. Results on the thermal data The selected highly emotional faces, as reported in section 2.7, were manually tagged on 5 face regions (left part of left eye (LL), right part of left eye (RL), left part of right eye (LR), right part of right eye (RR), tip of nose (TN)) in order to measure possible changes in their temperature (with respect to the neutral state) when a given emotional state was induced. The custom Matlab software used for the tagging is illustrated in Figure 6. The temperature of these face regions was extracted from a 5x5 pixel matrix created around the selected points (as illustrated in Figure 6). Table 7. 
Temperature data for the experimental condition A (males). In addition, also the mean temperature of the whole face (WF) was considered. The measurements of the relative temperature changes (measured in Celsius degree relative changes with respect to the neutral state) are reported, as an exemplification and only for the experimental condition A, in Tables 6 and 7 for the females and males respectively. The gray columns indicate that 80% of the participants exhibited in such face regions a temperature change with respect to the neutral state, measured 13 before the emotion was induced. However, these changes randomly appear in different face regions when the experimental conditions change from A to B, C, D. For example, the temperature changes for the same emotional category in the experimental condition B (see Tables 8 and 9 for females and male respectively) do not follow the same pattern observed for the experimental condition A (see Tables 6 and 7). Therefore, it seems that with this temperature resolution and in the defined experimental conditions, emotional states do not significantly change the temperature of the face or regions of it. An increased temporal-spatial resolution of the thermal camera to identify appreciable temperature changes would be necessary. Conclusions This paper reports on a collection of naturalistic thermal and visible induced facial emotional expressions providing details on the experimental set-up, the acquisition scenario, the eliciting stimuli and the data. Facial emotional expression recognition through visible images has occupied a great deal of research, while thermal images have not yet been considered. Given that thermal images have the good property of not being affected by illumination and shadows, they can be, to a certain extend, more useful than the visible ones to determine distinctive facial emotional features. In addition, this work reports data obtained through a pilot experiment, showing no effects of emotional states for a defined word memory recognition task. It could be argued that the proposed word memory recognition paradigm (memory task) proved to be ineffective by the emotional interference, compared to the recall paradigm proposed by Dougherty and Rauch [5]. However, this rises several open questions on the intervention of emotional states on memory performance. Further investigations are needed to assess which are, and to what extent cognitive and memory tasks are affected by emotional states. Some questions. Which emotional state will produce an improvement or a deterioration of the cognitive and memory performance? Does the feeling experienced by the subject in the learning or the retention phase play a role in the accuracy of the recognition? For a better memory performance, is the emotional feeling state at the time of the encoding more important than the one experienced during the retention of the mnemonic material? Literature suggests the importance of both [16][17][18]. However, more data are needed. Finally, what are the effects of the sequencing? Does it produce a bias in the learning and retention phase? Finally, it was shown that an increased temporal-spatial resolution of the thermal camera would be necessary to observe appreciable temperature changes in the face or regions of it.
2018-01-23T22:44:36.633Z
2011-02-21T00:00:00.000
{ "year": 2022, "sha1": "3e9b6c570b7ef9e0ac14e27f546333f9cdcc85c7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0086e8b5a17c53312e1d5ad91d18fbe04430012b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science", "Psychology" ] }
247491401
pes2o/s2orc
v3-fos-license
The Kasaba Quartet: The Impact of Card Games on Knowledge and Self-Efficacy HIV/AIDS Prevention BACKGROUND: The rate of HIV/AIDS infection is increasing every year. The highest rates of HIV infection are among adolescents aged 15–24 years. Therefore, appropriate action is needed to prevent HIV transmission through risky behavior in adolescents. AIM: The purpose of this study was to determine the effect of Kasaba Quartet card game on HIV/AIDS knowledge and self-efficacy in preventing HIV/AIDS-related risk behavior in adolescents. METHODS: The study used a quasi-experiment with an equivalent time-series design. The intervention in this study was a card game using the Kasaba Quartet. The card game was held 3 times with a 1-day break. Adolescents’ HIV/ AIDS knowledge and self-efficacy were measured at the end of each card game. Sampling used purposive sampling with criteria including adolescents aged 12–16 years and domiciled in Bandung. A total of 30 people were involved in this study. RESULTS: After playing the Kasaba Quartet card game, the results showed that adolescents’ knowledge of HIV/ AIDS in the excellent category increased significantly, with average scores from 66.04 ± 16.219 to 97.40 ± 2.776. Likewise, adolescents’ self-efficacy with the high sort was raised, from 77.83 ± 8.67 to 97.60 ± 3.45. The results of statistical tests using the Friedman test showed the significance level of 0.001 (Sig. <0.05). In other words, there was an effect of the Kasaba quartet card game on HIV knowledge and self-efficacy in preventing HIV risk behavior. CONCLUSIONS: Thus, the Kasaba Quartet card game effectively increases knowledge of HIV/AIDS and selfefficacy in preventing risky behavior in adolescents. The study results can be used as an alternative strategy to increase knowledge and confidence in adolescents to avoid the spread of HIV/AIDS cases. Edited by: Sasho Stoleski Citation: Wilandika A, Fatmawati A, Farida G, Yusof S. The Kasaba Quartet: The Impact of Card Games on Knowledge and Self-Efficacy HIV/AIDS Prevention. Open Access Maced J Med Sci. 2022 Mar 01; 10(E):341-348. https://doi.org/10.3889/oamjms.2022.8681 Introduction Indonesia is one of the countries with the most HIV cases in Southeast Asia. The cumulative number of reported HIV cases until March 2021 is 427.201 people [1]. Meanwhile, HIV cases in West Java are ranked 4 th with 49,440 HIV cases [2]. In 2018, Bandung was the area with the highest incidence of HIV/AIDS, with 4620 people [3]. Meanwhile, the highest incidence of HIV is in the age range of 15-24 years, including adolescents. Based on data from the Public Health Offices of Bandung [4], the number of new HIV-positive sufferers in 2020 was 82 cases. New cases of AIDS in 2020 were 67 patients. Meanwhile, the HIV key population in Regol District is high. The number of HIV cases in this vital group includes injecting drug users (59 cases), female sex workers (258 cases), men who have sex with men (217 cases), and shemales (20 cases) [3]. HIV risk behavior that occurs in the community and the high incidence of HIV in this area requires every teenager to have the ability to prevent HIV infection. Adolescence is a critical period in human development both physiologically, psychologically, and socially. Cognitive development in adolescents is at its peak stage. Cognitive development in adolescents includes the ability to reason abstractly, think systematically, and understand various problems within them [5]. 
HIV infection in adolescents is closely related to cognitive development, knowledge, and behavior. Adolescence, which tends to explore new things, can lead to practices at risk of contracting a disease; moreover, if the teenager is not equipped with sufficient knowledge. Lack of knowledge will encourage adolescents toward risky behavior. Several studies show that adolescents' knowledge of HIV/AIDS is not comprehensive. The inside of adolescents about HIV/AIDS with less is 48.7%, sufficient is 41%, and good is 10.3% [6]. Likewise, research on adolescent girls aged 15-24 years in Malawi found that 42.2% had comprehensive knowledge; the rest were still lacking [7]. Proper knowledge about HIV/AIDS is one of the factors in avoiding HIV transmission. However, the knowledge aspect does not guarantee that the person will not carry out activities that risk being infected with https://oamjms.eu/index.php/mjms/index HIV. One may know the cause of the illness, but may not know the factors that may put them at risk of getting the illness [8]. HIV prevention means trying to stop someone from contracting HIV. Avoiding risky behavior is influenced by developing a positive attitude toward selfprotection. Meanwhile, a positive attitude is influenced by awareness of risk factors. HIV prevention can be associated with self-efficacy. Self-efficacy is defined as a person's belief in knowing his abilities, taking specific actions, overcoming situations, and being confident that he can achieve what he expects [9], [10]. Several studies have revealed a significant negative relationship between self-efficacy and risk behavior in adolescents [11], [12]. The higher the selfefficacy, the lower the risk behavior carried out by adolescents. Thus, preventive measures to reduce the intention to carry out HIV risk behaviors can increase knowledge and self-efficacy [13], [14]. Prevention of dangerous behaviors in adolescence is promoted through health education. Health education can be conducted individually or in groups [15]. Health education aims to increase knowledge and self-efficacy, thus forming persistent attitudes and behaviors. There are many choices of health education methods for HIV prevention. These methods include lectures, seminars, and simulation games. Health education with the lecture method is the easiest method to implement and does not require complicated equipment. This method is widely used to provide information and increase knowledge, especially when using interactive lectures [16]. However, the lecture method has limitations in the process of teacher-student interaction. Several studies found that this method was considered monotonous and boring for students [17], [18]. Likewise, the seminar method has the same characteristics as lectures. Seminars have the purpose of acquiring specific knowledge. The seminar method is also considered adequate to increase students' knowledge, active learning abilities, and cooperation [19], [20]. The seminar emphasizes multi-directional interaction between students or teachers [21]. However, seminars have weaknesses such as taking a long time and increasing overload because each student has to do various preparations and assignments before the seminar is held [22]. The lecture and seminar methods are not necessarily suitable for all ages, especially teenagers, who are sometimes difficult to focus on for a long time at 1 time. Simulation games are a method of health education that is carried out by providing certain information. 
This game is fun, so the presented material is easier to understand. In addition, the media used is also simple and easy to attract the attention of the game participants [23]. Indah and Gamayanti [24] found a significant difference in health education with simulation games and lectures. Still, simulation games were more effective in increasing students' knowledge, attitudes, and behavior. In this study, the media used is the quartet card game. This deck is called "Kasaba" (Kartu Sadar Bahaya HIV/AIDS) or a quartet card related to HIV/AIDS information and HIV/AIDS infection risk. Quartet card games with specific topics impact students' knowledge of the issues discussed on the cards [23], [25], [26]. Quartet card games can affect students' ability to identify a problem. Quartet card games improve students' knowledge comprehensively and are directed to achieve learning objectives. Quartet card games also increase students' motivation during learning [27]. In addition, students who are involved in learning through this quartet card game look more relaxed and happy [28]. Thus, this study aimed to determine the impact of the game Kasaba Quartet on HIV/AIDS knowledge and self-efficacy in preventing HIV/AIDS risk behaviors in adolescents. Methods In this study, we used a quasi-experiment with an equivalent time series design. This design applies the intervention 3 times with measurements at the end of the test 3 times. The intervention given was a quartet card game. At the initial stage, all respondents filled out a pre-test questionnaire to determine the level of knowledge and self-efficacy. Then, the respondent started the Kasaba quartet card game. After the match ended, respondents filled out a post-test questionnaire. This quartet card game is repeated 2 times. The intervention given was in the form of a quartet card game. The quartet card game uses cards, and wherein there are four cards in one set. Each card has a main topic and four subtopics from that central topic. The total cards used are usually 32 cards [23], [26], [29], or 33 cards, including the Joker [29]. The quartet card used in this study the Kasaba Quartet card game. This card game is called "Kasaba," which means in Indonesian, namely, "Kartu Sadar Bahaya HIV/AIDS" or a quartet card regarding HIV/AIDS information and the dangers of HIV/AIDS infection. This Kasaba Quartet Card was developed independently by the researcher. The Kasaba Quartet Card is a collection of cards with pictures and information explaining information about HIV/AIDS. There are 32 cards grouped into eight topics without the joker card. Each topic consists of four cards. The sampling technique used purposive sampling. The sample of this study is teenagers. Selection considered several inclusion criteria, such as adolescents aged 12-16 years and domiciled in Balonggede Village, Regol District, Bandung, West Java, and Indonesia. A total of 30 adolescents participated in this study. In addition, the determination of the sample pays attention to the even distribution in five hamlets in Balonggede Village. The number of youth in each hamlet was determined by proportional allocation to achieve the representation of each region. Participants divide themselves into groups of 3-5 people. The division of the group is determined based on the age range. There are eight groups in this game, and all participants filled out a knowledge and self-efficacy questionnaire 15 min before the game started. The game runs for 30 min. 
Researchers provide assistance and observations during the game. This observation is carried out to ensure that activities run according to procedures. The game is played once a day and then repeated for 2 days. All participants play a total of three rounds. All participants filled out the HIV prevention knowledge and self-efficacy questionnaire at the end of the game. This study uses the HIV/AIDS Knowledge Questionnaire to measure adolescents' knowledge about HIV/AIDS. This instrument was developed by the researcher, who then assessed the feasibility of the tool through expert judgment, validity, and reliability tests. The tool has been declared feasible and reliable, with a validity value between 0.361 and 0.777 and a reliability value of 0.819. This questionnaire assesses the understanding of basic HIV information, transmission media, mode of transmission, phase of HIV disease, non-infectious behavior, risk groups, prevention, and impact of HIV. Meanwhile, to measure the self-efficacy of HIV prevention in adolescents using the Self-Efficacy Questionnaire for Prevention of HIV-Risk Behaviors developed by Wilandika [30]. This questionnaire has a validity value between 0.324 and 0.642 and a reliability value of 0.803. The behavioral aspects assessed in this questionnaire include pre-marital sex, watching pornographic videos, drug use, use of needle tattoos, attitudes in dealing with sexual relations, and neglect of partner's HIV status. In this study, the researcher applied the restriction method to control confounding factors that might affect the intervention outcome. During the 1-day break in the card game, each participant is emphasized not to repeat the information from the card. Each participant agreed not to seek or read news related to HIV/AIDS from any information source. Data analysis in this study used descriptive analysis to identify information about age, gender, and HIV information exposure. Due to the sample abnormality and the comparison of the three data groups affecting the change in knowledge and selfefficacy of risk behavior prevention, we used the nonparametric Friedman test. This study was ethically approved by Research Ethics Committee from Sekolah Tinggi Ilmu Kesehatan' Aisyiyah Bandung with No.17/ KEP.02/STIKes-AB/VII/2019. Table 1 shows the characteristics of the adolescents involved in this study. About 30% of adolescents were 14 years old, with 60% male. The teenager had never been exposed to HIV/AIDS information by 56.7%. In addition, the self-efficacy results showed a change in HIV/AIDS risk prevention self-efficacy at the pre-test of 77.83 ± 8.667 to 97.60 ± 3.450 at the post-test, as shown in Table 3. The result of Friedman test is shown in Table 4. The statistical tests showed the significance level of 0.001 (Sig. <0.05), which means that the Kasaba Quartet Card game affects adolescents' self-efficacy in preventing HIV/AIDS risk in adolescents. Discussion HIV infection among adolescents is a complicated problem and becomes a prolonged problem if it is not prevented. Prevention through education in understanding the dangers of HIV to adolescents is necessary. The study results show that the Kasaba Quartet Card game affects HIV/AIDS knowledge and self-efficacy in preventing HIV/AIDS risk behavior. This result is indicated by a significance level of 0.001 (Sig. <0.05). In the last measurement, adolescents who participated in the Kasaba Quartet Card game showed an increase in knowledge and self-efficacy of HIV prevention. 
The application of educational interventions such as the Quartet Card game is an effort to increase adolescent understanding that can stimulate selfefficacy in preventing HIV risk behavior. Card games combine role-playing activities and fun discussions [31]. Card games have advantages over other methods. Card games can improve knowledge, attitudes, and skills and provide experience. In addition, this game is also an activity to channel pent-up feelings and can develop the talents and abilities that they already have [32]. Thus, the Kasaba Quartet Card game method can be used as a form of health education to increase HIV/AIDS knowledge and self-efficacy in adolescents' prevention of HIV/AIDS risk behaviors. Impact on knowledge of HIV/AIDS Health education through educational games increases knowledge, attitudes, and behavior [25], [27]. Card games are educational games that are appropriate if appropriately implemented. This game is easy to do with simple and attractive tools to accept the information presented on the card more readily. Ease of implementation of the game is an essential aspect of education that can to complete. Games that are easy to implement will support achieving the desired goals. Quartet card games in groups can train each student's cognitive abilities to understand more deeply the topics discussed. This Quartet card game is a fun activity, so students can play while learning. This game attracts students' attention to be involved in the education and teaching process. Similarly, Sutriyanto's research [23] regarding health education with the Kasugi card game consists in playing activities in its implementation. The study results found that card games were proven to increase students' knowledge about healthy and living behavior, and during health counseling, students were active and enthusiastic. Promoting knowledge about HIV/AIDS has been a significant factor in successfully preventing HIV infection. The study results on the understanding of HIV/ AIDS in adolescents before being given the Kasaba Quartet Card game showed that most adolescents had sufficient knowledge. Adolescents experienced a significant increase in knowledge after playing the Kasaba Quartet Card game. After the match, adolescents' knowledge of HIV/AIDS has an average of 97.40 ± 2.78. Learning activities trigger this increase in student knowledge carried out during the game. Various factors generally influence knowledge. Factors that can influence knowledge about HIV/AIDS include age, gender, education, economy, religion [33], [34], experience, and exposure to information [35]. Some of the youth involved in the study had received information about HIV/AIDS. The information is obtained from teachers' education provided by teachers and informal communication they get from various media or public activities. Individuals who have access to sources of information have good knowledge. Although the information obtained must come from a fixed and correct source so as not to cause misconceptions related to HIV/AIDS [36]. Proper knowledge about HIV/AIDS is an essential factor in preventing HIV/ AIDS risk behavior. In this context, knowledge of HIV/AIDS is acquiring scientific facts and information regarding symptoms, modes of transmission, adverse consequences, and disease prevention strategies. Information exposure with HIV/AIDS prevention behavior has a close relationship. 
Someone who understands the dangers of HIV/AIDS tends to take better preventive actions than those who have never been exposed to this information. Rilyani and Kusumaningsih [37] said a relationship between exposure to information sources and HIV/AIDS prevention behavior. Adolescents exposed to HIV/AIDS information have a positive attitude towards HIV/AIDS prevention behavior. In addition, the teenager showed good preventive behavior. In addition to information exposure, the age factor also affects a person's knowledge. Age will affect a person's understanding and mindset. Knowledge will increase with age. Increasing age impacts the development of a perspective and performance of a problem [38]. As in his research, Estifanos et al. [39] found that women aged 20-24 years have a better comprehensive knowledge of HIV than those under 19 years because the level of understanding of the information obtained can be appropriately processed individuals who have grown up. However, planting information is a significant factor in understanding information. Children who get the correct information about HIV/AIDS at an early age will have a good knowledge base. Along with increasing age, the understanding of this information can develop properly. Giving information is related to memory [38]. In this study, health education through the Kasaba card game also considers the memory factor in increasing knowledge. The card game is held 3 times with a time lag of once a day is also a factor that affects improving aspects of student knowledge. Memory is the human ability to receive, store and produce impressions, understanding, or responses. Memory can also be interpreted as the result of an experience or a change in behavior or activity. Memory is organized knowledge and can develop [40]. Sensory memory records information that enters through the five senses. If the information is responded to, it is transferred to the short-term memory system. A shortterm memory system can store data for 30 s, and a short-term memory system can hold about seven pieces of data at a time [41]. The memory of the stored information becomes the beginning of knowledge formation. However, because working memory only keeps a few information units, it must be repeated to maintain this memory. If there is no repetition, the information will be lost within 15-25 s, and the information will be lost [42]. When data are in working memory, related information in long-term memory is activated to combine the old data with the new. The reference of knowledge in long-term memory depends on the frequency of continuity. The more often an event is encountered, the stronger the relation in the memory [40], [42]. Thus, the frequency of the Kasaba Quartet Card game, which is repeated 3 times with a 1-day break, makes information about HIV/AIDS stay in students' memories so that students' knowledge also increases. The age of 14 years is included in the early teens. Adolescence is a transition period from childhood to adulthood, so that their curiosity is very high. Health education provided with an attractive appearance will increase adolescents' interest. This high curiosity is a practical key to increasing adolescent knowledge [43]. Similarly, Fandakova and Gruber [44] argue that curiosity and interest positively affect learning and memory in childhood and adolescence. The Quartet card game is exciting and can increase curiosity and interest so that at the end of the card game, the knowledge related to HIV/AIDS can achieve. 
Impact on self-efficacy prevention of HIVrisk behavior Correct and appropriate knowledge is an essential point in efforts to prevent the transmission of HIV/AIDS, especially among adolescents. However, a person's sound knowledge of HIV/AIDS prevention does not guarantee that the person will not engage in risky activities. Self-efficacy factors also influence this preventive behavior. The study results on self-efficacy in preventing HIV/AIDS risk behavior before the Kasaba Quartet Card game showed that most adolescents had moderate self-efficacy. After adolescents played the Kasaba card game, adolescent self-efficacy was significantly increased. Self-efficacy of HIV/AIDS prevention in adolescents after implementing the Kasaba card game in the final stage has an average of 97.60 ± 3.45. Adolescent self-efficacy can be formed through the implementation of health education. Such is the case in Wilandika's research [45], which found that health education through case-based learning can increase self-efficacy in preventing HIV risk behaviors. Selfefficacy in preventing HIV-risk behavior in adolescents has increased the ability of adolescents to believe that they can and successfully take preventive action against various possible sexual behaviors that are at risk of contracting HIV infection. This increase in selfefficacy is carried out in stages by providing information about HIV/AIDS, forming permanent knowledge related to the given topic. Risk behavior prevention self-efficacy and knowledge have a significant relationship [46], [47]. Likewise, Yu et al. [48] said HIV knowledge had been a relationship with self-efficacy and condom use intentions in adolescents. The study results found that self-efficacy is a factor that mediates the knowledge and behavioral purposes to use safety. The better the understanding, the higher a person's self-efficacy, which will affect the attitude of preventing HIV/AIDS risk behavior. Bandura [49] said self-efficacy affects how individuals think, feel, motivate themselves, and take action. Someone who only has knowledge, attitudes, and skills without self-efficacy is likely that that person will not take action [50]. This study found that almost all adolescents, after being given the Kasaba card game intervention, nearly all adolescents had high selfefficacy in preventing HIV risk behaviors. The results of this study can be interpreted that the higher a person's self-efficacy, the higher the confidence to control HIV risk behavior. In the end, the teenager is expected to take real action in preventing various HIV risk behaviors. Newby et al. [51] revealed that self-efficacy is an essential determinant of health behavior. Selfefficacy will positively affect health behavior. Selfefficacy possessed by a person will make that person pay attention to behaviors that support their health. A person with high self-efficacy toward healthy behavior is most likely to carry out healthcare such as exercising and avoiding behaviors detrimental to that person's health [52], [53]. Meanwhile, someone who has low self-efficacy is likely to approach behaviors that are risky to his health. Thus, the Kasaba Quartet Card game impacts increasing knowledge of HIV/AIDS in adolescents. This educational method affects increasing good adolescent knowledge about HIV/AIDS. Adolescents who have good knowledge can judge whether an action is good or bad to develop self-efficacy beliefs in themselves. 
Adolescent self-efficacy develops into the initial formation of behavior in avoiding the risk of HIV infection. The results of this study can be used as a policy basis in the design of prevention programs carried out by health practitioners and the government on a broader scope. The prevention program was designed through health education based on simulation games such as the quartet card game, which increased knowledge and self-efficacy in the findings. Based on behavioral health theory, the intervention to reduce HIV risk increases knowledge, awareness, and self-efficacy [54], [55]. https://oamjms.eu/index.php/mjms/index Therefore, interventions like this can be a reference in preventing HIV infection in adolescents. Limitation of study The limitation of this study relates to the application of a 1-day break time for each quartet card game intervention. This break time was intended to internalize education results to students, but this action can also cause bias. The bias that occurs is that students can forget about the information provided during the intervention, and students can also find out educational information through other sources. The researcher tried to control these confounding factors by emphasizing that each participant did not seek or read news related to HIV/AIDS from other sources. In addition, the use of purposive sampling to selecting samples can be a limitation in generalizing the results. However, this proportional sampling was conducted to obtain in-depth and specific information on the variables and targets studied. In this study, the quartet card game only involved a particular age group of teenagers. Further research should be directed to analyze the effect of the Kasaba Quartet Card game on other age groups. The Kasaba Quartet must also be redeveloped and adapted to the individual developmental stage. Conclusions The Kasaba Quartet Card game affects knowledge about HIV/AIDS and self-efficacy in preventing HIV/AIDS risk behavior. Adolescents who have good knowledge affect their confidence to carry out various activities to avoid different HIV risk behaviors. Kasaba Quartet Card game is an effective health education to increase knowledge and self-efficacy of HIV prevention. This game uses exciting and easy card media, so teenagers are interested and enthusiastic about playing it. The Kasaba Quartet Card game can also increase the curiosity and motivation of teenagers to learn information about HIV/AIDS. The results of this study become a strategy in HIV infection prevention education among adolescents. The development of this method is adjusted to the characteristics of the target to achieve the expected goals.
2022-03-17T15:21:16.508Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "ae3b0301c371fc0e21e6d17b5ce973b4c4082492", "oa_license": "CCBYNC", "oa_url": "https://oamjms.eu/index.php/mjms/article/download/8681/7030", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7223559fa4b355eb5198b21681b5cf7397cf262d", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [] }
85464842
pes2o/s2orc
v3-fos-license
Mel-Scaled Autoregressive (Mel-AR) Model based Voice Activity Detection using Likelihood Ratio Measure In this paper, a Mel-scaled AR (Mel-AR) model based VAD is presented, where likelihood ratio measure is used to classify the input speech frames as speech/non-speech segments. The Mel-AR model parameters have been estimated on the linear frequency scale from the input speech signal without applying bilinear transformation. This has been done by employing a first-order all-pass filter rather than unit delay. The performance of the proposed VAD is evaluated on Aurora-2 database by measuring FAR and FRR. The equal false rate (EFR) at the crossover point is also presented as a merit of VAD. In addition, the performance of the proposed VAD in speech recognition is verified by incorporating it with a Mel-Wiener filter for MLPC based noisy speech recognition. INTRODUCTION Voice activity detector (VAD) plays an important and sensitive role in many applications including robust speech recognition, digital hearing aids and discontinuous speech transmission for bandwidth reduction or distributed speech recognition over wireless and IP networks [1], [2], [3], [4]. One of the most critical problems of such applications is that the limitations of coping with the environments. Environmental noises contaminate the speech signal and change the feature parameters. As a result, the performance of these applications severely degrades in a wide variety of environmental conditions. To maintain the performance at an acceptable level a noise suppression unit along with a precise VAD is essential. For non-stationary noises, the VAD is even more crucial since it is necessary to update constantly varying noise statistics. Therefore, a correct classification of noisy signal into speech/non-speech segments is necessary to track an accurate estimation of noise and an efficient application to a speech enhancement scheme. Many researchers have studied different methods to develop an efficient VAD and most of them are heuristics using different speech parameters, such as, energy [5], [6], [7], zero crossing rate [2], [8], cepstral [9], LPC [10], etc. However, the algorithms based on speech features with heuristic rules have difficulty in coping with real world noises at low SNR conditions. Recently, statistical model based VAD is found to be an efficient approach to segregate speech and non-speech frames under a broad range of background noises [11], [12], [13], [14], [15], [16]. In [11], a robust VAD algorithm based on statistical likelihood ratio test (LRT) involving a single observation vector is proposed. Later, many variants of LRT have been studied to improve the performance of VAD [12], [17], [18]. In this paper, an autoregressive (AR) model [19] based VAD is proposed, where likelihood ratio (LR) measure is used to classify the input speech frames as speech/non-speech segments. The AR model is implemented on mel-scale using a first-order all-pass filter instead of unit delay. Let x be an N -dimensional random variable corresponds to N consecutive samples of the windowed signal. For an M -th order zero mean autoregressive process, x is given bỹ where {ẽ n } are Gaussian i.i.d. random variables with zero mean and unity variance, and {ã i } are the Mel-scaled AR coefficients withã 0 = 1. Now, for large N , the probability density function for x can be approximated by [19] Rã[i] is the autocorrelation function of AR coefficients andr x [i] is the mel-autocorrelation function [21], [22], [23] of x. 
The assumption made here is that the signal x has already been properly scaled, that is, in the LPC terminology this is equivalent to normalization by the square root of average residual energy. LIKELIHOOD RATIO MEASURE The proposed VAD is based on the likelihood ratio measure between autoregressive model of noise and input speech signal. An M -th order autoregressive noise model with coefficientsã 0 = 1 is created from initial 20 frames of the input speech signal. Then for any speech frame t, the mel-autocorrelation functionr x [i] is calculated to estimate likelihood ratio between AR noise model and current speech frame as follows: Finally, d LR is compared with a threshold value η. For d LR < η, the frame is detected as noise, otherwise, speech frame. When a frame t is detected as noise, the estimated melautocorrelation function of noiser n [i, t] is updated by accumulatingr x [i, t] as follows: if frame t is speech (8) where t p is the previous noise frame and β is the forgetting factor of value 0 < β < 1. Though the proposed VAD is based on the likelihood ratio measure, it is also possible to implement the VAD based on Itakura-Saito distortion measure [24]. Itakura-Saito distortion measure d IS between AR noise model and input speech frame is given by where σ 2 en and σ 2 ex are the residual energies of the estimated noise and current frame, respectively, and δ(x;ã) is given by Eq. (6). EXPERIMENTAL SETUP The proposed VAD was evaluated on test set A in Aurora 2 database [25]. The Aurora 2 database is a subset of TI digits database [26] contaminated by additive noises and channel effects. The order of AR model was set to 10 and the window length was 40 ms with 10 ms frame period. The value of forgetting factor was set to 0.96. PERFORMANCE EVALUATION Usually two measures are used to examine the VAD performance. One is frame based false alarm rate (FAR) and the other one is frame based false rejection rate (FRR). As reference the corresponding clean speech files are labeled as speech/nonspeech frames using an energy based VAD. Because for clean speech the energy based VAD can properly discriminate speech and silence. As the threshold factor η is used for detecting input frames as speech or noise, the effect of threshold factor on FAR and FRR is examined and the result is presented in Figure 1. Here FAR and FRR are calculated by using all the speech files for the entire set of noises (subway, babble, car and exhibition) in test set A for 5 dB SNR. The experiment was carried out for the threshold factor of 0.0 to 1.0. As shown in Figure 1, the proposed VAD keeps a steady FAR and FRR with increasing threshold factor. It is also observed that the FAR has a decreasing trend with increasing threshold factor. On the other hand, reverse characteristic is seen for FRR. The higher value of FRR means the most of the noise frames are detected as speech, on the contrary, the higher value of FAR means the most of the speech frames are detected as noise. Hence, there should be a trade-off between FAR and FRR for better estimation of noise. It has been found that the crossover point is obtained at the value of threshold factor η = 0.41, and the equal false rate (EFR) at this point is around 11.2%. The EFR at the crossover point both for Itakura-Saito (IS) distortion and likelihood ratio (LR) measure as a function of window length has also been examined and presented in Figure 2. It has been found that longer window length gives lower EFR both for IS and LR based VAD. 
Consequently, the proposed system uses 40 ms window length for VAD. Though the EFR for IS based VAD is much lower than that of LR based VAD, the recognition result for test set A of Aurora 2 database shows that LR based VAD obtains slightly better result, which is presented in Figure 4. To find the optimum threshold value, a number of recognition experiments were carried out with different threshold values under the conditions given in Table 1. The threshold factor η = 0.0 means the noise model is not adaptive and it is created from the initial 20 frames of the speech signal. The larger threshold affects the esti- CONCLUSION This paper presents an autoregressive model based VAD and its application to the robust speech recognition. The autoregressive model is efficiently implemented on mel-scale. The likelihood ratio measure is used to segregate speech and non-speech frames.
2019-03-31T13:14:01.890Z
2019-03-15T00:00:00.000
{ "year": 2019, "sha1": "46bada574ac18c3b7d65f70d37b4febf578e674d", "oa_license": null, "oa_url": "https://doi.org/10.5120/ijca2019918600", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cf0412e158a98aa4144868c8afc639386ec60ec9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
231202446
pes2o/s2orc
v3-fos-license
Recurrent PALB2 mutations and the risk of cancers of bladder or kidney in Polish population Introduction The role of PALB2 in carcinogenesis remains to be clarified. Our main goal was to determine the prevalence of PALB2 (509_510delGA and 172_175delTTGT) mutations in bladder and kidney cancer patients from Polish population. Materials and methods 1413 patients with bladder and 810 cases with kidney cancer and 4702 controls were genotyped for two PALB2 variants: 509_510delGA and 172_175delTTGT. Results Two mutations of PALB2 gene were detected in 5 of 1413 (0.35%) unselected bladder cases and in 10 of 4702 controls (odds ratio [OR], 1.7; 95% CI 0.56–4.88; p = 0.52). Among 810 unselected kidney cancer cases two PALB2 mutations were reported in two patients (0,24%) (odds ratio [OR], (OR = 1.2; 95% CI 0.25–5.13; p = 0.84). In cases with mutations in PALB2 gene cancer family history was negative. Conclusion We found no difference in the prevalence of recurrent PALB2 mutations between cases and healthy controls. The mutations in PALB2 gene seem not to play a major role in bladder and kidney cancer development in Polish patients. Introduction Carcinogenesis is an intricate multi-step process initiated by abnormal oncogenic signals in different signaling pathways. Defects in DNA repair are responsible for many cancers like: bladder, prostate, breast, kidney, colorectal, pancreatic, ovarian. The mismatch repair (MMR) and homologous recombination (HR) are well-established DNA repair pathways with links to human cancer [1][2][3][4][5][6][7]. In the homologous recombination DNA damage repair a key role play the genes BRCA1 and BRCA2 which interacts with many proteins: proteins of the MNR complex (MRE11/RAD50/ NBS1), RAD51, CtIP, MRE11, ATM, H2AX, PALB2, RPA, RAD52 and the Fanconi anemia proteins [8]. Mutations in genes: PALB2, ATM, RAD50, MRE11, NBN and the genes for the MRN complex are responsible for hereditary cancers. PALB2 has a large number of interactions with DNA damage response proteins BRCA1, BRCA2, RAD51, RAD51C and XRCC3 which play function in DNA repair by homologous recombination [9,10]. PALB2 is not only partner and localizer of BRCA2 but also is localized and interacts with BRCA1 plays an important role as a pivotal tumor suppressor protein [11]. Mono-allelic PALB2 germline mutations disrupt the interaction of PALB2 with either BRCA1 or BRCA2 engender DNA damage sensitivity, HR defects, and cancer susceptibility to breast, ovarian, pancreatic whereas biallelic PALB2 germline mutations cause Fanconi anemia subtype FANCN, with early onset of acute myeloid leukemia, medulloblastoma, neuroblastoma and often Wilms' tumor [12][13][14][15][16][17]. Given the intimate functional links between PALB2 and BRCA2 and the similar phenotypes associated with biallelic mutations in the genes that encode them, it is plausible that monoallelic PALB2 mutations confer susceptibility to adult cancer [12]. In Finland, Canada (in French-Canadians), and Poland, mutations in PALB2 are the cause of between 0,5 and 1% of all breast cancers and 0,52% unselected cases of pancreatic cancer [13,[18][19][20]. We were interested to investigated if mutations in PALB2 genes could be relevant to the pathogenesis of bladder or kidney cancer. Herein we genotyped 1413 patients with bladder and 810 cases with renal cancer and 4702 healthy controls. 
Patients This study includes 1413 unselected cases of urothelial bladder cancer (376 women and 1037 men) and 810 unselected kidney cancer (360 women and 450 men) diagnosed at the Urology Hospital in Szczecin between 1986 and 2018. A total of 1518 incident cases of bladder cancer and 869 kidney cancer were identified during the study period. Of these, 1413 patients with bladder and 810 with kidney cancer accepted the invitation to participate (93, 93%). All patients had a histopathological diagnosis of cancer. All patients had a histopathological diagnosis of cancer. The mean age of diagnosis of bladder cancer patients was 68 years (range 13-91) and 62 (range 17-91) of kidney cancer. A family history was taken by the construction of family tree and the completion of a standardized questionnaire. A total of 45 patients with a family history of at least 1 bladder cancer in first or second degree relatives and 30 cases with a family history of at least 1 kidney cancer in first or second degree relatives were identified. Cigarette smoking was reported in 1045 (74%) cases with bladder and 488 (60%) kidney cancers. The vital status and the date of death of all of the cases were requested from the Polish Ministry of the Interior and Administration in February 2020, and were obtained in March 2020. In total we collected data of death of 729 (51%) patients with bladder and 204 (25%) kidney cancer. The study was approved by the Ethics Committee of Pomeranian Medical University in Szczecin. Controls The control group included 4702 cancer-free, populationbased, adults from (the genetically homogeneous population) of Poland. In order to estimate the frequency of the Polish founder mutations in the general population, fourth control groups were combined. The first control group were women age 24-84 years identified from the region of Szczecin. These controls are described in detail elsewhere [21]. The second control group consisted of 1717 cancerfree females aged 32-72 years who participated in mammography screening at eight different centers across Poland: Kielce, Legnica, Olsztyn, Poznan, Szczecin, Swidnica, Torun, and Zielona Góra and who provided a blood sample for DNA analysis. The third group of women included 1036 patients age 20-94 years selected at random from computerized lists of patients at family practices located in the region of Opole. And the last fourth group included 990 women age 50-66 years who participated in a colonoscopy screening programme for colorectal cancer in Szczecin, Bialystok, and Łódz. The allele frequencies for all variants in our control group were not dependent on age and the prevalence estimates of mutations in all genes were similar in younger and in older controls. Methods DNA was isolated from 5 to 10 mL of peripheral blood. The two recurrent mutations of PALB2 (509_510delGA and 172_175delTTGT) were genotyped as described previously [14,22]. In brief, these variants were genotyped with a TaqMan assay (Life Technologies, Carlsbad, CA) using a LightCycler Real-Time PCR 480 System (Roche Life Science, Mannheim, Germany). Sanger direct sequencing was undertaken to confirm the presence of mutations, using a BigDye Terminator v3.1 Cycle Sequencing Kit (Life Technologies), according to the manufacturer's protocol. In all reaction sets, positive and negative controls (without DNA) were used. 
Statistical analysis Survival analysis We followed up PALB2 mutation carriers from the date of diagnosis until the date of death from any cause, or March 2020, if they were still alive. The median followup was 204 months. Due to a two variants of PALB2 mutations (509_510delGA and 172_175delTTGT) were not statistical significant among bladder and kidney cancer patients we did not perform survival analysis. Odds ratios The prevalence of each of the two PALB2 alleles was compared in cases and in controls, singly and in combination. Odds ratios were generated from two-by-two tables and statistical significance was assessed using the Fisher exact test where appropriate. The odds ratios were used as estimates of relative risk and additionally were adjusted for age, sex and pack-years of smoking by multiple logistic regression. Ethical statement The study was performed in accordance with the principles of the Declaration of Helsinki. All patients and controls provided written informed consent. Bladder cancer Bladder cancer cases and 4702 controls were successfully genotyped for the two PALB2 variants. Among bladder cancer cases the PALB2 mutations (two variants combined) were found in 0.35% of the patients and in 0.21% of the controls (OR = 1.7; 95% CI 0.56-4.88; p = 0.52) ( Table 1). A PALB2 mutation (509_510delGA) was present in four (0.3%) of 1413 cases with bladder cancer and in seven (0.15%) of 4702 controls (OR = 1.9; 95% CI 0.55-6.51; p = 0.5). A mutation 172_175delTTGT was detected in one patient with bladder cancer (6.55%) and three (0.06%) out of 4702 controls (OR = 1.1; 95% CI 0.11-10.7; p = 0.9). In the group of 1037 men we observed three (0.29%) mutations of 509_510delGA and among 376 womans were two (0.53%) mutations of PALB2 gene one of each type. The information about smoking we collected from 1045 patients with bladder cancer including 123 (11.8%) nonsmokers and 922 (88.2%) smokers. We observed one PALB2 mutation in person who did not smoke (0.8%) and four mutations among smokers (0.4%). The frequency of PALB2 mutation was slights higher in non-smokers (OR = 1.9; 95% CI 0.21-17; p = 0.47). In 45 family cases with bladder cancer in first-and/or second-degree relatives we did not observed any of investigated mutations in gene PALB2. Three patients with bladder cancer and mutation in variant 509_510delGA died up to a year after diagnosis and one to March 2020 was still alive. Patient with mutation in 172_175delTTGT died half year after diagnosis of the bladder cancer. Kidney cancer The 810 kidney cancer cases and 4702 controls were successfully genotyped for the two PALB2 variants. In kidney cases PALB2 mutations (two variants combined) were found in 0.24% of the patients and in 0.21% of the controls (OR = 1.2; 95% CI 0. 25-5.13; p = 0.84) ( Table 2). The PALB2 mutations (509_510delGA) were present in one (0.1%) of 810 cases with kidney cancer and in seven (0.15%) of 4702 controls (OR = 0.8; 95% CI, 0.10-6.75; p = 0.86). A mutation 172_175delTTGT was detected in one patient with kidney cancer (0.1%) and three (0.06%) out of 4702 controls (OR = 1.9; 95% CI, 0.20-18.6; p = 0.56). In the group of 450 men we observed two (0.45%) mutations of PALB2 gene one of each type. The information about smoking we collected from 488 patients with kidney cancer including 190 (39%) nonsmokers and 298 (61%) smokers. The one mutation of variant 509_ 510delGA was observed in person who smoked less than 20 of pack years (0.33%). 
We did not have information about smoking in patients with mutation 172_ 175delTTGT in PALB2 gene. In 30 family cases with kidney cancer in first-and/or second-degree relatives we did not observed any of investigated mutations in gene PALB2. The patient with kidney cancer and mutation in 172_175delTTGT died 3 years after kidney cancer of diagnosis but the patient with mutation in second investigated variant of gene PALB2 was still alive until March 2020. Discussion The results of our unselected cohort 1413 bladder, 810 kidney cancer cases and 4702 controls revealed no statistical significant, indicating that two mutations of PALB2 gene (509_510delGA and 172_175delTTGT) do not seem to play a major role in bladder or kidney cancer development. The PALB2 mutations combined are rare in the general population (0.21%). In this study we found that mutations in PALB2 gene were seen in five (0.35%) unselected cases of bladder cancer and two (0.24%) unselected cases of kidney cancer. Due to low statistical power of the study 18.6% for bladder cancer and 5.2% for kidney cancer our results need to be confirmed by larger multi-center study. In the literature there are some studies of PALB2 in unselected bladder and kidney cancer cases but again they are based upon small study cohorts. Reid et al. have described bi-allelic mutations in PALB2 in seven families affected with Fanconi anemia and cancer in early childhood [12]. Although PALB2 mutations were less common overall and appreciated in only 0.6% of tumors tested a significant proportion of PALB2 mutations were found in bladder (1.49%), breast (1.05%) but no single mutation was found in kidney. Adank et al screened a random cohort of 47 Dutch Wilms tumor patients for germline mutations in PALB2 by DNA sequencing and Multiplex Ligation-dependent Probe Amplification and they did not identify any bi-allelic pathogenic mutations [23]. Heeke et al. tested 201 bladder and 199 kidney tumors. Thy found that frequency of gene PALB2 mutation was 1.49% in bladder cancer and 0% in kidney cancer tumors [24]. Lee Yap et al. found five somatic mutations of PALB2 gene in two cases of bladder cancer [25]. Lee Yap et al. also observed that in patients with mutations in DNA repair genes is longer recurrence-free survival. In this study we did not do multivariable analysis because the presence of a PALB2 (509_510delGA and 172_175delTTGT) mutations were not statistical significant among bladder and kidney cancer patients. In summary we found no difference in the prevalence of recurrent PALB2 mutations between cases and healthy controls. Our results indicate that testing mutations 509_510delGA and 172_175delTTGT is unlikely to be relevant for the identification of individuals at risk of bladder or kidney cancer, at least in the Polish population.
2021-01-09T14:15:16.831Z
2021-01-08T00:00:00.000
{ "year": 2021, "sha1": "22cec4113c61eb2fd661167969aa988cf3786545", "oa_license": "CCBY", "oa_url": "https://hccpjournal.biomedcentral.com/track/pdf/10.1186/s13053-020-00161-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a032611e2f1f72e31320c87992015d0b9f08165", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
250680819
pes2o/s2orc
v3-fos-license
On strongly continuous ρh-semigroup In this paper, we introduce a semi group which it constructs the solution of the partial differential equation as the form: ρ(t)ρ′(t)∂u(t,x)∂t=h(t)h′(t)∂u(t,x)∂x,h(0)=1,ρ(0)=1 First, we introduce the operator theory and the fundamental theorems of the semigroup and certain notions of strongly continuous operators. These concepts are particular types of operator semigroups of functional analytic Using functional analytic tools and methods from ergodic theory, we describe various features of the On strongly continuous ρh -semigroup In this paper , we introduce a semi group which it constructs the solution of the partial differential equation as the form: Mathematics subject classification . 47Dxx The 1.Introduction. Many Scientists ( [1], [2], [3]) introduce several generations of analysts working in the area of operator semigroups. In particular, the progress has been made in the asymptotic theory of strongly continuous semigroups. One of the major results in this direction a strongly continuous semigroup on a Banach space with the norm of the resolvent of its generator A is uniformly bounded in the right half-plane. We consider the following equations. We introduce a new type of semigroup namely (Multipilicative Canonical semigroup) and its define by. And also we introduce strongly continuous generalized canonical semigroup defined by. Definition The function f(t) is called continuous at a point t0 if ‖ ( ) − ( 0 )‖ → 0 , → 0 , continuous on the interval [a,b], if it is continuous at each point of this segment. [5]. Definition The function f(t) is called differentiable in point t0, if there is an element ′ such that. The element f' is called the derivative of the function f (t) at point t0 and denoted by. Definition [7]. We will say that the operator function A(t) is continuous in norm at point t0 [a,b] if [8]. Definition The operator-function A(t) is strongly continuous in a point t0 [a,b] if at any fixed x  E1 Definition We say that an operator A is closed if for every xnD (A), then ‖x − x 0 ‖ → 0 and Ax0 = y0. [8]. Definition A family of bounded operators T (t) (t >0), define on the Banach space E, is called strongly continuous semigroup of operators if T(t) strongly continuous and satisfies the condition T (t)T (s) = T(t+s) (t, s > 0). Definition [6]. It is said that T (t) is a semigroup of class C0 if it is strongly continuous and the following condition for any x E. Theorem [4]. The linear operator A is a generating operator (generator) of a semigroup T (t) of class C0 iff its closed with a dense in E. Definition[6]. A family of bounded operators T (t) (t >0), define on the Banach space E, is called strongly continuous multiplicative semigroup of operators if T(t) strongly continuous and satisfies the conditions. Consider the differential equation. Definition. It is easy to see that the general solution of this equation is. Where is an arbitrary differentiable function. we can assign the one-parameter equation (1) to a oneparameter family of operators. under the assumption that φ belongs to the space of continuous and bounded functions C(a,b) with the norm. Definition. We define a binary operation ⨀ by. We will prove that ℎ ( ) defined by (3) is a semigroup of linear and bounded in C(a, b) Lemma. The operational family ℎ ( ) defined by (3) is a semigroup of linear and bounded in C(a, b) of operators with the binary operation in (4). Proof. We note that. Remark. 
The function ( ) which given in the semigroup ℎ ( ) is invariant relative to the functions h(x) on h(x)+c , where c-constant. semigroup ℎ ( ) is called ℎ −semigroup and equation (1) is its generating equation. We note that the function ( ) it is possible to select such that the equation (1) generates a family ℎ-semigroup. In the following proposition we show that ℎ −semigroup has a fixed point. Definition. If ( ) = ℎ( ) then the semigroup ℎ −semigroup can be written by the form. In the following lemma we will prove that the family ℎℎ −semigroup has one symmetric semigroup. the family of semigroups produced by symmetric differential equation, contains only one symmetric semigroup. Therefore ℎ is not symmetric for c1 . There exists a special cases of partial differential equation can be solved by another method and we take some of these cases in the following examples. We note that ℎ (0) ( ) with the binary operation ⨀ = . is called Arithmetic semigroup. Definition. Let f(t) be a vector function , define on 3.14 Remark. Definition . The function φ C(a,b) is called uniformly continuous if its −1 -deformation = ( ( )) is bounded and uniformly continuous function. we note that . Proof. We note that. Now we can get the form of A generator operator of the semigroup ℎ (0) ( ) as the following theorem. Theorem. A generator operator of the semigroup ℎ (0) ( ) given by the differential expression . Proof. We have . In the following theorem we get the estimate of the operator ℎ ( ). Theorem. The family of operators ℎ ( ) is strongly continuous generalized canonical semigroup defines on the space , ,ℎ and the following estimation holds.
2022-06-28T01:26:06.395Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "256c7fd3b0d92a3beab4be5204f98c5f33410bbf", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1234/1/012109/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "256c7fd3b0d92a3beab4be5204f98c5f33410bbf", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
6373842
pes2o/s2orc
v3-fos-license
Mechanism of phagocytosis in dictyostelium discoideum: phagocytosis is mediated by different recognition sites as disclosed by mutants with altered phagocytotic properties The recognition step in the phagocytotic process of the unicellular amoeba dictyostelium discoideum was examined by analysis of mutants defective in phagocytosis, Reliable and simple assays were developed to measure endocytotic uptake. For pinocytosis, FITC-dextran was found to be a suitable fluid-phase marker; FITC-bacteria, latex beads, and erythrocytes were used as phagocytotic substrates. Ingested material was isolated in one step by centrifuging through highly viscous poly(ethyleneglycol) solutions and was analyzed optically. A selection procedure for isolating mutants defective in phagocytosis was devised using tungsten beads as particulate prey. Nonphagocytosing cells were isolated on the basis of their lower density. Three mutant strains were found exhibiting a clear-cut phenotype directly related to the phagocytotic event. In contrast to the situation in wild-type cells, uptake of E. coli B/r by mutant cells is specifically and competitively inhibited by glucose. Mutant amoeba phagocytose latex beads normally but not protein-coated latex, nonglucosylated bacteria, or erythrocytes. Cohesive properties of mutant cells are altered: they do not form EDTA-sensitive aggregates, and adhesiveness to glass or plastic surfaces is greatly reduced. Based upon these findings, a model for recognition in phagocytosis is proposed: (a) A lectin-type receptor specifically mediates binding of particles containing terminal glucose (E. coli B/r). (b) A second class of "nonspecific" receptors mediate binding of a variety of particles by hydrophobic interaction. Nonspecific binding is affected by mutation in such a way that only strongly hydrophobic (latex) but not more hydrophilic particles (e.g., protein-coated latex, bacteria, erythrocytes) can be phagocytosed by mutant amoebae. Endocytosis is the uptake of fluid (pinocytosis) or particles (phagocytosis) by a eucaryotic cell from the extracellular environment into the cytoplasm via plasmamembrane-derived vesicles . Pinocytosis appears to be a constitutive property of many cells and seems to proceed with a basal rate characteristic for each cell type . The factors controlling this basal rate have not been identified . Phagocytosis involves recognition and binding of a particle by the phagocyte. Binding seems to create a transmembrane signal which leads to circumferential attach-ment of the particle by pseudopodial movement and subsequent internalization by membrane fusion . (For review, see references 1-5) . However, the underlying biochemical mechanism by which a particle is attached to the plasmamembrane of a phagocytotic cell and how this subsequently directs the contractile system to engulf this particle remains unclear. As an experimental system for studying phagocytosis, we have chosen the unicellular slime mold Dictyostelium discoideum (6). In nature, this amoeba grows by ingestion of soil fit[ JOURNAL of CELL BIOLOGY -VOLUME 86 AUrus1 1980 456-465, microorganisms . For laboratory use, axenically growing strains are available which can grow by high, continuous rates of pinocytosis but retain the capacity to phagocytose microorganisms (7) . Homogeneous populations of amoebae can be grown in large quantities . Moroever, the potential for genetic studies makes this organism particularly attractive (8) . 
By isolating mutants with altered phagocytotic properties, we have started to dissect the complex process of phagocytosis into individual steps . Analysis of the mutant phenotype reveals the presence of at least two independent receptors on the surface of D. discoideum which recognize different surface features on various substrate particles . Strains and Growth Conditions For axenic cultures, strain AX2 (ATCC24397) and derived mutants were grown in peptone-yeast extract medium supplemented with 18 g glucose or maltose/liter (7). Cell number was monitored with a particle counter (model DN, Coulter Electronics Ltd., Harpenden, England) . Cells were harvested by centrifuging at 200 g for 4 min. Amoebae were also grown in submersed cultures on a gyratory shaker (120 rpm) in suspensions of E. coli B/r (10"' cells/ml) or in association with bacteria on nutrient agar plates (I g glucose, I g bacto-peptone, I g yeast extract, 10 g agar in l liter of 17 mM sodium phosphate, pH 6.2). For preservation of strains, clonally derived spores were suspended in axenic medium and stored in liquid nitrogen . Spores were found to remain viable for several years under these conditions. Genetic nomenclature is based upon a system proposed for bacterial genetics (9) which was adapted to D. discoideum genetics (10) . Haploid strains isolated in this laboratory are designated HV and named according to their isolation number. The locus code tsg has been used for genes determining temperature sensitivity for growth, and we propose the locus code phg for genes determining the phagocytosis phenotype. Each independently isolated mutation is given an isolation number . A block of isolation numbers from 1350 to 1399 has been allocated to this laboratory .' Quantitation of Endocytotic Uptake The major experimental difficulty in measuring initial rates of phagocytosis is to find a rapid procedure to separate quantitatively cells containing ingested material from the bulk of uningested material . Separation by differential centrifugation, the procedure usually applied, is tedious and incomplete. We have overcome this problem by centrifuging the cell suspension through a column of highly viscous solution of poly(ethyleneglycol) 6000 . Extracellular fluidand small particles like bacteria or latex beads (diameter below 2 fim), with a high surface to volume ratio, remain on top of the column, whereas the large amoebae are found on the bottom . Cells remain fully viable during this procedure, and recovery is almost 100%. BACTERIA PHAGOCYTosis : Fluorescein-labeled bacteria (FITC-bacteria) were prepared by incubating bacteria (OD4zo = 20) in 50 mM Na2HP04, pH 9.2, in the presence of 0.1 mg/ml fluorescein isothiocyanate at 37°C for 3 h. To remove surplus reagent, cells were washed by centrifugation until no fluorescence was detectable in the supernate. For the phagocytosis assay, amoebae were harvested and resuspended in the medium specified at a concentration of 2-4 X 10' cells/ml . Cells were incubated on a rotary shaker (100 rpm) for 15 min to recover and FITC-bacteria (8 X 10" bacteria/ml) were added. Phagocytosis was stopped by diluting I -ml aliquots at various times into 2 ml of ice-cold 20 mM phosphate, pH 6.2 . To separate amoebae from noningested bacteria, the cell suspension was layered over and centrifuged through (200 g, 10 min) an aqueous solution (10 ml, 7 em height) of 20% (wt/wt) poly(ethyleneglycol) 6000 . 
Noningested bacteria remaining in the top fluid layer were removed, and pelleted amoebae were washed once by centrifugation in 3 ml of 50 mM Na,HPO,, pH 9.2, and resuspended in 3 ml of the same buffer. After counting, cells were lysed by addition of Triton X-100(0.2% final concentration), and fluorescence intensity of the solution was determined (excitation wavelength: 470 nm, emission wavelength : 520 run) . The number of bacteria ingested was determined by comparison with a standard curve obtained by lysing adefined number of bacteria in an SDS solution (1%, 2 min heating at 90°C), and determining the fluorescence intensity in aliquots of this solution diluted in the above buffer . The additional treatment with SDS was necessary because noningested bacteria, in contrast to ingested ' K. L. Williams, Australian National University, Canberra . Personal communication. ones, are not lysed by Triton X-100. The treatment does not cause a change in quantum yield. Under the experimental conditions described, no quenching by cytoplasma components has been observed, as the calibration factor was found to be the same in lysed cell medium (pH 9.2) and in Na2HP04 solutions (pH 9.2). The fluorescein fluorescence is very pH sensitive, but all the experiments were performed over a pH range of pH 9-9.2, within which the dye fluorescence is constant and maximal. The dye to bacteria ratio was approximately the same in all FITC batches as indicated by the calibration factor. Furthermore, as acontrol, "S-labeled E. coli B/r were used as substrate particles . Ingestion rates observed with radioactively labeled bacteria were quantitatively the same as those measured with the use of FITC-labeled bacteria. tATEXPHA000YTOSIs : Thephagocytosisassay with monodispersepreparations of polystyrene latex beads (diameter 1.08 lam, Dow-Latex; Serva, Heidelberg, W. Germany) was performed in exactly the same way as described for bacteria . The number of ingested beads was determined by measuring optical density at 560 nm after lysis of amoebae as described above and comparison with a standard curve. Alternatively, FITC-labeled latex beads (diameter 0.883 ftm; Polysciences, Inc., Warrington, Pa .) can be used and determined fluorimetrically as described above. Uptake of bacteria and latex beads can be determined simultaneously in the same batch of cells. After incubation as described above and lysis of amoebae with Triton X-100, the number of ingested latex beads can be determined by measuring the optical density. Subsequently, the beads are removed by centrifuging for 10 min at 500 g. Ingested FITC-bacteria are lysed by Triton X-100, and the fluorescence of the supernate is determined . FRY rHROCYTE PHAGOCYTOSrs : Uptake of sheep erythrocytes was determined as described previously (I1) . In brief, erythrocytes and amoebae were incubated in axenic medium . Aliquots of 2 ml were taken at various times and diluted fivefold in ice-cold water to lyse noningested erythrocytes . The amoebae were pelleted by centrifugation and dissolved in 2 ml of formic acid. Hemoglobin of ingested erythrocytes was determined by measuring optical density at 420 run, and their number, was estimated by comparison with a standard curve. PHAGOCYTOSISONFILTERS : Amoebae (5 X106) and substrate particles (1 .5 x 10") were rapidly mixed in 2 ml of the medium specified . The suspension was uniformly deposited on filters (AABP04700, 0.8 firn, pore size 47 mm diameter; Millipore Corp., Bedford, Mass .) resting on presoaked absorbent support pads. 
The samples were then incubated in 60-mm plastic petri dishes at the desired temperature in a moist atmosphere. After various times, cells were harvested by placing the filter in a centrifuge tube containing 4 ml of ice-cold medium and resuspending the cells by vigorous shaking . Subsequently, the procedures described above were followed . Phagocytotic uptake of the various particles is saturable with respect to particle concentration (data not shown) . A maximum rate of initial uptake was obtained at a ratio of particles to amoebae of about 200:1 . In shaken cultures, a wild-type amoeba ingests about four to eight E. coli B/r (cf. Fig. 4), about 8-14 latex beads (cf. Fig. 5), and about 0.2 erythrocytes (cf. Fig. 7) per min. Uptake rates are linear with incubation time for --8 min with bacteria, -4 min with latex beads, and --40 min with erythrocytes . Control incubation at ice-bath temperature or in the presence of an uncoupler of oxidative phosphorylation (cyanide-m-chlorophenylhydrazone, I fiM) yielded negligible background levels in the case of bacteria and erythrocytes. In contrast, some batches of latex beads yielded relatively high background values. This indicates that mere adsorption of latex beads sometimes interferes with the phagocytosis assay and makes the estimation of truly ingested particles inaccurate . ASSAY FOR PINOCYTOSIs : FITC-dextran (FITC-dextran 60, Pharmacia, Uppsala, Sweden) was used as a fluid-phase marker. Amoebae were suspended at a density of 2-4 x 10' cells/ml in axenic medium, and FITC-dextran was added to a final concentration of 2 mg/ml. Pinocytosis was stopped by diluting 1-ml aliquots at various times into 4 ml of ice-cold 20 mM phosphate, pH 6.2. Cells were collected by centrifuging for 5 min at 100 g, resuspended in phosphate buffer, and centrifuged through a poly(ethyleneglycol) 6000 solution as described above. After washing once, cells were resuspended in 2 ml of a 50 mM NazHP04 solution and the cell number was counted . Subsequently, cells were lysed by addition of Triton X-100 (0 .2% final concentration), the fluorescence intensity of the solution was measured, and the pinocytosed volume was determined by comparison with a standard curve. According to the following criteria, FITC-dextran qualifies as a suitable fluidphase marker : FITC-dextran is nontoxic for the cells and can be analyzed fluorimetrically in small amounts. Uptake of FITC-dextran is directly proportional to its concentration in the medium from 0.5-10 mg/ml ( Fig . 1 A). This is consistent with a bulk transport of this molecule, because receptor-mediated uptake would be expected to exhibit saturation characteristics. Uptake rate is proportional to cell concentration (Fig. t B) and proceeds linearly with time for at least I h (cf. Fig. 2). Furthermore, no uptake is observed at 0°C, or at 20°C in the presence of an uncoupler of oxidative phosphorylation (carbonyl cyanide-mchlorophenylhydrazone, I uM). Uptake rates obtained with FITC-dextran were identical to those measured with the use of horseradish peroxidase, a wellestablished fluid phase marker (l2) . Planseewerke, Plansee, Austria) was added/ 10 ml of the cell suspension, and the incubation was continued for another 2 h . To remove the bulk of noningested tungsten beads, the incubation mixture was allowed to stand for 5 min without shaking, and the supernatant fluid was carefully decanted into centrifuge tubes . 
The mixture was diluted l :3 by addition of axenic medium and centrifuged for 2 min at 70 g in a swing-out rotor to precipitate mainly cells containing tungsten beads . The centrifugation was repeated until no cells containing tungsten beads were detectable microscopically in the supernate. The tungsten treatment was repeated twice with intermittent growth of the amoebae at 20°C . Attempts to facilitate the separation procedure by use of iron or nickel beads and subsequent removal of phagocytosing cells magnetically were not successful . The magnetic particles tended to form clumps during the incubation period and were scarcely phagocytosed. Cells remaining after the tungsten treatment were plated clonally at 20°C on agar plates in association with E. coli B/r. Clones were examined for temperature-sensitive growth by transferring cells with toothpicks in duplicate to agar plates previously spread with bacteria and by replica plating at 20°and 27°C . Temperature-sensitive clones were purified twice over single colonies. Only one clone was collected from each batch of cells to ensure that all mutants obtained are of independent origin . A strategy for the isolation of mutants in the endocytosic pathway must recognize that pinocytosis and phagocytosis, the two modes of endocytosis, may share common steps (13) . Because endocytosis is the sole mechanism of nutrient uptake in D. discoideum, mutants defective in steps common to both processes are expected to be lethal. Consequently, a selection scheme was devised to isolate conditional-defective mutants. 458 Till JOURNAL OF CELL Biotocv -Vowmi 86, 1980 Cells of strain AX2 growing exponentially in axenic medium were mutagenized and subsequently incubated in axenic medium in the presence of tungsten beads at 27°C, the nonpermissive temperature . Cells that did not phagocytose tungsten beads could be isolated on the basis of their lower density. The selection is quite effective since only 1-5% of cells, virtually free of tungsten beads, remained after this procedure. Growth of amoebae on bacteria is dependent upon phagocytosis. Therefore, cells remaining after the tungsten treatment were screened for temperature-sensitive growth at 27°C on nutrient agar plates in association with E. coli B/r. About 5-10% of the clones were found to be temperature sensitive for growth . This frequency is about 50 to 100 times that obtained with mutagenized cells when the tungsten treatment is omitted. About 100 independently selected mutants have been isolated . Phenotypic Classification of Mutants All mutants are temperature sensitive for growth on bacteria plated on nutrient agar. Although growth on bacteria is dependent upon phagocytosis, defects in a variety of essential cellular functions only indirectly connected to the process of phagocytosis are expected to show this phenotype. To detect mutants directly affected in the phagocytotic process, mutant strains were initially tested for their ability to grow by pinocytosis in axenic medium at the nonpermissive temperature . Subsequently, phagocytotic activities were measured directly by incubating amoebae in shaken cultures in axenic medium using E. coli B/r, latex beads, and erythrocytes as particulate prey. Various particles have been employed to study the potential influence of different surface properties upon the acceptability of phagocytotic substrates . Mutants have been grouped into three classes according to their growth characteristics . 
Furthermore, mutants of each class could be divided into subclasses according to their phagocytotic properties (Table 1) . CLASS I STRAINS : 15 mutant strains grow in axenic medium by pinocytosis at 20°and 27°C like wild-type cells with doubling times of ---8 h. 12 of these mutants (class IA) phagocytose the particles used with initial rates comparable to those of wild-type cells at both temperatures, whereas three Most of the mutants grow normally in axenic medium at 20°C . However, after shifting to 27°C they only grow initially, but growth decreases gradually and finally stops after four to eight generations . About half of these strains are irreversibly injured, whereas the others recover when shifted back to 20°C . Most of these mutants phagocytose normally at 20°C, but at 27°C the phagocytotic capacity decreases in parallel with the decreasing growth rate (class II A) . 11 strains were found that do not phagocytose in shaken cultures in axenic medium at all (class II B ). CLASS III STRAINS: Five mutant strains are extremely temperature sensitive for growth in axenic medium and stop growing immediately after shifting to the restrictive tempera ture . Two of these mutants die at high temperature, whereas the others survive at least 2 d at high temperature and recover when shifted back to 20°C . These strains stop phagocytosis immediately after shifting to the higher temperature and recover again upon short time incubation (15 min) at 20°C . In summary, most of the mutant strains temperature sensitive for growth via pinocytosis in axenic medium phagocytose normally at 20°C and the phagocytotic capacity decreases at 27°C in parallel with the decreasing growth rate (class II A and class III) . These strains could be impaired in any essential cellular process participating either directly or indirectly in endocytosis . 12 mutants (class I A ) grow and phagocytose normally in axenic medium at 20°and 27°C . Because these mutants are unable to grow on bacteria at the restrictive temperature, they are probably impaired in steps subsequent to uptake, for example, in digestion of bacteria . 14 mutants have been found that do not phagocytose in shaken cultures in axenic medium, either at the permissive or at the restrictive temperature . I I of these strains belong to class IIB being in addition temperature sensitive for growth in axenic medium as well as on bacteria. This latter phenotype may be as a result of a second mutation . The extended treatment of amoebae with relatively high concentrations of the potent mutagen N-methyl-N'-nitro-N-nitrosoguanidine heightens the incidence of multiple mutations . The remaining three strains (class I A ), named HV29, HV32, and HV33, grow in axenic medium via pinocytosis at the permissive and restrictive temperature with wild-type characteristics but do not phagocytose any of the various substrate particles when incubated in axenic medium in agitated suspensions. These strains carry a mutation in phagocytotic uptake, designated as phg, without being impaired in other essential cellular functions . In these strains, development, fruiting body formation, and spore size are identical to the AX2 parent strain . Because these mutants exhibited a clear-cut phenotype which appeared to be directly related to the phagocytotic event, we decided to subject these isolates to further scrutiny . 
Comparison of Endocytosis in Wild-type and Mutant HV32 Amoebae Strains HV29, HV32, and HV33 have the same phenotype and, therefore, a detailed analysis of the endocytotic properties is presented for strain HV32 as a representative example . Pinocytosis was measured in axenic medium at 20°and 27°C . Consistently mutant amoebae have been found to pinocytose at about twice the rate of wild-type amoebae . Uptake rates are the same at both temperatures, and the results obtained at 20°C are presented in Fig. 2 . To demonstrate that adhesion of substrate particles to HV32 amoebae in shaken cultures is actually the determining factor for internalization, phagocytosis was assayed in the absence of shear forces on filters . For this, amoebae were incubated with the various substrate particles on filters resting on pads saturated with axenic medium . Substrate particles are immobilized under these conditions and E. coli B/r, latex beads, and erythrocytes can be engulfed by mutant and wild-type amoebae with comparable rates at 20°and 27°C (cf. Fig . 3) . Apparently, an initial binding step is affected by mutation in mutant HV32 cells, leading to the inability of phagocytosis in agitated suspensions . When agitated in phosphate buffer with E. coli B/r as the sole food source, mutant and wild-type amoebae grow with the same doubling time of -2 .5 h at 20°C . Because growth is dependent upon phagocytosis under these conditions, inhibition of phagocytosis appears to be caused by components contained in axenic medium . Axenic medium contains glucose, peptone, and yeast extract in phosphate buffer. Consequently, phagocytosis of the various substrate particles in shaken cultures was measured in phosphate buffer in the presence of each of these components . Uptake of E. coli B/r by mutant HV32 cells is completely inhibited by glucose but proceeds with high rates in wild-type VOGEL ET AE . Recognition in Phagocytosis in D . discoideum amoebae (Fig . 4) . Mutant and wild-type amoebae ingest bacteria with similar rates in buffer alone and after addition of peptone or yeast extract. In contradiction, uptake of latex beads by mutant HV32 cells is completely inhibited in the presence of peptone and yeast extract but in phosphate, or in the presence of glucose, uptake proceeds normally as compared to wild-type cells (Fig . 5) . Uptake of E. coli B/r and latex beads was also measured simultaneously in the same batch of mutant cells in phosphate buffer alone or in the presence of glucose or peptone (Fig. 6) . Glucose selectively inhibits the uptake of bacteria, whereas ingestion of latex beads is not affected . On the other hand, only uptake of latex beads is blocked in the presence of peptone, while ingestion of bacteria remains rapid. Uptake of erythrocytes could only be determined quantitatively in axenic medium (Fig . 7) but not in 20 mM phosphate buffer, as erythrocytes lyse in hypotonic medium . On the other hand, amoebae ingest poorly in high salt solutions such as physiological saline. Erythrocytes were stabilized by fixation with glutaraldehyde, and phagocytosis was determined qualitatively by microscope observation after incubation with amoe- bae in phosphate buffer on a rotary shaker . Only wild-type cells were observed to ingest fixed erythrocytes under these conditions, whereas mutant HV32 cells could not internalize erythrocytes, even in phosphate buffer . To summarize, wild-type cells appear to be indiscriminate regarding the nature of substrate particles taken up . 
However, mutant HV32 cells disclose clear-cut preferences for the type of particles. Uptake of E. coli B/r is selectively inhibited by glucose, whereas latex uptake is selectively inhibited by components contained in peptone or yeast extract. Finally, mutant HV32 amoebae are not capable of ingesting erythrocytes at all. We may conclude from these observations that functionally independent recognition or binding sites are present on the cell surface of D. discoideum and that binding properties of mutant HV32 amoebae are altered by mutation . Specificity and Mode of Inhibition of Phagocytosis by Sugars and Peptone in Mutant HV32 Uptake rates for E. coli B/r were measured in the presence of various sugars and the results are listed in Table II . All glucose derivatives with different anomeric configuration or different substitution on the Cl-carbon of glucose are strong inhibitors (group a) . However, rather strict structural requirements for inhibition are found at other positions in the sugar. Derivates of glucose such as deoxyglucose or N-acetylglucosamine (cf. group b), or diastereomeric sugars such as mannose, allose, and galactose (cf. group c) are far less effective. When oligosaccharides are used as inhibitors, glucose has to be bound glycosidically at the terminus . Lactose (Gal-fl-1,4-Glc) for instance is only a poor inhibitor. Phagocytosis of bacteria by wild-type amoebae is not significantly influenced by the sugars listed above. The type of inhibition of glucose for E. coli B/r uptake was analyzed in analogy with respect to enzyme kinetics . Phagocytosis was measured with subsaturating amounts of bacteria in the presence of various concentrations of glucose. After plotting of reciprocal uptake rates against reciprocal concentrations of bacteria, straight lines of differing slope with a common intercept on the ordinate were obtained (Fig . 8) . This indicates Comparison of erythrocyte uptake at 20°C by wild-type and mutant HV32 amoebae in shaken cultures in axenic medium . that glucose is a competitive inhibitor of E. coli B/r uptake, and the apparent inhibition constant was found to be^-0.7 mm . E. coli B/r contains glycosidically linked terminal glucose residues (14) . Glucose inhibits specifically and competitively the uptake of these bacteria by mutant HV32 cells . These findings strongly suggest that reversible binding of bacteria to amoebae is achieved by a glucose-binding protein . To test this possibility further, a lipopolysaccharide mutant of E. coli (K2754) which does not contain terminal glucose residues on the surface (15), was chosen as a phagocytotic substrate . In phosphate buffer, wild-type and mutant amoebae ingest the parent K-12 E. coli cells, containing terminal glucose residues, at rates very similar to those of E. coli B/r cells and the uptake is inhibited by glucose in mutant amoebae (data not shown) . The nonglucosylated K2754 bacteria are ingested by wild-type amoebae at rates comparable to those of the glucose-containing E. coli B/r (Fig . 9) . In contrast, mutant amoebae cannot phagocytose the glucose-free bacteria under any conditions . Taken together, these observations can be plausibly explained by the assumption that wild-type cells contain a glucose-binding protein and an additional binding site that is altered by mutation in mutant HV32 cells . A clear-cut analysis of inhibition of latex uptake by peptone or yeast extract in mutant cells seemed to be difficult because peptone and yeast extract are complex mixtures chemically not well defined . 
Peptone was found to be effective as inhibitor at concentrations as low at 10-20 pg/ml, whereas -0 .5-1 mg/ml of yeast extract was necessary for complete inhibition . Polystyrene latex spheres are very hydrophobic and many proteins such as immunoglobulins are tightly bound to the surface of latex beads (16) . Because peptone, a tryptic digest of meat, contains high amounts of amino acids and oligopeptides, the possibility was investigated that components from peptone were bound to latex beads and thereby change their surface properties in a way that binding to mutant amoebae is prevented . Latex beads were preincubated in a peptone solution (10 mg/ml) and washed with phosphate buffer . Subsequently, phagocytotic uptake by wild-type and mutant HV32 amoebae was determined in phosphate buffer. The pretreated latex beads were no longer ingested by mutant HV32 cells, but uptake proceeds normally in wild-type amoebae (Fig . 10). This effect is not specific for a certain component of peptone, because latex beads coated with FITC-labeled anti-rabbit immunoglobulin ( Fig. 10) or serum albumin (data not shown) were not phagocytosed either . The coating of the latex bead by fluorescein-conjugated immunoglobulin was confirmed by fluorimetric examination . Therefore, inhibition of latex uptake by peptone is not specific but seems to be caused by different surface properties of latex beads after coating with proteins or peptides . An explanation for this observation could be that coating with protein renders the hydrophobic polystyrene spheres more hydrophilic . Wild-type amoebae apparently do not discriminate between strong hydrophobic and more hydrophilic particles, but ingest both equally well . However, mutant HV32 cells seem to be altered in such a way that successful interaction is achieved only with hydrophobic particles. Consequently, the protein-coated and therefore more hydrophilic latex beads can not be phagocytosed . Cohesiveness of Mutant HV32 Cells Compared to Wild-type Cells Exponentially growing wild-type cells cohere rapidly when resuspended in phosphate buffer . Within 15 min, almost all cells form large, tight aggregates as revealed by microscope observation (Fig. 11). During early development, this kind of aggregation is inhibited or reversed by EDTA (17) . In contrast to wild-type cells, almost all mutant HV32 cells remained as 462 TUE IOURNAI Or CEEL BiOEOGV -VOLUME 86, 1980 single cells after identical pretreatment (Fig . I1). During acquisition of aggregation competence, EDTA-resistant cohesiveness develops in wild-type cells (17) . This is also the case for mutant cells. They start to form tight aggregates after 8-10 h of incubation in phosphate buffer, which are resistant to EDTA treatment. Furthermore, adhesion to foreign surfaces is also altered in mutant cells. Wild-type cells suspended in axenic medium or in phosphate buffer and incubated without shaking in polystyrene petri dishes adhere tightly to the surface in both media. In contrast, mutant cells adhere only when incubated in phosphate buffer, but remain in suspension when incubated in axenic medium. This behavior parallels the binding properties of mutant cells for hydrophobic polystyrene latex particles and for more hydrophilic protein-coated latex particles . The observation suggests that adherence of cells to an extended surface reflects their attempt to phagocytose a particle of infinite size . 
Isolation of Temperature-insensitive Revertants Mutant strains HV29, HV32, and HV33 have an identical phenotype according to the criteria described above. No direct correlation was detected between altered phagocytotic properties in these strains and their temperature sensitivity for growth on bacteria plated on agar . We have isolated spontaneous revenants that have regained the ability to grow on bacterial plates at 27°C . Revenants arose with a frequency of about 2 x 10-5 for strains HV29 and HV32 and with a frequency of about 5 x 10 -7 for strain HV33 . Four revenants of each strain were characterized more closely, and all of the revenants displayed the mutant phenotype for phagocytosis and cohesiveness . Therefore, temperature sensitivity is caused by a secondary mutation which is unrelated to the mutation causing the altered phagocytotic phenotype . DISCUSSION The major finding of the present work was the identification of two alternative mechanisms for recognition in the phagocytotic process of the unicellular slime mold D. discoideum . This was achieved by isolation of mutants with altered phagocytotic properties. Temperature-sensitive phagocytosis mutants have been described previously (18) . However, up to the present time these mutations could not be attributed unambiguously to the endocytotic process because the impairment of other essential cellular activities could not be excluded . We have found three mutant strains exhibiting a phenotype which is unequivocally related to the process of phagocytosis per se . Analysis of the mutant phenotype revealed that functionally independent binding sites are present on the cell surface of D . discoideum which recognize different surface properties on a particulate prey. Based on the data presented above, the following conclusions can be drawn (cf. Fig. 12): First, polystyrene latex beads, having a very hydrophobic surface (l9), are bound and internalized by wild-type and mutant amoebae equally well. Latex beads do not carry functional groups that can be imagined to interact specifically with a cell surface component. Thus, mere physical forces seem to promote adhesion between cells and latex particles . High interfacial tension between the particles and the surrounding medium, but low interfacial tension against the phagocytotic cell favors binding and phagocytosis (19,20). Because the existence of specific membrane receptors for these particles is unlikely, the term "nonspecific receptor" has been used to characterize cell surface components mediating this type of binding (2). Based on kinetic data, a saturable population of "binding sites" for latex beads has also been suggested to exist on the cell surface of Acanthamoeba (21) . D. dictyostelium wildtype amoebae appear to internalize a wide variety of substrate particles with relatively hydrophilic (e .g., bacteria, erythrocytes) and strongly hydrophobic (latex) surface properties after binding to this nonspecific receptor . The phg mutations in strains HV29, HV32, and HV33 apparently have altered the surface properties of these cells in such a way that their ability for nonspecific binding by hydrophobic interaction is changed . Only the strongly hydrophobic polystyrene latex beads can still be bound and ingested by this receptor but not more hydrophilic particles such as protein-coated latex beads, bacteria, and erythrocytes . Second, characterization of the mutant phenotype with respect to phagocytosis disclosed another binding site . Mutant cells avidly ingest E. 
coli B/r, a bacterium containing terminal glucose residues in a glycosidic linkage (14) . Uptake of these bacteria is inhibited specifically by glucose and by oligosaccharides containing glycosidically linked terminal glucose residues . The inhibition is competitive with an inhibition constant of -0 .7 mM . This finding strongly suggests that binding of these bacteria is a specific, reversible carbohydrate recognition . The recognition site might be a monovalent or multivalent lectinlike protein . The existence of this binding site is overshadowed in wild-type amoebae, as bacteria can also be ingested by the nonspecific recognition mechanism . Strong evidence in favor of this model is the observation that E. coli cells without glucose residues on the cell surface (E. coli K2754) cannot be phagocytosed by mutant amoebae under any conditions, whereas wild-type amoebae ingest these bacteria at rates comparable to that of the glucose-containing E. coli B/r (cf. Fig . 9) . The lectin-type receptor cannot recognize E. coli K2754, because glucose is not present on the cell surface . Wild-type amoebae can ingest these bacteria via the nonspecific receptor, but this recognition site is altered in mutant amoebae VOGEL 11 At . Recognition in Phagocytosis in D-discoideum and will not interact with the relatively hydrophilic surface of bacteria . Therefore, ingestion of these bacteria by mutant cells cannot be achieved by either recognition mechanism. Furthermore, it can be concluded from this experiment that the lectintype receptor is located on the amoeba cell surface and not on the bacterial surface. This situation is just opposite to that observed for phagocytosis of E. coli by mouse peritoneal phagocytes (22) . In this case, mannose residues on the phagocyte surface seen to be recognized by a bacterial lectin . In Table III the phagocytosic properties of wild-type and mutant amoebae are summarized . The observed mutant phenotype regarding different substrate particles and incubation conditions agrees perfectly well with the behavior predicted on the basis of the proposed model. All three mutants with altered properties in cell-particle binding in phagocytosis are, in addition, altered in other cohesive properties of the cells. When suspended in axenic medium, they do not adhere to plastic surfaces and mutant cells do not form EDTA-sensitive aggregates, when suspended in phosphate buffer . Because the mutants are of independent origin it seems likely that there is a common basis for these properties. Unspecific cohesiveness of the cells is probably determined by interfacial tension which itself is determined by the hydrophobicity of the cell surface (19) . If the mutations described here cause more hydrophilic surface properties of mutant cells compared to wild-type cells, a general change of cohesive properties as observed here could result. It is probably this change in cohesive properties which led to enrichment of these cells in the selection procedure. Cells that do not phagocytose in axenic medium and, in addition, have no tendency to clump together are expected to be selectively enriched in the supernate after tungsten treatment. Nevertheless, these mutants would have been lost during the subsequent screening had they not carried an accidental second but unrelated mutation that conferred a temperature-sensitive phenotype. In D. discoideum two functionally independent mechanisms for cell aggregation have been identified (17) . 
EDTA-sensitive side-by-side cohesion of vegetative cells is mediated by contact sides B, whereas EDTA-resistant end-to-end cohesion of aggregation-competent cells is mediated by contact sides A. Different glycoproteins (23,24) and carbohydrate-binding proteins seem to be necessary for proper development (25)(26)(27). Mutants HV29, HV32, and HV33 do not form EDTA-sensitive aggregates . Therefore, 464 THE JOURNAL OF CELL BIOLOGY -VOLUME 86, 1980 contact site B-mediated side-by-side association seems to be impaired in these cells. Because the mutants develop normally after acquisition of aggregation competence, contact site Bmediated cohesion is not necessary for proper development, but seems to be involved only in physical attraction between the cell surface and a second surface. This might be another amoeba cell, a substrate particle, or an extended glass or plastic surface. Little is known chemically about membrane components which determine the "stickiness" of cell surfaces . Biochemical comparison of wild-type and mutant amoebae will possibly allow the identification of these components .
2014-10-01T00:00:00.000Z
1980-08-01T00:00:00.000
{ "year": 1980, "sha1": "9a60241bdf9eca339e56ccfb89059169f4891dc4", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/86/2/456/1074370/456.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "9a60241bdf9eca339e56ccfb89059169f4891dc4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
203610163
pes2o/s2orc
v3-fos-license
On the fractional susceptibility function of piecewise expanding maps We associate to a perturbation $(f_t)$ of a (stably mixing) piecewise expanding unimodal map $f_0$ a two-variable fractional susceptibility function $\Psi_\phi(\eta, z)$, depending also on a bounded observable $\phi$. For fixed $\eta \in (0,1)$, we show that the function $\Psi_\phi(\eta, z)$ is holomorphic in a disc $D_\eta\subset \mathbb{C}$ centered at zero of radius $>1$, and that $\Psi_\phi(\eta, 1)$ is the Marchaud fractional derivative of order $\eta$ of the function $t\mapsto \mathcal{R}_\phi(t):=\int \phi(x)\, d\mu_t$, at $t=0$, where $\mu_t$ is the unique absolutely continuous invariant probability measure of $f_t$. In addition, we show that $\Psi_\phi(\eta, z)$ admits a holomorphic extension to the domain $\{ (\eta, z) \in {\mathbb{C}}^2\mid 0<\Re \eta<1, \, z \in D_\eta \}$. Finally, if the perturbation $(f_t)$ is horizontal, we prove that $\lim_{\eta \to 1}\Psi_\phi(\eta, 1)=\partial_t \mathcal{R}_\phi(t)|_{t=0}$. 1. Introduction 1.1. Linear response and violation thereof. Response theory describes how the "physical measure" µ t of a dynamical system f 0 responds to perturbations t → f t , given a class of observables (test functions) φ. Classically, one studies linear response, where the goal is to express the derivative of R φ (t) := φ dµ t , for a fixed φ in a suitable class, in terms of (φ together with) f 0 , the vector field v 0 := ∂ t f t | t=0 , and the measure µ 0 . Linear response was first investigated [14,23,30,19] for smooth hyperbolic dynamics (Anosov or Axiom A). In the smooth mixing hyperbolic case, the physical measure µ t is the SRB measure [42] and corresponds to the fixed point of a transfer operator L t (whose dual preserves Lebesgue measure) on a suitable Banach space B (see e.g. [5]). This fixed point is a simple isolated eigenvalue in the spectrum of L t , and linear response can be proved via perturbation theory for simple eigenvalues (see e.g. [5, §2.5 and §5.3]). To avoid technicalities, we write the key formulas in the easy case of smooth expanding circle maps (see e.g. [4] for more details): Then, the physical measure is the unique absolutely continuous invariant probability measure, that is, µ t = ρ t dx, with ρ t smooth. The operator L t acts on smooth functions, and we have the trivial but key identity (1) ρ t − ρ 0 = (I − L t ) −1 (L t − L 0 )ρ 0 . Next, assuming that v 0 = X 0 • f 0 , it is not hard to show that lim t→0 Then, since φL k 0 (ψ) dx = (φ • f k 0 )ψ dx, we have, 1 for continuous φ (say), Finally, using (1), we get the fluctuation-dissipation formula (expressing the derivative as a Green-Kubo [21] sum of decorrelations) and, if φ is differentiable, integrating by parts, we get the linear response formula Exponential decay of the series in the right-hand side of (3) is not as apparent as in (2), since the derivative (φ • f k 0 ) ′ (x) = φ ′ (f k 0 (x)) · (f k 0 ) ′ (x) grows exponentially. However, the presence of this derivative should be expected 2 when describing response, and Ruelle [31] pointed out that it was meaningful to view the linear response formula (3) as the value at z = 1 of a natural power series, the susceptibility function of f 0 and v 0 = X 0 • f 0 . 
In the present smooth one-dimensional expanding case, setting z = e iω , the susceptibility function Ψ φ (z) is the Fourier transform of the response function k → X 0 (φ • f k 0 ) ′ ρ 0 dx, that is: We have, integrating by parts, so that exponential mixing implies that the susceptibility function Ψ φ (z) is holomorphic in a disc of radius larger than one, using that X 0 ρ 0 is smooth. In situations when the map t → R φ (t) is not differentiable (either due to bifurcations in the dynamics [27,3,9,6], or to singularities [7,29] of the test function φ), it is natural to consider fractional response, i.e., to investigate weaker moduli of continuity of this map. The simplest situation where linear response breaks down is that of piecewise expanding unimodal maps. In this case, the transfer operator has a spectral gap when acting on BV . However, the derivative ρ ′ 0 of the invariant density ρ 0 involves a sum of Dirac masses, and thus does not belong to BV , or to a space on which decorrelations are summable. Keller [24] showed in 1982 that |ρ t − ρ 0 | L 1 = O(|t|| log |t||). Examples of families (f t ) and smooth functions φ such that |R φ (t) − R φ (0)| ≥ |t|| log |t|| were described in [27] and [3,Theorem 6.1]. (See also the previous work of Ershov [17].) Baladi and Smania [9] showed that if the family f t is tangential to 3 the topological class of f 0 , then for any continuous function φ, the map R φ (t) is differentiable at t = 0. They also showed that when horizontality does not hold, then there exist smooth observables φ such that R φ (t) is not Lipschitz at t = 0. More recently, de Lima and Smania [16] showed central limit theorems which imply that for a generic map f 0 , if v 0 = ∂ t f t | t=0 is not horizontal, then for a generic φ, the map t → φ dµ t cannot be Lipschitz on a set of parameters t of positive Lebesgue measure. (See also [15] for a related result.) In the piecewise expanding unimodal case, the susceptibility function Ψ φ (z) can also be defined by the formal power series (4), and it has been studied in [3,9,8]. In particular [8], if the postcritical orbit is dense (a generic condition) then Ψ φ (z) has a strong natural boundary on the unit circle, while if the perturbation v 0 = X 0 • f is horizontal and, in addition (a generic condition) the postcritical orbit is Birkhoff typical, then the nontangential limit of Ψ φ (z) as z tends to 1 coincides with ∂ t R φ (t)| t=0 . In the present paper, we introduce and study a two-variable fractional susceptibility function Ψ φ (η, z), for ℜη ∈ (0, 1), in the setting of piecewise expanding unimodal maps. The initial motivation for this work comes from the following paradox: In 2005, Ruelle and Jiang considered [32,22] finite Misiurewicz-Thurston (MT) parameters t 0 in the quadratic family f t (x) = t − x 2 . By definition of MT, there exists a repelling periodic point x 0 and M ≥ 2 such that f M t 0 (0) = x 0 . Such parameters t 0 are Collet-Eckmann, so f t 0 admits a unique absolutely continuous invariant probability measure µ t 0 . The map f t 0 also has a finite Markov partition, which simplifies the analysis. Ruelle and Jiang proved that, for any C 1 observable φ, the susceptibility function Ψ φ (z) defined by (4) is meromorphic in the whole complex plane, and that z = 1 is not a pole. 
Since Ψ φ (1) is the natural candidate for the derivative of R φ (t), this raised the hope that there could exist a "large" subset Ω of Collet-Eckmann parameters, containing t 0 , and such that t → R φ (t) would be differentiable in the sense of Whitney on Ω at t 0 . However, Baladi, Benedicks, and Schnellmann [6] later showed that for any mixing (non horizontal 4 ) MT parameter t 0 , there exist a C ∞ observable φ, a sequence t n → t 0 of Collet-Eckmann parameters (with bounded constants), and C > 1 such that It is not known whether t 0 is a Lebesgue density point in the set of t n such that (5) holds. Also, the analogue of the de Lima-Smania [16] central limit theorems is not known in this setting. However, we expect that a similar CLT holds, and in particular that Ψ φ (1) cannot be interpreted as the derivative of R φ (t) in the sense of Whitney on a set of parameters containing t 0 as a Lebesgue density point. The fact that Ψ φ (z) is holomorphic at z = 1 can thus be viewed as a paradox. Baladi and Smania [11] very recently introduced two-variable fractional susceptibility functions Ψ φ (η, z) for the quadratic family, with the goal of resolving this paradox. The 3 We refer to the beginning of §4.4 for a definition of tangentiality, which is equivalent to the horizontality condition (16) on v0. 4 All MT parameters are non horizontal by [1] or [26]. piecewise expanding case is a toy model for the quadratic setting, and indeed, several ideas previously developed for piecewise expanding families [9] were crucial to obtain the breakthrough result (5) of [6] for the quadratic family. Although we believe it is interesting in its own right, the analysis carried out in the present paper for this toy model can also be viewed as a "proof of concept," establishing the feasibility of the fractional susceptibility function approach. 1.2. Informal statement of the results. We next describe briefly our main results. It is well-known that there is no canonical notion of a fractional derivative, see [35] and [28] for a presentation of the theory. We use here the Marchaud fractional derivative, defined for suitable functions g by where Γ denotes Euler's Gamma function. Our motivation for using this particular fractional derivative is threefold: First, M η g can be well-defined even if g does not decay at infinity. Second, the Marchaud fractional derivative of a constant function vanishes, while this is not the case for other fractional derivatives, in particular for the Bessel potential derivative defined by F −1 (1 + |ξ| 2 ) η/2 Fg, where F is the Fourier transform. Finally, the expression of Marchaud derivatives in terms of differences is convenient in view of (1). Nevertheless, we expect that fractional susceptibility functions defined via (e.g.) Bessel, or Riemann-Liouville fractional derivatives would enjoy similar properties as those we establish here using the Marchaud derivative. We define for η ∈ (0, 1) the fractional susceptibility function of the perturbation (f t ) and the observable φ ∈ L ∞ , to be the formal power series and the frozen fractional susceptibility function of (f t ) and φ to be the formal power series In both susceptibility functions, L t is the transfer operator L t ϕ(x) = ft(y)=x ϕ(y) |f ′ t (y)| . 
Our first main result (Theorem 2.3) says that if φ is bounded, then for any η ∈ (0, 1) the improper integrals in the two above formal power series are convergent, and that Ψ φ (η, z) and Ψ fr φ (η, z) are holomorphic functions of z in a disc D η of radius (which may depend on f 0 ) larger than one. In addition, Remark 1.1. Our lower bound for the radius of D η tends to 1 as η → 1. If the critical point of f 0 is preperiodic, then we expect Ψ φ (η, z) to be holomorphic in a disc of radius strictly larger than 1 and meromorphic in the entire complex plane for all 0 ≤ η ≤ 1. However, we believe that, generically, the radius of convergence of Ψ φ (η, z) should tend to 1 as η → 1. As a corollary of our main theorem and the results of [9], we obtain (Corollary 2.4) that, if v 0 is horizontal and φ is continuous, then Our second result, Theorem 2.6, is about more general fractional moduli of continuity: We consider there weighted Marchaud derivatives, replacing |t| −1−η by (log |t|) −β |t| −1−η . Applying Theorem 2.6 to β > 1 and η = 1 gives a modulus of continuity |t|| log |t|| β , almost reproducing the |t|| log |t|| estimates from [24,27,3]. Applying Theorem 2.6 to β < 0, we show (Corollary 2.7) that Ψ φ (η, z) and Ψ fr φ (η, z) are holomorphic in the domain The fractional susceptibility functions in (6) and (7) are of "fluctuation-dissipation" type. It is tempting to consider the response fractional susceptibility function 5 where the Marchaud derivative is now taken with respect to x. The arguments in the present paper easily show that for all η ∈ (0, 1) the function Ψ rsp φ (η, z) is holomorphic in a disc of radius larger than 1 for all bounded φ, and that in the horizontal case we have lim η→1 Ψ rsp φ (η, 1) = ∂ t R φ | t=0 . The function Ψ rsp φ (η, z) is at first sight the most seductive fractional susceptibility function. However, Ψ rsp φ (η, 1) has no reason to coincide in general with any fractional derivative of order η of R φ (t) (except in the limit η → 1, even in the linear examples studied in §2.3). In this respect, Ψ rsp φ (η, z) is not better than the frozen susceptibility function Ψ fr φ (η, z). Keeping also in mind that the goal of this fractional approach is to resolve the paradox described above for the quadratic family, we focus on the definitions (6) and (7) in the present paper, referring to [11] for more on the fractional response susceptibility function. Observe also that fractional derivatives do not enjoy a Leibniz formula with finitely many terms for the derivative of a product, so we cannot expect fractional susceptibility functions to be as well-behaved as ordinary ones. We next make a few comments about our method of proof. We shall consider transfer operators acting on Sobolev spaces H τ,p with p > 1, close to 1, and 0 < τ < 1/p. These spaces give us more flexibility than the BV spaces classically used for piecewise expanding interval maps. We exploit the bounds of Thomine [38] for the essential spectral radius of L t on such spaces H τ,p . In particular, we have ρ 0 ∈ H τ,p (see the beginning of §3.1), so that, for each η < 1, the function M η x ρ 0 belongs to a Sobolev space on which the transfer operator L t associated to f t has a spectral gap (see (22)). In order to apply the stable exponential decorrelation result of Keller-Liverani [25], we need Lasota-Yorke bounds which are uniform in t. Such bounds were known for the BV norm, but we have to carry out the corresponding estimates for the Sobolev spaces. 
The paper is organised as follows: Section 2 starts with definitions and formal statements. In §2.1, we state precisely Theorem 2.3 on fractional susceptibility functions, followed by its consequence (Corollary 2.4) in the horizontal case. In §2.2 we state Theorem 2.6 about other moduli of continuity and Corollary 2.7 about holomorphic extensions of the fractional susceptibility functions. In §2.3, we discuss two linear examples. Section 3 introduces the key tools: In §3.1, we recall properties of the transfer operator acting on Sobolev spaces, from the work of Thomine [38]. In §3.2, we discuss stability of mixing and mixing rates for good families, recalling in particular the results of Keller and Liverani [25] that we shall use, and stating the relevant technical lemmas (Lemma 3.2 and Lemma 3.1). Subsection 3.3 contains the proof of the perturbation Lemma 3.2 for Sobolev spaces which is needed to apply results of Keller and Liverani. In Section 4, we prove Theorems 2.3 and 2.6, as well as Corollaries 2.4 and 2.7, using the techniques presented in Section 3. Appendix A contains the simple proof that the limit of the Marchaud derivatives as η → 1 is the ordinary derivative. In Appendix B, we show the uniform Lasota-Yorke estimates (Lemma 3.1) needed to apply the results of Keller and Liverani to Sobolev spaces. Throughout, we shall use the notation C for a finite positive constant which can vary from place to place. Fractional susceptibility functions for piecewise expanding interval maps. Let I be the compact interval [−1, 1] and let r ∈ {2, 3}. We say that a continuous map f : I → I is a piecewise C r unimodal map if there exists −1 < c < 1 such that f is increasing on I + = [−1, c], decreasing on 6 I − = [c, 1], and f | Iσ extends as a C r map denoted f σ to a neighbourhoodĨ σ of I σ . If, in addition, λ(f ) := inf σ=± |f ′ σ (x)| > 1, we say that f : I → I is a piecewise C r expanding unimodal map, and we define (9) λ n (f ) := inf for all n ≥ 1, where f n σ is the composition of the maps f σ i , i = 1, . . . , n, and the points x in (9) are restricted to those x ∈Ĩ σ 1 for which the composition exists. Following [9], we say that a piecewise C r expanding unimodal map f is 6 The results of Thomine [38] hold for piecewise C 1+η 0 maps, up to replacing the condition τ < 1/p there by τ < max(η0, 1/p). It would be interesting to adapt our results to this setting, for 0 < η < η0. 7 Goodness will ensure the uniform Lasota-Yorke Definition 2.1 (Perturbation (f t ) of a piecewise C r expanding unimodal map). Let r ∈ {2, 3} and let f be a piecewise C r expanding unimodal map. Given ε > 0 and a family (f t ) |t|≤ε of piecewise C r unimodal maps (for fixedĨ ± ) is called a C r perturbation in all these cases, up to taking a smaller ε, we may and shall assume that Λ > 1. Any piecewise C 2 expanding unimodal map f t admits a unique absolutely continuous invariant probability measure µ t = ρ t dx. The measure µ t is ergodic, and it is mixing if f t is mixing. The density ρ t is the unique fixed point (see e.g. [2]) of the transfer operator L t defined on BV by We now recall the definition of the Marchaud fractional derivatives in order to introduce fractional susceptibility functions. Let 0 < η < 1, and set Γ η = η Γ(1−η) where Γ is Euler's function. Let g : R → C be a bounded globallyη-Hölder function withη ∈ (0, 1). Recall [35,p. 
110,Theorem 5.9] that for any η ∈ (0,η), the left-sided and right-sided Marchaud derivatives of g at t 0 ∈ R are The two-sided Marchaud derivative is then defined by The choice of the normalisation Γ η ensures the following key property: Assume that g is bounded on R and differentiable at t 0 . Then (The above lemma is certainly well-known, we provide a proof in Appendix A.) Given a C 2 perturbation (f t ) |t|≤ε of a piecewise C 2 expanding unimodal map f , and given ε 1 ≤ ε, we put 8 8 See also (26) for an alternative approach. Then, for any function φ ∈ L 1 (I), we define the η-fractional susceptibility function of (f (ε 1 ) t ) for the observable φ to be the formal power series and we define the frozen η-fractional susceptibility function of (f ) and φ to be the formal series The coefficient of z k in each of the two formal power series above is a sum of improper integrals, for t ∈ (−∞, 0) and t ∈ (0, ∞). We shall see in the proof of Theorem 2.3 that each integral converges, if φ belongs to L q for large enough q. In particular, using Fubini, we recover the formulas (6) and (7) stated in the introduction. We are now ready to state our main result. Recall Λ > 1 from Definition 2.1. For a function g(x, t) of two real variables, we denote by M η g t=t 0 the Marchaud derivative of g in the t variable, at t = t 0 . Theorem 2.3 (Fractional susceptibility function) . Let (f t ) |t|≤ε be a C 2 perturbation of a mixing piecewise C 2 expanding unimodal map f . Assume that either f is good or that all the f t are topologically conjugated to f . Then there exist κ < 1 (depending only on f 0 ) and ε 1 ∈ (0, ε) such that for any 0 < η < 1, and for any φ ∈ L q (I) with q > (1−η) −1 , the following holds for the fractional susceptibility functions of (f dx for all |z| ≤ 1. As explained in §1.2, formula (b) in the above theorem can be viewed as the key result of this work. The proof of Theorem 2.3 will be given in Section 4, after we introduce some necessary tools in Section 3. We next make a few remarks about the statement: Clearly, if φ ∈ L ∞ , then we can consider all values of η ∈ (0, 1). The proof of Theorem 2.3 actually shows that for φ ∈ L q (I), the response function is Hölder continuous with exponent η provided that q > (1 − η) −1 . We do not claim that the holomorphy radius given in claim a) is optimal. However, we expect that, generically, the maximal holomorphic extension radius of Ψ φ (η, z) tends to one as η → 1. It is unclear whether the frozen fractional susceptibility function is holomorphic in a disc of radius larger than one, uniformly in η → 1. Claim c) implies that, for any η ∈ (0, 1), and all |z| ≤ 1 For z = 1 this is reminiscent of the fluctuation-dissipation formula for linear response (see e.g. [7]). Note, however, that we cannot integrate by parts (in spite of [35, (6.27)]) because the Marchaud derivative is with respect to the parameter t. See Corollary 2.4 for more information on the frozen susceptibility function in the "horizontal" case. To state an interesting corollary of our Theorem 2.3, letting H u denote the Heaviside where the regular part ρ reg 0 is differentiable and supported in I, with derivative in BV , and the saltus (or singular) part In addition, if the critical point c is not periodic, we have Note that if N f is finite but c is not periodic (it is then preperiodic) then the jump s j at c j is given by the sum of alls k for k such that If v is not horizontal, we say that v is transversal. 
In the horizontal case, we have (Corollary 2.4 is proved in Section 4.4): Then, there exists ε 2 ∈ (0, ε) such that for any φ ∈ C 0 , the following holds for the fractional susceptibility function of (f (ε 2 ) t ) and φ: Assume furthermore that c is not periodic for f . Then, we have Remark 2.5. The first term of (17) can be rewritten as Indeed, α is the solution (which [9, Lemma 2.2, Remark 2.3, Proposition 2.4] is unique and continuous under the assumptions of the corollary) of the twisted cohomological equation =s j α(c j ) . 2.2. Fractional moduli of continuity and Keller's x log x bound. In the definition of the Marchaud derivatives M η + g and M η − g of a function g, we may replace t 1+η with other weights. Suppose for instance that ℓ : [0, ∞) → C is a continuous function and that γ ∈ [0, 1) is a constant such that (21) ℓ(0) = 0, We then define the right and left-sided ℓ-Marchaud derivatives of a bounded γ-Hölder function g by We define the two-sided ℓ-Marchaud derivative by M (ℓ) g = 1 2 (M (ℓ) . Finally, we define the ℓ-susceptibility function to be the formal power series Similarly, we define the frozen ℓ-susceptibility function by the formal power series The following theorem is proved in almost the same way as Theorem 2.3 (see Section 4.2). Theorem 2.6 (Generalized fractional susceptibility function). Let (f t ) |t|≤ε be a C 2 perturbation of a mixing piecewise C 2 expanding unimodal map f . Assume that either f is good or all the f t are topologically conjugated to f . Let ℓ and γ ≥ 0 be such that (21) holds. Then, for any q ≥ (1 − γ) −1 and any φ ∈ L q , the following holds: (a) the susceptibility functions Ψ φ ((ℓ), z) and Ψ fr φ ((ℓ), z) are well-defined and holomorphic in a disc of radius strictly larger than one. Corollary 2.7 (Holomorphic extension in (η, z)). Let (f t ) |t|≤ε be a C 2 perturbation of a mixing piecewise C 2 expanding unimodal map f . Assume that either f is good or all the f t are topologically conjugated to f , and let κ < 1 and ε 1 be from Theorem 2.3. Let φ ∈ L q , for some q > 1. Then the function 2.3. Two linear examples. We illustrate our definitions with two families of linear tent maps. One simplifying feature is that ρ reg t vanishes identically for all t in both examples. To study ρ sal 0 , we shall use that for any x = u, we have To prove (22), observe that if x − u > 0, then Similarly, if x − u < 0, then It follows that The first example is the family of tent maps with fixed slopes λ 0 ∈ (1, 2) given bỹ We can also use that the topological entropy off t is log λ 0 for all t.) We assume that the orbit of c = 0 is infinite for the sake of simplicity. We haveX 0 ≡ 1 on the support ofρ 0 , so that (22) and (23) imply Next, observe that sincef t (y) =f 0 (y) + t for |t| < ε 1 , we have (L t ϕ)(x) = (L 0 ϕ)(x − t) for such t, so that, usingL 0 ρ sal 0 = ρ sal 0 , we find, for any |t| < ε 1 , (The claims above are proved in the same way as (22).) Using (24) and (25), we get, Finally, So we see that, in the horizontal linear case given by the family (f t ), the frozen and response susceptibility function coincide if ǫ 1 ≥ sup n (c 1 − c n , c n − c 2 ). This does not seem possible, since ε 1 ≤ ε 0 , where ε 0 is given by the uniform Lasota-Yorke bound Lemma 3.1. However, the two susceptibility functions are qualitatively similar. (In fact, the relationf t =f 0 + t for |t| < ǫ 1 implies that the response and frozen susceptibilities differ by a function holomorphic in a disc of radius larger than one; this can be shown as in [11,Proposition 2.5]). 
In addition, if we replaced 10 in the definitions of all fractional susceptibility functions the Marchaud derivative by the truncated Marchaud derivative then the frozen and response susceptibility functions would coincide forf t . (Lemma 2.2 and the integration by parts formula mentioned in footnote 5 both hold for M η,(ε 1 ) .) The second example is the family of tent maps with varying slopesf t (x) = λ t x+λ t −1 for −1 ≤ x ≤ 0 andf t (x) = −λ t x+ λ t − 1 for 0 ≤ x ≤ 1, where λ t = λ 0 + t, for λ 0 ∈ (1, 2) and |t| < ε 1 , with small enough ε 1 < min(2 − λ 0 , λ 0 − 1). This family is not horizontal (see e.g. [41, §4]). We assume again that the orbit of c = 0 forf 0 is infinite for the sake of simplicity. Transfer operators and Sobolev spaces 3.1. Basic definition and properties. We will use the Sobolev spaces H τ,p = H τ,p (I) of functions ϕ ∈ L p (R) supported in I such that where p > 1 and τ ∈ [0, 1/p) are real numbers, and F denotes the Fourier transform. (Note that H τ,p (I) ⊂ Hτ ,p (I) ifτ < τ , and that the embedding is compact.) Let f t be a piecewise C 2 expanding unimodal map. Thomine [38] showed that the transfer operator L t is bounded on H τ,p , for any 1 < p < ∞ and 0 ≤ τ < 1 p . More precisely, there exists a constant Recall that the essential spectral radius r ess (L| B ) of a bounded operator L : B → B on a Banach space B is the smallest r ≥ 0 such that the spectrum of L on B in the complement of the disc of radius r consists of isolated eigenvalues of finite multiplicity. Thomine [38,Theorem 1.3] proved that, for any 1 < p < ∞ and 0 < τ < 1 p , we have 11 In particular, ifΛ ∈ (1, Λ(f t )), we find p(f t ) = p(f t ,Λ) > 1 such that By standard arguments (see [38,Theorem 1.6]), using that f t is unimodal and thus topologically transitive on [f 2 t (c), f t (c)], and that the dual of L t preserves Lebesgue measure dx on I, it follows that for such τ and p the spectral radius of L t on H τ,p is equal to one, that the invariant density ρ t belongs to H τ,p , (extending ρ t by zero outside of its domain), that ρ t is the unique fixed point of L t in H τ,p , and that the algebraic multiplicity of the eigenvalue 1 is equal to one. If 0 <τ < τ then, outside of the closed disc of radiusΛ . Let (f t ) |t|≤ε be a C 2 perturbation of a piecewise C 2 expanding unimodal map f = f 0 . Assume that either f is good or all the f t are topologically conjugated to f , and recall Λ > 1 from Definition 2.1. Then for 11 Indeed, in our one-dimensional unimodal setting Thomine's "n-complexity at the beginning" is bounded by 2, and his "n-complexity at the end" is bounded by 2 n . Cf. the proof of our uniform Lasota-Yorke estimate Lemma 3.1. 12 Above equation (26) in [9] there is a mistaken reference to Remark 5 in [25] instead. anyΛ < Λ there exist p 0 ∈ (1, p(f )) and ε 0 ≤ ε such that for any p ∈ (1, p 0 ) and 0 ≤τ < τ < 1 p there exist finite constants C 0 and C such that We shall also use the following perturbation estimate, proved in §3.3: . Let (f t ) |t|≤ε be a C 2 perturbation of a piecewise C 2 expanding unimodal map f 0 . For any p > 1 and 0 <τ < 1 p there exists C < ∞ such that Assume now that f t = f is mixing. Then for any H τ,p with p ∈ (p(f ), 1) and τ < 1/p, we have that 1 is the only eigenvalue of the transfer operator L 0 of f on the unit circle (adapting e.g. the proof [2, Theorem 3.5] for L t acting on BV ). In other words, the operator L 0 has a spectral gap on H τ,p . 
The following notation will be useful: Definition 3.3 (Maximal eigenvalue κ < 1 of a mixing piecewise C 2 expanding unimodal map f ). Let f be a mixing piecewise C 2 expanding unimodal map. If there exist p > 1 and τ < 1/p such that L 0 on H τ,p has an eigenvalue ζ = 1 with |ζ| > Λ(f ) −τ −1+ 1 p , then we set κ(f ) < 1 to be the maximal such modulus |ζ|. Otherwise we set κ(f ) = 0. 3.3. Four basic lemmas and the proof of the perturbation Lemma 3.2. We end this section by showing the perturbation Lemma 3.2. The proof will use the following four standard lemmas (the first three lemmas are also instrumental in the proof of Lemma 3.1): [40,Section 4.2.2]). Suppose that g ∈ C γ where γ > τ , and let p > 1. Then there exists C = C(τ, p, γ) > 0 such that gϕ H τ,p ≤ C g C γ ϕ H τ,p . Proof of Lemma 3.2. Let I t = f t (I), and put J t = I t \ (I 0 ∩ I t ) and K t = I 0 \ (I 0 ∩ I t ). (It is possible that J t or K t is empty.) We have We consider first the term 1 I 0 ∩It (L t ϕ − L 0 ϕ) Hτ ,p . If x ∈ I 0 ∩ I t , both f −1 t,± (x) and f −1 0,± (x) are defined, and we may write We consider the first term on the right-hand side (the second one is treated in the same fashion). Since (f t ) is a C 2 perturbation, we may extend f t,− : has compact support and is bounded uniformly in t, and f t,− − f 0,− C 1 = O(|t|) as |t| → 0. Since 0 <τ < 1/p, by the Strichartz Lemma 3.6, where ϕ is extended by zero outside of its domain I. We then split The first term in (37) is estimated using Lemmas 3.4, 3.5, and 3.7: Note that as t → 0. Since 0 ≤τ < τ < 1, for the second term in (37) we then have by Lemmas 3.4 and 3.5 the upper bound We have shown that if 0 <τ < τ < 1 p then We return to (36) and consider 1 Jt L t ϕ Hτ ,p . If q ∈ (p, ∞), letting r be the conjugate of q/p, then Hölder's inequality gives for any interval J ⊂ I and any u ∈ L q (I) that Since |J t | ≤ C|t|, it follows by the Sobolev embedding theorem that, taking q > p such that Also, (30) and Lemma 3.6 give, for τ < 1/p, Finally, since Hτ ,p = [L p , H τ,p ]τ /τ (where [B 0 , B 1 ] θ denotes complex interpolation) interpolating at θ =τ /τ (see [34, §2.5.2] and [39, §1.9]) between (38) and (39), we get In the same way, since |K t | ≤ C|t|, we get 1 Kt L 0 ϕ Hτ ,p ≤ C|t| τ −τ ϕ H τ,p . Recalling (36), we have proved L t ϕ − L 0 ϕ Hτ ,p ≤ C|t| τ −τ ϕ H τ,p for 0 <τ < τ < 1 p . Since I is compact, we may assume that q = ∞ (otherwise replace q by any finite number larger than (1 − η) −1 ). It suffices to consider the contribution of the improper integral ∞ 0 in Ψ φ (η, z), the computation for the integral 0 −∞ is exactly the same. We will prove claim (a) for Ψ φ (η, z), the proof for Ψ fr φ (η, z) is obtained by a slight simplification. Let q ′ > 1 be the conjugate of q, that is 1 q + 1 q ′ = 1. Then, by Hölder's inequality, Next, fixing p ∈ (1, q ′ ) (we shall need to take p close to 1 soon) and setting the Sobolev embedding theorem [20,Theorem 1.3.5], gives a constant C p,q ′ such that Thus, for each fixed k ≥ 0, by Lemma 3.2, the improper integral defining the coefficient of z k in Ψ(η, z) or Ψ fr (η, z) is a well-defined complex number. Assume from now on that p ∈ (1, min(p 0 , q ′ )) where p 0 is from Lemma 3.1, and let κ be as in Definition 3.3. To conclude the proof of Theorem 2.3, it only remains to prove (c). Note that L k 0 does not depend on t and acts on functions depending on x. 
Since we have shown that every improper integral defining Ψ fr φ (η, z) is convergent, and since the sum over k in (13) converges absolutely for |z| ≤ 1, we may write Thus, (c) just follows from the definition of the Marchaud derivative M η . 4.3. Proof of Corollary 2.7 on holomorphic extensions. Remark 4.1 (Sketch of an alternative proof using Morera's theorem). The proof below is by estimating the growth of derivatives. We sketch here a more conceptual proof: First extend the definition of M η and Theorem 2.3 to complex η with 0 < ℜη < 1 − q −1 . This implies in particular that the double integrals appearing in Ψ φ (η, z) are well-defined for such η, and Fubini is justified. Then, since Euler's Gamma function and t −1−η are holomorphic in the domain considered, Morera's theorem gives the desired holomorphic extension. To estimate (47), we first use the proof of Theorem 2.6, to get and C is a constant that does not depend on n. Next, since γ > ℜη 0 we have In conclusion, C n grows at most with an exponential speed. Hence, we may change order of integration if ζ is in a sufficiently small disc. In particular, our estimates show that Ψ φ ((ℓn), 1) n! grows at most exponentially in n, so that holds when ζ is in a disc around the origin. The proof that the function (η, ζ) → Ψ φ (η, z) is holomorphic in the domain uses the same estimates, writing instead 4.4. Proof of Corollary 2.4 on horizontal perturbations. By [9, Theorem 2.8] (see [10]), the horizontality assumption implies that f t is tangential to the topological class of f 0 that is, there exist a C 2 perturbation (f t ) of f 0 with |f t − f t | = O(t 2 ) (as |t| → 0) and homeomorphisms h t of I with h t (c) = c andf t = h t • f • h −1 t . Note that, lettingL t be the operator associated tof t , the proof of Lemma 3.2 gives ε 2 such that (cf. [9, (27) proof of Proposition 3.3]) that (this will be used to show (18)). Since we may choose 0 <τ < τ < 1/p such that 2(τ −τ ) > 1, it is not hard to see that we may replace f t byf t in the proof of Corollary 2.4. The last statement in the corollary, (18), can be deduced 15 from statement (c) of Theorem 2.3 together with the following claim (the last term in the right-hand side is understood in the sense of distributions, integrating against a continuous function) Indeed, it suffices to note that (recall that c is not periodic for f ) The desired identity (48) will be an immediate consequence of the decomposition ρ = ρ reg + ρ sal , with (15), and Lemma 2.2 after we establish that t → I φ(x)(L t ρ 0 )(x) dx is differentiable at t = 0 for any φ ∈ C 0 , with derivative To show (49), we first recall Step 2 in the proof of [9, Theorem 5.1]. For this, letting c k,t = f k t (c) and denoting by B t the vector space of functions ϕ = ϕ reg + ∞ k=1 a k H c k,t where ϕ reg ∈ BV ∩ C 0 is supported in I and is differentiable with derivative in BV , and the a k are complex numbers, we recall the invertible map G t : B t → B 0 from [9], defined by G t (ϕ) = ϕ reg + ∞ k=1 a k H c k . If c is periodic but all the f t are topologically conjugated to f 0 , the argument in the non periodic case can be applied.
2019-10-01T13:24:56.000Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "cbcf50ad751d2fb8bd4ef340a2be3241835e94c0", "oa_license": null, "oa_url": "https://www.aimsciences.org/article/exportPdf?id=df87168f-f307-4da6-bd74-90e343a3b27f", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "cbcf50ad751d2fb8bd4ef340a2be3241835e94c0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
1673379
pes2o/s2orc
v3-fos-license
Retinal Arterioles in Hypo-, Normo-, and Hypertensive Subjects Measured Using Adaptive Optics Purpose Small artery and arteriolar walls thicken due to elevated blood pressure. Vascular wall thickness show a correlation with hypertensive subject history and risk for stroke and cardiovascular events. Methods The inner and outer diameter of retinal arterioles from less than 10 to over 150 μm were measured using a multiply scattered light adaptive optics scanning laser ophthalmoscope (AOSLO). These measurements were made on three populations, one with habitual blood pressures less than 100/70 mm Hg, one with normal blood pressures without medication, and one with managed essential hypertension. Results The wall to lumen ratio was largest for the smallest arterioles for all three populations. Data from the hypotensive group had a linear relationship between outer and inner diameters (r2 = 0.99) suggesting a similar wall structure in individuals prior to elevated blood pressures. Hypertensive subjects fell below the 95% confidence limits for the hypotensive relationship and had larger wall to lumen ratios and the normotensive group results fell between the other two groups. Conclusion High-resolution retinal imaging of subjects with essential hypertension showed a significant decrease in vessel inner diameter for a given outer diameter, and increases in wall to lumen ratio and wall cross-sectional areas over the entire range of vessel diameters and suggests that correcting for vessel size may improve the ability to identify significant vascular changes. Translational Relevance High-resolution imaging allows precise measurement of vasculature and by comparing results across risk populations may allow improved identification of individuals undergoing hypertensive arterial wall remodeling. Introduction The human eye offers direct optical access to the retina, a portion of the central nervous system, and its vasculature, using noninvasive optical techniques. Because of this accessibility changes in the retinal vessels have long been considered as potential biomarkers for systemic vascular diseases [1][2][3][4] and improved clinical instruments [5][6][7][8][9] have enhanced our ability to measure properties of the retinal vascular bed. In the past, these measurements were limited to the larger retinal vessels unless exogenous contrast agents such as fluorescein were used. Nonetheless, systemic diseases such as hypertension and diabetes have well documented relationships with retinal vessel structure and regulatory responses. 2,10,11 In recent years, the development of adaptive optics (AO) retinal imaging, [12][13][14][15][16][17][18][19][20][21][22] and optical coherence tomography (OCT) 23,24 have provided clinicians and scientists with improved information on retinal vascular changes. With the recent advances in imaging of the vasculature [25][26][27][28][29][30] we now have tools available to provide precise structural measurements of retinal vessels from the largest retinal vessels down to the capillaries. It has recently been demonstrated that AO retinal imaging can show hypertensive changes in the walls of retinal vessels with great precision. 22 The current work uses AO-assisted retinal imaging to measure the impact of essential hypertension (HTN) on the walls of the retinal arteries and arterioles. It is thought that in essential HTN, changes in blood vessel structure leads to increased resistance to blood flow. 
31,32 This increase in resistance is thought to occur first at a level of the vasculature known as ''resistance vessels'' that, by definition, control the blood pressure of the system. Although the actual size of these resistance vessels is not well known, they are usually thought to be smaller than 350 lm in outer diameter (OD). 33 With the advent of HTN there are changes in vascular walls, but these are initially reversible. However, once the pressure of the system has been elevated for a sufficient duration, a positive-feedback cycle effects the vascular wall structure. 32,34 For vessels larger than about 300 lm in diameter the increase in vessel wall thickness occurs without changing the lumen diameter, or inner diameter (ID), a process known as outward hypertrophic remodeling. 34 In small arteries remodeling is thought to occur differently. The total volume of the vascular wall remains constant but both the OD and the ID each decrease, 35 a process known as inward eutrophic remodeling. 34,36,37 These processes may not be independent, subcutaneous and retinal arteries may use a combination of eutrophic and hypertrophic remodeling dependent on the degree and duration of hypertension, but the retinal response being primarily eutrophic. 35,38,39 The hypertrophic form of vascular remodeling seems to be particularly associated with an elevated risk of cerebrovascular and cardiovascular risk 40,41 and any improvement in noninvasive detection and classification of vascular remodeling would be valuable. In general, changes to the vascular walls are typically quantified using a ratio measure comparing the wall thickness with the lumen diameter, the wall to lumen ratio (WLR). WLR is correlated to HTN 37,38,42 and can be measured in relatively large vessels of the eye using a scanning laser Doppler flowmeter. [42][43][44] Recently Koch et al. 22 used a commercially available flood illuminated AO retinal camera to show increased WLR in HTN, and together with other studies, the results suggest that smaller vessels are better indicators of hypertensive changes in the eye. 22,43 The purpose of this study was to use an AO scanning laser ophthalmoscope (AOSLO) with multiply scattered light detection providing higher contrast of vessel wall structure, 27,29,45 to measure WLRs over the full range of retinal vessels and in particular to test whether arterioles smaller than 50 lm in ID have a larger difference between the WLR of hypertensive subjects and normotensive (NTN) subjects. In an attempt to better refine the relationship between blood pressure and vascular wall anatomy, we included a third group of otherwise healthy subjects with a life-long history of nonpathological low blood pressures. Methods Subjects Fifty-five subjects were recruited for study by use of poster advertisements and referral from local physicians. Subjects were classified as HTN if they had been physician diagnosed as such, and were on any type of antihypertensive medication. Subjects were classified as NTN if they had never been diagnosed as hypertensive and never taken antihypertensive medications. Subjects were place into a third group, which we called hypotensive (LTN) if they met the classification of NTN, but also selfreported that their blood pressure had rarely been above 100/70 mm Hg and were fully functional without a medical condition producing hypotension. For subanalyses, we divided subjects into two age groups, a younger (,40 years) group and an older group (!40 years). 
All subjects were examined by an ophthalmologist. No subjects had signs of retinal hypertensive pathology other than a few subjects with arteriovenous nicking or vascular tortuosity. Careful medical histories were taken with a focus on causes of secondary hypertension, subjects were excluded from the study if they had any relevant systemic diseases other than essential HTN. Subject characteristics and statistics by group are given in Table 1. The population was primarily Caucasian, with four Asians and no African Americans. All subjects were informed of the risks and benefit of participating in the study and all procedures in this study were approved by the institutional review board of Indiana University and adhered to the tenets of the Declaration of Helsinki. Imaging We used the Indiana AOSLO. 46 This produces near-diffraction limited performance when imaging the retina in vivo. 47 The system can work with pupil sizes up to 8 mm, 48 although as expected the pupil size is inversely related to the resolution. For an 8-mm pupil, the lateral resolution using 820-nm light is approximately 2 lm. Axial resolution is dependent on the size of the confocal aperture and the spatial frequency of the target. In general, Axial focusing within 20 lm was required for optimal vessel wall images. The system incorporates a steering system that allows us to relate the small-field AOSLO image to locations on a previously obtained wide field fundus image from a commercial retinal imaging system (Heidelberg Spectralis, Heidelberg, Germany). 49 The AOSLO system also allows us to fully control our confocal aperture at the detector plane, choosing one of several aperture diameters, as well as changing its location in 2 dimensions. 50 Procedure All subjects had an ophthalmic exam including visual acuities, a slit-lamp examination, and a dilated fundus examination. If the dilated exam did not take place immediately before the AOSLO imaging, one drop of tropicamide 0.5% ophthalmic solution was distilled into the study eye. Axial length (AL) measurements were taken with a Zeiss IOL Master Version 5 (Carl Zeiss Meditec, Dublin, CA) for retinal image magnification calculations. Fundus images and OCT scans were then obtained on all subjects using a Heidelberg Spectralis system. If image quality of the OCT scans was not high, or was variable, artificial tears were given to the subject. Experimental Imaging The AOSLO imaging beam was 820 nm (12 nm half-width) and the wavefront sensing was at 870 nm (powers measured at the cornea were 100 and 50 lW, respectively). All light levels used were safe according to ANSI standards for safe use of laser light. 51 A single eye was measured for each subject. AOSLO images were obtained by sequentially imaging along a retinal artery, starting with arteries approximately 1500 lm from the center of the optic disc and following along successive branches until the walls of the arteriole were too small to resolve. For most subjects all AOSLO experimental images were obtained in a single session with frequent short breaks. A data collection session took approximately 40 minutes. To enhance the visibility of vessel walls a relatively large confocal aperture (103 the Airy disc diameter), displaced by more than the aperture radius was used. 26 For the best quality image, the confocal aperture was offset along the direction of blood flow in the vessel, which was known from the anatomy of the retinal quadrant being imaged. 
Image focus was adjusted as needed to keep the vessel walls in focus, and the scan size of the system was adjusted from 1.78 to approximately 1.148 for the smallest vessels (at a pixel size of 520 3 570 pixels after correction for scan distortion). Field sizes were automatically recorded by the imaging system. To obtain an image of a vessel the operator first optimized the focus of the image watching a live view of the retina. When a good image was achieved, the operator signaled the computer by a mouse click or foot switch and approximately 100 sequential frames of video were recorded at a rate of 28 frames per second. Image Processing/Measurements Image processing was performed semiautomatically offline after the completion of the data collection as has been described elsewhere. 29 Briefly, images were first corrected for the sinusoidal distortion produced by the scanning method and individual video frames were then eliminated if blinks, poor image quality, or large eye movement within a frame was detected. The remaining frames of the video were then aligned to a template frame automatically chosen by the software based on minimal eye motion between two successive frames. Small eye movements both between and within each frame were corrected as successive frames were aligned to the template frame. The final output consisted of a short video clip for each region of the blood vessel, an average image, 52 and a SD image 25 that provided a map of the vascular perfusion. The averaging of the image improved grading of the vessel by reducing noise and the SD image showed the areas of erythrocyte movement, aiding in identification of vessel lumens. Quantification of lumens and walls was performed manually in Photoshop (Adobe Systems, Inc., San Jose, CA) by masked graders. We first selected arterioles by comparing the small field AOSLO images to the 308 clinical SLO fundus images, ensuring we followed an arteriole during image acquisition. It was also possible to distinguish the direction of red blood cell motion based on the video segments. From these arterioles we then selected images that contained areas of measureable contrast between the vessel wall and both the retina and the vessel lumen. The grader made measurements in a location where the vessel was approximately straight for a length of at least 10 times the ID of the vessel and the vessel was without bifurcation. All measurements were taken in a direction perpendicular to the vessel wall. The distance from the outside of the vessel wall to the outside of the contralateral vessel wall was called the OD of the vessel (Fig. 1). This measurement was recorded, and then at the same location the distance from just inside the vessel wall to just inside the contralateral vessel wall was recorded as the ID. Both the OD and the ID for each vessel image were measured five times with the location of each individual pair of OD and ID measurements displaced from the previous measurement by approximately the ID of the vessel. Magnification was taken into account when analyzing the data by use of a simple formula ( Table 2) and the axial length measurements. Magnification differences between scan sizes are a known property of the system and were also taken into account. Some vessel images could not be measured due to shadowing, or because the focus was in a different plane than the vessel. Images where the contrast of the vessel wall was deemed uncertain by the grader were not measured and therefore, not used in any calculations. 
The WLR was calculated as the difference between the OD and ID divided by the ID. In a some cases (20 of 370), shadowing on one side of the vessel prevented precise localization of one of the outer walls of the vessel, and in this case a measurement from the outside of the vessel wall on the side with good contrast to the inside of the contralateral vessel wall was taken. In these cases the WLR was calculated assuming radial symmetry of the vessel. In all cases, when measuring the vessel lumen, it was differentiated from the vessel wall by visible structural differences and was verified when needed by movement of erythrocytes seen in aligned video sequences. Variability of Measurements Interobserver variability of the manual grading of WLR was measured by having a second expert grader blindly regrade both the ID and OD of 21 vessel locations from 18 subjects. The measurements were then compared with the original grader's results. On average, the two graders agreed within 2% on vessel size and the interclass correlation for this comparison was 0.99. For this reason, only the original grader's measurements were used for analysis. WT WCSA ¼ ( p (OD 2 -ID 2 )) / (4) RM ¼ retinal magnification, AL ¼ axial length, WT ¼ wall thickness, OD ¼ outer diameter, IU ¼ inner diameter, WLR ¼ Wall to Lumen Ratio, WCSA ¼ Wall Cross-Sectional Area; subscript of 1 indicates computation when only one wall was clearly visualized. Calculations Calculations were made based on the equations in Table 2. The retinal magnification (RM) due to ocular AL for each subject was calculated using Equation 1 53 and the subject's AL obtained from the Zeiss IOL Master. This calculation became necessary to compare the absolute vessel sizes across subjects. Wall thickness (WT) was calculated using Equation 2. WLR was calculated using Equation 3. In 5.4% of images in which the OD was measured using the one sided method, wall thickness was calculated using Equation 4. For these images the WLR was calculated using Equation 5. Wall crosssectional area (WCSA) calculations are shown in Equation 6. Statistical Analysis We analyzed data in two ways. Primary statistical analyses were performed by first pooling multiple measures within a vessel size category for each subject. Pooling was performed to compare subject groups for WCSA and WLR, which had a strong dependence on vessel size. Thus, it was necessary to avoid weighing each subject differently based on the number and location of measured blood vessels. For each subject we included all measured vessel locations (an average of 7 locations per subject), and grouped vessels into three size classes, indicating whether their ID was less than 10 lm (Class 1), between 10 and 50 lm (Class 2), and larger than 50 lm (Class 3). We then calculated the average WLR data from each subject for each vessel class. For the analysis of the variance (ANOVA), we included only vessels in classes 2 and 3. The primary analysis was a two-way ANOVA of WLR, vessel size, and blood pressure status. Post hoc comparisons of WLR as a function of vessel size class were made within each blood pressure group and for a given size across blood pressure groups using Student's t-test. We also performed regression analysis to determine the relation of ID and OD for the different subject groups. Measurability In general, we were able to make measurements in almost all subjects for all locations selected for imaging. 
Across all subjects we could not quantify vessel properties for 14 of 370 imaged artery locations (3.8%) due to shadowing or nonoptimal plane of focus. Twenty arterioles were graded using the ''one sided method'' out of the total 370 artery locations picked for measurement (5.4%). Vascular Structural Measurements Vascular ID varied with both vessel size and blood pressure status. Figure 1 shows example results from normal subjects for two sizes of vessels. Here, we see that the vascular wall was clearly imaged and measurable. Vessel ID ranged in size from 3 to 169 lm and OD ranged from 6 to 216 lm (Fig. 2). ID and OD were highly correlated for all three subject groups (r 2 of 0.993, 0.982, and 0.973 for LTN, NTN, and HTN groups, respectively, Fig. 3). Wall to Lumen Ratio WLR varied with vessel diameter (Fig. 4). The largest WLR occurred in the smallest vessels that were measureable, which were vessels under 10 lm in lumen diameter. While some vessels with ID of less than 10 lm were measurable in all subject groups, 17 of the 22 (77.27%) vessels in this size range were measured in the LTN group, perhaps partially due to their younger age (Table 1) providing higher contrast images. Therefore, we excluded from our statistical analysis vessels smaller than 10 lm in ID (class 1 vessels). For vessels above 10 lm in ID the largest WLR occurred among the smallest measurable vessels of the HTN subjects ( Fig. 4) with 9 of 10 of the largest WLR measurements occurring in HTN subjects. The WLR's for our class 2 vessels were 0.44 6 0.20, 0.41 6 0.23, and 0.70 6 0.38 for the LTN, NTN, and HTN subjects, respectively (Fig. 5). WLR was smaller for the class 3 vessels, with WLR's of 0.211 6 0.196, 0.234 6 0.078, and 0.303 6 0.084 for LTN, NTN, and HTN subjects, respectively. The two-way AN-OVA for WLR, calculated using averages for each subject for each size category, revealed a significant relation between both WLR and vessel size and WLR and HTN status, with WLR depending on vessel size class (P , 0.0001) and blood pressure group (P , 0.0001). Vessel size and blood pressure status interacted significantly (P ¼ 0.0385). These interactions all persisted when comparing only the NTN and HTN groups. The WLR of the NTN and LTN groups were not significantly different from each other (P . 0.7), although in general data from the LTN group showed a more systematic relation between OD and WLR and other measures (see below). We examined the data for significance of age and also of sex and no significant differences within groups were seen. Wall Cross-Sectional Area WCSA increased with vessel size (Fig. 6). HTN subjects had larger WCSA than NTN subjects for both size classes of vessels (P ¼ 0.025 and P ¼ 0.01, unpaired t-test for small and large size classes, respectively). For the comparison of HTN and LTN subjects the difference for small vessels was not significant (P ¼ 0.06) but was for the larger vessels (P ¼ 0.008). Discussion The finding of increased WLR in HTN agrees with previous studies, 22,38,54,55 now using multiply scattered light AOSLO imaging. Our results are similar to the increased WLR with HTN reported by Koch et al. 22 with a flood illuminated AO camera, but we cover an even larger range of sizes. The hypertensive subjects in this study were treated, whereas those in Koch were untreated. To allow us to compare our results more directly with Koch et al., 22 as well as to studies that used scanning laser Doppler flowmetry . The dependence of the WLR on the OD. 
While we measured vessels with an ID less than 10 lm, these were primarily measurable in our LTN subjects (filled symbols), and therefore were not included in the statistical analyses. Figure 5. Comparison of WLR for different subject groups. Vessels smaller than 10 lm were excluded from this analysis and the data for each subject within a size class were averaged and then analyzed. The median for each group is shown as the center horizontal line, the box encloses the 25th to 75th percentiles and the whiskers are set to enclose 80% of the data; symbols represent subjects with either a WLR in the top 10% or bottom 10% of the sample. on slightly larger vessels, 35,40,54,55 we formed a subset of our data including all vessels between 90 and 120 lm ID. Our data, while having slightly thinner walls for a given lumen, show similar WLR and impact of hypertension, whether treated or not ( Table 3). All techniques show thickened walls in the hypertensive groups although not significantly for the Baleanu 35 study. While our WLR measurements were smaller than the others, as seen in Figures 1 and 3, the walls were well delineated and measurements were replicable. The results from all these retinal studies are consistent with HTN producing increased WLR in the central nervous system. 40 We examined the relation of ID to OD, which is important both for understanding how vessels remodel 56 and for determining whether there is a fundamental relation between wall thickness and lumen diameter. The data of Figure 2, demonstrate that both the LTN and NTN subjects' vessels have a very tight linear relationships between ID and OD. In particular, there seems to be an upper bound to the ID for each OD formed by the data from the LTN subjects. Data from the NTN and HTN subjects fall on or below this line. This suggests a deterministic relationship between vessel diameter and the wall thickness prior to vascular remodeling. Linear correlation between the outer and inner vascular diameters for the LTN group had an r 2 greater than 0.99 (Fig. 2) and the NTN data were close but fall slightly below the LTN group and the HTN data fall even further below, with the HTN group outside the 95% confidence limits of both the LTN and NTN groups. This paints a consistent picture of the LTN data representing a ''basal'' state for the vasculature and NTN and hypertensive data resulting from eutrophic remodeling. The assumption, that the line formed by the LTN data represents an estimate of the initial state from which vessels are eutrophically remodeled, allows us to model the impact of vascular remodeling as depicted in Figure 7. Here the solid diagonal line with a slope of 0.88 is the linear fit to the LTN data of Figure 6. Eutrophic remodeling conserves WCSA for a given vessel ID and OD. This causes a vessel to move down and to the left, as illustrated by the dashed curved lines, because as the outer wall moves inward, the ID shrinks even more. Remodeled vessels lie somewhere within the space below and to the right of the baseline. We can relate each current hypertensive data point to a position on the LTN line, which has the same WCSA (the curved line with an arrow shows possible WCSA conserving positions from an initial starting data point). Using the empirically defined relations we can ask (1) are they consistent with eutrophic remodeling that is expected for vessels of this size in HTN, and (2) what is the impact of remodeling on vascular resistance? 
In Figure 7 we see that all data fall within the range of values consistent with eutrophic remodeling. However, our results could be consistent with other types of remodeling. To determine whether the data suggest hypertrophic remodeling we fit WCSA versus OD for all vessels with a three parameter power model and computed 95% confidence intervals (CIs). While there was a trend for the large vessels to have larger WCSA, the difference was within the 95% confidence limits and so we were unable to reject hypertrophic remodeling. Our second question was how remodeling would impact vascular resistance. Although the systemic resistance vessels are generally considered to be small arteries in the range of 100 to 450 lm OD, 57 our results suggest that the largest WLR changes are in the smallest arterioles (Fig. 4). To evaluate the impact on vascular resistance we assumed that WCSA was conserved as in Figure 7. We then calculated from the curves the ''initial'' ID of the hypertensives' arterioles from the measured ID and OD of the vessel and assumed that vascular resistance is proportional to the inverse fourth power of the lumen radius (Pouseille's law), and no other parameters (viscosity, etc.) change. We then calculated the difference between the resistance given by the measured ID and the calculated ''initial'' resistance and then divided by the ''initial'' resistance. As expected the resistance increases with remodeling (Fig. 7, right) and the increase is largest for the smallest vessels. However, because the distance traveled within a small vessel is short, approximately 150 lm, 33 it is unlikely that these vessels are a major determinant of the increased systemic vascular resistance in HTN. The curve tapers to an increase in resistance of about 30% for larger arterioles and because of the distribution of vessel lengths within the eye, it is this asymptotic value that is likely to represent the overall change in resistance within the eye. While there are other possible forms of remodeling, given that our subject hypertensive population consisted of managed essential hypertensive patients, eutrophic remodeling is the most likely mechanism for an increase in vascular resistance. We also performed this calculation based on a simple direct comparison of vessels (assuming the OD remained the same and only the ID changed, and this produced a similar conclusion). It is unknown whether the simple linear relation between ID and OD that we measure for the eye represents a property of the systemic circulation or is organ specific. If this is generalizable, it opens the possibility of evaluating a patient's current vascular wall structure compared with this basal state. This would provide an estimate of a given patient's vascular status arising from hypertension or diabetes. This would aid in 'individualized' medicine by allowing estimation of the effects disease in a new patient and thus delivery of more appropriate care than depending on surrogate measures of vascular health such as blood pressure. Although its medical use is dependent on the establishment of correlations between ocular vascular change and systemic vascular consequences such as renal or cardiac damage and possibly even Alzheimer's disease; it is reasonable that precise measurements over a greater range of vascular sizes could yield improved clinical patient stratification and prognostication. To test this concept on the current data set we used the linear fits to the LTN data of Figure 2. 
We computed the difference between the predicted and actual ID of each measurement of OD and averaged these for each subject. The group results are shown in Figure 8. The LTN subjects fall close to the prediction as expected. The NTN subjects overlap the LTN data but there are numerous outliers with smaller ID. Every individual in our HTN group Right. The computed normalized change in resistance based on a fourth power relation between lumen size and resistance. Note that the largest resistance changes occurred for our measured small vessels, however their total length is relatively small, which supports the idea that the largest changes in total systemic resistance do not arise from the smallest vessels. fell outside the 95% confidence limits for the LTN subject. While a larger scale, well-controlled clinical study is desirable, the current results are encouraging. The main limitations of the current study include the relatively small sample size and the lack of control for lifestyle factors, age, or sex. A consensus has not been reached on how age or gender affect WLR 22,44,55 and subdividing our subject population results in numbers too small to be meaningful statistically and in fact our low-tension group was primarily female (Table 1). Other limitations are that our hypertensive subject group was not stratified by medication or degree or duration of blood pressure elevation and also some members of the NTN group could have been prehypertensive. Because these factors were not controlled for, the overall diversity within each pressure group may have increased the within group variability, created a bias, or both. However, the increased variability should have decreased statistical power, yet we found the differences were highly significant and so we do not believe this limitation weakens our results, but rather that better classification of a larger study should most likely increase the effect that was seen and help resolve age, sex, and racial differences in vascular wall structure. In conclusion, the use of multiply scattered light AOSLO retinal imaging allowed us to extend the range of vessel sizes for which in vivo vascular structure could be accurately measured. Results suggest that the largest relative changes in resistance to blood flow in HTN could occur at the smallest sizes of arterioles, certainly within the eye and possibly elsewhere in the body. This level of vasculature is distinctly smaller than the usual class of arterioles considered as the primary resistance vessels, and our results do not contradict the general consensus, because while the change per vessel is large, the effective length is small. The very high correlation of the ID and OD for the arterioles in LTN subjects is suggestive that, at least in the eye, a proportional relation may exist between ID and OD, which is then modified through eutrophic or hypertrophic remodeling in individuals with higher blood pressures. This data may allow more accurate studies of hypertensive vascular remodeling and of clinically observable vascular alterations seen in other diseases such as diabetes, by allowing patients to be compared with a basal vascular wall template. Further research is clearly required to validate these concepts. Figure 8. Box plots of the residual for each subject group compared with the template curve achieved by fitting the LTN data of Figure 2 with a linear model (r 2 . 0.99). 
Actual measurements for each vessel in each subject were then subtracted from the predicted value for that size vessel, and then the average residual computed for each subject and the averages for groups plotted. The LTN group is close to the template (as expected) and the range of residuals is very small. For the NTN and HTN groups the data diverge from the template. Box plots as in Figure 5.
2017-10-14T23:20:37.647Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "98c853da728157723914deb3c9dd686f5fd83bca", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/tvst.5.4.16", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1cb6e8f0c176de10862f44a0397d046b4765f569", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55160043
pes2o/s2orc
v3-fos-license
Endothelial Dysfunction as a Consequence of Endothelial Glycocalyx Damage: A Role in the Pathogenesis of Preeclampsia Endothelial Dysfunction as a Consequence of Endothelial Glycocalyx Damage: A Role in the Pathogenesis of Preeclampsia The endothelial glycocalyx is an intravascular compartment which consists of carbohy- drate part of membrane glycoconjugates, free proteoglycans and associated proteins. It is thought to play an important role in the vascular tone regulation, vascular permeability and thromboresistance. It was suggested that the leading cause of endothelial dysfunction in various cardiovascular, inflammatory, and kidney diseases is the damage of the endothelial glycocalyx. This review presents the changes in the composition and structure of the endothelial glycocalyx in the settings of damage and under systemic inflam - matory response, and the impact of these changes on the functions of endothelial cells and intercellular contacts, mediating the interaction of endothelium and the immune cells. The second issue, discussed in this article is a possible role of endothelial glycocalyx in the pathogenesis of preeclampsia—a complication of pregnancy associated with hypertension, proteinuria and edema. The reviewed data contribute a new insight in the endothelial dysfunction pathogenesis. Introduction Preeclampsia (PE) is one of the main problems of modern obstetrics. PE develops in 2-9% of all pregnancies; it is the second most frequent cause of maternal morbidity and one of the leading causes of neonatal morbidity and mortality. PE is now regarded as a syndrome which is caused by disrupted adaptation to pregnancy and manifests with the development of complex, multiorganic and polysystemic insufficiency with clinical signs appearing after the 20th week of gestation [1,2]. Despite of vigorous research in this area, pathogenesis of PE is still not clear. However, it is well known that the key factors of PE are immune system hyperactivation and the following excessive systemic inflammatory response (SIR), which initiate endothelium activation and cause endothelial dysfunction in cases of early onset and complicated course of the disease [3]. Inflammatory response is accompanied by cell phenotype transformation (formation of activation cell status), leading to the generation of "danger" signals (generated from products of trauma, ischemia, necrosis or oxidative stress) [4,5], which are recognizable by the immune system. It was found that the composition of endothelial glycocalyx (eGC) changed under excessive inflammatory response. Hypoglycosylated structures which may be perceived by immune system as neoantigens, appear оn the membrane of endothelial cells; also, antigens which are normally covert become apparent [6]. These events may promote autotolerance disruption and cause production of autoreactive antibodies damaging endothelial cells. In this regard, in this chapter a special attention is paid to eGC-functional layer of endothelial cells, which mediates all endothelial functions. Much evidence that under SIR, the alterations of eGC are associated with changes of cardiovascular system hemodynamics, vascular tone regulation, vascular permeability [7]-the main vectors of pathophysiological disorders in PE, and that alterations affect endothelial autoimmune phenotype formation, allow to assume that eGC may be one of the main targets of PE. 
The luminal surface of endothelial cells is covered by membrane glycoproteins and proteoglycans that are anchored in the membrane, as well as by secretory proteoglycans and glycosaminoglycans (GAGs) that are not anchored and are interconnected by non-covalent interactions [11-13]. Their carbohydrate part contains a large number of sialo- and sulfo-residues, forming the overall negative charge of the endothelial cell surface. The outer segment of this layer (spreading out toward the vascular lumen), formed by the carbohydrate part of glycoconjugates, is a polysaccharide gel, the eGC [14], with thickness ranging from 2 to 4.5 μm in different parts of the vascular system [15]. The base of the eGC is formed by carbohydrate-protein conjugates (transmembrane and secretory proteins); their carbohydrate part is represented both by short (2-15 monosaccharide residues) branched oligosaccharides, often decorated with sialic acid and sulfate (in glycoproteins), and by high-molecular-weight glycans, often ending with highly sulfated residues (in proteoglycans) [16]. The glycoproteins can contain N-linked (Asn-linked) and/or O-linked (Ser/Thr-linked) glycans of variable length and composition. Complex, hybrid and high-mannose glycans are usually present in the glycoproteins [17]. The main glycoproteins of endothelial cells are cell adhesion molecules (selectins, integrins, immunoglobulin superfamily molecules, endothelial mucins and addressins), which provide homing, migration and interaction between cells in different processes, and secretory molecules associated with the eGC, participating in vascular homeostasis support, fibrinolysis and coagulation (thrombomodulin, von Willebrand factor (vWF), antithrombin III, etc.). The expression of these molecules depends on factors that alter endothelial activation [16]. Under an inflammatory response, modification of the glycans occurs, leading to alteration of intercellular contacts, hemostasis and blood rheology. The biochemical composition of the eGC (the main structural and associated molecules) is presented in Tables 1 and 2 (parts I and II). It was found that the carbohydrate part is crucially important for glycoprotein function. N-linked glycans, particularly high-mannose chains, determine specific interactions of different molecules from the intercellular adhesion molecule (ICAM) family with their receptors [17]. N-glycans of the junctional adhesion molecule-A (JAM-A) regulate leukocyte adhesion and lymphocyte function-associated antigen-1 (LFA-1) binding [22]. Platelet/endothelial cell adhesion molecule-1 (PECAM-1 or CD31), a highly glycosylated membrane protein (~30% of molecular mass), has N-linked glycans represented by neutral and sialylated glycans [51,52]. E-selectin is a heavily glycosylated protein with hybrid/complex-type N-linked oligosaccharides [53]. Vascular endothelial cadherin (VE-cadherin, CD144) is the main transmembrane protein of adhesion contacts; its carbohydrate part is presented mainly by sialylated biantennary N-glycans of the complex type and sialylated hybrid N-chains (~40 and 28% of all identified glycans, respectively). Branched tri- and tetraantennary N-glycans, as well as N-glycans of the high-mannose type, are represented in smaller quantities among the N-glycans of VE-cadherin [21,54].
In the presence of pro-inflammatory factors (such as tumor necrosis factor-α, TNF-α), the quantity of glycans ending with α2,6-sialic acid residues and fucose-α1,2-galactose-β1,4-N-acetylglucosamine increases, as does the expression of N-glycans of the high-mannose and hybrid type, which mediate intercellular contacts of monocytes with endothelium during rolling and adhesion, particularly at intercellular junction sites [55]. Hemostasis-controlling proteins associated with the outer eGC are also highly glycosylated. vWF is a key component for the maintenance of normal hemostasis, acting as the carrier protein of coagulation Factor VIII and mediating platelet adhesion at sites of vascular injury [31]. vWF is heavily glycosylated by N- and O-linked oligosaccharides, and glycosylation affects many of its functions [30]. Antithrombin is a major inhibitor of the blood coagulation cascade. The eGC mostly consists of proteoglycans, highly glycosylated proteins (glycans account for 90-95% of the molecular mass) whose carbohydrate part is formed by GAG branches. Several classes of GAGs are distinguished: heparan sulfate, chondroitin/dermatan sulfate, keratan sulfate and hyaluronic acid (HA) [15]. In the human body, the GAGs are present in a protein-bound form (i.e., within proteoglycans) and do not exist in a free form, except for HA. Besides playing structuring and supporting roles, proteoglycans are involved in cell signaling and the regulation of cell proliferation, adhesion, migration and differentiation [55]. Key eGC glycans are heparan sulfate proteoglycans (HSPGs), which compose about 50-90% of the total amount of proteoglycans present in the eGC, and HA, the main supporting glycan [14,15]. The main proteoglycans of the eGC and their characteristics are given in Table 2 (part II). Glycosphingolipids (GSLs), a class of ceramide-based glycolipids, are also a significant part of the eGC. Glycosphingolipids are subclassified as neutral (no charged sugars or ionic groups), sialylated (gangliosides), or sulfated [58]. GSLs cluster with cholesterol in cell membranes to form GSL-enriched lipid rafts [59]. Cultured human umbilical vein endothelial cells (HUVEC) appear to contain complex lacto- and globo-series compounds (lactosylceramide, Gb3Cer and Gb4Cer), but the most abundant neutral GSL is lactosylceramide (LacCer, CDw17) [60]. LacCer can bind to various microorganisms, is highly expressed on the plasma membranes of human phagocytes, and forms lipid rafts containing the Src family tyrosine kinase Lyn. LacCer-enriched lipid rafts mediate immunological and inflammatory reactions, including superoxide generation, chemotaxis and non-opsonic phagocytosis [61,62]. Therefore, LacCer-enriched membrane microdomains are thought to function as pattern recognition receptors (PRRs), which recognize pathogen-associated molecular patterns (PAMPs) expressed on microorganisms. LacCer also serves as a signal transduction molecule for functions mediated by CD11b/CD18 integrin, as well as being associated with several key cellular processes [63]. Endothelium activation by pro-inflammatory cytokines, particularly by TNF-α, affects Gb3Cer and Gb4Cer expression [64]; interferon gamma (IFNγ) has a striking effect on the surface expression of GSLs; IL-1 increases the cell content of neutral and acidic GSLs but does not alter their surface expression [55].
The cytokines TNF-α and IL-1 can potentiate the toxic effect of verocytotoxin (Shiga-like toxin, produced by Escherichia coli and the main cause of hemolytic uremic syndrome) on human endothelial cells by inducing an increase in Gb3Cer synthesis in these cells [65], because Gb3Cer (CD77) binds the verocytotoxin and injures human endothelial cells [66]. The acidic GSLs of human endothelial cells are: monosialoganglioside GM3, the major ganglioside of endothelial cells, constituting about 90% of the whole ganglioside fraction [67], and sulfoglucuronyl paragloboside (SGPG), a minor GSL in endothelial cells, which is a ligand for L-selectin [55]. Although GlyCAM-1 and CD34 constitute the major L-selectin ligands on venous endothelium, endothelial SLe^x gangliosides may also play a role, since L-selectin can also bind SLe^x GSLs under physiologic flow conditions [68]. Functions of the endothelial glycocalyx The eGC is considered an intravascular compartment with various functions. First, the eGC mediates the endothelial mechanotransduction of shear stress and regulates shear stress-induced nitric oxide (NO) production [69]. This is provided by the tangential stress of the blood flow acting primarily on the eGC; the latter accepts and scatters the load created by fluid shear stress. The local spin moment created by fluid shear stress affects the proteoglycan chains and, further, the core proteins (syndecans and glypicans), causing actin cytoskeleton reorganization and transmission of the signal into the cell and the cell nucleus [70,71]. The study of Fu and Tarbell (2013) aimed to determine the eGC role in mechanosensing and transduction, and measured the flow-induced production of NO in vitro [7]. It was found that, compared to static conditions, the application of steady flow shear stress rapidly increased NO production from the baseline in bovine aortic endothelial cells. Enzymatic removal of the key components of the eGC (HS, HA) completely blocked flow-induced NO production without affecting receptor-mediated NO production, suggesting that the eGC has a direct effect on the NO production machinery [7]. Therefore, under physiological conditions the intact eGC transforms hemodynamic effects into biochemical cell signals, which regulate vascular tone. Second, the negatively charged eGC forms a polyanionic hydrated mesh on the surface of endothelial cells, which acts as an electrostatic barrier providing selective permeability for plasma cells and proteins [72]. According to Salmon and Satchell, in both continuous and fenestrated microvessels this eGC acts as an integral component of the multilayered barrier provided by the walls of these microvessels (i.e., acting in concert with clefts or fenestrae across endothelial cell layers, basement membranes and pericytes) [73]. Dysfunction of any of these capillary wall components, including the eGC, can disrupt normal microvascular permeability. Disruption of the eGC manifests with increased systemic microvascular permeability and albuminuria in the glomerulus [73]. Evidence from experiments on Munich-Wistar-Frömter (MWF) rats, used as a model of spontaneous albuminuric chronic kidney disease (CKD), confirms that loss of the eGC could contribute to both renal and systemic vascular dysfunction in proteinuric CKD [74].
Also, in the 5/6-nephrectomized rat model of CKD, a significant decrease in eGC thickness and stiffness was demonstrated in aortic endothelial cells from vessel explants isolated from CKD rats [75]. An increase in the blood levels of two major components of the eGC, namely syndecan-1 (Syn-1) and HA, in patients with CKD indicated disease progression and correlated tightly with plasma markers of endothelial dysfunction such as soluble fms-like tyrosine kinase-1 (sFlt-1), soluble vascular adhesion molecule-1 (sVCAM-1), vWF and angiopoietin-2 [75]. A study of experimental eGC degradation in mice induced by long-term hyaluronidase infusion, including evaluation of eGC thickness and composition by immunohistochemical methods and by transmission electron microscopy for complete and integral assessment of glomerular albumin passage, showed that glomerular fenestrae were filled with dense negatively charged polysaccharide structures that were largely removed in the presence of circulating hyaluronidase, leaving the polysaccharide surfaces of other glomerular cells intact [76]. Thus, HA is a key component of the glomerular endothelial protein permeability barrier; reduction of HA facilitates albumin passage across the endothelial layer and the glomerular basement membrane toward the epithelial compartment [76]. The regulation of selective permeability by the eGC, and the role of its separate components in this, is still a subject of discussion. According to Lennon and Singleton, HA plays a key role in supporting endothelial barrier function [77]. HA maintains vascular integrity through eGC modulation, caveolin-enriched microdomain regulation and interaction with endothelial HA-binding proteins. Certain disease states, especially those accompanied by SIR, increase hyaluronidase activity and reactive oxygen species (ROS) generation, which break down high-molecular-weight HA to low-molecular-weight fragments, causing damage to the eGC. Further, these HA fragments can activate specific HA-binding proteins upregulated in vascular disease to promote actin cytoskeletal reorganization and inhibition of endothelial cell-cell contacts [77]. A glycocalyx-junction-break model, described by Curry and Adamson, summarizes multiple studies and the role of the eGC in vascular permeability regulation [78]. According to this model, the layered structure of the endothelial barrier requires continuous activation of signaling pathways regulated by sphingosine-1-phosphate (S1P) and intracellular cAMP. These pathways modulate the adherens junction (zonula adherens), the continuity of tight junction strands, and the balance of synthesis and degradation of eGC components [78]. Third, the eGC forms an anti-inflammatory and anti-adhesive barrier on endothelial cells. Vascular protection via inhibition of coagulation and leukocyte adhesion is provided by maintenance of compositional permanence and the balance between shear-induced degradation and synthesis of eGC components [73,79]. The total negative charge formed by the carbohydrate residues of glycoconjugate chains on the cell surface prevents adhesive interactions of blood cells with the vascular wall, while eGC-associated biologically active molecules with antithrombotic action support hemostasis [80,81]. The eGC also plays a structural role, impeding adhesion by covering adhesion molecules on the cell surface and creating steric hindrance, making leukocyte binding more challenging [82].
Under the effect of damaging factors, the structure and composition of the eGC change, its thickness may be significantly reduced, and carbohydrate residues that are normally covert and masked become apparent. The main damaging factors affecting the eGC in vivo are inflammation, hyperglycemia, endotoxemia, septic shock, oxidized low-density lipoproteins, cytokines, natriuretic peptides, abnormal shear stress and ischemia-reperfusion damage [79]. Shedding of eGC components in response to cytokines and chemoattractants occurs in all compartments of the microvasculature: arterioles [83], capillaries [83,84] and venules [84-86]. According to Lipowsky, studies of leukocyte-endothelial adhesion in response to chemoattractants and cytokines, and of the shedding of constituents of the eGC, suggest that activation of extracellular proteases (matrix metalloproteinases, MMPs) plays a role in mediating the dynamics of leukocyte adhesion in response to inflammatory and ischemic stimuli [79]. Inhibition of MMP activation with sub-antimicrobial doses of doxycycline, or with zinc chelators, has also inhibited leukocyte adhesion and the shedding of glycans from the endothelial cell surface in response to the chemoattractant. Experiments by McDonald et al. have confirmed that after enzymatic degradation of the eGC with heparinase, endothelial cells exposed to uniform steady shear stress developed a pro-inflammatory phenotype, leading to an increase in leukocyte adhesion [82]. The results show an up-regulation of ICAM-1 (an approximately three-fold increase in expression) with degradation compared to non-degraded controls, and attribute this effect to impaired flow-induced down-regulation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) activity; this suggests that the eGC is not solely a physical barrier to adhesion but rather plays an important role in governing the phenotype of endothelial cells, a key determinant in leukocyte adhesion [82]. Other mechanisms also contribute to the initiation of lymphocyte adhesion to endothelial cells after reduction of the eGC layer: a decrease in the production of NO, which is capable of inhibiting leukocyte-endothelial cell adhesion [87]; the appearance of eGC fragments (such as low-molecular-weight HA), which show pro-inflammatory properties, affecting the maturation of dendritic cells and stimulating them to produce cytokines [14,88]; and the exposure and synthesis, under the inflammatory response, of hypoglycosylated structures which interact with the cell adhesion molecules of leukocytes [18,89]. Modulation of the eGC structure under the effects of damaging factors, including inflammation, results in a loss of thromboresistance [90,91]. This occurs due to destabilization of heparan sulfate chains, the binding sites for coagulation inhibitor factors (antithrombin III, the protein C system and tissue factor pathway inhibitor); this leads to a reduction of their local concentration at the vascular wall. In turn, the concentration gradient of protective and regulatory molecules associated with the eGC (albumin, fibrinogen, orosomucoid, extracellular superoxide dismutase, fibronectin, vitronectin, collagens, thrombospondin-1 and others), and of growth factors (fibroblast growth factors, vascular endothelial growth factors, transforming growth factor-β, platelet-derived growth factors), is also decreased, facilitating pathological processes in blood vessels [80]. Therefore, the eGC is a labile structure whose composition changes under the effects of damaging factors.
This determines the development of pathophysiological processes of endothelial activation/dysfunction, with loss of vascular tone regulation, hemostasis and barrier function. Endothelial activation/dysfunction is induced by inflammation and accompanies it, thus forming a vicious cycle which can be overcome only under normal immune system functioning. An inflammatory response of varying degree accompanies not only pathological processes; it is also observed under physiological conditions: for example, a pro-inflammatory background is seen at certain periods of normal pregnancy. Understanding the mechanisms of disruption of maternal immunological tolerance to the fetus, the causes of the transition of the physiological inflammatory reaction to a systemic and excessive inflammatory response (as in PE) accompanied by endothelial activation/dysfunction, and revealing the contribution of eGC damage to preeclampsia development may be the subject of new discoveries in the pathogenesis of the disease. The development of systemic inflammatory response in pregnancy There is much experimental evidence of a so-called "physiological", controlled SIR during pregnancy. Similarly to the classic inflammatory response, the physiological inflammatory response during pregnancy is a reaction to local damage (matrix remodeling associated with implantation, placentation and angiogenesis in the placenta) [92,93] and foreign invaders (cells, microparticles and soluble factors of placental origin) [94,95]. Humoral factors, cellular debris and subcellular particles of trophoblast are considered to be the triggers of SIR, but they can also play the role of adjuvants [95,96]. Effector cells of the maternal innate immune system detect fetal products as pathogen/danger signals, implementing cellular and molecular protection mechanisms against allogeneic material [97]. Gene products inherited from the father can be regarded as exogenous factors, while endogenous factors are gene products resulting from trauma, ischemia, necrosis or oxidative stress [97]. There are also reports on the generation of various new antigens due to the inflammatory response; these are variations of the "modified self": neoantigens formed as a result of post-translational protein modification [98], and antigens mobilized to the membrane from the cytoplasm and inner cell compartments, interacting with membrane proteins or phospholipids and acting as danger signals [99]. The enhanced pro-inflammatory background in normal pregnancy is evidenced by an increase in the blood level of soluble cell adhesion molecules (sCAM), indicating the activation of leukocytes and endothelial cells (increase of sE-selectin, sVCAM-1 and sICAM-1 levels) [100,101]. Glycan-mediated processes in inflammation The central event of the inflammatory response is the contact between leukocytes and endothelium, with subsequent migration of immune cells to the inflammatory lesion. At early stages of the inflammatory response, endothelial selectins (E-selectin and P-selectin) and lymphocytic L-selectin form reversible bonds with carbohydrate counter-receptors on the partner cell, thus providing tethering and leukocyte rolling along the vascular wall. The counter-receptors for selectins are typically heavily glycosylated molecules, many of which bear terminal SLe^x motifs (Neu5Acα2-3Galβ1-4(Fucα1-3)GlcNAc) [102]. P- and L-selectin, but not E-selectin, bind to some forms of heparin/HS. However, each of the selectins binds with higher affinity to its specific macromolecular ligands.
Many of the known ligands are mucins containing sialylated fucosylated O-glycans. The major ligand for P-selectin, named P-selectin glycoprotein ligand-1 (PSGL-1), has sulfated tyrosine residues adjacent to a core-2-based O-glycan expressing SLe^x. PSGL-1 is also one of the physiological ligands for E-selectin, but E-selectin can interact with several other glycoproteins that express the SLe^x motif on either N- or O-glycans, including E-selectin ligand-1, CD44, L-selectin (in humans), and possibly long-chain GSLs expressing SLe^x [68,103]. Ligands for L-selectin that occur within specialized endothelia termed high endothelial venules (HEV; HEV are specialized post-capillary venous swellings characterized by plump endothelial cells, as opposed to the usual thinner endothelial cells found in regular venules; HEVs enable lymphocytes circulating in the blood to enter a lymph node directly by crossing through the HEV) contain the 6-sulfo-SLe^x motif on mucin-type O-glycans and on N-glycans [104]. The ligands for E- and P-selectin are expressed on circulating leukocytes, whereas L-selectin binds to ligands on both leukocytes and the endothelium [89]. At the firm adhesion stage, following the leukocyte capture and rolling stages, N-linked glycans on ICAM-1 regulate binding to its integrin ligands, macrophage-1 antigen (Mac-1) and LFA-1. Moreover, it was found that Mac-1 binds with higher avidity to ICAM-1 molecules with smaller N-linked oligosaccharide chains, since ligand binding increased after the use of the α-mannosidase inhibitor deoxymannojirimycin (DMJ). In contrast, LFA-1 binds with higher affinity to glycoforms of ICAM-1 that have more complex carbohydrate chains [89]. There is also experimental evidence that high-mannose ICAM-1 can function in leukocyte firm adhesion [105]. It is speculated that some N-glycan-binding sites on ICAM-1 may be pro-adhesive, whereas neighboring sites may be anti-adhesive, underscoring the potential breadth of how ICAM-1 function may be regulated by N-glycosylation [106]. At the firm adhesion stage, an important aspect of the inflammatory response is the exposure of the active epitope of integrins, provided by chemokines which are present on the endothelial cell surface bound to HS. Glycosylation of chemokine receptors also contributes to the adequate dynamics of the inflammatory reaction by increasing the binding affinity of the chemokine to the receptor and protecting the latter from proteolytic cleavage (reviewed in [18,89]). The key molecules mediating leukocyte transmigration, PECAM-1, JAM-1, ICAM-2 and VE-cadherin, are highly glycosylated. However, the role of their carbohydrate part in leukocyte transmigration is still not clear. Recent studies show that N-glycosylation of JAM-A is required for the protein's ability to reinforce barrier function [22]; the sialic acid-containing glycan of PECAM-1 reinforces dynamic endothelial cell-cell interactions by stabilizing the PECAM-1 homophilic binding interface [52]; the glycosylation status of ICAM-2 (hypo- or non-glycosylated variants) significantly affects the function of this protein in cell motility assays [107]; and in pro-inflammatory conditions, modification of VE-cadherin glycans is observed [55]. This obviously requires further investigation. Molecules that mediate intercellular interactions during inflammation are presented in Table 3.
Many studies demonstrate modification of the endothelial glycome (the glycome is the entire complement of sugars of an organism, whether free or present in more complex molecules) under the inflammatory response. Modeling of the inflammatory response in vitro on endothelial cell lines showed enhanced α2,6-sialylation after TNF stimulation [108]. Pro-inflammatory stimuli increase hypoglycosylated (namely, high-mannose/hybrid) N-glycans on the cell surface, as determined by lectin histochemistry, and cause an increase in genes encoding fucosylation and sialylation (confirmed by specific staining with relevant lectins [18]); this correlates with increased monocyte adhesion [18]. Glycosylation of the endothelium has been proposed to act as a "zip code" for directing leukocyte subtype-specific recruitment in different vascular beds in response to specific stimuli [89]. The glycobiology of immunoregulation Carbohydrate-protein interactions not only mediate the initial stages of inflammation, but also promote many cellular contacts which regulate the innate and adaptive immune response. The main carbohydrate-binding proteins are endogenous lectins [109], widely present on immune system cells and expressed both in membrane-bound and soluble forms. Three main classes of endogenous lectins are distinguished. A. C-type lectins, which, depending on specificity, are: • specific to mannose (Man-) and/or fucose (Fuc-) terminated glycans; • specific to galactose (Gal-) or N-acetylgalactosamine (GalNAc-)/N-acetylglucosamine (GlcNAc-) terminated glycans. C-type lectins are present on macrophages, dendritic cells, natural killer cells and leukocytes. They act as pattern-recognition receptors and fulfill signaling and adhesion functions [110]. B. Galectins, a family of 15 evolutionarily conserved carbohydrate-binding proteins [89,113] that specifically recognize β-galactoside-containing glycans on glycoproteins and glycolipids of the cell surface and ECM [114]. The endothelium may be a source of Gal-1, which then targets neutrophils to inhibit cell recruitment, while Gal-3, Gal-8 and Gal-9 promote neutrophil and eosinophil adhesion [89]. C. Siglecs, a family of 17 known lectins which specifically bind glycan structures with terminal sialic acid [117]. Sialyl-Tn (Neu5Acα2,6GalNAcα-) is a common ligand for all members of this family. The 6′-sulfated SLe^x glycan is a ligand for Siglec-8 and is important for selectin-dependent cell adhesion [118]. The majority of the members of this family are inhibitory receptors, as they bear an immunoreceptor tyrosine-based inhibition motif (ITIM) in their structure, and they are mainly expressed on immune cells [119]. Siglecs participate in the regulation/restriction of an excessive activation response to the inflammatory reaction initiated via recognition of pathogen-associated molecular patterns and damage-associated molecular patterns, with subsequent phagocytosis of cells bearing these patterns [120,121]. Siglecs regulate cell proliferation, differentiation, apoptosis, adhesion and cytokine synthesis, and provide negative regulation of B-lymphocyte signaling [122]. Some endogenous lectins are capable, like autoantibodies, of interacting with the body's unchanged antigens (glycans), so-called self-images (SAMPs, self-associated molecular patterns) [111].
Molecular patterns containing sialic acid and heparin/HS are supposed to act as self-images [111]. It is also thought that the interaction of lectins recognizing SAMPs (mainly siglecs) with their ligands inhibits the immune response to foreign/damaging effects [111,120]. It is known that the presence of terminal sialic acid is very important: it provides the overall negative charge of the cell surface, stabilizes glycoconjugate conformation, and protects glycoconjugates and cells from recognition and degradation. The protective properties of sialylation are manifested not only in the interaction of sialylated structures with inhibitory receptors, but also in the masking of sugar residues which are antigenic determinants [123,124]. For example, upon desialylation, the unmasked Galβ-, GalNAc- and mannose residues interact with lectins of the galectin family and C-type lectins [120]; these interactions are important for metastasis and the development of SIR. Therefore, regulation of the inflammatory response is implemented with the direct involvement of glycan-binding proteins (endogenous lectins) and glycans, whose composition and structure vary significantly under physiological and pathophysiological conditions, providing evidence of eGC modification in inflammation and of the formation of a carbohydrate "zip code" which acts as a navigator for immune cells. Inflammatory reactions in pregnancy are initiated by pathogen and danger signals formed at the fetal-maternal cell contact; this activates innate and adaptive immunity. SIR may be enhanced or restricted through mechanisms based on carbohydrate-protein interaction [125-127]. The excessive SIR developing in pathological pregnancy is characterized by compensatory reactions and the development of various dysfunctions, resulting in organ or multiorgan failure [128]. Endothelial activation and endothelial dysfunction As a rule, in studies dedicated to determining the role of the endothelium in different pathologies, the authors use the terms "endothelial activation" and "endothelial dysfunction" [129]. Activation should be distinguished from activity, because in its resting state the endothelium is a metabolically active organ which produces vasodilatory substances and bears an anticoagulant and antiadhesive phenotype. Activation of the endothelium under various pathophysiological processes leads to alterations of its phenotype and function. These events may be reversible, but may also cause multiorgan failure. There are two stages in endothelial activation: endothelial stimulation (early events) and endothelial activation proper (later events). The latter can be subdivided into endothelial activation of types I and II, respectively [130,131]. Endothelial activation of type I follows the stimulation stage and manifests with shedding of the adhesion molecules and molecules with antithrombotic properties, such as P-selectin, thrombin, heparin, antithrombin III and thrombomodulin, from the surface of the endothelial cells. At the same time, the endothelial cells of the venules and small veins decrease in volume, and the contacts between the cells become distorted, resulting in hemorrhages, edema and an increase in vessel permeability [131]. Endothelial activation of type II is a slightly delayed process which depends on gene transcription activation and de novo protein synthesis. As a result, the genes coding for adhesion molecules, chemokines and procoagulant factors (E-selectin, vWF, IL-8, platelet-activating factor) are activated [132].
The secretion of NO and prostacyclin also increases. Morphological changes include protrusion of the endothelial cells into the vessel lumen, cell hypertrophy and an increase in cell permeability. The result of this stage is leukocyte contact with the activated endothelium through lectin-carbohydrate interactions, extravasation, transendothelial migration and, possibly, leukocyte binding to Fc-receptors (FcR) of endothelial cells with deposition of immune complexes [131]. The alterations of phenotype accompanying endothelial cell activation also manifest with a change in the carbohydrate composition of the molecules forming the eGC. Therefore, endothelial activation implies an alteration of the endothelial cell phenotype under the impact of activation factors (cytokines, endotoxins, etc.), inducing shedding and modification of the vasculoprotective membrane-associated surface layer and expression of activation antigens. This correlates with pro-adhesive, antigen-presenting and procoagulant properties of the endothelial cells. Activation reflects the ability of endothelial cells to perform new functions, but this status does not presume cell damage or uncontrolled division. Endothelial activation is a reversible process, with the possibility of returning to the state of actively resting cells [131]. Endothelial dysfunction, on the other hand, is a stage following endothelial activation and manifesting with a change in cell functional activity; it leads to loss of the ability of the endothelium to perform its functions, and to an imbalance of the factors which provide homeostasis and the normal course of all processes mediated by the endothelium [8,129,131]. Endothelial dysfunction is a consequence of chronic, permanent endothelial activation and may lead to irreversible damage of the endothelial cells, their apoptosis and necrosis. Preeclampsia as a manifestation of excessive systemic inflammatory response, accompanied by endothelial activation/dysfunction PE is a multisystemic pathological condition manifesting after the 20th week of pregnancy. The clinical signs of PE are: an increase of systolic blood pressure (SBP) above 140 mm Hg or diastolic blood pressure (DBP) above 90 mm Hg noted for the first time during pregnancy; proteinuria (≥0.3 g/L) in daily urine; edema; and manifestations of multiorgan/polysystemic dysfunction/insufficiency [133]. Severe PE is accompanied by acute renal failure, eclampsia, pulmonary edema and HELLP (hemolysis, elevated liver enzymes, and low platelet count) syndrome [3]. The etiology of PE is not clear; genetic, immunological and microenvironmental factors may play a role [134-138]. Currently two phenotypic variants of PE are distinguished: early manifestation of the symptoms (before the 34th week of gestation) and later manifestation (after the 34th week of gestation) [139]. The pathophysiological mechanisms of PE development are distinguished accordingly [140]. The first, "fetal" pathway is characterized by inadequate or incomplete invasion of trophoblast cells into the uterine spiral arteries and by lack or incompleteness of the phase of substitution of placental smooth muscle elastic fibers with fibrinoid [140,141]. In this mechanism, physiological remodeling and transformation of the spiral arteries is lacking, and this affects the quality of the utero-placental blood flow [142-144]. The fetal mechanism of PE development presents with a severe disease course and frequent complications in the neonate.
The second pathway is "maternal", where the deficiency of uterineplacental blood flow appears as a result of spiral arteries damage due to certain maternal diseases, especially thrombophilias (genetic or acquired). In this case, the study of placental morphology testifies adequate gestational reorganization of spiral arteries. Maternal pathway usually implies later manifestation and a milder course. Some also distinguish the third (or "mixed") pathway, where the arteries are both affected and poorly reorganized [145,146]. Disrupted trophoblast invasion initiates ischemic and hypoxic damage of placental cells and tissues, leading to increase of cell debris and microparticles of fetal origin contents in the mother's blood. These processes result in the mother's immune cells activation and inflammatory cytokines synthesis induction [147], leading to the development of generalized endothelial activation/dysfunction with development of multiorganic insufficiency [148] (Figure 1). Trophoblast debris was also found in the mother's is blood in a normal pregnancy and it was primarily apoptotic. Particles of trophoblast debris range from polynuclear aggregates of the syncytium cells to subcellular micro and nanoparticles. In vitro co-culturing of trophoblast debris, obtained from women with normal pregnancy, with macrophages and endothelial cells leads to tolerogenic М2-phenotype of macrophages [149,150]. Trophoblast debris becomes more necrotic when in vitro system is supplemented with antiphospholipid antibodies or IL-6. Phagocytosis of the necrotic debris by the endothelial cells is accompanied by their activation [151]. Activation of endothelial cells is also caused by the addition of the trophoblast debris isolated from patients with preeclampsia to the culture of the endothelial cells [152]. Endothelium activation markers in preeclampsia Numerous studies have shown that in PE, manifestations of excessive SIR are observed due to the loss of control over the balance of production of pro/anti-inflammatory cytokines. This leads to an increase in the synthesis and expression of key molecules that mediate intercellular contacts between leukocytes and endothelium [147,153,154]. In this context, it has been shown that in PE, the plasma levels of sE-selectin, sVCAM-1 and sICAM-1 were significantly elevated [100,[155][156][157], and that cultivation of endothelial cells with the blood serum of PE women significantly increased the expression of ICAM-1 by the endothelial cells [158]. It was found that the expression of E-selectin and P-selectin in the endothelial cell culture was significantly higher after administration of trophoblast cells from the PE patients, than after cultivation of endothelial cells with trophoblast cells isolated from placental tissue of healthy women [159]. We have shown in a prospective longitudinal study that in patients with severe PE, the levels of sE-selectin, sVCAM-1 and sICAM-1 were increased from the 8th week of pregnancy until the appearance of clinical symptoms of the disease [160]. In a similar design study, it was shown that joint determination of sICAM-1 and sVCAM-1 levels measured in peripheral blood within 22-29 weeks of gestation, was of high predictive value and capable to detect up to 55% of women with a pathologic pregnancy [161]. The increased levels of sICAM-1 and sVCAM-1 in blood during PE significantly correlated with the signs of the acute phase of inflammation and PE: hypertension, proteinuria, increase of hepatic enzymes levels [162]. 
It has also been noted that high levels of sVCAM-1 and sE-selectin in women with PE can result in adverse perinatal outcomes and endothelial dysfunction in the fetus, as confirmed by a negative correlation between sVCAM-1 and endogenous NO synthesis by HUVECs isolated from the umbilical cord after birth [163]. Alteration of the endothelial glycocalyx in preeclampsia The signs of endothelial activation are the expression of activation markers by endothelial cells and increased plasma concentrations of the soluble forms of CAMs and of the factors regulating angiogenesis and blood clotting. However, the main feature of evolving endothelial activation is alteration, damage and shedding of the eGC, with an increase in the blood concentration of its components. Currently, studies of this phenomenon in PE are limited, but available reports show significant alteration of eGC composition in the placental structures in PE [164]. The most prominent alteration of eGC composition was found in the placentas of women with severe PE. Alterations also take place in the eGC of the capillaries of terminal placental villi: in severe PE, the content of glycans with terminal β-galactosyl and α-mannosyl residues increases, while the content of α2,3-linked sialic acids in the glycome decreases [165]. These alterations are supposed to point to the exposure of glycans bearing "danger signals" and acting as counter-receptors for endogenous lectins; interaction with these activates the maternal immune system [166,167]. Such studies, performed by immunohistochemistry of the placenta after childbirth using panels of lectins or monoclonal antibodies to carbohydrate antigens, give an idea of the alterations of the placental glycome and its separate structures, including the capillary endothelium, and provide evidence obtained by direct eGC visualization [165,168]. Since direct visualization of the eGC is impossible in clinical trials where no surgical tissue sampling is implied, in these cases an indirect assessment of the content of eGC degradation products is used. Indirect methods have significant limitations, but they are the only possibility for evaluating the eGC in vivo. Indirect assessment of the eGC by ELISA shows that in PE, the plasma content of the structural proteoglycans of the eGC (endocan-1, syndecan-1, decorin) and of HA and other GAGs increases [169-171]. Serum endocan concentrations were significantly elevated in women with PE versus normotensive controls, and the concentrations seem to be associated with the severity of the disease [172]. Median maternal plasma endocan concentrations were higher in PE patients, and lower in acute pyelonephritis with bacteremia, than in uncomplicated pregnancy. No significant difference was observed in the median plasma endocan concentration between other obstetrical syndromes and uncomplicated pregnancies [173]. It is suggested that in PE the maternal endothelium is a source of GAGs in the blood, and that intensive eGC shedding thus indicates a manifestation of endothelial dysfunction [169-174]. Patients with PE also show GAG excretion in urine; this is thought to be linked with alterations of the eGC proteoglycans and with changes of the glomerular basement membrane, and is associated with proteinuria [175]. In vitro and in vivo experimental studies using cell and animal models are another opportunity for indirect eGC evaluation.
This approach has been used to study CKD [74,75], cardiovascular and inflammatory diseases [13,176] and cancer [13,176,177], conditions manifesting with hypertension, proteinuria, edema, SIR and thrombosis. The results of such studies provide some keys to PE, which is less studied but exhibits similar clinical signs. Experimental models allow evaluation not only of the degree of eGC damage by various factors (SIR being the most significant), but also of the molecular changes in eGC composition. This is a crucial point, because SIR is not a specific process; it accompanies almost any pathology and promotes the generation of neoantigens, acting as an adaptive response trigger and provoking autoimmune reactions. Conclusion Endothelial dysfunction represents the central link in the pathogenesis of various diseases and complications, and is a subject of intensive research. Against the background of progress in understanding the mechanisms of development, diagnosis and treatment of endothelial dysfunction, many studies in recent years have focused on the eGC as an early indicator of endothelial injury and a potential marker of vascular injury. Alterations of the phenotype of endothelial cells, secretion and release of various activation markers into the bloodstream, and dysfunction of the endothelium are directly related to damage of the eGC. This damage is the initiating factor and the initial stage in the development of endothelial activation/dysfunction, but this stage has long remained obscure due to the difficulties of eGC visualization and diagnosis. By now, the main criteria for assessing eGC damage have been defined. In addition to the appearance of eGC components in the blood, the degree of manifestation of SIR is also an important criterion of damage, since endothelial inflammation and dysfunction are inseparably related processes. In this regard, the molecular mechanism of the inflammatory reaction is based on the ligand-receptor, carbohydrate-protein interaction of immune cells and endothelium, and alteration of the glycome/glycocalyx is a crucial factor in the development of inflammation and endothelial dysfunction. Therefore, the pathogenesis of endothelial activation/dysfunction should be envisioned from the standpoint of damage to the intravascular compartment, the eGC, which regulates the functions of the endothelium. Expanding research on the role of the eGC in the development of endothelial dysfunction may be a subject of new discoveries in the pathogenesis of a large group of diseases, including pregnancy pathology and PE, especially since PE is a classic example of immune system hyperactivation, manifestation of SIR and development of endothelial dysfunction. Undoubtedly, future studies of the eGC will provide entirely new insight into the development and progression of endothelial dysfunction.
Numerical modelling of hydro-morphological processes dominated by fine suspended sediment in a stormwater pond Fine sediment plays crucial and multiple roles in the hydrological, ecological and geomorphological functioning of river systems. This study employs a two-dimensional (2D) numerical model to track the hydro-morphological processes dominated by fine suspended sediment, including the prediction of sediment concentration in flow bodies, and erosion and deposition caused by sediment transport. The model is governed by 2D full shallow water equations with which an advection-diffusion equation for fine sediment is coupled. Bed erosion and sedimentation are updated by a bed deformation model based on local sediment entrainment and settling flux in flow bodies. The model is initially validated with three laboratory-scale experimental events where suspended load plays a dominant role. Satisfactory simulation results confirm the model's capability in capturing hydro-morphodynamic processes dominated by fine suspended sediment at laboratory scale. Applications to sedimentation in a stormwater pond are conducted to develop a process-based understanding of fine sediment dynamics over a variety of flow conditions. Urban flows with 5-year, 30-year and 100-year return periods and the extreme flood event in 2012 are simulated. The modelled results deliver a step change in understanding fine sediment dynamics in stormwater ponds. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban water quantity and quality. © 2017 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction In river systems, fine-grained sediment is a natural and essential component and plays a crucial role in the hydrological, ecological and geomorphological functioning of the system. It has been recognised that fine-grained sediment management in urban rivers is environmentally significant (Birch et al., 2006). Sustainable sediment management requires a structure of supporting research on fine sediment dynamics and its interactions within hydrological catchments such as rivers, floodplains, reservoirs and Sustainable Urban Drainage Systems (SuDS) (Owens et al., 2005). In general, fine-grained sediment has a controlling influence on the quality and quantity of receiving water. In an urban catchment, contaminants and pollutants including heavy metals and nutrients are generally absorbed by fine sediment, which is then conveyed to the receiving waters (Saeedi et al., 2004; Jartun et al., 2008; Jones et al., 2008). These urban pollutants attached to sediments have implications for both the habitats of downstream receiving waters and human health (Wood and Armitage, 1997; Owens et al., 2005; Crosa et al., 2010). To mitigate these risks, more sustainable features, such as stormwater ponds, are increasingly used in urban catchments as an option to manage fine suspended sediments (Ahilan et al., 2016; Allen et al., 2017) by storing stormwater runoff, trapping fine sediments and improving urban runoff quality. The movement of sediment is minimised by the interrupted flows in a stormwater pond. The low-energy environment in the pond enables a considerable proportion of the fine suspended load to be trapped, which provides water quality benefits to receiving water bodies. However, from a longer-term viewpoint, this will diminish the storage capacity of stormwater ponds, thereby influencing their hydraulic performance and maintenance. Similarly, fine-grained sedimentation occurs in dam reservoirs, where the release of deposited sediments often leads to cascading effects in downstream reaches through sediment transport and re-deposition (Liu et al., 2004). This has been considered a worldwide problem (Vörösmarty et al., 2003). Additionally, excessive suspended sediment inputs to rivers due to catchment erosion and in-channel bank erosion can cause sedimentation in channels, which may affect channel morphology, stream habitats and navigation (Eekhout et al., 2015). In view of these sediment effects, natural processes have been widely used for river and flood management in recent years (Dadson et al., 2017). In the aforementioned cases, fine suspended sediment has a controlling influence on the quality and quantity of receiving waters in hydro-systems through playing a variety of roles. Therefore, there is a need to develop an improved understanding of how fine-grained sediment is eroded, transported and deposited in a variety of flow environments. In recent years, numerical models have been increasingly used to understand complex flows, sediment transport and the corresponding morphological changes in rivers, floodplains and SuDS. In view of the multiple roles of fine-grained sediment in receiving waters, a robust fine sediment model is crucial for developing coupled models enabling the simulation and understanding of the hydrological, ecological and geomorphological conditions of catchments.
Recently, numerical models have been used to simulate dam-break induced in-channel evolution (Cao et al., 2004; Simpson and Castelltort, 2006; Bohorquez and Fernandez-Feria, 2008; Zech et al., 2008; Guan et al., 2015a; Benkhaldoun et al., 2012; Li and Duffy, 2012; Guan et al., 2014), sediment routing in dam reservoirs (Liu et al., 2004; Guertault et al., 2016), and turbidity currents over erodible beds (Hu and Cao, 2009; Hu et al., 2012; Janocko et al., 2013). This provides feasible mathematical modelling approaches to quantify the evolution of sediment-laden flows and the corresponding geomorphological changes dominated by fine-grained sediment. This research presents a 2D numerical tool to track the erosion, transport and deposition of fine-grained sediment and, in particular, to investigate fine sediment dynamics in stormwater ponds. The model is a depth-averaged 2D numerical model that includes a robust shallow-water-based hydrodynamic model, a suspended load transport model and a bed evolution model. It provides more reliable information than a 1D model whilst being more cost-effective than a 3D model. The model is capable of simulating the full sediment transport process where non-cohesive fine suspended load plays a dominant role, including both sediment concentration in flow bodies and bed changes. It is not limited to cases with purely suspended load transport; it also applies to cases with a small proportion of bedload, provided the Rouse number remains below about 2.5 during the main transport stage. The model is firstly validated against three laboratory-scale experiments prior to a real-world application in a stormwater pond located in Newcastle Great Park, UK. Based on the simulation results, this study aims to determine the erosion, transport and deposition characteristics of fine sediment in a stormwater pond under various flow conditions and to develop a greater understanding of fine-grained sediment dynamics in stormwater ponds. Hydrodynamic model Shallow-water-based numerical models have been widely used for hydraulic modelling due to their robustness in capturing flow hydraulics (Guan et al., 2013; Vacondio et al., 2014; Costabile and Macchione, 2015; Hou et al., 2015; Guan et al., 2015b). The 2D shallow water equations can be expressed in conservative form (Eq. (1); a reconstructed form is sketched below), where h = flow depth, z_b = bed elevation, η = h + z_b denotes the water surface elevation, which includes changes of both the water depth and the bed elevation varying with time t, u and v = the depth-averaged flow velocity components in the two Cartesian directions, g = acceleration due to gravity, p = sediment porosity, C = total volumetric sediment concentration, ρ_s and ρ_w denote the densities of sediment and water respectively, Δρ = ρ_s − ρ_w, ρ = density of the flow-sediment mixture, S_fx, S_fy = friction slopes in the x and y directions, which are calculated based on Manning's roughness, and T_xx, T_xy, T_yx and T_yy are the depth-averaged turbulent stresses, which are determined by the Boussinesq approximation that has been widely used in the literature (e.g. Wu, 2004; Abad et al., 2008; Begnudelli et al., 2010). This gives the Reynolds stresses in terms of the turbulence eddy viscosity ν_t (Eq. (2)); the molecular viscosity can be ignored in environmental applications. Various approaches have been adopted to estimate the turbulence viscosity, e.g. assuming a constant eddy viscosity, an algebraic turbulence model (ν_t ~ h u_*), or the k-ε turbulence model. In this study, the eddy viscosity is estimated by ν_t = β h u_* with β = 0.5.
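The display equations for Eqs. (1) and (2) were lost in extraction. The following LaTeX sketch reconstructs a standard sediment-laden 2D shallow water system consistent with the variable definitions above; the exact grouping of the source terms (in particular the concentration-gradient term) in the original paper may differ, so this should be read as an assumption-based reconstruction rather than the paper's exact formulation.

```latex
% Hedged reconstruction of Eqs. (1)-(2); source-term grouping is an assumption.
\begin{align}
&\frac{\partial \eta}{\partial t}
 + \frac{\partial (hu)}{\partial x}
 + \frac{\partial (hv)}{\partial y} = 0, \nonumber\\
&\frac{\partial (hu)}{\partial t}
 + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2}gh^{2}\right)
 + \frac{\partial (huv)}{\partial y}
 = -gh\frac{\partial z_b}{\partial x} - ghS_{fx}
 + \frac{\partial (hT_{xx})}{\partial x} + \frac{\partial (hT_{xy})}{\partial y}
 - \frac{\Delta\rho\, g h^{2}}{2\rho}\frac{\partial C}{\partial x}, \tag{1}\\
&\frac{\partial (hv)}{\partial t}
 + \frac{\partial (huv)}{\partial x}
 + \frac{\partial}{\partial y}\!\left(hv^{2} + \tfrac{1}{2}gh^{2}\right)
 = -gh\frac{\partial z_b}{\partial y} - ghS_{fy}
 + \frac{\partial (hT_{yx})}{\partial x} + \frac{\partial (hT_{yy})}{\partial y}
 - \frac{\Delta\rho\, g h^{2}}{2\rho}\frac{\partial C}{\partial y}. \nonumber
\end{align}

% Boussinesq closure for the depth-averaged Reynolds stresses (Eq. (2)):
\begin{equation}
T_{xx} = 2\nu_t\frac{\partial u}{\partial x},\qquad
T_{yy} = 2\nu_t\frac{\partial v}{\partial y},\qquad
T_{xy} = T_{yx} = \nu_t\!\left(\frac{\partial u}{\partial y}
 + \frac{\partial v}{\partial x}\right). \tag{2}
\end{equation}
```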
Fine suspended load model The suspended load transport is governed by the advection-diffusion equation. For non-uniform graded sediment mixtures, it is necessary to divide the graded sediments into fractions due to the difference in grain-size-related parameters (Guan et al., 2015b). The suspended transport of each fraction is described by an advection-diffusion equation (Eq. (3); see the sketch below), where ε_s is the diffusion coefficient of sediment particles, S_E,i is the entrainment flux of sediment for the ith fraction, and S_D,i is the deposition flux of sediment for the ith fraction. The diffusion coefficient of sediment particles is related to the diffusion of fluid momentum and is determined using the formula presented in van Rijn (1984), ε_s = βφν_t, where the factor β represents the difference between the diffusion of a sediment particle and a fluid particle and is assumed to be constant over the flow depth (van Rijn, 1984), and φ represents the damping of the fluid turbulence by the sediment particles and is assumed to depend on the local sediment concentration. Both factors are calculated using the formulas derived by van Rijn (1984), which are widely used (e.g. Duan and Nanda, 2006). Here C_a is the near-bed concentration at the reference level a (averaged for non-uniform sediments) and C_ae is the near-bed equilibrium concentration (averaged for non-uniform sediments); both are defined below. As there is no universal theoretical expression for the entrainment and deposition fluxes of sediment, both variables are calculated by the widely used functions sketched below, where F_i is the percentage of the ith grain fraction and ω_f,i is the effective settling velocity of the ith grain fraction, calculated by the function derived by Soulsby (1997):

ω_f,i = (ν/d_i)[(10.36² + 1.049(1 − C_i)^4.7 d_*³)^(1/2) − 10.36]    (6)

C_a,i = δC_i is the near-bed concentration of the ith grain fraction at the reference level a; the coefficient δ is defined by Cao et al. (2004) as δ = min{2.0, (1 − p)/C}. C_ae,i is the near-bed equilibrium concentration of the ith grain fraction at the reference level, calculated using the van Rijn formula (van Rijn, 1984). For each grain fraction, the transport-stage parameter and the reference level are expressed as:

T = (u'_*² − u_*,cr²)/u_*,cr²,    a = min[max(k_s, 2d_50, 0.01h), 0.2h]

where k_s is the equivalent roughness height; d_* = d_i[(ρ_s/ρ_w − 1)g/ν²]^(1/3) is the dimensionless particle diameter; ν is the kinematic viscosity of water; u'_* = u(√g/C') is the bed-shear velocity related to the grain, with C' the grain-related Chézy coefficient; and u_*,cr = √((s − 1)g d θ_c) is the critical bed-shear velocity, where s = ρ_s/ρ_w and θ_c is the critical Shields shear stress. Morphological change model Morphological evolution is determined by the difference between sediment entrainment and deposition, calculated per grid cell at each time step (Eq. (8); see the sketch below), where N is the number of grain size fractions. Numerical method Eqs. (1), (3) and (8) constitute the model system, which is a non-linear shallow water system.
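The display forms of Eq. (3), the entrainment/deposition flux functions and Eq. (8) were also lost in extraction. A hedged LaTeX reconstruction consistent with the definitions above follows; the depth-averaged form and the exact flux closures are assumptions based on the cited literature (van Rijn, 1984; Cao et al., 2004), not verbatim from the paper.

```latex
% Hedged reconstruction: fraction-wise advection-diffusion equation (Eq. (3)).
\begin{equation}
\frac{\partial (hC_i)}{\partial t}
 + \frac{\partial (huC_i)}{\partial x}
 + \frac{\partial (hvC_i)}{\partial y}
 = \frac{\partial}{\partial x}\!\left(\varepsilon_s h\frac{\partial C_i}{\partial x}\right)
 + \frac{\partial}{\partial y}\!\left(\varepsilon_s h\frac{\partial C_i}{\partial y}\right)
 + S_{E,i} - S_{D,i}. \tag{3}
\end{equation}

% Assumed settling-flux closures for entrainment and deposition:
\begin{equation}
S_{E,i} = F_i\,\omega_{f,i}\,C_{ae,i},\qquad
S_{D,i} = \omega_{f,i}\,C_{a,i}.
\end{equation}

% Bed deformation from the net flux over all grain fractions (Eq. (8)):
\begin{equation}
\frac{\partial z_b}{\partial t}
 = \frac{1}{1-p}\sum_{i=1}^{N}\left(S_{D,i} - S_{E,i}\right). \tag{8}
\end{equation}
```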
In compact form, the governing equations can be expressed by where U is the vector of conserved variables; E and F are the flux vectors of the flow in the x and y directions respectively,Ẽ andF contain the turbulent terms in the x and y directions, S is the source term vector. The model is solved numerically by a well-balanced Godunovtype finite volume method (FVM) based on Cartesian coordinates. To update the variables in each cell, the following equation is used to update hydrodynamics: are the difference of the fluxes at the left and right interfaces of the cell (i, j) in the x and y direction;Ẽ à i;j andF à i;j represents the flux difference of turbulent and dispersion stresses at the left and right interfaces of the cell (i, j) in the x and y direction; Dt, Dx, Dy are the time step, cell size in the x and y direction, respectively. To calculate the first three flux terms (e.g. E à lr1;2;3 ), the Harten, Lax and van Leer (HLL) scheme has been used in this study. More details are described in Guan et al., (2014). Similar to updating the hydrodynamic variables, the sediment concentration is updated at the same cell and time step based on the sediment inter-cell flux C ⁄ as follows, where t represents the time; S c is the source term shown in the right hand side of Eq. (3). The sediment flux C ⁄ is calculated using the following equation, where c l and c r are the volumetric sediment concentration at the left and right cells; E à lr j 1 ; F à lr j 1 represent the flow intercell mass flux. S ⁄ is the middle wave speed calculated by the equation of Toro (2001). A variable time step Dt, adapted to local flow conditions, is calculated at each time step based on a fixed Courant number (CFL) for stability (0 < CFL < 1.0). Model validation Three laboratory cases were used to verify the model capability in simulating morphological changes dominant by fine suspended load, which includes (1) sediment transport in a trench, (2) partial dam-breach flow over a mobile channel, and (3) localised erosion and deposition in a pond with erodible bed. In all three experimental cases, it has been observed that suspended load is the main transport mode, which ensures the applicability of the cases in the model verification. Here the model errors with the measured data were quantified using the Brier Skill Score (BSS) as: where superscripts m and o refer to modelled and observed point data, respectively, and n is the total number of point data. Sediment transport in a trench To verify the capability of the proposed model in predicting bed evolution under the conditions of unsteady flows a simulation was carried out to compare with experiments originally conducted at the Delft Hydraulics Laboratory to investigate the movable bed evolution caused by steady open channel flow (van Rijn, 1980). The trench is located in the middle of the 30 m long channel. Three tests with different side slopes of the trench (1:3, 1:7 and 1:10) were performed in the experiments. Following van Rijn (1980), the key information of the three tests is listed in Table 1. The mean inflow velocity was 0.51 m/s at the inlet and the water depth were kept constant as 0.39 m. The erodible bed consists of fine sand with d 10 = 0.115 mm, d 50 = 0.16 mm and d 90 = 0.2 mm. The sand density and porosity was 2650 kg/m 3 and 0.4 respectively. According to the experiment, the settling velocity of sediment particles was 0.013 m/s ± 25%. A hindering settling velocity x 0 = 0.015 m/s is used. Manning's coefficient n is set to be 0.016. 
In addition, to maintain the sediment equilibrium conditions in the upstream, i.e. no scour or deposition occurring, sand with the same composition was fed at a constant rate of 0.04 kg/s/m; thereby, the suspended load transport rate was estimated to be 0.03 ± 0.006 kg/s/ m and the bed load transport rate of about 0.01 kg/s/m. The contribution of the suspended load transport to the total load transport was in the range of 60% to 90%. For simulation, the whole domain is discretised by 150 cells with Dx = 0.2 m. To ensure steady flow, the model is run for 900 s. After 900 s, sand is fed and bed evolution occurs. Van Rijn (1984) suggested estimating the reference level by the following equation, a ¼ min½maxðk s ; 2d 50 ; 0:01hÞ; 0:2h. Based on this formulation, a = 0.01 m and a = 0.02 m was used in the model to demon-strate the influence of the reference level. Fig. 1 plots the simulated velocity and depth-averaged measurement at the five measured sections. It can be seen that the model produces the velocity reasonably well around the trench. Also, the water surface has been simulated to be close to the real constant value 0.39 m. Regarding the predictin of changes in bed profiles, Fig. 2 indicates that the simulated bed profiles with a = 0.01 m and a = 0.02 m show similar shape with only a slight difference. With both reference levels, the simulated bed has a high Brier Skill Score which is over 0.9. When a = 0.01 m, the model gives a better results. Therefore, the model is also verified in Test 2 and Test 1 with the reference level a = 0.01 m. As shown in Fig. 3, the general bed profiles at both 7.5 h and 15 h are produced with a good BSS. This implies the capability of the model in simulating bed changes due to sediment transport dominant by suspended load. Partial dam-breach flow over a mobile channel To verify and validate the performance of the suspended load model it was used to reproduce partial dam-breach flow experiments over a mobile bed, which were carried out at the Hydraulics Laboratory of Tsinghua University, China (Xia et al., 2010). A thin dam was located 2.0 m downstream of a 18.5 m  1.6 m rectangular flume, and a 0.2 m wide dam-breach centred at y = 0.8 m; the region of 4.5 m after dam site was covered by fine non-uniform coal ash with a median diameter of 0.135 mm, and its natural and dry density were measured approximately as 2248 kg/m 3 and 720 kg/m 3 respectively; the water depth was initially set to be 0.4 m in the reservoir and 0.12 m downstream of the dam. In this experiment, the bed levels at two cross sections CS1 (x = 2.5 m) and CS2 (x = 3.5 m) after 20 s were measured. During the whole experiment, only suspended load transport occurs due to the particles being so fine. Table 2 lists the key parameters used in the simulation. For the simulation, the domain is discretised by 370  80 cells, and the time interval is Dt = 0.005 s. The Manning's coefficient n = 0.02 s/[m 1/3 ]; the sediment porosity is set as 0.35. The suspended load model is run for 20 s. Fig. 4 shows a comparison between the observed and modelled cross-sectional profiles at 20 s. It is shown that the trend of the predicted bed profiles is similar to that of the measured profiles. Erosion occurs in the middle of the cross sections. The bed erosion quantity is less than the measurement at CS1 where the predicted bed is underestimated, particularly in terms of the erosion width. However, a similar maximum scour depth and location are predicted here. 
At CS2, the simulated and measured bed profiles are in good agreement with each other. The simulated and measured scour depths are very close and the erosion areas agree very well with each other, but the measured range of bed profile is about 20 cm wider than the simulated range. For this reason, it can be seen that the BSS for CS2 is relatively smaller. The bed deposition is underestimated by the model here. This is possibly due to either the experimental errors or the neglected turbulence term, which means the model may not be able to generate the rapid formation of horizontal circulating flow at the downstream of the dam. Fig. 5 illustrates the contour plot of the simulated bed topography after 20 s. Severe erosion occurs at the outlet of the dam, and the eroded suspended load is flushed to deposit downstream due to the decrease of bed shear stress. Erosion and deposition in a pond with erodible bed The experiment was conducted to investigate the erosion process in a rectangular basin due to clear water inflow from a narrow channel by Thuc (1991). In this test, the initial setup involves an inlet rectangular channel of 2.0 m long and 0.2 m wide, a rectangular movable basin with 5.0 m long and 4.0 m wide, and a 1.0 m long and 1.2 m wide channel in the downstream. Therein, the movable basin consists of fine sand with median diameter of 0.6 mm, with a movable bed layer was 0.16 m thick. For the initial hydraulic conditions, initial water depth was specified as 0.15 m; the inflow velocity at the inflow boundary was kept constant at 0.6 m/s, and the water depth at the outlet was a constant value of 0.15 m. Only basin area is erodible during the experiment period. Table 3 show the key parameters of the experimental case. This experiment is simulated in this study because the sediment particle diameter is small (0.6 mm), and the rouse number of the case is estimated to be in a range of 0-2.4 in the main movable area, which means suspended load is the dominant transport mode. This fits the capability of the present model. The length of channel is discretised with a constant interval Dx = 0.1 m, but in width direction, the grid spacing around the It can be seen that the inflow pipe has the biggest bed shear stress due to the high flow velocity, and the inflow pipe outfall area and the outlet area also have higher bed shear stress. Therefore, it can be seen that significant erosion occurs at the outfall area due to the inflow of clear water, then the eroded sediment moves downstream and deposits forming a hill. Since only basin area is erodible, no bed changes are found at the outlet area. Fig. 7 further shows the comparison of the measured and simulated bed changes along the longitudinal centreline at 1 h, 2 h and 4 h. All have a satisfying Brier Skill Score (BSS). Overall, the simulated morphological evolution tendency at 1 h and 2 h are in good agreement with the measured results. However, the maximum deposition heights are slightly underpredicted, with a 13.4% difference at 1 h and 30.6% at 2 h. Furthermore, it can be seen that the model overestimates the erosion depth at the inlet of the basin. There the simulated erosion is much more severe than the measured erosion. This is most likely because secondary flow plays an important role here; however, these nonhydrostatic flows are neglected in the current model. Application in a stormwater pond Stomwater ponds are characterised by urban runoff detention, runoff quality improvement and sediment trapping. 
The decrease in flow velocity and the low energy environment causes deposition of fine sediments delivered by urban flows as it enters the pond. Stormwater pond sedimentation leads to a decrease in pond storage capacity and triggers environmental and economic issues. The validated model is applied to a case study of a stormwater pond in Newcastle Great Park and based on the results, improved understanding of fine sediment dynamics is developed. Study site The study area is located in Ouseburn catchment (the black boundary in Fig. 8a) in Newcaslte upon Tyne in the UK. The stormwater pond connects the upstream newly built urban development and the Ouseburn River. Fig. 8c shows the simulated domain which is an area of 230 m by 140 m. It was observed that the pond is covered with dense vegetation which protects the local bed. For simulations, the flow discharge is input to the model via a pipe section as an upstream boundary. The other boundary is set to be free open which means that the floodwater can freely flow out based on the local flow conditions. Model scenarios Three scenarios were considered: non-flood (5 year), sewer design (30 year) and flood (100 year) (Fig. 9a). Also, rainfall events in the extreme flow year 2012 with 15 min interval rainfall measurements at the Jesmond Dene gauging station (EA #19356) were used to conduct an annual sediment simulation, and the flow at the inlet for the identified rainfall events is quantified by using the physically-based conceptual rainfall-runoff model -the Revitalised Flood Hydrograph (ReFH) model (Fig. 9b). Allen et al. (2015) measured the continuous flow records from January to May 2015 at the pond's outfall. The ReFH rainfall runoff model is calibrated with the observed flow data sets by varying the drainage length parameter (DPLBAR) in the model. Based on the field survey, the fine sediment composes of three classes: d10 = 5 mm (fine silt), d50 = 12 mm (fine silt) and d90 = 50 mm (silt) that were obtained from the manual sampling and equally distributed as an input in the upstream boundary. The fine sediment concentration is estimated based on the regression relationships between flow, turbidity and suspended sediment concentration from the analogue catchment (Ahilan et al., 2016). In order to assess the relative impact of the pond on the hydrologic and morphologic responses during high flow events, two Digital Elevation Model (DEM) data sets were incorporated in the model setup. The current DEM represents existing topography ('with') pond condition and the DEM corresponding to the year 2000 represents the predevelopment stage ('without') pond scenario in the hydromorphodynamic model. Table 4 lists the key information about this case study. Allen et al. (2015) surveyed the cumulative sediment deposition at monthly intervals at six locations in the pond during the monitoring period, which is used to validate the morphodynamic model in simulating sediment deposition in the pond. Flow events between 23/04/2015 and 26/05/2015 (as shown in Fig. 10) were modelled because there are a number of high flow events over the period apart from low base flows. Fig. 10 shows the simulated sediment deposition in the pond and the location of the six monitoring points. It indicates that the main deposition area is located at the outfall area. This is because the flow velocity sharply decreases after the water flows to the pond from upstream pipe, this leads to bed shear stress be so small that sediment particles settle down to the bed. 
Resuspension during high flows causes slight sedimentation in the far area from the outfall. Table 5 shows the measured and simulated depths at the six monitoring points. It is indicated that the model predicts the sedimentation in the stormwater pond generally well despite the fact that there are clear discrepancies at some points. These differences are expected because of the uncertainty factors in reality. The main uncertainty factors include: (1) the stormwater pond is covered by a variety of soft vegetation which causes clear implication on flow dynamics and sediment transport, however, this is difficult to quantify and predict; Model validation with sampling data (2) the inflow discharge and sediment concentration are quantified based on a conceptual rainfall-runoff model and regression relationship between flow and turbidity, thus this brings about uncertainties in model inputs; (3) sediment particles are very fine, and the sedimentation depth is small, the field monitoring quantifies sediment weights rather depths which might cause some errors to quantify its real depth. Despite of the discrepancies, it can be seen that both simulated and measured shows a higher deposition near the outfall location and a smaller sedimentation at the far-point from the outfall. Therefore, considering the main objective of this study in developing better understanding of fine suspended load transport, the model results are deemed to be adequate. Fine-grained sediment tracking during single events The validated model is used in the hydro-morphological simulations during single events (5-year flow, 30-year flow, and 100year flow). Fig. 11 shows the water depths, suspended load concentration, and bed shear stress and velocity field during the flow peak for each scenario, as well as the resultant sediment deposition in the stormwater pond after each event. In the viewpoint of hydrodynamic effects, it is clear that the pond has the capability to store the 5-year flow, and the sediment particles in the flow bodies are mostly trapped in the stormwater pond (Fig. 11b) and gradually settle down in the stormwater pond because of the slow flow velocity and the low bed shear stress. However, during the 30year and 100-year flow events ( Fig. 11f and j), a considerable amount of water flows from the pond into the river, which transports fine sediments downstream. As shown in Fig. 11f and j, although the waters in the pond still have relative higher suspended load, sediment particles are flushed out to the river with the increasing inflow. This leads to deposition not only inside the pond, but also in the river downstream ( Fig. 11g and l). Table 6 quantifies the input sediments and the deposited sediments for the three scenarios. It shows that the increasing of inflow magnitude results in a decrease in sediment trapping efficiency of the pond as expected. Before building the stormwater pond, the urban flows were directly drained into the river. The simulated results in Fig. 12 clearly shows that the direct drainage to the watercourse leads to much wider inundation and sedimentation during flooding in comparison with that with the 'pond' in Fig. 12. Consequently, sediment particles are deposited in the inundated areas after flood recession, as demonstrated in Fig. 12f and j. Even for the more frequent 5-year flow event, the direct drainage causes considerable amount of sedimentation in the river channel. 
If there is any, the contaminants attached with sediment particles will potentially influence the water quality in the receiving water. Therefore, the simulations imply that the stormwater pond has the benefits of retaining urban flows and trapping sediment particles generated from upstream urban catchment. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban floods. Fine sediment dynamics varying with flows As indicated in Table 4, an extreme event in year, 2012, was simulated by the validated model in order to numerically investigate the fine sediment response to an extreme flood event. Table 4 The data and key parameters used in the study. Data Description Fig. 13 plots the inflow at the pond inlet and the cumulative sediment deposition over the whole period in the study domain. Clearly, we can see a non-linear relationship between inflow discharge and cumulative deposition which demonstrates two distinctively different response modes: (1) steadily rising (e.g. zone 1 in Fig. 13a), and (2) sharply dropping (zone 2 in Fig. 13a). To look at the trend of change in deposition volume and the inflow discharge in Fig. 13b, we found that a high inflow leads to a sharp increase in deposition, and consistent low flows increase the sedimentation, but with a lower rate. However, the extreme flows in Fig. 13b reduce the deposition volume sharply, and the higher the inflow, the more significant the reduction is. Fig. 14 further demonstrates the changes of pond sedimentation due to the three selected representative events in the year 2012 (Event 1, 2, and 3 in Fig. 13). It is found that a considerable amount of sediment is trapped during Event 1, whilst the extreme Events 2 and 3 re-suspend the deposited sediment and transport them to the downstream, particularly in the area facing the pipe outlet. The behaviour is similar to the laboratory event reported in Section 3.1.3. At the pipe outlet bed shear stress is sufficiently high to cause re-suspension of sediments. The two different response Fig. 11. Simulated water depths (a, e, i), suspended concentration (b, f, j), and bed shear stress and velocity field (c, g, k) during flow peak, as well as sedimentation in the stormwater pond (d, h, l) for the 5-year (a-c), 30-year (d-f), and 100-year (g-i) events. modes observed during varying flow conditions raise a hypothesis, that is: sediment deposition in the pond increases with the inflow discharge, but after a critical value where there is a balance between erosion and deposition, the bed will be eroded due to the high bed shear stress, and the erosion rate is proportional to the inflow magnitude. To verify the hypothesis raised above, we picked out 24 different flow events with a flow peak varying from 0.2 m 3 /s to 10 m 3 /s from the extreme year 2012, and quantified the deposition volume before and after each event. Fig. 15 plots the scatter points between the change in deposition volume and flow peak for each event, and the trendlines among the points. It can be seen that two trendlines are derived as postulated, and both have a good determination coefficient, R 2 , that is larger than 0.8. The deposition volume has a linear relation with a high determination coefficient (0,8775) with the flow discharge. The linear relationship of erosion volume and flow discharge is also significant, but there is a clear large difference during extreme high flows (see Fig. 15). 
These two events with significant difference are event 2 and event 3 in Fig. 14. With a similar high flow, event 2 has more severe erosion than event 3. This is because there is significant deposition in the pond before event 2 occurs, which allows more sediment to be re-suspended during the extreme flow of event 2. However event 3 occurs about 110 h after event 2, the deposited sediment available for re-suspension is clearly much less than the pre-event 2 volume. Therefore, this leads to a significant bias for the two events with similar high flow discharge. We found that there is a critical value defined as 'balance point' of the flow peak, and the value is approximately in a range of 1.79-1.96 m 3 /s for the studied stormwater pond. In other words, sediment deposition in the stormwater pond increases with the inflow, and the rate is proportional to the flow peak when the flow peak is below the balance point; however, for the flow with a peak above the balance point, fine sediment particles will be resuspended and transported downstream, and the re-suspension rate is proportional to the flow peak. Clearly, this balance point is a transition value causing bed deposition or erosion in the pond. This point provides a valuable indicator for stormwater ponds design and maintenance. Removing sediment from stormwater ponds is needed periodically to maintain proper function and restore capacity to prevent localised flooding. Traditionally machinery dredging is one option during dry conditions (United States Environmental Protection Agency, 2009)⁄⁄. However, the understanding of ponds' balance point can suggest a natural hydraulic regulation method, so saving maintenance cost, and sediments transporting to downstream can also improve the river habitat. Similar hydraulic regulation method has been used for sustainable sediment management in reservoirs (Kondolf, et al., 2014). It should be mentioned that the actual changes in sedimentation volume are also related to inflow volume in addition to flow peak, because a larger flow volume means more fine sediments discharging into the pond. Nonetheless, the flow peak is the deterministic factor causing fine sediments either to be deposited in the pond or to be flushed out of the pond. Conclusions The study has developed a numerical model to track the hydromorphological processes dominated by fine-grained suspended sediment, including the prediction of sediment concentration in flow bodies, and erosion and deposition caused by sediment transport. The model has been validated with three laboratory-scale test cases where suspended load plays a dominant role. The results show that the model is capable of reproducing the flow dynamics and the resultant morphological changes reasonably well. Applications in real-world events are performed to further develop the process-based understanding of fine sediment activities in a stormwater pond during varying flow conditions. 
Findings drawn from this study include: (1) a stormwater pond can be used to attenuate flow peak and trap fine sediment particles, and the effect is more significant for flow events with a smaller flow peak; (2) a balance point for the inflow peaks determines whether fine sediments settled down or are re-suspended, and the value is determined in a range of 1.79-1.96 m 3 /s for the studied pond; (3) the consistent low flows lead to gradual accumulation of sediment particles in the pond, and each rainfall-induced flow event results in a sharp rising in the deposition volume below the balance point, but
2017-11-27T06:05:05.462Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "b376ec5540f7a6950f8319e55e10a7713a679bfd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jhydrol.2017.11.006", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "0c42bf9cc8dded4d1b4c94c90d553fef8a06bd1f", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
7782086
pes2o/s2orc
v3-fos-license
Finding the Needles in the Metagenome Haystack In the collective genomes (the metagenome) of the microorganisms inhabiting the Earth’s diverse environments is written the history of life on this planet. New molecular tools developed and used for the past 15 years by microbial ecologists are facilitating the extraction, cloning, screening, and sequencing of these genomes. This approach allows microbial ecologists to access and study the full range of microbial diversity, regardless of our ability to culture organisms, and provides an unprecedented access to the breadth of natural products that these genomes encode. However, there is no way that the mere collection of sequences, no matter how expansive, can provide full coverage of the complex world of microbial metagenomes within the foreseeable future. Furthermore, although it is possible to fish out highly informative and useful genes from the sea of gene diversity in the environment, this can be a highly tedious and inefficient procedure. Microbial ecologists must be clever in their pursuit of ecologically relevant, valuable, and niche-defining genomic information within the vast haystack of microbial diversity. In this report, we seek to describe advances and prospects that will help microbial ecologists glean more knowledge from investigations into metagenomes. These include technological advances in sequencing and cloning methodologies, as well as improvements in annotation and comparative sequence analysis. More significant, however, will be ways to focus in on various subsets of the metagenome that may be of particular relevance, either by limiting the target community under study or improving the focus or speed of screening procedures. Lastly, given the cost and infrastructure necessary for large metagenome projects, and the almost inexhaustible amount of data they can produce, trends toward broader use of metagenome data across the research community coupled with the needed investment in bioinformatics infrastructure devoted to metagenomics will no doubt further increase the value of metagenomic studies in various environments. Introduction The vast majority of the biosphere_s genetic and metabolic diversity is currently locked up within the world_s microbial communities, containing a staggering number of yet uncharacterized microbial genomes [48,73]. It has become well accepted that the diversity of microorganisms represented in culture collections is highly skewed toward those taxa that are amenable to growing under laboratory conditions, making our discovery of microbial genes through cultivation-dependent conventional genome sequencing equally skewed. Even with the recent success of novel and high throughput culturing strategies [30,31,59,65,67,86], we are still unable to mimic most microbial environments sufficiently to induce growth of many environmentally relevant microbes. Recent developments in molecular detection and identification techniques have enabled us to get a glimpse of the huge diversity of the microbial world. However, these techniques have only allowed for fragmentary observations of populations and communities, and a full picture of the structure and the (putative) function of microbial communities is still lacking. In principle, any study that addresses all the individuals of a community as a single genomic pool can be seen as an exercise in metagenomics. 
In this regard, the pioneering studies that first delved into microbial diversity by direct cloning of microbial DNA followed by meticulous screening for ribosomal RNA genes [47,49] should be, and in this study are, considered the first metagenomic studies. By the application of PCR in search of 16S rRNA gene diversity [23] and later diversity in other functional genes, a much more directed interrogation of this part of the metagenome became possible. Although understandable on technical grounds, we generally lost sight of the rest of the metagenome for about a decade in our quest to zoom in on phylogenetic markers and specific functional genes of interest. The wonder of PCR indeed made molecular inventories of microbial communities routine, but biases inherent to PCR amplification and the primers used in this procedure are far from trivial [32]. In this light, it is interesting to note that advances in screening methods and sequence throughput have now made it more feasible to survey rRNA gene diversity without the help of PCR amplification, and such approaches are gaining considerable favor [40,78]. Improvements in cloning technologies [64] and increased sequencing capacity provide new tools to gain greater access to the functional complexity of the metagenome ( [4,26,57,78]; Fig. 1), but how can we gain as much understanding as possible from these endeavors? The goals of researchers venturing into the microbial metagenome vary from directed product discovery to total community characterization, and the phylogenetic complexity of the environments studied can range over orders of magnitude. Likewise, methodologies vary widely in metagenomic studies, and community complexity and research goals are the clear determinants of which metagenomic approaches are most appropriate. A number of excellent reviews have highlighted the numerous breakthroughs in metagenomics [25,33,41,66], and it is not our goal in this study to appraise the breadth of work in this emerging area of research. Rather, we seek to highlight recent breakthroughs in the application of metagenomic approaches to important environments, and to discuss the unique advantages and disadvantages of the various metagenomic approaches used to date. In particular, we aim to identify and evaluate research possibilities and novel approaches that hold promise to advance our ability to gain functional knowledge from pursuits in metagenomics. Techniques, Approaches, and Examples A wide range of approaches has been employed to gain access to metagenomes (Fig. 1). The choice of strategy depends on a number of factors, including the complexity of the community, the amount of sample material available, the nature of the substrate, the density of microorganisms in a habitat, and of course the goal, scope, and resources available for the study. For purposes of this discussion, we group metagenomic studies into three classes: (1) shotgun studies that use mass genome sequencing, followed by scaffold reconstruction and gene annotation; (2) product or activity-driven studies that are designed in search of specific microbial activities and the genes encoding them; and (3) studies that attempt to link genome information with phylogenetic markers of microbial groups of interest. Step in metagenomic study Issues Shotgun analysis of community genomes is a rather simple exercise in terms of wet science. 
DNA extraction protocols abound that can provide high-quality DNA for the construction of large libraries of clones containing small inserts of environmental DNA, and automated high-throughput methods are implemented to recover and sequence as many clones as necessary or resources will allow. The majority of technical challenges with shotgun metagenomic approaches come in the construction of scaffolds of sequence from vast numbers of unordered short sequences. Advances in assembly methods, stimulated by the human sequencing project, now allow for complex pools of sequences to be assembled if sufficient sequence coverage is available. This last point is most critical to this process and is directly correlated to the complexity of the community under study. Perhaps the most elegant application of community shotgun sequencing (average insert size of 3.2 kb) was presented by Tyson and colleagues [76]. In a relatively modest 100 Mb of sequence, this group was essentially able to reconstruct the genomes of the five dominant organisms composing the biofilms of the acidic mine drainage habitat at Iron Mountain, California, USA, thereby piecing together the metabolic routes of the ecosystem. This sure-to-become classic example shows that simple communities in some ways can be seen as metaorganisms, and as with individual organisms, genome determination opens the door to postgenomic studies to gain further insight into genetic networks and metabolic circuitry in an environment. Our ability to master metagenomes decreases dramatically with increased complexity of the community, as demonstrated by the largest metagenomic study published to date [78]. In a monumental project to assess the genomic diversity of the Sargasso Sea, representing over one billion base pairs of sequence, Venter and colleagues found that reasonably large scaffolds could only be assembled for the most dominant community members, including the reconstruction of two nearly complete genomes. Clearly, complete sequencing of such environmental genomes is not an easily attainable goal. Fortunately, it can be argued that this may not be the most relevant goal, as this study exhibited the wealth of genomic information obtained via a variety of analyses into patterns of phylogenetic and functional diversity. Analyses suggested approximately 1,800 different genomic species, with a large number of novel phylotypes. Sequence annotation predicted 1.2 million new genes, including for example 782 rhodopsin-like genes affiliated with a wide range of bacterial taxa. This latter finding suggests that a large fraction of marine bacteria possess chlorophyll-independent light harvesting systems. Numerous other niche-defining genes and pathways were also detected, providing an unprecedented insight into the biogeochemistry of such marine ecosystems. The problems associated with assembling sequences recovered from shotgun libraries from complex communities become extreme when even more diverse ecosystems are interrogated in this way, as demonstrated by Tringe et al. [74] in their analysis of a soil metagenomic library. Soil-borne microbial communities are thought to be Earth_s greatest source of biodiversity, with estimates ranging from thousands to tens of thousands of species per gram of soil [10,72]. Indeed, nearly 140 Mb of sequence from a farmland soil revealed less than 1% of sequences showing any overlap, and produced no contigs, indicating that complete sequencing of such habitats is practically unattainable. However, Tringe et al. 
[74] demonstrated that such an exercise is far from futile. While obviously falling far short of providing an adequate sampling of the genetic diversity of this complex environment, this study did provide a wealth of novel genetic data, revealing hundreds of thousands of new protein-encoding genes, the vast majority of which were only distantly related to known protein sequences. Furthermore, these authors demonstrated that distribution patterns of sequence motifs and clusters of orthologous groups (COGs) of proteins [69,70] can be used to provide functional fingerprints of environments, which can be compared across disparate habitats. This brief synopsis of shotgun cloning approaches across a gradient of microbial diversity serves to highlight the power and limitations of such approaches as applied to different environments. As such endeavors expand to include other environments, we can expect that full community genomes will be produced from numerous low-diversity environments such as bioreactors and biofilms [60]. This information will pave the way for postgenomic studies that should help elucidate microbial interactions and pathways, allowing predictive and manipulative management of such economically relevant microbial communities. We predict that numerous genomic scaffolds will be revealed in shotgun clone investigations of important environments of intermediate diversity such as GI tracts [14,89] and oral cavities [19]. In addition, diversity within gene families of particular relevance to these habitats should be revealed. Within high-diversity habitats such as soil, metagenomic approaches should continue to reveal novel and specialized genes (see also below) and provide comparative insight into the distribution of microbial functions across different habitats. Product or activity-driven metagenomic studies are often approached from a more applied perspective, with the express goal to discover and exploit useful properties encoded within the metagenome [41]. Given that the majority of natural products are of microbial origin, and that the vast majority of microbial genomes have yet to be explored, it follows that microbial metagenomes contain a great economic potential. Due to their huge diversity and history as sources of commercially valuable G.A KOWALCHUK ET AL.: FINDING THE NEEDLES IN THE META-GENOME HAYSTACK molecules with agricultural, chemical, industrial, and pharmaceutical applications [9,41,42], soil environments have been the most common subjects of metagenome interrogation in this way [11]. Successful exploitation of microbial activities or metabolic pathways via a metagenomic approach requires a large number of critical steps (Fig. 2). Firstly, the target environment must contain the gene(s) encoding the activity of interest, preferably in a high frequency. Secondly, the DNA extraction and cloning methods must allow for the capture of intact genes or operons. Thirdly, the target genes must be detectable, either genetically or phenotypically. Lastly, once potential target activities have been detected, it must be possible to tailor their expression into viable production schemes. Predicting the success rate can be modeled depending on the nature of the target genes and the proportional abundance of the microorganisms harboring them [22]. Obviously, one must first start by looking in the right kind of environment, as exemplified by Rhee et al. 
[55] in their search for thermostable esterases, bearing in mind that not all environments provide easy access to large microbial biomass (see below). Still, except in cases where engineered systems are known to possess high levels of an activity of interest [27], specific target genes will represent only a very small fraction of the total genomic material in environmental samples. One obvious way to stack the deck in favor of detection of a property of interest (e.g., enzyme activity) is to enrich environmental samples for its presence. Metagenomic analysis of enrichment cultures has indeed become a powerful approach to isolation of genes encoding simple functions like biocatalyst or degrading activities [17,24,35,36,79]. As with other methods that depend on growth of target populations, enrichment procedures before metagenome extraction bias samples toward populations that react particularly well to the specific enrichment conditions. This may severely restrict the diversity and novelty of the target gene pool. Many extremely useful enzyme-encoding genes may occur within populations that respond slowly to enrichment conditions, thereby being masked by potentially less-useful genes that occur within more responsive populations. Step two in the chain toward metagenome prospecting has for the most part been solved rather well. Numerous DNA extraction and cloning methods are now available, and methods can pretty much be tailored to the sample type and the insert size desired. Insert size and expression background are the key factors when determining cloning strategy, and hinges on the size of the genomic region of interest (i.e., single genes vs full pathways) and the suspected phylogenetic range of target genomes. Choice of cloning strategy is intimately linked with the next link in the discovery chain, namely, identification of clones of interest. In theory, clones of interest can be identified by mass sequencing, where huge amounts of sequence data are examined for Bpotentially interesting bits^which are then studied in further detail. Alternatively, degenerate nucleotide sequences targeting conserved regions of gene families can be used to screen via various hybridization methods. These examples of screening by Bforward genetics^can be effective when target genes belong to a well-defined protein family, but are generally inefficient, and can only detect potentially interesting inserts based upon homology to known motifs. Functional screening methods potentially provide a means to discover new variants of functions of interest. The efficiency of functional screening of metagenomic libraries relies both on the efficiency and sensitivity of the assay and the compatibility of host_s transcription, translation, and modification machinery to act upon the transgenic DNA in question. Obviously, expansion of host ranges within metagenomic studies [39,43,81,82], even to eukaryote hosts [2], should provide greater access to the expression of a wider range of environmental gene activities, and steps in this direction are already bearing fruit. In the majority of studies to date, transgenic gene expression has relied on promoter elements intrinsic to the transgenic genomic material. However, the use of vectors that couple inserts to general or specific promoters has also come forward as a useful and highly directed means of probing the metagenome for microbial activities. An example is Substrate-Induced Gene Expression (SIGEX) screening [77,85]. 
This novel method clones environmental DNA into GFP-tagged vectors, and libraries are subsequently subjected to the target substrate of interest. Clones expressing GFP in the presence of the target substrate are then sorted and collected by FACS for further cultivation and analysis. This procedure allows one to zoom in on activities that are related to particular substrates or catabolic pathways of interest. Recent advances in vector systems and knowledge of promoter systems are adding to the potential of such directed approaches to functional gene discovery. Several flow cytometric methods have also been devised to examine large metagenomic libraries for activities that can be detected by fluorescent assays (see Diversa patents US 5958672 and 6872526-B2), promising more rapid interrogation of metagenomic libraries for sequences and activities of interest. A final hurdle in realizing the potential of genes recovered from metagenomic libraries is obtaining highlevel expression and incorporation into viable industrial processes. Continued effort to improve well-controlled high-expression systems remains an open research area. Many microbe-derived activities are still less than optimal for implementation in industrial processes. Directed evolution and selection methods [15,16] are providing fascinating and promising results that may allow researches to mold enzymatic activities to fill their specific needs. Phylogenetic and large-insert metagenomic approaches provide access to genetic information contained within microbial populations only known to us in the form of specific phylogenetic marker gene sequences [57]. The general strategy is to use 16S rRNA gene markers as phylogenetic handles to identify genomic fragments from not-yet-cultured populations of interest from largeinsert libraries [25]. The already classic example of this strategy is the discovery of proteorhodopsin within a genomics fragment belonging to a SAR86 population [4]. The discovery of this niche-defining gene led to further, far-reaching inferences concerning the diversity and extent of phototrophy in the world_s oceans [3], and it serves as the ecological poster child of metagenomics success. Similar strategies have now been successful in providing insight into other not-yet-cultured organisms including uncultured Acidobacteria [40] and Archaea [51]. Although these successes provide us glimpses into novel genomes, it requires a combination of insight and pure luck to define niches based upon relatively short stretches of genomics information. Indeed, in silico exercises using complete genome sequences can easily demonstrate that it is usually impossible to infer the niche of an organism based upon the 1-2% of the genome adjacent to an rRNA operon. A number of approaches may allow us to glean more functional information from such exercises: (1) Using genes toward the ends of marker-containing inserts as markers for further interrogation of clone libraries would allow one to detect adjacent inserts, thereby expanding the contiguous chromosomal region investigated. Although this sounds highly attractive, the use of such a strategy may only be practical where the target populations represent a considerable proportion of the total community; (2) Using known functional genes of interest instead of phylogenetic markers may provide a more direct route to the discovery of gene clusters of related function. 
In many cases, prokaryotic phenotypes are the result of the concerted effort of many genes that are often arranged into adjacent operons or super-operonic clusters. Thus, by targeting known genes central to complex phenotypes, the entire metabolic pathway of interest can be captured [25,56]; (3) Many niche-determining microbial activities reside on relatively mobile genetic elements. Strategies targeting the so-called mobilome [21] provide a means of focusing in an especially interesting subset of microbial activities [44,45,68]. As above, a limiting factor in such approaches is our ability to screen libraries for markers or activities of interest, and screening strategies include PCR-based methods, hybridization [38], and several novel approaches such as use of microarrays [61] and flow cytometry [46]. Given that most anchored metagenome approaches rely upon rRNA gene markers, the creation of libraries that are enriched for inserts containing these markers may also prove a useful first step in gaining access to genomic information from defined phylogenetic groups. Homing restriction enzymes may facilitate such approaches. These enzymes target relatively long recognition sites, typically unique within a bacterial genome, and I-CreI for example should theoretically ground metagenomic clones to rRNA gene operons. The prospect of custom-made homing enzymes [58] is especially exciting as these may provide a means of grounding metagenomic libraries to specific genomic sites of choice. Metagenomic approaches have the potential to generate tremendous amounts of sequence information. However, the knowledge gleaned from such studies is not proportional to the sequencing effort involved, and it depends on the bioinformatics interpretation of the information obtained. Bioinformatics challenges are encountered at several steps of metagenome analyses, namely: (1) sequence assembly, (2) sequence annotation, and (3) broader use and analysis of metagenomic sequence information. Algorithms for sequence reconstruction and contig formation have dramatically improved over the last couple of years but still rely to a large extent on principles used for the reconstruction of genomes from single organisms represented with a large coverage. Genome assembly is already complicated when analyzing a single cultured bacterium, and assembly becomes increasingly difficult when the total diversity and structure of a community is not known. Although community genome sequencing projects to date have managed to provide valuable insight into how patterns of sequence coverage and COG recognition can be used to glean important information from incomplete genomic sampling, further progress in this area is essential. For example, building recovered sequence information onto the scaffolds of known genomes is proving to be a highly valuable tool in trying to piece together partial genome sequences recovered from environmental samples. As community genome sequencing efforts continue and novel sequencing methods are introduced, community assembly algorithms will need to place a greater emphasis on unraveling genomic information from partial coverage of genomes and a high abundance of short sequencing reads. In the ideal scenario, the annotation of gene sequences should depend upon recovery of a full gene sequence, the context of the gene within the genome, sequence homology genes of known function, and experimental evidence of a gene product and function. 
Even in the analysis of genomes from pure cultures, the last of these criteria is lacking, and assumptions are made based mostly upon sequence homology and recognized sequence motifs, as well as the assumption that past annotations are correct. However, with environmental sequences, the first two criteria are often also lacking, making reliance on pure sequence homology often tenuous at best. Clearly, gene function also relies on context, and conclusions based solely upon sequence similarities should be treated with the appropriate caution. Bearing this in mind, predictions of functional modules and domains based upon dynamic databases of gene families from sequenced genomes, as exemplified for polyketide synthase genes [84], should provide a greater degree of confidence for the annotation of genes recovered directly from the environment. Due to the costs and infrastructure of large-scale metagenomics efforts, it is clear that such approaches are not yet available to a broad community of scientists. On the other hand, large-scale metagenome projects can produce much more data than any one group can analyze, and initial analyses are typically restricted to general trends of diversity and composition and a selected number of traits of specific interest to the researchers. Of course, recovered sequence information is made available via public databases, but this is often in a less useful form than the original datasets. Opening up metagenomic datasets for interrogation by a broader group of researchers, whose interests span a greater breadth of microbial functions, seems to be a relatively easy step that could greatly increase the understanding gleaned from large-scale metagenomics initiatives. Practical Aspects and Coordinating Efforts To date, there has been little broad-scale coordination in efforts to describe environmental metagenomes, and standards of resource management and curation are essentially absent. Who should choose the environments to be studied, and how should they be sampled? Who should decide the best approaches to access these metagenomes? Should cloned material be cataloged and stored, and if so, how and where? What is the most useful form of database management for recovered sequence information, and how should this be implemented? Up to now, the answers to these questions have for the most part been dictated by the specific interests and assets of the researchers spearheading individual metagenome projects. Some recent efforts have been helpful in providing the first coordination in such efforts, as exemplified by the US Department of Energy_s Genomes to Life Program and the Community Sequencing Program sponsored by the Joint Genome Institute. Not only is choice of environment important but also more coordinated funding efforts, better storage and access to cloned material, and standards of annotation and data deposition are necessary. Clearly, greater national and international cooperation in choosing and overseeing such metagenome efforts would help make large-scale metagenomic efforts more valuable, increasing their resource value to the scientific community. What the Future May Hold Metagenomics strategies currently followed, and the resources brought to bear in their execution, fit into the category of what might be called Bsledgehammer^or Bbrute force^approaches. 
Advances in cloning, screening, and sequencing technologies have made such a rough, indirect approach possible, and continued devel-opment in these areas will no doubt increase our access to the massive amount of information encoded in uncultivated microorganisms. Still, we may never find genes or assemble genomes originating from relatively low-abundant species or organisms residing in environments with high biodiversity despite their possible keystone roles in their environment or value to man. More focused methods are clearly needed if we wish to increase the efficiency with which we can recover genomic needles of interest from the haystack of environmental microbial diversity. Why go through the effort of producing and screening large metagenomic libraries for particular genomic fragments of interest if the organisms in question can be cultivated and subjected to genome analysis [75]? A major selling point of metagenomic approaches is that they are not restricted to only culturable microorganisms but also provide access to the Bunculturable^majority of microbial communities [83]. Increasingly, the application of the term Bunculturableĥ as proven to be incorrect for many microorganisms, as novel isolation and culturing methods are fueling a new wave of success stories in efforts to culture diverse microbes [30,31,54,59,65,67]. Thus, many Bunculturable^bacteria are more correctly probably just not-yet-cultured, and investments in culturing efforts may help to reduce the need for indirect and cumbersome metagenomic approaches. A number of other technologies are emerging that should also help us to focus on particular microbial needles in the haystack. These include: (1) combining metagenome approaches with stable isotope probing methods to focus in on genomes of active community members, (2) increased use of methods that target mRNA to access diversity of expressed genes, (3) zooming in on small sample sizes in particular environments of interest using whole community genome amplification methods to increase DNA quantities, (4) micromanipulation of individual cells for single-cell genome sequencing, and (5) the isolation and sequence determination from single DNA molecules. Stable isotope probing has become a powerful approach for studying subsets of microbial communities that respond to particular key substrates [52]. Molecular analysis of Bheavy-labeled^fractions of microbial communities based upon on phylogenetic and functional gene markers has provided a great impetus in the quest to couple microbial identity and function. However, such methods still focus on individual genes. Application of metagenomic approaches to active fractions of microbial communities offers an obvious route to isolation of important and complex microbial activities. Potential problems in this approach include the recovery of large molecular weight DNA, if large-insert approaches are required, and the limited amount of labeled nucleic acid available for subsequent analysis. Amplification of the labeled fraction may provide a solution to this latter problem as discussed below. Metagenomic approaches focus on genomic potential as opposed to realized activities, and a greater focus on gene expression in the environment is urgently needed. While gene expression of individual genes are providing insight into particular processes of interest [8], mRNA-based studies targeting numerous microbial activities simultaneously may hold the key to understanding the functioning of microbial consortia [7,18]. 
In this respect, DNA-based metagenome studies should be coupled with environmental transcriptomics approaches to gain insight into the genes that are actually active in the environment [50]. Numerous methods (DOP-PCR, IPEP, MDA, Omni Plex) have recently been developed for the amplification of genomic DNA without knowledge of sequence content [12,62,71,88]. Such whole-genome amplification strategies have typically been employed in the analysis of trace amounts of human DNA for analytical purposes [37]. However, the recent use on low-density cultures has opened up the ability to obtain genomic sequences from organisms for which extensive high-density culturing is not yet possible (i.e., genome sequencing has been performed on as little as õ1,000 cells after MDA; [13]). Similarly, genome amplification methods hold great promise to assist in the analysis of environmental samples that lack sufficient biomass for convenient application of metagenomic methodologies. Wholegenome or metagenome amplification methods will not only allow for the analysis of low-biomass environments but will also allow for the analysis of microbial communities at scales that are more appropriate for elucidating microbial functioning. For instance, many soil processes may best be understood at the level of microbial aggregates and bioreactors at the level of individual flocs. Taken a step further, such amplification technologies provide access to microbial genomes at the level of a single microbial cell [53,87,88]. The ability to gain genome sequence information from a single cell will finally fully bypass our need to culture organisms to gain access to their full genomic potential. Combining singlecell sequencing methods with in situ methods of cell identification and new techniques for the isolation and characterization of single prokaryotic cells [6,20] presents the possibility of examining microbial community genomes and activities one cell at a time. Why put all the genomes of an ecosystem into a mixer and try to piece the genomes back together again afterward when genomic information can be directly obtained from the individual community members? Such methods will not only open the door to the study of individual cells belonging to phylogenetic groups that are resistant to G.A KOWALCHUK ET AL.: FINDING THE NEEDLES IN THE META-GENOME HAYSTACK culturing methods and/or that occur at low frequencies, but will also provide a means of conducting bacterial population biology [63]. As with other methods, such amplification methods also carry a number of potential drawbacks, especially biases introduced by selective amplification [12], production of relatively short DNA fragments, and risks of contamination. Although whole-genome amplification methods provide access to the vast majority of genomic DNA present, and methods are being improved [28], amplification bias will remain an issue for the foreseeable future in the application of such procedures to environmental DNA. The production of relatively short fragments hampers the prospect of recovery of intact genes or operons although methods for larger fragment recovery upon amplification are becoming available [34]. Wholegenome amplification methods, especially the Multiple Displacement Amplification (MDA) method [1,12,28], are superior to PCR-based method in recovering large DNA fragments from very limited amount of materials. However, when applied to single cells, the issue of background amplification with MDA is not trivial, as exemplified by Raghunathan et al. 
[53], who found up to 70% of amplicons to be contaminants. In addition, the amplification by strand displacement creates a complex, repeated forked structure of DNA that may hamper downstream manipulations [80]. Most recently, two methods have been developed to reduce background amplification: one based on nanoliter-scale reaction volumes [29], and the other involving careful experimental procedures coupled by real-time monitoring of amplification kinetics [87]. With these improvements in background amplification, as well as a new sequencing library construction protocol to deal with the unusual hyperbranched DNA structures generated by MDA, Zhang et al. [87] demonstrated amplification of single Prochlorococcus cells and recovered approximately two-thirds of the genome at the sequencing depth of 3.5õ4.7Â. Due to amplification bias on single-cell amplifications, it was estimated that a sequencing depth of õ15Â would be required to recover 90% of the genome, with the filling of remaining gaps best dealt with via PCR-based methods. Nevertheless, this study represents a significant technological advance in obtaining genome information from single cells in environmental samples without lab culturing. Further developments, such as reducing amplification bias and improving sequencing coverage, as well as implementation of high-throughput screening platforms, are required to tackle the highly complex microbial communities in the environment. Before the single-cell genome sequencing method can be robustly and cost-effectively implemented in regular research labs, metagenomic sequencing will remain as an attractive complementary method in the coming years. Recent technological advances indicate that the analysis of small nucleic acid samples can be taken to the extreme, namely, single DNA or perhaps even RNA molecules [5]. Single molecule sequencing technologies are not yet applicable to the study of environmental samples but, if rendered feasible, hold the potential to open the door to microbial community genomics at the subcellular level. Conclusions Metagenomic approaches offer the unique ability to examine directly the genomic content of microbial communities, and recent advances in cloning, sequencing, and screening technologies are rapidly increasing the speed and efficiency with which community genomes can be analyzed. However, the immense microbial diversity of this planet precludes a simple strategy of sequencing everything, and clever choices and coordination in environment selection, screening methods, and data analysis will be key to deriving maximal knowledge and utility from available resources. The greatest advances in accessing community genome pools will probably come not from course improvements in metagenome library construction, but rather in methods to interrogate metagenomes for important microbial functions. Despite the hype of metagenomic approaches, emerging technologies and a revival in culturing efforts may make metagenomic approaches unnecessary in many cases. Thus, while metagenomic approaches can provide unique and unprecedented glimpses into microbial community function, they should not be seen as a means in and of themselves, but rather one impressive tool within the integrated approaches becoming available to tackle the diversity of Earth_s microbial functions.
2014-10-01T00:00:00.000Z
2007-03-08T00:00:00.000
{ "year": 2007, "sha1": "4c1958ba18b8eb7f9c90a90f01f29f01afee5058", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00248-006-9201-2.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4c1958ba18b8eb7f9c90a90f01f29f01afee5058", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244908937
pes2o/s2orc
v3-fos-license
Dirac-type results for tilings and coverings in ordered graphs A recent paper of Balogh, Li and Treglown initiated the study of Dirac-type problems for ordered graphs. In this paper we prove a number of results in this area. In particular, we determine asymptotically the minimum degree threshold for forcing (i) a perfect $H$-tiling in an ordered graph, for any fixed ordered graph $H$ of interval chromatic number at least $3$; (ii) an $H$-tiling in an ordered graph $G$ covering a fixed proportion of the vertices of $G$ (for any fixed ordered graph $H$); (iii) an $H$-cover in an ordered graph (for any fixed ordered graph $H$). The first two of these results resolve questions of Balogh, Li and Treglown whilst (iii) resolves a question of Falgas-Ravry. Note that (i) combined with a result of Balogh, Li and Treglown completely determines the asymptotic minimum degree threshold for forcing a perfect $H$-tiling. Additionally, we prove a result that combined with a theorem of Balogh, Li and Treglown, asymptotically determines the minimum degree threshold for forcing an almost perfect $H$-tiling in an ordered graph (for any fixed ordered graph $H$). Our work therefore provides ordered graph analogues of the seminal tiling theorems of K\"uhn and Osthus [Combinatorica 2009] and of Koml\'os [Combinatorica 2000]. Each of our results exhibits some curious, and perhaps unexpected, behaviour. Our solution to (i) makes use of a novel absorbing argument. Introduction In recent years there has been a significant effort to develop both Turán and Ramsey theories in the setting of vertex ordered graphs. A (vertex) ordered graph or labelled graph H on h vertices is a graph whose vertices have been labelled with [h] := {1, . . . , h}. An ordered graph G with vertex set [n] contains an ordered graph H on [h] if (i) there is an injection φ : [h] → [n] such that φ(i) < φ(j) for all 1 ≤ i < j ≤ h and (ii) φ(i)φ(j) is an edge in G whenever ij is an edge in H. Turán-type problems concern edge density conditions that force a fixed graph H as a subgraph in a host graph G. Whilst the Erdős-Stone-Simonovits theorem [8,9] determines, up to a quadratic error term, the number of edges in the densest H-free n-vertex graph, there is still active interest in the Turán problem for bipartite H. Indeed, for bipartite H the error term in the Erdős-Stone-Simonovits theorem is in fact the dominant term, so more refined results are sought. Similarly, a result of Pach and Tardos [24] determines asymptotically the number of edges an ordered graph requires to force a copy of a fixed ordered graph H with so-called interval chromatic number χ < (H) at least 3. Therefore, again there is significant interest in the 'bipartite' case of this problem (i.e., when χ < (H) = 2); see Tardos [28] for a recent survey on such results. The study of Ramsey theory for ordered graphs has also gained significant traction. For example, results of Conlon, Fox, Lee and Sudakov [5], and of Balko, Cibulka, Král and Kynčl [2] demonstrate that there are ordered graphs H for which the behaviour of the Ramsey number is vastly different to their underlying unordered graph. Other than Turán and Ramsey problems, another central branch of extremal graph theory concerns Dirac-type results; that is, minimum degree conditions that force fixed (spanning) structures in graphs. In a recent paper, Balogh, Li and the second author [3] initiated the study of Dirac-type results for ordered graphs. 
Their main focus was on perfect H-tilings, though they also raised other Dirac-type problems (see [3,Section 8]). In both the ordered and unordered settings, an H-tiling in a graph G is a collection of vertex-disjoint copies of H contained in G. An H-tiling is perfect if it covers all the vertices of G. Perfect H-tilings are also often referred to as H-factors, perfect H-packings or perfect H-matchings. H-tilings can be viewed as generalisations of both the notion of a matching (which corresponds to the case when H is a single edge) and the Turán problem (i.e., a copy of H in G is simply an H-tiling of size one). Problem 1.1. [3] Given any ordered graph H and any n ∈ N divisible by |H|, determine the smallest integer δ < (H, n) such that every n-vertex ordered graph G with δ(G) ≥ δ < (H, n) contains a perfect H-tiling. The analogous problem in the (unordered) graph setting had been studied since the 1960s (see, e.g., [1,6,13,17,20,21]) and forty-five years later a complete solution, up to an additive constant term, was obtained via a theorem of Kühn and Osthus [21]. We will discuss this result further when comparing this problem with Problem 1.1. In [3,Theorem 1.9], Balogh, Li and Treglown asymptotically resolved Problem 1.1 for H with χ < (H) = 2. Further, they developed approaches to the absorbing and regularity methods for ordered graphs, including providing general absorbing and almost perfect tiling lemmas. In this paper we build on their results to asymptotically resolve Problem 1.1 in all remaining cases (i.e., for all H with interval chromatic number at least 3). Our main result shows that Problem 1.1 does exhibit a somewhat different behaviour when χ < (H) ≥ 3 compared to when χ < (H) = 2; we discuss this further in Section 1. 3. In addition to this result, we also resolve the Dirac-type problems for H-covers in ordered graphs (see Section 1.2) and H-tilings covering a fixed proportion x of the vertices of an ordered graph for x ∈ (0, 1) (see Section 1.4). 1.1. A Dirac-type theorem for perfect H-tilings. In this subsection we state Theorem 1.8, which asymptotically resolves Problem 1.1 when χ < (H) ≥ 3. This result will depend on several definitions and parameters which we now introduce. We informally refer to an H-tiling in an n-vertex (ordered) graph G as an almost perfect H-tiling if it covers all but at most o(n) vertices of G. Komlós [16] proved that (1 − 1/χ cr (H))n is the minimum degree threshold for forcing an almost perfect H-tiling in an n-vertex graph G. In fact, it was later shown [26] that such graphs G contain H-tilings covering all but a constant number of the vertices in G. In the setting of ordered graphs, a related parameter χ * cr (H) turns out to be the relevant parameter for forcing an almost perfect H-tiling. To introduce this parameter we need the following definitions. For brevity, we will usually drop φ and just write U σ −1 (1) < · · · < U σ −1 (k) . Given t ∈ N, write B(t) for the blow-up of B with vertex set x∈V (B) V x , where the V x 's are sets of t independent vertices; so there are all possible edges between V x and V y in B(t) if xy ∈ E(B). Given an interval labelling φ of B, let (B(t), φ) be the ordered graph obtained from B(t) by equipping V (B(t)) with a vertex ordering, satisfying V x < V y for every x, y ∈ V (B) with φ(x) < φ(y). We refer to (B(t), φ) as an ordered blow-up of B. Definition 1.4 (Bottlegraph). 
For an ordered graph H, we say that a complete k-partite unordered graph B is a bottlegraph of H, if for every permutation σ of [k] and every interval labelling φ of B with respect to σ, there exists a constant t = t(B, H, φ) such that the ordered blow-up (B(t), φ) contains a perfect H-tiling. We say that B is a simple bottlegraph of H if for any choice of σ and φ we can take t = 1. Note that in Definition 1.4 we did not impose any restriction on the size of the parts of a bottlegraph. However, as we will see in Proposition 5.1, it suffices to consider bottlegraphs B ′ where all parts are of the same size except for perhaps one smaller part. More precisely, given any bottlegraph B of H, there is another bottlegraph B ′ with this structure such that χ cr (B ′ ) = χ cr (B). This bottle-like structure is where the name bottlegraph is derived, and was first used by Komlós [16] in the setting of unordered graphs. We say a bottlegraph B of F is optimal if χ cr (B) = χ * cr (F ). Notice that χ < (H) − 1 ≤ χ * cr (H) for all ordered graphs H as each bottlegraph B of H must have chromatic number at least χ < (H) and so χ < (H) − 1 < χ cr (B). In fact, Proposition 2.6 in Section 2 yields a stronger lower bound on χ * cr (H). On the other hand, (in contrast to χ cr (F ) for unordered graphs F ) we will also see examples of ordered graphs where χ * cr (H) is much larger than χ < (H). Note though that χ * cr (H) ≤ h for any ordered graph H on [h] as K h is a bottlegraph of H. In fact, this upper bound is attained when H is such that 1 and 2 are adjacent or h − 1 and h are adjacent; this is an immediate consequence of Proposition 11.1. To aid the reader's intuition, in Section 3. 3 we give examples of ordered graphs H where we compute χ * cr (H). Various bounds on χ * cr (H) are given in Section 11. The next result, a simple corollary of [3,Theorem 4.3], shows that χ * cr (H) is a relevant parameter for forcing an almost perfect H-tiling in an ordered graph. 1 Theorem 1.6 (Balogh, Li and Treglown [3]). Let H be an ordered graph. Then for every η > 0, there exists an integer n 0 = n 0 (H, η) so that every ordered graph G on n ≥ n 0 vertices with n contains an H-tiling covering all but at most ηn vertices. At first sight it is not clear if the minimum degree threshold in Theorem 1.6 is best possible. However, Theorem 12.1 in Section 12 shows that Theorem 1.6 is best possible in the sense that one cannot replace the 1 − 1/χ * cr (H) term in the minimum degree condition with any other fixed constant term a < 1 − 1/χ * cr (H). Thus, Theorem 12.1 and Theorem 1.6 provide an analogue of Komlós' theorem in the ordered setting. Unusually, in the proof of Theorem 12.1, for most ordered graphs H we do not simply produce an explicit extremal example. Indeed, if one has not explicitly computed the value of χ * cr (H) and the 'reason' why it takes this value, then it seems difficult to produce such an explicit extremal example. Instead, the proof splits into a few cases and uses various tools and results that we introduce in the paper. At this point the reader may wonder if the conclusion of Theorem 1.6 can be strengthened to ensure a perfect H-tiling. For some ordered graphs H this is possible. However, for other ordered graphs one will require a significantly higher minimum degree condition. The following definition is the critical concept for articulating this dichotomy for H with χ < (H) ≥ 3. Definition 1.7 (Local barrier). Let H be an ordered graph on [h] with r := χ < (H) ≥ 2. 
We say that H has a local barrier if for some fixed i = j ∈ [r + 1] the following condition holds. Given any interval (r + 1)-colouring of H with colour classes V 1 < · · · < V r+1 such that V i = {v} is a singleton class, there is at least one edge between v and V j in H. Note that in this definition we may have that a colour class V k is empty. If H is the ordered complete graph on r vertices then H does not have a local barrier; it is also easy to check that χ * cr (H) = χ < (H) = r. Given r ≥ 2, let H ′ be any complete r-partite (unordered) graph with at least 2 vertices in each colour class. Let H be any ordered graph obtained from H ′ by assigning an interval labelling to H ′ ; so χ < (H) = r. Then one can check that H has a local barrier with parameters i = 1 and j = r + 1 as in Definition 1.7. We are now able to state our main result which resolves Problem 1.1 for all ordered graphs H with χ < (H) ≥ 3. Theorem 1.8. Let H be an ordered graph with χ < (H) ≥ 3. Given any η > 0, there exists an integer n 0 = n 0 (H, η) so that if n ≥ n 0 and |H| divides n then and H has a local barrier; (iii) 1 − 1 Therefore Problem 1.1 is now asymptotically resolved. The reader might find it hard to see why the value of δ < (H, n) behaves as in Theorem 1.8, and indeed, it took the authors quite some time to discover the correct behaviour of this problem. In Section 1.3 we give further intuition on this result. In Section 3 we give examples of H in each case (i)-(iii) of the theorem. In Section 2 we give the extremal constructions for Theorem 1.8. In particular, in cases (i) and (iii) the extremal examples are 'bottlegraphs' -complete multipartite ordered graphs where each part has the same size except at most one smaller part; in Section 11 we explicitly compute the value of χ * cr (H) for a range of H, and thus the minimum degree threshold in Theorem 1.8 also. The explanation for why these extremal 'bottlegraphs' do not have perfect H-tilings revolve around 'space barrier' constraints, (i.e., one runs out of space in some subset of vertices in the extremal bottlegraph). 2 However, the 'reason' for obtaining a space barrier can be somewhat involved (and unlike any other space barrier extremal example we have ever seen before); we discuss this in Section 3.1. The proof of Theorem 1.8 applies an absorbing theorem from [3, Theorem 4.1] and Theorem 1.6 above. The main novelty is to prove an absorbing theorem for ordered graphs H as in Theorem 1.8(iii). Whilst our argument makes use of a lemma of Lo and Markström [23], and seems rather natural, it is different to any absorbing proof we have previously seen (in particular, we do not use local-global absorbing as in [3]). See Section 9 for an overview of our absorbing strategy. 1.2. A Dirac-type theorem for vertex covers. Given (ordered) graphs H and G, we say that G has an H-cover if every vertex in G lies in a copy of H. Note that the notion of an H-cover is an 'intermediate' between seeking a single copy of H and a perfect H-tiling; in particular, a perfect H-tiling in G is itself an H-cover. Given any n ∈ N and any (ordered) graph H, let δ cov (H, n) denote the smallest integer k such that every n-vertex (ordered) graph G with δ(G) ≥ k contains an H-cover. As noted in [22], an easy application of Szemerédi's regularity lemma asymptotically determines δ cov (H, n) for all graphs H. 
Proposition 1.9 implies that, asymptotically, the minimum degree threshold for ensuring an Hcover in a graph G is the same as the minimum degree threshold for ensuring a single copy of H in G. In [22,Theorem 5], Kühn, Osthus and Treglown asymptotically determined the Ore-type degree condition that forces an H-cover for any fixed graph H. There has also been several recent papers concerning minimum ℓ-degree conditions that force H-covers in k-uniform hypergraphs; see, e.g., [11,12,14]. Falgas-Ravry [10] raised the question of determining δ cov (H, n) for all ordered graphs H. Our next result asymptotically answers this question. Theorem 1.10. Let H be an ordered graph and η > 0. Then there exists an integer n 0 = n 0 (H, η) so that if n ≥ n 0 then H has a local barrier. Theorem 1.10 is a direct consequence of some of the auxiliary results we use in the proof of Theorem 1.8. Note that the behaviour of the threshold in Theorem 1.10 is perhaps unexpected. Indeed, unlike in the unordered setting, Theorem 1.10 and the Erdős-Stone-Simonovits theorem 2 See [15] for a discussion on space barriers. for ordered graphs imply that the asymptotic minimum degree thresholds for forcing a copy of H and an H-cover are different if H has a local barrier. Furthermore, a key moral of the Erdős-Stone-Simonovits theorem (and Proposition 1.9) is that once a graph G is dense enough (or has large enough minimum degree) so as to ensure a copy of K r (or a K r -cover) then G must contain every fixed graph H (or an H-cover) for every H of chromatic number r. An intuition for this comes from Szemerédi's regularity lemma. However, the analogous moral is not true for H-covers in ordered graphs. Indeed, if H is an ordered complete graph on r vertices (so χ < (H) = r and H has no local barrier) then Theorem 1.10 tells us the minimum degree threshold for forcing an H-cover in an n-vertex ordered graph is (1 − 1 r−1 + o(1))n whilst the corresponding threshold for any 'blow-up' H ′ of H (so χ < (H ′ ) = χ < (H) = r and H has a local barrier) is significantly higher, namely (1 − 1 r + o(1))n. This should hint to the reader that the regularity method behaves differently in the ordered setting; in particular, if H has a local barrier this provides an obstruction when applying the regularity lemma. More discussion on the regularity method for ordered graphs can be found in [3, Section 3.1]. 1.3. Intuition behind the threshold in Theorem 1.8. In this subsection we build up further intuition behind the threshold in Theorem 1.8. For this it will be useful to first take a step back and consider perfect H-tilings in unordered graphs. In this setting the Dirac-type threshold is governed by two factors: (C1) The minimum degree needs to be large enough to force an almost perfect H-tiling. (C2) The minimum degree must be large enough to prevent 'divisibility' barriers within the host graph that constrain us from turning an almost perfect H-tiling into a perfect H-tiling. This is made precise by the following theorem of Kühn and Osthus [21]. [21]). Let δ(H, n) denote the smallest integer k such that every graph G whose order n is divisible by |H| and with δ(G) ≥ k contains a perfect H-tiling. Recall that every graph H satisfies χ cr (H) ≤ χ(H). So by Komlós' aforementioned almost perfect tiling theorem [16], the minimum degree condition in Theorem 1.11 is enough to ensure (C1) holds. Meanwhile those graphs H with hcf(H) = 1 are precisely the graphs for which, at the almost perfect tiling threshold, (C2) is satisfied. 
Furthermore, for graphs H with hcf(H) = 1, (C2) is only guaranteed to be satisfied once the n-vertex host graph has minimum degree around (1 − 1/χ(H))n. We do not state the precise definition of hcf(H) = 1 here, however, the following example is instructive. Let H be any connected bipartite graph and let n ∈ N be divisible by |H|. Consider the n-vertex graph G that consists of two disjoint cliques whose sizes are as equal as possible so that neither is divisible by |H|. Then whilst G contains an almost perfect H-tiling, the divisibility constraint on the clique sizes prevents a perfect H-tiling. Thus all such H are examples of graphs with hcf(H) = 1. In particular, δ(G) = n/2 − O(1) = (1 − 1/χ(H))n − O(1), so G is an extremal example for Theorem 1.11 in this case. As mentioned earlier, another necessary condition for a Dirac-type threshold for perfect H-tilings is the following: (C3) The minimum degree needs to be large enough to force an H-cover. Condition (C3), however, does not factor into the statement of Theorem 1.11 as Proposition 1.9 shows that one can ensure an H-cover 'earlier' than an almost perfect H-tiling (recall that χ(H) − 1 < χ cr (H)). Interestingly, the opposite is true in the ordered graph setting when H has a local barrier and χ < (H) ≥ 3 is such that χ < (H) > χ * cr (H). Indeed, in this case, Theorem 1.8(ii) essentially states that the H-cover condition is the 'last' of conditions (C1)-(C3) to be satisfied. In particular, Extremal Example 1 in Section 2 shows that for every H satisfying Definition 1.7, there are nvertex ordered graphs with δ(G) > (1 − 1/χ < (H))n − 1 for which a certain vertex does not lie in a copy of H. In all other cases when χ < (H) ≥ 3, Theorem 1.8 essentially states that the almost perfect tiling condition is the 'last' of conditions (C1)-(C3) to be satisfied. Therefore, surprisingly (at least to the authors!), divisibility barriers play no role in Problem 1.1 for H with χ < (H) ≥ 3. In contrast, Theorem 1.9 in [3] shows that in the case when χ < (H) = 2, each of (C1), (C2) and (C3) can be the condition that governs the Dirac-type threshold for perfect H-tiling, depending on the choice of H. 1.4. A Dirac-type theorem for H-tilings. In addition to determining the Dirac-type threshold for almost perfect tilings in unordered graphs, Komlós [16] provided a best-possible minimum degree condition for forcing an H-tiling covering a certain proportion of the vertices in a graph G. [16]). Let H be a graph and x ∈ (0, 1). Define x. Given any η > 0, there exists some n 0 = n 0 (x, H, η) ∈ N such that if G is a graph on n vertices where n ≥ n 0 and δ(G) ≥ g(x, H) · n then there exists an (x − η, H)-tiling in G. Note that the minimum degree condition in Theorem 1.13 is best possible in the sense that given any fixed H and x ∈ (0, 1), one cannot replace g(x, H) with any fixed g ′ (x, H) < g(x, H) (see [16,Theorem 7] for a proof of this). The function g(x, H) is quite well-behaved. Indeed, for fixed H, g(x, H) grows linearly in x. Note that g(0, H) · n and g(1, H) · n are the asymptotic minimum degree thresholds for ensuring an n-vertex graph contains a copy of H and an almost perfect H-tiling respectively. From this prospective, the function g(x, H) can be viewed as a linear interpolation of these two thresholds. The question of obtaining an ordered graph analogue of Theorem 1.13 was raised in [3, Question 8.2]. We provide an answer to this problem; for this we require the following definitions. Definition 1.14 (x-bottlegraphs). 
Let H be an ordered graph and x ∈ (0, 1]. An unordered graph B is an x-bottlegraph of H if it satisfies the following properties: (i) B is a complete k-partite graph with parts U 1 , U 2 , . . . , U k , for some k ∈ N. (ii) There exists some m ∈ N such that |U 1 | ≤ m and |U i | = m for every i > 1. Given any ordered graph H, if B is an x-bottlegraph of H then χ(B) ≥ χ < (H). This implies that χ cr (B) > χ < (H) − 1 and so An application of Theorem 1.13 together with a tool from [3, Lemma 6.2] yields the following minimum degree condition for the existence of (x, H)-tilings in ordered graphs. Theorem 1. 16. Let H be an ordered graph, x ∈ (0, 1) and define . Given any η > 0, there exists some n 0 = n 0 (x, H, η) ∈ N such that if G is an ordered graph on n vertices with n ≥ n 0 and δ(G) ≥ (f (x, H) + η)n then G contains an (x, H)-tiling. The minimum degree condition in Theorem 1.16 is best possible in the following sense. Let H and x ∈ (0, 1) be fixed. Given any 0 < a < 1 − 1/χ * cr (x, H), and any sufficiently large n ∈ N, consider any n-vertex graph B that satisfies (i) and (ii) of Definition 1.14 for some choice of k, m ∈ N and where So χ cr (B) < χ * cr (x, H). (Note such a graph B exists for any choice of 0 < a < 1 − 1/χ * cr (x, H).) Then by definition of χ * cr (x, H) there is a permutation σ of [k] and an interval labelling φ of B with respect to σ, such the resulting ordered graph (B, φ) does not contain an (x, H)-tiling. A draw-back of Theorem 1.16 is that it seems hard to compute χ * cr (x, H) in general. However, in Section 14 we describe the behaviour of the function f (x, H) for some fixed ordered graphs H. In particular, akin to Theorem 1.13, if H has χ < (H) = 2 then f (x, H) is linear in x. Perhaps surprisingly though, there are ordered graphs where f (x, H) is only piecewise linear. We also compute f (x, H) for every ordered graph H and every x that is not too big. 1.5. Organisation of the paper. The paper is organised as follows. In Section 2 we give the extremal constructions for Theorems 1.8 and 1.10. In Section 3 we give some examples of ordered graphs H that fall into each of the three cases of Theorem 1.8. In Section 4 we state a new absorbing theorem (Theorem 4.2) and an absorbing theorem from [3] and combine them with Theorem 1.6 to prove Theorem 1.8. The subsequent sections therefore build up tools for the proof of Theorem 4.2: in Section 5 we state a couple of useful properties of bottlegraphs; in Section 6 we introduce Szemerédi's regularity lemma and related useful results; some tools for absorbing are given in Section 7; Section 8 contains several results which give flexibility in how one can interval colour certain ordered graphs H. In Section 9 we give a sketch of the proof of Theorem 4.2 before proving it and Theorem 1.10 in Section 10. In Section 11 we give general upper and lower bounds on χ * cr (H) and also compute χ * cr (H) for a few general classes of ordered graphs H. In Section 12 we prove that the minimum degree condition in Theorem 1.6 is best possible. The proof of Theorem 1.16 is given in Section 13; in the subsequent section we describe the behaviour of the function f (x, H) for some choices of H. We conclude the paper with some open problems in Section 15. If G is an (ordered) graph, |G| denotes the size of its vertex set, and e(G) denotes the number of edges in G. Given A ⊆ V (G), the induced subgraph G[A] is the subgraph of G whose vertex set is A and whose edge set consists of all of the edges of G with both endpoints in A. 
We define G\A := G[V (G)\A]. For two disjoint subsets A, B ⊆ V (G), the induced bipartite subgraph G[A, B] is the subgraph of G whose vertex set is A ∪ B and whose edge set consists of all of the edges of G with one endpoint in A and the other endpoint in B. We write e(A, B) := e(G[A, B]). For an (ordered) graph G and a vertex x ∈ V (G), we define N G (x) as the set of neighbours of x in G and d G ( Given an ordered graph G we say that V 1 < · · · < V r is an interval r-colouring of G to mean that there is an interval r-colouring of G with colour classes V 1 < · · · < V r . We say that an ordered graph G is complete r-partite if there exists an interval r-colouring V 1 < · · · < V r such that xy ∈ E(G) for every x ∈ V i and y ∈ V j with i = j. We refer to the V i 's as the parts of G. Given an unordered graph G and a positive integer t, let G(t) be the graph obtained from G by replacing every vertex x ∈ V (G) by a set V x of t vertices spanning an independent set, and joining u ∈ V x to v ∈ V y precisely when xy is an edge in G; that is, we replace the edges of G by copies of K t,t . We will refer to G(t) as a blown-up copy of G. If U i is a vertex class in G then we write U i (t) for the corresponding vertex class in G(t). We use analogous notation when considering blown-up copies of complete k-partite ordered graphs. In particular, given a complete k-partite ordered graph B with parts B 1 < · · · < B k , the ordered blow-up Throughout the paper, we omit all floor and ceiling signs whenever these are not crucial. The constants in the hierarchies used to state our results are chosen from right to left. For example, if we claim that a result holds whenever 0 < a ≪ b ≪ c ≤ 1, then there are non-decreasing functions f : (0, 1] → (0, 1] and g : (0, 1] → (0, 1] such that the result holds for all 0 < a, b, c ≤ 1 with b ≤ f (c) and a ≤ g(b). Note that a ≪ b implies that we may assume in the proof that, e.g., a < b or a < b 2 . Extremal constructions In this section we provide the extremal examples for Theorems 1.8 and 1.10. First, consider the case when H has a local barrier. We now construct an n-vertex ordered graph which does not contain an H-cover (and thus no perfect H-tiling), and whose minimum degree is more than (1 − 1/χ < (H))n − 1, thereby giving the lower bounds in Theorem 1.8(ii) and Theorem 1.10(ii). Extremal Example 1. Let n, r ∈ N and i, j ∈ [r + 1] with i = j. Let F 1 (n, r, i, j) be an n-vertex ordered graph consisting of vertex classes U 1 < · · · < U r+1 which satisfy the following conditions: • U i = {u} is a singleton class while the remaining vertex classes are as equally sized as possible, and in particular, |U j | = n−1 r ; • F 1 (n, r, i, j)\{u} is a complete r-partite ordered graph with parts U 1 , . . . , U i−1 , U i+1 , . . . U r+1 ; • u is adjacent to all other vertices except those in U j . Note that Furthermore, we now prove that F 1 (n, r, i, j) does not contain an H-cover (nor a perfect H-tiling) provided that χ < (H) = r and H has a local barrier with respect to parameters i, j ∈ [r + 1]. Lemma 2.1. Let H be an ordered graph, let r := χ < (H) and let n ∈ N. If H has a local barrier then there exist i, j ∈ N, with i = j, and a vertex u ∈ F 1 (n, r, i, j) such that there is no copy of H in F 1 (n, r, i, j) covering the vertex u. In particular, F 1 (n, r, i, j) does not contain an H-cover nor a perfect H-tiling. Proof. Suppose H has a local barrier with respect to i = j ∈ [r +1], as defined in Definition 1.7. 
Let u be the vertex in the singleton class U i of F 1 (n, r, i, j). Suppose there is a copy of H in F 1 (n, r, i, j) covering the vertex u. Then the interval (r + 1)-colouring U 1 < · · · < U r+1 of F 1 (n, r, i, j) induces an interval (r + 1)-colouring V 1 < · · · < V r+1 of H such that V i = {v} is a singleton class and there is no edge between v and V j . This contradicts the assumption that H has a local barrier with respect to i, j; thus, there is no copy of H in F 1 (n, r, i, j) covering the vertex u. We immediately obtain the following corollary of Lemma 2.1 and (2). Corollary 2.2. Let H be an ordered graph and let n ∈ N. If H has a local barrier then n. Next we prove a general lower bound on δ < (H, n) which is asymptotically sharp if the ordered graph H does not have a local barrier. Similarly to before, we construct an n-vertex ordered graph which does not contain a perfect H-tiling and whose minimum degree is at least (1−1/χ * cr (H))n−1, thereby giving the lower bound in cases (i) and (iii) of Theorem 1.8. Extremal Example 2. Let H be an ordered graph and n ∈ N. Set ℓ := n χ * cr (H) + 1 . Define F 2 (H, n) to be the unordered complete ⌈n/ℓ⌉-partite graph on n vertices such that all classes have size ℓ except for one class of size at most ℓ. It is easy to check that the minimum degree of F 2 (H, n) is Additionally, there exists a certain ordering of the vertices of F 2 (H, n) such that the resulting ordered graph does not contain a perfect H-tiling: Lemma 2.3. Let H be an ordered graph and n ∈ N such that |H| divides n. There exists an interval labelling φ of F 2 (H, n) such that the ordered graph (F 2 (H, n), φ) does not contain a perfect H-tiling. Proof. The critical chromatic number of F 2 (H, n) is It follows that F 2 (H, n) is not a bottlegraph of H. Hence, by definition, there exists a permutation σ of [⌈n/ℓ⌉] and an interval labelling φ of F 2 (H, n) with respect to σ such that (F 2 (H, n), φ) does not contain a perfect H-tiling. Lemma 2.3 and (3) immediately imply the following corollary. Corollary 2.4. Let H be an ordered graph. Then given any n ∈ N divisible by |H|, n. Next we give a general lower bound for δ cov (H, n) which is asymptotically sharp if the ordered graph H does not have a local barrier, thereby giving the lower bound in Theorem 1.10(i). Extremal Example 3. Let H be an ordered graph and n ∈ N. Let F 3 (H, n) be the complete (χ < (H) − 1)-partite ordered graph on n vertices with parts of size as equal as possible. It is easy to check that the minimum degree of F 3 (H, n) is As does not contain a copy of H and thus does not contain an H-cover. We therefore obtain the following result. Lemma 2.5. Let H be an ordered graph and n ∈ N. Then For n divisible by χ < (H) − 1, F 3 (H, n) also shows that the minimum degree threshold that ensures an almost perfect H-tiling in an n-vertex ordered graph is more than (1 − 1/(χ < (H) − 1))n. Thus, combined with Theorem 1.6 this immediately implies that, for all ordered graphs H, Actually, we close the section by proving an even stronger lower bound on χ * cr (H). Proposition 2.6 (A lower bound for χ * cr (H)). Let H be an ordered graph on h vertices and r := χ < (H). Then, Proof. Let B be an arbitrary bottlegraph of H. It suffices to show that χ cr (B) ≥ (r − 1) + r−1 h−1 . If χ cr (B) ≥ r then we are done (since h ≥ r), so for the rest of the proof we assume that χ cr (B) < r. In particular, as (4) implies that χ cr (B) > r − 1, this means that B has exactly r parts. Let B 1 denote the part of B of smallest size. 
Pick any interval labelling φ of B; then there exists some t ∈ N such that the ordered blow-up (B(t), φ) contains a perfect H-tiling H. Since B has exactly r parts, it follows that every copy of H in (B(t), φ) intersects all parts of B. Hence, In Section 3.3 we give a family of ordered graphs H for which the lower bound on χ * cr (H) in Proposition 2.6 is tight. Motivating examples 3.1. An example for Theorem 1.8(i). Recall that Extremal Example 2 yields the lower bound in cases (i) and (iii) of Theorem 1.8. The argument in Lemma 2.3 is rather straightforward. This is because of the definition of χ * cr (H); if one takes a complete multipartite graph G with χ cr (G) < χ * cr (H), then by definition there is a vertex labelling of G so that the resulting ordered graph does not contain a perfect H-tiling. Therefore, if one provides an argument that justifies why a bottlegraph of H is optimal, this equivalently can be translated into an argument which explains why an ordered graph is an extremal example for cases (i) and (iii) of Theorem 1.8. In this way, one can view χ * cr (H) as 'encoding' properties of the extremal example. In Section 11 we will compute χ * cr (H) for various classes of ordered graphs H. Often these arguments will be somewhat involved; thus, in these cases the reason why the extremal example for Theorem 1.8 does not contain a perfect H-tiling is also 'involved'. That is, in general the reason why extremal examples do not contain perfect H-tilings is not as immediate as Lemma 2.3 might suggest. We illustrate this point through the following example. (In fact, in Proposition 11.5 we compute χ * cr (F ) for all complete 3-partite ordered graphs F .) Thus, for such H we are in case (i) of Theorem 1.8 and so We now describe an extremal example for Theorem 1.8 for such H. Let n ∈ N such that |H| divides n and n ≥ 20. Let G be the complete 4-partite ordered graph on n vertices with parts Note that G 4 is the smallest part since |G i | ≥ n/4 for i = 1, 2, 3 and |G 4 | ≤ n/4. In particular, Suppose for a contradiction that G contains a perfect H-tiling H. Let A ⊆ H be the set of copies of H in H which have exactly ℓ vertices in G 1 and set B := H \ A. This immediately implies Combining (5) and (6) yields the following: The above is a contradiction since Hence, G does not contain a perfect H-tiling. Note that G is a 'space barrier' construction as our argument tells us that G 1 ∪ G 2 is 'too big' to ensure a perfect H-tiling in G; moreover, the reason why G 1 ∪ G 2 is 'too big', whilst not difficult, is not at first sight, obvious (i.e., we needed to consider how two types of copies of H intersect Space barrier constructions occur in many other settings too (e.g., the Kühn-Osthus perfect tiling theorem [20]). However, all previous graph space barrier constructions we are aware of have a different flavour to the above space barrier G. Indeed, previously known examples fail to contain the desired substructure due to some very immediate property that means one vertex class is 'too small' or 'too big'. In Section 11 we compute χ * cr (H) precisely for several classes of ordered graphs. In particular, we give other ordered graphs H that fall into case (i) of Theorem 1.8, namely all complete 3-partite ordered graphs and all complete r-partite ordered graphs whose smallest part is the first or last part (see Propositions 11.3 and 11.5). 3.2. An example for Theorem 1.8(ii). The next example provides a family of ordered graphs that fall into case (ii) of Theorem 1.8. Figure 1). 
So χ < (H) = r. Let B the complete r-partite graph with parts B 1 , . . . , B r where |B i | = k for i ∈ [r − 1] and |B r | = 2. Observe that χ cr (B) = (r − 1) + 2/k. It is straightforward to check that for any permutation σ of [r] and any interval labelling φ of B with respect to σ, the ordered graph (B, φ) contains a spanning copy of H; hence B is a simple bottlegraph of H. It follows that Example 3.2 (An example for Theorem 1.8(ii)). Let r, k ≥ 3 and let H be the ordered graph with vertex set Furthermore, H has a local barrier: for any interval (r + 1)-colouring {1} < V 1 < · · · < V r of H we have that (r − 1)k + 2 ∈ V r and thus there is one edge between {1} and V r . 3. 3. An example for Theorem 1.8(iii). Next we consider a family of ordered graphs which fall into case (iii) of Theorem 1.8. We will explicitly compute χ * cr (H). In particular, we prove that χ * cr (H) < χ < (H) and that H does not have a local barrier. We first construct a bottlegraph of H. Let B denote the complete r-partite graph with parts B 1 , . . . , B r where |B i | = k for i ∈ [r − 1] and |B r | = 1. Observe that χ cr (B) = (r − 1) + 1/k. It is straightforward to check that for any permutation σ of [r] and any interval labelling φ of B with respect to σ, the ordered graph (B, φ) contains a spanning copy of H. Thus, B is a simple bottlegraph of H and so χ * . Then x is isolated in H and so clearly there is no edge between x and V j . If i = 1, there exists an interval (r + 1)-colouring Proof of Theorem 1.8 In this section we present some intermediate results and explain how they imply Theorem 1.8. Crucial to our approach will be the use of the absorbing method, a technique that was introduced systematically by Rödl, Ruciński and Szemerédi [25], but that has roots in earlier work (see, e.g., [19]). [3]). Let H be an ordered graph on h vertices and let η > 0. Then there exists an n 0 ∈ N and 0 < ν ≪ η so that the following holds. Suppose that G is an ordered graph on n ≥ n 0 vertices and Then V (G) = [n] contains a set Abs so that • |Abs| ≤ νn; • Abs is an H-absorbing set for every W ⊆ V (G) \ Abs such that |W | ∈ hN and |W | ≤ ν 3 n. Theorems 1.6 and 4.1 can be combined to yield a minimum degree condition that forces a perfect H-tiling. Indeed, let G and H be ordered graphs and suppose that We first invoke Theorem 4.1 to find a set Abs ⊆ V (G) which is an H-absorbing set for any not too large set W ⊆ V (G) \ Abs. Then we apply Theorem 1.6 to G \ Abs to find an H-tiling M 1 which covers all but a small proportion of vertices in G \ Abs. Let W denote the set of such vertices in G \ Abs. Since W is relatively small, Abs is an H-absorbing set for W , and thus G[W ∪ Abs] contains a perfect H-tiling M 2 . Finally, observe that M 1 ∪ M 2 is a perfect H-tiling in G. Thus we have proven that In particular, this is asymptotically sharp if χ * cr (H) ≥ χ < (H) (by Corollary 2.4) or if χ * cr (H) < χ < (H) and H has a local barrier (by Corollary 2.2), therefore proving cases (i) and (ii) of Theorem 1.8. However, if χ * cr (H) < χ < (H) and H does not have a local barrier then this minimum degree condition can be substantially lowered. To achieve this, we need a new absorbing result: Theorem 4.2 (Absorbing theorem for non-local barriers). Let H be an ordered graph on h vertices with χ < (H) ≥ 3 and let η > 0. If H does not have a local barrier and χ * cr (H) < χ < (H), then there exists an n 0 ∈ N and 0 < ν ≪ η so that the following holds. 
Suppose that G is an ordered graph on n ≥ n 0 vertices and Then V (G) = [n] contains a set Abs so that • |Abs| ≤ νn; • Abs is an H-absorbing set for every W ⊆ V (G) \ Abs such that |W | ∈ hN and |W | ≤ ν 3 n. Note that the statement of Theorem 4.2 is false if one allows χ < (H) = 2; indeed, the conclusion of the theorem fails for so-called divisibility barriers H. 3 However, one can adapt our proof and relax the hypothesis of Theorem 4.2 to χ < (H) ≥ 2 if one additionally assumes H is not a divisibility barrier. We will not do this in this paper, however, as [3, Theorem 1.9] already resolves the perfect H-tiling problem for ordered graphs H with χ < (H) = 2. We postpone the proof of Theorem 4.2 to Section 10. With Theorem 4.2 at hand, we can now give the proof of Theorem 1.8. Proof of Theorem 1.8. First note that the lower bounds in parts (i)-(iii) of the theorem follow immediately from Corollary 2.4 (for (i) and (iii)) and Corollary 2.2 (for (ii)). Thus it remains to prove the upper bounds. Let H be an ordered graph with χ < (H) ≥ 3 and let η > 0. Let n ∈ N be sufficiently large and such that |H| divides n. Let G be an ordered graph on n vertices with minimum degree so that Thus, by Theorem 4.1 (for cases (i) and (ii)) and Theorem 4.2 (for case (iii)), there exists some 0 < ν ≪ η and a set Abs ⊆ V (G) such that • |Abs| ≤ νn; • Abs is an H-absorbing set for every W ⊆ V (G) \ Abs such that |W | ∈ |H|N and |W | ≤ ν 3 n. Let G ′ := G \ Abs. In all cases we have that δ(G ′ ) ≥ (1 − 1/χ * cr (H))|G ′ |. Since n was chosen to be sufficiently large, by Theorem 1.6 there exists an H-tiling M 1 in G ′ covering all but at most ν 3 n vertices. Let W ⊆ V (G ′ ) denote the set of vertices which are not covered by M 1 . Since |H| divides n, |V (M 1 )| and |Abs| we have that |H| divides |W | too. Also, |W | ≤ ν 3 n, hence G ′ [W ∪ Abs] contains a perfect H-tiling M 2 . Finally, observe that M 1 ∪ M 2 is a perfect H-tiling of G, as desired. Bottlegraphs In the following proposition we show that it suffices to consider bottlegraphs where all parts are of the same size except for perhaps one smaller part. Proof. Let B be a bottlegraph of H; so B is a complete k-partite unordered graph with parts B 1 , . . . , B k for some k ∈ N. Without loss of generality, we may assume that It remains to show that B ′ is a bottlegraph of H. Observe that the vertices in B ′ 1 can be partitioned into (k − 1) sets of size |B 1 | while the vertices in B ′ i can be partitioned into (k − 1) sets of sizes |B 2 |, . . . , |B k | respectively, for every i > 1. This implies that B ′ contains a perfect B-tiling consisting of (k − 1) copies of B. Let {C 1 , . . . , C k−1 } be a perfect B-tiling in B ′ (i.e., each C i is a copy of B in B ′ ). Let σ be a permutation of [k] and let φ be an interval labelling of B ′ with respect to σ. For every C i , φ induces an interval labelling φ i of C i with respect to some permutation σ i of [k]. Since B is a bottlegraph, Finally, M 1 ∪ · · · ∪ M k−1 is a perfect H-tiling of the ordered blow-up (B ′ (t), φ). Since σ, φ were arbitrary, B ′ is a bottlegraph of H. Note that the notion of a bottlegraph of H (Definition 1.4) and 1-bottlegraph (Definition 1.14) are not quite the same. However, the next result implies that χ * cr (H) = χ * cr (1, H). Thus, inf X = χ * cr (H) and inf X 1 = χ * cr (1, H). By definition of a bottlegraph and 1-bottlegraph we have that X 1 ⊆ X ; so to prove the proposition it suffices to show that X ⊆ X 1 . 
Given any bottlegraph B of H, let B ′ be the bottlegraph of H obtained by applying Proposition 5.1. So B ′ satisfies conditions (i) and (ii) in the definition of a 1-bottlegraph of H and χ cr (B ′ ) = χ cr (B). As B ′ is a bottlegraph of H, there is some t ∈ N so that B ′ (t) satisfies condition (iii) of the definition of a 1-bottlegraph of H. Then B ′ (t) is a 1-bottlegraph of H with χ cr (B ′ (t)) = χ cr (B ′ ) = χ cr (B). Thus, X ⊆ X 1 , as desired. The regularity lemma In the proof of Theorem 4.2 we will make use of the regularity method. In this section we state a multipartite version of Szemerédi's regularity lemma and some other related tools. First, we introduce some basic notation. The density of an (ordered) bipartite graph with vertex classes A and B is defined to be We now state some well-known properties of ε-regular pairs. The first (see, e.g., [18,Fact 1.5]) implies that one can delete many vertices from an (ε, d)-regular pair and still retain such a regularity property. Lemma 6.1 (Slicing lemma). Let (A, B) G be an ε-regular pair of density d, and for some α > ε, The following theorem is a multipartite version of Szemerédi's regularity lemma [27] (presented, e.g., as Lemma 5.5 in [3]). Theorem 6.2 (Multipartite regularity lemma). Given any integer t ≥ 2, any ε > 0 and any ℓ 0 ∈ N there exists L 0 = L 0 (ε, t, ℓ 0 ) ∈ N such that for every d ∈ (0, 1] and for every nearly balanced t-partite and a spanning subgraph G ′ of G, such that the following conditions hold: We call the W j i clusters, the W 0 i the exceptional sets and the vertices in the W 0 i exceptional vertices. We refer to G ′ as the pure graph. The reduced graph R of G with parameters ε, d and ℓ 0 is the graph whose vertices are the W j i (where i ∈ [t] and j ∈ [ℓ]) and in which The following well-known corollary of the regularity lemma shows that the reduced graph almost inherits the minimum degree of the original graph. Proposition 6.3. Let 0 < ε, d, k < 1 and let G be an n-vertex graph with δ(G) ≥ kn. If R is the reduced graph of G obtained by applying Theorem 6.2 with parameters ε, d, ℓ 0 , then δ(R) ≥ (k − 2ε − d)|R|. A useful tool to embed subgraphs into G using the reduced graph R is the so-called key lemma. Lemma 6.4 (Key lemma [18]). Let 0 < ε < d and q, t ∈ N. Let R be a graph with V (R) = {v 1 , . . . , v k }. We construct a graph G as follows: replace every vertex v i ∈ V (R) with a set V i of q vertices and replace each edge of R with an (ε, d)-regular pair. For each v i ∈ V (R), let U i denote the set of t vertices in R(t) corresponding to v i . Let H be a subgraph of R(t) on h vertices with maximum degree ∆. Set δ := d − ε and ε 0 := δ ∆ /(2 + ∆). If ε ≤ ε 0 and t − 1 ≤ ε 0 q then there are As in [3], some of our applications of Lemma 6.4 will take the following form: suppose within an ordered graph G we have vertex classes V 1 < . . . < V k so that each pair (V i , V j ) G is (ε, d)-regular. Then Lemma 6.4 tells us G contains many copies of any fixed size ordered graph H with χ < (H) = k, where the ith vertex class of each such copy of H is embedded into V i . Absorbing tools In this section we state a couple of results which are useful for proving the existence of Habsorbing sets. The first result is the following crucial lemma of Lo and Markström [23]; we present the ordered version of their result which appeared as Lemma 7.1 in [3]. Lemma 7.1 (Lo and Markström [23]). Let h, s ∈ N and ξ > 0. Suppose that H is an ordered graph on h vertices. 
Then there exists an n 0 ∈ N such that the following holds. Suppose that G is an ordered graph on n ≥ n 0 vertices so that, for any x, y ∈ V (G), there are at least ξn sh−1 Informally, we will sometimes refer to a set X satisfying the assumptions of Lemma 7.1 as a chain of size |X| between vertices x and y. The next lemma states that it is in some sense possible to concatenate chains. Proof. Let z ∈ A and let X, Y be an (s 1 h − 1)-set and an (s 2 h − 1)-set respectively which satisfy the above properties, with X, Y disjoint so that y ∈ X and x ∈ Y . Then X ∪ {z} ∪ Y is an Flexible colouring In this section we prove several auxiliary results which will be particularly important for the proofs of Theorem 4.2 and Theorem 12.1. We start with the following definition. Definition 8.1. Let H be an ordered graph and let r := χ < (H). We say that H is flexible if, for every i ∈ [r − 1], there exists an interval (r + 1)-colouring In the next lemma we show that given a flexible ordered graph H, there exists a complete χ < (H)partite ordered graph F such that F contains a perfect H-tiling and any complete r-partite ordered graph F ′ , whose parts have approximately the same sizes as the corresponding parts in F , contains a perfect H-tiling too. r be an interval (r + 1)-colouring of H as described in Definition 8.1. Let F be the complete r-partite ordered graph with parts F 1 < · · · < F r such that Thus, |F | = 2r(r − 1)h 2 . Let s 1 , . . . , s r ∈ Z such that s 1 + · · · + s r = 0 and |s i | ≤ h for every i ∈ [r]. Let F ′ be the complete r-partite ordered graph with parts F ′ 1 < · · · < F ′ r such that |F ′ k | = |F k | + s k for every k ∈ [r]. Then both F and F ′ contain perfect H-tilings. Proof. First, we explicitly construct a perfect H-tiling in F . For every i ∈ [r − 1], consider rh copies of the interval r-colouring V i For every k ∈ [r] we take the union of all the kth colour classes from the interval r-colourings above to obtain r new classes. It is easy to check that the size of the new kth class is exactly the size of F k , so we have just constructed a perfect H-tiling in F . In particular, as there are 2r(r − 1)h copies of H in this tiling, |F | = 2r(r − 1)h 2 . Note that for every pair of consecutive classes (F k , F k+1 ) we could independently move rh vertices from F k to F k+1 to yield a new complete r-partite ordered graph which still contains a perfect Htiling; similarly, we could move rh vertices from F k+1 to F k (these 2rh vertices fulfill the role of x k in their respective interval r-colourings of H). We are going to use this observation to construct a perfect H-tiling in F ′ . Set t k := s 1 + · · · + s k for k ∈ [r] and t 0 := 0. For every pair of consecutive classes (F k , F k+1 ), if t k ≥ 0 move t k vertices from F k+1 to F k , otherwise move −t k vertices from F k to F k+1 . This is possible since |t k | ≤ |s 1 | + · · · + |s r | ≤ rh by assumption. Note that the size of the new kth class is |F k | + t k − t k−1 = |F k | + s k = |F ′ k |, hence we just constructed a perfect H-tiling in F ′ . Observe that the ordered graphs F and F ′ in Lemma 8.2 have the same number of vertices. In the next corollary we ease this restriction and allow F ′ to have a few more vertices than F . Corollary 8.3. Let H be an ordered graph on h vertices and let r := χ < (H). If H is flexible then the following holds. Let F be the complete r-partite ordered graph with parts F 1 < · · · < F r as in Lemma 8.2 and let t ∈ N. For any s 1 , . . . 
, s r , ℓ ∈ N ∪ {0} such that s 1 + · · · + s r = ℓh ≤ th and any complete r-partite ordered graph F ′ with parts F ′ 1 < · · · < F ′ r of size |F ′ k | = t|F k | + s k for every k ∈ [r], both F (t) and F ′ contain perfect H-tilings. Proof. By Lemma 8.2, F contains a perfect H-tiling and thus clearly the ordered blow-up F (t) contains a perfect H-tiling too. We prove that F ′ contains a perfect H-tiling by induction on ℓ. If ℓ = 0 then F ′ = F (t) and so F ′ contains a perfect H-tiling. Assume ℓ > 0. For every k ∈ [r], let s ′ k ∈ N ∪ {0} such that s ′ k ≤ s k and s ′ 1 + · · · + s ′ r = h. Let Q 1 < · · · < Q r be an interval r-colouring of H. Notice that V (F ′ ) can be partitioned into four sets X, Y, W, Z such that . Altogether this clearly implies F ′ contains a perfect H-tiling. Our goal now is to show that if χ * cr (H) < χ < (H) then H is flexible. This property is crucial for the proof of Theorem 4.2, and is a corollary of the following lemma. Proof. Since H is not flexible, there exist some i ∈ [r − 1] such that there is no interval (r + 1)colouring V 1 < · · · < V i < {x} < V i+1 < · · · < V r of H such that both V 1 < · · · < V i ∪ {x} < V i+1 < · · · < V r and V 1 < · · · < V i < V i+1 ∪ {x} < · · · < V r are interval r-colourings of H. Let U 1 < · · · < U r be an interval r-colouring of H. Let y be the largest vertex in U i and let z be the smallest vertex in U i+1 ; so z = y + 1. Note that y must be adjacent to some vertex in U i+1 , as otherwise the interval (r + 1)-colouring U 1 < · · · < U i \ {y} < {y} < U i+1 < · · · < U r satisfies the flexibility property, a contradiction. Let y ′ be the smallest vertex in U i+1 adjacent to y. Similarly, let z ′ be the largest vertex in U i adjacent to z. Claim 8.5. Given any r-colouring Q 1 < · · · < Q r of H, y ∈ Q i and z ∈ Q i+1 . Fix an arbitrary interval r-colouring Q 1 < · · · < Q r of H; say that z ′ lies in the kth interval Q k . Note that Q 1 < · · · < Q k ∩ [z ′ ] < (U i \ [z ′ ]) < U i+1 < · · · < U r is an interval colouring of H. If (i) k < i − 1 or (ii) k = i − 1 and z ′ = y then χ < (H) < r, a contradiction. If k = i − 1 and z ′ = y then the interval (r + 1)-colouring U r satisfies the flexibility property, a contradiction to our initial assumption in the proof. Thus, k ≥ i, which implies that z is contained in the (i + 1)th interval Q i+1 or above since z and z ′ are adjacent. Similarly, we can show that y is contained in the ith interval Q i or below. This implies that, since y, z are consecutive vertices in H, y lies in Q i and z in Q i+1 . This completes the proof of the claim. By the claim above, and as z = y + 1, we conclude that y is the largest vertex in Q i . Hence, the number of vertices in the first i intervals of any given interval r-colouring of H is exactly y, yielding the required result. Proof. Let r := χ < (H). Suppose for a contradiction that H is not flexible. By Lemma 8.4, there exist some i ∈ [r − 1] such that the number of vertices lying in the first i intervals of any given interval r-colouring of H is, say, y for some fixed y ∈ N. Consider a bottlegraph B of H with χ cr (B) < r (which exists by definition of χ * cr (H) and as χ * cr (H) < r); note that B must consist of precisely r parts. By Proposition 5.1 we may assume that B has parts B 1 , . . . , B r where |B 1 | < m and |B i | = m for some m ∈ N. Let σ be a permutation of [r] and φ be an interval labelling of B with respect to σ. Recall that the ordered graph (B, φ) has parts B σ −1 (1) < · · · < B σ −1 (r) . 
Then, there exists some t ∈ N such that the ordered blow-up (B(t), φ) contains a perfect H-tiling. In particular, this perfect H-tiling consists of t|B|/h copies of H. By assumption, each copy of H has exactly y vertices lying in the first i parts of (B(t), φ) which implies that the first i parts contain exactly yt|B|/h vertices; that is, the total number of vertices in these parts is independent of the choice of σ. This is clearly a contradiction as if we pick σ such that σ −1 (1) = 1 then there are fewer than itm vertices in the first i parts of (B(t), φ), while if σ −1 (r) = 1 then there are precisely itm vertices in the first i parts of (B(t), φ). Note that one does really require χ * cr (H) < χ < (H) in the statement of Corollary 8.6; consider, e.g., when H is a complete balanced r-partite ordered graph with parts of size at least 2. Proof sketch of Theorem 4.2 In this section we briefly present the main ideas behind the proof of Theorem 4.2 before giving a rigorous argument in Section 10.1. Throughout this section we set r := χ < (H) ≥ 3. Let H be an ordered graph such that χ * cr (H) < χ < (H) = r and H has no local barrier. Let G be an n-vertex graph with n sufficiently large and minimum degree Our ultimate goal is to construct many chains between any given pair of vertices x, y ∈ V (G); then Lemma 7.1 will conclude the proof. To achieve this, we first divide [n] into many nearly balanced intervals, remove all edges in G which lie completely in some interval and then apply the multipartite version of the regularity lemma to obtain a reduced graph R. This preconditioning process will make it convenient to work with cliques in the reduced graph: if two clusters W and W ′ are adjacent in R then by construction either W < W ′ or W > W ′ . We then show that given an arbitrary pair of clusters W and W ′ in R, for almost every pair of vertices x ∈ W and y ∈ W ′ we can find many chains between x and y. We achieve this gradually through various steps: • Given a copy T of K r in the reduced graph R and an arbitrary cluster W in T , we prove that for almost every pair of vertices x, y ∈ W we can find many chains between x and y. This is quite straightforward and the only property of H we use is that χ < (H) = r. • Given a copy T of K r in the reduced graph R and two arbitrary clusters W, W ′ in T , we prove that for almost every pair of vertices x ∈ W and y ∈ W ′ we can find many chains between x and y. The "flexibility" guaranteed by the condition χ < (H) < χ * cr (H), formally stated in Corollary 8.6, will be the main ingredient here. • Finally, we show that given any two arbitrary clusters W and W ′ , for almost every pair of vertices x ∈ W and y ∈ W ′ we can find many chains between x and y. Since r ≥ 3, this fairly straightforwardly follows from the minimum degree of R and the previous point. Next, given an arbitrary vertex x ∈ V (G), we use the minimum degree condition to find a particular structure L in G containing x and resembling an extremal construction for graphs which have a local barrier, namely Extremal Example 1; in particular, the vertex classes of Extremal Example 1 are replaced by some of the clusters in R. Assuming H does not have a local barrier, and using Corollary 8.3, we find many chains between x and almost every vertex lying in a certain cluster W in L. Let G be an ordered graph on n ≥ L 0 vertices with minimum degree Let W 1 < · · · < W t be a nearly balanced interval partition of [n]. 
Let G ′ be the ordered graph obtained by deleting all edges lying in each W i from G. As t = ⌈4/η⌉ we have that |W i | ≤ ⌈ηn/4⌉ for all i ∈ [t] and so Apply Theorem 6.2 to G ′ with parameters ε 1 , t, ℓ 0 and d to obtain a pure graph G ′′ and a partition Let R be the corresponding reduced graph of G ′ . By (9), Proposition 6.3 implies that Crucially, observe that if W j i W j ′ i ′ is an edge in R then, by construction of G ′ , i = i ′ and so either First, we prove that given a copy T of K r in R and an arbitrary cluster W in T , for almost every pair of vertices x, y ∈ W there exist many chains between x and y. Claim 10.1. Let T 1 < · · · < T r be r clusters which form a copy of K r in R. Given any i ∈ [r], there exists a set A i ⊆ T i of size |A i | ≥ (1 − ε 1 r)|T i | such that the following holds: for every x ∈ A i there exists a set C x ⊆ T i of size |C x | ≥ (1 − 2ε 2 r)|T i | such that for every y ∈ C x there are at least that both G[X ∪ {x}] and G[X ∪ {y}] contain a copy of H. Proof. Fix i ∈ [r]. For every k = i, let Let A i := T i \ (L 1 ∪ · · · ∪ L r ). It follows from the previous inequality that By the same argument as before, By the slicing lemma, the pair (T ′′ a , T ′′ b ) G ′′ is (ε 3 , d/3)-regular for every a = b ∈ [r]. Furthermore, by construction, x and y are adjacent to all vertices in T ′′ k for every k = i. Let V 1 < · · · < V r be an interval r-colouring of H. Pick any v ∈ V i and let H ′ be the complete r-partite ordered graph with parts V 1 < · · · < V i \ {v} < · · · < V r . Using (7), (8) and (10) We now prove that given a copy T of K r in the reduced graph R and two arbitrary clusters W and W ′ in T , for almost every pair of vertices x ∈ W and y ∈ W ′ there exist many chains between x and y. Recall s = 2r(r − 1)h + 1. Claim 10.2. Let T 1 < · · · < T r be r clusters which form a copy of K r in R. For every i, j ∈ [r], there exist sets A i ⊆ T i and A j ⊆ T j of size |A i | ≥ (1 − ε 1 r)|T i | and |A j | ≥ (1 − ε 1 r)|T j | such that for every x ∈ A i and y ∈ A j there are at least ξ 2 n sh−1 (sh − 1)-sets X ⊆ V (G) for which both G[X ∪ {x}] and G[X ∪ {y}] contain perfect H-tilings. Proof. Fix i, j ∈ [r]. As in the proof of Claim 10.1, for every k = i, we define and set A i := T i \ (L 1 ∪ · · · ∪ L r ). In the proof of Claim 10.1 we saw that Let x ∈ A i . As in the proof of Claim 10.1, we define T ′ k := T k ∩ N G ′′ (x) for every k = i and k is not quite the same as the corresponding set given in the proof of Claim 10.1 (it is defined it terms of j not i); however, as in the proof of Claim 10.1 we have that The previous inequality implies that Let z ∈ C. Define T ′′ k := T ′ k ∩ N G ′′ (z) for every k = j and T ′′ j := T ′ j \ {z}. By the slicing lemma and (7), the pair (T ′′ . Furthermore, by construction, x is adjacent to all vertices in T ′′ k for every k = i while z is adjacent to all vertices in T ′′ k for every k = j. Since χ * cr (H) < χ < (H), by Corollary 8.6 H is flexible. Let F be the complete r-partite ordered graph with parts F 1 < · · · < F r as defined in Lemma 8.2. Recall that |F | = 2r(r − 1)h 2 = (s − 1)h. Pick any v ∈ F i and let F * be the ordered graph obtained by removing the vertex v from F . In particular, F * is a complete r-partite ordered graph with parts F 1 < · · · < F i \ {v} < · · · < F r . 
By (7), (8) and (10), the key lemma (Lemma 6.4) implies that there exist at least In summary, given any x ∈ A i and y ∈ A j , we have shown that for every z ∈ C ∩ C y there exist at least ξ 1 n |F |−1 (|F | − 1)-sets Y such that both G[Y ∪ {x}] and G[Y ∪ {z}] contain perfect H-tilings and similarly there exist at least Applying Lemma 7.2 with C ∩ C y , ε 1 η/(5L 0 ), ξ 1 , ξ 2 playing the roles of A, α, β, γ, we conclude that there exist at least ξ 2 n |F |+h−1 = ξ 2 n sh−1 (sh − 1)-sets X ⊆ V (G) such that both G[X ∪ {x}] and G[X ∪ {y}] contain perfect H-tilings, as desired. In the next claim we show that given any arbitrary pair of clusters W and W ′ in the reduced graph R, for almost every pair of vertices x ∈ W and y ∈ W ′ there exist many chains between x and y. The assumption r ≥ 3 is crucial here. Proof. Recall the reduced graph R has minimum degree Using the above minimum degree condition, given any two adjacent clusters in R, one can greedily construct a copy of K r in R containing them both. Since r ≥ 3, δ(R) > |R|/2 and so the clusters W and W ′ have a common neighbour U in R. Let K and K ′ be two copies of K r in R containing W, U and W ′ , U respectively. Apply Claim 10.2 with K, W , U playing the roles of K r , T i and T j to obtain sets A ⊆ W and D ⊆ U with |A| ≥ (1 − ε 1 r)|W | and |D| ≥ (1 − ε 1 r)|U |. Similarly, apply Claim 10.2 with K ′ , W ′ , U playing the roles of K r , T i and T j to obtain sets By Claim 10.2, for any x ∈ A, y ∈ A ′ and z ∈ D ∩ D ′ , there exist at least Applying Lemma 7.2 with D ∩ D ′ , η/(6L 0 ), ξ 2 , ξ 3 playing the roles of A, α, β, γ, we conclude that there exist at least contain perfect H-tilings, as desired. In the next claim we use the minimum degree condition of the reduced graph R to find a structure L containing an arbitrary vertex x and resembling the extremal construction for an ordered graph which has a local barrier. Furthermore, we prove that there exist many chains between x and almost every vertex in some cluster in L. Claim 10.4. Let x ∈ V (G). There exists some cluster W ∈ V (R) and a set A ⊆ W of size |A| ≥ (1 − ε 2 r)|W | satisfying the following: for every y ∈ A there exist at least ξ 1 n sh−1 (sh − 1)-sets X ⊆ V (G) such that both G[X ∪ {x}] and G[X ∪ {y}] contain perfect H-tilings. Proof. Let x ∈ V (G) and define . Recall that every non-exceptional cluster W has size m. Hence, if W ∈ N * (x) then there are at most m neighbours of x in W , while if W ∈ N * (x) then there are at most ηm/100 neighbours of x in W . Finally, there are at most ε 1 n vertices lying in exceptional clusters. Therefore, and thus Using (11) and (14), we can greedily find r clusters T 1 , . . . , T r in R such that • the T k 's span a copy of K r in R; . By the properties above, we may relabel indices so that there is an i ∈ [r + 1] and j ∈ [r] for which T 1 < · · · < T i−1 < x < T i < · · · < T r and T k ∈ N * (x) for every k = j. Define T ′ k := T k ∩ N G ′ (x) for k = j and T ′ j := T j . By construction, |T ′ k | ≥ ηm/100 for every k ∈ [r]. Thus, by the slicing lemma (Lemma 6.1), (T ′ a , T ′ b ) G ′′ is (ε 2 , d/2)-regular for every a = b ∈ [r]. We will show that the cluster W := T j is as desired for the claim. For every k = j, let Furthermore, x and y are adjacent to all vertices in T ′′ k for every k = j (see Figure 3). Figure 3. In this picture, we take r = 3, T 1 < T 2 < x < T 3 and j = 1. 
Note that (T 1 \ {y}, T ′′ 2 ), (T 1 \ {y}, T ′′ 3 ) and (T ′′ 2 , T ′ 3 ) are regular pairs, while x and y are adjacent to all vertices in T ′′ 2 and T ′′ 3 . Since H has no local barrier, there exists an interval (r + 1)-colouring of H such that there is no edge between v and V j . Furthermore, as χ * cr (H) < χ < (H), H is flexible by Corollary 8.6. Let F be the complete r-partite ordered graph with parts F 1 < · · · < F r as defined in Lemma 8.2 and let F ′ be the complete r-partite ordered graph with parts Note that 1 + r k=1 |V k | = 1 + (h − 1) = h thus, both F and F ′ contain perfect H-tilings by Corollary 8.3. Pick any u ∈ F ′ j . Note that F ′ \ {u} is the complete r-partite ordered graph such that the kth part has size |F k | + |V k | for every k ∈ [r] and |F ′ \ {u}| = |F | + h − 1 = sh − 1. By the key lemma (Lemma 6.4), there exist at least ξ 1 n sh−1 (sh − 1)-sets X ⊆ V (G) such that G[X] spans a copy of Next, we consider G[X ∪ {y}]. Since y ∈ T j and y is adjacent to all vertices in T ′′ k for k = j, then G[X ∪ {y}] spans a copy of F ′ and so G[X ∪ {y}] contains a perfect H-tiling. Remark 10.5. Note that Claim 10.4 holds even if we relax the hypothesis of Theorem 4.2 to allow χ < (H) ≥ 2; that is, we did not use the condition that r ≥ 3 anywhere in the proof of this claim. This fact will be useful in the proof of Theorem 10.7 in the next subsection. Our final claim states that given arbitrary vertices x, y ∈ V (G) there exist many chains of bounded size between x and y. 10.2. Proof of Theorem 1.10. By arguing as in Claim 10.4, we obtain the following result. Theorem 10.7. Let H be an ordered graph that does not have a local barrier and let η > 0. There exists an n 0 ∈ N so that the following holds. If G is an ordered graph on n ≥ n 0 vertices with then for any vertex x ∈ V (G) there exists a copy of H in G covering the vertex x. As in the proof of Theorem 4.2, apply the regularity lemma, and then argue as in the proof of Claim 10.4 to obtain the following: given any x is adjacent to all vertices in T ′ k for any k = j. As H does not have a local barrier, there exists an interval (r + 1)-colouring of H such that there is no edge between v and V j . By the key lemma, there exists some set X ⊆ V (G) such that G[X] spans a copy of H \ {v} with V k ⊆ T ′ k for every k ∈ [r]. Since x is adjacent to all vertices in T ′ k for every k = j, G[X ∪ {x}] spans a copy of H in G, as desired. We are now ready to prove Theorem 1.10 using Theorem 4.1 and Theorem 10.7. Proof of Theorem 1.10. Note that the lower bounds stated in Theorem 1.10 follow immediately from Corollary 2.2 and Lemma 2.5. It remains to prove the upper bounds. Let H be an ordered graph and η > 0. Let G be an ordered graph on n vertices with n sufficiently large and minimum degree |H i | . We will now prove that B is a simple bottlegraph of H. Let σ be a permutation of [k + 1] and let φ be an interval labelling of B with respect to σ. Note that V ((B, φ)) = [h · (h!)]. Set t 0 := 0 and t i := (|H 1 | + · · · + |H i |) · (h!) for every i ∈ [r]. For every j ∈ [h!] and i ∈ [r], let again reaching a contradiction. It follows that the ordered graph T j spanned by T 1 j < · · · < T r j is a complete r-partite ordered subgraph of (B, φ). Since |T i j | = |H i | then T j spans a copy of H in (B, φ) for every j ∈ [h!]. The T j 's are disjoint and cover all the vertices of (B, φ), thus they yield a perfect H-tiling. Since σ, φ are arbitrary, B is a simple bottlegraph of H. 
In particular, The next result follows easily from the previous bounds. In [3], Balogh, Li and the second author (implicitly) computed χ * cr (H) for any H such that χ < (H) = 2. Their result can be easily recovered using Propositions 11.1 and 11.2. Proof. We may assume that h 1 ≤ h 3 ; the case h 1 ≥ h 3 follows by the symmetry of the argument. If h 1 ≤ h 2 then χ * cr (H) = h/h 1 by Proposition 11.3 and the result follows. So for the rest of the proof we assume h 2 < h 1 ≤ h 3 . Under these assumptions, Claim 11.6. χ * cr (H) ≥ g(H). Let B be a bottlegraph of H with parts B 1 , . . . , B k for some k ∈ N. Note that k ≥ 3 since H is 3-partite. Our aim is to show that χ cr (B) ≥ g(H). By Proposition 5.1, we may assume that there exists some m ∈ N such that |B k | ≤ m and |B i | = m . Pick an interval labelling φ of B such that the ordered graph (B, φ) has parts B 1 < B 2 < · · · < B k . By definition there exists some t ∈ N such that the ordered blow-up (B(t), φ) contains a perfect H-tiling M. Recall that we denote the parts of (B(t), φ) by B 1 (t) < · · · < B k (t). Let M 1 be the set of copies of H ∈ M whose first part H 1 lies completely in B 1 (t) and let M 2 := M \ M 1 . Note that every copy of H ∈ M 1 has exactly h 1 vertices in B 1 (t) and at most h 1 +h 2 vertices in B 1 (t)∪B 2 (t), while every copy of H ∈ M 2 has at most h 1 vertices in B 1 (t)∪B 2 (t). Observe that V (B) can be partitioned into two sets X, Y of size |X| = h 1 (2h 1 + h 2 ) and |Y | = , and all remaining vertices of X lie in B σ −1 (4) . In particular, The only remaining case is when σ −1 (3), σ −1 (4) = 1. However, since h 1 = h 3 , this case will follow by a symmetric argument to that of the previous case. Thus B is a simple bottlegraph of H. Let σ be a permutation of [k] and φ an interval labelling of B with respect to σ. Recall that the ordered graph (B, φ) has parts B σ −1 (1) < · · · < B σ −1 (k) . Let X ⊆ V (B) be a set of size 4h 2 1 − h 2 2 such that . Thus we can partition V (B)\X into 2h 1 −h 2 sets of size h 3 −h 1 and assign each set to a copy of H ′ . Notice that every copy of H ′ together with its assigned set forms a copy of H; so we constructed a perfect H-tiling in (B, φ). Since σ and φ are arbitrary, B is a simple bottlegraph of H. Recall that h 2 < h 1 ≤ h 3 . We may assume that g(H) is not an integer as otherwise we are done by Claim 11.8. Given t ∈ N and ℓ ∈ N ∪ {0}, let H(t, ℓ) be the complete 3-partite ordered graph with parts H ′ 1 < H ′ 2 < H ′ 3 of size th 1 , th 2 , th 3 − ℓ respectively. Set k := ⌊g(H)⌋. We define t, ℓ, s and B as follows: since the right hand side of the above inequality is a positive rational number. Thus, Furthermore, we have Equations (15) and (16) imply that th 3 − ℓ > th 1 . Hence th 2 < th 1 < th 3 − ℓ and so g(H(t, ℓ)) = 2 − th 2 th 1 Let B be the simple bottlegraph of H(t, ℓ) as in Claim 11.8 and set s := 0. • If 3 < g(H) < 4, let t := 1 and ℓ := h 3 − h 1 . Observe that the parts H ′ 1 < H ′ 2 < H ′ 3 of H(t, ℓ) have size h 1 , h 2 , h 1 respectively. Let B be the simple bottlegraph of H(t, ℓ) as in Claim 11.7. Set s := h 2 1 − h 2 2 . Note that in both cases, B is a complete (k + 1)-partite graph where each part has size (th 1 ) 2 except one smaller part of size t 2 s (which is empty if g(H) ≥ 4). Let σ be a permutation of [k + 1] and let φ be an interval labelling of B ′ with respect to σ. Recall that the ordered graph (B ′ , φ) has parts B ′ σ −1 (1) < · · · < B ′ σ −1 (k+1) . 
Let X, Y be two disjoint sets in V (B ′ ) of size a|B| and b|B| respectively such that Notice that by (18) and the choice of the sizes of the parts of B ′ , all vertices in B ′ σ −1 (i) are in X ∪ Y for i ≤ k; as s ≤ h 2 1 it may be that some vertices in B ′ σ −1 (k+1) are not in X ∪ Y . Note that (B ′ , φ)[X] is a copy of (B(a), φ ′ ) for some interval labelling φ ′ . So as B is a simple bottlegraph of H(t, ℓ), (B(a), φ ′ ) contains a perfect H(t, ℓ)-tiling M 1 consisting of copies of H(t, ℓ). Similarly, (B ′ , φ)[Y ] is a copy of (B(b), φ ′′ ) for some interval labelling φ ′′ and it contains a perfect H(t, ℓ)-tiling M 2 consisting of b|B| copies of H(t, ℓ). Note that the vertices in t sets of size ℓ and assign each set to a copy of H(t, ℓ) in M 1 ∪ M 2 . Note that each copy of H(t, ℓ) together with its assigned set forms a copy of H(t, 0). Thus we constructed a perfect H(t, 0)-tiling in (B ′ , φ). Since H(t, 0) = H(t), this yields a perfect H-tiling in (B ′ , φ), proving that B ′ is indeed a simple bottlegraph of H. Claim 11.9 implies that χ * cr (H) ≤ g(H). Together with Claim 11.6, this concludes the proof. Observe that Propositions 11.3 and 11.5 combined with Theorem 1.8 yield the following result which makes the threshold in Theorem 1.8 explicit for all complete 3-partite ordered graphs. Theorem 11.10. Let H be a complete r-partite ordered graph on h vertices with parts H 1 < · · · < H r where |H i | = h i for every i ∈ [r]. 12. The tightness of Theorem 1.6 In this section we show that the minimum degree condition in Theorem 1.6 is best possible. Given an ordered graph H, let c < (H) denote the smallest non-negative number which satisfies the following: for every η > 0, there exists an integer n 0 ∈ N such that if G is an ordered graph on n ≥ n 0 vertices and with minimum degree δ(G) ≥ c < (H)n then G contains an H-tiling covering all but at most ηn vertices. Observe Theorem 1.6 immediately implies that In fact, we will show that equality holds. Proof of Theorem 1. 16. Let H be an ordered graph on h vertices and let x ∈ (0, 1). Throughout the proof we set r := χ < (H). If r = 1 the statement of the theorem holds trivially, so we may assume that r ≥ 2. Observe that it suffices to prove that the theorem holds for any η > 0 sufficiently small. Fix constants 0 < η ≪ 1/χ * cr (x, H) and Let k := χ(B) ≥ r and let B 1 , . . . , B k be the parts of B. Fix t ∈ N such that k/t < ε. Let n ∈ N be sufficiently large and let G be an ordered graph on n vertices with minimum degree Recall by (1), χ * cr (x, H) ≥ r − 1; so δ(G) ≥ (1 − 1/(r − 1) + η)n. By the Erdős-Stone-Simonovits theorem for ordered graphs [24] there exists a copy of H in G. We remove the vertices of H from G and repeat the same process until we obtain a set W ⊆ V (G) such that G[W ] contains a perfect H-tiling W and |W | = ηn 2h h. Let G ′ := G \ W . Observe that Since χ cr (B(t)) = χ cr (B) (and ignoring the ordering of V (G ′ )), Theorem 1.13 implies there exists a B(t)-tiling B in G ′ which covers all but at most ε|G ′ | vertices. Consider a fixed copy of B(t) ∈ B whose parts are A 1 , . . . , A k with |A i | = t|B i | for every i ∈ [k]. By Lemma 13.1, there exist sets S i ⊆ A i for every i ∈ [k] such that |S i | ≥ ⌊|A i |/k⌋ ≥ |B i | and a permutation σ of [k] such that S i < S j if σ(i) < σ(j). By discarding some vertices if necessary, we may assume that |S i | = |B i | for every i ∈ [k]. Note that the sets S 1 , . . . , S k span a copy of B and the ordering of V (G ′ ) induces an interval labelling of B with respect to σ. 
Crucially, |A i \ S i | = (t − 1)|B i |, so we can repeatedly apply Lemma 13.1 to find a B-tiling in G ′ covering all but at most k|B| = (k/t)|B(t)| vertices in our fixed B(t) ∈ B. Furthermore, by Lemma 13.1, the ordering of V (G ′ ) induces an interval labelling on each of these copies of B. Since B is an x-bottlegraph, each of these copies contains an (x, H)-tiling. Repeat this process for all B(t) ∈ B and denote the union of all these (x, H)-tilings as M. The number of vertices in G covered by M ∪ W is at least ≥ xn. Hence M ∪ W is an (x, H)-tiling in G, as desired. Next we prove that f (x, H) ≥ x(h − α(H))/h. Without loss of generality, we may assume that α(H) = α + (H). Let G be the ordered graph obtained by taking the complete 2-partite ordered graph with classes U < V where Finally, we have As N can be chosen arbitrarily large and η arbitrarily small, it follows that f (x, H) ≥ x(h−α(H))/h. Proof. Let H be an ordered graph on h vertices with r := χ < (H) and x ∈ (0, x 0 ] where x < 1. Fix constants 0 < 1/N ≪ η < 1 where N ∈ N. We first show that f (x, H) ≤ 1 − (h − xT )/(h(r − 1)). Let B be the complete r-partite unordered graph with parts B 1 , . . . , B r where for every i = 1. We now prove that B is an x-bottlegraph of H. Note that for every i = 1, Let σ be a permutation of [r] and φ be an interval labelling of B with respect to σ. Recall that the ordered graph (B, φ) has parts B σ −1 (1) < · · · < B σ −1 (r) . Let H 1 < · · · < H r be an interval r-colouring of H which minimises |H σ(1) |. By the definition of T we have that Furthermore, by the definition of J we have that for every i = 1, Hence, the ordered graph (B, φ) contains an H-tiling H consisting of N disjoint copies of H. These copies cover exactly N h vertices of (B, φ). In particular therefore, H is an (x, H)-tiling in (B, φ). Since σ, φ are arbitrary, B is indeed an x-bottlegraph of H and thus χ * cr (x, H) ≤ χ cr (B). Note that As N is arbitrarily large, the above implies f (x, H) ≤ 1 − (h − xT )/(h(r − 1)). Next we show that f (x, H) ≥ 1 − (h − xT )/(h(r − 1)). Suppose the value of T is achieved for some interval r-colouring H * 1 < · · · < H * r of H and i * ∈ [r]; that is, T = |H * i * |. Let G be the complete r-partite ordered graph with parts G 1 < · · · < G r of where As N can be chosen arbitrarily large and η arbitrarily small, it follows that f (x, H) ≥ 1 − (h − xT )/(h(r − 1)). Thus, H is an (x, H)-tiling in H ′ . Since H ′ contains an (x, H)-tiling, every 1-bottlegraph of H ′ is an x-bottlegraph of H, therefore χ * cr (x, H) ≤ χ * cr (H ′ ). Since |H ′ 1 | ≤ · · · ≤ |H ′ r |, Proposition 11.3 implies χ * cr (H ′ ) = |H ′ |/|H ′ 1 |. Therefore, Since N can be chosen arbitrarily large, the above implies Next we prove that f (x, H) is at least the claimed value. Let G be the ordered graph obtained by taking the complete (t + 1)-partite ordered graph with parts G 1 < · · · < G t+1 where for every i ≤ t, and adding all missing edges to G t+1 . Suppose G contains an (x, H)-tiling H. Observe that Hence, H covers at least N h vertices in G. Let H ′ ⊆ H be an H-tiling consisting of exactly N copies of H. Each copy of H has at most t i=1 ℓ i vertices in G 1 ∪ · · · ∪ G t . Also, and so there are fewer than 1−x x N h + t vertices in G which are not covered by H ′ . This implies which is a contradiction; hence G does not contain an (x, H)-tiling. 
Therefore, Theorem 1.16 implies that δ(G) < (f (x, H) + η)|G| and so Concluding remarks and open problems Theorem 1.8 together with [3, Theorem 1.9] asymptotically determine the minimum degree threshold for forcing a perfect H-tiling in an ordered graph, for any fixed ordered graph H. Depending on the structure of H, this threshold depends on one of three factors: (C1) the existence of an almost perfect H-tiling; (C2) the avoidance of divisibility barriers; (C3) the existence of an H-cover. Analogous factors govern the threshold for other perfect H-tiling problems in a range of settings too. Therefore, it would be extremely interesting to find a natural 'local' density condition (e.g., minimum degree, Ore-type, degree sequence) for an (ordered) graph, directed graph or hypergraph for which the corresponding perfect H-tiling threshold depends on another factor. We suspect no such problem exists. An alternative way to think about this question is as follows: are there barriers, other than local and divisibility barriers, that prevent absorbing for a perfect H-tiling problem? Other than this general 'meta problem', it would be interesting to establish the Ore-type degree threshold that forces a perfect H-tiling in an ordered graph G (and compare this threshold to the corresponding Ore-type degree threshold for unordered graphs [22]). In light of Theorem 1.6 it is natural to raise the following ordered graph analogue of the theorem of Shokoufandeh and Zhao [26]. Finally, whilst we have obtained some understanding of the function f (x, H), it would be interesting to obtain a more complete understanding of how this function can behave in general. In particular, is it true that for any fixed ordered graph H, f (x, H) is piecewise linear?
2021-12-07T02:16:22.413Z
2021-12-06T00:00:00.000
{ "year": 2021, "sha1": "8d44f9f53510609e29e1be94fd5dfc2944739e58", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "61a17a5b190f51c40c5191fe3d67af62d2cc505c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
32629918
pes2o/s2orc
v3-fos-license
Cellular prion protein (PrPC) protects neuronal cells from the effect of huntingtin aggregation. The effect of normal cellular prion protein (PrPC) on abnormal protein aggregation was examined by transfecting huntingtin fragments (Htt) into SN56 neuronal-derived cells depleted of PrPC by RNA interference. PrPC depletion caused an increase in both the number of cells containing granules and the number of apoptotic cells. Consistent with the increase in Htt aggregation, PrPC depletion caused an decrease in proteasome activity and a decrease in the activities of cellular defense enzymes compared with control cells whereas reactive oxygen species (ROS) increased more than threefold. Therefore, PrPC may protect against Htt toxicity in neuronal cells by increasing cellular defense proteins, decreasing ROS and increasing proteasome activity thereby increasing Htt degradation. Depletion of endogenous PrPC in non-neuronal Caco-2 and HT-29 cells did not affect ROS levels or proteasome activity suggesting that only in neuronal cells does PrPC confer protection against Htt toxicity. The protective effect of PrPC was further evident in that overexpression of mouse PrPC in SN56 cells transfected with Htt caused a decrease in both the number of cells with Htt granules and the number of apoptotic cells, whereas there was no effect of PrPC expression in non-neuronal NIH3T3 or CHO cells. Finally, in chronically scrapie (PrPSc)-infected cells, ROS increased more than twofold while proteasome activity was decreased compared to control cells. Although this could be a direct effect of PrPSc, it is also possible that, since PrPC specifically prevents pathological protein aggregation in neuronal cells, partial loss of PrPC itself increases PrPSc aggregation. Introduction The normal cellular prion protein (PrP C ) is a glycosylphosphatidylinositol-anchored glycoprotein that is predominantly expressed in the brain (Prusiner, 1998;Weissmann and Flechsig, 2003). In prion diseases, the protease-resistant misfolded scrapie isoform of prion protein (PrP Sc ) is the causative agent of transmissible spongiform encephalopathies, which are neurodegenerative disorders that include scrapie in sheep and goats, bovine spongiform encephalopathies, chronic wasting disease in deer and elk and Creutzfeldt-Jakob disease in humans (Prusiner, 1998). In all of these disorders, exposure of nerve cells to PrP Sc converts PrP C to aggregated deposits of PrP Sc . There have been numerous models proposed for the neuronal cell loss and spongiform changes in the brain that occur in scrapie, but it is still not clear whether this pathology is due to a loss of functional PrP C or only to a gain of function by PrP Sc . Clinical symptoms can occur without any obvious scrapie deposits (Collinge et al., 1990;Medori et al., 1992), which has led to the suggestion that the loss of normal PrP C function, not formation of PrP Sc deposits, causes prion disease (Aguzzi and Weissmann, 1997). Unfortunately, the normal function of PrP C is unknown, although its conservation in many different species suggests that it plays a prominent role in a basic physiological process. It has been reported that PrP C functions in cell survival, signal transduction, cell adhesion, copper-dependent antioxidant activity, and copper uptake and sequestration . 
Although PrP C knockout mice are healthy, the brains of these mice were found to have reduced levels of cell defense enzymes activity, such as catalase, and increased levels of oxidative stress markers (Klamt et al., 2001;Brown and Besinger, 1998;Brown et al., 1997b;Sakudo et al., 2005;Wong et al., 2001;Wong et al., 2000;Wong et al., 1999). Similarly, tissue cultures of nerve cells derived from the PrP C knockout mouse are less viable and more susceptible to oxidative damage and toxicity caused by agents such as copper and hydrogen peroxide than cells expressing wild-type PrP C (Brown et al., 1997a;Kuwahara et al., 1999). PrP C was hypothesized to act as an antioxidant (Brown and Besinger, 1998;Wong et al., 1999), but recent studies have established that PrP C has no superoxide dismutase activity either in vivo or in vitro (Hutter et al., 2003;Jones et al., 2005). Since numerous studies suggest that, under stress conditions, PrP C has a neuroprotective effect, this raises the question as to whether the neurodegenerative defects observed in scrapieinfected mice are aggravated by the loss of PrP C , as well as the build up of PrP Sc amyloid plaque. In fact, neurons from both PrP C knockout mice and scrapie-infected animals show similar changes in neurophysiological function (Colling et al., 1996;Collinge et al., 1994;Jefferys et al., 1994;Johnston et al., 1997;Manson et al., 1995) and biochemical properties (Keshet et al., 1999;Ovadia et al., 1996). Furthermore, altered neuronal excitability can predispose individuals to neuronal damage and death (Leist and Nicotera, 1998) so it is possible that loss of The effect of normal cellular prion protein (PrP C ) on abnormal protein aggregation was examined by transfecting huntingtin fragments (Htt) into SN56 neuronal-derived cells depleted of PrP C by RNA interference. PrP C depletion caused an increase in both the number of cells containing granules and the number of apoptotic cells. Consistent with the increase in Htt aggregation, PrP C depletion caused an decrease in proteasome activity and a decrease in the activities of cellular defense enzymes compared with control cells whereas reactive oxygen species (ROS) increased more than threefold. Therefore, PrP C may protect against Htt toxicity in neuronal cells by increasing cellular defense proteins, decreasing ROS and increasing proteasome activity thereby increasing Htt degradation. Depletion of endogenous PrP C in non-neuronal Caco-2 and HT-29 cells did not affect ROS levels or proteasome activity suggesting that only in neuronal cells does PrP C confer protection against Htt toxicity. The protective effect of PrP C was further evident in that overexpression of mouse PrP C in SN56 cells transfected with Htt caused a decrease in both the number of cells with Htt granules and the number of apoptotic cells, whereas there was no effect of PrP C expression in non-neuronal NIH3T3 or CHO cells. Finally, in chronically scrapie (PrP Sc )-infected cells, ROS increased more than twofold while proteasome activity was decreased compared to control cells. Although this could be a direct effect of PrP Sc , it is also possible that, since PrP C specifically prevents pathological protein aggregation in neuronal cells, partial loss of PrP C itself increases PrP Sc aggregation. PrP C function contributes to scrapie pathogenesis in this way. 
However, contrary to the idea that neuropathology is caused by loss of PrP C function, Collinge and coworkers found that there was no effect on neuronal survival when PrP C was knocked out from a 10-week-old mouse (Mallucci et al., 2002). Moreover, by disrupting the prion gene in a scrapie-infected mouse, they reversed the spongiosis, cognitive defects and neurological disfunction caused by scrapie (Mallucci et al., 2003;Mallucci et al., 2007). In the present study, we have further examined whether knocking out PrP C contributes to a loss of function under stress conditions by examining the effect of PrP C depletion on protein aggregation. We used RNA interference (RNAi) to deplete endogenous PrP C from neuronal-derived tissue culture cell lines that were also transfected with HttQ103. Our results show that there is an increase in HttQ103 aggregation in PrP Cdepleted cells. In addition, we found that PrP C may protect against Htt-induced toxicity possibly by increasing cellular defense enzymes, decreasing reactive oxygen species (ROS) and thereby increasing proteasome activity. Interestingly these effects of PrP C on ROS and proteasome activity are specific for nerve cells and do not occur in non-nerve cells even if these cells normally express PrP C. Results Two different oligonucleotide sequences were used to knock down PrP C in mouse neuronal cells. Sequence 1 is within the coding region and sequence 2 is in the 3Ј UTR. Fig. 1 shows western blots against PrP C of the cell lysates and lysates that were immunoprecipitated with the anti-prion before and after depleting PrP C from SN56 cells. The immunoprecipitated lysate, which has a much higher concentration of PrP C protein, shows multiple bands on the western blot as a result of the different glycosylated forms of PrP C , which are not visible at lower concentrations. From the quantification of the western blots, both sequences reduced PrP C by more than 90% following 2 days of transfection of SN56 cells with oligonucleotides. Similar levels of PrP C depletion were measured 3 days following transfection with the siRNA oligonucleotides (data not shown). As expected, PrP C levels were not affected by transfection of a scrambled sequence of oligonucleotide 1. Throughout this study, oligonucleotide sequences 1 and 2 produced the same phenotype. However, cells depleted of PrP C with sequence 2 could be rescued by expressing PrP C because unlike sequence 1, this sequence is in the UTR region of the message. Since PrP C has been reported to be neuroprotective (Kuwahara et al., 1999;Roucou et al., 2003), we investigated whether PrP C confers protection against Htt aggregation. Both control and PrP C -depleted SN56 cells were transfected with GFP-Htt constructs. Routinely, the day after transfecting with oligonucleotides, the cells were transfected with the Htt constructs. The phenotype of the cells was analyzed 2 days later or 72 hours after transfection of the siRNA. We used both HttQ25, which normally does not form granules, and HttQ103, which forms granules and is toxic to the cell. As expected, there was no aggregation of HttQ25 either in the presence or absence of PrP C ( Fig. 2A). However, compared with cells only transfected with HttQ103 or scramble vector, cells depleted of PrP C with sequence 2 caused a marked increase in the number of cells with granules of HttQ103 ( Fig. 2A). This effect could be partially reversed by expressing mouse PrP C . Quantification of the granules in the SN56 cells ( Fig. 
2A, open bars) shows that 48 hours after transfection with HttQ103, 60% of the PrP C -depleted cells had HttQ103 granules whereas only 25% of the control cells had granules. This increase in the number of cells with granules was observed using both oligonucleotide 1 and 2. To insure that the observed phenotype was due to depletion of PrP C , cells depleted of PrP C with oligonucleotide 2 were partially rescued by transfecting with a plasmid expressing mouse PrP C . As expected, we could not rescue the phenotype generated with oligonucleotide 1 because it is in the prion coding region and therefore it inhibits expression of the plasmid PrP C along with the endogenous protein (data not shown). Remarkably, the extent of HttQ103 aggregation in the PrP C -depleted cells was similar to that obtained when HttQ103-transfected SN56 cells were treated with the proteasome inhibitor, lactacystin (Fig. 2B, lane 6). Essentially, the same results were obtained with N2a cells (gray bars). Therefore, PrP C depletion caused increase aggregation of HttQ103 in the neuronal cell lines, SN56 and N2a. To ensure that the difference in the level of aggregation was not due to differences in expression levels of HttQ103, western The different intensities of the PrP C bands were normalized to the mock-depleted value which was set at 100%. Quantification of the western blot showed that it was linear from 10 to 100 g of cell lysate. (C) Immunoprecipitation of PrP C from 500 g cell lysates obtained from control cells and cells transfected with the scramble sequence, sequence 1, or sequence 2. The control cells were treated with Lipofectamine, the same as the PrP C -depleted cells. blot analysis was performed on the cell lysates of the transfected cells. Similar levels of expression of HttQ103 were obtained in lysates from control cells, PrP C -depleted cells, and the PrP C -rescued cells (Fig. 2D). Moreover, PrP C depletion did not significantly affect the expression of Hsp70, although HttQ103 expression did cause a significant increase in Hsp70 levels under all conditions (Fig. 2E). Given that PrP C depletion increased Htt aggregation, we examined whether depletion affected cell viability. Consistent with the results from the mouse PrP C knockout studies, PrP C depletion had no effect on either cell viability or apoptosis as measured by caspase-3 activity (Fig. 3). There was also no effect of HttQ25 expression on viability and caspase-3 activity in PrP C -depleted SN56 cells. As expected, transfection with HttQ103 alone caused a marked decrease in viability and an increase in caspase-3 activity. Interestingly, in HttQ103expressing cells, PrP C depletion caused a further decrease in cell viability and increase in caspase-3 activity, which could be partially rescued by expression of mouse PrP C . Therefore, PrP C functions in neuronal cells to reduce HttQ103 aggregation and increase viability of the cells expressing HttQ103. One possible mechanism for these observed phenotypes is that PrP C depletion causes these effects indirectly by reducing proteasome activity. As shown in Fig. 4, this is indeed the case. . The open and gray bars are data obtained from SN56 and N2a cells, respectively. *P<0.05 and **P<0.01 compared with transfected control cells. 
(C) Filtration assay to measure aggregated protein lysates from SN56 cells transfected with HttQ25, HttQ103, oligonucleotides 2 followed by transfection of HttQ103 expressing vector, and oligonucleotides 2 followed by transfection of HttQ103 and mouse PrP C expressing vectors. The intensity of the Htt retained on the membrane is quantified beneath the dot blot for each experimental condition. (D) The level of HttQ103 expression in cells transfected under varying conditions. Cell lysates (100 g) from SN56 cells transfected with HttQ103, scrambled oligonucleotide followed by HttQ103, oligonucleotides 2 followed by HttQ103, and oligonucleotides 2 followed by HttQ103 and mouse PrP C . The western blot was probed using anti-GFP and anti-actin antibodies. (E) The level of Hsp70 expression in SN56 cells under varying conditions in the presence and absence of HttQ103. Cells were transfected with either HttQ25 or HttQ103. Cells were either mock transfected or transfected with scramble sequence, sequence 1 and sequence 2 oligonucleotides. PrP C depletion causes a 40% decrease in proteasome activity. The expression of HttQ25 has no effect on proteasome activity, whereas expression of HttQ103 caused a 40% decrease in proteasome activity, in agreement with other studies (Jana et al., 2001;Nishitoh et al., 2002;Rangone et al., 2005). When HttQ103 was transfected in PrP C -depleted cells, there was a further reduction in proteasome activity. Specifically, when HttQ103 was expressed in PrP C -depleted cells, the proteasome activity was reduced to 15% of the control cells. Therefore, both HttQ103 expression and PrP C depletion caused a reduction in proteasome activity and the two effects appear additive. The marked reduction in proteasome activity in PrP Cdepleted cells probably caused the marked increase in HttQ103 aggregation, similar to that observed when proteasome activity was inhibited with lactacystin. Since oxidative stress causes a reduction in proteasome activity (Obin et al., 1998;Reinheckel et al., 2000), we measured whether PrP C depletion causes an increase in ROS levels. First, using fluorescence imaging we determined whether the ROS level was higher in PrP C -depleted cells. SiRNA transfections were performed using oligonucleotides conjugated to a fluorophore to enable us to visualize the transfected cells (Fig. 5Aa,c). Compared with transfection with scramble oligonucleotide, the ROS fluorescence intensity was much greater in cells transfected with oligonucleotide 1 than with the scramble oligonucleotide (Fig. 5Ab,d). The ROS fluorescence intensity of cells transfected with the scramble vector was not significantly different from that of the nontransfected cells (see cells with asterisks). Quantification of the ROS levels in the SN56 cells by FACs analysis showed that PrP C depletion caused more than a threefold increase in ROS levels compared with control cells (Fig. 5B). Expression of HttQ103 also caused about a fourfold increase in ROS, in agreement with previous studies (Solans et al., 2006;Wyttenbach et al., 2002). PrP C -depleted cells transfected with HttQ103 showed a sevenfold increase in ROS levels, so again, the effects of PrP C depletion and HttQ103 appears additive. Table 1 shows that the increase in ROS in PrP C -depleted cells was due to a reduction in antioxidant enzyme activities. Compared to control cells, PrP C -depleted cells showed a marked reduction in the activities of SOD, catalase and glutathione reductase. 
This reduction was partially rescued when the cells were transfected with mouse PrP C vector. Therefore, PrP C depletion caused a decrease in antioxidant activity, which in turn increased ROS levels thus causing decreased proteasome activity. To determine whether PrP C functions to protect other cell types that endogenously express PrP C , Caco-2 and HT-29 cells, two human colonic adenocarcinoma cell lines, were depleted of PrP C . As shown in Fig. 6, these cell lines endogenously express PrP C , with HT-29 cells expressing much higher levels of PrP C than the Caco-2 cells (Garmy et al., 2006). By using siRNA oligomers made against human PrP C , we achieved at least a 90% reduction of PrP C in both Caco-2 and HT-29 cells (Fig. 6A). In contrast to neuronal cells, depletion of PrP C from both intestinal cell lines did not significantly affect either proteasome activity or ROS levels (Fig. 6B,C). These results suggest that PrP C confers protection only on neuronal cells. The protective effect of PrP C in neuronal cells was further evident when mouse PrP C was overexpressed in SN56 cells. Overexpression of PrP C caused a reduction in the percentage of cells with HttQ103 granules and an increase in proteasome Journal of Cell Science 120 (15) activity (Fig. 7). Specifically, when mouse PrP C was overexpressed in SN56 cells, HttQ103 granules decreased from 20% to 10%. Similarly, in SN56 cells overexpressing PrP C showed that HttQ103 only caused a 10% decrease in proteasome activity compared to the 40% decrease that occurred in SN56 cells just transfected with HttQ103. Consistent with the lack of protection conferred by PrP C on non-neuronal Caco-2 and HT-29 cells, there was no effect of PrP C expression on the percentage of cells with either HttQ103 granules or proteasome activity in the non-neuronal cell lines NIH3T3 and CHO. Therefore, consistent with our finding that PrP C only confers protection on neuronal cells, expression of PrP C is not protective against Htt-induced toxicity in nonneuronal cells that do not express endogenous PrP C . Finally we examined whether the presence of the scrapie form of prion, PrP Sc affected the aggregation properties of HttQ103. As shown in Fig. 8A, comparison of uninfected and scrapie-infected SN56 cells (ScSN56) showed that 40% of the infected cells had HttQ103 granules compared to 25% of the infected cells. ScSN56 and Sn56 cells had similar viability, probably due to the low levels of scrapie in most of the ScSN56 cells. However, HttQ103 had a different effect on the viability of SN56 and ScSN56 cells. Expression of HttQ103 caused a 40% reduction in the viability of the SN56 cells whereas it caused a 60% reduction in the viability of the ScSN56 cells. Although it had no effect on viability, PrP Sc alone caused an increase in ROS and a corresponding decrease in proteasome activity, and HttQ103 caused further changes. Cells with both PrP Sc and HttQ103 showed a sixfold increase in ROS activity compared to a twofold increase with PrP Sc alone and a fourfold increase with HttQ103 alone (Fig. 8C). Similarly, cells with both PrP Sc and HttQ103 showed a 60% decrease in proteasome activity compared to a 25% decrease with PrP Sc alone and a 40% decrease in proteasome activity with HttQ103 alone (Fig. 8D). 
These data show that PrP Sc further reduces proteasome activity beyond the reduction caused by HttQ103 expression alone, an effect that could explain the increase in the percentage of cells with HttQ103 granules and the decrease in cell viability in ScSN56 cells expressing HttQ103. PrP Sc could cause these effects directly by contributing to the total amount of aggregated protein in the cell, or indirectly by decreasing the amount of active PrP C , or both. Discussion A long-standing question in the prion field is whether the loss of PrP C from scrapie-infected nerve cells contributes to the neuropathology of the disease. To investigate this question, we examined whether PrP C depletion affects Htt aggregation. Our results showed that PrP C depletion caused a marked increase in HttQ103 aggregation in both N2A and SN56 neuronal cell lines. The increase in the fraction of cells with Htt granules that occurred after PrP C depletion was similar to the increase that occurred after treatment of the cells with the proteasome inhibitor, lactacystin. Consistent with this observation, the proteasome activity of PrP C -depleted cells expressing HttQ103 was only 15% of that in control cells whereas it was 60% of the control activity in the absence of HttQ103 expression. Thus when the cells are stressed, there is a further decrease in proteasome function in PrP C -depleted neuronal cells. Depletion of PrP C from neuronal cells also caused a reduction in the activity of antioxidant enzymes. However, despite this reduction in antioxidant enzymes and proteasome activity, there is no obvious phenotype caused by PrP C depletion in the absence of stress. Expression of HttQ25, had no effect on the PrP C -depleted cells. However, when the cells were stressed by expression of HttQ103, the PrP C -depleted cells showed a significant loss of viability and a marked increase in ROS levels. Our results are in agreement with the study of Klamt et al. (Klamt et al., 2001) in which an imbalance in antioxidant defense was found in PrP C -knockout mice. Specifically, oxidative damage to lipids and proteins was much higher in the knockout mice, and the activities of SOD and catalase were reduced. Interestingly, we found that scrapie-infected SN56 cells had properties similar to PrP C -depleted cells. The scrapie infected SN56 cells showed increased HttQ103 aggregation, decreased proteasome activity, and increased ROS levels. In agreement with our results, scrapie-infected hypothalamic neuronal GT1 cells displayed a higher sensitivity to oxidative stress than noninfected cells, as well as a decrease in viability when subjected to stress (Milhavet et al., 2000). An increase in ROS levels was also found in scrapie-infected N2a cells (Fernaeus et al., 2005). It is not clear whether these effects of scrapie infection are due to the scrapie aggregation itself or whether it is also due to a reduction in the level of PrP C . Recent results from the Collinge laboratory showed that disruption of the prion gene in scrapieinfected mice reversed any morphological, neurological or behavioral defects due to scrapie infection (Malluci et al., 2002;Malluci et al., 2007). This shows that scrapie pathology can be reversed by removing PrP C from the cells. However, it is still possible that when scrapie aggregates are present, their effects are worsened by the absence of normal PrP C function. Although we found that PrP C was protective in neuronal cells, it did not confer protection on non-neuronal cells. 
Depleting PrP C from two human epithelial cell lines, Caco-2 and HT-29, which like neuronal cells express PrP C endogenously, had no effect on proteasome activity and ROS levels. Furthermore, overexpression of PrP C reduced Htt aggregation in neuronal cells, but had no effect in either HeLa or CHO cells. Other labs have found that expressing PrP C in breast carcinoma MCF-7 cells inhibited the proapoptotic Bax conformational change and necrosis factor alpha-induced cell death (Diarra-Mehrpour et al., 2004), but there is no evidence that PrP C expression affected either ROS levels or proteasome activity in these cells. The neuro-specific protective effect of PrP C suggests that the signaling pathway activated by PrP C only occurs in neurons. Many proteins have been reported to bind to PrP C , including Sti1, N-CAM, mNOS, APLP1, BL-2 and synapsin (Sakudo et al., 2006) and could be involved in the neuro-specific signaling pathway even if they are not only expressed in nerves. In addition, in a recent model proposed by the Harris laboratory to explain the toxic effects of the truncated PrP C protein (⌬105-125) on mouse viability, they suggested that there is a receptor on the outer surface of nerve cells that normally binds intact PrP C (Li et al., 2007). This putative receptor could be involved in the neuro-specific signaling pathway activated by PrP C . Whatever the nature of the neuro-specific receptor that interacts with PrP C , it is clear that nerve cells respond to signaling triggered by PrP C by increasing cellular defense enzymes. Several pathways implicated in PrP C signaling are consistent with the increase in antioxidant enzymatic activities. A recent study showed that there is a reduction in AKT signaling in PrP C knockout mice compared to control mice (Weise et al., 2006). Similarly, attachment of PrP Cfusion proteins to monocytes caused an increase in AKT and ERK1 and ERK2 signaling (Krebs et al., 2006). Consistent with these observations there is an increase in phosphatidylinositol 3-kinase signaling in PrP C -expressing N2a cells (Vassallo et al., 2005). These signaling pathways promote cell survival and are perhaps responsible for the neuroprotective effect of PrP C on cell signaling. Ultimately, these neuroprotective pathways are not only regulated via phosphorylation but also by activation of the transcription factor, nuclear factor-B, which is a central regulator of immunity, inflammation and cell survival. A diagram of the activation of PrP C that we observed in neuronal cells is shown in Fig. 9. In this model, the deleterious effect of Htt aggregation caused by increasing ROS activity is mitigated by the action of PrP C , which reduces ROS. This in turn increases proteasome activity and reduces Htt aggregation. Interestingly, in a transgenic mouse model of amyotrophic lateral sclerosis, the PrP C protein was specifically repressed when the G85R SOD mutant was overexpressed, but overexpression of wild-type SOD had no effect on PrP C (Dupuis et al., 2002). This suggests that there may be a feedback mechanism that actually downregulates PrP C when cells are under stress, which in turn exacerbates the stress on the cells. There has been no parallel study in mouse models of Huntington disease to determine whether PrP C levels are reduced in animals overexpressing Htt with expanded polyglutamine repeats. In conclusion, PrP C provides protection against protein aggregation and this protection is neuronal specific. 
It may be that neurons are particularly sensitive to damage from aggregated proteins and therefore have a specific regulatory pathway that protects against this damage. By maintaining ROS levels, PrP C protects the cell from a reduction in proteasome activity, thereby helping to prevent protein aggregation. As decreased proteasome activity has been implicated in several neurodegenerative disorders and PrP C specifically increases proteasome activity in neuronal cells, it will be of interest in the future to investigate the protective role that PrP C plays in neurodegenerative diseases caused by protein aggregation. Cell culture The SN56 cells were a generous gift from Bruce Wainer (Department of Pathology, Emory University School of Medicine, Atlanta, GA). The chronically infected ScSN56 cell line infected with the Chandler strain of scrapie was a generous gift from Byron Caughey (RML, Hamilton, MT). SN56 and ScSN56 cells were cultured as described previously (Baron et al., 2006). N2A (mouse neuroblastama cell line), NIH-3T3 (mouse fibroblast cell line) and Caco-2 and HT-29 (human intestinal cell lines) were obtained from the American Type Culture Collection (Manassas, VA). Cells were maintained in Dulbecco's modified Eagle's medium (DMEM), 10% fetal bovine serum, 2 mM glutamine, and 1% penicillin-streptomycin in 75 cm 2 culture bottles in a 5% CO 2 atmosphere at 37°C. CHO cells were cultured and seeded as described previously (Yim et al., 2005). To quantify protein, lysates were run on SDS-PAGE gels (Invitrogen) and then western blot analysis was performed. PrP C was detected by immunoblotting using SAF-70 anti-prion antibody, Htt was detected using an anti-GFP antibody, and Hsp70 was detected using an anti-Hsp70 antibody. The protein bands were detected using chemiluminescent substrate (Pierce, Rockford, IL) and analyzed using the ChemiImager densitometer (Alpha Innotech Corp., San Leandro, CA). The linear range of the western blot analysis was determined by loading from 5 g to 200 g of cell lysates. Quantification of the western blots established linearity over a 10fold range of protein concentration. PrP C was immunoprecipitated by mixing the cell lysate (500 g) with 4 g of D13 anti-prion antibody followed by the addition of 50 l of protein A-Sepharose 4 Fast Flow (GE Health Science, Piscataway, NJ). After incubating at 4°C, the Sepharose beads were collected by centrifugation at 14,000 g, washed with PBS and resuspended in 50 l of 2ϫ Laemmli sample buffer. Protein concentration of the lysates was measured using the Bradford reagent (Bio-Rad, Hercules, CA). For the filter retardation assay, 20 g of denatured protein samples were filtered through a 0.2 m cellulose acetate membrane (Adventec MFS, Dublin, CA) using a dot-blot filtration unit (Bio-Rad). Plasmids and transfection The Htt polyglutamine constructs (HttQ25 and HttQ103) containing glutamine repeat expansion had an EGFP-tag on its C-terminal (Zeng et al., 2004). A mouse prion gene with the hamster epitope tag (a gift from S. Priola, Rocky Mountain Lab, MT) and a hamster prion gene (a gift from D. Ramanujan Hegde, NIH, Bethesda, MD) was subcloned using HindIII and XhoI restriction sites into pCDNA4 vector (Invitrogen). Cells were seeded and transfected with Htt plasmids as described previously (Zeng et al., 2004). At 48 hours after transfection, cells on coverslips were fixed with 4% paraformaldehyde and 3 g/ml 4Ј,6-diamidino-2-phenylindole (Invitrogen) in 1ϫ PBS at room temperature for 20 minutes and mounted. 
Microscopy and immunostaining Cells grown on two-well chamber slides (Labtek, NY) were imaged with a 40ϫ, 1.4 NA objective using a Zeiss LSM 510 confocal microscope (Jena, Germany). The argon and helium lasers were used to excite at 488 nm and 543nm, respectively. Apoptosis was measured by fixing and staining the cells with cleaved caspase-3 antibody (Cell Signaling Technology, Danver, MA). Cells were counted as aggregate positive if one or several granules were visible within a cell. GFP-positive cells (500 per experiment) were counted in multiple random visual fields on each slide. Measurement of proteasome activity, antioxidant activities, reactive oxygen species (ROS) and cell toxicity Proteasome activity was measured from the fluorescence intensity of amino-methyl coumarin (AMC)-conjugated to the chymotrypsin peptide substrate LLVY, using a commercial activity assay kit (Chemicon, Temecula, CA). Cleavage products were measured using the Spectra max Gemini fluorescence plate reader (Molecular Devices Co., Sunyvale, CA). The activities of superoxide dismutase (SOD; Chemicon), catalase (Invitrogen) and glutathione reductase (Cayman) were measured in the cell homogenates. To measure ROS levels, cells were stained with DCF-DA or red CC-1 (Invitrogen). For live cells, FACS analysis was performed on the FACS Calibur instrument (Becton Dickinson, Franklin Lakes, NJ) and for fixed cells, images were obtained on the Zeiss LSM 510 confocal microscope. Cell viability was assessed using a 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT)-based colorimetric assay kit (Invitrogen). Data analysis All data are an average of at least three independent experiments. Student's t-test was used to assess the statistical significance of differences. Fig. 9. Diagram showing the interrelationship between ROS, proteasome activity, huntingtin aggregation and PrP C expression. At any given time in the nerve cell, there are competing pathways in which PrP C expression reduces ROS and in turn increases proteasome activity, whereas Htt aggregation has the opposite effect.
2017-06-28T22:52:18.309Z
2007-08-01T00:00:00.000
{ "year": 2007, "sha1": "508819642976191654c3ff9d2506979d4c2bf333", "oa_license": "CCBY", "oa_url": "http://jcs.biologists.org/content/joces/120/15/2663.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "508819642976191654c3ff9d2506979d4c2bf333", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
4977583
pes2o/s2orc
v3-fos-license
Characterization of the most frequent ATP7B mutation causing Wilson disease in hepatocytes from patient induced pluripotent stem cells H1069Q substitution represents the most frequent mutation of the copper transporter ATP7B causing Wilson disease in Caucasian population. ATP7B localizes to the Golgi complex in hepatocytes but moves in response to copper overload to the endo-lysosomal compartment to support copper excretion via bile canaliculi. In heterologous or hepatoma-derived cell lines, overexpressed ATP7B-H1069Q is strongly retained in the ER and fails to move to the post-Golgi sites, resulting in toxic copper accumulation. However, this pathogenic mechanism has never been tested in patients’ hepatocytes, while animal models recapitulating this form of WD are still lacking. To reach this goal, we have reprogrammed skin fibroblasts of homozygous ATP7B-H1069Q patients into induced pluripotent stem cells and differentiated them into hepatocyte-like cells. Surprisingly, in HLCs we found one third of ATP7B-H1069Q localized in the Golgi complex and able to move to the endo-lysosomal compartment upon copper stimulation. However, despite normal mRNA levels, the expression of the mutant protein was only 20% compared to the control because of endoplasmic reticulum-associated degradation. These results pinpoint rapid degradation as the major cause for loss of ATP7B function in H1069Q patients, and thus as the primary target for designing therapeutic strategies to rescue ATP7B-H1069Q function. substances. The patient did not complain of itching and had no clinical signs of genetic disorders. Growth and neurologic development were normal. Infections, autoimmune hepatitis, celiac disease, thyroid disorders, biliary system disease, cystic fibrosis and myopathy were ruled out. Wilson disease diagnosis was based on low serum levels of ceruloplasmin, increased basal and after penicillamine urinary copper excretion and genetic analysis (Table 1). Diagnosis was confirmed by DNA sequencing of the entire family, that indicated the patient as homozygous for the H1069Q mutation of ATP7B, and the mother heterozygous wt/H1069Q (22). The patient was treated with zinc therapy as first line. A progressive improvement in copper metabolism parameters (urinary copper excretion < 75 mcg/24 h; urinary zinc excretion > 2 g/24 h) was observed in the following 12 months, with a normalization of aminotransferase by 18 months of treatment. The patient remained asymptomatic throughout the entire period of observation and ALT levels persisted normal. Currently he is 18 years old and has normal serum level of aminotransferases. Mesodermal and ectodermal differentiation of hiPSCs For mesodermal differentiation to obtain cardiomyocytes we adapted the protocol by Zhang et al. (24). Briefly, control and patient iPSC colonies were detached from the plate using dispase and after sedimentation were resuspended in the following medium: DMEM/F12 (Invitrogen), 20% FBS (Hyclone), 1X Glutamax, 1X μM nonessential amino acids, 50 U/ml (penicillin and 50 mg/ml streptomicin), 50 μM 2mercaptoethanol (all from Invitrogen). After 4 days, the aggregates were plated on Matrigel coated plates and 50 µg/ml of ascorbic acid (Sigma) was added to the medium. The medium was changed on alternative day for further 21 days. For ectodermal differentiation to obtain neurons we adapted a previously used method (25). 
Control and patient iPSCs plated on Matrigel were induced to differentiate in RPMI supplemented with 1XN2, 50 U/ml penicillin and 50 mg/ml streptomycin (all from Invitrogen). After 7 days of differentiation 1X B27 (Invitrogen) and 1µm retinoic acid (Sigma) were added and kept for the following 7-18 days. The medium was changed every 2 days. RNA isolation and quantitative PCR For quantitative PCR (qPCR), total RNA was extracted by TriSure (Bioline), and first-strand cDNA was synthesized using Mu-MLV RT (New England BioLabs) according to the manufacturer's instructions. qPCR was carried out with the QuantStudio 7 Flex (Thermo Fisher Scientific) using Fast SYBR Green PCR Master Mix (Thermo Fisher Scientific). The housekeeping GAPDH mRNA was used as an internal standard for normalization. Gene-specific primers used for amplification are listed in Supplemental Table S3. qPCR data are presented as fold changes relative to the indicated reference sample using 2DeltaCt comparative analysis. Preparation of cell extracts, SDS-PAGE and Western blot analysis After 20 Immuno-electron microscopy. For pre-embedding immuno-electron microscopy HLCs were fixed, permeabilized and labeled as described previously (16
2018-04-20T13:19:32.577Z
2018-04-19T00:00:00.000
{ "year": 2018, "sha1": "9b41081b1978767b78a4d9c630b35838682ad103", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-24717-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32d93282e6efdc4529680a333483b0f8c9bfda57", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
247100521
pes2o/s2orc
v3-fos-license
SARS-CoV-2-Associated Multisystem Inflammatory Syndrome in a Child in Uganda: A Paediatric Experience in a Resource-Limited Setting SARS-CoV-2-associated Multisystem Inflammatory Syndrome in children (MIS-C) has been described in developed settings that have reported a high burden of COVID-19 cases. However, to date, there are few published cases of MIS-C that have been described in the African region. MIS-C has high morbidity and even mortality without a prompt diagnosis. We report a case of a 9-year-old girl who presented with typical clinical features of MIS-C in Uganda but had a delay in diagnosis. This case report aims to raise awareness among health providers in similar settings to improve clinical suspicion of MIS-C, facilitate prompt diagnosis and treatment, and thus improve outcomes. Introduction Coronavirus disease , caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was first announced a pandemic by the World Health Organisation (WHO) on March 11, 2020. Globally, as of June 28, 2021, coronavirus cases were over 182M with approximately 4M deaths [1]. Around the same time, Uganda was experiencing a second wave of the pandemic and had cumulatively reported over 81,000 cases of COVID-19, of which 4767 (5.9%) were aged 0-19 years, with 4 deaths in this group [2]. Infections among children are usually asymptomatic or mild diseases as compared to adults as reported elsewhere [3,4]. However, SARS-CoV-2 infection in children has recently been associated with a rare severe condition affecting multiorgans called multisystem inflammatory syndrome in children (MIS-C) (CDC-MIS-C) [5,6]. MIS-C has been described by the CDC and WHO as a clinical entity among children infected with SARS-CoV-2 characterised by hyperinflammation with multiorgan involvement [5,6]. e MIS-C clinical presentation has common characteristics with the presentation of Kawasaki disease (KD), including fever, high levels of inflammatory markers, and multisystem damage [7,8]. MIS-C diagnosis can be difficult due to its diverse presentation [3,9]. Furthermore, there is still limited knowledge and experience in the diagnosis and management of SARS-CoV-2-associated MIS-C among clinicians in a resource-limited setting (RLS). Without prompt management, MIS-C has a high risk of poor outcomes, including cardiovascular derangement, high morbidity, and mortality, therefore the need for early and proper diagnosis and treatment [10,11]. With the small proportion of children reported with COVID-19 in most countries in Africa, clinical suspicion of MIS-C among clinicians is low, as many lack awareness, knowledge, and experience on MIS-C due to paucity of data on MIS-C from Africa. Additionally, our case had atypical presentation of MIS-C with a high risk of missed diagnosis as compared to the other cases reported in Africa [12]. We describe a 9-year-old with SARS-CoV-2 infection who presented with rapid deterioration and features of MIS-C, in whom the diagnosis of MIS-C was delayed due to limited awareness of the condition in Uganda. e report aims to raise awareness among health providers in similar settings to improve clinical suspicion of MIS-C, facilitate prompt diagnosis and treatment, and thus improve the outcome. COVID-19 Presenting with Acute Abdomen, Mimicking Acute Appendicitis. 
We describe a 9-year-old Ugandan girl, previously healthy, who presented to a COVID-19 treatment unit (CTU) of a tertiary hospital as a referral from a private hospital with persistent high-grade fever which was unresponsive to antipyretics and severe abdominal pain in the periumbilical and right iliac regions for 8 days. She also had associated malaise, nonbloody profuse diarrhoea (7-10 motions/day), and vomiting which developed approximately 5 days after the onset of symptoms. She had a generalised maculopapular nonitchy skin rash that had been noticed seven days after the onset of symptoms, but this was not associated with features of conjunctivitis or reddening of the mouth. She also developed a dry, irritating cough a day prior to admission to the CTU that was associated with shortness of breath. e child first presented to an outpatient health facility two days after the onset of the fever, weakness, and abdominal pain. She was treated with six-hourly paracetamol and an oral amoxicillin plus clavulanic acid following a complete blood count (CBC) report that showed neutrophilia. However, despite several days of treatment, the symptoms persisted with worsening abdominal pain, highgrade fevers, chills, diarrhoea, vomiting, and loss of appetite, which raised a clinical suspicion of appendicitis. An abdominal ultrasound performed at this time was essentially normal with no features to suggest an appendicitis or any other finding to explain the abdominal symptoms. Nevertheless, an empiric diagnosis of appendicitis was made and laparoscopic appendicectomy was performed. At laparoscopic inspection, the appendix was grossly normal. e child's course deteriorated postoperatively with severe lethargy and persistent symptoms despite three days of intravenous (IV) fluids, oral morphine for pain, antipyretics, and empiric broader spectrum antibiotics (piperacillin-tazobactam). e child developed photophobia. An nasopharyngeal swab for SARS-CoV-2 was performed and was found positive. At this time, approximately 7 days after the onset of the initial symptoms, the child developed a dry paroxysmal cough with shortness of breath. Her oxygen saturation (SPO 2 ) dropped from 98% to 82%. A diagnosis of COVID-19 pneumonia was made, and she was transferred to the CTU at the tertiary hospital for further management. Notably, it was established that the child was in close contact with her mother who had a positive COVID-19 PCR test on a nasopharyngeal swab about three weeks prior to her presentation at the CTU. Examination revealed a very sick child with a maculopapular rash, oral thrush, febrile (temperature of 39.5°C), tachypneic with a respiratory rate (RR) of 35 breaths/min, SPO 2 80-85% on room air, and 95%-97% on 3 l/min of oxygen by nasal prongs. e chest was clear. Pulse rate was 46-70 beats/min, blood pressure was 110/60 mmHg, and heart sounds were normal. e abdominal examination revealed a laparoscopy scar, tenderness in the right iliac fossa and epigastrium with no guarding or distension, and had normal bowel sounds. She was agitated, but the neck was soft and Kernig's sign was negative. Diagnostic Assessment. A comprehensive work up was performed including CBC, liver function tests (LFTs), D-dimers, C-reactive protein (CRP), ferritin, renal function tests (RFTs), electrolytes, Troponin T (results are provided in Table 1), and chest CT scan ( Figure 1). Significantly, she had a lymphopenia of 720 cells/microliter (NR 1.0-7.0), a raised CRP of 103 mg/dl (NR < 1.6), and ferritin of 344 ng/ml (NR 20-250). 
CT scan ( Figure 1) showed features of multifocal peripheral ground glass opacifications and consolidation in both lungs with basal and apical predominance. e findings of lymphopenia, raised CRP, and ferritin and a positive SARS-CoV-2 test together with symptoms of fever and gastrointestinal tract (GIT) symptoms were consistent with a diagnosis of MIS-C [5,6]. To investigate for Kawasaki disease, we also performed an echocardiogram and an electrocardiogram both of which were normal. Treatment for SARS-CoV-2-Associated MIS-C. e child was started on intravenous immunoglobulin (IVIG) on day 10 of fever onset at a dose of 2 g/kg. She was also treated with methylprednisolone at a dose of 2 mg/kg/day as an antiinflammatory drug and continued to receive piperacillin/ tazobactam at 100 mg/kg/day for possible sepsis. Low molecular weight heparin (LMWH) was administered at 20 mg subcutaneously for 5 days as prophylaxis for deep venous thrombosis. However, the patient continued to have cardiorespiratory instability, acute kidney injury (see creatinine trend in Table 1) with oliguria of 0.6 ml/kg/hr, and central nervous system (CNS) deterioration with compensated hypovolemic shock, hypothermia of 34.5°C, photophobia, diplopia, and agitation. We attributed her CNS symptoms to side effects of IVIG. For the agitation, clonidine tablets (100 microgram) were given at night for 3 days. Her fluid and nutrition intake were maintained using a nasogastric tube feed in addition to intravenous fluids. In view of a nationwide shortage of critical care beds resulting from the COVID-19 surge, it was not possible to escalate her care to mechanical cardiorespiratory support. We continued to provide the above support in a pediatric intensive nursing setting. Recovery and Follow-Up. After 72 hrs of IVIG, high-dose steroids, and intensive supportive care, the fever trended down to normal, the respiratory distress improved, and her vital organ function laboratory tests including the RFTs, LFTs, electrolytes, and inflammatory markers were trending to normal (Table 1). She was discharged home after 11 days of admission on oral prednisolone 30 mg (1 mg/kg), which was tapered over 2 weeks, aspirin 75 mg once a day for a total of 4 weeks, inhaled budesonide 200 mcg twice a day for 2 weeks, and home chest physiotherapy. Echocardiogram and electrocardiogram were repeated 2 weeks after discharge to evaluate for any complications due to the inflammation and these were normal. We plan to obtain a chest radiograph and spirometry 4 weeks after discharge to evaluate for complete resolution of lung disease and no residual lung disease. Discussion MIS-C associated with SARS-CoV-2 is a rare (2 in 100,000) severe clinical presentation of COVID-19 in children [9,13]. Although it is rare, there is a need to raise awareness of this syndrome due to its diverse presentation, the emergency/ critical care needed, and the possible fatal complications. We believe this is the index case reported in Uganda. Children and adolescents were less severely affected by the SARS-CoV-2 in our setting and, therefore, there was a very low index of suspicion among health workers for diagnosis of MIS-C. MIS-C associated with SARS-CoV-2 is thought to occur secondary to a cytokine storm that damages numerous organ systems. e inflammatory response results in blood vessel dilation, leading to hypotension, fluid accumulation, and shock. 
It is speculated that MIS-C is a stage III-delayed immunological phenomenon associated with hyperinflammation following either symptomatic or asymptomatic COVID-19 infection [14]. It is still unknown if there is a genetic predisposition to MIS-C [14]. e reported median age of presentation is 9 years [11]. Centers for Disease Control and Prevention (CDC) defines MIS-C as a clinical condition which affects patients under 21 years of age presenting with fever >38.0°C for ≥24 hours, or report of subjective fever lasting ≥24 hours, laboratory evidence of inflammation (one or more of the following: elevated C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), fibrinogen, procalcitonin, D-dimer, ferritin, lactic acid dehydrogenase (LDH), or interleukin 6 (IL-6), elevated neutrophils, reduced lymphocytes, and low albumin)), severe illness needing hospitalisation, and involvement of two or more organ systems (cardiac, renal, respiratory, hematologic, gastrointestinal, dermatologic, or neurological), with positive testing for SARS-CoV-2 indicating current or recent infection or COVID-19 exposure; and no other alternative plausible diagnoses [5]. Despite presenting with features suggestive of MIS-C as defined by the CDC, the patient was initially managed for the common illnesses in children like appendicitis and gastroenteritis. Presentation with an acute abdomen requiring surgical intervention is not uncommon in MIS-C [8,15]. A case series of children with MIS-C from South Africa showed that children underwent laparotomy for suspected appendicitis just like ours [15]. e cause of acute abdomen is plausibly due to the hyperinflammatory state seen in COVID-19 and MIS-C, which may play a role in the pathogenesis of intestinal involvement. It is also hypothesised that there is a role of the angiotensin-converting enzyme 2 (ACE2) receptor that is expressed in the intestine [16], allowing SARS-CoV-2 to invade gastrointestinal cells. is presentation may delay the diagnosis and lead to unnecessary surgery. erefore, a high index of suspicion is needed to avoid a missed diagnosis. Fever and gastrointestinal symptoms like abdominal pain and nausea are overlapping symptoms for both MIS-C and appendicitis. However, the presentation of this child at the peak of the second wave of SARS-CoV-2 in our setting, with exposure to SARS-CoV-2 in the family and increased inflammatory markers should have increased our diagnostic suspicion of MIS-C. Yet, the diagnosis of MIS-C was made 10 days after the onset of symptoms in the patient, which may have led to her rapid deterioration and could possibly have led to unfavourable outcomes without a proper diagnosis. Her specific therapy for MIS-C with IVIG and high-dose glucocorticoid was started on 10 th day of the onset of illness, yet it is reported that in most cases IVIG was administered between days 5 and 8 of illness [17]. We believe this would have been different if there was increased awareness among the health workers regarding MIS-C presentation. e majority of the children with MIS-C progress into cardiovascular, and for some, respiratory dysfunction, with a reported 61% becoming hypotensive [11,18]. Our patient showed features of circulatory shock, although she had normal BP, normal heart function on echo, and the cardiac markers were normal. However, shock and hemodynamic compromise in MIS-C can occur in the absence of laboratory evidence of myocardial inflammation and with preserved cardiac function and rapid reversibility [19]. 
e cause may plausibly be due to the pathogenesis of MIS-C of severe vasodilatation and even the infection. Most of the children will need inotropic support for the shock [11,20]. However, we adequately reversed the shock in the patient with IV fluids alone, and we did not need inotropes. is could have been due to the early identification and intervention of the shock while in the hospital. Respiratory distress which includes tachypnea, retractions, and/or increased work of breathing is common in 72% of children with MIS-C. Mechanical ventilation is required for approximately a quarter of the children with MIS-C [11]. e radiological findings vary in MIS-C, including pneumonia and/or pleural effusions identified in chest radiographs of 55.8% of children, as reported by Aronoff et al. [11], while Kaushik et al. [20] reported focal or bilateral pulmonary opacities in 33% of the patients. Subpleural ground glass opacities and consolidations with features of pneumonia have also been reported [21]. ese findings were consistent with our findings of consolidation and ground glass opacities. e patient also developed acute kidney injury as defined by raised serum creatinine for age, but this was resolved by conservative management with fluid. e AKI may have been caused by the cytokine storm of the disease, drugs used in management, or hypovolemia from the shock. Acute kidney injury has been reported in about 11.9% of children, and none of the cases reported long-term chronic kidney disease requiring dialysis [11]. Currently, specific therapy for MIS-C is based on expert opinion and previous management of hyperinflammatory conditions like KD [14]. No randomized controlled trials have been performed to date for the most appropriate therapy. Management of the MIS-C in this patient depended on previous reports [14,22,23] and guidance on management from the CDC and WHO [5,6]. e goals of treatment for MIS-C are to decrease systemic inflammation and restore organ function, in order to decrease mortality and reduce the risk of long-term sequelae, such as the development of persistent cardiac dysfunction. Our case received both steroids and IVIG as recommended, with a good response clinically and in the immunological markers. Although IVIG is recommended, it is costly and would not be affordable for the majority of patients in LMICs. Furthermore, the biologic agents, interleukin-1, and interleukin-6 antagonists like anakinra and tocilizumab, respectively, are unavailable and even more costly. erefore, if the patient needed escalation of therapy, this would not be an option in our setting. Although most of the current treatment protocols utilise intravenous immunoglobulin (IVIG) and methylprednisolone with ASA for the treatment of MIS-C [11,23], evidence has not yet emerged regarding which regimen gives a better outcome in the management of MIS-C. A review by Mcardle et al. was inconclusive regarding evidence for superiority of any of the three treatment regimens: a combination of IVIG and glucocorticoids, IVIG alone, and glucocorticoids alone [23]. However, they found that glucocorticoids alone may reduce progression on ventilator support, and the combination of IVIG and glucocorticoids may reduce the risk of immunomodulatory treatment escalation [23]. erefore, since IVIG and biologic agents are costly and have limited availability in many countries, more evidence is needed to support their use in preference to cheaper anti-inflammatory agents such as glucocorticoids. 
MIS-C has a fairly good outcome with early diagnosis and intervention. Survival was reported at 82.2% and mortality at 1.4% [11], but intensive care treatment is needed for many children [17,18,20]. Even with a delay in making a diagnosis and the institution of therapy, our patient had a favourable outcome with minimal complications. We believe the outcome was good due to the absence of cardiac complications like coronary aneurysms and myocardial damage and the eventual institution of therapy with IVIG and glucocorticoids. Additionally, evaluation of patients with evidence of MIS-C requires a multidisciplinary approach. ese teams are not readily available in many facilities in LMICs, but in this case early consultations were made with the teams including intensivists, infectious disease specialists, cardiologists, pulmonologists, and pediatricians, which may have led to a good outcome. Despite being rare, MIS-C is of significant concern due to the severity of the illness, with the majority of children requiring critical care treatment for complications to prevent unfavourable outcomes. MIS-C should be high on the differentials for patients who present with gastrointestinal symptoms and a history of recent SARS-CoV-2 exposure or infection, even if clinical findings seem consistent with other pathologies like appendicitis. erefore, health workers in LMICs need awareness regarding MIS-C to improve outcomes. Data Availability e data regarding this case report is present at the Records Registry of the Mulago National Referral Hospital. It may not be possible to get it online due to data protection policies for patients' records. Ethical Approval Institutional approval was obtained from the Mulago Hospital Research Ethics Committee.
2022-02-26T00:19:35.777Z
2022-02-23T00:00:00.000
{ "year": 2022, "sha1": "9aa9352f02ebbd70d9ae1cf6e7e7a2aedcb80af2", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/criid/2022/7811891.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6e5cc7a3e2d44ac356733ada6578b0f307f07839", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
62889231
pes2o/s2orc
v3-fos-license
Long-range p-d exchange interaction in a ferromagnet-semiconductor Co/CdMgTe/CdTe quantum well hybrid structure The exchange interaction between magnetic ions and charge carriers in semiconductors is considered as prime tool for spin control. Here, we solve a long-standing problem by uniquely determining the magnitude of the long-range $p-d$ exchange interaction in a ferromagnet-semiconductor (FM-SC) hybrid structure where a 10~nm thick CdTe quantum well is separated from the FM Co layer by a CdMgTe barrier with a thickness on the order of 10~nm. The exchange interaction is manifested by the spin splitting of acceptor bound holes in the effective magnetic field induced by the FM. The exchange splitting is directly evaluated using spin-flip Raman scattering by analyzing the dependence of the Stokes shift $\Delta_S$ on the external magnetic field $B$. We show that in strong magnetic field $\Delta_S$ is a linear function of $B$ with an offset of $\Delta_{pd} = 50-100~\mu$eV at zero field from the FM induced effective exchange field. On the other hand, the $s-d$ exchange interaction between conduction band electrons and FM, as well as the $p-d$ contribution for free valence band holes, are negligible. The results are well described by the model of indirect exchange interaction between acceptor bound holes in the CdTe quantum well and the FM layer mediated by elliptically polarized phonons in the hybrid structure. The exchange interaction between magnetic ions and charge carriers in semiconductors is considered as prime tool for spin control. Here, we solve a long-standing problem by uniquely determining the magnitude of the long-range p−d exchange interaction in a ferromagnet-semiconductor (FM-SC) hybrid structure where a 10 nm thick CdTe quantum well is separated from the FM Co layer by a CdMgTe barrier with a thickness on the order of 10 nm. The exchange interaction is manifested by the spin splitting of acceptor bound holes in the effective magnetic field induced by the FM. The exchange splitting is directly evaluated using spin-flip Raman scattering by analyzing the dependence of the Stokes shift ∆S on the external magnetic field B. We show that in strong magnetic field ∆S is a linear function of B with an offset of ∆ pd = 50 − 100 µeV at zero field from the FM induced effective exchange field. On the other hand, the s − d exchange interaction between conduction band electrons and FM, as well as the p − d contribution for free valence band holes, are negligible. The results are well described by the model of indirect exchange interaction between acceptor bound holes in the CdTe quantum well and the FM layer mediated by elliptically polarized phonons in the hybrid structure. I. INTRODUCTION The integration of magnetism into semiconductor electronics would initiate a new generation of computers based on advanced functional elements where the magnetic memory and electronic data processor are located on a single chip [1][2][3][4]. One approach in this direction is based on hybrid systems where a thin ferromagnetic film is placed on top of a semiconductor. In such a system one expects to detect emergent functional properties which appear and benefit from bringing the primary constituents together, i.e. the magnetism as in ferromagnets (FM) with the optical and electrical tunability as in semiconductors (SC) [5][6][7][8][9][10][11][12]. For that purpose it is mandatory to establish a strong exchange interaction between the charge carriers in the SC and the magnetic ions in the FM. 
Control of the concentration of the charge carriers and the penetration of their wavefunction into the FM layer should consequently change the magnitude of the exchange coupling between FM and SC [13]. As a result of the coupling, the following interdependencies are established: spin polarization of charge carriers in the SC by the magnetized FM layer and inverse action of the spin polarized carriers to control the FM magnetization. Previously, it was demonstrated that the stray fields of a FM layer influence the spin polarization of conduction band electrons in bulk GaAs [5,14] and diluted magnetic semiconductors [15,16]. In turn, illumination of a GaAs SC changed the coercive force of a nickel-based interfacial FM layer (photocoercivity), which was attributed to optical control of the exchange coupling at the interface between FM and SC [5]. A novel type of hybrid structure with a thickness of a few tens of nanometers only is obtained by combining a FM layer and a SC quantum well (QW) that are located in close proximity of each other, separated by a SC barrier with a few nanometer thickness [15][16][17][18][19][20][21]. Such structures with a well-defined profile along the growth axis can be fabricated with monolayer precision. The stray fields from the FM layer are weak so that they contribute significantly to the carrier spin polarization only in combination with a magnetic SC QW [15,16]. For non-magnetic SCs, the contributing mechanisms are a direct exchange interaction generating an equilibrium spin polarization [13,[17][18][19] and a spin dependent tunneling into the FM layer [20,22]. In Ref. 20 it was demonstrated that in hybrid structures based on combining a GaMnAs FM with a InGaAs QW the conduction band electrons in the QW are spin polarized due to spin-dependent capture into the FM layer. Another mechanism leading to an equilibrium spin polarization of a two-dimensional hole gas in an InGaAs QW due to the p − d exchange interaction was reported in Ref. 18. Also, the exchange fields in graphene layers coupled to yttrium iron garnet were used to achieve a strong modulation of spin currents [23]. Recently, a new type of proximity effect was observed in a hybrid structure composed of a few nanometer thick Co layer which is deposited on top of a CdTe/CdMgTe semiconductor QW structure. The proximity effect was manifested in a FM induced spin polarization of holes bound to shallow acceptors in the QW [21]. The polarization of the holes takes place due to an effective p − d Optical excitation is linearly (π) polarized. Circularly polarized σ + and σ − emission components are detected. The thickness of the Cd0.8Mg0.2Te buffer is 3 µm, the QW width is 10 nm, the Co thickness dCo ≈ 4 nm and the spacer thickness is in the range of dS = 5 − 10 nm. In an external magnetic field BF ≥ 50 mT the interfacial FM magnetization M is directed perpendicular to the sample surface. (b) Spectra of PL intensity (black line) and degree of circular polarization (colored symbols). Excitation photon energy ωexc = 1.7 eV. (c) Magnetic field dependence of FM induced circular polarization ρ π c (BF ). The polarization is averaged over the spectral range of the e − A 0 PL band (1.57 − 1.62 eV). All measurements are performed at T bath = 2 K. exchange interaction between the FM (d-system) and the QW holes (p-system). In this case, the FM produces an effective magnetic field which acts on the acceptor-hole spins and consequently leads to an equilibrium spin polarization of the holes. 
The main feature of this indirect exchange interaction is its long-range character, i.e. the proximity effect is almost constant with increasing thickness of the CdMgTe spacer between the FM and the QW layers up to 30 nm. This length scale is significantly larger than the 1-2 nm distance required for a significant overlap of wavefunctions in the direct exchange interaction between the QW holes and the magnetic ions in the FM. In Ref. 21 it was conjectured that the long-range indirect exchange originates from exchange of elliptically polarized acoustic phonons which exist in the FM layer close to the magnon-phonon resonance [24] and can penetrate into the SC layer. This mechanism was used later to explain the influence of elliptically polarized phonons on the magnetic properties of materials [25]. However, the spin polarization of the acceptors and the resulting circular polarization of the photoluminescence (PL) depend not only on the exchange splitting between the spin levels of the holes ∆ pd but also on other factors such as the temperature, the ratio of lifetime and spin relaxation time of the holes etc. Therefore, the polarization of the PL evaluated in Ref. 21 can be considered only as rough estimate for the splitting ∆ pd ≈ 50 µeV, and it is necessary to perform a direct measurement of the spin splitting of the acceptor holes using complementary techniques. In this paper, we report on the investigation of the FM induced spin splitting of the acceptor bound holes in a CdTe QW located in close proximity of a Co layer. While previous optical and electrical measurements were indirect requiring additional model assumptions for analysis, here we perform a direct measurement using spin-flip Ra-man scattering giving the dependence of the Stokes shift ∆ S on external magnetic field B. In strong magnetic fields, ∆ S (B) scales linearly with B. Extrapolation of these data to zero magnetic field reveals a finite offset of the Stokes shift due to the FM induced effective exchange field with a magnitude of ∆ pd = 50 − 100 µeV. This offset varies only weakly on the CdMgTe spacer thickness also in ranges where wavefunction overlap is negligible so that it has to be attributed to a long-range p − d interaction. In addition, we show that the s − d exchange interaction between conduction band electrons and the FM as well as the corresponding p − d contribution for free valence band holes are negligible. These results are surprising from the viewpoint of standard theory of exchange interaction which is proportional to the overlap of the wavefunctions of the interacting particles. However, they are in line with the conjecture of an indirect exchange mediated by elliptically polarized phonons in FM-SC hybrid structures [21] and therefore corroborate this model. The paper is organized as follows. First, in Sec. II we describe the proximity effect based on PL data recorded in a wide range of magnetic fields up to 3 T. Next, we present the results on spin-flip Raman scattering in Sec. III. In Sec. IV time-resolved data on pump-probe Kerr rotation are given where we evaluate the influence of the FM on the Larmor precession of the optically oriented holes and electrons. Finally, the results are discussed in Sec. V. II. FERROMAGNETIC PROXIMITY EFFECT The studied CdTe/Cd 0.8 Mg 0.2 Te QW structures were grown by molecular-beam epitaxy (MBE) on top of (100)-oriented GaAs substrates. The subsequent deposition of Co at room temperature was done without any intermediate contact to ambient atmosphere. 
Details on growth and characterization are given in Ref. 21. A schematic presentation of the structure and of the geometry for PL measurements is shown in Fig. 1(a). The used gradient growth technique allowed variation of the thickness of both the Co layer and the Cd 0.8 Mg 0.2 Te spacer up to 10 and 30 nm, respectively. The 10 nm thick CdTe QW is sandwiched between Cd 0.8 Mg 0.2 Te barriers. The thickness of the Cd 0.8 Mg 0.2 Te buffer is about 3 µm. Most of studies are performed on samples with a Co layer thickness of about 4 nm and a spacer thickness of d S = 5 − 10 nm. The samples are mounted in a split-coil helium bath cryostat with a variable temperature insert. The magnetic field is applied in the Faraday geometry parallel to the structure growth axis (B z). In the PL measurements, excitation of electron-hole pairs in the QW layer is accomplished by picosecond optical pulses emitted by a tunable Ti:Sapphire laser at a repetition frequency of 75.75 MHz. The photon energy ω exc is kept below the band gap energy of the Cd 0.8 Mg 0.2 Te barriers (∼ 1.9 eV) in order to generate carriers in the QW layer only. The emission is analyzed and detected by a spectrometer equipped with a charge-coupled-device camera and a streak camera for time-integrated and timeresolved measurements, respectively. Figure 1 summarizes the time-integrated data on the ferromagnetic proximity effect. These PL data are measured in the Faraday geometry on the sample with d S = 10 nm at a bath temperature of T bath = 2 K. The total PL intensity I 0 = I π + + I π − and degree of circular polarization ρ π c spectra are shown in Fig. 1(b). The degree of circular polarization is defined as ρ π c = (I π + − I π − )/(I π + + I π − ), where I π + and I π − are the σ + -and σ − -polarized emission intensities of the PL under linear polarized excitation, as indicated with the π in the superscript. Already in weak magnetic fields B F = ±40 mT a circular polarization of several percent appears in the spectral range of the low energy PL band from 1.57 − 1.62 eV, which corresponds to recombination of conduction band electrons with holes bound to acceptors (the e − A 0 band). This effect was studied in detail in our previous work where we demonstrated that: [21] (i) the circular polarization appears due to a FM induced spin polarization of the acceptor bound holes; (ii) the effect is induced by an interfacial FM which is formed at the Co/CdMgTe interface with a magnetization M||z and an out-of-plane (perpendicular) anisotropy (see Fig. 1). In weak magnetic fields the magnetization of the Co layer M Co is located in the plane of the structure (M Co ⊥ z) and does not contribute to the circular polarization of the PL. Here, we extend the measurements of ρ π c (B F ) to a larger magnetic field range up to 3 T, where an outof-plane magnetization of the Co FM layer is present. Figure 1(c) shows the FM induced dependence ρ π c (B F ) averaged across the spectral range from 1.57 − 1.62 eV as function of the magnetic field B F in the Faraday con- figuration. In strong fields B F > 0.25 T the polarization increases with B F and changes its slope to a weaker dependence around 2 T, which is close to the saturation field of Co 4πM Co = 1.7 T (see Ref. 26). At first glance this behaviour could be attributed to a spin polarization of the holes due to exchange interaction with the Co where the exchange constant J pd has the opposite sign as that of the interfacial FM. 
However, care should be exercised here because there is a significant contribution of magnetic circular dichroism (MCD) to the data as follows from time-resolved photoluminescence (TRPL) measurements. Figure 2(a) shows transients of the antisymmetric term of the polarization degreeρ π The data can be well described with the following expression where the instantaneous polarization degree ρ MCD results from the difference in absorption of σ + and σ − polarized light in the Co layer, the amplitude A corresponds to the equilibrium polarization of the acceptor holes induced by the external magnetic field B F and the FM induced effective exchange field. τ S is the spin relaxation time of polarized carriers, during which equilibrium populations of the spin levels are reached. The magnetic field dependencies of ρ MCD and A evaluated from fits to theρ π c (t) transients are shown in Fig. 2(b). Obviously the MCD saturates at B F ≈ 1.7 T, while the amplitude A continuously grows with B F . The increase of A with magnetic field is related to an additional equilibrium polarization of the holes due to thermalization between the spin levels. For small splittings (A ≪ 1) the field dispersion of A can be approximated by where µ B > 0 is the Bohr magneton, k B is the Boltzmann constant, and g A is the Landé factor of the acceptor which determines the splitting of the heavy hole states with angular momentum projections J z = ±3/2 onto the quantization axis of the QW. Since the amplitude A does not saturate in magnetic fields B F > 1.7 T (see Fig. 2(b)), we conclude that the contribution of the Co film to the proximity effect is negligible. Using Eq. (2) we obtain ∆ pd = 50±10 µeV and |g A | = 0.4±0.1 (see the dashed line in Fig. 2(b)). This evaluation depends, however, sensitively on the actual temperature of the crystal lattice T in the illumination area which we assumed to be T = 5 K, i.e. about 3 K higher than the bath temperature of T bath = 2 K. Laser heating of the crystal lattice due to optical excitation is in agreement with our previous studies on optical orientation of Mn ions in GaAs [27]. Thus, TRPL polarization measurements as applied up to now can be used to estimate the exchange energy splitting ∆ pd but this requires an accurate knowledge of T . Such a precise assessment is, however, hardly possible, but every determination of the crystal temperature is subject of considerable inaccuracies. In the following, we therefore present other methods that can be used for a direct measurement of the exchange energy which does not require any estimates. III. EVALUATION OF p − d EXCHANGE INTERACTION VIA SPIN-FLIP RAMAN SCATTERING Resonant spin-flip Raman scattering (SFRS) allows measuring the magnetic field induced splitting of the spin levels of charge carriers in semiconductor QW structures [27][28][29][30]; moreover, it can be also exploited to evaluate exchange energies by which different spin configurations are separated [31]. As we will demonstrate in the following, in contrast to polarization-resolved PL measurements, SFRS grants access to the effective p − d exchange constant in the hybrid structures studied here. The physics of SFRS for a hole bound to an acceptor is shown in Fig. 3. 3. (Color online) Schematic presentation of (a) SFRS for acceptor bound heavy hole; (b) geometry of SFRS experiment with Θ = 20 • . Black bold arrows ⇑ and ⇓ correspond to z-projections of angular momentum of acceptor hole, Jz, equal to +3/2 and −3/2, respectively. 
Red bold arrow ⇑ corresponds to z-projection of angular momentum of heavy hole in exciton which is equal to +3/2. Thin arrows ↑ and ↓ correspond to electron spin projection on z axis, +1/2 and −1/2, respectively. Coefficients α and β determine the mixing of electron spin states in external magnetic field and depend on angle Θ. Initially, the exciting photon in state |ω 1 , k 1 , σ + with optical frequency ω 1 and circular polarization σ + propagates along the magnetic field direction k 1 B. The | ± 3/2 ground states of the heavy hole bound to an acceptor A 0 in the QW are the eigenstates of the angular momentum projection onto the direction z perpendicular to the QW plane, J z = ±3/2 (black bold arrows ⇑ and ⇓ in Fig. 3(a)). In the absence of p − d exchange interaction, the Zeeman splitting of the spin levels is given by In the intermediate SFRS state the A 0 X complex given by an exciton bound to a neutral acceptor is created. For σ + excitation, the angular momentum projection of the heavy hole in the exciton is equal to +3/2 (red bold arrow ⇑ in Fig. 3(a)), while the spin of the acceptor bound hole is equal to J z = −3/2 (see Fig. 3(a)) [28]. The exchange interaction between the exciton heavy hole and the acceptor bound hole can lead to a mutual flip of their spins with conservation of total angular momentum. In the next step, the exciton is annihilated and a photon is emitted with optical frequency ω 2 and opposite circular polarization σ − . Here, energy conservation is fulfilled only for the initial and final states (photon and acceptor), but not in the intermediate state (exciton bound to neutral acceptor). In the final state we obtain the emitted photon |ω 2 , k 2 , σ − and the acceptor with J z = +3/2. Thus, the energy of the emitted photon is ω 2 = ω 1 − µ B |g A |B, which is shifted into the Stokes region. In Faraday geometry (B z) the transition described above is forbidden because the angular momentum of the hole in the A 0 X complex should change by three quanta, ∆J z = 3, while the angular momentum of the photon ∆l in the back-scattering geometry (k 1 = −k 2 ) changes by 0 or ±2 only. For the observation of an SFRS line corresponding to the transition of the hole between its Zeeman levels we use therefore an oblique field geometry, namely an angle Θ between the z-axis and magnetic field B of 20 • is chosen (see Fig. 3(b)). In this geometry, the magnetic field induces a mixing of the electron states with spin projections +1/2 and −1/2 along z (thin arrows ↑ and ↓ in Fig. 3(a)) which allows for observing SFRS in crossed circular polarizations [28]. For an efficient SFRS process, it is necessary to tune the laser photon energy into resonance with the A 0 X transition (1.610 eV). In case of a noticeable p − d exchange interaction between the magnetic ions in the FM layer (d-system) and the holes bound to acceptors in the QW (p-system) the splitting ∆ S (B) of the A 0 states is determined not only by the external magnetic field B, but also by the additional contribution due to the effective exchange field from the FM. Therefore, the resulting splitting is given by where m z is the z-projection of the unit vector m along the magnetization M. Eq. (3) is valid for large magnetic fields, when the first term on the right hand side is larger than the second one, i.e. ∆ S (B) > 0. 
Here, we use the fact that the p − d exchange interaction between the magnetic ions and the heavy holes in a QW structure is strongly anisotropic, i.e., it is described by the Ising Hamiltonian 1 3 ∆ pd m z J z [32]. In strong magnetic fields, the FM is fully magnetized along the B-direction and the dependence ∆ S (B) is a straight line with an offset given by the exchange constant ∆ pd . For small Θ, the projection m z = cos Θ ≈ 1 and |g A | corresponds to the longitudinal acceptor g factor, which determines the Zeeman splitting for B applied along the z-direction. In our case Θ = 20 • which allows one to use cos Θ = 1 in the evaluation of the exchange energy ∆ pd with an accuracy of 7%. Figure 4 summarizes the data on the SFRS corresponding to the spin flip of the electron (e) and the hole bound to an acceptor (A 0 ). For B = 10 T under resonant excitation of the A 0 X transition with photon energy ω exc = 1.610 eV, the spin flip of the acceptor bound hole is observed for crossed orientations of polarizer and analyzer (σ + , σ − ) as shown in Fig. 4(a). The signal is given by the broad line with a Raman shift of ∆ S = 160 µeV close to the laser line. Figure 4(b) shows the magnetic field dependences of the Raman shift of the acceptor bound hole ∆ S for various temperatures T bath . The data are well described by Eq. (3) with the hole g factor |g A | = 0.4 which determines the slope of the line. The offset ∆ pd ≈ 50 µeV for T bath = 5 K and depends weakly on temperature. A weak dependence of ∆ pd on T bath follows also from Fig. 4(d), where the temperature dependence of the Raman shift ∆ S for a fixed magnetic field B = 10 T is shown. Such behavior cannot be attributed to an exchange interaction with paramagnetic ions or FM Co clusters diffused into the QW during the growth process. The magnetization of ions should decrease strongly with increasing temperature from 2 to 25 K, which is in contrast to our observations (see Fig. 4(b) and (d)). Thus, we conclude that the SFRS demonstrates the splitting of the acceptor bound hole in the FM induced exchange field. The striking feature of this interaction is its long range nature. Figure 4(e) shows the splitting ∆ pd vs the spacer thickness evaluated from magnetic field dependences of ∆ S (B) measured on corresponding samples. We observe a splitting of about 100 µeV even for spacers as large as 10 nm. This distance is significantly larger than the penetration depth of electron and hole wavefunctions of maximum 1-2 nm into a FM layer that would be required to obtain a considerable direct exchange interaction [21]. The offset in the magnetic field dependence of the acceptor bound hole Raman shift ∆ S (B) has to be considered with considerable care. Apart from the FM induced exchange field, the offset may result from the energy splitting between the heavy and light holes bound to an acceptor. The magnitude of this splitting is about ∆ lh ≈ 1 meV [28]. For the magnetic fields B ≤ 10 T used in our experiments, the Zeeman splitting of the hole states µ B |g A |B is clearly less than ∆ lh , which results in the transition scheme shown in Fig. 5. At low temperatures, the lowest energy heavy hole state with angular momentum projection J z = −3/2 is populated. From this state, there are three possible transitions which are shown with the red arrows in Fig. 5. It follows that a decrease of the magnetic field leads to vanishing of the | − 3/2 → | + 3/2 spin flip transition energy. 
However, the transitions | − 3/2 → | − 1/2 and | − 3/2 → | + 1/2 have a positive offset corresponding to ∆ lh . We emphasize that our results cannot be attributed to such behaviour because: (i) the offset in Fig. 4(b) is negative and (ii) the magnitude of exchange energy ∆ pd < 100 µeV is significantly smaller than ∆ lh . Moreover, the magnetic field dependence of ∆ S (B) in CdTe QW structures without Co layer shows linear behavior which approaches zero when extrapolated to zero field, i.e. no offset is detected in this case. Therefore, the observation of SFRS on the acceptor bound hole corresponds to the spin-flip transition | − 3/2 → | + 3/2 and the offset is related to the heavy hole splitting in the effective exchange field from the FM. Transitions to the light hole states with J z = ±1/2 were not detected in the investigated samples which may be attributed to spectral broadening of the Raman line due to fluctuations of ∆ lh . The SFRS signal related to the heavy hole spin flip disappears when the exciting laser photon energy is increased and approaches the exciton resonance X (see the PL spectrum in Fig. 1(b)). In this case the spin flip of the electron dominates the SFRS spectrum, which is shown in Fig. 4(c) for ω exc = 1.615 eV. Figure 4(f) presents the magnetic field dependence of the Stokes shift for the electron spin-flip ∆ e S (B). The shift follows a linear dependence with the electron g factor |g e | = 1.58 and does not show any measurable offset [29]. This indicates that the effective s − d interaction between the conduction band electrons in the QW and the FM layer is negligibly small as compared with the p − d interaction of the QW heavy holes. We also note that we do not observe a SFRS signal related to the free heavy hole which is not bound to the acceptor. Its absence may be due to strong fluctuations of the free hole g factor leading to a significant broadening of the SFRS line. For detecting the spin splitting of the unbound heavy hole Ω h , we use a transient pump-probe technique as described below. IV. LARMOR SPIN PRECESSION OF VALENCE BAND HOLES Transient pump-probe Kerr rotation in the vicinity of the exciton resonance allows us to measure the frequency of the Larmor precession of electrons Ω e and holes Ω h in CdTe/(Cd,Mg)Te QWs [33]. Thereby circularly polarized pump pulses photoexcite carriers with optically oriented spin polarization parallel to the growth direction (z-axis). In a transverse magnetic field B x the subsequent spin precession leads to transient oscillations of the z-component, S z , of the spin polarization which is detected by the Kerr rotation of the linearly polarized probe beam when the delay between the pump and probe pulses t is varied. The electron and hole spins precess with different Larmor precession frequencies due to the difference in their g factors. The electron g factor in CdTe QW is close to isotropic, while the heavy hole one has a strong anisotropy. Our experiment requires an oblique magnetic field since the z-component of magnetic field has to induce a magnetization of the FM layer, while the x-component is required to observe the oscillatory precession signal. We stress that the pump-probe signal is observed in the studied FM-SC hybrid structures only when the excitation photon energy is tuned to the QW exciton resonance. This indicates that the experimental data monitor the spin dynamics of photoexcited carriers in the QW and not in the FM. 
Moreover, we get exclusively access to the Larmor precession of the conduction band electrons and valence band holes because an efficient optical orientation of the photoexcited carriers occurs only for resonant excitation of the excitons, whose oscillator strength is significantly larger than that of the excitons bound to acceptors. Figure 6 shows corresponding transient Kerr rotation signals in different magnetic fields. The inset shows schematically the geometry of the experiment where the magnetic field is tilted by an angle Θ = 70 • with respect to the z-axis. The transient signals comprise two contributions. The first one corresponds to a signal with high oscillation frequency and is attributed to the electron spin precession. The second contribution oscillates quite slowly and corresponds to the heavy hole spin dynamics with a small g factor. Each of these oscillatory signals is well described with A i cos(Ω i t + φ i ) which allows us to determine the magnetic field dependence of the Larmor precession frequencies Ω i for the electrons (i = e) and holes (i = h). The data are summarized in Fig. 7. For the holes, Ω h (B) dependencies are shown for Θ = 70 • at two different temperatures, 2 K and 12 K (Fig. 7(a)). At first glance the dependences appear to be linear across the whole range of magnetic fields with the corresponding g factor |g h | = 0.17 which weakly depends on temperature. However, a closer look shows that Ω h (B) shows small wiggles above about B ≈ 1.5 T. One possible explanation for the non-linear behavior of Ω h (B) is the exchange interaction of the heavy holes with magnetic ions in the FM layer whose magnetization slowly varies with magnetic field. However, even if this effect is present its magnitude is expected to be rather small. Therefore, we conclude that the valence band holes are weakly coupled to the FM layer which is in contrast to the strongly interacting holes bound to acceptors as demonstrated in Section III. The value of the heavy hole g factor is determined from the relation Taking g x ≈ 0 we obtain |g z | ≈ 0.5 thereby. This value is slightly larger than the g factor of the acceptor bound hole |g A | = 0.4 extracted from the SFRS data, which indicates that indeed the pump-probe signal addresses the spin dynamics of unbound, free valence band holes. The Larmor precession frequency of the electrons Ω e (B) depends linearly on magnetic field ( Fig. 7(b)), from which we evaluate the electron g factor to be |g e | = 1.31. The slight difference between the values obtained from pump-probe and SFRS is related to the anisotropy of the electron g factor [29]. Note that the magnetic field dependence of Ω e also does not show any offset. Thus, the electrons do not experience a s − d exchange interaction which is in accord with the SFRS data. V. DISCUSSION The main result of our study is the direct measurement of the exchange energy ∆ pd = 50 − 100 µeV for the effective p−d interaction between the magnetic ions in the FM layer and the holes bound to acceptors in the semiconductor QW, without involving any model. This energy splitting of the hole spin levels is in agreement with our previous estimates in Ref. 21, where ∆ pd ≈ 50 µeV was evaluated from polarization-and time-resolved PL measurements in weak longitudinal magnetic fields in which the interfacial FM layer resulted in a magnetization of the acceptor holes. 
Here, SFRS measurements have been performed in strong magnetic fields and, therefore, it is expected that an additional contribution from the Co layer to the exchange interaction is expected. This is because magnetic fields larger than 2 T are sufficient to saturate the out of plane magnetization of the Co film. However, in contrast to MCD the amplitude of the proximity effect A(B) in Fig. 2(b) increases linearly with magnetic field. Therefore, we conclude that the main contribution to the p−d exchange interaction comes from the interfacial FM. The origin of the interfacial magnetic layer requires further studies. Currently, it is reasonable to assume that its formation is caused by chemical reaction of Co atoms with the Cd 0.8 Mg 0.2 Te material. We observe no FM induced splitting of the spin levels of the valence band holes which are not bound to acceptors as well as of the conduction band electrons. The splitting of valence band holes has been evaluated from degenerate pump-probe Kerr rotation measurements under resonant excitation of excitons in the QW structure. This experiment differs significantly from SFRS which probes the acceptor bound holes under resonant excitation of excitons bound to neutral acceptors A 0 X. Figure 7(a) demonstrates that the magnetic field dependence Ω h (B) does not show a detectable offset and a deviation from a linear behavior. Thus, the pump-probe measurements clearly demonstrate that the exchange interaction between the valence band holes in the QW and the FM layer is small. The same result is obtained for the conduction band electrons where as well no offset in the magnetic field dependence of their Zeeman splitting is detected, both in SFRS and pump-probe. A further result obtained from SFRS is that the exchange energy ∆ pd does not decrease with increasing spacer thickness for d S ≤ 10 nm (see Fig. 4(e)). This is in accord with our previous studies in Ref. 21, where the suppression of PL intensity with decreasing d S gives a characteristic length of 1−2 nm for the wavefunction penetration into the Co-layer. This distance is much smaller than the spacer range of d S = 5 − 10 nm addressed in the present study. Also, the FM induced polarization of the PL depends only weakly on d S = 5 − 30 nm [21]. Therefore, we conclude that the effective p−d exchange interaction between the Co ions in the FM and the holes bound to acceptors in the QW is not determined by their wavefunction overlap. These results are surprising from the viewpoint of the standard theory of exchange interaction whose strength is proportional to this overlap [34,35]. Note, however, that this does not represent a contradiction because the exchange reported here is observed for holes bound to acceptors but is absent for conduction band electrons and valence band holes. In Ref. 21 we proposed that this kind of long-range interaction can be mediated by elliptically polarized acoustic phonons. The latter are strongly polarized in the vicinity of the magnon-phonon resonance in the FM [24]. In addition, phonons do not experience the electronic barrier between the QW and the FM layer. The characteristic frequencies of these elliptically polarized phonons (about 1 meV) are close to the energy splitting between the acceptor bound heavy | ± 3/2 and light | ± 1/2 holes (quasi-resonant case) and significantly smaller than the corresponding splitting between the confined valence band states in the QW with 10 nm width. 
For example, if the phonons are mainly σ + polarized (with positive z-projection of angular momentum) the interaction with the holes couples the ground state | − 3/2 with the excited state |− 1/2 which consequently leads to an energy shift of the levels. This results in lifting of the Kramers degeneracy of the | ± 3/2 doublet in zero external magnetic field which is the phonon analog of the optical ac Stark effect and the inverse Faraday effect which occurs in case of illumination with elliptically polarized light in transparency region. The optical Stark effect is a well established phenomenon in semiconductors [36]. It takes place when an electromagnetic wave with σ + polarization couples the electronic states with angular momentum projection −3/2 in the valence band and −1/2 state in the conduction band as shown in Fig. 8(a). Due to the interaction with light these states experience an energy shift ∆ ∝ P 2 /δ, where δ = E g − ω. Here, ω is the photon energy and E g is the energy gap of the semiconductor, P is the dipole matrix element of the optical transitions between valence and conduction bands. For photons with ω < E g repulsion between the electronic states takes place, i.e. ∆ > 0. Similarly, in the case of the phonon Stark effect [21], a circularly polarized phonon couples the heavy (hh) and light (lh) hole acceptor states with angular momentum projections −3/2 and −1/2, respectively (see Fig. 8(b)). The spin-phonon interaction for holes occurs due to the spin-orbit coupling of hole states in the valence band. In this case, the level shift is proportional to the square of the matrix element of the spinphonon interaction divided by the detuning of the phonon frequency at the magnon-phonon resonance in the FM relative to the energy separation between the heavy and light hole acceptor levels in the QW. In conclusion, our results are in agreement with the proposed model of an effective p − d exchange interaction mediated by elliptically polarized phonons. Here the energy splitting of the acceptor bound holes has been measured directly and amounts to ∆ pd = 50 − 100 µeV. This model explains the absence of a long-range s−d exchange interaction because the spin-orbit interaction in the conduction band is much smaller than the one in the valence band.
2017-08-17T13:49:13.000Z
2017-08-17T00:00:00.000
{ "year": 2017, "sha1": "3a47b853078134d137af7c27f40e3761f05ccf97", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1708.05268", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3a47b853078134d137af7c27f40e3761f05ccf97", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
260597515
pes2o/s2orc
v3-fos-license
Performance of xenogeneic pulmonary visceral pleura as bioprosthetic heart valve cusps in swine Objective Bovine pericardium is common biological material for bioprosthetic heart valve. There remains a significant need, however, to improve bioprosthetic valves for longer-term outcomes. This study aims to evaluate the chronic performance of bovine pulmonary visceral pleura (PVP) as bioprosthetic valve cusps. Methods The PVP was extracted from the bovine lung and fixed in 0.625% glutaraldehyde overnight at room temperature. The PVP valve cusps for the bioprosthetic valve were tailored using a laser cutter. Three leaflets were sewn onto a nitinol stent. Six PVP bioprosthetic valves were loaded into the test chamber of the heart valve tester to complete 100 million cycles. Six other PVP bioprosthetic valves were transcardially implanted to replace pulmonary artery valve of six pigs. Fluoroscopy and intracardiac echocardiography were used for in vivo assessments. Thrombosis, calcification, inflammation, and fibrosis were evaluated in the terminal study. Histologic analyses were used for evaluations of any degradation or calcification. Results All PVP bioprosthetic valves completed 100 million cycles without significant damage or tears. In vivo assessments showed bioprosthetic valve cusps open and coaptation at four months post-implant. No calcification and thrombotic deposits, inflammation, and fibrosis were observed in the heart or pulmonary artery. The histologic analyses showed complete and compact elastin and collagen fibers in the PVP valve cusps. Calcification-specific stains showed no calcific deposit in the PVP valve cusps. Conclusions The accelerated wear test demonstrates suitable mechanical strength of PVP cusps for heart valve. The swine model demonstrates that the PVP valve cusps are promising for valve replacement. Introduction Transcatheter pulmonary valve replacement (TPVR) is becoming the treatment of choice in most congenital heart disease (CHD) patients with degeneration of prior right ventricular outflow tract repair. Right ventricular outflow tract (RVOT) dysfunction is a common hemodynamic challenge for children and adults with CHD, including patients with repaired tetralogy of Fallot (TOF), truncus arteriosus, and those who have undergone the Ross procedure for congenital aortic stenosis and the Rastelli repair for transposition of great vessels (1)(2)(3)(4). Recent advances in surgical techniques and perioperative care have dramatically improved the long-term outcome of CHD. Prior to the ground-breaking contribution of Dr. Bonhoeffer in the year 2000, open-heart surgery was the only modality to address RVOT dysfunction (5). The technical challenges of repeat redo cardiac surgery and the risk of myocardial injury related to repeated cardiopulmonary bypass adds to the complexity of the underlying CHD, which necessitated the search for an alternative approach. Since the life expectancy of these patients is improving, there is an increased demand for these procedures. Clinicians are now faced with a continuously growing population of adult patients with CHD, where most will require re-intervention in adulthood (6)(7)(8). Although transcatheter intervention to address RVOT obstruction, utilizing balloon angioplasty or stent implantation, provided relief of the RVOT obstruction, it came at the expense of pulmonary regurgitation with long-term detrimental effects (9). 
The introduction of TPVR serves as an alternative to address the stenosis and regurgitation in the same setting. Currently, pulmonary valve replacement is one of the most common procedures performed for adult CHD patients (10). Cuspal calcification and degeneration, however, are major risks in pulmonary valve replacement (especially in younger patients) (11-13). Calcification of bioprosthetic heart valves in recipient patients causes deterioration of valvular function and eventually requires reoperation. Structural valve failure is caused by calcification which is histologically evident within three years of valve implantation (14). Mechanical stresses, including mounting the valve on various catheters and distortion of the valve or incomplete valve expansion, have been identified as risk factors for early valve failure. Calcification of bioprosthetic valves can be inhibited by reducing functional stresses through the modification of design and tissue properties (15). In this regard, our computational simulations showed that the mechanical stresses in elastic cusps are significantly smaller than those of more rigid cusps, where the PVP and bovine pericardium were used as the comparative cusps biomaterials (i.e., stress-strain relation of the two different materials were used), respectively (16). Hence, PVP cusps in bioprosthetic valve may mitigate calcification through stress reduction. Furthermore, our previous study demonstrated that the PVP is composed of abundant elastin fiber (17). It is well known that the elastin fibers are extensively covalent cross-linked (18), which enables PVP cusps to resist degradation and hence increase the longevity. Additionally, PVP vascular grafts/patches demonstrated very low thrombogenicity and low-inflammation in rodent and large animal (swine/canine) models (19)(20)(21). These observations formed the basis for the calcification mitigation hypothesis explored in the present study. Here, we used glutaraldehyde-crosslinked PVP, to serve as the cusps of a bioprosthetic pulmonary valve in a swine model. The mechanical durability of the bioprosthetic valve was evaluated in accelerated fatigue testing. The bioprosthetic valves were implanted in the pulmonary outflow tract to replace a native pulmonary valve in the swine model for evaluation of the biocompatibility of the PVP bioprosthetic valve. The efficacy of the PVP bioprosthetic valve was postoperatively evaluated. The calcification, inflammation, and fibrosis of the PVP valve cusps were analyzed four months post-implant. Preparation of PVP valve cusps The bovine lungs were obtained from Sierra for Medical Science (Whittier, CA 90607). The bovine PVP was separately extracted from the lungs with the aid of pressurized phosphate-buffered saline pumped into the interstitial space between the lung tissue and PVP. The PVP tissues were laid flat and rinsed with 4°C saline with 1% protease inhibitors (PMSF, phenylmethylsulfonyl fluoride) five times. The bovine PVP was fixed in 0.625% buffered glutaraldehyde (pH 7.4) overnight at room temperature to crosslink the proteins and diminish immune rejection. The PVP tissues were then stored in 0.25% buffered glutaraldehyde (pH 7.4) until valve construction. The thickness of PVP was measured using an electronic thickness gauge (Model 547-561S, Neoteck). Every piece of PVP tissue with dimensions of 9 (length) × 4 (width) cm was tailored for three valve cusps using a laser cutter. 
Three valve cusps with uniform thickness (±0.01 mm) were selected to assemble one bioprosthetic valve. Valve assembly and accelerated fatigue test Three valve cusps were sewn onto the FoldaValve nitinol (selfexpandable) stent (25 mm diameter) with a 6-0 suture (22). The Heart Valve Tester (Dynatek Labs M6 tester SN M6-102281) was used for the accelerated fatigue test of the PVP bioprosthetic valve (23). Six valves were loaded into the valve test chamber of the tester. The tester was filled with normal saline (0.90% w/v of NaCl) at 37⁰C. The tester was run with a system pressure of 120/ 80 mmHg and 800 cycles/min until 100 million cycles. The leaflet coaptation was observed for every valve. After completing 100M cycles, the valves were removed from the Heart Valve Tester. The valve cusps of each PVP bioprosthetic valve were examined visually under a microscope. Implant experiment All animal experiments were performed in accordance with national and local ethical guidelines, including the Principles of Lu et al. 10.3389/fcvm.2023.1213398 Frontiers in Cardiovascular Medicine Laboratory Animal Care, the Guide for the Care and Use of Laboratory Animals and the National Society for Medical Research, and an approved California Medical Innovations Institute IACUC protocol regarding the use of animals in research. Six domestic pigs (55 ± 5 kg) were used in the study. Animals were obtained from a certified vendor. The animals fasted for twelve hours before surgery. Appropriate aseptic techniques were followed for the survival surgery, including thorough scrubbing and wearing sterile garments. Intramural injections of TKX (4.4 mg/kg), consisting of a mixture of telazol (50 mg/ml), ketamine (25 mg/ml), and xylazine (25 mg/ml), were provided sedation. All animals were intubated and ventilated via a mechanical respirator with general anesthesia maintained via 1%-2% isoflurane and oxygen. The animals were monitored continuously for a surgical level of anesthesia. Joint tone, movement, blood pressure, and heart rate were all used to ensure a suitable surgical plane. Vital signs, including ECG, were monitored continuously throughout the procedures. A heating pad was used to maintain the body temperature of the animal. An intravenous (IV) line was placed percutaneously in the femoral vein to administer fluids and drugs. An isotonic saline drip was administered via a peripheral venous line (300 ml/h) to prevent dehydration. Heparin (∼100-200 IU/kg) was administered to achieve an activated clotting time of >200 s. Lidocaine (4 mg/kg), magnesium (20-50 mg/kg), and amiodarone (150 mg IV bolus) were administered to prevent arrhythmia deployment of the prosthetic valve. The animal was placed in dorsal recumbency. The hair over the chest was clipped and cleaned. Baseline intracardiac echocardiography (ICE) measurement was performed. The animal was covered with sterile surgical drapes. Baseline angiography and cardiac pressure measurements were performed. The chest was opened through a midline sternotomy. As for heart exposure, an appropriate size of the sheath was placed from the right ventricular apex. The delivery system (transventricular catheter) for the bioprosthetic valve was advanced over a guidewire positioned in the pulmonary artery and positioned for deployment at the junction of the RVOT and pulmonary artery. The bioprosthetic valve was carefully deployed over the native pulmonary valve at the outflow tract. The incision at the right ventricular apex was closed by suture continuously. 
The sternum was closed with four or five stainless steel sutures. The muscle layer and subcutaneous tissue was closed with an absorbable suture, while the negative chest pressure was restored through a chest tube. The skin was closed with surgical staples. Fluoroscopy was used to evaluate whether the PVP bioprosthetic valves migrated from the RVOT and pulmonary artery junction. ICE was used to assess the open and coaptation of the PVP bioprosthetic valves. Clopidogrel (75 mg/day) and Aspirin (325 mg/day) were administered orally for survival durations as the general postoperative treatment of vascular surgery to prevent blood clots. On the terminal day, the six animals were anesthetized and heparinized. Fluoroscopy and ICE were performed to assess the position and function of the PVP bioprosthetic valves. The animals were euthanized. The chest was re-opened to expose the heart. Visual assessment of fibrosis or inflammation was completed. The animal was euthanized. The heart was excised for visual assessment of the PVP bioprosthetic valve in the pulmonary artery outflow tract. The PVP bioprosthetic valve was amputated from the adjacent aortic wall for examination with the aid of stereomicroscope. After fixation with 4% paraformaldehyde, the cusps with the entire tissue complex were carefully dissected from the stent. A segment on circumferential plane and a segment on axial plane in each cusp were sliced for histologic analyses. Statistics Average and standard deviation are reported for the various measurement parameters. Results We harvested approximately 400 cm 2 PVP with a uniform thickness from each lung set. The thickness of PVP varied due to the different age/weight of animals and regions at the lung surface. Generally, the thickness of bovine PVP ranged from 110 to 280 μm. Although it is thinner than bovine pericardium, the PVP can be handled and sewn for leaflets and skirts of bioprosthetic valves. Four examples of bioprosthetic valves with diverse valve cusps thicknesses (0.17 to 0.26 mm) are presented in Figures 1A-B and 1F-G. All three valve cusps in one PVP bioprosthetic valve are symmetric with coaptation at ∼3 cm hydraulic pressure. Six PVP bioprosthetic valves with valve cusp thicknesses of 0.17 ± 0.01 mm (n = 2), 0.22 ± 0.015 mm (n = 2), and 0.25 ± 0.017 mm (n = 2), respectively, were collected for accelerated wear/fatigue tests. The prosthetic valves were mounted in the 6-chambers in the Dynatek Labs M6 tester, respectively. In the Heart Valve Tester, normal valve cusps coaptation was observed. All six PVP bioprosthetic valves completed 100 million cycles. All valve cusps had opened and had coaptation at the end of 100 million cycles ( Figure 1C). No significant tears were observed for the valve cusps of each PVP bioprosthetic valve that were examined visually under a microscope ( Figures 1D-E and 1H-I). Six PVP bioprosthetic valves with cusps thicknesses of 0.17 ± 0.01 mm (n = 3) and 0.22 ± 0.015 mm (n = 3) were implanted in six pigs, respectively. While the PVP bioprosthetic valves were implanted at the junction of RVOT and pulmonary artery in pigs, no complications were observed in any animal during the postoperative period. In the terminal study, we did not observe any migration of the PVP bioprosthetic valves in the junction of RVOT and pulmonary artery until postoperative four months (Figure 2A). No right ventricular dilation was observed in fluoroscopy. The valve cusps opening and coaptation were observed by ICE ( Figures 2B,C). 
In the post-mortem examination, we did not observe any thrombotic deposit, inflammation, or fibrosis in the heart and pulmonary artery ( Figure 3A). Further dissection to expose the PVP bioprosthetic valves showed no thrombotic deposit, inflammation, or fibrosis on the valve cusps and skirt ( Figure 3B). When the PVP bioprosthetic valves were isolated, we did not observe any calcific deposit on the valve cusps, valve skirt, or aortic wall ( Figure 3C). In histologic analyses, all tissue slides were carefully reviewed in accordance with a pathologist's instruction. The structure of valve cusps remained intact and functional. Some examples are shown in Figures 4A-F. The valve cusps of PVP bioprosthetic valves did not show thickening for the four-month duration (Figures 4A-F). We did not observe any tearing or degradation in the valve cusps of PVP bioprosthetic valves ( Figures 4B,C,E,F). The collagen fibers (blue) in the valve cusps were well integrated for the four-month duration ( Figures 4C,F). A few host cell migrations (Red) were observed within the valve cusps ( Figures 4C,F). The calcific deposit should be represented black/dark grey in the von Kossa stain ( Figures 5A,E) or dark red in the Alizarin Red stain ( Figures 5B,F). We did not observe any black spot or dark red plaque in the valve cusps ( Figures 5A,B,E,F) for the four-month duration. The iron deposit was observed in 1 of 6 PVP bioprosthetic valvular implants ( Figures 5C,G). The iron deposit was mainly observed in the external region of the PVP cusps. The lipid deposit was observed in 1 of 6 PVP bioprosthetic valvular implants (Figures 5D,H). In the one implant where lipid deposition was observed, it was only seen in the external region of the PVP cusps. Using immunofluorescence microscopy, we examined the integration and compaction of elastin and collagen fibers in the valve cusps of PVP bioprosthetic valves. The elastin fibers in the valve cusps remained intact and robust during the four-month period ( Figures 6A-D). The collagen fibers in the valve cusps were also compact and continuous for the four-month duration ( Figures 6E-H). The MMP9 expression was observed in 3 of 6 PVP bioprosthetic valvular implants ( Figures 7A-D). The MMP9 expression was observed in external regions of PVP cusps in 2 bioprosthetic valvular implant Figure 7H). Discussion This is the first study using bovine PVP valve cusps as bioprosthetic heart valves in large animal models. The implanted PVP bioprosthetic valves remained at the junction of RVOT and pulmonary artery of pigs without migration for the 4-month period. There were no signs of calcification and degradation in the PVP pulmonary bioprosthetic valve. No thrombotic deposit, fibrosis, or inflammation were observed in PVP valve cusps of bioprosthetic valves in gross examination or histologic analyses. Histologic and immunofluorescence microscopic analyses did not reveal any collagen and elastin fibers degradation in the PVP valve cusps. Hemodynamically significant RVOT dysfunction (regurgitation due to valvular dysfunction) is commonly encountered in adulthood in patients who have undergone previous surgical repair for several conditions, including TOF, pulmonary atresia with ventricular septal defect, congenital pulmonary stenosis, truncus arteriosus, previous Ross procedure for congenital aortic stenosis, and Rastelli repair for transposition of great vessels. Pulmonary valve replacement has become one of the most common procedures for pediatric and CHD patients (4). 
Surgical pulmonary valvular replacement (SPVR) still remains the gold standard for patients with congenital heart diseases (25). TPVR can be a reliable and safe alternative to SPVR in patients that have undergone prior surgeries for congenital heart disease (25). Compared to SPVR, TPVR was associated with a significant reduction in risk for all-cause mortality at the longest available follow-up, recurrent pulmonary regurgitation, and thirty-day hospitalization, while the risk for post-procedural infective endocarditis was significantly higher (25). Improvements for TVPR include features such as a lower introducer profile (currently, delivery systems are 16-24 Fr size), low inflammatory response, no infection, long durability, low opening resistance with maximal valve area, fast and reliable closure, and non-thrombogenicity. The thickness of bovine PVP ranges from 110 to 280 μm which is significantly smaller than that of the bovine pericardium (>200 μm). It is known that approximately 40% of the bulk size of the trans-catheter valve stems from the valve cusps; i.e., the delivery system can be reduced accordingly when the tissue of valve cusps is thinner. Therefore, the PVP can substantially reduce the profile of the delivery system. Thinner PVP valve cusps would also reduce the tissue's degree of crimping, which Frontiers in Cardiovascular Medicine may cause damage and hence potential failure (tearing and calcification). Although it is thinner than bovine pericardium, we demonstrate in the study that the PVP valve cusps have no tear or degradation after 100 M cycles in an accelerated fatigue/wear test. We have demonstrated in the previous investigation that the PVP graft has similar burst pressure to the artery (19). Therefore, the mechanical strength of the PVP is suitable for the valve cusps of a bioprosthetic heart valve. Furthermore, we have also demonstrated in previous studies excellent biocompatibility and non-thrombogenicity of PVP vascular graft and patch in animal models (19)(20)(21). The present study underscores the excellent biocompatibility and non-thrombogenic property of the PVP bioprosthetic pulmonary valve in a large animal model. The PVP bioprosthetic valve exhibited excellent resistance to calcification and inflammation in the study, which is consistent with our previous studies (19)(20)(21). This suggests the potential for improved long-term durability than the current bioprosthetic valves using bovine pericardium. It is known that the collagen and elastin debris in valvular prosthesis due to degradation can induce calcification. The high mechanical stress in the valvular prosthesis is one of the causes of the degradation of collagen and elastin fibers. Our previous studies show that the PVP contains abundant elastin, and the ratio of elastin to collagen is about 1:1 (17). In contrast, the pericardium and peritoneum have a collagen to elastin ratio > 40.0:1 (26). Elastin and collagen are the major extracellular matrix proteins (27)(28)(29). Elastin is a potent autocrine regulator of vascular smooth muscle cell activity and inducer of actin stress fiber organization. Elastin also regulates myofibroblasts activity and promotes quiescent fibroblasts (convert from genotype to phenotype state) (30)(31)(32)(33), which may balance the proliferation on the PVP valve cusps. Elastin largely retains its elasticity after chemical/physical treatments to mitigate immune rejection (34). 
Our simulation shows that the elasticity due to elastin may reduce the stress in the PVP valve cusps of the bioprosthetic valve in the heartbeat cycle (16). In the histologic analyses of postmortem, collagen and elastin fibers were intact in the PVP valve cusps of the bioprosthetic valve in a large animal study for four months (Figures 4, 6). This correlates with the lower stresses induced in the leaflets due to the higher elasticity of the PVP. Furthermore, the minor MMP-9 expression in the adjacent tissue of the PVP cusps ( Figures 7A-D) also supports that there was little degradation of collagen and elastin in the PVP cusps. It is known that lipid deposit is a risk factor of degradation for bioprosthetic valves (35, 36). The histological analysis showed a minor lipid deposit in adjacent tissue of the PVP cusps ( Figures 5D,H), which suggests that lipid induced enzymatic precipitation and degradation are not implicated in the calcification and degradation of PVP cusps. Bioprosthetic valve thrombosis (BPVT) is a major cause of bioprosthetic valve degeneration and often has an elusive presentation causing delayed recognition and treatment (37). BPVT is a recognized complication of prosthetic aortic valves and can be found in up to 13% of patients after transcatheter implantation (38). BPVT may result in valve dysfunction, possibly related to degeneration and recurrence of patient symptoms, or remain subclinical (34). Recent reports have suggested a high incidence of subclinical cusps thrombosis following bioprosthetic aortic valve replacement (25,39,40). In previous studies, we demonstrated the non-thrombogenicity of the PVP as a vascular graft and patch of artery and vein (20,21). In histological analyses, iron deposit, product of hemoglobin degradation, was found in only one of six PVP bioprosthetic valves, and the iron deposit was located at the boundary between the PVP cusps and adjacent tissue ( Figures 5C,G), which suggests a minor thrombosis at the surface of the PVP cusps. Minor fibrin expression in the adjacent tissue of the PVP cusps ( Figures 7E,F) also indicates that there was no intravalvular hemorrhage. The thrombosis resistance of the PVP valve cusps of the bioprosthetic valve is verified in a large animal model. Therefore, the non-thrombogenic PVP valve cusps may mitigate the complications of BPVT to enhance the longevity of the bioprosthetic valve. The longevity of bioprosthetic heart valve, however, is a major hurdle in the clinic. Calcification and degeneration significantly decrease the longevity of bioprosthetic valves especially in younger patients. The major hurdle that remains is translation of our current animal studies to patients where significant co-morbidities in patients may play a role in the outcome, i.e., no animal model truly recapitulates the human conditions. Study limitations In this study, we did not include a control group of bioprosthetic valves as the cost of the valves was beyond our budget. We refer to historical observations in the literature on pericardium cusps in bioprosthetic heat valve, which show significant calcification in the glutaraldehyde-fixed pericardial cusps at 3-month implants in sheep/pig model (41,42). The post-implantation for 4 months in this study that shows no calcification which is a major milestone. 
Despite the lack of control, experimental and clinical literature have clearly demonstrated the propensity to calcification and tissue failure under fixation which is the standard of care clinically to eliminate the immune response. Therefore, despite the lack of control group, our finding of no-calcification of glutaraldehyde fixed tissue in a 4-month duration is very significant and warrants future clinical investigations. Glutaraldehyde-fixation is one of risk factors of calcification in bioprosthetic heart valve (35, 36). Various processes for biological tissue, such as decellularization, different crosslink agents, and tissue engineering technology, have been developed to mitigate immune rejection, inflammation, fibrosis, etc (43,44). To compare with the literature on pericardium cusps in bioprosthetic valve, glutaraldehyde fixation was used in this study. The updated technology for processing PVP biomaterial will be investigated in future. Although the PVP bioprosthetic valves were delivered into the junction of RVOT and pulmonary artery using a catheter system, we did not achieve transfemoral vein delivery. The thinner cusps do reduce the profile of the catheter in the delivery system, however, which provides the opportunity to develop a 12 Fr Lu et al. 10.3389/fcvm.2023.1213398 catheter in the delivery system using a transfemoral vein. The observations in this study (e.g., no thrombotic deposit, no inflammation, no calcific deposit, etc.) are still applicable regardless of the delivery route. Clinical perspectives The PVP valve cusps satisfy the basic mechanical strength requirements for a bioprosthetic heart valve, despite being thinner than the bovine pericardium. The implantation of PVP bioprosthetic valves in RVOT demonstrates PVP valve cusps' resistance to thrombotic deposits, inflammation, fibrosis, and calcification. Therefore, the PVP tissue is a very promising biological material to serve as the valve cusps of bioprosthetic valves for heart valvular replacement. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author. Ethics statement The animal study was reviewed and approved by California Medical Innovations Institute IACUC. Author contributions XL: contributed to conception, design of the study, statistical analysis, and writing of the manuscript. GK: contributed to experiments and writing section of the manuscript. MW: contributed to experiments. XG: contributed to experiments. GSK: contributed to concept, design of the study, and writing of the manuscript. All authors contributed to the article and approved the submitted version. Funding The research has been supported in part by 3DT Holdings and NIH Grant R43HL149455. 3DT Holdings was not involved in the study design, collection, analysis, interpretation of data, the writing of this article, or the decision to submit it for publication.
2023-08-06T15:22:36.492Z
2023-08-02T00:00:00.000
{ "year": 2023, "sha1": "03efb7eefbeccbe29a007d51e507d934c43dcd06", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2023.1213398/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6b3b5cfad210c39d71285b6eca66dba335fdb7aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219575102
pes2o/s2orc
v3-fos-license
Abstracts DGCH s – DGCH Annual Congress 2017 – Munich, March 21–24.  DOI 10.1515/iss-2017-2002 s169 Innov Surg Sci 2017; 2, (Suppl 1): s169–s216 © The Author(s) 2017, published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 License. Open Access From Axolotl to AmbLOXe – Transferring amphibian regeneration to mammalian wound healing (Abstract ID: 57) S. Strauß 1 , A. Stamm 2 , C. Liebsch 1 , I. Pepelanova 2 , T. Scheper 2 , P. M. Vogt 1 1 Medizinische Hochschule Hannover, Hannover 2 Leibniz Universität Hannover, Hannover Open Access From Axolotl to AmbLOXe -Transferring amphibian regeneration to mammalian wound healing In contrast to mammals, caudates like the Mexican axolotl (Ambystoma mexicanum) ( Figure 1) are able to regenerate complex body structures such as limbs or even whole organs. This ability has turned the axolotl into a famous model organism for regeneration research. In the case of severe injury or amputation, hemostasis proceeds within seconds, followed by the induction of processes leading to complete regeneration of the lost tissue. AmbLOXe, an epidermal lipoxygenase, was identified to play a role in regeneration processes of the axolotl. Several studies have shown a significant influence of AmbLOXe gene expression in mammalian cells as well. During in vitro experiments, AmbLOXe-transfected cells migrated faster and therefore closed wound gaps sooner compared to controls expressing a human lipoxygenase. A proofof-concept trial in vivo showed similar effects of skin wound treatment with AmbLOXe-expressing cells. For clinical application, it is of interest to have access to sufficient quantities of biologically active AmbLOXe enzyme. Therefore, a strategy for the production and isolation of the enzyme from an E. coli system was devised. Materials and methods: Human keratinocytes and U2-OS cells were transfected with pIRES-EGFP vector (Clontech) containing the genetic sequence of AmbLOXe for a transient expression. The influence on migration in conditioned culture media was also tested by scratch assays. For first trials of protein isolation, E. coli BL21 (D3) (Novagen) were transformed with pET 41 Ek/LIC_AmbLOXe by the heat shock method. This expression system produced only minimal amounts of soluble protein which proceeded to lose activity within 24h of production. As an alternative method, E. coli was transformed with pET28b_His-TEV-AmbLOXe3. In this case, large amounts of AmbLOXe could be produced in inclusion bodies. These inclusion bodies serve as nanopills, which allow immobilization of AmbLOXe on cell culture surfaces, as well as gradual release of AmbLOXe into the culture medium. Wound healing assays (scratch assay and electric cell impedance sensing) were performed with cells seeded onto the immobilized nanopill layer. Musculosceletal diseases and injuries are the most common cause of long term pain and physical disability. Their influence on health and quality of life will clearly increase in the future due to the aging population. Therefore, every medical graduate irrespective of his future specialty should be able to perform a structured, orienting examination of the musculoskeletal system. The instructional approach used to teach skills has substantial influence on what is memorized and what will become part of a doctor's regular examination repertoire. However, the best didactical method still needs to undergo further investigation. Being established in many medical disciplines (e.g. 
urology and gynecology), teaching associates have proven to be successful in imparting knowledge of various examination methods. Teaching associates are trained simulation patients, giving immediate feedback to the students, learning the method. The aim of the present study is the comparative analysis of the efficiency of three different teaching methods for shoulder and knee examination. Materials and methods: Study participants were fourth year medical Students completing a 210-minute training module in knee and shoulder examination during their three weeks of obligatory surgical training. Students allotted to group one, examined each other under professional supervision. whereas students in group two examined the teaching associates and in group three students are first examined by a professional tutor followed by mutual examinations under supervision. The training module was ruled by a professional tutor in every group. The theoretical backgrounds were illustrated by a standardized power-point-presentation. After the explanation and demonstration by the tutor, the groups had a 50-minute practice time to learn every examination. After the training module the acquired competence in shoulder and knee examination was assessed in a 5-minute OSCE station each. Conclusion: The use of Teaching Associates in shoulder and knee examination improves the acquired competence significantly. The influence of the different instructional approaches on longterm retention will be measured in two more point in time and the results presented on the congress. Receiving constructive feedback can significantly improve future performance. Furthermore, reviewing one's performance by video seems to be a useful adjunct. This study investigates the impact of video feedback on the acquisition of practical surgical skills. Materials and methods: Fourth-year medical students completed a structured training of practical skills as part of their mandatory rotation in surgery. All students received the same training of practical skills. However, for the feedback of their performance of wound management and bedside test, students were assigned to one of four study groups: expert video feedback (receiving feedback by an expert after reviewing the recorded performance), peer video feedback (receiving feedback by a peer student after reviewing the recorded performance), standard video (giving feedback to a standardized video of the skill), oral feedback (receiving feedback by an expert without a video record). Afterwards, students completed two OSCE stations, where they were assessed regarding their acquired competencies. Results: A total of 199 students (48 expert video feedback, 50 peer video feedback, 52 standard video, 49 oral feedback) were included in the study. There were no significant differences between the four groups in the OSCE directly after the training, neither in the checklist rating, nor in the global rating. Conclusion: Students' performance does not differ directly after training as a function of the type of feedback. The longterm effect of the different feedback types will be measured in two more point in time and the results presented on the congress. The Russian Society for Simulation Education in Medicine, ROSOMED [rossomed] in cooperation with the Russian Society of Endosurgeons created in 2015 the BESTA program (Basic Endosurgical Simulation Training and Assessment). Objective metrics of proficiency were determined for each of 10 practical exercise. 
The learning curve for each task demonstrates 20 to 50 trials -up to 500 attempts totally. The feed-back is essential for education of adults, but in such a case it requires substantial amount of human resources. Our aim was to create a system for computer analysis of the video to provide automated evaluation of proficiency in a real time. Materials and methods: The working group has analysed international training programs such as TopGun, Yales, MISTELS, FLS, PLUS, E-BLUS, SUTT, LASTT and numerous national endosurgical training programs. 10 exercises have been selected both for the training and assessment. Among them 5 tasks were adopted from the course FLS* (Fundamentals of Laparoscopic Surgery), one task -from E-BLUS** and 4 tasks were developed originally by the working group. The standard FLS training equipment was used: commercially available Lap trainer with HD-camera connected via USB to a notebook. The standard 10 mm 300 laparoscope with attached digital HD camera was used in the tasks 1 and 3. The standard laparoscopic instruments were used in all tasks without any modification. Standard and original training devices were used in the tasks. The original software was developed for tracking of the instruments movements, event recognitions and accuracy determination. Results: All 10 tasks have been preliminary evaluated for the possibility of the computer video analysis. Eight of them have been determined to be suitable for automated recognition of the several objective metrics, such as: 1) automated timing count by start and finish of the exercises, 2) measurement of the ambidexterity ratio, 3) instruments path and 4) velocity. The tasks 1 and 3 were not analysed yet, as at the tasks a movable camera attached to laparoscope is used. The following events recognition and accuracy criteria can be automatically obtained: correct transfer of triangles at the task 2 (Peg transfer); correct and precise cutting alongside the marked circle in the task 4 (Precision Cutting); precise suture placement through the marks and number of throws in the tasks 7, 9 И 10 (Extracorporeal suture, Intracorporeal knot and continuous sutures); precise placement of the loop to the marking 8 (Endo-Loop). Detailed results can be reported at the DGCH conference. Automated computer analysis of the real time video of the tasks can be performed. Duration of the tasks, ambidexterity ratio, instruments path and velocity can be determined automatically without modifying the training equipment or endosurgical instruments. The proper or incorrect performance of the tasks and proficiency criteria can be determined as well. That allows to use the computer analysis of BESTA (Basic Endosurgical Simulation Training and Assessment) in a teacher-free environment as a part of endosurgical training. Laparoscopic surgery requires extensive training of surgical residents in order to ensure safety in a clinical setting. Thus, preclinical training curricula have been developed and it has been shown that skills obtained in a preclinical setting can be transferred to the operating room. Here, animal training offers a very realistic training environment, but comes with high costs and ethical problems. Phantom models on the other hand often don't support training of several steps in a procedure, but focus on single tasks in an artificial environment. To overcome this limitation we developed a phantom for simulating rectal resection based on the Open Heidelberg Laparoscopy Phantom (OpenHELP). 
Materials and methods: The procedure of laparoscopic rectal resection for rectal cancer was analyzed and a model of the different steps was created. For each step a certain task was defined that could be reproduced in a phantom model and then was realized using organs made from silicone and peritoneum made from latex sheets. The final phantom model was used by a single surgeon to test its applicability for laparoscopic training and as a model for research on surgical robotics. Here, the whole procedure was performed several times (n=20) and task time for the single steps as well as for the whole operation was recorded. Results: The procedure of laparoscopic rectal resection was divided into 13 steps: mobilization of sigmoid, mobilization of descending colon, mobilization of splenic flexure, inspection of colon, lancing of retroperitoneum, delineating vessels, division of artery, division of vein, opening of lesser pelvic peritoneum, dissection of rectum, transect rectum, salvage rectum and finally visual inspection of lesser pelvis. The duration of the whole procedure decreased from 71:06 minutes for the first operation to 19:40 minutes for the 20th operation. Here, opening of the lesser pelvic peritoneum was the longest step, duration decreasing from 11:45 min to 4:22 min, and visual inspection of the lesser pelvis was the shortest step, duration decreasing from 0:52 minutes to 0:28 minutes. Conclusion: We developed a phantom modelt hat mimics several steps of a complex laparoscopic procedure, the resection of the rectum for rectal cancer. This phantom models also includes different quadrants of the abdomen to operate in as well as handling of organs as different as colon, spleen and inferior mesenteric vessels. Feasibility of training in this phantom model was shown. Further studies have to include more participants in order to test validity of the model. Simulator-based training improves the operative performance in endoscopic surgery, but it is timeconsuming and can be costly. Therefore the training-sessions must be conducted most efficiently. Mental imagery training is widely used by athletes and pilots. To support the motoric process in complicated movement patterns, this method can also be used by surgeons. We conducted a randomized, controlled study to test the effect of additive mental imagery training on the learning curve of a standardized simulator-based laparoscopic skills training. Materials and methods: Medical students without previous laparoscopy-experience were randomized into two groups. 4 tasks of the "fundamentals of laparoscopic surgery" curriculum were trained on box-trainers in a standardized set of 5 sessions. One group received 4 sessions of an additive mental imagery training (MITG), the control-group (CG) received no additive training. Multiple efficacy and accuracy parameters were recorded to quantify performance before, during and after the training. For data analysis we compared established performance scores and the learning-curves between the groups. Results: 48 participants were included and completed the study. The performance in both groups improved significantly in all 4 tasks. The MITG achieved proficiency faster in the tasks "PEG-transfer" and "pattern-cutting" (No. of trainings needed to achieve proficiency: CG 0.54±0.51; MITG 0.09±0.29; p=0.008 and CG 1.88±1.23; MITG 1.08±1.16, p=0.018 respectively). Variance-analysis showed a significantly steeper learning curve for the MITG in the task "pattern-cutting" (effect-size 0.24; p<0.05). 
The overall performance of the MITG tended to be better also in the tasks "ligating-loop" and "intracorporal-suture" but variance-analysis showed no significant correlation. Conclusion: With this study we prove the non-inferiority of additive mental imagery training for simulator-based laparoscopy education and saw a significant beneficial effect on the learning curves in two out of four training-tasks. Mental imagery training is an ubiquitary disposable, not time-consuming and affordable technique which should be integrated into the surgical education. Further studies are needed to specify the value and the mode of deliverance of mental imagery training in surgical education. Sportpraxis Professor Knobloch, Hannover Background: Academic medical degrees represent important steps in the career planning of many medical doctors. However, limited resources and changing job prospects for young academic personnel as well as changed social and educational structures increasingly question these qualifications for many years. Due to different regulations in every federal state of Germany there is a known unconformity in prerequisites and regulations for academic degrees at German medical faculties throughout the country. The aim of the presented study was to compare and discuss these differences in order to determine whether reaching an academic medical degree is a simple business or not. Materials and methods: This report is the first part of the KARiMED-study (career in medicine; www.karrierestudie.de). An analysis of all regulations for a medical doctorate ("Dr. med."), postdoctoral lecture qualification ("Habilitation"), and associate professor ("außerplanmäßiger Professor"; APL) from all German medical faculties (n=37) were carried out, according to different primary outcome measures and an established scoring system. Results: The average total score of doctoral regulations was 57.2±9.5 points out of 100 scoring points. Three faculties reached the highest scorings as given by 72-85 points. Only five faculties achieved low scores of 42-45 points. While certain aspects have been defined in all regulations (written thesis, review process, examination requirements and grading of thesis) some items such as the introduction into good clinical practice, the knowledge of methodology as well as the check for plagiarism only seemed to be minor. The overall scoring for habilitation regulations was 21.9±4.0 points (range 12-28; 95% confidence interval (CI) 20.6-23.3) out of 34 scoring points. The habilitation scoring increased significantly in a 12-year comparison from 15.2±5.1 points (p<0.001; 95% CI 13.6-16.9). This rise was mainly due to increased requirements in terms of publication activity, teaching and mandatory board certification. Furthermore, the narrower 95% CI, showed some standardization of the habilitation requirements at German medical schools. The scoring for the APL-requirements was 13.5±3.7 out of 20 points (range 5-19). Sufficient performance in teaching and research with adequate scientific publication was mandatory in more than 88%. Furthermore, 83% of the faculties expected an expert review of the candidate's performance. Conference activities as well as the reduction of the minimum time as an assistant professor appeared to be secondary. Conclusion: The academic medical career is still of high importance in Germany. If it is used for a formal scientificacademic or for a personal-occupational career does only play a minor role. 
The regulations for academic degrees at German medical faculties, however, show exceeding heterogeneity with highly location bound requirements. Furthermore, the criteria by which the respective work will be reviewed and evaluated are intransparent and often poorly defined. These differences counteract structured transparency and career planning and therefore antagonize equivalent national as well as international opportunities. Taken together, there is a need for substantial changes in form and content of the regulations. Improvements must also be done for better international comparability and visibility. Standardized federal regulations for academic medical degrees might help to increase transparency and would establish scheduled career paths for young motivated academics throughout the country. Background: After the introduction of the Bologna reforms in Europe the educational landscape in Germany has changed significantly. In addition, the social structures have changed, thus the claims of employees from the Generation Y (born between 1980-1999) to the work-life-balance were much more important and represent a key element in the career planning of young medical professionals. However, receiving a doctors degree (Dr. med.) still represent an important step in Germany, while the habilitation thesis is significantly less important. This is underscored by risky job positions in science and research and unknown perspectives for higher academic positions. Materials and methods: As part of the 4th arm of the KARiMED-study (career in medicine; www.karrierestudie.de) we conducted an online survey inviting the medical mid-level faculty in Germany starting in March 2016. The online survey was done asking biographic parameters, subjective ratings and potential needs for reforms concerning the academic and professional career. Results: Actually 679 participants finished the survey of which 44 % seem to be well informed about their career options. However, 47 % are not and would seek better information. This is underscored by 63 % who never received any kind of strategic career mentoring. Generally, academic degrees were found important by 65 %. This is highlighted by the association for better future prospects with a doctorate (58 %), but only in 45 % with a postdoctoral lecture qualification (PLQ; Priv.-Doz.). Furthermore, 91 % think to have better chances for higher job positions with a doctorate. Therefore, 9 % of the participants seek for a doctorate (75 % already have a doctorate), 38 % for PLQ and 18 % for a full professorship. However, only 17 % rate high to very high chances to get a full professorship. In contrast to that 80 % of superiors expect that their medical staff will receive a doctorate, however, only 25 % feel to be well supported by their boss. Additionally, the unproblematic release of clinical work for research activities seems only possible in 3 % and is therefore done mostly in the freetime (48 %). Taken together 52 % wish reforms including structured programs for higher academic qualifications with mandatory agreements, standardized federal regulations, reduced dependency on professors and more transparency as well as the relieve of the clinical working burden. Conclusion: The academic career in medicine is highly valued and seems to be associated with better job positions and future prospects although the chances for a full professorship were only rated low. 
This is conflicted by the historically grown complex German graduation system but also by a great lack of support of the institutions and direct superiors of the candidates itself. Science and politics agree that young researchers must be offered better career prospects what has already been recommended by the German Science Council in 2013. Therefore there is the need for substantial structural changes providing projectable career pathways such as the tenure track program. In addition, the potential new personnel concept for young (medical) professionals (Y-model) might also be supportive, enabling the Uniklinik Köln, Köln Background: The current student generation has their own expectations towards professional life and pay particular attention to work-life balance. Less interest in work-intensive specialties leads to a shortage of skilled candidates especially in surgery. In order to motivate students into a surgical residency, new priorities become important. A deeper understanding of the underlying arguments and students' expectations towards a surgical training are necessary to counteract a future shortage of specialized surgeons. Materials and methods: We conducted an internet-based survey among medical students at two representative German university hospitals to gain more information about the underlying mechanisms that lead to and against the choice of a surgical career. We particularly paid attention to gender differences and differences between students of different academic years. Results: A total of 1098 students participated in the survey. Sixty-four percent were female. The majority of the students are of the opinion that surgery is an interesting and meaningful profession. In contrast, when it comes to their own career choice, most students (89% female and 81% male) are not willing to choose a surgical specialty. Students are very well willing to spend a high amount of time on their professional life but by the same time demand planning reliability and a sufficient work life balance. Flexibility in working hours and an existing childcare program were identified as predominant factors for all students and in particular for female students. Same counts for a respectful conversional tone and appreciation of the individual work. The factors prestige and salary were less relevant than "selffulfillment" in terms of respectful interaction and reconciliation of work life and private life. There was significant difference in female and male students as female students have clearer ideas concerning career planning but at the same time are less self-confident than their male colleagues. Moreover there was a significant difference between junior and senior students regarding career planning with a shift to less work intensive specialties and especially away from a surgical residency in older students. Adjustments of work time models, working environment, clinical curriculum and a respectful interaction are factors that might increase the willingness of young students to choose a surgical career. Conclusion: In summary our survey reveals that the current student generation is motivated and willing to spend a certain amount on professional life but has clear ideas about "self-fulfillment" and self-confident expectations on future workplace. The responsible surgical leaders should consider to further enhance clinical education, improve working environments and pay attention on a respectful professional interaction. 
We furthermore have to be aware that according to our data a good percentage of students negatively changes their view on surgery based on their final years of medical school. Why so many young physicians burn outand the three things they need to know (Abstract ID: 982) Brigham and Women's Hospital, Boston Background: The incidence of burn out among physicians is among the highest in the working population. Over 60% of young physicians are reported to show at least one sign of burn out, such as emotional and physical exhaustion, depersonalization and cynicism. Materials and methods: This disturbing fact is not only important for the health and wellbeing of the individual physician, but also holds direct implications to patient safety and treatment as well as economical consequences for health care providers. Results: It is important to understand why burn out rates among young physicians are so high and on the rise and the consequences that derive from it. Conclusion: But even more important are ways and techniques for the individual physician to reduce the risk of burn out and maintain mental and physical health amidst the strains of the 21st century medical work environment. Morbidity and Mortality Conference as a Tool for Quality Assurance (Abstract ID: 315) Background: The implementation of morbidity & mortality-conference-results is the background and rationale behind this project. Quality assurance is a very important aspect of daily life in a neurosurgical clinic as well as an important parameter for the neurosurgical patient. Quality assurance programs of some sort are common but still not implemented in all units. In 10.000 surgical procedures including neurointerventional procedures over a period of 5 years a regular morbidity and mortality conference was organized in our institution. The morbidity and mortality conference was organized not just a control of all neurosurgical and neurointerventional procedures done in the month before, but also as a didactic session to teach residents, fellows, guests and students. Digitized presentation using power point and key note with inclusion of the topic related publications are mandatory. Materials and methods: Shortly after the ending of a month, there were in a medium between 150 -200 procedures analyzed in a regular way in the whole team. Not the neurosurgeon involved is of utmost importance, but the case, the indication for surgery, the procedure, the potential complications and the relation to published literature. A written protocol and a summary and also an implementation of the results discussed into the clinical standards were given. Results: From every morbidity & mortality-conference at least one, sometimes more conclusion were extracted from discussion and implemented in the clinical standards as a very important contribution. Conclusion: In the meantime this implementation and the regular organization of morbidity & mortality-conference is not just a must from the view from the chairman of the department, but also a requested and very important didactic session in education and training of the neurosurgeons in the whole clinic. For many benign and malign disease of the thyroid surgery is the treatment of choice. Improvement in surgery and surgical instruments has led to a reduction in postoperative bleeding. Local hemostatic agents are another growing trend in the field of thyroid surgery. 
These agents achieve hemostasis either passively by contact activation of the intrinsic coagulation pathway, or actively using added thrombin and/or fibrinogen to produce a fibrin seal. The objective of this systematic review and network meta-analysis was to investigate the value of hemostatic agents to prevent postoperative bleeding after thyroidectomy. Materials and methods: Randomized controlled trials (RCTs) meeting the following criteria were included: adult patients undergoing unilateral or bilateral thyroid resection for benign or malignant disease; comparison of active (AHA) or passive hemostatic agents (PHA) to each other, or to a control group without the specific intervention of interest; and reporting of one or more of the following outcomes. The primary outcome was the rate of cervical hematoma requiring reoperation. Secondary outcomes were as follows: total volume of postoperative blood loss (defined as drain volume in mL until drain removal), time to drain removal (in hours, defined as time from the end of the operation until <20 mL was collected in the drain over a 24h period), rate of cervical hematoma that did not require reintervention, rate of postoperative wound infection, rate of postoperative seroma, length of hospital stay (in days), rate of postoperative recurrent nerve palsy (transient or permanent), and rate of postoperative hypocalcaemia (transient or permanent). For the primary outcome, a Bayesian random-effect model for network meta-analysis with minimally informative prior distributions was performed. Taking clinical heterogeneity in trial participants and treatments into account, a random-effect model was chosen for the meta-analyses. Results: In regards to the primary endpoint no significant difference was observed in the pooled odds ratios. The reoperation rate due to bleeding is highest in the control arm with 58.0% probability, in the AHA arm with 30.2%, and in the PHA arm with 11.8% probability. The precision of the pooled results was low, resulting in large confidence intervals. Active hemostatic agents led to a significantly lower total volume of postoperative blood loss compared to both control and passive hemostatic agents. The probability of blood loss volume was the lowest in the AHA. AHA also proved beneficial with respect to operating time, with a mean reduction of 11 and 20 minutes compared to control and passive hemostatic agents, respectively. No difference was observed for time to drain removal or length of hospital stay. The use of local hemostatic agents does not reduce the risk of clinically relevant postoperative bleeding (cervical hematomas with regard to reoperation), thus raising the question of whether their use should be continued as a standard prophylactic measure. Calculating the perioperative risk[1] in surgical patients often relied on expertise of the surgeon and complication factors known from the literature. Many are guided by the American society of Anaesthesiologists (ASA) classification. This classification originated from a workgroup in 1940 and was first published in the 1960s. Although widely known and well established, several studies have shown the ASA classification to be a poor predictor of in-hospital deaths following inpatient surgery. This is a direct result of poor risk measurement; poor identification and weighting of threat and opportunity impacts and likelihoods. 
And then critically, making a balanced decision that weights the threat against the opportunity based on objective measures that are contextualized to the case in hand. Many different surgical risk calculators exist trying to calculate perioperative morbidity and mortality, especially in cardiac surgery or orthopedic surgery. Other surgical subspecialties have tried to stratify surgical risk using scoring systems for individual surgical procedures such as hernia repair. The American college of surgeons developed an open access preoperative surgical risk score (Surgeons National Surgical Quality Improvement Program, ACS-NSQIP) which calculates the perioperative risk for a patient for a specific procedure for various outcomes. This program has collected data from 393 hospitals. It allows the surgeon to calculate the perioperative risk and predicted length of hospital stay for a specific procedure predicting 8 major outcomes. [1] Axelos Global Best practice, "Management of Risk: Guidance for Practitioners' (2010 Ed), p.135 defines a risk as "An uncertain event or set of events that, should it occur, will have an effect on the achievement of objectives. A risk is measured by a combination of probability of a perceived threat or opportunity occurring and the magnitude of its impact on objectives." Materials and methods: A review of all publications related to the ACS NSQiP was performed. Sofar we found 35 papers directly comparing a patient cohort to the calculated risk given by the ACS NSQiP. Some have found the calculated risk not to differ significantly from the observed risk. Others have found a statistically significant difference between the calculated and the observed complication risk. We give an overview of the methods of the ACS NSQiP, literature review and outcomes sofar Conclusion: Several models exist to aid surgical decisionmaking. Preoperative knowledge of possible complications and outcomes is crucial for precise planning and informed consenting of the patient. Riskcalculators can help in the planning process, however can not be safely relied on. The thesis of this report is to determine objective measures of risk and opportunity probability and likelihood based on ACS-NSQIP data, defining principles to apply them in context and weight the factors accordingly to provide an objective, auditable and defendable risk barometer. Klinikum rechts der Isar, München Background: INTRODUCTION: With increasing age, the number of patients with cardiac co-morbidities do increase and with this also the individual mono-& dual antiplatelet therapy. This regime seems to increase the perioperative bleeding risk. Therefore, international guidelines recommend to shift the operation in these patients up to one year, which is not possible in pancreatic cancer/PCa. Thus, we aimed to clarify whether PCa patients with a pre-existing mono-or dual antiplatelet therapy have an increased risk of bleeding compared to patients with no antiplatet therapy and with patients who started an antiplatet therapy postoperatively. Results: RESULTS: 125 (14.9%) patients received pre-operative anticoagulatory medication (group I-IV). In these patients, there was no difference in the incidence and severity of bleeding compared to the control group. The cardiac and overall complication rate did not differ in these groups. 
However, 136 patients (16,2%) who received postoperatively a novel anticoagulatory therapy regime (group V) demonstrated significantly more and severe postoperative bleeding (group Va: p = 0.009; Vb: p = 0.027; Vc: p <0.0001). Also the rate of overall complications was higher only in this special subgroup of patients (Vb: p=0.003; Vc: p= 0,019). Conclusion: CONCLUSION: Overall, it seems that pre-operative mono or dual platelet inhibition does not increase the risk of perioperative bleeding and complications. Patients with novel postoperative anticoagulatory therapy seem to be more at risk. However, until subsequent studies with larger cohorts are present, individual risk assessment should be performed. More than 234 million surgeries are performed worldwide per year including approximately 40 million surgeries in Europe. The over-all inpatient complication rate is around 10% resulting in more than 3.5 million complication-associated-deaths per year worldwide. Besides the effects on patients' health, serious complications result in a tremendous increase of hospitalization costs and a general economic damage. However, almost half of the adverse events are preventable. This presentation focuses on medical safety in Europe and other parts of the world and compares the tools and options that are available and used. Materials and methods: Available data and publications concerning 'Patient safety in surgery' were identified and analyzed through PubMed, the Cochrane Library, the WHO database, the European Commission database and publications and recommendations by several national and international surgical societies. An overview of the current literature on 'Patient Safety' is presented. Results: In Europe, the overall mortality ranges between 0.8 and almost 4% in surgical patients. 75% of the adverse events in surgical patients occur intra-operatively. Thus, the operating room is a high impact area for safety improvements. In 2009 the WHO published the 'Surgical Safety Checklist' and the 'Guidelines for Safer Surgery'. The use of the WHO checklists and guidelines has shown to reduce the inpatient death rate almost by half and complications by one third. The WHO checklists and guidelines are used in two thirds of all European patients, with marked variation across the different European countries (0 -100 %). In addition to the WHO checklists the European Commission published a 'Patient Safety Package' including 'Patient Safety in the EU' and an overview on 'Reporting and learning systems for patient safety incidents across Europe'. Guidelines for a variety of diseases and surgical procedures have been published by several national, European and international surgical societies. Surgical quality is checked and approved through various certification programs, 'high volume' requirements for some of the more complex surgeries exist. In the past years the American College of Surgeons (ACS) has established several quality programs like the National Surgical Quality Improvement Program (NSQIP) and a 'Surgical Risk Calculator' that demonstrated lower complication rates and mortality rates and a significant saving of costs in the participating hospitals. Conclusion: In Europe and other parts of the world the use of surgical safety checklists and guidelines has shown to result in a significant drop of adverse events and complication-associated deaths. Also, the number of non-lethal-complications can be reduced significantly. 
Further improvements on patient safety can be achieved through the use of risk calculators, critical incidence reporting systems, a good complication management, morbidity and mortality conferences, certification and re-certification processes and the classification of departments in 'centers of competence, reference or excellence'. Alone in Europe, more than 750,000 harm-inflicting medical errrors, 260,000 incidents of permanent disability, and more the 95,000 deaths per year appear to be preventable through best practice perioperative medicine and best practice complication management. Acute kidney injury (AKI) is a frequent complication after major surgical procedures, however its consequences are often underestimated, especially if compared to other surgery-related complications. Therefore we evaluated incidence of non-dialysis (AKI/noRRT) and dialysis-dependent AKI (AKI+RRT) and association with outcome in surgical ICU-patients. Materials and methods: Data on 7119 adults admitted to surgical intensive care unit at tertiary university hospital were retrospectively analyzed during a 5-year period (from January 2011 to December 2015). All patients and also subgroups after major non-cardiac surgical procedures such as: major liver resection, major pancreatic resection, esophageal resection, gastrectomy, multivisceral operation with peritonectomy and hyperthermic intraperitoneal chemotherapy (HIPEC), open and endovascular aortic surgery were assessed for incidence of AKI and need for renal replacement therapy (RRT) according to hospital database, using International Statistical Classification of Diseases (ICD-10) and OPS (Operation and Procedures Codes 2016). Moreover, length of ICU-and hospital stay, age, severity of illness and discharge destination were evaluated. Results: In total 7119 patients were admitted to the surgical intensive care unit between January 2011 and December 2015. 1 737 patients (24.4%) developed AKI during their hospital stay, of whom 575 (8.1%) required RRT. The incidence of AKI increased annually from 20.8% in 2011 to 28.6% in 2015. Patients without any AKI had the shortest length of ICU and hospital stay, compared to patients with AKI/noRRT and patients with +AKI/+RRT, which showed the longest duration of ICU and hospital stay (4.3 vs 6 vs 21 days and 20 vs. 25.7 vs 46.8 days). Having analysed the hospital discharge process we found a significant association between the degree of AKI severity and likelihood for homeward discharge: no AKI vs AKI/no RRT: odds ratio OR 1.77, p<0.001; no AKI vs AKI+RRT OR 8.79, p<0.001; and AKI/no RRT vs AKI+RRT 4.96, p<0.001. Comparing individual surgical procedures, highest incidence for AKI was seen after adult liver transplantation (60.3%), followed by open and endovascular AAA repair (38.7%; 34.9%). Patients undergoing major liver resection or oesophageal and gastric surgery had significantly higher risk to develop AKI and AKI with RRT during ICU stay compared with patients after pancreatic resection or multivisceral operation with peritonectomy and HIPEC (AKI: odds ratio OR 1.35; 95% CI 1.01-1.80; p=0.043 and AKI+RRT: OR 1.79; 95% CI 1.13-2.83; p<0.013). Conclusion: Incidence of AKI in surgical ICU-patients is increasing and its consequences affect short-term patient outcome, e.g. increased mortality and length of stay as well as long-term quality of live, e.g. rarely discharge homewards. 
Therefore, patients undergoing major surgical procedures like liver transplantation, aortic, major hepatic, esophageal and gastric operations, should be broadly informed about the considerable risk to develop an AKI in postoperative course. Preventable causes of AKI, Ev. Huyssens-Stiftung, Essen Background: Beside theoretical knowledge, operative skills are of major importance in surgical training. Traditionally, skills are trained and evaluated in daily work and education. Objective measuring of surgical abilities is challenging and uncommon. We identified the implantation of totally implantable central venous devices (TICVDs) as a potential model to analyze basic surgical skills and progress in operating performance. Materials and methods: Five surgical novices without any surgical experience were trained in standard implantation of TICVDs via cephalic or subclavian vein in local anesthesia. After assisting 10 operations and performing another 15 procedures under supervision of a surgeon, the beginners started to do cases on their own. A successful operation was defined by two items: (a) operating time < 40 minutes, and (b) no need for assistance by a surgical specialist. For evaluation of the surgical performance, the cumulative sum technique (CUSUM), a sequential analysis statistical method, was used. Acceptable and unacceptable failure rates were defined with 0.8 and 0.7. Operating times were analyzed for the first and the last 25 cases. For group comparison of operation time, the one tailed independent t-test was used. For analysis of need for help by specialists, a logistical regression model with a binary dependent variable and post-hoc significance test was used. Results: In 4 of 5 surgeons a decrease in the mean operating time was noticed, 2 of them significantly improved with -13,8 minutes and -13,1 minutes (p<0,05), 2 improved with -1,7 minutes and -4,2 minutes on average while one had an increased mean operating time with +3,7 minutes. Three surgeons needed assistance less than 6 times throughout all 50 TICVDs. Two others with 10 and 19 assisted TICVDs needed significantly less assistance with growing experience over time (p<0,05). The odds ratios showed a 7% and 12% decreased probability for need of assistance for every TICVAD done by these two surgeons (p<0,05). CUSUM demonstrated a successful learning curve for 4 of 5 surgeons. Three of 5 surgeons met the preset criteria for success after 14, 37 and 38 procedures respectively (Fig. 1). One surgeon achieved the criteria temporarily but showed a decrease in performance in the later operations. In one surgeon no progress could be identified (Fig. 1). Conclusion: TICVAD-Implantation can be learned fast and performed independently by the majority of surgical residents within their first 50 procedures. The CUSUM-method offers a good model to evaluate performance of surgical novices. Picture: Figure 1 CUSUM-test plot for Implantation of totally implantable central venous access devices by five surgical novices: AA1, AA2, AA3, AA4 and AA5. The horizontal lines represent the upper and lower boundaries respectively. Success is displayed as a decrease in CUSUM, while failure is displayed as an increase. Unimedizin Mainz, Mainz Background: The rate of postoperative functional pyloric stenosis and delayed gastric emptying after esophagectomy and gastric pull-up reconstruction is a common complication and in 15-20% of patients clinically relevant. 
This functional stenosis can lead to severe postoperative complications and is mostly endoscopically treated via pyloric balloon dilatation in the postoperative phase. The preoperative endoscopic balloon dilatation during the common re-staging gastroscopy is an approach to prevent postoperative functional pylorus stenosis and reduce the rate of this complication. In this study we investigated the value of preoperatively pylorus dilatation in patients prior ivor-lewis resection compared to the patients without preoperative pylorus dilatation. Materials and methods: We performed a single-center retrospective analysis of patients who received an ivor-lewis esophagectomy between August 2013 and May 2016, without regard to staging, comorbid conditions, length of hospital stay and other complications. Conclusion: It seems that preoperative pyloric balloon dilatation reduces the risk of postoperative delayed gastric emptying especially in the early postoperative phase. However our limited retrospective date gives only an idea of the potential of preoperative pylorus dilatation, but does not significantly proof this finding; a prospective randomized trial is needed. Background: It is unclear whether the type of ostomy or operation variety influences quality of life (QoL). In a German observational study of 2,647 patients, QoL after colostomy (CS) or small bowel stoma (SBS) formation was evaluated. Materials and methods: Questionaires of the European Organisation for Research and Treatment of Cancer QLQ-C30 and CR-38 were answered by patients and medical care specialists. Patient characteristics, retrospective information about the ostomy and previous treatments, and current stoma-related complications were recorded. All questionnaires were distributed and collected by stoma therapists at the German homecare company PubliCare®. Results: In 1,790 patients a CS and in 756 a SBS was performed. The mean Global Health Score (mGHS-a general QoL indicator) was 52.33 in CS and 49.40 in SBS patients (p = 0.004), but effect size (Cohen's d) was 0.1. In SBS patients, all functional scores were lower and most symptom scores were higher. Conclusion: QoL differed significantly for CS and SBS patients, but patient's effect size was only marginal. Especially female patients need an advanced care after emergency operations. Nevertheless, ongoing professional education and guidance are necessary for all patients. Informed consent (IC) before surgery is common practice in Germany. The main purpose of IC is to allow the patient to make decisions according to their own will (autonomously). Is IC possible to its full extent in cases of urgent operations and what role plays the trust between patient and doctor in these cases? Materials and methods: Beauchamp and Childress illustrate the requirements for IC in the most influential American Medical Ethics book. These requirements are competence (to understand and decide) and voluntariness (in deciding). [1] Urgent operations which should be carried out within a few hours from the time of indication are to be distinguished from emergency and elective surgeries. The time pressure, pain and other symptoms associated with for instance appendicitis, result in a reduction of competence and voluntary action of the patient, hence limiting IC. The time pressure limits the possibility for the patient to obtain information and reduces his knowledge about the impending procedure. To allow the patient to remain able to act on his own behalf trust has to increase. 
"Trust means to build a positive relationship with the not-knowledgeable person. It makes actions possible despite lack of knowledge."[2] Trust means to put your concerns about something into the hands of someone else and to allow this person scope of discretion [3] The main purpose of IC, is to ensure the autonomy of the patient. On the relationship between autonomy and trust: "However, it is also clear that autonomy and trust support each other and that there are many situations where one relies on the other." [4] Table: 1 Beauchamp TL und Childress JF (2013) Conclusion: In case of urgent operations some requirements of IC are not met to ensure autonomous decisionmaking of the patient. The article addresses the importance of trust in doctor-patient relationship before urgent operations and its implications for clinical practice. Heilig-Geist-Hospital, Bingen Background: Trust is one of the key components in the Doctor-Patient Relationship ( DPR ). The possibilities of obtaining information has changed significantly in our present modern societies, that potentially impact the trust factor in DPR. The intention of this paper is, to show the relevance of trust, from the perspective of the patient and also the reliability of the internet based information from the patients point of view. Materials and methods: Between June 17 2016 -July 22nd 2016, around 243 patients, visiting a general hospital seeking help in the field of visceral or trauma surgery, were interviewed concerning the importance of trust in the DPR. They were asked to answer five questions anonymously with an analogue scale (1 to 10) during their waiting period. Results: 46% of the 243 interviewed patients were female, 54% male; 59% of all patients required trauma surgery and 41% visceral surgery. The number of patients was split into six different age groups, which were largely homogeneous. The age group ranging between 30-39 years was inadequate, representing only 9,8% . The most highlighted response was given to the importance of the trust factor in DPR of 77.7% of all patients, as "very important" ( 10= very important , 1 =not important ). 10,7% of the respondents rated the same question with 9 points on this scale. Analyzing the subgroups, there were no significant differences regarding age or discipline. But regarding gender, the importance of trust was rated significantly higher by female patients. 49 % of the respondents use the internet to obtain further information about health issues. In the group ranging over 60 years still 41 % use this information tool. The reliability of healthcare-information in the internet has been reviewed by those who use it, with an average grade of 5.1 points (1 = low, 10 = high). The group of patients that doesn't use this information, gave an average rating of 3.8 points, this difference was considered significant. Conclusion: Finally we could say that trust is a major factor in the Doctor-Patient-Relationship for a vast majority of the interview patients. The reliability of healthcare information provided in the internet was evaluated as rather low by the same group. Fachklinik Hornheide, Münster Background: A 46-year-old female patient was transferred to us because of a persistent phlegmon of the abdominal wall after laparoscopic adhesiolysis, exzision of a necrotic epiploic appendix and exzision of an ovarian cyst due to abdominal pain one month before. 
Despite of antibiotic therapy with penicillin for over a week, excision of the abscess and even slightly falling inflammatory markers, the wound situation did not improve. On admission at our hospital there was a phlegmon of the whole lower abdomen with superficial necroses and erosions of the skin. Although the local finding resembled a necrotizing fasciitis, the general condition of the patient was surprisingly good. Materials and methods: Because of the good overall condition we refrained from operating on the patient directly and first decided to take biopsies of the wound margin and wound swabs in cooperation with our dermatologists. The swabs and the biopsy only showed evidence of staphylococcus capitis after an enrichment process. Histologically we found acantholytic cleavage of the epidermis as well as vesicles and bullae due to an exfoliative dermatitis of the type of the staphylococcal scalded skin syndrome. Therefore we continued the conservative procedure with infusion therapy, an adjusted intravenous antibiosis with clindamycin and cefuroxime and daily wound inspections. Dressings were changed daily with fatty gauze and compresses. Results: During the next week of conservative treatment, the clinical finding improved rapidly. There was a nearly complete reepithelialization of the wound surface. The patient could be discharged after two weeks with normalised inflammatory markers. Two weeks later the skin was completely reepithelised. Conclusion: Visual diagnoses can sometimes be misleading. In this case one would tend to operate on the patient because of the local findings which suggested the diagnosis of a phlegmonous process or even a necrotizing fasciitis. Furthermore, the patient had no typical risk factors of the staphylococcal scalded skin syndrome: It is a seldom syndrome which occurs primarily in infants. Immunsuppressed adults with tumour disease, HIV, after organ transplantat, alcohol abuse or patients with renal failure rarely face the disease which is caused by the exotoxins of staphylococcus aureus. The patient had no known comorbidities or any earlier purulent infections. The only laboratory-confirmed bacterium was Staphylococcus capitis which is part of the normal flora of the skin. Staphylococcus aureus could not be detected within the swabs or the biopsy, which probably due to the precedent antibiotic therapy. Nevertheless, the histological findings confirm the most likely diagnosis of the staphylococcal scalded skin syndrome. If we had operated on the patient, there would have been a huge open wound of the abdominal wall with the need of a complex covering technique. If the patient's general condition does not go along with the local findings and the expected diagnosis, one should always think of differential diagnoses. Charité-Universitätsmedizin Berlin, Berlin Background: Surgical site infections (SSI) are rated as one of the most pressing issues in perioperative nosocomial infections calling for new action plans. All measures contributing to the prevention of SSI are highly valuable as they reduce the burden for patients most of all, but also for health care professionals, and decrease costs and unnecessary hospital stay for departments and payors. Materials and methods: Two different intraoperative surgical skin desinfective agents were prospectively compared in an observational study on alcoholic povidone-iodine solution (Braunoderm) and chlorhexidine (Chloraprep). 
The new agent Chloraprep was introduced and used for 750 consecutive cases and after that the procedure returned to the long established alcoholic povidone-iodine solution Braunoderm for the next 750 cases. Both groups were compared and tested for statistical significance between groups using SPSS. Results: In the overall analysis skin preparation with Chloraprep showed a lower incidence of SSI, with 5.7% in comparison to 8.5% with Braunoderm skin preparation (Graphic 1). The most significant effect was to be noted for emergency procedures with 7.5% SSI in the Chloraprep group versus 15.3% in the Braunoderm group. Longer surgical procedures showed the most profound effect with lower incidences in SSI after use of Chloraprep, whereas in the subgroup of elective surgical procedures no significant difference could be observed (Table 1). In the overall analysis skin preparation with Chloraprep showed a significantly lower incidence of SSI, although this difference was due to the significance in emergency procedures and not in elective surgery. A clear increase in SSI could be shown with a longer the operating time, and from 90 minutes onwards the difference between the skin preparation became significant. In the longer procedures a lower rate of SSI was to be noted for Chloraprep. A randomized study is therefore warranted to yield solid results in order to justify the expenses of the more costly agent, if it can be shown to be superior in the prevention of SSI also in the randomized setting. Klinikum Kaufbeuren, Kaufbeuren Background: Streptococcal myositis is a rare and often fatal acute infection of the superficial fascia and the muscle. The initial symptoms are nonspecific until the fulminant course with tissue destruction and septic shock starts. The lack of classical symptoms makes an early diagnosis and start of therapy difficult. The heavy general symptoms in patients with a history of streptococcal infections in relatives and massive pain should bring up the consideration of a streptococcal myositis.We report on two cases with fulminant streptococcal infections, one of them with lethal outcome within hours after hospital admission. Interestingly, in both cases pain was the major symptom, whereas inflammation markers or radiographic findings were rather unspecific. We want to present clinical data, intraoperative findings as well as the microbiological analysis of the involved bacteria. Additionally, we show a review on the literature especially concerning the role of radiographic imaging, initial antibiotic regimen and surgical strategy. Postoperative wound infections after laparotomy are associated with a prolonged in hospital course, often requiring surgical revision. The current literature states that approximately up to 20% of patients suffer from surgical site infections (SSI) after elective laparotomy. Therefore, one goal seems to be the prevention of SSI. Epicutaneous negative pressure therapy (ENPT) has been shown to be beneficial in a series of cases, however mainly for gynecological and therefore non-contaminated surgeries. In our current investigation we aim to explore the usefulness of EPNT preventing SSI on a visceral surgery population. Materials and methods: This study includes 49 consecutive patients undergoing laparotomy for elective or emergent surgery with an incision of at least 20 cm. Four cases were excluded due to not-wound associated reoperation. 
Abdominal and wound closure was performed in a standardized fashion, following application of an epicutaneous negative pressure drape with continuous suction of 125 mmHg. Perioperative data including demographics, risk factors for SSI, such as BMI, smoking, diabetes, vascular comorbidities and immunosuppression, as well as operative data (peritonitis, enteral anastomosis, stoma and wound size) were captured prospectively. The drape remains for 5 to 7 days postoperatively if no clinical signs for wound infections occurred. Failure of ENPT is defined as any SSI requiring wound opening within 30 days. Results: The study population consists of 45 patients (mean age 57 ±17 years, 18 female, 27 male, mean BMI 28.3 ±8) that underwent midline laparotomy. Sixteen (35%) cases underwent surgery for incisional hernia, 27 (60%) for gastrointestinal resection and 2 (4%) for cytoreductive surgery with HIPEC. Eighteen percent of them were emergency procedures. SSI infection requiring surgical wound management occurred in 33% overall. Although, SSI tends to occur more frequent after emergency surgery, it did not reach statistical significance (50% vs. 29%; p=0.241). However, neither BMI, smoking, diabetes, vascular comorbidities, nor wound length and depth show any significant correlation to postoperative SSI. Also, enteral anastomosis, peritonitis or presence of an enterostoma was not significantly associated with SSI when ENPT was applied. Conclusion: Although limited by small sample size and heterogeneous study population, our results reveal an alarming high occurrence of SSI even when ENPT is applied. Raising the question of the true benefit of this perioperative treatment modality. However, further prospective randomized trials for evaluation of ENPT in patients requiring laparotomy are necessary to reveal the true (dis)benefit. Epicutaneous negative pressure therapy (ENPT) experiences a true hype for prevention of surgical site infections (SSI) for any kind of laparotomy. However, there is no randomized clinical trial that demonstrate the true benefit yet. ENPT, whether as ready-to-use or customized system, is utilized to protect the surgical incision from contamination and to drain fluids and potential infectious material from the depth of the incision. Yet, there is no study that proofs this theory. Hence, we aim to develop an ex-vivo model to bring some light into the depth of surgical incisions. Materials and methods: Our ex-vivo model is embodied by pork belly, measuring 30 by 30 by 5 cm. An incision is made over a length of 20 cm with various depths to 4 cm in 1 cm increments. A catheter with 4 circumferential pressure detectors, 4 cm apart, is placed on the ground of each incision and connected to the computer-based receiver (Medical Measurement Systems International, Enschede, NL). Wound edges are protected with adhesive drape circumferentially. A black foam is trimmed to 22 by 4 by 3 cm and placed on top of the incision. An additional adhesive drape secures the foam in place (figure 1). Now, baseline pressure is set zero. Continuous suction of 75, 100, 125 and 150 mmHg is applied and recorded for 5 minutes respectively. Finally, stained water is injected through the catheter in 1 ml increments until water level reaches the foam and presents within the connection-tube. After additional 5 minutes of suction, the drape is removed. Results: The intra-incisional pressure increases by 4 mmHg at 1 cm as well as 4 cm with continuous suction of 75 mmHg. 
With increased suction up to 150 mmHg the immediate pressure doubled, returns to 4 mmHg after 10 seconds however. Simulation of wound fluids using stained water, also increases the immediate pressure within the incision with each injected ml, followed by "normalization" to 4 mmHg, independently of incision depth. After injection of 12 ml stained water at 4 cm depth, it rises up to the foam and presents within the connection-tube. After removing the drape, the entire tissue of the incision was stained. However, 25% of the fluid remained on the ground of the incision. Conclusion: Our data represent the first ex-vivo treatment model for epicutaneous negative pressure therapy. We have shown a noticeable and measureable effect of ENPT in our model. However, the drainage of fluids seems to be insufficient on deeper incisions. Although, handling these results with care, as they do not represent the physiologic environment and clinical treatment time, they do proof the physical effect of ENPT for its purpose. BG Unfallklinik Ludwigshafen, Ludwigshafen Background: Pyoderma gangrenosum (PG) is a rare neutrophilic skin disease characterized by rapidly progressing painful, necrotic ulceration and typically affects patients in their third to sixth decade of life. It most frequently involves lower extremities and is often a diagnosis of exclusion, as there are no specific laboratory or histopathologic findings. In most cases, an underlying systemic disorder, such as inflammatory bowel disease, Hepatitis C, Rheumatoid arthritis, hematologic or lymphoproliferative disorder can be identified but a substantial amount of cases remain elusive. In 25-50 %, skin lacerations by trauma or surgery will eventually trigger new lesions (pathergy phenomenon) and thereby exacerbate progression even further. This renders PG a malicious and challenging disease, especially in the surgical patient. Materials and methods: A 58-year-old woman with a large lower limb tissue defect after elective knee replacement surgery was transferred to our Department for reconstruction. As wounds were rapidly progressing, Necrotizing fasciitis was initially suspected but eventually ruled out by histopathological analysis. A 50 x 15 cm defect was then reconstructed by a combined parascapular latissimus dorsi flap before, a couple days later, the patient developed tender pustules and ulcers on flap and donor site. The diagnosis of Pyoderma gangrenosum was suspected and local and systemic therapy was initiated but treatment proved to be challenging and insufficient at first. Results: Persistent ulcerative necrosis eventually led to flap loss and the patient had to be amputated on her left leg atop of the knee. Repeated histopathological examination never actually confirmed the diagnosis of Pyoderma gangrenosum but clinical presentation and progression was pathognomonic. The treatment was therefore continued with topical corticosteroids and orally administered prednisolone at 1 mg/kg body weight. Over the next weeks, dermatologic symptoms eventually succumbed and, after 3 more surgeries, including plastic coverage of the eroded donor site by rhomboid transposition flap, the patient finally recovered and was able to be transferred to a rehabilitation unit. Conclusion: Pyoderma gangrenosum is rare but should always be included as a differential diagnosis when rapidly progressive ulceration on surgical sites is observed. 
Almost 80 years after its first description, the disease still remains poorly understood and its appearance is especially challenging in patients requiring large scale tissue reconstruction. With PG, any further surgical intervention may lead to uncontrolled disease exacerbation (pathergy) and must therefore be carefully scrutinized, thereby substantially limiting therapeutic options. Confirmation of diagnosis may be exclusively achieved by clinical appearance and progression. Histopathology and laboratory parameters are non-specific and may even be potentially deceptive. Uniklinikum Bonn, Bonn Background: Visceral artery aneurysm (VAA) is a rare and potentially lethal vascular disease. Both surgery and endovascular management are considered appropriate therapeutic strategies. We evaluated our 10years-experience in a multidisciplinary team. Materials and methods: All patients treated for VAA at Bonn University Hospital between 2005 and 2016 were retrospectively enrolled. Demographic and clinical data were reviewed and postinterventional outcomes in endovascular or surgical treatment groups were compared. The results are discussed with respect to the current guidelines of the german society of vascular surgery (Deutsche Gesellschaft für Gefäßchirurgie). Results: A total of 24 patients (14 women and 10 men; median age 62 years; range 36-91 years) presenting with 28 VAAs were identified. The most common locations were splenic artery (N=8), followed by hepatic artery (N=6), superior mesenteric artery (N=5) and left gastric artery. The larger part of patients with VAA presented with sypmtoms (N=12), with 6 patients treated because of a life-threatening bleeding after rupture. Surgery was performed in 10 patients while 12 patients were treated with an endovascular procedure. In 2 patients a watchful waiting approach was preferred due to small size asymptomatic VAAs. In case of emergency, the majority of the patient were treated with an endovascular procedure (N=5). Overall morbidity was 33%. No difference regarding posttherapeutic morbidity was observed between the endovascular and the surgery group (p=0.337). 30-day hospital mortality was 0% in both groups. Conclusion: VAA is a life-threatening condition and no standard treatment strategy exists. Our data confirm that both surgical and an endovascular approaches are safe and feasible if performed in specialized centres with multidisciplinary experience. Hepatic resections are one of the most complex surgical procedures with considerable morbidity rates. Deciding on the appropriate approach involves appreciating tomography images and considering patient's clinical information. Three-dimensional (3D) operation planning promises to make complex liver surgery better and safer. There are however two problems of current planning systems: first imaging data, although 3D reconstructed, is displayed at two-dimensional (2D) screens depriving the surgeon of an authentic visualization of the intraoperative situation, second crucial clinical patient information is not integrated into the planning environment complicating the decision making process. We present a comprehensive, interactive and immersive method for visualizing operation planning data in a virtual reality (VR) environment using a head-mounted-display Oculus Rift (Oculus VR LLC, California, USA) and compare it to established methods. Materials and methods: Three patients from the Surgical Department of the University Clinic in Heidelberg who underwent hepatic resection were selected. 
3D-Models of the liver and gallbladder surface, the arterial, venous and portal venous vasculature, the bile ducts and the liver tumors were created from computed tomography data using open-source tools. We used the VR-HMD to visualize the intraoperative anatomical situation in the abdomen. By using the HMD the user could access the 3D-visualized upper abdomen, clinical patient data and the original CT images in virtual reality. Users could interact with all presented data using a mouse as input device. The opacity of the liver and the intrahepatic structures could be adjusted to fit the needs of the user. In the first step of the evaluation medical students were included and randomized in a 1:1:1 ratio. Operation planning was performed with 2D CT-data on a standard monitor (2D), with a 3D-model on a standard monitor (3D) or within VR. All participants evaluated three consecutive liver cases with increasing difficulty. A score was determined from the correctness of the answers on an 11-item-questionnaire assessing liver anatomy, tumor involvement and proposed liver resection as well as the time to answer. At the end participants evaluated the satisfaction, usefulness and potential of their visualization method.
2019-08-18T23:08:19.374Z
2017-02-16T00:00:00.000
{ "year": 2017, "sha1": "4bc353dcdc735ec7ed9f245863857495599cd2a1", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1515/iss-2017-2002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3bf7a39589b82e76d0b6f7ee5753ea76fd8e0127", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
220127940
pes2o/s2orc
v3-fos-license
The low-luminosity type II SN\,2016aqf: A well-monitored spectral evolution of the Ni/Fe abundance ratio Low-luminosity type II supernovae (LL SNe~II) make up the low explosion energy end of core-collapse SNe, but their study and physical understanding remain limited. We present SN\,2016aqf, a LL SN~II with extensive spectral and photometric coverage. We measure a $V$-band peak magnitude of $-14.58$\,mag, a plateau duration of $\sim$100\,days, and an inferred $^{56}$Ni mass of $0.008 \pm 0.002$\,\msun. The peak bolometric luminosity, L$_{\rm bol} \approx 10^{41.4}$\,erg\,s$^{-1}$, and its spectral evolution is typical of other SNe in the class. Using our late-time spectra, we measure the [\ion{O}{i}] $\lambda\lambda6300, 6364$ lines, which we compare against SN II spectral synthesis models to constrain the progenitor zero-age main-sequence mass. We find this to be 12 $\pm$ 3\,\msun. Our extensive late-time spectral coverage of the [\ion{Fe}{ii}] $\lambda7155$ and [\ion{Ni}{ii}] $\lambda7378$ lines permits a measurement of the Ni/Fe abundance ratio, a parameter sensitive to the inner progenitor structure and explosion mechanism dynamics. We measure a constant abundance ratio evolution of $0.081^{+0.009}_{-0.010}$, and argue that the best epochs to measure the ratio are at $\sim$200 -- 300\,days after explosion. We place this measurement in the context of a large sample of SNe II and compare against various physical, light-curve and spectral parameters, in search of trends which might allow indirect ways of constraining this ratio. We do not find correlations predicted by theoretical models; however, this may be the result of the exact choice of parameters and explosion mechanism in the models, the simplicity of them and/or primordial contamination in the measured abundance ratio. INTRODUCTION Massive stars of M 8 M finish their lives with the collapse of their iron core, which releases great amounts of energy and produces explosions known as core collapse supernovae (CC-SNe). These explosions can leave behind compact remnants in the form of neutron stars or black holes, although the exact details of the outcomes are not well understood. Within the different classes of CCSNe, type II SNe (SNe II), characterised by the presence of hydrogen in their spectra, are the most common (Li et al. 2011;Shivvers et al. 2017). SNe II are a heterogeneous class, with light curves showing different decline rates across a continuum (e.g., Anderson et al. 2014) from plateau (SNe IIP; with a pseudo-constant luminosity for ∼ 70 -120 days) to linear decliners (SNe IIL, or fastdeclining SNe). The light curves generally show two distinct phases: an optically-thick phase, driven by a combination of the expansion of the ejecta (which pushes the photosphere outwards) and the recombination of hydrogen (which pushes the photosphere inwards), and a later optically-thin phase, powered by the radioactive decay of 56 Co. The prototype of this faint sub-class is SN 1997D (de Mello et al. 1997;Turatto et al. 1998). SN 1997D displayed a low luminosity and low expansion velocity. However, it was discovered several weeks after peak, with no well-constrained explosion epoch. The first statistical study of this sub-class was that of Pastorello et al. (2004), who found the class to be characterised by narrow spectral lines (P-Cygni profiles) and low expansion velocities (a few 1000 km s −1 during the late photospheric phase), suggesting low explosion energies (E exp few times 10 50 erg). 
Their bolometric luminosity during the recombination ranges between ∼ 10 41 erg s −1 and ∼ 10 42 erg s −1 , with SN 1999br (Pastorello et al. 2004) and SN 2010id (Gal-Yam et al. 2011) being the faintest SNe II discovered. They also show lower exponential decay luminosity than the bulk of SNe II, which reflects their low 56 Ni masses (M Ni 10 −2 M ), in agreement with the low explosion energies as expected from the M Ni -E exp relation found in different studies (e.g., Pejcha & Prieto 2015;Kushnir 2015;Müller et al. 2017). Spiro et al. (2014) have since expanded the statistical study of LL SNe II, adding several objects and finding similar characteristics to those found by Pastorello et al. (2004). While the current sample of nebular spectra of LL SNe II is growing, the study of additional events with better cadence and higher signal-to-noise data is essential for understanding their observed diversity. The progenitors of LL SNe II have been shown to be red supergiants (RSGs) with relatively small Zero Age Main Sequence (ZAMS) masses (M 15 M ) using archival pre-SN imaging (e.g. Smartt et al. 2009; and hydrodynamical models (e.g. Dessart et al. 2013b;Martinez & Bersten 2019). However, other studies have suggested the possibility that their progenitors are more massive RSGs with large amounts of fallback material (e.g. Zampieri et al. 2003). Theoretical studies have shown that the nebular [O i] λλ6300, 6364 doublet is a good tracer of the progenitor core mass, and, therefore, of the progenitor ZAMS mass (e.g., Jerkstrand et al. 2012Jerkstrand et al. , 2014Jerkstrand et al. , 2018 and some other studies as well, e.g., Lisakov et al. 2017Lisakov et al. , 2018, making the late-time spectral evolution extremely important for the study of SN progenitors. Furthermore, nebular nucleosynthesis diagnosis is so far consistent with the lack of massive progenitors above ∼20 M (e.g. J14, Jerkstrand et al. 2015a;Valenti et al. 2016). In addition to the study of the nebular [O i] λλ6300, 6364 doublet as progenitor mass estimator, the Ni/Fe abundance ratio, measured from the [Fe ii] λ7155 and [Ni ii] λ7378 lines, have been shown to be important for the understanding of the inner structure of the progenitor and the explosion mechanism dynamics, as the observed iron-group yields are linked to the temperature, density and neutron excess of the layers that become fuel for the rapid burning process of the explosion (Jerkstrand et al. 2015a,b, hereafter J15a, J15b). However, there are few studies of this ratio, mainly due to the lack of late-time spectra and the absence of these features in the available data in the literature. In this paper, we study SN 2016aqf: a well-observed (i.e., excellent spectral and photometric coverage) LL SN II, discovered soon after explosion, with M max V = −14.58 mag, a plateau duration of ∼ 100 days and a measured M Ni = 0.0010 M (see Sec. 3.2 and 4.1). The nebular spectra show the [O i] λλ6300, 6364 doublet. The He i λ7065 emission line is also seen in the spectra of SN 2016aqf, a line associated to SNe with a low progenitor mass, but not present in every LL SN II and not well understood. In addition, SN 2016aqf is one of the few cases where the [Fe ii] λ7155 and [Ni ii] λ7393 lines (produced by 56 Ni and 58 Ni, respectively) can be seen in the nebular spectra (∼ 150 -330 days after the explosion) over ∼ 170 days. 
This extended coverage of the Ni/Fe abundance ratio presents a unique opportunity to study its evolution, and serves as a test for current late-time spectral modelling as well as providing a rich legacy dataset. This paper is structured as follows: in Sec. 2.1 we describe the observations, data reduction and host galaxy of SN 2016aqf. In Sec. 3 we show the light curve, colour and spectral evolution of SN 2016aqf and compare it with other LL SNe II. In Sec. 4 we estimate the physical parameters of SN 2016aqf, while in Sec. 5 we discuss our findings. Finally, our conclusions are in Sec. 6. Throughout this paper we assume a flat ΛCDM cosmology with H 0 = 70 km s −1 Mpc −1 , Ω M = 0.3 and Ω Λ = 0.7, as these values are widely used in the literature (e.g., Gutiérrez et al. 2018) and the H 0 value lies between the value measured from the CMB (Planck Collaboration et al. 2016) and local measurements (e.g., Riess et al. 2018 Gutiérrez et al. 2018, although see Sec. 2.2), we commenced a follow-up campaign with the extended Public ESO Spectroscopic Survey of Transient Objects (ePESSTO) as part of the programme 'SNe II in Lowluminosity host galaxies'. The final pre-explosion non-detection in the V-band, reported three days before the date of classification by ASAS-SN (57442 MJD), has a limiting magnitude ∼ 16.7 mag, which does not give a stringent constraint on the explosion epoch. Previous non-detections have similar limiting magnitudes. Hence, we decided to estimate the explosion epoch using the spectral matching technique (e.g., Anderson et al. 2014;Gutiérrez et al. 2017). We used gelato 2 (Harutyunyan et al. 2008) to find good spectral matches to the highest resolution spectrum of SN 2016aqf, as it is also one of the first spectra taken (57446 MJD, see below). From the best matching templates, we calculated a mean epoch of the spectrum of ∼6 days after explosion and a mean error added with the standard deviation of the explosion epochs in quadrature of ∼ 4 days. This gives an explosion epoch of MJD 57440.19 ± 4 (slightly different to the estimated epoch in Gutiérrez et al. 2018 as they used the non-detection) Optical BVgri imaging of SN 2016aqf was obtained with the 1.0-m telescope network of the Las Cumbres Observatory (LCO; Brown et al. 2013) as part of both ePESSTO and the 'Las Cumbres Observatory SN Key Project', with 1 http://www.astronomy.ohio-state.edu/assassin/index. shtml 2 https://gelato.tng.iac.es/gelato/ data taken from 8 to 311 d after explosion. All photometric data were reduced following the prescriptions described by Firth et al. (2015). This pipeline subtracts a deep reference image constructed using data obtained in the BVgri bands three years after the first detection of SN 2016aqf to remove the host-galaxy light using a point-spread-function (PSF) matching routine. SN photometry is then measured from the difference images using a PSF-fitting technique. Fig. 1 shows the SN position within the host galaxy. The photometry of SN 2016aqf is presented in Table 1. Spectroscopic observations were obtained with the ESO Faint Object Spectrograph and Camera version 2 (EFOSC2; Buzzoni et al. 1984) at the 3.58-m ESO New Technology Telescope (NTT), the FLOYDS spectrograph (Brown et al. 2013) on the Faulkes Telescope South (FTS), and the Robert Stobie Spectrograph (RSS; Burgh et al. 2003;Kobulnicky et al. 2003) at the Southern African Large Telescope (SALT). FLOYDS spectra were taken as part of the 'Las Cumbres Observatory SN Key Project'. 
The observations include phases from 2 to 348 d after explosion. EFOSC2 spectra, obtained with grism #13, cover 3500-9300Å at a 21.2Å resolution, the FLOYDS spectra have wavelength coverage of ∼ 3200 -10000Å with a resolution of ∼ 18Å, and the RSS spectrum (Jha & Miszalski 2016) covers 3600-9200Å at ∼ 7Å resolution. The data reduction of the EFOSC2 spectra was performed using the PESSTO pipeline 3 , while the FLOYDS data were reduced using the pyraf-based floydsspec pipeline 4 (Valenti et al. 2014). All spectra are available via the WISeREP 5 repository (Yaron & Gal-Yam 2012). Spectral information is summarised in Table 2. Host Galaxy Photometry of NGC2101 was obtained with the LCO 1.0m telescope network, and spectroscopy with VLT/FORS2, around three years after the SN explosion (2019 February 6 at 04:38:48 UTC). We estimated a galaxy distance of µ = 30.16 ± 0.27 mag (see Sec. 3.2), consistent with the Tully-Fisher value of µ = 30.61 ± 0.80 mag, as reported in the NASA/IPAC Extragalactic Database 6 (NED). Adopting the distance estimated in this work, the galaxy has M B = −17.22±0.34 mag, which is consistent with the value reported in Gutiérrez et al. (2018, −17.66 mag) given the large uncertainties from the reported distance. We use the total apparent corrected B-magnitude, with the total B-magnitude error as reported in HyperLEDA, using error propagation. The radial velocity corrected for Local Group infall onto Virgo is 883 ± 3 km s −1 (Theureau et al. 1998;Terry et al. 2002), as reported in HyperLEDA, a value which we use to estimate the corrected redshift of SN 2016aqf. From the spectrum of the H ii region at the position of the SN, we measure the emission line fluxes of H α, H β, Table 1. SN 2016aqf BV gri-band photometry between +5 and +311 days. BV bands are in Vega magnitude system, while gri bands are in AB magnitude system. MJD Phase Kennicutt & Evans (2012), where the uncertainty is driven by the uncertainty in the distance. Using the calibration of Marino et al. (2013), we then estimate a gas-phase metallicity of (12 + log(O/H)) O3N2 = 8.144±0.025 dex and (12 + log(O/H)) N2 = 8.134 ± 0.042 dex, i.e., below the solar value of 8.69 dex (Asplund et al. 2009). This is low compared to many other SN II host galaxies, but not uncommon (e.g., Anderson et al. 2016). However, the metallicity does not follow the relation found with the Fe ii λ5018 pEW (e.g., Dessart et al. 2014;Anderson et al. 2016;Gutiérrez et al. 2018). This may be caused by the lower temperatures in LL SNe II which causes the earlier appearance of the Fe ii lines in these objects (Gutiérrez et al. 2017). Extinction corrections We adopt a Milky Way extinction value of E(B − V) MW = 0.047 mag, and correct our photometry using the prescription of Schlafly & Finkbeiner (2011) and the Cardelli et al. (1989) reddening law with R V = 3.1. To estimate the host galaxy extinction, we investigated the equivalent-width (EW) of the Na i D (λλ5889, 5895) absorption, a well-known tracer of gas, metals and dust (e.g., Richmond et al. 1994;Munari & Zwitter 1997;Turatto et al. 2003;Poznanski et al. 2012). We note that these relations tend to have large uncertainties. The spectrum at +6 d is the only one that seems to shows Na i D absorption lines from the MW and the host galaxy. We used the relations for one line (D 1 ) and two lines (D 1 +D 2 ) from Poznanski et al. (2012), obtaining upper limits of E(B − V) 0.028 ± 0.011 mag and E(B − V) 0.032±0.006 mag, respectively. This gives a weighted average value of E(B − V) 0.031 mag. 
Given this very small level of extinction (and its uncertainty), we choose not to make an extinction correction to the SN data. We do not use other methods to estimate this value as they rely on the SN colour; de Jaeger et al. (2018) showed that the majority of colour dispersion of SNe II is intrinsic to the SN. Light curve and distance The BVgri-band light curves of SN 2016aqf (Fig. 2) cover 8 to 311 d after explosion (all phases in this paper are relative to the estimated explosion epoch). As the host galaxy is not in the Hubble flow, we estimated the distance to SN 2016aqf using the Standardized Candle Method (Hamuy & Pinto 2002), which relates the velocity of the ejecta of a SN II to its luminosity during the plateau, and the rela- . SN 2016aqf BV gri-band photometry from +8 to + 311 d. BV bands are in Vega magnitude system, while gri bands are in AB magnitude system. The last non-detection in V band is also shown (inverted triangle). The SN was not visible around the transition from the optically-thick to the optically-thin phase. Offsets have been applied to the photometry for visualisation purposes. As in all figures in this paper, the photometry is corrected for MW extinction but not host extinction, and the data are in the rest-frame. tion of Kasen & Woosley (2009, equation 17) for a redshiftindependent distance estimate. We calculate the distance modulus µ = 30.16 ± 0.27 mag (10.8 ± 1.4 Mpc), which gives M max V = −14.58 mag and a mid-plateau V-band luminosity of −14.63 mag (note the plateau luminosity is slightly brighter; M max V represents the maximum luminosity from the peak closest to the bolometric peak). We estimated M max V from the first epoch of photometry given that the last non-detection helps to obtain a good constrain. During the recombination phase, the SN shows an increase in the Vri-bands luminosity, probably due to its low temperature which shifts the peak luminosity from the ultraviolet (UV) to redder bands more rapidly compared to normal SNe II. The gap in observations between 80 and 150 days was caused by the SN going behind the sun, and coincides with the SN transitioning from the optically-thick to the optically-thin phase. The V-band decreases by ∼ 2 mag across the gap in the light curve, and is an estimate of the decrease caused by the transition from plateau to nebular phase, smaller than other LL SNe II (∼ 3-5 mag; e.g., Spiro et al. 2014). We measured the decline rate in the Vband at early epochs (t 20 days; s 1 ), in the plateau (s 2 ), and in the exponential decay tail (s 3 ) as defined in Anderson et al. (2014, see section 4.2 for the t pt used), obtaining s1 = 0.65 +0.13 −0.12 mag 100 d −1 , s2 = −0.08 +0.01 −0.01 mag 100 d −1 and s3 = 1.22 +0.02 −0.02 mag 100 d −1 . M tail was not measured as the early decline of the exponential decay tail was not observed. Colour Evolution In Fig Table 3. In addition we include SN 2012ec (Maund et al. 2013), a non-LL SN II, as a reference as it has a well-measured Ni/Fe abundance ratio, used in our later analysis. For this comparison sample, we use photometry and spectra obtained from the 'Open Supernova Catalog' (Guillochon et al. 2017) and WISeREP (Yaron & Gal-Yam 2012). Note that we only used epochs with both B and V photometry to calculate colour, without applying interpolations. The photometry of this sample is corrected for MW extinction (see Sec. 3.1), and host galaxy extinction, using the values from the references in Table 3. 
However, we do not correct for host galaxy extinction when the reported value is an upper limit (this does not represent a problem given the relatively small extinction values, A V < 0.1 mag). The (B − V) evolution of SN 2016aqf is in general flatter than the bulk of our sample, showing similar colours at early epochs (t 15 days), but becoming slightly bluer at later epochs (t 25 days), similar to SN 2012ec. After ∼ 100 d the dispersion in the colour evolution of our sample starts increasing, probably due to the faintness of these objects. Bolometric light curve We estimated the bolometric light curve of SN 2016aqf by applying the bolometric correction from Lyman et al. (2014) (assuming a cooling phase of 20 days). We use the (g − i) colour as it shows the smallest dispersion. Most SNe in our LL SN II sample have only BV RI data, so, to be consistent, we calculated their bolometric light curves (correcting for MW extinction only) by applying the relation from Lyman et al. (2014) as well, but with (B − I) colour as it has the smallest dispersion within the available bands, using the distances from Table 3. Only epochs with simultaneous B and I bands (or g and i for SN 2016aqf) were used. The light curves are shown in Fig. 4 (SN 2008bk is not shown as it does not have epochs with simultaneous B and I coverage). Unfortunately, as the relations from Lyman et al. (2014) only work in a given colour range, we can not estimate the bolometric light curve during the nebular phase of some of the SNe. The luminosity of SN 2016aqf at peak is L bol ≈ 10 41.4 erg s −1 , estimated from the first epoch with photometry. The luminosity of SN 2016aqf during the cooling phase generally decreases less steeply than other LL SNe II. During the plateau phase, the luminosity falls to L bol ≈ 10 41.3 erg s −1 , placing it in the mid-luminosity range of our sample (between SN 2005cs andSN 2002gd). After the gap, the SN has a luminosity of L bol ≈ 10 40.5 erg s −1 , dropping to L bol ≈ 10 39.7 erg s −1 at +300 d. The exponential tail is steeper than 56 Co decay (0.98 mag per 100 days Woosley et al. 1989), although shallower than the decay in the Vband, presumably due to γ-ray leakage. Early spectral evolution The spectra of SN 2016aqf have narrower lines than spectra of normal SNe II, suggesting low expansion velocities and low explosion energies. Spectra obtained during the optically-thick phase are shown in Fig. 5. During the first two weeks, the evolution is mainly dominated by a blue continuum and Balmer lines, showing P-Cygni profiles of Hα and Hβ. Fe ii λ4924, λ5018, λ5169 and Ca ii λλλ8498, 8542, 8662 then appear, becoming prominent at later epochs. The Na i D appears at around one month. Sc ii/Fe ii λ5531, Sc ii λ5663, λ6247 and Ba ii λ6142 appear at around +50 d. O i λ7774 is weakly present after one month. Fig. 6 shows the spectra of SN 2016aqf with other SNe from our comparison sample. The Fe ii lines are present in all SNe, although in SN 2016aqf they are generally weaker. SN 2016aqf is similar to SN 2002gw and SN 2010id, with a relatively featureless spectrum between Hβ and Hα. However, we see no major differences with the rest of the sample at ∼ +15 d. At around +50 d (Fig. 6), SN 2016aqf resembles SN 2009N, with the difference that the Sc ii/Fe ii λ5531, Sc ii λ5663, λ6247 and Ba ii λ6142 lines are weaker (and weaker than most other SNe in our sample). 
O i λ7774 is seen in the spectrum of most SNe, except SN 2002gd and SN 2016bkv where the signal-to-noise/resolution of the spectra precludes a secure identification. Most SNe have very similar Fe ii and Ca ii NIR line profiles. SN 2016aqf does not display any other peculiarity with respect to the comparison sample. Note that host galaxy extinction may be substantial for SN 2013am (Zhang et al. 2014;Tomasella et al. 2018), explaining the drop in flux at the bluer end of this SN. Table 4 shows a list of lines with pseudo-Equivalent Width (pEW, not corrected for instrumental resolution), including the full-width at half maximum (FWHM, not corrected for instrumental resolution) of Hα, measured from the spectra of SN 2016aqf during the optically-thick phase. Table 3. SN II sample used throughout this work. The data for this sample were taken from the references cited in column References. Table 3). lines are more redshifted (∼ 15Å, or ∼ 630 km s −1 , ∼ 630 km s −1 and ∼ 610 km s −1 ) throughout most of the nebular phase. We also noticed that the [Ni ii] λ7378 line shows almost no redshift (∼ 2Å, or ∼ 80 km s −1 ) at ∼ +150 days before rapidly increasing to ∼ 10Å (∼ 400 km s −1 ) at ∼ +165 days and ∼ 20Å (∼ 800 km s −1 ) at ∼ +270 days. In addition, the [O i] λλ6300, 6364 lines show a minor blueshift (∼ 5Å, ∼ 230 km s −1 ) at ∼ +280 days and then gets blueshifted again in about one month. These shifts could be caused by asymmetries caused by clumps in different layers of the expanding envelope. It is worth mentioning that the [Fe ii] λ7172 and [Ni ii] λ7412 lines can contribute to the shifts in the [Fe ii] λ7155 and [Ni ii] λ7378 lines, respectively. However, due to the resolution of the spectra, we are unable to discern their contribution. Table 5 contains a list of lines and FWHM measurements of SN 2016aqf. When we compare SN 2016aqf to other SNe at > +300 d (see Fig. 8 Expansion velocity evolution The ejecta expansion velocities were measured from the position of the absorption minima for Hβ, Fe ii λ4924, Fe ii λ5018, Fe ii λ5169, Na i D (middle of the doublet), Ba ii λ6142, Sc ii λ6247 and Hα. For Hα, we also estimated the expansion velocity from the FWHM (corrected for the instrumental resolution) of the emission by using v = c × FWHM/λ rest , where c is the speed of light. We include uncertainties in the measurement of the absorption minima, from the host galaxy recession velocity (3 km s −1 , as reported in Hyper-LEDA 7 ; Makarov et al. 2014), the maximum rotation velocity of the galaxy (44.2 km s −1 , as reported in HyperLEDA) and from the instrumental resolution, all added in quadrature. The major contribution to the uncertainty comes from the instrumental resolution. The expansion velocity curves are shown in Fig. 9. The velocities of Hα and Hβ are relatively high ( 8000 km s −1 ) at very early epochs (t 10 days) and drop to ∼ 5000 and 4000 km s −1 at ∼ 50 days, respectively, decreasing at a slower rate afterwards. The Hα velocity estimated from the FWHM is close to that estimated from the absorption minima as shown by Gutiérrez et al. (2017). The velocities of other lines decrease less dramatically, from ∼ 5000 km s −1 at early epochs (t ∼ 10 d), for the Fe ii lines, dropping down to ∼ 3000 km s −1 at ∼ 50 d, and then constant thereafter. In general, the expansion velocity curves of SN 2016aqf fall within the bulk of our sample and follow the general trend, although some of the velocities seem to decrease faster during the first 50 days after explosion. 
Nickel Mass The M Ni is one of the main physical parameters that characterise CCSNe as it is formed very close to the core (within a few thousand kilometers; e.g., Kasen & Woosley 2009). We estimated the nickel mass of SN 2016aqf by using different methods. These come from: (i) Arnett (1996) For (i), (ii) and (iv), we used the bolometric luminosity of the exponential decay tail at +200 days, calculated in Table 3). Nadezhin 1985). These parameters are related to different light-curve properties and also M Ni , therefore, they are essential for the characterisation of SNe II and CCSNe in general. The relations found by Popov (1993) where M V is the V-band absolute magnitude at the middle of the plateau, t p is the duration of the plateau in days (as in Hamuy 2003), v ph is the expansion velocity of the photosphere at t p /2 (usually measured from the Fe ii λ5169 line, as it has shown to be a good tracer of the photosphere) in 10 3 km s −1 . E exp is expressed in 10 51 erg, and M env and R prog in solar units. We measured M V = −14.63 ± 0.27 mag for which we used Gaussian processes to interpolate the light curve. By using the relativistic Doppler shift, we obtained v ph = 2068 ± 167 km s −1 from the Fe ii λ5169 absorption line minima. Finally, we use t p = 97.9 ± 7.2 days, for which we assumed the same value of SN 2003fb, adding its uncertainty (see Anderson et al. 2014) in quadrature, as these SNe have relatively similar evolution around the transition (t +50 days) in the V band (see Appendix B). With these values for SN 2016aqf we obtained E exp = 0.24 ± 0.13 × 10 51 erg, M env = 9.31 ± 4.26 M and R prog = 152 ± 94 R . The large uncertainties come mainly from the velocity, specifically from the instrumental resolution, and from the distance uncertainty used in calculating the absolute magnitude. We compared these results with similar relations found in the literature (e.g., Kasen & Woosley 2009;Shussman et al. 2016;Sukhbold et al. 2016;Kozyreva et al. 2019;Goldberg et al. 2019;Kozyreva et al. 2020), obtaining similar results. SN 2016aqf follows the E exp −M Ni relation found in SNe II (e.g., Pejcha & Prieto 2015;Müller et al. 2017), and M env follows the M env −E exp relation (e.g., Pejcha & Prieto 2015). If we assume a neutron star (∼ 1.4 M ) as the compact remnant, the progenitor of SN 2016aqf should be a RSG with ∼ 10.7 M . This is a lower limit, as some mass loss is expected due to various processes, e.g., winds ( et al. 2013b). Finally, R prog is well within the normal values of RSG radii, although on the lower end (e.g., Pejcha & Prieto 2015;Müller et al. 2017), but consistent with other estimations for this sub-class of SN (e.g., Chugai & Utrobin 2000;Zampieri et al. 2003;Pastorello et al. 2009;Roy et al. 2011). Progenitor Mass The progenitors of SNe II have been extensively studied through pre-SN images (e.g. Smartt et al. 2009; and hydrodynamical models (e.g. Bersten et al. 2011;Dessart et al. 2013b;Martinez & Bersten 2019). Although there remain some disagreements (e.g., Utrobin & Chugai 2009;Dessart et al. 2013b, for discussions of this discrepancy), there have been recent major improvements due to better cadence observations. The [O i] λλ6300, 6364 nebular-phase lines have also been shown to be good tracers of the core mass of CCSN progenitors (e.g., Elmhamdi et al. 2003;Sahu et al. 2006;Maguire et al. 2010), as at these later epochs we are observing deeper into the progenitor structure. 
Spectral modelling of the nebular phase has shown good agreement with this and can be used to estimate the progenitor mass (e.g., J12; J14; J18). In order to estimate the progenitor mass of SN 2016aqf, we used the spectral synthesis models from J14 and J18 for progenitors with three different ZAMS masses: 9, 12 and 15 M . The 9 M model has an initial 56 Ni mass of 0.0062 M while the other two models have an initial 56 Ni mass of 0.062 M . We compare the nebular spectra of SN 2016aqf with the models at two different epochs each (see Fig. 10 for models at +300 days). The models are scaled by exp((t mod -t SN )/111.4), where t mod is the epoch of the spectrum of the models and t SN is the epoch of the spectrum of the SN, by the SN nickel mass, M SN Ni /M mod Ni , and by the inverse square of the SN distance, (d mod /d SN ) 2 . The luminosity of some lines, like [O i] λ6300, 6364, scale relatively linearly with the M Ni (as discussed, e.g., in J14), thus, it is reasonably accurate to compare the models rescaled, with the difference in M Ni , to our observed SN. χ 2 values are calculated to quantify these comparisons as well. From Fig. 10 we see that the 12 and 15 M models present similar results, reproducing several lines. They can partially reproduce the [O i] λ6300 line, but the latter does not reproduce the [O i] λ6364 line very well. However, these models under-predict the [Fe ii] λ7155 line and do not reproduce the [Ni ii] λ7378 line and Ca ii NIR triplet. The 9 M mostly over-predicts the flux of lines, but does a good job reproducing the He i λ7065 and [Fe ii] λ7155 lines. In terms of χ 2 values, the 12 M model is slightly better than the 15 M one, while the 9 M model has a poorer fit. In addition, the 12 M model is relatively consistent with the mass estimate from Sec. 4.2, within the uncertainty. We also measured [O i]/[Ca ii] flux ratios (e.g., Maguire et al. 2010) between ∼ 0.5-0.7, which are consistent with the 12 M model and roughly consistent with the 15 M model. Finally, we found that the models reproduce lines better at later epochs ( 300 d) than at early epochs (< 300 d). J18 found the same pattern. There seems to be a very weak detection of [Ni ii] λ6667 (see Fig. 7), partially blended with H α, and the 9 M model predicts similar fluxes for this line and [Ni ii] λ7378, due to the high optical depths ( fig. 20 of J18). Note that this model has only primordial nickel in the hydrogen-zone, no synthesised 58 Ni, and a different setup compared to the other two (e.g., no mixing applied, J18). As the model prediction for [Ni ii] λ7378 is too weak, one can argue the detection of synthesised nickel. The 9 M model over-predicts the [O i] λ6300, 6364 lines, including most other lines. As mentioned above, J18 had similar results at these early epochs, however, this model showed better agreement at later epochs (e.g., > 350 d for SN 2005cs). We did not find better agreement at later epochs. In order to expand our analysis we also compared SN 2016aqf with the progenitor models from Lisakov et al. (2017), specifically, the YN models of 12 M (a set of pistondriven explosion with 56 Ni mixing) as their M Ni (0.01 M ) agree perfectly with our estimation, apart from agreeing with other physical parameters (e.g., E exp = 2.5 × 10 50 erg, M env = 9.45 M ) as well. This comparison, which was done in the same way as with the other models above, is shown in Fig. 10 for the YN2 model as well. As can be seen, the model predicts some of the Ca and the [O i] λ6300, 6364 lines relatively well. 
Nonetheless, most of the other lines are over-predicted. Other models from Lisakov et al. (2017) did not show better agreement. However, the fact that both 12 M models (from J14 and Lisakov et al. 2017) partially agree with the [O i] λ6300, 6364 lines (the main tracers of the ZAMS mass) strengthen the conclusion that the progenitor is probably a ∼ 12 M RSG star. We would like to emphasise that neither the 9 M model from J18 nor the YN 12 M models from Lisakov et al. (2017) have macroscopic mixing. The consistent overproduction of narrow core lines in both models (see Figs. 10 and ??) suggests that mixing is necessary, which the models from J14 have. In problems predicting the observed diversity of LL SNe II, probably due to the incomplete physics behind these explosions (e.g., assumptions of mixing, 56 Ni mass, rotation). In other words, there is a need of more models with different parameters that can help to understand the observed behaviour of these SNe. As such, we can not exclude a 9 M nor a 15 M progenitor. Thus, we conclude that the progenitor of SN 2016aqf had a ZAMS mass of 12 ± 3 M . A more detailed modelling of the progenitor is needed to improve these constrains, although this is beyond the scope of this work. He I λ7065 The He i λ7065 nebular line has been studied with theoretical modelling (e.g., Dessart et al. 2013a;J18), giving a diagnostic of the He shell. These models predict the appearance of this line in SNe II with low mass progenitors as more massive stars have more extended oxygen shell, shielding the He shell from gamma-ray deposition. However, some LL SNe II do not show this line in their spectra (e.g., SN 2005cs; see Fig. 8). SN 2016aqf shows the clear presence of He i λ7065 throughout the entire nebular coverage. We also see the presence of [C i] λ8727, although it gets partially blended with the Ca ii NIR triplet. We expect to see this carbon line as a result of the He shell burning, so the presence of both lines (He i λ7065 and [C i] λ8727) is consistent with the theoretical prediction. Thus, we believe that SN 2016aqf is a good case study to provide further understanding of the He shell zone through theoretical models. Furthermore, following the discussion from J18, we conclude that this is a Fe core SN and not an electron-capture SN (ECSN), as the latter lack lines produced in the He layer. Ni/Fe abundance ratio As discussed above, the nebular spectra of SNe II contain a lot of information regarding the progenitors as we are looking deeper into its structure. J15a discussed the importance of the ratio between the [Ni ii] λ7378 and [Fe ii] λ7155 lines as indicator of the Ni/Fe abundance ratio. These elements are synthesised very close to the progenitor core and, for this reason, their abundances get affected by the inner structure of the progenitor and the explosion dynamics. More specifically, iron-group yields are directly affected mainly by three properties: temperature, density and neutron excess of the fuel (for a more detailed account, see J15b). For this reason, studying iron-group abundances is key to understanding SNe II. SN 2016aqf is the only SN II to date with a relatively extensive coverage of the evolution of [Ni ii] λ7378 (most other SNe with the presence of this line only have at most ∼ 2 epochs showing it). In Fig. 11 we show the evolution in time of the flux of [Ni ii] λ7378 and [Fe ii] λ7155, and their luminosity ratio. We estimated the fluxes by fitting Gaussians to the profiles. 
Uncertainties were estimated by repeating the measurements and assuming different continuum levels, but we do not include the uncertainty coming from the instrumental resolution in any of the measured fluxes throughout this work. However, this should not greatly affect the measurements as the spectral lines are in general much wider than the instrumental resolution (e.g., [Fe ii] λ7155 has an average FWHM of ∼ 35Å). We notice that the evolution of the luminosity ratio reaches a quasi-constant value after ∼ 170 days since the explosion. This suggests that at relatively late nebular phase the Ni/Fe abundance ratio is constant as the temperature should not vary much (see J15a), although clumps in the ejecta might cause deviations from the measured values. After removing the value at ∼ +155 days (as the SN might still be in the transition to the optically thin phase) we report a Ni/Fe luminosity ratio weighted mean of 0.906 and a standard deviation of 0.062. The standard deviation gives us a more conservative estimation of the uncertainty in the Ni/Fe luminosity ratio than the uncertainty in the weighted mean. We follow J15a to estimate the Ni ii/Fe ii ratio and in turn the Ni/Fe abundance ratio. It is possible that this line is mainly visible in LL SNe II, where the expansion velocities are lower, producing narrower deblended line profiles. However, it is also seen in non-LL SNe II, other CCSNe (e.g., SN 2006aj;Maeda et al. 2007;Mazzali et al. 2007) and type Ia SNe (SNe Ia; e.g. Maeda et al. 2010). We searched for objects in our LL SN II comparison sample with spectra in which we could detect [Fe ii] λ7155 and [Ni ii] λ7378 to measure the Ni/Fe abundance ratio as for SN 2016aqf. We also expanded this sample to include other LL SNe II: SN 1997D, SN 2003B, SN 2005cs, SN 2008bk, SN 2009N and SN 2013am. SN 1997D and SN 2008bk were not included in our initial sample as they lack good publicly available data. We also include SN 2012ec as it is a well-studied case. In the case of SN 1997D, we measured the ratio at two different epochs, but we used one (at ∼ +384 days) of those, given that the other value (at ∼ +250 days) had relatively large uncertainties. For SN 2009N we took an average between the two values (at ∼ +372 and +412 days) we were able to measure as they were relatively similar. SN 2016bkv was not included as the M Ni values obtained in Nakaoka et al. (2018) and Hosseinzadeh et al. (2018) for this SN are not consistent with each other (∼ 0.01 M and 0.0216 M , respectively), this being necessary for an accurate estimation of the Ni/Fe abundance ratio. For the rest of the SNe, only one value was obtained. Several other LL SNe II show the presence of [Ni ii] λ7378, but it is either blended with other lines or the SNe lack some of the parameters needed to estimate the Ni/Fe abundance ratio. To expand our analysis we looked into other physical parameters related to the Ni/Fe abundance ratio. For example, J15b further analyse and compare this ratio against theoretical models. Some of these models show that at lower progenitor mass, the Ni/Fe abundance ratio should be higher. We investigate this by increasing our sample. Unfortunately not many LL SNe II have measured progenitor masses from pre-SN images, so we added non-LL SNe II as several of these do (e.g., Smartt 2015), while they also show the presence of [Fe ii] λ7155 and [Ni ii] λ7378 in their spectra. 
We do not include SNe with estimates of the progenitor mass from other methods as they depend on more assumptions than the pre-SN images method, making these estimates less reliable. The SNe included are: SN 2007aa (Anderson et al. 2014;Gutiérrez et al. 2017), SN 2012A (Tomasella et al. 2013) and SN 2012aw (Fraser et al. 2012). All these SNe are included in Table 3. For SN 2007aa we calculated the ejected nickel mass to be M Ni = 0.032 ± 0.009 M (we estimated this value using the relation from Hamuy 2003 and other values from Anderson et al. 2014) and estimated the Ni/Fe abundance ratio also as part of this work. For the other two SNe II, we took the values from J15a, assuming upper and lower uncertainties equal to the average of the uncertainties of the rest of the sample (not taking into account the uncertainties of SN 2012ec as they are too high). The Ni/Fe abundance ratio values for this sample are shown in Table 6. In addition, we compared the Ni/Fe against other phys- Pearson and Spearman's rank correlations were used to investigate if there is any meaningful correlation between these parameters and the Ni/Fe abundance ratio. To account for the measurement uncertainties, we use a Monte Carlo method, assuming Gaussian distributions for symmetric uncertainties, skewed Gaussian distributions for asymmetric uncertainties, and a uniform distribution (with a lower limit of 8 M ) for upper limits in the progenitor masses. We found no significant correlation between the parameters tested above. However, we note that the uncertainties in some parameters are significant. If we do not take into account the uncertainties we obtain a weak correlation between Ni/Fe and M V max and progenitor mass. However, these are mainly driven by one object (SN 2012ec). This null result raises some interesting questions. We did not find a correlation between M Ni and Ni/Fe abundance ratio, which is expected as one would assume the production of 56 Ni to track the production of 58 Ni and 54 Fe (e.g., J15b). We expected to see an anti-correlation between progenitor mass and Ni/Fe abundance ratio, as theory predicts that lower-mass stars have relatively thick silicon shells that more easily encompass the mass cut that separates the ejecta from the compact remnant, ejecting part of their silicon layers, which produces higher Ni/Fe abundance ratios. This is supported by the models from Woosley & Weaver (1995) and Thielemann et al. (1996), but not by those of Limongi & Chieffi (2003) which use thermal bomb explosions instead of pistons, as the former two do (see J15b). Having this in mind, our results either indicate that this anti-correlation can be driven by the exact choice of explosion mechanism (e.g., piston-driven explosions, neutrino mechanism, thermal bomb) and physical parameters (e.g., mass cut, composition, density profile), or that low-mass stars typically do not burn and eject Si shells, but either O shells or possibly merged O-Si shells (e.g., Collins et al. 2018). This is an important constraint both for pre-SN modelling (shell mergers and convection physics that determines whether these Si shells are thin or thick) and explosion theory (which matter falls into NS and which is ejected). Finally, we also need to consider the possibility of having primordial Ni and Fe contaminating the measured Ni/Fe abundance ratio, which could affect our results (as discussed above). 
As mentioned in J15b, 1D models tend to burn and eject either Si shell or O shell material that gives Ni/Fe abundance ratios of ∼ 3 and ∼ 1 times solar, respectively. Therefore, there is a clear-cut prediction that we should see a bimodal distribution of this ratio, with relatively few cases where the burning covers both shells. However, the observed distribution of our sample seems to cover the whole ∼ 1-3 range. This may suggest that the 1D picture of progenitors is too simplistic. Recent work on multi-D progenitor simulations (e.g., Müller et al. 2016;Collins et al. 2018;Yadav et al. 2020, and references therein), where some of these suggest vigorous convection and shell mixing inside the progenitor. If this happens, Si and O shells could smear together and burning such a mixture would give rise to Ni/Fe abundance ratios covering the observed range depending on the relative masses of the two components. CONCLUSIONS Theoretical modelling has shown that the Ni/Fe abundance ratio, which can be estimated from the [Ni ii] λ7378/[Fe ii] λ7155 lines ratio, gives an insight of the inner structure of progenitors and explosion mechanism dynamics. To date, very few SNe II have shown these lines in their spectra, most of them been LL SNe II. This could be due to their lower explosion energies (hence lower expansion velocities) which facilitates the deblending of lines, although these lines have also been found in one SN Ic and SNe Ia. SN 2016aqf has a similar spectral evolution to other SNe of this faint sub-class and has a bolometric luminosity and expansion velocities that follow the bulk behaviour of LL SNe II. When comparing its nebular spectra to spectral synthesis models to constrain the progenitor mass through the [O i] λλ6300, 6364 lines, we find a relatively good agreement with progenitors of 12 (using two model grids) and 15 M . However, due to uncertainties (e.g., mixing) in the other models, we cannot exclude lower mass (∼ 9 M ) progenitors. In addition, we noted that the lack of macroscopic mixing seen in some models produce too much fine structure in the early nebular spectra, which would need to be considered in future modelling. Hence, we conclude that the progenitor of SN 2016aqf had a ZAMS mass of 12 ± 3 M . To further constraint the progenitor mass a more detailed modelling would be required, although this is outside the scope of this work. As observed from the theoretical modelling of SNe II progenitors, the presence of He i λ7065 and [C i] λ8727 in the spectra is linked to the (at least partial) burning of the He shell, which would suggest that SN 2016aqf is a Fe-core SN instead of an ECSN. SN 2016aqf is a unique case as it has an extended spectral coverage showing the evolution of [Ni ii] λ7378 and [Fe ii] λ7155 lines for over 150 days. The ratio between these lines appears to be relatively constant (at t +170 days), which would suggest that one spectrum at a relatively late epoch would be enough to measure this quantity. An optimal epoch range to measure this ratio is ∼ 200-300 days, given that at earlier epochs the SN can still be in the optically-thick phase when the high opacity blocks the contribution from the lines, and at later epochs the contribution from primordial Fe and Ni is more important. This could vary from SN to SN, so a larger sample with extensive coverage of the [Ni ii] λ7378 and [Fe ii] λ7155 lines is required. 
When comparing to a sample of SNe II (LL and non-LL included) with measured Ni/Fe abundance ratio, the SN 2016aqf value falls within the middle of the distribution. We did not find any anti-correlation between ZAMS mass and Ni/Fe abundance ratio as predicted by theory. We believe this could mean one of two things. On the one hand, as some models predict this anti-correlation, but others do not, this trend could be driven by the choice of explosion mechanism (e.g., piston-driven explosions, neutrino mechanism, thermal bomb) and physical parameters (e.g., mass cut, composition, density profile). On the other hand, this could mean that low-mass stars typically do not burn and eject Si shells, but instead O shells or possibly merged O-Si shells which would alter the produced Ni/Fe abundance ratio. However, one must keep in mind that there is the possibility of having contamination of primordial Ni and Fe, which can be significant (up to ∼ 40 per cent) and epoch dependent. The current picture of 1D progenitors may be too simplistic, as higher dimensional effects, like mixing and convection, can play an important role, which could help reproduce the observed distribution of Ni/Fe abundance ratio. Finally, we note that nebular-phase spectral coverage of SNe II is essential for the study of these objects. While there exist a number of SN II nebular spectra in the liter-ature, additional higher cadence and higher signal-to-noise observations are required to help improve theoretical models.
2020-06-29T01:00:47.852Z
2020-06-26T00:00:00.000
{ "year": 2020, "sha1": "0f205f96ee3f4153d5fe03cc32cd5160b25dd8cc", "oa_license": null, "oa_url": "https://orca.cardiff.ac.uk/134922/1/staa1932.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0f205f96ee3f4153d5fe03cc32cd5160b25dd8cc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
6836886
pes2o/s2orc
v3-fos-license
Usability Experiments to Evaluate UML / SysML-Based Model Driven Software Engineering Notations for Logic Control in Manufacturing Automation Many industrial companies and researchers are looking for more efficient model driven engineering approaches (MDE) in software engineering of manufacturing automation systems (MS) especially for logic control programming, but are uncertain about the applicability and effort needed to implement those approaches in comparison to classical Programmable Logic Controller (PLC) programming with IEC 61131-3. The paper summarizes results of usability experiments evaluating UML and SysML as software engineering notations for a MDE applied in the domain of manufacturing systems. Modeling MS needs to cover the domain specific characteristics, i.e. hybrid process, real time requirements and communication requirements. In addition the paper presents factors, constraint and practical experience for the development of further usability experiments. The paper gives examples of notational expressiveness and weaknesses of UML and SysML. The appendix delivers detailed master models, representing the correct best suited model, and evaluation schemes of the experiment, which is helpful if setting up own empirical experiments. Introduction Today, manufacturing automation systems (MS) mostly consist of PLC-based control systems [1] programmed in the languages of the IEC 61131-3 standard [2].Since the proportion of system functionality that is realized by software is increasing [3], concepts for supporting automation engineers in handling software complexity are strongly required.Furthermore, spatially distributed MSs tend to result in a spatial distribution of software [4]- [6], i.e. networked automation systems (NAS).One key challenge of manufacturing automation companies in high wage countries is to increase efficiency, effectiveness and quality in design of software engineering for MS to shorten engineering and start-up time and ease maintenance.To reach this goal, model driven software engineering (MDE) including code generation is a promising systematic approach [1].A variety of different modeling notations (general and domain specific ones) were developed and defined by academia and/or tool suppliers during the last decades to improve efficiency and effectiveness and to increase software quality in manufacturing automation, e.g.Vyatkin [7]- [9], Fantuzzi, et al. [10]- [12], Thramboulidis, et al. [3] [13] [14], Vogel-Heuser, et al. [15] [16] or Estévez, et al. [17] [18]. Many industrial companies and researchers are looking for more efficient MDE in software engineering of MS, but are uncertain about the applicability and effort including training and reengineering of component libraries and workflow needed to implement those approaches in their companies.In the last decade many new approaches, e.g.notations as UML and SysML and tools to support these approaches were developed and presented to industry, to improve software quality and efficiency as well as maintainability and evolution including more or less the typical requirements as real time, communication and partially hybrid process characteristics or networked automation systems.The challenge for manufacturing automation companies is to find the most appropriate MDE approach fitting to the companies' particular requirements, e.g. 
market requirements, customer relation (what to get from the customer and what to deliver), workflow requirements, qualification of personnel, budget and so forth.Consultants try to bridge this gap and support automation companies specifying their requirements and evaluating available approaches and/or tools.In a next step a beta-test may be conducted which is always short in time and delayed by higher prioritized "real" projects.Often poor usability of tools, missing features or missing modular and flexible training units lead to a rejection of a MDE approach [19], which would be applicable and beneficial if systematically selected and introduced.But the risk to spend too much time of experienced application engineers for evaluating a probably inappropriate notation and/or tool is often estimated as too high to allow a systematic and exhaustive evaluation. To overcome this drawback and enable selecting the best notation which fits well to a company's needs, this paper provides a discussion of different ways for usability evaluation in Section 2 including usability evaluation methods and procedures that are well-established and have successfully been applied for decades, e.g. in the automotive domain [20] and in nuclear safety research [21].In the following Section 3, aspects and rules to be considered when designing experiments are investigated in detail.These proposed aspects are exemplified by a variety of experiments conducted in the last decade on usability of software engineering in MS presented in (Section 4).The paper closes with a summary and discussion in Section 5 and 6, respectively.The appendix gives three examples of the evaluation of subjects' models.Appendix A shows subjects' models from the first experiments highlighting the difficulties in understanding class as a structural mechanism to foster modularity and reuse using UML 1.4.Appendix B shows an example of subject's UML model (Evaluation scheme E1) compared to a master model also highlighting the calculation of the later used complexity measure WMC.Appendix C explains the different abstraction mechanism between subjects' solution and master model. Methods in Usability Evaluation Up to now the standard procedure in industry when testing the applicability of different modeling notations in automation technology is a consultation of experts (application engineers) working in the industrial sector of interest [22].Alternatively-or as a supplement to that practice-end users are questioned, e.g. the programmers of PLCs [23] having in fact the same drawbacks.From a methodological point of view, the consultation of experts can either be individually (i.e.interviews) or in an interactive group setting (i.e.focus group or case studies in workshops).Both approaches measure a subjective assessment of the respondents, i.e. 
their attitudes, opinions and knowledge can be gained-not their objective behavior.Focus groups as well as case studies operate with such small numbers of experts to start exploring usability issues.Focus groups require a moderator asking questions and moderating the discussion.In case studies an observer is required monitoring and observing the experts actions.Gained insights can serve as an inspiration for further, more detailed, in-depth studies with more explicit hypotheses.Consequently, the consultation of experts or end users is not the only appropriate way to investigate the applicability of a notation but to identify which notations might be relevant and/or to gain first qualitative results on weaknesses and drawbacks.A major advantage of focus groups compared to individual interviews is a more natural atmosphere that ideally leads to increased talkativeness and openness of the participants [24].An example of a focus group evaluation is given in [25].In contrast, individual interviews are easier to perform (especially with busy industrial experts), as the persons can be questioned at different times and different places, i.e. no common appointment has to be found.Mostly, a very limited number of experts is available for evaluation, and consequently, a representative quantitative opinion cannot be gathered.However, an evaluation by experts, as for example described in [26], can act as a beta-test of a prototypical tool indicating the opinion trend.The same challenge occurs with case studies testing industrial experts and/or researchers as e.g.described in [27] and [28].In both studies, a significant small group of participants were observed during courses on programming in accordance to the IEC 61499 compared to IEC 61131-3 standard and an Object Oriented approach.Conclusions based on observed aspects can only serve as indicators.Consequently, experiments with a higher number of subjects are required to gain objective, detailed and quantitative results for specific research questions.The in depth investigation of research questions like the applicability of novel modeling notations or programming paradigms presuppose that test persons get familiar with the novel notation or paradigm.The introduction of the required knowledge is very time-consuming and consequently not applicable for a large number of experts, i.e. minimum 15 -20 per compared notation, required for significant, objective results.Furthermore, the comparability of results of different experts with different previous knowledge is very difficult to handle.Thus, such research questions (e.g. the usefulness of a certain new notation) can only be quantified with laboratory experiments, where comparable testing conditions can be created and where, due to the sheer number of tested subjects (plus respective grouping), their individual characteristics and preferences can be considered.Such relevant testing conditions and constraints which have to be considered during experimental design are discussed in detail in the following section.Furthermore, their relationship to usability and how to quantify experimental results in order to deduce quantifiable usability results is presented. 
Factors for Experimental Design in MS Patig [29] [30] and Gemino and Wand [31] present results of various experiments on the usability of modeling notations in requirement engineering and classical software engineering introducing relevant factors for successful experimental design.Following an experimental approach the experimental design needs to be fixed ad first, choosing the constraints of the experiment, the so called affecting variables (Figure 1), i.e. the process to be controlled, the engineering task as well as the duration of the whole experiment depending on the temporary availability of the subjects and the subjects themselves.These affecting variables are described in Section 3.1.Hereafter we discuss the affected variables or usability requirements for experimental design in Section 3.2, showing the measures to evaluate the different notational approaches.Thereby, we draw both on [29] [30] and [31] as well as our earlier reflections [32].On that basis, we adapted and enlarged the extent of variables with focus on the specific requirements in design and maintenance of MS. Overview of Affecting Variables According to the best of the authors' knowledge, the main three influencing factors seem to be (1) the task, (2) the preceding training and (3) the tested subjects (Figure 1).Because subjects' attendance for training and experiment is a prerequisite, duration is not only a training constraint but also a constraint for the whole experiment including the performances of the engineering task.For experiments with students or apprentices two days is most promising from an organizational point of view. Task To describe the requirements from the technical process point of view the automation task is introduced and separated from the engineering task performed by an application engineer using notations and tools. 1) Automation Task When designing an experiment, e.g. in order to evaluate a given notation for an enterprise, the task to be automated, i.e. the automation task, and its characteristics has to be clarified.The characteristics should be selected similar to characteristics of the real automation tasks in the company's daily business.Due to the complexity of real industrial automation tasks, the main challenge in the experimental set-up is to develop an appropriate automation task with adjusted complexity the subjects can cope within the limited time frame of the experiment and the required training. As measure for complexity of an automation task, the number of inputs and outputs (I/O) is a familiar metric in cost estimation during preparation of an offer distinguishing between analog (closed loop) and digital (logic design) inputs and estimating the complexity of the closed loop control with a factor (see column "automation task").However, it is far too simple to reduce the complexity of an automation tasks to the number of I/Os because the automation task itself involves even more complexity.To derive the complexity of the automation task, it is necessary to describe the automation task itself with a notation.Therefore, then notational influence may lead to different complexity measure for the same control task.In the following measures for control task complexity correlated to the chosen notation are introduced. Lukas et al. [33] introduce different complexity measure, i.e. 
the size (number of operation and number of state variables), modularity and interconnectedness and applied those in logic control.Frey and Litz [34] introduced complexity metrics for Petrinet using besides others an adapted McCabe metric.Venkatesh et al. [35] proposed to count the number of elements required to represent a certain program in order to measure its com-plexity as well as Lee and Hsu [36], who converted the programs in question into Boolean expressions by using if-then transformations and, afterwards, rated the programs' complexity by comparing the calculated values. For applying an object-oriented notion for describing the automation task, Chidamber and Kemerer developed a set of metrics of OO design [37], e.g.weighted methods per class (WMC) which is a measure for class complexity used in this paper (see column control complexity in Table 1): in order to calculate the WMC of a program, the cyclomatic complexity measure of each method is summed up for all classes, cf.[38].When applying classical IEC 61131-3 code to describe an automation task, the number of FBDs and their instances in case of IEC 61131-3 are a familiar measure [39]. Besides those metrics which rely on the description of the automation task for deriving its complexity, also its characteristics like the type of control loop, i.e. logic control, closed loop control (with synchronization) or a technical process requiring both, called hybrid in the following can be taken into account.Furthermore requirements on automation systems, which have to be fulfilled are introduced: real time requirements, communication requirements between different controllers, and networked automation systems (NAS) as a class of systems with real time and communications requirements because the code and functionality is distributed onto different automation devices. Besides the regular machine control function, diagnosis, exception handling, visualization and other functionality need to be developed."In fact, industry folklore suggests that approximately 90% of the overall control logic is used for exception handling" [40].During the last years, the authors' team analyzed real PLC code from several MS companies and realized that 6% -10% of the lines of code are dealing with diagnosis and safety [41].As a consequence, the mode of operation (EN 13128 [42]: auto, hand, manual etc.) as well as diagnosis including error handling (according to [38] and [41]) are typical automation tasks which provide another possible classification. 2) Engineering Task The category engineering task describes the task to be solved by the human in the experiment, e.g.model (UML, SysML) or program (IEC Code) to control the automation task.Gemino und Wand [31] distinguish two types of human tasks in automation and control: • design and creation versus • understanding and analysis (being typical for maintenance tasks in MS). For both types of tasks, it must be between modeling and programming (dashed lines in Figure 1).Moreover, maturity and functionality of tool support have to be considered as influencing factors.As tool classification three maturity levels and four different functionality types are proposed.Because the comparison of different notations in MDE is focused in this paper the modeling notation as such and its measures should be discussed in more detail. 
The notation needs to fulfill the requirements given by the automation task, the life cycle model and the engineering task allowing modeling structure and behavior.Different modeling notations also possess different complexities.Calculation schemes are proposed by e.g.Recker et al. 2009 [42] and Rossi and Brinkkemper [43].The expressiveness of this measure is limited to the complexity of the pure notation calculated on the number of its elements. ( ) With O being the number of object types, R describing the relationship types and P the property types of a method, C(M) is the resulting complexity of a heterogeneous modeling language. Schalles [44] compared UML activity diagrams for behavioral modeling and UML class diagrams for structural modeling, taking into account the high complexity differences (Table 2). To allow the comparison of notations' complexity, the complexity of the later discussed notations is calculated, too.The complexity of the UML class diagram (Table 2) is nearly double compared to IEC 61131-3, SysML-AT (see also [45]) and CFC showing that between the typical MS notations the difference is very small. Subjects Subjects' qualification and experience is essential for the outcome of the experiment.Sierla et al. [28] and Hajarnarvis et al. [46] included industrial experts.Sierla introduced teamwork with clearly separated tasks similar to a real project team in industry. Many papers on programmers' competencies in modeling and informatics systems application are related to competence models, e.g.[47]- [49].Usually, competencies in this context are understood as abilities, skills, and knowledge-a perspective which is still prominent in most Anglo-American research on competencies.An example is provided by Curtis [50], who proposed that programming results depend on individual personal factors and mental abilities.His model covers intellectual aptitudes, the knowledge base, cognitive styles, the motivational structure, personality characteristics, and behavioral characteristics.Although Curtis did not empirically test his model, the factors show at least facial validity. Other approaches for gaining insights into competencies required for different programming approaches or skills analyze interviews from experts in the questioned domain [51] or evaluate programmers' behavior when performing certain tasks, e.g.programming tasks [52] or debugging tasks [49]. In prior experiments regarding process operators we tried to measure workload using a secondary task (e.g.communication and documentation) as, among others, proposed by Wickens [53], but could not detect significant effects [54]. Because for statistical significance a minimum of 15 subjects per different notation or approach, are necessary [55], it is obviously impossible to conduct experiments with such a high number of experienced application engineers under same conditions. In ergonomics it is usual to conduct usability experiments in engineering design with students of mechatronics or mechanical engineering because they are future application engineers.Maintenance tasks are performed in Germany mostly by skilled workers, therefore apprentices and technicians are appropriate subjects. Training Kim [56] suggests that the complexity of the design task in the training period and the test period should be increased stepwise. 
As Kim and Lerch [56], Ruocco [52] and others (i.e.[57]- [62]) point out, repetition is essential for learning object orientation.Ruocco decided for a stepped approach when teaching UML throughout a computer science program.He found that the application of UML during a database course and the incorporation of use case diagrams, sequence diagrams and activity diagrams led to a richer and deeper exposure to UML. In longitudinal studies, too, teaching beginners or freshmen in computer science or object orientation mostly goes along with repeated training [52] [57]- [62].For training purposes, the pedagogic methods of repetition and fade-out, i.e. decreasing support by trainer from training step to training step, seem to be suitable (see for more details 63).In case of IEC 611131-3 or UML prior knowledge (see expertise in section subjects) acts as a disturbing factor if not equally distributed over subjects.Therefore training is necessary to adapt prior knowledge before conducting the experimental task.Time between training and experiment should be nearly the same for all subjects to avoid time depending differences in results. Overview of Affected Variables (Usability Requirements) In order to perform usability experiments, it is necessary to clarify how usability can be measured (affected variables) and which metrics can be used to make quantifiable statements about the advantageousness of the object of research (given by affecting variables). The standard ISO 9241-11 [64] includes studies regarding use efficiency and the satisfaction of the users suggesting the measurement of product's usability in its context of use.According to this standard concerning usability requirements, the main affected variables, are (1) effectiveness, (2) efficiency and (3) user acceptance (cp. Figure 2). Effectiveness, i.e. the quality of the result depends on the completeness and correctness of an engineered solution.Efficiency is the effectiveness in relation to the effort to engineer a solution.Both measures analyze the models developed by the subjects during the experiment compared with a so called master model [64]. User satisfaction is the scale to which users are free of interference and their attitude to use a product [64].Furthermore, the standard ISO 9241-110 contains dialogue principles for human-computer interaction as attributes to usability requirements.Those principles are suitability for task, for learning, for individualization, conformity with user expectations, self-descriptiveness as well as controllability and error tolerance. Effectiveness-Quality of the Resulting Model, Program In (ISO 9241-11:1998) [64] it is proposed to determine the effectiveness by linking the grade of completeness with the grade of correctness.Bevan (1995) [65] defined effectiveness in a different way as a product of quantity and quality: (With N being the number of nodes, E the number of edges and R the number of errors with index task being the model of the subject and goal being the model of the experts taken as the correct solution in the master model). 
Schalles compared only UML structure and behavior diagrams for business process modeling on an abstract level [44].Applying his approach on MS would fall short.Using different notations modeled solutions are different two regarding number of nodes and edges of the correct master model (see Appendix B for an example).As Strömmann [27] already realized often different correct solutions are created by different subjects, which should be evaluated equally good.Moreover, equality of nodes in a class diagram (abstract representation) and nodes and edges in an activity diagram (low level, object related) are rated equally by Schalles.In MS, the correctness of structural model elements (e.g.classes or function blocks) should be measured differently than the correctness of low level behavioral model elements (e.g.correct steps or transitions from one state to another one [63]) due to the different degree of difficulty and ease of change in case of an error.Because IEC 61131-3 FBD is a language without nodes and edges Schalles approach is not feasible. In accordance with Annett's proposal of an Hierarchical Task Analysis (HTA) [66] the proposed evaluation scheme counts referring to a top down approach all detailed elements modeled by the subject, allowing similar scores e.g. for a combination of class diagrams and created objects in comparison to structure elements in FBD. In order to assess the effectiveness of notations the grade of task completion was used instead of measuring the grade of completeness and the grade of correctness separately before multiplying them.A task is completed, if its solution is logically and syntactically correct.As only correct task solutions, i.e. model elements are counted, it is not necessary to take additional errors into account.This results in the following term: (With N being the number of tasks).This approach has three main advantages: First, it can be applied equally for all kinds of notations as long as the given task can be completed with it and there is no restriction as in [65], that only models with fewer nodes and edges than the master model can be evaluated. Second, the task analysis can be used to select relevant tasks for evaluation and reduce the number of tasks to review, which then can be checked for logical and syntactical correctness, resulting in highly accurate data on task completion. Third, no negative points for errors have to be used to calculate correctness, as this could manipulate results in an undesired way, e.g.errors lead to negative overall efficiency or errors in one part of the model nullify correct solution of others. A fully automated analysis of the student's model compared to the master model is nearly impossible.The difficulty in rating the results of an experiment is comparable to a fair grading of exams by distributing points for correct solutions, but more sophisticated.As a consequence, the development of the correction guidelines for the manual evaluation is required.Points are given by two evaluators independently with a necessary interrater reliability of at least 65%. Efficiency Regarding industrial application the time needed to engineer an automation task correctly is one of the most important measures, defined as efficiency in usability evaluation.The efficiency of a notation can be calculated through a combination of effectiveness and time required for execution of a task (ISO 9241-11:1998). Accoring to Schalles [44], efficiency is defined as: (With effectiveness F and time T). 
If time is a freely selectable variable, this calculation basically provides a good comparison between notations in terms of effect per time.For the experimental design time may be fixed and restricted to a calculated amount of time with GOMS [67] or pre-experiments similar to an exam or left open for subject's decision, delivering the modeling results when they feel ready.Fixed timing implies that the effectiveness measure already includes a statement on efficiency. Nevertheless time should be recorded to allow analysis of modeling performance over time.Automatic storage of the results in short time intervals allow, e.g.all 5 min or 7 min.the analysis of effectiveness per time in a more specific way. User Acceptance-Subjective Aspects Another possibly affected variable is the subjects' acceptance of the used notation (or tool) and the automation task when executing the task.Here, usability questionnaires based on the standard DIN EN ISO 9241 are best practice. Moreover, aspects as the subjects' mental workload, control belief or motivation can be elicited (see Section 5, Section 4.7 for details). For later analysis the affecting and affected variables and their relations will be evaluated to provide results for the comparison of the notation. Selected Usability Studies In the following Section usability studies (4.1) and usability experiments (4.2 -4.8) with focus on different automation tasks are introduced and classified according to Section 3 (Table 1).The engineering task is classified as structural and/or behavior modeling task.The automation task's complexity is given by number and type of I/O, as well as weighted methods of class and number of variables in case of a classical PLC programming approach using IEC 61131-3 FBD.The five related experiments are stronger related to case studies and to industrial application. The seven experiments by the author's team (4.2 -4.8) highlight different complex automation tasks and different automation systems characteristics as given in Table 1.[68] demonstrate how task analysis could be usefully applied for the preliminary assessment of the effectiveness and perhaps even the efficiency of logic control design methodologies.Lucas [67] calculated the time to create a simple logic design program on the basis of low level user operations, e.g.keystrokes, mouse clicks and mental operations, for IEC 61131-3 Ladder Logic Diagrams (LL 405 min), Petri Nets (PN 1100 min) and modular Finite State machine logic (mFSM 1500 min) showing the significant difference given by the notation itself.To derive the necessary steps and the used strategies, i.e. copy & paste, manual copy, they observed engineers during the design process and surveyed the time needed.Moreover, Lucas and Tilbury [33] [67] provide a way of comparing the complexity of control logic models respectively code of a simple lab scale MS created with the above mentioned notations plus SIPN by analyzing existing programs.They introduce quantitative measurements of complexity of a piece of code: size (i.e.number of operations and state variables), modularity (number of modules) and connectedness.Additionally, they introduce four typical scenarios for accessibility of data from a programmer's point of view, i.e. 
1) single output debugging (specific questions regarding specific unexpected behavior in the machine),
2) system manipulation (how the user can manipulate the machine to achieve a desired state),
3) desired system behavior (the desired behavior of the machine when examining only the schematics and the logic) and
4) unexpected system behavior (the system's response to unexpected events).

Because all these questions refer to already existing code, they can be categorized as maintenance tasks. The four notations evaluated are compared regarding the four scenarios, showing that Ladder Logic is still most appropriate for the first two but hard for Scenarios 3 and 4, whereas Petri net, SIPN and mFSM are rated moderate or easy in Scenario 3, but minor in Scenario 1. LL is the smallest but very interconnected program and mFSM the most modular, although the largest one.

Experiment O.3 and O.4: Reusability Strategies

Strömman et al. [27] compared IEC 61499 with IEC 61131-3 in logic control design to foster reuse. Professionals and researchers acted as subjects programming a lifter application during a workshop. The resulting solutions differ totally, showing different types of approaches, e.g. reuse of existing ST code copied into an IEC 61499 frame, reuse of a design pattern, i.e. a state diagram, a mechatronic approach and a classical IEC 61131 function block approach, concluding that guidelines for using IEC 61499 are required, as well as an environment that fosters collaboration and the exchange of information. The results were gained by model comparison and written feedback. Beforehand, interviews were conducted to reveal the relevance of the study. Design approaches are context-dependent, i.e. they depend on the background of the designers, the existence of legacy software as well as business goals etc. Based on this experience, in experiment O.4 Sierla et al. [28] organized a course on IEC 61499 in 2005 to enable twenty practitioners and researchers to propose and negotiate design alternatives in a team context with recorded interviews. In a second course in 2006, professionals (3 subjects), researchers (3 subjects) and a standardization worker worked in a team representing the different social groups in a project, evaluating the impact of team organization, knowledge integration and software development method by an interview after the course. The benefit of a modular structure was realized, as well as the risk of combining continuous control loops with sequential batch control logic. The necessity of shared guidelines, design patterns and tool support was highlighted in more detail, especially for batch control systems.

Experiment O.5: Change of Sequence

Hajarnarvis et al. [46] compared 63 subjects applying different methodologies for changing the sequence of a given simple program, i.e. contact logic, step logic, SFC and EC. The participants had to change the sequence of a simple task with three motors and one valve. The authors identified different main problems, e.g. insufficient modifications for all but EC and an incorrect algorithm for SFC and EC. The results are separated according to the participants' background, i.e. maintenance, planners, programmers and Rockwell personnel compared to the untrained.

Experiment E1-Pure UML 1.4 and PLC Programming-Exploratory Study

The series of experiment E1 explored the influence of group work compared to individuals, the influence of prior experience in PLC programming and modeling, and differently qualified subjects, i.e.
bachelor students of electrical and information engineering, students integrated into companies (StiP) and technicians, modeling and programming a pick & place unit [69]-[71] (see Figure 3). As affected variables, the number of steps realized and their correctness were evaluated compared to a master model. The notations compared are UML, ICL and a control group only using the S7 PLC programming languages IL, LL and FBD.

The results regarding the quality of the model, i.e. an error rate of 43.84%, are disappointing. The high impact of the qualification level on the number of realized steps and errors is significant (see Table 1, E1 results). The influence of prior knowledge, which in this experiment is only based on a subjective rating in a questionnaire, is evident, too. In this experiment, prior knowledge halves the errors. Subjects rate the applicability of both UML and ICL for modeling structural aspects as very poor and for behavior as fair. Comparing groups (2 subjects) with individuals, groups reach a higher number of modeled steps (23.44 compared to 15.04 for individuals; p = 0.01), but unfortunately the error rate is not significantly reduced. The experimental results, e.g. the identified errors in the developed models (see Figure B1), are used as input for the further development of UML for MS (E2, E3, E4, E5). The poor models and the high error rate reveal an insufficient training and experimental design, but also the weakness of pure UML 1.4 as a modeling notation. Subjects asked for a reduced number of diagrams with a clear procedure for UML modeling and a tool to support modeling with integrated code generation, because paper and pencil is not accepted.

Experiment E2-Deployment Using Pattern and UML-PA

Based on the results of E1, a domain specific language UML-PA was developed with a reduced number of diagrams and domain specific stereotypes [72]. The research question was to prove the benefit of such a domain specific language under architectural aspects, i.e. regarding the deployment of control loops and the related sensors and actuators connected via a field bus. The subjects should identify correct patterns and connect them to model the system from sensors to actuators, including its deployment and communication relations. For this reason, UML-PA provides ports to model communication interfaces in a so-called instance structure diagram. The modeling approach using UML-PA and its instance structure diagram is compared with UML 2.0 diagrams, i.e. class diagram, component diagram, composite structure diagram and deployment diagram. As automation task, a simplified real continuous hydraulic press was chosen with 30 control loops to be switched between distance control and pressure control in case of overpressure. Each valve is equipped with a distance sensor to measure the valve opening, and each control loop with a pressure transmitter. As additional input, the press operator sets the set values of the pressure in the cylinder connected to the valve. The controller's outputs are the set value of the valve position and, to the HMI, the valve opening (see the sketch of this switching logic below). UML participants checked their results after 1.78 changes and took the results as guidance to find an appropriate solution; UML-PA subjects checked their solution after 3.9 changes [72] (see also [73] for further information). The subjects properly analyzed the task and selected the given patterns, establishing the required communication more efficiently with UML-PA compared to UML 2.0 (see Table 1, E2), which is easy to understand due to the additional effort, i.e. diagram changes, needed in UML 2.0.
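The switching behavior of this press task can be sketched as follows; this is only an illustration of the task description, with the overpressure threshold, controller gains and signal names assumed rather than taken from the experiment material:

```python
# Sketch of one of the 30 control loops: distance (valve opening) control by
# default, switching to pressure control in case of overpressure. Threshold
# and controller gains are assumptions for illustration only.

P_MAX = 250.0  # bar, assumed overpressure limit

def control_step(distance, pressure, distance_set, pressure_set):
    """One control cycle: returns (active mode, valve set value)."""
    if pressure > P_MAX:
        # Overpressure detected: switch the loop to pressure control.
        return "pressure", 0.02 * (pressure_set - pressure)
    # Normal operation: control the valve opening measured by the
    # distance sensor; the valve opening is also reported to the HMI.
    return "distance", 0.5 * (distance_set - distance)

mode, valve_set = control_step(distance=4.2, pressure=260.0,
                               distance_set=5.0, pressure_set=240.0)
print(mode, round(valve_set, 3))  # -> pressure control due to overpressure
```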
The idea of UML-PA's instance structure diagram is included in the SysML-AT approach discussed in E7. The identified breaks and the time needed to understand the relation between different diagrams need to be optimized regarding the improvement of MDE (see E5). Subjects criticized the restricted tool. The restricted tool support encouraged students to follow a trial and error strategy, which is unacceptable in a real industrial application.

Experiment E3-Error Handling Using plcUML SC vs. IEC 61131-3-SFC

Fulfilling the requirements from E1, i.e. a reduced number of diagrams with tool support and code generation, Witsch and Vogel-Heuser developed a prototypical plcUML editor implementing the UML class diagram and state chart in a real IEC 61131-3 run time development environment with integrated code generation in CoDeSys 3.x [74] [75] (see also [1]). The plcUML diagrams are integrated, similar to SFC, as an additional language transformed internally into an ST language derivate. Yang et al. [76] applied orthogonal regions in UML state charts to model primary system functions and corresponding traversal features and concurrent behavior. Witsch et al. [74] introduce composite states as groups of states allowing to model error behavior for those grouped states. Evaluation with experts showed the strength of the composite states for error handling as well as for modes of operation, the focus of experiment E3.

The experiment validates that using state charts is more efficient than using classical SFC in IEC 61131 to cyclically check sensor states regarding inconsistency as well as timing errors in a single moving cylinder, i.e. a cylinder component of the pick & place unit (cylinder in Figure 3). The mean number of steps programmed per minute using state charts with composite states was 1.98 points/min compared to classical SFC in IEC 61131-3 with 1.41 points/min, given the same points to be reached for both solutions [77]. The modeling speed of the SC group was significantly higher than that of the SFC group, even if the SC subjects didn't use composite states.

The benefit of composite states is evident for error handling (see Figure 4, left, and the sketch at the end of this section), i.e. in SC the error handling for all states can be handled by one exception transition out of a composite state instead of multiple transitions, i.e. error handling activities after each activity. If an error in the exception handling algorithm is identified or an additional condition needs to be included, modifications to the process can be covered in one path in SC, compared to multiple paths in SFC (cf. Figure 4, right). Subjects using composite states estimate their programming experience higher than those who didn't use composite states. Many subjects criticized the absence of an automatic placement of elements in the tool, a side effect of the plcUML condition. In this experiment, only exception handling was evaluated with a prototypical tool. A more general design is discussed in Experiment E5 using plcUML with a more mature tool version.
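The benefit of composite states can be sketched with a minimal state machine: all states inside a composite group share a single exception transition instead of one error branch per step, as in a flat SFC. The states and the error predicate below are illustrative, not taken from the experiment:

```python
# Minimal sketch of error handling via a composite state: every state in the
# "automatic" group is guarded by one shared exception transition, comparable
# to a single exit transition out of a composite state in a state chart.
# States and the error predicate are illustrative assumptions.

AUTOMATIC = {"extend", "grip", "retract"}  # members of the composite state
NEXT = {"extend": "grip", "grip": "retract", "retract": "extend"}

def sensor_error(sensors):
    # Inconsistent end-position sensors of a single cylinder.
    return sensors["front"] and sensors["back"]

def step(state, sensors):
    # One exception transition covers all grouped states; a flat SFC would
    # need an error branch after each step instead.
    if state in AUTOMATIC and sensor_error(sensors):
        return "error_handling"
    return NEXT.get(state, state)

print(step("grip", {"front": True, "back": True}))   # -> error_handling
print(step("grip", {"front": True, "back": False}))  # -> retract
```

Extending the error check then touches a single transition rather than every step of the sequence.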
Experiment E4-Sequence of Structure and Behavior Modeling in Workflow Using UML with Elaborated Training Concepts

The research question to be answered is whether subjects can successfully be forced to model structure when asking them to model structure before behavior, or whether behavior first is a good strategy for engineers to achieve proper model quality. Therefore, a training concept as well as a subsequent experiment was developed together with researchers from instruction theory [78]. In a pre-experiment for E4, the main focus was to reveal whether the order of modeling is important for the quality of the model. The assumption was that students start with behavior modeling because it is easier for them, and then run short of time before finishing the structural model.

The pre-experiment was conducted without tool support, only with paper and pencil, after a training realized by a lecture and exercise in very large classes (bachelor students, 2nd semester mechanical engineering). The subjects were split into two groups: one group was told to start with structure modeling, the other with behavior modeling. It showed that 35% of the subjects had problems creating suitable classes from similar objects of a plant, including their attributes and methods. Examples of typical errors in the class diagram were (error rate in %):

• Objects were listed in addition to the classes, which inherit from the classes (23%).
• Classes were used in which objects of the class occur as attributes (7%).

Overall, no significant differences concerning the modeling order could be found, but significant differences with respect to the trainer, as the two groups were trained by different teachers. In order to eliminate that confounding effect, in the main experiment one trainer trained both groups. In that study, which has not yet been published, a larger sample (102 subjects) has been tested using the same procedure and task as described for the pre-experiment above. Here, the average participant reached 19.97 out of 46 points, i.e. lacked 26.03 points (SD = 9.1819). Regarding the performance measures, the "behavior first" group scored remarkably higher than the "structure first" group: While the participants in the "behavior first" group achieved 23.4 points on average (SD = 10.326; SE = 0.982), the mean value of the "structure first" group was only 18.4 out of 46 points (SD = 8.220; SE = 1.825). In the "structure first" group, the subjects reached only a mean of 6.14 out of 24 points in behavior modeling (SD = 5.083; SE = 0.607); in the "behavior first" group, however, the average behavior modeling performance was 12.25 points (SD = 6.112; SE = 1.080). This difference is highly significant (T = -5.278, df = 100, p = 0.00; a sketch of such a comparison follows at the end of this section). As a result for the next experiments, we learned that the classroom training was not suitable enough and that forcing students to follow a specific modeling order is not helpful to improve structural models.
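A group comparison of this kind can be reproduced along the following lines; the sketch uses synthetic score vectors drawn around the reported means and standard deviations, with group sizes assumed, so it does not reproduce the original data:

```python
# Sketch of the reported comparison: an independent-samples t-test on the
# behavior-modeling scores of the two groups. The score vectors are synthetic
# stand-ins generated around the reported statistics; group sizes are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
behavior_first = rng.normal(loc=12.25, scale=6.112, size=32)
structure_first = rng.normal(loc=6.14, scale=5.083, size=70)

t, p = stats.ttest_ind(behavior_first, structure_first)
print(f"T = {t:.3f}, p = {p:.4f}")  # significant if p < 0.05
```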
Experiment E5-plcUML vs. IEC 61131-3 FBD with Apprentices Optimizing Training, Design and Analysis of Results-Exploratory Study

In this experiment, the superiority of UML compared to FBD in a design task should be demonstrated, with a sophisticated training, repetitive application of the notation, the ß-version of a UML tool (called plcUML), a complex open loop control task and apprentices as subjects. To allow further analysis of the relation between modeling results and subjects' abilities, and the development of an individual training fitting individual abilities in a next step, selected abilities were collected as well as user acceptance. As control task, a sub-part of the pick & place unit with multiple reuse (only open loop control, weak real time requirements, no communication requirements) should be modeled, i.e. three storage elements with one storage cylinder pushing the work pieces out of the storage, and five different terminals with a terminal cylinder each, pushing the work pieces into the terminal. Because in industry very often skilled workers conduct maintenance tasks and even easy design modifications, 1st and 2nd year apprentices from a vocational school in Munich (89 subjects) acted as subjects. Selected results of this experiment have already been reported in [63].

A hybrid learning environment (HLE), allowing switching between computer-based and conventional instructional designs, was developed and implemented. During training, the groups repeatedly exercised programming and modeling tasks with increasing complexity (named fade out). Several affecting variables related to abilities were obtained, i.e. grades in mathematics, German, automation and mechatronics as well as cognitive capabilities, motivation levels, challenge and workload (single instruments are described in [63]). As performance variable, the programming/modeling achievement was evaluated. To obtain this value, the developed models/programs were stored (every 5 min) and analyzed manually by two evaluators, who compared them to a master model. The subjects' performance was measured as the number of correctly modeled or programmed elements and compared with respect to structure, e.g. classes or FBDs, on the one hand, and behavior, i.e. state charts and FBDs, on the other (for details see Appendix A). Unfortunately, the results were disappointing, because an overall significant benefit of plcUML compared to FBD could not be detected, but nevertheless interesting results could be found, e.g.:

• OO modeling and FBD programming show different relations to variables like cognitive abilities, experience, workload and knowledge; the students' performance in the plcUML/CD + SC groups seems to be less related to previous knowledge and cognitive abilities than the students' performance in the 61131/FBD groups [63].
• Subjects needed different times for structural modeling using UML/CD vs. FBD (see master model Figure 5). Subjects needed on average 6.22 minutes more time for UML class creation in comparison to the time needed to create the FB structure. This difference is slightly not significant (ANOVA, F(1, 81) = 3.60, p = 0.06), cf. [63].

On the basis of the unexpected results, further analyses of the models, the modeling process and the relation between model and results as well as subjective results have been conducted. Analyzing the main errors, especially the errors in the structural model, i.e. the classes built:

• 42 subjects out of 44 built classes (including superfluous ones) as part of the structural model;
• 23 out of 42 used these classes in their behavioral model;
• 31 out of 42 modeled a second cylinder class separating storage cylinder and terminal cylinder, but 14 out of those 31 subjects built the second class identically besides the name; this indicates that they understood the class concept but use another type of abstraction, which is more related to the mechanical structure, i.e.
a terminal and a storage cylinder are different, instead of the software view in which both cylinders are identical.

Analyzing tool and training effects gathered from the subjective rating in the questionnaire (Figure 6), 27 subjects of the plcUML group asked for additional training. From the observation of subjects and the analysis of the time needed, the authors expected the abstraction needed to build classes and the relationship between CD and SC to be the main challenge, because in the UML groups long thinking breaks occurred before modeling classes. In the questionnaire, only 5 subjects mentioned that the development of classes is difficult, which is surprising regarding the above mentioned errors in building classes and the thinking breaks. Regarding tool aspects (item 1.2 positive and item 2.5 negative in Figure 6), the plcUML tool seems to have some more problems. Further subjective results gained from the questionnaires were: 1) Frustration levels were significantly higher in the UML group compared to the FBD group (p = 0.02); 2) the clearness of FBD was rated significantly higher than that of UML (p = 0.017); 3) behavior programming was rated significantly easier with FBD than with UML (p = 0.012); and 4) the subjective quality estimation and the factual quality match far better with UML than with FBD (p = 0.025).

Because of the observed thinking breaks, we analyzed the modeling progress over time (points over time) in a random sample of only three subjects per group (with similar quality of model). We found differences between the plcUML and the IEC group: in the plcUML group there is a longer period of time until points referring to the master model are gathered, and there is a clear ramp in points compared to the more steady increase of points in the FBD group (Figure 7). For further experiments, a detailed analysis of the modeling progress is needed and, therefore, the cycle time of storing data needs to be reduced and a more efficient approach of analyzing subjects' modeling process over time needs to be developed. Details for the analysis and rating of subjects' models compared to the master model are given in Appendix A. Subjects debugged at different times, some at the beginning and others at the end of the experiments with a nearly complete model. The analysis of debugging is necessary to find errors and will be a focus of future work.

The design of the experiment including training and data analysis was appropriate, delivering detailed relations between abilities and model quality, but still revealing shortcomings of plcUML as a notation for apprentices in design tasks. Our assumption is that the necessary abstraction to build classes is too high for this group of subjects. These results fit the notational complexity of class diagrams according to Schalles. Therefore, in further experiments technicians and engineers will be included as subjects with a higher level of knowledge and experience in PLC programming. Additionally, different levels of task complexity will be tested.

Experiment E6-Maintenance Task in Early Phases of Notation Development with SysML-AT vs. Continuous Function Chart (CFC)

The research question of experiment E6 is how to evaluate three notations in a qualitative way in a very short period of time for training and experiment in an early phase of the development of a notation. E6 evaluates different modeling and programming notations (see also [16]), i.e. the Parametric Diagram (PD) of SysML-AT [26] vs. Continuous Function Chart (CFC) and IEC 61131-3 Structured Text (ST), regarding a maintenance task, i.e.
the understandability (analysis and interpretation according to [31]) of model contents in a qualitative way. The experiment was based on three different simple models of physical laws (about 4-5 sub-blocks and 7-8 variables), with each model described in every considered notation. Because the evaluation should take place in an early design phase and the time needed should be very short, tool support is not applicable. Bachelor students of mechanical engineering worked without a tool after a very short training, passing all three different notations and all three models. The sequence of the notations was permuted for each subject (see Figure 8) to eliminate learning effects (a sketch of this counterbalancing follows at the end of this section). As a software maintenance scenario, the subjects had to correctly interpret the models' contents, consisting of components (sub-blocks and variables) and data flows, in order to answer questions regarding the model contents correctly. The mean of correctly answered questions was highest for the PD (68.25%) with a positive offset of 3.97% to ST (64.28%) and 4.76% to CFC (63.49%) (see Table 1, E6). The experiment shows that even a short training with a short time for the experiment and a small number of subjects delivers qualitative results. In accordance with the results, in questionnaires that tested the subjective cognitive demand, the subjects rated the PD as the most understandable notation. Furthermore, all of the subjects answered that they experienced a learning effect regardless of the different notations they used. The results of a focus group that was conducted for additionally evaluating the SysML-AT [25] also indicated that the developed modeling approach is well suited for automation software modeling.
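The counterbalancing used in E6 can be sketched as follows; the subject count and IDs are illustrative:

```python
# Sketch of the counterbalancing in E6: the order of the three notations is
# permuted across subjects so that learning effects cancel out over the
# sample. The number of subjects and their IDs are illustrative.
from itertools import permutations

notations = ["PD", "CFC", "ST"]
orders = list(permutations(notations))  # all 6 possible sequences

subjects = [f"S{i:02d}" for i in range(1, 13)]
assignment = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}

for subject, order in assignment.items():
    print(subject, " -> ".join(order))
```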
Experiment E7-Conceptual Engineering of Structural Aspects of Distributed Networked Automation Systems (NAS)

The research question in this experiment is whether additional support in the structural modeling of NAS, realized with characteristics and patterns, is beneficial in the conceptual design, or whether the resulting complexity hinders the benefit. Besides, the instruments regarding user acceptance are more elaborated and should give answers in more detail in relation to the quality of models. As the results of E5 and E6 show, plcUML and SysML-AT have a positive influence on the programming of a PLC. Following Sierla [28] and the difficulties identified for the engineering of distributed systems, E7 evaluates a SysML-AT based notation and workflow vs. CFC for a high-level design of NAS in MS (see also [16] [45]). The evaluated approach focuses on the overall design of NAS integrating the notation SysML-AT, the successor of plcUML. The SysML-AT based concept contains a workflow procedure referring to a life cycle model (following requirements from E1) including communication and real time requirements for a hybrid control task (experiment E7 a). Additionally, characteristics (E7 b) and characteristics plus patterns (E7 c) are compared to the pure notation and workflow procedure. Conditions b and c are only qualitative measures because of the small group size.

The approach covers the modeling of automation hardware and software as well as of functional and non-functional requirements. From the described requirements, the functions that need to be implemented can be derived and captured within the same model. Hardware elements like sensors, actuators and nodes and their interfaces and properties are considered within the modeling approach as well. This enables the integration and linking of hardware and software models [79]. The notation is based on the SysML Block and Requirements diagrams, using ports to represent software and hardware interfaces (refer also to [80]). The duration of the experiment was not restricted, but taken as a measure (mean given in Table 1, E7).

Characteristics as well as patterns supported subjects in solving the task, i.e. the design of the automation concept of a coking plant including belt synchronization without implementation. The main task of the experiment was to conceptually design a closed loop for speed synchronization which included three belts. This comprised all necessary functions, interfaces and relations to the sensors and actuators. The internal behavior and control algorithms were not required, i.e. the structural part of the model needed to be designed. Characteristics detail requirements as well as the later design solution including element relations. During the design, the comparison of requirement characteristics and solution characteristics helps to decide if the solution fits the requirements (cp. Figure 9). Additionally, patterns, divided into functional and deployment patterns, help to find a solution. Functional patterns include proposals and support the engineer in the development of the functional model. Deployment patterns indicate distribution alternatives of functions and support the engineer in the development of the deployment model [82].

The models the subjects stored after finishing the given task were analyzed and compared to a master model. The results show a major difference between the subjects' solutions and the master model, i.e. the experts' best practice regarding module structure. Similar to Sierla [28], different possible solutions were detected, i.e. most subjects chose a function-oriented modeling approach instead of a mechatronic approach taking modularity, reuse and architectural aspects of NAS into account. The experiment intended that students follow a mechatronic approach; therefore, the master model was built realizing the mechatronic paradigm. In further experiments, either the mechatronic approach needs to be integrated into the training or the subjects' mental models need to be collected beforehand.
Nevertheless, the experiments show a significant benefit of SysML-AT compared to CFC (see Table 1, E7 a to c, column results). Regarding the notation with life cycle model, i.e. NM, subjects gained significantly better models compared to CFC (a mean of 123.1 out of 182 max. points, with the best subject gaining 144 points, see Appendix C). Using characteristics additionally (NMC), subjects improved their models again. But for those experiments only qualitative results are available due to the limited number of five subjects. Regarding user acceptance measures, subjects stated less mental demand using patterns (see Table 3), i.e. NMCP has the lowest mental demand with 10.75 of max. 20 points (the higher, the more mental demand). The motivational factor "fear of failure" was most pronounced in the group with patterns and characteristics (NMCP). Furthermore, this group showed high external control beliefs, meaning that subjects strongly related the outcome of the results to external circumstances, and high fatalistic externality, meaning that success is assessed as depending on fate, fortune and chance; however, subjects perceived low mental demand during task performance. In addition, according to usability aspects, suitability for the task was rated best for the group with characteristics (NMC). Suitability for individualization of patterns was rated significantly lower than in both other conditions.

Based on UML-PA and E2 as well as the experiences gained and rules derived from E5 (including task development, training and tool development) and experimental design in general (see Section 6), the experimental design of E7 was developed appropriately, also evaluating the derived rules (see Section 6). E7 evaluated the benefit of NM for NAS and hybrid control with real time and communication requirements. Even the relations to abilities realized in E5 could be further developed with a more advanced questionnaire. Results reveal more relations to human factors, e.g. mental workload, and usability measures. For further engineering support, the challenge is to find a compromise between the support by characteristics and patterns and the approach's complexity.

Summary of the Experiments

All experiments focus on the design phase besides E6, on a centralized single PLC as control hardware besides E7, and on students as subjects besides E1 and E5.

• E1 was the first experiment exploring the method of usability evaluation in logic design engineering with a single closed loop controller and compared pure UML 1.4 and PLC programming in a first attempt, without the support of an engineering tool and with a large unstructured task.
• E2 focused on a hybrid automation task including communication, with the focus on supporting deployment by simple UML-PA patterns compared to classical UML 2.0, with restricted tool support and a narrow engineering task.
• E3 focuses on error handling, comparing the plcUML State Chart to the Sequential Function Chart (SFC) in IEC using a very simple automation sub-task and a short classical training.
• E4 focuses on SC vs.
IEC 61131-3 SFC and the sequence of structure and behavior modeling in the workflow using UML 2.0, with a didactically more elaborated but classically conducted training concept in smaller sub-groups, with the goal of increasing the quality of the structure model.
• E5 is, similar to E1, an exploratory experiment further developing the method of usability engineering experiments, using a real software engineering tool with embedded UML, the so-called plcUML, compared to IEC 61131-3 FBD, with apprentices, optimizing repetitive training and exercise, an elaborate training environment and a smaller automation task with a reusable sub-process, including also human factors and prior knowledge.
• E6 focuses on a maintenance task in the early phases of notation development with SysML-AT vs. Continuous Function Chart (CFC) to show the benefit of easy and quick sub-experiments in the development process of the notation.
• E7 focuses on the conceptual engineering of structural aspects of distributed networked automation systems (NAS), including a procedure for life cycle support and characteristics for pattern selection and reuse, with a detailed analysis of user acceptance including motivation.

In every single description of an experiment, the research questions as well as the most important aspects of the experimental design, results and lessons learned regarding usability aspects are discussed, as well as results for the further development of MDE, i.e. notation, procedure and tool. Most experiments are based on prior experiments, and notational developments resulting from a prior experiment are tested in one of the following experiments.

Selected Results for Future Usability Experiments

The following section summarizes the best practice rules gathered to the best of our knowledge. At first, the criteria for the selection and configuration of the affecting variables (see Figure 1) are discussed, e.g. the task, the training and the selection of a group of subjects. Afterwards, the criteria for selecting the affected variables are discussed (see Figure 2).

Task Development

As affecting variables (see Figure 1), the type of the engineering task (maintenance or design) and the automation task's complexity and characteristics are key issues in relation to the complexity of the new notation or approach to be evaluated and the time available for training and the experiment itself.

1) Automation Task

To classify or rank the automation task complexity compared to other experiments and to estimate the time needed for training as well as for the task itself in the experiment, the authors introduced some measures, i.e. the number of I/O, the number and type of control loops and, depending on the used notation, the WMC and number of states for OO design and the number of FBDs and variables for classical PLC programming using IEC 61131-3. Besides that, the task characteristics, i.e. real time and communication requirements, and the task's type, as well as the inclusion of exception handling (E3) and modes of operation, are relevant, too. In the above introduced experiments, the WMC ranges from 3 in E2, a strongly restricted experiment using patterns, to 43 in E5 and 45 in E1 in more industry-related scenarios. It is obvious that a complete engineering task consists of a lot of decision points with different ways to a correct solution. These variation possibilities need to be covered by an evaluation scheme.
2) Engineering Task

Starting with HTA or GOMS, the steps required to fulfill the task are found. The quality of the HTA depends on the skills and experience (also industrial) of the experts conducting the HTA. Interviews with industrial experts are helpful to find appropriate subtasks as well as typical module libraries available to be provided in the experimental setup. Modeling mostly consists of structural and behavioral aspects. In most of the experiments described above, structure and behavior were an issue (Table 1, column engineering task). All experiments besides E6 dealt with design and model creation (E2: model configuration). E6 highlights maintenance tasks and showed that tool support may be neglected for easy tasks, and that training may be very short compared to design tasks.

For both modeling and training, the designer has to decide whether to provide a life cycle model or even a method and a tool. For more complex engineering tasks, a tool is a prerequisite to gain the subjects' acceptance (not reached in E1, E4 and E6) and motivation. On the other hand, a prototypical tool (E3) leads to results that may be induced by the tool and not by the notation to be evaluated. Sophisticated tools need additional time for training. A prototype such as plcUML or SysML-AT, therefore, needs to be carefully tested by novices and persons belonging to the qualification group of future users before conducting the experiment, to ensure an effective detection of as many defects of the tool as possible prior to the experiment, since otherwise frustration will rise and may act as a disturbing factor in the experiment (E7 c).

Development of Training

As discussed above, an appropriate training is a prerequisite for meaningful results, but hard to achieve in the first experiments (E5, not E7 c). A hybrid learning environment is advantageous to reduce disturbances by individual trainers as in the pre-experiment of E4. Furthermore, process simulation offers high benefits for testing and debugging the software. For more complex notations and procedures, e.g. OO and UML, repetitive training with fade out is beneficial (E5). A training period of 1.5 days for OO with apprentices as subjects and 0.5 days for E7 a) with students as subjects was appropriate. With a very simple task or a strictly focused hypothesis and a restrictive tool, a significantly shorter duration can be reached (E2 and E6).

Selection of Subjects

Besides E1, we decided for individual subjects to allow the identification of reasons and dependencies on individual abilities. This excludes examining the benefits of group work as found in Sierla [28]. In the field of MS engineering, students are a typical group of subjects for design tasks, as are technicians and apprentices for maintenance tasks and simple design modifications at the customer's site. The necessary number of subjects per cell to gain quantitative results is a minimum of 15. Different skills and abilities, e.g. in mathematics, are often related to results and act as disturbance factors. Pre-tests are recommended to adjust the distribution of subjects to groups regarding expertise and abilities. Different tests are available (E5) or adaptable (e.g. on general intelligence [83] or on previous knowledge). Missing or insufficient motivation may also be a disturbing factor, as realized in experiment E7 c. Also, mental workload, i.e.
the cognitive demand perceived during modeling tasks, is a critical factor for the probability of errors and, therefore, should be at an intermediate level (E5 and E7). When analyzing specific aspects of a notation in more detail after the main experiment, group sizes from 6 to 8 are regularly implemented to get qualitative results. In E6, the sequence of the notations was permuted for each subject to eliminate learning effects, instead of using one notation per group, which in the case of E6 would have multiplied the necessary number of subjects by three.

Measuring Affected Variables/Usability Requirements

To analyze the gained results and to evaluate them, master models are recommended, developed by the designer of the experiment together with other experts.

Data Collection-Organizational and Technical Challenges

For the data analysis, observation and recording of the subjects' results are most important. The easiest way to observe subjects is to take a video, but the manual analysis of the video is time consuming. In engineering tasks using an engineering tool, the most often implemented strategy is to store the model cyclically at a selected time interval (every 2 or 5 minutes; in E5 and E7, 5 min) or whenever a new input is typed into the model (E2); a sketch of the cyclic strategy follows at the end of this subsection. The cyclic storing strategy has the disadvantage of losing information in between storing intervals, similar to the sampling of an analogue value. Storing the model with every subject's input has the disadvantage of large amounts of data, which need to be analyzed later. The strategy may not be easy to integrate into real tools, as necessary when using a ß-version of an industrial tool (in E5). The challenge is to implement storing strategies in the prototype or to get access to a market-leading tool in case it should be used for evaluation. The CoDeSys implementation was easy to realize for the authors' team due to the gathered developer's knowledge of the plcUML plugin. In addition to model analysis, human observers are advantageous, especially in the case of pre-experiments, to include additional information gathered by observation. Unfortunately, this is expensive, because the observers need to be trained, the observation needs to be documented in a standardized form, and approximately 1 observer is required for 2-4 subjects. In E5, long periods of thinking breaks in the OO groups before building classes were found and included in further analysis. The analysis of the results gained, i.e. model consolidation over time, seems to be useful, but depends on the availability of data and the ease of analysis. In psychology, thinking aloud is an often implemented method, which is often not accepted and applicable by engineering students (E1). Another issue is to gain information on why subjects make mistakes or choose a specific solution. To a certain degree, this information may be gained by individual interviews or online questionnaires directly after the experiment (E4). In E4, subjects were asked to analyze their solutions compared with the master solution and give reasons for their mistakes, e.g. lack of time, translation problems, distraction etc. The method is promising, but hard to realize with large groups of subjects because of possible interviewer effects with regard to the questions asked.
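A cyclic storing strategy of this kind can be sketched as a background autosave; the interval, file layout and the model accessor below are assumptions, not the CoDeSys implementation:

```python
# Sketch of cyclic model storage during an experiment: a background thread
# saves a timestamped snapshot of the subject's model at a fixed interval
# (5 min as in E5/E7). Paths, interval and the model accessor are assumed.
import json
import threading
import time
from pathlib import Path

def start_autosave(get_model, subject_id, interval_s=300,
                   out_dir=Path("snapshots")):
    out_dir.mkdir(exist_ok=True)

    def loop():
        while True:
            time.sleep(interval_s)
            stamp = time.strftime("%Y%m%d-%H%M%S")
            snapshot = out_dir / f"{subject_id}_{stamp}.json"
            snapshot.write_text(json.dumps(get_model()))

    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread

# Usage (hypothetical editor API returning the model as a dict):
# start_autosave(lambda: editor.current_model_as_dict(), "S07")
```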
Effectiveness

Usability evaluation concerning the affected variables, i.e. effectiveness, efficiency and user acceptance, was realized with different methods. To assess effectiveness, completeness and correctness are measured by counting the number of correct steps compared to the master model, e.g. in the behavior model, e.g. a state chart, the number of steps; in the structure model in FBD, the number of variables; and, in a class diagram, the number of classes and objects (for the evaluation scheme of E5 see Appendix A). The difficulty in rating the results of an experiment is comparable to grading exams by distributing points for correct solutions, but more sophisticated. Points are given by two evaluators independently with a necessary interrater reliability of at least 65% (E5, E7; see the Appendix).

Efficiency

To evaluate efficiency, time stamps need to be included in the stored data and analyzed, or, as mentioned in 1), the cyclically stored data are taken to analyze efficiency over time. In most experiments, efficiency is effectiveness in the given period of time subjects got for the experiment. In most experiments, time was limited due to organizational reasons, besides E7, where time was taken as a variable: when subjects felt ready, they submitted their solution and the time needed was stored.

User Acceptance

For the evaluation of user acceptance in all of the above described experiments, questionnaires based inter alia on EN ISO 9241 and on recognized tests such as RSME [83] and NASA-TLX [84] were implemented and further developed from one experiment to the next, to analyze subjective values regarding modeling as such, the notation evaluated and/or the tool used, e.g. in E1 and E5. Furthermore, an extended evaluation of attributes for usability requirements examined by an EN ISO 9241-110 questionnaire (E7) was additionally used to collect the users' assessment of the applicability of patterns and characteristics. Results revealed suitability for the task and for individualization as appropriate indicators of difference. Questionnaires regarding the notation and tool may also reveal weaknesses of training and notation (E5, class concept).

Selected Results for the Development of Future Notations for Model Based Software Engineering

In MS, hybrid control tasks with real time and communication requirements of different complexity need to be engineered during design and maintained during operation, covering structure and behavior in MS models. From the results of E1, we realized that pure UML 1.4 with its five diagrams used in E1 is confusing and not appropriate, especially for structure models. Additionally, embedded tool support in PLC development environments and a procedure are requested by subjects. Forcing students to follow a specific modeling order, e.g. behavior or structure first (E4), is not helpful to improve structural models. The introduction of plcUML, embedding class diagrams and state charts into an IEC 61131-3 tool enlarged with composite states for error handling, showed benefit, but tool aspects such as placement were criticized (E3). In E5, a more general but simple logic design task with reuse revealed weaknesses of plcUML in design tasks for apprentices using a ß-version of the tool. The challenge for apprentices was the necessary abstraction when building classes. Weaknesses in training and tool were criticized (Figure 6). The tool has been further developed and integrated in CoDeSys by industry in June 2013 and is now used in different industrial companies and in research. Experiments focusing on maintenance tasks, evaluated in E6 with students of mechanical engineering, indicated that the SysML-AT PD has advantages compared to CFC and ST (qualitatively).
All these evaluations concentrated on the automation software of one centralized PLC. Regarding deployment and NAS, two experiments were conducted, i.e. E2 and E7, including communication and real time requirements. In E2, a domain specific UML, the UML-PA with a reduced number of diagrams, was beneficial in the deployment of software to hardware devices like PLCs, using patterns with a very simple conceptual control task. The restricted tool was criticized, but the reduced number of diagrams was advantageous compared to UML 2.0. plcUML consists of the Class Diagram for modeling software structure as well as the Activity Diagram and State Chart for modeling discrete software behavior, using Activity Diagrams in the early phases of the software lifecycle for specification issues and the State Chart for detailed modeling of behavior. Further developments of plcUML, namely SysML-AT, added the SysML Parametric Diagram for modeling constraints as mathematical equations to describe physical laws to the diagrams of plcUML. Although advantages of both notations were noticed, the results from E6 (focus group) indicate that an MDE approach for MS has to consider and support requirements analysis and architectural design and a supporting method. Especially for NAS, the architectural design is even more important. Such a method was developed and positively evaluated in experiment E7 as being most appropriate for all typical requirements of automation in MS. Recent works currently develop an approach that contains the developed methodology for NAS and requirements modeling, followed by software modeling and generation based on plcUML and SysML-AT.

Conclusion and Outlook

MDE approaches should increase efficiency and quality in the design and maintenance of software engineering for MS. The article showed results of usability experiments using pure UML, domain specific UML versions, i.e. UML-PA and UML E, as well as the domain specific SysML-AT for maintenance purposes and NAS. Summarizing the most important technical issues: pure UML 1.4 or 2.0 is not appropriate, but plcUML, with a reduced number of diagrams and a supporting modeling process integrated in an IEC 61131 environment, supports roundtrip engineering. For error handling, plcUML SC with composite states is beneficial compared to IEC 61131-3 FBD. Structural modeling using pure UML or even plcUML is still a challenge for many subjects, as is the creation of classes in the sense of the abstraction used in computer science. Abstraction in automation and mechatronics is different from computer science, i.e. more related to physics, also in distributed systems applications. The complexity of the notation (class diagram and E7) relates to difficulties in applying the notation in an experiment with time restrictions (2 days). For NAS, the applicability of the notation was positively and quantitatively evaluated; for characteristics and patterns, further experiments and a longer training time are needed. Ongoing research is looking at a detailed analysis of humans' mistakes, trying to find reasons by interviewing subjects after the experiment. Regarding real industrial software engineering tasks in MS, all these experiments lack experienced subjects, i.e.
application engineers, and the start-up phase with debugging. Real applications and some application engineers are included in [85]. The classical debugging phase to find faults has not been explicitly analyzed up to now, even if Myers [86] provides an interesting approach to classify runtime faults and the underlying software errors. Debugging in E5 was limited to simulation and restricted due to the given time. At the moment, we implement interviews after another experiment focusing on the reuse of modules with apprentices to analyze faults categorized according to Myers' classification.

Regarding usability aspects, the presented experiments proved the relevant affecting and affected variables (Figure 1 and Figure 2) to be taken into account when designing the experiment. To increase the efficiency and quality of software in the development process of an industrial company in machine and plant manufacturing, model based approaches using notations such as UML and SysML are applicable and could be proven as partially quantitatively beneficial. The prerequisite for a real benefit is the availability of integrated tool support in IEC 61131-3, especially for maintenance reasons, to guarantee the consistency of model and implemented code. Nevertheless, it will not be easy to introduce and implement MDE using UML and SysML in an industrial company. Training and rules for application are necessary, as well as a workflow to integrate existing legacy software developed over years. To integrate legacy software, the existing software needs to be analyzed at first, and modularity concepts need to be developed as a prerequisite for MDE. Variability analysis from software engineering should be implemented to maintain and evolve models and code synchronously. Further research is also needed regarding the integration of more advanced controllers into the usability evaluation, e.g. modeled in Matlab/Simulink.

B.1. Model Quality Measurement

This results in a maximum of 20 points available for the structure model in plcUML. The plcUML behavior model quality was measured by identifying correct method calls, sequences of variable comparisons and states (cf. Figure B3). If the subsequent state after a logically correct variable comparison included a logically correct method call, an additional point was given. In Figure B4(a) and Figure B4(b), a complete example measurement for one student's model in UML is shown. Missing points are depicted as Xs. The quality of the structure model (Figure B4(a)) is 13/20 or 65% and the quality of the behavior model 24/67 or 35.82% (Figure B4(b)). The overall model quality is 37/87 or 42.53%. Similar to the plcUML model quality measurement, the FBD program quality was evaluated. For the structure quality, every necessary in- and output of the FBs was counted, cf. Figure B5. This results in a maximum of 32 points for FBD model structure quality. The FBD behavior model quality was measured by identifying correct FB or FC calls, sequences of variable comparisons and the connection of these elements, cf. Figure B6. If the subsequent call after a logically correct variable comparison included a logically correct FB or FC call, an additional point was given.

B.2. WMC Calculation

WMC is defined as the sum of the Ci (WMC = Σ Ci), where Ci is the cyclomatic complexity of the i-th method, calculated by counting the conditions of the method plus 1, cf. McCabe 1976 [87]. In Figure B7, an example of the WMC calculation is given. In this case, only the methods auto and manual of the example class are relevant. The auto method contains several conditions and therefore has a cyclomatic complexity corresponding to the number of included conditions + 1 as defined by McCabe, resulting in Cauto = 10. The manual method does not contain any conditions, resulting in a cyclomatic complexity Cmanual of 1. Finally, these two complexity values sum up to an overall WMC of 11.
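The calculation can be sketched directly from this definition; the condition counts below mirror the example (Cauto = 10, Cmanual = 1):

```python
# Sketch of the WMC calculation following McCabe: the cyclomatic complexity
# of a method is its number of conditions + 1, and WMC is the sum over all
# relevant methods of the class. Condition counts mirror the example above.

def cyclomatic_complexity(num_conditions: int) -> int:
    return num_conditions + 1

# conditions per method: auto has 9 conditions (C_auto = 10), manual has none
methods = {"auto": 9, "manual": 0}

wmc = sum(cyclomatic_complexity(c) for c in methods.values())
print(wmc)  # -> 11
```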
Figure 1. Affecting variables (dashed lines show dependencies between group and substructure and maintenance/interpretation and substructure).
Figure 4. Comparison of subjects' best solution in SC group (left) and SFC group (right).
Figure 6. Subjective statements after the experiment.
Figure 7. Modeling progress over time for 3 subjects of each group.
Figure A1. Modular UML behavior model of one subject (left: hand written model; right: translated model).
Table 1. Overview of the experiments.
Table 2. Complexity values for heterogeneous modeling languages.
Table 3. Results of human factors and usability measurement in E7.
Behavior model quality measurement plcUML.
The Link Between Asset Risk Management and Maintenance Performance: A Study of Industrial Manufacturing Companies

Purpose: The purpose of this paper is to examine risk management practices and their impact on performance. Specifically, the study aimed to examine risk management practices as part of physical asset management and their impact on maintenance management and its performance.

Methodology/Approach: The empirical data were obtained from 76 manufacturing companies. Partial Least Squares Path Modeling (PLS-PM) was applied to evaluate the measurement and structural model.

Findings: The results emphasized the importance of integrating risk management practices into asset management processes in order to improve performance outcomes.

Research Limitation/Implication: This study contributes to a better understanding of how companies could achieve higher performance results by implementing risk management practices. The results of this study can help managers identify key asset risk management practices. Despite the important implications that can be derived from this study, further research that would extend the model to include additional performance measures and/or asset management dimensions would be of great importance.

Originality/Value of paper: By analyzing the interrelationships between asset risk management practices and their direct and indirect effects on maintenance performance, the study provides important insights for the development of strategies to promote the novel and important discipline of asset management.

Category: Research paper

INTRODUCTION

Today's global marketplace puts tremendous pressure on manufacturers to continually adapt proactive, innovative strategies to improve their manufacturing capabilities (Ahuja and Khamba, 2008). While asset availability and reliability are becoming critical issues in capital-intensive operations, the strategic importance of maintenance in such companies should be recognized (Tsang, 2002). With physical asset management, which goes beyond traditional maintenance management, companies should be able to realize their full potential and effectively achieve their business objectives. Consequently, the effective management of physical assets plays an increasingly important role in optimizing business profitability (Maletič et al., 2018; Schuman and Brent, 2005). As a result, asset managers today face many challenges, such as the need to achieve social and environmental objectives in addition to more traditional technical and economic goals, the importance of risk management, and the need to use the best available technology in the asset management process (Thorpe, 2010). As Woodhouse (2007) noted, physical asset management represents the best sustainable mix of asset care (i.e., maintenance and risk management) and asset utilization (i.e., using the asset to achieve a business objective or performance advantage). Efficient management of existing and emerging risks of industrial technologies is therefore critical for companies (Pačaiová, Sinay and Nagyová, 2017) that want to meet the requirements of various areas of organizational management (e.g., occupational health and safety, accident prevention, critical infrastructure, transportation of hazardous materials, environmental or financial requirements) (Pačaiová, 2018). This means that risk management is an important element of any asset management system.
To realize value, asset management therefore involves balancing costs, opportunities and risks against the desired performance of assets to achieve organizational objectives (ISO, 2014). Most of the earlier studies on risk management focused on Enterprise Risk Management (ERM), with the researchers' primary aim being to investigate the role of ERM in supply chain management (Olson and Wu, 2010; Wu and Olson, 2010). Another group of studies has tried to address Risk-Based Thinking (RBT) in an ISO standards-compliant way (Chiarini, 2017; Pačaiová, Sinay and Nagyová, 2017). Recently, considerable efforts have been made to develop a risk-based approach to safety analysis within maintenance processes, especially in specific environments such as offshore pipeline maintenance (Li et al., 2019) or technical maintenance system optimization (Gill, 2017). Although previous studies have examined the relationship between risk management and performance implications (Callahan and Soileau, 2017; Zhang et al., 2018), several research gaps remain unexamined. In particular, the literature has not paid sufficient attention to the impact of risk management practices on various aspects of organizational performance (e.g., maintenance performance directly related to physical assets). The rationale for conducting this research is the need to examine the relationships between asset risk management practices and maintenance performance; there is a lack of understanding of the mechanisms that might explain how key elements of risk management are related to maintenance performance. Using empirical data collected from industrial companies, this study attempts to fill this gap. Our study builds on findings from previous research investigating the relationship between risk management and performance outcomes (e.g., Callahan and Soileau, 2017), in particular by bridging risk management with maintenance management (Pačaiová and Ižaríková, 2019). We thus add a novel perspective by conceptualizing and operationalizing risk management and linking core elements of risk management to maintenance performance.

The structure of the paper is as follows: Section 2 provides an overview of the relevant literature on risk and maintenance management. Section 3 illustrates the methodological framework of this study. Section 4 presents the data analysis, while Section 5 concludes with a summary of the main findings, in particular by highlighting them from a theoretical and practical point of view and by outlining limitations and future research directions.

Risk Management

In the past, much has been written about risk management. Many scholars have studied ERM in companies (e.g., Hoyt and Liebenberg, 2011). This literature covers a number of approaches, including frameworks, risk categorization, processes and mitigation strategies. In addition, the International Organization for Standardization (ISO) has published ISO 31000:2009 Risk Management Principles to provide guidance on ERM implementation. A new version was recently published: ISO 31000:2018 provides more strategic guidance than ISO 31000:2009 and places more emphasis on both senior management involvement and the integration of risk management in the organization. There are many definitions of risk and risk management. The ISO defines risk as the "effect of uncertainty on objectives".
The ISO 31000:2009 definition of risk shifts the focus from the previous preoccupation with the possibility of an event (something happening) to the possibility of an effect, and especially an impact on objectives (Purdy, 2010). As noted by Wu and Olson (2010), risk can include a variety of factors with potential impacts on the activities, processes and resources of any organization. The authors explain that external factors can result from economic changes, financial market developments, and threats arising in political, legal, technological and demographic environments. One of the recurring themes in ISO 31000 is that, to be effective, risk management must be integrated into a company's decision-making processes (Purdy, 2010). For manufacturing companies, risk management can be described as a fundamental, continuous process that follows an iterative ALARP (As Low As Reasonably Practicable) approach, which the design engineer must consider when designing the physical asset (i.e. the machine and equipment) and the user must consider when managing workplace safety (Pačaiová, Markulik and Nagyová, 2016).

Maintenance Management

Maintenance management in the form of a management system is currently not subject to any specific standard. In practice, a Maintenance Management System (MMS) is usually associated with software applications for maintenance management (Grubb and Takang, 2003; Starr et al., 2010). The European standard for the maintenance management of physical assets (European standards, 2014) describes the interaction between the requirements of the company, the physical assets and the management of their maintenance. It is based on four main areas of company requirements, which are transferred to the management of physical assets through strategic analysis based on risk assessment (RBT). These four requirement areas comprise organizational goals, market requirements, stakeholder requirements (e.g. society, government legislation) and technology requirements in terms of structure, inherent reliability, flexibility, know-how and, of course, maintenance. The standard describes how these requirements are manifested through strategic management in the policy and objectives of physical asset management. The asset management plan must be translated into the maintenance management plan and strategies. Understanding the relationship between an organization's asset management objectives and its maintenance management objectives remains a gap in how maintenance management systems are understood to work. It is evident that the decision process in maintenance applies a suitable strategy (preventive, predictive or corrective) (Al-Najjar, 2007; Bevilacqua and Braglia, 2000; Flores-Colen and de Brito, 2010). Indeed, effective and efficient maintenance processes and activities should be based on risk management (Arunraj and Maiti, 2007; Khan and Haddara, 2003). In general, there are two approaches to integrating risks into maintenance processes:

1. Maintenance planning and activities are based on the informal, experience-based decisions of highly qualified and responsible maintenance personnel, taking the equipment risk into account (Gill, 2017; Sakai, 2010).

2. Maintenance management is based on specific concepts such as Total Productive Maintenance (TPM), Reliability Centred Maintenance (RCM) or Risk-Based Inspection (RBI), which include risk management principles and tools (Ahuja and Khamba, 2008; Sakai, 2010).
With regard to the first approach, it should be noted that the skills involved are usually oriented towards quality management tools that are generally used for process assessment. For example, Failure Mode and Effects Analysis (Process FMEA: P-FMEA) aims to identify potential non-conformities and their sources (Teng and Ho, 1996). It can also be used for maintenance processes, applied to a piece of equipment (a physical asset) as a process element whose functional failure affects product quality or causes unacceptable downtime. After the analysis, Pareto analysis (the 80/20 rule) can be used for decision making in maintenance, for example for strategy optimization, by identifying the highest-risk equipment, ranked by risk priority number (RPN), whose failures account for roughly 80% of the problems; a minimal numerical sketch of this screening follows below. This is a similar approach to RCM. In small companies, maintenance personnel decide solely on the basis of empirical skills resulting from many years of experience and the documentation of the equipment manufacturer (Teng and Ho, 1996). In general, the state authority, e.g. the labor inspectorate, checks whether a documented maintenance plan exists as an accident prevention measure.

The second approach is more sophisticated and is usually based on consideration of the acceptable level of loss in an entity when a failure occurs on a particular asset. In the automotive industry, there is a strong emphasis on quality (product, delivery time). Accordingly, quality management standards (e.g. IATF, 2016) are strictly required. These standards are aligned with TPM. This Japanese concept (dating from the 1970s) is based on the principles described by the eight pillars of TPM (Chlebus et al., 2015) and uses tools, such as the 5S methodology, whose application minimizes the probability of failure. TPM prevents problems (losses) related to safety, the environment, quality, ineffective management procedures, operating errors and poorly performed maintenance. This maintenance management system prevents hazards/risks in the company that affect business objectives.

The RCM methodology originated in the aircraft industry in the USA. RCM is typically applied in the petrochemical, nuclear power, gas, steel and other "heavy" industries (Srikrishna, Yadava and Rao, 1996). The need for high reliability is a typical aspect of these technologies, and their failure has a significant impact on the activities of companies, on society and on the environment. RCM uses Critical Equipment Analysis, a methodology that typically identifies three categories of equipment by risk level: A (high risk: a prevention strategy focused on reliability and safety), B (medium risk: a high availability requirement) and C (low risk: a cost optimization strategy) (Hansson, Backlund and Lycke, 2003). The next step of RCM is the implementation of FMEA for the risky equipment: priority is given to category A and then to category B, after which optimization of the maintenance plan and strategies is considered.
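The RPN-based Pareto screening outlined above reduces to a few lines of arithmetic. The following sketch in R (the language also used later for the study's statistical analysis) illustrates the idea; the equipment names and the severity (S), occurrence (O) and detection (D) ratings are entirely hypothetical and do not come from the study:

    # Hypothetical FMEA data: S, O and D are each rated on a 1-10 scale.
    equipment <- data.frame(
      item = c("Press-01", "Pump-02", "Conveyor-03", "Robot-04", "Oven-05", "Crane-06"),
      S = c(8, 8, 5, 4, 4, 2),
      O = c(5, 6, 4, 5, 2, 2),
      D = c(8, 5, 6, 3, 5, 5)
    )
    equipment$RPN <- with(equipment, S * O * D)        # risk priority number
    equipment <- equipment[order(-equipment$RPN), ]    # rank equipment by risk
    equipment$cum_share <- cumsum(equipment$RPN) / sum(equipment$RPN)
    # Pareto (80/20) screen: the top-ranked items covering up to ~80% of the
    # total RPN are natural candidates for a category A/B treatment.
    subset(equipment, cum_share <= 0.8)

With these illustrative ratings, two of the six items (Press-01 and Pump-02) account for 70% of the total RPN, which is exactly the kind of concentration the 80/20 screening is meant to expose.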
RBI is a very specific concept that mainly uses quantitative risk management tools. Inspections of pressure vessels, pipelines, cranes and electrical equipment are under legal control in most European countries, because the consequences of their failure have an impact on the health and/or life of people. Containers and pipelines containing dangerous goods are hazardous technologies, and their risk depends on the probability of failure, on the scenarios (e.g. fire, explosion, toxicity) resulting from loss of containment under specific conditions, and on the impact on property, society and the environment. In this case, maintenance management is a preventive approach in which the probability of failure is minimized by an effective and efficient predictive maintenance strategy. The inspection interval is based on a quantitative risk assessment (e.g. a combination of fault tree analysis (FTA) and event tree analysis (ETA), or layer of protection analysis (LOPA)), and the level of risk depends on equipment condition monitoring and failure prediction (Pačaiová, Sinay and Nagyová, 2017). These concepts and methodologies in maintenance management can be adapted in practical application through optimization and cost minimization.

Why is it important to improve maintenance performance based on risk assessment? In the past, maintenance performance has typically been captured through measures such as TPM's Overall Equipment Effectiveness (OEE) (Hedman, Subramaniyan and Almström, 2016); one established framework (2007) provides three main groups of Key Performance Indicators in maintenance (organizational, technical and economic), but the complexity of using performance indicators in risk management usually depends on the maintenance maturity of the organization (Tubis and Werbińska-Wojciechowska, 2017). Two of the standard quantities invoked here are written out below for reference.
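These are textbook definitions rather than formulas reported by the study:

\[ \text{Risk} = P(\text{failure}) \times C(\text{consequence}), \qquad \text{OEE} = \text{Availability} \times \text{Performance rate} \times \text{Quality rate} \]

The first expression underlies the RBI inspection-interval logic (risk grows with either the likelihood of loss of containment or the severity of its scenarios), while the second is the classic TPM effectiveness measure mentioned above.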
Risk Management and Performance

Several authors (e.g. Gordon, Loeb and Tseng, 2009; Ritchie and Brindley, 2007) have addressed the relationship between risk and performance. These studies have looked at risk mainly from a supply chain perspective. However, risk has also been a key issue for researchers in the field of maintenance and physical asset management. According to Parida and Kumar (2006), maintenance provides critical support to heavy and capital-intensive industries by keeping machinery and equipment in a safe operating condition. It is widely recognized that maintenance is a key function in maintaining the long-term viability of an organization (e.g. Al-Najjar, 2007; Maletič et al., 2014). It is argued that maintenance performance is the result of complex activities. More significantly, it is necessary to apply risk management methods when making decisions about and controlling maintenance activities (Pačaiová, Glatz and Kacvinský, 2012). In addition, previous studies have also looked at risk management as part of the management of physical assets (e.g. Maletič et al., 2018; Pačaiová and Grenčík, 2014). It could thus be argued that asset, risk and maintenance management are strongly interrelated, which in turn implies that performance and risk are related.

Data Collection Procedure

This empirical study is based on a questionnaire survey. To ensure the face validity of the questionnaire, all measured variables were reviewed by academics and experts from industry. Accordingly, a pilot study was carried out in Slovakia on a sample of 19 Slovakian enterprises from the manufacturing sector. The final survey was conducted among Slovenian manufacturing enterprises. The questionnaire, with a cover letter indicating the purpose of the study, was sent to the target persons by e-mail, with a request that it be completed by employees who hold a managerial position in relation to maintenance and operational decision-making processes. The questionnaire was sent to 300 Slovenian companies in the manufacturing industry. A total of 76 usable answers were collected within the given time frame, which corresponds to a response rate of 25.3 percent. The sample for this study is composed of micro (8%), small (12%), medium-sized (45.3%) and large (34.7%) enterprises.

Research Model

A research model was developed that shows the connections between the core elements of asset risk management and maintenance performance. First, a thorough literature review was conducted, covering relevant scientific publications and international standards. In the following steps, theoretical constructs were identified. This conceptual background forms the basis for the proposed research model. In accordance with the literature and relevant standards (such as ISO, 2018), four constructs of asset risk management were conceptualized and operationalized. The asset risk management measures were developed on the basis of ISO (2018), which defines the "Risk Context (LV1)" in connection with organizational activities, "Risk Identification (LV2)" (sources of hazard/threat), "Risk Analysis and Evaluation (LV3)" (the steps of risk assessment) and "Risk Treatment (LV4)". With reference to previous measurements (Maletič, Maletič and Gomišcek, 2012), the study measures maintenance performance as a unidimensional latent variable. The corresponding items for measuring asset risk management and maintenance performance are shown in Table 1. The questionnaire items for risk management were operationalized using 5-point Likert scales, where 1 means that respondents strongly disagree and 5 that they strongly agree. With regard to the maintenance performance measures, respondents were asked to estimate performance aspects relative to the industry average over the last three years, using a 5-point Likert scale. We applied Partial Least Squares Path Modeling (PLS-PM) using the R package plspm to assess the measurement and the structural model (Sanchez, 2013); an illustrative model specification is sketched below. Previous studies have argued that PLS-PM is particularly suitable for small sample sizes (Chin and Newsted, 1999).

ANALYSIS AND RESULTS

To evaluate the PLS-PM measurement model (outer model) (Sanchez, 2013), loadings and communalities were examined. As suggested by Sanchez (2013), loadings should be above the value of 0.7. The results of the evaluation of the outer model (loadings, weights and communalities) for the studied constructs are presented in the Appendix. As the results show, the majority of the values exceed the loading threshold of 0.7. The loadings for four items are between 0.6 and 0.7; however, these items were retained in the model for reasons of content validity. In addition, cross-loadings were checked with regard to the validity of the measurement model. The following indices were used to assess block unidimensionality: Cronbach's alpha, Dillon-Goldstein's rho and the eigenvalues (see Table 1). The Cronbach's alpha values for LV1, LV3, LV4 and LV5 were above the recommended value of 0.70 (Hair et al., 2010; Sanchez, 2013); the value for LV2 is below the recommended value, but the corresponding composite reliability is above it. Composite reliability was assessed by Dillon-Goldstein's rho, for which the literature (Sanchez, 2013) suggests a cut-off value of 0.7 for considering the corresponding block unidimensional; the Dillon-Goldstein's rho value exceeds this cut-off for all constructs. Additionally, a block is considered unidimensional if its first eigenvalue is greater than one, and all indicator blocks fulfill this criterion.
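For illustration, the model described above could be specified in plspm along the following lines. The inner model encoded here (LV1 driving LV2 to LV4, which in turn predict LV5) is one plausible reading of the structural results reported below, not a restatement of the paper's Figure 1, and the item-to-block column positions and the `survey` data frame are hypothetical (the actual items are listed in Table 1):

    library(plspm)

    # Inner model: each row lists (with a 1) the latent variables that predict it.
    LV1 <- c(0, 0, 0, 0, 0)  # Risk Context
    LV2 <- c(1, 0, 0, 0, 0)  # Risk Identification
    LV3 <- c(1, 0, 0, 0, 0)  # Risk Analysis and Evaluation
    LV4 <- c(1, 0, 0, 0, 0)  # Risk Treatment
    LV5 <- c(0, 1, 1, 1, 0)  # Maintenance Performance
    path_matrix <- rbind(LV1, LV2, LV3, LV4, LV5)
    colnames(path_matrix) <- rownames(path_matrix)

    # Outer model: hypothetical column positions of the questionnaire items
    # in the data frame `survey`; four items per block are assumed here.
    blocks <- list(1:4, 5:8, 9:12, 13:16, 17:20)
    modes  <- rep("A", 5)  # reflective indicators, as implied by the loadings

    fit <- plspm(survey, path_matrix, blocks, modes = modes, boot.val = TRUE)
    summary(fit)           # loadings, path coefficients, R2, bootstrap results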
Furthermore, for the purpose of assessing convergent validity, the average variance extracted (AVE) was used to measure the amount of variance that a latent variable captures from its indicators (Sanchez, 2013). The AVE values for LV1 to LV4 are above the conventional threshold of 0.5 (Sanchez, 2013); as the AVE value for LV5 is only just below the recommended value, it is also considered acceptable.

The results of the evaluation of the structural (inner) model are presented in Table 2. According to the coefficients of determination (R²), 50.5% of the variance of "Maintenance Performance (LV5)" is explained by the corresponding predictor variables (LV2 to LV4). Furthermore, the average communality values represent the average of all squared correlations between each manifest variable and the corresponding latent variable scores in the model; the highest value corresponds to "Risk Analysis and Evaluation (LV3)", while the lowest corresponds to "Maintenance Performance (LV5)". The mean redundancy illustrates the percentage of variance in an endogenous block that is predicted from the independent latent variables, so a high redundancy indicates good predictive ability. Prediction by means of redundancy is most evident for "Risk Treatment (LV4)": 30.1% of the variability of block LV4 is predicted by "Risk Context (LV1)".

Path analysis was then performed to test the relationships between the latent variables. The results concerning the inner model are shown in Figure 1. The path coefficients represent the strength and direction of the relationships between the latent variables (Sanchez, 2013). According to the results, "Risk Context (LV1)" has a strong direct influence on the variables LV2 to LV4 (0.632, 0.622 and 0.671, respectively; p < 0.01). As regards the effect on "Maintenance Performance (LV5)", "Risk Treatment (LV4)" appears to be the dominant variable (0.490, t = 3.76, p < 0.01). Regarding indirect effects, "Risk Context (LV1)" influences "Maintenance Performance (LV5)" indirectly (0.500) through "Risk Identification (LV2)", "Risk Analysis and Evaluation (LV3)" and "Risk Treatment (LV4)"; a brief worked decomposition follows below. Notes: ** statistically significant at the 0.01 level.
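In PLS path models, an indirect effect is the sum, over all connecting routes, of the products of the path coefficients along each route. Using only the coefficients reported above (the remaining coefficients appear in Figure 1 and are not restated in the text), the route through Risk Treatment alone contributes

\[ \beta_{\mathrm{LV1} \to \mathrm{LV4}} \times \beta_{\mathrm{LV4} \to \mathrm{LV5}} = 0.671 \times 0.490 \approx 0.329 \]

so, assuming no further routes beyond those named above, the balance of the total indirect effect (0.500 - 0.329 ≈ 0.171) flows through the routes involving "Risk Identification (LV2)" and "Risk Analysis and Evaluation (LV3)".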
DISCUSSION AND CONCLUSIONS

The potential links between risk management and performance outcomes have attracted considerable attention in recent years, as risk management issues have become one of the main concerns of a wide range of organizational stakeholders. However, there are still few papers in the academic asset management literature that specifically address the relationship between risk management and performance outcomes. This study therefore examines the importance of risk management and its impact on business results, particularly maintenance performance. From the perspective of theoretical explanation and empirical evaluation, the study contributes to greater clarity and understanding of the relationship between risk management practices and maintenance management. Our results support the idea of conceptualizing and operationalizing risk management within the framework of the ISO (2018) standard, and they are consistent with theoretical arguments in the literature that consider risk management an important element of performance measurement in maintenance (Söderholm and Norrbin, 2013).

Thus, our results lend credence to the growing importance of integrating risk management into the asset management framework (Trindade et al., 2019). Our findings are consistent with previous findings suggesting that defining the organizational context, identifying opportunities and risks, and monitoring and analysis are among the most important factors supporting the realization of value from physical assets. Furthermore, as the results show, it could be argued that the most important predictors of maintenance performance are risk identification and risk treatment. Our results reinforce the belief in the growing importance of linking risk management to performance measurement (Arena and Arnaboldi, 2014). In addition, as shown in previous research (Callahan and Soileau, 2017), operational performance can be improved by a commitment to company-wide risk assessment and management.

As evidenced by the results, our study revealed no direct impact of risk analysis on maintenance performance. Several plausible explanations can be offered in this regard. The results of a risk analysis include, for example, the identified hazards and risk factors that have the potential to cause harm. These results are then incorporated into action plans (which are part of risk treatment) that bear a positive association with maintenance performance. As mentioned earlier, Risk Treatment (LV4) is the strongest predictor of maintenance performance in our model (β = 0.490, t = 3.76, p < 0.01). Therefore, although no direct effects of Risk Analysis and Evaluation (LV3) were found, possible indirect effects on maintenance performance through Risk Treatment (LV4) can be indicated.

We build on previous research and distinguish our study from the work previously published in the risk management literature in the following ways. First, unlike previous studies, our study focuses on risk management in the context of asset management. Second, in view of the importance of assessing the maintenance performance of companies (Liyanage, 2007), we examined whether and to what extent risk management activities contribute to maintenance performance (because risk mitigation, i.e. reducing the probability of failure, in asset management depends mainly on a proactive maintenance strategy). Accordingly, this study adds risk and asset management perspectives to the existing research on maintenance performance, which has mainly focused on the development of maintenance performance measurement systems (Parida et al., 2015). Finally, in a departure from previous research that addressed risk in maintenance activities (e.g. Wijeratne, Perera and De Silva, 2014), our study proposes an empirically validated structural model, thereby expanding the literature on the benefits of integrating risk management into maintenance and asset management activities.

Since asset management has become an attractive area of research, many researchers have worked in a variety of related areas, such as exploring the applicability of advanced decision support techniques in different maintenance and asset management business processes (De la Fuente et al., 2018), developing theoretical frameworks for physical asset management (Alhazmi, 2018), studying the performance implications of physical asset management practices (Maletič et al., 2018), and developing risk-based approaches to maintenance (e.g. Arunraj and Maiti, 2007; Li et al., 2019; Pačaiová, Sinay and Nagyová, 2017).
The biggest gap in this area results from neglecting the potential of integrating risk management into the physical asset management framework. The present study aims to help close this research gap by bridging risk and asset management, especially from the perspective of performance results. The results of this study may provide additional management insights that can support decision-making processes regarding the management of physical assets and maintenance. One important aspect of physical asset management is therefore to achieve the right balance between performance, costs and the associated risks in pursuing business objectives. Indeed, managers should integrate risk management into the asset management plan in order to address the underlying issues proactively and holistically. Managers in maintenance and operations (M&O) are advised to follow well-established frameworks (such as EFNMS-EAMC, 2012; GFMAM, 2014; IAM, 2015) and the relevant European and international standards that recognize the integration of risk management into maintenance and asset management activities.

For future research, we propose a combination of qualitative and quantitative studies to further investigate the proposed model. Furthermore, the proposed model may be extended to include additional performance measures and/or asset management dimensions. Future studies could also address some other limitations of this study. For example, given the relatively small number of companies surveyed, potential control variables could not be included without compromising statistical power. It is therefore recommended that future studies include relevant control variables and test the model on a larger sample of organizational units.