Chemical Constituents, Antioxidant, and Antimicrobial Activities of Ethyl Acetate Fractionated Extract from Rhizomes of Zingiber monophyllum Gagnep.: In vitro and in silico Screenings

Objective/Background: Zingiber monophyllum Gagnep., a member of the Zingiberaceae family, is known for its significant biological activities. The current study aimed to determine the volatile components of the ethyl acetate (EtOAc) fractionated extract from the rhizomes of this species. This is the first report on the chemical composition and bioactivities of the fractionated extract of Z. monophyllum rhizomes. Methods: The chemical constituents were analyzed and determined using gas chromatography-mass spectrometry (GC-MS). Antioxidant activities were evaluated by 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging and ferric reducing antioxidant power (FRAP) assays using ascorbic acid as a positive control. Antibacterial and antifungal properties of the EtOAc fractionated extract of Z. monophyllum rhizomes were assessed against Escherichia coli, Pseudomonas aeruginosa, Salmonella enterica, Enterococcus faecalis, Staphylococcus aureus, Bacillus cereus, and Candida albicans. Density functional theory (DFT) and molecular docking were also employed to illustrate the antioxidant and antimicrobial activities. Results: Nine components were identified by GC-MS analysis from the EtOAc fractionated extract of Z. monophyllum rhizomes. (E)-labda-8(17),12-diene-15,16-dial (9), spathulenol (2), and neointermedeol (5) were the major components (21.8%, 16.8%, and 11.9%, respectively). Moderate antioxidant activities of the EtOAc fractionated extract were observed via both the DPPH assay and the FRAP assay using ascorbic acid as the standard compound. The extract demonstrated remarkable antimicrobial activity against all examined microbial strains, except for P. aeruginosa. The DFT study analyzed the antioxidant potential of each component in the fractionated extract. The molecular docking study selected E. faecalis DNA gyrase B, E. coli DNA gyrase B, S. aureus biotin protein ligase, E. faecalis alanine racemase, and C. albicans N-myristoyltransferase as potential target proteins for antimicrobial activity. Conclusion: In this study, the chemical composition of the EtOAc fractionated extract of Z. monophyllum rhizomes was demonstrated through GC-MS analysis for the first time. Nine components, namely alloaromadendrene, spathulenol, globulol, τ-cadinol, neointermedeol, aromadendrene oxide-(2), ambrial, (E)-15,16-dinorlabda-8(17),11-dien-13-one, and (E)-labda-8(17),12-diene-15,16-dial, along with their relative contents, were identified in this fractionated extract. The bioassays revealed that the fractionated extract showed moderate antioxidant activities and significant antimicrobial activities. The antioxidant and antimicrobial potential of each component was also theoretically examined by the DFT study and the molecular docking study, respectively.

Introduction

In previous studies, extracts from the rhizomes of Zingiber species also had antimicrobial properties. For example, several strains (eg, Bacillus cereus, Pseudomonas aeruginosa, etc) proved to be highly susceptible to the activity of the ethyl acetate extract of Z. officinalis rhizomes.6 Similarly, the ethyl acetate extract from Z. neesanum rhizomes showed potential microbicidal effects.7 Other extracts examined in previous reports also revealed interesting antibacterial activities.8 However, there has been little information on the antimicrobial activity of extracts from the rhizomes of Z. monophyllum, indicating the need for further study to diversify the sources of promising antibacterial materials. Previous studies have reported that antioxidant compounds have been widely used in medicine to treat various diseases and illnesses. They have the ability to interrupt, delay, or prevent oxidation.9 As a result, antioxidants can protect the body by preventing aging, cell damage, and oxidative stress-related diseases.10 Research on antioxidant compositions from plant-based extracts has received great interest for their potential as herbal drugs. Furthermore, antioxidants also exhibit various pharmacological effects, including antibacterial, antiviral, anti-inflammatory, antiaging, and antitumor activities.11 Because of these beneficial properties of antioxidants, many studies have aimed to find new natural sources of antioxidants. Up to now, the information about the phytochemical components and biological activities of the extract from Z. monophyllum rhizomes is still limited. In this work, we report the volatile compositions of the EtOAc fractionated extract from Z. monophyllum rhizomes and its in vitro antioxidant and antimicrobial activities, combined with a density functional theory (DFT) approach and molecular docking simulation.

Antioxidant Activity

The EtOAc fractionated extract of Z. monophyllum rhizomes was evaluated for its antioxidant activities via 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging and ferric reducing antioxidant power (FRAP) assays (Supplemental, Figures S11 and S12). As a result, the EtOAc fractionated extract showed moderate antioxidant effects, with an IC50 value of 1.862 ± 0.730 mg/mL in the DPPH assay using ascorbic acid as the reference drug (IC50 value of 0.015 ± 0.000 mg/mL) and an EC50 value of 1.519 mg/mL in the FRAP assay using ascorbic acid as the standard compound (IC50 value of 0.070 ± 0.000 mg/mL). Compared with other ginger extracts, the EtOAc fractionated extract of Z. monophyllum rhizomes showed relatively weaker activities. Specifically, the high-pressurized CO2 extract of ginger roots from Vietnam showed an IC50 of 0.64 µg/mL against DPPH free radicals.14 In another study, the n-hexane and methanol extracts of ginger displayed the highest scavenging percentages of 82% to 88% against DPPH free radicals.15 The weaker activity of the EtOAc fractionated extract of Z. monophyllum rhizomes relative to other ginger extracts could be explained by differences in extraction solvent and species. So far, there has been limited information on the antioxidant activity of Z. monophyllum rhizomes in the literature.

Antimicrobial Activity

The antimicrobial activity of the EtOAc fractionated extract of Z. monophyllum rhizomes was evaluated against pathogenic microorganisms, and the resulting IC50 values are presented in Table 2. The extract exhibited notable inhibitory activity against all examined microbial strains, except for P. aeruginosa. Notably, the EtOAc fractionated extract of Z. monophyllum rhizomes demonstrated significantly higher antibacterial and antiyeast activities when compared with the two reference drugs, streptomycin and cycloheximide, with IC50 values ranging from 4.89 to 5.34 µg/mL. Specifically, within the antimicrobial assay, the EtOAc fractionated extract exhibited the most potent inhibition against Staphylococcus aureus (IC50 = 4.89 µg/mL), followed by B. cereus (IC50 = 4.97 µg/mL) and C. albicans (IC50 = 4.98 µg/mL).
Regarding Gram-negative bacteria, E. coli and Salmonella enterica displayed similar sensitivity to the fractionated extract, with IC50 values of 5.23 and 5.18 µg/mL, respectively. However, no antibacterial activity against P. aeruginosa was observed. Kader et al16 found that the crude ethanol extract of Z. zerumbet rhizomes and its fractions exhibited mild to moderate antimicrobial activity against 2 Gram-positive and 4 Gram-negative bacteria, with MIC values of 128 to 256 µg/mL. In another study, various extracts of Z. officinale var. officinale and Z. officinale var. rubrum showed weak antimicrobial activity.17 These findings demonstrate that P. aeruginosa is relatively insensitive to the extract among the tested organisms and suggest that Gram-positive bacteria tend to be more susceptible to the EtOAc fractionated extract than their Gram-negative counterparts.

Previous studies have reported a range of biological effects associated with the EtOAc fraction of Stachys schtschegleevii leaves and stems, including antibacterial, antifungal, and antioxidant activities, among others. Accordingly, several EtOAc fractions of various plant sources have been identified as potential reservoirs of antimicrobial agents. For example, the EtOAc fraction obtained from Ruta officinalis flowers was demonstrated to exhibit in vitro antibacterial, antioxidant, anti-inflammatory, and analgesic activities, likely attributable to its rich polyphenolic content.18 In the present study, the EtOAc fractionated extract was found to contain a substantial quantity of bioactive compounds, including spathulenol (16.8%), which has previously been reported to possess antimicrobial effects against Mycobacterium tuberculosis. Additionally, neointermedeol (11.9%), a compound also found in Artemisia argyi essential oil, exhibited antibacterial effects against Listeria monocytogenes, E. coli, Proteus vulgaris, Salmonella enteritidis, and Aspergillus niger.19 Consequently, these results are consistent with the robust antibacterial activities observed for the EtOAc fractionated extract derived from Z. monophyllum rhizomes in this study. Given these findings, further comprehensive investigations of Z. monophyllum are warranted to illuminate and substantiate its potential applications in the pharmaceutical field.

DFT-based Optimized Structures and Quantum Chemical Properties

The DFT analysis was performed on the compounds identified in the EtOAc fractionated extract of Z. monophyllum rhizomes. These chemical constituents are responsible for the antioxidant potential of Z. monophyllum rhizomes. Through DFT, various characteristics were determined, including the energies of the frontier orbitals (highest occupied molecular orbital [HOMO] and lowest unoccupied molecular orbital [LUMO]), as well as several reactivity parameters such as dipole moment, electron affinity, ionization energy, hardness, softness, electronegativity, chemical potential, electrophilicity, electron-accepting power, and electron-donating power, which are significant tools to describe the reactivity, stability, and binding capacity of molecules.20
The M05-2X/def2-TZVPP method was employed for these calculations (Supplemental, Table S1 and Figure S13). Pi-electron delocalization is a crucial factor in the stabilization of molecules and the reactivity of donor-acceptor sites, as per theoretical understanding. The electron transfer characteristics of the HOMO indicate the tendency for intermolecular electron donation, whereas the LUMO reflects electron-accepting capability. A greater E_HOMO (corresponding to a lower ionization potential, I_o) and a lower E_LUMO (corresponding to a higher electron affinity, A) contribute to enhanced electron-donating ability and increased sensitivity to receiving electrons, respectively. Meanwhile, a lower gap energy, indicating easier electron transfer, signifies improved antioxidant reactivity. Figure 2 illustrates that the HOMO and LUMO profiles of several molecules, such as 1, 2, 3, 4, 5, and 6, exhibit a relatively even distribution across their molecular structures. This suggests that these molecules possess greater flexibility for intermolecular interactions involving charge transfer. On the other hand, the remaining compounds concentrate the electron density within specific regions of their molecular structures. However, the overall stability cannot be definitively determined, since it also depends on the chemical/physical affinity of the ligand for its target.

Molecular Docking

Docking studies have been conducted in an endeavor to predict the antimicrobial mechanisms of action of the test molecules. Enterococcus faecalis DNA gyrase B (PDB ID: 4GEE), E. coli DNA gyrase B (PDB ID: 4DUH), S. aureus biotin protein ligase (PDB ID: 3V7R), E. faecalis alanine racemase (PDB ID: 3E6E), and C. albicans N-myristoyltransferase (PDB ID: 1IYK) were selected as potential target proteins for antimicrobial activity.21-24 As a constituent of the alanine racemase family, 3E6E serves as a prevalent bacterial enzyme facilitating the interconversion between ʟ- and ᴅ-alanine, critical precursors for cell wall peptidoglycan synthesis.25 Myristoyl-CoA:protein N-myristoyltransferase (PDB ID: 1IYK) emerges as a promising target for antifungal drugs, recognized as a monomeric enzyme catalyzing the transfer of myristate from myristoyl-CoA to the N-terminal glycine residue of various viral and eukaryotic proteins.26 Verification prior to screening was executed to ensure the reliability and precision of the AutoDock Vina v1.2.3 program. Consequently, validation was established through redocking of the inhibitory compound in complex with the crystal structure of E. coli DNA gyrase B, yielding a reasonably low root-mean-square deviation (RMSD) value (< 2 Å) of 0.461546 Å.27 The co-crystallized structures of 4-[[4′-methyl-2′-(propanoylamino)-4,5′-bi-1,3-thiazol-2-yl]amino]benzoic acid are depicted in Figure S14 (Supplemental information). In this study, molecular docking analysis was conducted to assess the interactions and binding affinities of the main compounds of the EtOAc fractionated extract from Z. monophyllum rhizomes against the target proteins. Table 3 presents the binding energies, and Figures S15 to S19 (Supplemental information) depict the amino acid residues within the receptor with which the ligands interact.
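As an illustration of the validation step, the sketch below shows how a redocking run of this kind could be scripted with the AutoDock Vina v1.2.x Python bindings. The file names, grid-box center, and box size are placeholders, not values reported in the study; the paper reports only the program version and the resulting RMSD.

```python
# Minimal redocking sketch with the AutoDock Vina Python bindings (v1.2.x).
# File names and grid-box parameters below are illustrative placeholders,
# not the actual inputs used in the study.
from vina import Vina

v = Vina(sf_name="vina")  # default Vina scoring function

# Receptor and co-crystallized ligand prepared beforehand as PDBQT
# (e.g., with AutoDockTools: add hydrogens, assign Kollman charges).
v.set_receptor("4duh_receptor.pdbqt")
v.set_ligand_from_file("4duh_native_ligand.pdbqt")

# Grid box centered on the co-crystallized ligand binding site.
v.compute_vina_maps(center=[10.0, 12.0, -5.0], box_size=[20, 20, 20])

# Redock and keep the best poses; a redocked pose within 2 A RMSD of the
# crystallographic pose is the usual acceptance criterion.
v.dock(exhaustiveness=8, n_poses=5)
v.write_poses("redocked_poses.pdbqt", n_poses=5, overwrite=True)
print(v.energies(n_poses=5))  # binding affinities (kcal/mol)
```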
Conclusions

The current research marks the first concentrated effort to establish the volatile compositions of the EtOAc fractionated extract of Z. monophyllum through GC-MS analysis. The research also demonstrated the antioxidant and antibacterial potential of the EtOAc fractionated extract of Z. monophyllum rhizomes. Nine compounds, along with their relative contents, were identified: alloaromadendrene, spathulenol, globulol, τ-cadinol, neointermedeol, aromadendrene oxide-(2), ambrial, (E)-15,16-dinorlabda-8(17),11-dien-13-one, and (E)-labda-8(17),12-diene-15,16-dial. The antioxidant screening revealed that the extract showed moderate antioxidant activity, and this was supported by the DFT study. Furthermore, the extract exhibited remarkable antimicrobial activity against all tested strains except P. aeruginosa. Molecular docking studies were also performed to display the binding affinities of the identified compounds with selected antibacterial target proteins. The isolation of these bioactive compounds, more in-depth studies of their biological activities, and clinical trials toward exploring and developing novel drug formulations will be carried out and reported in the future.

Plant Material

Rhizomes of Z. monophyllum were collected from Kon Ka Kinh National Park, Gia Lai Province, Vietnam, in December 2022. Identification of the plant material was performed by Assoc. Prof. Dr Nguyen Hoang Tuan (Faculty of Pharmacognosy and Traditional Medicine, Hanoi University of Pharmacy, Vietnam). A voucher specimen (No. G1LE) was deposited in the Herbarium of the Department of Chemistry, Vinh University, Vinh City, Nghean, Vietnam.

Preparation of Ethyl Acetate Fractionated Extract

Dried rhizomes of Z. monophyllum (4.5 kg) were crushed into coarse powder and extracted with methanol (5 times, 6.0 L each) at room temperature. The combined methanol extracts were concentrated under reduced pressure to give a residue that was suspended in water and partitioned successively with hexane and ethyl acetate (EtOAc). The crude extracts were stored in a refrigerator (4 °C) for future analyses.

GC-MS Analysis

The GC-MS analysis was conducted on an Agilent 7890B GC system coupled with a mass spectrometer and equipped with an HP-5MS UI column (30 m × 0.25 mm i.d. × 0.25 μm film thickness). Helium was the carrier gas, at a flow rate of 1.5 mL/min. A volume of 1.0 μL of diluted sample (1000 ppm) was injected with a split ratio of 10:1. The oven temperature rose from 80 °C (held for 1 min) to 300 °C at a rate of 20 °C/min, then was held at 300 °C for 15 min. Mass spectra were recorded at 70 eV. The mass range was 50 to 550 m/z (2.0 scans/s). The chemical components of the EtOAc fractionated extract were identified by comparison of their MS fragmentation patterns and retention indices with those in the literature (NIST17) and by co-injection with authentic compounds. The percentage of each component was calculated by comparing the average area of its peak to the total area of all peaks.28,29

In Vitro Antioxidant Activity

The antioxidant activity of the EtOAc fractionated extract was determined using the DPPH method,30 with ascorbic acid used as a positive control. The samples (0.1 mL) or negative control (deionized water) were mixed with the 3 mM DPPH solution (0.1 mL) and incubated for 30 min. The absorbance of the mixture was measured at 517 nm. The DPPH scavenging activity (%) was calculated using the following equation:

DPPH scavenging activity (%) = [(A_NC − A_t)/A_NC] × 100

where A_NC denoted the absorbance of the negative control and A_t represented the absorbance of the tested samples.

The ferric-reducing power was determined using the FRAP method from previous studies.31,32 The working FRAP solution (240 mL) was mixed with the fractionated extract (10 mL) and incubated for 15 min. The absorbance was then spectroscopically measured at 593 nm. The reducing power was expressed as an absorbance. The concentration of fractionated extract giving an absorbance of 0.5 is known as the EC50.
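To make the two read-outs concrete, the short sketch below computes the DPPH scavenging percentage from the equation above and estimates the IC50 by linear interpolation between the bracketing concentrations. The absorbance values are invented for illustration; the paper does not report raw absorbances, and other interpolation or curve-fitting schemes (e.g., a four-parameter logistic fit) are equally valid.

```python
import numpy as np

def dpph_scavenging(a_nc: float, a_t: np.ndarray) -> np.ndarray:
    """DPPH scavenging activity (%) = (A_NC - A_t) / A_NC * 100."""
    return (a_nc - a_t) / a_nc * 100.0

def ic50_by_interpolation(conc: np.ndarray, inhibition: np.ndarray) -> float:
    """Concentration giving 50% inhibition, by linear interpolation.

    Assumes inhibition increases monotonically with concentration and
    crosses 50% somewhere inside the tested range.
    """
    return float(np.interp(50.0, inhibition, conc))

# Hypothetical dose-response data (mg/mL and absorbances at 517 nm).
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
a_nc = 0.820                                    # negative control
a_t = np.array([0.70, 0.61, 0.49, 0.37, 0.22])  # extract-treated wells

inh = dpph_scavenging(a_nc, a_t)
print("inhibition (%):", np.round(inh, 1))
print("IC50 (mg/mL):", round(ic50_by_interpolation(conc, inh), 3))
```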
In Vitro Antimicrobial Activity

E. coli ATCC 25922, P. aeruginosa ATCC 27853, S. enterica ATCC 13076, E. faecalis ATCC 29212, S. aureus ATCC 25923, B. cereus ATCC 14579, and C. albicans ATCC 10231 were purchased from the National Institute for Food Control (Hanoi, Vietnam). To assess the antibacterial and antifungal properties of the EtOAc fractionated extract of Z. monophyllum rhizomes, we followed the methodology outlined by Hadacek and Greger.33 Stock solutions of the EtOAc fractionated extract were prepared in 1% DMSO. In brief, the bacterial strains and yeast were cultured to an approximate concentration of 2 × 10^5 CFU/mL. Subsequently, 50 µL of the bacterial or yeast culture was introduced into Luria-Bertani medium supplemented with various concentrations of the EtOAc fractionated extract, along with DMSO as a control. The mixtures were then incubated at 37 °C for 24 h. After the 24 h incubation period, we measured the optical density of the culture wells using a spectrophotometer (BioTeK Instruments, Inc., Highland Park) equipped with Rawdata software. The antimicrobial properties of the EtOAc fractionated extract were quantified in terms of the half-maximal inhibitory concentration (IC50), which represents the concentration of the EtOAc fractionated extract required to reduce cell growth by half after 24 h of incubation. As positive controls, bacterial and yeast cells were exposed to streptomycin and cycloheximide, respectively. All experimental procedures were performed in triplicate.

Statistical Analysis

All the experiments were conducted in triplicate. The results were represented as mean ± standard deviation. Statistical comparison was performed using the two-sample F-test (Microsoft Excel, Microsoft, 2018). P values less than .05 were considered significant.

Quantum Chemical Calculation

Using theoretical chemistry simulations, the bioactivity of the compounds was connected with their molecular electronic structures in order to reveal probable interaction pathways with active sites in biological molecules.34 The chosen compounds were subjected to geometric optimization using the Gaussian 09 software in the gas phase.35 Single-point energies were then obtained with the larger def2-TZVPP basis set, applying the frozen-core approximation for non-valence-shell electrons, at geometries optimized at the M05-2X/6-311++G(d,p) level. Frontier orbital analysis was carried out using NBO 5.1 at the M05-2X/def2-TZVPP level of theory.36 Following that, the conformational analysis of the HOMO, LUMO, and molecular electrostatic potential surfaces was used to assess the local reactivity. The following relationships were used to determine molecular characteristics, including the HOMO energy (E_HOMO), LUMO energy (E_LUMO), band gap energy (E_gap), electronic chemical potential (μ), softness (σ), hardness (η), electrophilicity (ω), and electronegativity (χ):37,38

E_HOMO = −IE (1)
E_LUMO = −EA (2)
ΔE_gap = E_LUMO − E_HOMO (3)
χ = (IE + EA)/2 (4)
η = (E_LUMO − E_HOMO)/2 (5)
σ = 1/η (6)
μ = −χ (7)
ω = μ²/(2η) (8)
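As a numerical illustration of equations (1) to (8), the sketch below derives the global reactivity descriptors from a pair of frontier orbital energies under the Koopmans-type approximation. The HOMO/LUMO values used are invented placeholders, not results from the study's Table S1.

```python
# Global reactivity descriptors from frontier orbital energies, following
# equations (1)-(8). Input energies in eV are illustrative placeholders.

def reactivity_descriptors(e_homo: float, e_lumo: float) -> dict:
    ie = -e_homo                    # ionization energy, Eq. (1)
    ea = -e_lumo                    # electron affinity, Eq. (2)
    gap = e_lumo - e_homo           # band gap energy, Eq. (3)
    chi = (ie + ea) / 2             # electronegativity, Eq. (4)
    eta = (e_lumo - e_homo) / 2     # hardness, Eq. (5)
    sigma = 1 / eta                 # softness, Eq. (6)
    mu = -chi                       # chemical potential, Eq. (7)
    omega = mu**2 / (2 * eta)       # electrophilicity, Eq. (8)
    return {"IE": ie, "EA": ea, "gap": gap, "chi": chi,
            "eta": eta, "sigma": sigma, "mu": mu, "omega": omega}

# Hypothetical frontier orbital energies for one sesquiterpene (eV).
print(reactivity_descriptors(e_homo=-7.1, e_lumo=-0.4))
```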
Molecular Docking Simulation

To identify the mode of contact and the binding energy of the ligand-enzyme interaction that supports the biological (antimicrobial) activity of the main compounds in the EtOAc fractionated extract, a molecular docking study was employed. The PDB format files of the E. faecalis DNA gyrase B, E. coli DNA gyrase B, S. aureus biotin protein ligase, E. faecalis alanine racemase, and C. albicans N-myristoyltransferase target proteins were downloaded from the RCSB Protein Data Bank (https://www.rcsb.org/).21-24 The target proteins were then thoroughly prepared for docking by adding missing hydrogens and Kollman charges and performing energy minimization using the AutoDockTools and Swiss-PdbViewer software.39 The docked pose with the highest binding affinity and the most pronounced amino acid residue interactions for each docked ligand was chosen and visualized with the Discovery Studio Visualizer software.

Figure 1. Chemical structures of nine main compounds identified from the EtOAc fractionated extract of Z. monophyllum rhizomes by GC-MS analysis.

Table 1. Major Compounds Detected from the EtOAc Fractionated Extract of Z. monophyllum Rhizomes.

Table 2. Antimicrobial Activity of the EtOAc Fractionated Extract of Z. monophyllum Rhizomes.

Table 3. The Binding Affinity of Main Compounds in the EtOAc Fractionated Extract with Enterococcus faecalis DNA gyrase B, Escherichia coli DNA gyrase B, Staphylococcus aureus Biotin Protein Ligase, E. faecalis Alanine Racemase, and C. albicans N-myristoyltransferase.
Age of Unani drugs and the concept of shelf-life: A comparative assessment

It is a legal obligation for all conventional pharmaceutical products to carry the dates of manufacture and expiry on the label. The period between these two dates is called the 'life period' or 'shelf-life' of a product. It is the time over which the quality of a product remains within specifications, by which the efficacy and safety of the product can be assured. Shelf-life applies to Unani drugs too, although not in the same way as to conventional pharmaceuticals. Long ago, Unani physicians proposed the concept of Aamare Advia (ages of drugs), mainly for single drugs. In a true sense, the two concepts are the same, but the way of estimating 'shelf-life' is different. In conventional pharmaceutics, it is considered in terms of stability studies, whereas in Unani medicine it has been prefixed. The present review explains these concepts with a comparison.

INTRODUCTION

The science of medicine has existed in different civilizations and has therefore been named traditional medicine, folk medicine, tribal medicine, oriental medicine, and so on. The Hellenistic origin of Unani medicine has remained the most popular one. This system is based on the teachings of Greek, Arab and Indian physicians. The basic concepts that laid the foundation of this medicine are cosmogonic. Hippocrates is believed to be the first one who gave medicine a scientific basis. With the advent of time, the concepts and approaches towards illness and disease changed. Ilmul Advia has been the foremost subject of Unani medicine. It deals with the study of the effect of drug and food on the human body. Besides, it includes rules that are associated with anything taken as drug and food. (1) Aamare Advia is a concept related to the part governed by these rules. Undeniably, herbal drugs have been in use over very long periods. Alternative medicines still face tribulations, the crucial one being the quality of herbs used as drugs. There are issues with cultivation, harvesting, quality preparation and processing that make herbs shoddy. Most drugs are prescribed without any knowledge of their potency at the time of prescription. Therefore, it is important to know how to protect drugs from annihilation. Conventional medicine talks about the age of drugs in terms of shelf-life, which is the time during which the drug has decreased to 90% of its initial concentration, or remains stable (retains > 95% potency), under specified storage conditions. (2) It indicates the assurance of the efficacy, safety and esthetics of the product. (3) It is also a legal obligation. Conventional medicine has no concept of the single drug as traditional systems of medicine have; rather, it considers isolated compounds, each a single chemical entity. The conventional concept of shelf-life of drugs is objective and can be measured at parametric scales.
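To make the 90% criterion concrete: if one assumes first-order degradation kinetics (an assumption of this illustration; the text above does not specify a kinetic model), the shelf-life t90 follows directly from the degradation rate constant k:

```latex
% First-order decay: C(t) = C_0 e^{-kt}.
% Shelf-life t_{90} is the time at which C(t) = 0.9 C_0:
0.9\,C_0 = C_0\,e^{-k\,t_{90}}
\quad\Longrightarrow\quad
t_{90} = \frac{\ln(10/9)}{k} \approx \frac{0.105}{k}
```

For example, a degradation rate constant of k = 0.005 per month would give t90 of roughly 21 months under this assumption.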
Unani physicians long ago proposed the concept of Aamare Advia. Their clinical experience, imagination, intuitive work and rational behavior towards drugs suggested when a part of a plant should be collected. It is said that as long as the drug remains intact on its plant, its life thrives for a long period, but once detached, the life shortens. (4) The Unani concept of shelf-life is a bit different from the conventional concept. It is the time between the intactness of the plant part with its origin and the time over which the quality of the drug remains within specifications. Unani physicians also say that nature has a fixed time of collection for every single part of a plant. Collection at those fixed timings indicates that the part is maximally potent and efficient; collection done earlier or later makes the part futile. (5) Unani physicians have fixed the ages of most drug parts based on their experience, provided the drugs are stored under prescribed conditions. According to the Unani concept, the shelf-life of drugs is subjective and cannot be estimated at parametric scales.

Today, scientists say that herbal drugs are collected strictly when they contain the maximum concentration of active ingredients. The advantage of the prevailing environmental conditions is also considered while collecting drugs. This means that drugs have a natural capability to act promptly for a particular time, after which they show little or no action. This can be correlated with the Unani concept of the relation of the activity of drugs with planets, which may be nothing but a change in the environment with the movement of planets influencing the growth of the plant and thereby its constituents. Ancient Unani scholars clarified that drugs do not prove effective at all times. They believed that when a planet is ascending, the effect appears at its maximum, and vice versa. This has led to a moot point that every drug can be under the influence of season or environmental condition. They suggested collection of the drug from its original habitat, and that too in suitable environmental conditions, to produce good results. Practically, the concept does not seem to be applicable, because we cannot get such drugs with maximum content. Drugs are used when required; therefore, fresh drugs are stored in suitable conditions.

Although shelf-life has been found applicable to finished contemporary pharmaceuticals, in Unani medicine it is applied to both single and compound drugs. If, after collection, the drug remains stored for any length of time, changes appear in its physicochemical properties. Such changes are brought about by high temperature, presence of moisture in the storage, sunlight, and the like. Some of these factors further give rise to the growth of microbes, the secondary factors, which further deteriorate the stored drugs. The change in properties brings alteration in the chemical structure of many active constituents. Besides, we do not know how much the drug has already spoiled after procurement. It is difficult to know the potency and the age of every drug. Whatever has so far been said by the ancient physicians is based on their experience. In short, it may be said that environmental factors can affect the age of drugs. There is a rule of thumb that unless the drug changes in colour, taste or odour, it is considered to be useful. (6) There is no objectivity in confirming the potency and stability of a drug and for how long it is efficient enough to serve the purpose. Help has been taken from new technology that satisfactorily assesses the condition of drugs with respect to the conditions that almost all drugs come across. One such method/model is the stability study. An environmental chamber, called a climatic chamber or climate chamber, is an enclosure used to test the effects of specified environmental conditions on biological items, industrial products, materials, and electronic devices and components. In this model, a fresh drug is made to undergo critical conditions like increased temperature, pressure, excess light, and high humidity. The drug is kept for a certain period, and these intensified environmental conditions accelerate the process and bring early deterioration. This technology has helped in determining deterioration in a short time. After that, the same drug is assessed by various physical, chemical, biological and analytical processes that determine the changes in parameters. (7) The shelf-life is determined by real-time stability studies or by extrapolation from accelerated degradation studies.
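The extrapolation step commonly relies on the Arrhenius relation, k(T) = A exp(-Ea/RT), fitting rate constants measured at elevated chamber temperatures and projecting to the storage temperature. The sketch below illustrates this with invented rate constants; it is not based on data from any study cited here.

```python
# Arrhenius extrapolation of a degradation rate constant to storage
# temperature, then shelf-life t90 under assumed first-order kinetics.
# The accelerated-study rate constants below are invented for illustration.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical rate constants (1/month) from accelerated conditions.
T = np.array([313.15, 323.15, 333.15])   # 40, 50, 60 degC in kelvin
k = np.array([0.020, 0.055, 0.140])

# ln k = ln A - Ea/(R*T): linear fit of ln k against 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                           # activation energy, J/mol

# Project to 25 degC storage and convert to shelf-life.
T_store = 298.15
k_store = np.exp(intercept + slope / T_store)
t90 = np.log(10 / 9) / k_store            # months, first-order kinetics

print(f"Ea ~ {Ea/1000:.1f} kJ/mol, k(25 degC) ~ {k_store:.4f} /month, "
      f"t90 ~ {t90:.1f} months")
```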
The expiry date does not mean that the medicine has lost potency or become toxic, but the quality of the medicine is not assured beyond the expiry date, and the manufacturer is not liable if any harm arises from the use of the product. (2) Loss of potency beyond the life period of a formulation depends on the drug as well as the storage conditions. High humidity and temperature accelerate the degradation of many drugs. Though the majority of medicines, especially solid oral dosage forms, remain safe and active years after the stated expiry date, their use cannot be legally allowed beyond the date. (2)

METHODOLOGY

This paper is a review of information on the age and stability of drugs in Unani and conventional medicine. A search was made to collect the available information from classical Unani books, contemporary reference books, journals, articles, periodicals, and other published works. The literature from Unani medicine is cited as a reference wherever it is quoted. The keywords used for the search in the classical books were Aamare Advia and shelf-life. The information was categorized and placed at suitable points in the introduction, literature review and discussion sections.

Literature review

In Unani medicine, the term 'shelf life' of drugs is not directly mentioned as it is in conventional medicine. Instead, the term Aamare Advia (ages of drugs) is found in some Unani books. The issue with shelf-life and Aamare Advia is whether they can be used interchangeably or not. To some extent they may be, if seen from a contemporary perspective, but at times not, because shelf-life is assigned to a conventional pharmaceutical after stability studies, whereas the age of a single herbal drug is prefixed. However, shelf-life may be applicable to finished herbal products. There are two types of plants: one which thrives for a few years, and the other which is seasonal and does not survive for more than a year. After a certain period, changes appear that alter the colour, odour, taste, etc. of the drug. Physicians suggest the use of these drugs before such changes are visible. (5) Among the various types of single drugs, one should select the drug which has its peculiar odour and taste. (4) It is mentioned that no drug should be directly experimented with on the human body unless one is sure about its organoleptic properties, in the sense that if it appears unpleasant and one feels repulsion, or if the taste is unpalatable, the drug is spoiled. (8) For drugs of plant origin, some Unani physicians have said that as long as the drug continues to remain intact with its main body, it is exquisite and survives for a longer period, but once detached, its life shortens, and hence storage becomes important. Similar suggestions were made for the storage of root, stem, flower, bark and other plant parts. Since drugs have a natural propensity to lose efficacy and become weaker, collection and storage play pivotal roles.
It is said that the collection of a drug depends on its natural habitat and geographical distribution, a definite period, and a particular season. (9) There are three sources of drugs of plant origin, viz. cultivated, grown in deserts, and grown on hills/mountains. Although all are almost similar, it is said that hilly plants are more potent. (1) The purpose of storing drugs is to prevent them from degradation, with the intention of maintaining their potency. The purpose of converting single drugs into certain formulations, like tablets and pills, is also to store drugs for long. (6) There is a belief that once a dried drug is powdered, mixed with some gums which act as binders, and then shaped into tablet form, the age of the drug in its tablet form is many times more than in simple powder form. A tablet lasts for at least one year, while a powder lasts for only two months. (1) For storage, moisture-free spaces are preferred, where temperature as well as moisture are moderate. Dust, dirt, or any other kind of filth should not be there. The age of the plant plays a substantial role, regulating the total quantity of active constituents present in the drug and also determining the relative proportions of other components. (10) Evidence and research have shown that the composition of a number of secondary metabolites in the plant shows diurnal variation, i.e. varies throughout the day and night; even if the overall amount of alkaloid or glycoside does not change to any great extent, the constituents may interchange. (11) This is the reason why all pharmaceutical products need to carry the date of manufacture and date of expiry on their label.

A comparison of the concept of age in Unani and conventional medicine is the focus of this paper, especially reviewing the concept of the age of crude drugs as per the Unani literature, from the time when sophisticated devices had not been invented, against conventional medicine, which determines the ages of drugs with the help of certain innovative methods. Aamare Advia includes all aspects related to the age of a drug, like its period of efficacy, potency, degradation by environmental factors, etc. For the ancient physicians, the morphology and organoleptic characters of the drug were the only tools known. Diminished colour, odour or deformed texture was an indication that the drug was not to be used further. Organoleptic evaluation of the age of herbs, practiced since ancient times, holds a distinctly limited value now. Ancient physicians differentiated species of Mentha, clove and cardamom by smell, but smell and other such characters are not enough to know the rate at which a drug has degraded. Age had no proper definition for them. "With the best management practices under a less limited environment, it is possible to achieve the highest plant yield. Maximum yield achievable under the production system includes the best of all controllable factors needed to produce the highest possible yield. The yield maximum generally varies with climate, soils and growing seasons. In a place where all the parameters are optimum during a perfect growing season, maximum yield should be approximate to the maximum potential yield." (12) The present scenario exhibits a greater demand for herb-based drugs; the reason is an alarming increase in the side-effects, addiction and adverse reactions associated with conventional drugs. Apart from these, there exists one important drawback linked with conventional medicine, and that is the escalating price of drugs.
So, as per the requirements of our health systems, some emphasis is laid on herbal medicine. Herbal medicines altogether cannot be proclaimed the safest form of drug; they are safe in terms of their limited or lesser adverse effects. With these drugs, the matter of concern is their age. Crude herbal drugs consist of organized as well as non-organized parts like leaf, flower, seed, wood, bark, root, oils, gums, etc. Ancient physicians determined the age by organoleptic characteristics, which were the only method of knowing the identity and quality of a drug. Descriptions of colour, odour, taste and consistency were some evident features. Ancient physicians described the general condition of the drug, like size, shape, markings on the outer and inner surfaces, fractures, etc. They differentiated species of Mentha by smell only. Similarly, the quality of some drugs containing volatile oils, like clove and cardamom, was also determined by their smell. Age had no proper definition for them, but they gave preference to fresh herbs over older herbs. In this modern era, we know age by knowing the stability of a drug. Scientists have now been able to fix a definite life for a drug. Accelerated stability chambers have facilitated this and lessened the labour and time involved. The quality and potency of a drug can now be easily determined. Stability of a drug, as per the modern definition, refers to the stability of pharmaceutical agents, which determines the potency and efficacy of a drug over a period. A less stable drug retains potency and efficacy for a shorter time. Instability refers to the loss of uniformity, loss of elegance, reduction in bioavailability, production of toxic contents, or breakdown of the drug. Determining the stability or age of a drug gives a clue for how long a drug should be used. "Stability of pharmaceutical products refers to the capacity of the products or a given drug substance to remain within established specifications of identity, potency, and purity during a specified period." (12) One of the requirements of a material to be used as a drug is that it should possess the maximum activity and thus should contain a maximum percentage of active chemical constituents. (11) For this reason, drugs are purposely cultivated, because in the process of cultivation attention is paid to a number of things, like the selection of seeds or roots, type of soil, climate, weather, light, temperature, moisture, rainfall, and many other growth factors. Any drug obtained under such supervision yields a good amount of the required ingredients.

DISCUSSION

Drugs of plant origin include leaves, roots, seeds, flowers, fruits, extracts, gums, barks, oils/fats, milk/latex, branches, shoots, whole plants, salts, proteins, and active constituents. To obtain these drugs, there is a general rule that they should be collected from a plant which has grown to the fullest. (12) For the collection of drugs from their source, the seasons are usually a matter of considerable importance, as there is evidence that the amount and the nature of the active constituents do not remain constant throughout the year. (13) Drugs are collected during different seasons, at a particular time of the day, and at a definite stage of development (Ghani). Knowing the type of crude drug and the area of collection, the drugs are collected when they contain the maximum concentration of active ingredients. The environmental conditions are also taken into consideration while collecting the crude drugs. (13)
Drugs are to be collected at specific times: roots and branches when the plant is completely grown; leaves when they appear to be at their maximum size and no changes appear in them; flowers when they bloom to the fullest. Dried flowers are not collected because they have already undergone certain changes. Fruits are collected when they are ripe, and seeds when their outer covering is intact and filled with content (Maghribi, 2007). (14) Before these crude drugs are made available in the market, some preparation is needed, the reason being to stabilize them while being transported or stored and also to ensure the absence of foreign organic or non-organic matter. Such preparation of crude drugs takes care of drug elegance. (12) Drugs with natural colour, odour and taste are considered most potent, like grass, which can thrive with its natural characters for about one to two years, after which changes appear. (15) Only when we understand the concept of the potency of a drug do we come to know how important it is to protect these drugs. When low potency drugs are prescribed, we wonder at their reduced efficacy and results (Majoosi, 2010). It is said that as long as the drug remains intact with its plant, its life is long and it can thrive for a long period, but once detached, its life shortens, and only then does storage play its role. (4) Gums show potency for 3 years; extracts show it for a little less time than gums. Flowers and leaves are potent within a year only. Milk (latex) from various plants shows different periods of expiry: opium lasts for 50 years, Farfiyun for 40 years, and Scammony for 20 years, thus marking their average period of activity at 10 years. Among oils, those with a cold-wet temperament show their potency for up to 3 weeks, while oils with a hot-wet or hot-dry temperament last for 1 year and after that become rancid, with the exception of balsam oil, which has a long period of potency, and olive oil, which lasts for 4 years. Different flowers show different ages. Oily and covered seeds are potent for a year, while non-oily and uncovered ones last for a week only; less oily seeds have a potency of 2-3 years. The potency of barks, roots and branches usually depends on their structure: hollow or solid, soft or hard, flexible or rigid. To preserve each drug, it is first important to dry it in the shade so that it remains moisture-free. A little sunlight can be given to dry them, ensuring that the drug contains no more moisture. (5) In case enzymatic action is needed, slow drying at a moderate temperature is necessary; if not, drying should take place soon after the collection. (10) Preservation of crude drugs needs sound knowledge of their physical and chemical properties. Good quality of the drugs can be maintained if they are preserved well. Apart from protection against adverse physical and chemical changes, preservation against insect or mould attack is also essential. Some contents of drugs are stable to temperature and sunlight, and these drugs can be dried directly in the sunshine. (1) Drugs containing volatile oils lose their aroma if not dried or if the oil is not collected from them immediately. All drugs are liable to develop moulds as well (William, 2008). (10) For storage, moisture-free spaces are preferred, where temperature as well as moisture are moderate.
Dust, dirt, or any other kind of filth should not be there. (5) If flowers and leaves are to retain their aroma and colour respectively, they should be subjected to rapid drying, keeping in mind a temperature which will not destroy the constituents and the physical nature of the drug. As a precaution, leaves, herbs and flowers are dried at 20-40 °C, while barks and roots are dried at 30-65 °C. (10) The concept of age in Unani medicine and that of conventional medicine are not the same, but they agree up to some extent. The Unani concept of age is based on changes in the morphological as well as organoleptic characters. By observing these parameters, the Unani physicians determined and fixed the ages of different parts of plants, minerals or animals. Physicians of those times concluded that up to a particular time these characters maintain their maximum level, and until then the drug is said to be potent. Once these characters show deterioration, the drug loses its potency too. These parameters of age determination are subjective and do not justify how, by observing the morphological and organoleptic characters, one fixes the age of a crude drug. In conventional medicine, the concept of age is the expiry of the drug, which means that the drug has become less potent and cannot be put to further use. Certain parameters have been given by which the expiry of the drug can be measured by knowing the shelf-life. Before marketing, all conventional drugs pass through stressed conditions of the kind drugs usually come across at medical stores or houses. The shelf-life is calculated by keeping the drug in a stability chamber under stress conditions of high temperature, humidity and pressure. The changes appearing in the drug are measured at suitable intervals.

CONCLUSION

Aamare Advia is a basic concept, whereas shelf-life is a much more accurate and designed way of age determination. Although the methods of shelf-life cannot be fully implemented for Unani drugs, advantage can be taken of them.
Effect of Air Quality on Cardio-Respiratory Systems in Northern Thailand (Chiang Mai, Chiang Rai and Nan Province)

Poor air quality is an important problem in several countries, especially in northern Thailand. Several studies have reported the association between these problems and risks to human health. However, little is known regarding the effects of air quality on the cardio-respiratory systems of people of different ages. The aim of this study is to compare the effects of air quality on pulmonary function and cardiovascular endurance before, during, and after high PM10 periods in children, adult, and elderly groups in the north of Thailand. A prospective cohort study with three different periods was designed. A random sample of 450 participants (i.e., children, adults, and elderly people) was recruited in Chiang Rai, Chiang Mai, and Nan. Pulmonary function and cardiovascular endurance were measured by spirometer and six-minute walk distance (6MWD), respectively. A total of 335 participants completed the study: 96 children, 119 adults and 120 elderly people. For pulmonary function, the ratio of forced expiratory volume in one second to forced vital capacity (FEV1/FVC) in the children's group showed significant differences between the before-high-PM10 and high-PM10 periods (a decrease of 2.289%) and between the before-high-PM10 and after-high-PM10 periods (a decrease of 2.324%). Also, 6MWD showed significant differences in the children, adult, and elderly groups between the before-high-PM10 and high-PM10 periods (decreases of 80.480, 36.640, and 25.511 meters, respectively) and between the before-high-PM10 and after-high-PM10 periods (decreases of 70.488, 22.874, and 16.374 meters, respectively). Therefore, poor air quality had a negative effect on the cardio-respiratory system.

INTRODUCTION

Poor air quality is a significant public health problem in several countries, including the northern part of Thailand. Northern Thailand faces the haze problem (i.e., air pollution) due to the landscape of the north, which is a mountainous and basin-like area. Further, haze might be released from neighboring countries and transported in the atmosphere, resulting in increased air pollution (Vichit-Vadakan and Vajanapoom, 2011). The main causes of haze and air pollution are forest fires and the burning of agricultural materials. The Department of National Parks, Wildlife and Plant Conservation reported in 2016 that the number of fires was 6,685 and the damaged area was 112,523.90 rai from 1st October 2005 to 24th May 2016 (Forest Fire Control Division National Park, 2016). In addition, the northern part of Thailand usually faces haze and air pollution problems, in particular Chiang Mai, Chiang Rai, Mae Hong Son, and Nan. Substances in the haze and air pollution are many, but include sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3), and especially particulate matter (PM) (Brook et al., 2010; Wang et al., 2016). A large amount of exposure leads to decreased pulmonary function, increased risk of respiratory symptoms, airway inflammation, and fibrosis of the lungs. Moreover, CO can combine with hemoglobin in the blood (HbCO). This prevents hemoglobin from carrying oxygen to the tissues, effectively reducing the oxygen-carrying capacity of the blood (Townsend et al., 2002). Besides, PM can be absorbed into the lungs and caught in the small airways and alveoli (Nemmar et al., 2002; Lu et al., 2013).
Moreover, exposure to particulate air pollutants (e.g., particulate matter with a diameter smaller than 10 μm: PM10) is related to health problems such as increased mortality rate, decreased life expectancy, and increased respiratory and cardiovascular symptoms in acute and chronic exposures (Pope et al., 2004; He et al., 2010; Karakatsani et al., 2012; Roy et al., 2012; Pothirat et al., 2019a; Pothirat et al., 2019b). A recent study in Thailand reported that PM10 was associated with acute respiratory syndrome (i.e., exacerbation of chronic obstructive pulmonary disease) based on hospital data records (Pothirat et al., 2019a). In addition, PM10 is also a risk factor for daily mortality and for death from respiratory disease (i.e., chronic obstructive pulmonary disease) and cardiovascular disease (i.e., coronary artery disease) in Chiang Mai, Thailand (Pothirat et al., 2019b). However, the majority of these reports were extracted from the National Centre for Health Statistics or from hospital data. Few studies have examined the health impact of exposure in different participants and at different times within the same study. Therefore, this study focused on the health impacts of air quality on the cardio-respiratory system, in terms of pulmonary function and cardiovascular endurance, in three different time periods (i.e., before the high PM10 period, the high PM10 period, and after the high PM10 period) in children, adult, and elderly groups living in northern Thailand.

METHODS

A prospective cohort study was designed with three durations, before, during, and after the high PM10 period, in three different areas in the northern part of Thailand. Generally, there are no criteria for selecting episodic values from a monitoring site database (Reizer and Juda-Rezler, 2016). Therefore, the present study defined the high PM10 period as the period of high PM10 concentrations during the last five years. According to the Pollution Control Department data over five years (between 2012 and 2016), the high PM10 period falls approximately between March and April. Therefore, data were collected in April (as the high PM10 period). The before-high-PM10 period was defined as before March, and the after-high-PM10 period was defined as after April. Thus, for the before-high-PM10 period, data were collected from December 11th to 29th, 2016; for the high PM10 period, from April 18th to 27th, 2017; and for the after-high-PM10 period, from June 19th to 27th, 2017. According to the reports of the Pollution Control Department for 2012-2016, Chiang Mai, Chiang Rai, and Nan provinces had air pollution problems, in particular haze (Figure 1). So, air quality monitoring sites were set up in Chiang Mai at Mae Chaem district and Meuang district, in Chiang Rai at Mae Sai district and Meuang district, and in Nan province at Chaloem Phra Kiat district and Meuang district (The Pollution Control Department, 2016). In addition, local meteorological instruments are used in these districts' areas. Therefore, these areas were explored to monitor PM10, CO, and the air quality index (AQI), and also to determine the health effect of haze on cardio-respiratory systems. Three provinces in the northern part of Thailand were selected because of a high prevalence of PM10.
Participants who were living in Chiang Mai (Mae Chaem district and Meuang district), Chiang Rai (i.e., Mae Sai district and Meuang district), and Nan province (i.e., Chaloem Phra Kiat district and Meuang district) were invited to the study. No one has researched the overall effects of the different levels of air pollution (before, during, and after high PM10) on the different age groups within a given population. Therefore, the sample size calculation assumed an effect size of 0.2 and a power of 80%, giving 54 participants required in each of the six areas. However, to prevent dropout, 75 participants were recruited in each area (Chiang Mai province at Meuang district and Mae Chaem district, Chiang Rai province at Meuang district and Mae Sai district, and Nan province at Meuang district and Chaloem Phra Kiat district). In total, 150 participants in the children's group (aged 10-15 years), 150 participants in the adult group (aged 18-59 years), and 150 participants in the elderly group (aged ≥ 60 years) were examined. These participants were able to understand and communicate in the Thai language. However, participants with unstable angina, recent myocardial infarction, pulmonary embolus, resting heart rate > 120 beats per minute, systolic blood pressure > 180 mmHg and/or diastolic blood pressure > 100 mmHg were excluded. In addition, participants who had been diagnosed with a neurological disease or a musculoskeletal disease that might interfere with test performance, and pregnant women, were also excluded from this study. All participants were asked to provide informed consent before the start of the study. The study protocol approval was obtained from the Ethics Committee of Thammasat University.

The pulmonary function test was performed by spirometer (MicroLab™ spirometer, CareFusion Company, United Kingdom). The protocol followed the American Thoracic Society (ATS) guidelines (Miller, 2005). Briefly, the participants were asked to blow into the tube as hard and fast as possible and then keep exhaling for at least six seconds. Forced vital capacity (FVC), forced expiratory volume in one second (FEV1), the ratio of forced expiratory volume in one second to forced vital capacity (FEV1/FVC), and peak expiratory flow (PEF) were recorded. Moreover, the six-minute walk test (6MWT) was performed to evaluate cardiovascular endurance. The protocol followed the ATS guidelines (American Thoracic Society, 2002), and the distance for the 6MWT was recorded. Briefly, all participants were instructed to walk along a straight 30-meter corridor for 6 minutes. Heart rate, blood pressure, oxygen saturation and rate of perceived exertion were measured before and after the tests. The distance walked in 6 minutes was then recorded. In this study, the PM10, CO and AQI episode values were defined as the average daily concentrations at the air quality monitoring sites. Information on air quality was obtained from the reports of the Pollution Control Department. Data on PM10, CO, and AQI were then recorded. SPSS version 22.0 was used for analysis. The p-value was set at less than 0.05. Repeated-measures ANOVA with Bonferroni post-hoc tests was conducted to compare the health impacts on the cardio-respiratory system (e.g., pulmonary function and six-minute walk distance (6MWD)) across the three different situations (before high PM10, high PM10, and after high PM10), and likewise to compare pollutant concentrations across the three situations.
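A minimal sketch of this analysis in Python is shown below, assuming a long-format table with one row per participant per period; it uses statsmodels' AnovaRM for the repeated-measures ANOVA and Bonferroni-adjusted paired t-tests as the post-hoc step. The file and column names are illustrative assumptions, not from the study's dataset.

```python
# Repeated-measures ANOVA across the three periods, with Bonferroni
# post-hoc paired comparisons. The data frame layout (columns 'subject',
# 'period', 'fev1_fvc') is an illustrative assumption.
import pandas as pd
from itertools import combinations
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# df: one row per participant per period, e.g.
#   subject  period  fev1_fvc
#         1  before      92.1
#         1  high        90.3  ...
df = pd.read_csv("pulmonary_function_long.csv")  # hypothetical file

anova = AnovaRM(df, depvar="fev1_fvc", subject="subject",
                within=["period"]).fit()
print(anova)

# Post-hoc: paired t-tests for each pair of periods, Bonferroni-adjusted.
periods = ["before", "high", "after"]
n_tests = len(list(combinations(periods, 2)))
wide = df.pivot(index="subject", columns="period", values="fev1_fvc")
for a, b in combinations(periods, 2):
    t, p = ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t = {t:.3f}, p_bonf = {min(p * n_tests, 1.0):.4f}")
```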
RESULTS

The study was performed in six districts within three provinces in the northern part of Thailand: Chiang Mai (Meuang district and Mae Chaem district), Chiang Rai (Meuang district and Mae Sai district), and Nan (Meuang district and Chaloem Phra Kiat district). Across the three different periods, 450 participants were recruited at baseline, composed of 150 participants in each group (i.e., children, adults, and elderly people). However, 115 participants (26%) dropped out of the study (52 participants were busy, 14 participants were sick, eight participants moved out, 40 participants could not be contacted, and one participant died). Therefore, only 335 participants (74%) took part in this study (Figure 2). The study consisted of 96 participants in the children's group (mean age 12.33 ± 1.45 years), 119 participants in the adult group (mean age 43.23 ± 10.19 years), and 120 participants in the elderly group (mean age 69.58 ± 7.93 years). Table 1 displays baseline health and general information for the child, adult, and elderly participants.

The averages of PM10, CO in the atmosphere, and AQI in the three different periods (before high PM10 period, high PM10 period, and after high PM10 period) are presented in Figures 3-5. PM10, CO in the atmosphere, and AQI showed statistically significant differences between the before-high-PM10 and high-PM10 periods and between the high-PM10 and after-high-PM10 periods (P < 0.001). Moreover, PM10 and AQI showed statistically significant differences between the before-high-PM10 and after-high-PM10 periods (P < 0.001). Note: *** indicates P < 0.001. Figure 5. The averages of AQI in three different periods: before high PM10 period, high PM10 period, and after high PM10 period.

Pulmonary function in the three different periods (before high PM10 period, high PM10 period, and after high PM10 period) in the children, adult, and elderly groups. Mean values and standard deviations of pulmonary function in the children's, adult, and elderly groups are displayed in Table 2. In the children's group, FVC showed statistically significant differences across the three periods (the differences between the before-high-PM10 and high-PM10 periods, the high-PM10 and after-high-PM10 periods, and the before-high-PM10 and after-high-PM10 periods were -0.123 L, P < 0.001; -0.043 L, P = 0.020; and -0.166 L, P < 0.001, respectively). Besides, FEV1/FVC showed statistically significant differences between the before-high-PM10 and high-PM10 periods (a decrease of 2.289%; P = 0.048) and between the before-high-PM10 and after-high-PM10 periods (a decrease of 2.324%; P = 0.017). However, FEV1 and PEF showed no significant differences between the before-high-PM10 and high-PM10 periods (P > 0.05). Also, in the adult and elderly groups, there were no significant differences in any variable of pulmonary function between the before-high-PM10 and high-PM10 periods (P > 0.05).

Cardiovascular endurance in the three different periods (before high PM10 period, high PM10 period, and after high PM10 period) in the children's, adult, and elderly groups. 6MWD was used as the indicator of cardiovascular endurance. The results showed that 6MWD differed statistically significantly between the before-high-PM10 and high-PM10 periods and between the high-PM10 and after-high-PM10 periods in the children's group. Note: a, P-value from the repeated-measures ANOVA comparing the before-high-PM10 and high-PM10 periods;
_b P-value from repeated measures ANOVA comparing the high PM10 period with the after high PM10 period. _c P-value from repeated measures ANOVA comparing the before high PM10 period with the after high PM10 period. * indicates P < 0.05, *** indicates P < 0.001.

DISCUSSION

The study provides evidence that haze and air pollution, or poor air quality, are associated with impaired pulmonary function, as measured by FEV1/FVC in the children's group, and with reduced cardiovascular endurance in the children's, adults', and elderly groups.

Pulmonary function

FVC in the children's group differed significantly across the three periods, with mean values of 2.36 L in the before high PM10 period, 2.48 L in the high PM10 period, and 2.52 L in the after high PM10 period. Previous studies have reported the minimal clinically important difference (MCID), the smallest change in a measure that matters in clinical practice; the MCID corresponds to a change of 2-6% (du Bois et al., 2011). The difference in FVC across the three periods in the present study was 1.6%, so the change in FVC was not clinically important (see the sketch below).

Airway obstruction is defined in terms of FEV1/FVC (He et al., 2010). In the children's group, FEV1/FVC differed significantly between the before high PM10 period and the high PM10 period and between the before high PM10 period and the after high PM10 period, with mean values of 92.48%, 90.19%, and 90.16% in the before, high, and after high PM10 periods, respectively. FEV1/FVC in the high PM10 period was lower than in the before high PM10 period, showing a negative trend in obstructive status: the increased pollutant load during the high PM10 period may have obstructed airflow into the lungs. Although air quality returned to good levels in the after high PM10 period, FEV1/FVC did not return to its value in the before high PM10 period. A possible reason is that the interval from the high PM10 period to the after high PM10 period, a follow-up of only two months, may not have been long enough for full recovery.

In addition, FEV1 and PEF in the children's group, and all pulmonary function variables in the adults and elderly groups, showed no significant differences between the before high PM10 period and the high PM10 period. The pollutant concentrations observed during the high PM10 period were relatively low compared with the standard levels, which may explain why the statistical analyses did not detect substantial changes. These findings are similar to those of several previous studies (Aekplakorn et al., 2003; Hoek et al., 2012) that found no significant association between pollutants and pulmonary function; Aekplakorn et al. (2003) examined short-term exposure, while Hoek et al. (2012) examined moderate levels of air pollutants. Short-term exposure to air pollution at low concentrations might therefore not significantly affect pulmonary function. However, the present study was inconsistent with several studies that reported significant associations between pollutants and pulmonary function (Ackermann-Liebrich et al., 1997; Goss et al., 2004; Schikowski et al., 2005; Downs et al., 2007; Kan et al., 2007), possibly because those studies collected data over long periods and at high pollutant concentrations. For example, Downs et al. (2007) reported that participants exposed to PM10 displayed a reduction in lung function: the net effect of a decline of 10 μg of PM10 per cubic metre over an 11-year period was to reduce the annual rate of decline in FEV1 by 9% and of FEF25-75 by 16%.
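A minimal sketch of the MCID comparison described above; the helper names are our own, and the 1.6% change and 2-6% MCID range are the figures quoted in the text:

```python
def percent_change(baseline: float, follow_up: float) -> float:
    """Change in a lung-function measure as a percentage of baseline."""
    return 100.0 * (follow_up - baseline) / baseline

def exceeds_mcid(change_pct: float, mcid_range=(2.0, 6.0)) -> bool:
    """True if the absolute change reaches the lower bound of the MCID range."""
    return abs(change_pct) >= mcid_range[0]

# The study reports an overall FVC difference of 1.6%, below the 2% lower
# bound of the MCID range, hence not clinically important.
print(exceeds_mcid(1.6))  # False
```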
However, the mechanisms linking PM to human health outcomes such as lung function remain unclear. Some studies have suggested that PM may mediate adverse health effects via the generation of reactive oxygen species (Hogervorst et al., 2006; Janssen et al., 2015), activation of cell signalling pathways, and alteration of respiratory tract barrier function and antioxidant defences, all of which may lead to airway inflammation and changes in pulmonary function (Janssen et al., 2015).

Cardiovascular endurance

The 6MWT is a useful measure of functional capacity, and the evidence supports its association with cardiovascular disease: a shorter walking distance indicates a higher risk of cardiovascular disease (Yap et al., 2015; Zotter-Tufaro et al., 2015). The present study found significant differences in 6MWD between the before high PM10 period and the high PM10 period in the children's, adults', and elderly groups. A possible reason is the pollutant concentration levels. The air quality index ranges from 0 to more than 300: 0-50 represents good air quality; 51-100, moderate; 101-200, unhealthy; 201-300, very unhealthy; and 301 or more, hazardous. A value below 100 has no known health effects for the majority of healthy people (The Pollution Control Department, 2016); the bands are encoded in the sketch at the end of this subsection. Although air quality in both periods was below the level known to affect health, pollutant concentrations in the high PM10 period were greater than in the before high PM10 period, and air quality in the high PM10 period was only moderate (AQI of 70.26). Exposure to air pollution may thus underlie a negative association between the air quality index and cardiovascular endurance. Du et al. (2016) reported that the PM in air pollution is related to altered vessel function and increased cardiovascular disease. Furthermore, some studies have reported that O3, one of the pollutants in the air quality index, induces bronchial inflammation (Alexis et al., 2010; Song et al., 2011), makes breathing difficult, and thereby degrades physical performance.

Another possible reason is that the high PM10 period fell in the summer season, which had higher temperatures than the before high PM10 period (the average maximum temperature was approximately 31 °C in the before high PM10 period and 37 °C in the high PM10 period). Some studies have suggested that performance capacity may be altered in hot environments (Peiffer and Abbiss, 2011), and several previous studies have reported that increased temperature can decrease performance capacity (Galloway and Maughan, 1997; Tatterson et al., 2000; Lindemann et al., 2017) through increases in core temperature, heart rate, rating of perceived exertion, and metabolic rate, as well as dehydration (González-Alonso et al., 2008; Tansey and Johnson, 2015). Other pollutants and temperature were therefore not accounted for as confounding factors that might affect cardiovascular endurance; further study is needed to explore this.

Furthermore, 6MWD also differed significantly between the before high PM10 period and the after high PM10 period, with the mean 6MWD in the after high PM10 period lower than in the before high PM10 period. This may be because the interval from the high PM10 period to the after high PM10 period was not long enough for recovery.
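For reference, the AQI bands quoted above can be encoded as a small lookup; this is an illustrative sketch using the cut-points stated in the text, not an official implementation:

```python
# AQI bands as quoted from The Pollution Control Department (2016).
AQI_BANDS = [
    (50, "good"),
    (100, "moderate"),
    (200, "unhealthy"),
    (300, "very unhealthy"),
]

def aqi_category(aqi: float) -> str:
    """Map an AQI value to the band labels used in the text."""
    for upper, label in AQI_BANDS:
        if aqi <= upper:
            return label
    return "hazardous"

# The high PM10 period in this study had a mean AQI of 70.26.
print(aqi_category(70.26))  # "moderate"
```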
The present study has several limitations. First, data were collected over a short time period, and the pollutant concentrations observed were largely within the standard levels, so some statistical analyses did not detect substantial changes; owing to government policy restricting the burning period and to intense rain and storms during participant recruitment, very high air pollution was not recorded. Lastly, the sample size was small. A long-term study during a period of high air pollution, with a larger sample size, could therefore confirm whether poor air quality harms the cardiovascular and respiratory systems of different age groups over different study durations.

CONCLUSION

Moderate air quality is associated with an increased risk of airway obstruction and decreased cardiovascular endurance, and from a cardiorespiratory perspective these parameters may not have fully recovered within two months of follow-up. People at risk of cardiorespiratory disease should therefore be advised to use a personal protective mask to protect their health from haze, and the government should take responsibility for the policies controlling haze and air pollution.
Previous Renal Replacement Therapy Time at Start of Peritoneal Dialysis Independently Impacts on Peritoneal Membrane Ultrafiltration Failure

Background. Peritoneal membrane changes are induced by uraemia per se. We hypothesise that previous renal replacement therapy (RRT) time and residual renal function (RRF) at the start of peritoneal dialysis impact on ultrafiltration failure (UFF). Methods. The time course of PET parameters from 123 incident patients, followed for a median of 26 (4-105) months, was evaluated by a mixed linear model. Glucose 3.86% solutions were not used in their standard therapy. Sex, age, diabetes, previous RRT time, RRF, comorbidity score, PD modality, and peritonitis episodes were investigated as possible determinants of UFF-free survival. Results. PET parameters remained stable during follow-up. CA125 decreased significantly. Inherent UFF was diagnosed in 8 patients, 5 of whom recovered spontaneously. The acquired UFF group presented a type I UFF profile with compromised sodium sieving. At baseline they had lower RRF and a longer previous time on RRT, which remained significantly associated with UFF-free survival by Cox multivariate analysis (HR 0.648 (0.428-0.980), P = 0.04, and HR 1.016 (1.004-1.028), P = 0.009, resp.). UFF-free survival was 97%, 87%, and 83% at 1, 3, and 5 years, respectively. Conclusions. Inherent UFF is often unpredictable but transitory. On the other hand, lower baseline RRF and previous RRT time independently impact on ultrafiltration-failure-free survival. In spite of these detrimental factors, generally stable long-term peritoneal transport parameters are achievable, with a 5-year cumulative UFF-free survival of 83%. This study adds a further argument for a PD-first policy.

Introduction

Peritoneal membrane ultrafiltration failure (UFF) is a relevant long-term complication menacing peritoneal dialysis (PD) [1]. It has been reported to lead to technique failure at a rate of 1.7% [2] to 13.7% [3]. Peritoneal morphological changes seem to be related to dialysis solutions and their bioincompatibility, and to infections. The uremic milieu per se may also contribute to peritoneal changes, since both submesothelial fibrosis and vascular changes are already present in uremic patients before dialysis induction. The median thickness of the submesothelial compact collagenous zone was 50 μm for normal subjects, but 140 μm for uremic predialysis patients, 150 μm for patients undergoing hemodialysis, and 270 μm for patients undergoing PD [4]. Honda et al. concluded that the average peritoneal thickness was increased in uremic patients and progressively thickened as the duration of peritoneal dialysis was prolonged, while the lumen/vessel diameter ratio was lower in uremia than in normal subjects and progressively decreased as the duration of peritoneal dialysis was prolonged [5]. Thus, the effect of uremia on the baseline and time-dependent profiles of peritoneal membrane function deserves further study: uremia is a continuous bystander in dialysis patients that has only recently been introduced into PD animal models [6] and is often excluded from UFF analyses [7]. Currently, the determinants of small-solute, protein, and water transport across the peritoneal membrane, as well as their evolution during PD therapy, are still a matter of debate. Recently, some mechanisms involved in acquired UFF have been identified, but less is known about the role of previous renal replacement therapy time in this issue. Moreover, early UFF is still an unexplained phenomenon.
A fast transport status is the primary mechanism of UFF. It is sometimes documented as an inherent condition whose clinical impact has been debated [8][9][10][11], but early UFF often remains unexplained [7,12]. Later during PD, loss of glucose osmotic conductance may add to the process of acquired UFF, with a disproportionately more severe compromise of free water transport [13]. Additionally, it is known that peritoneal fibrosis is induced by PD solutions, but uraemia per se is also a fibrogenic factor [14]. Residual renal function and previous renal replacement therapy time at PD start are clinical variables that reflect the cumulative uremic burden. We aim to identify relevant clinical determinants of early and acquired UFF, focusing on the independent impact of previous renal replacement therapy time and residual renal function at the start of PD. Any such independent impact would strengthen PD prescription as a first renal replacement therapy option.

Patients and Methods

We prospectively studied 123 consecutive incident peritoneal dialysis patients enrolled at the Hospital Santo António PD Unit since 1st January 2001. All patients were free of hypertonic 3.86% glucose solutions. The standard prescription included low-GDP solutions; the median glucose concentration exposure was 1.65% (range 1.36%-2.27%) and 40% of patients used icodextrin. Age, diabetes, previous renal replacement therapy (RRT) time, baseline residual renal function (RRF) quantified as glomerular filtration rate (GFR, mL/min/1.73 m2) based on 24-hour urine collections with determinations of creatinine and urea, Davies comorbidity score, automated PD, and peritonitis events were investigated as possible determinants of baseline or late UFF. All patients performed baseline and yearly 3.86% peritoneal equilibration tests (PETs) and were followed for a median of 26 (4-105) months. D/P creatinine, D/D0 glucose, sodium sieving, and peritoneal ultrafiltration (UF) were analyzed, and UF failure was defined as a net UF lower than 400 mL after a 4-hour dwell of the 3.86% PET; the CA125 appearance rate was also calculated after the 4-hour PET dwell. The time course of PET parameters was explored by repeated-measurements mixed linear model analysis with SPSS software. Clinical and laboratory parameters considered possible determinants of UFF were investigated, and their impact on UFF-free survival was studied using Cox multivariate analysis. The investigation was made both in the whole cohort and in the subgroup obtained after excluding patients admitted after renal graft failure.

Results

Time Course of Peritoneal Membrane Function. Repeated-measurements mixed model analysis showed that small-solute, UF, and sodium-sieving parameters remained essentially stable during the follow-up. A U-shaped curve of D/P creatinine was documented, but this variation with time did not attain significance (Figure 1). CA125 decreased progressively (P = 0.009) (Figure 2), mainly in late UFF patients. The same profile was documented in the subgroup of patients obtained after excluding those admitted after renal graft failure (D/P creatinine U-shaped curve, though P = ns; for the CA125 parameter, P = 0.015). On the other hand, the acquired UFF group presented a type I UFF profile with clearly compromised sodium sieving (D/P creatinine 0.83 ± 0.10 versus 0.72 ± 0.12, P = 0.035, and D/P Na60 0.92 ± 0.028 versus 0.87 ± 0.034, P = 0.010) (Table 2). They had significantly lower baseline RRF (P = 0.009) and longer previous RRT time (P = 0.003) (Figure 3).
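As an illustration of the UFF criteria defined in the Methods, here is a minimal sketch; the dataclass, function names, and the fast-transport and sodium-sieving cut-offs are our own illustrative assumptions, not values taken from the study (only the 400 mL net-UF threshold is from the text):

```python
from dataclasses import dataclass

@dataclass
class PetResult:
    net_uf_ml: float       # net ultrafiltration after the 4-h 3.86% dwell
    dp_creatinine: float   # dialysate/plasma creatinine at 4 h
    dp_na_60: float        # dialysate/plasma sodium at 60 min (sodium sieving)

def has_uff(pet: PetResult, uf_threshold_ml: float = 400.0) -> bool:
    """UFF as defined in the Methods: net UF below 400 mL at 4 hours."""
    return pet.net_uf_ml < uf_threshold_ml

def type_i_profile(pet: PetResult, fast_dp: float = 0.81,
                   na_cut: float = 0.90) -> bool:
    """Fast small-solute transport plus blunted sodium sieving (high D/P Na).
    The 0.81 and 0.90 cut-offs are illustrative, not from the paper."""
    return pet.dp_creatinine >= fast_dp and pet.dp_na_60 >= na_cut

pet = PetResult(net_uf_ml=320.0, dp_creatinine=0.83, dp_na_60=0.92)
print(has_uff(pet), type_i_profile(pet))  # True True
```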
Discussion

Our study highlights that residual renal function and previous cumulative renal replacement therapy time, in a contemporary PD population free of exposure to hypertonic 3.86% glucose solutions, independently impact on ultrafiltration-failure-free survival. This study therefore adds a new argument for a PD-first policy as a strategy to improve technique survival. Additionally, it documented that important membrane functional changes occur from the very start of PD.

Measuring peritoneal transport characteristics is an approach that gives objective and reproducible information on peritoneal performance and on possible etiological factors of UFF [15]. A fast transport status, however, either alone or in combination with other alterations in membrane function, remains the most common underlying mechanism of UFF. We indeed showed that the acquired UFF group presented a type I UFF profile with compromised sodium sieving. UFF in long-term PD is most often due to a combination of a rapid disappearance of the osmotic gradient and an impairment of transcellular water transport (TCWT) [13]. But the activity of water channels depends on and is limited by the crystalloid osmotic pressure [16], which our methodology did not allow us to calculate, a limitation for the characterization of late-stage UFF. In spite of that, we were able to document compromised free water transport through the indirect sign of decreased sodium sieving. For this reason, we are now measuring the actual UF and effluent sodium after a 60-minute dwell followed by effluent reinfusion and completion of a standardized 4-hour 3.86% PET, which allows evaluation of both free water transport and standardized small-solute transport [17]. Finally, back-filtration of fluid through the capillaries and fluid reabsorption from the peritoneal cavity into tissues and lymphatics is a recognized mechanism of UF failure and accounts for approximately 25% of the cases of UF dysfunction, but only investigational methods using tracer macromolecules, hard to apply in a clinical ward, are able to evaluate this.

More relevant to our study was to highlight that baseline UFF is prevalent but often transitory and is not predicted by baseline clinical variables, in line with previous investigations [7][8][9][10][11][12]. Many aspects of early-stage transport changes and their mechanisms indeed remain to be understood. While lymphatic absorption cannot be excluded as a cause of early UFF, the evolution of patients recovering ultrafiltration capacity does not support such an etiology. We can speculate that, although no significant baseline differences in small-solute transport were documented between the groups with and without UFF, membrane structural changes induced by uremia per se, namely interstitial fibrosis, might explain the marginal compromise of sodium sieving. Sodium sieving indeed gives lumped information: it depends not only on an increase of the diffusive mass transport coefficients for small solutes, but also on a decrease of the glucose osmotic conductance (number and function of aquaporins, number and diameter of small pores) and on a reduction of the ultrafiltration coefficient of the peritoneal membrane (a role for the interstitial changes). Interestingly, we found a U-shaped curve of D/P creatinine during the follow-up, already reported by our group and others [8,13,18], though it did not attain statistical significance in this contemporary cohort.
The early phase of D/P creatinine normalisation may express an adaptive process whose mechanisms are unclear but may include early recruitment or vasodilation of vessels mediated by vasoactive mediators, many of them secreted by mesothelial cells. Therefore, in some of our patients a transitory fast transport status may explain the inherent UFF. In other patients, the causes of such baseline UFF are not clear, pointing to the complexity of the peritoneal membrane's time-dependent functional changes. The risk phase with clinical impact may be documented by the late, rising side of the U-shaped curve, with decreasing mesothelial cell mass as a marker of the structural changes that go along with UFF and compromised sodium sieving. Again we highlight the importance of routine membrane monitoring, including an accessible and affordable structural marker, the CA125 effluent appearance rate [19]. However, our global population presented stable transport rates for small molecules and stable sodium sieving over time. This is in accordance with previous publications in which small-solute transport parameters were found to be increased only in long-term patients [20] but, happily, in disagreement with the gloomier reports of a sustained and inexorable increase of D/P creatinine over time, already from the start [21].

On the other hand, uremia, with baseline GFR as its surrogate, is indeed an important bystander not usually taken into account in investigations of peritoneal membrane changes. We identified it here as a clinical variable that independently impacts on UFF-free survival. This clue deserves further investigation but suggests that uremia may be crucial in explaining acquired peritoneal membrane changes and, although it has not been associated with baseline transport characteristics, may modulate the membrane's time-dependent profile [4][5][6]. As a limitation of our study, we did not control for a panel of pharmacological agents shown experimentally to modulate membrane structure, namely renin-angiotensin system inhibitors and erythropoiesis-stimulating agents [22,23]. However, since the use of these agents is nearly universal in our PD patients, this is not presumed to change our results.

In spite of some controversy [18], our study also showed that the influence of peritonitis on the development of UFF seems to be limited. It has been found that patients with a history of peritonitis were not different from patients without a previous peritonitis episode in terms of the D/P ratio and mass transfer area coefficient of low-molecular-weight solutes, lymphatic absorption rate, transcapillary ultrafiltration, and net ultrafiltration [24]. Only clusters of peritonitis, or peritonitis episodes that occur later in PD, have been described as causing a decrease in UF [25]. Considering the link between comorbidity and peritoneal transport, the data are controversial. Some papers document that the systemic inflammation associated with comorbid diseases and elevated interleukin- (IL-) 6 levels may induce vasodilation and neoangiogenesis in the peritoneal membrane [26]. Like others [27], we did not find any association between comorbidity and higher transport rates, nor was the comorbidity score predictive of UFF. As a structural marker, effluent cancer antigen 125 (CA125) can be used to reflect mesothelial cell mass and cell turnover in stable, noninfectious PD patients. Its decrease with the duration of PD, described previously [28], is consistent with the cell loss reported in peritoneal biopsies.
Such a profile of the effluent CA125 appearance rate is therefore more likely a sign of damage to the peritoneum than a causative factor of UFF in itself. It can be interpreted as an additional prognostic sign, adding to the changes in D/P creatinine and effluent IL-6 [29]. In conclusion, this paper documents early-stage peritoneal membrane changes, with transitory cases of inherent ultrafiltration capacity failure dissociated from small-solute transport, whose mechanisms remain unclear. On the other hand, lower baseline RRF and longer previous RRT time were associated with acquired UFF in our population. In spite of these detrimental factors, we found generally stable long-term peritoneal transport parameters, with a 5-year cumulative UFF-free survival of 83%. By highlighting the importance of previous cumulative RRT time and baseline RRF for peritoneal membrane function status, these results support a PD-first strategy in the integrated renal replacement treatment plan.
Stochasticity, a variable stellar upper-mass limit, binaries and star-formation rate indicators

Using our Binary Population And Spectral Synthesis (BPASS) code we explore the effects on star-formation rate indicators of stochastically sampling the stellar initial mass function, adding a cluster-mass-dependent stellar upper-mass limit and including binary stars. We create synthetic spectra of young clusters and star-forming galaxies and compare these to observations of Hα emission from isolated clusters and the relation between Hα and FUV emission from nearby galaxies. We find that observations of clusters tend to favour a purely stochastic sampling of the initial mass function for clusters of less than 100 M⊙, rather than the maximum stellar mass being dependent on the total cluster mass. It is more difficult to determine whether the same is true for more massive clusters. We also find that binary stars blur some of the observational differences that occur when a cluster-mass-dependent stellar upper-mass limit is imposed when filling the IMF. The effect is greatest when modelling the observed Hα and FUV star-formation rate ratios in galaxies. This is because mass transfer and the merging of stars owing to binary evolution create more massive stars, and stars that have greater mass than the initial maximum imposed on the stellar population.

INTRODUCTION

A key problem when modelling stellar populations is how to determine the distribution of initial stellar masses in the population. The conventional method is to define an initial mass function (IMF) according to which the number of stars of a given mass is calculated as a function of the mass. Typically a power law of the mass is used. The first was suggested by Salpeter (1955), where dN(M) ∝ M^−2.35 dM for 0.3 < M/M⊙ < 10. Despite being 57 years old this IMF is still widely used and appears to be universal. This slope holds over a wide range of stellar masses, only flattening in gradient below stellar masses of around 1 M⊙ (Miller & Scalo 1979; Kroupa 2001; Chabrier 2003; Bastian, Covey & Meyer 2010).

The IMF provides the distribution of stellar initial masses in a stellar population such as that found in stellar clusters. However, when trying to simulate the stellar population in a galaxy it is important to recognise that a galaxy is not made up of one unique stellar population. A galaxy is actually made up of a number of stellar clusters, each with their own mass and age. The masses of these clusters are also described by their own cluster initial mass function. Furthermore, fluctuations in the IMF of these clusters, especially if their mass is less than 10^4 M⊙, can produce large variations in the ionising fluxes from the cluster (Cerviño et al. 2003; Villaverde, Cerviño & Luridiana 2010a,b). The implication of stars forming in clusters is that the stellar IMF (SIMF) does not apply across an entire galaxy. Instead, to get the galaxy-wide distribution of stellar masses we must model a number of stellar clusters with different masses according to a cluster initial mass function (CIMF) and within each cluster apply a SIMF to produce an integrated galaxial initial mass function (IGIMF) (e.g. Weidner & Kroupa 2006; Pflamm-Altenburg, Weidner & Kroupa 2007). This is the case even if not all the clusters are dense enough to remain bound over their lifetimes (Portegies Zwart, McMillan & Gieles 2010; Bressert et al. 2010; Gieles & Portegies Zwart 2011).
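To make the sampling concrete, here is a minimal sketch (our own code, not the BPASS implementation) of drawing stellar masses from a two-segment power-law IMF by inverse-CDF sampling, using the slopes and mass range adopted later in this paper (−1.3 between 0.1 and 0.5 M⊙, Salpeter −2.35 between 0.5 and 120 M⊙):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_imf(n, edges=(0.1, 0.5, 120.0), alphas=(-1.3, -2.35)):
    """Draw n masses (Msun) from a broken power-law IMF, dN/dM ∝ M^alpha."""
    def seg_integral(alpha, lo, hi):
        a1 = alpha + 1.0
        return (hi**a1 - lo**a1) / a1
    # Continuity factor so dN/dM is continuous at the break mass.
    k = np.array([1.0, edges[1]**(alphas[0] - alphas[1])])
    w = np.array([k[i] * seg_integral(alphas[i], edges[i], edges[i + 1])
                  for i in range(2)])
    seg = rng.choice(2, size=n, p=w / w.sum())   # segment by number fraction
    lo = np.array(edges)[seg]
    hi = np.array(edges)[seg + 1]
    a1 = np.array(alphas)[seg] + 1.0
    u = rng.random(n)
    return (lo**a1 + u * (hi**a1 - lo**a1))**(1.0 / a1)  # inverse-CDF draw

masses = sample_imf(100_000)
```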
Bastian, Covey & Meyer (2010) and Haas & Anders (2010) reviewed and investigated the importance of combining a CIMF and a SIMF to make a galaxy-wide SIMF. They found that the resultant galaxy-wide SIMF is most sensitive to the minimum mass for a cluster, the slope of the CIMF and whether the mass of the most massive star is limited by the cluster mass. The most extreme articulation of the last factor is whether it is physically possible for a 100 M⊙ cluster to be composed of a single 100 M⊙ star, rather than the more probable scenario of a cluster composed of many lower-mass stars. The alternative is that such low-mass clusters can only put, say, 10 per cent or less of their total mass into the most massive star. If the former is possible it would suggest that some O stars might form in isolation (here taken to mean with no other O or B stars in the same cluster, so that the other cluster members are of much lower mass) and this would have important implications for the star-formation process: is star formation a purely stochastic process, or a bottom-up process with low-mass stars formed first and high-mass stars only formed if there is enough material left?

After accounting for runaway O stars contaminating the apparent number of isolated (with no companions at all) O stars, de Wit et al. (2005) suggested that at most 4 ± 2 per cent of O stars form outside a cluster environment. Parker & Goodwin (2007) considered that an isolated O star only meant there was no other OB star in the same cluster. This is the case when a 100 M⊙ cluster is composed of one star that contains most of the mass of the cluster with a few very low-mass companions. They modelled the populations of star clusters using a standard CIMF with a slope of −2 and predicted that 5 per cent of O stars form in clusters that have no other O or B stars and would be observed as isolated, when in fact they are just massive stars that have been able to form in a low-mass cluster.

The argument against the conclusion that O stars can form in low-mass clusters has been presented by Vanbeveren (1982), Weidner & Kroupa (2006) and Weidner, Kroupa & Bonnell (2010), who have suggested that observations indicate that, for a specific cluster mass, there is a maximum possible stellar mass well below the total cluster mass. If this were the case then, from their model, a 100 M⊙ cluster could not form stars with masses above 10 M⊙. Therefore, when a synthetic galaxy is created from the convolution of cluster and stellar IMFs, there would be a dearth of the most massive stars compared with when there are no restrictions on the maximum stellar mass. Pflamm-Altenburg, Weidner & Kroupa (2007) have suggested that there are differences in star-formation rate indicators when stellar populations are modelled by the two different IMF-filling methods. However, the observations used consider the differences that occur at low star-formation rates, and thus uncertainties and low-number fluctuations between observed systems make it difficult to determine whether the maximum stellar mass depends on the cluster mass. The complementary studies by Elmegreen (2006), Parker & Goodwin (2007) and Maschberger & Clarke (2008) show that similar observations indicate there is no evidence for restrictions on the maximum stellar mass in clusters. In this paper we examine recent observations (Lee et al. 2009; Lamb et al. 2010; Calzetti et al. 2010). The novel feature of this work is that we are able to demonstrate how binary stars alter our population synthesis predictions.
Recent observations (Pinsonneault & Stanek 2006; Kobulnicky & Fryer 2007; Kiminki et al. 2009) indicate that the binary fraction in young massive stellar populations is close to one. It is therefore vital to include binary stars, especially those that interact; in our populations approximately two thirds of binaries interact. We first outline our stellar evolution models and the method of our spectral synthesis. We then describe our two ways to determine the distribution of initial masses in synthetic clusters and galaxies. Next we discuss the observational implications of varying the IMF-filling method on the Hα and FUV star-formation rate indicators. Finally we present our conclusions.

Binary population and spectral synthesis

We have developed a novel and unique code to produce synthetic stellar populations that include binary stars (Eldridge, Izzard & Tout 2008; Eldridge & Stanway 2009). While similar codes exist, our Binary Population and Spectral Synthesis (BPASS) code has three important features, each of which sets it apart from other codes and enables it to study stochastic effects on the IMF. First, and most important, is the inclusion of binary evolution when modelling the stellar populations. The general effect of binaries is to cause a population of stars to look bluer at older ages than predicted by single-star models. Secondly, a large number of detailed stellar evolution models are used to create the synthetic populations, rather than an approximate rapid population synthesis method. Thirdly, we use as many theoretical inputs in our synthesis, with as few empirical inputs as possible, to create a completely synthetic model to compare with observations.

BPASS uses approximately 15,000 detailed stellar models calculated with the Cambridge STARS code, as described by Eldridge, Izzard & Tout (2008). These include single-star and binary models with initial masses between 0.5 and 120 M⊙ and between 5 and 120 M⊙, respectively. We take 120 M⊙ to be our most massive star possible because of our limited grid of binary evolution models. Above this mass the mass-loss rates at solar metallicity on the main sequence are high (Vink et al. 2011) and the evolutionary timescales of the stars vary little as the initial mass is increased further. We note that, owing to stars merging, our binary populations include some single stars that have effective initial masses of 200 M⊙ and above. The minimum binary primary mass of 5 M⊙ was selected because our binary models were initially created to study the progenitors of core-collapse supernovae. The main-sequence lifetime of a 5 M⊙ star is 100 Myr, which is the period we use for the duration of the star-burst in our constant star-formation models, so there is no effect from low-mass binaries in these models. Furthermore, observations indicate that the binary fraction decreases at low masses (Duquennoy & Mayor 1991; Leinert et al. 1997; Bouy et al. 2003). However, the binary fraction is more complicated than we assume here and is determined by a star cluster's dynamics, environment and age. It is thought that stars of all masses can form in binary or multiple systems, but these can be broken up by dynamical interactions in young clusters (e.g. Goodwin & Kroupa 2005; Fregeau, Ivanova & Rasio 2009). We note that Han, Podsiadlowski & Lynas-Gray (2007) found that low-mass binary stars can explain the excess UV flux observed in elliptical galaxies. Such systems do not contribute strongly until 1 Gyr after formation, at which time our estimated UV fluxes should increase only slightly.
Creating a new grid of low-mass detailed binary models for inclusion in BPASS is therefore unnecessary for this work to demonstrate the importance of binary stars. Here we use models at solar metallicity with a metallicity mass fraction of Z = 0.02. We include convective overshooting and a mass-loss prescription that combines the mass-loss rates of Vink, de Koter & Lamers (2001), de Jager, Nieuwenhuijzen & van der Hucht (1988) and Nugis & Lamers (2000). The binary evolution accounts for Roche-lobe overflow, common-envelope evolution, mass transfer and neutron-star kicks, which affect the survival of binary stars after a supernova. These models are combined with the stellar atmosphere spectra of Smith, Norris & Crowther (2002), Hamann, Gräfener & Liermann (2006) and Westera et al. (2002) to predict the spectra of the stellar populations.

A significant change we make here, compared with our previous work, is to break from our previous assumption that the SIMF can be described by a simple Salpeter law over the entire mass range of stars. Our method requires us to consider that all stars are born in clusters. The mass of these clusters is described by a CIMF and the mass distribution of stars within each cluster is described by the SIMF. This is achieved by first picking a cluster mass and then filling the cluster with stars from the SIMF. We model multiple clusters together to create synthetic galaxies with different star-formation histories but with the same mean constant star-formation rate over a long period of time.

We use two methods of populating the SIMF for our synthetic clusters. They differ by whether we limit the maximum stellar mass or not. Our first method is to assume that any star can occur in any cluster such that Mmax ≤ Mcl, i.e. the star cannot be more massive than the cluster it inhabits. This we refer to as pure stochastic sampling (PSS) of the SIMF. In this SIMF we assume a Salpeter slope of −2.35 between 0.5 and 120 M⊙ and a slope of −1.3 between 0.1 and 0.5 M⊙. It is similar to the constrained sampling method outlined by Weidner & Kroupa (2006) and used by Villaverde, Cerviño & Luridiana (2010b). Our second case has the maximum mass of a star in a cluster dependent on the total mass of the cluster. We use the relation calculated by Pflamm-Altenburg, Weidner & Kroupa (2007), which is given by

log10(Mmax/M⊙) = 2.56 log10(Mcl/M⊙) × [3.82^9.17 + (log10(Mcl/M⊙))^9.17]^(−1/9.17) − 0.38,

where Mmax is the maximum stellar mass possible in a cluster of mass Mcl. We therefore use Mmax from this equation as the maximum mass in our initial mass function, up to a limit of 120 M⊙, in our synthetic clusters. We refer to this method as the cluster-mass-dependent maximum stellar mass (CMDMSM) method (a code sketch of this cap is given at the end of this subsection). The resulting clusters are similar to those from the sorted-sampling method outlined by Weidner & Kroupa (2006).

We note that our synthetic populations have some limitations. In Figures 1 and 2 there are diagonal and horizontal linear features in the distribution of model populations. These arise at low cluster masses and star-formation rates owing to the limited resolution of the stellar model initial masses and time bins used in our synthesis. This becomes most noticeable when there is only one massive star in the stellar population. One solution would be to interpolate between stellar models but, given that stellar evolution is non-linear and binary evolution is even less predictable, we avoid spurious results from interpolations and select the closest model available. Our binary population models are also not complete, and here we are only demonstrating the importance of including binary stars. For example, we do not include binaries with initial primary masses below 5 M⊙, and as yet we do not consider the emission from X-ray binaries. These would provide another source of ionising flux that would also affect the Hα and UV flux ratio. The effect would be more important at low cluster masses and low star-formation rates, where one X-ray binary could dominate the entire ionising flux of the stellar population (Mirabel et al. 2011).
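A minimal sketch of this cap as a Python function (the naming is our own; the 120 M⊙ ceiling is the one adopted in this paper):

```python
import numpy as np

def m_max(m_cluster, m_ceiling=120.0):
    """Maximum stellar mass (Msun) in a cluster of mass m_cluster (Msun),
    from the Pflamm-Altenburg, Weidner & Kroupa (2007) fit quoted above,
    capped at the 120 Msun model-grid limit used in this paper."""
    x = np.log10(m_cluster)
    log_mmax = 2.56 * x * (3.82**9.17 + x**9.17) ** (-1.0 / 9.17) - 0.38
    return np.minimum(10.0**log_mmax, m_ceiling)

# A 100 Msun cluster is capped near 10 Msun, as stated in the Introduction.
print(m_max(100.0))   # ~9 Msun
print(m_max(1e5))     # hits the 120 Msun ceiling
```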
Creating synthetic clusters

We use the PSS and CMDMSM methods to create synthetic stellar populations in two regimes. In the first we consider individual stellar clusters with all the stars coeval. We create models of stellar clusters with both PSS and CMDMSM and investigate how they affect the Hα line flux per M⊙ in the cluster. Our process for creating a synthetic cluster to compare to the observations of Calzetti et al. (2010) is as follows (a code sketch of the assembly loop is given at the end of this subsection).

(i) We randomly generate a cluster mass between 10 and 10^6 M⊙ from the CIMF, which has a slope of −2 (de Grijs et al. 2003; Lada & Lada 2003).
(ii) We fill the cluster with stars, the masses of which are picked at random from the SIMF with the maximum stellar mass given by PSS or CMDMSM.
(iii) We add stars to the cluster until the total mass is greater than our target cluster mass. We then consider whether the final cluster mass is closer to the target cluster mass with or without the last star added to the cluster. If the mass is closer without the last star, we remove the last star from the cluster. This is similar to the sorted sampling of Weidner & Kroupa (2006) and makes it less likely that a star can be added that is more massive than the target cluster mass, as in the soft sampling of Elmegreen (2006).
(iv) We randomly generate the cluster age between 1 and 8 Myr, to match the observed age range of Calzetti et al. (2010).
(v) We calculate the Hα flux for the resultant stellar population. This is done with theoretical stellar atmospheres and stellar models to predict the resultant total spectrum, as described by Eldridge & Stanway (2009). We calculate the number of ionising photons from wavelengths shortward of 912 Å and convert this to the flux of Hα by assuming 10^11.87 ionising photons give rise to 1 erg s^−1 of Hα flux.

This process is repeated for many different cluster masses so that we can build up a picture of how the Hα flux varies with cluster mass for clusters aged between 1 and 8 Myr. Calzetti et al. (2010) performed an observational study of such clusters and provide the observed mean Hα flux per M⊙ for two different masses of clusters. We compare our models to these observed populations in Section 3.1. In the binary population case we include a companion for every star that has an initial mass greater than 5 M⊙. We assign binary parameters at random from a flat initial mass-ratio distribution and a flat distribution of the logarithm of the initial separation, using the model closest to these parameters from our grid of models calculated by Eldridge, Izzard & Tout (2008). We include the mass of the companion in the total cluster mass.
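The sketch below (our own simplification, not the BPASS code) implements steps (i)-(iii): a cluster mass is drawn from the slope −2 CIMF and filled star by star, keeping or dropping the final star depending on which total lands closer to the target. For brevity a single Salpeter slope is used instead of the two-segment SIMF:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_powerlaw(alpha, lo, hi):
    """One inverse-CDF draw from dN/dM ∝ M^alpha on [lo, hi] (alpha != -1)."""
    a1 = alpha + 1.0
    u = rng.random()
    return (lo**a1 + u * (hi**a1 - lo**a1)) ** (1.0 / a1)

def build_cluster(m_target, m_star_max):
    """Fill a cluster with stars until the target mass is reached (step iii)."""
    stars = []
    while sum(stars) < m_target:
        stars.append(draw_powerlaw(-2.35, 0.1, m_star_max))
    # Keep the last star only if that leaves the total closer to the target.
    if abs(sum(stars) - m_target) > abs(sum(stars[:-1]) - m_target):
        stars.pop()
    return stars

m_cl = draw_powerlaw(-2.0, 10.0, 1e6)               # step (i): CIMF draw
pss_stars = build_cluster(m_cl, min(120.0, m_cl))   # PSS: Mmax <= Mcl
# cmdmsm_stars = build_cluster(m_cl, m_max(m_cl))   # CMDMSM: cap from the fit above
```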
Synthetic galaxies

Our second set of population models is for synthetic galaxies with an assumed constant star-formation rate. Rather than filling up the population of a galaxy according to a galaxy-wide IMF, we create the galaxy from a set of clusters that each have their own individual age and stellar population. To create a galaxy we first pick a star-formation rate between 10^−5 and 10 M⊙ yr^−1. We then create the synthetic galaxy as follows (the flux-to-SFR conversions in steps (iv) and (v) are sketched in code at the end of this subsection).

(i) We pick a cluster mass at random from a CIMF with a slope of −2 between 50 and 10^6 M⊙.
(ii) We fill the cluster with a stellar population as described in Section 2.2, aged between 0 and 100 Myr, chosen at random from a uniform distribution.
(iii) We continue this process until the total mass created in the galaxy over 100 Myr gives the required star-formation rate.
(iv) With this stellar population we calculate the number of ionising photons from wavelengths shortward of 912 Å and convert this to the flux of Hα by assuming 10^11.87 ionising photons give rise to 1 erg s^−1 of Hα flux. We also calculate the UV flux density at a wavelength of 1500 Å.
(v) From the Hα and UV fluxes we calculate an apparent star-formation rate from each and find their ratio. We assume a star-formation rate of 1 M⊙ yr^−1 produces an Hα flux of log10(F(Hα)/erg s^−1) = 41.1 and a UV flux density of log10(F(1500 Å)/erg s^−1 Hz^−1) = 27.85, as in Kennicutt (1998).

We perform these simulations for single and binary populations and for the PSS and CMDMSM methods of filling the IMF so that the differences can be compared. Here we use a different range of cluster masses, based on the suggestion of Lada & Lada (2003) that there is a turn-over in the mass function of molecular clouds at around 50 M⊙. We also only consider a period of 100 Myr because this is of the order of a typical star-formation burst duration (McQuinn et al. 2009); increasing the age beyond 100 Myr has little effect on our results because this is the typical lifetime of the stars that contribute to the FUV.

We note that, when used to create a synthetic galaxy, our CMDMSM method is based on the IGIMF method of Weidner & Kroupa (2006). However, we do not limit the maximum cluster mass in a synthetic galaxy by the total star-formation rate as they do in their IGIMF method; recent investigations of the CIMF suggest that there is no such dependence (Gieles 2009; Larsen 2009). In this work we wish to concentrate on whether the maximum stellar mass depends on the cluster mass. We have calculated IGIMF models to see the effect of including such a limit and find our models are in agreement with those of Pflamm-Altenburg, Weidner & Kroupa (2007) and Pflamm-Altenburg, Weidner & Kroupa (2009). Also, like Fumagalli, da Silva & Krumholz (2011), we find that IGIMF synthetic galaxies cannot reproduce the observed spread of star-formation rate ratios. This is because restricting the maximum cluster mass decreases the number of massive stars even more dramatically than in our CMDMSM models. In Section 3.2 we compare the synthetic populations to the observed galaxies of Lee et al. (2009).

The novel feature of our approach is in not forcing clusters to form at the same time but allowing each to have a different age. This leads to much more scatter in the predicted observables of our synthetic galaxies, as also found by Fumagalli, da Silva & Krumholz (2011) and Weisz et al. (2012). We have also varied the age range used for the synthetic galaxies and find that increasing the age has little effect on our results, while using a younger upper age limit increases the amount of Hα flux relative to the UV flux. This is because the stars that cause Hα emission are more massive and typically have lifetimes of 10 Myr or less, while the stars that contribute to the FUV continuum span a much greater lifetime range of up to 100 Myr.
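A minimal sketch of the conversions in steps (iv) and (v) (the constant and function names are our own; the zero-points are those quoted above):

```python
# Zero-points for SFR = 1 Msun/yr, as quoted in the text (Kennicutt 1998).
LOG_HA_PER_SFR = 41.1    # log10(F(Halpha)/erg s^-1)
LOG_FUV_PER_SFR = 27.85  # log10(F(1500A)/erg s^-1 Hz^-1)

def halpha_flux(q_ion):
    """Hα luminosity (erg/s) from an ionising photon rate Q (photons/s),
    using 10^11.87 ionising photons per erg/s of Hα."""
    return q_ion / 10**11.87

def sfr_ratio(q_ion, f_fuv):
    """Ratio of the Hα-based to FUV-based SFR for a given Q and F(1500A)."""
    sfr_ha = halpha_flux(q_ion) / 10**LOG_HA_PER_SFR
    sfr_fuv = f_fuv / 10**LOG_FUV_PER_SFR
    return sfr_ha / sfr_fuv

# A population emitting Q = 10^52.97 photons/s and 10^27.85 erg/s/Hz at
# 1500 A yields unit SFR from both indicators, hence a ratio of 1.
print(sfr_ratio(10**52.97, 10**27.85))
```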
Calzetti et al. (2010) suggested a novel test for determining how the IMF defines the population of stellar clusters. They studied the production of ionising photons by young clusters in NGC 5194. If an IMF is populated purely stochastically then one 10^5 M⊙ cluster should have the same stellar content as a hundred 1000 M⊙ clusters, and both samples would therefore have the same Hα flux per M⊙ of stars. However, if the IMF of a 1000 M⊙ cluster is devoid of massive stars owing to a link between Mmax and Mcl, then the hundred 1000 M⊙ clusters would have less Hα flux per M⊙ than one 10^5 M⊙ cluster.

The Hα from individual clusters

For our population of synthetic clusters we have calculated the mean Hα flux per M⊙, as for the observed clusters of Calzetti et al. (2010). Figure 1 shows our synthetic clusters as points, along with the mean Hα flux. We see that at cluster masses below 10^4 M⊙ the results diverge. With PSS it is possible to have one massive star making up most of the mass of a cluster, while with CMDMSM this is not possible. Therefore, for PSS and CMDMSM the mean Hα flux drops from its value of around 10^34.1 erg s^−1 M⊙^−1 at cluster masses of around 10^2 or 10^4 M⊙, respectively. By measuring the Hα flux for clusters between these key masses we should therefore be able to determine how nature fills the IMF. Calzetti et al. (2010) provide two observed values for each of their two mass bins. The first and higher value does not include clusters that are undetected in Hα; the second includes these non-detections. The observations at a cluster mass of 10^4.5 M⊙ agree with the predicted mean Hα flux. However, the observed points at 10^3 M⊙ are less conclusive. The point without the non-detections lies on the PSS line, while the point including the non-detections lies in between the PSS and CMDMSM lines. Thus PSS gives a better fit, but a refined CMDMSM scheme that allows a higher maximum mass for a given cluster mass might also match the Calzetti et al. (2010) data.

An alternative method to discriminate between PSS and CMDMSM is to search for individual massive stars that are in low-mass clusters. One example is the Wolf-Rayet star γ-Velorum, the nearest Wolf-Rayet star to the Sun in the Galaxy. It is a binary system containing stars that were initially 35 and 30 M⊙, in a cluster with a total mass of between 250 and 350 M⊙ (De Marco et al. 2000; Jefferies et al. 2009; Eldridge 2009). We have indicated the location of this cluster in Figure 1. We see that the PSS clusters overlap with the parameters of this cluster. The CMDMSM models for a single-star population do not reach this region; the binary CMDMSM models do reach the parameter space of γ-Velorum, but the small number of such models indicates that such clusters would be rare. This suggests that PSS is more likely to be in action in nature, although a more relaxed form of CMDMSM would also fit the observed data. Other, more extreme examples of low-mass clusters with a single massive star were observed by Lamb et al. (2010). They observed apparently isolated O stars and found low-mass clusters associated with these stars. Using the stellar and cluster masses derived by Lamb et al. (2010) and estimating the ionising flux for the massive star, we have plotted their clusters in Figure 1. They are only reproduced by our PSS method.
This agrees with previous studies by Testi et al. (1997), Testi, Palla & Natta (1998, 1999) and Parker & Goodwin (2007), who use similar arguments. Maschberger & Clarke (2008) also made a detailed study of all available information and also favour PSS. However, Weidner, Kroupa & Bonnell (2010) performed a similar analysis and found that for low-mass clusters, below 100 M⊙, PSS is favoured, but that more massive clusters appear to follow a CMDMSM relation. The observations of Calzetti et al. (2010) do not currently favour either PSS or CMDMSM. Here we can only agree that PSS occurs in low-mass clusters, up to 100 M⊙. For more massive clusters, of around 1000 M⊙, it is difficult to differentiate between PSS and CMDMSM, and for cluster masses of more than about 10^4 M⊙ the differences are less important. Finally, we note that our results are in line with those of Villaverde, Cerviño & Luridiana (2010b). They suggest that for cluster masses below 10^4 M⊙ there is a highly asymmetric scatter of the ionising flux around the mean integrated values from standard synthesis models, because the single most massive star dominates the ionising flux of the cluster. This manifests itself in our results as the increased spread in Hα flux per M⊙ at low cluster masses. We note that they too suggest that PSS is favoured over CMDMSM.

The Hα and FUV in 11HUGS galaxies

Lee et al. (2009), Meurer et al. (2009) and Boselli et al. (2009) have attempted to gain insight into the IMF by looking at emission from entire galaxies, bringing together Hα observations with far-UV continuum observations. Here we concentrate on the results of Lee et al. (2009) because their set of galaxies is a volume-limited sample of 315 galaxies within 11 Mpc. The emission in these two spectral star-formation rate indicators is determined by the number of stars with masses greater than 20 and 3 M⊙, respectively. Therefore, measuring the ratio of the two fluxes, or of the relative star-formation rates measured for the galaxies, gives an indication of the number of stars in the different mass regimes. Lee et al. (2009) found that as the Hα flux decreases the Hα/UV ratio decreases, so there is more UV flux than expected. Pflamm-Altenburg, Weidner & Kroupa (2009) have suggested that this turn-down is evidence for the IGIMF determining the galaxy-wide IMF of these galaxies. Their study was based on single-star models alone. Here we repeat their analysis with binary as well as single-star models, and with our stochastic approach to the star-formation history, with stellar clusters forming independently of one another.

We plot our synthetic galaxies in Figure 2. We see that, at star-formation rates above 10^−2 M⊙ yr^−1, the spread of models is similar, but CMDMSM gives a slightly greater scatter towards lower values of the Hα/UV ratio. This can be seen more easily in Figure 3, where we bin the synthetic and observed galaxies with star-formation rates above 10^−2 M⊙ yr^−1 by their Hα/UV ratio. CMDMSM has a greater range of ratios because of the relative lack of massive stars in the total stellar population. We see that CMDMSM reproduces the lowest ratios at the highest star-formation rates, while PSS produces much higher ratios. However, in this model we have assumed no leakage of any ionising photons. Leakage would reduce the contribution from the Hα flux by up to around 50 per cent (see Zurita et al. 2002, for example), leading to lower ratio values at high star-formation rates for both PSS and CMDMSM.
To account for the leakage or loss of ionising photons from a galaxy, or their absorption by dust grains, we have made a simple adjustment to our models. In Figure 3 we have modified our synthetic ratio distributions by assuming that galaxies lose between 0 and 50 per cent of their ionising photons: we take our synthetic populations and smear them over this range of possible leakage fractions, so that the mean leakage is 25 per cent. Even for this modest loss of ionising photons, the ratio distribution of the PSS model shifts to match the range of observed galaxies, while the agreement of the CMDMSM method becomes slightly worse. To test the significance of these differences we have used a χ² test to compare the observed distributions to the synthetic populations. We find that without leakage only CMDMSM with single stars is a probable match; with leakage, only the CMDMSM single-star synthetic population is ruled out. Fumagalli, da Silva & Krumholz (2011) used a leakage fraction of 5 per cent and stated that their results were not dependent on the amount of leakage. This is because they compared the amount of Hα to mean values of the Hα flux, which are less sensitive to leakage than the Hα/UV flux ratio (see their figure 2). They found that their results were unaffected for leakage fractions up to 40 per cent, which spans the mean leakage of 25 per cent that we apply to our models.

For the single-star population, the greatest difference between PSS and CMDMSM is seen in the different paths of the mean ratio values versus the star-formation rate determined from the UV flux. CMDMSM decreases much sooner than PSS, at around 0.1 M⊙ yr^−1. However, there is a large possible range around these mean values in both cases, and the difference is only approximately 1σ. When we consider the binary population we see that the difference between the two IMF-filling methods is substantially reduced. This is because, while the IMF initially leads to fewer massive stars in the CMDMSM case, binary interactions such as merging and mass transfer increase the number of massive stars relative to a single-star population. If we are to distinguish between PSS and CMDMSM by means of the downturn in this ratio, we must repeat the analysis that led to Figure 3 for lower star-formation rates. We show the result for star-formation rates between 10^−2 and 10^−4 M⊙ yr^−1 in Figure 4. By eye, PSS provides a better fit to the observed population than CMDMSM, because CMDMSM has an extended tail of galaxies towards lower ratios that PSS does not have. Including binary stars in the synthetic galaxies reduces this tail further in both CMDMSM and PSS; the tail might also be absent from the observed data owing to selection biases. A χ² test reveals that both PSS populations are a probable match to the observed distribution. The single-star CMDMSM distribution does not match the observed distribution; however, our binary CMDMSM population produces an equally likely fit to the observed data.

[Figure 2 caption: The ratio of the SFR measured by the Hα and UV fluxes versus the SFR from the UV flux. The asterisks are the observations of Lee et al. (2009), while the shaded regions show the density of our individual realisations of synthetic galaxies. The thick solid lines indicate the mean ratios for the synthetic galaxies and their 1σ limits; the dashed lines show the mean ratios for the other IMF-filling method with the same stellar population. The upper panels are for PSS and the lower panels for CMDMSM, while the left panels are for a single-star population and the right panels for binary populations. Here we assume that a star-formation rate of 1 M⊙ yr^−1 is equivalent to log10(F(Hα)/erg s^−1) = 41.1 and a UV flux density of log10(F(1500 Å)/erg s^−1 Hz^−1) = 27.85. Linear features are due to limited resolution in the initial mass, separation and mass-ratio parameter space of our binary models.]
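The leakage adjustment described above amounts to multiplying each model's Hα flux by one minus a uniformly drawn leakage fraction; a minimal sketch (the function and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(7)

def apply_leakage(f_halpha, max_leak=0.5):
    """Return Hα fluxes after a random ionising-photon leakage fraction,
    drawn uniformly between 0 and max_leak (so the mean loss is max_leak/2);
    the FUV flux is left untouched."""
    leak = rng.uniform(0.0, max_leak, size=np.shape(f_halpha))
    return np.asarray(f_halpha) * (1.0 - leak)

# e.g. smear a set of model Hα fluxes before recomputing the Hα/FUV ratio
f_ha = np.full(1000, 10**41.1)   # hypothetical model fluxes, erg/s
f_ha_leaky = apply_leakage(f_ha)  # mean reduction of 25 per cent
```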
Our results also show that some ionising photon leakage is required if our PSS models are to match observations. Lee et al. (2009) noted that their results indicate a downturn in the Hα to UV ratio at low star-formation rates. This could be explained by the IGIMF model put forward by Pflamm-Altenburg, Weidner & Kroupa (2009). At first, comparison of the synthetic and observed galaxies in Figures 2 and 3 tempts us to agree with this deduction, mainly because the spread of the observed galaxies at higher star-formation rates is better reproduced by the CMDMSM single-star models. The most significant difference between PSS and CMDMSM is in the region where the star-formation rates drop below 10^−2 M⊙ yr^−1. All our models are able to reproduce the observed galaxies with the lowest ratios at low star-formation rates, so it is not possible to differentiate between PSS and CMDMSM from these observations. Furthermore, the inclusion of binaries in stellar population models means that any difference between PSS and CMDMSM is only apparent at star-formation rates below those in the observed sample of Lee et al. (2009). Therefore, from the observed distribution of the Hα to FUV ratio, it is not possible to discriminate between PSS and CMDMSM owing to the uncertainties in the importance of binary evolution and ionising photon leakage.

Our conclusions are broadly in line with those of Fumagalli, da Silva & Krumholz (2011). However, they compared PSS models to IGIMF models. The IGIMF models restrict the number of massive stars in the synthetic galaxies further because they impose a maximum cluster mass that depends on the total star-formation rate. We have only imposed a cluster-mass-dependent maximum stellar mass and have shown that CMDMSM alone cannot be ruled out.

[Figure 3 caption: The distribution of the Hα to UV ratio for observed and synthetic galaxies with star-formation rates between 10^−2 and 1 M⊙ yr^−1. The red line represents the observed sample of Lee et al. (2009), while the solid line represents the relevant synthetic galaxies from Figure 2. The dashed line represents the synthetic observations smeared by a flat leakage of ionising photons distributed between leakage fractions of 0 and 50 per cent. The left panels are for PSS and the right panels for CMDMSM; the first and third panels are for a single-star population and the second and fourth panels for binary populations.]

An important conclusion to draw from our models (and those of Fumagalli, da Silva & Krumholz 2011; Weisz et al. 2012) is that the scatter and variation of the Hα/UV ratio are not due to the IMF-filling method but depend more on the star-formation history of each individual galaxy. A general trend we find is that systems with less star formation in the last 10 Myr have lower ratios, while those with most of their star formation in the last 10 Myr have higher ratios, even at low mean star-formation rates. This is because the stars responsible for Hα emission typically have ages of 10 Myr or less. This indicates that any simulation that predicts the properties of a sample of galaxies must take into account the stochastic nature of star formation and recognise not only that each cluster has its own stellar content but also that each cluster has its own age, independent of the other clusters. If there are enough clusters in a galaxy this leads to an average stellar population. However, if there are only a few clusters, the appearance of the galaxy-wide stellar population can be very different from what might be expected for a simple stellar population with a smooth star-formation history.
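The χ² comparisons above, between the binned observed and synthetic Hα/UV ratio distributions, can be sketched as follows; the samples, binning and sample sizes here are hypothetical stand-ins for the real data:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in samples of log10(Halpha/FUV SFR ratio).
obs_ratio = np.random.default_rng(0).normal(0.0, 0.3, 200)
model_ratio = np.random.default_rng(1).normal(-0.05, 0.3, 5000)

bins = np.linspace(-1.5, 1.5, 16)
obs_hist, _ = np.histogram(obs_ratio, bins=bins)
model_hist, _ = np.histogram(model_ratio, bins=bins)

# Drop bins where the model histogram is empty (a simplification), then
# scale the model histogram to the observed sample size before testing.
mask = model_hist > 0
expected = model_hist[mask] * obs_hist[mask].sum() / model_hist[mask].sum()
chi2, p = stats.chisquare(obs_hist[mask], f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.3f}")
```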
This indicates that any simulation that predicts the properties of a sample of galaxies must take into account the stochastic nature of star formation and recognise not only that each cluster has its own stellar content but also that each cluster has its own age, independent of the other clusters. If there are enough clusters in a galaxy this leads to an average stellar population. However, if there are only a few clusters the appearance of the galaxy-wide stellar population can be very different from what might be expected for a simple stellar population with a smooth star-formation history.
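The role of cluster-to-cluster age scatter can be made concrete with a toy Monte Carlo sketch. The exponential cluster-mass distribution, the 10 Myr Hα cutoff, and all numerical values below are illustrative assumptions rather than the actual models used here:

```python
import numpy as np

rng = np.random.default_rng(1)

def young_mass_fraction(sfr, t_span=100e6, m_mean=1e3):
    """Toy galaxy: clusters form at random times over t_span years with an
    (assumed) exponential mass function of mean m_mean solar masses; only
    clusters younger than ~10 Myr can power Halpha emission."""
    n = rng.poisson(sfr * t_span / m_mean)   # expected cluster count at this SFR
    if n == 0:
        return 0.0
    masses = rng.exponential(m_mean, size=n)
    ages = rng.uniform(0.0, t_span, size=n)  # each cluster has its own age
    return masses[ages < 10e6].sum() / masses.sum()

# With many clusters (high SFR) the Halpha-powering mass fraction converges
# to its mean of ~0.1; with only a few clusters it scatters strongly.
for sfr in (1.0, 1e-2, 1e-4):
    trials = [young_mass_fraction(sfr) for _ in range(500)]
    print(f"SFR = {sfr:.0e} Msun/yr -> fraction = "
          f"{np.mean(trials):.3f} +/- {np.std(trials):.3f}")
```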
The importance of binaries

From our results it is possible to qualitatively demonstrate the need to use binary star models. For individual clusters binaries seem to have little effect. This is because of the short period of 8 Myr we have used to match the observed clusters in this case. In our synthetic galaxies, with 100 Myr of star formation, we see that the scatter of the synthetic galaxies is reduced slightly if binaries are included and the mean SFR ratio starts to decrease at lower SFRs. This is more clearly shown in Figure 5, in which we compare populations with different IMFs and single-star to binary-star ratios. We see that for a single-star population the ratio begins to drop between 10⁻² and 10⁻³ M⊙ yr⁻¹, while for binary populations this drop begins between 10⁻³ and 10⁻⁴ M⊙ yr⁻¹. The binary effect makes it difficult to distinguish between the PSS and CMDMSM IMF filling methods at any star-formation rate.

Binary evolution affects the observed SFRs because, through mass transfer between and merging of stars, it increases the number of massive stars at the expense of lower-mass stars. We demonstrate this in Figure 5. We see here that binary populations typically produce Hα/UV flux ratios similar to those of a single-star population with a shallower IMF slope. That is, until low star-formation rates, at which point, for a single cluster with a significant population of binary stars, we can also expect the apparent IMF to be flatter. Furthermore, the most massive star in a cluster might not have been the most massive star when it formed. Therefore interacting binary stars have a strong effect and must be included when attempts are made to determine the IMF from observations of stellar systems.

CONCLUSIONS

We have investigated two uncertainties in population synthesis: how the IMF is filled and the effects of interacting binary star evolution. The Hα flux per M⊙ observed in samples of clusters is consistent with PSS of the SIMF for clusters around 100 M⊙ because of individual low-mass clusters with one or two massive OB and WR stars, such as the Velorum cluster or those presented by Lamb et al. (2010); however, this evidence is not significant enough to rule out CMDMSM. At masses around 10³ M⊙ and above, it also becomes more difficult to differentiate between PSS and CMDMSM because of the blurring effect of binary stars, in addition to the lack of conclusive data in this mass range.

We have also considered the ratio of the Hα to UV fluxes in galaxies. Observationally there is a significant scatter that can be explained by the stochastic nature of the star-formation history. We find it difficult to differentiate between PSS and CMDMSM. This is because we find some evidence that the leakage or loss of ionising photons must be considered. In addition, including binary star populations makes it difficult to distinguish between the methods for filling the IMF. Only single-star CMDMSM populations can be ruled out with the observations of galaxies with SFRs below 10⁻² M⊙ yr⁻¹.

The ratio of Hα to UV flux for stellar populations including binary stars varies less than that for populations of single stars. Binaries can merge, and mass transfer can produce more massive stars than were present in the initial population. Therefore the star-formation rates at which it will be possible to detect differences between PSS and CMDMSM are much lower than currently observed. Furthermore, because the leakage or loss of ionising photons from young stellar populations must be considered, it becomes even more difficult to discern the IMF-filling method from observations of galaxies with low Hα to UV ratios. We suggest that it may be more fruitful to find galaxies with low overall star-formation rates but with high Hα to UV ratios, that is, galaxies that are rich in clusters similar to those found by Lamb et al. (2010).

ACKNOWLEDGEMENTS

JJE would like to thank the anonymous referee for his very constructive comments, which have led to a much improved paper. JJE is supported by the Institute of Astronomy's STFC Theory Rolling grant. JJE would also like to thank Joe Walmswell, Monica Relano, Ben Johnson, Dan Weisz, Janice Lee, Daniella Calzetti, Sally Oey, Mark Gieles, Miguel Cerviño, Michele Fumagalli, Robert da Silva and Christopher Tout for very helpful discussions and comments on this paper.
Dilatometric Analysis and Kinetics Research of Martensitic Transformation under a Temperature Gradient and Stress

Based on material constitutive models and the classic Koistinen-Marburger (KM) kinetics model, a new dilatometric analysis model was developed to extract the kinetics curve of martensitic transformation under a temperature gradient and stress from the measured dilatometric data and to determine the transformation parameters. The proposed dilatometric analysis model applies generally to athermal martensitic transformation, relying only on the average atom volumes of martensite and austenite. Furthermore, through theoretical calculations, the proposed model also provides a more accurate method for obtaining the martensite start temperature, which differs from the traditional method. According to the dilatometric analysis results for the martensitic transformation of a type of high-strength low-alloy steel, and the thermodynamic basis of martensitic transformation, a refined kinetics model was developed that successfully predicted the martensitic transformation kinetics curves under different stresses, taking into account the physical significance of the transformation parameter α and the driving force of stress for martensitic transformation.

Introduction

The expansion of metal is essentially a continuous or discontinuous change in atomic volume caused by temperature change or phase transformation. This physical nature makes dilatometric analysis a powerful technique for studying the phase transformation behaviors in ferrous alloys [1][2][3]. The dilatometric data measured by a sensitive high-speed dilatometer can provide detailed information on the thermal expansion characteristics and the change in average atomic volume during transformation [4]. Using specific analysis models, the product phase fraction can be extracted as a function of temperature or time from the dilatometric curve. The classic analysis model proposed to calculate the phase fraction from the dilatometric curve is the lever rule [5]. As shown in Figure 1, the linear expansion behaviors of the dilatometric curve are extrapolated into the temperature range where phase transformation occurs. Assuming that the fraction of the product phase is proportional to the dilatation strain, at a given temperature the fraction of the product phase can be calculated using Equation (1), according to the relative position of the dilatometric curve between the two baselines extrapolated from the linear segments:

f(T) = [ΔL(T) − ΔL_p(T)] / [ΔL_f(T) − ΔL_p(T)]   (1)

where ΔL(T) is the measured dilatation and ΔL_p(T) and ΔL_f(T) are the baselines extrapolated from the linear segments of the parent phase and the fully transformed product, respectively. It should be realized that there are three implicit premises for the establishment of the lever rule model [1,4,6,7]:

(1) The transformation is essentially complete when the maximum strain of the dilatometric curve is reached, usually at room temperature.
(2) The lever rule can only be applied to a single-phase transformation, or to multiple phase transformations if they can be considered to be in sequence, with no overlaps.
(3) The lever rule is only valid for a transformation without repartition of alloy elements.

These premises limit the accuracy and availability of the lever rule in most materials, specifically in continuously-cooled steels after austenitizing.
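As a concrete illustration of Equation (1), the following sketch (Python; the function and variable names are ours, not from the original work) extracts the product-phase fraction from a measured dilatometric curve by fitting and extrapolating the two linear baselines:

```python
import numpy as np

def lever_rule_fraction(temp, strain, parent_window, product_window):
    """Lever rule, Equation (1): fit linear baselines to the purely-parent and
    fully-transformed segments of the dilatometric curve, extrapolate them
    through the transformation range, and take the relative position of the
    measured strain between the two baselines as the product fraction."""
    def extrapolated_baseline(t_lo, t_hi):
        sel = (temp >= t_lo) & (temp <= t_hi)
        slope, intercept = np.polyfit(temp[sel], strain[sel], 1)
        return slope * temp + intercept

    parent = extrapolated_baseline(*parent_window)    # e.g. austenite segment
    product = extrapolated_baseline(*product_window)  # e.g. fully transformed
    frac = (strain - parent) / (product - parent)
    return np.clip(frac, 0.0, 1.0)

# Usage sketch: T in degC and measured strain from a cooling curve; the
# windows bracket the linear segments above and below the transformation.
# frac = lever_rule_fraction(T, eps, parent_window=(500.0, 700.0),
#                            product_window=(20.0, 150.0))
```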
To overcome these shortcomings, dilatometric analysis models based on the average atom volume were developed to extract the transformation kinetics curve. Takahashi and Bhadeshia first examined the proportional relationship between the dimensional change and the fraction of the product phase and provided a quantitative method related to lattice parameters [8]. Then Onink et al. conducted pioneering research in the quantification of simultaneous transformations [9,10]. The lattice parameters of austenite, cementite, and ferrite at elevated temperatures were measured by neutron diffraction and formulated as a function of carbon content. A numerical model was proposed to calculate the phase transformation kinetics curve of hyper-eutectoid Fe-C steel during an isothermal transformation using the formulated lattice parameters. In subsequent studies, most researchers tried to expand the Onink model to a wider range of applications, while some researchers took a different approach, using the density of the constituting phases as the basis of their models [6,11,12]. Li et al. [13,14] suggested a dilatometric analysis model for the isothermal austenite decomposition in both hyper-eutectoid and hypo-eutectoid Fe-C steels. Some researchers took the effect of alloying elements on lattice parameters into account [15,16]. Garcia et al. [17] and Kop et al. [1] improved the model to analyze the transformations in continuously heating or cooling steels. In Kop's study, the non-linear relationship between temperature and the atom volume of austenite due to the repartition of carbon was considered, which is normally neglected in the standard analysis of dilatometric data. The easily-ignored shortcoming of the average atom volume models, namely that they did not consider the effect of the non-isotropic strain during transformation [4], was studied by Suh and Oh. In their study, the non-isotropic strain was attributed to transformation plasticity, expressed as being proportional to the fraction of the product phase.
In reference [18], they further distinguished the contribution of individual transformations to the evolution of the non-isotropic dilatation and proposed a pair of linear relationships with different slopes. The previous models were aimed at transformations without stress (mostly ferrite and pearlite transformations in steels, rather than martensitic transformation). However, martensitic transformation is essentially a stress-assisted transformation, and stress can directly affect the kinetics through stress-induced transformation.
In addition, the mechanical behavior of the specimen is affected by stress during transformation, leading to transformation plasticity strain. Therefore, it is essential to develop an analytical model that takes the effect of stress into account. Another easily overlooked fact is that the surface of the specimen [19], which is exposed to convective cooling, radiative cooling, and even stronger conductive cooling by quenching media, can often be cooler than the core zone of the specimen. The temperature gradient can change the dilatometric curve significantly, through the pre-transformation of the cooler surface. In the present paper, a new dilatometric analysis model is proposed to deal with the martensitic transformation under the combined action of a temperature gradient and stress. By comparing the kinetics curves under different conditions, an improved kinetics model was developed that considers the physical significance of the parameter α and the effect of the mechanical driving energy from stress.

The Temperature Field and the Martensite-Start Temperature in the Specimen

As shown in Figure 2, the transforming zone of the specimen for the Gleeble thermal-mechanical simulator in the present paper can be divided into two zones. Due to its position close to the thermocouples, the central/middle zone of the specimen can be regarded as an isothermal zone, since its temperature can be precisely controlled by the simulator. There is a temperature gradient in the surface/edge zone, due to stronger heat transfer. Based on Fourier's law and the energy conservation law, the one-dimensional transient nonlinear differential equation along the transverse direction of the specimen can be expressed as [20]:

ρc (∂T/∂t) = ∂/∂x (λ ∂T/∂x) + q_v   (2)

where t is time, ρ is density, c is the specific heat capacity, λ is the heat transfer coefficient, and q_v is the internal heat source, which can be expressed as the sum of the transformation latent heat q_1 and the heat from the electric current q_e:

q_v = q_1 + q_e = ρΔH (∂f/∂t) + q_e   (3)

where ΔH is the enthalpy difference between martensite and austenite and f is the martensite fraction. According to Equation (2), since the central/middle zone is an isothermal zone with no spatial gradient, the heat from the electric current can be expressed as:

q_e = ρc (∂T/∂t) − q_1   (4)

When the temperature gradient in the surface/edge zone is small, Equation (2) can be approximated as:

ρc (∂T_x/∂t) = λ (∂²T_x/∂x²) + q_v   (5)

where T_x is the temperature at the point with the relative position x. The boundary conditions in the dilatometric experiment can be expressed as:

∂T/∂x = 0 at the centre,  −λ (∂T/∂x) = h(T − T_s) at the surface   (6)

where T_s is the ambient temperature, M_s is the martensite-start temperature (the temperature of the controlled core zone at the onset of transformation), and h is the heat transfer coefficient of the surface, which can be approximated as a constant in a small temperature range. Considering the symmetry, half of the specimen is taken as the research object. According to Equations (5) and (6), the integral calculation gives:

T_x = T − ΔT_0 x²   (7)

where T is the core/middle-zone temperature and ΔT_0 is the maximal difference of temperature between the central/middle zone and the surface/edge zone. With a small temperature gradient, and because of its small impact on the kinetics curve, ΔT_0 can be approximated as a constant during transformation and calculated by:

ΔT_0 = hL(M_s − T_s) / (4λ_0)   (8)

where L is the width of the specimen and λ_0 is the heat transfer coefficient of austenite at the reference temperature.
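A minimal explicit finite-difference sketch of Equations (2), (5), and (6) on the half-width of the specimen is given below. The material constants, surface heat transfer coefficient, and simulation times are placeholder values, and the internal heat source is set to zero for simplicity:

```python
import numpy as np

def half_width_profile(width=1.8e-3, nx=31, dt=2e-5, t_end=0.2,
                       rho=7800.0, c=500.0, lam=30.0, h=2000.0,
                       t_init=400.0, t_ambient=25.0, q_v=0.0):
    """Explicit FTCS solution of rho*c*dT/dt = lam*d2T/dx2 + q_v on [0, L/2],
    with a symmetry (zero-flux) condition at the specimen centre and a
    convective condition -lam*dT/dx = h*(T - t_ambient) at the surface."""
    dx = (width / 2.0) / (nx - 1)
    assert lam * dt / (rho * c * dx ** 2) < 0.5, "explicit scheme unstable"
    T = np.full(nx, t_init)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + dt / (rho * c) * (
            lam * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2]) / dx ** 2 + q_v)
        T[0] = T[1]  # symmetry at the centre: zero temperature gradient
        T[-1] = (lam * Tn[-2] + h * dx * t_ambient) / (lam + h * dx)  # convection
    return T

T = half_width_profile()
print(f"centre-to-surface difference dT0 ~ {T[0] - T[-1]:.2f} degC")
```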
According to Equation (7), when T_x = M_s, the martensitic transformation starts at the point with the relative position x, and the martensite-start temperature M_sx measured by the thermocouples in the core/middle zone can be expressed by:

M_sx = M_s + ΔT_0 x²   (9)

where M_s is the martensitic transformation start temperature without stress. Patel and Cohen [21] considered that the work done by stress contributes to the driving force of transformation and gave an expression for the transformation start temperature under tensile stress:

M_s^σ = M_s + U_max / (dΔG^(γ→α)/dT)   (10)

where ΔG^(γ→α) is the difference in Gibbs free energy between martensite and austenite, and U_max is the maximum mechanical driving energy produced by the tensile stress σ_1 applied to the specimen. Although the temperature gradient and the non-simultaneous martensitic transformation lead to internal stress in the specimen, the strain from transformation and transformation plasticity can rapidly relax the internal stress and produce a uniform stress field in the whole specimen. Therefore, with a small temperature gradient, the martensite induced by internal stress can be ignored. Then, according to Equations (9) and (10), the martensite start temperature M_sx and the martensite start temperature under external stress are as shown in Figure 3.

Extracting the Model of the Martensitic Kinetics Curve under a Temperature Gradient and Stress

During martensitic transformation under stress, the measured strain change Δε can be written as the sum of individual components, as follows [22]:

Δε = Δε_e + Δε_p + Δε_T + Δε_tr + Δε_tp   (11)

where Δε_e, Δε_p, Δε_T, Δε_tr, and Δε_tp are the strain changes induced by elastic deformation, plastic deformation, temperature change, transformation, and transformation plasticity, respectively. For the test on a Gleeble thermal-mechanical simulator, the measured strain is longitudinal to the load application/current flow axis. Assuming that the stress is less than the yield strength during the martensitic transformation, the measured strain can be written in terms of the components resolved along this axis (Equation (12)) [23], where Δε_l, Δε_el, and Δε_tpl are the measured strain, the elastic strain, and the transformation plasticity strain in the direction longitudinal to the load application/current flow axis; Δε_et and Δε_tpt are the elastic strain and the transformation plasticity strain in the transverse direction; and µ is the Poisson ratio of the specimens. The change of the transformation strain can be calculated by [4]:

Δε_tr = (ΔV / 3V_0) f   (13)

where V_0 is the average atomic volume of austenite at the reference temperature, ΔV is the difference between the average atomic volumes of martensite and austenite, and f is the martensitic fraction.
Taking M_s0, the martensite start temperature under stress at the point with the relative position 0, as the reference temperature, the change of the thermal strain can be obtained by the law of mixtures [23]:

Δε_T = [(1 − f)β_γ + f β_m](T − M_s0)   (14)

where β_γ and β_m are the expansion coefficients of austenite and martensite. According to Schuh and Dunand's derivation [24], the change of the transformation plasticity strain can be approximated by Equation (15), where ΔV/V is the volume mismatch between austenite and martensite, ΔV_0/V_0 is the volume mismatch at the reference temperature, σ_Y is the yield stress of the weaker phase, and σ_1 is the applied external stress. Considering that most of the martensite is generated rapidly near M_s, the strain from transformation plasticity can be approximated as a linear function of the martensitic fraction. Then the change of the transformation plasticity strain can be calculated by Equation (16), where ΔV_s/V_s is the volume mismatch at M_s.

During the martensitic transformation, the material parameters of the specimen change with the martensitic fraction. The Young's modulus of the specimen can be expressed by [23]:

K = (1 − f)K_γ + f K_m   (17)

where K_m and K_γ are the Young's moduli of the martensite and the austenite. The strain due to elastic deformation can be calculated by Hooke's law,

Δε_el = σ_1 / K   (18)

and the change of the elastic strain can be calculated by:

Δε_e = σ_1 (1/K − 1/K_γ)   (19)

Combining Equations (12)-(14), (16), and (19) gives the relationship between the strain change in the longitudinal direction and the martensitic fraction, Equation (20). Since the term (β_γ − β_m)(T − M_s) is much smaller than the other terms, Equation (20) can be simplified to Equation (21), which reveals that the martensitic fraction is approximately linearly related to the difference between the measured strain and the strain due to temperature change.

Determining the Transformation Parameter α and the Martensite Start Temperature

The classic martensitic transformation kinetics model that has been most widely applied was proposed by Koistinen and Marburger in 1959 [25]. In that study, the fraction of retained austenite in Fe-C alloys with 0.37 to 1.10 wt.% carbon was accurately measured with an X-ray diffractometer, and a fitted relationship was found, as follows:

f = 1 − exp[−α(M_s − T)]   (22)

where α is a constant, equal to 0.011 for Fe-C alloys with less than 1.10 wt.% carbon. Ignoring the difference between the transformation parameters α in the surface/edge zone and the core/middle zone, the martensitic fraction under a temperature gradient can be calculated using the following equations. When M_s ≤ T ≤ M_s0, the martensitic fraction can be calculated by Equation (23), where f_S is the martensitic fraction of the surface/edge zone. When T ≤ M_s, the martensitic fraction can be calculated by Equation (24), where f_C is the martensitic fraction of the core/middle zone and M_s^g is the equivalent transformation start temperature under a temperature gradient, which can be expressed by Equation (25), where F_Ms is the fraction of martensite at M_s. Although the KM model was found in many studies to fit the experimental data well only in the initial stage, according to Equations (23) and (24) the initial values of the parameter α and the martensite start temperature can still be determined from the measured dilatometric data.
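Equation (22) is straightforward to evaluate numerically; the short sketch below (Python; the temperature values are illustrative) computes the KM fraction along a cooling path:

```python
import numpy as np

def km_fraction(T, m_s, alpha=0.011):
    """Koistinen-Marburger kinetics, Equation (22): martensite fraction as a
    function of the undercooling below the martensite start temperature.
    alpha = 0.011 1/K is the constant reported in reference [25]."""
    T = np.asarray(T, dtype=float)
    return np.where(T < m_s, 1.0 - np.exp(-alpha * (m_s - T)), 0.0)

T = np.linspace(450.0, 20.0, 5)            # cooling path, degC (illustrative)
print(km_fraction(T, m_s=400.0).round(3))  # fraction grows with undercooling
```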
Experimental Procedure

The chemical composition of the studied low-carbon alloyed steel is shown in Table 1; it was measured using a Spectrolab M10 stationary metal analyzer. The specimens, from a cold-rolled sheet with an original microstructure of ferrite and pearlite and a thickness of 1.8 mm, were cut into the shape shown in Figure 5.

[Table 1. Chemical composition (wt.%) of the studied steel: 1.240, 0.232, 0.002, 0.116, 0.031, 0.012, 0.018, 0.016, 0.011, 0.005, 0.005, 0.002; the element headers were not preserved.]

The heating, quenching, and loading process was performed using a Gleeble-1500 thermal-mechanical simulator, and the applied stress in the experimental process was set according to Figure 6. To obtain an initially complete, homogeneous austenitized microstructure, the specimens were heated to the austenitizing temperature of 950 °C at a rate of 10 °C/s and held for a soaking time of 3 min. As shown in Figure 6, the specimens were cooled to 850 °C at a rate of 30 °C/s after soaking. Then a constant tensile stress was applied to the specimens. Under the constant stress, the specimens were quenched to room temperature at a cooling rate of 30 °C/s. Figure 7 shows the measured dilatometric curves of the investigated low-alloy steel under a temperature gradient without external stress.
Although the martensitic kinetics curves of many kinds of steel, including the investigated steel, have a shape similar to the KM model, it is important to note that, according to many studies, the parameter α is only constant in the middle stage of the transformation kinetics curve, between 5% and 60% martensite, and changes in the initial and final stages [26].

The Kinetics of Martensitic Transformation without External Stress

Through experimental observation, Magee [27] derived the thermodynamic form of the KM model. The number of newly formed martensite laths dN and the change of the driving force dU have the following proportional relationship:

dN = ϕ dU   (26)

where ϕ is a proportionality constant. Then the fraction of the newly formed martensite can be expressed by:

df = V dN = ϕV dU   (27)

where V is the average volume of the newly formed martensitic laths. The integral calculation gives Equation (22), and α can be expressed as:

α = ϕV (dΔG^(γ→α)/dT)   (28)

The volume of the martensitic laths is constrained by the grain boundaries and the previously formed laths and changes gradually during transformation, which means that α is not always constant during martensitic transformation. Therefore, it is reasonable to improve the KM model as follows:

f = 1 − exp[−α_F(M_s − T)]   (29)

where α_F is a parameter that is constant at the beginning of the transformation but changes with the fraction of formed martensite in the following stages. According to Equation (29), the parameter α_F can be extracted from the measured kinetics curve by:

α_F = −ln(1 − f) / (M_s − T)   (30)

Figure 8 shows the parameter α_F as a function of the formed martensite fraction. It indicates that α_F is constant in the first half of the transformation, where the transformed martensite is less than 47%; for the tested steel, α_F equals 0.0237 in this stage. Then α_F enters a linearly decreasing stage until 80% of the austenite has transformed into martensite. Beyond this stage, α_F becomes a constant once more, equal to approximately one-quarter of the initial value. When f is more than 87%, α_F starts to decrease linearly again, at a higher rate. Through fitting, α_F can be expressed as Equation (31). When T < M_s, according to Equations (29) and (31), the kinetics curve of martensitic transformation without stress can be expressed as Equation (32).

The proposed kinetics model shows good agreement with the experimental curve, as shown in Figure 7a. Considering that the turning points of the parameter α_F depend only on geometric constraints, it can be concluded that the proposed model applies to all lath martensitic transformations with close habit planes and sliding directions. In Figure 7b, the agreement between the experimental data and the model prediction confirms that the deviation between the experimental data and the previous kinetics models in the initial stage comes from the effect of the temperature gradient.
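Equation (30) gives a direct diagnostic: applying it pointwise to a measured kinetics curve exposes the stages where α_F departs from a constant. A minimal sketch (Python; the synthetic input curve below is illustrative):

```python
import numpy as np

def extract_alpha_f(T, f, m_s):
    """Equation (30): alpha_F = -ln(1 - f) / (M_s - T), evaluated pointwise.
    A flat result indicates classic KM behaviour; deviations mark the stages
    where the average lath volume, and hence alpha_F, changes."""
    T, f = np.asarray(T, float), np.asarray(f, float)
    sel = (T < m_s) & (f > 0.0) & (f < 1.0)
    return -np.log(1.0 - f[sel]) / (m_s - T[sel])

# Synthetic check: a curve generated with a constant alpha returns ~0.0237.
m_s = 380.0
T = np.linspace(379.0, 50.0, 100)
f = 1.0 - np.exp(-0.0237 * (m_s - T))
print(extract_alpha_f(T, f, m_s)[:3].round(4))  # -> [0.0237 0.0237 0.0237]
```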
The Kinetics of Martensitic Transformation under Stress

Considering the mechanical driving energy from stress, according to Equation (26), the fraction of the newly formed martensite when T ≤ M_s can be expressed by Equation (33), where U is the average mechanical driving force from the applied stress, ϕ is a constant, and F_Ms is the fraction of martensite induced by stress at M_s. The integral calculation gives an approximate expression for the martensitic fraction, Equation (34), in which M_s^st, the equivalent transformation start temperature under stress, is expressed by Equation (35), and the transformation parameter under stress is expressed by Equation (36), where ψ is a constant. Approximately, assuming that the parameter α_F in the surface/edge zone is always equal to α_F in the core/middle zone, when T < M_s the total fraction of formed martensite can be obtained by Equation (37), where M_s^(st-g), the equivalent transformation start temperature under stress and a temperature gradient, can be calculated by Equation (38).

According to Equation (34), α_F can be extracted from the measured kinetics curve by Equation (39). The parameter α_F under stress is shown in Figure 9 and can be expressed through fitting by Equation (40). When T < M_s, according to Equations (34) and (40), the kinetics curve of martensitic transformation under stress can be expressed as Equation (41).

Conclusions

Based on the proportional relationship between the martensitic fraction and the difference between the measured strain and the thermal strain under stress, a new dilatometric analysis model was suggested to extract the kinetics curves and determine the transformation parameters. According to the dilatometric analysis results under different stresses, the KM kinetics model was refined, and the improved model showed excellent agreement with the experimental results. Furthermore, the following conclusions can be drawn. (1) The parameter α_F is not a constant but a variable, expressed as a piecewise function with the martensitic fraction as the independent variable. This behavior can be attributed to the linear relationship between α and the average volume of the newly formed martensitic laths. (2) As a part of the driving force of martensitic transformation, the mechanical energy from stress increases the value of α_F linearly.
Enhancement of Lymphatic Vessels in the Superficial Layer in a Rat Model of a Lymphedematous Response

Background: The morphologic and histologic behavior of lymphatic vessels in lymphedema has not been well analyzed using laboratory animals. The purpose of the present study was to elucidate the regeneration process of lymphatic vessels after acute lymphedema in a rat model. Methods: Acute lymphedema was induced by an amputation and replantation surgery on a rat hind limb. Recovery of lymphatic flow was traced using fluorescent lymphography with dye injection. The morphology and number of lymphatic vessels were immunohistochemically detected and quantified in both the superficial and deep layers. Results: The swelling was most severe, and the number of lymphatic vessels in the superficial layer was significantly and maximally increased, on postoperative day 3. Backflows and overflows were also detectable in the superficial layer on postoperative day 3. The number of lymphatic vessels had decreased but remained significantly higher than that in the controls on postoperative day 14, when the swelling had decreased to control levels. In contrast, the number of lymphatic vessels in the deep layer showed a tendency toward increased numbers; however, the increase was not statistically significant on postoperative day 3, 7, or 14. Conclusions: We have obtained solid evidence showing the differential potency of lymphatic vessels between the superficial and the deep layers after temporal lymphedematous induction. Further analysis of lymphedematous responses in animal models could provide new insights into the challenges associated with the clinical treatment of lymphedema.

INTRODUCTION

The lymphatic system plays an important role in maintaining the homeostasis of tissue fluid, immune cell trafficking, and absorption of dietary lipids. Lymphatics are present in the skin and almost all internal organs, excluding the central nervous system, bone marrow, and avascular tissues such as the epidermis. The lymphatic network drains interstitial fluid from the tissues and returns it to the vascular system. Aberrant lymphangiogenesis is associated with the pathogenesis of human disorders including lymphedema, tumor metastasis, and inflammatory conditions such as asthma, psoriasis, and rheumatoid arthritis. 1,2 We must collect analytical findings and information concerning lymphatic flow at both the experimental and clinical levels when examining clinical treatments for lymphedema. Lymphoscintigraphy is the gold standard for diagnosis when pathological changes of lymphatic vessels must be identified. Computed tomography and fluorescence lymphographies can provide detailed views of lymph flow. [3][4][5][6] However, only partial lymphatic pathways are detectable by such methods after the uptake of contrast agents; whole vessels are not visible. There is little information about the horizontal anatomy concerning the localization of lymphatic vessels in the superficial and/or deep layers. Reports are inconsistent as to the number of lymphatic vessels at given horizontal levels and as to the layers and regions in which the lymphatic vessels distribute. To our knowledge, there have been no reports about lymphatic distribution in deep regions.

Disclosure: The authors have no financial interest to declare in relation to the content of this article. The Article Processing Charge was paid for by the authors.
The distribution of all lymphatic vessels in horizontal cross-section must first be elucidated in laboratory animals. 7,8 Intradermal or subdermal lymphangiogenesis has been studied morphologically, whereas physiological and pathological lymphatic responses in the deep layers, such as the intramuscular and peri-muscular layers including the deep fascia, are still unknown. An adequate animal model of lymphedema could be very powerful in helping to reveal the molecular and cellular backgrounds underlying lymphangiogenesis and in developing further treatments for human clinical lymphedema from a novel viewpoint. [9][10][11][12] In the present study, we tried to elucidate the lymphatic distribution in the superficial and deep layers of the lower leg, and then to trace the course of lymphangiogenesis in an acute lymphedema animal model after an amputation and replantation procedure. Our animal model cannot reproduce human clinical lymphedema completely; however, important information needed to solve clinical problems can be obtained at this experimental level.

Acute Lymphedema Model

Adult male Wistar rats (SLC, Shizuoka, Japan) weighing 250-350 g were used in this study. All animal experiments were conducted in strict accordance with institutional and NIH guidelines for "Using Animals in Intramural Research," and all experimental protocols were approved by the Animal Research Committee of Okayama University, Japan (No. OKU-2014176). All rats were intraperitoneally injected with pentobarbital sodium (Dainippon Sumitomo Pharma Co., Osaka, Japan) at 50 mg/kg body weight for anesthesia, and their hair was carefully removed with depilatory around the surgical area of the legs. We evaluated the lymphatic pathways in an acute lymphedema model of the rat hind limb. The hind limbs of the anesthetized rats were amputated around the right groin line. The hind limb was cut so that the groin lymph node was contained in the central side and the popliteal lymph node was contained in the peripheral side (Fig. 1). Soon after the amputation, the replantation surgery was carried out using an allograft. First, the femoral bone was fixed with a 20-G needle. The femoral artery and vein were anastomosed using 10-0 nylon. The muscles and skin were sutured with 3-0 silk. The lymphatic vessels were not anastomosed. The operated rats were caged individually with ad libitum access to food until they underwent the following procedures. They were fitted with Elizabethan collars so that they would not bite themselves.

Tissue Harvest and Immunohistochemical Staining of Lymphatic Vessels

The rats were anesthetized and perfused transcardially with 4% paraformaldehyde fixative. The rats' hind limbs were amputated around the right groin line. The amputated lower extremities were fixed again in 10% neutral buffered formalin overnight, defatted in ethanol for 4 days, and then decalcified by soaking in 10% ethylenediaminetetraacetic acid (EDTA) for 1 month. The tissues were cut at 5 mm peripheral to the groin line (or the suture line). All the histochemical analyses were carried out on sections from the peripheral part in reference to the suture line. The tissues were embedded in paraffin, and 4.5-µm-thick sections were prepared as whole horizontal sections of the hind limb.
[Figure 1. Hind limb amputation and replantation procedures. Amputation was carried out between the inguinal lymph node (upper yellow mark) and the popliteal lymph node (lower yellow mark). The femoral bone (green), the femoral artery (red), and the femoral vein (blue) were cut once. The femoral bone was fixed internally, the femoral artery and vein were anastomosed microsurgically, and then the muscle, subcutaneous tissue, and skin were sutured layer to layer.]

The sections were then deparaffinized in xylene and rehydrated. After antigen retrieval and blocking, sections were incubated with anti-rat podoplanin monoclonal antibody (11035, AngioBio Co., San Diego, Calif.) overnight at 4°C. 13 Next, sections were incubated in anti-mouse IgG horseradish-peroxidase conjugate (414171, Nichirei Co., Tokyo, Japan). The immunopositivities were visualized using a 3-3'-diaminobenzidine tetrahydrochloride Substrate Kit for Peroxidase (Vector Laboratories, Burlingame, Calif.). Finally, the sections were dehydrated and mounted. In the normal rats and in the postoperative day (POD) 3, 7, and 14 subject rats (each n = 3), we manually counted all the immunopositive lymphatic vessels in the horizontal cross sections. We defined "the superficial layer" as the dermis and hypodermis layers including the epimysium, and "the deep layer" as the subfascial layer excluding the epimysium. The number of lymphatic vessels was counted manually in 4 independent specimens from each animal.

[Figure 2. Immunohistochemical studies of lymphatic vessels reacted with podoplanin antibody on POD 0. A, Hematoxylin and eosin staining of a horizontal slice section of rat hind limb at low magnification. B, We defined "the superficial layer" as the dermis and hypodermis layers including the epimysium, and "the deep layer" as the subfascial layer excluding the epimysium; lymphatic vessels detected in the superficial layer are shown in the center, and the deep layer in the area below. C, D, and E, Podoplanin-positive lymphatic vessels in the superficial layer: the lymphatic valves were detectable (green arrowheads). There were lymphatic vessels smaller than 10 µm in diameter (red arrowhead) and squashed or linear-shaped immunopositivity (blue arrowhead); this staining pattern and the immunopositive debris were not counted as lymphatic vessels in this study. In the deep layer, immunopositivities were also dense between the periostea and muscles on the medial side of the femur (F). In the muscular layer, the lymphatic vessels were observed in the neurovascular bundles (G).]

Measurement of the Ankle Circumference

The ankle circumferences were measured manually before the replantation procedure and after it on PODs 1, 3, 5, 7, 10, and 14 (n = 5). We defined POD 0 as the control before the operation. The mean circumference was obtained by averaging 10 measurements from 1 sample.

Fluorescence Lymphography with Indocyanine Green

The near-infrared fluorescence imager PDE (Hamamatsu Photonics Co., Hamamatsu, Japan) was used to observe lymphatic flow. We injected 0.02 ml of 5 mg/ml indocyanine green (ICG; Diagnogreen, Daiichi-Sankyo, Tokyo, Japan) intradermally into the rats' hind toes using a 30-gauge needle. 14 Six rats were observed from POD 0 to POD 28. Fluorescence images were taken up to 120 minutes after the ICG injections.

Dye Injection Procedure

The lymphatic pathways were visualized after the dye injection procedure as described previously. 15 In this study, we manually injected acrylic ink (Sakura Acryl Colors, Sakura Color Products Co., Osaka, Japan), diluted in saline, directly into the subdermal capillary lymphatic vessels of the dorsalis pedis.
The dye was delivered from the capillary lymphatic vessels to the collecting lymphatic vessels in the superficial layer, but not to the lymphatic vessels in the deep layer of the lower leg. Next, the dye was observed to ascend immediately to the inguinal region and the intraperitoneal lymph nodes. The skin was carefully removed to observe the subdermal lymphatic vessels directly. We injected the dye directly into the lymphatics of the superficial tissues, and we performed histological examinations of them on PODs 0, 3, and 7.

Statistical Analysis

Statistical comparisons of the ankle circumferences and the numbers of lymphatic vessels were carried out using Student's unpaired t test. Statistical significance was set at P values less than 0.05 and 0.01. All numerical data are presented as the mean ± SD.
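For the comparisons described above, a minimal sketch of the unpaired test is shown below (Python with scipy; the counts are placeholder values, not the measured data):

```python
import numpy as np
from scipy import stats

# Placeholder vessel counts per horizontal section (4 specimens per group),
# matching the unpaired two-group design described in Statistical Analysis.
pod0_counts = np.array([68.0, 75.0, 62.0, 77.0])     # preoperative, superficial
pod3_counts = np.array([112.0, 130.0, 124.0, 119.0])

t_stat, p_value = stats.ttest_ind(pod0_counts, pod3_counts)  # Student's t test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # compare against 0.05 and 0.01
```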
The Normal Distribution of Lymphatic Vessels in Horizontal Whole Sections of the Lower Leg

Under normal conditions, we examined the distribution and morphology of the lymphatic vessels, considering the superficial layer and the deep layer separately. The mean number of preoperative lymphatic vessels on POD 0 was 70.6 ± 8.4 in the superficial layer and 190 ± 32.5 in the deep layer. Immunopositive vessels of various shapes and sizes were detected in linear, nearly circular, irregularly circular, or elliptical forms, and their diameters ranged from less than 10 µm to more than 100 µm (Fig. 2). We defined such immunoreactivity with cavities as lymphatic vessels based on their morphology; however, we could not define the linear type as collapsed lymph ducts in this study (Fig. 2E center, right). The linear staining pathways were not conspicuous in the deep layer, but they were in the superficial layer. In the deep layer, immunopositivities were also dense between the periostea and muscles on the medial side of the femur. In the muscular layer, the lymphatic vessels were observed in the neurovascular bundles or along the small vessels. ICG lymphography and dye injection procedures showed evidence of lymphatic linear pathways from the dorsalis pedis to the dorsal side of the hind limb into the popliteal lymph node area (Fig. 3A) and then on into the peritoneal cavity. No ICG-fluorescence or dye leakage from lymphatic ducts was detected.

Temporal Change of Lymphatic Vessels in Acute Lymphedema

Edematous change reached a peak on POD 3 (Fig. 4). At this point, in ICG lymphography, the area peripheral to the suture line showed uniformly high fluorescence intensity representing dermal backflow on both the ventral and dorsal sides. ICG-fluorescence was not detected across the suture line (Fig. 3B, indicated by the red line). The dye injection procedure revealed back-flows from the collecting vessels to the capillary vessels, and overflows to the inter-tissue spaces. The number of lymphatic vessels in the superficial layer increased significantly (Figs. 5, 6). On the other hand, in the deep layer, the number of immunopositive-staining areas showed a tendency to increase from the preoperative status. However, this increase was not statistically significant (Fig. 5A). On and after POD 5, the swelling started to decrease but remained significantly enlarged (Fig. 4).

[Figure 3. Time-dependent change of ICG-lymphangiography before and after the surgery. A, Preoperative lymphangiography: lymph flows were detectable on the dorsal side. B, POD 2: ICG-fluorescence pooled at the part distal to the suture line (red line); no fluorescence was detected beyond the proximal part. C, POD 5: ICG-fluorescence was detectable across the suture line (indicated by the yellow arrow); pooled fluorescence was also detectable; the inguinal lymph node was fluoro-positive (indicated by green asterisks). D, POD 14: Pooled fluorescence was washed out from the distal part of the hind limbs; fluoro-positive lymph flows across the suture line to the inguinal lymph node (indicated by green asterisks) were detectable.]

During this period, ICG lymphography showed recanalized pathways, and the ICG-fluorescence was still very noticeable distal to the suture line (Fig. 3C). The number of lymphatic vessels on POD 7 decreased with statistical significance compared with POD 3, but remained significantly increased compared with the preoperative state (Fig. 5A). On and after POD 10, the edema resolved to baseline (Fig. 4). At this point, ICG lymphography showed recanalization, and pooling of the fluorescence disappeared completely (Fig. 3D). Dye injection procedures on POD 14 also showed the recanalized pathway clearly extending across the suture line (Fig. 7). The number of lymphatic vessels on POD 14 decreased with statistical significance compared with POD 3, but remained significantly increased over the preoperative state, similar to POD 7 (Fig. 5A).

An Increase in the Number of Lymphatic Vessels in the Superficial Layer

From our study, we developed the following 2 hypotheses to explain the increase in the number of lymphatic vessels. First, the reconstruction of lymphatic vessels was promptly stimulated after the operation. It is generally accepted that lymphangiogenesis is triggered by inflammation and retention of lymph fluid. 9,16,17 Therefore, we strongly considered the involvement of lymphangiogenesis in the retention of lymph flow in our system. However, as shown in Figure 3, we can also suggest lymphatic drainage inosculation between donor and recipient lymphatic vessels near the suture site because of the rapid restoration of ICG flow in the recipients' main lymphatic trunk. Therefore, as our second hypothesis, we suggest that all the lymph vessels showed little or no change from before to after the operation; however, we were unable to detect the majority of the superficial lymphatic vessels on POD 0 by our methods. From our observations, debris and/or linear staining with podoplanin-immunoreactivity were conspicuous in the superficial layer on POD 0, but we did not define them as lymphatic vessels (Fig. 2). The afferent lymphatic flow in the lymphatic capillaries might be too weak and too small to dilate the vessels. The lymphatic vessels might collapse to linear configurations or smaller sizes under normal conditions. After surgery, such vessels might swell and then become detectable by our methods.

[Figure 6. Swelling lymphatic vessels in the superficial layer after replantation surgery. A, POD 0: dye filled the collecting lymphatic vessels (indicated by red arrows); there was no leakage to the extraductal regions and no backflow to the dermal or subdermal capillary lymph vessels (indicated by blue arrows). B, POD 3: Collecting lymphatic vessels were overfilled with dye and were swelling (red arrows); capillary vessels filled with the ink (indicated by blue arrows) due to back-flows were detectable; green arrows show overflow to the extravascular space of the lymphatic capillaries. Higher-magnification microphotographs have been inserted to show immunopositivities (inserts in each photograph).]
On POD 3, the reason for the increasing number of lymphatic vessels can be explained by the second hypothesis mentioned above. We have shown that temporal retention of lymphatic fluid occurred along with the swelling around the ankle after the amputation and replantation surgery, as evidenced by ICG lymphangiography and the dye injection method. This lymphatic retention also brought back-flow to the capillary lymph vessels and overflow to the interstitial spaces by extrusion of lymphatic fluid from the lymphatic collecting vessels on POD 3 (Fig. 6B below). Therefore, previously collapsed lymph vessels became dilated, and we could then detect them by podoplanin immunostaining. We observed that the number of lymphatic vessels increased on POD 3. We detected parallel relationships between the retention of lymphatic fluid detected by ICG lymphangiography (Fig. 3) and the ankle circumference (Fig. 4). We also detected parallel relationships between the ankle circumference and the number of lymphatic vessels in the superficial layer until POD 3.

On POD 7, the swelling was decreasing (Fig. 4), and ICG retention below the suture line was also decreasing (Fig. 3); however, the number of lymphatic vessels in the superficial layer remained increased. These discrepant findings fit the first hypothesis mentioned above: lymphangiogenesis had already occurred, constructing new lymphatic networks in the superficial layer. On the other hand, the once-dilated lymph vessels started to collapse again on POD 7. Our ICG-lymphangiography data (Fig. 3) clearly show that the lymphatic fluid was already flowing from the periphery beyond the suture line back to the midline after POD 7. Therefore, we could recognize that the swelling did not depend on the actual number of lymph vessels, but on the lymphatic flows from the periphery to the central body parts. Taking these hypotheses together, we can speculate that the previously collapsed lymphatic vessels dilated around POD 3 and then collapsed again after PODs 7-14. The newly constructed lymphatic vessels formed by lymphangiogenesis might have contributed to the increased number of lymphatic vessels on PODs 7 and 14.

In the Deep Layer, There Was No Significant Increase in the Number of Lymphatic Vessels

From our results, the lymphatic vessels in the deep layer did not significantly increase in acute lymphedema. We were unable to detect the collapsed shapes and linear staining with the podoplanin antibody in the deep layer at any time point. Histologically, the superficial layer (the dermis) has a rich network of small and thin lymphatic vessels. These branches descend to the deeper layer and join larger lymphatic trunks with relatively thick walls. In the normal state, the number of lymphatic vessels in the deep layer (190 ± 32.5) was approximately 3 times higher than in the superficial layer (70.6 ± 8.4) on POD 0 (Fig. 5). Stanton et al. 18 clearly demonstrated by scintigraphy that lymph flow in the deep layer was ~2-3 times higher than that in the superficial layer under human physiological conditions. Stanton's finding is consistent with our present data. From these findings, we suggest that the potential for pathological retention of lymphatic fluid in the deep layer is weaker than in the superficial layer, and that after the amputation and replantation surgery, collapsed and linear lymph vessels would dilate only among the rich, thin networks of lymphatic drainage in the superficial layer.
We cannot completely rule out decreased blood flow, rather than impaired lymphatic flow, as the main cause of the swelling, despite the anastomoses of the arteries and veins. However, we have obtained solid evidence of back-flows of lymphatic fluid and an increased number of lymphatic vessels in the amputated and replanted legs. In recent clinical practice, we have tried to overcome lymphedema by lymphovenous anastomoses mainly in the superficial layers; however, the results were unfortunately inconsistent. 3 To our knowledge, there have been no attempts targeting the lymph vessels of the deeper layer. Based on our present findings, the different behaviors of the lymph vessels in the superficial and deeper layers after a transient lymphedematous response offer a hint as to how to treat lymphedema clinically and microsurgically in both the superficial and the deeper layers.

CONCLUSIONS

We have demonstrated a temporal lymphedematous response in the rat model. The rats' edema worsened on POD 3 but recovered to normal on POD 10. The number of lymph vessels increased during this acute phase, but only in the superficial layer, not in the deeper layer. This increase remained at a higher level even after attenuation of the edema. We have discovered new findings showing the differences in activity between the lymphatic vessels in the superficial and the deep layers. Future attempts to enhance the recovery of lymph flows in our rat model after lymphovenous anastomosis may offer a potential strategy to cure lymphedema clinically, according to basic evidence on the anatomy and reconstitution of peripheral lymphatic vessels in the superficial and the deep layers.
CHARACTERIZATION OF HEAT WAVES: A CASE STUDY FOR PENINSULAR MALAYSIA

The present work investigates the characteristics of heat waves in Peninsular Malaysia based on the Excess Heat Factor (EHF) index. The index was calculated from daily maximum and minimum temperatures at nine meteorological stations in Peninsular Malaysia over the period 2001 to 2010; the selected stations represent all of the states in Peninsular Malaysia. Statistical analysis found that the highest EHF occurred at the Kuala Lumpur station in 2002, with an index of 9.1 °C², and the lowest at Alor Setar in 2006, with an index of 0.1 °C²; a moderate EHF of 4.2 °C² was found at Kuantan. Moreover, the longest heat wave, lasting 24 days, occurred in Ipoh, Perak, with an amplitude of 29.4 °C - 33.0 °C. Most of the heat waves characterized in Malaysia occurred during El Nino events, especially the moderate El Nino episodes from 2002 until 2005 and in 2010. The southeast, northeast, and west parts of Malaysia experienced the highest average heat wave activity. These results indicate that heat wave conditions in Peninsular Malaysia are a cause for concern and require immediate investigation, because they have a direct impact on agriculture, health, the economy, and human well-being.

INTRODUCTION

Heat waves are among the most threatening natural hazards and can adversely affect ecosystems, infrastructure, human health, and social life (Zuo et al., 2014). Populations are highly vulnerable to changes in heat wave attributes, and extreme heat wave events can increase human morbidity and mortality rates (Anderson & Bell, 2011). Previous studies show that heat waves are responsible for more deaths than all other natural hazards in Australia (Coates et al., 2014) and constitute a climate risk in Romania (Bocancea, 2018). Although there is no standard definition of a heat wave, it can be described as a period of consecutive days of abnormally hot weather. Heat waves are increasing globally due to the effects of climate change, and numerous studies indicate that climate change is expected to exacerbate heat wave events, making them more frequent, longer, and more intense (Coumou & Rahmstorf, 2012). The Asian region also experiences adverse effects of climate change: the Intergovernmental Panel on Climate Change (IPCC, 2013) states that South Asian countries will be at the greatest risk for the emergence of heat waves. Heat wave impacts can vary significantly from region to region, for example in developing countries such as Malaysia. The heat wave is an uncommonly observed natural hazard in Malaysia, but it has had a significant impact there; nevertheless, heat wave mitigation has not been taken seriously by most governments and non-governmental organizations (NGOs) working on monitoring and disaster risk reduction. The impact of heat waves in Malaysia is currently under-reported, and information such as morbidity, mortality, and economic consequences is difficult to assess. Although heat wave events in Malaysia have not been extensively investigated, in recent years a number of studies have focused on climate change and temperatures. Wai et al. (2005) used 50 years (1951-2001) of temperature data to study the global warming trend in Malaysia and found that the annual mean temperature increased by 0.99 to 3.44 °C per 100 years. A study by Makerami et al.
(2012) found that an acceptable thermal condition in outdoor spaces of the hot and humid climate of Malaysia is below 34 °C, with comfortable conditions in the early morning (9-10 am) and late afternoon (4-5 pm). Mohd Salleh et al. (2015) also found that most stations across Peninsular Malaysia showed an inclination toward temperatures above the annual mean surface temperature of 26 °C to 28 °C, with high annual precipitation values (1200-2400 mm). These preliminary studies provide clues that investigating heat waves and their characteristics will be useful for improving human understanding and awareness. This study focuses on the analysis of heat wave characteristics in Peninsular Malaysia from 2001 until 2010. We chose the Excess Heat Factor (EHF) method because it works across different climates and is well suited to normalizing climatological variation in heat waves from a hazard point of view (Perkins & Alexander, 2013). The method was introduced to create a universal definition and measurement of heat waves (Nairn et al., 2009). The EHF index provides results focused on the patterns of heat wave frequency, duration, amplitude, and number of days over the past decade. The results of this investigation of heat waves will provide a better understanding of climatological events and support early warning systems, especially during extreme weather in Malaysia.

DATA AND METHODS

Section 2.1 describes the data set used in this study, and Section 2.2 describes the method used to determine and characterize heat wave events.

DATA

The daily maximum temperature (Tmax) and minimum temperature (Tmin), both in °C, for the period 2001-2010 were obtained from the Malaysian Meteorological Department (MetMalaysia). The temperature data were based on the availability of meteorological data at nine stations across Peninsular Malaysia. The meteorological stations in Johor Bahru and Malacca represent the southwest of Peninsular Malaysia; Kuala Lumpur and Perak the west; Penang and Perlis the northwest; and Pahang, Kelantan, and Terengganu the east. The selected stations represent urban, suburban, and industrial areas. We focused on Peninsular Malaysia for the years 2001-2010, a period during which previous studies recorded El Nino events globally. Table 1 compiles the details of each station, and Figure 1 depicts their locations. The figure shows that Peninsular Malaysia, or West Malaysia, is located in Southeast Asia between 1°N-6°N and 101°E-105°E, covering an area of 130,598 km². The climate of Peninsular Malaysia is characterized by two monsoon regimes, namely the Southwest Monsoon (SWM) and the Northeast Monsoon (NEM). The SWM, usually influenced by low-level southwesterly winds, begins in May and ends in August. The NEM is dominated by northeasterly winds that cross over the South China Sea; this season usually begins in November and lasts 3-4 months, ending in February of the following year. During the NEM, the exposed areas on the eastern part of Peninsular Malaysia receive heavy rainfall, while the SWM is a drier period for the whole country, particularly for the states of the west and north coasts of Peninsular Malaysia. On a larger scale, Peninsular Malaysia is bounded by two large oceans, the Pacific Ocean to the east and the Indian Ocean to the west.
This situation makes the climate of Peninsular Malaysia strongly influenced by natural climate variability associated with these oceans (Tangang et al., 2012).

METHODS

A number of indices for determining heat waves have been developed based on different parameters. In this study, we chose the EHF index method, developed by Nairn and Fawcett (2013), to determine heat wave events. The EHF is determined from the combined effect of two excess heat indices: the excess heat index of significance (EHIsig) and the excess heat index of acclimatization (EHIaccl) (Nairn & Fawcett, 2015). EHIsig captures unusually high heat arising from high daytime temperatures together with unusually high overnight temperatures. It is measured by comparing the three-day-period (TDP) mean of the daily mean temperature (DMT) directly against a climate reference temperature (the 95th percentile); for the 95th percentile calculation, we used the 10-year climate reference period at each particular location. If the TDP average is higher than the climate reference, each day in this period is marked as unusually warm, i.e., an excess heat event. The units of EHIsig are degrees Celsius (°C), and the equation is given by (Nairn & Fawcett, 2013):

EHIsig = (Ti + Ti-1 + Ti-2)/3 - T95 (1)

where Ti is the DMT on day i and T95 is the 95th percentile of DMT for the climate reference period 2001-2010. The DMT is the average of Tmax and Tmin:

DMT = (Tmax + Tmin)/2 (2)

The second heat index, EHIaccl, is defined as a period of heat that is warmer than the recent past (on average). Acclimatization to higher temperatures depends on human physical adaptation, which takes between two and six weeks and involves adjustment of the cardiovascular, endocrine, and renal systems (Nairn & Fawcett, 2015). In this index, 30 days is used as the period required for acclimatization. EHIaccl is measured as the difference between the same three-day-averaged DMT and the average DMT over the preceding 30 days. The units of EHIaccl are also °C, and the index is given by (Nairn & Fawcett, 2013):

EHIaccl = (Ti + Ti-1 + Ti-2)/3 - (Ti-3 + ... + Ti-32)/30 (3)

where Ti is the DMT on day i. EHIaccl is thus an anomaly of three days of DMT with respect to the previous 30 days; a positive EHIaccl means the three days are warmer than the recent past (on average). The EHF in equation (4) is then calculated as the combination of these two indices, providing a comparative measure of the frequency, duration, amplitude, and spatial distribution of heat wave events. The unit of EHF is °C²:

EHF = EHIsig × max(1, EHIaccl) (4)

where the 1 in equation (4) ensures that the EHF retains at least a small positive value for short heat waves. The EHF calculation incorporates the effects of humidity on heat tolerance indirectly by using the mean rather than the maximum daily temperature. It provides a comparative measure of the intensity, load, duration, and spatial distribution of a heat wave event and has a strong signal-to-noise ratio (Guyton & Hall, 2000). Understanding of the impact of heat waves on human health varies, but most studies show that vulnerable populations are more sensitive and affected over the following three days (Keggenhoff et al., 2014). As a result, a heat wave is defined as a period of at least three days with EHF > 0 (positive values), combining excess heat and heat stress with respect to the local climate [13].
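To make equations (1)-(4) concrete, the following minimal sketch in Python implements the EHF computation for one station's daily record. This is our own illustration, not the authors' actual workflow; the function name and the choice to derive T95 from the input series itself (mirroring the use of the 2001-2010 record at each station as its own climate reference) are assumptions.

import numpy as np

def excess_heat_factor(tmax, tmin):
    # Daily EHF series (degC^2) per equations (1)-(4); tmax/tmin are daily arrays.
    dmt = (np.asarray(tmax, float) + np.asarray(tmin, float)) / 2.0  # eq. (2)
    t95 = np.percentile(dmt, 95)          # climate reference from the record itself
    ehf = np.full(dmt.shape, np.nan)
    for i in range(32, len(dmt)):         # need a 3-day window plus 30 prior days
        tdp = dmt[i - 2:i + 1].mean()     # three-day-period mean ending on day i
        ehi_sig = tdp - t95                          # eq. (1), degC
        ehi_accl = tdp - dmt[i - 32:i - 2].mean()    # eq. (3), vs preceding 30 days
        ehf[i] = ehi_sig * max(1.0, ehi_accl)        # eq. (4), degC^2
    return ehf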
We then use the EHF index results to calculate annual values of four heat wave characteristics, following Fischer and Schar (2006), as below (a minimal computational sketch of these four characteristics is given below):

1. The heat wave amplitude (HWA) is the peak EHF value of the hottest heat wave event of the year.
2. The heat wave number (HWN) is the annual number of heat wave events.
3. The heat wave frequency (HWF) is the annual frequency of days contributing to heat wave events (the sum of participating heat wave days per year).
4. The heat wave duration (HWD) is the duration of the longest annual heat wave event (which must be ≥ 3 days).

To characterize the spatial distribution of each heat wave characteristic in Peninsular Malaysia, we used the inverse distance weighted (IDW) interpolation method in ArcGIS version 10.3. ArcGIS provides spatial analysis tools for raster and vector data that apply statistical theory and techniques. Figure 2 shows the overall methodology of this study; the next section presents the results and discussion.

RESULTS AND DISCUSSION

Heat wave conditions exist when the EHF is positive for at least three consecutive days; a single or double positive EHF value does not define a heat wave (Perkins & Alexander, 2013). Based on the EHF index, the heat wave events at all stations from 2002 to 2006 and in 2010 were identified to determine the characteristics HWN, HWF, HWD, and HWA. The results show that heat wave events occurred mostly during the southwest monsoon (SWM), between March and July. Major heat wave events with the highest EHF index occurred in 2002 at all locations except Alor Setar, where the peak was in 2010. The critical values recorded during the study period may be due to severe dry spells in East Malaysia recorded during El Nino events, which included the three driest years (1963, 1997, and 2002) for Peninsular Malaysia (MMD, 2009). We can conclude that the variation of heat wave events for most areas in Peninsular Malaysia was affected by El Nino during the period 2001-2010. Abul and Gazi (2016) compiled and classified El Nino events during 1952-2010 globally, indicating moderate El Nino conditions during 2002-2003 and 2010, while weak El Nino events occurred during 2004-2005 and 2006-2007. The results also show differences in the EHF index between urban, industrial, and suburban areas. Urban and industrial areas such as Kuala Lumpur, Ipoh, Kuantan, Kota Bahru, and Johor Bahru recorded higher EHF indices than the suburban areas (Alor Setar, Kuala Terengganu, Melaka, and Kota Bahru). Urban and industrial regions are exposed to more heat wave events than suburban or rural areas; the heat effect in urban areas is probably the result of interacting synergies such as surface moisture deficiency, low wind speed, and differences in ambient temperature between the two kinds of area (Fisher et al., 2007). Table 2 summarizes the heat wave characteristics over Peninsular Malaysia during the period 2001-2010, and Figure 4 shows the general spatial distribution of each characteristic. For HWN, Figure 4(a) shows that the highest average values are concentrated in the southwest and northwest of Peninsular Malaysia (1.5-1.9 events/year) and the lowest values are observed in the eastern part (0.50-0.70 events/year).
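As referenced with the definitions above, the four annual characteristics can be computed from a daily EHF series as in the following minimal sketch (Python, our own illustration continuing from the previous one; the function name is an assumption):

import numpy as np

def heatwave_characteristics(ehf):
    # Annual HWN, HWF, HWD, HWA from one year of daily EHF values.
    # A heat wave is a run of at least 3 consecutive days with EHF > 0.
    ehf = np.nan_to_num(np.asarray(ehf, float), nan=0.0)
    events = []                                  # (length, peak EHF) per event
    i = 0
    while i < len(ehf):
        if ehf[i] > 0:
            j = i
            while j < len(ehf) and ehf[j] > 0:
                j += 1
            if j - i >= 3:                       # 1-2 positive days do not count
                events.append((j - i, ehf[i:j].max()))
            i = j
        else:
            i += 1
    hwn = len(events)                                 # number of events
    hwf = sum(n for n, _ in events)                   # participating days per year
    hwd = max((n for n, _ in events), default=0)      # longest event, in days
    hwa = max((p for _, p in events), default=0.0)    # peak EHF of hottest event
    return hwn, hwf, hwd, hwa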
Similar to HWN, the highest average values of HWF (>14 days/year) are found in the southwest and northwest, and the lowest (6.0-8.0 days/year) in eastern Peninsular Malaysia (Figure 4(b)). In Figure 4(c), HWD shows the longest average durations, of 4 to 7 days per year, predominantly in the west and northwest of Peninsular Malaysia, while the shortest heat waves (1-2 days/year) are observed in the eastern part. Figure 4(d) shows HWA, with the highest values (1.3-1.5 °C²/year) located in the western part and the lowest (0.5-0.7 °C²/year) in parts of the northeast (Alor Setar) and southeast (Melaka) of Peninsular Malaysia. These spatial distributions show that heat wave events in the southeast, northeast, and west parts of Malaysia are dominated by season, with the majority occurring during the southwest monsoon; a similar pattern has been observed in Iran (Esmailnejad, 2016). Based on the 95th-percentile results and the HWD, HWF, and HWN indices, most stations experienced heat wave events lasting longer than a week. Such heat wave conditions in Peninsular Malaysia can adversely affect public health, especially during the SWM and El Nino events. Moreover, higher HWA values have a greater impact on agriculture, usually owing to the high evapotranspiration associated with a lack of precipitation (Croitoru et al., 2016); yields can be severely affected when crops are exposed to stressful conditions during heat waves, mainly due to the lack of water in irrigation systems during the drought season. Over the study period, the range of heat wave amplitude for all locations was between 29.4 °C and 33.0 °C. Compared with the optimum annual temperatures for the major economic crops in Malaysia (NC2, 2000) in Table 3, we can assume that the heat wave events during the study period affected some agricultural plants, especially crop production.

CONCLUSIONS

This paper has characterized heat wave events in Peninsular Malaysia based on daily maximum and minimum temperatures during the period 2001-2010. The combined effect of excess heat (EHIsig) and heat stress (EHIaccl) was employed to obtain the EHF index. The results showed that the highest EHF index occurred at Kuala Lumpur in 2002 and the lowest at Alor Setar in 2006. Most heat wave events were recorded during El Nino events, especially the moderate El Nino episodes from 2002 until 2005 and in 2010. Based on HWD, HWF, HWN, and HWA, the locations with the highest climatological values were identified: the southwest (Johor Bahru) was highest for HWN and HWF, while the west (Kuala Lumpur) was highest for HWA. The longest heat wave duration (HWD), of 24 days, was found in Ipoh. The characteristics of heat waves derived from the EHF index were also compared with spatial distribution maps, which show that the southeast, northeast, and west parts of Malaysia experienced more heat wave events during the study period. Our results show that heat wave conditions in Peninsular Malaysia are a cause for concern, and critical study and exploration are therefore needed because such events have a direct impact on agriculture, the economy, and human health. Under these circumstances, further research is needed to produce a standard heat wave definition and threshold for Malaysia.
This information would be useful for health policymakers, enabling them to plan better for future climate change impacts. Using a longer-term data period would likely give more accurate information about the occurrence of heat wave indices and events across Malaysia.

ACKNOWLEDGMENTS

We would like to thank the Malaysian Meteorological Department for providing the temperature data. The second author is a PhD candidate supported by the Faculty of Science and Natural Resources, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah.
Recurrent urethrovesical anastomotic strictures following artificial urinary sphincter implantation: a case report

Introduction

The management of an anastomotic stricture after a radical prostatectomy becomes complex and difficult when an artificial urinary sphincter precedes the formation of the stricture. The urethral narrowing does not allow the passage of routinely used urological instruments, and no previous reports have suggested alternative approaches.

Case presentation

We present the case of a 68-year-old Greek man diagnosed with a recurrent anastomotic stricture approximately two years after a radical prostatectomy and three years after the implantation of an artificial urinary sphincter, and propose novel alternative methods of treatment. Our patient first underwent stricture incision with a rigid ureteroscope and a holmium:yttrium-aluminium-garnet laser fiber, followed by a second, successful attempt with a pediatric resectoscope. After a one-year follow-up, our patient is doing well, with no evidence of recurrence.

Conclusions

To the best of our knowledge, this is the first report of the management of recurrent urethral strictures following artificial urinary sphincter implantation. Minimally invasive techniques using small-caliber instruments may offer efficient treatment options while diminishing the danger of urethral erosion.

Introduction

Despite improvements and refinements in the surgical techniques used for radical prostatectomy (RP), complications still occur. The commonest are incontinence and loss of erectile function. The next most common complication, with rates ranging from 0.48% to 32%, is the formation of a urethrovesical anastomosis (UVA) stricture [1,2]. These strictures have a high incidence of recurrence, and several treatment options have been proposed, such as dilatation, endoscopic cold-knife incision, urethral stent placement, electrocautery resection, anastomotic urethroplasty, and intermittent self-catheterization. However, the problem becomes very complex in the presence of a previously placed artificial urinary sphincter (AUS): approaching the stricture with the routinely used techniques and instruments can be extremely difficult. Until now, recurrent contractures have been managed simultaneously with, or before, the placement of an AUS [3-5]. To the best of our knowledge, we present a case in which novel methods were used to treat this complex and difficult situation.

Case presentation

A 68-year-old Greek man was referred to our department for evaluation two years after an open retropubic RP. He presented with lower urinary tract symptoms and urinary incontinence. His medical history was notable for hypertension and atrial fibrillation. Our patient was assessed with cystourethrography and cystourethroscopy, and the presence of an anastomotic stricture was verified. An endoscopic cold-knife incision was performed successfully. Six months later, after recurrence of the urethral stricture had been ruled out, our patient underwent AUS placement for the management of incontinence. The decision to implant an AUS was taken after evaluating our patient with urethroscopy, during which a non-functioning external sphincter was observed. Our patient's post-operative course was uneventful. He had regular follow-up visits with ultrasonography and was free of symptoms for a four-year period.
Follow-up of our patient was performed with post-void residual and uroflow measurements. Three years after implantation of the AUS, he was readmitted with obstructive voiding symptoms, and recurrence of the urethrovesical contracture was verified by urethroscopy. The AUS was deactivated at that time. Under general anesthesia, with our patient in the lithotomy position, an 11F Olympus rigid ureteroscope was passed to the area of the stenosis (Figure 1). A holmium:yttrium-aluminium-garnet (Ho:YAG) laser with a 365 μm end-firing quartz fiber was passed through the working channel at a setting of 1 J and a frequency of 10 Hz (10 W); this could be increased during the procedure according to the surgeon's preference. Deep incisions in the scar tissue were performed by direct contact of the laser tip until the peri-vesical fat was visualized. An 18F Foley catheter was then introduced and left in place for three days. Our patient again experienced a recurrence six months later. He underwent endoscopic incision of the stricture with a 9F pediatric resectoscope (Figure 2). Resection of the stricture was performed (Figure 3) and an 18F Foley catheter was placed. Our patient was discharged two days later after removal of the catheter and evaluation of his urinary function. Six weeks later, the AUS was reactivated. Our patient has been recurrence-free after an 18-month follow-up period.

Discussion

One of the concerns after RP is the occurrence of potentially recurrent UVA strictures. This complication normally appears within a few months of surgery. Risk factors for the occurrence of strictures are previous bladder neck surgery, urinary extravasation, and excessive intra-operative blood loss [6,7]. There are varying degrees of association between anastomotic contracture and stress urinary incontinence [8,9]. The AUS was introduced as a treatment for post-prostatectomy incontinence, with excellent results [7]. One of the major but unresolved concerns of AUS placement is the timing of implantation following the initial management of the stricture; intervals ranging from six weeks to seven months have been reported [4,10]. Because no consensus has been reached, we decided to wait six months before placing the AUS. Unfortunately, even this interval was not enough; prospective studies are therefore needed to establish the optimal interval. The management of a post-prostatectomy contracture has been performed with one-stage or two-stage procedures combining an aggressive incision of the stricture followed by AUS placement [4,5,7]. Others have suggested a transperineal urethroplasty combined with AUS implantation [11]. Although several treatment options have been proposed, such as dilatation, cold-knife incision, electrocautery incision or resection of the stenotic bladder neck, Urolume stent placement, triamcinolone injection, and use of the Ho:YAG laser, the optimal management of UVA contracture has not yet been determined, and no prospective studies have been published. Yurkanin et al. [12] reported good results using cold-knife incision, with a response rate of up to 87% after one session. A comparative study by Ramchandani et al. [13], however, reported that balloon dilatations were as effective as cold-knife incisions and suggested that cold-knife incisions should be reserved for complicated cases.
In our case, cold-knife incision seemed an attractive choice for treating the UVA stenosis at our patient's first visit, since the stricture was detected early and the scarring process was still limited. Nevertheless, the role of transurethral incision remains highly debatable. A two-stage approach with Urolume stenting of the contracture prior to AUS placement has been reported with acceptable outcomes [3,10]; a recent study from the Baylor College of Medicine reported a satisfaction rate of 89% at 17 months in nine patients [3]. Placement of a Urolume stent, however, is not without complications, such as migration, hematuria, encrustation, and re-obstruction due to hyperplastic tissue ingrowth [3,14]. Moreover, extraction of this stent can be very difficult for the urologist, with potentially catastrophic effects on the urethral tissue. An excellent review by Bader and colleagues has summarized Ho:YAG laser use [15]; the reviewed studies were neither randomized nor prospective, and the patient cohorts and follow-up periods were limited [4,16,17]. The Ho:YAG laser is safe and easy to handle and was reported to have a success rate of 83% in a series of 24 patients [4]. Under direct vision, a controlled incision and vaporization of the scar tissue can be performed [18]. The end-firing fiber of the holmium laser is light and flexible and, owing to its small caliber, can be used with both rigid and flexible endoscopes. Although the physical characteristics of this laser type are advantageous, with minimal tissue penetration and accurate targeting, safe conclusions about its efficacy and effectiveness cannot yet be drawn. None of the above approaches can be used safely in the presence of an AUS because of urethral narrowing. In our patient, passage to the area of the stenosis was difficult; thus, the 11F rigid ureteroscope combined with the Ho:YAG laser seemed the ideal treatment, since the flexible end-firing fiber of the laser eased access to the stricture. Furthermore, with the holmium laser we could control the firing pulses accurately with a foot switch, preventing damage to collateral healthy tissue [14], which is especially important in a patient with an AUS presenting with a recurrent UVA. In general, instrumentation of the urethra in such patients could lead to urethral erosion, subsequent AUS removal, and all the attendant repercussions for the patient. In an effort to minimize the danger of erosion, minimally invasive techniques are required. An interesting approach using a pediatric cystoscope was reported by Eltahawy et al. [4]; the small caliber of this scope (7.5F) is ideal for passing through a narrowed urethra. However, after our previous failure with the Ho:YAG laser, we decided to try a pediatric resectoscope (9F) (Figure 1). The intra-operative handling of the resectoscope was excellent, allowing a potent recanalization. Two important points should be mentioned: first, the movement of the resectoscope is passive; second, the resected chips, owing to their small size, are easily removed by irrigation saline via the working channel of the resectoscope. We advocate the presence of a pediatric scope in an adult urological department, despite its cost, because it can be life-saving in cases of urethral stenosis in general.

Conclusions

Patients who have undergone RP and AUS implantation and then develop a UVA contracture can be difficult to manage because of narrowing of the urethra.
Although the ideal treatment for recurrent UVA strictures remains debatable, our case shows that the urologist must be aware of several treatment options, especially when a plethora of instruments is available. Use of a rigid ureteroscope or a pediatric resectoscope is appealing because of their small caliber, but larger patient series and longer follow-up periods are essential in order to draw safe conclusions.

Consent

Written informed consent was obtained from the patient for publication of this manuscript and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Patient gender and rotator cuff surgery: are there differences in outcome?

Background

Although rotator cuff syndrome is common and extensively studied from the perspective of producing healed tendons, the influence of gender on patient-reported outcomes is less well examined. As activity and role demands may vary widely between men and women, clarity on whether gender is an important factor in outcome would enhance patient education and expectation management. Our purpose was to determine whether differences exist in patient-reported outcomes between men and women undergoing rotator cuff surgery.

Methods

One hundred forty-eight participants (76 W:72 M) aged 35-75 undergoing surgery for unilateral symptomatic rotator cuff syndrome were followed for 12 months after surgery. Demographics, surgical data, and Western Ontario Rotator Cuff (WORC) scores were collected. Surgery was performed by two fellowship-trained shoulder surgeons at a single site.

Results

There were no gender-based differences in overall WORC score or subcategory scores by 12 months post-op. Pain scores were similar at all time points in men and women. Women were more likely to have dominant-arm surgery and had smaller rotator cuff tears than men. Complication rates were low, and satisfaction was high in both groups.

Conclusion

Patient gender does not appear to exert an important effect on patient-reported rotator cuff outcomes in this prospective cohort. Further work examining other covariates, as well as the qualitative experience of going through rotator cuff repair, should provide greater insight into factors that influence patient-reported outcomes.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12891-021-04701-y.

Introduction

Approximately 180,000 Canadian adults will develop symptomatic rotator cuff syndrome (a symptomatic rotator cuff tear, with or without accompanying long biceps tendon or acromioclavicular pathology) each year, experiencing the pain and disability that results [1]. Sufferers classically experience pain and weakness with reaching activities and fragmented sleep due to night pain [2]. The negative impact of rotator cuff syndrome on daily life varies, but some sufferers are unable to work, some are unable to be caregivers, and many lose recreational activities important to their physical and emotional health. The patient population facing symptomatic rotator cuff repairs is diverse: roughly 30-80 years old, men and women, office workers and labourers, physically active and sedentary [3]. Much effort has focused on biological and technical factors to improve the rates of "successful" surgery. It is becoming increasingly clear, however, that addressing structural elements alone does not always produce the expected treatment outcomes from a patient perspective [4]. In addition to biological factors identified as important to structural outcomes, such as increasing age, current smoking, and diabetes [5], patient-specific factors such as gender and psychosocial diversity may also have a role to play in patient-reported outcomes of rotator cuff surgery. Despite gender being one of the most foundational demographic traits, it has rarely been explicitly researched as a primary factor in rotator cuff outcomes.
It is important to appreciate that there is a distinction between a person's gender and that person's sex. The World Health Organization differentiates them as follows: "Gender refers to the characteristics of women, men, girls and boys that are socially constructed. This includes norms, behaviours and roles associated with being a woman, man, girl or boy, as well as relationships with each other." Sex, then, "refers to the different biological and physiological characteristics of females, males and intersex persons, such as chromosomes, hormones and reproductive organs." [6]. Investigating potential influences of gender on rotator cuff outcomes is important because functioning of the upper extremities may impact gender-specific behaviors and activities of daily living differently for men and women. While more attention is being paid to this, the picture is incomplete. Previous studies on rotator cuff outcomes have not distinguished between gender and sex, and some have not provided any data regarding either factor. Others have conducted secondary analyses examining patient "gender" for its role in outcomes [7-15], with little commentary on the significance of any related findings. This may be understandable because gender was not the primary focus of any of those studies; on the other hand, it limits the ability to discuss the impact of patient gender on patient-reported outcomes of rotator cuff surgery. However, a few studies have provided insight into gender and outcomes. Razmjou et al. [16] and Gibson et al. [17] both found gender-based differences in lifestyle scores on the Western Ontario Rotator Cuff Index in pre-operative rotator cuff patients. A prospective cohort from Daniels et al. [18] showed that females had greater early VAS pain scores and poorer ASES scores until around 3 months post-operatively, and that by 1 year post-op, outcomes in males and females were similar. Recent work by Pauly et al. [19] examined the relationship between patient sex, age, rate of collagen-1 deposition, MRI healing at 1 year, and clinical scores at 1 year (ASES, WORC) and found no sex-based differences at 1 year. Continuing to flesh out the implications of gender-based differences in patient-reported outcomes may help in constructing future outcome studies and aid in more effective expectation management for patients. Although the focus of this work is on gender-based factors, physical factors such as patient height (which typically differs between males and females) could also influence the outcomes of a condition that affects overhead reaching activities. The purpose of this study is to use a prospective cohort to explicitly examine the role of patient gender in patient-reported outcomes of rotator cuff surgery. We hypothesized that gender would not be a major influence on patient-reported outcomes. The potential confounding influence of height on outcomes was also examined.

Participants

Eligible participants were aged 35-75 years, were undergoing elective surgery for unilateral partial- and/or full-thickness rotator cuff tendon tears confirmed by ultrasound or MRI, and were recruited by one of two fellowship-trained shoulder surgeons at a single center between February 2016 and June 2017.
Exclusion criteria were bilateral symptomatic rotator cuff disease, previous surgery on the operative shoulder, rotator cuff arthropathy, significant alternative sources of pain such as cervical spine disease or a chronic pain disorder (such as fibromyalgia or complex regional pain syndrome), and inability to complete questionnaires in English. Participants with known psychiatric diagnoses such as anxiety, depression, or related conditions were not excluded, nor were patients with potential secondary gain issues, such as Workers' Compensation board claims, litigation, or injury in motor vehicle collisions. Transgender participants were placed in the gender category that best matched their self-identified gender (this affected 1 participant). No participants were excluded for being unable to identify with either of the two gender groups, although no openly non-binary or gender-fluid patients were screened during this time. Informed consent was obtained from all participants prior to enrollment. Figure 1 traces the path of the cohort from the time of screening until final follow-up, accounting for where losses occurred. The project was reviewed and approved by the local Research Ethics Board (REB 15-1229) and was conducted according to the principles of the Declaration of Helsinki.

Variables and outcome measures

Demographic data collected included gender, hand dominance, medical comorbidities, current medications, and smoking history. The disease-specific patient-reported outcome measure used was the Western Ontario Rotator Cuff score (WORC). The WORC is a validated, self-reported measure of rotator cuff disease severity [20]. It consists of 21 questions in 5 domains, each with a 100 mm visual analogue scale (VAS); higher total scores indicate increased pain and functional disability. For the 2-week and 6-week visits, pain levels were determined by two 100 mm VAS scales drawn from the WORC-Pain domain, to reduce questionnaire burden and to allow direct tracking of the evolution of pain over time. Post-operatively, VAS pain scores were collected at 2 and 6 weeks, while WORC scores were collected at 12 weeks, 24 weeks, and 1 year. Rotator cuff tear characteristics were grouped based on intra-operative findings as follows: (1) any single partial-thickness tear in a single tendon, (2) partial-thickness tears in two or more tendons, (3) any full-thickness tear in a single tendon, (4) any full-thickness tear in a single tendon plus any partial-thickness tear in a second tendon, and (5) full-thickness tears in two or more tendons. Post-operative complications such as infection, nerve injury, excessive pain, excessive stiffness, failure of repair, and re-operation were also monitored for the duration of the study.

Surgical intervention and post-operative care

Two fellowship-trained shoulder surgeons performed all surgical procedures. All patients presented for rotator cuff repair, with additional procedures relating to the long biceps tendon or acromioclavicular joint at the discretion of the surgeon based on clinical presentation. All patients received pre-operative antibiotics and were most commonly treated with combined general and regional anesthesia, with a minority receiving regional only or general only. All procedures were conducted arthroscopically. Biceps tenodeses were conducted through arthroscopic or mini-open approaches depending on surgeon preference and the clinical situation.
All patients received a diagnostic arthroscopy, a subacromial bursectomy, and a rotator cuff assessment. Two patients in the cohort were found to have partial-thickness tears smaller than anticipated based on pre-operative imaging and therefore received only an arthroscopic bursectomy. The choice of anchors and type of repair were at the surgeon's discretion and influenced by tear size, morphology, and mobility. Augmentation was used in 1 patient. Partial repairs were undertaken in situations where a full repair was not possible; high-grade partial tears were typically completed and repaired. Post-operative care consisted of wearing a sling for 6 weeks, with active-assisted range of motion commenced at 2 weeks post-op. Two rehabilitation protocols were employed, one for small-medium tears and one for large-massive tears; the progressions were similar, but the large-massive protocol progressed slightly more slowly. These protocols are summarized in Additional file 1. Each patient selected a physiotherapist to guide rehabilitation according to the protocol provided by the surgeon, with adjustments to progression made at the surgeon's discretion based on patient progress and specific rehabilitation deficits. The large geographic area served by this center precluded having all patients see the same team of therapists. Post-operative imaging of the operative shoulder was at the surgeon's discretion based on clinical indication and was not conducted routinely, reflecting real-world clinical practice in the environment in which this study was conducted.

Sample size and data analysis

Our primary endpoint was the difference in WORC score at 12-month follow-up. Using the MCID and available psychometric data on the WORC, we determined that we required 35 men and 35 women to complete 12-month follow-up in order to have 80% power to detect a difference of 11.5% (the MCID). Since additional subgroup analysis was desired, the sample was increased to allow detection of medium effect sizes in such analyses: for example, n = 134 would allow 80% power to detect f² = 0.1 in a multiple linear regression analysis with 5 variables, while n = 138 would allow a 2-group, 4-time-point repeated-measures ANOVA to detect an effect size of 0.1 at 80% power. A practice audit revealed that a loss to follow-up of up to 30% would need to be planned for; with the aid of a dedicated research assistant, it was felt this could be reduced to about 25%. Thus, we aimed to recruit 92 men and 92 women for the cohort. Descriptive statistical analysis was conducted with t-tests (for example, age, height, BMI) and chi-square tests (for example, occupation, comorbidities, tear pattern or size) as appropriate. Analysis of the patients not completing the study was performed with one-way ANOVA (age), the Kruskal-Wallis H-test (WORC score), and Fisher exact tests (remaining characteristics). The Pearson product-moment correlation between height and WORC score was also computed. Repeated-measures ANOVA was used with gender and time as inputs; 20 of 148 patients had missing data points between time 0 and 12 months and so were excluded from the repeated-measures analyses. Normality of the data distribution for the repeated-measures ANOVA was confirmed through a histogram of the residuals demonstrating a normal distribution. GraphPad (San Diego, CA) and R (version 3.6.3, 2020-02-29, R Foundation for Statistical Computing, Vienna, Austria) were used for the analyses.
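As a rough check of the regression power statement above, the following short Python sketch (our own illustration; the authors used GraphPad and R) evaluates the power of the overall F-test for a 5-predictor regression at n = 134 and Cohen's f² = 0.1, directly from the noncentral F distribution. The significance level of 0.05 is our assumption, as it is not stated in the text.

from scipy import stats

n, k, f2, alpha = 134, 5, 0.10, 0.05                 # alpha = 0.05 is assumed
df_num, df_denom = k, n - k - 1                      # 5 and 128
nc = f2 * (df_num + df_denom + 1)                    # noncentrality, Cohen's convention
f_crit = stats.f.ppf(1 - alpha, df_num, df_denom)    # critical F under the null
power = stats.ncf.sf(f_crit, df_num, df_denom, nc)   # P(reject | f^2 = 0.1)
print(round(power, 2))                               # comes out near the stated 0.80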
Statistical advice was sought from affiliated biostatisticians prior to the commencement of the study for sample size determination (including the decisions about scaling the sample size as described) and for the higher-level analyses at the conclusion of the study (including the repeated-measures ANOVA and discussion of the normality of the residuals).

Demographics and characteristics of the women and men

A total of 148 patients completed 12 months of follow-up (Fig. 1). Table 1 shows the relevant demographics of this cohort. Of note, the women were more likely to carry a past diagnosis of a mood disorder, and women were much more likely to be presenting for surgery on their dominant arm compared with men. There were similar types of occupation in both groups, and similar rates of smoking and diabetes. No differences were noted between those withdrawing or lost to follow-up and those completing the study. The cohort represented a mixture of acute, acute-on-chronic, and chronic tears; for those with a clear onset of symptoms or injury (102 participants), the median time from symptoms to surgery was 15.5 months (range 1-112 months). There were no differences between those who did not complete the 12 months of follow-up and those who did with respect to age (p = 0.57), baseline WORC score (p = 0.72), secondary gain concerns (p = 0.65), tear size (p = 0.35), tear pattern (p = 0.48), smoking status (p = 0.22), gender (p = 0.46), dominant arm operated (p = 1.0), diabetes mellitus (p = 0.32), presence of a mood disorder (p = 0.66), or occupation type (p = 0.78). Table 2 shows the intra-operative findings. All tear patterns were represented, with superior and anterosuperior the most commonly found. Women tended to have smaller tears than men in this cohort. "Complete in 1 of Multiple" refers to a situation in which a large to massive posterosuperior tear allowed only a portion of the infraspinatus to be repaired rather than, for example, all of the infraspinatus but none of the supraspinatus. A higher rate of partial repair was observed in men. Figure 2 shows the comparison of WORC scores at baseline and at 12 months for men and women: both groups improved substantially, but there were no differences between men and women at either time 0 or 12 months. Figure 3 shows the progression of pain scores over time for both genders; this VAS was a summed total of reported sharp pain and dull pain taken from the WORC pain section. Again, both genders showed a substantial improvement in pain over the course of the post-operative year, with no difference between groups at any time point.

Patient-reported rotator cuff outcomes

The WORC score is divided into 5 assessed domains. No differences between men and women were noted.

Relationship between WORC score and tear pattern and size

The influence of intra-operative tear size on WORC score was also examined. Tear shape (anterior, anterosuperior, superior, posterosuperior, and massive) was not associated with significant differences in WORC score at baseline (p = 0.84) or at 12 months post-operative (p = 0.48). Tear size was similarly not associated with significant differences at baseline (p = 0.98) or at 12 months post-operatively (p = 0.82).

Relationship between patient height and WORC score

Examination of the relationship between height and WORC score revealed that, although women were significantly shorter than men in this cohort (Table 1), patient height did not have an important influence on WORC score (R = 0.035).
Gender, satisfaction, and patient feedback

At the 12-month follow-up, patients were asked three yes/no questions about their experience. First, were they satisfied with the results of their rotator cuff surgery (W: 93% yes, M: 91% yes)? Second, if they had known at the time of electing for surgery what they know now, would they have the surgery again (W: 92% yes, M: 98% yes)? Third, if a friend or family member had a similar shoulder problem, would they recommend surgery to them (W: 97% yes, M: 100% yes)? No significant differences were noted between men and women for any of these questions.

Complications

The complication rate was low for the cohort. There were no deep infections. Two patients experienced neurological symptoms in the lower arm, one of which had partially improved at 1 year. One patient underwent revision surgery for a failed biceps tenodesis. Two underwent reoperation with a non-study surgeon to receive a superior capsular reconstruction. A further 5 patients had a documented repair failure, most of whom underwent revision surgery; one retear had a documented fall before 12 weeks post-op. Three patients received corticosteroid injection therapy during the year after surgery.

Discussion

This study demonstrated no significant difference in patient-reported outcomes of rotator cuff surgery between men and women 1 year after surgery. It also demonstrated no significant relationship between tear pattern or tear size and patient-reported outcome at 1 year following surgery. Women were more likely to be undergoing surgery on their dominant arm, more likely to have a smaller tear, and more likely to have had a full rotator cuff repair than men. There were more women with litigation related to their shoulder problem than men. Both women and men experienced statistically and clinically significant improvement in WORC score from pre-op to final follow-up.

Patient height and WORC score

Because people of a wide range of heights must negotiate standardized environments at home and in the workplace, we examined the relationship between patient height and WORC score; height is not evenly distributed across genders. Our hypothesis was that since persons of smaller stature likely spend more of their activity with their arms at or above shoulder height, their reported outcomes might reflect greater individual activity limitations related to rotator cuff disease even after surgery. This did not prove to be the case here.

Gender and outcome

The existing literature examining the relationship between gender and outcome has mainly dealt with gender as a subanalysis, with mixed findings. Table 3 provides a summary of the relevant literature commenting on the outcome of rotator cuff surgery in female patients derived from secondary analyses. Despite the liberal use of the term "gender" in these studies, the authors' intent was more consistent with "sex" in all but the Razmjou study [16]. Comparison between studies is challenging because of the diversity of outcome measures used, but the trend was toward poorer outcomes in female patients. One work on gender and rotator cuff disease closely relevant to the current study is from Razmjou et al. in 2006 [16]. They assessed 279 patients (108 women) undergoing rotator cuff surgery and noted differences in the prevalence of emotional disturbance in women (using the WORC-Emotions scale); structural pathology was also distributed differently between younger men and women, a difference that faded with age.
They also noted a trend toward gender differences on the WORC-Lifestyle scale; this difference was also noted by Gibson et al. in a more recent investigation [17]. The Razmjou paper proposed that gender-based role activities (such as different grooming and dressing practices) may account for this difference [16]. In a subsequent report from Razmjou et al., women were found to have poorer pre- and post-op WORC scores than the men in the cohort [21]. One difference between the cohorts was that patients undergoing decompression-only as the intended procedure were included (whereas our cohort was intended, from enrollment, to include a rotator cuff repair). This may be important, as decompression surgery is not clinically significantly better than non-operative treatment [22] and may therefore perform differently from a rotator cuff repair. More women in their 170-patient cohort had decompression only. Their cohort had similar tear sizes between genders, whereas women in this cohort tended to have tears confined to the supraspinatus. Their post-operative outcomes were collected at 6 months after surgery, whereas this cohort was followed for 1 year. In contrast to the Razmjou cohort, this cohort showed no gender-based differences in the WORC total score or subscores at the 1-year follow-up. It is not possible to directly compare gender-based satisfaction between this cohort and the Razmjou cohort [21], as they approached that question differently. Further work will delve into potential explanations for these differences. A second prospective cohort, followed by Daniels et al. [18], included 283 patients (130 F:153 M) for 1 year, using the ASES score as the primary disease-specific outcome measure. They found differences between males and females at baseline and for the first 3 months after rotator cuff repair, which disappeared by 1 year. Our cohort affirms their finding of similar 1-year results in both genders; it is possible that their larger sample size allowed a smaller effect size to reach statistical significance. In our cohort, small differences in VAS pain scores could be seen, but these were neither statistically nor clinically significant. The literature on patient sex and sensitivity to various clinical and experimental pain modalities shows mixed results but tends to favor females being more sensitive to painful stimuli, making this variation plausible [23]. In view of this, some questions arise: do women have more symptoms for a given tear size than men, or could it be that tear size is not well correlated with patient-reported symptoms? We looked at tear size by number of tendons involved and by tear pattern, and there were no differences in WORC score in either analysis. This is consistent with the work by Wylie et al., which suggested that tear size and symptoms may not cohere well [24].
Interestingly, although their work did show a relation between tear size and VAS function scales, this current study did not relate tendon involvement to WORC score. Differing measurement techniques and different instruments limit direct comparisons in this domain; this follow-up question is better suited to a case-control design than a cohort design. We also noted a striking difference in dominant-arm operations between the men and women: women were much more likely to be undergoing surgery on their dominant arm (86% vs 58%). Razmjou et al. [21] did not explicitly state the percentage of dominant arms operated by gender, but more women were right-handed and underwent right-sided surgery; again, direct comparison is challenging because of the different ways of presenting the data. This may again gesture toward gender-based differences in activity demands. Although Gibson et al. noted a small but significant gender difference in WORC-Lifestyle scores in the pre-operative analysis of this cohort [17], that difference was no longer statistically or clinically significant at 1-year follow-up. It would be interesting to determine whether this represents a consistent return of function to baseline or pre-morbid activity, or a long-term acceptance and adaptation of lifestyle activities as part of a holistic post-operative recovery. Whether meaningful differences in outcomes of rotator cuff surgery depend on patient gender is likely subject to one of two realities. First, as this study and others have shown, no significant differences exist at a minimum 1-year follow-up, and this is a true finding. Alternatively, our clinical tools are not well configured to detect gender-based differences in patient-reported outcomes. Furtado et al. reflect on the short-WORC and the variable applicability of items such as dressing/undressing and hair styling across genders [25]. Whether enough gender difference exists to merit rethinking our patient-reported outcome measures is beyond the scope of this study, and assuming no important differences exist may be premature.

Sex, gender, and the future of outcomes research

As our understanding of sex and gender becomes more developed and substantially more complex, it is worth considering how outcomes research should approach this important issue. The Canadian Institutes of Health Research [26] actively encourage consideration of sex and gender in research design, and outcome measures to assess gender now exist. The intent of this work was explicitly to approach outcomes based on gender, not sex. An argument can be made for approaching patient-reported outcomes on a gender basis, given that they quantify lived experience more than a physiologic or structural outcome. However, this is a blunt distinction that provides little guidance on how to decide clearly and consistently between gender and sex, nor does it help with adequate inclusion of participants who do not identify as man or woman. It is hoped that, with time, some clarity may be added to this complex discussion; in the meantime, a step forward is to be conscious that sex and gender are not interchangeable terms and to address this deliberately in study design.

Limitations

This study was appropriately powered for its primary endpoint but may be underpowered for the subanalyses. There are several other limitations. A pragmatic approach was taken to make this study reflect clinical practice as much as possible.
It is not feasible or practical to continue follow-up for patients past 1 year. There is emerging evidence to suggest that longer follow-up may not be fruitful [15]. Longer follow-up might reveal improved WORC scores beyond those reported, but whether those differences would be clinically meaningful enough to offset the potential exposure of more repair failures or retears is unclear. Second, it is not feasible to obtain timely pre-operative MRI scans on all patients prior to rotator cuff surgery in the setting of this study. It is also not feasible to re-image all post-operative patients with MRI. While post-operative imaging was at surgeon discretion based on clinical indications, the final status of the repairs at 12 months has not been confirmed across all participants. Repair status was not a primary outcome of this study, but it does limit our ability to comment on asymptomatic or mildly symptomatic retears. Third, we elected to use categories rather than measurements of tear size. Because of the mixed imaging modalities pre-operatively, tendon involvement was obtained from the operative record. While this reduces measurement errors between assessors and avoids classification errors based on highly variable patient and tendon footprint size, it does complicate our ability to compare directly with studies where tear length/area is reported in detail. Amongst existing classifications of rotator cuff tear size, some focus more on the number/location of involved tendons than on absolute linear size. For example, Wylie et al. [24] used both quantitative and qualitative data for tear size. Fourth, this was a single-center study. While this increases the consistency of surgical interventions, it could limit generalizability. Fifth, there was the potential for heterogeneity in post-operative rehabilitation amongst the cohort. Despite all patients being provided with the same protocols, we were not able to monitor all the physiotherapists that could have had contact during the time of the study. It is possible that this introduced a factor not quantified by the study. Finally, this cohort contains a high proportion of sedentary workers, which may limit the generalizability of these results to heavy laborers or athletes of either gender.

Conclusions
In this prospective cohort of 148 patients, we have shown that patient gender does not have an important effect on WORC scores 1 year after arthroscopic rotator cuff repair. In comparing the men and women in this study, we note that women were more likely to be having surgery on a dominant arm and tended to have smaller structural pathology than the participating men. Further work examining other covariates, as well as the qualitative experience of going through rotator cuff repair, should provide greater insight into the factors that influence patient-reported outcomes.
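The distinction drawn in the discussion between statistical and clinical significance can be made concrete with a small sketch. The following is a minimal, hypothetical illustration, not the authors' analysis: the simulated scores, group sizes, Welch t-test, and the MCID threshold are all placeholder assumptions (the study does not report which test or MCID it used for this comparison).

```python
# Minimal sketch (hypothetical, not the authors' analysis) of checking a
# between-gender difference in raw WORC scores (0-2100, lower is better)
# for both statistical and clinical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
worc_women = rng.normal(450, 300, 70)  # simulated 1-year raw WORC scores
worc_men = rng.normal(480, 300, 78)

t, p = stats.ttest_ind(worc_women, worc_men, equal_var=False)  # Welch t-test
diff = abs(worc_women.mean() - worc_men.mean())

MCID = 275.0  # placeholder minimal clinically important difference
print(f"mean difference = {diff:.0f} points, p = {p:.3f}")
print("statistically significant:", p < 0.05)   # a small p alone is not enough
print("clinically significant:", diff >= MCID)  # must also exceed the MCID
```

The point of the sketch is that the two checks are independent: with large samples a trivially small difference can reach p < 0.05 while remaining far below any clinically important threshold.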
Government support of small businesses in the 2020 crisis conditions from the market approach viewpoint

The paper offers a situational overview of small business as a driver of economic growth in Russia during the 2020 crisis and the main aspects of current state policy on market-based support of that sector. The analysis also highlights major barriers that have arisen lately, impeding the efficiency of governmental efforts to support small business. However, new aspects stimulating business development in a market economy emerge together with the obstacles.

Introduction
The current economic environment (complicated by the COVID-19 pandemic and affected by the 2020 crisis, also triggered by the restrictions of forced governmental responses) has definitely influenced activities in various sectors of the Russian economy. Let us review specifically small business as a growth driver, consider the governmental support steps, and reveal the barriers limiting these efforts, in particular those resulting directly from imperfections of the digital backbone facilitating their interaction. In accordance with the Forecast of the RF Ministry of Economic Development for the period until 2030, small business is indicated among the most promising drivers of economic growth, including the implementation of innovative products. In particular, the GDP share of small and medium business in Russia in 2019 was 20.3%, i.e. 1/5 of the total volume of produced goods, performed work, and rendered services. Besides the generation of products and services, the small business sector plays a socially important role in creating workplaces. In 2019, small businesses employed 25.6% (18.3 million) of the entire working population registered within the time frame. It should be noted as well that the small business sector, comprising self-employed entrepreneurs, micro businesses, and small businesses, accounts for the majority of people employed in the small and medium business sector. In recent years, small businesses have proven to be a sector open to innovations in various areas and intensively using digital technologies to their benefit in ongoing operations. During 2018-2019, the share of small businesses that have experienced innovation effects (implementation and application of innovative products) reached 36-38% of all entities in that economic sector, while in the medium and large business sector innovative changes affected 31-33% of all companies. The share of small businesses using digital technologies in at least one aspect was 67.5% as of January 1, 2020 (in particular, IT in business implies electronic workflow, accounting software, online point-of-sale terminals, etc.), which is significant beyond doubt. Thus, small business is an important sector of the Russian economy, including its digital environment. Therefore, examining the adverse effects on small business and analyzing the vectors of state policy and barriers thereto currently represent a relevant issue of the day.

Crisis conditions affecting small business
Two key peculiarities of the crisis conditions should be noted as matters directly affecting small business entities. First, the problem has a demand-based nature, i.e. manifestations of the crisis are characterized by a sharp drop of demand in various branches of the economy, including the falling demand for private goods registered since the beginning of the process. The second peculiarity is related to the location of occurrence.
Unlike the financial crisis of 2008, the current crisis unfolds directly in the real sector and therefore requires controversial steps necessary to restore the economy. Against that background, we observe a sharp short-term decrease in the Purchasing Managers' Index (PMI) among small businesses in Russia.

Analysis of small business indicators
However, a long-term study of the index allows us to estimate the depth of the slide in that sector of the economy. In this case, business activity falls significantly, to a level never observed throughout the last 10 years.
Fig. 3. Russian small business PMI
A drop in the volume of retail sales is observed in the small business sector. It should be noted that 52% of small businesses work specifically in retail. Thus, we may assume that a real crisis situation exists in the small business sector.
Fig. 4. Small business sales dynamics, %
In the analysis of the business situation within the economic system and the role of incentives for development, including innovation and the introduction of digital methods, an essential aspect is represented by the composite leading indicators, which are also exhibiting a drastic drop in the country. Thus, the current environment slows down the process of digital transformation of the economy and its participants. Meanwhile, the state has introduced several measures to support small business. First, credit and financial support is provided, including interest-free loans for current salaries and an opportunity to use discounted loan rates for business credits. Second, reporting for tax purposes is simplified: fewer tax forms are required, the periods for reporting are extended, and the number of tax checks and inspection visits is reduced, in addition to preferential rates for insurance payments. Benefits related to rental payments for state-owned and municipal property should also be mentioned. Finally, information support is organized, providing updates on the changes directly affecting the operations of companies. As an important note, the listed support efforts are targeted at, and available only to, enterprises of "affected" industries, making up just 23% of all small businesses, as their eligibility is also based on the revenue volume. The original list of affected industries covered 18% of small businesses; the list approved later included only 5% more. No doubt, the list of affected industries is quite limited, because the entire sector suffers from lowering business activity, the falling volume of retail sales, and decreasing demand. Thus, one cannot but note that the manifestations of state support have a controversial character, including accessibility for affected small business representatives. In particular, one may note that the policy ignores the specifics of small business operation, such as the absence of significant reserve funds and the direct dependence of revenues and expenses on the sales volume. Therefore, concessionary loans for salary payments for a few months and the extension of tax payment periods appear to be a "trap", because small businesses have no available reserve funds to use and cannot ensure the future accumulation of funds to cover the outstanding dues. Second, there is a considerable time gap between the approval of support vectors and efforts and the establishment of relevant eligibility requirements for the support recipients. As a result, some small businesses, anticipating further aggravation of the situation, minimized their expenses in advance, including salary payments.
Consequently, only a small part of the potential recipients was able to use the support. Finally, the support request procedure evidently lacks a clear algorithm that applicants could use and remains quite bureaucratic in character. In confirmation of the controversies revealed, let us present the structure of small businesses arranged according to their ability to procure the support, based on statistical data. Only 3.1% of companies in affected industries were able to receive at least one kind of benefit or support, which, with affected industries covering 23% of small businesses, corresponds to approximately 0.71% of the total number of small businesses. A significant number of requests was declined because of ineligibility under the existing requirements. Quite a lot of companies also decided to forfeit their right to apply for support, probably because they realized their ineligibility. Thus, there is a problem of a decreasing business confidence index among small businesses in the short term. However, the business confidence index has not been too high lately, so its decrease is not critical considering the general background.

Conclusions
Still, the decrease of the small business confidence index may be directly related to the entry barriers of state support impeding the efficiency of these efforts. In particular, institutional barriers stand out as obstacles manifested in high transaction costs related to three key aspects. First, this implies the costs of information search, because there is no efficient mechanism that can be used to receive the support benefits. Second, the charges include the costs related to the acquisition of special rights within the framework of state support, as the procedure takes quite a lot of time because of bureaucratic administration. Finally, the absence of a designated agent on the small business side should be noted. Small businesses do not always have highly skilled financial officers optimizing business processes and minimizing expenses. Thus, the typical staff has no motivation to search for and use the support benefits, while the owners in most cases lack the required competences. The first two issues are directly related to the problems of the digital system for interaction between small businesses and state authorities. Therefore, these barriers can be eliminated by taking digital and electronic interactions to a higher level. Among economic barriers, we should underline such aspects as the narrow focus of the governmental support efforts combined with the hardly comprehensible logic behind the list of affected industries. The second barrier is manifested in the lack of understanding of small business specifics in the Russian economy. The former aspect cannot be analyzed independently from the market mechanism acting directly as the foundation of the Russian economy; consequently, support of numerous companies during a crisis is hardly appropriate, because that would decrease the competitiveness of such recipients. It should also be noted that the current challenging environment vividly reflects the need for flexibility in business operations. Thus, the situation may encourage the discovery of new, more promising and competitive paths for small business development, including the implementation of IT methods and solutions.
Diurnal Gain and Nocturnal Reduction of Body Weight in Young Adult Rabbits: The Reverse of the Circadian Rhythm Observed in Rats and Mice

Citation: Kawamura S, Yamazoe H, Hosokawa Y (2020) Diurnal Gain and Nocturnal Reduction of Body Weight in Young Adult Rabbits: The Reverse of the Circadian Rhythm Observed in Rats and Mice. J Toxicol Cur Res 4: 016.

Abstract
Understanding circadian rhythms in experimental animals is important to comprehensively evaluate animal responses to chemical exposure and gain deep insight into the pharmacological and toxicological effects of chemical exposure. Animals may respond differently to chemical exposure at different time points because many bodily functions have daily rhythms. In rats and mice, major experimental animals used in toxicology studies, circadian changes in physiological parameters including body weight, food consumption and hormones have been reported. In rabbits, the other principal experimental animal in teratology, circadian rhythms of behavioural functions such as physical activity and food intake, but not body weight change, have been described. To better understand fundamental biological characteristics of rabbits, we measured body weight and food consumption of male and female rabbits of two strains in the morning and evening for several days, calculating diurnal and nocturnal body weight changes and food intake per hour during the interval. Rabbits as well as rats and mice ate more at night than during the day. Nevertheless, rabbits showed diurnal increase and nocturnal decrease of body weight. This is the reverse of the circadian change observed in rats and mice. There was no strain-specific difference in the circadian rhythms in body weight and food consumption in rabbits. Male and female rabbits showed a similar daily rhythm in body weight and food consumption. In conclusion, there was a remarkable species difference in circadian rhythm in body weight between rats and rabbits.

Introduction
Physiological responses to chemical exposure in animals may vary depending on the time of day of the exposure, since many functions at molecular, cellular, tissue and organism levels show daily rhythms. Understanding circadian rhythms in experimental animals is important to comprehensively evaluate animal responses to chemical exposure and gain deep insight into the pharmacological and toxicological effects of chemical exposure. Circadian changes in body weight and food consumption have been studied in rats along with other functions including urinary concentration [1], water consumption and spontaneous activity [2], taste preferences and fluid intake [3] and hypothalamic ATP [4], and in mice along with water consumption and spontaneous activity [5]. Daily behavioral, hormonal and neurochemical rhythms were also investigated in rats [6]. In rabbits, a principal experimental animal in teratology, daily rhythms of behavioral functions have been investigated, such as daily rhythms of locomotor activity, food and water intake, hard feces and urine excretion, hematological parameters, serotonin concentration in the brainstem, content and absorption of volatile fatty acids, visual evoked potential and intraocular pressure [7]. There is little information, however, about the circadian rhythm in body weight in rabbits. Our objective in this study was to determine how body weight changes diurnally and nocturnally relative to food intake and, using this information, better understand fundamental biological characteristics of rabbits.
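As a rough illustration of the interval calculations described in the abstract (morning and evening weighings, diurnal and nocturnal body weight changes, and food intake per hour), here is a minimal sketch with made-up numbers; the weighing times follow the 8:00-20:00 light cycle used in this study, but the weights and intakes are hypothetical.

```python
# Minimal sketch of the interval calculations: weigh at light-on (8:00) and
# light-off (20:00), then derive diurnal/nocturnal changes and hourly intake.
weights_g = {"08:00": 3120.0, "20:00": 3155.0, "08:00+1d": 3128.0}  # hypothetical
food_g = {"day": 58.0, "night": 74.0}  # grams eaten per 12-h interval, hypothetical

diurnal_change = weights_g["20:00"] - weights_g["08:00"]       # daytime change (g)
nocturnal_change = weights_g["08:00+1d"] - weights_g["20:00"]  # nighttime change (g)
intake_per_h = {k: v / 12.0 for k, v in food_g.items()}        # g per hour

print(f"diurnal change: {diurnal_change:+.1f} g, nocturnal: {nocturnal_change:+.1f} g")
print(f"intake: day {intake_per_h['day']:.2f} g/h, night {intake_per_h['night']:.2f} g/h")
print(f"night share of daily intake: {food_g['night'] / sum(food_g.values()):.0%}")
```

With these made-up numbers, the rabbit pattern described below would appear as a positive diurnal change and a negative nocturnal change despite the larger nighttime intake.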
Animals
Ten male and 48 female Japanese White (JW) and 14 male and 40 female New Zealand White (NZW) rabbits were purchased from Nisseiken Co., Ltd. (Tokyo, Japan) and Kitayama Labes Co., Ltd. (Nagano, Japan), respectively, and used at age 5 to 7 months. Twenty male and 46 female Sprague-Dawley (SD) rats were from Charles River Laboratories Japan, Inc. (Kanagawa, Japan) and used at age 9 to 10 weeks. Twenty-five female ICR mice were from Japan SLC, Inc. (Shizuoka, Japan) and used at age 9 weeks. The animals were individually housed in hanging wire aluminum cages in rooms where air was exchanged more than 10 times per hour. Temperature and humidity were maintained at 22-26°C and 45-65% for rats and mice and at 20-24°C and 45-65% for rabbits. Artificial light was maintained on a 12-h cycle (light on from 8:00 to 20:00). Pelleted diet and tap water were available ad libitum. JW and NZW rabbits were fed NRT-1 (Nisseiken Co., Ltd.) and LRC4 (Oriental Yeast Co., Ltd., Tokyo, Japan), respectively.

Rats and mice
Daily rhythms of body weight and food intake have been reported in rats [1][2][3][4] and in mice [5]. In both species, body weight was increased during the night and decreased during the day. Rats and mice ate more in the nighttime than the daytime. We confirmed these rhythms, as shown in figures 2-4. In young adult female rats, body weight changes during the night or the day were much larger than the increments of body weight (2-3 g) measured every 24 h at a fixed point in time, such as the peak (9:00) or trough (16:00) time (Figure 2a). Body weight increased by almost 9 g, nearly 4% of the body weight, during the night, while the nighttime increment of body weight was largely lost during the day (Figure 2b).
Female rats ate mostly at night and less than 0.2 g of the diet per hour during the daytime (Figure 2c). As shown in figure 3, males showed a pattern of daily changes in body weight and food intake similar to that of females. Body weight fluctuation was larger in males than in females. Like rats, female mice exhibited an increment of body weight at night and a decrement during the day, and greater food intake during the night than the day, though not to the extent that rats did (Figure 4). Based on these results, there was no sex-specific difference in circadian rhythms in body weight and food consumption in young adult rats and, also, no species difference in these rhythms between female rats and mice.

Rabbits
In contrast to rats and mice, female JW rabbits showed a diurnal increase and nocturnal decrease in body weight. Female rabbits ate all day long but consumed more of the diet at night than during the day (Figure 5). Although not as evident as in females, this inverse pattern of body weight change was also observed in male JW rabbits (Figures 6a and 6b). Food intake was greater at night in males when compared to females (Figure 6c). We investigated whether the daily rhythm of body weight observed in JW rabbits was also found in another strain, NZW rabbits. Diurnal gain and nocturnal reduction of body weight were also observed in male and female NZW rabbits. NZW rabbits showed greater food intake at night (Figures 7 and 8). The daytime increment and nighttime decrement were a small percentage of total body weight. However, as observed in rats and mice, body weight changes during the night or the day were larger than the overall change during a 24-h interval. In contrast to rats, which eat mostly at night, daytime food intake in rabbits was 60-90% of the nighttime amount. In conclusion, there were no strain-specific or sex-specific differences in the daily change of body weight and food consumption in rabbits. It is conceivable that, in general, rabbits show diurnal gain and nocturnal loss of body weight, the reverse of the rhythm observed in rats and mice.

Discussion
We conducted this study to investigate rabbit daily rhythms in body weight and food consumption in order to better understand fundamental biological characteristics of rabbits, the principal experimental animals used in teratological studies. In rats, both body weight and food intake increased at night and decreased during the day. The circadian rhythm in food intake in rabbits was similar to that observed in rats. However, the daily rhythm in body weight change in rabbits was the inverse of that observed in rats. While daytime food intake was decreased in rabbits, their daytime body weight was increased. It is well known that rats are active at night. The rabbit is also innately a nocturnally active animal [8]. The chronological pattern of activity in either animal may be generally similar, as shown by the similarity in their nocturnal patterns of food intake. Because the animals used were young adults, they may have been similar in life stage and may have reached a comparable stage of circadian rhythm development. Rabbits attain stable coupling to the light-dark cycle by 45 to 80 days after weaning (at 29 days postpartum), with behavioral functions (locomotor activity, feeding, drinking, micturition and hard feces excretion) occurring in the dark period [9]. In rats, a pattern of nocturnal weight gain reflective of food intake begins at 19 days postpartum, becoming increasingly pronounced thereafter [10].
Gene expression rhythms in the suprachiasmatic nucleus, known as the master clock, are responsive to photoperiod on postnatal day 10 [11]. Expression of Period 1, a core clock gene, in pineal, liver, thyroid, adrenal, and lung tissues peaks during the night by postnatal day 25 [12]. The profile of daily clock gene expression in the adrenal reaches maturity on postnatal day 14 in rats [13]. Consequently, the species difference in body weight change would not be due to the pattern of behavioural activity or degree of maturity. Rats show an estrous cycle as young adults. Individual body weight changes can be influenced by the estrous cycle [14]. Rats were weighed for five consecutive days, during which each rat could be in a state of estrus at least once. Nevertheless, circadian changes of body weight and food intake were comparable from one day to the next during the observation period. The group mean might nullify the impacts of individual estrous cycles. Alternatively, there was no large difference in body weight change among rats in different stages of the estrous cycle in this study. In contrast to ovulation in rats, ovulation in rabbits is initiated by copulatory stimulation and is not cyclical. Regardless of estrous cyclicity, there was no sex-specific difference in circadian change in body weight in either species. This fact suggests that the estrous cycle is not a key factor affecting the difference in daily body weight rhythm between rats and rabbits. Energy balance is the difference between energy intake and consumption. Energy expenditure is due to the basal metabolic rate, physical activity and diet-induced thermogenesis [15]. In rats, nocturnal gain and diurnal loss of body weight are due to hyperphagia at night and hypophagia during the day [16]. Caloric intake exceeds caloric expenditure at night. During the daytime, caloric intake is lower than the concomitant energy expenditure. The weight gain-weight loss cycle is associated with active lipogenesis during the nighttime and lipolysis during the daytime. As in previous investigations, this study showed that rats consumed food largely during the night and gained weight at night. While rabbits consumed a considerable amount of food during the daytime, food intake was elevated during the nighttime when compared to the daytime. This is consistent with a previous report [8]. Rabbits consumed 56% of daily food intake at night. Accordingly, it was unexpected that rabbits gained, rather than lost, weight during the day, when they ingested less food than they did at night. Interestingly, the pattern of behavioural functions observed in normal rabbits is similar to that in hypothalamically-lesioned rats, in which body weight is diurnally increased and nocturnally decreased and daytime and nighttime food intakes are comparable [17]. In addition, mice carrying a mutant Clock gene, a key circadian gene, showed an elevated daytime food intake, consuming nearly 50% of total daily intake, while remaining much less active during the day than at night [18]. The rabbit may be an interesting model to investigate relationships between feeding time, energy balance, body weight control and circadian rhythms. The present study demonstrated a diurnal increment in body weight in the rabbit, which is the reverse of that observed in the rat. While additional work is required to reach firmer conclusions, we found a remarkable species difference in the daily rhythm in body weight between rats and rabbits.
To elucidate the basis of this species difference, further investigation will be necessary.
Adenovirus vectors for high-efficiency gene transfer into mammalian cells

Characteristics such as versatility, stability and high-level expression make adenovirus vectors invaluable tools for the expression of transgenes in mammalian cells, for the development of recombinant viral vaccines and for delivery of therapeutic genes.

Adenoviruses are used extensively to deliver genes into mammalian cells, particularly where there is a requirement for high-level expression of transgene products in cultured cells, or for use as recombinant viral vaccines or in gene therapy (reviewed in Ref. 1). The boundaries between the latter two applications are somewhat blurred, as the use of viral vectors as vaccines (e.g. for immunotherapy of cancer) is not fundamentally different from their use in gene therapy. These viruses are particularly well suited for many applications for several reasons: their stability and ability to grow to high titres; their ease of manipulation and purification; and their ability to transduce many mammalian cell types from numerous species, including both dividing and nondividing cells in vitro and in vivo.

Vectorology
The adenovirion 2 is a nonenveloped icosahedral capsid of approximately 90 nm comprising only protein and DNA, the latter consisting of a linear double-stranded DNA of approximately 30-40 kb (Fig. 1a). DNA replication and virion assembly take place in the nucleus of infected cells, and the production of huge amounts of virions and virion products results in cell death and the release of several thousand infectious viruses per cell at the end of the replication cycle. There are many kinds of adenovirus vectors and many ways of constructing them 3. At one extreme are the nondefective vectors that retain all essential viral genes and have inserts of foreign DNA in nonessential regions of the genome, and at the other extreme are the vectors from which all viral genes have been deleted and substituted with foreign DNA (up to 36 kb) 4. The transcriptional organization of a typical adenovirus genome is illustrated in Fig. 1b. From the perspective of adenovectorology, the most important regions are the early regions 1 and 3 (E1 and E3). E3 is nonessential and can be deleted without interfering with the ability of the virus to replicate, and E1, although essential, can also be deleted, resulting in a defective virus that is propagated in E1-expressing cells such as 293 cells (Ad5-transformed human embryonic kidney cells). The most commonly used vectors are those containing deletions of E1 and E3, with inserts of foreign DNA in E1. Such vectors, which are generally referred to as first-generation (FG) vectors, are defective for replication in normal cells but can efficiently transduce most cells. FG vectors are particularly useful for gene transfer into cultured cells and for gene therapy applications that require transient gene expression. FG vectors are not suitable for long-term expression because they retain most viral genes and express them at low levels, resulting in an immune response against transduced cells in vivo. Currently, the best available adenovirus vectors for long-term expression in vivo are the fully deleted (FD) vectors, from which all viral genes have been removed 5.

Applications
FG vectors are easy to engineer, propagate and purify, and have numerous uses where efficient gene delivery and high-level expression are desired.
Thus, they are excellent research tools, and will be used increasingly as novel genes are discovered and their products become a subject for investigation. Because the vectors can deliver genes encoding antigens and express them at high levels in vivo in any mammalian species, they are excellent candidates as recombinant viral vaccines. Indeed, vectors capable of immunizing animals against rabies 6, herpes viruses 7, rotaviruses 8 and coronaviruses 9 have all been developed. FG vectors are particularly suited for use in cancer immunotherapy strategies because of the ability of the vector to transduce most cell types, including nondividing cells, and its ability to express transgene products to high levels. In these regimens, transient expression is preferred over long-term expression, and the inflammatory response and cytotoxic T lymphocyte (CTL) activity associated with administration of FG vectors may be advantageous. Several FG vectors have been produced that express a variety of cytokines and other immunomodulatory proteins 10,11. These have yielded encouraging results when tested in tumour models in animals and some have been used in clinical trials 12. FD vectors are technically more difficult to engineer, propagate and purify than FG vectors but have a much higher therapeutic index and give much longer expression in vivo 5. Thus, FD vectors may find use in 'classical' gene therapy such as enzyme replacement, where the desired outcome is permanent expression of the transgene product.

Concluding remarks
In summary, adenovirus vectors come in many forms and have great versatility and high efficacy when designed and used appropriately. They will play an increasingly important role as agents for gene transfer into mammalian cells.

A high-resolution view of NK-cell receptors: structure and function

In recent years, there has been an increasing interest in the role of natural killer (NK) cells in primary host defense and their connection with adaptive immunity. NK-cell recognition of pathogen-infected cells and tumors is based on the expression of multiple cell-surface receptors that bind either major histocompatibility complex (MHC) class I or non-MHC ligands and transduce either inhibitory or activating signals. MHC class I receptors normally inhibit NK-cell activation when engaged by self-MHC, while allowing effector responses to occur when class I molecules are downregulated by viruses or transformation. Other receptors recognize ligands that have yet to be defined and trigger lysis and cytokine production. The balance of all of these signals controls NK-cell activation.

Receptors for MHC class I molecules
Studies over the past ten years have led to the discovery of two families of MHC class I receptors. Immunoglobulin (Ig) superfamily (Ig SF) receptors include the human killer cell Ig-like receptor (KIR) and the human Ig-like transcript 2 (ILT2)/leukocyte inhibitory receptor 1 (LIR-1). KIR and ILT/LIR recognize groups of class I allotypes rather than individual MHC class I-peptide complexes. In particular, KIRs with two Ig-like domains (KIR2D) recognize groups of HLA-C allotypes, which differ at positions 77-80 of the α1 domain. ILT2/LIR-1 has a very broad specificity, binding to classical and non-classical class I allotypes as well as the viral-encoded class I-like molecule UL-18. All the genes encoding KIR and ILT/LIR receptors are clustered in a ~1 Mbp leukocyte receptor complex (LRC) on human chromosome 19 (19q13.4), which has been entirely sequenced (Michael J. Wilson and John Trowsdale, Cambridge University, Cambridge, UK).
Genomic analysis of the LRC in two distinct haplotypes, as well as analysis of KIR genes in the chimpanzee (Peter Parham, Stanford, CA, USA), indicates a rapid evolution of KIR in primates. Such evolution can only be partly explained by the functional link with the MHC, the products of which are ligands for some of the KIRs and the ILT molecules.

The interaction of KIRs with MHC class I
In a new and exciting development in the study of the interaction of KIRs with class I, Peter D. Sun (Rockville, MD, USA) presented the first crystal structure of a KIR (KIR2DL2) together with its ligand (HLA-Cw3) at 3 Å resolution.
Therapeutic preferences and factors determining the use of inhaled corticosteroids with long-acting β2-agonists in patients with asthma and chronic obstructive pulmonary disease

Introduction: Inhaled corticosteroids (ICS) and long-acting β2-agonists (LABA) are a part of standard therapy of bronchial asthma and chronic obstructive pulmonary disease (COPD). Aim: Assessment of the therapeutic preferences and factors determining the choice of polytherapy with ICS and LABA in patients with asthma and COPD in daily clinical practice. Material and methods: This multicentre, open-label, post-marketing observational survey was performed nation-wide with the participation of 245 doctors and 13,800 patients with asthma or COPD on polytherapy with ICS and LABA. The study questionnaire included two parts: one concerning doctors' preferences in the use of ICS and LABA and their prescription in patients, the other concerning the efficacy and tolerance of inhaled drugs during two consecutive visits. Results: The study doctors frequently declared a choice of polytherapy with formoterol and fluticasone in patients with asthma and COPD. The most important factors supporting the choice of ICS and LABA, declared by doctors, were safety and efficacy. ICS and LABA polytherapy with formoterol and fluticasone was used in 71.0% of patients with asthma and 81.4% with COPD. The most important factors explaining the choice of this drug combination were safety (75.3% and 72.5%, respectively) and efficacy (75.2% and 71.9%, respectively). Conclusions: Formoterol and fluticasone polytherapy is frequently chosen by Polish physicians in the treatment of asthma and COPD due to its high efficacy and safety. In accordance with doctors' declarations, in the study group this therapy was characterized by the highest effectiveness and the best tolerance.

Introduction
Chronic inflammation of the lower respiratory tract is involved in the pathogenesis of both asthma and chronic obstructive pulmonary disease (COPD) [1,2]. The long-term goals of asthma therapy are to obtain good control of symptoms and maintain a healthy level of activity, as well as to minimize the risk of exacerbations, airflow reduction and side effects [1]. The choice of drugs in the treatment of asthma depends on the degree of disease control. Currently there are 5 steps of treatment, reflecting its intensity. Drugs used in the treatment of asthma are prescribed either for permanent use, to control symptoms, or for immediate, temporary use in case of exacerbation. Drugs controlling the course of the disease include: inhaled corticosteroids (ICS), currently the most effective anti-inflammatory drugs, preferred in chronic asthma; antileukotrienes; long-acting β2-adrenergic agonists (LABA); theophylline in a sustained-release form; cromones; oral LABA, used exceptionally; anti-IgE antibodies (omalizumab); and corticosteroids applied systemically, primarily orally. Drugs used temporarily include: fast-acting inhaled β2-mimetics; corticosteroids applied systemically, primarily orally, optionally intravenously; anticholinergic drugs; theophylline in a short-acting form; and fast-acting oral β2-mimetics, used rarely, if the patient cannot take medications by inhalation. In asthma treatment, administration of drugs by inhalation is preferred because the drug reaches the airways directly and achieves the therapeutic concentration with a limited risk of systemic adverse effects.
ICS and LABA are recommended at the second and higher steps of treatment [1]. The goal of treatment in stable COPD is to decrease the severity of disease symptoms and improve exercise tolerance and health status, as well as to prevent disease progression, prevent and treat exacerbations and reduce mortality. A number of factors, including the perceived severity of dyspnoea, the frequency of exacerbations and the severity of airflow restriction through the bronchi, should be taken into account in making the therapeutic decision. Currently available drugs can improve bronchial patency, reduce dyspnoea and other symptoms and reduce the frequency of exacerbations. The therapeutic options include: bronchodilators (short-acting β2-mimetics, LABA, short-acting anticholinergics), ICS and phosphodiesterase 4 inhibitors [2,3]. It should be noted that polytherapy with ICS and LABA is more effective than monotherapy with either of these drugs [4,5]. There are no data on the therapeutic decisions of Polish doctors concerning the prescription of polytherapy with ICS and LABA in asthma and COPD, or on the factors influencing these decisions.

Aim
The aim of this multicentre, open-label, post-marketing observational survey was to assess therapeutic preferences in the use of polytherapy with ICS and LABA in patients with asthma or COPD in daily clinical practice. In addition, the efficacy and tolerance of the prescribed inhaled therapy were assessed.

Material and methods
This nation-wide, multicentre, open-label, post-marketing observational survey involved 245 doctors (11 general practitioners, 23 allergists and 211 pulmonologists) and 13,800 patients treated with ICS and LABA (7,416 diagnosed with asthma and 6,384 diagnosed with COPD) interviewed during two subsequent visits. The survey did not meet the criteria of a medical experiment and thus did not require Bioethics Committee approval. It was performed from January 2016 to December 2018. The survey was performed among doctors recruited by medical representatives, specialists in family or internal medicine, pulmonology or allergology, currently licensed to practice, who completed and signed the Application Form for the Study and mailed it to Europharma. The inclusion criteria for outpatients were: age of 18 years and over, diagnosis of asthma or COPD, and use of polytherapy with ICS and LABA. The exclusion criteria were the inability to obtain answers to the questions contained in the survey and refusal of the patient to participate. The participating physicians had a dual role in the survey. They answered the questions regarding their medical practice and filled out questionnaires for at least 20 patients who fulfilled the inclusion criteria during one visit resulting from a clinical need of the patient. The first part of the questionnaire included demographic data of the doctors (speciality, work experience, place of work) and data on their clinical practice (the frequency of use of different ICS and LABA polytherapies in patients with asthma or COPD and the factors affecting these decisions). The second part of the questionnaire, on the first visit, included patients' demographic data (gender, age, education level, place of residence and professional activity) and clinical data (primary diagnosis prompting the visit, duration of asthma or COPD, number of asthma and COPD exacerbations and related hospitalizations during the last 3 months, the degree of asthma and COPD control, recommended treatment regimen and occurrence of concomitant diseases).
On visit 2 (about 3 months after visit 1), control of disease symptoms was assessed. In addition, the patients' opinions on the effectiveness and tolerance of the pharmacotherapy consisting of ICS and LABA were assessed on the basis of a 4-point scale (for effectiveness: 1 - none, 2 - moderate, 3 - good, 4 - very good; for tolerance: 1 - difficult-to-accept discomfort, 2 - acceptable discomfort, 3 - good tolerance, 4 - very good tolerance).

Statistical analysis
Statistical analysis was performed with Statistica 12.0 software (TIBCO Software Inc., Palo Alto, CA, USA). Values of variables were presented as percentages and mean values with standard deviations (SD). Separate groups were compared using the χ² test and the χ² test for trend. A value of p < 0.05 was considered statistically significant; a minimal illustration of such a comparison is sketched below.

Doctors' therapeutic preferences
The study group of doctors (characteristics presented in Table 1) most frequently declared the use of formoterol with fluticasone in the polytherapy of asthma and COPD (81.1% and 82.0%, respectively). The most important factors declared by physicians as determining the choice of this polytherapy in the treatment of asthma and COPD were efficacy (88.2% and 82.4%, respectively) and safety (83.3% in both cases) (Table 2).

Factors determining the use of polytherapy with ICS and LABA in enrolled patients
The analysis included 7,416 patients with asthma and 6,384 with COPD (Table 3). Concomitant diseases occurred in 44.0% of patients with asthma and 84.6% with COPD, most often hypertension. The severity of asthma was assessed as controlled in 53.5%, partially controlled in 41.2% and uncontrolled in 5.3% of the study group. Exacerbation of the disease during the last 3 months was reported by 28.8% of patients (in 75.8% once, 21.6% twice and 2.6% more than twice). 5.2% of patients with asthma were hospitalized due to an exacerbation in the last 3 months. The severity of COPD was scored as category A in 3.8%, B in 35.8%, C in 40.6% and D in 19.8% of patients. Exacerbation of the disease in the last 3 months was reported by 44.9% of patients (in 71.1% once, 25.7% twice and 3.2% more than twice). 13.6% of patients with COPD were hospitalized due to an exacerbation in the last 3 months (Table 4). The most frequent ICS with LABA polytherapies in patients with asthma and COPD included: formoterol with fluticasone (71.1% and 81.4%, respectively), formoterol with budesonide (11.2% and 8.7%, respectively), formoterol with beclomethasone (8.0% and 4.6%, respectively), and salmeterol with fluticasone (7.5% and 9.2%, respectively) (Table 4). The most important factors determining the use of ICS and LABA polytherapy in the treatment of asthma as well as COPD were: safety, efficacy, the doctor's own experience with the use of the drug, convenience of use, and the impact of pharmacotherapy on the quality of life (Table 5).

The efficacy and tolerance of pharmacotherapy used
During the observation, 96.5% of the study group continued the treatment. The most frequent reason for discontinuation was the resolution of symptoms (25.4%). In 5.8% of the study group an exacerbation of the disease occurred (in 95.6% once), and 0.9% of patients were hospitalized due to an exacerbation of the disease (all once). During the observation, in patients with asthma, the proportion rating the efficacy of pharmacotherapy as very good increased significantly (from 64.6% to 79.7%, p < 0.001).
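As a minimal illustration of the χ² comparisons described under Statistical analysis, the sketch below rebuilds a 2×2 contingency table from the asthma percentages just reported (64.6% at visit 1 vs 79.7% at visit 2). The counts assume a notional n = 1000 per visit for readability, and the plain χ² test treats the two visits as independent samples; both are simplifying assumptions for illustration, not the study's actual data handling.

```python
# Rough illustration of a χ² comparison of "very good" efficacy ratings at
# visit 1 vs visit 2; counts are hypothetical, scaled from the reported
# asthma percentages (64.6% -> 79.7%) with a notional n = 1000 per visit.
from scipy.stats import chi2_contingency

n = 1000
table = [
    [646, n - 646],  # visit 1: very good, other ratings
    [797, n - 797],  # visit 2: very good, other ratings
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
print("significant at p < 0.05:", p < 0.05)
```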
On the first visit, the percentage of patients with asthma who assessed the efficacy of pharmacotherapy as very good was the highest among patients treated with formoterol and fluticasone (71.2%) and the lowest among patients treated with salmeterol and fluticasone (57.8%). During the observation, the efficacy of polytherapy with the most frequent combinations of ICS and LABA in asthma control increased significantly (Figures 1 A-D). The highest scores at the end of observation were shown for formoterol with budesonide (82.3%). Similarly, during the observation in all patients with COPD, the proportion rating the efficacy of pharmacotherapy as very good increased significantly (from 50.6% to 68.0%, p < 0.001). On the first visit, the percentage of patients with COPD who assessed the efficacy of pharmacotherapy as very good was the highest among patients treated with formoterol and fluticasone (56.3%) and the lowest among patients treated with formoterol and budesonide (34.9%). In the subgroup with COPD receiving polytherapy with formoterol and fluticasone, the proportion of very good efficacy ratings increased significantly (from 56.3% to 70.4%, p < 0.001), as it did with formoterol and budesonide (from 34.9% to 61.2%, p < 0.001); with salmeterol and fluticasone it rose from 42.9% to 57.6%. During the observation in all patients with asthma, the proportion rating the tolerance of pharmacotherapy as very good increased significantly (from 69.0% to 79.8%, p < 0.001). On the first visit, the percentage of patients with asthma who assessed the tolerance of pharmacotherapy as very good was the highest among patients treated with formoterol and fluticasone (62.3%) and the lowest among patients treated with formoterol and budesonide (39.2%). In the subgroup with asthma on polytherapy with formoterol and fluticasone, the proportion of very good tolerance ratings increased significantly (from 62.3% to 72.6%, p < 0.001), as it did with formoterol and budesonide (from 39.2% to 59.0%, p < 0.001) and with salmeterol and fluticasone (from 49.1% to 59.3%, p < 0.001). Similarly, during the observation in all patients with COPD, the proportion rating the tolerance of pharmacotherapy as very good increased significantly (from 56.7% to 69.8%, p < 0.001). On the first visit, the percentage of patients with COPD who assessed the tolerance of pharmacotherapy as very good was the highest among patients treated with formoterol and fluticasone (71.2%) and the lowest among patients treated with salmeterol and fluticasone (57.8%). In the subgroup with COPD on polytherapy with formoterol and fluticasone, a similar increase in very good tolerance was observed.

Discussion
The presented study was the first large survey, performed in the Polish population, to assess therapeutic preferences in the use of ICS and LABA polytherapy and its efficacy and tolerance in patients with asthma or COPD in daily clinical practice. It should also be noted that, so far, such a study has not been conducted in other populations. Over 80% of the doctors who participated in this study declared that the most common ICS and LABA regimen in the treatment of asthma and COPD is a combination of formoterol with fluticasone. As the most important factors influencing the choice of this therapy for both asthma and COPD, the doctors most often reported its efficacy and safety. The preferences and factors influencing the choice of medications declared by the doctors were reflected in the individual decisions made for the patients enrolled in the study.
More than 70% of the patients with asthma included in the study were using formoterol with fluticasone. The most important factors determining the choice of this therapy were efficacy and safety. The efficacy of polytherapy with formoterol and fluticasone was confirmed by the results obtained in our study, which showed a significant increase in the percentage of patients assessing it as very good. This percentage increased regardless of the therapeutic regimen used. However, it should be noted that on both visits the percentage of patients who assessed the efficacy of treatment as very good was the highest among patients receiving polytherapy containing formoterol and fluticasone. During the observation, the percentage of patients who assessed the tolerance of pharmacotherapy as very good also increased significantly, and this increase was likewise independent of the regimen used. Moreover, on both visits, the percentage of patients who assessed the tolerance of treatment as very good was the highest among patients treated with a combination of formoterol and fluticasone. The greater efficacy and safety of polytherapy with formoterol and fluticasone than with formoterol and budesonide in patients with asthma, as shown in our study, was also confirmed by the results of a randomized trial [6]. In addition, the efficacy and safety of combined therapy with formoterol and fluticasone were comparable to those found in clinical trials and also in a non-interventional post-authorization observational study conducted in a group of over 2,500 patients with asthma [7]. In another non-interventional, post-marketing observational study, similarly to the presented survey, greater efficacy of polytherapy with formoterol and fluticasone was found compared to therapy with formoterol and budesonide, despite the lack of differences in the patients' compliance with the recommendations [8]. It should also be emphasized that in a cost-effect assessment study, the change to or initiation of combination therapy with formoterol and fluticasone was associated with a better cost-effect ratio than treatment with a combination of formoterol and salbutamol [9]. Similarly to the asthma group, the doctors' declarations were also reflected in the therapeutic decisions taken for patients with COPD. Over 80% of patients with COPD were treated with a combination of formoterol and fluticasone. The factors influencing the selection of a therapeutic regimen indicated as the most important were efficacy and safety. In accordance with these factors, during the observation a significantly increased percentage of patients with COPD assessed the efficacy of the treatment as very good, regardless of the therapeutic regimen used. However, it should be noted that on both visits the percentage of patients assessing the effectiveness of the treatment as very good was the highest among those treated with a combination of formoterol and fluticasone. The higher efficacy of polytherapy with fluticasone and formoterol than of monotherapy with formoterol in the treatment of COPD was confirmed in a 12-month randomized trial involving nearly 2,000 patients [10].
The efficacy and safety of this therapy have also been confirmed by the results of a 52-week, randomized, double-blind phase III trial (EFFECT: Efficacy of Fluticasone propionate/FormotErol in COPD Treatment) [11]. In summary, our multicentre, open-label, post-marketing observational study has shown that polytherapy containing formoterol and fluticasone is preferentially used in the treatment of asthma and COPD by Polish general practitioners and specialists due to its efficacy and safety. The preferences of doctors were reflected in the prescription pattern shown in the study group. The main limitations of our survey were the small number of participating general practitioners, the subjective assessment of treatment efficacy and the differing durations of use of the current pharmacotherapy.

Conclusions
Formoterol and fluticasone polytherapy is frequently chosen in the treatment of asthma and COPD by Polish physicians due to its high efficacy and safety. In accordance with the doctors' declarations, in the study group this therapy was characterized by the highest effectiveness and the best tolerance.
Characteristics and progression of patients with frontotemporal dementia in a regional memory clinic network

Due to heterogeneous clinical presentation, difficult differential diagnosis with Alzheimer's disease (AD) and psychiatric disorders, and evolving clinical criteria, the epidemiology and natural history of frontotemporal lobar degeneration (FTD) remain elusive. In order to better characterize FTD patients, we relied on the database of a regional memory clinic network with standardized diagnostic procedures and chose AD patients as a comparator. Patients who were first referred to our network between January 2010 and December 2016 and whose last clinical diagnosis was degenerative or vascular dementia were included. Comparisons were conducted between FTD and AD as well as between the different FTD syndromes, divided into language variants (lvFTD), the behavioral variant (bvFTD), and FTD with primarily motor symptoms (mFTD). Cognitive progression was estimated with the yearly decline in the Mini Mental State Examination (MMSE). Among the patients who were referred to our network in the 6-year time span, 690 were ultimately diagnosed with FTD and 18,831 with AD. Patients with FTD syndromes represented 2.6% of all-cause dementias. The age-standardized incidence was 2.90 per 100,000 person-years, and incidence peaked between 75 and 79 years. Compared to AD, patients with FTD syndromes had a longer referral delay and delay to diagnosis. Patients with FTD syndromes had a higher MMSE score than AD patients at first referral, while their progression was similar. mFTD patients had the shortest survival, while survival in bvFTD, lvFTD, and AD did not significantly differ. FTD patients, especially those with the behavioral variant, received more antidepressants, anxiolytics, and antipsychotics than AD patients. FTD syndromes differ from AD in characteristics at baseline, progression rate, and treatment. Despite broad use of the new diagnostic criteria in an organized memory clinic network, FTD syndromes take longer to diagnose and account for a low proportion of dementia cases, suggesting persistent underdiagnosis. Congruent with recent publications, the late peak of incidence warns against considering FTD as being exclusively a young-onset dementia.

Background
Frontotemporal lobar degeneration (FTD) is the second leading cause of early-onset dementia after Alzheimer's disease (AD) [1]. FTD is characterized by changes in behavior and/or language due to the relatively selective atrophy of the frontal and temporal lobes [2]. In the past decade, the nosology of FTD has evolved outstandingly, prompting changes in diagnostic criteria. There are three main clinical presentations of FTD. The behavioral variant of FTD (bvFTD) is defined by an early and prominent behavioral and dysexecutive syndrome, whose core symptoms were revised by Rascovsky et al. in 2011 [3]. The two language variants of FTD (lvFTD) include the semantic and non-fluent presentations of primary progressive aphasia (PPA), also defined by updated clinical criteria [4]. In addition, FTD can initially present with motor symptoms (mFTD) such as features of atypical parkinsonism (progressive supranuclear palsy [PSP] and corticobasal syndrome [CBS]) [5]. Although it is an umbrella term underlain by more than 20 different possible pathologies [6], FTD stands as a unifying entity because of the lack of correlations between FTD syndromes and pathology [7].
bvFTD, for example, can be underlain by tau, TDP-43 or rarer pathologies, and on the contrary, one single pathology, such as PSP, can manifest with several clinical syndromes [6]. One exception to the unpredictability of the underlying pathology is the identification of a causal genetic mutation. Patients with FTD syndromes have a positive family history in 26-31% [8], highlighting the importance of genetics. The most common FTD mutations, all linked to a specific pathology, are found on MAPT, PGRN, and C9ORF72 genes [8]. FTD prevalence was estimated between 0.01-4.61 per 1000 person and the incidence between 0.01-2.5 per 1000 person/year [9]. In recent dementia cohorts, FTD cases have been found to account for 1.6 to 7% of dementia cases [10,11]. However, those figures need to be considered with caution. First, FTD is still underdiagnosed: neuropathological studies performed in communities where brain donation reaches a high level of acceptance show that as much as 5-9% of the elderly population with or without cognitive impairment at death has FTD pathology [12,13]. It has been previously estimated that about 40% of FTD are misdiagnosed [14] and time to diagnosis is longer than for other dementias [15,16]. Second, with some exceptions [17], most past estimations have been done using the previous Lund and Manchester [18] or Neary criteria [19]. Yet, the revised clinical criteria and the addition of new syndromes to the FTD spectrum outdate previous publications. Third, advances in neuropsychology, neuroimaging, and cerebrospinal fluid (CSF) biomarkers and genetics have improved FTD diagnosis in challenging situations such as psychiatric, amnestic, or late-onset presentations of the disease [20][21][22]. However, beyond research purposes, whether improving FTD diagnosis at the population level would stand a cost-benefit analysis is a subject that should be addressed open-mindedly. Indeed, one could argue that differential dementia diagnosis workup is a costly venture [23] that can be questioned in the absence of disease-modifying treatments. The demonstration that FTD diagnosis is associated with different prognoses and therapeutic approaches in routine care would advocate against a symptomatic approach of dementia. Thus, data sharing on current FTD diagnoses and management is needed. We undertook the present study in a large regional memory clinic (MC) network to get a better overview of the incidence, characteristics and natural history of FTD syndromes defined using recent diagnostic criteria. The objectives were to study the characteristics of the FTD patients referred to the network from January 2010 to December 2016, including age at onset, time to diagnosis, clinical presentations, cognitive progression, and treatment. Patient selection Founded in 1993, the Méotis network is the first French MC network, involving 30 MCs in the French Nord and Pas-de-Calais departments, sharing data within a common patient database since 1997. Meotis database reached a caseload of over 104,000 patients in 2018, representing more than 350,000 visits [24]. In all MCs, a multidisciplinary assessment is performed by neurologists, geriatricians, psychologists, dedicated nurses, and social workers; whenever necessary, patients can be assessed by psychiatrists, speech therapists, and dedicated nurses. Diagnostic work-up is harmonized throughout the network, and standardized data on patient characteristics and healthcare activity are systematically collected. 
All harmonized data are monitored and computerized by a data manager in the tertiary-referral Memory Resources and Research Center (MRRC) of the Lille University Hospital. We included patients that were referred for the first time to one of the network's MC from January 2010 to December 2016 and whose last clinical diagnosis during the follow-up was FTD, AD, or other causes of dementia. We first extracted all dementia cases to calculate the respective proportions of AD and FTD syndromes. Then, we focused on the subpopulation of AD and FTD syndromes for systematic comparisons. Since AD is the dominant cause of dementia, AD patients were chosen as a comparator. Data extraction was performed on September 2019, 33 months after the end of the inclusion period. For the few patients that received a diagnosis of bvFTD and lvFTD before the new criteria were published and were not followed up beyond 2011, we checked retrospectively that they fulfilled the revised diagnostic criteria. The bvFTD group comprised pure bvFTD [3] and a minority of patients with associated amyotrophic lateral sclerosis. The lvFTD group included a semantic and non-fluent agrammatic PPA [4] as well as rarer PPA variant such as apraxia of speech. The mFTD group comprised the PSP [25] and CBS [26] patients. Patients with overt motor neurone disease at presentation are usually not referred to our network because of a specialized regional amyotrophic lateral sclerosis care pathway. Data collection We extracted the following data from the Méotis database: sex, age at first referral, referral delay, age at diagnosis, symptom onset, and diagnostic procedures. We collected the Mini Mental State Examination (MMSE) [27] and the short 4-item Instrumental Activities of Daily Living (IADL-4) [28] scores at first referral. In this article, IADLs score was calculated by summing up the number of maintained activities (ranging from 0 (full dependence) to 4 (complete autonomy)). The referral delay was defined as the interval, expressed in months, between symptoms onset (declared by the patient and caregiver) and first referral to the network. The clinical follow-up was defined as the interval, expressed in years, between the first and the last visit within the network. The survival was defined as the interval, expressed in years, between disease onset and death. Drug treatment was recorded at every visit. A patient was considered under a specific drug treatment if it was recorded at least once during follow-up. Only the last clinical diagnosis was considered in this study because of its higher accuracy. The last diagnosis was the one made or kept after all diagnostic procedures and retained at follow-up. Diagnosis wandering was defined as the time from first referral to the last retained clinical diagnosis. The date of death was retrieved from the National Institute of Statistics and Economic Studies (French: Institut national de la statistique et des études économiques) national death database thanks to the MatchID tool (https://deces.matchid.io/) on September 2020. The datasets considered in the current study are available from the corresponding author on reasonable request. The database was declared to the ad hoc commission (Commission Nationale Informatique et Libertés (CNIL)) protecting personal data (#2146189 V1). Privacy and confidentiality rules were respected. Statistical analysis Quantitative variables were described by the mean and standard deviation if the distribution was normal or by the median and interquartile range otherwise. 
Qualitative variables were described by the numbers and percentages of each modality. Diagnostic subtypes (bvFTD, lvFTD, mFTD, and AD) were described and compared across all parameters. Quantitative variables were analyzed by an ANOVA or the Kruskal-Wallis non-parametric equivalent. Qualitative variables were compared by an exact chi-square or Fisher's test (in the case of an expected count below 5). A Bonferroni correction was applied to post-hoc comparisons of the FTD subgroups with respect to the AD group. The effect size was calculated as the standardized mean difference (for quantitative variables) and the Cramer's V coefficient (for qualitative variables). A mixed linear model analyzed the evolution of the MMSE over time. The factors introduced into the model were time, diagnosis, and the interaction between diagnosis and time. Incidence rates were calculated as the number of incident cases divided by the total number of person-years (py) for the catchment area over the 7 years. All rates were calculated using the reference population of the corresponding geographic area, estimated by the French National Institute of Demographic Research (INED) in January 2015, as the population at risk. Therefore, no variation was assumed during the 7 years of the study period. Age-standardized rates were calculated using the Revised European Standard Population 2013 (ESP2013). Results were presented in cases per 100,000 person-years. Concerning mortality, the median survival time after diagnosis was calculated for each diagnostic subtype, survival was estimated using the Kaplan-Meier model, and the log-rank test was used to test for differences in survival curves according to diagnostic subtype. Hazard ratios (HRs) were also adjusted for age and sex using Cox regression. The analyses were performed using SAS software (version 9.4). Study population Data from 26,525 demented patients followed in the network and fulfilling inclusion criteria were extracted. Among them, 2369 had first been seen at the MRRC (Lille tertiary-referral MC) and 24,156 first at one of the MCs belonging to the network. During the 7 years of follow-up, 690 incident cases of FTD syndromes were identified, giving a crude incidence rate of 2.42 per 100,000 person-years (Table 1). The FTD incidence across age groups at diagnosis reached its peak in the 75-to-79-year-old group, with an incidence rate of 14.95 per 100,000 person-years. The age-standardized incidence rate was 2.90 per 100,000 person-years. Characteristics of patients with FTD syndromes The sex ratio significantly differed between AD and FTD patients (p < 0.0001, d = 0.1) (Table 2). Men represented 47% of FTD and only 30% of AD patients. Patients with FTD syndromes were younger than those with AD at first referral (70.4 vs. 80.6 years, p < 0.0001), and bvFTD patients were younger than the remaining FTD syndromes (69.4 vs. 72.3 years). MMSE scores at first referral were higher in FTD syndromes than in AD (21.8 vs. 18.9, p < 0.0001, d = 0.5). Likewise, the median IADL-4 score was higher in patients with FTD syndromes compared to AD patients (3 vs. 2, p < 0.0001, d = 0.5), indicating more preserved autonomy in instrumental activities. Among the FTD syndromes, a positive family history of dementia was identified in 14%, as compared with 2.3% of AD patients (p < 0.0001, d = 0.1).
Among the 294 FTD syndromes referred to the MRRC, a genetic mutation was detected in 34% of the 99 patients in whom the genetic analysis was performed (47% in C9ORF72, 32% in PGRN, and 21% in MAPT genes). Mutations were more likely to be retrieved in bvFTD (95%) than in lvFTD (5%) or mFTD (0%). See Table 1 for detailed comparisons between FTD syndromes and AD. Diagnosis of FTD syndromes We then systematically studied the time to referral, time to diagnosis, and diagnostic workup of FTD compared to AD patients. Referral delay was longer for FTD syndromes compared to AD (37.6 vs. 31.8 months, p < 0.0001, d = 0.4). Among the FTD syndromes, the referral delay was the highest for bvFTD (40.0 vs. 33.3 months in other FTD). Diagnosis wandering was longer for FTD syndromes compared to AD (9.8 vs. 5.8 months, p < 0.001, d = 0.1), but similar across FTD syndromes. As part of the standardized dementia diagnosis procedure, all of our patients underwent an MRI, if not contraindicated. The diagnostic workup of FTD patients in the whole Méotis network more often included an FDG-PET and a lumbar puncture than that of AD patients (23.2% vs. 2.6% and 27.5% vs. 3.96%, respectively; p < 0.001 and d = 0.2 for both comparisons, Table 2). Brain imaging and lumbar puncture were more consistently used in the Lille MRRC both for AD and FTD diagnosis (data not shown). Correlations between clinical diagnoses and pathology were excellent in the 15 patients of the study population who came to autopsy. Among the patients with available pathological examination, the 4 in the bvFTD group had FTLD-TDP (n = 3) or FTLD-FUS (n = 1) pathologies. All 5 patients in the mFTD group had PSP or CBD pathology. All 6 patients in the AD group had AD pathology +/− cerebral amyloid angiopathy or Lewy body pathology (n = 3 and 2, respectively). Natural history of FTD syndromes Cognitive progression, estimated by the rate of MMSE decline, was then assessed. Overall, there was no significant difference in the rate of MMSE decline between FTD syndromes and AD. Across FTD syndromes, bvFTD did not significantly differ from AD in the rate of MMSE decline per year (slope of −2.0 in bvFTD against −1.8 in AD, p = 0.4). However, the decline was higher in lvFTD (slope of −2.8) and mFTD (slope of −2.6) than in AD patients (p = 0.003 and p = 0.02, respectively) (Fig. 1b). Follow-up was longer for FTD syndromes compared to AD (24.1 vs. 17.5 months, p < 0.0001, d = 0.2), and more specifically, bvFTD and lvFTD patients had a significantly longer follow-up than AD patients. As of September 2020, 48% of bvFTD, 53% of lvFTD, 76% of mFTD, and 59.1% of AD patients had died (Table 2). The median survival time after diagnosis was 5.5 years for the entire sample and varied significantly according to the diagnostic subtype (6.5 years for bvFTD, 6.1 for lvFTD, 5.5 for AD, and 4.0 for mFTD, p < 0.001) (Fig. 1c).
Discussion The main findings of the present study are fourfold: (1) despite new sets of criteria, diagnoses of FTD syndromes remained low in routine care in our regional memory clinic network; when diagnosed, bvFTD patients had a longer referral delay and diagnostic wandering than AD patients; (2) the peak of incidence of bvFTD occurred between 75 and 79 years, clearly advocating against the conception of FTD as exclusively an early-onset dementia; (3) FTD syndromes differed from AD with regard to cognition and autonomy at baseline, cognitive decline, and disease duration; and (4) therapeutic strategies radically differed from the ones in AD. Misconceptions about FTD lead to underdiagnosis In this retrospective study, we calculated an FTD age-standardized incidence rate of 2.9/100,000 py in our region. Our results stand in between those of two recent studies using updated FTD criteria that found an incidence of 1.6/100,000 py in the UK (Norfolk and Cambridgeshire counties) and 3.05/100,000 py in Italy (Lecce and Brescia provinces) [17,29]. However, while we used the same European reference population as our British colleagues, Logroscino et al. used the Italian population for standardization. Standardization of their incidence rate with the same European population yields an FTD age-standardized incidence rate of 2.78/100,000 py, strikingly similar to ours (data not shown). We found that FTD syndromes represented 3% of the Méotis network caseload. Similar MC surveys in the Netherlands [11] and Sweden [10] had 7% and 3.6% of FTD syndromes, respectively. However, all patients in the Dutch cohort were followed in the Alzheimer center of the VU University Medical Center (VUmc), a tertiary center where atypical dementias are likely to be addressed, possibly leading to an overrepresentation of FTD patients. Likewise, there was an 8.1% proportion of FTD patients in the Lille tertiary center. A recent review on the epidemiology of FTD highlighted three studies with high methodological standards [9]. In these publications, using the Lund and Manchester [18] or Neary [19] criteria, FTD syndromes accounted for 1.1% [30], 3% [31], and 3.8% [32] of dementia cases, which is consistent with our findings. In sharp contrast, and consistent with the underdiagnosis of FTD, systematic neuropathology surveys show much higher figures. In UK brain banks (two thirds of whose donors had dementia), FTD represented 5.1% of diagnoses [33], and up to 9.4% of elderly people participating in a community brain donation program were found to have some FTD lesions at autopsy [13]. The reasons for FTD underdiagnosis are manifold. First, late-onset FTD is often overlooked. FTD is historically considered a major cause of early-onset dementia [1], which probably contributes to the FTD diagnosis being overlooked in late-onset dementia. Yet, in recent studies with pathological confirmation, one fourth of FTD cases had an age at onset > 65 years [34]. In the recent literature, there is a trend toward an increase in the age at diagnosis of FTD syndromes, which may relate to the increasing age at dementia diagnosis in recent surveys [24]. While older studies showed an age at diagnosis of 65.9 years [35], we found an age at diagnosis of 71.3 years, which compares to recent publications showing a mean age at diagnosis of 69.4 [36], 70.0 [37], or 71.3 years [29].
Interestingly, the peak of incidence occurred between 75 and 79 years in our survey, as in the aforementioned Italian and English studies [17,29], a reminder that FTD is not only a dementia of early onset. Second, the positive diagnosis of bvFTD and its differentiation from primary psychiatric disorders is another diagnostic challenge [38] that is reflected by the increased time to presentation and time to diagnosis of the bvFTD variant as compared with the others [39,40]. Prolonged diagnostic wandering in bvFTD, associated in our study with an increased reliance on diagnostic biomarkers, seems to be a universal finding [15][16][17] and suggests that many cases could remain misdiagnosed. Future studies should focus on the exact determinants of the delay in referral and in diagnosis. Third, not all the possible clinical presentations of FTD have been thoroughly described, and some are not taken into account by the available clinical criteria. The amnestic variant of FTD, in particular, is difficult to differentiate from AD [20,41], particularly in late-onset dementia [42]. Another example is the right temporal variant of FTD, although a recent publication proposing clinical criteria will contribute to filling the gap [43]. Overall, our survey confirms that FTD is still probably overlooked despite the use of novel clinical criteria and the incorporation of new phenotypes. While progress has been made in the recognition of late-onset forms, the differential diagnosis between FTD and AD remains a challenge, particularly in the oldest old, and bvFTD cases are probably still mistaken for primary psychiatric disorders. FTD syndromes differ from AD in baseline characteristics and natural history We found several key differences between FTD syndromes and AD at baseline. First, as we had previously shown [39], we confirmed that the MMSE score is higher in FTD. However, behavior, social cognition, and executive functions, the main domains impaired in bvFTD, are not properly assessed by the MMSE, which somewhat undermines the assumption that the general cognitive status is better preserved in FTD syndromes. The higher IADL-4 score in FTD compared to AD contrasted with past studies that found either lower [44] or equal [45] autonomy. However, the IADL-4 only assesses restriction in four activities (telephone, transportation, drug treatment, and finances) that are best associated with future dementia risk [28], thus preventing a direct comparison of our results with studies that employed the full ADL. The younger age and the better preservation of memory and visuo-motor functions may explain the lesser impairment found in FTD as compared to AD. Impaired functional capacity in bvFTD is primarily due to behavioral symptoms and impaired social cognition, and the routine (although complex) instrumental activities of the IADL-4 may not be the most representative of the loss of autonomy in FTD syndromes. Among the FTD syndromes, the lvFTD patients had the most preserved autonomy, as found in previous studies [44,45]. Although FTD syndromes as a whole had a similar rate of MMSE decline to AD, the lvFTD and mFTD variants specifically showed a higher rate of MMSE decline over time. Additionally, lvFTD had a slightly lower score at baseline than the other variants. Since the MMSE relies mostly on language, aphasia has likely impacted the score in lvFTD. In recent studies, patterns of longitudinal MMSE decline across the FTD phenotypes have already been studied, and semantic dementia cases were shown to decline the most [46].
Regarding survival, we, as others [17], found that mFTD had the most severe prognosis of the FTD syndromes, followed by lvFTD and bvFTD. Despite similar MMSE decline rates between bvFTD and AD, mFTD patients had a significantly lower median survival. Therapeutic strategies in FTD The drug treatments used in FTD syndromes markedly differed from the ones used in AD. These observations should be interpreted with caution since differences may only reflect different prescribing customs, and not different responses to treatment. However, clinical guidance on the symptomatic treatment of FTD is limited [47], prompting physicians to use psychotropic drugs that may be used nonspecifically in dementia, based on the medical needs and immediate efficacy. Hence, the prescription habits in FTD may also reflect the neuropsychiatric symptoms and treatment response of FTD patients. A publication from Boxer's group showed that off-label use of AChEIs and memantine in FTD was common in the USA in 2010 [48]. In our region and in the 2010-2019 time span, we found that AChEIs and memantine were used in only 12.0% and 5.7% of FTD syndromes, in accordance with recent data supporting a lack of efficacy, or even deleterious effects, in bvFTD and mFTD ([49,50], reviewed in [51,52]). The remaining prescriptions may reflect diagnostic hesitations with AD at the beginning of follow-up. Antipsychotics and anxiolytics were more frequently used in FTD syndromes than in AD, and the difference with AD was driven by the bvFTD variant. Antipsychotics are prescribed to treat agitation in dementia whatever the etiology (AD, FTD, or others) [53], although their use is restricted to patients with severe symptoms (aggression, agitation, or psychosis) who fail to respond adequately to other pharmacological and nonpharmacological treatments. The use of anxiolytics and antipsychotics in 38.3% and 24.4% of bvFTD patients, as opposed to 23.6% and 9.3% in AD, is thus a reflection of the higher rate of productive behavioral symptoms (e.g., agitation, aggression, and psychosis) in this variant. However, the low rate of antipsychotic use in FTD demonstrated that physicians took into account the alerts on side effects [54,55] and the increased mortality rate [56] in FTD and dementia patients treated with antipsychotics. The black box warning from the Food and Drug Administration was followed by a similar warning from the French Haute Autorité de Santé in 2009 (https://www.has-sante.fr/jcms/c_885227/fr/limiter-la-prescription-de-neuroleptiques-dans-la-maladie-d-alzheimer) that found a strong echo in the neurologic and geriatric communities. The most remarkable difference, however, concerned the prescription of antidepressants, which was twice as frequent in bvFTD (55.2%) as in AD (27.0%). Indeed, although results are mixed, comprehensive reviews of the evidence from clinical trials favored the use of selective serotonin reuptake inhibitors to treat behavioral symptoms [47,51,52,57]. Our team in particular demonstrated that trazodone, a serotonin antagonist and reuptake inhibitor, reduced irritability, agitation, and depressive symptoms in FTD [58]. The much better tolerance profile and apparent efficacy of serotonin-acting drugs logically imposed them as the mainstay of FTD treatment in our network. Strengths and limitations This naturalistic study of a 6-year period is rooted in 23 years of data sharing and harmonization across a region-wide memory clinic network [24]. It allowed us to analyze the trends of real-life FTD diagnosis and care over time.
We reached a considerable number of new patients per year, equivalent to that of nation-wide MC networks. Analyzing the characteristics of consecutive FTD patients first referred between 2010 and 2016 allowed us to focus on patients in whom the diagnosis was made using the new criteria for bvFTD and lvFTD [3] and strengthened by follow-up. By considering a wide spectrum of FTD variants, we included patients that are often excluded from FTD cohorts. Our survey confirmed many previously published data, which reinforces the quality and validity of our database. We showed that approximately two thirds of FTD patients had a behavioral variant (bvFTD), and 17% had a language variant, which matches other databases [9,59,60]. We, as others, found a sex ratio of approximately 1:1 in FTD [9]. Thirty-five percent of our FTD patients had a family history of neuropsychiatric disease, in agreement with the literature [14,61]. Only our rate of mutations was lower than previously reported, since a mutation was identified in C9ORF72, MAPT, or GRN in only 6% of the FTD patients that had a genetic analysis, against 10-15% in the literature [62]. Last but not least, pathological diagnoses, when available, matched the clinical diagnoses, confirming the high accuracy of the clinical diagnoses made in a structured regional network and confirmed by a prolonged follow-up. Our survey has, however, a few limitations. First, important data are not systematically populated in our database. We still lack accurate cognitive, functional, or disease-specific scales to assess disease progression. Furthermore, the mean follow-up of ~2 years precludes a comprehensive overview of FTD progression in many of our patients. We also acknowledge a selection bias due to the different networks involved in movement disorders and dementia care in our region, an issue that had been acknowledged in similar studies. Patients with overt motor neuron disease at presentation were not included because they were referred to a specialized regional care pathway rather than to memory clinics. Likewise, the PSP and CBS patients that were referred to our memory clinics were probably the ones presenting with early behavioral and/or cognitive changes. Conversely, PSP and CBS with prominent motor symptoms were likely to be followed in movement disorders clinics, where secondary referral to MCs is not systematic. Still, our incidence rate compares to the ones of two regional cohorts including the full FTD spectrum [17,29]. Conclusion and outlook Overall, our study showed that FTD syndromes have specific clinical features, different progression patterns, and therapeutic strategies. Yet, even in a region with an organized memory clinic network, FTD is still overlooked and diagnosis wandering remains longer than in AD. Psychiatric, amnestic, and/or late-onset presentations of FTD are particularly treacherous, and the overlap between cognitive/behavioral and motor presentations leads to an underestimation of the motoric presentations of FTD in memory clinics. There is an obvious need for accurate FTD biomarkers to improve FTD diagnosis. Until and even after the advent of such biomarkers, neuropsychology has, and will continue to have, a role to play at a limited cost. The development of novel tests exploring new domains of social cognition beyond mentalization and emotion recognition is a stepping stone in this direction.
Social cognition deficits have been found to be a reliable and effective cognitive marker of FTD, especially in patients with psychiatric [63] or amnestic [64,65] presentations. Social cognition deficits are probably underestimated in mFTD as well [66], advocating for a more systematic assessment of social cognition in memory, geriatric, movement disorders, and psychiatry clinics. In order to improve FTD diagnosis, the classical boundaries between specialties should be broken. Indeed, it is only through a harmonization of diagnostic procedures and databases involving geriatricians, movement disorders specialists, old-age psychiatrists, neuropsychologists, speech therapists, and memory clinics that the real scope of FTD will be thoroughly apprehended. The gathering of these different disciplines into consortia such as the Centers of Excellence in Neurodegeneration (CoEN) responds to this objective. Additionally, initiatives are needed to raise awareness of FTD in the general population. On the eve of disease-modifying therapies, misdiagnosis of FTD may already be a loss of opportunity for patients. Authors' contributions: TL did the study concept and design, did the analysis and interpretation of data, drafted the manuscript, revised the manuscript for important intellectual content, and supervised the overall study. The authors read and approved the final manuscript.
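To make the incidence-rate standardization described in the Methods concrete, here is a minimal Python sketch of direct age standardization (age-specific rates weighted by a standard population such as the Revised European Standard Population 2013). All case counts, person-year denominators, and weights below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of direct age standardization of incidence rates,
# as described in the Methods; all numbers are hypothetical placeholders.
import numpy as np

# Hypothetical incident cases and person-years per 5-year age band
# (e.g., bands 50-54 through 80-84)
cases = np.array([2, 5, 14, 30, 41, 25, 8])
person_years = np.array([8.1e5, 7.9e5, 7.4e5, 6.6e5, 5.5e5, 4.1e5, 2.7e5])

# Standard-population weights for the same bands
# (substitute the official ESP2013 weights here)
esp_weights = np.array([7000, 6500, 6000, 5500, 5000, 4000, 2500], dtype=float)

# Age-specific rates per 100,000 person-years
age_specific = cases / person_years * 1e5

# Directly standardized rate: weighted mean of the age-specific rates
standardized = np.sum(age_specific * esp_weights) / np.sum(esp_weights)

crude = cases.sum() / person_years.sum() * 1e5
print(f"crude rate: {crude:.2f} per 100,000 py")
print(f"age-standardized rate: {standardized:.2f} per 100,000 py")
```

Because the age distribution of the catchment population differs from the standard population, the crude and standardized rates can diverge noticeably, which is why the study reports both (2.42 vs. 2.90 per 100,000 person-years).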
2023-01-22T14:35:06.545Z
2021-01-08T00:00:00.000
{ "year": 2021, "sha1": "61df9579bcf49b1b7ad06699ecd3e968c17b57d2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13195-020-00753-9", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "61df9579bcf49b1b7ad06699ecd3e968c17b57d2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
249211567
pes2o/s2orc
v3-fos-license
Dry Matter Accumulation in Maize in Response to Film Mulching and Plant Density in Northeast China Film mulching in combination with high plant density (PD) is a common agronomic technique in rainfed maize (Zea mays L.) production. However, the effects of combining colored plastic film mulching and PD on dry matter accumulation (DMA) dynamics and yield of spring maize have not been thoroughly elucidated to date. Thus, a 2-year field experiment was conducted with three mulching treatments (no mulching (M0), transparent plastic film mulching (M1), and black plastic film mulching (M2)) and five plant densities (60,000 (D1), 67,500 (D2), 75,000 (D3), 82,500 (D4), and 90,000 plants ha−1 (D5)). A logistic equation was used to simulate the DMA process of spring maize by taking the effective accumulated air temperature compensated by effective accumulated soil temperature as the independent variable. The results showed that, compared with the M0 treatment, the growth period of the M1 and M2 treatments was shortened by 10 and 4 days in 2016, and by 10 and 7 days in 2017, respectively. The corrected logistic equation performed well in the characterization of the maize DMA process with its characteristic parameters (final DMA, a; maximum growth rate of DMA, GRmax; effective accumulated temperature under the maximum growth rate of DMA, xinf; effective accumulated temperature when maize stops growing, xmax; effective accumulated temperature when maize enters the fast-growing period, x1). Plastic film color mainly affected DMA by influencing xinf. PD mainly affected DMA by affecting GRmax and x1. During the first slow growing period, the DMA of the M1 treatment was the largest among the three mulching treatments; however, during the fast growing period, the DMA of the M2 treatment accelerated and exceeded that of the M1 treatment, resulting in the largest final DMA (a) and yield. When the PD was increased from D1 to D4, the maximum growth rate (GRmax) continued to increase, and the effective accumulated temperature when maize enters the fast growing period (x1) continued to decrease, which substantially increased the final DMA (a) and yield. The application of the M2D4 treatment can harmonize the relevant factors to improve the DMA and yield of spring maize in rainfed regions of Northeast China. Introduction Maize (Zea mays L.) is the most prevalent cereal crop in China, and increasing its yield has an important role in grain production across the whole country [1]. As a typical cool, high-latitude, and rainfed area, Northeast China is one of the most important cultivated regions for cereal crops in China [2], with its spring maize planting area and yield accounting for approximately 30.9% and 34.0% of the national totals, respectively [3]. In recent years, scarce precipitation and low temperature have frequently occurred during the spring in Northeast China, representing the main limiting factors for increasing maize yield [4,5]. Appropriately increasing plant density (PD) is conducive to increasing the stand leaf area index.
Table 1. Influence of different film mulching treatments on maize developmental progress, the effective accumulated soil temperature (EAST), the effective accumulated air temperature (EAAT) and the effective accumulated soil temperature under no mulching when the time is the same as plastic film mulching (EAST-NMTSM) in 2016. Different letters are significantly different at the p < 0.05 probability level.
The warming effect of plastic film mulching is mainly manifested in the early growth stages (Tables 1 and 2). From sowing to the three-leaf stage, the effective accumulated soil temperature under the M1 and M2 treatments was 28.4 °C d (daily average 1.8 °C) and 18.1 °C d (daily average 1.0 °C) higher than that under no mulching when the time is the same as plastic film mulching (NMTSM) in 2016, respectively. The extra effective accumulated soil temperature of the plastic film mulching treatments compensated for the insufficient effective accumulated air temperature. Therefore, the sowing to three-leaf period under the M1 and M2 treatments was shorter by 5 and 3 days, respectively, compared to that under the M0 treatment. Similarly, the effective accumulated soil temperature under the M1 and M2 treatments increased by 20.6 °C d (daily average 1.5 °C) and 18.6 °C d (daily average 1.2 °C) in 2017, respectively, compared to NMTSM; the M0, M1 and M2 treatments completed the growth from sowing to the three-leaf stage in 20, 14 and 16 days, respectively. With the advancement of the growth process, the warming effect of film mulching weakened. During the three-leaf to jointing and jointing to tasseling stages, the effective accumulated soil temperature under the M1 and M2 treatments increased by 30.3 °C d (daily average 1.2 °C) and 18.4 °C d (daily average 0.8 °C), and by 18.3 °C d (daily average 0.7 °C) and 11.9 °C d (daily average 0.5 °C) in 2016, respectively, compared to NMTSM. In 2017, the effective accumulated soil temperature under the M1 and M2 treatments increased by 34.6 °C d (daily average 1.3 °C) and 23.8 °C d (daily average 1.1 °C), and by 26.7 °C d (daily average 1.0 °C) and 14.7 °C d (daily average 0.7 °C), respectively, compared to NMTSM. These results indicate that the effect of the M1 treatment on increasing soil temperature is stronger than that of the M2 treatment. Effects of Different Treatments on Spring Maize Yield ANOVA revealed significant effects of the different treatments on maize yield and yield components in 2016 and 2017 (Table 3). The maize yields under the M1 and M2 treatments increased by 4.63% and 7.64% in 2016, and by 4.39% and 8.28% in 2017, respectively, compared to the M0 treatment (Table 4). The results indicated that maize yield could be increased using plastic film mulching, with the greatest improvement expected for the M2 treatment. As for the PD, the maize yields in the D4 and D5 treatments were not significantly different (p > 0.05); however, they were greater than those for the lower PD treatments. Specifically, the maize yield of the D4 treatment was larger than that of the other PD treatments by 2.58-25.25% in 2016 and by 1.27-22.45% in 2017. In terms of yield components, the differences in spike length, kernels per ear and 100-kernel weight under different film mulching treatments were significant (p < 0.05 or p < 0.01), following the order M2 > M1 > M0. Spike length, spike diameter, kernels per ear and 100-kernel weight all decreased with increasing PD. The ability of the fitted RWU relationship to predict the general trend of dependence of the obtained yields on the PD is shown in Figure 1. For all three mulching treatments in both years, the maximum yield was obtained for the PD of the D4 treatment, with a subsequent decrease at the D5 treatment (Table 4).
The yield response to the PD was noticeably stronger for the two film mulching treatments as compared to the M0 treatment, i.e., increasing the PD in the relevant range from D2 to D4 treatment caused a greater increment in yield (both absolutely and relatively) (Figure 1a). The response of the relative yields to the PD, reliably described by Y/Y max with r 0 α −1 = 0.0112 m 2 , was very similar for both years (Figure 1b). Fitting the Logistic Equation to the DMA Curves The logistic equation was used to fit the dry matter accumulation of spring maize in 2016 and 2017. The coefficient of determination (R 2 ) varied from 0.89 to 0.99, and the p-values of all treatments were <0.05 (Table 5), i.e., the logistic equation adequately describes the DMA patterns of all treatments. The logistic curve parameter k had the largest variation range, with C V values of 14.5% and 19.1% in 2016 and 2017, respectively, i.e., different treatments have the greatest impact on the steepness of the logistic curve. Followed by the parameter a, with C V values of 11.6% and 16.0%, respectively; the variability of the x c values was the smallest, with C V values of 7.4% and 6.4%, respectively. These results indicated that the logistic equation, with the effective accumulated air temperature compensated by the effective accumulated soil surface temperature as an independent variable, can adequately simulate the DMA process of maize under conditions of different colored plastic film mulching and PD. Effect of Different Treatments on the Dynamic Process of DMA The plastic film color, PD and their interaction, all had significant effects on the final DMA (a) (p < 0.01 or p < 0.05, Table 6). In addition, plastic film color had a significant effect on the effective accumulated temperature at the maximum growth rate of DMA, x inf (p < 0.01). PD had a significant effect on the maximum growth rate, GR max , and on the effective accumulated temperature when entering the fast growing period, x 1 (p < 0.05 or p < 0.01). The interaction between plastic film color and PD had no significant effect on x inf , x max and x 1 , but had a significant effect on GR max (p < 0.05). With respect to the main effects, the values of a under the three mulching treatments followed the following order: M2 > M1 > M0 ( Table 7). The values of x inf under the M1 and M2 treatments were similar, and they were higher than that of the M0 treatment. The value of a increased with the increasing PD in 2016. However, it first increased and then decreased with the increase of PD in 2017. The maximum value of a was obtained in D3 treatment. GR max increased with the increasing PD, with a substantial increase seen from D1 to D3 treatments, but not from D3 to D5 treatments. The value of x 1 firstly decreased and then increased with the increasing PD, while it was the lowest for the D4 treatment (54.6-145.8 • C d in 2016, 17.1-43.6 • C d in 2017, respectively, lower than that for the other PD treatments). Table 7. Effects of different treatments on dynamic characteristic parameters of dry matter accumulation. Different letters are significantly different at the p < 0.05 probability level. Year Treatments a (kg ha −1 ) DMA showed a trend of "slow-fast-slow" growth process. In the first slow growth period, the DMA under the three mulching treatments followed the order: M1 > M2 > M0 (Figure 2a). However, in the fast growth period, from the jointing to the filling stage, the DMA under the M2 treatment accelerated, exceeding that under the M1 treatment. 
The film treatments lengthened the fast growth period; the DMA under the M0 treatment entered the final slow growth period earlier than that under the M1 and M2 treatments. The PD affected the whole growth process (Figure 2b). From the D1 to the D4 treatments, DMA increased with PD, both in the two slow growth periods and in the fast growth period, with the DMA curves for the D4 and D5 treatments practically coinciding, indicating that increasing PD beyond that of the D4 treatment did not significantly increase the DMA. Effect of Different Treatments on the Dynamic Characteristics of DMA The a and GR max were significantly affected by the PD and mulching (Table 6), thereby representing the two most relevant parameters reflecting the dependence of soil water availability on the PD. Therefore, the a priori unknown interplant competition factor (4πr 0 α −1 ) of the universal Y/Y max relationship was evaluated by simultaneously fitting (via a minimum-least-squares procedure) two analogous expressions for a(ρ) and GR max (ρ). The fitted values were r 0 α −1 = 0.0112 m 2 , a(ρ→∞) = 46,813 kg ha −1 , and GR max (ρ→∞) = 49.3 kg (ha °C d) −1 , and the fitted relative water uptake expression (1 − exp(−4πr 0 α −1 ρ)), along with the relative a and GR max parameters, are depicted in Figure 3. The multivariate analysis of variance revealed a significant interaction (p < 0.05) between the effects of mulching treatments and PD on a. This can also be seen in Figure 4a, which shows that, for the M0 and M1 treatments, an increase in the PD beyond the D3 treatment caused a modest increase in a, if any, while for the M2 treatment there was a substantial increase in a. There was considerably more rain in the growing season of 2016 compared to the 2017 season. This is probably the reason why a reached a maximum value at the D3 treatment in 2017 but continued to increase with an increase in PD in 2016 (Figure 4b). A similar analysis of the interaction between the effects of mulching treatments and PD on GR max revealed that, while for the two film mulching treatments GR max increased steeply with an increase in PD, for the M0 treatment the effect of PD was opposite (at least in the range of the D3 to D5 treatments), i.e., GR max decreased with increasing PD (Figure 5a). The GR max was much higher in 2017 compared to 2016, albeit increasing with PD in both years (Figure 5b). The Relationship between the Final DMA (a) and the Dynamic Characteristics of DMA In order to further analyze the influence of the characteristic parameters of the dynamic process of DMA on a of spring maize, a path analysis was performed (Table 8). The direct path coefficients show that the order of influence on a is GR max > x inf > x max > x 1 . GR max had the greatest effect on a (p < 0.01), whereby the contribution to R 2 was 0.9669, indicating that a greater maximum growth rate results in a greater DMA. The second most influential factor was x inf , indicating that a larger effective accumulated temperature when reaching the maximum growth rate results in a greater DMA. Finally, the correlation coefficients of x max and x 1 were negative, indicating that a smaller effective accumulated temperature when growth stops or when entering the fast growing period results in a greater DMA; however, the influence was weak. Accumulated Temperature under Plastic Film and Model Application Earlier studies revealed that plastic film mulching can increase soil temperature, thereby improving crop growth [12].
In our study, transparent plastic film mulching (M1) was more effective in increasing soil temperature than black plastic film (M2). This is because, on the one hand, the transparent film transmits more solar radiation and reduces its reflection [23], and on the other hand, long-wave radiation is blocked by dew condensation on the transparent film, causing the soil temperature to rise. The higher soil temperature promotes the growth and development of maize. Previous studies have suggested that plastic film mulching clearly reduced the number of days required for maize to ripen [24]. In our study, the increase in soil temperature caused by plastic film mulching was mainly manifested in the early growth stage, which eliminates low-temperature and chilling damage in the spring. This is the reason for the reduced number of days to germination and the enhanced early seedling growth, with the greatest improvement seen for the M1 treatment. In early applications, the logistic model, which typically uses time as the independent variable, was used to describe the plant growth process [25]. Sepaskhah et al. [20] suggested replacing the time variable with ecological factors such as accumulated temperature (degree-days), as it better reflects maize growth. Behind the use of accumulated temperature is the notion that the growth rate of crops is mainly affected by temperature; in other words, only when the accumulated temperature reaches a certain value can a certain growth stage be accomplished [26]. Under the same accumulated air temperature, the maize growth rate with plastic film mulching is obviously faster than that with no mulching, somewhat contradicting the accumulated temperature concept. The compensation effect of accumulated soil temperature on accumulated air temperature [27] with plastic film mulching explains this apparent contradiction. In the present study, a logistic model using effective accumulated air temperature compensated by effective accumulated soil temperature as the independent variable was adopted. The results (p and R 2 values) show that the logistic curve could adequately simulate the process of DMA under different colored plastic film mulching and PD conditions. Dry Matter Accumulation Dynamics The plants were relatively short, and there was no significant difference in ground shading among density treatments at the seedling stage of spring maize. After the seedling stage, however, the high-density treatments formed a larger canopy structure, which led to an increase in ground shading and a decrease in soil temperature [13]. However, the leaf area index increased with an increase in PD, such that more photosynthetically active radiation could be intercepted, thereby leading to an increase in DMA [28]. Moriri et al. [29] also stated that with an increase in maize PD, the DMA increased. We found that GR max increased significantly with the increase in PD from the 60,000 (D1) to the 75,000 (D3) plants ha −1 treatments, but it did not increase significantly with a further increase in PD. The value of x 1 firstly decreased and then increased with the increase in PD, being lowest for the 82,500 plants ha −1 (D4) treatment (Table 7). Consequently, when the PD increased from the D1 to D4 treatments, the DMA increased gradually, whereas a further increase in PD did not significantly increase the DMA (Figure 2b).
This is so because the photosynthetic rates of the middle-height and lower leaves of the plant are reduced, resulting in fewer photosynthetic products per leaf (area) under high-density conditions [6], with a lower DMA per plant. If the increase in PD makes up for the decrease in dry matter production per plant, the DMA per area would increase; otherwise, the areal DMA would reach a plateau or even decrease. Dang et al. [30] found that plastic film mulching can conserve soil water and increase soil temperature in the early stages of the crop, thus accelerating crop growth and development. In our study, the effect of the M1 treatment on improving soil temperature was better than that of the M2 treatment, and the DMA under the M1 treatment was the largest among the three mulching treatments in the first slow growing period. However, the DMA under the M2 treatment exceeded that of the M1 treatment in the fast growing period, resulting in the largest a and yield (Figure 2a). This is because the higher soil temperature in the M1 treatment is beneficial for DMA in the early growth stages; however, it also shortens the growth period, affecting the grain-filling process. Compared with the M1 treatment, the M2 treatment could provide a better soil temperature, suitable for maize growth in all growth stages, with a good effect on soil water storage [13]. Therefore, with black-plastic-film mulching, upon increasing PD beyond the D3 treatment, a still rose sharply (Figure 4). Usually, the correlation of water uptake with maize DMA is stronger than its correlation with grain yield [14]. The general, mean trends of a(ρ) and GR max (ρ) for the different mulching treatments in both years were described reasonably well by Equation (1) with r 0 α −1 = 0.0112 m 2 ; of course, this was expected, as a(ρ) and GR max (ρ) were used to evaluate r 0 α −1 . However, the ability of the fitted RWU-PD relationship to predict the general trend of dependence of the obtained yields on the PD (Figure 1) is more constructive, in accordance with the high correlation found between the fitted value of a and the measured yields. Sun et al. [13] found that although plastic film mulching increased the costs of materials and residual film recovery, it also substantially increased the maize yield compared with no mulching, resulting in better economic benefits for the M2 treatment than for the M1 treatment. With extended use time and continuously expanding usage, plastic film mulching brings more benefits to agricultural production but also produces negative effects such as farmland residues and landscape pollution [31]. So far, there are two ways to solve the problem of plastic film pollution: one is to increase the recovery of plastic film, and the other is to develop and promote degradable films. Conclusions (1) The increase in soil temperature caused by plastic film mulching accelerated the growth and development of spring maize, resulting in a shorter growth period as compared to the no-mulching control, with the greatest improvement obtained from transparent plastic film mulching (M1). (2) The dry matter accumulation (DMA) under transparent film mulching was the largest among all the mulching treatments in the first slow growing period. However, during the fast growing period, the DMA under black film mulching was higher than that of the transparent film, resulting in the largest final DMA (a) and yield.
Among all plant densities, the PD of 82,500 plants ha −1 had a relatively larger maximum growth rate of DMA (GR max ) and the smallest effective accumulated temperature when entering the fast growing period (x 1 ), thus increasing a and maize yield. (3) The plant density of 82,500 plants ha −1 combined with black plastic film mulching (M2D4) can increase the DMA and economic yield of maize in rainfed regions of Northeast China. Site Description In 2016 and 2017, field experiments were performed at the experimental station of the Water Conservancy College of Shenyang Agricultural University (41°44′ N, 123°27′ E), located in Shenyang City, Liaoning Province in Northeast China. The exact geographical location is shown in Figure 6. The study area is characterized by a semi-humid continental climate. The average annual precipitation is 703 mm, and the average annual temperature is 8.0 °C. The monthly mean maximum and minimum temperatures of the two maize growth seasons (seeded on 1 May and harvested on 27 September in 2016; seeded on 3 May and harvested on 23 September in 2017) were 30.5 and 12.4 °C, and 31.9 and 13.2 °C, respectively (Figure 7). Precipitation during the 2016 and 2017 maize growth seasons was 790 and 305 mm, respectively. The soil of the study site is a silty clay loam (Table 9). In the 0-100 cm soil layer, the average soil bulk density is 1.41 g cm −3 , the average (volumetric) water content at field capacity (1/3 bar) is 0.38 cm 3 cm −3 , and the average wilting point (15 bar) is 0.18 cm 3 cm −3 . Moreover, the topsoil (0-20 cm) contains 20.8 g kg −1 organic matter, 0.87 g kg −1 total N, 8.9 mg kg −1 available P, and 75.6 mg kg −1 available K. During the study period, the average groundwater depth in the experimental area was about 4.2 m. Experimental Design and Field Management A density-tolerant maize variety (cv. Liangyu 99) was used with a traditional large-ridge double-line planting method (Figure 8). The experiment was laid out as a split-plot block design. The main plots were three mulching treatments: no mulching (M0), transparent plastic film mulching (M1, 1.2 m wide × 0.008 mm thick), and black plastic film mulching (M2, 1.2 m wide × 0.008 mm thick). The subplots were five PD treatments: 60,000 (D1), 67,500 (D2), 75,000 (D3), 82,500 (D4), and 90,000 plants ha −1 (D5), in line with local farmers' practice. A total of 15 combinations were tested, and each combination was repeated three times, totaling 45 experimental plots (6.0 m × 3.6 m). Protective rows were set up around the plots, and border rows were set up between different planting modes. The plots were fertilized only once, during sowing, with a compound fertilizer that contained 243 kg N ha −1 , 135 kg P 2 O 5 ha −1 , and 117 kg K 2 O ha −1 . Field management was in line with local farmers' practices. No supplemental (to natural rainfall) irrigation was provided during the whole maize growing period. Temperature Air temperature was obtained from a meteorological station installed near the experimental site. Soil temperature at 5 cm depth was recorded using soil thermometers (Chuangji Instrument Co., Ltd., Hengshui, China). The soil thermometers were inserted vertically near the maize root system. Soil temperature was measured daily at 6:00 a.m. and 2:00 p.m. Maize Phenology The exact dates corresponding to the main growth stages of maize, such as the three-leaf, jointing, tasseling, silking and maturity stages, were observed and recorded.
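As a minimal illustration of the temperature bookkeeping used throughout this study, the sketch below (hypothetical helper functions, not code from the original analysis) averages the two daily soil-temperature readings and accumulates effective temperature between the biological limit temperatures defined for spring maize in the Methods below (T b = 8 °C, T u = 35 °C).

```python
# Minimal sketch (not from the original study): derive daily mean soil
# temperature from the 6:00 and 14:00 readings and accumulate effective
# temperature between the biological limits T_B = 8 °C and T_U = 35 °C.
T_B, T_U = 8.0, 35.0  # lower/upper biological limit temperatures (°C)

def daily_mean(t_0600: float, t_1400: float) -> float:
    """Daily mean soil temperature from the two daily readings (°C)."""
    return (t_0600 + t_1400) / 2.0

def effective_accumulated_temperature(daily_means) -> float:
    """Sum of daily effective temperatures (°C d): each daily mean is
    clipped to [T_B, T_U] before subtracting T_B and summing."""
    return sum(min(max(t, T_B), T_U) - T_B for t in daily_means)

# Example: three hypothetical days of (6:00, 14:00) soil temperatures
readings = [(9.5, 21.0), (7.0, 18.5), (14.0, 36.5)]
means = [daily_mean(a, b) for a, b in readings]
print(effective_accumulated_temperature(means))  # °C d over the three days
```

Note that averaging two fixed-hour readings is only an approximation of the true daily mean; the clipping step is what makes the sum an "effective" accumulated temperature in the sense used in the Methods.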
Dry Matter Accumulation (DMA) At the end of the seedling, jointing, heading, filling, and maturity stages, three representative plants from each plot were cut at the base of the stem, bagged according to the stem, leaf, and spike classification, placed in an oven for enzyme deactivation (105 °C) for 30 min, and then dried at 80 °C until the mass stabilized to determine the above-ground dry matter. Yield and Yield Components At the end of the growing period, maize yields were measured by hand harvesting of all plants from a 10 m 2 area in each plot. Yields were weighed after threshing, and their dry weight was determined on the basis of a grain moisture content of 14%. Maize moisture content was measured using a PM-8188-A grain moisture meter (Sanfeng precision measuring instrument Co., Ltd., Zhongshan, China). Spike length, spike diameter, kernels per ear and 100-kernel weight were measured in the laboratory. Compensation Effect of Soil Accumulated Temperature on Air Accumulated Temperature A normal process of crop growth and development requires a certain level of accumulated temperature (also termed "growing degree days (GDD)"), which, upon reaching a certain value, determines the fulfillment of a certain growth period [25]. According to the description in "Crop Cultivation Science" [32], for spring maize in the Shenyang area, the lower biological limit temperature (T b ) is 8 °C and the upper biological limit temperature (T u ) is 35 °C. Currently, it is common to evaluate the effective accumulated temperature as T cum = Σ(T i − T b ) (Equation (3)), where T cum is the effective accumulated temperature (°C d), and T i refers to the daily average air temperature (T a ) or the daily average soil surface temperature (T s ) (°C). When T i exceeds 35 °C, it is taken as 35 °C, and when T i is lower than 8 °C, it is taken as 8 °C [32]. The maize growth rate is mainly affected by the soil temperature at 5 cm depth [27], which can be used to evaluate the thermal effect of different mulching treatments. The maize growth period under film mulching is shorter than under no mulching, because the increase in soil temperature due to film mulching makes up for the low effective air temperature. Thus, the maize warming compensation coefficient (K) can be evaluated from Equation (4) [33], where T cum-a-AL is the effective accumulated air temperature under no mulching (°C d), T cum-a-FM is the effective accumulated air temperature under film mulching (°C d), T cum-s-FM is the effective accumulated soil temperature under film mulching (°C d), and T cum-s-AL is the effective accumulated soil temperature under no mulching when the time is the same as film mulching (°C d). ∆T is the compensation value of the accumulated air temperature for every 1 °C increase in the accumulated soil temperature under film mulching compared to that under no mulching. It is calculated using an empirical formula (Equation (5), with K from Equation (4)), in which T s-FM is the daily average soil temperature at 5 cm under film mulching (°C), and T s-AL is the daily average soil temperature at 5 cm under no mulching (°C). Generally, a larger ratio of soil temperature to air temperature results in greater compensation. T a-FM , the daily average effective air temperature after the compensation of accumulated soil temperature under film mulching (°C), is related through Equation (6) to T a-AL , the daily average effective air temperature under no mulching (°C). According to the study of Sun et al.
[13], the compensation effect of soil temperature on air temperature after maize tasseling is negligible. The compensatory coefficient for the effective accumulated air temperature under plastic film mulching is shown in Table 10. Brief Introduction of the Logistic Equation The logistic equation (also called the "logistic curve") was proposed by the Belgian mathematician Verhulst to describe population growth, and, in the context of crop modeling, it can be used to describe plant growth as y = a/(1 + b·exp(−ct)) (Equation (7)), where y is plant height, leaf area index, DMA, etc., t is the number of days after emergence, and a, b and c are fitting coefficients. In this study, Origin 2016 software was used to fit the relationship between the DMA of spring maize and the effective accumulated air temperature compensated by the effective accumulated soil surface temperature. It was found that the trend of the logistic curve was similar to the trend of the measured data. The optimized logistic equation was y = a/[1 + exp(−k(x − x c ))] (Equation (8)), where y is the DMA, x is the effective accumulated temperature, and a, k, and x c are parameters (a is the final value of the DMA, x c is the sigmoid's inflection point (midpoint), and k is the logistic growth rate (the steepness of the curve)). First, the function expression of Equation (8) was defined in Origin 2016; then, the measured values of DMA and the corresponding effective accumulated temperatures were input, and the fitted curve and the corresponding values of a, k and x c were obtained by running the fit. Next, the relevant characteristics can be calculated. The differentiation of Equation (8) provides the rate of DMA (termed the growth rate, GR): GR = dy/dx = ak·exp(−k(x − x c ))/[1 + exp(−k(x − x c ))] 2 (Equation (9)). Furthermore, equating the derivative of Equation (9) to 0 provides the accumulated temperature for which the growth rate is maximum, which is recorded as x inf : x inf = x c (Equation (10)). The maximum growth rate, GR max (substituting Equation (10) into Equation (9)), can be determined as GR max = ak/4 (Equation (11)). Darroch and Baker [34] considered that, when the DMA reaches 95% of the final DMA, i.e., when y = 0.95a, growth practically stops. This happens when the effective accumulated temperature, x max , satisfies x max = x c + ln(19)/k (Equation (12), from Equation (8)). The effective accumulated temperature at which the rate of increase in GR is maximum, x 1 (obtained by equating the second derivative of Equation (9) to 0), can be regarded as the beginning of the "fast growing period" of spring maize: x 1 = x c − ln(2 + √3)/k (Equation (13)). Evaluation of Relative Water Uptake (RWU) Dependence on Plant Density The effect of PD on plant growth and yield was also evaluated via the universal Y-PD relationship proposed by Friedman [14], accounting for water availability and competition among neighboring root systems, and assuming that the relative yield, i.e., the yield normalized by the maximum yield obtained at maximum PD (Y/Y max ), is equal to the relative water uptake (RWU), i.e., the water uptake normalized by the water uptake at maximum PD. According to the Y-PD relationship, the yield for a given PD increases approximately linearly with increasing root system radius and soil capillary length (and with decreasing planting rectangularity [14], ignored in the present analysis). Consequently, the interplant competition factor is approximately equal to 4πr 0 α −1 , i.e., to the surface area of a sphere with a radius equal to the geometric mean of the radius of the root system (r 0 ) and the soil capillary length (α −1 ).
Evaluation of Relative Water Uptake (RWU) Dependence on Plant Density

The effect of PD on plant growth and yield was also evaluated via the universal Y-PD relationship proposed by Friedman [14], which accounts for water availability and competition among neighbouring root systems and assumes that the relative yield, i.e., the yield normalized by the maximum yield obtained at maximum PD (Y/Y_max), is equal to the relative water uptake (RWU), i.e., the water uptake normalized by the water uptake at maximum PD. According to the Y-PD relationship, the yield for a given PD increases approximately linearly with increasing root system radius and soil capillary length (and with decreasing planting rectangularity [14], ignored in the present analysis). Consequently, the interplant competition factor is approximately equal to 4πr0α⁻¹, i.e., to the surface area of a sphere with a radius equal to the geometric mean of the radius of the root system (r0) and the soil capillary length (α⁻¹).

Thus, if we accept the hypothesis regarding the correlation between relative yield (Y/Y_max) and relative water availability, i.e., that water availability plays the dominant role in determining the Y(ρ) relationship, it takes the following universal form:

Y/Y_max = 1 − exp(−4π r0 α⁻¹ ρ),   (14)

where α⁻¹ and r0 are measured in units of m and ρ, the plant density, in plants·m⁻². Its dimensionless form, incorporating the dimensionless PD (P = 4ρ/α²) and the dimensionless radius of the root system (R0 = αr0/2), is Y/Y_max = 1 − exp(−2πR0P). First, the interplant competition factor (4πr0α⁻¹) was evaluated on the basis of the dependence of the fitted final DMA, a (kg ha⁻¹), and of the maximum growth rate, GR_max (kg (ha °C d)⁻¹), on the PD, ρ (plants m⁻²). Then, the fitted value of r0α⁻¹ was used to analyse the effect of mulching on the dependence of a, GR_max, and yield on the PD (i.e., a(ρ), GR_max(ρ), and Y(ρ)).

Statistical Analysis

Origin 2016 (OriginLab Corporation, MA, USA) and DPS 7.05 (Hangzhou Ruifeng Information Technology Co., Ltd., Zhejiang, China) were used for data analysis, and Duncan's new multiple range method was used for the significance test. Differences at the p < 0.05 level were considered statistically significant.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no competing interests.
Towards a deeper understanding of the behavioural implications of bidirectional activity-based ambient displays in ambient assisted living environments

In this chapter, we investigate the extent to which the real-time bidirectional exchange of activity information can influence context-awareness, social presence, social connectedness, and, importantly, interpersonal activity synchrony in mediated ambient assisted living (AAL) environments. Additionally, we describe the design, development, and assessment of a bidirectional ambient display platform to support real-time activity awareness and social connectedness in mediated AAL contexts. In a semi-controlled study, we evaluate a conglomerate of activity-based lighting displays to determine the effects of real-time bidirectional deployment on behaviour and social connectedness. Exploiting everyday objects, human activity levels are projected with a Philips Hue lamp, LED wallet, and LED walking cane, which render this information based on predefined patterns of light. Results from the current study show tendencies toward (1) an increase in implicit social interactions (e.g., the sense of experienced social presence and connectedness), (2) more positive social behaviours between the elderly and their caregivers in mediated AAL contexts, and (3) sporadic moments of interpersonal activity synchrony; however, further investigation is necessary to determine the extent of this variable in mediated AAL environments.

1 Introduction

In the 21st century, ageing populations around the world are increasing dramatically. Nowadays, most persons can expect to live until they are 60 and beyond, which, according to the World Health Organization, is a 'first time occurrence' in mankind's history. Population ageing presents critical challenges, which include but are not limited to the following: (i) frailty, (ii) physical disabilities, (iii) cognitive and cardiovascular diseases, as well as (iv) vulnerabilities to social isolation and loneliness. Despite these difficulties, many older adults are insistent on striving to maintain their autonomy and quality of life [27]. Therefore, social engagement and enhancing the quality of life of older adults are of a high priority on the political agendas of many ageing societies, including Europe and Asia. Notably, this problem space presents challenges and opportunities for designing and developing technology-rich environments capable of supporting healthy and active ageing, as demonstrated by the researchers in [10,35,61]. Ambient Assisted Living (AAL) encompasses a broad range of Information and Communications Technologies (ICT) for enhancing functional independence, social interaction, and the overall quality of life among older adults. Currently, most AAL interventions are geared toward safety and ambulatory monitoring for emergency detection. Such systems are mostly driven by Ambient Intelligence (AmI) and can be seamlessly interwoven into the existing life patterns of older adults. In fact, AmI aspires to detect people's state and adaptively respond to their needs and behaviours through the integration of ubiquitous technologies in their environment [82]. Drawing from disciplines such as artificial intelligence, human-computer interaction, pervasive/ubiquitous computing, and computer networks, AmI systems can sense, reason, and adapt to offer personalized services
based on the user's context, intentions, and emotions [1,16].In this way, such systems, also known as context-aware systems [66], can be integrated into AAL environments to provide better care and support for the elderly living independently. Weiser's vision for ubiquitous computing is described in his seminal work entitled The Computer for the 21st Century [86].In his narrative, Weiser envisaged a world where technology would silently reside in the background or periphery of the user's attention and is available at a glance when needed.Consequently, the allocation of minimum attentional resources would enable peripheral interaction with a system as suggested in the following statement."The most profound technologies are those that disappear.They weave themselves into the fabric of everyday life until they are indistinguishable from it" [86, p. 1].To this end, ubiquitous computing aims to enable calm technology [87], whereby information is transported easily between the center and periphery of attention.To achieve this, ambient displays, a sub-discipline of AmI, generally refer to systems intended for portraying various types of context information, e.g., weather, stock prices, or the presence or activities of others in the periphery of the users' attention [62]. Within the AAL domain, some studies [11,15,18,19,55,58,65,69,84] have demonstrated benefits associated with aesthetically pleasing and informative ambient displays to raise context awareness and strengthen social interaction.Also, previous works including our very own [21,25,27] have demonstrated that physical activity information equally shared between two remote users can provide a sense of peripheral presence and interpersonal awareness; thus stimulating positive social behaviours in AAL contexts.However, these behavioural implications following the receipt of real-time activity cues through bidirectional activity-based ambient displays in remote AAL contexts have not been dealt with in depth.Therefore, this chapter further investigates the behavioural implications of real-time bidirectional activity-based ambient displays in mediated AAL contexts and is foreseen to provide further insights into the potential benefits and usage possibilities of bidirectional activity-based ambient displays.Consequently, this can inform the design decisions regarding the functionality, adoption, and acceptance within mediated AAL environments. In the remaining sections of this chapter, we will discuss the following.First, we will review the literature on social well-being and its related measures.Then, we describe our design rationale and provide an overview of our system.After that, we present a user study describing our evaluation process, and later we expound upon our findings on the effect of the system on social connectedness, social presence, and interpersonal activity synchrony.Ultimately, we make our conclusions and discuss our plans for future work. Social Well-Being and Related Measures To begin with, it is necessary to understand the notion of social well-being as a critical aspect of 'ageing in place'.Social well-being has become a central topic in gerontological research [44,74] and is defined by the authors in [47] as "the appraisal of one's circumstance and functioning in society" (p.122).According to Abraham Maslow's hierarchy of needs, love and a sense of belonging are vital for human functioning, which transcends to the primal need for intimacy, family, and friendship [53]. 
In Maslow's hierarchy, once physiological and safety needs are met, a person can strive to satisfy the need for love and belonging, which is essential to fulfil esteem needs and, if possible, attain a state of self-actualization. Therefore, sociality is crucial for well-being, as human beings are naturally driven by an inherent desire to belong and maintain strong and lasting bonds [5]. Accordingly, this need is satisfied through regular and positive interactions with long-term social contacts [5].

Throughout the past decades, several researchers in psychology and the social sciences have documented substantial empirical evidence on the impact of social relationships on promoting health, longevity, and optimal physical functioning in older adults [50,89]. In particular, socially active senior citizens are often physically and mentally healthier than those who are socially isolated [17,88]. However, the absence of close family ties and fulfilling social relationships may cause undesirable implications such as loneliness and depression in older adults [72].

With the onset of better employment or educational opportunities, geographical distance between family members has become a primary barrier to effective communication and the provision of care for older adults [7]. Essentially, while living apart, it is crucial to stay connected and keep abreast of each other's activities. Although the proliferation of computer-mediated technologies such as instant messaging, free or relatively cheap Voice over IP calls, and email can augment communication, such technologies are sometimes intrusive and require more attentional resources for communication. As such, this chapter explores the concept of social connectedness through peripheral technology designed to facilitate real-time activity awareness and improve interaction between the elderly and their caregivers in mediated environments. To reduce disturbances in daily life activities, we believe that an indirect means of awareness of each other's context and activities can sustain close connections and reduce the risks of social isolation and loneliness among older adults.

Social Connectedness

The generally accepted use of the term social connectedness usually refers to a sense of "belongingness and relatedness between people" [81, p. 1]. Van Bel et al. also discuss the importance of understanding the temporal aspects of belongingness, which can be experienced on two levels, i.e., the (i) 'momentary' or (ii) 'continuous' feeling of connectedness. However, the authors in [80] gave precedence to the long-term experience, which is more distinctive in relatively stable interpersonal relationships, whereas the short-term experience of connectedness can be influenced by a person's current emotion, their present assessment of their sense of belongingness, or their interactions with another individual. Other factors such as age, context, gender, personality traits, culture, individual preferences, and previous relationship experience can also affect how people experience social connectedness [34].

Altogether, a sense of belonging appears to be embodied in the concept of social connectedness, such that an increase in social connectedness can lead to the positive feeling of having enough social contacts and also support the personal assessment of being a valued member of a group. To determine a person's social connectedness with others, Van Bel et al. suggest the following five dimensions [81].
1. Relationship salience - The continued sensation of presence and togetherness with another despite being in different locations.
2. Contact quality - The subjective assessment of the quality of interaction with others in a person's social network.
3. Shared understanding - Having common interests, ideologies, and perspectives with people in one's social network.
4. Knowing each other's experiences - Becoming emotionally aware of each other's subjective feelings, along with recognizing and understanding the counterpart's experience and how they think.
5. Feelings of closeness - Examines the intensity of the attachment with one person against all other relationships; also assesses the quality of communication and emphasizes confidentiality and openness in relationships.

Awareness systems build on the construct of connectedness-oriented communication, which is closely aligned with the exchange of affective and relational information aimed at maintaining relationships and promoting a strong sense of connectedness [52]. Basically, social connectedness assesses the emotional experience of belongingness and can be measured qualitatively by determining heightened feelings of closeness, commonalities between relational partners, and the mutual expression of feelings and thoughts [81]. The construct can be approached quantitatively by assessing how one perceives their social situation (i.e., social appraisal) and their personal evaluation of relationship salience (i.e., the presence of another) [81].

While the notion of social connectedness is difficult to measure, the design community has noticed its relevance for tailoring novel socially aware technologies to facilitate a sense of belonging in mediated environments [84]. However, there are other applicable measurements (e.g., social presence) related to this phenomenon that will now be addressed.

Social Presence

Despite many attempts to define social presence, the scientific community has not yet reached a consensus on its definition. A more concrete view is formulated by Biocca et al. in [8], where they define social presence as a "sense of being with another in a mediated environment" (p. 10), not only replicating face-to-face interactions but also considering the mediated experience of human and nonhuman intelligence (e.g., artificial intelligence). This shorthand definition further elaborates on the "moment-to-moment awareness of co-presence of a mediated body and the sense of accessibility of the other being's psychological, emotional, and intentional states" [8, p. 10]. Therefore, social presence is categorized into three distinct levels, as explicated by Biocca et al. below [8].

1. Level one (the perceptual level) - one becomes aware of the co-presence of the mediated other.
2. Level two (the subjective level) - is comprised of four dimensions describing the perceived accessibility of the mediated other's:
   - attentional engagement
   - emotional state
   - comprehension
   - behavioural interaction
3. Level three (the intersubjective level) - assesses the degree of symmetry or correlation between one's own feeling of social presence and their impressions of the mediated other's psychological sense of social presence. It goes further to examine concepts such as interdependent actions, e.g., reciprocity/motor mimicry in mediated environments, which is closely related to the notion of interpersonal activity synchrony [14], a concept generally known to foster socially cohesive behaviours in relationships and a focal point to be investigated in this chapter.
Coordinated Actions - Interpersonal Activity Synchrony

For many years, coordinated actions have been considered to enhance relationships and are deemed an essential component of social behaviour and interaction [4,6,12-14]. In addition, scholars such as [4,12] suggest a possible link between perception and behaviour such that automatic mimicry can be evoked by the mere perception of an interaction partner's behaviour. In this chapter, interpersonal activity synchrony is investigated through a set of analogous and sometimes overlapping terms, namely (i) behavioural coordination, (ii) coordinated action, (iii) motor coordination/synchrony, and (iv) emotion contagion. Coordinated behaviour has been shown in a variety of contexts such as parent-infant bonding [14], teacher-student interactions [6], and intimate relationships [45], such that coordinated action, i.e., interpersonal activity synchrony, is regarded as an indicator of social interaction. In particular, previous studies have examined this construct with reference to the synchronization of bodily actions such as oscillations of rhythmic limb [68] and lower leg [67] movements. Likewise, some scholars have found evidence of interpersonal motor coordination while two people either (i) walked side-by-side [79,90] or (ii) swayed side-by-side in rocking chairs [64]. In addition to motor synchrony, other studies have investigated coordinated behavioural markers in terms of the mimicry of conversations, collective musical behaviour, dancing, laughter, facial expressions, and emotions [13,33]. Altogether, these indicators can be combined under one umbrella term, emotion contagion, which is defined as follows: "The tendency to automatically mimic and synchronize facial expressions, vocalizations, postures, and movements with those of another person's and, consequently, to converge emotionally" [40, p. 5].

A key problem with much of the literature examining behavioural coordination is that it tends to focus on face-to-face interactions, with very few studies conducted in mediated environments. While we wholeheartedly agree that face-to-face interaction is perhaps one of the most active forms of interpersonal interaction [59,70], given its offerings of immediate feedback, engagement, and interpretation of non-verbal communication cues among others, we also believe that there is a need to explore other types of interaction, especially for enabling peripheral interaction in AAL. As mentioned earlier, Biocca et al. highlighted interdependent actions as a critical determinant of social presence in mediated environments [8]. Thus, in an attempt to facilitate coordinated behaviour in mediated AAL environments, this chapter evaluates the extent to which the system can trigger or influence interpersonal activity synchrony.

Interpersonal Synchrony - Computational Methods in the Field

Very few studies [30,33,43] address the issue of interpersonal synchrony in mediated environments. Thus, to gain a deeper understanding of this social phenomenon, we had to review studies demonstrating synchrony in both real-life and mediated contexts. From the literature reviewed, e.g., [30,43,64,79], we can infer the following indicators of synchrony:
- co-action
- coordination
- mimicry
- emotion contagion

So, how do we compute interpersonal activity synchrony in mediated AAL environments? Findings from different studies suggest that activity synchrony is determined by calculating the autocorrelation [76] or Pearson correlation [78] of the linear coupling of activity patterns. Also, researchers such as Haken et al. have considered an in-phase approach to synchrony such that motor signals are homologous and in synchrony [37]. Concerning mediated environments, scholars such as those in [30,43] suggest cross-correlation measures for computing physiological linkage - a related measure of emotion contagion. Moreover, Biocca et al. conferred in their model of social presence that the degree of symmetry or correlation is a measure of social presence [8].

Although correlation measures are critical for calculating interpersonal synchrony, there are other mathematical constructs to consider. For example, Hove and Risen discussed the necessity of imposing a temporal lag (lasting a couple of seconds) following the reference behaviour in the cross-correlation calculation so that mimicry, and by extension synchrony, can be determined [42].

Considering the previously explored computational methods for evaluating interpersonal activity synchrony, we will employ cross-correlation measures for assessing this phenomenon in this chapter. Furthermore, we will impose a lag to compute this cross-correlation. More details on our evaluation and data analytical methods will be described later in this chapter. We will now provide a brief overview of our bidirectional activity-based system and subsequently discuss our methodology.

Design Rationale

As mentioned earlier, our bidirectional activity-based implementation is an ambient lighting system that detects human activities and provides visual feedback through an LED cane, an LED wallet, and Philips Hue light orbs to create a sense of awareness and social connectedness between older adults and their caregivers. We were guided by the following design heuristics, obtained through a thorough review of the literature [51,54,83], interviews with design experts, and our own findings from previous research [23,26,28] using ambient displays.

- The system should be practical, not distracting, portable, perceptible, comfortable, meaningful, reliable, subtle, discrete, aesthetically pleasing, accessible, and safe.
- The system should accommodate the vision and motor impairments of the elderly population and should appeal to the intrinsic motivation to share knowledge.
- The system should support ease of use, affordance, and learnability, bearing in mind that the elderly are susceptible to cognitive impairments, which affect their attention and memory.
- The system should support the elderly's autonomy and should seamlessly fit into their existing lifestyle patterns.
Motivated by the central goal of designing usable, acceptable, and accessible products for the elderly and their caregiver counterparts we sought to determine appropriate everyday objects for conveying activity information that would meet our design criteria.This was done over the course of several brainstorming sessions with experts in the field, designers, and prospective users.Notably, to provide an "always connected" service, we were interested in complementing our already existing Hue lighting system with portable ambient lighting devices.After much deliberation and reference to the Smart Cane System designed by [49], we decided that the LED cane and wallet were most suited for conveying activity information while simultaneously adhering to the design heuristics. System Components The entire system is composed of 5 major subsystems as illustrated in Fig. 1.A remote server subsystem resides in the central part of the system and is responsible for classifying human activities and relaying detected activities to other subsystems.A LED and Hue subsystem are located on each side of the remote server subsystem, respectively.Each LED subsystem consists of a waist-mounted smartphone, an Espressif (ESP) microcontroller with Wi-Fi capability, and an LED ring or strip.The waist-mounted phone is equipped with an accelerometer and a gyroscope for measuring the proper acceleration and orientation of the body, respectively (cf.[25]).A custom built Android application i.e., the LED controller application (app.), collects the accelerometer and gyroscope readings (sensor data) at a frequency of 50 Hz (cf.[20]) and sends it to the remote server subsystem for classification.The Android application maintains two socket connections to the central remote server, one for sending sensor data to the server for classification and the other for receiving the classified activities of the counterpart.Subsequently, the classified activities received are mapped to activity levels and then transformed to lighting property encodings, which is later broadcasted to the led strip/ring via the ESP microcontroller Wi-Fi module.To achieve this, the waist mounted phone requires a 3G/4G internet connection by which data is streamed to the remote server and a portable Wi-Fi hotspot to provide an internet connection to the ESP Wi-Fi module.Besides, the Hue subsystem consists of a mobile phone with Wi-Fi internet connection and a Philips Hue bridge and bulb.Another custom-built Android application (i.e., the Hue controller), maintains a single socket connection to the central server subsystem for receiving the classified activities of the partner.The Hue controller then relays this information to the hue bulbs as light property encodings via the hue bridge.The Hue subsystems are deployed indoors to convey bidirectional activity information while users are situated in the comfort of their homes while the LED devices are carried when users are outdoors.This enables an "always connected" system to users.Please refer to [24] for more details on our real-time activity-based bidirectional framework. Methodology In a semi-controlled study, we evaluated a conglomerate of activity-based lighting displays designed in [21,24,26], to determine the effects of bidirectional deployment on behaviour and social connectedness.Our experimental approach can be described in three main stages, which are listed below. 
1. The Pre-trial - Following the design and development of our real-time bidirectional activity-based implementation in [24], we conducted two practice sessions with a prospective caregiver and the elderly stakeholders to identify system glitches and obtain technical insights and practical recommendations for system deployment and improvement.
2. The Real Deployment - Following system adjustments, our bidirectional activity-based system was deployed in semi-controlled mediated environments to evaluate the effects on synchronized activities, context-awareness, social connectedness and social presence, information clarity, attentional engagement, and the users' willingness to adopt the system.
3. Post-Deployment Interview - We conducted a series of in-depth interviews to determine the participants' experiences and acceptance of our activity-based system and how it affected their behaviour.

Ekman et al. maintain that synchrony is inherently activated by the degree to which people are exposed to the same stimulus [30]. The authors further highlight a study by Hasson et al. [39] whereby participants were exposed to an identical visual stimulus (i.e., a movie scene) to incite synchronized cortical activity. This influenced our study design decision to expose half of our participants to the same stimulus (i.e., the scripted activities of an actor) to induce interpersonal activity synchrony. Inspired by the previous studies on interpersonal synchrony [14,57] and physiological linkage [30,43] to enhance interpersonal connectedness, we assume the relevance of these constructs to provide social support in AAL environments. As such, we defined the following research questions.

- To what extent does activity awareness through a bidirectional activity-based system impact the synchronization of the counterpart's activity level with that of the caregiver?
- How does the activity level of an actor (caregiver) modulate the activity levels of their counterpart?
- What are the implications of the bidirectional activity-based system on
  - social connectedness,
  - social presence,
  - context-awareness,
  - information clarity,
  - attentional engagement, and
  - the users' willingness to adopt the system?

Participants

Participants were recruited through personal networks and referrals from a retired professor and an engineer in the Netherlands. Notably, both the retired professor and the engineer acted as proxies representing prospective elderly recruits. Thus, before experimentation, all system requirements, designs, prototypes, and the study design were repeatedly cross-validated with these proxies. This was done as a measure to guarantee system functionality, user comfort, and privacy so that they could proceed with the recruitment. Overall, twenty-four persons (twelve pairs) participated in the study. The following are the criteria for the inclusion and exclusion of participants in this study.

- Equal numbers of younger adults and elderly participants are essential for this study.
- Prospective younger adults should be over 18 years of age, while prospective older adults had to be over 65 years of age.
- All prospective older adults should be relatively healthy, with no history of chronic, motor, or mental diseases.
- All prospective older adults should live independently and demonstrate the ability to execute their ADLs on their own.
- Equal numbers of male and female participants are valuable for this study.
Each participant was assigned to one of two distinct user groups: (i) the caregiver - who is expected to execute a series of scripted activities while simultaneously maintaining awareness of their counterpart through the proposed bidirectional activity-based system, and (ii) the counterpart - who, upon receiving the caregiver's activities via the ambient display, is expected to carry out their activities at their own free will. In this study, an elderly participant could serve as a caregiver, which was determined by the preliminary results in [25] showing evidence of elderly persons caring for their fellow elderly loved ones. The participant demographics are presented in Table 1. To preserve anonymity, caregivers are indicated by the letters A-L and their respective counterparts by the letters M-X, not by names.

Participants ranged in age from 21 to 75 (mean age = 47.8 and standard deviation = 20.8). In addition, we noticed that our sample was comprised of the relatively 'young elderly'. Participants were from different cultural backgrounds. In particular, the sample was dominated by the Dutch (58%), followed by the Chinese (17%), the Malaysians (13%), and a few (4% each) Ghanaian, Iranian, and Tanzanian participants. All participants except one pair were somewhat familiar with each other. For example, most elderly participants were members of clubs and societies for retired professionals in the Netherlands, while others were neighbours, friends, colleagues, or relatives. In addition, all participants were educated, having attained either secondary diplomas, bachelor, master, or doctoral degrees. No participant reported ill health. The experiment was conducted in English and Dutch to facilitate the Dutch-speaking participants. Participants received information about the protocol and provided their written, informed consent according to the Central Committee on Research Involving Human Subjects.

Experiment Set-Up

The experiment was conducted in two separate living labs at the Eindhoven University of Technology (TU/e). These rooms were each equipped with the following items: a sofa, dining table and chairs, books, a map of the building, a notebook and pen, music for relaxing, a coffee table, computers with Wi-Fi connection, dumbbells and exercise videos, refreshments, newspapers, games (puzzles, bowling, and diabolo), Philips Hue light orbs (which formed part of the room design), a Philips Hue bridge, a smartphone (with the custom-built Hue controller app, cf. Fig. 1), and a portable LED ambient display (cane for the counterpart and wallet for the caregiver). Figure 2 demonstrates the set-up of the rooms before and after the ambient displays were deployed, while Fig. 3 depicts sample game and exercise items in the rooms.
Adhering to the protocol for activity detection described in [20,24,25], our hybrid SVM-HMM HAR model, deployed in a central server subsystem, was used to detect six basic activities (standing, sitting, walking, walking upstairs, walking downstairs, and laying) from data received via a waist-mounted smartphone equipped with accelerometer and gyroscope sensors and an internet connection. Classified activities are saved on the server before they are sent to the Hue and LED controller subsystems. These controller subsystems are responsible for abstracting the detected activities into activity levels, mapping them to coloured lighting encodings, and finally transmitting them to the ambient display components of the bidirectional system. The ambient display components of the system are the Hue light orbs and the NeoPixel LEDs fitted on a wallet and a cane, as illustrated in Fig. 4. The displays render red coloured lighting for high activity levels (walking, walking upstairs and downstairs), green for passive activity levels (standing and sitting), and blue coloured lighting for the resting activity level (laying).
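As a rough illustration of the abstraction step performed by the controller subsystems, the sketch below collapses the six detected activities into the three activity levels and maps each level to an RGB encoding matching the colour scheme above. The dictionary and function names are hypothetical, not taken from the actual controller code.

```python
# Hypothetical sketch of the controllers' abstraction step: detected activities
# are collapsed into three activity levels and mapped to RGB light encodings.
ACTIVITY_TO_LEVEL = {
    "walking": "active", "walking_upstairs": "active", "walking_downstairs": "active",
    "standing": "passive", "sitting": "passive",
    "laying": "resting",
}

LEVEL_TO_RGB = {          # red = active, green = passive, blue = resting
    "active": (255, 0, 0),
    "passive": (0, 255, 0),
    "resting": (0, 0, 255),
}

def light_encoding(detected_activity: str) -> tuple:
    """Return the RGB triple to broadcast to the Hue bulbs or LED strip/ring."""
    level = ACTIVITY_TO_LEVEL.get(detected_activity, "passive")
    return LEVEL_TO_RGB[level]

print(light_encoding("walking_upstairs"))  # -> (255, 0, 0)
```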
Evaluation Measures

- Social Connectedness - Participants rated their perceptions of their feelings of relational closeness toward their counterpart using the IOS scale [3].
- Social Presence - Participants evaluated their sense of co-presence, perceived attentional engagement, and their perception of behavioural interdependence using an adapted version of the Networked Minds Social Presence Inventory developed by [38].

Experiment Protocol

We employed a repeated-measures design [32] with one independent variable, namely the interaction style (with activity-based ambient light and with white light). There were two experimental conditions, one per interaction style, each lasting 30 min.

- With activity-based ambient light - there is a bidirectional exchange of activity level information between the caregiver-counterpart pair using smart objects such as the Philips Hue, an LED cane, and a wallet. This is the intervention condition.
- With white light - there is no exchange of activity information between the caregiver and their counterpart. This is the control condition.

In both conditions, the caregiver followed a script and performed a similar sequence of activities. To minimize carry-over and order effects, we counterbalanced interaction styles using an AB-BA format [32]. There were two experimenters to facilitate this study. The dependent variables examined include (i) the synchrony of activity levels - interpersonal activity synchrony (on the part of the counterpart), (ii) context-awareness, (iii) social connectedness, (iv) social presence (behavioural interdependence, i.e., the counterpart's synchronized actions with the caregiver), (v) information clarity, (vi) attentional engagement, and (vii) system adoption.

Prior to the experiment, the experimenters ensured that the server was properly communicating with all subsystems. Thereafter, a meet-and-greet session was held with each caregiver-counterpart pair. The experimenters elaborated on the experimental details, such as the significance of the light encodings, experimental conditions, measurement instruments, and ambient displays, and moderated the signing of the informed consent forms. Each caregiver-counterpart pair was then fitted with the waist-mounted smartphone. Subsequently, both the caregiver and their counterpart were placed in two separate living labs. Upon arrival, participants were orientated to their environment and told that they were not limited to remaining indoors during each condition. In particular, caregivers were encouraged to follow a script comprising five activities, each lasting six minutes. Caregivers were also advised to execute the activities in sequential order. An example of the scripted sequence of activities is given below.

1. Read a book or the newspaper, or browse the internet
2. Do some physical exercise
3. Do some mental activity, e.g., a puzzle
4. Take a stroll
5. Lie on the couch

In contrast, the counterparts were not expected to follow a script. Instead, they were given a deck of activity cards (see Fig. 5, indicating the types of activities they could perform within the experiment), bearing in mind that there were no restrictions on the order or the time spent in a particular activity. Additionally, counterparts were instructed to record the sequence of activities performed and the time spent in each activity in the notebook provided. This was done to establish the ground truth in a minimally invasive way. After the experiment preliminaries were completed, participants ranked their assessment of relationship closeness with their counterpart in a pre-test. Each experimental condition lasted 30 min. At the end of each experimental condition, all participants completed a post-test ranking their interpersonal closeness with the IOS scale. Following the completion of both experimental conditions, participants ranked their experience of social presence using an adapted version of the social presence questionnaire [38] and thereafter participated in a post-evaluation interview, which was audio-taped. Interviews conducted in Dutch were facilitated and translated with the assistance of a native Dutch speaker.

Quantitative Results

The results from both interaction styles, i.e., (i) with activity-based ambient light and (ii) with white light, were analysed using the R Project for Statistical Computing. The analytical methods and research outcomes are presented and discussed below.

Clarity of Perceived Bidirectional Activity Levels

From the shorthand definition of social presence [8], it can be inferred that an understanding of a mediated body's intentional states is an important prerequisite for promulgating social presence in mediated environments. Figure 6 shows a scatter plot of the clarity of the information perceived in both interaction styles. Noteworthy differences were found in the reports of information clarity with respect to the perception of activity levels in the activity-based ambient light interaction and that of white light. Statistically, a one-way ANOVA with repeated measures gave F(1, 23) = 70 and p = 1.97e−08. Furthermore, by computing the η²p (partial eta squared) measure, we obtained an effect size of 0.75, which is substantial according to the recommendations for the magnitude of effect sizes by [56]. From the results, we can infer that the information portrayed in the "activity-based ambient light" interaction was clear and meaningful. However, this will be confirmed later by the qualitative results.
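The reported effect sizes can be reproduced from the F statistics alone, since partial eta squared relates to a repeated-measures F test through the standard conversion η²p = F·df_effect / (F·df_effect + df_error). A quick check against the clarity result above:

```python
def partial_eta_squared(F: float, df_effect: int, df_error: int) -> float:
    """Standard conversion from an ANOVA F statistic to partial eta squared:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (F * df_effect) / (F * df_effect + df_error)

# Reported clarity result: F(1, 23) = 70
print(round(partial_eta_squared(70, 1, 23), 2))  # -> 0.75
```

The same formula recovers the other effect sizes reported below (e.g., F(2, 46) = 16.25 gives 0.41, and F(1, 23) = 26.74 gives 0.54).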
Perceived Attentional Engagement

Our study findings in [21,25,26] indicated that the overuse of attentional resources was a marked limitation in those studies. A remarkable result to emerge from the data is that there were fewer accounts of attentional burden during system deployment. Figure 7 provides an overview of the subjective estimates of attentional resources utilized in both interaction styles. The scatter plots illustrate almost similar distributions between the "with white light" and the "with activity-based ambient light" interaction styles, with no statistically significant difference (p = 0.195) between them. Our findings appear to be well supported by the participants' qualitative accounts of multitasking, taking only occasional glances at their partner's activities to avoid distraction and concentrate on their primary tasks.

Relationship Closeness Pre- and Post-Interaction Styles

As discussed in our review of social connectedness, Van Bel et al. highlighted the feeling of closeness as a dimension of social connectedness [81]. Consequently, this measure was computed to determine the implications for interpersonal closeness with and without the activity-based ambient display. A one-way analysis of variance (ANOVA) with repeated measures revealed a statistically significant difference between the self-reported IOS pre- and post-experiments with F(2, 46) = 16.25 and p = 4.58e−06. In addition, computing the η²p measure yielded an effect size of 0.41, which is reasonably large according to the recommendations for the magnitude of effect sizes by [56]. Figure 8 portrays a box plot of the perceived relationship closeness pre- and post-interaction styles. From Fig. 8, it is apparent that the mean IOS declines during the white light interaction, in which there was no exchange of activity information between interaction partners. A pairwise comparison revealed a statistically significant difference in relationship closeness before stimulus exposure and following the interaction with activity-based ambient light, resulting in a p-value of 0.00251. Comparing the IOS ratings before exposure and after the interaction with white light did not reveal a statistical difference (p = 0.0568).

Estimation of Co-presence

The findings from the study in [25] point to the likelihood of experienced social presence - the feeling of being with the mediated other [8]. As we sought to validate this finding, participants gave their estimations of perceived co-presence in each interaction style. By deploying a one-way ANOVA with repeated measures, we obtained a statistically significant result with F(1, 23) = 26.74 and p = 3.05e−05. Moreover, using η²p we obtained an effect size of 0.54, which is relatively large according to the rules of thumb on the magnitude of effect sizes by [56]. From Fig. 9, it is apparent that there were more reports of experienced co-presence in the "activity-based ambient light" interaction when compared to the interaction "with white light". This finding reinforces the usefulness of bidirectional activity-based displays for stimulating social presence.
The Extent of the Caregivers' Influence on the Counterparts' Activity Levels

Behavioural interdependence is underlined as an important dimension of social presence [8]. Thus, self-reports of interdependent actions could complement the cross-correlation analysis on sensed activity data. Recall that this measure was only ranked by the counterparts, as caregivers were expected to strictly follow the activity script. A one-way ANOVA with repeated measures revealed a statistically significant difference in the reported influence with F(1, 11) = 10.24 and p = 0.00845. Also, calculating η²p showed an effect size of 0.48, which is large enough according to the rules of thumb on the magnitude of effect sizes by [56]. Figure 10 demonstrates the degree of symmetry of the counterparts' activity levels with that of the caregiver. Overall, counterparts reported that they were more motivated to coordinate their activity levels with those of their caregivers while interacting with the activity-based ambient light, in comparison to their interaction with white light. This confirms our assumption that a stimulus is necessary to create awareness and prompt a behavioural change to act upon the information received in mediated environments. In the case of the "with white light" interaction, the activity information was unknown, and hence there was no basis for coordination.

System Adoption

Following system deployment, we wanted to determine the number of participants who were interested in adopting the system in the long term. Logically, system adoption was only computed for the "with activity-based ambient light" interaction style. In this case, both the caregivers and their counterparts stated their perceptions of future system adoption. Their subjective attitudes toward adoption are depicted in Fig. 11. The findings suggest that participants were moderately inclined toward system adoption in the long run. Additional insights are further implied in the qualitative analysis.
Towards Interpersonal Activity Synchrony - The Caregiver's Influence on Their Counterpart's Activity Levels

To analyse interpersonal activity synchrony, we calculated the sample cross-correlation coefficient (CCF) [71] between the activity levels of caregivers and their counterparts for every 6-min interval that the caregivers remained in an activity level specified by the script. Due to time constraints, the script specified 5 activities to be performed within a 30-min interval. Therefore, activity levels were distributed equally in 6-min intervals. Note that resting, passive, and active activity levels were assigned the values 0, 1, and 2, respectively. As described in the system architecture of the bidirectional ambient display platform (cf. [24]), the server detected a maximum of two activities for every 5 s worth of data from the waist-mounted smartphone. This implies that a minimum of 2.5 s of sensor data was required in order to detect an activity. This introduced a minimum lag of 2.5 s (1 lag unit) and a maximum lag of between 5 s (2 lag units) and 7.5 s (3 lag units) for an activity to be collected, detected, and transmitted to a participant. The sample cross-covariance of the time-series variables x and y, representing the caregiver's and their counterpart's activity levels respectively, at time t, given a lag τ, was calculated as follows:

c_xy(τ) = (1/n) Σ_t (x_{t+τ} − x̄)(y_t − ȳ).

Given the sample cross-covariance, the sample cross-correlation (CCF) is given by:

r_xy(τ) = c_xy(τ) / (s_x s_y),

where n is the number of activity levels detected within a 6-min interval, x̄ and ȳ are the means of the activity levels of a participant pair (i.e., elderly-caregiver) within a 6-min interval, and s_x and s_y are the corresponding standard deviations. With negative lags, the caregiver is made to lead their counterpart, serving as a reference for analysing the activity synchrony of the counterpart. The sample CCF was calculated for each 6-min interval. Thereafter, the mean of the sample CCFs with lags −3 ≤ τ ≤ −1 was estimated for each interval.
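A minimal sketch of this lagged sample CCF is given below, assuming the normalization by the two standard deviations reconstructed above; the 5 s activity-level series are invented for illustration, and the helper name is hypothetical.

```python
import numpy as np

def sample_ccf(x, y, lag: int) -> float:
    """Sample cross-correlation between caregiver levels x and counterpart
    levels y at a given lag (negative lag: the caregiver leads).
    Cross-covariance at the lag, normalised by the two standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xm, ym = x.mean(), y.mean()
    if lag < 0:  # caregiver leads: pair x[t] with y[t + |lag|]
        pairs = zip(x[:n + lag] - xm, y[-lag:] - ym)
    else:
        pairs = zip(x[lag:] - xm, y[:n - lag] - ym)
    c_xy = sum(a * b for a, b in pairs) / n
    return c_xy / (x.std() * y.std())

# Activity levels sampled every 5 s within a 6-min interval (0=rest, 1=passive, 2=active)
caregiver   = [2, 2, 2, 1, 1, 0, 0, 1, 2, 2, 1, 0]
counterpart = [1, 2, 2, 2, 1, 1, 0, 0, 1, 2, 2, 1]
ccfs = [sample_ccf(caregiver, counterpart, lag) for lag in (-1, -2, -3)]
print(np.mean(ccfs))  # mean CCF over lags -1..-3, as in the analysis
```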
From the analysis shown in Fig. 12, we found no statistically significant pattern of activity synchrony between caregivers and their counterparts in the "with activity-based ambient light" interaction style. In some instances, we observed a significant positive correlation (indicating interpersonal activity synchrony), as in the case of the participant pair KW, but there were no other significant positive or negative correlations among the remaining cases. Consequently, these findings need to be interpreted with caution, as we are unable to make any significant assertions regarding interpersonal activity synchrony, given that there were also no consistent sample CCFs within and between interaction styles. Notwithstanding the lack of synchrony, we observed that in the activity-based ambient light interaction, counterparts were in most cases as active as or more active than their caregivers, whilst counterparts were frequently observed to be less active than their caregivers during the interaction with white light. Table 2 portrays the percentages of time partners spent in each activity level per interaction style, while Figs. 13 and 14 demonstrate the mean activity levels with white light and with activity-based ambient light. This finding, together with estimations of the influence of the receipt of caregivers' activity information on their counterparts, points to the possibility of interpersonal activity synchrony in long-term deployments of the system.

Discussion

In this experiment, we aspired to investigate how the exchange of activity information between two user groups (caregivers and their counterparts) would affect the following: interpersonal activity synchrony on the part of the counterpart, interpersonal relationship closeness, co-presence, behavioural interdependence, information clarity, attentional engagement, and system adoption. The results have further strengthened our confidence that the "with activity-based ambient light" interaction style was clearly more effective for affecting the social connectedness experience in the case of our experiment. The subjects reported increased sensations of relational closeness, co-presence, co-action with their caregiver partner in the case of the elderly, and information clarity during their interactions with the activity-based ambient display. Moreover, the idea of usable everyday objects for the bidirectional exchange of activity information was supported by a large number of participants. In addition, the findings on attentional allocation are in accordance with our intended goal to facilitate perception at a glance, thereby facilitating divided attention [46]. Regrettably, evidence of interpersonal activity synchrony was significantly weaker than anticipated in this short-term deployment. However, it is worthwhile to note that the assessment of interpersonal activity synchrony in mediated environments is not trivial. In fact, in [63] Rashidi et al. remind us of the difficulty of measuring ADLs, based on the assertion that the sequence and the way in which activities are performed may vary across individuals. From this claim, it is clear that this assumption holds true not only for the recognition of ADLs in general, but also for computing interpersonal activity synchrony using peripheral displays in AAL environments. Likewise, the authors in [30] also articulated their uncertainties regarding the extent to which synchrony can occur in mediated environments. Although there were spontaneous instances of interpersonal activity synchrony, as clarified in our qualitative analysis, we believe that 30 min was not enough to significantly affect interpersonal activity synchrony in mediated domains. Further work will focus on longer deployments to estimate the effects on interpersonal activity synchrony in mediated AAL environments. We will now present the qualitative findings, which were gathered to ascertain a subjective viewpoint and corroborate our findings on the aforementioned dependent variables ((i) interpersonal activity synchrony (on the part of the counterpart), (ii) context-awareness, (iii) social connectedness, (iv) social presence (behavioural interdependence, i.e., the counterpart's synchronized actions with the caregiver), (v) information clarity, (vi) attentional engagement, and (vii) system adoption) examined in the quantitative research.
Qualitative Results

Our analytical approach bears close resemblance to the procedure proposed by [73], such that interview transcripts were analysed and the findings were discussed and validated with a professional care support worker. Important ideas and suggestions provided by the domain expert were taken into account during the discussions. Exploiting the thematic analysis approach [9], two hundred and ninety-four statements were examined to identify major themes and sub-themes related to the users' impressions on perception, usability, system adoption, interpersonal activity synchrony, and envisioned system benefits, among others. These themes and sub-themes are now discussed.

Perceived Usefulness of Bidirectional Activity-Based Displays for Promoting Context-Awareness

The participant majority praised the system for its ability to raise context-awareness. In particular, most interviewees reported on the system's ability to trigger context-awareness owing to the following properties:

- peripheral features enabling divided attention
- information clarity
- respect for privacy and dignity rights
- simplicity and effortlessness
- portability and multifunctional everyday objects
- an implicit communication channel

From the quotations below, further insight can be gleaned on the implications of bidirectional displays for promoting social connectedness.

"I only looked at the light for a few minutes, and then I just started focusing on what I was doing while taking occasional glances." -Q

"It is rather clear what he is doing." -D

"I don't feel like someone is looking around for me. It is simple." -W

"The ability to carry the wallet around is good so you can see the activities of your partner whilst you are outside and when you are in the house you can see the lamp." -A

"The cane is so cool, I can carry it around, and it serves two purposes: one as a light and the other to access information anywhere." -M

"We sit in the same office and to know what he is doing I would have to look at him, and then I can see that he is working on his computer. Now, the lamp is a bit more discrete and is a good indicator of his activities." -F

"It would be nice to see what they are doing without FaceTime or taking too much time to feel their existence." -C

Uncertainty

Although a large number of participants acknowledged the potential benefits of the system, there were a few elderly participants who consistently expressed uncertainty regarding ambient technologies for social connectedness. During the investigation, it was apparent that one participant was technology-illiterate (e.g., he expressed his disdain for assistive devices due to technical inexperience). In addition, the significance of culture for adoption played an important role in the level of uncertainty and, ultimately, in another participant's disapproval of peripheral technologies.

"I never use a computer, in fact, I don't know how to use it. I don't even have a smartphone. I know there are technologies to call the doctor if you need help but I would never use them, I would rather use the telephone." -X

It is also evident that culture played a significant role in the adoption of peripheral technologies. In fact, some Dutch participants reflected response patterns that were highly individualistic in nature.

"I don't need to know what another person is doing every moment of the day." -X

"Okay, I see different lights showing me my partner's activities. Then, I didn't know what to do with it. What are the implications? Why do I need it?
If my mother were alive then maybe when she was ill it would have been useful, but I don't need it now to keep in touch with my friend." -W "It is not so important for me maybe there are positive effects but not for me.Most of the time, my wife and I we leave each other free, so I don't need it."-V Moreover, another respondent perceived that the system could easily disappear in the background, which he thought to be negative especially for both context-awareness and social connectedness. "I saw different lights, but it could be the same as having the television on and it fades in the background.If there are changes, then I wouldn't notice them and I wouldn't feel anything."-U Role of Perceptual Processes on the Experience of Social Connectedness and Context-Awareness Based on the responses we can infer a possible link between context-awareness and social connectedness, which is exemplified by Mr. A's comment. "Perceiving your partner's activity information makes you feel like you know their daily routines so you can form a mental pattern of what they do overtime and that can make you feel connected." -A Also, from Mr. A's statement, we can deduce the relevance of cognitive processes discussed in [21] (e.g., attention, perception, pattern recognition, and memory) as key concepts essential for facilitating context-awareness and social connectedness.Furthermore, it is imperative that the activity information received from the display is aligned with the user's mental model of their counterpart's activities.This is reflective of top-down processing as discussed by the authors in [31]. Moreover, from Mr. M's comment, we reckon the significance of habituation (cf.[36,85]) for social interaction in mediated environments.This is reflected in the following quotation. "I think if I get accustomed to observing someone else's activity then over time I would feel even more connected."-M Interactivity and Social Influence The opportunity to exchange activity information without communication media such as Skype, FaceTime, or text messaging was highly valued among younger participants.A possible explanation for their acceptance can be attributed to multiple references to separation by geographical distance from their parents.In general, a great deal of social presence was experienced between younger interaction partners coupled with sporadic occurrences of interpersonal activity synchrony between them.Furthermore, respondents elaborated on the potential social influences of bidirectional activity-based ambient systems and highlighted the effects on engagement by virtue of the cryptic nature of the display.Social Presence.Like the respondents in [25], most younger participants were very passionate about the system's indirect influence on social presence and by extension social connectedness.Example statements are given below. "I liked the fact that although we were in different places, I still felt like she was quite close to me. I knew what she was doing and I was wondering what she thought about my activities. I am quite anxious for us to discuss our activities later." 
-B "I feel like she is somehow with me indirectly."-O "With ambient light even though I was alone, I didn't feel alone.I think this will be useful for lonely people."-I Interpersonal Activity Synchrony.Most of the younger participants were captivated by the possibility of synchronizing their activities with their partner.Furthermore, the participant majority suggested that the exchange of activity levels could create intimacy and increase social interaction.This is encapsulated in the statement below. "It is nice to see what the other person is doing and that perhaps you can do the same things together to form some kind of bond." -R An interesting observation reflecting interpersonal activity synchrony of two interacting partners is demonstrated below. In one instance, the caregiver stated the following."There were times I had the feeling that he was doing what I was doing because when I was doing physical exercise, his light was also red." -G While her counterpart mentioned, "I had the impression that she was mirroring me especially when I was resting she was resting."-S This interaction is evidenced in Fig. 15. Social Influence: Persuasion Versus Peer Pressure.Although synchrony appears to be intriguing, some participants argued that it could potentially have positive and negative effects on social interaction.Positive influences include the system's functional role in persuading its users to engage in the same activities.An example statement is given below."When I saw that my partner was active it made me feel like I should have been active as well.Also, while I was exercising and she was relaxing I felt like I wanted to relax as well."-C On the other hand, a few participants stated that the system appeared to have adverse consequences resulting in social pressure to prevent embarrassment.In one instance, a participant mentioned that she was uncertain as to whether or not she should coordinate her activities with her partner.This is shown below. "There was a moment when I was sitting because I already finished exercising and I was going to read, but then she was engaged in a physical activity maybe exercising.I didn't want her to feel like I wasn't doing anything.I felt a bit embarrassed.She was doing something productive, and I was just there sitting.That's not good for my reputation."-O While another respondent was bothered by the system's persuasive effects as an implicit trigger point for stress. "For me, my mom always wants me to exercise and also my dad is trying to lose weight.So, if we are both home and my dad is exercising, then it could influence me to exercise also.But this could be silently stressful because I can see my father is exercising and I am either sleeping or eating a hamburger or watching TV.Then, I could feel a bit stressed."-N Mysterious Engagements.As highlighted in [25], some participants expressed a liking for the system's mysterious effects, which prompted them to mentally decrypt the exact nature of their partner's activities.This is reflected in the following statement. "Sometimes, I was guessing what my partner was doing. In some instances, I knew she was doing some kind of mental activity but I didn't know exactly what she was doing. I would say it was a bit mysterious." -Q The respondent further argued that the system's mysterious effects could stimulate communication through other communication media. 
Relevance to the Frail Elderly - "I'm Still Young, I Don't Need It Now"

Like the elderly respondents in [23], most elderly participants in this study commented on the relevance of the context-awareness systems for the frail elderly. These comments illustrate the tendency among our older participants to still feel young inside [2] by articulating their independence and stating how they demystified ageist stereotypes, e.g., ill-health, cognitive decline, feeling sad or lonely, and the lack of vigour or vitality discussed by the authors in [77]. Example statements are presented below.

"My wife and I are very active, so we don't need it now, maybe when we are older." -D

"I am alright, I am very capable of taking care of myself at home." -H

Risks and Emergency Management

As pointed out earlier, the majority of our younger participants were excited about the social connectedness benefits of the system. However, some participants were more focused on the context-awareness features, mainly for their potential in supporting the safety and monitoring of their elderly loved ones. One elderly participant was readily accepting of such systems because of her husband's current battle with dementia. As Mrs. T reflected on her husband's dementia, she stated the following.

"With lighting colour changes, I can easily observe my husband's activities while he is in another room without being present with him all the time." -T

Also, others mentioned the need for such systems for anomaly detection to identify irregular movement patterns of their elderly loved ones. These accounts are presented below.

"If my relative is sick then it is important to know if she is not moving at all." -I

"I want to know if something goes wrong." -N

Although the bidirectional activity-based ambient displays were designed to provide context-awareness and enhance social connectedness, some participants suggested that it is still necessary to provide emergency detection capabilities to complement the existing system. Additionally, a few participants suggested the need for an alarm feature for notifying caregivers in the event of an emergency.

"How can you distinguish between a person sleeping or an accident where someone has fallen on the floor? I think there is need of an extra indication for falling." -D

Also, Mr. K suggested that introducing additional physiological measurements, such as heart rate, along with an alarm system could assist professional care workers.

"A supervision system for a nurse monitoring several people could detect heart rhythm and send an alarm if something is wrong." -K

Furthermore, Mrs. T pointed out that an alarm system could assist with the monitoring of her husband with dementia, who tends to wander off outdoors.

"What if my husband wakes up from his sleep and starts moving? What if he wanders off outdoors? Maybe the system could signal an alarm once the front door is opened or illuminate all the colours at once to indicate some form of danger." -T

Design Suggestions

Overall, the design suggestions include ideas to offer more subtlety, humanize the display, improve aesthetics, battery life, and sensor comfort, add ancillary features such as vibration and sound, reduce sensitivity, and extend the system's scope.
Support Invisible Design. Going back to Weiser's vision of calm technology [86] ("those that disappear [...] They weave themselves into the fabric of everyday life until they are indistinguishable from it", p. 1), a few participants made recommendations to improve the subtlety of the design. The following comments suggest how this can be achieved through simplicity, smaller LEDs, and reduced brightness for portable displays.

"Although the wallet is useful and attractive, the light is quite obvious. Let's say you have to pay with the wallet, then everyone says hey, it's Christmas time! Therefore, a much simpler LED would be sufficient." -K

"Is it necessary to have such a long stick to receive information? Is it possible to have something smaller? I think that would be better." -W

Although most participants were enthralled by the implicit communication characteristics of the light, there were two exceptions. In fact, these participants expressed interest in more explicit interpersonal communication features. This is apparent in the quotations below.

"When you are in the same room with a person, and you feel like you want to talk, you can just talk to them. But in two different rooms, you cannot talk to the lights." -G

"Maybe, we can interact not only by changing activity states with the lights but also exchanging messages saying now let's get active." -C

Generally, participants were satisfied with the level of privacy offered by the system. Example accounts are given below.

"You have a feeling of connectivity indicating what the partner is doing without disturbing him with camera supervision. So, everyone is free to do what he or she wants while there is still a feeling that there is life, to say the least." -K

"It gives a good indication of what the other is doing. It is simple, and there is a certain privacy it provides. You don't feel observed." -P

Still, a few participants were fundamentally concerned with the potential privacy risks of ambient technologies. For example, Mrs. H remarked on the 'big brother is watching you' effect of the deployment of context-aware technologies in AAL environments.

"I won't like it if I lost my independence and someone can see if I feel okay or not. I would like to maintain my privacy as it's my right not to be okay. Someone else doesn't need to know. For me, it would feel like a 'big brother is watching me'. No, I wouldn't want to be constantly monitored so that someone can see how I feel. No, I don't like that." -H

Moreover, even though some participants were well aware of the privacy risks, they were more willing to trade privacy for security. For example, Ms. O argued in the following statement.

"It's kind of uncomfortable for me to know that my mother always knows what I am doing right now, but for both of us to determine if we are in a 'safe' state then this system is very good. We are too far away [...] I want her to know that I am okay." -O

Improve the Battery Life. Interestingly, one participant observed the battery limitations of the LED wallet. This is depicted below.

"I think it's a good system; however, the lifespan of the LED battery is short." -A

Recommendations for maximizing the performance of the battery life (e.g., exploiting devices that work at 1.8 V) were discussed in (IWANN).

Colour. Like the experimental results in [21,23,25,26], various participants desired the freedom of colour choice based on personal preference. Moreover, a few respondents were more in favour of exploiting green for resting and blue for passive states, while other younger participants were cognizant of the implicit association of red with danger, and a few expressed disturbance and restlessness with the colour red. As such, warmer colours such as orange were proposed as a replacement for red. Example statements are highlighted below.

"Intuitively, I would use green for a state of calmness and blue for mental activity." -R

"For physical activity, I would use orange or yellow, something warm." -C

Position of the Smartphone. Even though all older participants expressed their satisfaction with the waist-mounted smartphone, there were a few younger participants who expressed their discomfort. In hindsight, these participants expressed discomfort during physical activities, and one participant described her overall experience with the smartphone sensor as "burdensome". To rid themselves of the excess baggage, they proposed the following.

"The smartphone was a bit heavy. If it's on my personal smartphone it's okay, but if I have to carry an extra smartphone it might be too much." -F

"The smartphone could be in the pocket to prevent discomfort during exercise." -B

Vibration/Sound Effects. Although most participants were pleased with the peripheral nature of the system, a few were critical of the system's ability to sustain awareness during periods of high concentration. Accordingly, they prescribed additional sound or vibration effects to alert the user's attention and, in some cases, minimize the cognitive load. These recommendations are illustrated below.

"Maybe, add some vibration because when we are doing a mental activity we tend to focus and vibration could make us more alert." -E

"Maybe, I would add sound effects so that I wouldn't have to always look at the light." -L

Exploitation of Additional Everyday Objects. Although almost all the informants were positive toward our design choice of exploiting a cane and a wallet, there were two respondents who suggested other everyday objects, such as an ambient smartphone or an ambient ID/key card. Their propositions are encapsulated within the following comments.

"You can use something that's more portable, something like a mobile phone. Maybe you can use the Philips Ambilight TV as a reference." -R

"In the context of a caregiver, I wouldn't check my wallet all the time. They always carry an ID or a key card so some indication on those objects could be better." -C

Expanding the System Scope. A few participants were desirous of knowing the strength of the activity level, which could be illustrated with additional colours or changes in light intensity.
"I would increase the brightness based on the intensity of the activity."-W However, one participant articulated her preference for only two activity levels namely (i) active or (ii) inactive to reduce any misconceptions of an intermediate activity level.Recall that a similar abstraction is implemented in [25].Her citation is recorded below. "Sometimes I forgot the meaning of the green and wondered whether they were engaged in mental activities or not.I think it would be better to have active or inactive states."-C Remarkably, the temporal nature of activity information (cf.[21,26]) was reiterated by a young male informant when he voiced the following. "It would be nice if I could see a summary of the data so I can see what happened in the past."-S Moreover, one responded urged for an expansion of the system to support self-tracking."It's an interesting concept.However, I am more interested in knowing how I react when I am reading or sleeping or exercising.This would give me personal biofeedback."-B Design Considerations for Bidirectional Ambient Displays for AAL There were some key factors that emerged during the discussions with our participants, which include the following. - Context and Purpose. From the commentaries, we observed that a few younger adults highlighted that the context and purpose of the system could affect adoption.Importantly, one young person stated that the system was only relevant for context-awareness only if her elderly relative was ill.Otherwise, it could be distracting. "It depends on the situation if my relative is sick, then I will use it. But if I don't need to know what she is doing then it would be disturbing for my own life. So, the purpose is important." -I With reference to situational context, another young person mentioned its relevance only in the home. "Also, context is important if I am at home and they are at home then possibly it is okay.If I am at work and they are at work, then I don't need to know what they are doing.What's important is that they are okay."-N However, in the home context, the user further expressed privacy concerns in the following statement. "The thing is sometimes I sleep late and I wouldn't want them to know that.In truth, there are some things that I need to hide.I wouldn't want them to call and say why are you sleeping so late?" -N To address privacy and situational context concerns, one participant suggested a service upon request functionality to maintain the right to control, access, and disseminate activity information at his convenience. "I would use the lamp when it's a service on request so I should be able to control the functionality.It's a personal system so it should be visible to others only if I want to show them."-S Reverting to N's reference on the importance of situational context, she also mentioned that consideration should be given to the time zones of two interacting partners for successful adoption. "For me, I need to consider the time zones because sometimes when they are sleeping I am active and vice versa.Sometimes it would be disturbing for them."-N Thus, by extension, we believe that the time-zones can affect the degree of synchrony between two interaction partners. Spatial Position and the Stability of Social Bonds. From the remarks, we see that spatial position can change how the information is perceived and the degree of experienced social connectedness. 
"In a real life situation, the positioning of the light in the room would be extremely important."-P "I didn't really feel the connection with the light maybe because of the location of the lamp."-E Besides, both P and E shared similar perspectives that perception and social connectedness are not only determined by the spatial position but also the stability of the emotional connection, which serves as a motive for observing the display consequently affecting how deeply the information is processed. "In fact, I think the real connection outside the experiment will influence the results.If I don't have a good relationship with the partner, then I won't feel anything."-E "If there is an emotional connection between the person in the other room or the person that you are taking care of.Then, there is a positive motivation to look at the lights."-P Aesthetics.In a general sense, aesthetics was a major perceived benefit of the installation of the ambient displays.Thus, in designing bidirectional ambient technologies consideration must be given to the aesthetic needs of the participants.In retrospect, the participants postulated that the light's aesthetic properties created a pleasant atmosphere, fostered creative thinking through its mysterious effects, and led to elements of surprise, and more fun and playful interactions.Example remarks are demonstrated below.Figure 16 demonstrates a participant's interaction with the cane."I think the cane is an eye-catcher for the elderly.I think it's is nice and I like the fact that it surprises me." -S "It was very fun and playful!You can use it for special activities in the home."-T Also, C contends the prescriptive interpretation of Sullivan's notion of 'form follows function' [75] as she suggests that form is attuned to function in the statement below. "It indicates the partner's activities and these colours add a certain ambiance to the room." -C However, Lidwell et al. [48] assert that the prescriptive interpretation of 'form follows function' "aesthetic considerations in design should be secondary to functional considerations" (p.106). General Discussion Overall, our participants identified several aspects that they found positive about the bidirectional activity-based ambient displays.Most participants could multitask, feel a sense of their partner's presence, access the activity information any and everywhere, understand the information received, enjoy an implicitly shared interaction, coordinate their activities to some extent, and maintain their privacy.Altogether, we can deduce from our findings that the process of experienced context-awareness and social connectedness among our participants included five phases: (i) visual perception, (ii) attention, (iii) memory, (iv) curiosity, and (v) habituation.Subsequently, the bidirectional exchange of activity information may consciously or unconsciously affect behavioural responses as depicted by the periodic accounts of interpersonal activity synchrony within this study.These irregular instances of coordinated actions could spark interest for further inquiry on the possibility of interpersonal activity synchrony in mediated AAL environments. On the negative side, a few persons desired increased sensor comfort, more discreet portable displays while some felt that ambient technologies were an invasion of their personal privacy.To address privacy concerns, one informant suggested the addition of a "service upon request feature."Likewise, Hoof et al. 
[41] recommended that the user has complete control over the information collected and distributed about him in smart home environments.

On a different note, the most striking result to emerge from the discussion was the consistent reference to safety and monitoring systems. In fact, this was not surprising, as the sense of safety and security in AAL environments has been a recurring theme throughout this doctoral research. A possible interpretation for this recurrence can be found in Maslow's hierarchy of needs, such that safety and family security needs precede the need for love and belonging [53]. Accordingly, we can infer that once our participants can guarantee the safety of their loved ones, they can proceed to other forms of interaction to create a sense of belongingness in mediated AAL environments. As such, our design challenge has now become greater, given that the system scope has stretched beyond the main goal of promoting social connectedness through bidirectional ambient displays.

Going back to Mr. A's statement regarding a mental pattern of the partner's routine activities, it is clear that participants refer to their mental model as a reference for understanding their partner's activities. Consequently, this raises the challenge of designing peripheral technologies that are coherent with the user's mental model. Norman suggests that misfortune could arise if the 'system image' is incoherent with the user's conceptual model [60]. Thus, the information portrayed should match the user's conception of their partner's activities. To address this, one could deploy highly accurate machine learning classification algorithms. However, system trust is critical for determining the match between the information presented and the user's conceptual model. Also, if there is no system trust, then challenges with learnability and usability could emerge.

From our findings, technical literacy and cultural values can shape the users' experience of interacting with the system. Recall that our bidirectional activity-based system exploits ambient technologies and IoT to create awareness and maintain social connectedness between two interaction partners in AAL. Thus, Demiris et al. [29] highlight that inadequate technical literacy could impede the process "because the discussion of security and privacy concerns or issues of accuracy and reliability of sensor systems or other computing applications often require basic understanding of networking and data transfer" (p. 110), driving the need for technological literacy interventions in AAL.
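The discussion throughout refers to a fixed encoding of activity level onto light colour (red for physical activity, green for mental activity, blue for resting, per the participants' quotes and Figs. 14-15). The sketch below illustrates one way such an encoding could be wired up; the classifier thresholds and the `set_led` callback are hypothetical placeholders, not the system's actual implementation.

```python
# Minimal sketch of the activity-to-colour encoding described by participants:
# red = physical activity, green = mental activity, blue = resting.
# Thresholds and the `set_led` callback are assumptions for illustration only.
from enum import Enum

class Activity(Enum):
    RESTING = "resting"
    MENTAL = "mental"
    PHYSICAL = "physical"

COLOUR_MAP = {
    Activity.RESTING: (0, 0, 255),    # blue (cf. Fig. 15)
    Activity.MENTAL: (0, 255, 0),     # green
    Activity.PHYSICAL: (255, 0, 0),   # red
}

def classify(accel_magnitude: float) -> Activity:
    """Toy classifier on accelerometer magnitude; thresholds are assumed."""
    if accel_magnitude < 0.1:
        return Activity.RESTING
    if accel_magnitude < 1.0:
        return Activity.MENTAL
    return Activity.PHYSICAL

def update_display(accel_magnitude: float, set_led) -> None:
    set_led(COLOUR_MAP[classify(accel_magnitude)])

update_display(1.7, set_led=print)  # -> (255, 0, 0), i.e., red
```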
Conclusion and Limitations

To strengthen our assessment of the behavioural implications of bidirectional activity-based displays, this chapter provides a background on interpersonal activity synchrony. Based on the knowledge acquired from prior works, it was possible to evaluate interpersonal activity synchrony by computing the cross-correlation coefficient of the counterpart's activity levels with those of their caregiver. The results of a semi-controlled study suggest higher incidences of subjective interpersonal relationship closeness, experienced social presence, behavioural interdependence (for the counterpart only), information clarity, and willingness to adopt the technology, while utilizing minimal attentional resources with the activity-based ambient light interaction style. However, there was hardly any occurrence of interpersonal activity synchrony according to the cross-correlation approach. Nonetheless, during the post-trial interview, a few participants reported sporadic moments of synchrony during their interaction with the activity-based ambient light. Furthermore, in the said interaction style, counterparts demonstrated increased tendencies to remain active in contrast to their interaction with white light.

It is plausible that some limitations could have influenced the results of this study. To begin with, we acknowledge convenience sampling as a constraint of this work. Accordingly, the findings are not entirely representative of all users within the AAL community. To heighten the interest in our system, one option for future work is to restrict the inclusion criteria to the frail elderly, e.g., those with Parkinson's disease, Alzheimer's disease, or even epilepsy. We know this would reduce the population of our study; on the other hand, it could increase the interest in our system.

We are aware that a larger stream of activity data is necessary to better estimate interpersonal activity synchrony in mediated environments. This can be achieved by increasing the number of participants and deploying a significantly longer experiment in the users' natural environments.

Unfortunately, the self-awareness of the wearable smartphone sensor from [25] is still an open problem that will be addressed in future work. Notably, if our algorithms were independent of orientation and location, this could be one of the best contributions in the field of activity recognition for AAL. There have been some attempts, but they are very limited.
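The synchrony measure described above is a lagged cross-correlation between two activity-level time series. Below is a minimal sketch of such a computation, assuming both series are already sampled on the same regular grid; the function and variable names are illustrative, not the chapter's actual code, and the per-lag coefficient uses a global normalization (an approximation to a per-lag Pearson coefficient).

```python
# Sketch: cross-correlation between caregiver and counterpart activity levels,
# scanned over a window of time lags. Assumes a shared sampling grid; the
# normalization is global, an approximation to per-lag Pearson correlation.
import numpy as np

def xcorr(caregiver: np.ndarray, counterpart: np.ndarray, max_lag: int):
    """Return (lags, correlation coefficient at each lag)."""
    x = (caregiver - caregiver.mean()) / caregiver.std()
    y = (counterpart - counterpart.mean()) / counterpart.std()
    lags = np.arange(-max_lag, max_lag + 1)
    coeffs = []
    for lag in lags:
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        coeffs.append(np.mean(a * b))
    return lags, np.array(coeffs)

rng = np.random.default_rng(0)
care = rng.random(600)                             # toy activity levels
cpart = np.roll(care, 5) + 0.3 * rng.random(600)   # echoes caregiver 5 samples later
lags, c = xcorr(care, cpart, max_lag=30)
print("peak lag:", lags[np.argmax(c)])             # near -5 under this sign convention
```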
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

[Figure and table captions: Fig. 2. Snapshots of the experiment set-up pre- and post-deployment of the ambient displays. Fig. 3. An illustration of the sample games and exercises available in the rooms. Fig. 4. A pictographic representation of the activity-based ambient display components captured during experimentation (colour figure online). Fig. 5. A snapshot describing the possible activities which could be performed in the experiment. Fig. 7. Scatter plot of the estimated attentional resources utilized per interaction style. Fig. 9. Scatter plot illustrating the extent of co-presence between participant pairs. Fig. 10. Scatter plot showing the extent of the caregivers' influence on the counterparts' activity levels. Fig. 11. Scatter plot representing the subjective ratings on system adoption. Fig. 13. Mean activity levels in the interaction with white light. Fig. 14. Mean activity levels in the interaction with activity-based ambient light. Fig. 15. Pictorial representation of interpersonal activity synchrony: the counterpart is in a resting state while his caregiver is also in a resting state, as depicted by the blue light; snapshot captured during the experiment (colour figure online). Table 1. Demographic characteristics of participants. Table 2. Percentage of time spent in activity levels.]
Reducing the Duration and Improving Hospitalisation Time by Using New Surgical Techniques and Psychotherapy

ANDRA GEORGIANA TOMA1, PAUL SALAHORU2*, MARIUS VALERIU HINGANU2*, DELIA HINGANU2, LUCIA CORINA DIMA COZMA3, ALEXANDRU PATRASCU5, CRISTINA GRIGORESCU4

1 Alexandru Ioan Cuza University, Faculty of Psychology and Education Sciences, 11 Carol I Blvd., 700506, Iasi, Romania
2 Grigore T. Popa University of Medicine and Pharmacy, Faculty of Medicine, Anatomy Department, 16 Universitatii Str., 700115, Iasi, Romania
3 Grigore T. Popa University of Medicine and Pharmacy, Faculty of Medicine, Internal Medicine Department, 16 Universitatii Str., 700115, Iasi, Romania
4 Grigore T. Popa University of Medicine and Pharmacy, Faculty of Medicine, Thoracic Surgery Department, 16 Universitatii Str., 700115, Iasi, Romania
5 Grigore T. Popa University of Medicine and Pharmacy, Faculty of Medicine, Orthopaedic Surgery Department, 16 Universitatii Str., 700115, Iasi, Romania

As far as surgical techniques are concerned, video-assisted thoracoscopy offers considerably improved results compared to classical surgical techniques. This technique appears to reduce the parameters reflecting the period of hospitalisation, exposure to analgesics, the patient's recovery rate, and the level of pain felt. Studies have demonstrated that hypnosis can significantly improve the duration of exposure to pain as well as the intensity perceived by the patient. The literature reports cases in which hypnosis has proven effective in the preoperative preparation of surgical patients. Furthermore, by applying hypnosis, positive results are found during the postoperative care of patients. This study examines the results of video-assisted surgical techniques and the opportunity of integrating complementary therapeutic elements, such as hypnosis, to improve the parameters of interest in the perioperative care of the surgical patient.

Keywords: surgical techniques, psychotherapy, thoracoscopy, hypnosis, pain

One of the main goals targeted by the application of different surgical techniques, as well as by the introduction of therapeutic elements specific to postoperative care, is the reduction of the period of hospitalisation. The patient's comfort also depends on the period of hospitalisation, which in turn has a significant impact on the total costs incurred during therapy [1,2]. Therefore, over time, the therapeutic elements used in treating surgical patients have been in constant evolution, with new approaches constantly emerging to increase convenience for patients and improve outcomes [3,4].

As far as surgical techniques are concerned, video-assisted thoracoscopy offers considerably improved results compared to the classical surgical technique [5]. This technique appears to reduce the parameters reflecting the period of hospitalisation, exposure to analgesics, the patient's recovery rate, and the level of pain felt [6,7]. Besides video-assisted thoracoscopy, a number of non-invasive interdisciplinary approaches can be used which, according to the literature, can improve the parameters mentioned above [8,9]. Studies have demonstrated that hypnosis can significantly improve the duration of exposure to pain as well as the intensity perceived by the patient. The literature reports cases in which hypnosis has proven effective in the preoperative preparation of surgical patients.
Furthermore, by applying hypnosis, positive results are found during the postoperative care of patients [10]. The positive effects of hypnosis used in pain management in surgical patients arise from the subjective nature of pain. Thus, one of the factors that influence the appearance of pain is its anticipation by the patient. From this point of view, we can say that the extent to which pain is felt by the patient can be influenced by techniques that manipulate elements specific to the placebo phenomenon [10]. This study examines the results of video-assisted surgical techniques and the opportunity of integrating complementary therapeutic elements, such as hypnosis, to improve the parameters of interest in the perioperative care of the surgical patient.

Experimental part

Materials and methods

For this study, a total of 643 patients were hospitalised in the Thoracic Surgery Clinic of the Pneumophysiology Hospital in Iasi. All hospitalised patients had diagnoses that led to surgery. Surgery was performed with a curative, diagnostic, or palliative purpose, depending on the indication being assessed. Regardless of the surgical indication, the present study evaluates the dynamics of medication administration and the duration of hospitalisation, depending on the type of surgery performed in each patient. Through these two parameters, conclusions can be drawn regarding the level of pain experienced by the patient in the postoperative period and the rate of recovery following surgery. This study also aims at finding solutions for the improvement of these two parameters by applying adjuvant techniques in perioperative therapy.

Results and discussions

The 643 patients enrolled in the study were hospitalised, presenting various diagnoses that required three categories of surgery, depending on necessity and purpose: palliative, curative, or diagnostic interventions. Figure 1 shows the numerical distribution of patients according to the type of surgery. From Figure 1 we observe that most of the surgical interventions (48.52%) were performed for diagnostic purposes, 36.23% were performed for curative purposes, and the lowest number (15.24%) was performed for palliative considerations. The distribution of the type of surgical technique in relation to the nature of the surgical indication is shown in Table 1.

Table 1 highlights that, regardless of the nature of the surgical indication, the surgical technique used was distributed approximately uniformly from a numerical point of view. A total of 330 patients (51%) benefited from classical surgery, and the number of patients treated with the VATS technique was 313 (49%). Thus, we can consider that the results obtained are relevant for drawing conclusions about the effectiveness of video-assisted thoracoscopy in reducing the duration of hospitalisation, as well as the amount of analgesia administered. To understand the differences in the parameters evaluated in the study, depending on the surgical technique used, we analysed the results shown in Table 2. The data presented in Table 2 highlight major differences in the amount of analgesic administered (expressed as duration of exposure to analgesics), as well as in the hospitalisation period, in patients who benefited from VATS compared with patients operated on in the classical way.
Thus, in the case of patients who underwent palliative intervention, the duration of analgesic administration was on average 8% lower and the duration of hospitalisation was reduced by 15%. Patients undergoing curative surgery needed, on average, an analgesic administration period reduced by 46%, and the hospitalisation period was reduced by half. By applying VATS, the best results were obtained in patients receiving surgery for diagnostic purposes: the average duration of analgesic administration was 54% lower, and the hospitalisation period was 68% shorter. Taking all these data, obtained through descriptive statistics, into consideration, we can see that it is possible to reduce the amount of analgesics and the duration of hospitalisation by using VATS. Thus, we assert that video-assisted thoracoscopy has notable benefits in reducing pain, as well as in increasing postoperative recovery rates. These two parameters can be further improved by using adjunctive elements, represented by hypnosis techniques, to reduce pain and improve postoperative recovery, as presented in studies conducted so far.

The diagnostic goal of the surgical interventions was imposed by the necessity of obtaining biopsy specimens in the majority of the evaluated patients. Palliative interventions were indicated in patients with neoplastic disease that caused secondary lesions in the pleura, with neoplastic pleural effusion. In these cases, surgery improved the clinical condition of patients by eliminating the pleural effusion and ameliorating respiratory function. Curative surgical interventions were indicated in patients with non-neoplastic pleural effusions, mostly post-traumatic (haemopneumothorax). Surgical interventions were performed either by classical surgery (open surgery) or by video-assisted thoracoscopy (VATS). In order to understand the results obtained by video-assisted thoracoscopy in relation to the two parameters evaluated in this study, the patients were divided into two groups: patients who benefited from classical surgery and patients who benefited from VATS surgery. The numerical values reflecting the two parameters are those summarised in Table 2.

Our study attempts to evaluate the opportunity of introducing hypnosis as a complementary therapy in thoracic surgery. From analysis of the literature, the benefits of hypnosis include: increasing the capacity to overcome anxiety, improving anaesthesia results, increasing analgesia, increasing pain resistance, and improving post-operative recovery. The definition proposed by Montgomery characterizes hypnosis as an agreement between a person designated as a hypnotist and a person designated as a patient or client, who participates in a psychotherapeutic technique that produces suggestions for changing sensation, perception, cognition, affection, mood, or behaviour. The primary and crucial element that distinguishes hypnosis from meditation or relaxation is the use of suggestion. Patients asked about this experience describe the following: modified body image, time distortion, dissociation, feelings of relaxation and peace, focused attention and increased positive affect, but diminished memory and self-awareness. From a clinical point of view, there are three phases during the hypnosis session, starting with induction, followed by therapeutic suggestions, after which the hypnotic state appears [10]. This definition emphasizes the relationship between the hypnotist and the patient, a prerequisite for anyone who practices hypnosis [10]. Similarly, in a randomized study, Defechereux et al.
found lower levels of pain, less fatigue, improved recovery, and a low inflammatory response (IL-6) one day after surgery with the use of hypnoanalgesia compared to general anaesthesia. Hypnosis techniques improve intraoperative comfort and reduce anxiety, pain, and intraoperative requirements for anxiolytic and analgesic drugs, while providing optimal surgical conditions and faster recovery rates [11]. In a randomized study, Lang et al. showed that hypnosis is beneficial during invasive medical procedures by lowering symptoms of pain and anxiety, improving haemodynamic stability, and shortening surgery times [11]. Studies conducted by Faymonville et al. show a significant reduction in pain score, anaesthesia use, and postoperative nausea when patients underwent local anaesthesia and hypnosis compared to general anaesthesia [11]. A case report describes the surgical treatment of a young female patient suffering from pilocytic astrocytoma; the patient refused to have dental treatment under general anaesthesia and demanded hypnosis. As a result of the high acceptance of hypnosis by patients undergoing oral and maxillofacial surgery, Hermes et al. (2002) established a hypnosis procedure incorporated into the activity of the department. Until 2007, 400 traumatological, oral, and reconstructive surgeries were performed under local anaesthesia combined with hypnosis [11].

Hypnosis with an anaesthetic aim has been used in the following medical procedures: maintaining a leg graft when it was important for the patient to be kept in a fixed position for a long time, and specific cases involving laminectomy, thyroidectomy, vein suturing, recto-vaginal fistula surgery, haemorrhoidectomy, hysterectomy, pneumonectomy, mitral commissurotomy, and cardiac dysplasia. The hypnotic procedure is meant to minimize bleeding, stimulating constructive attitudes, hope, and willingness to recover [12]. Among the most useful suggestions that may be given to improve the postoperative recovery of a patient are: a) lack of nausea, b) the ability to cough without pain, c) increased fluid intake and appetite for food, d) a lower need for narcotics and sedatives, and e) control of the vomiting and hiccup reflexes. Ideally, they should be given prior to the hypnosis intervention, but they may be given while the patient is still on the table [12].

Marmer (1959) notes the classic indications of hypnosis in anaesthesia and claims it to be useful in: a) the ability to overcome anxiety and fear, b) providing a more comfortable reaction during anaesthesia and postoperative recovery, c) increasing resilience to pain, and d) the ability to induce anaesthesia and analgesia [12]. Henry Bennett (1993) developed and tested new techniques of surgical hypnosis, such as: a) reducing the amount of chemical agents required for anaesthesia, or their complete replacement; b) reducing the amount of blood lost during surgery; c) restarting blood circulation in the affected area during the healing process; and d) increasing healing speed and reducing recovery time after surgery. Bennett believes that hypnosis can also be administered to patients with an average level of hypnotisability [12]. According to the literature, we observe a number of positive effects of hypnosis: a lower amount of medication before surgery increases the ability to recover more quickly after surgery, eliminates the initial fear and post-traumatisation, and decreases the chances for the patient to develop long-term anxiety, neurosis, or phobia [12].
Taking into consideration the literature in which hypnosis is presented as a potential anaesthetic and analgesic, it is appropriate to carry out further studies aimed at assessing the effects of hypnosis in thoracic surgery. It is also essential to monitor the results obtained from the application of video-assisted surgical techniques, along with complementary therapies such as hypnosis techniques, for reducing pain and increasing patient comfort in the perioperative period. It is also necessary to consider hypnosis as an alternative to drug anaesthesia, which has the potential to reduce the amount of administered chemical agents, with effects on the cost of therapy as well as on the health of patients [12].

Conclusions

Video-assisted thoracoscopy brings notable benefits in reducing pain and shortening postoperative recovery time. These two parameters can be further improved by using adjunctive elements, represented by hypnosis techniques, to reduce pain and improve postoperative recovery. Both video-assisted surgery techniques and hypnosis can be applied with the aim of improving the recovery of patients. Positive results regarding total hospitalisation costs can also be obtained using the two techniques. A lower amount of drugs before surgery increases the probability that patients will recover faster after surgery, eliminating initial fear and post-traumatisation, and decreases the chances of patients developing long-term anxiety, neurosis, or phobia. It is necessary to consider hypnosis as an alternative to drug anaesthesia, which has the potential to reduce the amount of administered chemical agents, with effects on the cost of therapy as well as on the health of patients. The hypnosis procedure is meant to minimize bleeding, stimulating constructive attitudes, hope, and willingness to recover. It is appropriate to conduct new studies to assess the effects of hypnosis in thoracic surgery.
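The headline comparisons behind Table 2 reduce to percentage reductions of mean durations between the two surgical groups. The sketch below shows such a computation; the raw durations are hypothetical, since the paper reports only the resulting percentages.

```python
# Sketch of the descriptive comparison behind Table 2: percentage reduction
# in mean duration (analgesic exposure or hospitalisation) for VATS vs. open
# surgery. The raw values below are hypothetical; the paper gives only
# the resulting percentages.
from statistics import mean

def pct_reduction(open_vals, vats_vals):
    return 100.0 * (mean(open_vals) - mean(vats_vals)) / mean(open_vals)

hospital_days = {
    "open": [12, 14, 11, 13],   # hypothetical stays (days), open surgery
    "vats": [4, 5, 4, 4],       # hypothetical stays (days), VATS
}
reduction = pct_reduction(hospital_days["open"], hospital_days["vats"])
print(f"hospitalisation reduced by {reduction:.0f}%")
```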
School Building Defect Pattern

In providing a conducive learning environment for students, the school building must be in good condition. This paper evaluates the existing condition of primary school buildings in Sarawak, Malaysia, focusing on defect patterns in school buildings. The primary data come from a school building condition survey involving 24 primary schools. The schools were selected using simple random sampling and stratified sampling (with school age as the variable of selection). The reporting method is based on the Condition Survey Protocol (CSP) 1 Matrix, and the data analysis covers descriptive and inferential statistics. The analysis identified 4,725 defects overall. The defect pattern is concentrated on the Ground Level, with 3,176 defects; the component with the highest number of defects is walls (798). Cracks account for 16.2% of the eleven common defect types, and the highest defect scores by building age were recorded for buildings in the range of 11 to 20 years.

Introduction

The condition of school buildings is a very important factor influencing the school environment and should be evaluated. However, there is a lack of published research in this field in Malaysia, and the evaluation of school buildings has not been formally developed. Since the school building is the main asset in the learning process, information on its current condition is very important for the school's management. This study focuses on assessing the condition of school buildings, one of the key processes in the life cycle of comprehensive asset management and facilities management. This assessment is important so that the building asset is capable of supporting a school's core operations, which need to run efficiently and effectively in providing a quality learning environment to school users. This paper discusses the evaluation of school building condition based on the CSP1 Matrix's assessment and analysis, focusing on defect patterns in school buildings.
Literature Review

Maintenance of school buildings includes activities to keep school facilities in good condition. In Malaysia, school building maintenance is usually neglected [1] and there are no sufficient guidelines for this process [2]. Maintenance work is not only necessary for aging buildings; it is needed for new buildings as well, since a new building will not remain constant during its lifetime [3]. Assessment of building condition is therefore needed as a proactive step in managing and maintaining the performance of school facilities. A school is a building used for teaching purposes, and classrooms are physical spaces designed to support in-person teaching and learning activities. School building conditions have a significant impact on student achievement [4,5,6,7,8,9,10] because the built environment can influence users' behaviour [11]. The relationship between school building condition and student achievement was explained by Uline and Tschannen-Moran [9], who assert that students from schools with a better environment showed higher achievement. Schneider [12] added that school facilities have a direct impact on teaching and learning, while good school facilities can be provided by efficient maintenance. Besides, the characteristics of the occupied space affect the exchange of information and the working environment. In addition, the physical condition of schools affects the behaviours and attitudes of both teachers and students. Buildings and spaces reflect the life, activities, and social values of their users, and features such as colour, shape, and arrangement can help students and teachers form a clear mental image of the environment.

According to a report issued by the MOE [13], operational and maintenance cost is typically between 60-80% of the total cost of a facility during its lifetime, and the report revealed weaknesses in the asset maintenance activities carried out by the government. Proper condition assessment will reduce future maintenance costs, and it must be done by experts.
Materials and Methods

Data for the evaluation of school building condition were gathered from samples of public schools in Kuching, Sarawak, and data collection and analysis were conducted based on the CSP1 Matrix. There are 134 public primary schools in the Kuching Division [14]. The sampling criterion used is school age, which refers to the first building constructed for the school and ranges from one year to 65 years. Two sampling methods were used: simple random sampling and stratified sampling. The Variable of Selection (VOS) used for the calculation of sample size was the rate of school age, and the sample size was calculated using the Simple Random Sampling (SRS) formula. Based on the calculation, 24 schools were selected. The condition of building components was evaluated using the CSP1 Matrix [15]. This protocol requires the information of every defect to be assessed in terms of its condition and priority. All identified defects were assessed and recorded on-site with evidence (photos and plan tags). The score obtained from the scoring system determines the level of each defect/component, such as good, fair, or dilapidated, and the possible cause of each defect was also identified. This information was recorded in a Defect Sheet and then transferred to the Schedule of Building Condition [15]. The summary of findings, such as the number of defects, total score, and overall building rating, is based on the CSP1 Matrix. The data were statistically analysed using the Statistical Package for the Social Sciences (SPSS).

Results and Discussions

Assessment of the physical condition of school buildings in the Kuching Division was conducted on 24 schools. In total, 4,275 defects were identified and the total mark is 45,868. This means that the rating for the overall condition of the buildings is 9.71, which is at a fair level but close to dilapidated.

The Number of Defects Based on Building Levels

According to Table 1, the highest number of defects was recorded on the Ground Level (3,176 defects), while the lowest number was recorded on the Roof Level (38 defects). The totals recorded show that the higher the building level, the lower the number of defects. From the aspect of school age, the highest number of defects was recorded by schools over 20 years old (2,838 defects). The highest number of defects on the Ground Floor was recorded by schools over 20 years old; however, for the other levels, the highest number of defects was recorded by schools between the ages of 11 and 20 years.

The Number of Defects Based on Component

There were 67 major components included in this survey. Table 3 presents only the components with more than 100 defects. The highest number of defects was found on walls (798), followed by floors (690), doors (629), fittings (575), windows (541), and ceilings (476). These components are the main parts of the building and cover most of its area. Meanwhile, there are some other components (not listed in Figure 1) with a small number of defects, such as fire extinguishers, balcony railings, and cabinets.
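The paper applies "the Simple Random Sampling (SRS) formula" to N = 134 schools to obtain n = 24 but does not state the formula. A common choice is Yamane's approximation, n = N / (1 + N e^2); this is an assumption used here purely to illustrate how a population of 134 reduces to a sample of about 24.

```python
# Sketch: Yamane-style sample size approximation n = N / (1 + N * e^2).
# The paper does not name its SRS formula, so this choice is an assumption
# shown for illustration only.
def yamane(N: int, e: float) -> int:
    return round(N / (1 + N * e * e))

N = 134  # public primary schools in the Kuching Division
for e in (0.05, 0.10, 0.15, 0.185):
    print(f"margin of error e = {e:.3f} -> sample size n = {yamane(N, e)}")
# e = 0.05 -> 100, e = 0.10 -> 57, e = 0.15 -> 33, e = 0.185 -> 24,
# the last matching the 24 schools surveyed.
```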
The Types of Defects

There were 207 types of defects recorded during this survey. Table 2 shows only the types of defects with more than 100 occurrences, which can be taken as common building defects. Based on Table 2, there are eleven common defects, with cracks as the most frequent type (16.2%), followed by missing (13.9%), damaged (8.6%), broken (7.0%), and punch (4.6%). The large number of cracks arises because they often occur on the walls and floors, which are major components of a building.

The Relationship Between Components, Sub-Components, and Types of Defects

The Chi-Square Test of Independence was used to measure the relationship between components and sub-components. Components comprise doors, floors, walls, windows, ceilings, sanitary facilities, equipment, waste pipes, and others; sub-components comprise frames, ceiling boards, tiles, and so on. Based on Table 3, the results of the analysis show that components and sub-components have significant relationships. In addition, the types of defects are also connected to the components. This is proven by the Chi-Square Test, which produced a p-value of less than 0.05. In other words, in the event of defects in a component such as a door, its sub-components, such as the door leaf and frame, are also affected. At the same time, components also influence the occurrence of defects.

Conclusions

The overall school building condition is fair, but close to dilapidated. The age of a school gives an idea of its building condition: the older the school, the higher the expected number of building defects. Apart from this, the critical age for building condition is within the range of 11 to 20 years, during which the number of building defects keeps increasing. This is supported by the finding that the two schools found to be in dilapidated condition are more than 20 years old. An implication of this study is that it will help school management to better plan and prioritize school maintenance activities. Using the CSP1 Matrix helps to prioritize building defects: red-coded defects mean serious action needs to be taken first, followed by the yellow-coded ones. This also helps the maintenance budget to be spent wisely according to priority.

[Figure and table captions: Figure 1: Number of defects based on components. Table 1: The number of building defects based on schools' age and building levels. Table 2: The types of defects (only types with more than 100 occurrences are presented). Table 3: Chi-Square Test for sub-components and types of defects.]
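The independence test reported above is a standard chi-square test on a contingency table of component versus defect type. Below is a minimal sketch with made-up counts (the paper's raw contingency table is not reproduced here).

```python
# Sketch of the Chi-Square Test of Independence used in the paper, applied to
# a hypothetical component-by-defect-type contingency table. The counts are
# made up; the paper's raw data are not reproduced here.
from scipy.stats import chi2_contingency

#            cracks  missing  damaged
observed = [[310,     120,      90],   # walls
            [260,     150,      80],   # floors
            [ 40,     200,     110]]   # doors

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
if p < 0.05:
    print("Reject independence: defect type is associated with component.")
```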
AC-field-controlled localization-delocalization transition in one dimensional disordered system

Based on the random dimer model, we study correlated disorder in a one-dimensional system driven by a strong AC field. As the correlations in a random system may generate extended states and enhance transport in DC fields, we explore the role that AC fields have on these properties. We find that, similar to ordered structures, AC fields renormalize the effective hopping constant to a smaller value, and thus help to localize a state. AC fields therefore control a localization-delocalization transition in a given one-dimensional system with correlated disorder. The competition between band renormalization (band collapse/dynamic localization), Anderson localization, and the structure correlation is shown to result in interesting transport properties.

I. INTRODUCTION

The dynamics of electrons in ordered semiconductor superlattices driven by electric fields has received a great deal of attention. For example, well-known predictions for quantum mechanical behavior, such as Wannier-Stark ladders and Bloch oscillations, which are difficult to observe in ordinary solids, were verified in beautiful experiments [1]. Furthermore, behavior such as negative differential conductance [2], fractional Wannier-Stark ladders [3], and the excitonic Franz-Keldysh effect [4], among others, has been the focus of recent work. Of particular recent interest are the localization and delocalization behavior of electrons in the presence of external electric fields, which have a direct effect on the macroscopic transport properties of the system. For example, in a system with both AC and DC fields, an appropriate AC field will delocalize the Wannier-Stark ladder states induced by a strong pure DC field [5]. An intense AC field can by itself also lead to dynamical localization of the carriers [6]. This phenomenon has been studied extensively in various systems [7], especially in quantum dot pairs [8] and finite linear arrays [9].

Certainly, disorder or imperfections are unavoidable in a real system. It is widely known that localization due to disorder plays a fundamental role in a variety of physical situations. In particular, it has been of interest to investigate disordered systems in the presence of electric fields: Hone et al. [10] and Zhang et al. [11] studied the case of one impurity in the presence of AC fields, and Holthaus et al. studied AC-field-controlled Anderson localization in disordered semiconductor superlattices [12]. As scaling theory shows that all eigenstates in disordered one-dimensional (1D) systems are localized [13], previous studies have focused on the effects of electric fields on the localization length [12].

In a related area, the existence of metallic states in a class of conducting polymers, such as polyaniline and heavily doped polyacetylene, was identified by Dunlap et al. [14] with extended states in 1D systems that exist if short-range correlations in the disordered structure are taken into account. The existence of extended states in this random dimer model was also verified in experiments with GaAs-AlGaAs superlattices designed to exhibit such correlated disorder [15]. More general correlations have also been studied in 1D systems. Perturbation theories for the random dimer model were developed in Ref. [16], and particle transport in models with correlated diagonal and off-diagonal disorder was also discussed by Flores [17]. The random dimer model driven by a DC field was studied in Ref. [18].
The delocalization behavior in 1D models with long-range correlated disorder in the on-site energies was studied by a renormalization technique [19] and by a Hamiltonian approach [20]. More recently, the Kronig-Penney model with correlated disorder was studied [21], demonstrating that a mobility edge may exist for disordered systems with appropriate long-range correlated disorder. The role of structural correlations in the sequence disorder of DNA molecules has also been studied recently [22,23], as this represents a real system that exhibits structural correlations between the diagonal and off-diagonal elements in a tight-binding representation.

In this rich context, it is important to study the competition between dynamic localization (due to the AC field), Anderson localization (due to disorder), and the correlation in the disorder, and to investigate the role of external fields on the transition from the localized to the delocalized state in 1D systems. In this paper, we concentrate on the short-range correlations of the random dimer model and study the localization-delocalization transition driven by external AC electric fields. We find that AC electric fields induce a transition from extended to localized states under suitable conditions, and find the transition point analytically in the high frequency limit. We also show that the transition for lower frequencies is shifted in field, as a precursor of DC-field results. Although our results are for a relatively simple 1D model potential, we expect that they will be relevant for a variety of dissimilar systems, including polymers [14], exciton transfer in active media [24], semiconductor superlattices [15], quantum dot arrays [9], and even hole transport in complex molecules [22,23], as their dynamics is described well by effective 1D models. After introducing our general model in Sec. II and presenting an analysis of the high frequency regime for the single-dimer system in Sec. III, we present numerical results and general discussions in Sec. IV.

A. Model

We consider a 1D random dimer model driven by an AC electric field. The appropriate Hamiltonian is then

H(t) = Σ_n [ ε_n |n⟩⟨n| + R ( |n⟩⟨n+1| + |n+1⟩⟨n| ) + n e d E(t) |n⟩⟨n| ],

where R is the hopping amplitude between nearest neighbors, E(t) = E₁ cos(ωt) is the time-dependent field with frequency ω, d is the constant separation between chain sites, and the on-site energy parameter is ε_m = ε_a (or ε_b) with probability Q (or 1 − Q; here we typically choose Q = 1/2, as it represents the most disordered system, although other values are also used), and ε_b is assigned to a pair of nearest-neighbor sites when it occurs. Since the Hamiltonian is periodic in time, the Floquet theorem implies that the state can be written as

ψ(t) = e^{−iεt} Σ_n C_n(t) φ_n,

where ε is the quasi-energy, φ_n is the Wannier state, and C_n(t) is the probability amplitude for an electron on site n at time t, which is periodic in time, i.e., C_n(t) = C_n(t + T), with T = 2π/ω. The Schrödinger equation can then be written as (we set ħ = 1)

i ∂C_n(t)/∂t = (ε_n − ε) C_n(t) + R (C_{n+1} + C_{n−1}) + n e d E₁ cos(ωt) C_n.

Since the term containing the electric field is proportional to n, it is not suitable for perturbative calculations. We introduce the transformation

C_n(t) = C'_n(t) exp[−i n β sin(ωt)],

where β = e d E₁/ω. It is easy to see that |C'_n(t)| = |C_n(t)|, C'_n(t + T) = C'_n(t), and that C'_n(t) satisfies the equation

i ∂C'_n(t)/∂t = (ε_n − ε) C'_n + R [ e^{−iβ sin(ωt)} C'_{n+1} + e^{iβ sin(ωt)} C'_{n−1} ].

Since C'_n(t) is periodic in time, we can expand it in a Fourier series, C'_n(t) = Σ_m A^m_n e^{imωt}. Using the identity [25]

e^{iβ sin(ωt)} = Σ_m J_m(β) e^{imωt},

where J_m is the m-th Bessel function, we obtain an equation for the Fourier amplitudes,

(ε − ε_n − mω) A^m_n = R Σ_l J_l(β) [ A^{m+l}_{n+1} + A^{m−l}_{n−1} ].
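The dynamics above can also be integrated directly, which is how the paper later obtains its numerical results (Sec. IV integrates the equation of motion from C_n(0) = δ_{n,0} and monitors ⟨m²⟩ = Σ_n n²|C_n|²). Below is a minimal sketch of such an integrator; the chain length and all parameter values are illustrative, and this is not the authors' code.

```python
# Sketch of a direct time integration of the driven tight-binding equation
# i dC_n/dt = eps_n C_n + R (C_{n+1} + C_{n-1}) + n e d E1 cos(w t) C_n
# (hbar = 1; the constant quasi-energy shift is gauged away), starting from
# C_n(0) = delta_{n,0}, tracking <m^2> = sum_n n^2 |C_n|^2.
# All parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N = 301                          # chain sites (the paper uses 1501)
sites = np.arange(N) - N // 2
R, edE1, w = 1.0, 2.0, 8.0       # hopping, field amplitude e*d*E1, frequency

# Random dimer on-site energies: eps_b always appears on pairs of sites.
eps_a, eps_b = 0.0, 1.5
eps = np.full(N, eps_a)
n = 0
while n < N - 1:
    if rng.random() < 0.5:
        eps[n:n + 2] = eps_b
        n += 2
    else:
        n += 1

def rhs(t, y):
    C = y[:N] + 1j * y[N:]
    dC = (eps + sites * edE1 * np.cos(w * t)) * C
    dC[:-1] += R * C[1:]         # coupling to site n+1
    dC[1:] += R * C[:-1]         # coupling to site n-1
    dC *= -1j
    return np.concatenate([dC.real, dC.imag])

y0 = np.zeros(2 * N)
y0[N // 2] = 1.0                 # C_n(0) = delta_{n,0}
ts = np.linspace(0.0, 50.0, 11)
sol = solve_ivp(rhs, (0.0, ts[-1]), y0, t_eval=ts, rtol=1e-8, atol=1e-10)
prob = sol.y[:N] ** 2 + sol.y[N:] ** 2
m2 = (sites[:, None] ** 2 * prob).sum(axis=0)
print(np.round(m2, 2))           # mean-square displacement vs. time
```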
B. High frequency limit

We analyze here the behavior of our model in the high frequency regime. Apart from being illustrative, this limit is of practical importance, since much interest exists in systems driven by THz fields. 4,26 In the high frequency limit, A_n^0 gives the most important contribution to C_n(t). Keeping only the terms with l = 0 in the equation above, we obtain

$$ (\varepsilon - \varepsilon_n)\, A_n^0 = R\, J_0(\beta)\, \big[ A_{n+1}^0 + A_{n-1}^0 \big]. $$

This equation indicates simply that in the high frequency limit, the effect of the AC field is to suppress hopping and change R to an effective hopping constant R_eff = R J_0(β). This is a well-known result in driven systems. 7 Let us consider the corrections coming from terms containing A_n^{±1}. The corresponding m = ±1 components of the equation above couple to A_n^0 through J_{±1}(β); these equations can be simplified, and in conjunction with the l = 0 equation one finds, defining B_n = A_n^1 − A_n^{−1}, that the higher Fourier component corrections to the effective bandwidth R_eff = R J_0(β) are of higher order in R/ω, which of course makes them small in the high frequency limit, R/ω ≪ 1. In this limit, the model with AC field behaves essentially as a system without electric field, except for a rescaling of the bandwidth, given to lowest order by R_eff = R J_0(edE_1/ω). In the random dimer model without an AC field, the localization-delocalization transition occurs when ε− = |ε_a − ε_b| equals twice the hopping constant. 14 We then intuitively expect ε− = 2R_eff to be the transition point between localized and delocalized states in the presence of the AC field. Before giving numerical evidence for this transition in the full random dimer system, we analyze the single impurity case to gain further understanding of this problem.

III. SINGLE IMPURITY-DIMER CASE

It is instructive to see what happens when only one impurity-dimer is involved in an otherwise periodic 1D chain. We consider both the cases with and without an AC field, for comparison.

A. Static case

Let us first consider the scattering effects introduced by a single site-impurity in a chain, in the absence of an AC field. 14 We let all site energies be ε_a, except at site 0, where the energy is ε_b. One obtains the transmission probability

$$ |T|^2 = \frac{4R^2 \sin^2(kd)}{4R^2 \sin^2(kd) + \varepsilon_-^2}, $$

where k is the wave vector of the incoming Bloch wave and ε− = |ε_b − ε_a| measures the impurity detuning, and as such is a measure of the "disorder" or impurity strength. One can see that for this one-impurity case, |T|² < 1 whenever ε− ≠ 0. In a random multi-impurity system, a series of n scattering events would naturally lead to |T|^{2n} ≪ 1, and a very small amplitude for the outgoing wave, resulting in localization in the thermodynamic limit. If, instead, we assign a pair of sites 0 and 1 with energy ε_b, forming a single dimer impurity, the transmission probability 14 has the resonance property that |T| = 1 whenever ε− = −2R cos(kd). This is a sort of resonance effect due to the internal structure of the impurity. Thus, in the presence of the peculiar kind of short-range correlated disorder described by the random dimer model, there are states with unity transmission probability, which clearly have an extended character (even if they appear only at a single value of the energy). This one impurity-dimer calculation makes intuitive the appearance of extended states in the random multi-dimer system at particular energy values, and captures qualitatively the reason for the unusual behavior of the random dimer model [14,27].
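The single-impurity and dimer transmission probabilities are easy to reproduce with transfer matrices. The sketch below is our own illustration, not code from the paper; note that with the sign conventions used here (perfect-chain dispersion E = 2R cos kd), the dimer resonance sits at Bloch energy E = ε_b, which is the text's condition ε− = −2R cos kd up to the sign convention chosen for R.

```python
import numpy as np

def transmission(eps_sites, E, R=1.0):
    """|T|^2 through a region of on-site energies eps_sites embedded in a
    perfect chain with eps_a = 0, at Bloch energy E = 2 R cos(k) (d = 1)."""
    k = np.arccos(E / (2 * R))
    M = np.eye(2, dtype=complex)
    for eps in eps_sites:  # transfer matrix: (C_{n+1}, C_n) = P_n (C_n, C_{n-1})
        P = np.array([[(E - eps) / R, -1.0], [1.0, 0.0]], dtype=complex)
        M = P @ M
    up = np.array([np.exp(1j * k), 1.0])    # right-moving Bloch spinor
    um = np.array([np.exp(-1j * k), 1.0])   # left-moving Bloch spinor
    N = len(eps_sites)
    # match t * e^{ikN} u+ = M u+ + r M u-  and solve for (t, r)
    A = np.column_stack((np.exp(1j * k * N) * up, -M @ um))
    t, r = np.linalg.solve(A, M @ up)
    return abs(t) ** 2

R, eps_minus = 1.0, 0.8
E_res = eps_minus                           # resonance energy, this convention
print("single impurity:", transmission([eps_minus], E_res, R))      # < 1
print("dimer impurity :", transmission([eps_minus] * 2, E_res, R))  # ~= 1
```

The dimer prints unity transmission at the resonance energy while the single impurity does not, which is the essential point of the static analysis above.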
B. The case with AC electric field

We now turn to explore what happens when an AC electric field is turned on. Hone et al. studied the system of an isolated defect driven by strong electric fields. 10 We will make use of Green's functions in terms of the Floquet formalism. We consider the resolvent operator as a function of the complex frequency z,

$$ G(z) = \frac{1}{z - H}, \qquad G_0(z) = \frac{1}{z - H_0}, $$

where H = H_0 + V, H_0 is the unperturbed Hamiltonian, and V is the impurity potential. In the representation of Wannier states, and in the high frequency limit R/ω ≪ 1, the unperturbed Green's function is 10

$$ G_0^{ll'}(\varepsilon) = \frac{q^{|l-l'|}}{R_{\rm eff}\,\big(q - 1/q\big)}, \qquad \varepsilon = R_{\rm eff}\left(q + \frac{1}{q}\right), $$

where R_eff = R J_0(edE_1/ω), and the sign of q is chosen such that q falls inside the unit circle. For an isolated defect with V_jl = ν δ_{j,0} δ_{0,l} (where ν = ε− = |ε_b − ε_a| in the case of a different site energy), the probability p(l) for the defect state (with energy ε = R_eff(q + 1/q)) to occupy the l-th site is determined by the residue of G_ll. One finds that 10 p(l) ∝ q^{2|l|}. Notice that p(l) falls exponentially from the site l = 0, where the defect is localized, with a characteristic decay length that is reduced for increasing |ν|/R_eff, as one might suspect. Now we consider a one-dimer model, with V_jl = ν(δ_{j0}δ_{0l} + δ_{j1}δ_{1l}) and ν = ε−. After some calculation, we find that the poles of the resolvent occur at q_1 = 1/(2γ + 1) and q_2 = 1/(2γ − 1), where γ = ν/(2R_eff). Since q_1 < 1 always, the first pole corresponds to a localized state. For the pole q_2 = 1/(2γ − 1), when ν = ε− > 2R_eff we have q_2 < 1, and this corresponds to a localized state with localization length ∼ 1/ln(2γ − 1). When γ approaches 1 from above, the localization length diverges, indicating a transition to a delocalized state. Thus the single dimer impurity in an AC field yields the same conclusion as the high-frequency analysis above: the transition from localized to extended states occurs at the point ν = 2R_eff.

IV. NUMERICAL RESULTS AND DISCUSSION

Most of our numerical calculations were performed on a chain with 1501 sites. We solved the driven Schrödinger equation for C_n(t) with initial condition C_n(t = 0) = δ_{n,0}, and analyzed the subsequent development. The site energies were chosen from a bi-valued distribution, ε_n = ε_a and ε_n = ε_b, each with probability 1/2. As these probabilities remain fixed, the "degree of disorder" is controlled by the magnitude of ε− = |ε_b − ε_a|, as we will see in what follows.

A. High frequency regime R/ω ≪ 1

In Fig. 1 we show numerical calculations of the mean-square displacement, m_2 = Σ_n n²|C_n|², versus time. One can see in curve (a) that when ε− = |ε_b − ε_a| = R_eff, the mean-square displacement grows as m_2 ∼ t^{3/2}. This is known as the superdiffusive transport regime. Diffusive transport (m_2 ∼ t) is found when ε− = 0.97 · 2R_eff (curve b) and, apparently, when ε− = 2R_eff (curve c). On the other hand, curve (d), for ε− = 2R = 2.27 R_eff, shows how the mean-square displacement is increasingly bounded (subdiffusive), m_2 ∼ t^{0.36}. Further increasing ε− > 2R_eff, as shown in curve (e) (ε− = ω = 3.78 R_eff), results in completely bounded motion, m_2 ≈ const., as anticipated from the analytical discussion above. We believe the subdiffusive behavior for ε− = 2R > 2R_eff is a crossover behavior due to finite size effects, masking the anticipated extended → localized transition at ε− = 2R_eff. In fact, from the dimer transmission probability (with R replaced by R_eff), we may estimate the localization length λ ∼ −1/ln|T(k)|² ∼ 1/(δ + c k²), where ε− = −δ − 2R_eff, δ is small, and c is a constant. The number of "extended states" (states with localization length λ larger than the system size L) is then ΔN ∼ L(1/L − δ)^{1/2}.
We believe these "extended states" lead to the subdiffusive behavior. With increasing δ, as in curve (e), or with increasing system size, one gets ΔN = 0 whenever 1/N < δ. It is very difficult to go beyond the crossover regime by numerical simulations, as this requires very large system sizes and longer equilibration times, since the self-averaged quantities converge only slowly, in a ln(Rt) fashion. Scaling studies of this transition would be interesting. The structure of Fig. 1 is similar to that shown in Ref. 14 in the absence of AC fields. For a fixed AC electric field amplitude, there is a transition from extended to localized state behavior with increasing disorder. The role of the AC electric field is effectively to decrease the hopping constant, thus contributing to the localization of carriers. For example, ε− ≤ 2R in the case without electric fields results in extended states (and diffusive transport). 14 However, when an AC electric field (with edE_1/ω = 0.7) is turned on, the mean-square displacement is suppressed. This localization-delocalization transition is clearly induced by the AC electric field, as the transition shifts to ε− ≃ 2R_eff. It is interesting to see the situation for a stronger field, β = edE_1/ω = 2.405 (the first root of J_0). In this case R_eff = 0, and we expect that even for very weak disorder the state will be localized. Our results in Fig. 2 (with very small disorder, ε−/R = 0.33) show that this is indeed the case. This is also in agreement with the fact that even in the limit ε− = 0, i.e., when there is no disorder, the states are localized when band collapse occurs (i.e., R_eff = 0). 28 This is nothing but the well-known dynamical localization. 6 In Fig. 2 one notices that there are oscillations in the mean-square displacement. This is the manifestation of the time dependence of the electric field in this case of weak disorder. In fact, these oscillations also exist in Figs. 1 and 3, except that they are nearly invisible there because the disorder ε− and the displacements are larger and "hide" the oscillations.

B. Low frequency regime R/ω ∼ 1

In Fig. 3 we show the transition from localized to extended states in the low frequency limit. One can see that the transition point is no longer at the high-frequency value ε− = 2RJ_0(edE_1/ω). For example, for ε− = R_eff < 2R_eff (curve b) the state is subdiffusive, with m_2 ∼ t^{0.67}. As expected, when ε− is small enough, for example ε− = 0.34 R_eff (curve a), the state is extended and shows superdiffusive behavior. These results are of course different from the high frequency limit, since in this regime our previous analysis fails. Furthermore, in the extreme low frequency limit the system tends to that with a DC field: it is known that localized states have a power-law behavior in a DC field, instead of the more typical exponential localization in 1D disordered systems. (Fig. 3 caption: ε− = R_eff shows near localization, well below the high-frequency critical value ε− = 2R_eff.)
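The transport behavior of Figs. 1-3 can be reproduced in spirit with a direct integration of the driven Schrödinger equation. The sketch below is our own illustration under simplifying assumptions (a shorter chain than the paper's 1501 sites, periodic boundaries via np.roll, a simple pair-placement rule for the dimer disorder, and arbitrary parameter values); it returns the mean-square displacement m_2(t) = Σ_n n²|C_n(t)|².

```python
import numpy as np

rng = np.random.default_rng(1)

def dimer_energies(n_sites, eps_a, eps_b, Q=0.5):
    """Bi-valued site energies with eps_b always placed on pairs of
    neighboring sites (a simple placement rule; the paper's exact
    site-fraction bookkeeping may differ)."""
    eps = np.full(n_sites, eps_a, dtype=float)
    i = 0
    while i < n_sites - 1:
        if rng.random() > Q:          # with prob. 1 - Q, place a dimer
            eps[i:i + 2] = eps_b
            i += 2
        else:
            i += 1
    return eps

def mean_square_displacement(eps, R, beta, omega, t_max, dt):
    """RK4 integration of i dC_n/dt = eps_n C_n + R(C_{n+1} + C_{n-1})
       + n e d E_1 cos(wt) C_n, with C_n(0) = delta_{n,0}."""
    n = len(eps)
    sites = np.arange(n) - n // 2
    C = np.zeros(n, dtype=complex)
    C[n // 2] = 1.0
    edE1 = beta * omega               # since beta = e d E_1 / omega
    def rhs(c, t):
        hop = R * (np.roll(c, 1) + np.roll(c, -1))  # periodic boundaries
        return -1j * ((eps + sites * edE1 * np.cos(omega * t)) * c + hop)
    m2 = []
    for step in range(int(t_max / dt)):
        t = step * dt
        k1 = rhs(C, t)
        k2 = rhs(C + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(C + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(C + dt * k3, t + dt)
        C = C + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        m2.append(float(np.sum(sites ** 2 * np.abs(C) ** 2)))
    return np.array(m2)

eps = dimer_energies(501, eps_a=0.0, eps_b=0.5)
m2 = mean_square_displacement(eps, R=1.0, beta=0.7, omega=10.0,
                              t_max=30.0, dt=0.005)
```

Fitting the late-time slope of log m_2 versus log t distinguishes the superdiffusive, diffusive, and subdiffusive regimes discussed above.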
C. Inverse participation ratio

To understand more clearly how the AC electric field controls the degree of localization, it is useful to extract information from the Floquet states of the system driven by periodic electric fields. The Floquet states u_m can be expanded with respect to the Wannier states φ_l, and from the expansion coefficients we calculate the averaged inverse participation ratio

$$ P = \frac{1}{N} \sum_{m} \sum_{l} \big|\langle \varphi_l | u_m \rangle\big|^4 . $$

If the Floquet states are each nearly localized at individual Wannier states, P tends to 1, while P vanishes as 1/N if the states are extended; a larger P characterizes a more localized system. In Fig. 4, we show P for different values of ε− = |ε_b − ε_a| and dimer concentrations versus the electric field strength E_1 in the high frequency regime, R/ω = 0.1. We find sharp peaks at edE_1/ω = 2.405, as this value results in R_eff = 0, so that the effective hopping along the chain vanishes. We can enhance the degree of localization in the random dimer model by increasing the detuning ε− or the dimer concentration Q. For cases (a) and (b) in Fig. 4, Q is the same (= 0.5), but ε− changes from 0.16 in (a) to 0.07 in (b). In contrast, for (b) and (c) the value of ε− is the same, but Q = 0.2 is smaller in (c). It is clear that P is larger overall for the more disordered systems, and although a peak always appears at edE_1/ω ≃ 2.4, decreasing disorder suppresses the peak value and the overall amplitude of P. One can also observe that there is a relatively sudden enhancement of P for the system in (a) for edE_1/ω ≳ 0.9, while for (b) and (c) this occurs between edE_1/ω ≃ 1.7 and 3.3. From our previous discussion, we know that the localization-delocalization transition occurs at ε− = 2RJ_0(edE_1/ω). From this formula, we find that the transition point for (a) is in fact at edE_1/ω = 0.92, while for (b) and (c) it occurs at edE_1/ω = 1.78 and 3.33. These match very well with our numerical calculations. To elucidate further the role of correlations, we compare P in an Anderson model (without correlations) with that in a random dimer model, as shown in Fig. 5. For a quantitative comparison, we let the variance of the Anderson model distribution, W²/12, be the same as that of the random dimer model, (ε_a² + ε_b²)/2. It is evident that P is much larger in the Anderson model (indicating a more localized system), and that P varies smoothly with electric field, indicating no localization-delocalization transition with field. 12 This figure also indicates that an important effect of the presence of the random dimer short-range correlations is to delocalize a few states, reducing globally the value of P in the system. It is clear that the dynamical behavior of a system with correlated disorder reflects a subtle competition between correlation and disorder.
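The inverse participation ratio diagnostic can be illustrated by diagonalizing the one-period (Floquet) propagator. The sketch below is our own schematic of that computation under simplifying assumptions (a small chain, open boundaries, coarse time stepping); it is not the authors' code.

```python
import numpy as np

def floquet_ipr(eps, R, beta, omega, n_steps=400):
    """Averaged inverse participation ratio
    P = (1/N) sum_{m,l} |<phi_l|u_m>|^4 of the Floquet states, obtained by
    diagonalizing the one-period propagator U(T); feasible for small N."""
    N = len(eps)
    sites = np.arange(N) - N // 2
    T = 2.0 * np.pi / omega
    dt = T / n_steps
    edE1 = beta * omega
    hop = R * (np.eye(N, k=1) + np.eye(N, k=-1))   # open boundaries
    U = np.eye(N, dtype=complex)
    for s in range(n_steps):
        t = (s + 0.5) * dt                          # midpoint rule
        H = np.diag(eps + sites * edE1 * np.cos(omega * t)) + hop
        w, V = np.linalg.eigh(H)                    # exact exp(-i H dt)
        U = (V * np.exp(-1j * w * dt)) @ V.conj().T @ U
    _, modes = np.linalg.eig(U)                     # Floquet states at t = 0
    probs = np.abs(modes) ** 2
    probs /= probs.sum(axis=0)                      # normalize each state
    return float(np.mean(np.sum(probs ** 2, axis=0)))
```

Sweeping beta through 2.405 with this routine reproduces the qualitative peak structure of Fig. 4: P rises sharply where R_eff collapses.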
D. Conclusions

We have studied the AC-field-controlled random dimer model. The dynamics of the system depends on the competition between band renormalization (band collapse/dynamic localization), Anderson localization, and the correlation (dimer structure). We find that there is an AC-electric-field-induced transition from extended to localized states, which is absent in the Anderson model. The transition point is found analytically in the high frequency limit, where it occurs when ε− = |ε_b − ε_a| ≃ 2R_eff = 2RJ_0(edE_1/ω). Dynamical localization is not only recovered as a natural limit in the absence of disorder, but also shows its effects in the transport properties of the system with disorder and correlation (the peaks in Figs. 4 and 5). The generalization of our results to an N-dimer model is straightforward, and is expected to yield qualitatively similar results. Our theoretical predictions could be checked in a variety of systems, and especially in experiments on GaAs-AlGaAs random-dimer superlattices. 15 In experiments, tuning an external AC field is a relatively easy task compared with changing the disorder or correlation in a desired way. Generalizations to different and more complex correlations are also expected to give interesting results.
Quickest Change Detection Approach to Optimal Control in Markov Decision Processes with Model Changes

Optimal control in non-stationary Markov decision processes (MDPs) is a challenging problem. The aim in such a control problem is to maximize the long-term discounted reward when the transition dynamics or the reward function can change over time. When prior knowledge of the change statistics is available, the standard Bayesian approach to this problem is to reformulate it as a partially observable MDP (POMDP) and solve it using approximate POMDP solvers, which are typically computationally demanding. In this paper, the problem is analyzed through the viewpoint of quickest change detection (QCD), a set of tools for detecting a change in the distribution of a sequence of random variables. Current methods applying QCD to such problems only passively detect changes by following prescribed policies, without optimizing the choice of actions for long-term performance. We demonstrate that ignoring the reward-detection trade-off can cause a significant loss in long-term rewards, and propose a two-threshold switching strategy to solve the issue. A non-Bayesian problem formulation is also proposed for scenarios where a Bayesian formulation cannot be defined. The performance of the proposed two-threshold strategy is examined through numerical analysis on a non-stationary MDP task, and the strategy outperforms the state-of-the-art QCD methods in both Bayesian and non-Bayesian settings.

I. INTRODUCTION AND MOTIVATION

In their lifetime, autonomous agents may be required to deal with non-stationary stochastic environments, where parameters of the world (dynamics, costs, and goals) change over time (for the same state space). For example, in robotic navigation, a path planner might need to accommodate dynamically changing friction coefficients of road surfaces or wind conditions to ensure safety. In inventory control, an order management agent might need to account for the time-varying distribution of demands for goods and services when determining the order at each decision period. A service robot may have to adapt to the different personality or behavior of the person it is assisting. In a multi-agent system, an agent may have to adapt to changing strategies of the opponent or other players [1]. In order to maximize long-term rewards, it is paramount that the agents are able to quickly detect changes in the environment and adapt their policies accordingly. When the environment is stationary, many of these problems for autonomous agents, path planning, or robot control can be formulated as an MDP, for which there is a rich literature [2]. When the environment is non-stationary, there is limited understanding of the best strategy to employ. In this paper, we are interested in the case where the non-stationary process consists of several stationary processes. Each stationary process corresponds to an MDP, and the non-stationary nature is captured by changes in the transition and/or reward structure of the MDPs. That is, we consider the whole operation in a non-stationary environment as a global task, which can be decomposed into several stationary subtasks. This is a general case often encountered in real-world applications, such as robotics, inventory control, etc. For example, in inventory control, it is reasonable to assume that the demand rate is steady for some time, before changing to another value.
For path planning, one can assume that the road surfaces or wind conditions are steady for a while before changing, and so on. Solving dynamic decision making in non-stationary environments optimally is an extremely hard problem. Although it has been studied extensively in the literature, due to the analytical intractability of the problem existing works aim at developing approximate solutions. For example, in some papers the problem is reformulated as a partially observable MDP (POMDP), and approximate POMDP solvers are used to obtain a solution; see, e.g., [3] and [4]. In others, estimates of the current MDP parameters are maintained, and next-state or reward predictors are used to assess whether the parameters of the active MDP have changed; see, e.g., [5] and [6]. In [7] and [8], new models such as hidden-mode MDPs (hmMDPs) and mixed observability MDPs (MOMDPs) are proposed to capture the transitions between different MDPs, and approximate solutions are obtained. Although these approximate solutions are promising tools for solving non-stationary MDPs, they are restricted to problems where the model sizes are known a priori, and they may still suffer from high computational overhead. The approach of multitask or transfer learning has also been used to study these problems. However, in these approaches, tasks are usually assumed to be well separated and given before learning and planning take place; there is no explicit concept of automatic change detection in these problems. Problems of this nature are also explored in the adaptive control literature. For example, in [9] and [10], the authors study optimal control of an MDP with unknown parameters in the transition kernel. Again, there is no concept of a change in this problem. In this paper, we approach the problem of optimal control in MDPs with model changes through the viewpoint of classical quickest change detection (QCD) [11]. Changes in the properties of the MDPs (transition or reward) cause a change in the law of the state-action sequence. Tools and ideas from the QCD literature can be used to detect these changes in law. QCD algorithms are scalable, and are optimized for quickest detection. In spite of these properties, these tools are either never used or not fully exploited in the science and engineering literature, especially for the problem of interest in this paper. For example, sequential detection approaches are applied in [12] and [13], where the optimal policy for each MDP is executed and a change detection algorithm is employed to detect model changes. We will show in this paper that such a naive approach may lead to a significant loss in performance. The key idea is that what is optimal for optimizing rewards may not necessarily be optimal for the quickest detection of model changes. Thus, there is a fundamental reward-detection trade-off to be exploited. The purpose of this paper is to articulate this trade-off in a mathematically precise manner, and to propose solutions that we claim best exploit this trade-off. Specifically, we propose a computationally efficient, simple two-threshold strategy to quickly detect model changes without a significant loss in rewards. We show that such an approach is superior to the existing methods in the literature. In particular, we show that the two-threshold strategy leads to better performance as compared to the single-threshold classical QCD tests, as employed in [13].
Note that it is shown in [13] that a QCD approach is better (with respect to the Bayesian problem we use in this paper; see (2) below) than other approaches in the literature, e.g., [6]. For benchmarking purposes, we also compare with the MOMDP solution [8] and a random action strategy.

II. PROBLEM FORMULATION

Assume we have a family of MDPs {M_θ}, where θ takes values in some index set Θ, which could be finite or infinite. For each θ, an MDP is defined by a four-tuple M_θ = (S, A, T_θ, R_θ), where S and A are, respectively, the state space and the action space common to all MDPs, T_θ is the transition kernel, and R_θ is the reward function. A decision maker observes a sequence of states {s_k}_{k≥0}, s_k ∈ S for all k, and for each observed state s_k it chooses an action a_k ∈ A. For each pair (s_k, a_k), the next state s_{k+1} takes values according to the law dictated by the kernel:

$$ P(s_{k+1} = s' \mid s_k = s,\, a_k = a) = T_\theta(s, a, s'), \qquad \forall k. $$

The reward obtained by choosing action a_k after observing s_k is R_θ(s_k, a_k). We operate in a non-stationary environment, which means that the transition structure or the reward structure can change over time. Specifically, at some time γ_1 the parameter θ of the MDP changes from θ = θ_0 to θ = θ_1; at a later time γ_2 the parameter changes from θ = θ_1 to θ = θ_2, and so on. To be precise, the non-stationary dynamics for k ≥ 0 are

$$ P(s_{k+1} = s' \mid a_k = a,\, s_k = s) = \begin{cases} T_{\theta_0}(s, a, s'), & k < \gamma_1 \\ T_{\theta_1}(s, a, s'), & \gamma_1 \le k < \gamma_2 \\ \;\;\vdots & \end{cases} $$

with the reward for (s_k, a_k), denoted R_k(s_k, a_k), switching from R_{θ_0} to R_{θ_1}, etc., at the same change points. For simplicity of exposition, for now we restrict our attention to the case where there is only one change point γ and only two models: a pre-change model M_0 = (S, A, T_0, R_0) and a post-change model M_1 = (S, A, T_1, R_1). In Section VI, we discuss extensions of the ideas developed for the two-model case to the case where there are more than two models (possibly infinitely many), and possibly more than one change point. We consider two different problem formulations: Bayesian and non-Bayesian.

A. Bayesian Formulation

Suppose we have a prior on the single change point γ = γ_1, and we have chosen a policy Π = (π_1, π_2, ⋯) that maps the states {s_k} to actions {a_k}. The MDPs M_0 and M_1, the policy Π, and the prior on the change point together induce a joint distribution on the product space of states and actions, which we denote by P, with corresponding expectation E. The objective is to choose the policy Π so as to maximize the long-term reward

$$ \max_\Pi \; \mathbb{E}\Big[ \sum_{k=0}^{\infty} \beta^k R_k(s_k, a_k) \Big], \tag{2} $$

where β ∈ [0, 1) is a discount factor. Note that the change point γ is unobservable. As a result, a Markov policy may not be optimal, and the policy Π above has to use the past history to choose the optimal action. To accommodate model changes, problem (2) can be reformulated as a partially observable MDP (POMDP) (see Lemma 1 below) and solved using approximate POMDP solvers. However, it is computationally expensive to solve this problem numerically, especially when the number of possible models is large. We thus use other approaches to obtain approximate solutions. The vehicle we have chosen for obtaining approximate solutions is the theory of QCD. We show in Section VII that our proposed solutions perform as well as the solution obtained from standard approximate POMDP solvers in the literature, while at the same time incurring significantly less computational cost, both online and offline.
Lemma 1: The non-stationary MDP problem (2) is equivalent to a POMDP with augmented state x = (s, θ), of which only the component s is observed, with model

$$ P(s', \theta' \mid s, a, \theta) = T_\theta(s, a, s')\, F(\theta, \theta'), \qquad P(o \mid s', \theta') = I(o = s'), $$

where I(·) is an indicator function and F is a model transition matrix. Given the prior over the change point to be a geometric distribution, i.e., γ ∼ Geom(λ), it can be shown that

$$ F = \begin{pmatrix} 1 - \lambda & \lambda \\ 0 & 1 \end{pmatrix}. $$

By converting the original non-stationary MDP problem into a POMDP problem, one can apply any existing POMDP solver to obtain an approximate POMDP policy, which maps the belief state b(x) to an action a. Considering the fact that here the state space can be factorized into an observable part and an unobservable part, this POMDP model can be treated as a mixed observability MDP (MOMDP). In an MOMDP, the belief state becomes a union of |S| disjoint |Θ|-dimensional subspaces, and all operations are performed in the lower dimension; hence it is more time-efficient to solve an MOMDP than to solve a POMDP.

B. Non-Bayesian Formulation

In practice, a prior on the change point is often not known, so a Bayesian formulation cannot be defined. If the change point is treated as an unknown constant, there are infinitely many possible laws on the joint state-action space, one for each possible change point. In statistics, there are two ways to study non-Bayesian problems: minimax or maxmin criteria, and Neyman-Pearson type criteria. For a maxmin criterion we may consider the problem

$$ \max_\Pi \; \inf_{\gamma \ge 1} \; \mathbb{E}_\gamma\Big[ \sum_{k=0}^{\infty} \beta^k R_k(s_k, a_k) \Big], $$

where E_γ is the expectation with respect to the probability measure when the change occurs at time γ. However, there are two major issues with such an approach in practice. First, a minimax approach may lead to a conservative or pessimistic policy design. Second, it is not always straightforward to compute such costs when comparing various competing algorithms. Since the problem under consideration is about identifying the correct regime, we also consider a Neyman-Pearson type of criterion: maximize the reward in one regime subject to a constraint on the performance under the alternative regime. We take this approach (in addition to the Bayesian formulation) in our paper and consider the following problem:

$$ \max_\Pi \; \mathbb{E}_1\Big[ \sum_{k=0}^{\infty} \beta^k R_k(s_k, a_k) \Big] \quad \text{subject to} \quad \mathbb{E}_\infty\Big[ \sum_{k=0}^{\infty} \beta^k R_k(s_k, a_k) \Big] \ge \alpha. \tag{5} $$

Thus, the objective is to optimize rewards when the change occurs at time 1, subject to a constraint on the performance when the change never occurs. In the sequential analysis literature, one often uses a minimax objective rather than the average under E_1. As mentioned earlier, this leads to a pessimistic viewpoint. Also, for many sequential detection algorithms in the literature, the worst case is achieved at γ = 1, so the criterion in (5) can also be seen as a proxy for the minimax or maxmin reward. Another advantage of the Neyman-Pearson type criterion is that it maintains a balance in the reward structure between the pre- and post-change regimes. A discounted cost criterion with a discount factor β < 1, as chosen in the Bayesian and maxmin problems, may not always be appropriate for the application at hand. This is because such a criterion penalizes under-performing under M_0 more than it penalizes under-performing under M_1. In reality, we may be interested in maintaining good performance under every regime.

C. Oracle Policy

Let V_0(x) and V_1(x) be the cost/reward-to-go at state x for the MDP models M_0 and M_1, respectively. Suppose the following conditions are satisfied: 1) the change point is exactly known; 2) the change point satisfies γ ≫ 1; 3) V_1(x) ≈ V_1(y) for all x, y, that is, the reward-to-go for the model M_1 is not sensitive to the initial state x.
The implication of the first two conditions is that it is approximately optimal to use the optimal policy for M_0 before the change point. The implication of the last condition is that, no matter which policy is used for model M_0, it is still approximately optimal to use the optimal policy for model M_1 after the change point. As a result, if all three of these conditions are satisfied, then one can simply implement the optimal policy for the model M_0 before the change and the optimal policy for the model M_1 after the change, and achieve close to the maximum average reward. In the following, such a policy is referred to as the Oracle. Since the change point is unknown, it is clear that if the change can be reliably detected, then one can hope to achieve rewards close to what the Oracle can achieve. There are powerful algorithms in the literature that one can use to detect this change as quickly as possible. We briefly discuss relevant ideas and algorithms in the next section, which serves as background for readers not familiar with the literature on quickest change detection.

III. BACKGROUND ON QUICKEST CHANGE DETECTION

In the quickest change detection literature, algorithms are developed that allow one to quickly detect a change in the distribution of a stochastic process from one law to another. The optimality properties of the algorithms are studied under various modeling assumptions and problem formulations. The theoretical foundations were laid by the work of Wald [14] and Shiryaev [15]; see also [11] for a survey of the area. We discuss three such algorithms in the context of problems (2) and (5). Suppose we employ a certain policy Π and observe a sequence of states and actions (driven by the policy), {(s_k, a_k)}. At the change point Γ, modeled as a random variable with prior probability mass function φ, the model changes from M_0 to M_1, and thus the law of the stochastic sequence {(s_k, a_k)} changes. Then the following algorithm, called the Shiryaev algorithm [16], has strong optimality properties. The Shiryaev algorithm dictates that we compute at each time n a statistic S_n, a monotone transformation of the posterior probability P(Γ ≤ n | s_{1:n}, a_{1:n}) that the change has already occurred, and declare that a change in the model has occurred at the stopping time

$$ \tau_s = \min\{ n \ge 1 : S_n > A \}. $$

Here, A is a threshold chosen to control false alarms. If the prior on the change point is geometric with parameter ρ, the statistic S_n can be computed recursively:

$$ S_n = (1 + S_{n-1})\, \frac{1}{1-\rho}\, \frac{T_1(s_{n-1}, a_{n-1}, s_n)}{T_0(s_{n-1}, a_{n-1}, s_n)}, \qquad S_0 = 0. $$

It is shown in [16] that, as the probability of false alarm constraint ζ → 0, the average detection delay of the Shiryaev stopping time satisfies

$$ \mathbb{E}\big[(\tau_s - \Gamma)^+\big] \sim \frac{|\log \zeta|}{I_\pi + d} . $$

Thus, the Shiryaev algorithm achieves the best average detection delay over all stopping times satisfying a given probability of false alarm constraint ζ, as ζ → 0. Here, d is a constant that is a function of the prior φ and equals |log(1 − ρ)| for Γ ∼ Geom(ρ). The quantity I_π is the Kullback-Leibler information number,

$$ I_\pi = \sum_{s} \mu_\pi(s) \sum_{s'} T_1\big(s, \pi(s), s'\big)\, \log \frac{T_1(s, \pi(s), s')}{T_0(s, \pi(s), s')} , $$

where μ_π is the stationary distribution of the states under the post-change model and the policy employed. Thus, the larger this number I_π is, the smaller is the detection delay. We use the subscript π to emphasize that the information number is a function of the policy Π employed. If the prior on the change point is not known, or if a prior cannot be defined, one can replace the Shiryaev statistic by the CUmulative SUM (CUSUM) statistic [17],

$$ W_n = \max_{n - m \le k \le n} \; \sum_{i=k}^{n} \log \frac{T_1(s_{i-1}, a_{i-1}, s_i)}{T_0(s_{i-1}, a_{i-1}, s_i)} , $$

where m is a window size that depends on the false alarm constraint. The CUSUM algorithm has strong optimality properties, similar to the above, but with the average delay and probability of false alarm replaced by suitable minimax delay and mean time to false alarm expressions [17]. Specifically, if the CUSUM stopping time τ_c = min{n ≥ 1 : W_n > A} is calibrated so that the mean time to false alarm satisfies E_∞[τ_c] ≥ η, then its worst-case detection delay grows as (log η)/I_π as η → ∞. Here, E_γ is the expectation when the change point is at γ, and E_∞ is the expectation when the change never occurs.
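For concreteness, the recursive statistic and stopping rule just described can be sketched in a few lines. The interface below (tabular kernels indexed as T[s, a, s']) is our own assumption for illustration, not the paper's code; setting rho = 0 yields the Shiryaev-Roberts statistic discussed next.

```python
import numpy as np

def shiryaev_stats(T0, T1, transitions, rho=0.01):
    """Recursive Shiryaev statistic for a geometric prior Geom(rho).
    transitions: iterable of (s_prev, a_prev, s_next) index triples."""
    S, path = 0.0, []
    for s, a, s_next in transitions:
        L = T1[s, a, s_next] / T0[s, a, s_next]   # likelihood ratio
        S = (1.0 + S) * L / (1.0 - rho)
        path.append(S)
    return np.array(path)

def stopping_time(stats, A):
    """tau = min{n >= 1 : S_n > A}; None if the threshold is never crossed."""
    hits = np.nonzero(stats > A)[0]
    return int(hits[0]) + 1 if hits.size else None
```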
Thus, the CUSUM algorithm minimizes the worst case of the detection delay (the maximum over all possible change points) subject to a constraint η on the mean time to false alarm, as η → ∞. Again, the optimal performance for a given false alarm level depends on the information number I_π. Thus, the number I_π is the fundamental quantity of interest in quickest change detection problems. It will also play a fundamental role in the rest of the paper. An alternative to CUSUM in the non-Bayesian setting is to put ρ = 0 in the Shiryaev algorithm and obtain what is called the Shiryaev-Roberts (SR) statistic,

$$ SR_n = (1 + SR_{n-1})\, \frac{T_1(s_{n-1}, a_{n-1}, s_n)}{T_0(s_{n-1}, a_{n-1}, s_n)} , $$

and use the stopping time τ_sr = min{n ≥ 1 : SR_n > A}. In the numerical results reported in Section VII, we use the SR test instead of CUSUM because of its ease of implementation. The algorithms discussed above can easily be modified to also account for a change in the distribution of rewards. We do not discuss that here for simplicity of exposition. Also, the state and action sequences are more informative; it is useful to base detection on the rewards only when the reward alone undergoes a change and the transition functions do not.

Trade-off Between Detection Performance and Reward

Note that the problems in (2) and (5) are not classical change detection problems. The objective is not to optimize delay subject to a constraint on false alarms, but to optimize long-term rewards. Thus, while quick detection is key, what is optimal for change detection may not be optimal with respect to optimizing rewards, and vice versa. There is thus a fundamental trade-off between detection performance and maximizing rewards; see Section IV below for a more rigorous statement. The fundamental contribution of this paper is a computationally efficient two-threshold algorithm that optimizes this detection-reward trade-off to achieve near optimal performance. As discussed in the introduction, most of the works in the existing literature either do not exploit tools from the change detection literature, or, for the few that exploit them, ignore this trade-off in their design.

IV. QUICKEST DETECTION WITH LOCALLY OPTIMAL SOLUTION

One approach to solving either (2) or (5) is to use a locally optimal approach, i.e., to start with the optimal policy for model M_0, compute the change detection statistic (Shiryaev, SR, or CUSUM) over time, and switch to the optimal policy for the model M_1 at the time when the change detection algorithm crosses its threshold. Mathematically, let Π_0 = (π_0, π_0, ⋯) be the (stationary) Markov optimal policy for model M_0, and let Π_1 = (π_1, π_1, ⋯) be the (stationary) Markov optimal policy for model M_1. Then use the following policy:

Π_loc = (π_0, π_0, ⋯, π_0 [τ − 1 times], π_1, π_1, ⋯ [from τ onward]),

where τ is the stopping time of the Shiryaev, SR, or CUSUM algorithm.

a) Choosing the threshold: In the quickest change detection problem, the threshold A used in the Shiryaev, SR, or CUSUM test is designed to satisfy the constraint on the rate of false alarms. However, for the problems in (2) and (5), there is no notion of a false alarm. For the non-Bayesian criterion, one can choose the threshold to satisfy the constraint of α (see (5)). In the Bayesian setting, however, the threshold is a free parameter, and can be optimized for optimal performance.

b) Issues with the policy Π_loc: As we will show in Section VII, the policy Π_loc may lead to poor performance.
This is because, as discussed in Section III, the detection performance of the stopping rule τ depends on the information number I_{π_0}. Let I_max = max_π I_π. Then it is possible that I_{π_0} ≪ I_max, and that I_{π_0} is itself quite small. This may lead to poor detection performance, and hence may cause a significant loss of revenue or rewards. If it so happens that I_{π_0} ≈ I_max, and I_{π_0} is significant, then one can expect this simple strategy Π_loc itself to give near optimal performance. But this is more a matter of chance, and more sophisticated approaches are needed that consider every possibility. We note that in some cases one may be able to use MOMDP- or POMDP-based approximations to compute the optimal policy for our problem, but such solutions are computationally demanding. Using a quickest change detection approach leads to computationally efficient algorithmic techniques.

V. A TWO-THRESHOLD SWITCHING STRATEGY TO EXPLOIT THE REWARD-DETECTION TRADE-OFF

One way to improve the performance of the policy Π_loc is to replace the policy Π_0 by another policy that better exploits the reward-detection trade-off. One option is to replace Π_0 by a policy that maximizes the information number I_π. Define for each state s ∈ S the information-maximizing policy

$$ \pi_{KL}(s) = \arg\max_{a \in \mathcal{A}} \; \sum_{s'} T_1(s, a, s')\, \log \frac{T_1(s, a, s')}{T_0(s, a, s')} , $$

and specifically use Π_KL = (π_KL, π_KL, ⋯). Using this policy will lead to the quickest and most efficient detection of model changes. However, since the policy π_KL may not be optimal for the MDP model M_0, it may cause a significant loss of rewards before the change point. We propose to use a simple two-threshold strategy that switches between π_0 and π_KL, using π_0 to optimize rewards and switching to π_KL for information extraction. The proposed switching strategy is closely related to the notion of sequential design of experiments [18], and to the notion of exploitation and exploration in multi-arm bandit problems [19]. The two-threshold policy Π_TT is described in Algorithm 5.1.

Algorithm 5.1: Π_TT, the two-threshold switching strategy
Require: thresholds A and B, with B < A; optimal policies π_0, π_1, π_KL; transition kernels T_0(s, a, s') and T_1(s, a, s')
1: Start with S_0 = 0
2: while S_n ≤ A do
3:   If S_n ≤ B, use the locally optimal policy π_0
4:   If S_n > B, use π_KL
5:   Compute the statistic S_{n+1} using the Shiryaev recursion
6:   n ← n + 1
7: end while
8: Switch to the policy Π_1 = (π_1, π_1, ⋯)

In words, fix two thresholds A and B, with B < A, and assume we are in the Bayesian setting so that we can compute the Shiryaev statistic S_n. The policy Π_TT is then defined as follows. Start with the locally optimal policy Π_0 = (π_0, π_0, ⋯) and compute the statistic S_n. As long as S_n ≤ B, keep using the Markov policy π_0. If S_n > B, suggesting that we may be in the wrong model, extract more information by using π_KL. If S_n goes below B again, switch back to π_0; otherwise keep using π_KL. If S_n > A, switch to the optimal Markov policy for model M_1, that is, π_1. In short, use π_0 when S_n ∈ [0, B], use π_KL when S_n ∈ (B, A], and use π_1 when S_n ∈ (A, ∞). Thus, the change detection statistic S_n is used as a belief on the unknown model, and is used for deciding between M_0 and M_1 depending on whether S_n is small or large, respectively. For moderate values of S_n, the policy π_KL is used to improve detection performance. In the non-Bayesian setting, we can replace the Shiryaev statistic S_n by the SR statistic SR_n or the CUSUM statistic W_n.
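A compact rendering of Algorithm 5.1 in Python is given below. The environment interface env_step and the tabular policies are hypothetical placeholders of our own; the logic follows the three regions [0, B], (B, A], and (A, ∞) described above.

```python
def two_threshold_policy(env_step, pi0, pi1, pi_kl, T0, T1,
                         A, B, rho=0.01, horizon=1000, s0=0):
    """Sketch of Algorithm 5.1. env_step(s, a) -> (s_next, reward) is a
    hypothetical environment interface; pi0/pi1/pi_kl map states to actions;
    T0/T1 are tabular kernels indexed as T[s, a, s']."""
    S, s = 0.0, s0
    total, switched = 0.0, False
    for n in range(horizon):
        if switched:
            a = pi1[s]                  # post-detection: optimal for M1
        elif S <= B:
            a = pi0[s]                  # exploit: reward-optimal for M0
        else:
            a = pi_kl[s]                # explore: maximize KL information
        s_next, r = env_step(s, a)
        total += r
        if not switched:
            L = T1[s, a, s_next] / T0[s, a, s_next]
            S = (1.0 + S) * L / (1.0 - rho)   # Shiryaev recursion
            if S > A:
                switched = True         # declare change; commit to pi1
        s = s_next
    return total
```

Setting A = B recovers Π_loc, and B = 0 recovers Π_KL, mirroring the limiting cases discussed next.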
a) On the choice of thresholds A and B: Note first that setting A = B reduces the two-threshold policy Π_TT to the locally optimal policy Π_loc. Also, by setting B = 0 the policy reduces to Π_KL. As discussed earlier, the latter leads to good detection performance, but may lead to a loss of rewards. As with Π_loc, the thresholds A and B here are free parameters, even in the non-Bayesian setting. Thus, the thresholds can be optimized for best performance: choosing the optimal thresholds is equivalent to choosing the best detection-reward trade-off.

b) Bayesian vs non-Bayesian: We emphasize that the definition of Π_TT is implicitly different for the Bayesian and non-Bayesian cases. In the Bayesian case, we have access to the prior on the change point, and we use it to compute the Shiryaev statistic S_n. In the non-Bayesian case, we replace the Shiryaev statistic by the Shiryaev-Roberts statistic SR_n or the CUSUM statistic W_n.

c) When will Π_TT outperform Π_loc?: The amount of performance gain depends in general on the problem structure: the reward function and the transition kernel. However, a condition which must be satisfied is I_{π_0} ≪ I_max. This condition often leads to better detection performance for Π_TT as compared to Π_loc, resulting in efficient detection and higher average rewards. Another condition under which performance gains are significant is when under-performing under any of the models is penalized equally. For example, in the Bayesian case (2), if the discount factor β is small, then the rewards lost in delaying a transition to the optimal policy of M_1 will not affect the overall cost. As a result, a policy can cause large delays without significantly losing rewards, and good detection performance may not necessarily lead to a noticeable gain in rewards. Numerical results are provided in the next section to support these observations.

VI. MULTIPLE MODELS AND CHANGE POINTS

For a parametrized family of models {M_θ}, if the model changes from M_{θ_0} to some other unknown model in the family, then the Shiryaev, SR, or CUSUM statistic cannot be used for change detection. This is because one needs exact knowledge of the post-change distribution to compute those statistics. Popular alternatives are generalized likelihood ratio (GLR) based tests or mixture based tests. We only discuss the former. A GLR statistic for change detection is defined as

$$ G_n = \max_{1 \le k \le n} \;\; \sup_{\theta : |\theta - \theta_0| \ge \epsilon} \;\; \sum_{i=k}^{n} \log \frac{T_\theta(s_{i-1}, a_{i-1}, s_i)}{T_{\theta_0}(s_{i-1}, a_{i-1}, s_i)} , $$

where ε is the minimum magnitude of change that can occur in the problem. A replacement for Π_loc in this setting would be to use the locally optimal policy and switch at the time of stopping, where now the stopping rule is τ_g = min{n ≥ 1 : G_n > A}. Since there is more than one possibility for the post-change model, one can use the maximum likelihood estimate (the θ that achieved the maximum in the expression for G_n at the time of stopping) to choose the model. When the number of models is finite, one can also use the theory of fault isolation to choose the right model [20]. A replacement for Π_TT in this setting is even less obvious, because of the difficulty in defining an appropriate version of π_KL. We propose the following:

$$ \pi_{KL}(s) = \arg\max_{a \in \mathcal{A}} \;\; \inf_{\theta : |\theta - \theta_0| \ge \epsilon} \;\; \sum_{s'} T_\theta(s, a, s')\, \log \frac{T_\theta(s, a, s')}{T_{\theta_0}(s, a, s')} . $$

Thus, we maximize the worst-case KL divergence. Analytical and numerical performance of these schemes will be reported elsewhere.
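A grid version of the GLR statistic over a finite set of candidate models can be sketched as follows. The array layout (per-transition log-likelihoods precomputed for each model) and the candidate mask are our own assumptions for illustration.

```python
import numpy as np

def glr_statistic(loglik, theta0_idx, candidate_mask):
    """G_n = max over change time k and candidate models theta of
    sum_{i=k}^n [log p_theta(x_i) - log p_theta0(x_i)].
    loglik[i, j]: log-likelihood of transition i under model j;
    candidate_mask: boolean, selects models with |theta - theta0| >= eps."""
    llr = loglik[:, candidate_mask] - loglik[:, [theta0_idx]]
    n = llr.shape[0]
    best, tail = -np.inf, np.zeros(llr.shape[1])
    for k in range(n - 1, -1, -1):        # suffix sums over start index k
        tail += llr[k]
        best = max(best, float(tail.max()))
    return best
```

The argmax over the candidate grid at the stopping time plays the role of the maximum likelihood estimate of the post-change parameter mentioned above.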
If there are multiple change points, then as long as the gap between the change points is large enough, we can repeat the two-model algorithm for consecutive change points. Specifically, if the GLR-based algorithm detects a change and gives θ_1 as an estimate of the post-change parameter, then we can reset the algorithm with θ_1 as the pre-change parameter and reapply the GLR-based test. As long as there is a significant gap between model parameters, one would expect this technique to be close to optimal.

VII. NUMERICAL RESULTS

We now compare the two policies Π_TT and Π_loc on an inventory control problem, and show that using Π_TT results in significant gains. We use the inventory control problem to illustrate the ideas because of its simplicity; the understanding developed through this simple problem can be extended to more complex problems.

a) Inventory control problem: The state s_k is the level of the inventory at time k, and the maximum size of the inventory is N. The action a_k is the additional inventory ordered based on the state s_k, making the total inventory s_k + a_k. A stochastic demand w_k arrives, making the residual inventory, or next state, s_{k+1} = max{0, s_k + a_k − w_k}. Let c be the cost of ordering a unit of inventory, h the holding cost per unit, and p the cost of losing a unit of demand. The cost per unit time is then

$$ C_k(s_k, a_k) = c\, a_k + h\, s_{k+1} + p \max\{0,\; w_k - s_k - a_k\} . $$

It is assumed that at the change point the distribution of the demands changes from Poisson(λ) to Uniform. The demands are assumed to be independent, conditioned on the change point. For clarification, note that the problem formulations (2) and (5) are about maximizing rewards, but the objective in the inventory control problem is to minimize cost.

b) Numerical results for the Bayesian setting: Under the Bayesian setting (2), it is assumed that the change occurs with a geometric prior with parameter ρ = 0.01. The other parameter values used are c = 1, h = 5, β = 0.99, λ = 2, N = 10, 20, and p chosen to be 100, 200, or 300. This is a classical optimal control problem, and the optimal policy under each model, Poisson or Uniform, can be found using value iteration [2]. The performance of the various policies discussed is tabulated in Table I. The results are obtained by averaging over 1000 independent runs, each of horizon 1000. The Oracle policy is the one that knows the change point and switches from the optimal policy for the Poisson model to the optimal policy for the Uniform model exactly at the change point. The policy Π_loc employs the Shiryaev algorithm. Since there is no concept of false alarms here, the choice of threshold A is optimized to achieve the minimum possible cost. The policy Π_TT also employs the Shiryaev algorithm, and the values of its two thresholds are likewise chosen to achieve the minimum possible cost. For comparison, the costs achieved by an MOMDP approximation and by a random action strategy are also shown. The performance of the policy Π_TT is comparable to that of the MOMDP solution, and significantly better than that of Π_loc. Note that the gain is more significant when I_max is much larger than I_{π_0}. Also, since the Poisson arrival rate chosen implies less demand than the Uniform arrival model, the optimal control is more aggressive for the Uniform model, and a delay in detecting the change will result in a significant loss of demands. The chosen values of the discount factor β and of p ensure that delays in detection, and hence losses in demand, are sufficiently penalized under both regimes, pre-change and post-change.
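The per-model optimal policies referred to above come from standard value iteration on the inventory MDP. The sketch below is a minimal illustration, assuming the reconstructed cost above, a truncated demand support, and an order cap at the capacity N; it is not the authors' experimental code.

```python
import numpy as np
from scipy.stats import poisson

def value_iteration(N, demand_pmf, c=1.0, h=5.0, p=200.0, beta=0.99,
                    iters=500):
    """Optimal ordering policy for one stationary inventory model.
    demand_pmf[w] = P(demand = w); states and orders lie in {0..N}."""
    W = len(demand_pmf)
    V = np.zeros(N + 1)
    for _ in range(iters):
        Q = np.full((N + 1, N + 1), np.inf)
        for s in range(N + 1):
            for a in range(N + 1 - s):        # total stock capped at N
                cost, ev = c * a, 0.0
                for w in range(W):
                    s_next = max(0, s + a - w)
                    cost += demand_pmf[w] * (h * s_next
                                             + p * max(0, w - s - a))
                    ev += demand_pmf[w] * V[s_next]
                Q[s, a] = cost + beta * ev
        V = Q.min(axis=1)
    return Q.argmin(axis=1)                   # greedy (optimal) policy

lam, N, Wmax = 2.0, 10, 15
pois = poisson.pmf(np.arange(Wmax), lam); pois /= pois.sum()
unif = np.full(Wmax, 1.0 / Wmax)
pi0, pi1 = value_iteration(N, pois), value_iteration(N, unif)
```

As the text notes, pi1 (the Uniform-demand policy) orders more aggressively than pi0, which is why detection delay is costly in this problem.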
c) Numerical results for the non-Bayesian setting: An overall discounted criterion like (2) may not be the best, or even a fair, way to compare competing algorithms in a non-stationary environment. One of the primary reasons for using (2) as a criterion was to show that the proposed algorithm performs comparably to an MOMDP approximation. A more appropriate performance metric is the variational one (5). In this setting, instead of optimizing the thresholds used in the policies, the thresholds are selected to satisfy the constraint α on E_∞[Σ_{k=0}^∞ β^k C_k(s_k, a_k)], the regime in which the change never occurs. For these choices of the thresholds, the performance E_1[Σ_{k=0}^∞ β^k C_k(s_k, a_k)], in which the change occurs at time 1, is evaluated for both Π_TT and Π_loc. This way, performance under both regimes, or MDPs, is equally weighted, and an algorithm that performs well in every setting can be obtained. The performance comparison is plotted in Fig. 1. The parameters used are c = 1, h = 5, β = 0.99, λ = 2, N = 20, and p = 200. We use the Shiryaev-Roberts algorithm to detect changes. Clearly, Π_TT is superior. Finally, note that since we do not have a prior on the change point, a POMDP or MOMDP solution cannot be obtained.

VIII. CONCLUSIONS

This paper presents a novel way to combine techniques used for stationary MDPs with quickest change detection to solve non-stationary MDPs, by considering the trade-off between change detection and reward maximization, an important problem that has not been adequately addressed before. Our method uses a two-threshold switching strategy to exploit the reward-detection trade-off. Our numerical results show that the proposed method achieves a better trade-off, and outperforms the state-of-the-art QCD method and the MOMDP method on non-stationary MDP tasks. Future work will include the application of the ideas developed in this paper to the design of reinforcement-learning-based algorithms for decision making in non-stationary environments. See [21] for a stochastic approximation based approach to this problem.
The properties of the host galaxy and the immediate environment of GRB 980425 / SN 1998bw from the multi-wavelength spectral energy distribution

We present an analysis of the spectral energy distribution (SED) of the galaxy ESO 184-G82, the host of the closest known long gamma-ray burst (GRB) 980425 and its associated supernova (SN) 1998bw. We use our observations obtained at the Australia Telescope Compact Array (the third > 3σ radio detection of a GRB host) as well as archival infrared and ultraviolet (UV) observations to estimate its star formation state. We find that ESO 184-G82 has a UV star formation rate (SFR) and stellar mass consistent with the population of cosmological GRB hosts and of local dwarf galaxies. However, it has a higher specific SFR (per unit stellar mass) than luminous spiral galaxies. The mass of ESO 184-G82 is dominated by an older stellar population, in contrast to the majority of GRB hosts. The Wolf-Rayet region ∼ 800 pc from the SN site experienced a starburst episode during which the majority of its stellar population was built up. Unlike that of the entire galaxy, its SED is similar to those of cosmological submillimeter/radio-bright GRB hosts with hot dust content. These findings add to the picture that, in general, the environments of GRBs on 1-3 kpc scales are associated with high specific SFR and hot dust.

INTRODUCTION

Long gamma-ray bursts (GRBs) are associated with the death of massive stars (e.g. Galama et al. 1998; Hjorth et al. 2003b; Stanek et al. 2003). This makes them of special interest in cosmology, because they possibly trace the evolution of the rate of star formation in the Universe (e.g. Lamb & Reichart 2000; Jakobsson et al. 2005, 2006b). Indirect evidence on the nature of GRBs was found by studying their host galaxies (e.g. Bloom et al. 1998; Christensen et al. 2004; Sollerman et al. 2005; Castro Cerón et al. 2009; Savaglio et al. 2009). Moreover, several studies of the immediate environments of GRBs suggest a close connection of long GRBs with regions of star formation, and therefore that their progenitors are likely massive stars. Fruchter et al. (2006) found that GRBs trace the ultraviolet (UV) brightest parts of their hosts (see also Bloom et al. 2002). Thöne et al. (2008) studied in detail the environment (in 3 kpc bins) of GRB 060505, concluding that it originated in the youngest, most metal-poor, and most intensely star-forming region in the host galaxy. Similarly, Östlin et al. (2008) found that the 0.3 kpc environment of GRB 030329 is much younger than the entire galaxy, and its estimated age suggests a conservative lower limit on the mass of the GRB progenitor equal to 12 M⊙. Finally, a significant number of other GRBs were reported to reside in dense star-forming regions (Castro-Tirado et al. 1999; Holland & Hjorth 1999; Hjorth et al. 2003a; Savaglio et al. 2003; Vreeswijk et al. 2004; Chen et al. 2005, 2006; Fynbo et al. 2006a; Watson et al. 2006, 2007; Prochaska et al. 2007a,b; Ruiz-Velasco et al. 2007) and molecular clouds (Galama & Wijers 2001; Stratta et al. 2004; Jakobsson et al. 2006a; Campana et al. 2007; Prochaska et al. 2008). GRB 980425 is the closest known GRB (z = 0.0085; Tinney et al. 1998), and is therefore an excellent laboratory for local GRB studies. Galama et al. (1998) reported the Type Ic supernova (SN) 1998bw exploding inside the error box of GRB 980425. Its lightcurve was well modeled by an explosion of a Wolf-Rayet (WR) star (Iwamoto et al. 1998),
which is a highly evolved and massive star that has lost its outer hydrogen layers. Up to now, three other GRBs have also been spectroscopically confirmed to be associated with SNe: GRB 030329 (Hjorth et al. 2003b; Matheson et al. 2003; Stanek et al. 2003), 031203 (Cobb et al. 2004; Gal-Yam et al. 2004; Malesani et al. 2004; Thomsen et al. 2004), and 060218 (Ferrero et al. 2006; Mirabal et al. 2006; Modjaz et al. 2006; Pian et al. 2006; Soderberg et al. 2006; Sollerman et al. 2006), while two GRBs were confirmed to be SN-less: GRB 060505 and 060614 (Fynbo et al. 2006b; Della Valle et al. 2006; Gal-Yam et al. 2006). The host galaxy of GRB 980425 / SN 1998bw (ESO 184-G82; Holmberg et al. 1977) is a dwarf (0.02 of the characteristic blue luminosity, L*_B; Fynbo et al. 2000) barred spiral (SBc; Fynbo et al. 2000) with axis diameters of 12 and 10 kpc (down to B = 26.5 mag arcsec⁻²; Sollerman et al. 2005), dominated by a large number of star-forming regions (Fynbo et al. 2000; Sollerman et al. 2005). SN 1998bw occurred inside one of these, ∼ 800 pc southeast of a region displaying a Wolf-Rayet type signature spectrum (hereafter: WR region; Hammer et al. 2006). The WR region dominates the galaxy's emission at 24 µm (Le Floc'h et al. 2006) and is the youngest region within the host, exhibiting very low metallicity (Christensen et al. 2008). In this paper we present fits to the spectral energy distribution (SED) of ESO 184-G82 and of the WR region, and compare their properties to other galaxies. Section 2 lists the data sources (including the third radio detection of a GRB host, after those reported by Berger et al. 2001, 2003) used for the SED modeling of Section 3. We derive properties of the host galaxy and WR region in Section 4, discussing their implications in Section 5. Section 6 closes with our conclusions. We use a cosmological model with H₀ = 70 km s⁻¹ Mpc⁻¹, Ω_Λ = 0.7, and Ω_m = 0.3, so ESO 184-G82 is at a luminosity distance of 36.5 Mpc and 1′′ corresponds to 175 pc at its redshift.

DATA

We undertook deep radio observations of the host galaxy of GRB 980425 on 2007 August 18 using the Australia Telescope Compact Array (proposal no. C1651, PI: Michałowski), in the hybrid H168 configuration, with antennas positioned on both east-west and north-south tracks, and baselines of 60-4500 m. Simultaneous observations were made at 6 cm (4.8 GHz) and 3 cm (8.64 GHz), with a bandwidth of 128 MHz at each frequency. A total of 10.5 hr of data were obtained. The calibrator source PKS B1934-638 was utilized to set the absolute flux calibration of the array, as well as to calibrate phases and gains. Data reduction and analysis were done using the MIRIAD package (Sault & Killeen 2004). Antenna #1 was excluded from the analysis due to phase instabilities, thus reducing the number of possible baselines from 15 to 10. Calibrated visibilities were Fourier transformed using "robust weighting", which combines a high signal-to-noise ratio with enhanced sidelobe suppression. The final synthesized beam sizes for the 6 and 3 cm images were 76′′ × 38′′ and 37′′ × 21′′, respectively, with root-mean-square (rms) values of 46 and 27 µJy beam⁻¹. The host galaxy, ESO 184-G82, was detected only at 6 cm. This is only the third > 3σ radio detection of a GRB host, after those of GRB 980703 and GRB 000418 (Berger et al. 2001, 2003). Note that the radio observations of GRB 000301C and GRB 000911 were also reported to be > 3σ detections, but after removal of the afterglow signal the significance of the host detections drops below 3σ.
As ESO 184-G82 slightly overlaps Galaxy A, reported by Foley et al. (2006), ∼ 70′′ to the south (see Figure 1), its flux density was estimated by simultaneous fitting of two two-dimensional Gaussian functions to the data, with their centroids, sizes, and orientations as free parameters. The lack of residuals left after the subtraction of these two Gaussians rules out significant contamination of the measured host flux by Galaxy A. ESO 184-G82 was not detected at 3 cm, down to a 3σ limiting flux of 0.18 mJy. We obtained U-band photometry with the Danish 1.5m Telescope on La Silla during the period 2007 May-June. In total, 3.75 hr were spent on the target. The data were reduced in a standard manner using IRAF (Tody 1986, 1993). We performed photometry on archival JHK images from NTT/SofI (Patat et al. 2001), VLT/ISAAC (Sollerman et al. 2002), and the Two Micron All Sky Survey (2MASS; Jarrett et al. 2000), as well as BVRI images from VLT/FORS1 (Sollerman et al. 2005) and UV images from GALEX (Martin et al. 2003, 2005). The flux was measured in an aperture of 50′′ diameter for the whole galaxy and of 2.4-3.6′′ (depending on the seeing of the particular image) for the WR region. The results of our photometry and the fluxes obtained from the literature are presented in Table 1, and a mosaic of images is shown in Figure 2. [Table 1 note: flux densities are given in mJy and are corrected for Galactic extinction assuming E(B − V) = 0.059 (Schlegel et al. 1998) and the extinction curve of Cardelli et al. (1989); the row marked % gives the percentage contribution of the WR region to the total galaxy emission; the upper limit is 3σ and errors are 1σ.] [Figure 2 caption fragment: the K-band image was obtained when the SN was still bright; the X-ray image reveals two compact sources 1.5′′ apart (overlapping in the image shown), the SN and an ultra-luminous X-ray source.] Finally, we analyzed the X-ray (2-10 keV) image from Kouveliotou et al. (2004). It was, however, not used in the modeling, since our SED templates do not cover this wavelength regime.

SED MODELING

In order to model the SEDs of ESO 184-G82 and of the WR region, we utilized the set of 35 000 SED models of Iglesias-Páramo et al. (2007), developed with GRASIL (Silva et al. 1998) and based on numerical calculations of the radiative transfer within a galaxy. They cover a broad range of galaxy properties, from quiescent to starburst. We scaled all the SEDs to match the observational data and chose the one with the lowest χ² to derive the galaxy characteristics. The radio parts of the model SEDs were scaled down by an appropriate factor to account for the decreased efficiency of nonthermal radio emission in dwarf galaxies (see Equation (1) and the discussion in Section 4.2 and in Bell 2003). Namely, a dwarf galaxy has a lower radio flux than would result from scaling down a high-luminosity SED template, and the GRASIL models do not take this effect into account. From the SFR-radio flux relation of Bell (2003; see Equation (1) below) we inferred that the radio part of the SED template corresponding to ESO 184-G82 should be ≈ 3.5 times lower than in the original template. Even such corrected templates overpredict the radio data points, so we excluded the radio points from the fitting (see Section 5.2 for a discussion). The best fits are shown in Figure 3 (the SED fits can be downloaded from http://archive.dark-cosmology.dk), and the resulting properties of the galaxy are listed in Table 2 (see Michałowski et al. 2008, and Sections 4.2 and 4.3, for details on how these were derived from the SEDs).
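The template-scaling step just described amounts to a one-parameter least-squares fit per template. Below is a minimal sketch of that procedure under our own assumptions about the data layout (arrays of matched photometry and template fluxes); it is an illustration, not the actual fitting code used for the GRASIL templates.

```python
import numpy as np

def fit_templates(obs_flux, obs_err, template_fluxes):
    """Scale each SED template to the photometry by weighted least squares
    and pick the chi^2 minimum. template_fluxes[i, j]: template i, band j."""
    w = 1.0 / obs_err ** 2
    best = (np.inf, None, None)
    for i, model in enumerate(template_fluxes):
        # optimal multiplicative scale for this template (closed form)
        scale = np.sum(w * obs_flux * model) / np.sum(w * model ** 2)
        chi2 = float(np.sum(w * (obs_flux - scale * model) ** 2))
        if chi2 < best[0]:
            best = (chi2, i, scale)
    return best  # (chi2_min, best template index, scale factor)
```

Excluding the radio bands from obs_flux, as done in the paper, simply means dropping those entries from the arrays before calling the routine.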
Stellar Masses The broadband SED of the host of GRB 980425 is consistent with that of a galaxy with an old stellar population (the time since the beginning of the galaxy's evolution is equal to 12 Gyr; see Column 2 of Table 2) built up quiescently, without any starburst episode (consistent with the conclusion of Sollerman et al. 2005), at a rate comparable to the present value. The age estimate is, however, uncertain due to degeneracies between age and dust extinction as well as metallicity: if one increases the assumed metallicity or decreases the extinction, the resulting age will increase. The derived stellar mass agrees with previous estimates (Castro Cerón et al. 2009; Savaglio et al. 2009). On the other hand, the comparison of Columns 8 and 9 of Table 2 reveals that the stellar mass of the WR region is dominated by a starburst episode, so that it had built up a negligible fraction of its stellar mass before the starburst. According to our SED model, this starburst is still ongoing and started 50 Myr ago. Interestingly, this is the starburst age predicted for GRB hosts by Lapi et al. (2008) based on the argument that for older starbursts the metallicity becomes too high to produce a GRB. Star Formation Rates The SFR of the entire galaxy, as well as that of the WR region, was calculated from UV and infrared (IR) fluxes (Table 1) using the conversions of Kennicutt (1998). The radio SFR (M⊙ yr⁻¹) was calculated from the radio luminosity L_1.4GHz (erg s⁻¹ Hz⁻¹) using the method proposed by Bell (2003):

SFR_radio = 5.52 × 10⁻²⁹ L_1.4GHz / [0.1 + 0.9 (L_1.4GHz / L_c)^0.3],   (1)

where L_c = 6.4 × 10²⁸ erg s⁻¹ Hz⁻¹ is a critical luminosity (see below; above L_c the denominator is replaced by unity). This relation was derived based on a sample of 249 galaxies spanning a wide range in luminosities, including normal and intensely star-forming galaxies, starbursts, ultraluminous IR galaxies and blue compact dwarfs. The luminosity at the rest frequency of 1.4 GHz, L_1.4GHz (erg s⁻¹ Hz⁻¹), of a galaxy at redshift z and luminosity distance D_L (cm) can be calculated from the flux density F_ν (Jy) at the observed radio frequency ν_obs (GHz), assuming the radio spectral slope α = −0.75 (Yun & Carilli 2002):

L_1.4GHz = 4π D_L² (1 + z)^−(1+α) (1.4/ν_obs)^α F_ν × 10⁻²³.   (2)

This relation (Equation (1)) diverges significantly from the usual methods (Condon 1992; Yun & Carilli 2002) for low-luminosity galaxies, because the nonthermal radio emission is not effective in such galaxies and the relation between SFR and radio luminosity becomes nonlinear below L_c (SFR ≲ 3 M⊙ yr⁻¹). This effect is likely caused either by cosmic-ray electrons responsible for the radio emission escaping from galaxies of small sizes (Bell 2003), or by the ordered magnetic field in dwarf galaxies being weaker, so that the magnetic field due to SNe (responsible for the acceleration of electrons) is less efficient, because it results from contraction and amplification of the global field.
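As a concrete illustration of Equations (1) and (2), a minimal Python sketch follows. The 0.5 mJy flux density is purely illustrative (the measured flux densities sit in Table 1, which is not reproduced here), and z ≈ 0.0085 is the host's published redshift:

```python
import math

def radio_luminosity(flux_jy, nu_obs_ghz, d_l_cm, z, alpha=-0.75):
    """Rest-frame 1.4 GHz luminosity (erg/s/Hz) from an observed flux
    density, following Equation (2) (Yun & Carilli 2002)."""
    k_corr = (1.0 + z) ** (-(1.0 + alpha))   # K-correction for the redshifted band
    colour = (1.4 / nu_obs_ghz) ** alpha     # shift from nu_obs to rest-frame 1.4 GHz
    return 4.0 * math.pi * d_l_cm**2 * k_corr * colour * flux_jy * 1e-23

def sfr_radio(l_14, l_c=6.4e28):
    """SFR (Msun/yr) from L_1.4GHz (erg/s/Hz) using the Bell (2003)
    relation; the denominator suppresses the SFR below L_c."""
    supp = 0.1 + 0.9 * (l_14 / l_c) ** 0.3 if l_14 <= l_c else 1.0
    return 5.52e-29 * l_14 / supp

# Example with an illustrative 6 cm (4.8 GHz) flux density of 0.5 mJy
# for a galaxy at D_L = 36.5 Mpc (1 Mpc = 3.0857e24 cm), z ~ 0.0085:
d_l = 36.5 * 3.0857e24
l_14 = radio_luminosity(0.5e-3, 4.8, d_l, 0.0085)
print(f"L_1.4GHz = {l_14:.2e} erg/s/Hz, SFR = {sfr_radio(l_14):.2f} Msun/yr")
```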
The SFR derived from SED modeling (Column 4 of Table 2) agrees (within a factor of 2) with the estimates derived from UV, IR, and radio for the entire galaxy, suggesting little extinction (see also Section 4.3). All the estimates are also consistent with the X-ray SFR upper limit of 2.8 M⊙ yr⁻¹ derived by Watson et al. (2004). As noted by Le Floc'h et al. (2006), the contribution of the WR region to the galaxy luminosity at 24 µm is ∼ 75% (see Table 1 and Figure 3). However, according to our SED fit, it only emits 15% of the total IR luminosity (high-resolution far-IR or submillimeter imaging would be required to confirm this result). Under the assumption that the total IR luminosity is proportional to the SFR (Kennicutt 1998), this is consistent within a factor of 2 with the finding of Sollerman et al. (2005) and Christensen et al. (2008) that the WR region harbors about one-third of the host's star formation (as also suggested by the SFRs derived directly from the SED fits; see Column 4 of Table 2). Dust Properties We derived the dust temperature by fitting a graybody curve to the model SED near the dust peak (as in Michałowski et al. 2008). The dust in the WR region is much hotter than the average over the entire galaxy (see Column 11 of Table 2 and note in Figure 3 that the SED of the WR region peaks at shorter wavelengths than that of the entire galaxy). This hints at a very intense starburst episode and a strong radiation field, consistent with the discussion in Section 4.1. High dust temperatures are not uncommon for GRB hosts. They were found for higher-redshift (z = 0.9-1.5) GRBs with similar conclusions about their origin (Michałowski et al. 2008). Moreover, Bloom et al. (2003) and Djorgovski et al. (2001) noted that high flux ratios between the [Ne III] and [O II] lines in GRB hosts suggest the presence of very hot H II regions. The total dust mass, M_d, was estimated using the method of Taylor et al. (2005), based on the formalism developed by Hildebrand (1983):

M_d = D_L² F_ν / [(1 + z) κ(ν) B(ν, T_d)],   (3)

where F_ν is the flux density (either observed or interpolated from an SED model) at the rest frequency ν, κ(ν) ∝ ν^β is the dust mass absorption coefficient and B(ν, T_d) is the Planck function at the dust temperature T_d. [Figure 3 caption: squares and circles denote detections of the host galaxy and the WR region, respectively, with errors, in most cases, smaller than the size of the symbols; the arrow denotes the 3σ upper limit (value marked at the base); the hashed columns mark the wavelength ranges corresponding to the UV, optical, near-IR, mid-IR, far-IR, and radio domains (x-axis: rest wavelength, 10⁰-10⁵ µm); for a discussion of the discrepancy between the data and models at radio wavelengths see Section 5.2.] We estimated the flux at 450 µm from the SED models. The resulting dust masses are given in Column 10 of Table 2, assuming β = 1.3. There exists a degeneracy between the value of this parameter and the resulting dust mass, in the sense that more dust is expected if a lower β is assumed. The uncertainties quoted in Table 2 are large, because we allowed a broad range of β (0 − 2; Yun & Carilli 2002).
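A minimal sketch of the Equation (3) estimate, assuming an illustrative flux density and a commonly used κ(850 µm) = 0.77 cm² g⁻¹ normalization (an assumption for illustration; the normalization actually adopted in the paper is not quoted in the text):

```python
import math

H = 6.626e-27; C = 2.998e10; KB = 1.381e-16; MSUN = 1.989e33  # cgs constants

def planck(nu, t):
    """Planck function B_nu (erg/s/cm^2/Hz/sr)."""
    return 2 * H * nu**3 / C**2 / (math.exp(H * nu / (KB * t)) - 1.0)

def dust_mass(flux_jy, wavelength_um, t_dust, d_l_cm, z,
              beta=1.3, kappa0=0.77, lambda0_um=850.0):
    """Equation (3): M_d = D_L^2 F_nu / [(1+z) kappa(nu) B(nu, T_d)].
    kappa(nu) = kappa0 * (lambda0/lambda)**beta in cm^2/g; kappa0 at
    850 um is an assumed, illustrative normalization."""
    nu = C / (wavelength_um * 1e-4)                  # rest frequency (Hz)
    kappa = kappa0 * (lambda0_um / wavelength_um) ** beta
    f_cgs = flux_jy * 1e-23                          # Jy -> erg/s/cm^2/Hz
    return d_l_cm**2 * f_cgs / ((1 + z) * kappa * planck(nu, t_dust)) / MSUN

# Illustrative: 10 mJy at rest-frame 450 um, T_d = 30 K, D_L = 36.5 Mpc:
d_l = 36.5 * 3.0857e24
print(f"M_d ~ {dust_mass(0.010, 450.0, 30.0, d_l, 0.0085):.2e} Msun")
```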
Hatsukade et al. (2007) derived an upper limit on the molecular mass of the host of GRB 980425 of M_H2 < 3 × 10⁸ M⊙. Therefore, from our dust mass estimate we derive a molecular gas-to-dust ratio M_H2/M_d < 107. This value is lower than the molecular gas-to-dust ratio for the Milky Way (∼ 140-400; Sodroski et al. 1997; Draine et al. 2007) and other spirals (∼ 1000 ± 500; Devereux & Young 1990; Stevens et al. 2005), but consistent with the values for high-redshift submillimeter galaxies (54 +14/−11; Kovács et al. 2006), for the nuclear regions of local luminous IR galaxies (LIRGs) and ultraluminous IR galaxies (ULIRGs) (120 ± 28; Wilson et al. 2008) and for local, far-IR-selected galaxies (∼ 50; Seaquist et al. 2004). It indicates that the host of GRB 980425 harbors a relatively large amount of dust, or that its gas reservoir is significantly depleted. However, this conclusion is based on an uncertain dust mass estimate, so it should be checked with deep submillimeter observations. Our SED fits are consistent with negligible extinction for both the entire galaxy and the WR region (Column 13 of Table 2). Very low reddening for the entire galaxy was also found by Patat et al. (2001) and Sollerman et al. (2005), from the width of the Na I D doublet and from SED fitting, respectively. On the other hand, using the Balmer decrement, Savaglio et al. (2009) and Christensen et al. (2008) derived A_V = 1.73 and 0.93, respectively, for the entire galaxy, whereas Hammer et al. (2006) and Christensen et al. (2008) obtained A_V = 1.51 and 0.53, respectively, for the WR region. However, extinction derived from the emission lines of H II regions is usually higher than that from SED modeling (Savaglio et al. 2009). The Host Galaxy From the SED modeling it is apparent that ESO 184-G82, the host galaxy of GRB 980425 / SN 1998bw, is a normal dwarf star-forming spiral. None of its properties (Table 2) is exceptionally high or low. In particular, its mass, SFR, and size are broadly consistent with the ranges obtained for a sample of local dwarf galaxies (Figures 5 and 17 of Woo et al. 2008; in this respect ESO 184-G82 is very similar to the Large Magellanic Cloud) and for a sample of local blue compact galaxies (Figure 2 of Sollerman et al. 2005). Its specific SFR (φ ≡ SFR/M⋆ = 0.23 Gyr⁻¹) is consistent with the range of φ found for other GRB hosts by Castro Cerón et al. (2009) based on UV (but lower than for a subsample detected in the mid-IR; Castro Cerón et al. 2006). However, its φ is higher than for the majority of nearby spiral galaxies hosting SNe (see Figure 8 of Thöne et al. 2009). High φ for other GRB hosts was also reported by, e.g., Christensen et al. (2004) and predicted theoretically by Courty et al. (2004, 2007) and Lapi et al. (2008). This is in agreement with the finding of Iglesias-Páramo et al. (2006) and Zheng et al. (2007) that low-mass galaxies in general have high φ. As stated in Section 4.1, the SED of ESO 184-G82 is consistent with a nonstarburst nature. This is also supported by its stellar building time (T_SFR ≡ φ⁻¹ = 4 Gyr) being not much less than the Hubble time, and by its low SFR per unit area, equal to 0.004 M⊙ yr⁻¹ kpc⁻² (see the relevant discussion in Heckman 2005). ESO 184-G82 is the only GRB host with a clear ∼ 1.6 µm bump in the SED (compare Figure 3 with Figure 4 of Savaglio et al. 2009). According to Sawicki (2002) this feature starts to be apparent for a galaxy older than 100 Myr (see his Figure 1). The absence of the bump in other GRB hosts likely indicates that on average they are very young galaxies, although we stress that in many cases the optical and near-IR data presented by Savaglio et al. (2009) do not cover the wavelengths to which the bump is redshifted. Radio Detection The SED model presented in Figure 3 (solid line) overpredicts the radio fluxes by a factor of 1.5 (> 2.3) in the 6 (3) cm band. We suggest that this may result from the following effect. Radio wavelengths probe current star formation activity (≲ 10⁸ yr; Condon 1992; Cannon & Skillman 2004), unlike the UV (Kennicutt 1998; Christensen et al. 2004) and IR (Calzetti et al. 2007), at which even older galaxies can be luminous. Therefore, it seems likely that only a limited part of the galaxy is younger than 10⁸ yr, so the galaxy is fainter in the radio than its UV and IR fluxes would imply. This is supported by Sollerman et al. (2005), who noticed that the colors of the GRB 980425 host are consistent with a constant SFR over 5-7 Gyr without any starburst episode.
Therefore, if we assume that the IR probes the total SFR, then the radio data point would be a factor of ∼ 2 (≈ SFR_IR/SFR_radio) higher if the radio were also sensitive to star formation older than 10⁸ yr. We calculated the radio spectral index α, defined through F_ν ∝ ν^α, so that α(ν1, ν2) = log[F_ν(ν2)/F_ν(ν1)] / log(ν2/ν1). The radio SED of ESO 184-G82 (see Table 1) is very steep, with α(4.8, 8.64 GHz) < −1.44. This is consistent with the steepest slopes in the sample of ULIRGs discussed by Clemens et al. (2008) and is interpreted as an indication of spectral aging of relativistic electrons (the lifetime of high-energy electrons emitting high-frequency radiation is shorter than that of low-energy electrons). The same conclusion is drawn by Hirashita & Hunt (2006), who predicted a steepening of the radio slope ∼ 10 Myr after a starburst, when synchrotron radiation starts to dominate over free-free emission from H II regions (see also Bressan et al. 2002; Cannon & Skillman 2004). In summary, such a steep radio slope indicates that the bulk of the star formation activity in the host of GRB 980425 is not recent.
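The two-point index is simple arithmetic; the sketch below reproduces the quoted limit using the 0.18 mJy 3σ limit at 3 cm and the 6 cm flux density (≈ 0.42 mJy) implied by that limit together with α < −1.44:

```python
import math

def spectral_index(f1, nu1, f2, nu2):
    """Two-point radio spectral index alpha, defined via F_nu ∝ nu^alpha."""
    return math.log10(f2 / f1) / math.log10(nu2 / nu1)

# 3 cm (8.64 GHz) 3-sigma upper limit from the text; the 6 cm (4.8 GHz)
# flux density is the value implied by alpha < -1.44 and that limit.
f_6cm, f_3cm_limit = 0.42, 0.18                       # mJy
alpha_limit = spectral_index(f_6cm, 4.8, f_3cm_limit, 8.64)
print(f"alpha(4.8 -> 8.64 GHz) < {alpha_limit:.2f}")  # ~ -1.44 (upper limit)
```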
As mentioned in Section 4.2, the radio SFR of a dwarf galaxy can underpredict the true value if derived using the usual methods. Since GRB hosts are in general subluminous at all wavelengths (Hogg & Fruchter 1999; Hanlon et al. 2000; Hjorth et al. 2000, 2002; Vreeswijk et al. 2001; Fynbo et al. 2002, 2003, 2006b; Berger et al. 2003; Le Floc'h et al. 2003; Christensen et al. 2004; Courty et al. 2004; Tanvir et al. 2004, 2008; Jakobsson et al. 2005; Sollerman et al. 2005, 2006; Fruchter et al. 2006; Priddey et al. 2006; Chary et al. 2007; Ovaldsen et al. 2007; Thöne et al. 2007; Wiersema et al. 2007; Castro Cerón et al. 2009; Savaglio et al. 2009), we suggest that the Bell (2003) relation (Equation (1)) should be used to calculate their radio SFRs. Indeed, in the case of the host of GRB 980425, one would get a value of 0.068 M⊙ yr⁻¹ using the method of Yun & Carilli (2002), much smaller than the UV SFR. The radio luminosity is supposed to trace both unobscured and obscured SFRs (because radio is not affected by dust), so such a low value is clearly an underestimate of the true SFR. The relation of Bell (2003) is, however, not necessary (though it gives reasonable results) for the high-luminosity subsample of GRB hosts, where the usual methods result in radio SFRs consistent with other diagnostics (see Table 1 of Michałowski & Hjorth 2007). WR Region The WR region emits 7% of the host's UV flux. Its contribution falls to below 1% in the near-IR and rises steeply to 75% in the mid-IR. As mentioned in Sections 4.1 and 4.3, an intense starburst episode together with a low stellar mass provide a consistent explanation of the shape of the SED. Indeed, our SED fit suggests that the WR region harbors as much as 12-26% of the total star formation activity, but its contribution to the galaxy's stellar and dust masses is negligible (see Columns 8 and 10 of Table 2). The φ of the WR region is 22 Gyr⁻¹. High φ in the immediate environment of GRBs was also found by Thöne et al. (2008, see their Figure 4; the spatial resolution was 3 kpc in this case) and is consistent with the findings of Fruchter et al. (2006). We stress that we do not claim here that GRB 980425 is physically connected with the WR region, just that it occurred in the most intensely star-forming part of the galaxy (note in Figure 2 that the southern spiral arm is the only part of the galaxy where X-ray point sources, indicative of intense star formation, are found; Kouveliotou et al. 2004). Because of the proximity of the SN region to the WR region, it is very likely that their star formation was triggered by the same mechanism and therefore that the nature of their star formation is similar. The starburst nature of the WR region is confirmed by its stellar building time (T_SFR = 57 Myr) being much less than the Hubble time, and by its very high SFR per unit area, equal to 6 M⊙ yr⁻¹ kpc⁻² (Heckman 2005). It is worth noting that the SED of the WR region is qualitatively similar to the SEDs presented by Michałowski et al. (2008) for submillimeter/radio-bright GRB hosts: blue in the optical, luminous in the mid-IR, and indicating a hot dust content. The similarities are highlighted in Figure 3, where the WR region model (dashed line) and the model corresponding to GRB 000418 (dotted line) are compared. The agreement is striking, but note that in order to suppress the very high IR luminosity of the host of GRB 000418, we needed to modify the SED model presented by Michałowski et al. (2008) by changing the escape parameter from 50 to 10 Myr (the time after which stars begin to escape from their molecular clouds; see Panuzzo et al. 2007 for a discussion of this parameter). The WR region was also found to be similar to high-z GRB hosts with respect to emission line ratios (indicative of age and metallicity), unlike the entire host galaxy ESO 184-G82, which appears to be older than other GRB hosts (Christensen et al. 2008). The picture that emerges from these findings is that the ∼ 1-3 kpc scale environment of a GRB represents the youngest and most intensely star-forming region of a host galaxy, harboring the hottest dust. If present at high redshifts, such regions may dominate the emission (and therefore the derived properties) of distant GRB hosts. CONCLUSIONS In this paper we have presented the UV-to-radio SED fitting of the host galaxy of GRB 980425 / SN 1998bw and of the WR region close to the SN position. The host galaxy of GRB 980425 is a normal dwarf spiral galaxy with somewhat elevated star formation activity compared to other spirals (though it is not necessary to invoke any starburst episode to explain its SED). The steep radio slope and the presence of the ∼ 1.6 µm bump in the SED indicate the existence of an old stellar population. Its low radio luminosity can be explained by the suppression of synchrotron emission in dwarf galaxies and by the fact that radio is only sensitive to recent star formation. The emission of the WR region close to the GRB position is dominated by an ongoing starburst episode, during which almost all of its stars were formed. It contributes significantly to the star formation of the entire galaxy. In many aspects the WR region is similar to high-redshift GRB hosts: it is a blue, young region of intense star formation containing hot dust. The presence of the GRB close to this region indicates that GRBs appear to be associated with regions of high specific SFR and high dust temperatures.
We thank Joanna Baradziej, Eelco van Kampen, Robin Wark, and Mark Wieringa for discussion and comments; our referee Sandra Savaglio for help in improving this paper; Jorge Iglesias-Páramo for kindly providing his SED templates; Naomi McClure-Griffiths for help with the ATCA observations; Paul M. Vreeswijk, Jesper Sollerman, Ferdinando Patat, and Chris Lidman for providing the reduced optical and near-infrared images; Andreas O. Jaunsen for help with the data reduction; and Laura Silva for making the GRASIL code available. The Dark Cosmology Centre is funded by the Danish National Research Foundation. M. J. M. would like to acknowledge support from The Faculty of Science, University of Copenhagen. J. M. C. C. gratefully acknowledges support from the Instrumentcenter for Dansk Astrofysik and the Niels Bohr Institutet's International PhD School of Excellence. The Australia Telescope Compact Array is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The authors acknowledge the data analysis facilities provided by the Starlink Project, which is run by CCLRC on behalf of PPARC. This research has made use of NASA's Astrophysics Data System; the GHostS database (http://www.grbhosts.org/), which is partly funded by Spitzer/NASA grant RSA Agreement No. 1287913; the Gamma-Ray Burst Afterglows site (http://www.mpe.mpg.de/~jcg/grb.html), which is maintained by Jochen Greiner; IRAF, distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation; the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration; and SAOImage DS9, developed by the Smithsonian Astrophysical Observatory. The Galaxy Evolution Explorer (GALEX) is a NASA Small Explorer, launched in 2003 April. We gratefully acknowledge NASA's support for the construction, operation, and science analysis of the GALEX mission, developed in cooperation with the Centre National d'Etudes Spatiales of France and the Korean Ministry of Science and Technology.
Sensation Seeking and Gambling Behavior in Adolescence: Can Externalizing Problems Moderate this Relationship? Gambling is a widespread phenomenon during adolescence. Among the different risk factors involved in the onset of adolescent gambling behaviors, one factor that has been studied is the sensation seeking personality trait. However, the literature is heterogeneous and a direct relationship between sensation seeking and gambling behaviors has not always been highlighted. This suggests that the relationship can be influenced by other factors. In particular, we explored the moderating role of externalizing problems in this relationship. A total of 363 adolescents (232 males and 131 females) aged 14 to 20 (M = 16.35, SD = 1.36) completed a battery of questionnaires aimed at assessing their gambling behaviors, as well as their levels of externalizing problems and sensation seeking. The results showed that sensation seeking was associated with gambling severity, but this relationship was significant only when externalizing problems were high or medium. On the contrary, when externalizing problems were low, the relationship between sensation seeking and gambling severity was not significant. Overall, sensation seeking in adolescence can favor the implementation of risk behaviors, such as gambling, but only in association with the presence of externalizing problems. Limitations, strengths, and the social and clinical implications of the present study are discussed. Introduction Adolescence is a crucial transitional period characterized by significant physical, cognitive, emotional, and social changes [1]. In particular, it represents a phase of vulnerability to maladjustment [2] and to the implementation of addiction and risk-taking behaviors [3,4]. This is due to the fact that, in general, adolescents lack recognition and/or awareness of the potential negative effects of risk behavior, have a self-perceived invulnerability, and have high levels of sensation seeking and risk-taking, compounded by not yet fully developed cognitive abilities [3,5-7]. Therefore, it is not surprising that, as often happens with other forms of addiction [4], adolescence is a critical period in which gambling behaviors arise and can become problematic [8]. Gambling behaviors themselves represent a risk-taking activity because they involve betting money or items of material value on an event with an uncertain outcome in the hope of obtaining further money and/or material goods [9]. In this regard, recent literature has highlighted that gambling is spreading widely among adolescents, so much so that it has become a serious social and public health problem [10-13] with deleterious psychological, social, relational, and financial consequences, both in the short- and long-term [3,14,15]. In particular, a recent systematic review conducted by Calado et al. [11] showed that 0.2-12.3% of adolescents meet the criteria for being classified as problem gamblers. In Italy, the country of the present study, the percentage of adolescents who gamble has increased rapidly in recent years. For example, Chiesi et al. [16] showed that 17% and 7% of adolescents could be classified as at-risk or problematic gamblers, respectively. A subsequent study highlighted higher percentages, showing that, although 73.2% of Italian adolescents do not have gambling problems, 18.5% can be classified as at-risk gamblers and 8.3% as problematic gamblers [17].
More recently, Cosenza, Ciccarelli, and Nigro [18] found even higher percentages of gamblers among Italian adolescents, showing that 20.2% and 9.8% can be defined as at-risk or problematic gamblers, respectively. These data are alarming given that, in many countries, including Italy, the law prohibits minors from engaging in gambling activities. This rapid increase may be due to the fact that, in addition to the very wide diffusion of slot machines and scratch-card vendors, technological and information technology developments have in recent years favored the proliferation of a large number of games and apps linked to gambling. These games and apps are easily accessible and available even to minors, since access restrictions are easily circumvented [3]. Given the high prevalence of gambling among adolescents and the negative consequences it can have, it is not surprising that the literature has focused on the recognition of the risk factors involved in the development of gambling problems in adolescents [19-22]. One of the most studied risk factors involved in the onset of gambling is sensation seeking, which is the tendency to pursue "varied, novel, complex, and intense sensations and experiences, and the willingness to take physical, social, legal, and financial risks for the sake of such experiences" [23] (p. 27). In this regard, many studies have found a relationship between sensation seeking and gambling severity in adolescents. For example, a study conducted by Donati et al. [24] highlighted that sensation seeking was a significant predictor of at-risk and problematic gambling. Similarly, Estevez et al. [25] found that sensation seeking was high in young gamblers, as did Reardon et al. [26], who highlighted a positive correlation between these two variables. Another recent study found that one of the antecedents of regular gambling was a high sensation seeking score [27]. Moreover, Donati et al. [28] showed that sensation seeking has a significant direct effect on gambling severity, with higher levels of sensation seeking predicting greater severity. Briefly, such evidence suggests that the pursuit of novelty and intense stimulation, typical of individuals with high sensation seeking scores, may explain why some youth are attracted to gambling. In fact, gambling often represents an escape from everyday life, guided by curiosity about new experiences and a desire for the excitement deriving from the unpredictability of those experiences [29,30]. However, not all studies in the literature agree on this point, and some have found no direct role of sensation seeking in gambling severity [31,32]. Considering these heterogeneous data, one may surmise that there are additional variables that could moderate this relationship. On this point, Caldeira et al. [33] found that the direct role of sensation seeking in gambling was completely attenuated when considering its indirect path through the frequency of alcohol and drug use. In our opinion, however, the presence of general externalizing problems, as well as alcohol and drug use, can intervene in the relationship between sensation seeking and gambling. Externalizing problems, including the violation of age-appropriate rules and expectations, interpersonal conflict, oppositionality, aggression, and impulsiveness, can be considered a construct that includes both aggressive and delinquent behaviors [34].
The relevant literature has shown that externalizing problems increase in prevalence during adolescence and can be considered a predictor of antisocial behavior in adults [35,36]. Furthermore, a plethora of studies has highlighted that these types of problems are connected to both sensation seeking and gambling severity. In particular, sensation seeking has been found to be a predictor of a wide range of externalizing problem behaviors in adolescence [23,37] and is strongly related to antisocial and delinquent behavior [38-40]. Regarding gambling, a meta-analysis of longitudinal studies has pointed out that antisocial behaviors, violence, and an uncontrolled temperament represent early risk factors for the development of gambling problems [20]. In addition, several studies have shown that gambling severity is significantly associated with marked externalizing problems [10,41], and Allami et al. [19] verified that adolescents showing externalizing profiles at 12 years of age reported a large number of gambling-related problems at 16 and 23 years of age. Starting from these considerations, the main aim of the present study was to explore the moderating role of externalizing problems in the relationship between sensation seeking and gambling severity. Our conceptual model is reported in Figure 1. In particular, we hypothesized that the level of sensation seeking was significantly linked to the severity of gambling behaviors. However, we supposed that this relationship would be greater in the presence of externalizing problems, and that it could be non-significant in their absence. Participants A total of 363 adolescents (232 males and 131 females) between the ages of 14 and 20 (M = 16.35, SD = 1.36), who were attending two high schools in the metropolitan area of Florence, were recruited for the present study. More than 93% of the students came from central Italy and from families characterized by a middle/high socio-educational background, with more than 59% of fathers and 72% of mothers having a high school diploma or university degree. In addition, 95% of fathers and more than 75% of mothers had a job. Procedure A cross-sectional study was conducted in accordance with the guidelines for the ethical treatment of human participants of the Italian Psychological Association. First, the Ethical Committee of the University of Florence approved the study (n. 81120/2018). Second, written authorization was obtained from the principals of the two high schools, selected at random from among all high schools in the metropolitan area of Florence. Then, all students were informed of the aims of the study, that participation was anonymous and voluntary without any reward, and that they could withdraw at any time. All students signed informed consent and, in the case of minor students, informed consent was also signed by their parents. Data collection was performed in class during normal school hours. Specifically, two trained researchers went to the classrooms and collectively administered the questionnaires in paper form to the students.
Measures The Italian version of the South Oaks Gambling Screen Revised for Adolescents (SOGS-RA) [16], developed by Winters et al. [42], was used to assess gambling behavior. The SOGS-RA is a self-report instrument composed of two parts. The first part measures the frequency of gambling, the typology of gambling activity, and the amount of money spent on gambling in the previous year. From the first part, we obtained the percentage of adolescents who reported having gambled during the previous year. The second part of the instrument is composed of 12 dichotomous items. The total score, obtained by summing these items, measures the severity of gambling behaviors as a continuous variable. The Cronbach's alpha for the present sample was 0.71, which represents an acceptable value [43]. The Italian version of the Youth Self-Report (YSR) [34,44] was employed to assess the level of externalizing behavior problems. The externalizing scale of the YSR is composed of two syndrome scales (the delinquent and aggressive behavior scales) consisting of 30 items rated on a three-point scale, from zero (not true) to two (very true or often true). For the present sample, the Cronbach's alpha was 0.84 for the externalizing scale, which represents a good value [43]. The Italian version of the Brief Sensation Seeking Scale (BSSS) [45], developed by Zuckerman et al. [46], was used to measure the sensation seeking trait. The BSSS is a self-report instrument composed of eight items rated on a five-point Likert scale, from one (totally disagree) to five (totally agree). For the present study, the Cronbach's alpha was 0.71, which represents an acceptable value [43].
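For reference, Cronbach's alpha for a k-item scale is α = k/(k − 1) × (1 − Σ item variances / variance of the total score). A minimal Python sketch with hypothetical Likert responses (not the study data, so the printed value is meaningless beyond illustrating the formula):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical responses to the 8 BSSS items (1-5 Likert) from 6 respondents.
rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(6, 8)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```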
Data Analysis The collected data were analyzed using SPSS version 23.0 (IBM, Armonk, NY, USA). Missing data were missing completely at random (Little's Missing Completely At Random (MCAR) test: χ² = 80.51, df = 98, p = 0.900) and an expectation maximization (EM) algorithm was employed to substitute missing items. Descriptive statistics and pairwise correlation coefficients were computed. The normality of each variable was explored using Curran and colleagues' criteria, which establish an acceptable range for skewness of ±2 and kurtosis of ±7 [47]. Regarding Pearson's correlations, Cohen's criteria were used [48], which treat correlations of around 0.10, around 0.30, and 0.50 or higher as small, medium, and large effect sizes, respectively. Then, prior to addressing the aims of the study, the effect of gender on gambling behaviors, externalizing behavior problems, and sensation seeking was checked by performing a series of t-tests, and the corresponding effect sizes were reported (Cohen's d). Finally, in order to explore the role of externalizing problems in the relationship between sensation seeking and gambling severity, a hierarchical regression analysis consisting of four consecutive steps was carried out, with gambling severity as the dependent variable. These analyses were performed following Aiken and West's procedure [49]. Sensation seeking and externalizing problems were centered at the sample mean for both the main effect and interaction terms, to reduce potential multi-collinearity. Gender, as a dummy variable, was entered in Step 1, sensation seeking was entered in Step 2, externalizing problems (the moderating variable) were entered in Step 3, and the two-way interaction between sensation seeking and externalizing problems was entered in Step 4. The significant interaction between the independent variable (sensation seeking) and the moderating variable (externalizing problems) was graphically represented using ModGraph [50], with the moderating and independent variables represented as low (values 1 SD below the mean), medium (values ranging between 1 SD below and 1 SD above the mean), or high (values 1 SD above the mean). Finally, simple slope analyses were conducted using post-hoc regressions to explore the significance of each slope. In this regard, the externalizing problems variable was standardized and three groups were considered: a low level of externalizing problems (values < −1); a medium level of externalizing problems (values ranging from −1 to 1); and a high level of externalizing problems (values > 1).
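The centering, interaction, and simple-slopes procedure described above can be sketched outside SPSS as well. The following Python example uses statsmodels on simulated stand-in data (all variable names and values are hypothetical; the four entry steps collapse here into one model containing all terms, so it mirrors the logic rather than reproducing the paper's results):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 363
# Simulated stand-ins for the study variables (hypothetical data).
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "ss": rng.normal(0, 1, n),    # sensation seeking (BSSS total)
    "ext": rng.normal(0, 1, n),   # externalizing problems (YSR)
})
df["sogs"] = (0.8 * df.male + 0.2 * df.ss + 0.2 * df.ext
              + 0.15 * df.ss * df.ext + rng.normal(0, 1, n))

# Mean-center predictors before forming the interaction term (Aiken & West).
df["ss_c"] = df.ss - df.ss.mean()
df["ext_c"] = df.ext - df.ext.mean()

model = smf.ols("sogs ~ male + ss_c * ext_c", data=df).fit()
print(model.summary().tables[1])

# Simple slopes of sensation seeking at low/medium/high externalizing
# levels (-1 SD, mean, +1 SD), as plotted by ModGraph.
b_ss, b_int = model.params["ss_c"], model.params["ss_c:ext_c"]
sd = df.ext_c.std()
for label, m in [("low", -sd), ("medium", 0.0), ("high", sd)]:
    print(f"{label:>6}: slope = {b_ss + b_int * m:.3f}")
```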
Results Regarding gambling, the data collected with the first part of the SOGS-RA showed that more than 67% of the adolescents (n = 245) declared that they had gambled at least once in the previous 12 months. In particular, 193 of them (78.8%) were minors. Table 1 shows the descriptive statistics and pairwise correlation coefficients of externalizing problems, sensation seeking, and gambling, the latter measured with the second part of the SOGS-RA as a continuous variable; all participants were therefore included in the subsequent analyses. All variables presented a normal distribution. Moreover, higher levels of externalizing problems were linked to higher levels of sensation seeking and gambling severity. Finally, the level of sensation seeking presented a medium, positive correlation with the level of gambling problems. Significant differences emerged between males and females on all variables. In particular, males reported higher levels of externalizing problems (males: M = 11.78, SD = 7.10; females: M = 9.37). Given these gender differences, gender was inserted as a control variable in the subsequent analyses. Table 2 shows the results of the hierarchical regression testing the moderating role of externalizing problems in the relationship between sensation seeking and gambling severity. In Step 1, gender accounted for 14% of the variance in gambling severity, F(1, 361) = 57.29, p < 0.001. In Step 2, sensation seeking explained a further 4% of the variance (ΔR² = 0.04), F(2, 360) = 37.82, p < 0.001. In Step 3, the moderating variable, the level of externalizing problems, explained a further 3% of the variance (ΔR² = 0.03), F(3, 359) = 29.73, p < 0.001. Finally, in Step 4, the interaction term explained a further 2% of the variance (ΔR² = 0.02), F(4, 358) = 24.53, p < 0.001. Higher levels of sensation seeking were more strongly associated with higher levels of gambling severity at higher levels of externalizing problems. This interaction is shown in Figure 2. Post hoc analyses showed that the relationship between sensation seeking and gambling severity was significant when externalizing problems were medium (β = 0.26, p < 0.001) and high (β = 0.35, p = 0.002). On the contrary, the relationship was non-significant when externalizing problems were low (β = 0.12, p = 0.323). Discussion The main focus of the present study was to explore the relationship between the sensation seeking trait and gambling severity in adolescents, examining the moderating role played by the presence of externalizing problems in this relationship. We hypothesized that the seeking of novel and intense stimulation could represent a significant risk factor for the development of gambling problems. However, considering the results of previous studies, we also posited that the level of externalizing problems manifested by adolescents could play a significant role as a moderating variable in this relationship. Specifically, we assumed that this relationship may be significantly greater in the presence of externalizing problems and may not be significant in their absence. In other words, since externalizing problems are linked to impulsive and uncontrolled behaviors, they can represent a risk factor in the relationship between the sensation seeking trait and gambling severity. In line with the recent literature, our results showed a relevant and alarming picture, highlighting that gambling is a very common phenomenon among adolescents [13,16-18,51]. In fact, more than 67% of the adolescents in our sample declared that they had gambled at least once in the previous year. Moreover, of the adolescent gamblers, 78.8% (n = 193) were minors, despite the fact that, according to current legislation, it is illegal for them to gamble. These results are in line with previous studies conducted in the Italian context on gambling among adolescent minors and those of legal age [13,16]. In addition, our results highlight that gambling severity is significantly and positively correlated both with high levels of sensation seeking and with the presence of externalizing problems. These data are in line with studies in the literature highlighting that the desire for excitement, novelty, and intense stimulation, which characterizes individuals with high levels of sensation seeking, is closely connected to gambling severity [24-28].
In addition, the presence of aggressive and delinquent behaviors, typical of an externalizing profile, is also correlated with gambling severity in adolescence [10,19,20,41]. Finally, although sensation seeking was related to the severity of gambling behaviors, our results show that this relationship was moderated by the level of externalizing problems experienced by adolescents. In particular, the relationship between sensation seeking and gambling severity was significant only when adolescents showed medium or high externalizing problems while, on the contrary, this relationship disappeared when externalizing problems were low. These results point out the risk posed by sensation seeking for the development of antisocial and uncontrolled behaviors in adolescents. However, it could be that sensation seeking alone is not a significant risk factor for gambling activity when the adolescent does not present externalizing problems. These results provide a significant contribution to the knowledge of risk factors in adolescent gambling; however, the present study has some limitations.
The first limitation is related to the convenience sample used in the study. It would be useful for future research to extend recruitment to a more heterogeneous sample (e.g., in terms of geographical origin and socio-educational background), since the present results are based on a sample of adolescents mostly from central Italy and from families characterized by a middle/high socio-educational background. The second limitation concerns the nature of the data, given that they are based only on self-report questionnaires, which may not fully reflect true adolescent gambling behavior due to possible biases deriving from the social stigma associated with such behavior. The third limitation is that measures of social desirability were not included, and this could be a problem, as the participants may have had difficulty admitting their gambling problems and/or the presence of aggressive or delinquent tendencies. Moreover, since this is a cross-sectional study, it is impossible to determine the direction of the observed effects and to infer causal relations. Another limitation is the use of the SOGS-RA to assess the presence and severity of gambling behaviors, considering known issues of item content and false positives [61]. However, in addition to the fact that evidence from item response theory supports the reliability and suitability of the SOGS-RA as a screening tool in adolescents [16], we used the questionnaire results only as continuous measures in the data analysis. A further limitation is linked to the amount of variance explained: although the results were significant, the explained variance is very small. This may be due to the fact that the proposed theoretical model is not exhaustive, and other variables may certainly play a significant role in the interaction between sensation seeking and gambling. For example, the attachment relationship with parents appears to be an important factor in the onset of problematic gambling behavior [62]. Despite these limitations, our results have relevant social and clinical implications. From a social point of view, we found that most adolescents, even minors, gamble. From a social policy point of view, it would therefore be helpful to have greater legal control of gambling operations and to implement gambling prevention programs among adolescents. Moreover, from a clinical point of view, our results underline the importance, for clinicians who deal with adolescents with gambling problems, of paying attention to the personality traits that characterize them. Above all, clinicians should work on aspects related to individual well-being, in order to promote the reduction of the problems underlying deviant and antisocial behaviors. In line with this goal, it would be useful for future research to continue to investigate other variables involved in adolescent gambling problems. In fact, this would provide very useful information for clinicians to better direct their work and more effectively help adolescents who are involved in gambling behaviors that could become more problematic with increasing age. Gambling in adolescence is considered a risk factor for a wide range of negative consequences, both in the short- and long-term [14], and it is often associated with gambling problems in adulthood [63-65]. Therefore, this research area should be further explored in future investigations.
Conclusions In the present study we explored the moderating role of externalizing problems in the relationship between sensation seeking and gambling severity in a sample of Italian adolescents. Overall, the results highlighted that the level of the sensation seeking trait in adolescence was significantly associated with gambling severity. However, this relationship was significant only in the presence of medium or high levels of externalizing problems. In other words, externalizing problems represent a significant risk factor for the severity of gambling behaviors in adolescence. In fact, without the presence of these problems, the relationship between sensation seeking and gambling was non-significant. Author Contributions: Conceptualization and design, F.T. and L.P.; methodology, F.T. and L.P.; analysis, L.P. and S.G.; writing-original draft preparation, L.P. and S.G.; writing-review and editing, F.T.; supervision, F.T.; funding acquisition, F.T. All authors have read and agreed to the published version of the manuscript.
The pathogenesis, diagnosis and management of congenital dyserythropoietic anaemia type I Summary Congenital dyserythropoietic anaemia type I (CDA‐I) is one of a heterogeneous group of inherited anaemias characterised by ineffective erythropoiesis. CDA‐I is caused by bi‐allelic mutations in either CDAN1 or C15orf41 and, to date, 56 causative mutations have been documented. The diagnostic pathway is reviewed and the utility of genetic testing in reducing the time taken to reach an accurate molecular diagnosis and avoiding bone marrow aspiration, where possible, is described. The management of CDA‐I patients is discussed, highlighting both general and specific measures which impact on disease progression. The use of interferon alpha and careful management of iron overload are reviewed and suggest the most favourable outcomes are achieved when CDA‐I patients are managed with a holistic and multidisciplinary approach. Finally, the current understanding of the molecular and cellular pathogenesis of CDA‐I is presented, highlighting critical questions likely to lead to improved therapy for this disease. Congenital dyserythropoietic anaemia type I (CDA-I) (Online Mendelian Inheritance in Man [OMIM] entry: 224120; Orphanet: D64.4 and DCS-10) is one of a heterogeneous group of disorders termed the congenital dyserythropoietic anaemias (CDAs). Unlike other rare anaemias, the CDAs are characterised by ineffective erythropoiesis and morphological abnormalities of erythroblasts. Other haematopoietic lineages are unaffected and there is a haemolytic component. Dyserythropoiesis is defined as the presence of erythroblast abnormalities indicative of aberrant proliferation or differentiation (Crookston et al, 1966). The World Health Organization recognizes nine types of erythroid dysplasia (Brunning et al, 2008). Although there are further categories and subcategories of abnormalities described on dysplastic bone marrow smears, concordance amongst expert haematologists reviewing identical dyserythropoietic slides is poor (Goasguen et al, 2018). Crookston's original description allows for a minority of dyserythropoiesis in normal bone marrow (Crookston et al, 1966). There are three main types of CDA (CDA-I, CDA-II and CDA-III) and although each has specific morphological and clinical features, blood films show overlapping abnormalities. All subtypes show anisocytosis and poikilocytosis and CDA-I has macrocytic red cells while types II and III are usually normocytic (Bain et al, 2010). The major CDA subgroups were originally proposed based on specific erythroblast morphological abnormalities on bone marrow light microscopy of aspirates (Heimpel & Wendt, 1968): CDA-I is indicated by binucleate macrocytic erythroblasts and internuclear bridging, CDA-II by the presence of 10-35% binucleate late erythroblasts and CDA-III by the presence of giant multinucleate erythroblasts with up to 12 nuclei per cell. Electron microscopy of erythroblasts reveals a characteristic pattern of chromatin abnormalities in CDA-I (see below) and a double cellular membrane in CDA-II. This classification system has facilitated the systematic collection of CDA cases, allowed the best available treatment to be delivered and led to the identification of causative genes. However, the degree to which these disorders share a molecular basis is unclear. Since the discovery of CDAN1 as a causative gene for CDA-I in 2002, the genes for the three main types of CDA have been identified. 
Approximately 90% of CDA-I cases are caused by bi-allelic mutations in CDAN1 or C15orf41 (Dgany et al, 2002; Babbs et al, 2013), CDA-II is caused by bi-allelic mutations in SEC23B, and CDA-III is a dominant disorder caused by the P916R mutation of KIF23 (Liljeholm et al, 2013). Identification of the causative genes has shed light on the pathogenesis of these disorders, opened new avenues for research, allowed accurate molecular diagnosis and carrier testing of family members, and impacted disease management. Variants of CDA have also been described, for example the dominantly inherited E325K mutation of KLF1 causing an anaemia termed CDA-IV, and X-linked forms caused by GATA1 mutations (Nichols et al, 2000; Singleton et al, 2009; Arnaud et al, 2010). Further CDA subtypes have been suggested; however, the extent to which these are distinct entities will become clearer as our understanding of their molecular pathogenesis improves. As with many rare disorders, establishing an accurate estimate of the incidence and prevalence of CDA-I is difficult. Over 300 cases have been reported (Iolascon et al, 2013). Most are sporadic cases from diverse regions such as Western Europe, North Africa and Asia, while some series are accounted for by a founder effect, particularly in the Middle East (Tamary et al, 2005). The lack of reported cases from Sub-Saharan Africa or South America may reflect ascertainment bias (Heimpel et al, 2010). Clinical presentation Diagnosis of CDA-I is often predicated on a high index of suspicion. With rare disorders, awareness of the condition is necessary before appropriate investigations are instigated. Traditional pathways for the investigation of rare inherited anaemias follow the usual sequence of history and examination, standard haematological and biochemical assays, possibly radiological imaging, a bone marrow aspirate and trephine, and genetic analysis. However, this approach is quickly changing with the advent of clinical-grade next-generation sequencing (Fig 1). Nevertheless, all investigations should be guided by a thorough initial consultation with the patient. History and examination While some cases have been identified in utero (Parez et al, 2000; Kato et al, 2001; Lin et al, 2014), most present in childhood or young adulthood (Iolascon et al, 2011; Shalev et al, 2017). Cases detected in utero have led to fetal demise if untreated, but intra-uterine transfusions support survival to term, followed by lifelong transfusion dependence. Later presentation can be due to intermittent jaundice and fatigue or a requirement for occasional blood transfusion for severe anaemia, while some patients have presented with secondary iron overload (Kawabata et al, 2012) or pigment gallstones (Fujino et al, 2013). Retrieving a neonatal history can be useful, as some neonates with CDA-I experience prolonged jaundice and/or require a blood transfusion perinatally, with no further transfusions thereafter. As a recessive disorder, CDA-I is more likely to occur in consanguineous families, and consanguinity should be asked about directly. Intriguingly, there is marked heterogeneity in phenotypic expression, not only between unrelated patients with different mutations, but even in siblings with identical mutations, where, for example, one sibling presents in the neonatal period and the other at 2 years of age (al-Fawaz & al-Mashhadani, 1995; Heimpel et al, 2010). Examination may reveal pallor, mild jaundice and splenomegaly, the last of which is found in all cases, at least radiologically (Shalev et al, 2017).
Frontal bossing can be observed in the more severe untreated cases, and may prompt imaging to assess for extramedullary haematopoiesis (Heimpel et al, 2003). The finding of limb abnormalities in the presence of anaemia should suggest CDA-I. Haematology and biochemical assays A full blood count typically reveals a moderate macrocytic anaemia, with a haemoglobin (Hb) of 66-116 g/l (mean 92 g/l), a mean corpuscular volume (MCV) of 100-120 fl and reticulocytopenia, although 30% of cases have a normal MCV (Wickramasinghe, 1998). There are reports of transient neutropenia/thrombocytopenia, but these lineages are generally unaffected (Meznarich et al, 2018). The hallmark of CDA-I is an absolute or relative reticulocytopenia, indicative of ineffective erythropoiesis (Heimpel, 2004). Ineffective erythropoiesis was originally demonstrated using ferrokinetic studies, in which the fraction of ⁵⁹Fe present in peripheral red blood cells is calculated 2 weeks post-intravenous infusion. Where erythropoiesis is effective, e.g. when anaemia occurs due to bleeding, iron deficiency or haemolysis, the fraction of red cells containing ⁵⁹Fe is ~75-80%, but in ineffective erythropoiesis it may be as low as 25-30% (Lewis, 2001). Recently, a new clinical index, termed the bone marrow responsiveness index (BMRI), has been developed to discriminate haemolytic anaemia from ineffective erythropoiesis. This is defined as: (absolute reticulocyte count) × (patient Hb/normal Hb), and was shown to be a highly sensitive parameter (90·4%) for achieving a clinical diagnosis of CDA-II. This metric is likely to be useful in CDA-I but is yet to be formally validated in this disease.
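As a small illustration of the BMRI calculation (the reticulocyte count and the normal-Hb denominator below are illustrative; in practice the normal Hb would be chosen for the patient's age and sex):

```python
def bmri(retic_abs, hb_patient, hb_normal):
    """Bone marrow responsiveness index:
    (absolute reticulocyte count) x (patient Hb / normal Hb).
    Low values flag ineffective erythropoiesis rather than haemolysis."""
    return retic_abs * (hb_patient / hb_normal)

# Illustrative CDA-I-like case: reticulocytes 40 x 10^9/l, Hb 92 g/l,
# against an assumed normal Hb of 140 g/l.
print(f"BMRI = {bmri(40, 92, 140):.1f} x 10^9/l")
```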
Key alternative diagnoses, which must be ruled out, include vitamin B12 and folate deficiency, autoimmune haemolytic anaemias, haemoglobinopathies and infections, such as human immunodeficiency virus or leishmaniasis. Ferritin and transferrin saturations should be assessed to evaluate for secondary iron overload. A high level of soluble transferrin receptor and a low/unrecordable serum hepcidin in the absence of iron deficiency have also been noted in some CDA-I patients (Cazzola et al, 1999), but neither test is readily available in the clinic.

Bone marrow examination

A bone marrow aspirate and trephine are usually carried out next, allowing rapid differentiation between, for example, Diamond Blackfan Anaemia (DBA) and CDA, both of which present with a reticulocytopenic anaemia. Findings on bone marrow examination were originally described by Heimpel and Wendt (1968); a hallmark of the condition is marked erythroid hyperplasia, with an excess of erythroid precursors compared to myeloid precursors (Heimpel, 2004). Between 2·4% and 10% of late polychromatic erythroblast precursors are binucleate (Heimpel et al, 2010), where the nuclei can be of unequal sizes. Intermediate erythroblasts exhibit internuclear bridges in 1-8% of cells examined and 30-60% of late erythroblasts exhibit a range of abnormalities, including polychromasia, megaloblastic changes, multinuclearity, karyorrhexis and basophilic stippling (Iolascon et al, 2012). At least 20% of erythroblasts must have an abnormality on light microscopy for a diagnosis of CDA-I (Heimpel et al, 2010). The diagnosis is based on a constellation of abnormalities, but the absence of internuclear bridges calls a diagnosis of CDA-I into question. A recent study examining concordance in the morphological identification of dyserythropoiesis in two CDA-I cases found agreement between only 4/7 expert haematologists (Goasguen et al, 2018). Diagnostic certainty relies on genetic confirmation or the gold standard of scanning electron microscopy (SEM) to identify nuclear abnormalities. This often requires a second bone marrow aspirate, as SEM samples require specific preparation for best results and CDA-I is usually unsuspected prior to the light microscopy findings. SEM findings include widening of nuclear pores and darkening of heterochromatin, with electron-lucent areas within abnormally electron-dense heterochromatin (Fig 3) described as 'spongy heterochromatin' (Wickramasinghe, 1998) or a 'Swiss cheese appearance' (Heimpel et al, 2006). Invagination of the cytoplasm into the nucleus has also been described (Iolascon et al, 2013).

Other features

Non-haematological manifestations of CDA-I are described in 10-20% of cases (Wickramasinghe, 1998), mostly involving the axial skeleton, such as missing distal phalanges, syndactyly (especially of the toes) and complete lack of nail formation (Brichard et al, 1994). Brown skin pigmentation (Heimpel & Wendt, 1968) and neurological deficits (Wickramasinghe, 1998) have occasionally been reported. However, as some families in which CDA-I is diagnosed have a high degree of consanguinity, other features may represent epiphenotypes arising from different recessive conditions (Renella & Wood, 2009). The presence of angioid streaks has been described in CDA-I.
These represent a break in Bruch's membrane, a collagenous layer underneath the pigment epithelium of the retina; they have been described in a small number of CDA-I patients and are associated with loss of visual acuity (Frimmel & Kniestedt, 2016).

Imaging

Abdominal ultrasound reveals universal splenomegaly (Heimpel et al, 2006) and gallstones in 50-60% of adults with CDA-I. Imaging with plain radiographs or magnetic resonance imaging (MRI) to investigate and characterise paraspinal masses secondary to extramedullary haematopoiesis may be necessary. Imaging for organ iron overload by T2* MRI or Ferriscan® is guided by biochemical iron studies. However, it has been reported that 38% of CDA patients (type not specified) have cardiac iron on MRI, and 25% acquire cardiac iron loading by 10 years of age. Raised iron in the pancreas was described in six CDA cases and the median liver iron concentration from T2* was 5·9 mg/g dry weight (upper limit of normal = 1·8 mg/g dry weight). While the role of imaging for iron loading is yet to be systematically evaluated in CDA-I patients, this study suggests that this patient cohort warrants routine assessment of iron status (Berdoukas et al, 2013).

Genetic analysis

CDA-I is a recessive disease caused by bi-allelic mutations in either CDAN1 or C15orf41 (Dgany et al, 2002; Babbs et al, 2013). There are 51 causative mutations documented in CDAN1 and 5 in C15orf41 (Fig 4). Currently, no patients have been identified with compound mutations in these genes. Approximately 10% of CDA-I cases remain unexplained by mutations in either of the known genes, and there may be cis-acting regulatory mutations affecting these genes or further, yet to be identified, loci causative of CDA-I. A recent review of the role of SEM in the diagnosis of CDA-I concluded that genetic analysis for CDAN1 and C15orf41 should only be done once SEM has confirmed the presence of the pathognomonic 'spongy heterochromatin' abnormality (Resnitzky et al, 2017). We feel that genetic testing may obviate the need for SEM in some cases, thus avoiding an additional bone marrow aspirate with its associated risks, which would be performed under general anaesthetic in the paediatric population. Genetic testing can be performed from a small amount of peripheral blood early in the diagnostic process (Fig 1). While knowledge of the precise underlying mutation does not currently carry prognostic information, it guides discussion, confirms the diagnosis and allows preimplantation genetic diagnosis. Current approaches to genetic analysis include targeted panels (containing 50-200 genes), whole exome sequencing (WES) and whole genome sequencing (WGS). Where WES and WGS are undertaken for clinical diagnostics rather than research, e.g. as part of the National Health Service (NHS) England 100,000 Genomes Project, the whole exome/genome is sequenced but only data from a pre-determined set of genes are analysed. Targeted panels have been shown to be clinically useful in the diagnosis of rare inherited anaemias, including CDA-I (Gerrard et al, 2013; Roy et al, 2016; Russo et al, 2018; Shefer Averbuch et al, 2018). Reported diagnostic rates vary from ~38% to ~65%, depending on the number and types of genes included and the depth of phenotypic assessment undertaken. In ~10% of cases, this approach reveals an unsuspected diagnosis (Roy et al, 2016).
Making an accurate diagnosis is paramount to instituting correct therapy, for example steroids for DBA, splenectomy for pyruvate kinase (PK) deficiency and administration of interferon alpha (IFNα) for CDA-I. In our recent study of a series of 20 cases with a presumed diagnosis of CDA-I (from blood results and light microscopy and/or SEM), 55% received a molecular diagnosis from the targeted panel. The majority of these confirmed CDA-I with mutations in CDAN1 or C15orf41, but 20% had an alternative diagnosis, such as DBA with an unusual marrow in early childhood, or PK deficiency (Roy et al, 2016). Extrapolating from these findings, up to 55% of patients may be able to avoid an unnecessary bone marrow aspiration when genetic analysis is performed earlier in the diagnostic pathway. In addition, time to diagnosis is notoriously long for patients with rare disorders, and the delay between onset of symptoms and a formal diagnosis has been reported to be 12 years in some cases of CDA-I (Fujino et al, 2013). A German CDA registry reported that the age of its 21 patients at the time of initial diagnosis of CDA-I ranged between 0·1 and 45 years (median 17·3 years) and that 11 of the 21 cases had previously been misdiagnosed as congenital haemolytic anaemia (Heimpel et al, 2006). This underscores the utility of genetic testing to provide an accurate molecular diagnosis.

Holistic approach

Akin to the management of thalassaemia patients, whose clinical course CDA-I patients often closely resemble, the most favourable outcomes are achieved when CDA-I patients are managed with a holistic and multidisciplinary approach. The diagnosis is usually a surprise to patients and their families and, as with other rare disorders, patients suffer from a sense of isolation and a feeling that their physicians have inadequate knowledge about their condition (Budych et al, 2012). There is often a need for genetic counselling of the parents for future pregnancies. For patients receiving life-long transfusions, issues such as long-term intravenous access, the logistics of regular transfusion and transfusion complications are as important as they are in thalassaemia patients. Irrespective of transfusion status, CDA-I is associated with iron overload, and compliance with chelation is particularly important, especially around adolescence. Transition from paediatric to adult services should be carefully managed; in the UK, National Institute for Health and Care Excellence (NICE) guidelines recommend the use of the "Ready, Steady, Go" approach (https://www.nice.org.uk/sharedlearning/implementing-transition-care-locally-and-nationally-using-the-ready-steady-go-programme). Complex patients may need to be managed by a team including psychologists, dieticians, endocrinologists and bone specialists, while others will have a very mild disorder and require only a yearly review with a haematologist.

Management of the anaemia

This depends on severity and patient characteristics and may change according to the patient's lifestyle. Some patients require transfusions only perinatally, whilst others require them during additional marrow stress, e.g. intercurrent infections or pregnancy. True transfusion dependence occurs in 3-4% of cases (Heimpel et al, 2006), although genotype-phenotype correlations have not yet been convincingly described. Transfusions increase the prevalence and severity of iron overload.
Transfusion practice should follow guidelines for chronically transfused patients, and managing transfusion in CDA-I patients according to practices developed for haemoglobinopathies is entirely appropriate. As such, blood units should be as fresh as possible, preferably <10 days since sampling from the donor (Milkins et al, 2013; Davis et al, 2017), and patients should receive the Hepatitis B vaccine and be offered lifelong folic acid replacement because of the haemolytic component. The only difference from haemoglobinopathy patients concerns the requirement for extended phenotyping to reduce the likelihood of developing antibodies. This is less critical than for the haemoglobinopathies, where there is commonly a mismatch between donor and recipient ethnicity (Davis et al, 2017), but it is still considered best practice for CDA-I patients.

Interferon alpha

The only specific treatment available for CDA-I is interferon alpha (IFNα). Its discovery was fortuitous: IFNα was given to a French patient with transfusion-dependent CDA-I who had contracted Hepatitis C through contaminated blood (Lavabre-Bertrand et al, 1995) and who became transfusion independent, with a reduction in ineffective erythropoiesis and amelioration of SEM features on bone marrow aspirate. Intriguingly, while 4 weeks of treatment ameliorated the erythroid:myeloid ratio and the proportion of cells with SEM features fell from 57% to 15·6%, the percentage of erythroblasts exhibiting internuclear bridges and binuclearity was unchanged. Prolonged treatment with IFNα leads to a stable Hb with ongoing normalisation of SEM features (Heimpel et al, 2006). Bilirubin falls in parallel as the intramedullary haemolysis characteristic of ineffective erythropoiesis improves. Improvement in ineffective erythropoiesis in response to IFNα is supported by ferrokinetic studies (Lavabre-Bertrand et al, 1995). Responses to IFNα are rapid, occurring within 4 weeks, but are not universal (Marwaha et al, 2005), and some patients have required cessation of treatment due to side effects. These include gastrointestinal symptoms, flu-like symptoms and depression. Reported effective doses vary between 4 and 10 million units per week (Heimpel, 2004; Lavabre-Bertrand et al, 2004), but individual dose titration is required. Pegylated interferon at a dose of 30-50 µg/week has also been used effectively. The mechanism of action of IFNα is not understood. In vitro, IFN production by Epstein-Barr virus-transformed lymphocytes from CDA-I patients is reduced (Wickramasinghe et al, 1997), suggesting a deficit. It remains unclear whether CDA-I patients respond to IFNα due to subnormal production of IFN in vivo, or whether the molecular defect leading to CDA-I can be overcome by the over-expression of IFN-responsive genes. Further insights into the mechanism of action of IFNα may allow for the development of other, targeted therapies.

Erythropoietin (Epo)

Epo levels are mildly elevated in CDA-I patients, but remain inappropriately low for the degree of anaemia. Efforts to correct the anaemia using recombinant human Epo did not result in any rise in Hb, increase in reticulocyte count or fall in iron overload (Tamary et al, 1999). As such, Epo is not considered a treatment for CDA-I.

Splenectomy

Splenectomy in CDA-I has not been evaluated systematically, but evidence from small case series suggests exercising caution. In one series, six patients were splenectomised for severe anaemia, five of whom had a rise in Hb.
However, this benefit was offset by a rise in mortality, with 3/6 patients dying between the ages of 40 and 60 years, from pulmonary hypertension (n = 1) and sepsis (n = 3). In another series, 7/21 patients underwent splenectomy with no response in Hb (Heimpel et al, 2006). Recent European Haematology Association recommendations highlight the high rate of complications in CDA-I and suggest splenectomy should be reserved for patients with painful splenomegaly and/or significant thrombocytopenia/leucopenia (Iolascon et al, 2017).

Management of iron overload

As detailed above, the aetiology of iron overload in non-transfused CDA-I patients is directly related to ineffective erythropoiesis. CDA-I patients have unrecordable levels of hepcidin, leading to unopposed gastrointestinal iron absorption and deposition in target organs (Tamary et al, 2008; Kawabata et al, 2012). Suppression of hepcidin is hypothesised to result from excess erythroferrone production by the greatly expanded pool of erythroblasts. This mechanism has been successfully demonstrated in thalassaemia (Kautz et al, 2015) and CDA-II. Serum growth differentiation factor 15 (GDF15) levels (a marker of ineffective erythropoiesis) were found to be elevated in CDA-I patients, with a correlation between GDF15 and ferritin and an anti-correlation with serum hepcidin levels (Tamary et al, 2008). There is no clinical consensus on the frequency of monitoring for tissue iron overload. In our centre we perform yearly T2* MRI in patients from the age of 10 years, although others investigate every 5 years once serum ferritin exceeds 600 µg/l from the age of 20 years. The management of iron overload in CDA-I patients should be the same as for thalassaemia patients, namely chelators, be they sub-cutaneous (desferrioxamine) or oral (deferiprone, deferasirox). However, a unique approach in these patients is to titrate the dose of IFNα such that the patient's Hb rises above that needed to avert the symptoms of anaemia, so that regular venesections can be carried out. This has proven a very useful technique in some CDA-I patients in clinical practice in several UK centres of which the authors are aware.

Management of extramedullary haematopoiesis

Extramedullary haematopoiesis (EMH), ranging in severity from minor to bulky, is a recognised complication of CDA-I, although the exact prevalence is unclear (Heimpel et al, 2006). Management is in line with that of EMH in thalassaemia, with therapeutic options including surgical debulking, low dose irradiation and commencing regular transfusions to suppress EMH.

Management of osteoporosis

Osteoporosis is present in 89% of CDA-I cases. The aetiology is probably multifactorial, due to marrow expansion, diabetes and hypothyroidism, parathyroid gland dysfunction and the toxic effects of iron and chelators on osteoblasts (Voskaridou & Terpos, 2008). Active management entails treatment of calcium and vitamin D deficiency, bone densitometry scanning and regular review by bone specialists.

Shalev et al (2004) reviewed the outcomes of 28 spontaneous pregnancies in 18 women over a 15-year period in a Bedouin tribe. The complication rate was high (64%) and included one first-trimester spontaneous abortion, one stillbirth and 42% low birth weight infants. The caesarean section rate was statistically significantly higher than in a control group of Bedouin women (35·7% vs. 11%), for reasons including fetal distress and pre-eclampsia.
Not infrequently, previously transfusion-independent women with CDA-I develop a transfusion requirement during pregnancy (Roy & Pavord, 2018).

Management of endocrinopathies

Endocrinopathies have been described in 10-40% of CDA-I patients and include diabetes mellitus and hypothyroidism (Heimpel et al, 2006; Shalev et al, 2017), as well as pituitary failure leading to growth retardation (Facon et al, 1990). These are thought to be secondary complications of iron overload and poor compliance with chelation. Their management requires close collaboration with endocrinologists as well as psychologists to aid medication compliance.

Bone marrow transplantation

Paediatric bone marrow transplants have been carried out in a few patients with CDA-I. In a small series, three children underwent matched sibling allografts. All had severe disease diagnosed before 6 months of age, had hepatosplenomegaly, and two required chelation. Conditioning was with cyclophosphamide 50 mg/kg/day for 4 days, busulfan 4 mg/kg/day for 4 days and antithymocyte globulin 30 mg/kg for 4 doses pre-stem cell transplantation. All three engrafted and became transfusion independent (Ayas et al, 2002). One of the barriers to transplantation is the poor prognosis conferred by pre-transplant iron overload, yet the most severe patients, who would most benefit from transplant, are the ones whose response to chelation is limited (Buchbinder et al, 2012).

Pulmonary hypertension

CDA-I may present with pulmonary hypertension in the neonatal period, in association with other congenital anomalies (El-Sheikh et al, 2014; Landau et al, 2015). Treatment includes inhaled nitric oxide and high frequency oscillation ventilation. While pulmonary hypertension may be a late complication of CDA-I, the extent of this is unknown. There are currently no guidelines on whether and how pulmonary hypertension should be assessed and treated in CDA-I patients. In patients with sickle cell disease, this is screened for by tricuspid jet velocity, followed by right heart catheterisation in patients with a tricuspid jet velocity >2·5 m/s. A recent meta-analysis (Wang et al, 2018) compared different types of treatment for all types of idiopathic pulmonary hypertension, including endothelin receptor antagonists, phosphodiesterase type-5 inhibitors, prostaglandin I2, soluble guanylate cyclase stimulators and selective non-prostanoid prostacyclin receptor agonists, alone or in combination. Results differed depending on the outcome used to measure response, but the best responses appeared to be for vardenafil (taken orally) and iloprost + bosentan (inhaled). Whether these would be the best treatment for CDA-I patients with pulmonary hypertension is not known.

Potential for gene editing

Gene editing could provide hope for a cure. In CDA-I, gene editing would require homology-directed repair to integrate a donor template, so each targeted mutation would require individually designed editing, which remains currently out of reach. However, in the longer term, elucidation of the pathway affected in CDA-I may identify a therapeutic target gene, which would allow a universal therapy to be developed.

Pathogenesis

Rare diseases (defined as those affecting <1:2000 of the population) have been estimated to number between 6000 and 8000, collectively affecting some 30 million European Union citizens (http://www.eurordis.org/).
However, the study of rare diseases has an impact reaching far beyond affected individuals alone: for example, research into the molecular basis of Fanconi anaemia, a congenital bone marrow failure syndrome affecting ~1000 individuals worldwide, has provided critical insights into the link between genomic instability and malignancy, has significantly advanced our understanding of DNA repair mechanisms (Schindler & Hoehn, 2007) and has led directly to the development of therapeutic agents for patients with BRCA1/2 mutations (Fong et al, 2009). CDA-I is an example of a rare disease with the potential to inform about general cellular processes. Codanin-1 (CDAN1) and C15orf41 expression levels are extremely low in all cell types, yet both genes are widely expressed, and loss of either protein is incompatible with life. Both proteins are likely to play a critical role in DNA repair and/or chromatin assembly following DNA replication, and understanding their function will elucidate universal cellular processes. A number of fundamental questions remain unanswered about the biology and pathology of CDA-I (Fig 5). The main hurdle to advancing our understanding of the function of the CDA-I proteins is the lack of molecular reagents, including antibodies, appropriate cell lines and access to primary material, although attempts have been made to address this by generating erythroid cell lines with engineered tags at the endogenous loci (Moir-Meyer et al, 2018).

Gene function

Much of our knowledge about Codanin-1 and C15orf41 derives from studies in osteosarcoma (U-2-OS) cells and cervical cancer (HeLa) cells, both of which are cytogenetically abnormal cell lines that may not reflect biology in primary erythroid cells. For example, Codanin-1 knock-down in U-2-OS cells results in a faster cell cycle (Ask et al, 2012), whereas deletion of the endogenous Cdan1 in mice results in early embryonic lethality (Renella et al, 2011). Both CDA-I proteins are widely expressed and yet, when mutated, affect only the erythroid lineage, suggesting the pathological mechanism must be investigated in this cell type. However, insights into the potential functions of these proteins gained from studies in non-erythroid cells will be presented.

Codanin-1

Bi-allelic mutations of CDAN1 account for ~80% of CDA-I cases. The gene comprises 28 exons and encodes the protein Codanin-1, which is relatively large (~134 kDa) and highly evolutionarily conserved in fish, frogs and flies, with no human paralogues and no apparent homologue in worms and yeast (Dgany et al, 2002). The Drosophila homologue, discs lost (Dlt), is required for cell survival and cell-cycle progression (Pielage et al, 2003). There are no functionally conserved domains in Codanin-1 to facilitate functional predictions; however, a putative peptide binding site has been identified through which Codanin-1 may interact with the well-described histone chaperone Asf1 (Ask et al, 2012). Regulation of CDAN1 expression appears to depend on the cell cycle, as the promoter contains several binding sites for the cell-cycle regulated transcription factor E2F, which increases expression of Codanin-1 in S-phase in HeLa cells (Noy-Lotan et al, 2009).
Fig 5. Summary of current knowledge of the pathophysiology of CDA-I (black type) and key questions (blue type). Key questions highlighted in the figure include whether mature red blood cells are descended only from "normal" intermediate erythroblasts, the unknown heterochromatin composition of CDA-I erythroblasts, and why erythroblasts are specifically affected by mutations in CDAN1 and C15orf41, which are widely expressed. The top panel focuses on different sub-cellular compartments. While spongy heterochromatin is the pathognomonic feature of CDA-I, the composition of the electron-lucent areas is unknown; whether they are true euchromatin or abnormally packaged heterochromatin would indicate the function(s) of the key proteins. The known interactions between Codanin-1, C15orf41 and the histone chaperone ASF1 are shown, and the abnormal shuttling of these proteins into the nucleus in the context of mutated CDAN1 in non-erythroid cells suggests a possible problem in the delivery of histones during the rechromatinisation of replicating DNA. Because of its predicted role as a nuclease, C15orf41 may play a role in clearing blocks to replication fork progression (such as interstrand cross-links) or replication intermediaries. Determining whether interferon alpha acts at this level would narrow down the key cellular processes in which Codanin-1/C15orf41 are involved. The middle left panel suggests the possibility that Codanin-1 has a direct function in the developing limb bud by affecting the coordinated interaction between the signalling centres. The middle right panel shows the chromosomal location of CDAN1 and C15orf41; the existence of further loci is strongly suggested by the lack of molecular diagnoses in 10% of EM-proven CDA-I cases. The bottom panel illustrates erythroid differentiation and the conundrums of the predominantly erythroid abnormalities in CDA-I despite the broad expression of CDAN1 and C15orf41, and the fact that only a subset of erythroid progenitors are morphologically affected in the bone marrow. AER, apical ectodermal ridge; Stem cell, haematopoietic stem cell; BFUe, burst forming unit (erythroid); CFUe, colony forming unit (erythroid); ProE, pro-erythroblast; ZPA, zone of polarising activity.

Codanin-1 is reported to be enriched in the nucleus in the K562 erythroleukaemia line, U-2-OS cells and HeLa cells. Codanin-1 has been found to associate with DNA during interphase in HeLa cells and to be excluded from mitotic condensing chromosomes (Noy-Lotan et al, 2009). However, other reports suggest that Codanin-1 is mainly localised to the cytoplasm in U-2-OS cells (Ask et al, 2012). This discrepancy needs to be resolved by determining the localisation in primary erythroid cells using independent antibodies when the protein is expressed at native levels.

C15orf41

Bi-allelic mutations in C15orf41 cause ~10% of CDA-I cases (Babbs et al, 2013). Similar to Codanin-1, the C15orf41 protein is widely conserved, with orthologues broadly distributed in eukaryotes and in members of the archaea and viruses, indicating a highly conserved function. C15orf41 is present in all species where CDAN1 is found, and in none where it is not, suggesting the proteins function in concert. Additionally, the pathognomonic CDA-I heterochromatin defects that arise when either gene is mutated strongly suggest a common pathway, although a direct interaction between the two proteins has not been shown (Fig 5). Sequence conservation shows that the C15orf41 protein belongs to the PD-(D/E)XK family of restriction endonucleases, a diverse group of phosphodiesterases involved in genome maintenance processes, such as DNA damage repair, Holliday junction resolution and RNA processing (Laganeckas et al, 2011). Notable members of this family include the DNA repair nucleases Mus81 and XPF (ERCC4/FANCQ), which play key roles in DNA lesion resolution and maintenance of genome stability (Steczkiewicz et al, 2012).
This points to a defect in DNA repair in CDA-I; however, the nature of the lesions that result from impaired C15orf41 function remains unknown.

Role in chromatin assembly

C15orf41 and Codanin-1 both interact with the histone chaperone Asf1b (anti-silencing factor 1b) (Ewing et al, 2007; Ask et al, 2012). Asf1 is essential for chromatin assembly in human cells (Groth et al, 2005, 2007), playing a role in donating new histones to chromatin assembly factor 1 (CAF1) (Mello et al, 2002) for incorporation into nucleosomes following DNA replication. Asf1 binds histone H3-H4 heterodimers in the cytoplasm and chaperones them into the nucleus, where they are transferred to downstream chromatin assembly factors (Campos et al, 2010). Codanin-1 sequesters Asf1 in the cytoplasm, thereby negatively regulating the supply of histone-bound Asf1 to the nucleus (Ask et al, 2012). Mutations in Codanin-1 that impair the interaction with Asf1 may allow unregulated Asf1 to access the nucleus, thereby disrupting the fine-tuned delivery of histones known to be critical in correctly rechromatinising newly synthesised DNA. It remains to be shown to what extent abnormal histone delivery at the replication fork leads to the specific abnormalities in chromatin and heterochromatin seen in CDA-I erythroblasts.

Lineage specificity

CDAN1 and C15orf41 are ubiquitously expressed, albeit at a relatively low level in most tissues, and no individual harbouring two loss-of-function alleles of either gene has been identified. Additionally, mouse embryos homozygous for null Cdan1 alleles die prior to implantation, suggesting that Codanin-1 is essential prior to the onset of erythropoiesis (Renella et al, 2011). Given that Codanin-1 and C15orf41 are highly conserved, ubiquitously expressed and appear essential, it is of great interest that the abnormalities in CDA-I are restricted to the erythroid lineage, suggesting that erythroblasts have a specific requirement for Codanin-1 and C15orf41. One possibility may be that erythroid progenitors have a uniquely fast cell cycle, although CDA patients do not manifest abnormalities of other tissues containing fast-dividing cell types, such as gut epithelium or hair follicles. It has been reported that, in mice, a particularly rapid cell cycle is required at the start of terminal erythroid differentiation for erythroid lineage commitment, through a mechanism of passive genome demethylation causing the PU.1 (also termed SPI1) switch and lineage commitment (Pop et al, 2010; Shearstone et al, 2011). However, it remains to be shown whether a similar phenomenon exists in human erythroid differentiation and how it might be impacted by impaired chromatin assembly. Another hypothesis concerns nuclear extrusion in erythroblasts, which requires the eviction of histones such as H3 and H4 (Zhao et al, 2016); C15orf41 and Codanin-1 may play a role in this process. It may also be of significance that the proportion of erythroblasts displaying chromatin abnormalities varies from patient to patient and that the remaining erythroblasts appear to undergo normal terminal maturation, suggesting a threshold effect. Whether the non-haematological manifestations of CDA-I reflect severe anaemia in utero or are directly due to the effects of a mutated or reduced amount of Codanin-1, as has been previously suggested (Goede et al, 2006), is difficult to ascertain. Certainly, some features, such as diabetes and growth retardation, are more readily ascribed to the iron overload that accompanies CDA-I.
However, the ubiquitous expression of Codanin-1 and C15orf41 in all tissue types suggests a direct effect in tissues beyond the erythroid lineage. Compromised cell division may affect the rapidly dividing cells of the progress zone and the apical ectodermal ridge in developing limb buds (Tickle, 2015). Limb abnormalities in CDA-I patients are usually asymmetrical reductions, suggesting that a defined signalling pathway is not uniformly affected. However, compromised cell division is likely to be stochastic with a threshold effect, as seen in erythroblasts, and malformations would therefore not be expected to be uniform or bilateral. In addition, if the skeletal defects were due to tissue hypoxia, they could be expected to mirror those seen in Bart's Hydrops Fetalis Syndrome, where the absence of HbF creates severe hypoxia. However, in that form of thalassaemia the non-haematological manifestations are more neurological and urogenital, with no descriptions of acral dysostosis (Songdej et al, 2017).

Erythroblast abnormalities

Characterisation of the cellular abnormalities in CDA-I erythroblasts could shed light on the function of the two genes involved. Analysis of cell cycle distribution in cultured erythroblasts showed an increase in cells in S-phase in CDA-I which, paradoxically, are not actively synthesising DNA when tested, suggesting an S-phase arrest in CDA-I (Wickramasinghe & Pippard, 1986). This points to a problem with DNA replication, but needs to be refined using a larger number of patients. Unrepaired DNA lesions act as physical barriers to replication fork progression, and nicks, gaps and stretches of ssDNA can be both sources and symptoms of replication stress (Zeman & Cimprich, 2014). Because C15orf41 is predicted to be an endonuclease, there may be more unrepaired lesions in CDA-I patients and therefore more stalled replication forks, underlying the proposed S-phase arrest. Alternatively, replication intermediaries that are usually cleared by C15orf41 may underlie some of the nuclear abnormalities seen in CDA-I erythroblasts. Given that these events are stochastic and would also be likely to be affected by genetic background, this may go some way towards explaining the variability between patients. It would be very informative to distinguish these possibilities (blocked replication vs. unresolved replication intermediates). Elucidating the nature of the electron-lucent areas in the heterochromatin seen by SEM (Fig 3) would shed light on the underlying pathology of CDA-I, especially as resolution of this abnormality is associated with improved Hb levels in patients treated with IFNα (Lavabre-Bertrand et al, 1995, 2004). The differentiating stain used in SEM is osmium tetroxide; however, why this differentially binds the heterochromatin in affected nuclei is unclear. It may be that the "holes" contain protein, lipids or improperly packaged heterochromatin, affecting its transcriptional status. Do mutations in CDAN1 and in C15orf41 represent different entities? It has been suggested that CDA-I arising from mutations of C15orf41 may be more severe. In our clinical and laboratory experience, at the time of writing there are insufficient data to draw this distinction with certainty. Given that CDA-I with its specific chromatin abnormalities arises from mutation of either gene, we propose that this disease simply be termed CDA-I until new insights on genotype/phenotype correlations are obtained.
Patient welfare

CDA-I remains a rare disorder and shares some of the hurdles and obstacles borne by patients with other rare conditions. Delays in diagnosis will hopefully be reduced by the implementation of gene panels as part of routine testing. Earlier diagnosis should allow treatment prior to the development of iron overload and associated organ damage. Because of the rarity of the condition, CDA-I patients should be reviewed at least annually in a centre with a specialist interest in rare anaemias and access to specialised monitoring and multidisciplinary meetings. Such clinicians belong to national and international networks of experts which collaborate to provide optimal care and access to research developments. EuroBloodNet, a European rare disease network, promotes trans-national working and sharing of expertise. Finally, patients with rare conditions benefit from the support offered by national patient networks. A recent James Lind Alliance Priority Setting Partnership identified the number one (of ten) research question important to these patients and their families as "Would a national formal network of clinicians with expertise and/or a national MDT (multidisciplinary team meeting) improve care for patients with rare inherited anaemias?" (http://www.jla.nihr.ac.uk/priority-setting-partnerships/rare-inherited-anaemias/top-10-priorities.htm). Therefore, collaboration between clinicians, patients and research partners is critical to guiding and securing optimal clinical care for CDA-I.
Infectious entry of equine herpesvirus-1 into host cells through different endocytic pathways

We investigated the mechanism by which equine herpesvirus-1 (EHV-1) enters primary cultured equine brain microvascular endothelial cells (EBMECs) and equine dermis (E. Derm) cells. EHV-1 colocalized with caveolin in EBMECs and the infection was greatly reduced by the expression of a dominant negative form of equine caveolin-1 (ecavY14F), suggesting that EHV-1 enters EBMECs via caveolar endocytosis. EHV-1 entry into E. Derm cells was significantly reduced by ATP depletion and treatments with lysosomotropic agents. Enveloped virions were detected from E. Derm cells by infectious virus recovery assay after viral internalization, suggesting that EHV-1 enters E. Derm cells via energy- and pH-dependent endocytosis. These results suggest that EHV-1 utilizes multiple endocytic pathways in different cell types to establish productive infection.

Introduction

Viruses deliver their genomes and accessory proteins into host cells in order to initiate their replication. Certain enveloped viruses, including retroviruses (Stein et al., 1987), enter cells through direct fusion of the virion envelope with the plasma membrane, a process that is followed by the release of the viral capsid or genome into the cytoplasm. Other enveloped viruses, such as influenza virus (Matlin et al., 1981) and Semliki Forest virus (Helenius et al., 1980), as well as most nonenveloped viruses, rely on the cellular endocytic machinery for their entry into host cells. Productive infection with alphaherpesviruses had been thought to be established only by direct fusion of the viral envelope with the plasma membrane, as demonstrated by electron microscopic analysis and the effects of treatment with neutralizing antibodies (Fuller and Spear, 1987; Fuller et al., 1989; Fuller and Lee, 1992). Agents that perturb endocytosis were found to have little or no effect on herpes simplex virus (HSV) infection in HEp-2 and Vero cells (Wittels and Spear, 1991). Furthermore, entry of HSV-1 via endocytic vesicles was shown to result in degradation of the virus particles (Campadelli-Fiume et al., 1988). However, it has recently become clear that HSV successfully infects HeLa cells, receptor-expressing CHO cells and C10 murine melanoma cells, as well as primary and transformed human epidermal keratinocytes, via endocytosis (Nicola et al., 2003; Nicola and Straus, 2004; Milne et al., 2005). The cellular and viral requirements for the endocytic entry of HSV into these cells have been characterized. In HeLa and receptor-expressing CHO cells, infectious entry of HSV requires trafficking of the virus to an acidic intracellular compartment, phosphatidylinositol 3-kinase activity, glycoprotein D (gD) receptors, as well as viral gB, gD, and gH-gL (Nicola et al., 2003; Nicola and Straus, 2004). The pathway into C10 murine melanoma cells is gD receptor dependent but independent of vesicles with a low pH (Milne et al., 2005). HSV enters primary and transformed human epidermal keratinocytes, an important target cell population in vivo, by a pH- and tyrosine phosphorylation-dependent mechanism. Equine herpesvirus-1 (EHV-1), an alphaherpesvirus of the family Herpesviridae, is distributed worldwide and causes rhinopneumonitis, abortion, and encephalomyelitis in horses (Storts and Montgomery, 2001). With the use of ultrastructural analysis, we have previously suggested that EHV-1 enters equine brain microvascular endothelial cells (EBMECs) via endocytosis (Hasebe et al., 2006).
Similar ultrastructural observations were described for EHV-1 endocytosis in mouse fibroblast L-M cells (Abodeely et al., 1970). Frampton et al. (2007) demonstrated that EHV-1 strain L11ΔgIΔgE, which lacks gI and gE, enters CHO-K1 cells by endocytosis, whereas its entry pathway into equine dermis (E. Derm) cells and rabbit kidney (RK13) cells is direct fusion of the viral envelope with the plasma membrane. van de Walle et al. (2008) reported that integrin on the surface of the host cells is involved in the endocytosis of EHV-1. It has remained unclear, however, whether other strains of EHV-1 utilize endocytosis to enter susceptible cells. Here, we have investigated the entry mechanism of EHV-1 into EBMECs and E. Derm cells. With the use of confocal immunofluorescence microscopy, we examined the localization both of EHV-1 and of the endocytic markers clathrin and caveolin during viral internalization. Moreover, we evaluated the involvement of caveolar endocytosis in EHV-1 entry with cells expressing a dominant negative form of caveolin-1. We also assessed the role of tyrosine kinase activity and of low pH in the endosomal compartment in EHV-1 entry with the use of pharmacological approaches. We also performed energy depletion experiments and an infectious virus recovery assay for direct indications of endocytosis. Our data identify caveolar endocytosis as an entry pathway for alphaherpesviruses. Moreover, our results demonstrate the existence of multiple endocytic pathways for EHV-1 entry.

Susceptibility of EBMECs and E. Derm cells to EHV-1 infection

EBMECs and E. Derm cells were examined for their ability to support EHV-1 replication. The kinetics of viral growth in E. Derm cells were similar to those in EBMECs (Fig. 1), and were typical of a fully productive infection (Hasebe et al., 2006). These results demonstrated that both EBMECs and E. Derm cells were susceptible to EHV-1 infection and that there was no significant difference in viral replication between EBMECs and E. Derm cells.

Ultrastructural analysis of the early stage of EHV-1 entry

We have previously suggested, using electron microscopy, that the entry of EHV-1 into EBMECs occurs via endocytosis (Hasebe et al., 2006). Here, we further examined the mode of EHV-1 entry into EBMECs and E. Derm cells using electron microscopy. At 10 min post infection (p.i.), enveloped virions were detected in noncoated vesicles within the cytoplasm of EBMECs (Fig. 2A; Hasebe et al., 2006) and E. Derm cells (Figs. 2B, C). These observations imply that EHV-1 enters E. Derm cells as well as EBMECs via endocytosis. We were unable to quantify the numbers of enveloped viral particles in the endosomes, because we could not capture enough particles having the complete recognizable structure (i.e. core, capsid and envelope) for quantification.

Localization of EHV-1 and clathrin during viral internalization

Clathrin-dependent endocytosis plays a major role in the entry of many viruses, having been classically described for Semliki Forest virus, vesicular stomatitis virus, and influenza virus (Helenius et al., 1980; Matlin et al., 1981, 1982). Although the vesicles containing the virions appeared not to possess clathrin coats by electron microscopy (Fig. 2; Hasebe et al., 2006), these observations did not exclude the possible involvement of clathrin-dependent endocytosis in EHV-1 entry. We therefore examined the EHV-1-infected cells by two-color immunofluorescence staining for EHV-1 and clathrin-heavy chain at 10 or 30 min p.i.
Both in infected EBMECs (Figs. 3A-F) and in E. Derm cells (Figs. 3G-L), EHV-1 immunoreactivity did not colocalize with clathrin at any time point. Lack of cross-reactivity of the anti-EHV-1 antibodies with the anti-clathrin antibody was confirmed by double staining of uninfected EBMECs and E. Derm cells (data not shown). The specificity of the mouse monoclonal antibody to clathrin-heavy chain in EBMECs and E. Derm cells was examined by Western blotting, in which the antibody detected a specific band at 180 kDa (data not shown).

Localization of EHV-1 and caveolae during viral internalization

Caveolar endocytosis has emerged as a route of entry for several viruses, including simian virus 40 (SV40) (Anderson et al., 1996), mouse polyomavirus (Richterová et al., 2001; Gilbert et al., 2003; Gilbert and Benjamin, 2004), echovirus (Marjomäki et al., 2002), human papillomavirus type 31 (Bousarghin et al., 2003; Smith et al., 2007), human polyomavirus BK (BKV) (Eash et al., 2004), and species C human adenovirus (Colin et al., 2005). We therefore examined the possible association of caveolae with EHV-1 during viral internalization. Two-color immunostaining revealed that EHV-1 gB and caveolin, the major component of caveolae, were colocalized in EBMECs at 10 (Figs. 4A-C) and 30 min p.i. (Figs. 4D-F). Three fields were chosen at random and more than 50 signals for EHV-1 immunoreactivity were counted. Approximately 40% of the EHV-1 signal in EBMECs was colocalized with caveolin at 10 min p.i.; the percentage of colocalization was reduced to 19% at 30 min p.i. Such colocalization was not detected in infected E. Derm cells at any time point (Figs. 4G-L). Lack of cross-reactivity of the anti-EHV-1 antibody with the anti-caveolin antibodies was confirmed by double staining of uninfected EBMECs and E. Derm cells (data not shown). The specificity of the rabbit polyclonal antibodies to caveolin in EBMECs and E. Derm cells was examined by Western blotting, in which the antibodies detected specific bands at 24 kDa for the alpha isoform and/or 21 kDa for the beta isoform (data not shown).

Effects of a dominant negative form of caveolin-1 on EHV-1 entry

Previous studies have reported that tyrosine phosphorylation of caveolin-1 at residue 14 mediates the release of caveolae from the plasma membrane and is an integral part of certain signaling pathways (Parton et al., 1994; Aoki et al., 1999; Orlichenko et al., 2006). To confirm the involvement of caveolar vesicles in EHV-1 entry into EBMECs, we constructed an equine caveolin-1 mutant. This mutant encodes a protein in which tyrosine 14 is mutated to phenylalanine (ecavY14F) and is known to act as a dominant negative inhibitor of caveolin-1 (Orlichenko et al., 2006). The ecavY14F construct and wild type equine caveolin-1 (WT ecav) were expressed in EBMECs and E. Derm cells using a lentivirus vector. Cells expressing either WT ecav or ecavY14F were infected with EHV-1 strain Ab4-GFP (Ab4-GFP). Ab4-GFP harbors a GFP expression cassette between gene 62 and gene 63 (Ibrahim et al., 2004); infected cells therefore show a GFP signal. Fewer EHV-1-infected EBMECs were observed among cells expressing ecavY14F than among cells expressing WT ecav (p < 0.01). However, there was no difference in the number of EHV-1-infected E. Derm cells expressing ecavY14F or WT ecav (Figs. 5A, B). Expression of WT ecav and ecavY14F was confirmed by Western blotting with rabbit anti-caveolin polyclonal antibodies (Fig. 5C).
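As an aside, the colocalization percentages quoted above reduce to a simple fraction of counted puncta. The Python sketch below illustrates the arithmetic only; the counts are hypothetical placeholders, not the study's raw data.

```python
def percent_colocalized(n_colocalized: int, n_total: int) -> float:
    """Percentage of counted EHV-1 signals that overlap a second marker
    (e.g. caveolin) in two-color immunofluorescence images."""
    if n_total <= 0:
        raise ValueError("at least one EHV-1 signal must be counted")
    return 100.0 * n_colocalized / n_total


# Hypothetical counts pooled over three random fields (>50 signals):
print(percent_colocalized(22, 55))  # 40.0, matching the ~40% reported at 10 min p.i.
```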
Effects of a tyrosine kinase inhibitor on EHV-1 entry

The effects of endocytosis inhibitors on EHV-1 infection were assessed by quantification of ICP0 RNA, the product of an early gene of EHV-1. The production of ICP0 RNA is indicative of successful entry of the viral genome into the nucleus and is maximal at 3 h p.i. in RK13 cells (Kimura et al., 2004). Given that the onset of EHV-1 DNA synthesis in RK13 and L-M cells was detected at 4 h p.i. (Caughman et al., 1985; O'Callaghan et al., 1968), the abundance of ICP0 RNA at 3 h p.i. is thought to reflect the number of virions that have infected the cells. ICP0 RNA could be detected in EHV-1-infected RK13 cells at a multiplicity of infection (m.o.i.) of 0.004 plaque forming units (p.f.u.) per cell (data not shown). Previous studies have suggested that cellular tyrosine kinase activity aids EHV-1 infection (Frampton et al., 2007). Cellular tyrosine kinase activity is important for receptor-mediated endocytosis (Greenberg et al., 1993; Lamaze et al., 1993; McPherson et al., 2001). Furthermore, tyrosine phosphorylation of caveolin-1 at residue 14 is important in the signaling pathways mediating release of caveolae from the plasma membrane, since caveolar fission is decreased by kinase inhibition (Parton et al., 1994; Aoki et al., 1999). We therefore examined the effects of genistein, a tyrosine kinase inhibitor (Akiyama et al., 1987), on the endocytosis of EHV-1. The amount of ICP0 RNA in EBMECs at 3 h p.i. was greatly reduced by treatment with genistein at a concentration of 50 μg/ml (Fig. 6A). In contrast, genistein had no effect on the abundance of ICP0 RNA in E. Derm cells. To eliminate the possibility that the results in E. Derm cells were due to inefficient uptake of genistein, we assessed the effect of genistein on the tyrosine phosphorylation of caveolin-1 in E. Derm cells (Fig. 6B). Tyrosine phosphorylation of caveolin-1 at residue 14 was diminished by treatment with genistein at 100 μg/ml, suggesting that the concentrations of genistein used in this study were effective in down-regulating the tyrosine phosphorylation of caveolin-1. Neither the morphology of the cells nor the level of expression of the cellular housekeeping gene for horse GAPDH was affected by genistein in either cell type at 50 μg/ml or 100 μg/ml (data not shown).

Effects of lysosomotropic agents on EHV-1 entry

Low endosomal pH is important for many viruses entering host cells either via clathrin-dependent endocytosis or via clathrin- and caveolae-independent pathways (Yoshimura and Ohnishi, 1984; Blumenthal et al., 1987; Nicola et al., 2003). Intracellular low pH is involved in BKV entry by caveolae-dependent endocytosis (Eash et al., 2004), although SV40 infection by caveolar endocytosis is pH independent (Ashok and Atwood, 2003). To determine whether an acidic compartment is required for EHV-1 infectivity, we examined the effects of lysosomotropic agents on EHV-1 ICP0 RNA production. Cells were treated with bafilomycin A1 or ammonium chloride to neutralize the pH of acidic organelles (Tsiang and Superti, 1984; van Weert et al., 1995; Dröse and Altendorf, 1997) and infected with EHV-1 in the continued presence of the reagent. The abundance of ICP0 RNA in E. Derm cells was reduced in a concentration-dependent manner by treatment with bafilomycin A1 (Fig. 7A) and ammonium chloride (Fig. 7B). On the other hand, EHV-1 entry into EBMECs was not inhibited by bafilomycin A1 (Fig. 7A) or by ammonium chloride at 10 mM (Fig. 7B).
The effect of ammonium chloride at 20 mM in EBMECs was not determined, because the expression of horse GAPDH was significantly reduced (p < 0.01; data not shown), suggesting that 20 mM ammonium chloride is toxic to EBMECs. A fluorescent pH indicator probe, LysoSensor™ Yellow/Blue DND-160, was used to confirm that the pH of the organelles was neutralized. This probe exhibits a pH-dependent increase in fluorescence intensity upon acidification when the cells are excited at 405 nm and the fluorescence is emitted at 490 nm. Untreated EBMECs and E. Derm cells exhibited punctate staining. The LysoSensor staining was diminished in EBMECs by treatment with 0.2 μM bafilomycin A1 and with 10 mM ammonium chloride (Figs. 7C, D). In E. Derm cells, LysoSensor signals were diminished by treatment with 1 μM bafilomycin A1 and 20 mM ammonium chloride (Figs. 7C, D). The concentrations of bafilomycin A1 and ammonium chloride used here appeared to be non-toxic, because neither the morphology of the cells nor the level of expression of GAPDH was affected in either cell type (data not shown).

Effects of ATP depletion on EHV-1 entry into E. Derm cells

The EM studies and the effects of lysosomotropic agents suggest the possibility that EHV-1 enters E. Derm cells via endocytosis. However, these data are insufficient to demonstrate that endocytosis is involved in EHV-1 entry into E. Derm cells. To assess this possibility, we examined the effect of ATP depletion on EHV-1 infection of E. Derm cells. ATP depletion is known to inhibit endocytosis, but has no effect on herpesvirus entry by direct fusion of viral envelopes with plasma membranes (Nicola et al., 2003). E. Derm cells were pretreated with glucose-free media containing 2-deoxy-D-glucose for 1 h, and then infected with Ab4-GFP for 1 h in the continued presence of glucose-free media. After virus infection, the media were replaced with complete growth media. To confirm that ATP depletion affects the viral entry step, control cells were infected with EHV-1 in media including glucose for 1 h, and then cultured with glucose-free media for 2 h. The number of infected cells was evaluated by counting GFP-positive cells at 12 h p.i. ATP depletion during viral entry greatly reduced EHV-1 infection (p < 0.001), whereas ATP depletion after viral entry did not significantly reduce EHV-1 infection (Fig. 8). Frampton et al. (2007) performed infectious virus recovery assays to demonstrate that the EHV-1 L11ΔgIΔgE strain enters CHO-K1 cells via endocytosis. Since endocytosed virions retain their envelopes in the early phase of entry, their infectivity can be detected by titration on RK13 cells; in contrast, when virus penetrates by direct fusion of the envelope with the plasma membrane, infectious virions cannot be recovered, owing to the loss of the viral envelope. To confirm that EHV-1 enters E. Derm cells via endocytosis, we performed an infectious virus recovery assay. After incubation at 4°C for 5 min, the cells were infected with EHV-1 at an m.o.i. of 10 p.f.u. per cell for 2 h at 4°C. The temperature was then shifted to 37°C to allow virus internalization. At 0, 7.5, 15, 30 and 45 min after the temperature shift, the viruses on the cell surface were inactivated by washing with acidic buffer. The internalized infectious viruses were titrated on RK13 cells. As a control, to eliminate the possibility of detecting virions remaining on the cell surface, we used NIH3T3 cells, which appear to be resistant to EHV-1 entry, because we could not detect viral RNA at 12 h p.i.
by RT-PCR (data not shown). At 0 and 7.5 min, no infectious virus was detected from E. Derm or NIH3T3 cells. At 15 min, virus was recovered from E. Derm cells. The virus titer from E. Derm cells reached a peak at 30 min and declined slightly at 45 min. In contrast, no virus was recovered from NIH3T3 cells at any time point (Fig. 9).

Discussion

The colocalization of EHV-1 with caveolin at the early stage of infection and the significant effect of a dominant negative form of caveolin-1 on EHV-1 infection suggest that the virus enters EBMECs via caveolar endocytosis. The results of double immunolabeling indicated that clathrin-dependent endocytosis plays a relatively minor role in EHV-1 entry into EBMECs. As far as we are aware, our study is the first to demonstrate alphaherpesvirus entry into cells via caveolar endocytosis. The production of EHV-1 ICP0 RNA in EBMECs was blocked by the tyrosine kinase inhibitor genistein, indicating a requirement for tyrosine phosphorylation in the entry of EHV-1 into these cells, although we cannot exclude the possibility that genistein might affect ICP0 gene expression after viral entry. Tyrosine phosphorylation initiates signal transduction events that lead to receptor-mediated endocytosis (McPherson et al., 2001). In caveolar endocytosis, tyrosine kinase activity is required for phosphorylation of caveolin at residue 14, which induces caveolar vesiculation and enclosure of ligands within caveolae (Aoki et al., 1999; Chen and Norkin, 1999). Our results indicate that EHV-1 may induce caveolin phosphorylation, which activates the subsequent signal transduction. Caveolae have traditionally been described as smooth invaginations of the plasma membrane with a diameter of 50 to 80 nm (Palade, 1953; Yamada, 1955). Most viruses that have been shown to enter host cells via caveolae are nonenveloped and therefore smaller than typical caveolar invaginations. However, recent studies have shown that this traditional description of caveolar morphology is inadequate, as caveolae with flat or tubular forms have also been detected (Anderson, 1998). Caveolae-dependent endocytosis has also been found to contribute to the entry of enveloped viruses, such as filoviruses and human coronavirus (Empig and Goldsmith, 2002; Nomura et al., 2004). Our data now provide support for the notion that caveolar vesicles can mediate the delivery of large enveloped viruses. Infection of endothelial cells in the horse central nervous system (CNS) is required for the establishment of EHV-1-induced encephalomyelitis, which is characterized by vasculitis, thrombosis, and secondary ischemia of neuronal tissue (Edington et al., 1986). We have previously proposed that EBMECs are an appropriate in vitro model for studies of the endotheliotropism of EHV-1 (Hasebe et al., 2006). Primary cultured brain microvascular endothelial cells (BMECs) retain several characteristics of CNS endothelial cells in vivo (Joó, 1996). Infection of human BMECs with Escherichia coli K1 results in the formation of abundant caveolae that mediate bacterial uptake (Sukumaran et al., 2002). Immunohistochemical studies of normal brain tissue have also shown the caveolar compartment to be pronounced in endothelial cells, suggesting an important physiological role for caveolae-mediated endocytosis in vivo (Virgintino et al., 2002). It is therefore reasonable to propose that EHV-1 makes use of caveolar endocytosis to infect endothelial cells in the horse CNS.
Viral entry through caveolae has traditionally been considered to occur in a pH-neutral setting, bypassing the acidic endosome (Ashok and Atwood, 2003). Recently, caveolin-1-positive endosomes have been shown to deliver caveolae-internalized cargo to the Golgi complex (Nichols, 2003), and BKV enters Vero cells via this pathway (Eash et al., 2004). The effects of the lysosomotropic agents used in this study suggest that acidic intracellular organelles do not facilitate EHV-1 infection in EBMECs. Therefore, EHV-1 may be transported to pH-neutral organelles after internalization via caveolae. Immunofluorescence microscopic analysis and the lack of an effect of the dominant negative form of caveolin-1 on EHV-1 infection in E. Derm cells suggested that EHV-1 entry occurred by a caveolae-independent route. Such a route might be operative in vivo for EHV-1 infection of certain cell types, such as lymphocytes, that do not appear to form caveolae. The principal pathway of EHV-1 entry into E. Derm cells also appears to be clathrin independent, given that EHV-1 immunoreactivity did not colocalize with clathrin-heavy chain in these cells. Despite the lack of colocalization of EHV-1 with endocytic markers, our data from electron microscopy, lysosomotropic agent treatment, energy depletion and infectious virus recovery assays suggest that EHV-1 enters E. Derm cells via energy- and pH-dependent endocytosis. In contrast, Frampton et al. (2007) suggested that EHV-1 strain L11ΔgIΔgE entry into E. Derm cells occurs by direct fusion at the cell surface. The difference between Frampton's study and our own could be explained by the difference in virus strain. The strain L11ΔgIΔgE is a mutant which lacks gI and gE, resulting in attenuated virulence in mice and reduced viral growth on RK13 cells compared to the parental virus strain RacL11 (Frampton et al., 2002). It has been thought that gE of varicella zoster virus (VZV), another Varicellovirus, is associated with viral entry. Notably, Li et al. (2006) reported that gE interacts with insulin-degrading enzyme, which acts as a cellular receptor mediating cell-free VZV infection and cell-to-cell spread. Therefore, the lack of gE might influence the EHV-1 entry mechanism. The dependence of entry mechanisms on virus strain has been reported for human papillomavirus by Bousarghin et al. (2003), who demonstrated that, although they are very closely related viruses, human papillomavirus types 16, 31 and 58 use different pathways to enter cells. In conclusion, our results suggest that EHV-1 entry pathways are cell type dependent. Furthermore, they show that EHV-1 enters certain cell types via caveolar endocytosis, a pathway that has not previously been known to mediate the entry of alphaherpesviruses.

Cells and virus

The EBMECs were isolated from the brain of a 6-month-old horse as described previously (Hasebe et al., 2006) and were cultured in Medium 199 Earle's (Invitrogen, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS; Sigma, St. Louis, MO, USA) and both penicillin (100 U/ml) and streptomycin (100 μg/ml) (Invitrogen). The E. Derm cells were obtained from the American Type Culture Collection (Manassas, VA, USA) and cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 0.1 mM nonessential amino acids (Invitrogen) and 10% FBS. The RK13 cells were cultured in minimum essential medium (MEM) supplemented with 10% FBS. The 293T cells were cultured in DMEM supplemented with 10% FBS.
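Viral titers throughout the following sections are determined by plaque assay on RK13 cells and expressed in p.f.u. per ml. As a minimal sketch of the standard titer arithmetic (the plaque counts, dilution and inoculum volume below are hypothetical values, not data from this study):

```python
def pfu_per_ml(plaque_counts, dilution, inoculum_ml):
    """Standard plaque-assay titer: mean plaque count divided by
    (dilution factor x inoculum volume)."""
    mean_plaques = sum(plaque_counts) / len(plaque_counts)
    return mean_plaques / (dilution * inoculum_ml)

# Hypothetical duplicate wells: 42 and 38 plaques at a 10^-6 dilution
# with a 0.1 ml inoculum per well.
print(f"{pfu_per_ml([42, 38], dilution=1e-6, inoculum_ml=0.1):.2e} p.f.u./ml")
# -> 4.00e+08 p.f.u./ml
```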
The EHV-1 strain HH1 was isolated from an aborted equine fetus in Japan (Kawakami et al., 1970). An EHV-1 mutant, Ab4-GFP, was generously provided by Dr. H. Fukushi (Gifu University, Gifu, Japan). Ab4-GFP was constructed by inserting a GFP expression cassette into the intergenic region between ORF62 and ORF63 of the EHV-1 Ab4 strain (Ibrahim et al., 2004). Stock viruses were grown in confluent monolayers of RK13 cells. In preparation for RNA dot-blot analysis, viruses were treated with RNase (20 ng/ml) for 1 h at 37°C to remove contaminating RNA in the stock virus. Viral titer was determined by a plaque formation assay using RK13 cells.

Chemicals and antibodies

Genistein was obtained from Sigma, and bafilomycin A1 and ammonium chloride were from Wako (Osaka, Japan). Bafilomycin A1 and genistein were dissolved in dimethyl sulfoxide at 1 mM and 100 mg/ml, respectively. Ammonium chloride was dissolved in distilled water at 5 M. The final concentration of dimethyl sulfoxide in culture medium was ≤0.1%, and the same concentration was also added to control incubations. Rabbit polyclonal antibodies to EHV-1 were kindly provided by Dr. R. Kirisawa (Rakuno Gakuen University, Hokkaido, Japan), and a mouse monoclonal antibody to the EHV-1 gB protein was kindly provided by Dr. T. Matsumura (Japan Racing Association, Tochigi, Japan). Rabbit polyclonal antibodies to caveolin, a mouse monoclonal antibody to caveolin (pY14) and a mouse monoclonal antibody to clathrin-heavy chain were obtained from BD Transduction Laboratories (San Jose, CA, USA). Alexa Fluor 488-conjugated goat antibodies to mouse immunoglobulin G, Alexa Fluor 594-conjugated goat antibodies to rabbit immunoglobulin G, 4′,6-diamidino-2-phenylindole (DAPI) and LysoSensor™ Yellow/Blue DND-160 were from Molecular Probes (Leiden, The Netherlands). Horseradish peroxidase (HRP)-conjugated goat antibodies to mouse immunoglobulin and HRP-conjugated goat antibodies to rabbit immunoglobulin were obtained from Biosource (Camarillo, CA, USA).

Viral growth

Confluent monolayers of EBMECs or E. Derm cells seeded into 24-well plates were infected with EHV-1 at an m.o.i. of 5 p.f.u. per cell. The infected cells were incubated at 37°C for 1 h to allow attachment of the virus, then washed three times with phosphate-buffered saline (PBS), provided with fresh growth media, and incubated further at 37°C. At 0 h (immediately after seeding the virus), 8 h, 16 h or 24 h p.i., the supernatants were removed and the cells were collected. The cells were suspended in 1 ml of MEM and lysed by three cycles of freezing and thawing. Viral titer was determined by plaque formation on RK13 cells.

Electron microscopy

EBMECs or E. Derm cells cultured in six-well plates were exposed to EHV-1 at an m.o.i. of 150 p.f.u. per cell and incubated for 2 h at 4°C to allow attachment of the virus to the cell surface. After subsequent incubation for 10 min at 37°C, the cells were collected and fixed overnight at 4°C with 2.5% glutaraldehyde. The cells were subsequently exposed to 2% osmic acid, dehydrated, and embedded in Epon 812 (Shell Chemical Company, New York, NY, USA). Sections were cut at a thickness of 70-80 nm, mounted on coated grids, stained with uranyl acetate and lead citrate, and examined with an electron microscope (JEM-1210; Japan Electron Optics Laboratory, Tokyo, Japan).

Indirect immunofluorescence staining

Confluent monolayers of EBMECs or E. Derm cells in eight-well chamber slides (BD Falcon, San Jose, CA, USA) were infected with EHV-1 at an m.o.i. of 10 p.f.u.
per cell. After incubation at 37°C for 10 or 30 min, the cells were fixed with 3.7% paraformaldehyde for 5 min and permeabilized with 0.1% Triton X-100 for 5 min. The cells were washed with PBS and incubated for 1 h at room temperature first with 2% bovine serum albumin (Sigma) and then with primary antibodies. For simultaneous detection of EHV-1 and clathrin, cells were stained with rabbit anti-EHV-1 polyclonal antibodies and mouse anti-clathrin-heavy chain monoclonal antibody. For simultaneous detection of EHV-1 and caveolae, cells were stained with mouse anti-EHV-1 gB monoclonal antibody and rabbit anti-caveolin polyclonal antibodies. The cells were then incubated for 1 h at room temperature with secondary antibodies, mounted with fluorescence mounting medium (Dako Cytomation, Carpinteria, CA, USA), and examined with a laser-scanning confocal microscope (Olympus, Tokyo, Japan). For all primary antibodies, control images were evaluated to ensure nonoverlapping binding of secondary antibodies and specific detection for each excitation channel. The images were processed with the FV10-ASV 1.4 Viewer (Olympus). Coincidence of the immunoreactivity between the rabbit anti-EHV-1 polyclonal antibodies and the mouse anti-EHV-1 gB monoclonal antibody was confirmed (data not shown).

Expression of wild-type and mutant forms of caveolin-1

The EBMECs and E. Derm cells were infected with lentiviral vectors at an m.o.i. of 0.01 infectious unit per cell and incubated for 24 h. The cells were then infected with EHV-1 at an m.o.i. of 1 p.f.u. per cell. After 16 h, the cells were fixed with 3.7% paraformaldehyde, permeabilized with 0.1% Triton X-100 and counterstained with DAPI, a nuclear stain. The GFP and DAPI signals were evaluated with fluorescence microscopy (Olympus). The number of GFP-expressing cells and the number of nuclei were counted with ImageJ (NIH, Bethesda, MD, USA). The relative proportion of EHV-1-infected cells was calculated by dividing the number of GFP-positive cells by the number of nuclei. The number of GFP-positive cells in wild-type ecav-expressing EBMECs or E. Derm cells was defined as 100% (see the sketch of this calculation below).

Antisense RNA probes were prepared with a digoxigenin-based RNA labeling kit (SP6/T7; Roche Diagnostics). Plasmids containing cloned cDNA were linearized with NotI for synthesis of RNA in the presence of digoxigenin-11-UTP. The labeled probes generated from 1 μg of plasmid DNA were precipitated with ethanol, dissolved in 50 μl of RNase-free water, and stored at −80°C.

Dot-blot analysis

Confluent monolayers of EBMECs or E. Derm cells in six-well plates were treated with inhibitor at 37°C for 1 h and then infected with RNase-treated EHV-1 at an m.o.i. of 5 p.f.u. per cell for 1 h at 37°C in the continued presence of the inhibitor. After an additional 2 h incubation in the presence of the inhibitor, total RNA was extracted using Trizol reagent (Invitrogen), treated with DNase with the use of a kit (Ambion, Austin, TX, USA), and diluted to a concentration of 2 mg/ml. Hybridization was performed as previously described (Kimura et al., 2004). In brief, 1 μl of each sample was spotted onto a dry positively charged nylon membrane (Roche) and allowed to dry in air. The RNA was fixed to the membrane with a UV cross-linker (XL-1000; Spectronics, Lincoln, NE) and baked for 30 min at 80°C. The membrane was then incubated for 3 h at 68°C in a solution containing 0.25 M sodium phosphate buffer (pH 7.2), 10% SDS, 1 mM EDTA, and 2% blocking reagent.
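The infection-rate normalization referenced above (GFP-positive cells divided by nuclei, with the wild-type ecav condition set to 100%) is simple enough to express in a few lines before continuing with the hybridization protocol. All counts and the 15% reference fraction below are hypothetical values, not data from the study.

```python
import numpy as np

def relative_infection(gfp_counts, nuclei_counts, reference_fraction):
    """Infected fraction = GFP-positive cells / nuclei, rescaled so the
    wild-type caveolin-1 (ecav) reference condition equals 100%."""
    fraction = np.asarray(gfp_counts, dtype=float) / np.asarray(nuclei_counts, dtype=float)
    return 100.0 * fraction / reference_fraction

# Hypothetical per-field counts: wild-type ecav wells gave 150 GFP+ cells
# per 1000 nuclei, i.e. a reference fraction of 0.15.
mutant_fields = relative_infection([60, 55], [980, 1020], reference_fraction=0.15)
print(mutant_fields.round(1))  # -> [40.8 35.9], i.e. ~40% of the wild-type level
```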
Hybridization was performed for 12 h at 68°C in the same solution containing the digoxigenin-labeled cRNA probe (20 ng/ml), after which the membrane was washed three times (each for 20 min) with 25 mM sodium phosphate buffer (pH 7.2) containing 10% SDS and 1 mM EDTA. Hybridization complexes were detected with alkaline phosphatase-conjugated antibodies to digoxigenin and disodium 3-(4-methoxyspiro{1,2-dioxetane-3,2′-(5′-chloro)tricyclo[3.3.1.1^{3,7}]decan}-4-yl) phenyl phosphate (CSPD) as the chemiluminescent substrate (Roche). Quantitative analysis of autoradiograms was performed with Scion Image software. The relative amount of EHV-1 ICP0 RNA was calculated by dividing the intensity of the signal for ICP0 RNA by that of the signal for GAPDH mRNA. The adjusted signal intensity for infected but mock-treated cells was defined as 100%.

Immunoprecipitation and Western blot analysis

Confluent monolayers of E. Derm cells in 60 mm dishes were treated with genistein at 37°C for 1 h, washed once with PBS and lysed in RIPA buffer with Complete protease inhibitor cocktail. The cell lysates were mixed at 4°C with rabbit polyclonal antibodies to caveolin for 1 h and collected on protein A-Sepharose beads (GE Healthcare Bio-Science Corp, NJ, USA). The immunoprecipitates were subjected to SDS-PAGE and Western blotting with a mouse monoclonal antibody to caveolin (pY14) (1:1000 dilution in 2% low-fat milk in TBS-T) or rabbit polyclonal antibodies to caveolin as primary antibodies, and HRP-conjugated goat antibodies to mouse immunoglobulin (1:5000 dilution in 2% low-fat milk in TBS-T) or HRP-conjugated goat antibodies to rabbit immunoglobulin as secondary antibodies.

Vital staining with pH indicator

Cells seeded on 35 mm glass-bottom dishes were incubated with or without bafilomycin A1 or ammonium chloride at 37°C for 1 h, and then exposed to 2 μM of a pH-sensitive fluorescent dye, LysoSensor Yellow/Blue DND-160, in pre-warmed growth medium. After incubation at 37°C for 5 min, the medium was replaced with fresh medium. The fluorescence was observed using a confocal laser-scanning microscope with excitation at 405 nm, and emission was measured at 480-510 nm.

ATP depletion

Cells were incubated in ATP depletion media composed of glucose-free, FBS-free DMEM (Invitrogen) with 10 mM 2-deoxyglucose (Sigma) for 1 h and infected with Ab4-GFP at an m.o.i. of 5 p.f.u. per cell for 1 h in ATP depletion media. After EHV-1 infection, the cells were treated with citrate buffer (pH 3.0) to inactivate the remaining viruses on the cell surface. The media were replaced with regular culture media and the cells were cultured at 37°C. For untreated controls, the cells were incubated with FBS-free DMEM for 1 h, infected with Ab4-GFP in FBS-free DMEM for 1 h, and the media were replaced with regular culture media. For samples depleted of ATP post entry, the cells were infected with Ab4-GFP in FBS-free DMEM. After viral infection for 1 h, the cells were incubated in ATP depletion media for 2 h and the media were replaced with regular culture media. At 12 h p.i., the cells were harvested, fixed with 4% paraformaldehyde, and GFP-positive cells were counted on a FACS Canto (BD Biosciences, San Jose, CA, USA).

Infectious virus recovery assay

The infectious virus recovery assay was performed as previously described by Frampton et al. (2007) with some modifications. E. Derm and NIH3T3 cells seeded in 24-well plates were washed with ice-cold DMEM supplemented with 25 mM HEPES and 1% FBS and incubated on ice for 5 min.
The cells were infected with the EHV-1 HH1 strain at an m.o.i. of 10 p.f.u. per cell for 2 h at 4°C. The media were then replaced with pre-warmed fresh DMEM containing 25 mM HEPES and 1% FBS at 37°C. At 0, 7.5, 15, 30 and 45 min after incubation at 37°C, the cells were washed with glycine buffer (pH 3.0) for 1 min at room temperature, washed with DMEM containing 25 mM HEPES, and harvested. The cells were freeze-thawed once and sonicated three times for 15 s each. Infectious virus was detected by titration on RK13 cells. Triplicate samples were measured for each time point.

Statistical analysis

Quantitative data are expressed as means ± SD and were compared with Student's t test. A P value of <0.05 was considered statistically significant.
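As a compact sketch of the quantitative treatment described above: the dot-blot signal is normalized as ICP0/GAPDH with the mock-treated control set to 100%, and group comparisons use a two-sample Student's t test at the 0.05 level. All numbers below are hypothetical, not data from the study.

```python
import numpy as np
from scipy import stats

def relative_icp0(icp0, gapdh, mock_icp0, mock_gapdh):
    """ICP0 signal normalized to GAPDH, mock-treated control = 100%."""
    return 100.0 * (icp0 / gapdh) / (mock_icp0 / mock_gapdh)

# Hypothetical autoradiogram intensities (arbitrary densitometry units).
print(relative_icp0(420.0, 980.0, 850.0, 1000.0))  # -> ~50.4%

# Hypothetical triplicate ICP0 levels (% of mock) with and without inhibitor.
mock_treated = np.array([100.0, 96.0, 104.0])
inhibitor = np.array([48.0, 55.0, 51.0])
print(f"inhibitor: {inhibitor.mean():.1f} +/- {inhibitor.std(ddof=1):.1f} %")

res = stats.ttest_ind(mock_treated, inhibitor)  # two-sample Student's t test
print(f"t = {res.statistic:.2f}, P = {res.pvalue:.4f}, "
      f"significant: {res.pvalue < 0.05}")
```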
2018-04-03T04:35:46.820Z
2009-08-31T00:00:00.000
{ "year": 2009, "sha1": "9871ea656f4892027673e91db816367f2c388dff", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.virol.2009.07.032", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "a948d3d1edcc5f8010e4787c340927a7e9f3982f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
204008414
pes2o/s2orc
v3-fos-license
Transition between dissipatively stabilized helical states

We analyze an $XXZ$ spin-1/2 chain which is driven dissipatively at its boundaries. The dissipative driving is modelled by Lindblad jump operators which act only on the two boundary spins. In the limit of large dissipation, we find that the boundary spins are pinned to a certain value and, at special values of the interaction anisotropy, the steady states are formed by a rank-2 mixture of helical states with opposite winding numbers. In contrast to previously stabilized topological states, these helical states are not protected by a gap in the spectrum of the Lindbladian. By changing the anisotropy, the transition between these steady states takes place via mixed states of higher rank. In particular, crossing the value of zero anisotropy, a totally mixed state is found as the steady state. The transition between the different winding numbers via mixed states can be seen in the light of the transitions between different topological states in dissipatively driven systems. The results are obtained by developing a perturbation theory in the inverse dissipative coupling strength and using exact diagonalization and matrix product state methods.

Over decades, dissipation has been considered as a destructive influence which destroys the coherence properties of quantum systems. Recently, this point of view has been revised, since tailored environments have been employed in order to dissipatively drive a quantum many-body system into a desired steady state, the so-called attractor state [1]. Even if an external perturbation is applied over a certain time window, the system flows back to the attractor state afterwards. Examples of many-body states that can be reached via an attractor dynamics of a tailored environment are Bose-Einstein condensates [2], number squeezed states [3], Tonks-like states [4], superconducting states [5,6], and, more recently, topologically interesting states [7][8][9][10][11]. These comprise Chern insulators [12] and the Hofstadter model of atoms in an optical cavity [9]. Topological states are characterized by the existence of invariants which can only change in steps by a global action on the system. A paradigmatic example is the use of the stepwise change of the electrical resistance in the quantum Hall effect in topological insulators, which is employed for the definition of the standard for the electrical resistance [13]. The classification of topological properties in noninteracting closed systems has attracted considerable attention [14][15][16]. In contrast, topological properties in interacting or open systems are much less explored. In open noninteracting systems [8] two important ingredients were identified for reaching stable topologically nontrivial states. The first one is the existence of a dissipative gap, i.e., a gap in the spectrum of the Lindbladian above the steady state. The second one can only be introduced in noninteracting systems and is the so-called purity gap. This gap measures the purity of the most strongly mixed mode of the bulk. Here, we go far beyond current studies and show how the intriguing interplay of interactions and a tailored dissipative coupling can give access to topologically interesting properties. To do this, we study by exact analytical and numerical methods the paradigmatic spin-1/2 $XXZ$ quantum spin chain with dissipative boundaries. Previous work has uncovered far-from-equilibrium steady states of a helical nature with remarkable transport properties [17][18][19][20].
In this Rapid Communication we focus on a specific configuration of this system for which the jump operators at the boundary sites are identical and thus lead to an additional reflection symmetry. Using a perturbative expansion in the limit of large dissipation, we find that rank-2 steady states, formed by helical states with opposite winding numbers, are dissipatively generated at certain discrete values of the anisotropy parameter due to the space reflection symmetry. The winding numbers have integer values and therefore, similar to topological invariants, can only change their values in integer steps. The helical steady states are not protected by a finite gap, which is in contrast to topological states in open systems found previously [8]. However, the dissipative attractor dynamics stabilizes these helical states. As one varies the interaction strength, a transition between two helical states occurs, which takes place via higher-rank mixtures of states to which several different winding numbers contribute. When the anisotropy changes sign, the steady state even transits via a completely mixed state. Our findings rely on both analytical perturbative methods, which are valid for any system size, and numerical methods for systems up to $N = 12$ sites.

We describe the $XXZ$ chain with a density operator $\rho$ by the Lindblad master equation
$$\partial_t \rho = -\frac{i}{\hbar}[H, \rho] + \mathcal{D}(\rho). \quad (1)$$
Below we shall set $\hbar = 1$. The first term on the right-hand side describes the unitary evolution due to the $XXZ$ Hamiltonian,
$$H = J \sum_{j=1}^{N-1} \left[ S^x_j S^x_{j+1} + S^y_j S^y_{j+1} + \Delta \left( S^z_j S^z_{j+1} - \tfrac{1}{4} I \right) \right].$$
Here, $S^\alpha_j = \sigma^\alpha_j/2$ are the spin-1/2 operators and $\sigma^\alpha_j$ the Pauli matrices acting on site $j$. The parameter $\Delta$ is the anisotropy, which determines the quantum phases that appear in an isolated system. The identity $I$ is added for convenience. For $|\Delta| < 1$ the ground state of the $XXZ$ Hamiltonian is a gapless Tomonaga-Luttinger liquid. For values $|\Delta| > 1$ a gapped phase occurs, which corresponds to a ferromagnetic or antiferromagnetic ground state, respectively. $N$ is the number of sites, and we assume in the following for convenience that $N$ is an even number. The second term describes the dissipative coupling to the environment in Lindblad form,
$$\mathcal{D}(\rho) = \Gamma \sum_{j \in \{1, N\}} \left( L_j \rho L_j^\dagger - \tfrac{1}{2} \left\{ L_j^\dagger L_j, \rho \right\} \right).$$
Here, $\Gamma$ is the effective dissipation strength, and $L_j$ are the jump operators which act only at the boundary sites $j = 1$ and $j = N$ and target the density matrix belonging to the eigenstate $|{\uparrow_x}\rangle$ of the spin operator in the $x$ direction, defined by $\sigma^x |{\uparrow_x}\rangle = |{\uparrow_x}\rangle$. Explicitly, $L_1 = S^y_1 + i S^z_1$ and $L_N = S^y_N + i S^z_N$. We can show that in this situation a unique steady state exists [20]. In the Zeno limit of large dissipative coupling $\Gamma \to \infty$, the boundary spins to lowest order are pinned in the steady state to the states defined by $\mathcal{D}_{1,N}[\,|{\uparrow_x}\rangle_{1,N}\langle{\uparrow_x}|_{1,N}\,] = 0$. The dissipation-free subspace of the system is thus the whole Hilbert space spanned by the bulk spins and fixed boundary spins $1, N$ which are collinear and oriented in the positive $x$ direction, i.e., $|{\uparrow_x}\rangle$. Previous studies [17,19,21] have found that for many choices of the boundary dissipation, a fine tuning of the anisotropy $\Delta^*_m = \cos[\varphi_m + \delta\varphi/(N-1)]$ with the angle $\varphi_m = 2\pi m/(N-1)$, with $m = -N/2, \dots, N/2$, generates a pure steady state which is a spin-helix state,
$$|m\rangle = \bigotimes_{j=1}^{N} \frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle_j + e^{i \varphi_m (j-1)} |{\downarrow}\rangle_j \right), \quad (4)$$
where $\delta\varphi$ is a twist angle between the targeted boundary polarizations. We use the superscript $*$ to denote the fine-tuned values and the subindex $m$ in order to distinguish the helicity of the arising state. Here, the state on each site is represented in the basis chosen along the $z$ direction, and the spin precesses in the $XY$ plane around the $z$ axis.
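For a concrete picture of the model, the following is a minimal QuTiP sketch that assembles the Hamiltonian and the two boundary jump operators and solves for the unique nonequilibrium steady state. The chain length, the $-1/4$ shift in the ZZ term and the parameter values are illustrative assumptions, not the authors' code; at $N = 6$ the density matrix is only $64 \times 64$, so a direct solver suffices.

```python
import numpy as np
from qutip import qeye, sigmax, sigmay, sigmaz, tensor, steadystate

N, J, Gamma = 6, 1.0, 250.0
Delta = np.cos(2 * np.pi / 5)   # Delta*_1 for N = 6, i.e. phi_m = 2*pi*m/(N-1)

def site(op, j):
    """Embed a single-site operator at site j of the N-site chain."""
    ops = [qeye(2)] * N
    ops[j] = op
    return tensor(ops)

Sx = [site(sigmax() / 2, j) for j in range(N)]
Sy = [site(sigmay() / 2, j) for j in range(N)]
Sz = [site(sigmaz() / 2, j) for j in range(N)]

# XXZ Hamiltonian; the -1/4 shift in the ZZ term is assumed to play the role
# of the "identity added for convenience" mentioned in the text.
H = sum(J * (Sx[j] * Sx[j + 1] + Sy[j] * Sy[j + 1]
             + Delta * (Sz[j] * Sz[j + 1] - 0.25)) for j in range(N - 1))

# Boundary jump operators L_1 = S^y_1 + i S^z_1 and L_N = S^y_N + i S^z_N;
# sqrt(Gamma) carries the dissipation rate of the Lindblad dissipator.
c_ops = [np.sqrt(Gamma) * (Sy[0] + 1j * Sz[0]),
         np.sqrt(Gamma) * (Sy[N - 1] + 1j * Sz[N - 1])]

rho_ss = steadystate(H, c_ops)              # unique steady state
print(rho_ss.tr(), (rho_ss * rho_ss).tr())  # trace 1 and purity tr(rho^2)
```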
However, this steady state becomes unstable if the spin states targeted at the boundaries become collinear, $\delta\varphi = 0$ and $m = 0$, as is the case in the chosen situation. We found similar results for $\delta\varphi = \pi$. In the situation where the spins are locked to the dissipation-free subspace, the system can be viewed as a spin chain on a ring, where the sites 1 and $N$ are glued to the same site. Within this configuration, important quantities are the winding numbers of the spin along the ring. They can be determined by a discrete Fourier transform, where $m = -(N-2)/2, \dots, (N-2)/2$ denotes the winding number around the $z$ axis and the amplitudes $w_m$ can be interpreted as the corresponding weights. Due to the symmetry of the considered system, the relation $w_m = w_{-m}$ holds. We note that in a finite system, states corresponding to different winding numbers can overlap. However, this overlap vanishes exponentially with system size. In the limit of infinite system size the states corresponding to different winding numbers become orthogonal, and the winding number corresponds to a topological invariant.

An intriguing behavior can be seen in the von Neumann entropy $S = -\sum_i p_i \log_2(p_i)$, where $p_i$ are the eigenvalues of the density matrix $\rho$. In Fig. 1 we show the dependence of the von Neumann entropy on the anisotropy for a small system ($N = 6$) and a strong amplitude of the dissipative driving, $\Gamma = 250 J$. For this small system, exact diagonalization is used to solve the quantum master equation (1). A drastic behavior in the von Neumann entropy can be seen at the values $\Delta^*_m = \cos\varphi_m$ with the angle $\varphi_m = 2\pi m/5$, with $m \in \{1, 2\}$. At $\Delta^*_m$ two amplitudes $w_{\pm m}$ of the winding numbers become dominant, whereas the other values become negligible. This signals that a helical state of rank 2 with two opposite winding numbers arises as the steady state. Below, after showing further numerical analyses, we will use perturbative arguments to analytically derive that the steady state in the Zeno limit is of the form
$$b\, |s\rangle\langle s| + (1-b)\, |a\rangle\langle a|, \quad (6)$$
with $|s\rangle$, $|a\rangle$ being orthogonal linear combinations of the spin-helix states $|{\pm m}\rangle|_{\rm bulk}$, restricted to sites $2, \dots, N-1$, with opposite chiralities, $|s\rangle = A_s(|m\rangle + |{-m}\rangle)|_{\rm bulk}$ and $|a\rangle = A_a(|m\rangle - |{-m}\rangle)|_{\rm bulk}$, with $b$, $A_s$, and $A_a$ weights. Additional particularities occur at $\Delta = 1$, where the entropy drops to zero, signaling a pure state which is a helical state corresponding to the winding number $m = 0$, and at $\Delta = 0$, where a totally mixed state appears. This result demonstrates that steady states with different winding numbers can be reached by a fine tuning of the anisotropy. For finite dissipation strength and fine-tuned anisotropies, we find numerically (not shown) that the steady state is approached as $\mathrm{tr}[\rho^2(\Gamma) - (\rho^{(0)})^2] \propto (J/\Gamma)^2$, where $\rho(\Gamma)$ denotes the steady state at a finite value of $\Gamma$. The states at $\Delta^*$ seem not to be protected by a gap in the spectrum of the Lindbladian, defined as the absolute value of the smallest nonvanishing real part of the eigenvalues of the Lindbladian, as can be seen from Fig. 2, where we show that the gap in the Zeno limit closes as $1/\Gamma$. This is in contrast to previous findings, where the topologically interesting states were protected by a gap [8]. However, the stability of the rank-2 helical state follows from the dissipative nature of the Lindbladian dynamics, the so-called attractor dynamics. Since the steady state is unique for the chosen dissipators, any initial quantum state is guaranteed to approach the targeted rank-2 state (6) asymptotically in time.
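The two diagnostics used here, the von Neumann entropy and the winding-number amplitudes, can be computed directly from a density matrix or spin profile. A minimal sketch follows; since the explicit formula for $w_m$ did not survive in the text, the Fourier transform below over the $N-1$ effective ring sites is an assumed illustrative form, not the paper's exact definition.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -sum_i p_i log2(p_i) over the eigenvalues of the density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                  # discard numerical zeros
    return float(-(p * np.log2(p)).sum())

def winding_amplitudes(s_plus_profile):
    """Assumed form: discrete Fourier transform of the transverse spin
    profile <S^+_j> around the effective ring of L = N - 1 sites."""
    s = np.asarray(s_plus_profile)
    L = len(s)
    sites = np.arange(L)
    ms = np.arange(-(L - 1) // 2, (L - 1) // 2 + 1)
    return {int(m): abs(np.exp(-2j * np.pi * m * sites / L) @ s) / L for m in ms}

# A perfect m = 1 helix on 5 ring sites puts all weight at m = +1.
phi = 2 * np.pi / 5
profile = 0.5 * np.exp(1j * phi * np.arange(5))
print(winding_amplitudes(profile))     # ~0.5 at m = 1, ~0 elsewhere

rho_mixed = np.eye(4) / 4              # totally mixed two-qubit state
print(von_neumann_entropy(rho_mixed))  # -> 2.0 bits, maximal entropy
```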
In particular, if a perturbation is applied over a certain time, the state relaxes back to the rank-2 helical state. Further, the transition from one helical state to the other goes via the intermediate values of the anisotropy. In Fig. 1, this transition proceeds via states which are composed of many different winding numbers and have a much larger von Neumann entropy. For the point which is close to $\Delta^*_{\pm 2}$, we have a relatively slow dependence, whereas the point corresponding to $\Delta^*_{\pm 1}$ has a very steep dependence. Let us note that the behavior around the special points $\Delta^*$ steepens with increasing system length. In order to verify that this is not just a particularity of the small system size, we used a purification implementation of the matrix product state (MPS) method for open quantum systems [22][23][24], as described in Ref. [25], to determine the steady states for larger systems. We have chosen to double the system size to $N = 12$. To obtain the steady state, we use the time-dependent MPS method based on a second-order Suzuki-Trotter decomposition with time step $\Delta t$ to compute the long-time evolution of an arbitrary state, which in this case is chosen to be the Néel state. To overcome the problem of slow relaxation during the attractor dynamics, we employ a gradual time evolution procedure. As we are only interested in the steady state and the exact dynamics is irrelevant, we first apply an evolution in a fast-relaxing parameter regime to prepare the initial state $\rho_{\rm ini}$ for the final evolution (see Supplemental Material [26] for details). This enables us to provide simulation results for different parameter ranges of the interaction anisotropy $\Delta$ and the dissipative coupling $\Gamma$. The simulation is based on an efficient compression scheme that is well controlled by observing the so-called truncation weight. We verified convergence in this parameter and confirmed that our main findings are not affected by the compression. The final time evolution was computed for a duration of $T = 1000/J$ using a maximal truncation error of $\epsilon = 10^{-12}$ and a time step $\Delta t = 0.1/J$. The steady-state expectation values of the required observables are extracted by calculating the average over the last 2000 time steps and are shown in Fig. 3. Also for these larger systems one can nicely see a behavior similar to that described for $N = 6$. As can be seen in Fig. 3(a), the behavior around the point $\Delta^*_{\pm 5}$ shows that only the winding numbers $m = \pm 5$ have an appreciable amplitude, and the amplitudes of the other winding numbers rise slowly in its neighborhood. This is compatible with the analytically expected rank-2 steady state composed of the two different winding numbers. The steepness of the rise of the amplitudes of additional winding numbers at the special points depends on the system size. In particular, with increasing system size the value of $\Gamma$ required in order to resolve the special point rises. This is accompanied by an exponential increase of the timescales, such that it becomes very difficult to resolve the steady state in the Zeno limit at $\Delta^*$ for very large system sizes. The approach to the Zeno limit can be clearly seen in the dependence on the value of $\Gamma$. One finds that the expectation value of the boundary spins collapses already for relatively low values of $\Gamma$ and becomes locked to the expected value of the dissipation-free subspace around the value of $\Gamma \approx 100 J$ (not shown).
This validates the interpretation that in the large-$\Gamma$ limit the system is close to a ring in which the winding numbers can be associated with topological invariants. Further, as shown in Fig. 3(b) for the value $\Delta^*_{\pm 5}$, the amplitudes of the winding numbers rapidly approach the expected values for the predicted helical state with increasing $\Gamma$, i.e., all amplitudes become negligible except for the amplitudes for $m = \pm 5$, which remain finite.

In the following we justify analytically the appearance of the steady state of rank 2 occurring in the Zeno limit at the points $\Delta^*$. To this end, we expand the density matrix of the steady state in orders of $1/\Gamma$ as $\rho(\Gamma) = \sum_{n=0}^{\infty} \rho^{(n)} \Gamma^{-n}$. Inserting this ansatz into the Lindblad equation, one can decompose the equation in different orders. The zeroth-order condition leads to the requirement that the density matrix of the boundary spins lies in the dissipation-free subspace, i.e., $\rho^{(0)} = |{\uparrow_x}\rangle\langle{\uparrow_x}|_1 \otimes R_0 \otimes |{\uparrow_x}\rangle\langle{\uparrow_x}|_N$, with $R_0$ acting on the bulk. In the first order of the expansion (see the Appendix), we obtain the condition
$$[H_{\rm eff}, R_0] = 0. \quad (7)$$
Here, $H_{\rm eff}$ acts in the Hilbert space of the internal bulk sites $2, \dots, N-1$ only. It is given by an $XXZ$ Hamiltonian with additional boundary fields (Eq. (8)). For anisotropies $\Delta^*_m$ the helix states $|{\pm m}\rangle$ (4), restricted to the internal sites 2 to $N-1$, are eigenstates of $H_{\rm eff}$ with eigenvalue 0, i.e., $H_{\rm eff} |{\pm m}\rangle|_{\rm bulk} = 0$. Thus, the condition (7) is fulfilled by the ansatz $R_0 = b\,|s\rangle\langle s| + (1-b)\,|a\rangle\langle a|$, which has rank 2. We would like to emphasize that the condition (7) derived perturbatively is very useful for locally acting dissipation and might prove to be useful for a variety of different setups. In order to find the weight $b$, we investigate the compatibility conditions arising in the second order in $1/\Gamma$. Among other conditions (see the Appendix) we obtain an expression for the weight $b$, in which the last equality holds for $m$ and $N-1$ coprime. The overlap $\eta$ vanishes exponentially with system size, and the predicted rank-2 steady state has contributions of the two helical states $|{\pm m}\rangle\langle{\pm m}|$. Further conditions (see the Appendix) need to be fulfilled by the steady state, so that the rank-2 state Eq. (6) is not necessarily the steady state. Considering our numerical findings (up to $N = 13$), we come to the conjecture that the state Eq. (6) is the true steady state at the fine-tuned anisotropy $\Delta^*_{\pm m}$ in the Zeno limit whenever $N-1$ and $m$ are coprime. If the coprime condition is violated, i.e., if the ratio $m/(N-1)$ can be simplified, then we find that the nonequilibrium steady state has higher rank $r > 2$. Details on the occurring states are presented in Ref. [27]. One very interesting open question which remains is what happens to these findings in the thermodynamic limit. In this limit the fine-tuned values of the anisotropy become dense and the states of different winding numbers become close. It would be interesting to see whether the rank-2 steady states remain stable solutions and how a crossing between the different states can take place.

To summarize, we have found that helical states can be the steady states of an $XXZ$ model of finite size which is coupled at its boundaries to dissipation. We see that in this case the helical states are not protected by gaps in the Lindblad spectrum and that the transition between helical states with different winding numbers goes via highly mixed states. This opens the question of whether other examples exist of topologically interesting states in dissipatively driven systems which are not protected by a gap in the Lindbladian.
A further interesting direction would be the investigation of these transitions between topological states in dissipative quantum systems using the quantum Fisher information [28].

Appendix

In the following we discuss how we can obtain the proposed rank-2 state in Eq. (6) of the main text from the order-by-order relations obtained by inserting the $1/\Gamma$ expansion into the Lindblad equation. We note that this perturbative argumentation has been used more commonly, and typically the derived equations can be solved using the so-called "adiabatic elimination technique" of virtual excitations [30][31][32]. Here, an additional challenge is the large dissipation-free Hilbert space and the complicated structure of the spectrum of the Lindbladian due to the locality of the dissipation. We circumvent this challenge by deriving additional conditions which are simpler to treat. We would like to point out that the derivation here is very general for the case of local dissipation. The zeroth-order Eq. (A1) only gives information at the boundary sites and is satisfied by the ansatz $\rho^{(0)} = |{\uparrow_x}\rangle\langle{\uparrow_x}|_1 \otimes R_0 \otimes |{\uparrow_x}\rangle\langle{\uparrow_x}|_N$. To obtain information about the bulk part $R_0$, we need to consider the higher-order relations. To obtain information from these, it is convenient to decompose the Hamiltonian as an operator acting in the tensor product space $\mathcal{H}_0 \otimes \mathcal{H}_1$, where $\mathcal{H}_0$ is the Hilbert space of the two boundary spins $1, N$, and $\mathcal{H}_1$ is the Hilbert space of the remaining bulk spins $2, \dots, N-1$. We introduce an orthonormal basis $e_0, e_1, e_2, e_3$ in $\mathcal{H}_0$, with $e_0$ the dissipation-free boundary state. The Hamiltonian with respect to this basis takes the block form $H = \sum_{k,l} |e_k\rangle\langle e_l| \otimes H_{k,l}$. One can show that the matrix elements between the zeroth and third state vanish, i.e., $H_{0,3} = H_{3,0} = 0$. We introduce $H_{0,0} \equiv H_{\rm eff}$, which is given by Eq. (8) in the main text. The commutator in Eq. (A2) for $k = 0$ can be rewritten using this decomposition. Using this representation and taking the trace over the boundary sites, the condition simplifies to $[H_{\rm eff}, R_0] = 0$, which is given in Eq. (7) in the main text. The condition can be fulfilled if we assume the form $R_0 = \sum_\alpha \nu_\alpha |\alpha\rangle\langle\alpha|$. Here, $|\alpha\rangle$ are eigenvectors of $H_{\rm eff}$ and $\nu_\alpha$ are real-valued, non-negative coefficients. They fulfill the condition $\sum_\alpha \nu_\alpha = 1$ to give $\mathrm{Tr}[\rho^{(0)}] = 1$. There exist some subtle issues connected to possible degeneracies of $H_{\rm eff}$. These in particular can lead to the existence of steady states with higher ranks, which goes beyond the scope of the current Rapid Communication [27]. Further, we can use the representation of the commutator in order to obtain information about $\rho^{(1)}$: we obtain an expression in which $M_1 \otimes |e_0\rangle\langle e_0|$ is an arbitrary element from the kernel of the dissipator $\mathcal{D}$, to be determined by higher orders of the recurrence relations. Inserting the above into Eq. (A3) for $k = 1$, and again using Eq. (A5), we obtain after some algebra a condition $Q$. Finally, noting $H_{0,k} = H^\dagger_{k,0}$ (see also Ref. [33] for details), and writing down the matrix elements $\langle\alpha|Q|\alpha\rangle = 0$, we obtain after some straightforward algebra, for any value of $\alpha$,
$$\sum_{\beta \neq \alpha} w_{\beta\alpha}\,\nu_\beta = \nu_\alpha \sum_{\beta \neq \alpha} w_{\alpha\beta}, \quad (A13)$$
$$w_{\alpha\beta} = |\langle\beta|H_{1,0}|\alpha\rangle|^2 + |\langle\beta|H_{2,0}|\alpha\rangle|^2.$$
In Eq. (A13) we recognize the steady-state equation of a Markov process with $w_{\alpha\beta}$ being the rate of the transition from the state $\alpha$ to the state $\beta$. The explicit form of $H_{1,0}$, $H_{2,0}$ can be calculated from Eq. (A6) (see, e.g., Ref. [17]); note that the index of the spin operators denotes the sites to which the operator is applied. The Perron-Frobenius theorem guarantees the existence of a unique solution of Eq. (A13) with non-negative entries, which sum up to 1. The quantities $\nu_\alpha$ thus have the double meaning of the eigenvalues of Eq.
(A9) in the original quantum Markov process and of steady-state probabilities of configurations in a classical Markov process with rates $w_{\alpha\beta}$ associated with it (see also Ref. [34]). Now, the rank-2 state assumption, in terms of the associated Markov process Eq. (A13), means that the two states $\alpha = 0, 1$ form a closed set, with weights $b$, $1-b$, which is a generalization of an absorbing state. The closed-set property is $w_{0,\beta} = w_{1,\beta} = 0$ for all $\beta > 1$. We have checked numerically that the closed-set property is satisfied for our setup for all $N \leq 13$ when $N-1$ is a prime number [35]. Thus, Eq. (A13) for $\alpha = 0, 1$ becomes a closed equation for $b$, i.e., $b\, w_{01} = (1-b)\, w_{10}$, with $w_{\alpha\beta} = |\langle\beta|H_{1,0}|\alpha\rangle|^2 + |\langle\beta|H_{2,0}|\alpha\rangle|^2$ as before, from which we obtain the weight $b$.
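The stationary distribution of the auxiliary classical Markov process can be obtained numerically as the kernel of its generator, which the Perron-Frobenius theorem guarantees to be unique here. The sketch below assumes the inflow-equals-outflow balance form of Eq. (A13) reconstructed above; the 3-state rate matrix is a toy example, not rates computed from $H_{1,0}$ and $H_{2,0}$.

```python
import numpy as np

def stationary_weights(w):
    """Stationary nu of a classical Markov process with rates w[a][b] (a -> b):
    solve sum_b w[b, a] nu[b] = nu[a] sum_b w[a, b] as the kernel of the
    generator M[a, b] = w[b, a] - delta_ab * sum_c w[a, c]."""
    W = np.array(w, dtype=float)
    np.fill_diagonal(W, 0.0)
    M = W.T - np.diag(W.sum(axis=1))
    _, _, vh = np.linalg.svd(M)
    nu = np.abs(vh[-1])               # kernel vector; unique by Perron-Frobenius
    return nu / nu.sum()

# Toy example with the closed-set property w[0, b] = w[1, b] = 0 for b > 1:
# states 0 and 1 absorb all the weight, nu[2] = 0, and the weight of state 0
# reduces to b = w[1][0] / (w[0][1] + w[1][0]).
w = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0]]
print(stationary_weights(w))          # -> [0.333..., 0.666..., 0.0]
```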
2019-10-10T12:01:29.000Z
2019-10-10T00:00:00.000
{ "year": 2019, "sha1": "855d476adc57eef45c61560c5408de6d6fbf803d", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.2.022007", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2d8248a865b05bd6a2b88a955f6863db0d1ea98b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221363778
pes2o/s2orc
v3-fos-license
Histo-Pathological Assessment of Zinc Oxide Eugenol Canal Sealer on Periodontal Tissues: Three Case Reports

Purpose: Sealers used in endodontic dental treatment may leak into the periradicular tissues, increasing the risk of retained particles in the oral mucosa, tissue irritation or discoloration, delayed healing, or interference with the erupting permanent teeth. Three clinical cases of primary tooth pulpectomy using Zinc Oxide Eugenol (ZOE) paste were observed, showing retained particles of filling cement in the oral mucosa during permanent dentition eruption. The aim of the study was to histologically analyze the possible causes of discoloration and to assess the effects of the sealer on cells by evaluating their ultrastructure. Materials and Methods: Three healthy children undergoing orthodontic therapy were enrolled in this study for the presence of gingival pigmentation. For all patients a biopsy of the discolored area was performed and histologically analyzed. Furthermore, sealer was added to cultures of SAOS-2 cells to evaluate cytotoxic effects by Alamar blue tests and ultrastructure evaluation using Transmission Electron Microscopy (TEM). Results: Black extracellular granules were appreciable in all Hematoxylin Eosin-stained sections. Deposition of pigment was observed particularly in the blood vessel walls, in connective tissue and in the intercellular ground substance. In cell culture, the sealer reduced viability by about 43% after 24 h, 17% after 48 h and 12% after 72 h compared with untreated cells. Furthermore, the sealer affected cell morphology, creating vacuoles in the cytoplasm. Conclusions: The sealer proved to be mildly toxic to the cells. Retention granules of bismuth in the oral mucosa after root canal treatment of deciduous teeth and the subsequent mucosal discoloration analyzed in this study should alert pediatric dentists to use filling materials very carefully and to check radicular and filling paste resorption in order to prevent prolonged retention of the overfilled material.

Introduction

Endodontic dental treatment creates a unique and complex biomaterial-tissue interface at the tooth root apex. The canal filling material and sealer are in intimate, long-term contact with multiple cell types in the periradicular tissues [1]. Sealers may leak into the periradicular tissues, increasing the risk of tissue irritation or delayed healing. These risks increase if the sealers have inappropriate biological properties or do not effectively seal to prevent the ingress of bacteria [2][3][4]. In fact, although sealers should be confined within the root canal, their inadvertent extrusion into the periradicular tissues may occur [5], especially when a periapical lesion alters the anatomy of the apex. In many cases, the contact area between the sealer and the target cells, and in turn the concentration of the cytotoxic components acting on cells, may increase greatly [6]. Thus, toxic sealers can potentially cause tissue injury and may participate in the development of periapical inflammation or the persistence of a preexisting periapical lesion, thereby delaying healing and adversely affecting the outcome of treatment. The biocompatibility of a sealer is therefore of crucial importance [7][8][9][10]. It is determined by various parameters, such as composition and leachable components, setting characteristics, stability of the set sealer and the contact area between the sealer and the adjacent soft and hard tissues [7,9,10].
The sealers commonly used in endodontics are based on zinc oxide eugenol, calcium hydroxide, mineral trioxide aggregate, glass-ionomer or polymers, such as epoxy resins, polydimethylsiloxane and methacrylate [8]. Zinc Oxide-Eugenol (ZOE) paste is probably the most used root canal filling for primary teeth in the United States [11,12]. It is radiopaque, very easy to place and to remove, and has good antiseptic activity [13,14]. Although described as a resorbable material, long-term follow-up evaluations of pulpectomy on primary teeth treated with ZOE have revealed a high frequency of retention of the overfilled material in the periapical area, even after physiological root resorption [12,15,16]. Another study revealed that 67% of all overfilled canals showed delayed resorption of the material when compared with physiological root resorption, and retained particles of ZOE at the 6-month follow-up [17]. Other studies showed that when ZOE extrusion occurs, there is a risk of deflecting the erupting permanent teeth due to its hardness [18,19]. In addition, it was demonstrated that ZOE components are a real obstacle to permanent successor eruption [20]. Various commonly used endodontic sealers contain heavy metals; in 2002 Greenberg demonstrated that the presence of heavy metals could cause mucocutaneous discoloration [21]. Whether the combination of ZOE and heavy metals increases the risk of mucocutaneous discoloration and damages periodontal cells are questions still to be answered. This report presents three clinical cases of primary tooth pulpectomy using ZOE paste showing retained particles of filling cement in the oral mucosa during permanent dentition eruption. The retained particles of sealer migrated from the periapical area to the gingival vestibular area, in a manner resembling pus drainage, causing a poor aesthetic appearance. In order to remove the retained ZOE particles that created the poor aesthetic appearance, periodontal surgery was necessary. The removed tissue was histologically analyzed in order to contribute to the understanding of the genesis of the discoloration. At the same time, ZOE sealer was cultured with cells with the aim of morphologically evaluating possible adverse effects on cells in contact with it.

Clinical Procedures

During a periodic orthodontic evaluation, an altered intraoral gingival pigmentation was observed in three healthy children. The first patient was a 7-year-old girl (Figures 1, 2). The eruption of the upper lateral incisors was hindered by lack of space. After a rapid maxillary expansion protocol, the upper left lateral incisor had not yet erupted; thus root canal therapy and mesial slicing of the deciduous upper left canine were executed (Figure 1a,b), allowing a correct eruption of the lateral incisor. When the patient was 10 years old, she noticed a discoloration on the free gingival margin of the deciduous upper left canine. An orthopantomography showed root filling paste extrusion in the periradicular area (Figure 1d). The darkening seemed to be referable to the bulge of the permanent maxillary canine (Figure 1c), but the discoloration also remained after the canine eruption (Figure 1e,f). The removal of the discoloration, a gingival graft and a histological assessment of the removed tissue were planned at the end of permanent dentition, when the patient was 11 years old (Figure 2). The graft was stabilized to the periosteum with 6-0 Prolene Ethicon sutures (Figure 2). The second patient was a 13-year-old girl (Figure 3).
She had all permanent teeth, except for the lower right second premolar, affected by agenesis, and the left first molar, which had been extracted after an unsuccessful root canal treatment. Moreover, a darkening of the gingiva of the upper left second premolar was present (Figure 3). The patient reported an endodontic therapy on the upper left second primary molar at the age of 5 years. A gingival graft was planned to remove the discolored area and a histological study was executed. The third patient was a 10-year-old boy (Figure 4). His medical history included two endodontic therapies of both upper left and right first primary molars. After the removal of the residual roots, the corresponding free gingival area appeared dark. The discoloration also remained after the premolar eruption (Figure 4). The removal of the discoloration and a histological assessment were executed to evaluate its nature. For all patients, a biopsy of the discolored area of buccal mucosa was performed using a surgical blade (Figures 5, 7). For practical reasons, for the first two patients the biopsy was made during periodontal surgery. The pigmented tissue was removed surgically by means of a layered partial-thickness flap, starting from the superficial layer and moving deeper when necessary. A free gingival graft was harvested from the tuberosity, trimmed to fit the wound area, and the epithelial layer was removed, so that the graft was a free connective tissue graft. This technique leads to a better esthetic outcome in terms of color and tissue thickness when compared to a free gingival graft [22].

Specimen Processing

Immediately after harvesting, soft tissue biopsies were immersion-fixed in 10% formalin in 0.1 M phosphate-buffered saline (PBS, pH 7.4) for 24 hours at room temperature, then routinely dehydrated in increasing concentrations of ethanol (from 50 to 100%) and xylol for 12 hours, and then paraffin embedded. Serial 4-5 µm buccal-lingual sections were obtained, mounted on 3-aminopropyltriethoxysilane-coated slides, rehydrated in decreasing concentrations of xylol and ethanol (from 100 to 70%) and then immersed in distilled water. To evaluate tissue morphology, four sections per site were stained with Mayer's Hematoxylin (Bio-Optica, Milan, Italy) and Eosin (Bio-Optica, Milan, Italy) according to the standard protocol [23]. All the histological sections were observed and photographed with a Nikon light microscope (Eclipse E600) equipped with a calibrated digital camera (DXM1200, Nikon, Tokyo, Japan) and morphological assessments were performed.

Viability Assessment

Three stock solutions were prepared by adding 10%, 5% or 1% of ZOE paste to the culture medium. After 24 h in culture the plate was observed under a phase-contrast microscope and images were acquired to perform a semi-quantitative analysis of cell viability [24]. Adherent cells were counted and the ratio between treated and untreated cells was expressed as a percentage. The variation between cells treated with 10%, 5% or 1% of ZOE and the untreated control was then calculated in order to report the reduction of viability in the presence of the sealer. The same cells were pelleted and used for ultrastructural analysis by Transmission Electron Microscopy (TEM).
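The semi-quantitative viability calculation just described is a simple ratio. The sketch below implements it with hypothetical adherent-cell counts chosen only to echo the order of magnitude of the reported reductions, not actual counts from the study.

```python
import numpy as np

def viability_and_reduction(treated_counts, control_counts):
    """Viability = mean adherent treated cells as a percentage of the
    untreated control; reduction = 100 - viability."""
    viability = 100.0 * np.mean(treated_counts) / np.mean(control_counts)
    return viability, 100.0 - viability

# Hypothetical adherent-cell counts per field at 24 h.
control = [210, 195, 205]
for label, counts in [("10%", [115, 120, 112]),
                      ("5%", [150, 160, 148]),
                      ("1%", [185, 190, 180])]:
    v, r = viability_and_reduction(counts, control)
    print(f"ZOE {label}: viability {v:.0f}%, reduction {r:.0f}%")
```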
Ultrastructure Evaluation

For the assessment by TEM, cellular pellets were fixed overnight in a solution containing 2% paraformaldehyde and 2% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.3), then post-fixed in 1% osmium tetroxide in cacodylate buffer, washed, dehydrated and embedded in Epon-Araldite resin. Semithin sections of the pellets were obtained using a microtome, stained with toluidine blue on a heating plate and acquired under a light microscope (Eclipse E600) [25]. Furthermore, ultra-thin sections were cut with an ultramicrotome (Leica/Reichert Ultracut E) and stained with lead citrate for morphological observation under TEM (Zeiss EM10).

Study Population

In three subjects with a previous history of dental treatment, a blackish-blue oral pigmentation due to ZOE paste was found and collected for histological examination. In two patients, the biopsy was carried out for aesthetic reasons, while in one case the study was conducted at the request of the parents. All biopsies were obtained by an oral surgeon. Pigmentation areas measured between 0.2 and 0.7 cm (with a mean diameter of 0.45 cm).

Histological Assessment

At histological evaluation, no signs of necrosis, fibrous encapsulation, adipose tissue fatty infiltration or severe inflammatory infiltration were detected. Only in the third patient were some areas of mild inflammatory infiltration observed. All samples presented a well-organized connective tissue underneath the oral epithelium. The method used to harvest the biopsies did not assure the integrity of the junctional epithelium. Fine dark brown or black extracellular granules were appreciable in all Hematoxylin Eosin-stained sections (Figures 5-7). Deposition of pigment was observed particularly in the blood vessel walls, in connective tissue and in the intercellular ground substance of the lamina propria. In the basement membrane we found light brown micro-granules uniformly distributed over its whole extension. In contrast, the granules found in the connective tissue were of variable size, mostly larger than those observed in the basement membrane; these were preferentially distributed in the extracellular matrix and showed an almost black coloration. In some cases, fibroblasts and endothelial cells were also marked. Macro-granules observed in the connective tissue presented jagged and irregular margins. At higher magnification, the granules in the vessel walls and in the basement membrane appeared similar in both size and color. In all the samples the epithelial layers did not show any deposit of pigment.

Viability Assessment and Ultrastructural Analysis of Cells in Contact with ZOE

Viability was evaluated using a semi-quantitative methodology with the aim of comparing the behavior of treated cells (Figure 8a-c) with that of untreated cells (Figure 8d). Adherent SAOS-2 cells are small and elongated in shape (Figure 8d). All concentrations reduced the viability of the cells, increasingly so as the amount of sealer was raised (10%) (Figure 8e). Observing cellular pellets in semithin sections obtained from the sample with medium containing sealer at 10%, the adverse effects on cells were confirmed by structural analysis, which showed well-stained, round cells in suspension in the untreated group (Figure 9a) and cellular debris in the treated group (Figure 9c). Furthermore, ultrastructural analysis conducted on ultra-thin sections allowed the visualization of numerous cytoplasmic vacuoles in treated cells (Figure 9d) compared with untreated cells (Figure 9b).
The presence of numerous vacuoles seems to indicate foamy cellular degeneration, with probable consequent cell death (Figure 9d).

Discussion

An ideal endodontic sealer must be biocompatible and have adequate physicochemical properties, bioactivity and antimicrobial activity. Although many endodontic sealers are available on the market, none of them meets all these requirements [26]. ZOE has been found to be potentially irritating to periapical tissues; it may even produce necrosis of bone and cementum, and extruded particles may develop a fibrous capsule that prevents resorption of the paste [27]. Moreover, it was already reported that overfilled ZOE induced inflammatory reactions, chronic or subacute, in the dental follicle of the permanent successor [20]. Frequently, osteoclastic activity is too slow to eliminate the retained ZOE, which suggests the possibility of non-resorption [28]. Pulp canal sealer (Sybron Dental Specialties, Orange, CA) is an endodontic filling material commonly used in contemporary endodontic treatment, also for deciduous teeth [29,30]. It is composed of a liquid and a powder containing eugenol and zinc oxide, Staybelite resin, bismuth subcarbonate, barium sulfate and anhydrous sodium borate, with significantly less toxic effects in connective tissue in rats [29][30][31]. In the literature, relations between heavy metal deposition and subsequent mucocutaneous discoloration have been demonstrated [21]. Bismuth subsalicylate seems to be a pigment-inducing agent; it determines the formation of a characteristic linear black band at the level of the free gingival margin. Tissue inflammation caused by extrusion of endodontic sealer leads to an increase of capillary permeability, thus allowing the deposition of metal sulfides in the connective structure and resulting in mucocutaneous discoloration [32]. The three cases of oral pigmentation evaluated in the current study show both clinical and histological features that are typical of heavy oral metal deposition. This is the first histological report to analyze the presence of granules in the oral mucosa due to bismuth subsalicylate deposition. The aspect of the intraoral discoloration and the localization of micro- and macro-granules at histological analysis seem to be similar to those already observed in tissues exposed to amalgam [33]. Amalgam pigmentation is harmless [34] and relatively frequent, mainly affecting the mandibular gingival mucosa [35], followed by the buccal mucosa, floor of the mouth, tongue, retromolar mandibular area, lips, and palate. Amalgam may produce local adverse effects [33], including mucosal pigmentation due to its metal components (silver, mercury, and tin). This is the most prevalent exogenous oral pigmentation [35,36] and can be confused with melanin pigmentation, in which case biopsy studies are indicated. Bismuth deposition can also be superimposed on melanin pigmentation. In our cases, the granules were vastly different from melanin deposits, which are typically localized in the cytoplasm of melanocytes. Moreover, while melanocytes are usually scattered within the basal layer of the epidermis, the black granules were also found in the connective tissue. Oral pigmentation caused by amalgam arises through several mechanisms, including mechanical penetration into soft tissues, corrosion phenomena, and release of metallic components [37]. The mechanism of bismuth penetration into oral tissue is still unknown; in future studies it could be interesting to determine whether this heavy metal acts in a similar way.
An in vitro model was applied to investigate the behavior of the ZOE sealer. Cytotoxicity assays are used to verify the level of biocompatibility, although they may be influenced by many factors, such as the cell type used in the experiments [26,38]. The literature, in fact, reports widespread cytotoxic effects of sealers on cells in culture [26,38,39]. The cytotoxicity of zinc oxide-eugenol had been previously observed, with data similar to this study, in which viability was affected by the presence of the sealer and the cytotoxic effects decreased over time [39]. Furthermore, Huang et al. (2002) reported that zinc oxide-eugenol-based sealer caused moderate to severe cytotoxicity, probably attributable to free eugenol liberated from the set material [39]. The vacuolization of the cells reported in the results confirmed these negative effects. The process is not yet completely clarified in all its steps, but it has been observed in mammalian cells after exposure to bacterial or viral pathogens as well as to various natural and artificial compounds [40]. Cytoplasmic vacuolization is a known, morphologically recognizable process associated with cell death.

Conclusion

As a result of these considerations, the pediatric dentist should carefully evaluate treated primary teeth and periodically check the radicular and filling paste resorption in order to prevent prolonged retention of the overfilled material, especially during permanent dentition eruption.

Figure 8: The cells treated with 5% of sealer present greater adhesion compared to cells treated with 5% medium. It is notable that the cellular density is slightly lower than in control D. C: The cells treated with 1% of sealer resemble control D in density and morphology. D: Untreated cells are well attached to the plate, in close contact with one another and at high density. Morphologically they present an elongated shape characterized by cytoplasmic processes that allow adhesion to the plate and intercellular contact, maintaining stimuli that promote cellular proliferation (phase-contrast microscope, total magnification 200x). E: Semi-quantitative analysis, reported in the graph, shows the reduction of viability in the presence of the sealer at all concentrations.

Figure 9: Ultrastructural analysis of SAOS-2 cells. A: Untreated SAOS-2 cells appeared well stained with toluidine blue, confirming the integrity of the cell membrane, nucleus and nucleoli (total magnification 500x, light microscope, semi-thin sections). B: Image of a group of untreated cells. Some cells are observed in the division phase (red arrows); the arrows indicate nuclei in division, filled with chromatin and nucleoli. C: Red arrows indicate cellular debris in treated cells. SAOS-2 cells did not show marked staining, confirming that the cells were not well preserved in contact with the sealer (brown, indicated by green arrows) (total magnification 500x, light microscope, toluidine blue, semi-thin sections). D: Treated cells were not well preserved and were characterized by numerous internal bubbles/vacuoles (red arrows), which indicate the loss of integrity of the nucleus and of the organelles in the cytoplasm (400x).
Adipoclast: a multinucleated fat-eating macrophage

Cell membrane fusion and multinucleation in macrophages are associated with physiologic homeostasis as well as disease. Osteoclasts are multinucleated macrophages that resorb bone through increased metabolic activity resulting from cell fusion. Fusion of macrophages also generates multinucleated giant cells (MGCs) in white adipose tissue (WAT) of obese individuals. For years, our knowledge of MGCs in WAT has been limited to their description as part of crown-like structures (CLS) surrounding damaged adipocytes. However, recent evidence indicates that these cells can phagocytose oversized lipid remnants, suggesting that, as in osteoclasts, cell fusion and multinucleation are required for specialized catabolic functions. We thus reason that WAT MGCs can be viewed as functionally analogous to osteoclasts and refer to them in this article as adipoclasts. We first review current knowledge on adipoclasts and their described functions. In view of recent advances in single cell genomics, we describe WAT macrophages from a 'fusion perspective' and speculate on the ontogeny of adipoclasts. Specifically, we highlight the role of CD9 and TREM2, two plasma membrane markers of lipid-associated macrophages in WAT, which have been previously described as regulators of fusion and multinucleation in osteoclasts and MGCs. Finally, we consider whether strategies aiming to target WAT macrophages can be more selectively directed against adipoclasts.

Macrophages have a unique potential to fuse with themselves to form multinucleated giant cells (MGCs) [1]. During homeostasis, the majority of macrophages fuse infrequently and reside in tissues as mononuclear cells. The exception to this rule is the osteoclast of bone, a multinucleated monocyte/macrophage [2] that originates from embryonic erythro-myeloid progenitors and is responsible for the resorption of mineralized bone [3]. The multinucleation capability of the osteoclast correlates with its resorptive activity, suggesting that cell fusion confers a specialized stage of differentiation lacking in the mononuclear state [1]. The concept of a cellular gain of function as a result of fusion/multinucleation is supported by a recent discovery showing that multinucleated osteoclasts can undergo fission to form osteomorphs, daughter cells transcriptionally distinct from osteoclasts [4]. While osteoclasts regulate bone mass, pathological macrophage fusion can be an immune response to infectious pathogens (e.g. Mycobacterium tuberculosis) or foreign materials. MGCs are derived from monocyte progenitors [4], but their precise role within the granuloma is not yet clear. On the other hand, foreign-body giant cells (FBGCs) can be involved in the uptake of larger particles [5], an observation confirmed in vitro [6]. These observations suggest that enhanced phagocytic clearance of large particulates is an adaptive phenomenon resulting from macrophage fusion and multinucleation. The adipose tissue contains macrophages, and during obesity their number increases significantly (up to 50% of all cells), correlating with metabolic dysfunction characterized by inflammation, fibrosis and insulin resistance [7][8][9][10][11]. Their histological description as crown-like structures (CLS) refers to MGCs associated with necrotic adipocytes [12]; however, recent evidence demonstrated that these MGCs can phagocytose lipid remnants more efficiently when compared to unfused WAT macrophages [13].
There is an intriguing association between lipids and macrophage fusion. Cholesterol-rich MGCs have been reported as a frequent and non-specific histological feature in lung biopsies [14]. Historically, the Touton giant cell, which is frequently found in lesions containing high levels of cholesterol and lipid deposits, has been described as a product of fusion between macrophage-derived foam cells [15]. Multinucleated foam cells have indeed been observed as a result of high-fat diet in inflammatory sites such as the synovium [16]. Recent evidence shows that common monocyte progenitors accumulate cholesterol and lipids, which are required for MGC formation [17]. These studies suggest that a lipid-rich microenvironment such as the white adipose tissue (WAT) can be 'fusogenic' for resident macrophages. Based on recent findings published by Braune and colleagues [13], and the existing literature on osteoclasts and MGCs, we postulate that macrophage fusion and multinucleation in the WAT may initiate a 'gain of function' to clear increasingly stressed adipocytes under metabolically challenging conditions such as obesity. Thus, in this review, we refer to MGCs of crown-like structures (CLS) as adipoclasts, the 'fat-resorbing osteoclasts' of the white adipose tissue (Fig. 1A, B). The term adipoclast does not differentiate between MGCs with different nuclei numbers (binuclear, 2-4, > 4) and differs from the designation lipid-associated macrophage (LAM) by its unique multinucleated feature. The choice of this term is based on (i) the wide histological description of CLS in the white adipose tissue, (ii) their recent functional annotation as catabolic cells following fusion/multinucleation [13], and (iii) the functional analogy with osteoclasts, hence the suffix 'clast'. We first describe the current knowledge on CLS and their proposed function. We then review recent advances in WAT single cell transcriptomics, with a specific focus on TREM2 and CD9, membrane receptors that have been previously described in macrophage fusion and multinucleation. We highlight the respective roles of TREM2 and CD9 in osteoclasts, in order to speculate on the adipoclasts' origin and function. Finally, we discuss whether recent macrophage-targeting therapies in the fat may be beneficial or fine-tuned in targeting adipoclasts in obesity. The review does not cover the polarization of macrophages in adipose tissue nor the significance of WAT inflammation in insulin resistance and metabolic disorders in general, an area that is amply covered by excellent reviews (some examples include [10,[18][19][20][21][22][23]).

Crown-like structures are adipoclasts

The infiltration of immune cells in the obese adipose tissue was shown in the 1960s [24,25] and then overlooked for almost four decades, except for an in vitro study showing that insulin resistance in adipocytes can be caused by a macrophage-derived mediator [26]. The presence of macrophages in human and mouse adipose tissue was shown by several groups, and while some reported their tissue localization adjacent to adipocytes, others highlighted their morphological appearance as MGCs arising from cell fusion [11,12,27,28]. Clement et al. isolated CD14+ cells from the stromal vascular fraction (SVF) of human subcutaneous WAT using CD14-coupled magnetic microbeads and confirmed the presence of macrophages in adipose tissue by immunohistochemistry [27].
Two contemporaneous studies reported the existence of macrophage syncytia (or MGCs) in the WAT of genetically obese mice (ob/ob) [11,28]. From a histological point of view, Cinti et al. were the first to designate the WAT multinucleated macrophages as crown-like structures (CLS) [12], surrounding necrotic or lipolytic adipocytes [12,28]. Today it is well-established that adipose tissue CLS contain multinucleated macrophages (i.e. adipoclasts; Fig. 1A) and increase in frequency with obesity. The origin of this augmented macrophage infiltration in the WAT is thought to be blood monocytes [29], and the literature on CLS has long assumed that these cells are implicated in efferocytosis of dead adipocytes because of their histological localization around dead adipocytes. A recent study brought definitive evidence by live imaging the WAT MGCs (i.e. adipoclasts) in mice, showing that these cells can take up lipid remnants which were not ingestible by mononuclear macrophages in the WAT [13]. A bead phagocytosis assay confirmed these findings and showed that, like MGCs [6], adipoclasts can phagocytose large particles [13]. Interestingly, confirming the previous associations between MGCs and lipids, adipoclasts display a relatively high lipid content [13], and this is not surprising given the fusogenic properties of the long-chain fatty acid binding scavenger receptor CD36 in macrophages [30]. In summary, while it is well-accepted that adipoclasts are specialized in efferocytosis of damaged adipocytes, many questions remain regarding the mechanisms underlying this process, as well as the other advantages that cell fusion and multinucleation may confer in the context of prolonged obesity. Furthermore, given the presence of mononucleated, often foamy macrophages in WAT, it is necessary to consider more trophic functions and crosstalk between macrophages and adipocytes [31], including the role of CD36 and other macrophage scavenger receptors [32], as well as clearance functions.

The complexity behind adipoclast function

During prolonged obesity, adipose tissue remodelling is a well-described phenomenon that consists in depot-dependent adipocyte death associated with macrophage infiltration [33,34]. Our limited understanding of adipoclast function is due to the complex evolution of adipocyte cell state under metabolically impaired conditions (see review [35]). During obesity, adipocytes can undergo various forms of death [36]: apoptotic [37], necrotic [12], and pyroptotic [38]. In addition, preadipocytes (i.e. the precursors of adipocytes) have been described to undergo senescence through different mechanisms during obesity [39,40]. On the other hand, the macrophage clearance mechanisms of damaged adipocytes were reported to act through lysosomal exocytosis [41], in addition to phagocytosis [13]. By live imaging, a recent report showed the requirement of a size threshold for efferocytosis of lipid remnants [42]. Adipocyte death induces a metabolically activated and pro-inflammatory macrophage phenotype [42]. Paradoxically, the clearance of dead adipocytes by CLS was also linked to preadipocyte proliferation [43], suggesting an adipogenic role for adipoclasts. Adding to this complexity, different fat depots (visceral vs. subcutaneous) can display different prevalence of adipocyte cell death. It was reported that CLS were widespread in visceral compared with subcutaneous fat in genetically obese mice (db/db and ob/ob) [44].
In keeping with this, adipoclast infiltrates may differ between murine and human WAT. In mice, a prolonged high-fat diet of 24 weeks is required to observe the adipoclasts histologically [13], suggesting that prolonged obesity is a prerequisite for multinucleation of these cells. Hence adipoclasts have been linked to adipocytes in different cellular states that broadly describe cellular stress and ultimately death. This raises the question of whether adipoclasts can 'sense' a particular adipocyte state and whether their fusion from mononuclear macrophages is triggered through adipocyte-derived markers of stress. For instance, using a co-culture setup, it was shown that adipocyte death triggers MGC formation in vitro [13]. Further experiments will be crucial in order to establish the exact mechanisms underlying this process.

Adipoclasts and/or their precursors display multinucleation markers

While it is accepted that obesity is associated with a shift toward pro-inflammatory macrophage function [45][46][47], WAT macrophages have a unique polarization state (metabolically activated macrophages [48]) and, paradoxically, crown-like structures contain macrophages expressing the M2-like marker CD206 (mannose receptor) and CD11c [49]. Recent single cell transcriptomics studies revealed the different subtypes of adipose tissue macrophages and their evolution under obesogenic conditions [50][51][52][53][54]. Two markers of white adipose tissue macrophages of particular interest are TREM2 and CD9. Jaitin et al. were the first to describe a TREM2-expressing lipid-associated macrophage (LAM) subset in human WAT [53], later confirmed by a separate study [51]. Similarly, CD9, another marker of LAMs [53], was found to colocalize with the pan-macrophage marker CD68 in human WAT [54]. Notably, TREM2+ and CD9+ LAMs were found to be part of CLS [52,53], and their frequency increased with obesity in mice and humans [50,51], with a shift toward a pro-inflammatory polarization characterized by IL-1β and TNF production [51]. None of the single cell RNA-seq studies in the WAT distinguished multinucleated macrophages (i.e. adipoclasts) from other macrophage subsets. Although technically challenging, this could have been attempted by sorting LAMs with > 2 nuclei. The advantage of such an approach would have been the identification of potential precursors of adipoclasts, in order to make a distinction between 'fusion-competent' LAMs and adipoclasts, as well as the polarization state of each cell type. Nevertheless, the recent single cell transcriptomic approaches in human WAT suggest that adipoclasts and/or adipoclast precursors express TREM2 and CD9 [51][52][53][54].

TREM2 and CD9: a parallel between adipoclasts and osteoclasts

The existence of CD9+ and TREM2+ adipoclasts is worth highlighting from a macrophage fusion perspective, especially given the relevance of these two membrane proteins in osteoclast and MGC fusion and multinucleation. Besides its widely studied role in microglial phagocytosis [55] and neurodegeneration [56,57], TREM2 (the triggering receptor expressed on myeloid cells 2) is essential for macrophage multinucleation as part of a signalling pathway that includes DAP12 and Syk [58]. TREM2 regulates osteoclast formation [59][60][61], and a recent report shows its regulatory role in granuloma formation through recruitment of mycobacterium-permissive macrophages [62].
Furthermore, deletions or loss-of-function mutations in either DAP12 or TREM2 are causally associated with Nasu-Hakola disease, a dementia associated with bone cystic lesions [63,64]. Importantly, mutations in TREM2 and DAP12 induce defective multinucleation in osteoclasts, resulting in impaired bone resorption [60]. Trem2 is a trans-acting genetic regulator of a macrophage multinucleation gene co-expression network [65,66], which also includes genes belonging to the PI3K-mTORC1 pathway that controls osteoclast multinucleation and bone mass [66]. The TREM2-PI3K-mTOR axis is indeed well-defined in microglia [67], and the activation of PI3K signalling is a common feature of osteoclasts and MGCs [58,68]. Jaitin et al. identified TREM2 not only as a marker, but also as a driver of the LAM cell molecular program, as lipid uptake and storage were abrogated in the absence of Trem2 [53]. Interestingly, apolipoprotein E (ApoE) is a Trem2 ligand [69,70] and both Trem2 and ApoE are expressed by a subpopulation of tumour-associated macrophages [71]. Macrophages can fuse with tumour cells and contribute to tumour heterogeneity [72], but a potential role of Trem2 in this process is yet to be found. The lipid sensing role of TREM2 has been shown as part of the microglia response [73] but also during infection, as TREM2 is capable of recognizing mycobacterial cell-wall mycolic acid (MA)-containing lipids [62]. This raises the possibility of a lipid uptake through TREM2 that can be a prerequisite mechanism for macrophage fusion and multinucleation. Local lipid changes are principal regulators of adipose tissue macrophage recruitment [74]. Interestingly, single cell RNA-sequencing analysis of aortic CD45+ cells from atherosclerotic high-fat diet-fed (Ldlr−/−) mice identified macrophages with high Trem2 expression, specialized in lipid metabolism/catabolism and enriched in the osteoclast gene signature [75]. If one extrapolates these findings to the WAT, it is plausible that Trem2-expressing macrophages accumulate lipids and become fusogenic, giving rise to adipoclast precursors and adipoclasts. Fusion and multinucleation could be considered as the final differentiation step of these precursors. However, the exact Trem2-dependent and lipid-related mechanisms allowing the transition from fusion-competent adipoclast precursors to adipoclasts remain to be identified, and in that sense, some parallels drawn from knowledge on osteoclast lipid metabolism may be of relevance. Cholesterol is indispensable for membrane fusion and osteoclast v-ATPase activity [76,77], and Ldlr−/− mice have defective osteoclast fusion [78]. Since osteoclast formation, survival and morphology are highly dependent on exogenous cholesterol/lipoproteins [79], adipoclast integrity and function may also be under the influence of a cholesterol-rich environment in the adipose tissue. Similarly, saturated fatty acids enhance osteoclast survival [80] and palmitic acid increases RANKL-mediated osteoclast differentiation [81]. On the other hand, short-chain fatty acids such as propionate and butyrate induce metabolic reprogramming of osteoclasts and downregulate essential osteoclast genes [82]. This suggests that individual lipid species may have opposing roles on osteoclast differentiation and fusion, and therefore the lipid dynamics in the WAT during obesity may determine the formation of adipoclasts. In this regard, it has been shown that ablation of fat cells in adult mice can induce massive bone gain [83].
As the diet and microbiome significantly contribute to the reserve and processing of fatty acids, the lipid composition of WAT under obesogenic conditions [84] can be a pivotal factor in determining adipoclast formation and function. Tetraspanins are a superfamily of membrane proteins, and among them, CD9 and CD81 are closely related and known to control cell-cell fusion, as they negatively regulate fusion of osteoclasts and MGCs [85,86]. These proteins facilitate the organization of integrins and influence macrophage motility [87]. CD9/CD81 double-null mice spontaneously develop MGCs in the lung, showing enhanced osteoclastogenesis in the bone and signs of accelerated ageing with atrophy of adipose tissue [86,88]. Interestingly, while CD9 has been robustly linked to WAT macrophages [51,53,54], CD81 has been recently described as a beige adipocyte progenitor cell marker and regulator of de novo beige fat biogenesis following cold exposure [89]. The potential involvement of CD81 in adipoclast differentiation and function remains to be identified. Given that CLS have been described to be an adipogenic niche for adipocyte progenitor cells [43], CD81 may be involved in a possible interaction between adipocyte progenitors and adipoclasts or adipoclast precursors. Notably, tetraspanins are the only inhibitors of fusion that have been so far identified. Because their downregulation induces membrane fusion [1,90], CD9 and CD81 may be expressed in adipoclast precursors and undergo down-regulation when fusion occurs. Hence, the transcriptomic characterization of CD9+ mononucleated and multinucleated cells in the WAT can confirm the precise role of tetraspanins in adipoclast formation. In summary, the presence of TREM2+ CD9+ adipoclasts or adipoclast precursors seems to correlate with WAT inflammation and the severity of obesity-related pathologies (Fig. 2). In support of the pathogenic role of adipoclasts, a scar-associated and pro-fibrotic TREM2+ CD9+ subpopulation of macrophages was identified in cirrhotic human liver [91]. These scar-associated macrophages were conserved in mice and express osteopontin (SPP1) [91], a protein that regulates FBGC formation [92] and osteoclast fusion and resorption [93]. Whether the scar-associated macrophages can fuse with each other remains to be confirmed. In non-alcoholic steatohepatitis (NASH), a specific macrophage population is characterized by high levels of expression of Trem2 [94] and other lipid-associated macrophage markers, forming hepatic CLS [95]. A NASH diet causes a partial loss of Kupffer cell identity, induction of Trem2 and Cd9 expression, and cell death in mice [96]. Interestingly, the expression of Trem2 and Cd9 is a result of substantial reprogramming of the Kupffer cell regulatory landscape due to prolonged exposure to the NASH diet [96]. Hence, an interesting parallel can be made with TREM2+ CD9+ adipoclasts, which may form as a result of chronic obesogenic conditions, whereby membrane fusion and multinucleation are likely to induce changes in the transcriptomic/epigenetic landscape, allowing phagocytosis of damaged adipocytes. In addition to metabolic tissues, TREM2+ CD9+ microglia in the brain may play a pathogenic role. It is intriguing that lipid-droplet-accumulating microglia (a subgroup presumably distinct from the disease-associated microglia expressing TREM2 and CD9 [97]) represent a dysfunctional and proinflammatory state in the ageing brain [98].

Targeting macrophages and/or adipoclasts in obesity?
To date, it is well-accepted that obesity triggers the recruitment of monocytes into adipose tissue to promote inflammation, which itself may cause ectopic fat deposition in the liver and insulin resistance [99,100]. The discovery of adipose tissue TNF [101,102] and, a decade later, the monocyte-chemoattractant protein 1 (MCP-1) [103,104] proved the importance of WAT inflammation and its indisputable macrophage component in the metabolic syndrome. Logically, this has seen the emergence of macrophage-targeting therapies that were initially aiming to inhibit the recruitment of these cells [105][106][107].

[Fig. 2 caption: The transition from obese to severely obese state is characterized by increased macrophage infiltration and the formation of TREM2- and CD9-expressing pro-inflammatory macrophages that eventually give rise to multinucleated adipoclasts surrounding stressed adipocytes. How fusion/multinucleation affects the expression of TREM2/CD9, and whether this causes de novo expression of adipoclast markers, is yet to be determined.]

With the increasing recognition of macrophage metabolism in the regulation of its immune function [108], novel initiatives target mitochondrial function in macrophages [109,110], given the relevance of mitochondrial oxidative phosphorylation in diet-induced obesity [111]. Drug delivery approaches, including nanomaterial-based ones targeting macrophages, hold promise [112]. Furthermore, in addition to their professional phagocytic activity and plasticity [113], tissue macrophages have unique features that differentiate them from surrounding cells. For instance, their enhanced sensitivity to changes in intracellular potassium levels and inflammasome activation [114] makes them attractive targets for Na+/K+-ATPase blockers such as ouabain [115]. A recent study exemplifies the strategic relevance of macrophage-targeted pharmacological interventions in obesity: macrophage-derived PDGFcc production is regulated by diet and increases lipid storage by white adipocytes [116]. When considering macrophage-targeted treatments in adipose tissue, it is crucial to keep in mind the heterogeneity and master regulatory role of macrophages in the development and homeostatic function of adipose tissue. It has become evident that macrophages express organ-specific genes in addition to canonical macrophage genes, a phenomenon referred to as niche-specific programming [96,117]. The recently identified sympathetic neuron-associated macrophages increase with obesity and can be targeted for the browning of white fat [118]. This shows the heterogeneity of adipose tissue macrophages, which should be taken into account in any pharmacological approach aiming to reduce obesity-related complications. During homeostasis, many aspects of the mature function of macrophages are controlled by CSF1 and IL-34, which both bind CSF1R, a receptor restricted to cells of the myeloid lineage. Furthermore, Trib1, an adaptor protein involved in protein degradation, is critical for the differentiation of tissue-resident macrophages [119], while receptors known to be preferentially expressed by mononuclear phagocytes, such as TREM2 [55,120] and MARCO [121,122], regulate an array of tissue-resident macrophage functions including efferocytosis (TREM2) and scavenging (MARCO). The genetic deletion of Csf1r in rats and Trib1 in mice reduces adipose tissue mass [119,123], while Trem2−/− and Marco−/− LAMs lose their efficacy in lipid buffering [53,124].
Of note, CSF1R on microglial cells can modulate hypothalamic control of energy homeostasis in mice [125,126], which suggests that CSF1R may be responsible for both local and systemic control of adiposity. When considering macrophage-targeted therapies, possible non-myeloid expression of some markers (e.g. Trem2) should be taken into consideration, as it may influence metabolic health [127]. Altogether, these studies suggest that healthy macrophage differentiation and function is an unconditional part of adipose tissue homeostasis, and therapeutic approaches must differentiate between optimal macrophage presence and pathological infiltration and accumulation of these cells. Based on current knowledge, adipoclasts are likely to form when relatively high numbers of macrophages infiltrate the adipose tissue due to prolonged obesity. It is still not clear whether adipoclasts are only homokaryons or whether they can also form by fusion of mononucleated macrophages and adipocytes. Here we argue that inhibiting adipoclast formation may improve insulin sensitivity. Rather than global approaches aiming to target adipose tissue macrophages, one can envisage inhibition of adipoclast formation. However, such therapies require a better understanding of adipoclast formation and the identification of novel markers that differentiate mononucleated precursors from multinucleated fused cells. Integrating the transcriptomic, epigenetic and metabolic events that accompany cell fusion and multinucleation in the WAT will fine-tune cell-based therapies in obesity and metabolic syndrome.
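As a concrete illustration of the single-cell identification of TREM2+/CD9+ LAM-like subsets discussed in this review, here is a minimal sketch of scoring WAT macrophages for a LAM marker signature with scanpy. The input file, annotation column, extra markers, and cutoff are all illustrative assumptions, not any published pipeline.

```python
# Minimal sketch: scoring WAT macrophages for a LAM-like signature
# (TREM2, CD9, plus LPL/CD36 as illustrative extras) in single-cell
# RNA-seq data. File name, annotation column, and cutoff are assumptions.
import scanpy as sc

adata = sc.read_h5ad("wat_svf.h5ad")  # hypothetical preprocessed dataset

lam_markers = ["TREM2", "CD9", "LPL", "CD36"]

# Restrict to the macrophage compartment before scoring.
mac = adata[adata.obs["cell_type"] == "macrophage"].copy()

# Mean-expression score of the signature against a random background set.
sc.tl.score_genes(mac, gene_list=lam_markers, score_name="lam_score")

# Flag putative LAMs / adipoclast precursors with an arbitrary cutoff.
mac.obs["putative_lam"] = mac.obs["lam_score"] > 0.25
print(mac.obs["putative_lam"].value_counts())
```

Note that such a transcriptomic score cannot distinguish multinucleated adipoclasts from mononucleated fusion-competent precursors, which is exactly the limitation raised above.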
RB/PLK1-dependent induced pathway by SLAMF3 expression inhibits mitosis and control hepatocarcinoma cell proliferation

Polo-like kinase PLK1 is a cell cycle protein that plays multiple roles in promoting cell cycle progression. Among these many roles, the most prominent role of PLK1 is to regulate the mitotic spindle formation checkpoint at the M-phase. Recently we reported the expression of SLAMF3 in hepatocytes and showed that it is downregulated in tumor cells of hepatocellular carcinoma (HCC). We also showed that a forced high expression level of SLAMF3 in HCC cells controls proliferation by inhibiting the MAPK ERK/JNK and mTOR pathways. In the present study, we provide evidence that the inhibitory effect of SLAMF3 on HCC proliferation occurs through a Retinoblastoma (RB) factor- and PLK1-dependent pathway. In addition to the inhibition of the MAPK ERK/JNK and mTOR pathways, expression of SLAMF3 in HCC retains the RB factor in its hypophosphorylated active form, which in turn inactivates the E2F transcription factor, thereby repressing the expression and activation of PLK1. A clear inverse correlation was also observed between SLAMF3 and PLK1 expression in patients with HCC. In conclusion, the results presented here suggest that the tumor suppressor potential of SLAMF3 occurs through activation of RB, which represses PLK1. We propose that the induction of a high expression level of SLAMF3 in cancerous cells could control cellular mitosis and block tumor progression.

INTRODUCTION

Hepatocellular carcinoma (HCC) is a highly aggressive cancer, which results in more than 600,000 deaths every year worldwide [1]. Although the major risk factors of HCC have been identified, which include infection with hepatitis B or C viruses [2], the balance between cell cycle regulators and cell proliferation is also an important determinant of tumor development and/or behavior [3]. Although the expression of proapoptotic genes is decreased in HCC, the balance between death and survival in HCC is dysregulated due to overactivation of anti-apoptotic pathways. Indeed, some molecules involved in counteracting apoptosis, such as Bcl-XL, Mcl-1, c-IAP1, XIAP and survivin, are known to be overexpressed in HCC cells. Furthermore, some growth factors that mediate cell survival are also up-regulated in HCC, as well as the molecules involved in the cleavage of their proforms to an active form. The expression and/or activation of the JAK/STAT, PI3K/AKT and RAS/ERK pathways are also reported to be enhanced in HCC cells, conferring on them resistance to apoptotic stimuli [4][5][6][7][8]. SLAMF3 belongs to the signaling lymphocytic activation molecule family of receptors (SLAMF-Rs) that trigger both inhibitory and activation signals in immune cells [9]. Recently, we identified the expression of SLAMF3 in hepatocytes [10]. We demonstrated a link between a high expression level of SLAMF3 in HCC cells and a low proliferation index. SLAMF3 overexpression inhibited the ERK1/2, JNK and mTOR pathways and reduced tumor progression of HCC xenografts in a mouse model [10]. The identification of this new molecule in hepatocytes and its role in controlling the proliferation of HCC cells prompted us to investigate other potential pathways involved in the anti-proliferative effect of SLAMF3. In a preliminary study, we analyzed the mRNA and quantified the expression of genes implicated in proliferation and cell cycle control in SLAMF3-overexpressing cells.
Among the analyzed genes, the transcripts of Polo-like kinase PLK1 were found to be significantly decreased. The levels of PLK1 mRNA, coding for a serine-threonine kinase, vary dramatically during cell cycle progression: they are very low or undetectable in G0-G1, increase steadily from S phase onwards, and peak in the G2-M phase [11,12]. The most prominent role of PLK1 is to regulate the spindle checkpoint in the M-phase [13]. It has also been observed that mutation in Plk1 alleles disrupts spindle formation, resulting in polyploid cells [14]. Comparison of PLK1 expression in HCC tumors and their corresponding tumor-free tissue showed that it is overexpressed in the tumoral tissue. High-level expression of PLK1 in HCC tumoral tissue correlated with a low overall survival rate. In addition, siRNA against PLK1 increased apoptosis in the Huh-7 cell line in a caspase-independent manner and induced tumor regression in siPLK1-treated mice [15]. PLK1 depletion leading to G2/M arrest, inhibition of cell proliferation and promotion of apoptosis via downregulation of survivin expression has also been reported [16]. RB activity is responsible for repression of the PLK1 promoter, which in turn depends on the activity of the SWI/SNF chromatin remodeling complex [17]. The extended RB pathway comprises the p16INK4a and p21CIP1 family members, which inhibit the kinase activity of cyclin-cyclin-dependent kinase (CDK) complexes; these complexes in turn inactivate the RB protein and its two other family members, p107 and p130, by hyperphosphorylation during the G1/S transition of the cell cycle, thereby activating E2F transcription factors [18,19]. Multiple events resulting in the functional inactivation of the RB pathway in human HCC occur early in the course of the disease, suggesting that the RB pathway plays a pivotal role in preventing initiation of HCC [20,21]. In the present study, we report the regulatory effect of hepatocyte SLAMF3 on PLK1 expression via the RB pathway. We provide evidence that the anti-proliferative effect of hepatocyte SLAMF3 is, in part, through the inhibition of PLK1 expression and activity. We also report an inverse correlation between high expression levels of PLK1 and low expression levels of SLAMF3 in HCC patients, suggesting an anti-mitotic role of SLAMF3 through a PLK1-dependent pathway. Taken together, the induction of a high expression level of hepatocyte SLAMF3, an anti-mitotic factor, could be a potent strategy for therapeutic intervention in HCC.

Overexpression of SLAMF3 blocks HCC cell proliferation

We have recently identified and described the expression of the SLAMF3 receptor in hepatocytes [10] and shown that a high expression level of SLAMF3 inhibits proliferation in HCC cells. The cancerous wild-type Huh-7 and HepG2 cell lines do not express SLAMF3 on more than 5-10% of cells at the cell surface [10], in comparison to primary hepatocytes. Transient transfection of the cell lines with the plasmid increased the expression of SLAMF3 and yielded 60-70% positive cells. SLAMF3+/high and SLAMF3−/low cells were sorted, and it was observed that enhanced SLAMF3 expression reduced cell proliferation by 50% at 72 h post-transfection (Figure 1A). As described previously, high-level expression of SLAMF3 reduced the phosphorylation of MAPK ERK1/2, as shown in Figure 1B. In a similar manner, SLAMF3 was also overexpressed in another HCC cell line, HepG2, and inhibition of ERK1/2 was observed, as in Huh-7.
However, the inhibition of cell proliferation upon overexpression of SLAMF3 was only 40% in HepG2, compared with the 50% inhibition observed in Huh-7. The inhibition of ERK1/2 and the reduction in cell proliferation observed in both Huh-7 and HepG2 confirmed the anti-proliferative effect of SLAMF3 in HCC cells (Supplementary Figure 1A, 1B).

SLAMF3 overexpression in HCC leads to increased cell size and granularity

Huh-7 cells overexpressing SLAMF3 were compared to cells transfected with the control plasmid (mock) and tested for size and granularity by forward and side scatter in a flow cytometer. We show that SLAMF3-overexpressing cells present a bigger cell size and more intense granulation (Figure 2A). Indeed, cell sizes were significantly increased by 30% in SLAMF3+/high compared to SLAMF3−/low cells (Figure 2B). MGG staining showed that Huh-7 cells overexpressing SLAMF3 have an enlarged cytoplasm with denser chromatin when compared to SLAMF3−/low cells (Figure 2C).

SLAMF3 expression induces cell cycle arrest at G2/M

We have previously shown that overexpression of SLAMF3 in cancerous cells leads to the inhibition of MAPK ERK/JNK and mTOR phosphorylation and induces apoptosis by a caspase-dependent pathway [10]. These observations prompted us to analyze the effect of the signal induced by a high expression level of SLAMF3 on the cell cycle. Huh-7 cells were transfected with the SLAMF3 plasmid and sorted into SLAMF3+/high and SLAMF3−/low sub-populations 48 hours after transfection, in order to compare the cell cycle distribution between the two sub-populations. In the SLAMF3+/high sub-population, a net cell cycle arrest was observed, with accumulation of cells at the G2/M stage (p < 0.01) and, less pronounced, at the DNA synthesis phase (S-phase). SLAMF3−/low cells remained predominantly in G0/G1 (Figure 3A, 3B).

Overexpression of SLAMF3 inhibits expression and phosphorylation of PLK1

Subsequently, based on the increased cytoplasmic content, nuclear size and observed cell cycle blockade in the presence of high SLAMF3 expression, we hypothesized that the expression of SLAMF3 blocks cell division after DNA replication. Polo-like kinase-1 (PLK1) is one of the crucial factors involved in the regulation of mitosis. This protein is involved in mitotic spindle formation and the separation of the two daughter cells at the late mitosis stage. First, we compared PLK1 and SLAMF3 expression in HCC cells and healthy primary hepatocytes. We highlight a significant inverse correlation (r = −0.9701, p < 0.0005) between PLK1 and SLAMF3 expression (Figure 4A). All HCC cell lines expressed very low levels of SLAMF3 compared to primary human hepatocytes (PHH). Among the HCC cells, the Huh-7, HepG2 and Hep3B cell lines expressed lower levels of PLK1 than SNU398 and SNU449. Huh-7 cells were transfected transiently to overexpress SLAMF3 and PLK1 mRNA was quantified by qPCR. It was observed that overexpression of SLAMF3 significantly (p < 0.05) inhibited the expression of PLK1 mRNA (Figure 4B). Western blot analysis also showed that SLAMF3 overexpression reduced the expression of PLK1. A western blot performed using an anti-phospho-PLK1 antibody showed that the overexpression of SLAMF3 also reduced the activation of PLK1 (Figure 4C).

Hepatocyte SLAMF3 maintains RB in its activated form and suppresses PLK1-dependent mitosis

The Retinoblastoma (RB) factor is one of the many factors which control the expression of PLK1 [17].
The hyperphosphorylation of RB results in its detachment from the E2F-suppressor complex, which induces the expression of genes under the control of the RB/E2F complex. Inversely, the hypophosphorylated form of RB remains attached to the E2F factor and represses the expression of genes under the control of RB [10,17]. The overexpression of SLAMF3 in Huh-7 cells drastically decreased the hyperphosphorylated form (p-pRB), whereas both hypo- and hyper-phosphorylated forms were present in the mock (Figure 5A). This result suggests that overexpression of SLAMF3 retains RB in its active form, which remains potentially fixed to the E2F-suppressor complex.

[Figure 2C caption residue: Huh-7 cells were transfected with SLAMF3 and sorted as SLAMF3+/high and SLAMF3−/low, and cell morphology (Giemsa staining) was compared to that of WT cell cultures at 48 hours after SLAMF3 transfection; one representative of two independent experiments is presented as microscopy analysis at 10x and 40x.]

To verify the link between SLAMF3 and RB, an RB-specific shRNA was introduced in Huh-7 cells to create a stably transfected cell line. Expression levels of RB were tested in this cell line, and it was observed that the introduction of the RB-specific shRNA led to a 70% reduction in the mRNA and an 80% reduction in the protein (see Supplementary Figure 2A, 2B). To understand the role of RB in the anti-proliferative property of SLAMF3, Huh-7/shRNA-RB cells were transiently transfected to overexpress SLAMF3 and proliferation was tested by MTT assay. The results show that the overexpression of SLAMF3 did not have any effect on cell proliferation, suggesting that the inhibitory effect of SLAMF3 is mediated by the RB factor (Figure 5B). In addition, the inhibitory effect of SLAMF3 on PLK1 expression was decreased in the absence of RB (Figure 5C), suggesting a strong link between SLAMF3 overexpression and activation of RB, which by its hypophosphorylation in turn negatively regulates the expression and activation of PLK1, resulting in cell cycle arrest at the mitosis stage.

Expression level of SLAMF3 inversely correlates with PLK1 expression in patients with HCC

To compare the expression levels of PLK1 mRNA between HCC and healthy adjacent normal tissue from the same patient, total RNA was extracted and real-time RT-PCR was performed on the samples. Thirteen pairs (n = 13) of resection samples (T/pT) were obtained from the surgery department (CHU, Amiens, France). Analysis of SLAMF3 mRNA in the samples demonstrated that in nine samples (9/13, 70%), SLAMF3 mRNA in HCC (T) tissues was significantly lower than in adjacent normal tissue (pT) (Figure 6A and Supplementary Figure 3A; p < 0.005). Four samples (4/13; 30%) presented higher SLAMF3 mRNA expression in T than in pT. This paradoxical result, compared to our previous observations, prompted us to check for the presence of other cell types that express SLAMF3 mRNA, thereby increasing the apparent expression of this molecule in hepatic tumor tissue. Indeed, we quantified transcripts of CD3 and CD64, specific markers for T lymphocytes and macrophages, respectively, which are described as SLAMF3-expressing cells [22]. We show that the four samples which presented high SLAMF3 in tumor samples also expressed high levels of CD3 and CD64, suggesting infiltration of immune cells in the tumor tissue (Supplementary Figure 4A, 4B). The mRNA quantification also showed that HCC tissues expressed significantly (p < 0.05) higher levels of PLK1 mRNA than adjacent normal tissue (Figure 6B and Supplementary Figure 3B).
These results were also confirmed by western blot analysis in T and pT samples from patients #1, #2, #3 and #4 (Figure 6C). We observed a significant inverse correlation between SLAMF3 and PLK1 expression in the patients (p < 0.005; r = 0.86) (Figure 6D).

[Figure 5 caption residue (panels B-C): in both conditions, cells were untransfected (untreated cells), transfected with empty plasmid (Mock), or transfected with the SLAMF3-coding plasmid (SLAMF3); panel B results are presented as mean +/− SD (n = 3; ***p < 0.005; ns: not significant), and panel C shows PLK1 transcripts quantified in shRNA-RB and shRNA-control-treated Huh-7 cells, presented as the mean of six independent experiments (n = 6; *p < 0.05; ns: not significant).]

DISCUSSION

PLK1 has been shown to be intimately involved in spindle formation and chromosome segregation during mitosis and therefore plays an important role in the regulation of the cell cycle [23][24][25][26]. In HCC, the overexpression of PLK1 correlates with a low overall survival rate in HCC patients. The in vitro introduction of siRNA against PLK1 in Huh-7 cells increased apoptosis via a caspase-independent pathway and induced tumor regression in siPLK1-treated mice [15]. The levels of PLK1 increase during S phase and peak at mitosis (G2-M), and its activity is elevated in tissues and cells with a high mitotic index, including cancerous cells [12]. The depletion of PLK1 led to G2/M arrest, inhibition of cell proliferation and promotion of apoptosis via down-regulation of survivin expression [16]. Based on these findings, PLK1 has been proposed as a novel diagnostic marker for cancer, and its inhibition might represent a rewarding approach in cancer therapy [27]. Indeed, several PLK1 inhibitors, including BI2536 and
Here we show that overexpression of SLAMF3 specifically inhibits the expression of CDC25 as shown by Q-PCR and western blotting (Supplementary Figure 5A, 5B). This observation confirms the suppressor effect of SLAMF3 on pathways that control cell cycle progression. The observations presented here allow us to identify at least one mechanism by which SLAMF3 controls cell cycle progression of cancerous cells by inhibiting PLK1 expression and phosphorylation. Second, by inhibiting CDC25 expression and activation, SLAMF3 controls CDK1-cyclin B activation. More importantly, in nine patients from our HCC cohort, we confirmed the high PLK1 expression in T samples compared to its low expression in pT samples. This result is similar to that of He et al., (2009) [15] and suggest that the rate of PLK1 expression could be a molecular marker of the HCC progression and the aggressiveness. A strict inverse correlation, (9/9, 100%) was obtained between expression of SLAMF3 and PLK1. In HCC samples, when PLK1 mRNA level was high, the levels of SLAMF3 was very less or not detectable. Based on METAVIR stage and clinico-biological data of patients, all patients with advanced fibrosis (score > F2) expressed high levels of PLK1 and undetectable levels of SLAMF3. Our results suggest that the expression of SLAMF3 could be considered as a marker of HCC as its expression was inversely correlated to that of PLK1. No significant correlation was detected between the expression of both SLAMF3/PLK1 and HCC etiology. Additional patients need to be analyzed in order to confirm the specificity of SLAMF3 expression in different etiologies such as viral hepatitis, NASH and alcoholic HCC. Finally, in the present work we also highlight the mechanisms by which the SLAMF3, acts as a tumor repressor, controls the HCC cell proliferation and tumor progression. Overexpression of SLAMF3 inhibited MAPK/ ERK1/2 phosphorylation in HCC wild type cells where ERK1/2 was constitutively activated [10]. Among the many roles, one role of the MAPK ERK cascade is the regulation of G2/M and mitosis progression. Indeed, all components of the cascade were shown to undergo activation during the late G2 and M phases of the cell cycle [31][32][33]. Several molecular mechanisms have been implicated in the regulation of G2/M by the MAPK/ERK cascade, including the phosphorylation of centromere protein E [34], SWI-SNF [35], Myt1 [36] as well as the indirect activation of PLK1 and Cdc2 [37]. Taken together, the SLAMF3-induced reduction in the activity of MAPK/ERK may control mitosis by inhibition of PLK1 expression and activation. This effect may be accentuated by the repressor effect of RB on PLK1 promoter. Indeed, PLK1 is a target of the RB suppressor pathway and several reports proposed that activation of RB, by its hypophosphorylation, mediate attenuation of PLK1 by controlling PLK1 promoter activity [17]. Our observations suggest that the induction of high level expression of SLAMF3 could be one of potent therapeutic strategy to control tumor progression. Thus, additional studies are needed to identify the molecular partners of hepatic SLAMF3 and study its implications in tumor-suppressing functions. Patient samples and cell culture Thirteen pairs (n = 13) of tumor (T) samples and matched peritumoral (pT) samples were obtained from HCC patients undergoing surgical resection at Amiens University Hospital (Amiens, France). 
Our protocol was approved by the local independent ethics committee (Comité de Protection des Personnes (CPP) Nord-Ouest, Amiens, France). Patients were provided with information on the study procedures and objectives and gave their written consent to participation. The clinical and biological information of the patients is summarized in Table 1. Total mRNAs and proteins were extracted using specific kits and used for further analysis.

mRNA extraction, quantitative PCR, sequencing and plasmid construction

Total mRNA was extracted using the RNeasy kit (Qiagen) and RT-PCR was performed using 100 ng of total RNA. Quantitative PCR was performed according to the Taqman Gene Expression protocol (Applied Biosystems) using the following primers for SLAMF3: forward 5′-tgg gac taa gag cct ctg gaa a-3′, reverse 5′-aca gag att gag aac gtc atc tgg-3′, and an MGB probe with 6-FAM (5′-ccc caa cag tgg tgt c-3′). The transcription of GAPDH was measured as an endogenous housekeeping control. Hepatic SLAMF3 was cloned into the mammalian expression vector pBudCE4.1 (Invitrogen) as previously described [10]. For SLAMF3 overexpression, cells (0.3 × 10⁶) were first seeded into six-well plates 24 h prior to transfection and transfected with 0.8 μg of plasmid DNA using the FuGENE HD Transfection Reagent Kit (Roche, Meylan, France) according to the manufacturer's instructions. Cells were incubated for 48 h at 37°C before analysis of SLAMF3 expression by mRNA quantification, flow cytometry and WB. For Q-PCR quantification, the following primers were used: PLK1 For-aga aga ccc tgt gtg gga ct, Rev-tca aaa ggt ggt ttg ccc ac; CDC25 For-att ctc atc tga gcg tgg gc, Rev-act cct ttg tag ccg cct ttc; and GAPDH For-aag gtg aag gtc gga gtc aa, Rev-ctt gac ggt gcc atg gaa tt. For staining, cells were collected in cold PBS/0.01% sodium azide/0.5% BSA, washed and incubated with fluorescent-conjugated primary or isotype-matched antibodies for 20 min at 4°C. Following extensive washing (in PBS/0.01% sodium azide), cells were fixed (in 1% paraformaldehyde) and 5,000 viable events were analyzed in the cytometer.

Hepatocyte proliferation, cell cycle analysis and Giemsa staining

The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) test was used to check the anti-proliferative effect of SLAMF3 expression in HCC cells. Cells (sorted Huh-7 SLAMF3−/low and SLAMF3+/high) were seeded at 10⁴ cells/well in 96-well plates. At 24, 48 and 72 h, cells were rinsed and exposed for 1 h to a solution of thiazolyl blue tetrazolium bromide suspended at a concentration of 0.5 mg/ml in colorless culture medium (MTT assay kit from Sigma-Aldrich, St Quentin Fallavier, France). Reduced purple formazan crystals were extracted with DMSO and analyzed at a wavelength of 560 nm. For cell cycle analysis, cells were seeded at a density of 1 × 10⁶ cells, and the cell cycle distribution was analyzed by flow cytometry at 48 h after transfection of the SLAMF3 plasmid. After washing twice with PBS, cells were harvested, collected by centrifugation, treated with ribonuclease (RNase, Interchim) and fixed in ice-cold 70% ethanol at −20°C overnight. Then, cells were collected and stained with 100 μl of propidium iodide (PI, 50 µg/ml) and RNase (10 µg/ml) solution for 30 min in the dark, followed by cell cycle analysis. To estimate the nuclear and cytoplasmic sizes of cells, Huh-7 SLAMF3−/low and SLAMF3+/high subpopulations cultured for 48 h were fixed in methanol for 10 min and air-dried.
Then they were immersed in Giemsa solution for 45 min, washed with distilled water and air-dried.

Statistical analysis

An independent Student's t-test was used to compare mRNA expression in T and pT samples. Unless otherwise stated, results are expressed as the mean ± SD. Statistical analyses were performed with Prism software (version 4.0, GraphPad Inc., San Diego, CA, USA). The threshold for statistical significance was set to p < 0.05 for all analyses.
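To make the qPCR quantification and statistical comparison concrete, here is a minimal sketch of the standard 2^-ddCt relative-expression calculation with GAPDH as the housekeeping control, followed by the independent Student's t-test used above. The ddCt method itself is an assumption (the text only states that GAPDH was the endogenous control), and all Ct values are illustrative placeholders.

```python
# Minimal sketch of relative qPCR quantification via the 2^-ddCt method,
# followed by the t-test described in the statistics section. The ddCt
# method is an assumption; Ct values below are illustrative placeholders.
import numpy as np
from scipy import stats

# Illustrative raw Ct values for PLK1 and GAPDH in tumor (T) and
# peritumoral (pT) samples.
ct = {
    "PLK1_T":   np.array([24.1, 23.8, 24.5, 23.9]),
    "GAPDH_T":  np.array([18.0, 17.9, 18.2, 18.1]),
    "PLK1_pT":  np.array([27.0, 26.6, 27.3, 26.9]),
    "GAPDH_pT": np.array([18.1, 18.0, 18.2, 18.0]),
}

# dCt normalizes each target to the housekeeping gene.
dct_t = ct["PLK1_T"] - ct["GAPDH_T"]
dct_pt = ct["PLK1_pT"] - ct["GAPDH_pT"]

# Fold change of PLK1 in T relative to the mean of pT: 2^-(dCt - mean dCt_pT).
fold_change = 2.0 ** -(dct_t - dct_pt.mean())
print("PLK1 fold change (T vs pT):", fold_change.round(2))

# Independent Student's t-test on the dCt values, as in the paper.
t_stat, p_val = stats.ttest_ind(dct_t, dct_pt)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```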
Free-form Light Actuators — Fabrication and Control of Actuation in Microscopic Scale

Liquid crystalline elastomers (LCEs) are smart materials capable of reversible shape-change in response to external stimuli, and have attracted researchers' attention in many fields. Most studies have focused on macroscopic LCE structures (films, fibers), and their miniaturization is still in its infancy. Recently developed lithography techniques, e.g., mask exposure and replica molding, only allow for creating 2D structures on LCE thin films. Direct laser writing (DLW) opens access to truly 3D fabrication at the microscopic scale. However, controlling the actuation topology and dynamics at the same length scale remains a challenge. In this paper we report on a method to control the liquid crystal (LC) molecular alignment in LCE microstructures of arbitrary three-dimensional shape. This was made possible by a combination of direct laser writing for both the LCE structures and the micrograting patterns inducing local LC alignment. Several types of grating patterns were used to introduce different LC alignments, which can subsequently be patterned into the LCE structures. This protocol allows one to obtain LCE microstructures with engineered alignments able to perform multiple opto-mechanical actuation, thus being capable of multiple functionalities. Applications can be foreseen in the fields of tunable photonics, micro-robotics, lab-on-chip technology and others.

Introduction

Microactuators are microscopic structures that can transmit external energy for the operation of another mechanism or system. Due to their compact size and remote-control capability, they have been widely used in lab-on-chip systems [1], micro-sensing [2], and micro-robotics [3]. The actuators available to date can perform only simple actions, such as swelling/collapse of a hydrogel matrix [4] or contraction/bending in one direction under an external field [5]. Although recently developed techniques have made it possible to fabricate actuating structures at the microscopic scale [6], it remains a major challenge to control these actuations at the same length scale. This paper reports a method to prepare 3D light-activated microstructures with controllable actuation properties. The technique is based on direct laser writing (DLW), and it is demonstrated in liquid crystalline elastomers (LCEs). LCEs are soft polymers combining the elasticity of elastomers with liquid crystalline orientational order. These materials are capable of large deformations (20-400%) under various types of external stimuli [7]. The advantage of using LCEs for microactuators is the convenience of engineering molecular order into the structures, which allows for controlling the actuation at the microscopic scale [8]. LC monomers are synthesized with an acrylate moiety, enabling single-step photo-polymerization. This property gives access to different types of lithographic techniques for the fabrication of 3D microstructures. Azo-dyes, as photoresponsive molecules, are linked to the polymer network by a co-polymerization process. Such molecules combine their strong light-response ability (trans-to-cis isomerization) with light-induced heating of the system, affording light-controlled deformation. DLW is a technique to obtain polymer structures in a photosensitive material by spatial control of a focused laser beam [9]. DLW enables the creation of 3D free-form structures in LCE without losing the molecular alignment [6]. There are several advantages of DLW in the fabrication of LCE microactuators.
First, the resolution can reach the submicron scale, and the structures are truly 3D [6]. Previously reported LCE microfabrication methods, e.g., masked exposure [10] and replica molding [11], provide a resolution down to around 10 µm and only 2D geometry. Secondly, DLW is a non-contact fabrication process; a suitable solvent can develop high-quality structures while maintaining the designed configuration, whereas the replica molding technique rarely gives sub-micron resolution [12] and its structural quality is hard to control. Thirdly, laser writing

LCE Microstructure Fabrication

1. Measure ~300 mg of the monomer mixture on the balance. See the molecular composition in Table 1.
2. Put the prepared mixture inside a glass bottle, and place it on a hot plate set at 70-80 °C.
3. Wait until all the powder melts, add a magnetic stirrer, and mix the mixture for 1 hr (90-150 rpm).
4. Place the cell on the hot plate at 60 °C.
5. Place a drop (around 20 µl) of the mixture on the edge of the smaller glass slide and wait until the liquid infiltrates into the cell.
6. Transfer the cell to the optical microscope with crossed polarizers and a temperature controller. Keep everything in the dark during transfer, and put an orange filter before the illumination lamp to filter out the UV.
7. Increase the temperature of the cell above 60 °C using the temperature controller on the microscope, then decrease the temperature (2-10 °C per min) to measure the temperature range of the LC phase. A mixture with a different molecular composition has a different LC phase temperature. A good homogeneous nematic LC phase can be recognized by observing the image contrast inversion while rotating the sample every 45° with respect to the polarizer axis.
8. Fix the cell on the sample holder, place it into the DLW system, and set the temperature to reach the LC phase (measured in step 2.7).
9. Find the interface at the lower inner surface and perform the tilt correction using a 100X objective, or use a 10X objective without finding the interface.
10. Write the LCE structures by DLW with a laser power of 4 mW and a scan speed of 60 µm/sec on the lower glass slide using the 100X objective. Otherwise, use a laser power of 14 mW and a scan speed of 60 µm/sec with the 10X objective (the LCE structure is then fabricated throughout the entire sample thickness).
11. Take out the cell, and use a blade to open it by removing the upper glass slide.
12. Immerse the structures in a toluene bath for 5 min.
13. Take out the sample, and dry it in air for 10 min.

Characterization of Light Actuation of LCE Microstructures

Representative Results

Figure 1 shows the optical setup for laser writing. The system consists of a 780 nm fiber laser generating 130 fs pulses at a repetition rate of 100 MHz. The laser beam is reflected into a telescope to adjust the beam profile to the aperture of the microscope objective, where it is focused into the sample. On the microscope, a 3D piezo stage is installed, with a 300 × 300 × 300 µm³ travel range for sample translation, a maximum speed of 100 µm/sec, and 2 nm resolution. Linearly polarized light from a red lamp illuminates the sample from the top, while the image is collected at the bottom by the same objective and reflected by a beam splitter into a CCD camera. Before the camera, another polarizer is used to obtain cross-polarized illumination for enhanced contrast. Within the grating network, the LCE structures become more confined, with much higher resistance to the development in toluene.
A minimum width of the disconnected LCE features has been measured to be ~300 nm, which is consistent with the resolution of DLW without the grating pattern. Another interesting approach for photonic applications could be the realization of large-scale periodic structures. Figure 4 (c, d) shows 2D LCE periodic structures within a micro-grating network. The alignments are well preserved inside these nanostructures, as shown in the inset POM images of Figure 4 (c, d). However, light-induced deformation could not be obtained in these nanostructures. This is because within the IP-L grating the nano-LCE elements are highly confined, and adhesion prevents any visible deformation.

The micromanipulation system is based on a home-built reflection microscope and is shown schematically in Figure 5. A 10X objective is fixed on a lens tube placed on a vertically standing optical breadboard. A 730 nm IR LED light source is used for illumination through a non-polarized beam splitter. The reflected image is collected by the same objective and projected on the camera. A continuous-wave solid-state 532 nm laser is coupled into the objective by a long-pass dichroic mirror (50% transmission and reflection at 567 nm) at an incidence angle of 45°. A power meter measures the transmitted beam after the dichroic mirror for real-time monitoring of the laser power. A loosely focused laser spot of ~150 µm diameter generates a maximum illumination intensity of ~10 W/mm². The laser intensity is controlled by a variable neutral density filter placed in front of the laser. Below the objective, a 3D manual translation stage is used for sample translation. A heating stage installed on the translation stage is used for precise control of the sample temperature in a range from -20 to 120 °C with 0.5 °C accuracy. Two glass tips mounted on two manual translation stages have been placed on the left and right sides, near the sample position. Structure micromanipulation can be realized by carefully moving the tips with the help of the translation stages.

To demonstrate the correlation between alignment and deformation, we fabricated four LCE cylindrical structures with 60 µm diameter and 20 µm height. These cylinders were written on four differently oriented IP-L grating regions (1 µm period). Under light excitation, the dyes inside the LCE absorb light energy and transfer it into the network. The LCE structures heat up and then undergo a phase transition (nematic to isotropic). This phase transition is also assisted by the trans-to-cis isomerization of the dye under the same light stimulus. Thus, the structures contract along the original LC alignment director and expand in the perpendicular direction 7 . Depending on the different local alignments induced by the IP-L gratings, these structures deform along different directions, as shown in Figure 6 (Step 3.1).

This technique enables the creation of compound actuators, which contain more than one type of alignment in one single structure. A 400 × 40 × 20 µm³ LCE stripe with two sections of alignment pattern was fabricated, as schematically shown in Figure 7 (a). Each alignment section contains a 90° twisted orientation in a different direction. The surface with parallel alignment contracts, while the one with perpendicular alignment expands under light illumination. The structure was picked up by the micromanipulation system, and held in the air by a glass tip. Double bending was observed under light illumination (Step 3.3).
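As a quick consistency check on the excitation figures quoted above (~150 µm spot, ~10 W/mm² maximum intensity), the implied laser power at the sample and the effect of the variable ND filter can be estimated as follows. This is a back-of-envelope sketch, not data from the setup.

# Minimal sketch: implied laser power for the quoted spot size and intensity.
import math

def spot_area_mm2(diameter_um):
    r_mm = diameter_um * 1e-3 / 2.0
    return math.pi * r_mm**2

def intensity_w_per_mm2(power_w, diameter_um):
    return power_w / spot_area_mm2(diameter_um)

area = spot_area_mm2(150.0)                 # ~0.0177 mm^2
implied_power = 10.0 * area                 # W needed to reach 10 W/mm^2
print(f"Spot area: {area:.4f} mm^2")
print(f"Implied laser power at sample: {implied_power*1e3:.0f} mW")

# A variable ND filter with optical density OD transmits 10**-OD of the beam
for od in (0.0, 0.5, 1.0):
    p = implied_power * 10**-od
    print(f"OD {od}: {intensity_w_per_mm2(p, 150.0):.2f} W/mm^2")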
A modulated laser beam (using an optical chopper) can induce cyclical deformations. The LCE can respond following the laser modulation frequency (>1 kHz). However, the deformation amplitude decreases with increasing frequency.

Discussion

The IP-L micro-grating orientation technique has been integrated with DLW to orient liquid crystalline monomers. The subsequently laser-written LCE microstructures can thus be patterned with the designed alignment at the micro scale. This technique allows us to create compound LCE elements which can support multiple functionalities. With the outstanding ability to create accurate 3D microstructures and to control their actuation, we expect this technique to be used for creating elastomer-based microscopic robots 14 , and to open up a plethora of new strategies for obtaining light-tunable devices 15 .

There are two critical steps in the preparation. The first is that the two glass slides of the cell should be tightly glued (steps 1.4, 1.5). The UV-curing glue preserves the stability of the cell geometry during development: movement of one glass slide of the cell with respect to the other will result in poorer alignment of the LCE. The second is that the laser writing speed during LCE structure writing should be as high as possible when the 100X objective is chosen. Due to the strong swelling of the LCE during the laser writing process, the swollen structure can move out of the designed position, thus affecting the quality of the fabricated actuators. In some cases, the light-induced deformability is observed to deteriorate in the structures. This could be due to dye bleaching under high illumination intensity. Once the dye molecules have been switched off, the LCE structure behaves as a transparent medium, and the light absorption/light-induced deformation is suppressed. A lower laser power would be safer for the actuation of LCE microstructures.

There are also some disadvantages of this method. First, the whole process takes a relatively long time. In order to maintain the cell configuration, the first IP-L development process (performed by immersing the sample in a solvent bath) is carried out in 2-propanol without opening the cell. The developing time thus depends on the cell size and the thickness of the gap, and usually takes 12-24 hours. Replacing the IP-L grating with other laser-writable patterns, such as laser-induced ablation patterns or laser-induced chemically modified surfaces, could also produce LC alignment and greatly reduce the fabrication time. Second, LCE is a soft material which always suffers from adhesion to the glass substrate. Light-induced deformation is suppressed when the microstructures stick to the surface. Third, the height of the structure is limited by the thickness of the cell and the objective working distance. In the laser writing system, the maximum height is around 100 µm. Recently developed 3D printing techniques could be good candidates for creating light-actuated LCE structures from the mesoscopic to the macroscopic scale. However, maintaining the molecular orientation during polymerization could be the main issue of concern. This technique is unique because it allows one to obtain 3D free-form actuators at the truly micro scale, which is not possible with other existing techniques. LCE microstructures may be patterned with different molecular orientations and functionalities.
Implementing such a technique together with further chemical engineering will make the actuators sensitive to other stimulus sources and will open the way to developing efficient microrobots and soft photonic devices.
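One way to rationalize the earlier observation that the deformation amplitude decreases with increasing chopper frequency is a first-order (low-pass) photothermal response. The sketch below is a minimal model under an assumed thermal time constant, not a fit to the reported data.

# Minimal sketch, not from the paper: first-order low-pass model of the
# photothermal actuation amplitude vs. modulation frequency.
# The thermal time constant tau is an illustrative assumption.
import math

def relative_amplitude(f_hz, tau_s):
    """Steady-state amplitude of a first-order system driven at f (normalized)."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_hz * tau_s) ** 2)

tau = 1e-3  # assumed ~1 ms thermal relaxation for a ~10 um LCE element
for f in (1, 10, 100, 1000):
    print(f"{f:>5} Hz: relative amplitude = {relative_amplitude(f, tau):.3f}")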
A Multi-Platform Flow Device for Microbial (Co-) Cultivation and Microscopic Analysis

Novel microbial cultivation platforms are of increasing interest to researchers in academia and industry. The development of materials with specialized chemical and geometric properties has opened up new possibilities in the study of previously unculturable microorganisms and has facilitated the design of elegant, high-throughput experimental set-ups. Within the context of the international Genetically Engineered Machine (iGEM) competition, we set out to design, manufacture, and implement a flow device that can accommodate multiple growth platforms, that is, a silicon nitride based microsieve and a porous aluminium oxide based microdish. It provides control over (co-)culturing conditions similar to a chemostat, while allowing organisms to be observed microscopically. The device was designed to be affordable, reusable, and above all, versatile. To test its functionality and general utility, we performed multiple experiments with Escherichia coli cells harboring synthetic gene circuits and were able to quantitatively study emerging expression dynamics in real-time via fluorescence microscopy. Furthermore, we demonstrated that the device provides a unique environment for the cultivation of nematodes, suggesting that the device could also prove useful in microscopy studies of multicellular microorganisms.

Introduction

In recent years there have been numerous attempts to develop novel platforms for growing previously uncultured microbes, and to expand the scope and precision of the study of bacterial model organisms. While the vast majority of bacterial and archaeal species remain uncultivable, recent studies have shown that an increasing number of environmental isolates can be grown in artificial environments that mimic the physical and chemical parameters of the organisms' natural habitat [1,2,3]. Conversely, quantitative, systems-oriented investigations of highly engineered model organisms containing synthetic regulatory networks benefit from experimental set-ups which allow precise control to be exerted over these same parameters, while physiological responses are monitored in real time without disrupting the cells [4,5,6]. These platforms range from simple do-it-yourself approaches to more high-tech methods [7,8,9]. While some of these approaches, which include microfluidic chips [10], porous metallic membranes [2], and gel microdroplets [11], among others, vary greatly in terms of scale, material, and level of sophistication, they share a number of common characteristics. In all cases the microorganisms are physically trapped, and thereby constrained in terms of mobility and population size. Nutrients are either provided through direct channels, or via diffusion across the trapping barrier. Optical transparency of the materials is also common, as it facilitates microscopic observations. Elegant microfluidic cultivation devices may not only reduce costs, but can also increase the accuracy and precision of measurements, enabling new experimental approaches and testing novel hypotheses [12]. Still, microfluidic devices frequently require sophisticated and expensive peripheral equipment, potentially complicating the simplest of analyses. Furthermore, microfluidic devices are vulnerable to fouling and contamination when used to deliver culture medium, which reduces the robustness of such methods and their usable lifetime.
Within the context of iGEM, the international Genetically Engineered Machine competition [13], a team of undergraduate students from Wageningen University re-designed a genetic circuit for synchronized oscillations, inspired by previous studies in Escherichia coli [4,6]. To study the dynamics of synchronized oscillatory gene expression, a simple, and by all means affordable, bacterial cultivation platform was required, in which quantitative fluorescence measurements on small bacterial populations could be performed. Previous work on synchronized bacterial oscillators has demonstrated the importance of providing growth conditions in which high-density, spatially constrained bacterial populations can be nutritionally sustained over prolonged periods of time [4]. Attempts to observe synchronized oscillatory behavior using conventional experimental set-ups, such as microtiter plates, proved irreproducible. In order to provide the necessary conditions, we envisioned a simple growth chamber in which small microbial populations could be immobilized and which could be operated in conjunction with a fluorescence microscope taking measurements in real time, with medium flow supplied by a standard syringe pump. To this end, we designed, manufactured and tested a simple and reusable plastic flow chamber with a custom socket that can accommodate two different cultivation platforms. One of the cultivation platforms in this flow chamber is a silicon nitride microsieve that is available with a range of different pore sizes, which has its origins in microfiltration applications (http://www.aquamarijn.nl/). The second platform is a microbial culture chip (microdish) made of porous aluminium oxide (PAO) (http://www.microdish.nl/) [2]. The flow device enabled us to monitor the activity of fluorescent reporter genes from microbes under the microscope, while providing control over the supply of growth media without mechanically disrupting the cells.

Materials and Methods

Design and construction of the multi-platform flow device

The device is made from polymethylmethacrylate (Perspex), a transparent, durable, autoclavable, and inexpensive material (Figure 1). The in- and outlets are threaded, which allows plugs and tubing to be screwed in securely. The material is resistant to ethanol and chlorine-based cleaning solutions, and can be cleaned with these to sterilize the device between experiments (http://www.gehr.de/dyndata/PMMA_engl.pdf). The device is not as susceptible to biofouling as microfluidic chips due to its internal dimensions, which allow cleaning solutions to remove any biological residue. We did not observe any residual contamination after repeated experiments (data not shown). The distance between the socket and the top of the chamber is ~1 mm. This proximity allows imaging with a 20x objective at a total magnification of 200x using an Olympus BX41 microscope, which allows the detection of fluorescence emitted by small agglomerations of cells. Detection of single bacterial cells may also be possible using objectives with higher magnification, as long as the minimum focal depth is no less than 1 mm, such as the 40x and 60x water-immersion objectives of the Olympus LUMPLFLN-W series. The dimensions of the chamber were chosen such that both growth platforms could be accommodated. The resulting total volume of the main chamber is ~0.5 ml. The device can be sealed with a thin glass slide using Bison mastic silicone kit, which is removable and enables the reuse of the device after an experiment.
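Given the ~0.5 ml chamber volume and the syringe-pump-driven medium supply described above, the medium exchange time is straightforward to estimate. The flow rates in this sketch are assumed for illustration and are not values from the paper.

# Minimal sketch: medium residence time in the ~0.5 ml main chamber for a
# range of assumed syringe-pump flow rates.
chamber_ul = 500.0  # ~0.5 ml main chamber volume

for rate_ul_per_min in (10.0, 50.0, 200.0):
    residence_min = chamber_ul / rate_ul_per_min
    print(f"{rate_ul_per_min:>6.0f} ul/min -> one volume exchange every "
          f"{residence_min:.1f} min")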
The flow device was used with a ProSense syringe pump NE1000X2 for the flow of media or buffers, which is connected to the device via transparent silicone tubing. The flow device was designed using Google SketchUp (http://sketchup.google.com), and manufactured with a standard computer-controlled workbench drill in the fine mechanical workshop of Wageningen University. The sketch files of the device are available for users to customize and manufacture their own flow device (File S1). The cost of a single, reusable device, excluding a microsieve or microdish, can be approximated to be ~150 Euros, the bulk of which can be attributed to manual labor costs.

Microsieve cultivation

Microsieves are inorganic membranes made of a thin layer of silicon-rich silicon nitride [14]. The microsieve comes in a wide range of variants with respect to well-defined pore sizes (0.2-0.45 µm), thicknesses (0.1-1 µm) and different levels of porosity. The silicon nitride has high thermal stability, chemical inertness and mechanical strength. Together this allows high-flux performance with low trans-membrane pressure and size-selectivity [15,16,17,18]. To retain fluorescent Escherichia coli cells on the microsieve (pore size 0.45 µm), the cells were inoculated through the top inlet channel while applying negative pressure below the sieve to partly remove the liquid and retain the bacteria on the sieve. This was done manually with the use of a syringe attached via plastic tubing to one of the lower channels.

Microdish cultivation

The material (PAO) used for the microdish is a broadly applicable and modifiable matrix enabling increased cultivation efficiency, for example of pathogens [19,20], and is available with and without circular microscopic subdivisions into micron-scale wells, which can serve as growth chambers. The microdish comes in a wide range of variants with respect to the presence and absence of subcompartments, subcompartment dimensions, and chemical modifications of the PAO, but also with respect to the thickness, pore size and porosity of the cultivation chip itself. Together this allows for a range of dynamics with respect to the diffusion rate of small molecules and the ability to pass or exclude macromolecules. A compartmentalized microdish cultivation chip, containing 40 µm deep wells with a diameter of 180 µm, was tested in the flow device. An overnight-grown E. coli culture containing a synthetic gene construct capable of producing synchronized oscillatory gene expression was resuspended in phosphate buffered saline (PBS). The resuspended culture was used to inoculate the device through the top inlet channel. LB medium was supplied via the lower inlet channel, thus restricting the growth of bacteria to the wells, where nutrients could be obtained through the porous material at the base. Several methods that allow wells to be individually inoculated have been described [21,22,23]. However, these specialized inoculation techniques were not necessary for the experiments performed during the validation of our device. Using an Olympus BX41 microscope equipped with a GFP filter, variations in fluorescent light emission were measured. The light source was a HBO 103W/2 Osram mercury lamp. Images were captured by a CCD camera with an exposure time of 200 ms. Cells were illuminated for 2 seconds for every measurement. The shutter was robotically controlled to ensure consistent illumination at each time point.
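A rough sense of how quickly small nutrients reach the wells through the porous base described above can be obtained from the one-dimensional diffusion timescale t ≈ L²/2D. Both the membrane thickness and the diffusivity below are illustrative assumptions, not specifications of the PAO chip.

# Minimal sketch: order-of-magnitude diffusion time for a small nutrient
# across the porous base of a well. Assumed illustrative values only.
L = 60e-6        # assumed PAO membrane thickness, m
D = 6.7e-10      # glucose diffusivity in water at ~25 C, m^2/s
t = L**2 / (2 * D)
print(f"Characteristic diffusion time: {t:.2f} s")  # ~2.7 s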
Data analysis and processing were done with the software ImageJ 1.45 (http://rsbweb.nih.gov/ij/index.html). Additionally, experiments to test the applicability of co-cultivation in the microsieve and microdish were performed. For this, cultures of inducer and inducible cells were grown separately overnight. The inducible cells were inoculated in the top compartment of the flow chamber containing a microsieve, followed shortly thereafter by the inoculation of inducer cells in the lower compartment. For the same experiment in the microdish, the inducible cells were resuspended in PBS before inoculating via the top inlet channel. LB medium was supplied via the lower inlet channel to enable the cells to grow in the wells. After letting the inducible cells grow in the dish for one night, the inducer cells were injected into the lower compartment. In both experimental set-ups, GFP measurements of the top cells were taken at short intervals (5-10 minutes) directly after inoculation of the inducer cells.

Organisms and constructs

The bacteria used for the experiments were E. coli TOP10 cells (Invitrogen) with a pSB1A2 plasmid backbone encoding ampicillin resistance (http://partsregistry.org/Part:pSB1A2) and containing different BioBrick parts as they can be found in the Registry of Standard Biological Parts (http://partsregistry.org). BioBrick standard biological parts are DNA sequences that adhere to defined standards to facilitate the modular assembly of complex genetic constructs from simple subcomponents. The principal feature of BioBrick parts is the presence of unique restriction sites flanking the functional DNA elements [24]. The parts used are depicted in Table 1. The liquid cultures were made from single colonies from cell cultures grown on LB agar plates with ampicillin (50 µg/mL). These were grown overnight at 37 °C in 10 ml of LB medium containing ampicillin. The cultures were either directly inoculated in the device or centrifuged and resuspended in PBS, depending on the experimental set-up. To assess whether the flow device could be used for other organisms, an engineered Caenorhabditis elegans strain PD4792 expressing green fluorescent protein (GFP) exclusively in the oesophagus (www.wormbase.org) was injected into the top chamber. The fluorescence allows for easier detection of the nematodes, but the animals are also visible with white light from above. In an additional experiment, the wells were first seeded with GFP-producing E. coli, serving as a food source for the animals. The nematodes were placed in the upper compartment the next day and the liquid over the wells was removed. Nematodes that remained trapped in the wells were left in the dish for another night.

Retention of bacteria on a microsieve

The device containing a microsieve was inoculated with fluorescent E. coli cells. The bacteria were retained predominantly on the diagonal permeable areas, where small clusters of cells could be discerned. Even after applying a gentle flow over the retained cells, potentially allowing exposure of the cells to other compounds in the context of biosensing, the cells remained trapped on the sieve (Figure 2).

Synchronized oscillatory gene expression in a microdish

Certain intercellular signalling networks consisting of regulatory feedback loops can result in synchronized oscillatory gene expression. However, the emergence of such phenomena is generally not very robust, and often contingent on specific environmental parameters.
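Before turning to the measured dynamics, the kind of trace processing performed in ImageJ above (background subtraction and normalization of mean ROI intensities) can be sketched equivalently in Python. The intensity values below are hypothetical and serve only to show the shape of the processing.

# Minimal sketch of background subtraction and normalization of a
# mean-intensity fluorescence trace (the paper used ImageJ; this NumPy
# version is illustrative only).
import numpy as np

def normalize_trace(roi_means, bg_means):
    """Background-subtract and rescale a mean-intensity trace to [0, 1]."""
    trace = np.asarray(roi_means, float) - np.asarray(bg_means, float)
    trace -= trace.min()
    peak = trace.max()
    return trace / peak if peak > 0 else trace

# Hypothetical mean ROI and background intensities at successive time points
roi = [120, 180, 260, 190, 130, 175, 255, 185]
bg = [100] * len(roi)
print(normalize_trace(roi, bg))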
Temporal variations in GFP expression produced by E. coli microcolonies harboring such a signalling construct were detected using fluorescence microscopy. The averaged and normalized fluorescence emission intensities plotted against time clearly depict synchronized oscillatory behavior with a period of approximately 1 h (Figure 3). Since the aim of this experiment was to detect relative changes in fluorescence intensities over time in order to confirm the oscillatory properties of the construct, rather than to measure absolute GFP expression values, it was not necessary to normalize the measured intensities against the cell density.

Co-cultivation experiments

In nature, most microorganisms grow in co-culture with other microbes, and interactions mediated by diffusible signalling compounds are critical in this process and often essential for growth [1,3]. Relatively simple co-cultivation systems based around porous membranes have led to major improvements in the cultivation of otherwise intractable species. In the laboratory, co-cultivation experiments are comparatively rare and usually confined to defined pairs of microorganisms. The separation of the flow chamber into two compartments by either of the two cultivation platforms allows the co-cultivation of different organisms separated by the permeable platform, thereby enabling an exchange of small molecules, for example in codependent cross-feeding experiments [25]. Alternatively, since PAO also allows for eukaryotic cell culturing [26], this flow device could allow for experiments on growth promotion, infectivity or toxicity. To test the applicability of the set-up for co-cultivation, we assessed whether a signalling molecule produced by bacteria in the lower compartment would induce gene expression in bacteria in the top compartment. The signalling molecule produced by the bacteria in the lower compartment (inducer cells) should easily diffuse through the permeable platform and be bound by a receptor present in the bacteria in the top compartment (inducible cells). The resulting complex in turn activates gene expression of a detectable reporter, in this case GFP. The experiment was carried out in flow devices equipped with either a microsieve or a microdish. Initial measurements of the inducible cells in the top compartment showed hardly detectable amounts of basally expressed GFP. After addition of the inducer culture in the lower compartment, a rapid increase of GFP expression was observed, both in the microsieve (Figure 4) and the microdish (Figure 5). The observed increase was too rapid to be the result of cell growth alone. This can be concluded from the observation that cells constitutively expressing GFP do not display increases in fluorescence of a comparable rate and magnitude (data not shown). In order to validate the barrier function of the two platforms, the suspension of inducer cells was spiked with E. coli cells expressing RFP. In addition to the GFP measurements, the top compartment was monitored for the presence of RFP. Leakage was not observed, indicating that the edges of the respective growth platforms were sealed properly.
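Returning to the oscillation data above, the ~1 h period can be recovered from a normalized trace with a simple autocorrelation analysis. The sketch below runs on a synthetic 60 min oscillation with an assumed 5 min sampling interval, since the raw measurement data are not reproduced here.

# Minimal sketch: estimating the oscillation period of a fluorescence trace
# via autocorrelation (sampling interval and data are assumed/synthetic).
import numpy as np

def estimate_period(trace, dt_min):
    x = np.asarray(trace, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # the first local maximum after lag 0 approximates the period
    for lag in range(1, len(ac) - 1):
        if ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag * dt_min
    return None

# Synthetic trace with a 60 min period sampled every 5 min
t = np.arange(0, 300, 5.0)
trace = 0.5 + 0.5 * np.sin(2 * np.pi * t / 60.0)
print(estimate_period(trace, 5.0), "min")  # ~60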
The system presented here would allow the gap between nature and the laboratory to be bridged, with thousands of different microorganisms (in discrete microwells) co-cultivated with a common partner organism (beneath the microdish). This is a major increase in possible combinations, which may be improved further with higher well densities, allowing genuinely high-throughput screening for microbial interactions.

Using the flow device with nematodes

The device could be useful for the cultivation of organisms other than bacteria, for example Caenorhabditis elegans, a nematode approximately 1 mm in length and 80 µm in width. The nematodes inoculated in the upper compartment of a flow device containing a microdish were easily discernible (Figure 6), and capable of moving over the wells in the dish when the top chamber of the device was filled with liquid. Cultivation experiments on PAO indicated that the full developmental cycle, from egg to adult, could occur on this material (data not shown). Pictures taken one day after inoculation revealed that the wells containing worms were almost completely devoid of bacteria, while wells without the animals were still filled with the fluorescent E. coli cells (Figure 6). This highlights a potential use of the flow device with multicellular organisms. Approaches to the analysis of nematodes using microfluidics have used relatively complicated platforms requiring multistep fabrication techniques. Furthermore, these approaches have been limited in throughput, resulting from the need to supply individual chambers with input and output channels [10,27,28]. Given the importance of nematodes in drug screening, e.g., for drug targets in the central nervous system, robust high-throughput methods are desirable, and the method presented here provides a route to one, particularly given the potential to add drugs either from beneath the culture chip, by non-contact printing into individual wells, or by co-culture with bacterial strains expressing siRNAs.

Conclusions

The recent development of novel microbial cultivation platforms has provided researchers with a wide variety of options for isolating and studying microorganisms. It is important to be aware of the advantages and drawbacks of the available platforms. Factors such as population size, geometry, control over medium composition, parallelization and multiplexing capacity need to be weighed against the cost of a given platform and the expertise and peripheral equipment required to operate it. The most basic implementation of what can be considered a novel cultivation platform is a simple matrix, which combines cell adhesion with nutrient diffusion. Such set-ups are inexpensive, and easy to implement as they do not require sophisticated peripheral equipment. On the other end of the spectrum, there are nanoscale microbioreactor chambers, which allow individual cells to be subjected to precisely controlled chemical environments [7]. The microbial flow device described in this paper is somewhere in between. It is well suited for fluorescence microscopy studies of small, high-density cellular populations over extended time periods, as it provides control over the composition of liquid media without disrupting the cells. We have demonstrated its utility in the study of intercellular signalling, dynamic gene regulatory networks, and nematode cultivation. Other potential applications of this device include microbial biosensing, toxicity measurements and chemotaxis studies.
Furthermore, since the platforms used in this device allow for the cultivation of eukaryotic cells, it could be of use in the real-time monitoring of co-cultivation and infection experiments. We believe that this device has the potential to make advanced cultivation materials accessible to a wider range of users. While we do not anticipate that this platform will replace existing technologies, we hope that our design, and modifications thereof, will prove useful where versatility, cost, and simplicity of use need to be emphasized over experimental precision, such as in the DIY biology community and in iGEM projects.
Sporadic Creutzfeldt-Jakob disease: a case report of long disease duration and difficulties in confirming the diagnosis, with a short literature review

Creutzfeldt-Jakob disease (CJD) is a spongiform encephalopathy with a fatal outcome, caused by the accumulation of pathological prion protein in the central nervous system (CNS). CJD is classified into four types: sporadic (sCJD), familial or genetic (fCJD), iatrogenic (iCJD) and the variant form (vCJD). The recognition of CJD is based on the clinical presentation, neuroimaging, electroencephalography and biochemical tests. Hyperintense signals in the basal ganglia on brain magnetic resonance imaging (MRI), periodic sharp and slow wave complexes (PSWCs) in the electroencephalogram, as well as the presence of neuronal proteins such as protein 14-3-3 in the cerebrospinal fluid (CSF), support the diagnosis. The definite diagnosis of CJD still demands neuropathological confirmation. We report the case of a 56-year-old woman with rapidly progressive cognitive impairment, motor dysfunction and fulminant neurological deterioration to akinetic mutism during five weeks of hospitalisation. The probable diagnosis of sCJD was based on the medical history and characteristic findings on MRI. The positive result of the real-time quaking-induced conversion (RT-QuIC) test and the presence of protein 14-3-3 were obtained post-mortem, and the definite diagnosis was confirmed by neuropathological examination. In this paper we would like to emphasize the difficulties in reaching the diagnosis and the need for a series of diagnostic examinations at different points in time to obtain confirmatory results.

A 56-year-old woman, a butchery worker, was admitted to the neurology ward after a transient loss of consciousness, with a history of progressive cognitive impairment and motor dysfunction. The initial symptoms, including behavioural changes, started about one year before admission. She was periodically hyperactive and behaved inappropriately to the situation. The patient's family also observed motor symptoms such as involuntary movements of her upper limbs (later diagnosed as myoclonic movements), objects falling from her hands, and gait unsteadiness causing falls. Psychomotor retardation occurred, and her speech became dysarthric. She became apathetic and withdrew from social life. She complained of headaches, neck pain, paraesthesia in the hands (numbness and tingling) and fluctuations of arterial blood pressure. During the first year of symptoms she underwent magnetic resonance imaging (MRI) of the brain and the cervical spine. The brain MRI revealed a meningioma of the right fronto-parietal region, which was qualified for Gamma Knife radiosurgery. The cervical spine MRI showed C5/C6 and C6/C7 disc herniation without myelopathy, which was also intended to be treated surgically. Significant deterioration of her neurological condition occurred about 3-4 weeks before the hospitalization. Because of the observed dystonic movements of the upper limbs, the operations were postponed. In her past medical history there was an episode of bilateral peroneal nerve palsy four years earlier, supposedly related to working in a squatting position. On admission to the ward the patient was conscious; her autopsychic and allopsychic orientation on brief mental status examination was preserved at a basic level. Her general somatic state, including circulatory and respiratory functions, was within the normal range.
The neurological examination revealed four-limb rigidity with symmetrical hyperreflexia and dystonic movements of her upper limbs, more accentuated on the right side, superficial sensory disturbances on the left side and deep sensory disturbances in the lower extremities. The patient was unable to move independently and was wheelchair-bound. She complained of poor eyesight and nonspecific paraesthesia. The neuropsychological examination, which was only feasible to conduct in the initial phase of her hospitalization, revealed unequivocal deterioration of cognitive functions (slowing of thought and speech, delayed responses to verbal tasks and memory impairment). Disturbances of superficial and deep sensation manifested in variable fashion over the following days. Later her visual problems severely worsened, and she developed cerebellar symptoms (four-limb ataxia), extrapyramidal dysfunction (asymmetric rigidity of all four extremities, more pronounced in the upper extremities and on the left side), hyperreflexia, and subtle left-sided hemiplegia with a Babinski sign. During the four weeks of hospitalization her general condition worsened significantly, with rapid deterioration of cognitive functions and impaired consciousness. We observed myoclonus affecting the upper extremities, aggravated dystonic movements of the trunk, upper and lower limbs, as well as tonic epileptic seizures. Approximately two weeks before her death she developed akinetic mutism, and she passed away after six weeks of hospitalization. The initial brain MRI examination was equivocal (Fig. 1A-C). It revealed hyperintensity in the region of the right caudate and the internal capsule on DWI, with no contrast enhancement and no ischemic changes. At first glance, these findings were suspected to be motor artefacts. The laboratory tests, including blood tests, glucose levels, renal, hepatic and thyroid function tests, electrolytes, C-reactive protein (CRP), and vitamin B12 levels, were normal. Tests for human immunodeficiency virus (HIV), Treponema pallidum (IgM and IgG), Borrelia (IgM and IgG), anti-thyroid peroxidase, and anti-thyroglobulin were negative. The results of cerebrospinal fluid analysis were unremarkable (Table I). The polymerase chain reaction (PCR) assay of CSF for the detection of encephalitis [E. coli, H. influenzae, L. monocytogenes, N. meningitidis, S. agalactiae, S. pneumoniae, cytomegalovirus (CMV), enterovirus, herpes simplex virus 1 (HSV 1), HSV 2, human herpesvirus 6 (HHV6), human parechovirus (HPeV), varicella zoster virus (VZV), and C. neoformans/gattii] was negative. The results of serum protein electrophoresis did not reveal any abnormalities. Apart from serum IgM-type anti-GM2 antibodies, no other paraneoplastic antibodies (anti-Hu, anti-Yo, anti-Ri, anti-PNMA2, anti-CV2, anti-amphiphysin, anti-GM1, anti-GM3, anti-GD1a, anti-GD1b, anti-GT1b, anti-GQ1b) were detected. Apart from the right peroneal nerve neuropathy, there was no evidence of polyneuropathy on electromyography. Serum copper and ceruloplasmin levels, as well as the urinary copper level, were within the normal range. Thus, CNS inflammation and metabolic disorders were excluded. The test for protein 14-3-3 in the CSF, using the Western blot method, was negative. During hospitalisation, fluctuations of blood pressure and tachycardia were observed. The EEG showed high-voltage theta and delta waves over the right hemisphere (the location of the suspected meningioma) with periodic epileptic discharges.
She was treated with antiepileptic drugs (diazepam, valproic acid, thiopental), and subsequent EEG tests revealed progression towards diffuse slow waves without epileptic discharges. Thus, the EEG pattern did not allow the recognition or indication of any specific brain pathology. No response to intravenous steroid therapy or a course of plasmapheresis was observed. At different stages of the disease, the differential diagnosis included the following conditions: epileptic seizures due to meningioma, posterior reversible encephalopathy syndrome and autoimmune encephalitis. Ultimately, considering the clinical picture and the test results, Creutzfeldt-Jakob disease was considered, especially since the alternative diagnoses had become improbable or had been excluded, particularly in view of the evolving MRI changes. Brain MRI in the 4th week of hospitalisation showed symmetrical hyperintensity of the basal ganglia (caudate nucleus, putamen) on PD/T2 and FLAIR, seen even better in the dorsomedial thalamic nuclei, giving an appearance characteristic of the Heidenhain variant of CJD (the so-called "hockey stick") (Figs. 2, 3A-C). Similar findings were noted in the occipital cortex (Fig. 3C). As a result, just before her death she fulfilled the NCJDRSU criteria for probable CJD. To definitively confirm CJD, a brain autopsy was performed in the Department of Neuropathology, Collegium Medicum Jagiellonian University (description given below). Moreover, the CSF sample (taken before the demise of the patient) was sent again for 14-3-3 protein analysis (by ELISA) and RT-QuIC testing to the National Reference Centre for the surveillance of transmissible spongiform encephalopathies in Gottingen (Germany). The autopsy and the aforementioned tests turned out to be positive, thus making the diagnosis of CJD definite. The neuropathological examination revealed moderate and symmetrical general (cortical and subcortical) cerebral atrophy. The microscopic evaluation (Fig. 4A-C) disclosed spongiform changes in the grey matter, diffuse astrocytosis and microgliosis, as well as neuronal loss in the brain hemispheres (neocortex, basal ganglia and thalamus). In the regions of the basal ganglia, thalamus, and occipital and parietal cortex, the so-called "synaptic-like" pattern of PrP accumulation was found. "Kuru-like" plaques were not identified. The distribution and extent of these neuropathological changes, as well as the type of prion immunopositivity, were suggestive of the sporadic variant of CJD, most probably with the MM1 subtype of the prion.

Discussion

Creutzfeldt-Jakob disease is a very rare and rapidly progressive neurodegenerative disorder with a fatal outcome. It is caused by an abnormal form of the prion protein. The normal form of the prion protein (PrPC), which is expressed at the highest level in neurones within the brain, is converted into an abnormal isoform (designated PrPSc). The defective prion protein accumulates in the CNS and results in neurodegeneration. The natural polymorphism of the human prion protein gene (PRNP) at codon 129, which encodes either methionine (M) or valine (V), determines the susceptibility to sporadic and acquired forms of prion disease as well as their clinicopathological characteristics. On the basis of the codon 129 polymorphism and the physicochemical properties of type 1 PrPSc or type 2 PrPSc, sCJD can be classified into six subtypes: MM1, MM2, MV1, MV2, VV1 and VV2 [6].
The clinical presentation as well as the molecular and neuropathological pictures reveal the heterogeneity of CJD phenotypes; however, rapidly progressive dementia is their common denominator. Diagnosing CJD during life poses a big challenge due to overlapping clinical syndromes. The progressive decline in the neurological condition is followed by focal neurological signs, with myoclonus being the most typical [14]. The commonly reported neurological abnormalities include visual changes leading to cortical blindness, ataxia, pyramidal and extrapyramidal features and, usually, akinetic mutism in the last stages of the disease [7]. Atypical findings such as behavioural abnormalities (anxiety, irritability, social withdrawal), subtle memory changes, judgment difficulties, and other psychiatric symptoms are frequently observed in the early stages of the disease but can be easily overlooked [12]. The estimated incidence of human prion disease is about 1-2 persons per million worldwide annually [2]. The vast majority of human prion diseases are sporadic (sCJD), accounting for 85% of CJD cases. The initial symptoms of sCJD occur in the 7th decade of life, and the median life expectancy is 5 months, with 90% of patients passing away within 1 year. Approximately 10% of all CJD cases are familial/genetic. Familial CJD is caused by diverse mutations in the PRNP gene. The disease is transmitted in an autosomal dominant pattern with high penetrance and an incidence increasing with age. The inherited forms are classified into three categories: Gerstmann-Straussler-Scheinker syndrome, fatal familial insomnia (FFI) and fCJD [7]. The acquired forms of CJD (including iCJD and vCJD) occur in 2-5% of cases [12]. Iatrogenic CJD is a consequence of transmission of abnormal prion protein during medical procedures. Over the last few decades it has been described after intracerebral electrode implantation, corneal transplantation, dura mater grafts, and growth hormone injections. The clinical picture of iatrogenic CJD, as well as its MRI and EEG findings, is similar to sCJD [7]. Variant CJD was first described in 1996 and results from exposure (including ingestion) to products of animals suffering from bovine spongiform encephalopathy (BSE). Its clinical presentation, neuroimaging and pathological findings differ from other variants of CJD. Most cases of vCJD presented with psychiatric symptoms early in the course, with ataxia beginning within approximately 6 months. Its onset was reported in younger subjects than sCJD or fCJD. Research showed that the median age of onset was 27 years (range: 12-74) in the UK and 35 years in France (range: 18-57), and the median duration of the disease was 14 months for both countries [2]. Studies revealed the presence of the pathological form of prion protein in the peripheral tissues of these patients (tonsils, lymph nodes, appendix, spleen), which was suggested for diagnostic use [14]. The first diagnostic criteria for CJD were formulated by the World Health Organization (WHO) in 1998 (Table II), with the diagnosis relying on clinical examination, EEG, and CSF findings [7]. The present internationally recognised criteria, published in 2017 by the National CJD Research and Surveillance Unit (NCJDRSU) in Edinburgh, have been expanded to include brain MRI findings and modern laboratory tests. On the basis of the NCJDRSU criteria, sCJD (Table III) can be classified as definite, probable or possible [9].
The EEG was found to be the first significant, non-invasive test pointing to the diagnosis of CJD. Its importance has been emphasized and included in the first published diagnostic criteria [4]. The characteristic pattern of periodic sharp and slow wave complexes (PSWCs) was reported in two-thirds of cases of sCJD. The typical appearance for sCJD is that of a 1/second periodic triphasic sharp wave complex. The simple sharp waves can be classic triphasic, biphasic, or mixed [7]. This characteristic pattern was observed in some patients as early as three weeks after onset of the disease. However, in the majority of cases it occurs about twelve weeks after onset, and in a few isolated cases even later. In contrast to the familial and sporadic types, the EEG study is not usually informative in iatrogenic human growth hormone cases, and it is negative for any periodic sharp wave forms in patients with vCJD. In our case, the EEG revealed high-voltage theta and delta waves over the right hemisphere with periodic epileptic discharges; however, the pattern was not typical for CJD. Moreover, these epileptic discharges were observed in the region of the suspected meningioma, which could potentially explain their presence. Later on, the EEG showed non-specific slow-wave abnormalities. The results of CSF analysis in standard investigations (cell count, barrier function and inflammatory reactions) in patients with CJD are generally within the normal range [14]. Slightly elevated protein (0.5-1.0 g/l) was noted in one-third of cases. The presence of oligoclonal bands in CSF was very rarely observed [4,14]. Several studies have reported biomarkers that can be useful in the diagnosis of CJD, including protein 14-3-3, tau, S100b, neuron-specific enolase and phosphorylated tau. The majority of available data relate to the 14-3-3 and tau proteins [4]. The detection of 14-3-3 protein in the CSF using the Western blot method proved to be sensitive and specific. Protein 14-3-3 is a marker of neuronal destruction and can be present in the early stages of the clinical disease [4]. Its sensitivity is estimated at 92-96% in sCJD, in contrast to the lower sensitivity of 50% in vCJD [14]. Protein 14-3-3 in fCJD patients bearing codon 200 and codon 210 mutations has a similar diagnostic value as in sCJD. In fatal familial insomnia, protein 14-3-3 is consistently absent from the CSF, and it is uncommon in GSS. The sensitivity is also low in patients with iCJD (60%) but may increase in later stages of the disease [14]. Neurological conditions that cause neuronal loss may also give positive 14-3-3 protein results in CSF, but their clinical presentation should be differentiated from CJD. The presence of the 14-3-3 protein in CSF has been found in the following diseases: herpes simplex and other viral encephalitides, recent stroke, subarachnoid haemorrhage, hypoxic brain damage, metabolic encephalopathy after barbiturate intoxication, glioblastoma, carcinomatous meningitis from small-cell lung cancer, paraneoplastic encephalopathy, and corticobasal degeneration [4]. The RT-QuIC assay of CSF is a modern laboratory technique that enables the definitive diagnosis of CJD. It provides detection of the abnormal form of prion protein (PrPSc) through in vitro amplification technology in CSF samples [1,7]. The test has a sensitivity estimated at 80-90%, but in contrast to the test for 14-3-3 protein, its specificity is 100%. No difference in detection among the genetic subtypes has been reported [7].
Studies have demonstrated a significant role of MRI in the diagnostic process of CJD. Cerebral cortical hyperintensities, as well as high signal in the caudate nucleus and putamen on fluid-attenuated inversion recovery (FLAIR) or diffusion-weighted imaging (DWI) MRI, have been reported as characteristic lesion features of sporadic Creutzfeldt-Jakob disease [13]. The typical signal enhancements in vCJD were reported in the posterior thalamus and termed the "pulvinar sign" [14]. Knowledge of the disease stage is key when interpreting neuroimaging. It is still unclear when the typical findings appear; however, there are some suggestions that characteristic MRI patterns may be present in early stages [13]. DWI was found to be superior to any other MRI sequence in the early stages of CJD. It was reported that, in the course of the condition, the hyperintensity decreases in later stages and cortical atrophy is then observed [14]. In our patient, the MRI turned out to be characteristic of CJD in the late phase of the disease. The first brain MRI was performed two months before admission to the ward, and apart from the suspected meningioma in the right fronto-parietal region, there were no other abnormalities. During the hospitalisation she underwent two brain MRIs. The initial one was difficult to analyse due to motor artefacts, but some hyperintensity in the region of the right caudate and the internal capsule on DWI was observed. The second MRI, performed two weeks later, revealed lesions characteristic of the Heidenhain variant of CJD. The definite diagnosis of sCJD requires neuropathological examination. The common histopathological changes confined to the CNS include spongiform vacuolation throughout the cerebral grey matter, reactive proliferation of astrocytes and microglia, neuronal loss, and PrP deposition within the brain. However, the mentioned features are not specific for prion disease. Their occurrence in defined neuroanatomical regions of the brain is of significant importance in the differential diagnosis of the disease [6]. In our case, the diagnosis of sCJD was considered early in the course of the disease, but the initial results of the conducted examinations were not diagnostic. The first brain MRI was difficult to analyse due to numerous motor artefacts; however, the follow-up neuroimaging revealed the lesions characteristic of CJD. The EEG was not specific for CJD. We obtained a negative 14-3-3 protein result in the first CSF analysis, but the follow-up analysis was of significant importance and confirmed the diagnosis together with the RT-QuIC test. We speculated on the reason for the discrepancy between the results of the two tests detecting 14-3-3 protein. The tests differed in the method used, the location of the laboratory, and the time point of testing. The second sample was tested using the ELISA technique instead of the Western blot used for the first test. Both of the mentioned techniques are commonly used to detect 14-3-3 protein. One study comparing the ELISA technique with Western blot in detecting 14-3-3 protein showed that, after combining definite and probable cases as a reference, the sensitivities were 88.9% for ELISA and 93.7% for Western blot, with a specificity of 97.6% for both methods. The authors of that study argued that ELISA might give more consistent outcomes that are less likely to confound the assessment than the Western blot method.
On the other hand, another study conducted on 253 patients revealed no significant difference in specificity and sensitivity between the Western blot technique and the ELISA technique [8]. More recent data indicate that both methods give comparable results, and their authors draw attention to the quality of the analysis, in particular the standardization of the preanalytical treatment of CSF samples [10]. Another study, performed on 32 cases of pathologically confirmed sCJD, showed a sensitivity of only 53% for 14-3-3 protein testing with both techniques and found a significant relationship between positive 14-3-3 results and a shorter time from disease onset to testing, which is consistent with the argument that 14-3-3 protein is associated with acute neuronal loss (patients with rapid progression of the disease are more likely to undergo lumbar puncture and 14-3-3 protein testing earlier than those with slower illness) [3]. Thus, in the context of our case, we reflect on whether the gradual progression of neurological symptoms during the first months of the disease (the period before hospitalization) could explain the absence of 14-3-3 protein in CSF in the initial examination (as well as the lack of abnormalities on the brain MRI two months before admission). The fulminant deterioration of her condition observed on the ward resulted in acute neuronal loss, after which a positive 14-3-3 protein result was obtained in the control test. The researchers suggest that, in doubtful cases, serial testing of CSF should be considered due to an observed increase in the sensitivity of 14-3-3 protein investigation in later stages of sCJD. Therefore, it is recommended to repeat the lumbar puncture at least within 2 weeks after the initial one, as was done in our case [14].

Conclusions

The confirmation of CJD is a diagnostic challenge that demands an extensive differential diagnosis, and a series of tests should be considered, especially in cases with suspected "false-negative" results in initial examinations. The clinical symptoms can be masked by other neurological syndromes, as in our case during the long pre-hospitalisation period of the disease. There is a need for standardisation of 14-3-3 protein analysis. Currently the most specific test for CJD is RT-QuIC, but the gold standard for making a definite diagnosis is neuropathological examination.
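As a numerical aside on the sensitivity and specificity figures quoted in this discussion (93.7% and 97.6% for the Western blot study), the predictive value of a positive 14-3-3 result depends strongly on the pre-test probability, which is why such assays are applied to clinically suspected patients rather than unselected populations. The pre-test probabilities in the sketch below are illustrative assumptions, not epidemiological estimates for any particular clinic.

# Minimal sketch: positive predictive value of a 14-3-3 assay by Bayes' rule,
# using the sensitivity/specificity cited above; pre-test values are assumed.
def ppv(sens, spec, pretest):
    tp = sens * pretest
    fp = (1.0 - spec) * (1.0 - pretest)
    return tp / (tp + fp)

sens, spec = 0.937, 0.976  # Western blot figures quoted in the text
for pretest in (0.000002, 0.10, 0.50):  # population rate vs. clinical suspicion
    print(f"pre-test {pretest:>9.6f} -> PPV {ppv(sens, spec, pretest):.3f}")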
Methanol contamination in traditionally fermented alcoholic beverages: the microbial dimension

The incidence of methanol contamination of traditionally fermented beverages is increasing globally, resulting in the deaths of several persons. The source of methanol contamination has not been clearly established in most countries. While there have been speculations that unscrupulous vendors might have deliberately spiked the beverages with methanol, it is more likely that the methanol was produced by contaminating microbes during traditional ethanol fermentation, which is often inoculated spontaneously by mixed microbes with the potential to produce mixed alcohols. Methanol production in traditionally fermented beverages can be linked to the activities of pectinase-producing yeasts, fungi and bacteria. This study assessed some traditional fermented beverages and found that some beverages are prone to methanol contamination, including cachaca, cholai, agave, arak, and plum and grape wines. Possible microbial roles in the production of methanol and other volatile congeners in these fermented beverages are discussed. The study concludes by suggesting that contaminated alcoholic beverages be converted for fuel use rather than outright banning the age-long tradition of alcohol fermentation.

Background

Beverage ethanol production via fermentation is an age-long tradition in many parts of the world. In the tropical world and elsewhere, indigenous people are involved in the entire value chain of traditional alcohol fermentation. Jespersen (2003) reported that most beverages and foods in Africa are produced at the household level or on a small industrial scale, often of varying qualities. Aiyeloja et al. (2014) reported the potential of raffia palm wine in sustaining livelihoods in rural and urban populations in Nigeria. However, in Africa, Asia and South America, there has been an increasing incidence of methanol contamination in traditionally fermented alcoholic drinks (WHO 2014). Several cases of methanol poisoning have been reported in India and elsewhere. For instance, in 2008 over 180 persons were killed in Bangalore, and in 2009, 138 were killed in Gujarat, India. In 2015, 27 persons died in India after consuming toxic ethanol. In 2009, 25 persons died in Indonesia after consuming fermented palm wine containing methanol. About 130 persons died in some Indian villages in 2011, with the deaths linked to poisonous alcohol consumption. In the Czech Republic, 127 persons were poisoned by contaminated alcohol, of whom 42 died (Vaskova 2013). In 2014, the World Health Organization (WHO) alerted that there had been increasing outbreaks of methanol poisoning in several countries, including Kenya, Gambia, Libya, Uganda, India, Ecuador, Indonesia, Nicaragua, Pakistan, Turkey, the Czech Republic, Estonia and Norway. The size of these outbreaks ranged from 20 to over 800 victims, with case fatality rates of over 30% in some cases (WHO 2014). Lachenmeier et al. (2011) evaluated the risk of contaminated unregulated alcohol in the European Union. In Nigeria, between April and June 2015, a total of 89 persons died following the consumption of a locally produced ethanol beverage called kaikai/ogogoro/apeteshi, or illicit gin. Kaikai is produced mostly from the sap of raffia palm and oil palm, and to a lesser extent from other palms such as date palm, nipa palm, etc.
Laboratory analysis carried out by WHO and NAFDAC (the National Agency for Food and Drug Administration and Control) showed that the beverage contained 16.3% methanol, while the blood methanol concentration of victims was found to be 1500-2000 mg/l. Victims exhibited symptoms of methanol poisoning including loss of consciousness, dizziness, weakness and breathing difficulties, blurred vision and blindness, weight loss, headache, abdominal pains, nausea, diarrhea and vomiting (Methanol Institute 2013). WHO (2014) reported that a blood methanol concentration above 500 mg/l is associated with severe toxicity, whereas a concentration above 1500-2000 mg/l causes death in untreated victims. While investigation of the source/origin of methanol in the beverage is ongoing, the Federal Government of Nigeria (FGN) placed a ban on the production, sale, distribution and consumption of locally fermented beverages in Nigeria. Enforcement of the ban was heightened in the months (June-August 2015) following the incident, but as of the time of writing (November 2015) enforcement had slackened. But the ban on these age-long fermentation processes could have major impacts on the local economy. For instance, over 50 million people consume palm wine in Southern Nigeria (Obahiagbon 2009). Raffia palm, which is among the most diverse and geographically widespread palms, is found in Africa, Asia and South America (Oduah and Ohimain 2015). The palm has many potential uses (Oduah and Ohimain 2015) but it is currently underutilized. Production of beverage ethanol from raffia palm provides a source of employment, especially for rural people (Obahiagbon and Osagie 2007; Ohimain et al. 2012). Aiyeloja et al. (2014) studied the potential of raffia palm in the sustenance of rural and urban populations in Nigeria. They found that the raffia palm beverage value chain provides profits of ₦50,000-₦90,000 ($1 = ₦220) to producers and ₦45,000-₦70,000 to marketers. A complete ban on traditionally fermented beverages could be detrimental to the country's economy, especially at a time when most economies are in recession, with high inflation and unemployment rates. Nigeria is currently experiencing an economic downturn due to low crude oil prices. Hence, there is the need to establish the source/cause of methanol in traditionally fermented alcoholic beverages. The Methanol Institute (2013) reported that methanol is often deliberately added to alcoholic beverages by unscrupulous and illegal criminal enterprises as a cheaper alternative to ethanol. This may be unlikely in Nigeria and many other developing countries, where methanol is not domestically produced but imported at costs higher than the cost of the alcoholic beverage itself. For instance, domestically produced ethanol (40-60% alcohol content) is quite cheap, costing ₦20 per 30 ml shot, i.e., about ₦670/l, as against ₦5168/l for 99.85% methanol (excluding importation and duty costs). Hence, there is a need for research to focus on other possible sources of methanol in locally fermented beverages. WHO (2014) reported that outbreaks of methanol poisoning often occur when methanol is added to alcoholic beverages. Ohimain et al. (2012) reported that alcoholic beverages are produced in Nigeria using rudimentary equipment under spontaneous fermentation, which lacks effective controls and is carried out by uneducated rural workers with poor hygiene in an unsterile environment. Traditional fermentation is carried out by mixed cultures consisting of yeasts, other fungi and bacteria.
Though most traditionally fermented foods and beverages are dominated by the yeast Saccharomyces cerevisiae, and to a lesser extent Lactobacillus (Jespersen 2003; Ogbulie et al. 2007; Karamoko et al. 2012; Rokosu and Nwisienyi 1980), the presence of other microbes can lead to the production of diverse products including methanol (Dato et al. 2005; Shale et al. 2013; Kostik et al. 2014). Several compounds could be produced during mixed fermentation with several organisms. It has also been repeatedly reported that microbial fermentation of substrates rich in pectin can result in the formation of methanol (Nakagawa et al. 2000; Mendonca et al. 2011; Siragusa et al. 1988). Contaminating yeasts have been demonstrated to produce methanol during traditional fermentation (Dato et al. 2005). Recent studies have also shown that the ethanol-fermenting yeast S. cerevisiae has several strains with slightly different metabolism (Jespersen 2003; Stringini et al. 2009; Okunowo et al. 2005), with some strains possibly producing methanol. More worrisome are recent studies showing an increase in blood methanol levels in some persons even after consumption of methanol-free ethanol (Shindyapina et al. 2014; Dorokhov et al. 2015). These authors recognized two sources of methanol in human systems: endogenous and exogenous. It is generally believed that unscrupulous vendors deliberately spike beverages with methanol in order to increase the alcohol content. The aim of this review is to present an alternative viewpoint showing the possible role of microbes in the production of methanol in traditionally fermented beverages. We reviewed the literature on traditionally fermented alcoholic beverages and assessed the methanol content of the beverages, the pectin content of their feedstocks and the microbial species involved in the fermentation, in an attempt to establish a possible role of microbes in the production of methanol in traditionally fermented alcoholic beverages.

Methanol contamination in fermented beverages

The result of the review is presented in Table 1, showing that several traditionally fermented alcoholic beverages in different countries could be prone to methanol contamination. The majority of the beverages are made from a few feedstocks including palm wine, sorghum, millet, maize, sugarcane, citrus, banana, milk and plum. Cases of methanol contamination have been reported in some of the wines produced from banana, plum and agave. Spirits made from mangoes, pears, banana and melon have been shown to contain methanol (Mendonca et al. 2011). In Rwanda, traces of methanol were reported in urwagwa, a beer produced from banana (Shale et al. 2013).

The substrate for ethanol production is the first probable source of methanol in the beverage. Chaiyasut et al. (2013) reported factors affecting methanol production in fermented beverages, including raw material size and age, sterilization temperature, pectin content and pectin methyl esterase (PME) activity (note that PME activity is optimal at 50-60 °C). Another possible source of methanol in traditionally fermented alcoholic beverages is the fermenting microbes. The ethanol-fermenting yeast S. cerevisiae dominates traditional fermentation, followed by Lactobacillus (Table 1). Jespersen (2003) also observed this trend in African indigenous fermented beverages and foods. Saccharomyces cerevisiae has been used as a catalyst for the production of ethanol for thousands of years.
But recent studies have shown that there are different strains of S. cerevisiae involved in traditional ethanol fermentation (Hayford and Jespersen 1999; Jespersen 2003; Kuhle et al. 2001; Pataro et al. 2000; Guerra et al. 2001; Ezeronye and Legras 2009). The big question is: have the traditional ethanol-producing yeasts evolved to produce methanol as well? Professor Benito Santiago, University of Spain (personal communication, July 2015), opined that some years ago methanol at low concentration was desirable in beer and wines. However, we were unable to find literature confirming this claim. Plant cell wall degrading enzymes, including pectinases, are ubiquitous among pathogenic and saprophytic bacteria and fungi (Prade et al. 1999). Pectic enzymes are widely distributed in nature and are produced by yeasts, bacteria, fungi and plants (Sieiro et al. 2012). Methanol is a major end product of pectin metabolism by microorganisms (Schink and Zeikus 1980). The human colonic bacterium Erwinia carotovora is able to degrade pectin, releasing methanol (Siragusa et al. 1988). Anaerobic bacteria, particularly Clostridium butyricum, Clostridium thermocellum, Clostridium multifermentans and Clostridium felsineum, produce methanol from pectin (Ollivier and Garcia 1990). Schink and Zeikus (1980) reported various pectinolytic strains of Clostridium, Erwinia and Pseudomonas. Dorokhov et al. (2015) listed at least 20 species of human colonic microbes capable of producing methanol endogenously. The authors, in a comprehensive review, presented at least five different pathways of methanol synthesis in humans and four pathways of methanol clearance from the body, and they also demonstrated the presence of gene regulation in methanol synthesis. Readers are advised to consult this literature for details on metabolic methanol in human systems.

Pectinolytic enzymes are classified into esterases and depolymerases (lyases and hydrolases). Depolymerization of pectin by lyases produces oligo- or mono-galacturonates, while de-esterification of pectin by esterases produces pectic acid and methanol (Sieiro et al. 2012). Some authors have identified strains of Saccharomyces that produce the three types of pectinolytic enzymes, namely pectin methyl esterase (PME, EC 3.1.1.11), pectin lyase (PL) and polygalacturonase (PG) (Gainvors et al. 1994a, b; Naumov et al. 2001). Fernandez-Gonzalez et al. (2005) constructed a genetically modified S. cerevisiae strain with pectinolytic activity. Analysis of S. cerevisiae among many traditional fermented beverages in Africa shows that the strains vary according to the location and type of substrate (Jespersen 2003). Strains of S. cerevisiae having PME activity could produce methanol during fermentation. Methanol is produced during fermentation by the hydrolysis of naturally occurring pectin in the wort (Nakagawa et al. 2000; Mendonca et al. 2011). PME de-esterifies pectin to low-methoxyl pectins, resulting in the production of methanol (Chaiyasut et al. 2013; Micheli 2001). Jespersen (2003) reported that the roles of S. cerevisiae in traditional fermentation include fermentation of carbohydrates to ethanol, production of aromatic and flavor compounds, and stimulation of lactic acid bacteria and probiotic activities, among others. (Table 1 also lists, for example, urwagwa and cocoa sap wine, fermented by S. cerevisiae in Nigeria (Iwuoha and Eke 1996), and cholai, fermented by yeast in India from rice, sugarcane, juice of the date tree, molasses and fruit juices such as pineapple and jackfruit.)
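As a rough stoichiometric illustration of the PME pathway just described, the theoretical methanol yield of pectin can be bounded from its degree of esterification, since each esterified galacturonate residue releases one methanol molecule upon de-esterification. The sketch below uses textbook molecular masses and hypothetical degrees of esterification; it gives an upper bound, not a prediction of actual beverage levels:

# Upper-bound estimate of methanol released by complete PME
# de-esterification of pectin, per gram of pectin.
M_GALA_RESIDUE = 176.1   # anhydro-galacturonic acid residue, g/mol
M_METHYL = 14.0          # extra mass of a methyl-esterified residue, g/mol
M_METHANOL = 32.04       # g/mol

def methanol_yield(degree_of_esterification):
    """Grams of methanol per gram of pectin if every ester is hydrolysed."""
    de = degree_of_esterification
    mean_residue_mass = M_GALA_RESIDUE + de * M_METHYL
    return de * M_METHANOL / mean_residue_mass

for de in (0.3, 0.5, 0.7):   # hypothetical degrees of esterification
    print(f"DE = {de:.0%}: up to {1000 * methanol_yield(de):.0f} mg "
          "methanol per g pectin")

On this estimate, a highly esterified pectin could release on the order of 100 mg of methanol per gram of pectin, which makes clear why pectin-rich substrates deserve particular attention.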
Saccharomyces cerevisiae also inhibits mycotoxin-producing fungi, causes the degradation of poisonous cyanogenic glycosides, and produces tissue-degrading enzymes such as cellulase and pectinase. The volume of ethanol produced during fermentation depends on the strain of yeast used. For instance, the total alcohol (ethanol and methanol) produced from orange juice fermentation was 3.19 % w/v with S. cerevisiae var. ellipsoideus and 6.80 % w/v with S. carlsbergensis (Okunowo and Osuntoki 2007). Mixed microbial communities have likewise been reported in many traditional fermentations (Ogbadu et al. 1997; Muyanja et al. 2003; Namuguraya and Muyanja 2009; Quattara et al. 2015; Koffi-Marcellin et al. 2009; Ashmaig et al. 2009; Eze et al. 2011). Since traditional fermentation occurs via spontaneous inoculation from the substrate and processing equipment (Ohimain et al. 2012; Jespersen 2003), mixed cultures usually carry out the fermentation. Contaminating microbes, including other yeasts, fungi and bacteria, could therefore result in the production of several other products including methanol. And because methanol has a lower boiling point (65 °C) than ethanol (78 °C), it could be further concentrated in the beverage during distillation. Though there are some disadvantages to mixed-culture fermentation, the use of mixed cultures in ethanol production offers the advantage of production at low cost, since a large range of substrates may be metabolized into ethanol. Moreover, the high cost associated with operating process plants with pure cultures could be drastically reduced when mixed cultures are used. As previously stated, mixed fermentation can result in the production of diverse products. Even pure-culture fermentation can result in diverse products depending on the operating conditions. Hence, beverages produced via spontaneous fermentation by mixed cultures could contain a greater variety of products. Table 2 lists some volatile congeners produced in selected alcoholic beverages besides methanol. Some of these compounds are also very poisonous, e.g. ethyl carbamate, and some are even carcinogenic (Lachenmeier et al. 2009, 2011; Testino et al. 2014; Testino and Borro 2010). Annan et al. (2003) listed 64 volatile compounds produced during the mixed-culture fermentation of Ghanaian maize dough, consisting of 20 alcohols, 22 carbonyls, 11 esters, 7 acids, 3 phenolic compounds and a furan.

Paine and Davan (2001) reported that low concentrations of methanol occur naturally in most alcoholic beverages without causing any harm. According to WHO (2014), methanol concentrations of 6-27 mg/l in beer and 10-220 mg/l in spirits are not harmful. Paine and Davan (2001) reported that the daily safe dose of methanol for an adult is 2 g and a toxic dose is 8 g, as against the EU general limit for naturally occurring methanol of 10 g methanol per litre of ethanol, which is equivalent to 0.4 % methanol at 40 % ethanol. The Czech Republic's permitted safe limit for methanol in spirits is 12 g/l of pure ethanol (Vaskova 2013). Note that the EU methanol limit is variable (0.2-1.5 %) depending on the type of beverage and the feedstock used for fermentation. Some countries have regulatory limits for methanol in alcoholic beverages (Table 3). This regulatory control should be encouraged rather than an outright ban.
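A quick calculation makes these limits easier to compare. It assumes the EU general limit is expressed per litre of pure ethanol, as read here, and a methanol density of 0.79 g/ml; on this reading the quoted 0.4 % figure is recovered on a weight-per-volume basis:

# Convert the EU general limit (10 g methanol per litre of ethanol)
# into the methanol content of a 40 % ABV spirit.
limit_g_per_l_ethanol = 10.0
abv = 0.40                       # 40 % ethanol by volume
rho_methanol = 0.79              # g/ml

g_methanol_per_l_beverage = limit_g_per_l_ethanol * abv   # 4 g/l
pct_wv = g_methanol_per_l_beverage / 10.0                 # g per 100 ml
pct_vv = g_methanol_per_l_beverage / rho_methanol / 10.0  # ml per 100 ml
print(f"{g_methanol_per_l_beverage:.1f} g/l = {pct_wv:.1f} % w/v "
      f"(~{pct_vv:.1f} % v/v)")

# How much 40 % spirit at the WHO 'not harmful' upper level (220 mg/l)
# would be needed to reach the 2 g daily safe dose quoted above?
print(f"{2.0 / 0.220:.1f} litres")   # ~9 litres

In other words, at naturally occurring levels one would have to drink around nine litres of spirit in a day to reach the quoted safe-dose threshold, which underlines how far the contaminated kaikai (16.3 % methanol) lay above any regulatory limit.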
Recommendations and the way forward

Microbiological control of the process could also be used to prevent methanol formation in fermented beverages. For instance, pure-culture inoculation using commercial yeast, as opposed to spontaneous inoculation by wild yeasts, should be practiced. The traditional fermentation processes could also be scaled up using well-characterized and purified starter cultures. For instance, starter cultures have been successfully used to produce pito, a traditionally fermented alcoholic beverage produced from maize or sorghum (Orji et al. 2003). Adequate equipment with process controls should be used for fermentation and distillation, as opposed to the rudimentary equipment lacking controls that is currently used. For instance, sterilization/boiling at temperatures higher than 80 °C could prevent the production of methanol (Chaiyasut et al. 2013; Amaral et al. 2005). Moreover, standard microbiological process controls and working under aseptic conditions could control contaminating wild yeasts in the fermentation process. Jespersen (2003) also recommended improved process control of fermentation and product characterization, including the use of purified starter cultures with appropriate technology. Another microbiological method for the control of methanol in fermented beverages is the use of methylotrophic yeasts such as Pichia methanolica (Nakagawa et al. 2005) and Candida boidinii (Nakagawa et al. 2000), which have the capacity to utilize pectin, the methyl ester moiety of pectin, and methanol, thus preventing the accumulation of methanol in fermented products. Finally, instead of an outright ban on traditional fermentation because of methanol contamination, the mixed alcohol (ethanol and methanol) could be further processed and used as biofuel. Literature abounds on the use of methanol and ethanol as biofuels (Kamboj and Karimi 2014; Iliev 2015; Shayan et al. 2011).

Conclusions

Incidences of methanol contamination in traditional beverages are increasing globally and have caused deaths in many countries including Nigeria, India and Indonesia. It is generally believed that unscrupulous vendors deliberately spike the beverages with methanol in order to increase the alcohol content. This review observed that methanol production in traditionally fermented beverages can be linked to the activities of pectinase-producing yeasts, fungi and bacteria. Microbes producing pectin methyl esterase are able to produce methanol from fruits/juices containing pectin. Under traditional/informal fermentation, alcoholic beverages produced by mixed microbial consortia could contain mixed alcohols comprising methanol and other volatile congeners. The study concludes by suggesting that contaminated alcoholic beverages be converted for fuel use rather than outright banning the age-long traditional alcohol fermentation. Regulatory limits for methanol in fermented beverages should be strictly enforced. It is also suggested that pure cultures be used for alcohol fermentation under aseptic conditions, as opposed to spontaneous fermentation by mixed contaminating microbes.
Numerical accuracy of mean-field calculations in coordinate space

Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required on energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: The mean-field calculations have to be accurate enough in order to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for the mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of 3-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated and the distance between the points on the mesh. Results: We have examined the dependence of the results on these three factors for spherical doubly-magic nuclei, neutron-rich $^{34}$Ne, the fission barrier of $^{240}$Pu and isotopic chains around Z = 50. Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. By making appropriate choices for the numerical scheme the achievable accuracy is well below the model uncertainties of mean-field methods.

I. INTRODUCTION

The self-consistent mean-field approach, based on an energy density functional (EDF), is a tool of choice to study nuclei in any region of the nuclear chart [1]. It makes it possible to calculate the properties of the ground state but also of alternative configurations, like shape isomers, or to follow the behavior of a nucleus along rotational bands or along fission paths. Often, one is not directly interested in the total binding energy of a specific nucleus but in its evolution along a series of isotopes or isotones, which can signal structural changes for given neutron or proton numbers. Motivated by the needs of the nuclear physics and astrophysics communities, large efforts are underway to push the predictive power of nuclear mass models well below the 500 keV level. To reach this goal, the protocols used to adjust the EDFs' parameters have been revisited. In particular, methods are being developed [2][3][4] to quantify the statistical uncertainty of these parameters. However, besides the errors on observables due to these uncertainties, there is also a numerical error due to the way the SCMF equations are solved. One needs to verify that the numerics do not introduce errors that are larger than the maximum error tolerated for mass models. More importantly, these errors should not vary too rapidly from one nucleus to the other, to avoid a spurious behavior of mass differences.

The numerical methods used to solve the mean-field equations can be classified according to the way the single-particle wave functions are represented: by coordinate-space techniques or by a basis expansion. Coordinate-space techniques represent the single-particle wave functions in a discretized, finite volume. Several discretization techniques exist, utilizing finite-difference formulas [5], Fourier transformations [6], B-splines [7], wavelets [8,9] and the Lagrange-mesh method [10][11][12]. The second family of numerical representations involves expanding the single-particle wave functions on some chosen (finite) set of basis states.
Usually these basis states are harmonic oscillator (HO) eigenstates, although the details often vary. While the origin of numerical errors is quite different for both families of representations, the type of EDF does not seem to influence the accuracy of the methods much. The three main families (relativistic EDFs, zero-range Skyrme EDFs or finite-range EDFs) require similar numbers of basis states to achieve a similar precision (see e.g. [13][14][15][16][17]). In what follows, we will limit ourselves to the study of zero-range Skyrme EDFs.

It is the aim of this paper to study the numerical accuracy of a specific implementation of coordinate-space techniques: representation on a three-dimensional cartesian mesh of equidistant points. We will focus on two specific techniques: finite-difference (FD) formulas and the Lagrange-mesh (LM) method, which are the ones implemented in our codes. As far as we can infer from the tests published in the literature, the accuracy obtained with the other techniques mentioned above is similar to the one obtained within our LM scheme. Most of the information relative to the tools that we have developed has been presented for the particular implementation made in the code Ev8 [18,19]. More involved implementations have also been used, which differ from Ev8 only in that they impose fewer symmetries on the nucleus. The presence of these symmetries in general allows for a reduction of the dimension of the problem; e.g., in Ev8 it allows the representation to be reduced to 1/8 of the full box.

The article is organized as follows: first we define precisely the quantities that will be used to characterize the accuracy of a mean-field calculation. Next, we review the basic ingredients needed to define wave functions on a cartesian mesh and to calculate derivatives and integrals in this representation. We then discuss the main sources of numerical errors: the size of the box in which the nucleus is confined and the step size of the mesh. We discuss the numerical accuracy that can be achieved by comparing energies and radii of doubly magic nuclei with those obtained with a spherical code. Finally, we check the convergence of energies, radii and the multipole moments of deformed nuclei by comparing results obtained with decreasing mesh discretization lengths.

II. DEFINITION OF USEFUL QUANTITIES

A mean-field configuration is characterized by its energy, its rms radius and by multipole moments. In this section we define these quantities, whose dependence on the mesh parameters will be studied.

A. Total Energy

For a time-reversal invariant system as assumed here, the total energy is composed of the kinetic energy, the Skyrme energy describing the strong interaction in the particle-hole channel, the pairing energy, the Coulomb energy and a center-of-mass correction [19],

E_tot = E_kin + E_Skyrme + E_pair + E_Coul + E_cm .   (1)

For the parameterizations used throughout this article, the Skyrme EDF takes the form of a sum of various bilinear combinations of the isoscalar (t = 0) and isovector (t = 1) local densities ρ_t(r), kinetic densities τ_t(r) and spin-current densities J_{t,μν}(r), μ, ν = x, y, z, with coupling constants as defined in Ref. [19]. The kinetic energy just depends on the kinetic density of protons and neutrons.
While the Skyrme and kinetic energies are local functionals of the densities, the direct Coulomb energy is a nonlocal functional of the proton density ρ_p(r),

E^dir_Coul = (e²/2) ∫ d³r d³r′ ρ_p(r) ρ_p(r′) / |r − r′| .   (4)

Compared to the other terms contributing to the total energy (1), the exact calculation of the Coulomb exchange energy is orders of magnitude more costly, as it is a functional of the complete nonlocal one-body density matrix. As a consequence, the local Slater approximation, which is of similar numerical cost as the Skyrme energy (2), is used instead,

E^exch_Coul = −(3/4) (3/π)^{1/3} e² ∫ d³r ρ_p^{4/3}(r) .

The pairing energy contribution to the total energy is a bilinear expression involving the antisymmetrized matrix elements v̄_{k k̄ m m̄} of the pairing interaction and cutoff factors f_i, both of which are specified in Appendix B. The expression for the c.m. correction, which is not relevant for our discussion, can be found in Ref. [19].

B. Dimensionless multipole moments

As in [19], the dimensionless multipole moments β_ℓm are related to the matrix elements of the multipole operators Q̂_ℓm ≡ r^ℓ Y_ℓm(r̂) by

β_ℓm = [4π / (3 R₀^ℓ A)] ⟨Q̂_ℓm⟩ ,

where R₀ = 1.2 A^{1/3} fm. When m is omitted we imply it to be zero.

C. Radii

Another set of observables, related to the density profile of the nucleus, are the mean-square (ms) radii, root-mean-square (rms) radii and the isotopic shifts. The ms radius of the proton (q = p), neutron (q = n), and total density distribution is defined as

⟨r²⟩_q = ∫ d³r r² ρ_q(r) / ∫ d³r ρ_q(r) .

The root-mean-square (rms) radii are then the square roots of the corresponding mean-square radii. Similarly, we will present results for the isotope shifts of charge radii, which are calculated as the difference between the proton ms radius of an isotope with N neutrons and that of a reference isotope with N₀ neutrons, without any corrections.
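To make these definitions concrete, and anticipating the mesh representation introduced in the next section, the short sketch below evaluates the rms radius and β_2 of a toy deformed Gaussian density on a cartesian grid with a simple midpoint-rule quadrature. The density profile, grid parameters and mass number are illustrative choices, not values taken from the paper:

import numpy as np

# Toy axially deformed Gaussian density for A nucleons on a 3d mesh.
A, R0 = 100, 1.2 * 100 ** (1.0 / 3.0)   # R0 = 1.2 A^(1/3) fm
dx, n = 0.8, 32                          # mesh spacing (fm), points/direction
x = (np.arange(n) - n / 2 + 0.5) * dx    # midpoint-rule collocation points
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

ax, az = 2.3, 2.9                        # prolate: <z^2> > <x^2> (illustrative)
rho = np.exp(-(X**2 + Y**2) / (2 * ax**2) - Z**2 / (2 * az**2))
rho *= A / (rho.sum() * dx**3)           # normalize to A particles

r2 = X**2 + Y**2 + Z**2
rms = np.sqrt((rho * r2).sum() / rho.sum())

# <Q_20> with r^2 Y_20 = sqrt(5/16pi) (3z^2 - r^2);  beta_2 = 4pi<Q_20>/(3AR0^2)
q20 = np.sqrt(5 / (16 * np.pi)) * (rho * (3 * Z**2 - r2)).sum() * dx**3
beta2 = 4 * np.pi * q20 / (3 * A * R0**2)
print(f"rms radius = {rms:.3f} fm, beta_2 = {beta2:.3f}")

Because the integrands decay rapidly, the midpoint-rule sums converge quickly with box size, which is the mechanism examined quantitatively later in the paper.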
III. COORDINATE SPACE REPRESENTATION

Assuming a 3-dimensional cartesian mesh, a function Φ(r) = Φ(x, y, z) is represented by the tensor Φ_pqs of its values at the collocation points (x_p, y_q, z_s). A mesh can be defined in several ways, depending on the choice of the collocation points. For example, the origin of the coordinate system and the boundaries of the box can be included as collocation points or not. Different choices can also be made for the boundary conditions at the edge of the box. To set up the self-consistent mean-field equations, one has to vary the EDF with respect to Φ_pqs. This requires prescriptions to calculate derivatives and integrals from the values of Φ_pqs on the mesh. Several choices for derivatives have been explored over the years.

A. Derivatives on a mesh

The most straightforward possibility to set up a coordinate-space representation of the self-consistent mean-field equations is provided by the finite-difference method, a widely-used tool to solve partial differential equations [20]. In such a scheme, the derivatives are calculated with n-point finite-difference formulas, and the integrals are obtained by summing up the integrand at the mesh points multiplied by a suitable volume element. There are three factors that determine the accuracy that can be achieved with the finite-difference method. One is the overall resolution scale provided by the mesh spacing; decreasing the distance between mesh points improves the accuracy. Second, the higher the order of the finite-difference formulas used for a given mesh spacing, the better the accuracy. In both cases, however, better accuracy also means an increase of the numerical cost. Third, there are internal inconsistencies introduced by the method itself. For example, applying the numerical first-order derivative twice to a given function is not equivalent to applying the numerical second-order derivative. Also, the numerical derivatives are not the inverse of the numerical integration. It is only for very small step sizes, well below 0.1 fm, that these internal inconsistencies become irrelevant. While such small step sizes can be easily handled in spherical 1d codes [21], the required storage is prohibitive in axial 2d and cartesian 3d codes. In addition, such step sizes are much smaller than what can be expected to be the physically relevant resolution scale; see for example the arguments brought forward in Ref. [22].

Several other schemes have been developed in the past with a better consistency between derivation and integration. For instance, derivatives have been defined through a Fourier transformation to momentum space [6,23,24], which is equivalent to the assumption that the functions on the mesh can be developed into a set of plane waves. In this method, the derivatives are quasi-exact for a given resolution of the mesh, and first- and second-order derivatives are internally consistent. Similar ideas have been developed in quantum chemistry under the label of discrete variable representation (DVR) [25][26][27]. A similar formalism that provides an internally consistent scheme for derivatives and integrals is the Lagrange-mesh method that we will sketch in the following section.

B. Lagrange-mesh representation

The idea underlying the Lagrange-mesh method is that for each Gauss quadrature one can construct a set of basis functions for which orthogonality and completeness relations are exactly fulfilled when evaluated with the given quadrature [10,28,29]. This additional condition makes the LM method a special case of the slightly less rigorous concept of DVR [26,29]. Lagrange meshes have been constructed for a multitude of different geometries and used for a wide range of applications, see [29] and references therein. We will use here the case of an equidistant 3d cartesian mesh. Its three directions are separable in the formalism, such that presenting the principles of the method in one dimension is sufficient.

The underlying basis of a one-dimensional cartesian equidistant Lagrange mesh is constructed as the set of plane waves

ϕ_k(x) = (1/√L) e^{2πi k x / L} ,   (14)

whose orthogonality relations, Σ_r ϕ*_k(x_r) ϕ_k′(x_r) dx = δ_kk′, are exact when evaluated with a simple 2N-point rectangular quadrature rule, sometimes called the midpoint rule. Here dx is the distance between the collocation points located at x_r = r dx = ±dx/2, ±3dx/2, ..., ±(2N − 1)dx/2, L = 2N dx is the length of the numerical box, and k = ±1/2, ±3/2, ..., ±(N − 1/2). The real part of the ϕ_k(x) is symmetric, has nodes on the boundaries of the box and a maximum at the origin, whereas their imaginary part is skew-symmetric, consequently has a node at the origin, and maxima on the boundaries of the box. This also implies that ϕ*_k(x) = ϕ_{−k}(x). The ϕ_k(x_r) form a complete set to describe any function on the mesh points. Note that the box size L is not a multiple of the wavelength of the basis functions. Instead, twice the box size is an odd multiple of the wavelengths, which take the values 2L/1, 2L/3, 2L/5, ..., 2L/(2N − 1). Both the real and imaginary parts of all plane waves in Eq. (14) are non-zero at all mesh points.
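These properties are easy to verify numerically. The sketch below uses the explicit plane-wave form of Eq. (14) and checks that the midpoint-rule quadrature renders the basis exactly orthonormal (a toy check with arbitrary N and dx, not part of the actual codes):

import numpy as np

N, dx = 10, 1.0                       # 2N = 20 points, spacing dx
L = 2 * N * dx                        # box length
r = np.arange(-N + 0.5, N, 1.0)       # r = -(N-1/2), ..., (N-1/2)
x = r * dx                            # collocation points x_r
k = np.arange(-N + 0.5, N, 1.0)       # half-integer wave-number indices

# phi_k(x) = exp(2 pi i k x / L) / sqrt(L),  Eq. (14)
phi = np.exp(2j * np.pi * np.outer(k, x) / L) / np.sqrt(L)

# Midpoint-rule overlaps: sum_r phi_k*(x_r) phi_l(x_r) dx -> identity matrix
overlap = phi.conj() @ phi.T * dx
print(np.allclose(overlap, np.eye(2 * N)))   # True: exactly orthonormal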
As recalled in Ref. [22], in cartesian DVR and LM coordinate-space methods where the derivatives are defined through an expansion in plane waves, the analysis of a calculation's infrared and ultraviolet cutoffs introduced by the basis is straightforward. This has to be contrasted with the much more involved analyses required when working with an HO basis [30][31][32]. It has also been argued in Ref. [22] that a DVR or LM representation of the nuclear many-body problem covers the relevant part of the phase space with a much smaller number of basis states than required by an HO basis. In practice, however, the HO bases typically used for self-consistent mean-field calculations are much smaller than the typical number of mesh points used in the same kind of calculation. For a box with 20 points in every direction the number of linearly independent states is 64000, to be compared with a harmonic oscillator expansion with 20 shells, which contains 14168 states.

While the basis functions ϕ_k(x) of Eq. (14) are useful to discuss the mathematical properties of the LM method, the actual coordinate representation employs the set of 2N Lagrange interpolation functions f_r(x) obtained as [10,28,29]

f_r(x) = dx Σ_k ϕ*_k(x_r) ϕ_k(x) .   (15)

By construction, the Lagrange interpolation functions have the property of being equal to one at the mesh point x_r = r dx and zero at all others, f_r(x_s) = δ_rs [10,28,29]. When developed into the Lagrange functions, any function φ(x) on the mesh is then simply represented by its values φ_r ≡ φ(x_r) at the 2N mesh points,

φ(x) = Σ_r φ_r f_r(x) .   (17)

The Lagrange functions are smooth and infinitely differentiable. They can be used to define matrices representing the first and second derivatives of functions discretized through Eq. (17). (Unfortunately, the corrections of these expressions as given in the corrigendum to Ref. [19] still contain a typographical error: the formula for the second derivative has a superfluous factor of two when i = j.) The first derivative of any function φ(x) on the mesh is obtained by multiplying the vector of its mesh values by the 2N × 2N matrix

D^(1)_rs = f′_s(x_r) ,   (18)

and similarly for the second derivatives,

D^(2)_rs = f″_s(x_r) .   (19)

Note that the derivative matrices have the property D^(2) = D^(1) D^(1) by construction [29], which is not the case for finite-difference formulas.
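A minimal sketch of one way such internally consistent derivative matrices can be generated, using the plane-wave basis of Eq. (14): since the discrete transform between mesh values and basis coefficients is unitary, derivative matrices can be obtained by multiplication in the wave-number representation, and D^(2) = D^(1) D^(1) then holds by construction. This illustrates the structure described above; it is not the actual implementation of the codes:

import numpy as np

N, dx = 16, 0.5
L = 2 * N * dx
x = (np.arange(-N, N) + 0.5) * dx             # 2N collocation points
k = (np.arange(-N, N) + 0.5) * 2 * np.pi / L  # wave numbers of Eq. (14)

# Unitary transform between mesh values and plane-wave coefficients
B = np.exp(1j * np.outer(x, k)) * np.sqrt(dx / L)
D1 = (B @ np.diag(1j * k) @ B.conj().T).real  # first-derivative matrix
D2 = (B @ np.diag(-k**2) @ B.conj().T).real   # second-derivative matrix

print(np.allclose(D2, D1 @ D1))               # True, by construction

f = np.exp(-x**2 / 2.0)                       # test on a Gaussian
print(np.max(np.abs(D1 @ f + x * f)))         # small: near-spectral accuracy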
As the derivatives of Eqs. (18) and (19) correspond to full 2N × 2N matrices, their application is more time consuming than finite-difference derivatives, which correspond to a sparse band matrix. The full cartesian 3d representation of a function Φ(r) is then provided by

Φ(x, y, z) = Σ_pqs Φ_pqs f_p(x) f_q(y) f_s(z) ,   (20)

where the number of discretization points does not have to be the same in each direction. In that case, the derivative matrices of Eqs. (18) and (19) have to be set up separately for each direction. As pointed out in Refs. [26,33], a variational calculation using DVR or LM derivatives delivers very precise values for the total energy in spite of the individual matrix elements being much less accurate. In what follows, we will illustrate that this property implies very accurate total energies while separate terms of the Skyrme EDF are less well represented. In addition, we will show that using an LM basis results in a variational calculation.

A. Numerical parameters of parameterizations

Unless explicitly stated, we have used the SLy4 parameterization. To explore the dependence of the numerical accuracy on the EDF, we have in addition tested a representative set of Skyrme parameterizations, as listed in Appendix A. In the next two sections, we present calculations for the doubly-magic spherical nuclei 40Ca, 132Sn and 208Pb, the neutron-rich nucleus 34Ne, the Cd, Sn and Te isotope chains and the fission path of 240Pu. It is worth noting that we only included pairing for the isotopic chains and for 240Pu; see Appendix B for details. In all other cases, pairing has been neglected. In Appendix C we comment on the precise physical constants used during our calculations.

B. Measuring accuracy

The accuracy of a coordinate-space calculation is limited by the size of the box, the discretization length dx and the way derivatives and integrals are calculated. In order to properly judge these effects we employ two ways of analyzing the results. For spherical nuclei we can compare our 3d results with those of a one-dimensional spherical code that also represents the single-particle wave functions in coordinate space. Because of spherical symmetry, we can use extremely fine discretizations, and the results can thus be considered exact to very high precision. For this purpose we use Lenteur [21] as a reference. For deformed nuclei, we no longer have access to such a comparison. Here we have to resort to looking at 3d results as a function of both box size and mesh spacing: we compare results in small boxes with a large mesh spacing to results in very large boxes with a very fine mesh spacing.

C. The use of derivatives and the variational principle

The numerical cost of using LM derivatives is much higher than that of the FD alternative. To control the computational time, three options have been considered; they differ by the way derivatives are calculated during the mean-field iterations and after convergence. The first option (FD+FD) was used in the first applications of the codes [5], where derivatives were exclusively calculated by FD. The second one (FD+LM) has been the most used one for more than 20 years: FD derivatives are used during the iterations, but the energies are recalculated after convergence with the LM formulas. Finally, in the last option (LM+LM), the LM formulas are used both during the iterations and after convergence. In practice, we use a seven-point difference formula for the first-order and a nine-point formula for the second-order derivatives when employing FD formulas. It has been shown earlier in Ref. [34] that this provides an efficient compromise in terms of overall speed and precision.

Figure 1 illustrates the accuracy of the total energy obtained using these three options. The LM+LM choice is by far the most accurate. As can be seen in Table II, the result obtained with a mesh size of 1.0 fm differs by only 25 keV from the Lenteur result. The FD+LM option is less accurate, but already sufficient for most applications, with an error of around 100 keV for dx = 0.8 fm. It is better by nearly an order of magnitude than the FD+FD choice. Results presented in the following have been obtained with the FD+LM option, except where otherwise stated.

Both the FD+LM and LM+LM calculations underestimate the binding energy, as they should for a variational calculation. This is due to the fact that the single-particle wave functions are expanded on a complete and closed basis for a given box size and mesh discretization length, see Eq. (14). Increasing the box size and/or decreasing the mesh discretization length enlarges the accessible subspace of the Hilbert space [29] and leads to a monotonous convergence of the energy.
By contrast, such a basis cannot be defined for the FD+FD option, for which the calculation systematically overestimates the binding energy of 208Pb. The same applies to mesh calculations with Fourier derivatives, as can be deduced from the convergence analyses in Refs. [6,23]. While for a given dx the overall accuracy of the binding energy found there is very similar to the one we find for LM+LM calculations, the energy does not converge monotonically when decreasing dx.

While the use of LM derivatives after having used FD ones during the iterations (FD+LM) is sufficient to obtain an upper bound on the total energy, since any wave function discretized on a mesh can be expanded on the LM basis, the errors on the various individual terms of the Skyrme EDF can be very large, as can be seen in Table I. While the total energy varies by slightly less than one MeV when dx is decreased from 1.0 fm to 0.549 fm, the variation of the kinetic energy is of the order of 40 MeV, counterbalanced by a similar change in the Skyrme energy. The situation for the LM+LM scheme is shown in Table II. It indicates a similar effect, but on a much smaller scale: the total energy varies by 20 keV while the kinetic energy varies by roughly 150 keV. When performing symmetry restoration and configuration mixing by the GCM, a high level of accuracy is required to avoid the buildup of numerical noise while solving the Hill-Wheeler-Griffin equation. This calls for the use of LM derivatives in these calculations, as done since our first applications [35].

D. Determining box sizes and mesh spacings

The first requirement of a coordinate-space calculation is that the box in which the nucleus is confined is large enough to avoid any spurious effect due to the truncation of the wave functions. The influence of the box size on the total energy for three spherical nuclei is represented in Fig. 2. The same mesh size dx = 1.0 fm is used in all calculations while the number of discretization points is varied, thus changing the volume of the box. The calculation in the largest box, using 23 points, is taken as a reference. The errors decrease quickly when the box size is enlarged. If one requires that the error be smaller than a keV, we see that taking boxes with half-sides of 11 fm for 40Ca, 15 fm for 132Sn and 20 fm for 208Pb is sufficient. Since the numerical effort required for 40Ca is very low, we opted to use a slightly larger half-side of 13 fm in order to further increase our accuracy to about 0.1 keV. Similar analyses have been performed for all nuclei considered in this paper. Since several nuclei in the isotopic chains around Z = 50 are deformed, we have performed all calculations with the same box size as for 208Pb. This choice allows us to calculate all isotopes under the same numerical conditions. The box dimensions are summarized in Table III. The columns C_x, C_y and C_z indicate the size of the box in which the Coulomb problem is solved. For every system, the box size was varied for fixed dx until the energy did not change by more than 0.1 keV, with the exception of 240Pu, for which this limit was 1 keV. An unambiguous comparison between calculations performed with different mesh discretizations dx can only be achieved when the volume of the box is conserved. This is realized by determining the value of dx in such a way that the box has the same size for each number of mesh points.

E. Convergence of the iterative procedure

Decreasing the mesh size improves the accuracy. However, this has a price in computing time.
First, keeping the same box size requires increasing the number of discretization points. A second factor increasing the computing time is that the time step of the imaginary-time-step method [36,37] implemented in the codes [19] has to be decreased with decreasing mesh size. This considerably slows down the convergence. In Fig. 3 we show the evolution of the error on the total energy relative to Lenteur during the iterations for the nucleus 40Ca, for different mesh discretizations dx. The most accurate result after 100 iterations is obtained with dx = 1.0 fm. Gaining an order of magnitude in accuracy after convergence requires carrying out roughly 100 more iterations for the step sizes represented in the figure.

F. Treatment of the long-range Coulomb interaction

The direct Coulomb energy requires a special treatment because of its long range. One of the spatial integrations in Eq. (4) can be eliminated through the calculation of the Coulomb potential of the protons, which satisfies the electrostatic Poisson equation

ΔU_Coul(r) = −4π e² ρ_p(r) ,   (22)

where e² is the square of the elementary charge. When solving this equation, boundary conditions need to be imposed at the edge of the box. These can be easily constructed when recalling that at large distances the potential is entirely determined by the multipoles Q_ℓm of the nucleus' charge distribution. Expanding the Coulomb potential on spherical harmonics and keeping terms up to ℓ = 2, the Coulomb potential outside the box is approximated by

U_Coul(r) ≈ e² Σ_{ℓ=0}^{2} Σ_m [4π/(2ℓ+1)] Q_ℓm Y_ℓm(r̂) / r^{ℓ+1} ,   (23)

which provides the boundary condition for the numerical solution of Eq. (22). The direct Coulomb energy is then calculated as

E^dir_Coul = (1/2) ∫ d³r U_Coul(r) ρ_p(r) .   (24)

As for the nuclear part of the energy, the accuracy of the electrostatic potential obtained by solving Eq. (22) is limited by three factors: the size of the box, the mesh discretization length dx and the way derivatives are calculated.

A suitable box size for the Coulomb problem has to be larger than for the Skyrme EDF. This is a direct consequence of the long range of the Coulomb force. To make the contributions to the boundary conditions of terms higher than ℓ = 2, see Eq. (23), negligible, one has to calculate the Coulomb potential in a box larger than the one used for the nuclear part of the interaction. Typical values are given in Table III. For light nuclei such as 40Ca, no extra points for Coulomb need to be added, while the box has to be significantly enlarged for heavier systems in the 132Sn and 208Pb regions. For the calculation of the fission barrier of heavy nuclei such as 240Pu up to very large deformations, the Coulomb box size has to be two times larger than the one needed for the Skyrme EDF to obtain the same numerical accuracy on all the energies.

The Laplacian in Eq. (22) has to be approximated on the mesh in such a way that the accuracy on the Coulomb energy is similar to that of the other terms in the EDF. We show in Fig. 4 the gain in accuracy on the total energy of 208Pb obtained by going from a three-point to a seven-point FD formula for the Laplacian. Already a five-point formula brings the required accuracy and is used in all other calculations reported here. One can easily understand that a lower-order finite-difference formula than the one used to calculate the kinetic energy is sufficient for the Laplacian in Eq. (22): the typical length scale of the variation of the Coulomb potential is much larger than the scale on which the wave functions vary. The final factor for the accuracy of the Coulomb solution is the mesh discretization length dx.
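The effect of the stencil order discussed around Fig. 4 can be illustrated with the standard central-difference coefficients for the second derivative; the 3-, 5- and 7-point formulas below are the textbook ones, and the smooth test function is an arbitrary choice, not the actual Coulomb potential:

import numpy as np

# Central second-derivative stencils of increasing order
STENCILS = {
    3: np.array([1.0, -2.0, 1.0]),
    5: np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / 12.0,
    7: np.array([2.0, -27.0, 270.0, -490.0, 270.0, -27.0, 2.0]) / 180.0,
}

def second_derivative(f, dx, npoints):
    """Apply an n-point central stencil; interior points only."""
    return np.convolve(f, STENCILS[npoints][::-1], mode="valid") / dx**2

dx = 0.8
x = np.arange(-12.0, 12.0 + 1e-9, dx)
f = np.exp(-x**2 / 8.0)                       # smooth test function
exact = (x**2 / 16.0 - 0.25) * np.exp(-x**2 / 8.0)

for n in (3, 5, 7):
    m = (n - 1) // 2                          # points lost at each edge
    err = np.max(np.abs(second_derivative(f, dx, n) - exact[m:-m]))
    print(f"{n}-point stencil: max error {err:.1e}")

The error drops by orders of magnitude from the 3-point to the 5-point formula, consistent with the observation that a five-point Laplacian already suffices for the slowly varying Coulomb potential.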
As the effect of the Coulomb term is already incorporated in all of the applications, we will not discuss it separately.

A. Binding energies

Provided that the box size is large enough, the main factor determining the accuracy of our implementation of mesh calculations is the discretization length. In Fig. 5 the energy difference with respect to the Lenteur results is plotted for three doubly-magic spherical nuclei, as a function of the mesh discretization dx, for a representative set of Skyrme parameterizations. It is remarkable that the interactions are grouped according to their effective mass (see Appendix A for the actual values): interactions with a larger effective mass m* give systematically more accurate results than interactions with smaller ones. This property is related to the E_ρτ term of the Skyrme EDF in Eq. (2), which, in our experience, is the least well represented on a mesh. Since the magnitude of this term increases when the effective mass decreases, the accuracy obtained for a given mesh size deteriorates for lower effective masses. One can see that the error obtained with a mesh discretization as large as dx = 1.0 fm is below 1.0 MeV for 208Pb. The energy difference decreases to a few hundred keV for dx = 0.8 fm and to a few keV for dx = 0.6 fm. Note that a similar accuracy for dx = 0.6 fm was found for a 2-dimensional code based on splines [8]. To obtain agreement between the spherical code Lenteur and our 3-dimensional codes below the 1 keV level would require increasing the box size but also making the codes more similar. For a nucleus with a binding energy larger than 1 GeV, this implies a relative discrepancy better than 10⁻⁷, and there are several sources of differences between the codes that can play a role, none of which is easy to control.

B. Deformation energy curves

Let us now study the convergence properties of our numerical scheme for the fission path of 240Pu. Our motivation is twofold: 240Pu is a frequent benchmark for models that describe fission [39][40][41][42][43][44] but also for numerical algorithms [8,13]. The energy curve of this nucleus presents two minima at prolate deformations, the ground state and a fission isomer. In Fig. 6, we show the variation of the energy with deformation. The box used for these calculations has the same size for all discretizations, as indicated in Table III. When the left-right symmetry is broken, the number of points along the z direction is doubled. We have performed calculations with four different mesh discretizations, dx = 1.0, 0.82, 0.69 and 0.60 fm, and tested the convergence as a function of dx by taking the difference with respect to the results obtained with dx = 0.6 fm. For each value of dx, the energy at each deformation is the energy relative to the prolate ground state. The energy curve obtained with dx = 0.6 fm is shown in Fig. 6. The topography obtained for other values of dx is the same. Shapes are triaxial in the vicinity of the first barrier, whereas everywhere else they remain axial. At deformations smaller than that of the fission isomer the configurations are reflection symmetric, whereas at larger deformations they are increasingly asymmetric. We will use this curve as a reference to determine the accuracy of the calculations carried out for other values of dx. For each dx, the ground-state energy is taken as the zero of the energy. The results are shown in Fig. 7. The properties of the minimum are summarized in Table IV.
At dx = 1.0 fm the error is of the order of a few times 100 keV, with a rather large oscillation. For a mesh discretization of 0.82 fm, the error becomes lower than 100 keV (except in the vicinity of the spherical configuration, where it reaches 150 keV, but this configuration is very excited) and is quite acceptable for the calculation of energy curves. Decreasing the discretization further to 0.69 fm reduces the error to values of around a few times 10 keV at most.

Some published results allow for a comparison between the accuracies of mesh calculations and of calculations using an expansion on an HO basis. Pei et al. [8] have performed calculations on an axial mesh using B-splines and on HO bases, either spherical or deformed, with 20 oscillator shells in both cases. The accuracy obtained in [8] on a mesh with dx = 0.65 fm seems very similar to the one we obtain. The use of a spherical HO basis is rather unreliable, with an error that is larger than 1 MeV already for the excitation energy of the fission isomer and that quickly increases to several MeV at larger deformations. For an axial oscillator basis, the results are similar to those that we obtain with a mesh size of 0.82 fm up to the first barrier, but the accuracy deteriorates rapidly for larger deformations, being of several hundreds of keV at the deformation corresponding to the fission isomer. Similar results can be found in [46] for 194Hg and in [47] for 256Fm. As a number of shells significantly larger than 20 is numerically prohibitive, one either has to resort to a two-center oscillator basis or one has to construct a suitable subspace within a much larger one-center HO basis by carefully selecting the low-lying single-particle states. The former option is developed in Ref. [48], whereas the latter has been used during the construction of the unedf1 parametrization [49], where the lowest 1771 basis states out of a basis of 50 HO shells have been kept.

In the light of these error bars, a numerical accuracy of 100 keV is sufficient for the adjustment of an EDF. However, from the results published by Pei et al. [8], it can be estimated that the numerical error on the fission barrier height is a few times these 100 keV. Similar results have been obtained in the case of the RMF method [13,53].

C. Radial density distribution

The rms radius is intimately linked to the radial density distribution of a nucleus. One can expect it to be particularly sensitive to the box size for nuclei with a large excess of neutrons. Tests have been performed for the very neutron-rich nucleus 34Ne by varying the box size for a fixed mesh discretization dx = 0.8 fm. To avoid any ambiguity in the calculation, pairing has been omitted. The results are presented in Fig. 8, where we show the difference in the total rms radius as a function of the box size for a representative set of EDF parameterizations. For the box size recommended for 40Ca in Table III, the number of points is 16 for a mesh size of 0.8 fm. It leads to an error of the order of 10⁻² fm for most interactions, the results being slightly less accurate for SV-min. For smaller boxes, the accuracy of the radii is lower and depends on the interaction. In Fig. 9 the radial profile of the total density of 34Ne is plotted as a function of the box size. The distortion of the density in the smallest box is large and demonstrates that half the box size has to be larger than 8.0 fm.
In all other boxes, the exponential tail of the density distribution is well described, up to the point before the last one. For a box size of around 12 fm, the density is well described down to a decrease of the central density by six orders of magnitude.

The confinement in a finite volume is less evident in an expansion on a basis than in a mesh calculation, but it is also present. While oscillator basis functions extend to infinity, they are in practice strongly localized by their Gaussian form factor. If one takes the classical turning point as a measure of the extension of an HO state, one obtains, for 208Pb and 20 oscillator shells, a value for the turning point that varies from 14 fm for ℓ = 0 to 16 fm for ℓ = 20. To increase the value of this turning point to 20 fm would require using 28 oscillator shells for ℓ = 0. This effect of confinement by an oscillator basis has been put in evidence in Ref. [54] for the case of 112Zr.

For comparison, the experimental uncertainty on rms charge radii for the Ne isotopes (up to A = 28) varies from 0.002 fm close to stability to 0.02 fm for exotic isotopes [55]. It is interesting to note that the numerical accuracy of a mesh mean-field calculation is at a similar level (provided the box is large enough), but that the model already introduces uncertainties on the rms radii that are at least one order of magnitude larger [2]. In Fig. 10, we compare the total rms radii calculated with decreasing mesh sizes to those obtained with Lenteur for the three spherical nuclei 40Ca, 132Sn and 208Pb. The agreement is already very satisfying for a large mesh size of 1.0 fm, with one order of magnitude gained in accuracy when decreasing the mesh size to 0.8 fm, which is the usual value for production calculations. An interesting feature that cannot be deduced from Fig. 10 is that all of the parameterizations, with the exception of unedf0, always produce an rms radius that is smaller than the Lenteur result. In Fig. 11, we present the isotopic shifts δ⟨r²⟩(N, Z) for a range of even-even Sn nuclei, the reference being 132Sn. All curves almost exactly coincide. This demonstrates that the isotopic shifts are quite reliable even with coarse meshes. Similar results are obtained for the Cd, Xe and Te isotopes.

D. Two-neutron separation energies

To put into evidence changes of nuclear structure with nucleon number, one often uses mass filters that are computed by taking specific differences between the binding energies of neighboring nuclei. The simplest filter is the two-nucleon separation energy, defined as the energy difference between two isotopes (or isotones) whose nucleon number differs by two.
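Written out in terms of the total energies computed here (a standard definition, added for clarity), the filter reads

S_2n(Z, N) = E_tot(Z, N − 2) − E_tot(Z, N) ,

which is positive whenever the nucleus gains binding by the addition of two neutrons; since S_2n is a difference of two large energies computed in identical numerical conditions, much of the discretization error can be expected to cancel.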
In Fig. 12, we show the evolution of the two-neutron separation energies, S_2n, of even-even nuclei for three neighboring isotopic chains when the mesh size dx is decreased. For each discretization dx we have plotted the difference of the S_2n values with respect to the ones obtained at dx = 0.63 fm. Even with a mesh size as large as dx = 1.0 fm, the accuracy on the S_2n is already better than 100 keV, which is small enough for most applications. The mesh size used in most of our published applications, dx = 0.8 fm, leads to an accuracy better than 10 keV. In the bottom panel, the two-neutron separation energies of the three isotope chains are plotted for four values of dx. The curves cannot be distinguished on a scale adapted to the variation of S_2n as a function of the neutron number. This result is in strong contrast to some published calculations using an expansion on an oscillator basis [56], where special algorithms have to be devised to smooth numerical irregularities that can be of the order of a few hundred keV.

E. Multipole moments

The dimensionless ground-state quadrupole moments β_2 of the even-even Te isotopes are shown in Fig. 13. Differences between the curves corresponding to different values of dx are tiny and not significant. Similar results were obtained for the Cd and Sn isotopes. We now examine how the multipole moments of 240Pu along the fission path are affected by the mesh size. In Figs. 14 and 15 we show the octupole and hexadecapole moments, respectively, in the region of the fission path where parity is broken. Similar results obtained for the axial and triaxial cases are not shown. In Tables IV and V we show the multipole moments of the ground state and fission isomer of 240Pu for the different mesh discretizations, as obtained by unconstrained calculations. From Figs. 14 and 15 we see that the overall sequence of shapes along the fission path is robust with respect to the mesh spacing. The fission path is already precisely defined at the coarsest mesh (dx = 1.0 fm) we used. A single exception can be seen at the onset of octupole deformation; in the vicinity of this point, however, the energy surface is very flat in the β_3 direction. On a smaller scale, the multipole moments do vary as a function of the mesh discretization. This is best visible in Tables IV and V. Since our method hinges on the variation of the total energy in Eq. (1), there is no guarantee that the values of the multipole moments converge in a predictable way. It is, however, reassuring to see that the typical variation of these moments is of the order of a few percent to at most about 10 percent. The larger variations present themselves in the higher-order β_6, β_8 and β_10 moments. These are more difficult to resolve on coarse meshes because of the high number of nodes of their associated Legendre polynomials.

F. Single-particle levels

In Fig. 16 we show the evolution of the neutron single-particle levels within 1.5 MeV of the Fermi energy in the ground state of 240Pu as a function of the mesh spacing dx. While slight shifts of the positions of the levels are observed as a function of the mesh size, the largest error at dx = 1.0 fm is of the order of 100 keV. One can also note that the level ordering within the parity subspaces is the same for all values of dx. A similar dependence on the box parameters is found for the proton states and for the lighter nuclei studied here.

VI. CONCLUSION

The aim of this paper was to study the numerical accuracy of the solution of the self-consistent mean-field equations using a discretization on a 3-dimensional cartesian coordinate-space mesh. Three elements permit control of its numerical accuracy. The first one is the method used to calculate derivatives. Using Lagrange-mesh derivatives leads to much more accurate results than finite-difference formulas. In addition, a cartesian Lagrange mesh corresponds to a representation in a closed subspace of the Hilbert space, such that it always provides an upper bound to the binding energy that becomes tighter when adding points outside a given box or when decreasing the distance between mesh points in a given box. Neither is the case for finite-difference derivatives.
However, we have shown quantitatively that the accuracy of a calculation that uses finite-difference formulas during the iterations can be significantly improved upon by recalculating the EDF at convergence with Lagrange-mesh derivatives. Again, this procedure provides us with an upper bound of the energy, thus restoring the variational character of the calculation. Using Lagrange derivatives during the iterations allows one to improve the accuracy on energies still further, but at the cost of at least doubling the computing time. The second element on which mesh calculations depend is the size of the box in which the nucleus is confined. The examples of doubly-magic nuclei and neutron-rich 34Ne illustrate that results for energies and densities are already stable at small box sizes. Thirdly, the quality of the results depends on the mesh size, with errors on energies that are almost independent of the number of neutrons and protons and of the shape of the nucleus. A mesh size dx = 0.8 fm guarantees an accuracy that is in general better than 100 keV, which corresponds to a relative accuracy of less than a tenth of a percent, even for lighter nuclei. Decreasing the mesh size to 0.7 fm permits a gain of nearly an order of magnitude and reaches an accuracy that is well below all the uncertainties of the mean-field model.

One can summarize these results by concluding that a mesh technique as implemented in our codes is flexible (it can accommodate any kind of symmetry breaking), robust (the accuracy can be controlled by an adequate choice of the three elements mentioned above) and that it can be very accurate if needed. The positive aspect of our numerical scheme is that using a mesh size of 0.8 fm, as used in most of our past applications, ensures an accuracy better than 100 keV on energies and reliable shape properties for nuclei of any mass. Our study has been focused on the solution of the mean-field equations, and we have not touched on the description of pairing correlations. There has already been a study of this problem by Terasaki et al. [57]. It should be revisited today to take into account new developments. However, the problem is not exclusively related to the way the mean-field equations are solved. The description of single-particle states well above the Fermi energy is probably very different when using a discretization on a mesh or an expansion on an oscillator basis.

Appendix C: Physical constants

Whenever possible, we used the value of ℏ²/(2m) that was used during the adjustment of the parametrization. It might seem superfluous to completely specify the physical constants used, but the results of our calculations depend on the precise values of these constants. In particular, the level of agreement between Ev8 and Lenteur described in Sect. V A is only attainable when these codes use exactly the same numerical values for the physical constants. In fact, significant errors can be introduced when the values of the physical constants are slightly changed. The seemingly innocuous value of ℏ²/(2m) plays in fact a very important role. Figure 17 shows Lenteur calculations for the spherical nuclei 40Ca, 132Sn and 208Pb with SLy4. Every point was calculated by slightly changing the value of ℏ²/(2m) from 20.73553 MeV fm², the SLy4 value. We see that using a value of ℏ²/(2m) that is not consistent with the value used during the fit of the EDF can lead to an error of several MeV on the total energy.
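The order of magnitude of this sensitivity can be anticipated from first-order perturbation theory. Since the kinetic energy is linear in C ≡ ℏ²/(2m), the Hellmann-Feynman theorem gives

∂E_tot/∂C = E_kin/C ,   so that   ΔE_tot ≈ (ΔC/C) E_kin

to first order, self-consistent rearrangements of the wave functions entering only at second order. Taking a typical kinetic energy of a few GeV for a heavy nucleus (an order-of-magnitude assumption, not a value quoted in this paper), a relative shift of 10⁻³ in C already displaces the total energy by several MeV, in line with Fig. 17.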
If the values of the physical constants used during the adjustment of a given parametrization are not available, then one cannot reliably compare the results with experimental data. In this case, one cannot judge the predictive power of this parametrization.

Similar concerns arise for the parameters of the Skyrme interactions. The energy obtained in our calculations is more sensitive to some Skyrme parameters than to others, but the close agreement observed in Sect. V A is not obtainable without carefully checking that the Skyrme parameters are completely consistent across codes. That this is not trivial can be concluded from Fig. 18. There we plot the relative difference in energy found by Lenteur between modified versions of the SLy4 functional and the correct SLy4. The interaction parameters are the same for every point, except for the density-dependence parameter α in Eq. (2). There are only very few parametrizations for which the value of α corresponds to a terminating decimal, for example SV-min, for which α = 0.255368. For the large majority of parametrizations, the value of α is either 1/3 or, as in the case of SLy4, 1/6. Both correspond to repeating decimals, whose numerical representation might differ from code to code. Using α = 0.1667 in a calculation with SLy4 corresponds to a rounding error of α − 1/6 ≃ 3.33 × 10⁻⁵, which introduces an error in the total binding energy of ⁴⁰Ca of a few tens of keV. It can clearly be seen that a limited representation of α implies a roundoff error that has a visible effect on the energy. This kind of error shows up when comparing Lenteur and Ev8 results, and for this reason we conclude that relative errors smaller than 10⁻⁵ become meaningless. Similar analyses can be made for the other interaction parameters, including the values of the physical constants used to fit the interaction.
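A quick numerical check of this rounding effect (our own sketch; the saturation density and the size of the density-dependent energy contribution are illustrative assumptions):

```python
# Effect of truncating alpha = 1/6 on the density-dependent factor rho^alpha.
rho = 0.16                  # fm^-3, typical nuclear saturation density
alpha_exact = 1.0 / 6.0
alpha_trunc = 0.1667        # four-digit truncation, as in the example above

rel = abs(rho**alpha_trunc - rho**alpha_exact) / rho**alpha_exact
print(f"relative change of rho^alpha: {rel:.2e}")
# Analytically rel ~ |d_alpha| * |ln rho| ~ 3.33e-5 * 1.83 ~ 6e-5; scaled by
# a density-dependent energy contribution of a few hundred MeV, this gives
# the few tens of keV quoted for 40Ca.
```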
Development of an EMG-Controlled Knee Exoskeleton to Assist Home Rehabilitation in a Game Context

As a leading cause of loss of functional movement, stroke often makes it difficult for patients to walk. Interventions to aid motor recovery in stroke patients should be carried out as a matter of urgency. However, muscle activity in the knee is usually too weak to generate overt movements, which poses a challenge for early post-stroke rehabilitation training. Although electromyography (EMG)-controlled exoskeletons have the potential to solve this problem, most existing robotic devices in rehabilitation centers are expensive, technologically complex, and allow only low training intensity. To address these problems, we have developed an EMG-controlled knee exoskeleton for use at home to assist stroke patients in their rehabilitation. EMG signals of the subject are acquired by an easy-to-don EMG sensor and then processed by a Kalman filter to control the exoskeleton autonomously. A newly designed game is introduced to improve rehabilitation by encouraging patients' involvement in the training process. Six healthy subjects took part in an initial test of this new training tool. The test showed that subjects could use their EMG signals to control the exoskeleton to assist them in playing the game. Subjects found the rehabilitation process interesting, and they improved their control performance through 20-block training, with game scores increasing from 41.3 ± 15.19 to 78.5 ± 25.2. The setup process was simplified compared with traditional studies and took only 72 s according to a test on one healthy subject. The time lag of EMG signal processing, which is an important aspect of real-time control, was significantly reduced to about 64 ms by employing a Kalman filter, while the delay caused by the exoskeleton was about 110 ms. This easy-to-use rehabilitation tool has a greatly simplified training process and allows patients to undergo rehabilitation in a home environment without the need for a therapist to be present. It has the potential to improve the intensity of rehabilitation and the outcomes for stroke patients in the initial phase of rehabilitation.

INTRODUCTION

Stroke is a major cause of chronic motor disability among adults worldwide (Feigin et al., 2009; Langhorne et al., 2009, 2011). Many stroke survivors suffer from hemiplegia, which makes walking difficult or even impossible. Neurorehabilitation training has been widely used to reduce the handicap and disability caused by stroke (Langhorne et al., 2011). Recent studies have shown that a unique time-limited window of enhanced neuroplasticity exists for 1-3 months after ischemic stroke (Zeiler and Krakauer, 2013), known as the post-stroke sensitive period. Within this unique critical period, both spontaneous and intervention-mediated recovery from impairment are maximal (Murphy and Dale, 2009; Floor et al., 2013; Zeiler and Krakauer, 2013). Motor training and enriched rehabilitation during this period are especially effective in enhancing muscle activity and improving neuromuscular control. However, a crucial question remains regarding how to take best advantage of this critical time-limited window. One major problem is that patients cannot make overt movements, although they may regain some muscular control ability early after a stroke. This barrier greatly limits the delivery of motivational rehabilitation training to patients.
One possible way to overcome this obstacle is the use of a "muscle-computer interface," which measures the electromyographic (EMG) activity of the patient and provides feedback. As an easy-to-use tool, EMG signals have been successfully applied to powered exoskeletons (Tucker et al., 2015; Long et al., 2016; Lambelet et al., 2017). The critical advantage of EMG-based methods is that even though the human subject is unable to generate sufficient joint torque, their intention can still be detected from residual EMG activity and consequently the exoskeleton can be controlled (Peternel et al., 2016). It is therefore possible to train patients during the post-stroke sensitive period.

Different EMG-based exoskeletons have been developed in the past few decades. Several studies have estimated muscular torques from EMG activity using a musculoskeletal model, and this approach has been applied to the control of both upper limb (Buongiorno et al., 2016) and lower limb (Long et al., 2016; Ao et al., 2017) exoskeletons. As alternatives to a musculoskeletal model, some researchers have proposed the use of neural networks to learn the complex relation between EMG and muscular torque (Song and Tong, 2005; Chen X. et al., 2017), or of statistical learning algorithms to classify different action modes or motion patterns from measured EMG signals (Irastorza-Landa et al., 2017; Yun et al., 2017). However, most of these approaches were designed for use in rehabilitation centers or clinics with the assistance of therapists, leading to greatly increased costs and limiting the intensity of rehabilitation treatment (Chen J. et al., 2017).

There are several reasons for the limited application scenarios of these exoskeletons. First, the high cost of traditional EMG acquisition equipment and the complex electrode placement procedure required make them unsuitable for home rehabilitation (Hakonen et al., 2015). Since the signal-to-noise ratio can be improved by placing the electrodes as close to the EMG source as possible (Hakonen et al., 2015), the electrodes of standard EMG laboratory equipment are designed to be placed separately on the skin overlying specific muscles. However, the selection of which muscles to use and the positioning of the electrodes usually need to be done by a therapist. In addition, the skin preparation usually needed for traditional EMG electrodes and the accurate placement of the electrodes are time-consuming (Cram and Rommen, 1989; Marquez et al., 2018). The cost of EMG equipment is also too high for a patient undergoing home rehabilitation. The second reason relates to the control methods. Both the musculoskeletal model and the neural network method expend most of their effort on increasing the accuracy of predicting muscle torque or of classification, which is important with regard to biomechanics and physiology (Lenzi et al., 2012). However, these methods depend strongly on the subject's anatomy as well as on the placement of the electrodes, and usually require a precise calibration procedure, which may be unnecessary for effective exoskeleton control (Lenzi et al., 2012). User-dependent and session-dependent calibration procedures are time-consuming and cannot be done by the subject alone, which limits their use to a laboratory environment rather than a home setting. Moreover, the inconvenience of donning and removing the EMG sensor and exoskeleton, the complex setup procedure, and the tedious training process also make these exoskeletons unsuitable for home-based rehabilitation.
Thus, the development of an EMG-controlled exoskeleton that is simple, acceptable, and effective in improving lower limb function, and that is able to assist home rehabilitation for patients in the post-stroke sensitive period, is an urgent task.

The challenges of home-based robotic therapy are to make the rehabilitation robot system safe and easy to use in a home setting (Sivan et al., 2014). The rehabilitation system should be acceptable to the patient and enable them to complete the training process independently, without the therapist being present for each session. The technology also needs to match general therapy principles (e.g., intensity, motivation) and provide the patient with the relevant therapy (Sivan et al., 2014). For an EMG-controlled exoskeleton, choosing the optimal way to assemble the electrodes, making it easy to don and remove the EMG sensor and exoskeleton, simplifying the calibration and setup procedures, developing appropriate methods to process the EMG signal, and maintaining motivation to participate in rehabilitation are all of significant importance.

There have been a few studies investigating simple solutions for providing effective robotic assistance by exoskeletons (Lenzi et al., 2012; Lince et al., 2017). Lenzi et al. (2012) modeled the EMG-torque relationship by a second-order Butterworth filter and applied an assistive torque proportional to the envelope of the EMGs to an elbow exoskeleton. Their study showed that subjects can compensate for the imprecision of torque estimates and still benefit from robotic assistance. This approach has the advantage that the proportional control greatly decreases the complexity of setup, but it also suffers from the filtering method used, which is unable to maintain both smoothness and responsiveness. Menegaldo (2017) found that the delay caused by the Butterworth filter can be up to 320 ms, which is too long to allow real-time control. Compared with a Butterworth filter, a Kalman filter reduces both the delay and the computational demand remarkably (Menegaldo, 2017). Some passive training devices, such as robots with continuous passive motion (CPM), have also been used in the home setting (Lynch et al., 2005; Hu et al., 2009; Mau-Moeller et al., 2014).

This study aims at improving the effectiveness of stroke rehabilitation in the initial phase. Our efforts focus on delivering intensive and motivational rehabilitation training to these patients. To improve training intensity, realizing home rehabilitation with an exoskeleton is definitely helpful, since it makes rehabilitation easier to access. In order to motivate the patients, we first choose EMG control to involve the patient's neural system in the rehabilitation. Second, biofeedback is provided to the patient so that they can easily observe their muscle activities. Third, a game is further developed to make the training process more interesting and challenging. This study contributes to making home rehabilitation accessible for stroke patients during the critical period, since most patients go home after 29-55 days in hospital (Jørgensen et al., 1995).

In the present study, we develop an EMG-controlled knee exoskeleton to assist home rehabilitation and investigate whether healthy subjects can use it to perform a challenging task and further improve their control strategy after practicing a visuomotor game. This user-friendly training system, which can be applied to both stroke patients and healthy subjects, provides a motivating and challenging training environment.
Based on this setup, we investigate whether training with an EMG-controlled knee exoskeleton in a game context can produce significant motor learning in healthy subjects. Compared with passive training in the home setting, such as with a CPM device (Lynch et al., 2005; Hu et al., 2009; Mau-Moeller et al., 2014), training with the proposed system is performed actively, so greater effects should also be expected in functional recovery. The game contexts developed in this study are also good for continuous training. Taken together, significant benefits are expected from the proposed system.

This paper describes the EMG sensor and the data processing method, as well as the mechanical design and control strategy of the exoskeleton for home-based therapy. A pilot experiment on six healthy subjects establishes the feasibility of the training system. Results concerning both the evaluation of the system and the performance of the experimental subjects are presented and discussed, and indicate that this home-use rehabilitation tool shows promise for improving the outcomes for stroke patients in the initial phase of rehabilitation.

MATERIALS AND METHODS

Six healthy subjects (four males and two females, mean age 24 years, range 22-26 years) who were naive to this training system were recruited to the experiment. The study was approved by the Biological and Medical Ethics Committee of the Beijing University of Aeronautics and Astronautics in accordance with the Declaration of Helsinki, and all subjects gave written informed consent before participation.

EMG activity of the thigh muscles was recorded by a Myo thigh-band (Figure 1) and then processed by a Kalman filter for use in controlling an exoskeleton. The lower limb exoskeleton, which has four active degrees of freedom (DOF), was seated on a chair and used to assist the subject in knee rehabilitation. A Flappy Bird game, which was implemented in Python 2.7 on a standard computer with the Ubuntu 14.04.03 operating system, was used to motivate the subject to do active training, with the bird driven by the knee joint of the exoskeleton. During the training process, the subject was seated on the chair wearing the Myo thigh-band and the exoskeleton. In the game context, the subject needed to keep the flappy bird flying across a series of pipes and obtain as high a score as possible by trying to extend their shank against gravity to control the movement of the knee exoskeleton. Multisensory stimulation was provided to the subject. This setup is intended to facilitate strengthening of the anti-gravity knee extensor muscles and improving knee joint movement stability and accuracy.

The EMG Sensor

The Myo thigh-band (Figure 1), which was reassembled from two Myo armbands (Thalmic Labs Inc., www.myo.com), consists of 16 dry surface EMG (sEMG) sensors and two nine-axis inertial measurement units (IMUs), as well as two vibrating motors. The thigh-band electrodes form an extendable cuff that is able to adjust to the thigh in a flexible manner. A subject is able to don and remove the thigh-band without the therapist needing to be present. The vibrating motors are used to provide haptic feedback, which is applied during game play (described in more detail below). With a sampling frequency of 200 Hz for raw sEMG data, the Myo thigh-band communicates wirelessly with the host computer via Bluetooth Low Energy (BLE).
Data Processing of the EMG

The "raw" EMG data from the Myo thigh-band, which have already been rectified and low-pass filtered, are still quite noisy and cannot be used directly. Traditional filtering methods like moving-average windowing (Lee et al., 2011; Chen and Wang, 2013) and the Butterworth filter (Lenzi et al., 2012) have relatively long time lags that make them unsuitable for real-time control, especially in the challenging game context. Here a Kalman filter is used to process the acquired raw EMG data.

Denoting the measured raw EMG by Y_k and the filtered EMG by X_k, we initialize the previous estimate X_{k-1} as X_0 and its estimated error P_{k-1} as P_0. As depicted in Figure 2, the process of using the Kalman filter to estimate the EMG can be divided into four steps: (1) prediction; (2) calculating the Kalman gain and producing the current estimate; (3) calculating the estimate error; (4) updating the state.

In the prediction step, the Kalman filter produces an estimate of the current state variable X_k^p, along with its uncertainty or estimate error P_k^p, from the previous states X_{k-1} and P_{k-1}:

X_k^p = X_{k-1}    (1)
P_k^p = P_{k-1} + Q    (2)

Here we assume that the EMG estimate does not change from one time step to the next, so the prediction X_k^p is the same as the previous state X_{k-1}. In Equation (2), the process noise variance Q is added to the estimate error P_k^p. Once the next measurement Y_k, which is corrupted with measurement noise variance R, has been observed, the estimate is updated using a weighted average with weight given by the Kalman gain KG:

KG = P_k^p / (P_k^p + R)    (3)
X_k = X_k^p + KG (Y_k - X_k^p)    (4)

The larger the uncertainty of the prediction relative to the measurement noise, the larger the Kalman gain. X_k is the output of the Kalman filter, namely, the filtered EMG. Then the estimate error P_k needs to be updated too, as follows:

P_k = (1 - KG) P_k^p    (5)

By assigning the current state to the previous state, the algorithm recurses to the next round:

X_{k-1} <- X_k,  P_{k-1} <- P_k    (6)

The Kalman filter assumes that all errors are Gaussian-distributed. As shown in the flowchart of Figure 2, for each measured raw EMG sample Y_k there is an output filtered EMG X_k. Even though the Kalman filter has been applied to EMG processing before (Menegaldo, 2017), the EMG model used here is different. As described above, the prediction step (Equations 1, 2) is based on our modeling of the EMG: since we aim to obtain a stable output from the noisy raw EMG, we model the true (filtered) EMG signal as a constant signal. This is why the prediction of the next state X_k^p is the same as the previous state X_{k-1}; this model lowers the computational cost and makes the output of the filter smoother.

The Q and R values, which determine the filtering behavior, were obtained by trial and error. By comparing a list of Q and R values on a variety of raw EMG signals, we chose the Q and R that gave sufficiently good output signals, neither too noisy nor too delayed. In this study, the process noise variance was Q = 0.0001 and the measurement noise variance was R = 0.59948; the same values were applied to all subjects and all channels. The control signal of the exoskeleton is obtained from the mean over eight channels of the filtered EMG related to the extensors of the knee (quadriceps femoris). A compact implementation of this filter is sketched below.
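The following minimal Python sketch (our own illustration of the filter described above, not the authors' code; the initialization values x0 and p0 are assumptions) applies the scalar Kalman filter with the stated Q and R to a stream of raw EMG samples:

```python
import numpy as np

def kalman_filter_emg(raw_emg, q=0.0001, r=0.59948, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a constant-signal model, as in Eqs. (1)-(6).

    raw_emg: 1D array of raw EMG samples Y_k.
    q, r:    process and measurement noise variances (values from the paper).
    x0, p0:  initial state estimate and its error (initialization is ours).
    """
    x, p = x0, p0
    filtered = np.empty(len(raw_emg))
    for k, y in enumerate(raw_emg):
        # Prediction step, Eqs. (1)-(2): constant-signal model.
        x_p = x
        p_p = p + q
        # Update step, Eqs. (3)-(4): weighted average via the Kalman gain.
        kg = p_p / (p_p + r)
        x = x_p + kg * (y - x_p)
        # Estimate-error update, Eq. (5), and state hand-over, Eq. (6).
        p = (1.0 - kg) * p_p
        filtered[k] = x
    return filtered

# Example: smooth a noisy burst sampled at 200 Hz, like the Myo output.
t = np.arange(0, 2, 1 / 200)
raw = 0.5 * (t > 0.5) * (t < 1.5) + 0.2 * np.random.randn(t.size)
smooth = kalman_filter_emg(raw)
```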
Actuation Design and Range of Motion

As shown in Figure 3, the powered lower limb exoskeleton provides active assistance at both the hip and knee joints in the sagittal plane. Each active joint is driven by a brushless motor (Maxon EC 90 flat, Maxon Motor AG, Switzerland) through a harmonic reducer. The harmonic reducer of the hip joint (CSD-25-160-2UH, Harmonic Drive Systems, Inc., Japan) has a reduction ratio of 160:1 and provides a nominal joint torque of 89.6 N m, while the harmonic reducer of the knee joint (CSD-25-100-2UH, Harmonic Drive Systems, Inc., Japan) has a reduction ratio of 100:1 and provides a nominal joint torque of 56 N m. The range of motion at the hip joint is 100° in extension and 40° in flexion, while that at the knee joint is 110° in flexion and 10° in hyperextension. The ankle joint, which is in parallel with two linear springs, is a passively adaptive joint with a range of motion from 25° in flexion to 25° in extension.

Structure and Weight

Most supporting parts of the exoskeleton are made of aluminum, while its shell is 3D-printed. The lengths of both the thigh and shank segments can be adjusted to the wearer's leg length. As shown in Figure 3, the exoskeleton is attached to the waist, thighs, shanks, and feet of the wearer. The fixation system consists of flexible bandages together with supporting connection parts on the exoskeleton, thus allowing quick and easy fastening. The total weight of the exoskeleton is 20 kg (including the electronic components and battery), while the weight of the exoskeleton's lower leg is 0.92 kg.

Sensing and Electronics Design

The joint position is measured by an absolute encoder mounted at the rotational shaft of each joint (Figure 3), whereas the joint velocity is derived from the incremental encoder attached to each motor. As depicted in Figure 4 (schematic representation of the control system and the sensing and electronics design of the exoskeleton), real-time control is performed by a digital signal processor (DSP) and three field programmable gate arrays (FPGAs). The DSP acts as the main controller, communicating with the host computer through a serial port and sending control signals through a controller area network (CAN) bus to the motor drivers. The FPGAs collect positional data from the absolute encoders through a BiSS-C interface and communicate with the DSP through parallel ports in real time. The exoskeleton (motors and electronics) is powered by a 36 V, 6800 mA h lithium-ion polymer battery, which weighs about 0.9 kg. In this experiment, since only one knee joint was activated at a time, the exoskeleton could work for more than 3 h on this battery, which was sufficient for our experiment.

Safety

To ensure the safety of the exoskeleton wearer, protection is implemented at three levels. The first is at the software level, limiting the movement range and speed of each joint in the program. The second level is electronic protection, achieved by mounting an overtravel-limit switch on each side of each active joint (Figures 3, 4). If one of these switches is pressed, i.e., if the exoskeleton has reached its limit of movement, the power is cut off. To prevent excessive force from being exerted on the leg and injuring the user, the maximum output torque of each motor is limited by the motor driver. In addition, there is an emergency switch handled by the experimenter to protect the wearer in case of emergency. The third level of protection is mechanical, preventing any joint from overtravel by means of stops (Figure 3).

Control Algorithm

As shown in Figure 5, the real-time control algorithm of the exoskeleton is based on position control with an inner velocity control loop. Proportional (P) to the filtered EMG, the desired knee joint position θ_d is sent to a proportional-derivative (PD) controller, which works as the position controller. This position controller generates the desired angular velocity ω_d from the error between the desired joint position θ_d and the actual joint position θ_a, and sends it to the velocity controller. The velocity controller, which uses a proportional-integral-derivative (PID) control strategy, is implemented in the motor driver; it acquires the motor velocity ω_meas from the incremental encoder and sends a control command ω_com to the motor. A sketch of this cascaded loop follows.
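The cascaded structure can be summarized in a few lines of Python (our own illustration; the gains, control period, and function name are invented for the example and are not the values used on the device):

```python
DT = 0.005                   # s, control period (assumed for this example)
KP_POS, KD_POS = 8.0, 0.2    # PD position-loop gains (illustrative only)

def pd_position_controller(theta_d, theta_a, prev_error, dt=DT):
    """Outer PD position loop: turn the error between the desired and
    actual knee joint angles into a desired angular velocity w_d."""
    error = theta_d - theta_a
    w_d = KP_POS * error + KD_POS * (error - prev_error) / dt
    return w_d, error

# w_d is then tracked by the inner PID velocity loop implemented in the
# motor driver, which compares it with w_meas from the incremental encoder
# and outputs the velocity command w_com to the motor.
```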
Performing the Visuomotor Training Game

The visuomotor training game requires the subject to sit on a chair wearing the Myo thigh-band and exoskeleton and to perform knee extension movements against gravity. Before performing the training task, some preparation and calibration need to be done. The general testing procedure is depicted in Figure 6 and includes (i) donning the Myo thigh-band, (ii) donning the lower limb exoskeleton, (iii) determining the maximal voluntary EMG range achievable by the subject, and (iv) performing the visuomotor training task. The user is provided with guidance and visual feedback at each step and is allowed to repeat the procedure if necessary.

Donning the Myo Thigh-band

The extensors of the knee (quadriceps femoris muscle, etc.) are the muscles most significantly involved in knee extension against gravity, whereas the activation level of the flexors (biceps femoris muscle, etc.) is very low. This is because in such voluntary movements the subject can flex his or her knee joint under the force of gravity without activating the flexor muscles. Therefore, we collected EMG data only for the knee extensors. The subject was instructed to put the Myo thigh-band on the middle of the right or left thigh (around 200 mm from the knee joint, where the rectus femoris is located; Figure 7). During the test, only half of the electrodes of the thigh-band were activated, namely the eight channels on the quadriceps femoris side of the thigh. Therefore, when donning the Myo thigh-band, the activated part (with a blue flashing light) was placed on the quadriceps femoris (front) side of the thigh.

Donning the Knee Exoskeleton

The exoskeleton was placed on a comfortable chair with the hip joints fixed at 90° in extension and the knee joints initialized at 90° in flexion (Figure 6). The subject just needed to sit on the chair and fix their shanks, thighs, and waist to the supporting parts of the exoskeleton. The experimenter provided any necessary assistance to the subject. In this setup, only one knee joint of the exoskeleton was activated and could be controlled by the sEMG of the subject (Figure 5), whereas the other three active joints were fixed at their initial joint angles. In the following description, a knee joint angle of 0° represents the sitting posture, i.e., with the thigh perpendicular to the shank, while a knee joint angle of 90° means that the thigh and shank are in line.

Determining Maximal EMG Activity

After the subject had donned the Myo thigh-band and the exoskeleton, their maximal EMG activity was determined (Figure 6). The signal used here was the mean over the eight channels of the filtered EMG related to the knee extensors. During this process, the subject first relaxed for 5 s and then performed an isometric knee contraction for 5 s with the exoskeleton on.
We determined the maximal voluntary EMG (MVE) that could be maintained for at least 1 s during maximal voluntary contraction of the knee extensors. Similarly, the bias (Bias) was obtained from the measured EMG signal while the muscles were relaxed. Since patients in the post-stroke sensitive period are also unable to produce overt movement, testing MVE with isometric contraction is meaningful for both stroke patients and healthy subjects. The values obtained were used to adapt the knee exoskeleton movement individually to the EMG range of the subject. In other words, the knee joint angle was proportional to the filtered EMG, with Bias and MVE corresponding to the minimum (0°) and maximum (90°) knee joint angles, respectively. To avoid fatigue, only 60% of the maximal EMG activity was used for the training task described below, which means that the knee joint movement range was from 0° to 54°.

Flappy Bird Visuomotor Training Task

Previous studies show that task-oriented intense training in an environment that provides timely feedback, motivation, stimulation, and confidence significantly improves rehabilitation outcomes (Johansson, 2011). Videogame-based interventions with these features have attracted attention. Figure 6 shows a screenshot of the Flappy Bird visuomotor training task, which depicts a bird flying in the sky with some pipes as obstacles. The task aims at improving knee joint movement stability and accuracy, as well as thigh extensor muscle strength against gravity. It also has the potential to facilitate motor recovery and provide new possibilities for cortical reorganization and enhancement of functional mobility (Santos et al., 2016).

The goal of this task is to control the bird's flight across the pipes. The position of the flappy bird in the vertical direction on the screen is proportional to the knee joint angle of the exoskeleton, which means that the lowest and highest positions of the bird in the sky correspond to knee joint angles of 0° and 54°, respectively. All subjects were instructed and guided in how to use their muscles (or EMG) to control the exoskeleton or the bird: extension movement (the bird flying up) is achieved by activating the thigh extensors, and flexion movement (the bird flying down) by relaxing them.

In each block, the subject had four bird lives (or trials). If the bird flew across a pair of pipes (upper and lower pipes), the subject gained one point as a reward. However, if the bird hit the pipes, the subject lost one bird life and the bird hovered in the sky and stopped moving forward. When the subject was ready for the next flight, he or she could press the space key to start the next bird life and move forward again. The subject's aim was to obtain as many points as possible with four bird lives. Once all four lives had finished, the game stopped and the subject could choose either to exit or to replay the game. To prevent fatigue, subjects were provided with rest intervals throughout the experiment (Video S1).

Multisensory (visual, auditory, tactile) feedback to the subject about the movement performance was provided (Figure 1). Different sounds indicating gaining one point or losing one bird life were played during the game. Haptic feedback generated by the vibrating motor of the Myo thigh-band acted as a punishment when the bird hit the pipes.
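To make the calibration mapping described above concrete, here is a minimal sketch (our own illustration; the screen height in pixels and the function names are assumptions) of how a filtered EMG sample is turned into a knee angle command and a bird height:

```python
def emg_to_angle(emg_filt, bias, mve):
    """Knee angle proportional to the filtered EMG: Bias -> 0 deg,
    MVE -> 90 deg, capped at 60% of MVE (i.e., 54 deg) to avoid fatigue."""
    level = (emg_filt - bias) / (mve - bias)
    return 90.0 * min(max(level, 0.0), 0.6)

def angle_to_bird_y(theta, screen_height=512):
    """Bird's vertical position is proportional to the knee joint angle;
    0 deg is the lowest and 54 deg the highest point on the screen."""
    return (theta / 54.0) * screen_height
```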
Such enriched game experiences have been demonstrated to increase patients' motivation and facilitate functional recovery by engaging appropriate neural circuits in the motor system (Perez-Marcos et al., 2017).

To make the game more interesting and challenging, some of its parameters were adjusted during play. The pairs of pipes appeared randomly at different heights, and the gap between the pipes in each pair became narrower as the score increased. At the same time, the bird's flying speed in the horizontal direction also increased with the score. Both the narrowing gap and the increasing flying speed made the game progressively more difficult, which meant that the subject needed not only to move the knee joint in a more stable manner, but also to respond to the changes in the pipes more quickly.

Neurorehabilitation programs should include activities or tasks that enable patients to float in their Flow Zone, defined as the state in which the person experiences a high level of enjoyment because the difficulty of the task is balanced against the person's abilities (Perez-Marcos et al., 2018). Following this principle, the Flappy Bird game tried to make the subject feel comfortably challenged and highly engaged by the task. Maintaining a state of flow is important for promoting patients' adherence to treatment, especially for home-based rehabilitation (Perez-Marcos et al., 2018).

Figure 8 shows how the difficulty of the game changed, as represented by the increasing bird velocity and the decreasing gap size between pipes, as the score increased. The velocity of the bird in the horizontal direction was low at first and increased gradually to 2.5 times its initial value. At the same time, the gap size decreased from 300 pixels to 190 pixels (the size of the bird remained at 48 pixels throughout). Once the score exceeded 100, the level of difficulty ceased to change. The low initial speed and relatively wide gap between pipes allowed the subject to practice and learn the game at the beginning. Once the subject had become familiar with the control, the game became more and more challenging, which also provided motivation for the subject to continue playing.

Experimental Design

Six healthy subjects were recruited to take part in the experiment, to investigate whether they could use the EMG-controlled knee exoskeleton to assist them in home rehabilitation and further improve their EMG control strategy with repetitive task training. Each subject performed a 10-block visuomotor training game with each leg, with interblock rest intervals of 30 s. In order to control for leg dominance, subjects were randomly assigned to two groups: half of the subjects started with the left leg and the other half with the right leg. After finishing the game with one leg, they transferred the Myo thigh-band to the other leg and continued.

The experimental protocol is shown in Figure 9. Since all the subjects were naive to the game, the experimenter first explained the game to them and then guided them in donning the Myo thigh-band and exoskeleton. After the MVE had been determined, the subjects played the game by themselves. The experimenter sat beside the subject during the whole testing procedure with the emergency switch in hand, both in case of emergency and to provide any guidance needed. One test session, which included 10 training blocks on the left leg and 10 on the right leg, lasted around 75 min.

Score

The score is the number of points obtained by the subject within one block.
As the main evaluation variable of this visuomotor game, we further calculated the mean and standard deviation (SD) of the score across the six subjects for both the first and the second legs.

Muscle Activation Level

In order to quantify how much the subjects actually activated their muscles, we defined the muscle activation level (MAL) at each time step as

MAL(t) = (EMG(t) - Bias) / (MVE - Bias)

where EMG(t) is the processed EMG signal extracted from the thigh extensors, and the parameters MVE and Bias were measured during the calibration process. We then obtain the mean muscle activation level (mMAL) in each block via

mMAL = (1/T) Σ_{t=1}^{T} MAL(t)

where T is the total number of time steps in one block. Similarly, we calculated the mean and SD of mMAL across the six subjects for both the first and the second legs.

Block Activation Time

The block activation time (BAT) is the time taken by the subject to actively play one block of the Flappy Bird game. The block activation time represents the active therapy duration provided to the subject. As an important metric reflecting the therapy dose or intensity, it is a critical factor in achieving a positive outcome. The mean and SD of the block activation time across the six subjects were calculated for each leg.

Statistical Analysis

A one-way analysis of variance (one-way ANOVA) was performed when appropriate for the above metrics.

RESULTS

To evaluate the performance of the training system as well as of the subjects, we analyzed the data from the experiment, with the following results.

Time to Set Up the Training

For home rehabilitation, it is important to simplify the setup process, since the therapist cannot be present at each session. With the Myo thigh-band, the electrodes do not need to be placed precisely over specific muscles. The subject just needs to don the thigh-band with the activated part on the front of the thigh, by themselves or with the assistance of anyone around. The only calibration procedure required is to determine the MVE automatically, with the participant performing according to on-screen guidance. One participant was asked to set up the training independently 10 times, and the time spent was measured. According to this test, the entire setup process, including donning the Myo thigh-band and knee exoskeleton and determining the MVE, took 73.2 ± 10.7 s.

FIGURE 9 | Experimental protocol. Subjects were assigned to two groups, with one group (three subjects) starting with the left leg and the other group (three subjects) with the right leg. All subjects performed the training task with both legs. For each leg, after determining the MVE, a 10-block training game was performed, with each block consisting of four bird lives.

Performance of the Kalman Filter

To quantify the performance of the Kalman filter, one subject was asked to activate the muscle extensors three times in 30 s. The raw Myo EMG data from one channel measuring the extensors and the corresponding EMG data filtered using the Kalman filter are shown in Figure 10, from which it can be seen that the filtered EMG is much smoother than the raw EMG. As shown in the inset of Figure 10, a fast Fourier transform (FFT) was performed to analyze the frequency content of the raw and filtered EMG data. The FFT analysis demonstrated that the Kalman filter attenuated signal noise at frequencies above 1 Hz in the raw EMG. We implemented a cross-correlation analysis (Cohen, 2014) on the raw and filtered EMG data, and the result showed that the delay caused by the Kalman filter was 64 ms.
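A lag estimate of this kind can be obtained from the peak of a cross-correlation, as in the following sketch (our own illustration of the standard technique cited as Cohen, 2014; the signals and function name are placeholders):

```python
import numpy as np

def estimate_lag_ms(reference, delayed, fs=200.0):
    """Estimate the delay of `delayed` relative to `reference` (both 1D,
    same length) from the peak of their cross-correlation."""
    ref = reference - reference.mean()
    dly = delayed - delayed.mean()
    xcorr = np.correlate(dly, ref, mode="full")
    lag_samples = np.argmax(xcorr) - (len(ref) - 1)
    return 1000.0 * lag_samples / fs

# Example: a signal delayed by 13 samples at 200 Hz -> about 65 ms.
t = np.arange(0, 5, 1 / 200)
ref = np.sin(2 * np.pi * 0.8 * t)
dly = np.roll(ref, 13)
print(estimate_lag_ms(ref, dly))   # ~65.0
```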
Performance of the Knee Exoskeleton

Since the control strategy of the knee exoskeleton is based on position control, the most important evaluation criterion is its tracking ability (Jia, 2000), which can be quantified by the root mean square error (RMSE) between the desired joint angle and the actual joint angle. Figure 11 shows the typical knee joint angle tracking performance of the exoskeleton, based on data obtained from one subject in this experiment. The desired joint angle is proportional to the filtered EMG signal (the mean over the eight channels of the filtered EMG), and the actual joint angle is measured by the absolute encoder at the knee joint. In this figure, the lower values (< 10°) correspond to the relaxed phase, while the higher values (> 20°) indicate how the participant controlled the bird to cross the pipes in the game. We can see that the actual joint angle of the exoskeleton generally followed the desired joint angle.

We analyzed the data (10 blocks for the left legs and 10 blocks for the right legs) from all six subjects performing the Flappy Bird game, considering the two legs separately or both together, and the results are shown in Table 1. As can be seen, the RMSE between the desired and actual joint angles was 1.56° ± 0.21° when both legs were considered together. A one-way analysis of variance (one-way ANOVA) performed in Python further indicated that there was no significant difference between the left and right legs. The time lag between the desired and actual joint angles was calculated using a cross-correlation analysis (Cohen, 2014), and the results are also presented in Table 1. The time lag caused by the exoskeleton was around 110 ms, and there was no significant difference between the left and right legs in terms of time lag (p = 0.864 > 0.05) according to one-way ANOVA.

Score

We quantified the performance of the subjects by the scores they obtained in each block. Since each subject played the game using both legs in turn (either left and then right or vice versa), we analyzed the performance of the first and second legs to be tested. As can be seen in Figures 12A,B, for the first leg the performance generally improved (score rising from 41.3 ± 15.2 to 67.0 ± 17.4) and reached its highest level at the end of the game. For the second leg, however, the performance was initially better (a starting score of 53.8 ± 26.7) and rose quickly to its highest value (78.5 ± 25.2) at block 5, after which it deteriorated and then generally remained stable until the end of the game (61.7 ± 28.7). For both the first and second legs, the score improved after 10 blocks of training, even though no significant difference was found (p > 0.05). This may be because the game became more and more challenging as the score increased, which means that gaining one more point at the end is much harder than at the beginning.

Muscle Activation Level

The muscle activation level analysis showed that the mMAL generally remained stable between 20% and 30% during the whole training process (Figure 13). One-way ANOVA indicated that there was no significant difference between blocks (p > 0.05) for either leg.

Block Activation Time

The block activation times of the first and second legs are depicted in Figure 14. For the first leg, the block activation time increased from 111.8 ± 22.8 to 152.4 ± 32.2 s. For the second leg, the block activation time increased from 129.0 ± 41.5 to 139.1 ± 48.89 s. No significant improvement was found at the end of the training for either leg (p > 0.05).
DISCUSSION

The observation or imagination of body movements facilitates motor recovery and provides new possibilities for cortical reorganization and enhancement of functional mobility (Santos et al., 2016). Thus, it appears that movement visualization may play an important role in motor rehabilitation (Santos et al., 2016). Motor recovery of stroke patients who are too weak to make overt movements is a big challenge, since voluntary muscular contractions do not lead to significant sensory feedback, which makes rehabilitation training less effective in motivating patients and enhancing motor skill learning gains (Pereira et al., 2015). The use of EMG-controlled exoskeletons together with visuomotor training tasks might provide a new opportunity for this group of patients. Nevertheless, previous studies of EMG control found low predictability and high variability, which may impair motor learning.

In the present paper, a preliminary study was conducted, and we found that healthy subjects could learn to control a user-friendly knee exoskeleton to perform an interesting visually guided game using EMG signals in a simulated home setting. The setup was significantly simplified by improving the system in a number of ways, such as reassembling the EMG electrodes and introducing a new signal processing method, thereby making it possible to assist patients undergoing home rehabilitation. The proposed home-based rehabilitation system should allow improvements in the intensity of training and make rehabilitation more convenient for the patient. The results further indicated that all subjects improved their task performance through training. Initial feedback from the volunteer subjects confirmed that this interesting and challenging training system is not only easy to use but also motivating for the patient, making it a promising strategy for active training of patients in the early rehabilitation phase.

EMG Controller and Knee Exoskeleton: System Characteristics

By combining two Myo armbands, the Myo thigh-band used here provides a more convenient instrument for acquiring EMG data online than traditional EMG systems (see, e.g., Wolf and Binder-Macleod, 1983; Armagan and Oner, 2003; Song and Tong, 2005; Crow et al., 2009; Buongiorno et al., 2016; Peternel et al., 2016; Ao et al., 2017; Chen X. et al., 2017; Irastorza-Landa et al., 2017; Yun et al., 2017). In particular, both patients and healthy subjects can don and remove the device easily without the therapist being present, because of its dry electrodes and extendable cuff. By guiding the subject through a calibration routine in which they perform a one-knee isometric extension contraction at maximum level, the training system is individualized, which provides an intrinsically adaptive aspect when the training lasts several days or even weeks. The short setup time, with a calibration process taking only about 73.2 s, makes this system greatly appreciated by users.

Another challenge facing EMG-based control systems is the need to transform highly variable raw EMG into a smooth, rapidly responding control signal. Since delays can impair visuomotor control and learning (Honda et al., 2012), here we used a Kalman filter to remove signal noise above 1.2 Hz with a time lag of 64 ms. Because knee extension involves almost all the muscles on the quadriceps femoris side, our control signal used the mean over eight channels of the filtered EMG on the extensor side of the thigh.
The knee exoskeleton here acts like a therapist, providing assistance to patients in their rehabilitation as they perform a specific task, but with control and intention provided by the patients themselves. The exoskeleton has the potential to assist stroke patients in moving their lower legs as they wish, so that movement visualization can be achieved. The time lag caused by the exoskeleton was about 110 ms. Previous studies have shown that electromechanical delay (EMD), typically defined as the time lag between electrical activation of a muscle and the onset of the exerted force (Cavanagh and Komi, 1979), is between 30 and 150 ms (Zhou et al., 1995; Blackburn et al., 2009; Nordez et al., 2009; Yavuz et al., 2010). Úbeda et al. (2017) even found EMDs ranging from 112 to 361 ms. Considering that EMG appears about 125 ms before force generation (Blackburn et al., 2009), the 170 ms time lag in our training system (caused by both the filter and the exoskeleton) is very short and acceptable. Most subjects in our experiment said that they did not feel any time lag in the system.

Flappy Bird Game

For stroke rehabilitation, motivation is especially important. Studies show that motivation influences the effectiveness of rehabilitation (Rapoliene, 2018), and activating patient participation in the therapy is a guiding principle of rehabilitation (Sitaram et al., 2016). The Flappy Bird game not only makes the rehabilitation process more interesting, but also motivates the patient to take an active part in the training. Making the game neither too easy nor too difficult for patients is key: that is to say, the game should be fitted to the patients, rather than the other way round. Adjusting the difficulty to the patient's pace of recovery not only maximizes training potential, but also prevents habituation and frustration (Perez-Marcos et al., 2018). Enabling patients to float in their Flow Zone helps keep patient motivation at an optimal level during the long rehabilitation process (Perez-Marcos et al., 2018). In our experiment, we started the task at a low level of difficulty and gradually made it more challenging. This allowed the subjects to learn and adjust to the task at the beginning and then improve their skill gradually as the difficulty increased. The rehabilitation dose, which might be a critical factor in achieving a positive outcome, can also be increased, since this challenging game can motivate subjects to continue rehabilitation with the aim of improving their game scores. Appropriate and timely feedback (e.g., reward and punishment), together with adaptation of difficulty levels, can boost and maintain patients' motivation for as long as possible (Perez-Marcos et al., 2018). Besides positively affecting motivation and enjoyment of training, videogames also affect cognition. In particular, playing action videogames (i.e., games that emphasize physical challenges) has been shown to robustly enhance attention and spatial cognition (Perez-Marcos et al., 2018).

Brain plasticity is the basis of rehabilitation (Johansson, 2000), and closed-loop neurofeedback with real-time training is good for brain plasticity (Sitaram et al., 2016). As shown in Figure 1, the flappy bird in the game, the EMG controller, and the knee exoskeleton together with the subject form a closed control loop. In this loop, playing the game to obtain as many points as possible becomes the objective of the patient, which makes the patient generate intentional movement.
The Myo thigh-band records the EMG signals, and the EMG controller decodes them to produce the intended knee movement. By actuating the motor, the patient can perform the desired movement with the assistance of the knee exoskeleton. With this complete loop, we effectively change the objective of stroke patients from doing rehabilitation exercises to playing an interesting game. All the movements in the loop are actively performed by the patients themselves. By involving the corticomotor system in the training process, we may make rehabilitation more effective.

There is substantial evidence that the post-stroke environment can influence the outcome after stroke (Jess and Hannan, 2006). An enriched environment and rehabilitation augment neuroplastic processes and neuronal growth, which ultimately contributes to improved motor function and cognitive skills (Biernaskie and Corbett, 2001). Multisensory stimulation from the videogame provides patients with enriched rehabilitation, which is able to evoke the mirror neuron system and mechanisms of action observation (Perez-Marcos et al., 2018). In the Flappy Bird game, not only visual but also auditory and haptic feedback was implemented, involving the subject's auditory nervous system and haptic perceptual system in the training process. Multisensory stimulation, a challenging gaming environment, and the incorporation of closed-loop mechanics can boost the rehabilitation effect (Perez-Marcos et al., 2018).

Skill Acquisition by the Subjects

In the visuomotor training task, the randomized pipe height requires the subjects to voluntarily activate and maintain their levels of muscle excitation, while the variation in the gap between pipes demands that the subjects actively control their muscular accuracy. The improved final score in each block indicated that subjects improved their control skill within one training session. The performance with the second leg, which exhibited not only a higher starting score but also the highest overall score, was generally better than the performance with the first leg. This indicates that healthy subjects could transfer the learned skill from one leg to the other. However, it remains to be seen whether the same would be true in stroke patients.

The activation level of the muscles did not change much during training, because the game setting is similar in each block. Muscle activity can be affected by changing the knee joint movement range; however, in order to compare scores at the same difficulty level, we did not change it in this experiment. The block activation time, which relates to the rehabilitation dose, also increased. However, no significant effect was found, which might be due to the short training period. By increasing the number of training sessions, the block activation time has the potential to improve further. In this study, we quantified how much the participants actually activated their muscles, and the timing of that activation. Task performance and training dose improved even though the training period was quite short. By increasing the training period, significant improvement is possible.

Limitations and Future Directions

Even though we believe that the proposed EMG-controlled exoskeleton training system has the potential to enhance stroke rehabilitation outcomes, several potential problems still need to be considered before it can be adopted for use with patients.
First, although the sampling rate of the Myo thigh-band and the filtered EMG signal quality were adequate for healthy subjects, more tests need to be done to decide whether this device and the filtering method are appropriate for stroke patients. By applying this device and filtering method to visual feedback tasks such as EMG-based target tracking, we may be able to collect EMG data from stroke patients and further verify whether the filtering method works for them. Second, adapting the difficulty of the game individually to keep patients motivated may increase the acceptability of the system to patients. Since gamified tasks can also help strengthen brain modulation, adapting the training to the patient's needs and performance can make the rehabilitation program more effective (Perez-Marcos et al., 2018). Third, the range of motion, which also influences the muscle activation level during training, will also be individualized for stroke patients. In addition, although the knee exoskeleton could be proportionally controlled by healthy subjects, there is still concern as to whether this will be the case for stroke patients, especially given the possible risk of additional injury during training. For example, patients' unwanted muscle activity, such as spasticity, may also cause the exoskeleton to move proportionally, which could hurt the patient. More safety measures should be implemented and tested. Finally, given that knee flexion also involves the knee flexors, it is worth testing whether collecting knee flexor EMG and introducing it into the control could lead to better performance. Whether the proposed training system can be used by stroke patients with very weak muscle activity is something that will be tested in the future.

CONCLUSION

This paper has described the development and evaluation of a rehabilitation system for home use that combines an EMG-controlled exoskeleton driven by the knee extensors with an engaging visuomotor game that provides motivation for patients. By overcoming a number of difficulties, we have made it possible for the system to be used without the need for a therapist to be present at each session, thus significantly decreasing the cost of training and increasing the intensity and outcome of the rehabilitation process. Initial testing in healthy subjects suggests that using the EMG-controlled exoskeleton in a game context to carry out rehabilitation is feasible and that the training system facilitates the learning of motor skills. Further tests need to be done on stroke patients with low muscle activity to determine whether the EMG-controlled exoskeleton and the visuomotor training task implemented here are suitable for them. A user-friendly home rehabilitation tool like this may improve the outcomes of rehabilitation for patients in the initial rehabilitation phase.

DATA AVAILABILITY

The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.

ETHICS STATEMENT

The study was approved by the Biological and Medical Ethics Committee of the Beijing University of Aeronautics and Astronautics in accordance with the Declaration of Helsinki, and all subjects gave written informed consent before participation.

AUTHOR CONTRIBUTIONS

ML, W-HC, and XD were responsible for the study conception and designed the experiment. ML and W-HC developed the exoskeleton and the EMG-based training system. ML conducted the experiment and collected the data. ML, W-HC, XD, JW, and ZP analyzed the data.
ML, W-HC, and XD wrote the paper. BZ contributed to manuscript preparation. All authors corrected several versions of the paper and approved the final manuscript.
Hypothesis of snake and insect venoms against Human Immunodeficiency Virus: a review

Background: Snake and insect venoms have been demonstrated to have beneficial effects in the treatment of certain diseases, including drug-resistant human immunodeficiency virus (HIV) infection. We evaluated and hypothesized the probable mechanisms of venoms against HIV.

Methods: Previous literature published over a period of 30 years (1979-2009) was searched using the key words snake venom, insect venom, mechanisms, and HIV. Mechanisms were identified and discussed.

Results & Conclusion: With reference to mechanisms of action, properties and components of snake venom such as sequence homology and enzymes (protease or L-amino acid oxidase) may have an effect on membrane proteins and/or act against HIV at multiple levels or on cells carrying the HIV virus, resulting in an enhanced effect of anti-retroviral therapy (ART). This may cause a decrease in viral load and an improvement in clinical as well as immunological status. Insect venom and human Phospholipase A₂ (PLA₂) have potential anti-viral activity through inhibition of virion entry into cells. However, all of these require further evaluation in order to establish their role against HIV, either independently or as a supplement.

Background

Components of snake venom are used for health and diseases [1], an interesting emerging concept. Some snake venom preparations include angiotensin-converting enzyme (ACE) inhibitors and disintegrins (antiplatelet aggregants) [2], and venom components are also used in diagnostic assays of various blood coagulation factors [3]. Alpha-neurotoxin, extracted from cobras, has been shown to have analgesic effects [4,5], and crotoxin from Crotalus durissus terrificus has cytotoxic effects [6]. Recently, Alrajhi and Almohaizeie [7] demonstrated the usefulness of snake venom in a patient suffering from a drug-resistant human immunodeficiency virus (HIV) infection who was on anti-retroviral therapy (ART). In HIV patients, the response after administration of a snake venom preparation [7,8] was an increase in CD4 count and a decrease in viral load. We have recently shown that the components of snake venom might enhance the activity of ART at different levels [9]. Interestingly, insect venom and human secretions also have anti-HIV activity [10-12]. Hence, we evaluated and hypothesized the probable mechanisms of venoms and secretions against HIV infection.

Methods

Previous literature published over a period of 30 years (1979-2009) was searched using the key words snake venom, insect venom, HIV, and mechanisms. Based on the available materials, the probable mechanisms of action of venoms and secretions against HIV were identified and discussed.
Snake Venom The pharmacological activities of snake venom are complex, incompletely understood, and vary amongst the multitude of snake venoms. The mechanisms of action of snake venom against HIV are mediated at various levels [9], such as structural homology, binding interference (receptor/enzyme), catalytic/inhibitory activity through enzymes, and induction/interaction at the membrane level. 1) Structure HIV entry into cells is mediated through binding of the envelope glycoprotein gp120 [13]. There is a striking homology between the sequence 164-174 of a short segment of HIV-1 gp120 and the highly conserved 30-40 amino acid residues of the long loop of snake venom neurotoxins [14,15]. Thus, both may compete for the same receptor or binding site and act against HIV. HIV-1 gp120 (164-174): FNISTSIRGKV; alpha-cobratoxin (Naja naja siamensis): CDKFCSIRGPV (a short sketch comparing these two segments follows this section). 2) Binding a) Snake venom contains Phospholipase A2 (PLA2) [11,16], which protects human primary blood leukocytes from the replication of various macrophage- and T cell-tropic human immunodeficiency virus 1 (HIV-1) strains. PLA2, which is found in the venom of many snakes, has been shown to block viral entry into cells before virion uncoating through prevention of intracellular release of the viral capsid protein [16]. This is mainly due to the specific interaction of PLA2 with host cells and not due to catalytic activity. 3) Enzymatic activity a) L-amino acid oxidase (LAO), present in the venom of Trimeresurus stejnegeri [18], C. atrox, and P. australis [19], inhibits infection and replication of HIV, as measured by p24 antigen, in a dose-dependent manner [18]. p24 antigen is a core protein of HIV and its level correlates with viral load [20]. Besides the binding of the protein to the cell membrane, hydrogen peroxide (H2O2) produced as a free radical could inhibit the infection/replication of HIV, thereby further enhancing the anti-viral activity. In contrast, catalase, a scavenger of H2O2, reduces the anti-viral activity [18]. b) A protein fragment isolated from Oxyuranus scutellatus snake venom is a potent inhibitor of p24 antigen and blocks viral replication of resistant strains [21]. c) Snake venom contains metalloprotease inhibitors [16,22] which could prevent the production of new viruses through inhibition of protease enzymes. HIV infects a CD4 cell of a person's body and then copies its own genetic code into the cell's DNA. The CD4 cell is then "programmed" to make new HIV genetic material and proteins. These proteins are cleaved by the HIV protease enzyme and then used to make functional new HIV particles. Protease inhibitors are used to block the protease enzyme and prevent the cell from producing new viruses. 4) Effect on membrane protein P-glycoprotein (P-gp), a membrane protein, is an energy-dependent efflux transporter driven by ATP hydrolysis [23]. P-gp transports a wide range of substances with diverse chemical structures. In general, P-gp substrates appear to be lipophilic and amphipathic, and P-gp is recognized to play an important role in processes of absorption, distribution, metabolism, and excretion of many clinically important drugs in humans [23]. Because of its importance in pharmacokinetics, inhibition or induction of P-gp by various components of snake venom can lead to significant drug-drug interactions, thereby changing the systemic or target tissue exposure of the protease inhibitors.
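To make the claimed homology concrete, the two 11-residue segments quoted above can be compared position by position. This is a minimal Python sketch, not the alignment method used in the cited studies; the sequences are taken verbatim from the text.

```python
# Compare the two 11-residue segments quoted in the text, position by position.
gp120_164_174 = "FNISTSIRGKV"   # HIV-1 gp120, residues 164-174 (from the text)
cobratoxin    = "CDKFCSIRGPV"   # alpha-cobratoxin loop segment (from the text)

matches = [a == b for a, b in zip(gp120_164_174, cobratoxin)]
identity = sum(matches) / len(gp120_164_174)

# Mark identical positions with '|' to visualize the shared 'SIRG' core.
print(gp120_164_174)
print("".join("|" if m else " " for m in matches))
print(cobratoxin)
print(f"identity: {identity:.0%}")
```

Five of the eleven positions are identical, including the contiguous SIRG core, which is the overlap on which the receptor-competition hypothesis rests.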
At the same time, one has to remember the genetic polymorphism of P-gp [23], which has also been recorded recently, because it may affect drug disposition and produce variable drug effects. Other Clinical Uses of Snake Venom Neurotoxins from snakes such as cobra venom activate central cholinergic pathways, as do nicotine and nicotinic agonists, which have been shown to elicit anti-nociceptive effects in a variety of species and produce a significant analgesic effect [24,25]. PLA2 inhibitors (PLIs) from the habu snake, Trimeresurus flavoviridis, have anti-enzymatic, anti-myotoxic, anti-edema-inducing, anti-cytotoxic, and anti-bacterial activities [26], and hence have been considered in neurodegenerative disorders such as trauma, Alzheimer's disease, Parkinson's disease, and brain tumors [27]. Fibrolase from Agkistrodon contortrix venom degrades the α and β chains of fibrin and has been used as a thrombolytic agent [28]. Snake venom RGD-disintegrins showed direct interaction with several tumor cell lines; they block αvβ3 integrin in tumor cells, thus inhibiting their adhesion to the extracellular matrix and thereby preventing metastasis [29]. PLA2 from Bothrops neuwiedi and Naja naja venom was found to be cytotoxic towards B16F10 melanoma and Ehrlich ascites tumor cells, suggesting potential as an anti-cancer drug [30]. Crotoxin, a pre-synaptic neurotoxin, has been tried as an anti-cancer agent in advanced cancer patients [31]. VRCTC-310, a natural product combining PLA2 from Crotalus durissus terrificus and cardiotoxin from Naja naja atra, has an inhibitory effect against human and murine tumor cell lines and may have value in the treatment of advanced solid cancers refractory to other therapy [32]. Insect Venom 1. Gene expression Melittin is a 26 amino acid amphipathic α-helical peptide, a major component of bee venom [33]. The cecropins are a family of antibacterial peptides 35-39 amino acids in length which occur in a number of insect species and in mammals [34]. Like melittin, they consist of two α-helices linked by a flexible segment and contain amphipathic structures. Melittin and cecropin act against a wide range of infectious agents, including Gram-positive and Gram-negative bacteria [35]. Whereas melittin is lytic for red blood cells at high concentrations, cecropins do not lyse erythrocytes or other eukaryotic cells [35] and appear to be non-toxic to mammalian cells. Melittin has been reported to inhibit replication of murine retroviruses, tobacco mosaic virus [36], and herpes simplex virus [37], suggesting that melittin also displays antiviral activity. Analogous to its antibacterial activity, the antiviral activity of melittin has been attributed to direct lysis of viral membranes, as demonstrated for murine retroviruses [38]. However, melittin also displays antiviral activity at much lower, non-virolytic concentrations, as shown for T cells chronically infected with HIV-1 [39]. Wachinger et al. [10] reported that melittin and cecropin A suppress production of HIV-1 by acutely infected cells and also suppress HIV-1 replication by interfering with host cell-directed viral gene expression [10]. Melittin treatment of T cells reduces levels of intracellular Gag and viral mRNAs, and decreases HIV long terminal repeat (LTR) activity. Besides, HIV LTR activity is also reduced in human cells stably transfected with melittin and cecropin genes. 2. Binding i. Mammalian and venom-secreted PLA2s have been associated with a variety of biological effects.
Fenard et al. [11] suggested that PLA2 protects human blood leukocytes from the replication of various macrophage- and T cell-tropic HIV-1 strains. This is due neither to a virucidal nor a cytotoxic effect on host cells; rather, PLA2 blocks viral entry into cells before virion uncoating, independent of the receptor. Inhibitors and catalytic products of PLA2 have no effect on HIV-1 infection, suggesting that PLA2 catalytic activity is not involved in the antiviral effect. ii. Peptide p3bv is a 21-25 amino acid component derived from secreted phospholipases of bee venom (bvPLA2) [40]. The p3bv peptide inhibits the replication of HIV-1 by preventing the cell fusion process mediated by the T-lymphotropic HIV-1 envelope, without affecting monocytotropic HIV-1. p3bv also inhibits the binding of stromal cell-derived factor-1α (the natural ligand of CXCR4) and 12G5 (an anti-CXCR4 monoclonal antibody). Overall, p3bv blocks the replication of T-lymphotropic HIV-1 strains by interacting with CXCR4, thereby blocking viral entry into cells. iii. PLA2-IA from bee and serpent venom showed in vitro anti-HIV activity, which was due to the ability of the secretions to destabilize anchorage (heparans) and fusion (cholesterol) receptors on HIV target cells [41]. Human PLA2 Interestingly, human PLA2 (group III PLA2) has significant homology with bee venom PLA2 [42]. Several murine and human group phospholipases, such as IIA, X, V, XII, IIE, IB, and IIF, have potential antibacterial effects against gram-positive and gram-negative bacteria [43]. In individuals repeatedly exposed to HIV who remain uninfected, several possible reasons for protection have been proposed but not clearly elucidated [44]. Membrane Kim et al. [12] suggested that human PLA2 and human group X PLA2 (PLA2-X) have potential antiviral activity against diverse lentiviruses through degradation of the viral membrane. PLA2-X has high affinity for phosphatidylcholine, a phospholipid of the outer plasma membrane, and hydrolyzes it. The viral membrane of HIV-1 is rich in phosphatidylcholine and sphingomyelin and may be more susceptible to PLA2-X. Binding PLA2-X inhibits replication of both CXCR4- and CCR5-tropic HIV-1 in human CD4 cells. This effect was observed despite the resistance of the viral preparations to lysis by antibody-mediated complement activation, suggesting that this action occurs even in cases where acquired immunity is ineffective [12]. In view of the above, the antiviral activity of human PLA2 expressed in immune tissues and cells will be particularly interesting to analyze in the future [44]. Debate over PLA2 action Kim et al. [12] concluded that the enzymatic activity of PLA2-X is necessary for the antiviral effect, which contradicts the findings of Fenard et al. [11], where catalytic activity was not required. Hence, further studies are needed to ascertain the exact mechanism. Conclusion In view of the above mechanisms, snake venom might reduce HIV load, thereby decreasing its effects and enhancing the CD4 count. Insect venom and human PLA2 act through PLA2-mediated inhibition of virion entry into host cells. Hopefully, the use of venom preparations or synthetic molecules similar to snake/insect venoms or human secretions, without adverse effects, may open a new era of anti-retroviral therapy against HIV, or act as an adjuvant not only for HIV but also for other viral infections. However, further research is required to ascertain the exact mechanism of the antiviral activity of snake and insect venoms.
2018-05-08T18:40:47.319Z
2009-11-19T00:00:00.000
{ "year": 2009, "sha1": "23104c5e8e545ce00c8f7b3e6a63c5bc36b888d4", "oa_license": "CCBY", "oa_url": "https://aidsrestherapy.biomedcentral.com/track/pdf/10.1186/1742-6405-6-25", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "55b4aec520a5aaa4c70c97b1192a250e290d30c1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
30119769
pes2o/s2orc
v3-fos-license
Expression of Pituitary Tumor Transforming Gene 1 is an Independent Factor of Poor Prognosis in Localized or Locally Advanced Prostate Cancer Cases Receiving Hormone Therapy Xi-Liang Cao, Jiang-Ping Gao*, Wei Wang, Yong Xu, Huai-Yin Shi, Xu Zhang Introduction Prostate cancer represents a major global public health issue, being the most frequently diagnosed tumor and the second leading cause of cancer mortality in men in the USA. It is estimated that 217,730 men will be diagnosed with and 32,050 men will die of cancer of the prostate in 2010 in the USA (http://seer.cancer.gov/csr/1975_2007/results_single/sect_01_table.01.pdf). In Asia, the incidence of prostate cancer has been increasing sharply for a decade, and prostate cancer has been an emerging threat to the health of aging men, though accurate epidemiological data are lacking in many Asian countries at present (Zhang et al., 2011). With regard to therapeutic strategies, use of primary androgen deprivation therapy (PADT) has emerged worldwide as an option, besides radical prostatectomy and radiotherapy, for men with clinically localized or locally advanced prostate cancer (Kawakami et al., 2006; Holmes Jr et al., 2007). In Japan, data on the treatment of prostate cancer show that PADT is chosen to treat localized and locally advanced prostate cancer in an extremely high proportion of cases (Akaza et al., 2004). Data from the Cancer of the Prostate Strategic Urologic Research Endeavour (CaPSURE) of the USA also show an increase in recent years in the proportion of localized and locally advanced prostate cancer patients for whom PADT is being selected (Cooperberg et al., 2003). In 2002, Labrie et al. (2002) reported the efficacy of hormonal therapy for localized or locally advanced prostate cancer. In 2006, Akaza et al.
(2006) further confirmed the usefulness of PADT for localized or locally advanced prostate cancer by analyzing the 10-year survival rates of men with localized or locally advanced prostate cancer treated with PADT or prostatectomy. However, some patients who receive PADT, if followed long enough, will develop evidence of resistance and progression. Thus, accurate pretreatment risk stratification is essential for both patient counseling and the design of adjuvant therapy. Several factors, such as volume of disease, risk category, and PSA velocity, have been assessed as predictors of advanced prostate cancer progression after hormone therapy, but not for patients with localized or locally advanced prostate cancer receiving PADT (Kwak et al., 2002; Chung et al., 2008; Abouassaly et al., 2009). Consequently, there is a great need for markers that can be applied to biopsy specimens to accurately predict the risk of disease progression in patients with prostate cancer receiving PADT and allow appropriate treatment planning. Pituitary tumor transforming gene 1 (PTTG1) was first isolated from rat pituitary tumor cells in 1997 and has been identified as an oncogene because PTTG1 overexpression induces cellular transformation in vitro and tumor formation in nude mice. As the human securin, PTTG1 participates in the mitotic spindle checkpoint pathway and inhibits sister chromatid separation to ensure chromosomal stability (Pei and Melmed, 1997; Zou et al., 1999). In contrast to its restricted normal tissue expression, PTTG1 is abundantly expressed in a wide variety of tumors and is associated with metastasis and a poor clinical outcome in several types of tumors, suggesting that PTTG1 may play a role in tumorigenesis (Vlotides et al., 2007). PTTG1 has also been identified as one of the key 'signature genes' predicting metastasis in prostate cancer (Ramaswamy et al., 2003). Zhu et al. (2006) detected PTTG1 protein expression in a high percentage of prostate cancer tissue samples by immunohistochemistry, and proved that ectopic PTTG1 gene expression promoted prostate cancer cell proliferation and tumorigenesis both in vitro and in nude mice, while down-regulation of PTTG1 led to suppression of tumor cell growth, suggesting that PTTG1 may be a potential prognostic marker and a therapeutic target for prostate cancer. However, no study has evaluated the relation between expression of PTTG1 and prostate cancer progression in patients receiving hormone therapy. In the present study, we retrospectively determined whether PTTG1 overexpression in diagnostic prostate needle biopsy specimens obtained from patients with localized or locally advanced prostate cancer could be a useful marker in predicting progression after hormone therapy.
Patients The subjects were 64 patients who attended Chinese PLA General Hospital and received a diagnosis of T2N0M0 or T3N0M0 prostate cancer between June 2003 and January 2010. This study was conducted in accordance with the Declaration of Helsinki and with approval from the Ethics Committee of Chinese People's Liberation Army General Hospital. Written informed consent was obtained from all participants. Patients met the following strict criteria: treated by continuous combined androgen blockade (CAB) without radical prostatectomy or radiation for various reasons, including high risk of surgical complications, advanced age, and patient preference; a good response to PADT, with PSA dropping to an undetectable level (<0.2 ng/ml) after three months; and availability of appropriate follow-up data and biopsy tissue. Follow up During the first 6 months after treatment, PSA levels were examined monthly. After that, PSA levels were examined every 3 months. Bone scan and transrectal ultrasound were performed annually. When indicated, nuclear magnetic resonance imaging or computed tomography of the lungs and abdomen was also performed. Progression was considered in one of the following circumstances: (a) PSA measurement >0.2 ng/ml, with PSA recurrence judged as elevation of the PSA level on three consecutive occasions; (b) radiological or histological evidence of local progression or metastasis. Follow-up was terminated upon disease progression of the patient or by June 30, 2010. Immunohistochemical staining Immunohistochemical staining was performed using the single core that had the highest Gleason score (GS) as a result of a systematic sextant needle biopsy. PTTG1 expression in prostate biopsy specimens was detected by the two-step immunohistochemical staining method. Formalin-fixed, paraffin-embedded tissue sections (4 μm) were deparaffinized in xylene and rehydrated in a graded series of ethanol. For antigen retrieval, slides were exposed to citrate buffer (10 mmol/l, pH 6.0), heated for 30 minutes in a microwave oven, and allowed to cool at room temperature for 20 minutes. The slides were then incubated for 30 minutes in PBS with 0.3% hydrogen peroxide to block endogenous peroxidase activity and washed again with PBS. Subsequently, the slides were incubated with the primary antibody diluted 1:100 in PBS-1% bovine serum albumin (BSA) for 60 minutes at room temperature. The primary antibody was a rabbit polyclonal anti-PTTG antibody. The EnVision method was used for staining. TBS buffer was used instead of the primary antibody as the negative control, and colon cancer tissues were used as the positive control. All tissues were stained at the same time to avoid false positive and false negative staining results. Immunohistochemical evaluation Both the extent and intensity of immunostaining were considered when scoring PTTG1 protein expression, according to Hao et al. (Hao et al., 2000). The intensity of positive staining was scored as 0, negative; 1, weak; 2, moderate; 3, strong. The percentage of PTTG1-reactive cells was assessed by counting 100 tumor cells in serial sections and scored as 0, <5%; 1, 5-25%; 2, >25-50%; 3, >50-75%; 4, >75% of prostate cancer cells. The final score was determined by multiplying the intensity score and the extent score, yielding a range from 0 to 12.
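As a quick illustration, the composite immunostaining score just described (intensity 0-3 multiplied by extent 0-4, range 0-12) can be written as a small helper. This is an illustrative sketch, not code from the study; the expression categories used in classify are the thresholds defined in the next paragraph of the text.

```python
# Composite IHC score: intensity (0-3) x extent (0-4), giving 0-12.
def pttg1_score(intensity: int, extent: int) -> int:
    if intensity not in range(4) or extent not in range(5):
        raise ValueError("intensity must be 0-3 and extent 0-4")
    return intensity * extent

# Categories per the study: 9-12 high, 5-8 low, 0-4 negative expression.
def classify(score: int) -> str:
    if score >= 9:
        return "high expression"
    if score >= 5:
        return "low expression"
    return "negative expression"

# Example: moderate intensity (2) in >50-75% of tumor cells (extent 3).
s = pttg1_score(2, 3)
print(s, classify(s))   # -> 6 low expression
```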
Scores 9-12 were defined as high expression, 5-8 as low expression, and 0-4 as negative expression. The scores were assessed independently by two skilled pathologists. Discrepant cases were reviewed at a multihead microscope and a consensus reached. All specimens were evaluated without knowledge of the patients' clinical information. Statistical analysis The parameters investigated were T stage, GS, pretreatment PSA level, risk group, and the status of PTTG1 expression. The correlations between PTTG1 expression and clinicopathological parameters were evaluated using the Spearman correlation test. Survival curves were generated using the method of Kaplan and Meier, and the significance of differences was assessed with the log-rank test. For univariate and multivariate analyses, Cox proportional hazard analysis was used to assess the independence of parameters in predicting disease-free survival after hormone therapy. All P-values <0.05 were considered statistically significant. All analyses were performed with SPSS 13.0 for Windows software. Immunohistochemical staining In prostate cancer cells, PTTG1 was expressed mainly in perinuclear granular particles in the cytoplasm, which were rough and dark yellow. With regard to subcellular localization, PTTG1 staining was observed in the cytoplasm of tumor cells. In a small number of cells, PTTG1 was expressed in the nuclei, which was observed mainly in poorly differentiated tumors. PTTG1 reactivity was not detected in histologically normal epithelial cells in areas adjacent to the tumor (Figure 1). Among the 64 prostate carcinoma specimens, 17 (26.5%) were high expression, 27 (42.2%) low expression, and 20 (31.3%) negative for PTTG1 immunoreactivity. The pretreatment PSA levels were dichotomised into <20 vs ≥20 ng/ml. Gleason scores of biopsy specimens were stratified into 3 groups: Gleason score <7, Gleason score 7, or Gleason score >7. High-risk patients were defined as having a PSA level ≥20 ng/mL, stage T3 disease, or a Gleason score ≥8. The low-risk category included all other patients. No meaningful association was found between PTTG1 expression and GS group, clinical T stage, PSA level, or risk group. PTTG1 expression in relation to clinical and pathologic features is summarized in Table 1. Univariate Analysis Univariate Cox proportional hazards regression analysis identified high and low PTTG1 expression (p=0.000), high-risk group (p=0.001), and T3 stage (p=0.042) as prognostic predictors of a shorter time to disease progression after CAB. Although PSA level showed a tendency to predict disease progression, this finding did not achieve statistical significance (p=0.056). Age and Gleason score provided no prognostic value in this set of patients (Table 2). The predictive value of PTTG1 expression, risk group, and T stage was evaluated using Kaplan-Meier actuarial analysis (Figure 2). The mean PFS time for the high PTTG1 expression patients was 25.3 (95% confidence interval (CI), 16.1-34.6) months, whereas that for the patients with low PTTG1 expression was 53.4 (95% CI, 36.8-70.1) months, and for negative expression 94.0 (95% CI, 68.7-119.4) months. The mean PFS time of patients in the high-risk group was 40.7 (95% CI, 28.7-52.7) months, whereas in the low-risk group it was 81.1 (95% CI, 61.3-100.9) months. The mean PFS time of patients with T2 disease was 62.7 (95% CI, 49.3-76.2) months, whereas that of T3 was 41.3 (95% CI, 47.2-74.5) months.
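For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows the same steps (Kaplan-Meier curves, log-rank test, Cox regression) using the open-source lifelines package. The DataFrame, its column names, and all values are invented placeholders, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Toy data: time to progression/censoring (months), event flag, covariates.
df = pd.DataFrame({
    "pfs_months": [25, 30, 60, 90, 40, 80, 55, 20],
    "progressed": [1, 1, 1, 0, 1, 0, 1, 1],      # 1 = progression observed
    "pttg1":      [2, 2, 1, 0, 1, 0, 0, 2],      # 0 negative / 1 low / 2 high
    "high_risk":  [1, 1, 0, 0, 1, 0, 1, 1],
})

# Kaplan-Meier estimate per PTTG1 expression group.
kmf = KaplanMeierFitter()
for level, grp in df.groupby("pttg1"):
    kmf.fit(grp["pfs_months"], grp["progressed"], label=f"PTTG1={level}")

# Log-rank test across the expression groups.
res = multivariate_logrank_test(df["pfs_months"], df["pttg1"], df["progressed"])
print(res.p_value)

# Cox proportional hazards model for the multivariate analysis.
cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()
```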
Multivariate Analysis To determine the smallest number of parameters that could jointly predict disease progression in our cohort of patients, a multivariate Cox proportional hazard model with stepwise selection analysis was used. When all parameters with prognostic potential (i.e., age, T stage, GS group, risk group, pretreatment PSA level, and PTTG1 expression) were included in the model, high-risk group (p=0.0147, hazard ratio=4.062), PTTG1 low expression (p=0.002, hazard ratio=3.724), and PTTG1 high expression (p=0.000, hazard ratio=8.045) reached statistical significance in predicting decreased PFS (Table 2). To demonstrate the joint effects of PTTG1 expression and risk group on disease progression, Kaplan-Meier analysis was performed. As shown in Figure 3, in both the low- and high-risk subgroups, patients with high PTTG1 expression had a worse prognosis than patients with low or negative PTTG1 expression (p=0.000). Thus, the highest probability of disease progression was found in patients with high PTTG1 expression in the high-risk group, whereas individuals with negative PTTG1 expression in the low-risk group had the lowest probability of progression. Discussion In the present study, we show for the first time that PTTG1 overexpression in prostate cancer is statistically associated with decreased PFS after CAB therapy in both univariate and multivariate analysis, even though it is not associated with the Gleason score, PSA level, or clinical T stage. Several factors typically used so far to predict the outcome of curative therapeutic strategies, such as Gleason score and PSA level, lost their prognostic value for patients with localized or locally advanced prostate cancer receiving PADT in this cohort. Moreover, with regard to PTTG1 expression in patients with prostate cancer, Zhu et al. (2006) detected it in a higher percentage of prostate cancer tissues (34/41, 82.9%) than we did (44/64, 68.7%). The reason for the slightly different results seems to be that the disease stage of the specimens used for comparison varied and that there were differences in the procedure for evaluation of PTTG1 expression, including the condition of the antigen, the type of antibody used, and the method of antigen retrieval. Although the tissue examined was only the partial biopsy specimen obtained at diagnosis, our results indicate that PTTG1 expression can be fully detected even by IHC using a biopsy specimen. Since PTTG1 expression was observed in most prostate cancers, our results suggest that detection of PTTG1 expression using the biopsy specimen obtained at diagnosis could help to identify patients with aggressive disease who require more intensive therapy, such as CAB combined with HDR-brachytherapy, intensity-modulated radiotherapy, EBRT, or some forms of chemotherapy.
The most interesting point is that PTTG1 is related to endocrine response. PTTG1 expression can not only be upregulated by androgen in the castrated rat prostate and the human prostate cancer cell line LNCaP (Zhu et al., 2006), but can also be induced by estrogen through an estrogen-response element in the PTTG1 promoter region in prolactinoma (Heaney et al., 1999). Both androgen pathways and estrogen signaling have been shown to play important roles in prostate cancer development and progression (Bonkhoff and Berges, 2009; Celhay et al., 2010). PTTG1 has also been identified as one of the new candidate genes associated with endocrine therapy resistance in breast cancer (Ghayad et al., 2009). In our present study, subjects with PTTG1 overexpression had a shorter time to tumor progression than those with low PTTG1 expression. These results suggest that dysregulation of PTTG1 may be one of the major factors contributing to androgen deprivation therapy resistance, and inhibition of this gene may be a potential therapeutic target for suppression of prostate cancer progression. We hypothesize that PTTG1 overexpression may be associated with advanced disease that responds poorly to hormone therapy, just as Rb loss was (Sharma et al., 2007). Further studies are necessary to clarify the role of PTTG1 in the development and progression of prostate cancer. It might be interesting to investigate whether a PTTG1-transfected hormone-sensitive prostate cancer cell line (i.e., LNCaP) could survive under androgen deprivation. The cohort of our study is restricted to 64 patients with localized or locally advanced prostate cancer. Despite its limited size, the strength of this cohort is its restriction to T2-T3 tumors without lymph node or distant metastasis and an undetectable PSA level within the first 3 months of continuous CAB therapy; thus, disease progression indeed reflects tumor aggressiveness rather than enlargement of metastatic tumors, and biochemical failure reflects transformation to androgen-independent prostate cancer rather than a poor response to PADT. There are two limitations of the study. The first is that the subjects were good responders to hormonal therapy and had no metastasis; thus, it is not clear whether the present results are applicable to poor responders or to patients with metastasis. The second is that the detection approach, immunohistochemistry, is semiquantitative; however, immunohistochemistry is convenient and economically efficient, widely applied, and much easier to implement in clinical practice. In summary, this paper shows that PTTG1 immunostaining in patients with prostate cancer may be a useful approach to predicting PFS after combined androgen blockade treatment in Chinese patients with localized or locally advanced disease and may identify those patients who may benefit from novel aggressive therapeutic strategies. Figure 3. Prognostic Value of PTTG1 Expression Stratified by Risk Group. (A) Kaplan-Meier plots of disease-free probability for each PTTG1 expression group in the low-risk group. Statistical differences were observed among high, low, and negative expression (log-rank, p=0.000). (B) Kaplan-Meier plots of disease-free probability for each PTTG1 expression group in the high-risk group. Statistical differences were observed among high, low, and negative expression (log-rank, p=0.000). Table 2.
Results of Univariate and Multivariate Analysis. HR, hazard ratio; CI, confidence interval. A Cox proportional hazard model and single-parameter analysis were used to determine the prognostic significance of age group, GS group, T stage (T3/T2), pretreatment PSA level, risk group, and PTTG1 expression; all were used as categorical variables. Prostate cancer patients in different risk groups could thus be further classified based on the PTTG1 expression in their prostate cancer specimens to predict disease progression more accurately.
2017-06-17T17:48:34.278Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "c528908578edc279428addcd455e400b3e216513", "oa_license": "CCBY", "oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201205061572146&method=download", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "47f592563771871ca4183a39ed496f1ded97f949", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202710421
pes2o/s2orc
v3-fos-license
Construction and Analysis of a Long Non-Coding RNA (lncRNA)-Associated ceRNA Network in β-Thalassemia and Hereditary Persistence of Fetal Hemoglobin Background Higher fetal hemoglobin (HbF) levels can ameliorate the clinical severity of β-thalassemia. The use of integrative strategies to combine results from gene microarray expression profiling, experimental evidence, and bioinformatics helps reveal functional long noncoding RNAs (lncRNAs) in β-thalassemia and HbF induction. Material/Methods In a previous study, microarray profiling of 7 individuals with high HbF levels and 7 normal individuals was performed. Thirteen paired samples were used for validation. lncRNAs NR_001589 and uc002fcj.1 were chosen for further research. Quantitative reverse transcription-PCR was used to detect the expression levels of the 2 lncRNAs. The Spearman correlation test was employed. A nuclear and cytoplasmic distribution experiment in K562 cells was used to verify the subcellular localization of the 2 lncRNAs. Potential relationships among lncRNAs, predicted microRNAs (miRNAs), and the target genes HBG1/2 were inferred based on competitive endogenous RNA theory and bioinformatics analysis. Results Average expression levels of NR_001589 and uc002fcj.1 were significantly higher in the high-HbF group than in the control group. A positive correlation existed between NR_001589, uc002fcj.1, and HbF. The expression of NR_001589 was in both the cytoplasm and the nucleus, mostly (77%) in the cytoplasm. The expression of uc002fcj.1 was in both the cytoplasm and the nucleus; the cytoplasmic proportion was 43% of the total amount. A triple lncRNA-miRNA-mRNA network was established. Conclusions Novel candidate genetic factors associated with HBG1/2 expression were identified. Further functional investigation of NR_001589 and uc002fcj.1 can help deepen the understanding of molecular mechanisms in β-thalassemia. Background Genome-wide sequencing shows that about 93% of the DNA sequence in the human genome is transcribed into RNA, but only about 2% of the DNA sequence eventually encodes a protein [1]. Some nonprotein-coding RNAs, formerly known as "transcriptional noise," are now known to serve as significant regulators of target gene expression [2]. Among them, long noncoding RNAs (lncRNAs) are longer than 200 nucleotides and are involved in a variety of biological processes, such as cell proliferation, differentiation, and chromosomal variation [3]. MicroRNAs (miRNAs), 22-25 nucleotides in length, negatively regulate target genes at the post-transcriptional level and participate in shaping the hematopoietic landscape [4]. lncRNAs can function in various diseases, including hematopoiesis and other blood diseases, by interacting with miRNAs [5,6]. lncRNAs act directly against miRNAs to antagonize their expression and function, or are degraded by miRNAs, thus affecting the pathophysiological process [7,8]. lncRNAs also compete with miRNAs for direct binding to mRNAs [9]. Some miRNAs can also be cleaved from intronic or exonic sequences of lncRNAs during maturation [10,11]. However, few studies have reported the important regulatory roles of lncRNAs in β-thalassemia and fetal hemoglobin (HbF) induction. β-thalassemia is a genetic hemolytic disease caused by defective globin synthesis [12]. Severe β-thalassemia probably accounts for more than 50,000 deaths of children per year in tropical and subtropical areas [13].
In Guangxi province of southern China, the mutation gene frequency of β-thalassemia is up to 6.43% [14]. HbF, composed of 2 α chains and 2 γ chains, is the major hemoglobin type during fetal life and is replaced by adult hemoglobin after birth [15]. Accumulating evidence has shown that increased HbF levels effectively ameliorate the clinical symptoms and improve the prognosis of β-thalassemia. Genetic regulation of HbF levels has been of particular therapeutic interest in recent years [16,17]. Focusing on individuals with high levels of HbF in geographic regions where β-thalassemias are prevalent, with specific molecular pathology and racial/ethnic characteristics, may provide valuable insights into the mechanisms underlying the expression of the HBG1/2 genes. So far, detailed studies on mRNAs and miRNAs have helped guide the diagnosis and therapy of β-thalassemia. However, few studies have been conducted on the function of lncRNAs in β-thalassemia. In a previous study, microarray profiling of individuals with high HbF levels and normal individuals was performed, but lncRNA function was poorly clarified. In the present study, lncRNAs NR_001589 [18] and uc002fcj.1 were selected to explore their regulatory mechanisms based on the competitive endogenous RNA (ceRNA) theory [19]. NR_001589 was of interest because it is located upstream of the β-globin locus. The previous study suggested that NR_001589 might activate HBE1 and regulate HbF expression. Uc002fcj.1, located on chromosome 16, was the most upregulated lncRNA in the high-HbF group compared with the normal group. Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) was performed to validate these 2 differentially expressed lncRNAs. Their subcellular localization was confirmed using the K562 cell line. Putative miRNA-HBG1/2 binding sites in the lncRNAs were predicted by bioinformatics analysis. A triple lncRNA-miRNA-mRNA network was established. The findings of this study offer new insights into the role of lncRNAs in HbF induction in patients with β-hemoglobinopathies, although deeper exploration of this novel regulatory mechanism is needed. Study participants and microarray analysis The details are available in Reference [18]. Thirteen paired samples (13 subjects in the high-HbF group and 13 subjects in the control group) were used for validation. This study was approved by the First Affiliated Hospital of Guangxi Medical University (2013-KY-007). RNA extraction from nucleated erythrocytes and reticulocytes Isolation of nucleated red blood cells and reticulocytes was described in our previous study [18]. Total RNA was extracted from reticulocytes using TRIzol (Invitrogen Life Technologies, USA) in accordance with the manufacturer's protocol. The quantity and quality of the total RNA were assessed using a NanoDrop ND1000 spectrophotometer (NanoDrop, USA). qRT-PCR validation of differentially expressed lncRNAs: NR_001589 and uc002fcj.1 Based on a previous study [18], qRT-PCR was performed to further confirm whether lncRNAs NR_001589 and uc002fcj.1 were differentially expressed. RNA was reverse-transcribed into cDNA using SuperScript III Reverse Transcriptase (Invitrogen Life Technologies, CA, USA) according to the manufacturer's protocols. Table 1 presents the sequences of the qRT-PCR primers used. β-actin was used as the control gene. Subcellular localization of lncRNAs: NR_001589 and uc002fcj.1 Cell culture K562 is a widely used human erythroid-like cell line capable of undergoing erythroid differentiation [20].
Numerous studies have used K562 cells to elucidate the regulatory mechanism of HbF expression in β-thalassemia in vitro [21]. K562 cells were purchased from the Stem Cell Bank of the Chinese Academy of Sciences (Shanghai, China) and cultured in Roswell Park Memorial Institute (RPMI) 1640 medium (Sigma-Aldrich, MO, USA) with 10% fetal bovine serum (Gibco, South America), 100 IU/mL penicillin, and 100 μg/mL streptomycin (Solarbio, China) in a 5% CO2 humidified atmosphere. Nuclear and cytoplasmic separation experiments K562 cells frozen at -80°C were slowly thawed on ice and centrifuged at 500 g for 5 min to collect the cells. The cells were washed by adding 500 μL of 1×PBS (phosphate-buffered saline) and collected by centrifugation at 500 g for 5 min. Then, 20 volumes of cell lysis buffer were added to the cell pellet, mixed well, and placed on ice for 5 min. After centrifugation at 1500 g for 5 min, the supernatant was carefully collected as the cytoplasmic crude extract. An equal volume of cell lysis buffer was added to the pellet, mixed well, and placed on ice for 10 min. After centrifugation at 1500 g for 5 min, the pellet comprised the separated nuclei. The cytoplasmic crude extract was centrifuged at 16 000 g for 5 min, and the supernatant was finally isolated as the cytoplasmic fraction. RNA extraction from the nucleus and the cytoplasm Nuclear and cytoplasmic RNAs of K562 cells were extracted separately using TRIzol (Invitrogen Life Technologies) according to the manufacturer's protocol. The quantity and quality of the extracted RNA were tested on a NanoDrop ND-1000 spectrophotometer (NanoDrop, NY, USA). Denaturing agarose gel electrophoresis was used to assess the integrity of the RNA. Total RNA was reverse-transcribed into cDNA using SuperScript III Reverse Transcriptase (Invitrogen, NY, USA) in accordance with the manufacturer's instructions. The amount of input RNA used was 500 ng, and the final volume of all reactions was adjusted to 20 μL with ddH2O. cDNA was stored at -20°C overnight and then used for qRT-PCR. qRT-PCR validation of subcellular localization of NR_001589 and uc002fcj.1 qRT-PCR was performed using the ViiA 7 Real-Time PCR System (ABI, NY, USA). A reaction volume of 10 μL was mixed, consisting of 5 μL of 2×Master Mix (ArrayStar, MD, USA), 0.5 μL of PCR forward primer, 0.5 μL of PCR reverse primer, 2 μL of template cDNA, and 2 μL of double-distilled water. The following cycling conditions were applied: 95°C for 10 min followed by 40 cycles of 95°C (10 s) and 60°C (60 s). The lncRNA PCR results were quantified using the 2^-ΔΔCt method, with normalization to β-actin and U6 (a minimal sketch of this calculation follows at the end of this section). Statistical analysis All statistical data were analyzed with SPSS 20.0 software (SPSS, Inc., IL, USA). Data are shown as the mean ± standard deviation. The t test was used to analyze the statistical significance of the microarray and qRT-PCR results. Spearman correlation coefficient analysis was performed to assess correlations between the levels of the lncRNAs verified by qRT-PCR and HbF levels. Statistical differences were considered significant at P<0.05. Validation of dysregulated lncRNAs The results of qRT-PCR showed that lncRNAs NR_001589 and uc002fcj.1 were both upregulated in the high-HbF group compared with the control group (Figure 1). Subcellular localization of lncRNAs The subcellular distribution of a lncRNA determines its possible modes of functioning. Subcellular localization in K562 cells is necessary for subsequent mechanistic studies.
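The 2^-ΔΔCt calculation mentioned above is simple enough to spell out. The sketch below, with made-up Ct values, normalizes a target lncRNA to β-actin in a sample and a control and returns the fold change; it illustrates the method, and is not the study's analysis script.

```python
# 2^-ΔΔCt relative quantification: normalize target Ct to a reference gene
# (e.g., beta-actin) in each condition, then compare conditions.
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    dct_sample = ct_target_sample - ct_ref_sample      # ΔCt, high-HbF sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    ddct = dct_sample - dct_control                    # ΔΔCt
    return 2 ** (-ddct)                                # fold change vs control

# Example: the target amplifies 2 cycles earlier (relative to the reference)
# in the high-HbF sample than in the control -> ~4-fold upregulation.
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```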
The expression of NR_001589 was found in both the cytoplasm and the nucleus, mostly (77%) in the cytoplasm. The expression of uc002fcj.1 was seen in both the cytoplasm and the nucleus; the cytoplasmic proportion was 43% of the total amount (Figure 3). Establishment of the lncRNA-miRNA-mRNA network A lncRNA-associated ceRNA network was constructed by combining lncRNA-miRNA interactions and miRNA-HBG1/2 interactions. The network was visualized and was composed of 2 lncRNA nodes, 2 mRNA nodes, and 14 miRNA nodes (Figure 4; a sketch of such a network follows this section). Table 2 presents the specific miRNAs and the source online platforms. Discussion Great efforts have been made to elucidate the molecular mechanism underlying β-thalassemia. Previous studies focused mainly on mRNAs and miRNAs. HBS1L-MYB, BCL11A, and KLF1 regulate γ-globin gene (HBG1/2) expression and influence HbF levels [22][23][24]. Additionally, several miRNAs have been identified as critical factors regulating HbF expression, such as miR-15a, miR-16-1, miR-96, miR-210, miR-221, miR-222, miR-486-3p, and the let-7 family [25][26][27][28][29]. Accumulating evidence suggests roles for lncRNAs in a variety of biological processes. Dysregulation of lncRNAs has been found in genetic diseases, including disorders of hematopoiesis and the pathogenesis of blood diseases [30,31]. Studying the relationship of lncRNAs with miRNAs and/or mRNAs whose functions have been annotated might help infer the potential functions of lncRNAs. Reportedly, lncRNA has a natural "sponge" role as a ceRNA, thus affecting the inhibitory effects of miRNAs on target genes [32]. miRNAs regulate lncRNAs through similar interactions with the highly conserved regions of lncRNAs and vice versa [33]. Therefore, it is crucial to learn the regulatory role of lncRNAs and their functional relationship with miRNAs as ceRNAs in β-thalassemia and HbF induction. This novel study confirmed that NR_001589 and uc002fcj.1 were significantly upregulated in the high-HbF group. The interplay data from databases and a previous study were combined to generate a triple network based on the ceRNA theory. Based on the results (Figures 1, 2, and 4), it was hypothesized that NR_001589 and uc002fcj.1 could interact with miRNAs and alter the expression of the γ-globin genes. miRcode, RegRNA2.0, and TargetScan were used to obtain NR_001589- and uc002fcj.1-targeting miRNAs, so as to find more relevant miRNAs and their potential regulatory roles in HbF induction. The results showed that these miRNAs also interacted with HBG1/2. The miRNAs related to NR_001589, uc002fcj.1, and HBG1/2 (miR-3619-5p and miR-137) gained attention. miR-3619-5p has been proved to be a cancer suppressor in prostate cancer and non-small cell lung cancer. It is associated with proliferation, invasion, and autophagy [34][35][36]. Bioinformatics databases (TargetScan and miRcode; Table 2) identified binding sites between HBG1 and miR-3619-5p. Currently, whether miR-3619-5p is involved in HbF regulation is unknown. The association between miR-3619-5p and HBG1 warrants further investigation. The biological roles of miR-137 in cell proliferation, migration, invasion, and apoptosis have been reported. miR-137 is also involved in erythropoiesis of human cord blood-derived CD34+ cells [37]. Complementary sequences of HBG1 and miR-137 were detected by bioinformatics software (microRNA.org, miRcode, and DIANA Tools; Table 2). The regulatory effect of miR-137 in HbF induction needs further investigation.
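The triple network described above is straightforward to assemble programmatically. The following networkx sketch uses only the nodes named in the text (the two lncRNAs, HBG1/2, and the two highlighted miRNAs out of the 14 in Figure 4); the edge set is a simplified, hypothetical rendering of the predicted interactions, which in the study come from miRcode, RegRNA2.0, TargetScan, and related tools.

```python
import networkx as nx

G = nx.Graph()
lncrnas = ["NR_001589", "uc002fcj.1"]
mirnas = ["miR-3619-5p", "miR-137"]   # subset of the 14 predicted miRNAs
mrnas = ["HBG1", "HBG2"]

G.add_nodes_from(lncrnas, kind="lncRNA")
G.add_nodes_from(mirnas, kind="miRNA")
G.add_nodes_from(mrnas, kind="mRNA")

# Hypothetical edges for illustration: each lncRNA is predicted to bind each
# of the two miRNAs, and both miRNAs have predicted sites in HBG1 (per text).
for lnc in lncrnas:
    for mir in mirnas:
        G.add_edge(lnc, mir)
for mir in mirnas:
    G.add_edge(mir, "HBG1")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```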
This novel study also detected the subcellular distributions of NR_001589 and uc002fcj.1 in K562 cells (Figure 3), which is an advantage for subsequent lncRNA analysis. Nuclear and cytoplasmic lncRNAs can regulate gene expression in different ways [38]. Intranuclear lncRNAs bind transcription factors and recruit related proteins; histone trimethylation is induced, and the expression of the mRNAs of nearby genes is regulated. In addition, lncRNAs can directly bind the promoter to regulate gene expression [39]. The ceRNA theory indicates that all types of RNA transcripts can crosstalk with each other through miRNA-binding sites. A recent study showed that cytoplasmic lncRNAs can serve as ceRNAs, function as precursors of miRNAs, and participate in mRNA and protein modifications [40]. Based on these results, it was speculated that lncRNA NR_001589, distributed mainly in the cytoplasm, might function as a ceRNA by sponging some miRNAs (including miR-137), affecting the expression of HBG1/2 (a toy numerical sketch of this sponge effect is given at the end of this article). In contrast, lncRNA uc002fcj.1, distributed in both the nucleus and the cytoplasm, may affect transcription of the HBG1/2 genes and also influence post-transcriptional modification, thereby affecting HbF levels. All these topics need further exploration. The present study has some limitations. First, available microarray data on β-thalassemia and hereditary persistence of fetal hemoglobin (HPFH) were lacking. Second, lncRNA microarray research is still in its infancy compared with mRNA and miRNA microarray testing. Finally, further experimental studies should be conducted to analyze the complex regulatory patterns underlying β-thalassemia and HPFH. Conclusions This study shows that NR_001589 and uc002fcj.1 can act as ceRNAs to promote the expression of HBG1/2 by sponging miRNAs during β-thalassemia and HbF induction. These results might help in designing a series of in vivo and in vitro experiments to explore the functions of NR_001589 and uc002fcj.1 through the ceRNA mechanism. After establishing multiple lncRNA-miRNA-mRNA relationships, it can be presumed that these genetic factors are involved in β-thalassemia, thus laying the theoretical foundation for subsequent investigations.
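To see why cytoplasmic localization matters for the sponge hypothesis, consider the toy equilibrium calculation below: a fixed pool of miRNA partitions between lncRNA sponge sites and mRNA sites in proportion to their abundance. The numbers and the simple proportional-partitioning rule are purely illustrative assumptions, not a model fitted to these data.

```python
# Toy ceRNA "sponge" effect: more sponge sites -> less miRNA on the mRNA.
def mirna_on_mrna(total_mirna: float, sponge_sites: float, mrna_sites: float) -> float:
    # Assume miRNA distributes across binding sites proportionally to abundance.
    total_sites = sponge_sites + mrna_sites
    return total_mirna * mrna_sites / total_sites if total_sites else 0.0

for sponge in (0, 50, 100, 400):
    bound = mirna_on_mrna(total_mirna=100, sponge_sites=sponge, mrna_sites=100)
    print(f"sponge sites={sponge:4d} -> miRNA on mRNA={bound:5.1f}")
# Occupancy of the mRNA falls as sponge levels rise, i.e., the target is derepressed.
```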
2019-09-22T13:04:29.983Z
2019-09-21T00:00:00.000
{ "year": 2019, "sha1": "454d44b5d45d27a80d72a02c4d78d8026fbd9f24", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc6767942?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ad547e8a9ed7d5a7b07d06337b07ea611e4e63dd", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
12253395
pes2o/s2orc
v3-fos-license
Small Regulatory RNA and Legionella pneumophila Legionella pneumophila is a gram-negative bacterial species that is ubiquitous in almost any aqueous environment. It is the agent of Legionnaires' disease, an acute and often under-reported form of pneumonia. In mammals, L. pneumophila replicates inside macrophages within a modified vacuole. Many protein regulators have been identified that control virulence-related properties, including RpoS, LetA/LetS, and PmrA/PmrB. In the past few years, the importance of regulation of virulence factors by small regulatory RNAs (sRNAs) has been increasingly appreciated. This is also the case in L. pneumophila, where three sRNAs (RsmY, RsmZ, and 6S RNA) were recently shown to be important determinants of virulence regulation and 79 actively transcribed sRNAs were identified. In this review we describe current knowledge about sRNAs and their regulatory properties and how this relates to the known regulatory systems of L. pneumophila. We also provide a model for sRNA-mediated control of gene expression that serves as a framework for understanding the regulation of virulence-related properties of L. pneumophila. Putative sRNA molecules expressed by L. pneumophila were identified both by a bioinformatic approach and by deep RNA-sequencing from growth in broth and inside A. castellanii (Faucher et al., 2010; Weissenmayer et al., 2011). In addition, a number of sRNAs have been implicated in the regulation of virulence factors of L. pneumophila, including the CsrB homologs RsmY and RsmZ (Rasis and Segal, 2009a; Sahr et al., 2009) and the RNA polymerase (RNAP) regulator 6S RNA (Faucher et al., 2010). This review aims to describe the current knowledge about sRNAs in general and provide a global perspective of the involvement of sRNA regulation systems in the behavior of L. pneumophila. Base-Pairing sRNAs The most common type of regulatory sRNA is the base-pairing sRNA. These are short, highly structured RNA molecules that are complementary to some degree to their target mRNAs and are therefore often called antisense sRNAs (Brantl, 2007). Base-pairing sRNAs can have a positive or a negative effect on expression of the target gene. Binding of the sRNA at or near the ribosomal binding site (RBS) prevents recognition by the ribosome and subsequent translation (Figures 1B,C). Alternatively, binding of the sRNA could change the secondary structure of the mRNA and free the RBS to permit translation initiation (Figure 1C). Binding of the sRNA to the mRNA can also induce its degradation by recruiting RNases (Waters and Storz, 2009). Base-pairing sRNAs can be encoded in cis or in trans. Cis-Encoded Base-Pairing sRNAs Cis-encoded sRNAs are antisense RNA molecules encoded on the complementary strand of their target RNA gene (Figure 1B). Therefore, they share extensive sequence complementarity with the target mRNA but do not necessarily form long RNA duplexes (Brantl, 2007). Thirty-three sRNAs were recently identified in L. pneumophila that are at least partially complementary to protein-coding genes, some of which are known virulence factors (Weissenmayer et al., 2011; Table 1). Lpr0020 is encoded antisense to lpg0644, which encodes a homolog of RtxA involved in intracellular survival and modification of trafficking (Cirillo et al., 2001, 2002). Another sRNA, Lpr0050, is found antisense to the Icm/Dot effector SdeA (lpg2157; Bardill et al., 2005).
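Since a cis-encoded sRNA is transcribed from the strand opposite its target, its sequence is simply the reverse complement of the region it overlaps, which is what permits extensive duplex formation. The toy sketch below makes that explicit with invented sequences; real Lpr sRNA/target pairs would be analyzed the same way.

```python
# "Encoded antisense" in computational terms: a cis-encoded sRNA corresponds
# to the reverse complement of the mRNA region it overlaps.
COMP = str.maketrans("ACGU", "UGCA")

def reverse_complement(rna: str) -> str:
    return rna.translate(COMP)[::-1]

mrna_5prime = "AUGGCAUUCCGA"                # hypothetical 5' end of a target mRNA
cis_srna = reverse_complement(mrna_5prime)  # perfectly complementary antisense RNA

print(cis_srna)                                      # UCGGAAUGCCAU
print(reverse_complement(cis_srna) == mrna_5prime)   # True: full duplex possible
```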
Two sRNAs, Lpr0003 and Lpr0004, are antisense to the gene encoding the Icm/ Dot effector LegA10, and are expressed during intracellular growth in A. castellanii. Lpr0018 is encoded antisense to comEC (also known as comA, lpg0626) and would form a duplex with the 5′ end of the coding sequence and partially with a putative 5′UTR. ComEC is predicted to 6 Trans-encoded sRNAs identified by Weissenmayer et al. (2011) were predicted as functional if the predicted structure was found to be stable. For example, in E. coli, Hfq was shown to regulate the locus of enterocytes effacement (LEE) encoding a type III secretion system (TTSS; Hansen and Kaper, 2009;Shakhnovich et al., 2009). In the intracellular pathogen Salmonella enterica serovar Typhimurium, Hfq is necessary for optimal growth in epithelial cells and macrophages . Burkholderia cenocepacia encodes two Hfq homologs and both of them are required for optimal resistance to stress and virulence (Ramos et al., 2011). Deletion of the hfq gene of Staphylococcus aureus has no effect on metabolism but reduces virulence (Bohn et al., 2007;Liu et al., 2010). However, in Neisseria gonorrhoeae, deletion of hfq leads to only a weak reduction of virulence (Dietrich et al., 2009). Moreover, in some bacteria, Hfq is required for the function of some sRNAs but dispensable for others. For example, in V. cholerae, Hfq is required for the control of the quorum sensing systems by the sRNAs Qrr1-Qrr4, but dispensable for the repression of ompA by VrrA (Lenz et al., 2004;Song et al., 2008). It is noteworthy that Helicobacter pylori does not encode an Hfq homolog but still expresses hundreds of sRNAs (Sharma et al., 2010). This suggests that in some bacterial species, the function mediated by Hfq is not necessary for sRNA-mediated gene regulation or that an as yet unknown protein could carry out a similar function. Following genome-wide identification of Hfqbinding sRNAs, it was postulated that even in E. coli, some basepairing sRNAs might not bind to, or use Hfq (Zhang et al., 2003). Careful review of the Hfq-related literature lead Jousselin et al. (2009) to postulate that the need for Hfq in mRNA-sRNA interaction is related to a number of factors. First, the higher the overall GC content of the bacterial genome the more likely Hfq is required and Hfq seems to be dispensable in bacteria whose genomes display a low GC value, such as S. aureus (32% GC). Second, Hfq is dispensable when the sRNA-mRNA interaction is mediated by long (>30) and uninterrupted pairing. Third, they observed a correlation between a requirement for Hfq and the C-terminal extension length of Hfq, which forms an mRNA interaction surface. Hfq proteins that have a short C-terminus tend to be found in bacteria in which Hfq is dispensable. In L. pneumophila, deletion of the hfq gene affects the duration of the lag phase after inoculation in fresh broth (McNealy et al., 2005). Moreover, the L. pneumophila hfq mutant shows a reduced growth rate in chemically defined medium containing low concentrations of iron and a reduction in the expression of the ferric uptake regulator (fur). In E. coli, the RyhB sRNA negatively regulates expression of fur in a Hfq-dependant manner (Vecerek et al., 2007). In addition, the L. pneumophila hfq mutant shows a small reduction in intracellular growth (McNealy et al., 2005). The somewhat limited effect of deleting the hfq gene on L. pneumophila phenotypes suggests that Hfq is not critical for sRNA-mRNA interactions in this organism. The GC content of the L. 
pneumophila genome is low (38%) and alignment of its Hfq protein sequence with other homologs (Figure 2) reveals that the C-terminal region is short and comparable to the length of the V. cholerae Hfq that is not essential for all mRNA-sRNA interactions. According to the postulates of Jousselin et al. (2009), one could hypothesize that Hfq will not be required for all sRNA-mRNA interactions in L. pneumophila. Nonetheless, one can speculate that in L. pneumophila, basepairing sRNAs acting through Hfq may regulate iron acquisition, virulence-related functions and possibly other systems as well, be part of the machinery involved in DNA uptake in L. pneumophila. Competence for natural transformation is induced by treatment that triggers stalling of the replication fork, such as UV irradiation and exposure to bicyclomycin (Charpentier et al., 2011). Some evidence previously suggested that sRNA could be involved in regulation of competence in L. pneumophila. First, deletion of the rnr gene, encoding RNase R, was found to induce competence and resulted in the accumulation of small RNA molecules originating from highly structured 16S rRNA and tmRNA (see below; Charpentier et al., 2008). Whether or not these two phenotypes are related requires clarification. Second, the Escherichia coli homolog of the L. pneumophila competence repressor ProQ (Sexton and Vogel, 2004) was found to work as a RNA chaperone to allow translation of proP mRNA, involved in the uptake of osmoprotectants (Chaulk et al., 2011). Taken together, these facts could lead one to hypothesize a regulatory model in which ProQ is essential to inhibit degradation, by RNase R, of the sRNA Lpr0018, which would mediate degradation of comEC mRNA, similar to the mechanism depicted in Figure 1B. Therefore, in the absence of ProQ or RNase R, comEC would be stabilized and efficiently translated. Alternatively, the sRNA Lpr0018 could stabilize comEC mRNA, allowing its transcription, while ProQ could act as a negative regulator of Lpr0018, potentially by targeting it for degradation. However, to our knowledge, such a mechanism has yet to be described for cis-encoded sRNA. Another sRNA, Lpr0019, is 742 nt long and is complementary to the 5′ end of lpg0627 and to the 3′ end of lpg0628. Both genes are part of a predicted polycistronic RNA composed of lpg0632-lpg0627 encoding subunits of the type IV pili, which was associated with competence (Stone and Kwaik, 1999). Lpr0019 could possibly be involved in induction of competence in a manner similar to what we suggested for Lpr0018. Of course, those hypotheses will need to be tested experimentally. Nonetheless, the finding that two sRNAs are encoded antisense to key players of DNA uptake by L. pneumophila strongly suggest that its induction is regulated at the post-transcriptional level. Recently, induction of competence in Vibrio cholerae was found to be dependent on the expression of a trans-encoded sRNA (TfoR), which allows translation of the positive regulator TfoX (Yamamoto et al., 2011). Lpr0036 is encoded antisense to lvrA (lpg1259), the first gene of the lvr/lvh locus encoding a Type IVA secretion system, involved in conjugation (Segal et al., 1999). However, the role of LvrA is currently unknown and it is difficult at this point to speculate a possible role for this sRNA. 
Trans-encoded base-pairing sRNAs

In contrast to cis-encoded sRNAs, trans-encoded base-pairing sRNAs are not physically linked to their mRNA targets, and the formation of RNA duplexes is mediated by short, imperfect RNA interactions (Figure 1C). The function of many of the trans-encoded base-pairing sRNAs depends on the RNA-binding protein Hfq, which is thought to enhance the likelihood of a productive interaction between the sRNA and its target (Waters and Storz, 2009). This is in contrast to cis-encoded base-pairing sRNAs, which do not generally require the participation of an RNA chaperone (e.g., Hfq) to bind their target mRNA (Brantl, 2007). In bacterial pathogens, deletion of the hfq gene often leads to a reduction in virulence, as was observed for E. coli, Salmonella, Shigella, Yersinia, and Listeria (reviewed in Chao and Vogel, 2010).

RNA-sequencing identified 38 sRNA molecules encoded in intergenic regions that could be considered as potential trans-encoded sRNAs (Weissenmayer et al., 2011; Table 1). Of these, nine were predicted to be functional based on the stability of their predicted secondary structures at 37°C. The predicted structure of one sRNA (Lpr0010) was less stable than 1000 randomly permutated sequences of the same length and base composition at 20 or 37°C, suggesting that it is under evolutionary pressure to form an unstable secondary structure. The biological relevance of this was not explored further, but one can hypothesize that the structure is only stable at low temperatures (less than 20°C) and that it could be part of a cellular response to low temperature. Interestingly, five sRNA pairs were identified for which two distinct sRNAs are transcribed antisense to each other (Weissenmayer et al., 2011). In E. coli, the sRNAs RyeB and SraC are encoded opposite to each other, and RyeB is completely complementary to the longer SraC segment (Vogel et al., 2003). The size of SraC is ≈270 nt, but when RyeB is present, a shorter band (≈150 nt) is also detected. This reduction in size seems to be dependent on RNase III, suggesting that RyeB mediates degradation of SraC. For the sRNA pairs identified in Legionella, one sRNA could act as a negative regulator of the other, efficiently sequestering it by extended base-pairing and potentially targeting it for degradation. Moreover, mRNAs can also regulate sRNAs. This mechanism, named trap-RNA, was described for the MicM sRNA, which induces degradation of the YbfM porin mRNA. The chb polycistronic mRNA contains a sequence complementary to MicM, and expression of the chb operon leads to MicM hybridization and degradation, resulting in stabilization of the ybfM mRNA (Figueroa-Bossi et al., 2009; Overgaard et al., 2009). Again, additional work is needed to understand the regulatory functions of Legionella trans-encoded base-pairing sRNAs. There are a number of base-pairing sRNAs encoded in other bacterial genomes that are known to affect virulence.
A few examples are provided below that might be relevant in the context of L. pneumophila intracellular growth.

Expression profiling of an hfq-deficient L. pneumophila strain would shed light on the importance of Hfq in gene regulation and be of great help in identifying phenotypes that could be affected by it. A similar approach was used for other bacteria such as E. coli (Zhang et al., 2003), S. Typhimurium (Sittka et al., 2008), B. cenocepacia (Ramos et al., 2011), Pseudomonas aeruginosa (Sonnleitner et al., 2006), and N. gonorrhoeae (Dietrich et al., 2009). In addition, immunoprecipitation of Hfq with subsequent identification of bound sRNAs by enzymatic RNA-sequencing (Christiansen et al., 2006), tiling microarray (Zhang et al., 2003), or deep-sequencing (Sittka et al., 2008) would shed light on the mRNA species affected by Hfq and on the potential sRNAs whose functions are at least partially dependent on Hfq. Windbichler et al. (2008) used an affinity chromatography procedure to identify RNA-binding proteins in E. coli. Briefly, they tagged a number of known sRNAs with a streptomycin-binding RNA aptamer, allowing them to bind to a streptomycin-coated column, which was then used to capture RNA-binding proteins from cellular extracts. They found that three proteins were consistently bound to a variety of sRNA sequences: Hfq, the RNAP β-subunit, and the small ribosomal subunit protein S1. Moreover, they showed that specific proteins could interact with a specific sRNA, depending on its sequence and secondary structure. Therefore, a hunt for sRNA-binding proteins is necessary to complete the sRNA-mediated regulatory landscape and to fully understand the extent of their impact on regulation of cellular functions.

In L. pneumophila, a number of trans-encoded base-pairing sRNA candidates have been identified, but mechanistic studies are needed to evaluate their mode of action and to validate them as authentic base-pairing sRNAs (Table 1). Five intergenic sRNAs were identified based on computer prediction using the sRNAPredict software (Faucher et al., 2010). By searching for Rho-independent terminators in intergenic regions preceded by a sequence conserved in other L. pneumophila strains, 143 sRNA molecules were predicted (a naive version of such a terminator scan is sketched below). Using a custom-made microarray, the expression of 101 of these predicted sRNAs was monitored during growth in a variety of conditions. This two-step approach led to the identification of 12 sRNA molecules that were actively expressed, including 6S RNA, six 3′UTRs, and five sRNAs that are independently transcribed (Faucher et al., 2010; Table 1). At this point, the functions of the five identified sRNAs are unclear. Interestingly, expression of LprA during exponential growth is dependent on OxyR, but dependent on RpoS during post-exponential phase (Figure 3). Since RpoS is an important regulator of virulence, it is tempting to speculate that LprA could be part of its regulatory cascade and play a role in expression of virulence factors. Regardless of the growth phase, the presence of H2O2 induces its expression, which suggests that LprA responds to oxidative stress. This is similar to the E. coli sRNA OxyS, which is part of the oxidative stress response and reduces its mutagenic effects (Altuvia et al., 1997).
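The terminator-based screen mentioned above rests on a simple sequence signature: a GC-rich inverted repeat (hairpin) followed by a run of U residues. A naive, illustrative scan of this kind is sketched below; the stem, loop, and tail parameters are arbitrary demonstration choices and are not the settings used by sRNAPredict.

```python
# Minimal sketch: scan an intergenic sequence for naive Rho-independent
# terminator-like motifs (a perfect inverted repeat followed by a U-rich tail).

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def find_terminators(seq, stem=6, loop_min=3, loop_max=8, tail=6, min_u=4):
    """Yield (position, hairpin, tail) for terminator-like motifs."""
    seq = seq.upper().replace("T", "U")
    for i in range(len(seq) - (2 * stem + loop_min + tail)):
        left = seq[i:i + stem]
        for loop in range(loop_min, loop_max + 1):
            j = i + stem + loop                      # start of the right arm
            right = seq[j:j + stem]
            tail_seq = seq[j + stem:j + stem + tail]
            if len(tail_seq) < tail:
                continue
            if right == revcomp(left) and tail_seq.count("U") >= min_u:
                yield i, seq[i:j + stem], tail_seq

if __name__ == "__main__":
    intergenic = "AAGGCCGCAUAAGCGGCCUUUUUUUUAC"   # hypothetical intergenic region
    for pos, hairpin, tail_seq in find_terminators(intergenic):
        print(pos, hairpin, tail_seq)
```

A real screen would additionally require conservation of the upstream sequence across L. pneumophila strains, as described in the text, and would use a thermodynamic rather than an exact-match hairpin model.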
The quorum system of V. cholerae comprises four redundant sRNAs, named Qrr1-Qrr4, and two signaling molecules, the furanosyl borate diester (AI-2) and the α-hydroxyketone CAI-1 (Lenz et al., 2004). At low cell density, the system positively regulates expression of Qrr1-Qrr4, which destabilize the mRNA of hapR, a negative regulator of virulence. Therefore, at low cell density, hapR mRNA is degraded, allowing expression of virulence traits. L. pneumophila also possesses a putative quorum system, based solely on the presence of the α-hydroxyketone signal LAI-1 and the LqsR/LqsS two-component system (TCS) (Tiaden et al., 2007; Spirig et al., 2008). Apart from the absence of AI-2 signaling in L. pneumophila, the quorum system architectures of L. pneumophila and V. cholerae are quite similar (Tiaden et al., 2010). However, in L. pneumophila, no sRNA has been implicated in this regulatory system as yet. Following RNA-sequencing, two sRNAs (Lpr0001 and Lpr0069) were found to have substantial homology at both the sequence and the secondary structure levels, which is reminiscent of the Qrr1-Qrr4 sRNAs (Weissenmayer et al., 2011). A search for homologous sequences throughout the genome revealed 20 more copies of these sRNAs, one (Lpr0049) being partially antisense to lpg2142, which encodes a putative ORF. The consensus structure of these sRNAs is a long stem-loop with two central bulges comprised of ∼25 nt and two small hairpins extruding from either side of the central stem 20 nt before the loop (Weissenmayer et al., 2011). Many of these sequences were found in other Legionella strains as well, often in the same configuration, which indicates that they are evolutionarily conserved and likely to play a beneficial role. Moreover, both the Lqs system and the homologous sRNA sequences are absent in L. longbeachae. These observations are only suggestive, and experimental evidence is needed to link the Lqs quorum sensing system with this group of homologous sRNA sequences. It is noteworthy that deletion of all four Qrr sRNAs was needed to see a phenotype on the quorum sensing system (Lenz et al., 2004). Since only Lpr0001 and Lpr0069 seem to be expressed at a good level, it might be informative to generate a double lpr0001/lpr0069 mutant and monitor its effect on a population density-related phenotype.

One intracellular pathogen for which extensive identification and characterization of sRNAs have been and are being performed is Salmonella. In this species, outer membrane protein (OMP) expression is regulated by a network of sRNAs. One of them, InvR, is encoded on Salmonella pathogenicity island-1, acquired by horizontal gene transfer (HGT) and encoding the TTSS responsible for enterocyte invasion. Expression of this sRNA is dependent on HilD, a key regulator of TTSS expression. When the TTSS is expressed, InvR acts as a negative regulator of OmpD synthesis, one of the most abundant OMPs in Typhimurium. Indirect evidence suggests that repression of OmpD could stabilize the membrane in the context of TTSS expression, allowing successful translocation of bacterial effectors (Vogel, 2009). Therefore, InvR is thought to have helped the establishment of the TTSS sequences after HGT by repressing expression of OMPs that were incompatible with the virulence advantage provided by the TTSS (Vogel, 2009). It is therefore tempting to speculate that similar mechanisms exist in L. pneumophila to repress OMPs during expression of the Icm/Dot system, the Type IVA secretion system (lvr/lvh), or the Tra conjugative system. However, to date, no trans-encoded sRNAs have been identified in the vicinity of these systems, but, as described above, one cis-encoded sRNA is antisense to lvrA (lpg1259). The sRNA VrrA of V. cholerae is part of the membrane stress response pathway mediated by σE and targets ompA mRNA, presumably to limit synthesis of OMPs (Song et al., 2008). Deletion of vrrA leads to an increase in the synthesis of outer membrane vesicles, which are known to be involved in delivery of virulence factors to host cells (Mashburn-Warren and Whiteley, 2006). Moreover, VrrA seems to negatively regulate expression of the adhesion molecule Tcp and therefore affects intestinal colonization (Song et al., 2008). There is structural similarity between VrrA and LprD of L. pneumophila, and it is tempting to speculate a role for LprD in the regulation of OMP synthesis. However, structure comparisons of trans-encoded sRNAs have been of limited help for predicting function or targets, and an experimental strategy should be taken to determine whether LprD regulates OMP synthesis.

The L. pneumophila genome encodes homologs of the BarA/UvrY TCS, named LetA/LetS. This system was first identified as a positive regulator of flagellin expression (Hammer et al., 2002). Although a letA mutant still replicates in mammalian macrophages, it is defective for replication in A. castellanii (Gal-Mor and Segal, 2003; Lynch et al., 2003). Subsequently, LetA was shown to regulate expression of a number of virulence factors, including Mip, IcmR, IcmT, DotA, and the Icm/Dot effector RalF (Gal-Mor and Segal, 2003; Shi et al., 2006).
Based on these results, the consensus model is that during exponential phase, CsrA represses expression of post-exponential phase genes, either by inhibiting mRNA translation or by modulating their stability. During post-exponential phase, the LetA/LetS TCS, supposedly by inducing expression of CsrB homologs, inhibits the activity of CsrA, allowing expression of post-exponential traits (pigmentation, cytotoxicity, and motility). Computer predictions of CsrB homologs in several bacterial species identified two candidate CsrB homologs in L. pneumophila, based on the identification of intergenic regions enriched for the GGA motif (Kulkarni et al., 2006). These two sRNAs were named RsmY and RsmZ (Table 1), based on their short size, which more closely resembles the sRNAs involved in the RsmA (CsrA) system of P. aeruginosa (Lapouge et al., 2008). It was shown that: (i) LetA specifically binds upstream of rsmY and rsmZ, and the LetA/LetS TCS controls their expression; (ii) expression of rsmY and rsmZ in E. coli results in a phenotype similar to over-expression of csrB and csrC; and (iii) RsmY and RsmZ bind CsrA, confirming that RsmY and RsmZ are the missing link in the LetA/S-CsrA regulatory pathway (Hovel-Miner et al., 2009; Rasis and Segal, 2009b; Sahr et al., 2009; Figure 1E). Deletion of either rsmY or rsmZ has little impact on virulence, but deletion of both strongly impairs replication in both mammalian macrophages and A. castellanii (Sahr et al., 2009). It was also shown that increased expression of rsmY and rsmZ during post-exponential phase requires RpoS, probably due to the regulation of letS expression by RpoS (Hovel-Miner et al., 2009; Rasis and Segal, 2009b). Reduced expression of CsrA leads to an increase in rpoS expression, which suggests the existence of a positive feedback loop (Forsbach-Birk et al., 2004). However, deletion of rsmYZ, which should mimic over-expression of CsrA, also resulted in increased expression of rpoS (Sahr et al., 2009). Therefore, the interplay between LetS, RsmYZ, CsrA, and RpoS remains unclear and will require further investigation (Figure 3). Interestingly, RpoS and LetA, two major regulators of virulence-related traits in L. pneumophila, positively regulate expression of hfq during exponential growth (McNealy et al., 2005). Whether or not Hfq, in turn, affects RsmY and RsmZ function or stability is currently unknown (Figure 3). In P. aeruginosa, Hfq binds to and affects the stability of RsmY (Sonnleitner et al., 2008). Also, the LqsR/LqsS TCS is regulated by the CsrA system, which is similar to what was shown for V. cholerae (Lenz et al., 2005; Tiaden et al., 2007; Sahr et al., 2009). Microarray studies revealed that no genes were significantly affected by the deletion of either letA, letS, or rsmYZ during exponential growth in rich broth, in agreement with the current working model in which CsrA is active during exponential phase and the LetA/LetS/RsmYZ part of the regulatory cascade is silent (Sahr et al., 2009). However, during the post-exponential phase of growth, many genes were negatively affected by deletion of either letA or letS or of both rsmYZ, including a number of Icm/Dot effectors (RalF, SidC, SdeA, SdeC, SidF, and SdhB) (Sahr et al., 2009). Independently, it was shown that RsmY and RsmZ relieve the CsrA-mediated repression of the expression of ylfA/legC7, ylfB/legC2, and vipA (Rasis and Segal, 2009b). However, expression of flagellar genes was largely RsmYZ-independent but negatively affected by deletion of either letA or letS (Sahr et al., 2009).
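The computational screen of Kulkarni et al. (2006) referred to above rests on counting the CsrA-binding GGA motif in intergenic regions. A minimal, illustrative version of such a motif-density ranking is sketched below; the sequences and any cutoff one might apply are hypothetical.

```python
# Minimal sketch: rank intergenic regions by density of the CsrA-binding
# GGA motif, the signal used to flag CsrB/RsmY/RsmZ-like sRNA candidates.

def gga_density(seq: str) -> float:
    """Overlapping GGA occurrences per 100 nt."""
    seq = seq.upper()
    hits = sum(1 for i in range(len(seq) - 2) if seq[i:i + 3] == "GGA")
    return 100.0 * hits / len(seq) if seq else 0.0

if __name__ == "__main__":
    intergenic_regions = {
        "igr_001": "TTGGATCAGGAGGATTTGGAGGATT",   # hypothetical, GGA-rich
        "igr_002": "TTACCATGCATTACGATCCATGCAT",   # hypothetical, GGA-poor
    }
    ranked = sorted(intergenic_regions.items(),
                    key=lambda kv: gga_density(kv[1]), reverse=True)
    for name, seq in ranked:
        print(f"{name}\t{gga_density(seq):.1f} GGA per 100 nt")
```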
Although the vast majority of base-pairing sRNAs do not encode proteins, there are at least two examples where they do. In E. coli, the sgrS gene encodes an sRNA, SgrS, and a small protein, SgrT, that together regulate glucose uptake by different strategies (Wadler and Vanderpool, 2007). In S. aureus, the sRNA RNAIII targets virulence factors and functions as a key regulator of virulence, but also encodes a 26-amino-acid-long hemolysin (Boisset et al., 2007). Therefore, one should keep in mind that sRNAs are not necessarily non-coding. We recently identified two small RNA molecules, LstA and LstB, that are predicted to encode small proteins with transmembrane motifs (Faucher et al., 2010). Because small proteins are difficult to predict accurately from genomic sequences, the hunt for small RNA molecules also has the potential benefit of filling gaps in genomic annotation by identifying putative small proteins and correcting errors in genome annotation.

The CsrA/CsrB system

The CsrA protein was first identified in E. coli as a regulator of glycogen biosynthesis (Romeo et al., 1993). CsrA binds to GGA motifs in the 5′UTR of target mRNAs and affects their stability and/or their translation (Romeo, 1998). The sRNAs CsrB and CsrC contain many GGA motifs and can therefore bind multiple CsrA proteins, resulting in titration/sequestration of CsrA, thus relieving CsrA effects on the expression of its target mRNAs (Figure 1D). Transcription of CsrB and CsrC is regulated by the BarA/UvrY TCS. Both sRNAs are degraded by a pathway involving RNase E and CsrD, a cyclic di-GMP binding protein (Suzuki et al., 2006). Legionella pneumophila contains four CsrA homologs, of which one (lpg0781) was identified as able to complement a csrA deletion in E. coli (Fettes et al., 2001). The roles of the other CsrA homologs are currently unknown. In L. pneumophila, CsrA is responsible for the repression of post-exponential traits during exponential growth, including pigmentation, motility, and cell shortening (Fettes et al., 2001; Molofsky and Swanson, 2003; Forsbach-Birk et al., 2004). Moreover, CsrA is required for intracellular growth in both mammalian macrophages and A. castellanii (Molofsky and Swanson, 2003; Forsbach-Birk et al., 2004). Recently, it was shown that CsrA directly represses the expression of ylfA/legC7, ylfB/legC2, and vipA, which encode Icm/Dot effectors (Rasis and Segal, 2009b). Regulation of CsrA expression seems to be dependent on PmrA, another well-known virulence regulator (Rasis and Segal, 2009b).

Co-immunoprecipitation studies revealed that the L. pneumophila 6S RNA candidate physically associates with RNAP (Faucher et al., 2010). Therefore, the gene encoding this sRNA was named ssrS, in accordance with the published nomenclature recommendations (Barrick et al., 2005). Deletion of the ssrS gene reduced intracellular growth in human macrophages and in A. castellanii by 10-fold, despite no difference in Icm/Dot translocation activity or cytotoxicity (Faucher et al., 2010).
Also, the 6S RNA-deficient strain was unable to compete against the wild-type strain during intracellular growth but grew equally well in AYE broth. Thus, it seems that in L. pneumophila, 6S RNA is important for optimal expression of genes related to intracellular growth (Figure 3). In order to further dissect the effects of 6S RNA on gene expression, microarray analysis was used to monitor global gene expression patterns during the post-exponential phase of growth, when the 6S RNA is most abundant. When the ssrS deletion mutant strain was compared to the wild-type, it was observed that L. pneumophila 6S RNA negatively affects expression of six genes and promotes transcription of 127 genes during the post-exponential phase of growth, including those encoding a subset of Icm/Dot effectors (VipA, LegC5, SdeC, SdbC), small molecule transporters, and DNA repair enzymes, as well as genes involved in fatty acid metabolism, amino acid metabolism, and carbohydrate metabolism. This was somewhat in contradiction with the consensus understanding of 6S RNA as mainly an inhibitor of transcription from σ70-dependent promoters. However, a recent study revealed that 6S RNA is also an activator of transcription in E. coli, where it negatively affects transcription of 148 genes and positively affects expression of 125 genes (Neusser et al., 2010). In this study, genes affected by 6S RNA contain promoters that are specific for a variety of σ subunits, including σS, σ32, and σ54. Accordingly, 6S RNA seems also to bind EσS, although with much less affinity than for Eσ70 (Gildehaus et al., 2007). Therefore, it seems that 6S RNA regulation is not as clear-cut as first conceived, and these results suggest that many variations on a common theme may exist in different bacterial species. Factors that could influence 6S RNA regulation in L. pneumophila include distinctive usage of the different σ subunits, the strength of the promoters present in the genome, and the overall regulatory organization. In E. coli, RNAP can use 6S RNA as a template to generate 14-24 nt long de novo RNA molecules, named pRNA, originating from the central bulge on the 5′ strand (Wassarman and Saecker, 2006; Gildehaus et al., 2007). However, transcription from 6S RNA only occurs after a sudden increase in the NTP pool, for example when bacteria in post-exponential phase are diluted with fresh medium. Transcription from 6S RNA leads to the dissociation of 6S RNA from Eσ70, which is then free to transcribe genes again. This also causes destabilization of 6S RNA, due to increased access of nucleases to unbound 6S RNA or recognition of the 6S RNA-pRNA duplex by RNases (Wassarman and Saecker, 2006). Therefore, synthesis of pRNA seems to be a way to "reset" this regulatory system. Synthesis of pRNA probably occurs in other bacteria as well, including L. pneumophila, but at present direct evidence for this is lacking.
However, since CsrA affects mRNA translation, over-expression of RsmY and RsmZ could result in a stronger phenotype at the protein level. Interestingly, several genes positively affected by LetA/S and RsmYZ were predicted to encode GGDEF and/or EAL domains, including lpg0156 (cdgS4) and lpg2132 (cdgS20) (Sahr et al., 2009; Levi et al., 2010), suggesting that there may be crosstalk between the CsrA system and the cyclic di-GMP system (Figure 3), as was shown in E. coli (Jonas et al., 2008). Interestingly, wild-type bacteria that over-express cdgS20 are defective for intracellular multiplication (Levi et al., 2010).

The RNA polymerase/6S RNA system

The 6S RNA of E. coli was first identified and sequenced 40 years ago (Hindley, 1967; Brownlee, 1971). However, its function remained elusive until the year 2000, when Wassarman and Storz (2000) showed that 6S RNA binds to the σ70 and β/β′ subunits of RNAP and inhibits transcription of the rsd gene from its σ70-dependent promoter. Later, it was shown that, in laboratory E. coli strains, deletion of the 6S RNA gene, ssrS, renders cells more resistant to high pH and less able to compete against wild-type bacteria for survival in deep stationary phase (Trotochaud and Wassarman, 2004, 2006). In bacteria, the functional RNAP holoenzyme consists of the core subunits β/β′α2ω, which associate with a σ subunit that provides promoter specificity. In E. coli, the σ70-RNAP holoenzyme (Eσ70) is responsible for bulk transcription during exponential phase. During stationary phase, the σS subunit preferentially associates with the β/β′α2ω subunits of RNAP to allow transcription of stationary phase genes. The general consensus on the regulatory effect of 6S RNA is based on its preferential binding to Eσ70, compared with EσS, and the observation that binding of 6S RNA to Eσ70 inhibits its binding to DNA promoters (Figure 1E). Thus, in the presence of 6S RNA, Eσ70 is sequestered, promoting the formation of other holoenzymes, such as EσS, that are able to activate transcription from their specific promoters (Wassarman, 2007). Later, it was shown that σ70-dependent promoters negatively affected by the presence of 6S RNA contain a weak −35 element and an extended −10 element (Cavanagh et al., 2008). Thus, 6S RNA may function as a competitor for the binding of Eσ70 to a specific subset of promoters. Following bioinformatic prediction of sRNAs in L. pneumophila, one sRNA showed very high expression during the post-exponential phase of growth, similar to E. coli 6S RNA (Wassarman and Storz, 2000). Its predicted structure was highly similar to the published consensus structure of the widely distributed 6S RNA (Barrick et al., 2005; Trotochaud and Wassarman, 2005). All the previously identified conserved features of 6S RNA homologs were present in the L. pneumophila 6S RNA candidate, including: (i) a 22-nt closing stem with two small bulges; (ii) a central bulge composed of 14 nt on the 5′ strand and 13 nt on the 3′ strand, of low %GC content; (iii) two G-C base pairs surrounding the central bulge; and (iv) a terminal loop comprising four small bulges, resembling the consensus terminal loop of the γ-proteobacteria lineage of 6S RNAs.

Relief of stalled ribosomes by tmRNA

Stalling of ribosomes on an mRNA occurs when the translation machinery reaches the end of the transcript without encountering a stop codon. This is a consequence of the co-transcriptional translation that occurs in bacteria and of the translation of mRNAs that are being degraded from the 3′ end.
Stalling of the ribosome prevents its release from the mRNA and can cause decay of the active ribosome pool. Moreover, generation of incomplete proteins can be toxic to cells. Therefore, a system is needed to release the ribosome and target the incomplete protein for degradation. This function is performed by the tmRNA, which is universally conserved in the bacterial kingdom (reviewed in Keiler, 2007; Table 1). The name tmRNA comes from the two functions performed by this sRNA: it acts as a tRNA and is charged with alanine, and it acts as an mRNA, encoding a short peptide tag that targets a protein for degradation. The current model of tmRNA-mediated rescue of stalled ribosomes includes two proteins: SmpB and EF-Tu. A complex formed from alanyl-tmRNA-SmpB-EF-Tu enters the A-site of the stalled ribosome. The nascent protein is transferred to the alanyl-tmRNA. The complex then moves to the P-site, and the ribosome translates the short peptide tag encoded on the tmRNA, resulting in tagging of the protein and release of the mRNA. Deletion of tmRNA usually results in strong phenotypes, such as a marked reduction in growth rate, or even lethality (Keiler, 2007). In the intracellular pathogen Salmonella, deletion of tmRNA or the smpB gene results in a severe reduction in survival capacity and pathogenesis in mouse macrophages (Julio et al., 2000; Ansong et al., 2009). The effect of the deletion of tmRNA in L. pneumophila is currently unknown, but SmpB may be essential for axenic growth, since a smpB deletion mutant could not be constructed (Charpentier et al., 2008).

A note about 5′ and 3′ untranslated regions of mRNA

In addition to their coding sequences, mRNAs have two distinct regions that can perform regulatory functions: the 5′UTR and the 3′UTR (Gripenland et al., 2010). Both regions can vary greatly in length, from only a few to several hundred bases. Some 5′UTRs can adopt different structural states depending on conditions inside cells, including temperature (e.g., thermosensors), pH, and the presence of specific metabolites (Figure 1A). Such 5′UTRs are called riboswitches. One of the best-known riboswitches regulates translation of the prfA gene, a major virulence regulator of Listeria monocytogenes. At low temperatures, the prfA 5′UTR adopts a structural state that masks the RBS and thus prevents translation. In contrast, at 37°C, the 5′UTR structure changes, exposing the RBS and allowing translation of the PrfA protein and expression of virulence determinants (Johansson et al., 2002). No riboswitches have been identified in L. pneumophila as yet. However, temperature is known to affect biofilm formation by L. pneumophila (Piao et al., 2006). Moreover, optimal growth at high and low temperature requires specific stress response proteins: ClpP and RNase R, respectively (Charpentier et al., 2008; Li et al., 2010). Therefore, one may speculate that RNA thermosensors could be involved in L. pneumophila gene regulation to promote growth at extreme temperatures and to form biofilms. The small nucleotide cyclic di-GMP regulates many biological processes in bacteria, including biofilm formation, motility, and virulence (Hengge, 2009). Cyclic di-GMP is produced from two guanosine-5′-triphosphate molecules by diguanylate cyclases (DGCs, containing a GGDEF domain) and degraded selectively by phosphodiesterases containing either EAL or HD-GYP domains (Hengge, 2009). Therefore, the quantities and activities of DGC and EAL/HD-GYP enzymes determine the net intracellular concentration of cyclic di-GMP, which may be an integration point for many different signals. Consequently, the mechanism(s) of gene regulation by cyclic di-GMP has been the subject of intense investigation. A new riboswitch class that regulates gene expression by binding to the second messenger cyclic di-GMP was described and found in many different bacterial species (Sudarsan et al., 2008). Recently, our lab provided evidence that the cyclic di-GMP signaling pathway of L. pneumophila is involved in the regulation of intracellular growth and flagellin synthesis (Levi et al., 2010). Given the large number of DGC and EAL/HD-GYP enzymes present in the L. pneumophila genome, it is tempting to speculate that an as yet unidentified riboswitch may play a role in cyclic di-GMP regulatory pathways in L. pneumophila. However, no riboswitch has been identified in L. pneumophila so far, and it would therefore be interesting to perform a systematic search to identify possible candidates. In eukaryotes, 3′UTRs are important for the control of translation (Sonenberg and Hinnebusch, 2009). The importance of 3′UTRs for bacterial gene regulation is currently unclear but probably underestimated. Long overlapping 3′UTRs were identified in L. monocytogenes and in B. subtilis (Rasmussen et al., 2009; Toledo-Arana et al., 2009). Such 3′UTRs could affect the stability of convergent genes by a mechanism similar to cis-encoded base-pairing sRNAs (see below). Whole-genome tiling array experiments were used to find transcriptionally active regions in B. subtilis, which identified a group of genes with long (∼200 nt) homologous 3′UTRs (Rasmussen et al., 2009). Structure predictions revealed that those 3′UTRs fold into a highly stable Y-shaped double-stranded structure ending with a very short single-stranded tail. The authors suggested that such structures could target the mRNA to a location in the cell where the protein is needed (i.e., the membrane) or prevent access of RNases to the 3′ end of the transcript. Stable structures at the 3′ end of mRNAs block the activities of most 3′-exoribonucleases. RNase R is able to degrade double-stranded RNA molecules but needs a single-stranded tail of at least 10 nt (Vincent and Deutscher, 2006). In L. pneumophila, six actively transcribed 3′UTRs were identified (Table 1), ranging from 66 to 180 bases (Faucher et al., 2010). Whether or not they are involved in gene regulation requires clarification. Interestingly, the predicted structure of the gltX 3′UTR is similar to the Y-shape structure reported for the B. subtilis homologous 3′UTRs. Some bacterial species contain two or more 6S RNA homologs, such as Bacillus subtilis and Clostridium (Barrick et al., 2005; Trotochaud and Wassarman, 2005). A second 6S RNA homolog, named 6S2 RNA, was recently identified in the L. pneumophila genome (Weissenmayer et al., 2011).
Surprisingly, the authors could detect transcription from the strand opposite the one encoding 6S2 RNA, and suggested that its expression is regulated by a cis-acting sRNA. The 6S2 RNA is expressed in E and PE phase at similar levels, but the antisense transcript is only expressed in E phase, which could inhibit 6S2 function during E phase and therefore effectively result in functional 6S2 RNA expression only during PE phase. That would result in a situation similar to the 6S RNA of E. coli and the 6S RNA of L. pneumophila, which are only highly expressed in PE phase. The role of 6S2 RNA is currently unknown, and it would be interesting to investigate the phenotype of a mutant defective in both 6S RNA and 6S2 RNA.

The CRISPR immunity system

The CRISPR loci encode an sRNA-based immunity system against viruses and other invading DNA (Horvath and Barrangou, 2010). A locus consists of a leader sequence followed by several non-contiguous direct repeats separated by pieces of variable sequence called spacers. A spacer is a sequence of DNA (21-72 bp) originating from invading viral or plasmid DNA that has been integrated into the bacterial genome. Following transcription of a CRISPR locus, the multi-repeat, multi-spacer RNA is processed by CRISPR-associated (Cas) proteins into small units consisting of a spacer flanked by two partial repeats, called crRNAs. These crRNAs provide specificity to the system by guiding the Cas interference machinery to invading nucleic acids that match their sequence. Therefore, the spacers are remnants of past viral infections or plasmid invasions and can be viewed as a form of acquired immunity. New spacers can be added at the leader end of the CRISPR locus. In L. pneumophila, CRISPR loci have been identified in the Lens, Alcoy, and Paris strains, but not in Philadelphia-1 (D'Auria et al., 2010). The Lens strain possesses two CRISPR loci, one on the chromosome and the other on a plasmid. The Alcoy and Lens CRISPR systems are almost identical, composed of three cas genes (cas1, cas3, and csy4) and 55 or 52 repeats, respectively, of 27 bp, with a one-bp difference between the two strains. The Paris locus is not related to the Alcoy/Lens loci; it is composed of cas1, cas2, and cas4 and contains 34 repeats of 37 bp. BLAST analysis of the spacer sequences did not identify any homologous sequences in the GenBank database. It is noteworthy that four bacteriophages of L. pneumophila have been identified from environmental water samples, but their sequences are unknown (Lammertyn et al., 2008). There is currently no evidence of any implication of the CRISPR system in the regulation of virulence-related traits in L. pneumophila. However, in P. aeruginosa, the CRISPR system is needed for bacteriophage-mediated inhibition of biofilm formation and swarming motility following lysogenic infection with bacteriophage DMS3 (Zegans et al., 2009). This suggests that the combination of lysogenic infection and the presence of an active CRISPR system may have an impact on the regulation of group behavior traits. Whether or not this is relevant in the context of host infection by bacterial pathogens still needs to be determined.

The next step: target identification and characterization of L. pneumophila sRNAs

Now that a number of actively transcribed sRNAs have been identified in L. pneumophila, further research should focus on the determination of their functions and their specific targets. First of all, it is important to define what a true target is. Essentially, a true target is an mRNA or a protein that physically interacts with the sRNA and whose function, stability, or translation is affected by this interaction (Vogel and Wagner, 2007). The inferred targets of cis-encoded base-pairing sRNAs are obvious: they are the mRNAs encoded on the complementary strand. However, even in this case, molecular evidence is needed to establish the link between the two molecules and the effect of the sRNA on the target mRNA. For trans-encoded base-pairing sRNAs, there are a priori no indications of what the target might be.
As a start, it could be useful to use a bioinformatic approach to generate a list of putative targets that can then be tested experimentally. Target prediction usually relies on the estimation of optimal hybridization scores between sRNAs and mRNA targets and often includes the effects of stable secondary structures (a toy version of such scoring is sketched below). Many web servers are available for genome-wide prediction of mRNA targets, including, but not limited to, sRNATarget (Cao et al., 2009) and TargetRNA (Tjaden et al., 2006). Target prediction could also be used in conjunction with experimental genome-wide approaches such as transcriptional profiling. Comparison of the transcription profile of a mutant strain or an over-expressing strain with that of the wild-type strain can highlight putative targets. One has to keep in mind that any observed effects on transcript expression could be indirect, when, for example, a transcriptional regulator is the true target. Since the effect of some sRNAs can only be seen at the protein level, the effect of an sRNA is not necessarily observable at the steady-state RNA level. Comparison at the proteome level, by 2D gel analysis, could be more informative, but because of detection limitations, poorly expressed proteins are usually missed. Comparison of the sRNA deletion mutant, the over-expressing strain, and the wild-type strain by SDS-PAGE and Coomassie staining may be sufficient to suggest a putative target. Then, a protein of interest can be identified by mass spectrometry analysis. The target of the GlmY sRNA, a polycistronic mRNA encoding glmUS, was identified with this strategy (Urban et al., 2007). A more direct approach to finding the mRNA target of a trans-encoded sRNA is to use the sRNA as bait to fish out the target. In the case of an sRNA that interacts with Hfq, the sRNA-Hfq complex can be preloaded onto an affinity purification column and incubated with extracted mRNA. After washing, the eluted mRNAs are converted to cDNA and identified by sequencing or by microarray analysis. Such a method was used to identify the target of the E. coli RydC sRNA, an ATP-binding cassette permease (Antal et al., 2005). Alternatively, an sRNA could be tagged with biotin, bound to streptavidin-coated magnetic beads, and incubated with extracted mRNA. Identification of the captured mRNA could be performed as explained above. This method has been used to identify two targets, the ompA and ompC mRNAs, of the RseX sRNA of E. coli (Douchin et al., 2006). The identification of protein targets of protein-binding sRNAs is somewhat similar to what was described above for mRNA-binding sRNAs. However, in this case, secondary structures are often very well conserved, as illustrated by 6S RNA and the CsrB homologs, and therefore structure predictions could serve as a guide. Then proteomic studies could be undertaken, or more direct approaches, such as the streptavidin-binding aptamer tag described above, could be used (Windbichler et al., 2008). Said et al. (2009) have performed a systematic analysis of the use of different aptamers and configurations to identify protein targets of sRNAs.
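As a toy illustration of the hybridization-score idea described above, the sketch below slides a short sRNA "seed" along a candidate mRNA region and scores antiparallel Watson-Crick and G:U wobble pairing. Real tools such as TargetRNA use thermodynamic models and account for secondary structure; the scoring weights and sequences here are illustrative assumptions.

```python
# Minimal sketch: naive hybridization scoring for sRNA target prediction.

PAIR_SCORE = {("A", "U"): 2, ("U", "A"): 2, ("G", "C"): 3, ("C", "G"): 3,
              ("G", "U"): 1, ("U", "G"): 1}  # G:U wobble scored weakly

def duplex_score(srna_seed: str, mrna_window: str) -> int:
    """Antiparallel pairing score of a seed against an equal-length window."""
    return sum(PAIR_SCORE.get((a, b), 0)
               for a, b in zip(srna_seed, reversed(mrna_window)))

def best_site(srna_seed: str, mrna: str):
    """Return (score, offset) of the best seed match within the mRNA region."""
    k = len(srna_seed)
    return max((duplex_score(srna_seed, mrna[i:i + k]), i)
               for i in range(len(mrna) - k + 1))

if __name__ == "__main__":
    seed = "ACCUCCU"                 # hypothetical sRNA seed (5'->3')
    mrna = "GGCAAGGAGGUAAUGAAACGU"   # hypothetical region around an RBS
    score, pos = best_site(seed, mrna)
    print(f"best score {score} at offset {pos}")  # best score 18 at offset 4
```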
Acknowledgments

This work was supported by PHS award AI064881 to Howard A. Shuman. Sébastien P. Faucher was supported by a post-doctoral fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC) and from the Fonds de recherche en santé du Québec (FRSQ).
We would like to thank three anonymous reviewers for helpful comments and suggestions.

Concluding remarks

Increasing evidence points to important and broad implications of sRNAs in the regulation of the life cycles, stress responses, and virulence properties of several pathogenic bacteria (Papenfort and Vogel, 2010). This is evident in L. pneumophila, where three sRNAs, 6S RNA, RsmY, and RsmZ, are already known as major determinants of virulence regulation. However, this is probably only the tip of the iceberg, and it is likely that other sRNAs are involved in regulation of virulence and other traits, such as biofilm formation and the responses to environmental stresses. In the near future, important goals for characterizing the specific roles of sRNAs in L. pneumophila biology are the identification of sRNA targets and determining the phenotypes of mutants that are defective in the production of individual and multiple sRNA species.
Comparison of Immediate and Intermediate-Term Results of Intravascular Ultrasound Versus Angiography-Guided Palmaz-Schatz Stent Implantation in Matched Lesions

Background
Intravascular ultrasound (IVUS) provides more precise information than angiography about vascular dimensions. This information is used by some centers to optimize intracoronary stent implantation. There are no direct comparisons of the effects on restenosis of optimal IVUS-guided versus angiography-directed high-pressure stenting.

Methods and Results
Lesions of patients who had a 6-month angiographic follow-up study were eligible for matching. From 445 consecutive lesions treated by Palmaz-Schatz (P-S) stenting guided by IVUS (IVUS group) in Milan, 173 lesions were individually matched with 173 of 476 consecutive lesions treated by P-S stenting directed by angiography (Angio group) in Hamburg. Lesions were selected by a computerized program according to baseline clinical, angiographic, and procedural variables. Immediate and 6-month angiographic results were retrospectively compared, distinguishing an "early phase" from a "late phase." This distinction was based on the more aggressive dilation strategy, with larger balloons and more demanding IVUS criteria for optimal stent expansion, used in Milan in the early phase. In both phases, a larger minimum lumen diameter (MLD) immediately after stenting and after 6 months was achieved in the IVUS group than in the Angio group. In the early phase, the dichotomous restenosis rate was lower in the IVUS group than in the Angio group (9.2% versus 22.3%; P=.04). In the late phase, there was no difference in restenosis between the groups (22.7% versus 23.7%; P=1.0).

Conclusions
In matched lesions treated with high-pressure stenting, IVUS guidance achieved a larger MLD than angiographic guidance. However, in the IVUS group, the restenosis rate was lower only in the early phase, when balloons larger than currently used were selected to maximize the stent lumen area.

dextran (dextran 40, given at a dose of 100 mL/h for 2 hours before stenting and at a dose of 50 mL/h during and after the procedure, for a total volume of 1 L). A bolus of 10 000 U heparin was given after sheath insertion, with a repeat bolus of 5000 U given as needed to maintain the activated clotting time >250 seconds in Milan, or hourly in the event of a prolonged procedure in Hamburg. Only P-S tubular slotted stents (Johnson & Johnson Interventional Systems) were implanted in the patients who entered this study: the standard 15-mm PS153 stent with a central linear articulation, a disarticulated 7-mm PS153 stent, a 14-mm PS154 stent, a 10-mm PS104 stent, an 18-mm PS204 stent with multiple spiral bridges, a 10-mm biliary stent, and a 20-mm renal stent. For calculating the number of stents per lesion, the short stents (<10 mm) were counted as half stents. Biliary stents were counted as one stent each. All other stents were counted as one stent. Indications for stenting and their definitions were as previously reported.
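The stent-counting convention above is easy to state programmatically. The following is a minimal sketch of that rule; the lesion data in the examples are hypothetical.

```python
# Minimal sketch of the stent-counting rule stated above: short stents
# (<10 mm) count as half a stent; biliary and all other stents count as one.

def stents_per_lesion(stent_lengths_mm):
    """Count stents for one lesion under the <10 mm = half-stent rule."""
    return sum(0.5 if length < 10 else 1.0 for length in stent_lengths_mm)

if __name__ == "__main__":
    # e.g., one lesion treated with two disarticulated 7-mm halves
    print(stents_per_lesion([7, 7]))    # 1.0
    # e.g., a standard 15-mm stent plus a 10-mm biliary stent
    print(stents_per_lesion([15, 10]))  # 2.0
```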
In the late phase, the advantage in MLD achieved with IVUS guidance was slight and did not translate into a lower restenosis rate (22.7% versus 23.7%; P=1.0). Our findings confirm the importance of the immediate result in determining the late result, as proposed by Kuntz et al, but deviate from the model of restenosis proposed by these authors in that the higher acute gain observed in the IVUS group versus the Angio group was not associated with a greater late loss, which was similar (1.0 to 1.1 mm) in the two groups in both phases. Coronary artery stenting prevents negative remodeling; thus, late loss within a stent results almost exclusively from intimal hyperplasia, as recently demonstrated by a serial IVUS study. The present study demonstrates that in the IVUS group, the more aggressive balloon dilation strategy used in the early phase, which possibly increased vessel wall injury, was not accompanied by a greater hyperplastic response. A possible mechanism for this result could be that, after stenting, the extent of subsequent intimal hyperplasia is more dependent on plaque mass before intervention, as we have recently reported, with the preintervention plaque area measured by IVUS, than on the greater final strain (overstretch) applied to the vessel wall by a larger balloon correctly sized to the media-to-media vessel dimensions.

A clear selection bias was introduced by including only the patients who had a follow-up angiographic study. However, the results observed in this selected population are likely to reflect the results of the overall population. The patients were selected by a computerized procedure from a larger patient cohort, which can be considered representative of the initial cohorts. In fact, in both centers, after stenting, all patients were scheduled for a coronary angiography at 6 months, and a similar percentage of patients (61.4% of the Milan population versus 71.9% of the Hamburg population) had an angiographic follow-up. The rest did not undergo a repeat angiographic study, mostly because they were asymptomatic and refused the study.

In the late phase, the IVUS criterion for optimal stent expansion was the achievement of a stent CSA equal to or greater than the distal reference lumen CSA. In this phase, IVUS-guided stent optimization was generally performed with noncompliant balloons inflated at high pressure. The balloons were selected with a calculated nominal CSA 25% to 30% larger than the distal lumen CSA, based on the observation that the ratio of the final stent CSA to the calculated nominal balloon CSA was 0.75 to 0.80 in the early phase. The rapid change in the final balloon-to-artery ratio over time in Milan, reflecting the change in IVUS-guided balloon selection compared with Hamburg (angiography-guided), is shown in Fig 1. The third IVUS criterion for optimal stent expansion was that the nonstented adjacent inflow and outflow segments should not reveal evidence of a significant lesion, defined as a plaque area >60% of the total vessel lumen.

Figure 1. Smoothed curve plot (moving average of 7 data points) of the ratio of the nominal diameter of the final balloon selected to the angiographic reference vessel diameter in Milan and Hamburg over time. In Hamburg, the dilation strategy did not change over this time period. In Milan, when the target for defining IVUS success was achievement of 60% of the average of the proximal and distal total vessel CSA, postdilations were performed with angiographically oversized balloons (early phase). From September 1993, the IVUS criterion for optimal stent expansion was rapidly altered (late phase). The goal was to achieve a stent CSA equal to or greater than the distal lumen CSA. In this phase, use of smaller balloons inflated at higher pressure resulted in a significantly lower balloon-to-artery ratio than in the early phase.
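The balloon-sizing arithmetic described above follows from the circular cross-section formula CSA = π(d/2)². A minimal sketch is given below; the 3.0-mm distal lumen and the mid-range constants (27.5% oversizing, 0.775 stent/balloon CSA ratio) are illustrative values chosen within the ranges stated in the text.

```python
# Minimal sketch of the balloon-sizing arithmetic: pick a balloon whose
# nominal CSA exceeds the IVUS distal lumen CSA by 25-30%, then estimate the
# final stent CSA from the observed stent/balloon CSA ratio of 0.75-0.80.

import math

def csa_mm2(diameter_mm: float) -> float:
    """Cross-sectional area of a circle of the given diameter."""
    return math.pi * (diameter_mm / 2.0) ** 2

def balloon_diameter_for_target(distal_lumen_csa: float,
                                oversize: float = 0.275) -> float:
    """Balloon diameter whose nominal CSA exceeds the lumen CSA by `oversize`."""
    target_csa = distal_lumen_csa * (1.0 + oversize)
    return 2.0 * math.sqrt(target_csa / math.pi)

if __name__ == "__main__":
    distal_lumen = csa_mm2(3.0)                      # hypothetical 3.0-mm lumen
    balloon = balloon_diameter_for_target(distal_lumen)
    expected_stent_csa = csa_mm2(balloon) * 0.775    # mid-range recoil ratio
    print(f"distal lumen CSA   {distal_lumen:.1f} mm^2")   # ~7.1 mm^2
    print(f"balloon diameter   {balloon:.2f} mm")          # ~3.39 mm
    print(f"expected stent CSA {expected_stent_csa:.1f} mm^2")  # ~7.0 mm^2
```

Note that, with these mid-range values, the expected final stent CSA comes out close to the distal lumen CSA, which is consistent with the stated late-phase goal of a stent CSA equal to or greater than the distal reference lumen CSA.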
Angiographic Analysis

Both in Milan and in Hamburg, coronary angiograms were analyzed by experienced technicians not involved in the stenting procedure. Angiographic measurements of baseline, final, and follow-up angiograms were performed in a single matched view (working projection) at end diastole. The lesions were measured with a digital electronic caliper (Brown and Sharp) from an optically magnified image. The guiding catheter was used as the scaling device for calibration. Previous studies have shown that digital calipers correlate closely with computer-assisted methods, with a low interobserver and intraobserver variability. The diameters of the proximal and distal lumen reference segments were averaged to obtain a mean reference diameter. MLD and %DS were measured in the baseline, posttreatment, and follow-up angiograms. Lesion length was measured on the baseline angiogram as the distance between the proximal and distal shoulders of the lesion, detected as the point at which the lumen becomes compromised by 50%. Lesions were characterized according to the modified American College of Cardiology/American Heart Association score. Thrombus was defined as a filling defect seen in multiple projections, surrounded by contrast, in the absence of calcification.

Patient, Baseline Angiographic, and Procedural Characteristics

The baseline clinical characteristics of the patients in the IVUS (173 lesions in 158 patients) and Angio (173 lesions in 154 patients) groups are shown in Table 1. In the Angio group, compared with the IVUS group, left ventricular ejection fraction was higher, the number of patients with two-vessel disease was lower, there were more patients with three-vessel disease and hypercholesterolemia, and fewer patients were current smokers. Sex, previous angioplasty at the same site, diabetes, and unstable angina were not different between the two groups. Matching for angiographic and procedural variables resulted in two groups of lesions with superimposable baseline angiographic and procedural characteristics. In the Angio group, however, the percentage of calcific lesions identified by angiography was lower than that in the IVUS group (Table 2). Furthermore, in the Angio group, the percentage of lesions in which a half stent per lesion was deployed was higher, with a lower percentage of lesions in which one and two stents per lesion were implanted (Table 3). Consequently, in the Angio group, the mean total number of stents per lesion was slightly lower (1.05±0.46 versus 1.17±0.44, P=.014).

Clinical Events

As shown in Table 4, no stent thrombosis occurred in either group. Moreover, the percentages of patients who had MI and CABG during hospitalization and after discharge were not statistically different between the groups. The percentage of patients who needed a repeat percutaneous intervention during follow-up was lower in the IVUS group than in the Angio group (5.1% versus 11.7%; P=.05). However, the percentage of patients who needed a repeat revascularization (CABG+PTCA) was not significantly different between the groups (7% versus 11.7%; P=.17). Table 5 summarizes the quantitative angiographic results of the matched lesions in the IVUS and Angio groups during the early phase and the late phase.
Reference vessel diameter, MLD, and %DS immediately before stenting were similar in the IVUS and Angio groups, indicating that the matching process was adequate.

Comparison of Quantitative Angiographic Results of the Matched Lesions in the Early Phase and Late Phase

As illustrated in Fig 2, IVUS-guided stent deployment produced a significantly greater acute gain than angiography-guided stenting in both the early phase and the late phase. A similar late loss of 1.0 to 1.1 mm was observed at 6-month follow-up angiography in the two groups in both phases. This resulted in a higher net gain and a lower loss index in the IVUS group than in the Angio group, with a statistically significant difference in the early phase, which translated into a lower dichotomous restenosis rate (9.2% versus 22.3%; P=.04). In the late phase, there was no difference in restenosis between the groups (22.7% versus 23.7%; P=1.0). Furthermore, in the early phase, the balloons used to optimize stent expansion were larger in the IVUS group than in the Angio group, with a higher balloon-to-artery ratio (Fig 1). In the late phase, although the size of the final balloon was greater in the IVUS group than in the Angio group, the balloon-to-artery ratio was not different between the groups. In addition, in the early phase, the maximal balloon inflation pressure was lower in the IVUS group than in the Angio group. Finally, lesions were slightly longer in the IVUS group than in the Angio group. However, the calculated total length of the stented lesion was not different between the groups.

Late loss (and restenosis) has been reported to be influenced by some clinical, angiographic, and procedural factors. The effects of these factors in our study were well balanced in the two groups; in particular, the higher percentages of patients with hypercholesterolemia and three-vessel disease in the Angio group were counterbalanced by the lower percentage of patients currently smoking, by the slightly shorter lesion length, and by the lower percentage of calcific lesions. Furthermore, the differences in the type and number of stents per lesion were negligible. In fact, in Milan, two disarticulated 7-mm-long PS153 stents, instead of one standard 15-mm-long PS153 stent, were implanted in 53% of the lesions in which only disarticulated PS153 stents were implanted. This result is also inferable from the fact that, although the percentage of half stents implanted was higher in the IVUS group than in the Angio group (46.2% versus 28.1%), the percentage of lesions in which only a half stent was implanted was lower in the IVUS group (6.9% versus 20.2%). Finally, although the mean number of stents per lesion was higher in the IVUS group than in the Angio group (1.17 versus 1.05), the majority of patients had one stent per lesion, and the calculated stented lesion length was equal in both groups in both phases (Table 5). Other established risk factors for restenosis (diabetes, unstable angina, and chronic total occlusion) were not different between the groups.
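For readers unfamiliar with the quantities compared above, the sketch below computes them from minimum lumen diameters. These definitions (acute gain, late loss, net gain, loss index) are the conventional QCA ones and are assumed here rather than spelled out in the paper; the example MLD values are illustrative.

```python
# Minimal sketch of standard QCA restenosis metrics from MLD measurements.

def restenosis_metrics(mld_pre: float, mld_post: float, mld_followup: float) -> dict:
    acute_gain = mld_post - mld_pre          # lumen gained by the procedure
    late_loss = mld_post - mld_followup      # lumen lost by 6 months
    return {
        "acute_gain": acute_gain,
        "late_loss": late_loss,
        "net_gain": mld_followup - mld_pre,
        "loss_index": late_loss / acute_gain if acute_gain else float("nan"),
    }

if __name__ == "__main__":
    # e.g., a lesion dilated from 0.8 to 3.2 mm with 1.1 mm late loss,
    # consistent with the 1.0-1.1 mm late loss reported in the text
    print(restenosis_metrics(mld_pre=0.8, mld_post=3.2, mld_followup=2.1))
```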
After stenting, the inhibition of intimal hyperplasia would be the ideal therapy to reduce restenosis. The results of the present study indicate that restenosis can also be reduced mechanically by trying to achieve as large an MLD as possible, and that IVUS guidance is better than angiographic guidance for achieving this goal, because angiography may underestimate the extent of atherosclerotic disease in coronary arteries that undergo compensatory enlargement, thus leading to underestimation of the size of the final balloon that can be selected to safely expand the stent and maximize the stent lumen CSA. The use of IVUS guidance allows one to oversize the balloon (by angiographic standards) and to obtain a larger final MLD that would not be achieved by inflating a smaller balloon at higher pressure.

IVUS-Guided Stent Optimization

In the early phase, IVUS-guided stent optimization was performed with larger balloons than in the late phase, and with this strategy, a greater increase in the stent CSA was obtained after stent optimization (51±36% versus 28±35%; P=.002). Although there was no difference in the angiographic reference vessel diameter, the vessel CSA measured at the distal reference site by IVUS was significantly smaller in the lesions treated during the late phase, as a result of a lower percentage of plaque area. However, this finding cannot by itself explain the lower increase in CSA achieved in the late phase, which probably would have been greater if the final balloons selected had been larger and sized to the IVUS average distal vessel diameter.

IVUS Guidance Permits the Use of Balloons Traditionally Considered Oversized

In the IVUS group, in the 76 matched lesions treated in the early phase, there were no more complications or vessel ruptures compared with the 97 lesions treated in the late phase or with the lesions in the Angio group, suggesting that in the presence of arterial remodeling identified by IVUS, target lesions can safely accommodate larger balloons. This hypothesis is supported by the favorable results of the CLOUT trial, in which oversized balloons were safely used in 73% of the lesions in which IVUS identified the presence of arterial remodeling and the absence of heavy calcification.
Virtual reality as a non-pharmacologic analgesic for fasciotomy wound infections in acute compartment syndrome: a case report

Background
Fasciotomy is a life-saving procedure to treat acute compartment syndrome, a surgical emergency. As fasciotomy dramatically improves wound pain, it should be performed as soon as possible. Moreover, delays in the use of fasciotomy can increase the rate of wound infections. Once the fasciotomy wound is infected, pain control is achieved via the long-term use of opioids or anti-inflammatory analgesics. However, the administration of high doses of opioids may cause complications, such as respiratory depression, over-sedation, and constipation. Therefore, treatment methods other than narcotic administration should be established to better manage the pain caused by fasciotomy wound infections. Virtual reality has recently been introduced in analgesic therapy as a replacement for, or complement to, conventional pharmacological treatments. Its use has been extensively studied in the pain management of patients with burns. An increasing number of painful conditions are being successfully treated with virtual reality. Here, we report a case of acute compartment syndrome complicated by fasciotomy wound infection.

Case presentation
A 40-year-old Japanese man suffering from acute compartment syndrome of his leg due to a car accident trauma was treated with a fasciotomy to decompress intra-compartmental pressure and restore tissue perfusion, and was admitted to an intensive care unit. Unfortunately, as the open fasciotomy wound was complicated by infection, he complained of hyperalgesia and severe pain during wound debridement. He was therefore given acetaminophen and high-dose intravenous patient-controlled analgesic fentanyl (35 μg/kg per day) to reduce the pain. Despite these efforts, the pain was poorly controlled, and opioid-induced side effects such as respiratory depression were observed. An immersive virtual reality analgesic therapy aimed at distraction and relaxation was used and effectively alleviated the pain. Three sessions of virtual reality analgesic therapy over 2 days produced sustainable analgesic effects, which led to a 25–75% dose reduction in fentanyl administration and the concomitant alleviation of respiratory depression.

Conclusions
This case suggests the feasibility of virtual reality analgesic therapy for pain management of fasciotomy wound complications in acute compartment syndrome. Virtual reality represents a treatment option that could reduce analgesic consumption and eliminate opioid-induced respiratory depression in the treatment of fasciotomy wound infection.

Introduction
Acute compartment syndrome (ACS) is a serious complication of limb trauma, in which the swelling of injured tissues and/or a concomitant hematoma causes an increase in intra-compartmental pressure, thereby constraining blood perfusion of the tissues [1]. Ischemia induces neurological symptoms such as numbness and pain. If low or absent blood perfusion persists, it can ultimately cause tissue necrosis, irreversibly damaging the limb [1,2]. Prompt execution of fasciotomy is recommended to decompress the intra-compartmental pressure and restore tissue perfusion once a severe rise in pressure is either confirmed by invasive monitoring or assessed from clinical symptoms [3].
Although early fasciotomy is recommended for those suffering from ACS, this invasive procedure leaves the wound open, thereby endangering patients with potential complications such as wound infections, which have been reported to occur in 10-30% of cases [4,5]. Severe pain is a hallmark of ACS and strongly suggests the presence of acutely increased intra-compartmental pressure and resulting tissue ischemia, thereby prompting the need for a decompressive fasciotomy [1,2]. When decompression of the intra-compartmental pressure is achieved by fasciotomy, the intensity of the pain should gradually cease. A short course of opioid treatment is then suitable for pain management in patients with ACS [6]. However, pain can be recurrent and can even worsen when a fasciotomy wound is compounded by infection, thereby complicating the use of opioids for pain management [7][8][9]. Virtual reality (VR) has recently emerged as a novel analgesic therapy that could replace or complement conventional pharmacological treatments, and has been extensively studied in the pain management of patients with burns. The list of painful conditions successfully treated with VR is growing. Here we add to the list a case of ACS complicated by fasciotomy wound infection. In this case, the patient's pain was difficult to manage with opioids due to intolerable adverse effects such as nausea and respiratory depression. Case presentation A 40-year-old Japanese man, a truck driver, suffered multiple traumas during a road car crash that severely damaged the front part of his truck. While he was trapped in the driver's seat, his lower-right limb was strongly pinched against the dashboard for 8 hours until he was saved by a rescue team. He was then transferred to an intensive care unit (ICU) at Mie University Hospital, a tertiary academic medical care center. He was fully alert and complained of severe pain, along with numbness and weakness, in his right limb. A full-body computed tomography scan revealed multiple rib and lumbar compression fractures. His right lower leg had no fractures; however, the muscles in his lower leg were significantly swollen after the prolonged compression. After placing intramuscular catheters to monitor the intra-compartment pressures of his lower limb, the trauma team found that the pressures in the anterior, posterior, medial, and lateral side compartments had risen to approximately 50 mmHg. Based on these findings, he was diagnosed as having ACS, and quickly underwent a fasciotomy of his right lower limb. The fasciotomy wounds were left open (Fig. 1, top panels), and were cleaned daily and wrapped in a dressing containing an antibiotic ointment. After the fasciotomy, the neurological deficits of his right limb were gradually restored and the pain intensity was reduced, remaining manageable by opioid treatment for at least 4 days (Fig. 2). He exhibited rhabdomyolysis with increased levels of serum creatine phosphokinase (CPK) and acute renal failure, possibly due to the ischemia-reperfusion injury associated with ACS (Fig. 3). His CPK level progressively decreased and returned to normal levels on day 10, indicating resolution of the rhabdomyolysis. The acute renal failure temporarily required hemodialysis for 5 days, and he subsequently recovered.
The pain at the fasciotomy wounds on his right leg was well managed for 4 days (days 1 to 4), as shown by a score of 5 points on the Numeric Rating Scale (NRS) for pain (0-10) [10], via intravenous patient-controlled analgesia (IV-PCA) fentanyl at a dose of approximately 12 μg/kg per day (Fig. 2). On day 5, owing to the onset of nausea, IV-PCA fentanyl was discontinued and replaced with a drip infusion of acetaminophen. However, as acetaminophen could not sufficiently control the pain, IV-PCA fentanyl was resumed along with antiemetic agents on day 6. On day 8, as our patient's pain at the wounds intensified, we observed the appearance of defective granulation and necrotic tissue at the site of the fasciotomy despite adequate infection prevention. In addition, the wounds exuded a foul odor and were strongly suggestive of infection (Fig. 1, bottom panels). To control the wound infections, debridement of the infected necrotic tissues was performed. Although the wound infections appeared to be under control with antibiotics and debridement, our patient continued to complain of severe pain both during and between procedures, as shown by NRS scores of 6-10 points. He described the pain as feeling as if he were being stabbed with a pin or a needle or, alternately, as a numbness resulting in a dull sensation. The pain began abruptly, irrespective of body movement, and lasted for approximately 1 hour. The dose of IV-PCA fentanyl used to manage the pain was progressively increased up to 35 μg/kg per day from day 10 to 13. He also presented with hyperalgesia of his right limb beginning on day 10. In addition, nausea and poor appetite worsened as side effects of high-dose fentanyl, and he experienced loud snoring and excessive daytime sleepiness due to opioid-induced respiratory depression; as such, he was administered oxygen to prevent hypoxia. We consulted an in-hospital pain control team about a pain management strategy to replace/complement opioid administration. They confirmed the presence of opioid-refractory severe pain and hyperalgesic states, and proposed the use of VR analgesia. On day 14, he was provided with an immersive VR experience using the Samsung Gear Oculus headset fitted with a Samsung Galaxy S7 phone loaded with the AppliedVR (AVR) healthcare platform (AppliedVR Inc., Los Angeles, CA 90067, USA), which delivers various VR analgesia program modules. Of the 20+ VR programs designed to distract and/or relax, the program "Dream Beach" was selected according to our patient's preference for the sea. The VR program simulates the experience of being at the beach beside a calm sea on a sunny day. Each session lasted for 30 minutes and three sessions were administered over 2 days. The VR analgesic proved effective, as his pain rating fell dramatically from 10 to 6 points. On day 15, the second day of VR administration, its analgesic effects proved so successful that bolus infusions of IV-PCA fentanyl were no longer required. On day 16, the pain rating remained at 2 points under a baseline IV-PCA infusion of 8.8 μg/kg per day fentanyl, and as the wound pain became manageable, he was transferred from the ICU to a general surgery ward at a secondary care hospital. On day 28 after the fasciotomy, he was free of opioids and the wounds were closed using split-thickness skin grafting.
Discussion and conclusions In this case report, we present a successful application of an immersive VR experience to alleviate severe pain from the open fasciotomy wounds of a patient with ACS who had been treated with a high-dose (that is, 35 μg/kg per day fentanyl) IV-PCA opioid, which became intolerable due to adverse effects such as nausea and respiratory depression. The analgesic effects brought about by this immersive VR experience made it possible to reduce the doses of IV-PCA fentanyl by 25–75%, which alleviated the opioid-induced respiratory depression. This case study not only confirms previous reports showing the effectiveness of VR in alleviating pain during wound care and physical therapy in patients with burns [11,12], but also illustrates the feasibility of using a VR analgesic to manage pain in a patient with ACS. In patients with ACS, the pain during the early phase (that is, within several days after fasciotomy) may stem from physical tissue damage and inflammation, as well as from ischemia and ischemia-reperfusion, which can damage neuronal and non-neuronal cells [1,6]. Generally speaking, pain in the early phase gradually decreases as inflammation resolves and the wounds heal. Such pain is usually manageable in the early phase with opioids, as occurred in our case [6]. In contrast, pain in the late phase (that is, several days after the fasciotomy) may indicate the presence of complications associated with ACS and fasciotomy, such as wound infections and neuropathic pain [2,4]. Pain in the late phase is often not well managed with opioids, as was observed in this case. This is partly due to the fact that prolonged use of opioids is accompanied by several adverse effects, from nausea, itching, and constipation to respiratory depression and the development of tolerance and dependency [13]. Of note, opioids have been shown to exhibit a wide range of immune-suppressive effects [14], thereby potentially worsening wound infections [15]. Hyperalgesia also occurred in our case. Hyperalgesia can be divided into three types: primary, secondary, and opioid-induced. Primary hyperalgesia is caused by the exacerbation of pain due to tissue damage. Secondary hyperalgesia involves the spread of pain to undamaged tissue; in general, pain spreads to the areas surrounding the damaged tissue. Opioid-induced hyperalgesia is thought to be induced by the administration of opioids such as morphine and fentanyl to relieve pain [16]. However, the narcotic side effects can be severe enough to warrant discontinuation of opioid treatment. In the current case, as primary or opioid-induced hyperalgesia may have occurred, it was necessary to reduce the dose of fentanyl as quickly as possible. Thus, it is of great clinical significance that a VR analgesic proved effective in alleviating late-phase pain in a patient with ACS treated with fasciotomy, thereby reducing the need for opioids [12]. Continuous infusion of fentanyl at 2.88-16.08 μg/kg per day for several days has often been used safely without any serious side effects such as respiratory depression [17,18]. The appearance of serious adverse effects such as respiratory depression, which can require intubation, would hamper the continuous use of high-dose fentanyl [19], as was the case in this report. Thus, a pain management approach that could replace/complement opioids, thereby mitigating the risk of opioid-induced respiratory depression, would be extremely useful in ICUs and other clinical settings.
In fact, opioid-induced respiratory depression represents the major morbidity associated with opioid abuse [20]. Considering the adverse effects of opioids and the serious negative social impact of widespread opioid addiction originating from the misuse of prescription drugs [21], alternative means of pain control that can reduce opioid usage are of great clinical importance. VR is a promising non-pharmacological means to replace and/or complement opioids [12]. It has been shown in healthy individuals subjected to thermal stimulation that VR confers analgesic effects additive to those of opioid treatment [22]. Consistent with these findings, our own case of ACS showed that pain relieved by VR resulted in reduced opioid requirements, which alleviated our patient's opioid-induced respiratory depression. Further investigations will be needed to test how well VR can replace/complement opioid analgesics in acute and chronic pain conditions in various diseases. The major mechanism by which the VR program described in our case reduced pain was distraction, which is designed to dilute a patient's attention to pain by supplanting it with an immersive VR environment, thereby modulating the patient's pain perception [12]. Relaxation, another mechanism closely related to distraction, was also employed in our VR analgesic regimen. As pain perception can be influenced by a patient's affect (a psychological term describing the experience of positive emotion), shifting the distressing circumstances of being in a wounded state in a hospital room toward the much more enjoyable circumstances of a pleasant VR environment gives rise to a positive affect, which alleviates pain. To optimize the effects of distraction and relaxation, proper selection of the VR content is critically important. In fact, although we prescreened the 20+ available VR programs based on our patient's preference for the sea, the "Dream Beach" program proved effective, while the "Sea Hospital" program, which simulates the experience of being at a pool with seals, was not effective. Although not applicable to the present case, focus-shifting and skill-building represent two additional advanced mechanisms of VR analgesia [12]. As is often observed in gaming-type VR programs such as "Bear Blast", which involves a shooting game targeting bears with cannon balls, focus-shifting potently shifts one's attention to VR objects and requires the user's focused interaction with a VR environment. It has been shown in patients with burns that VR analgesics based on focus-shifting mechanisms are more effective at alleviating pain than mere passive distraction [23]. Skill-building aims to foster a patient's capacity to achieve a certain mental state, such as mindfulness meditation, in order to control their mental and physical responses to painful conditions [24,25]. As skill-building requires more active engagement than simple passive distraction [12], it might prove difficult for some patients in the ICU. Skill-building VR is expected to be effective for the management of chronic pain [12]. How long the analgesic effects of VR can last remains an important and unresolved question. It has been shown that the preoperative administration of immersive VR experiences made pediatric patients more resilient to postoperative pain [26]. It is possible that VR analgesia, under certain settings, may give rise to a sustained modulation of pain perception.
In our case, our patient felt significantly less pain not only during but also for some time after administration of the VR analgesic, suggesting that the effects could last for hours or days. A possible explanation for the long-lasting analgesic effects of VR in our case might be that a VR-induced positive shift in our patient's affect helped modulate pain perception in a lasting manner, continuing even after the VR session had ended [12]. A potential limitation of VR analgesics is the cost of introducing such a platform (including software and hardware) to clinics. Although the initial expenses might be high, one recent economic analysis using computer modeling estimated that, overall, VR analgesic therapy would be cost-saving whenever it reduced the length of hospitalization [27]. Real-world economic analyses are needed to carefully assess the economic benefits of VR analgesic therapies. In addition, potential adverse effects, if any, might limit the utility of VR analgesia in clinics. Several clinical studies have occasionally, though quite infrequently, reported minor incidents such as nausea, thereby supporting the overall safety profile of VR's clinical applications [11]. As is sometimes seen in recreational VR users, motion sickness, which can induce nausea, is a major factor potentially limiting the utility of VR analgesia [28]. Investigations into the human and machine factors affecting susceptibility to nausea, as well as the development of novel technologies mitigating such side effects, are currently underway.
2020-04-15T14:32:00.160Z
2020-04-14T00:00:00.000
{ "year": 2020, "sha1": "d42dcdc60aa4213758d7b49a063c1803577196a0", "oa_license": "CCBY", "oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/s13256-020-02370-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d42dcdc60aa4213758d7b49a063c1803577196a0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247579813
pes2o/s2orc
v3-fos-license
Global Sound Field Reconstruction in the Room Environment Based on Inverse Wave-Based Simulation

An inverse wave modeling-based method is proposed for globally reconstructing the sound field in room environments. The method builds a wave model of the sound field as prior knowledge to support the reconstruction under strong reverberation. In this method, the whole space is divided into a set of subdomains. Based on the theory of discretization-based numerical simulation, a wave model that can describe the transfer characteristic between any subdomain and the source is built. Supported by this model, the sound source is recovered based on spatial sound pressure sampling, and the global sound field reconstruction can then be accomplished in the reverberant environment. In particular, a shape function with the property of sparsity is constructed in building the wave model. The intensity of the point source is then represented by a sparse vector over the subdomains, so sparse methods can be used to recover this vector, which reduces the sampling burden in the space. Numerical verifications are performed to evaluate the performance of the proposed method. They demonstrate that the proposed method is capable of obtaining accurate reconstructions in a strongly reverberant environment. They also show that the method is applicable to problems with complicated excitations in the low-frequency range. Introduction Sound field reconstruction is a key technique in many engineering applications, such as cabin sound source identification, spatial noise reduction, and acoustic imaging [1]. This topic has attracted sustained attention in the areas of acoustics and signal processing, and new methods have been constantly proposed in order to obtain more precise and robust reconstructions. A basic assumption in most classical sound field reconstruction methods, for example, near-field acoustic holography (NAH) [2], is that the sound propagates in free space or that the enclosed space is large enough to be considered a free one. This is a key problem that most methods face in the room environment, where reverberation caused by wall reflections influences sound propagation. In recent years, many efforts have been dedicated to indoor sound field reconstruction. A regular way to realize reconstruction in the reverberant environment is to improve classical NAH. NAH is one of the most representative methods for sound field reconstruction. It recovers the source information by sampling the sound data on a hologram surface and then predicts the sound field on the predicting surface. Based on various supporting techniques such as the equivalent source [3][4][5][6], Helmholtz equation least squares [7][8][9], and inverse boundary element methods [10][11][12], NAH has become increasingly popular in various fields. To enhance its performance in the room environment, NAH based on finite element analysis was proposed in recent years [13]. This method effectively avoids the side influence of reverberation on sampling the pressure information, but it mainly focuses on reconstructing the particle velocity on the boundary rather than the interior sound field. For pass-by noise contribution analysis in a vehicle, a frequency-averaged l1-norm regularization technique based on near-field sampling was proposed [14]. Similar to other NAH-based methods, this method also focuses on reconstructing the boundary vibration rather than the sound field.
Overall, from the applicability perspective, NAH-based methods are more commonly used for reconstructing the exterior sound field or the interior vibration on the boundary. Methods based on acoustic function expansion are another type of frequently used method for indoor reconstruction. In these methods, the sound pressure in a room is represented in some expansion form, expressed as a sum of products of a set of basis functions and their corresponding coefficients. By solving for the coefficients through inverse operations on sampled signals, the sound field at other spatial positions can be calculated from the expression of the sound pressure. The spatial modal expansion presented by Wu was used early on for sound pressure reconstruction in a closed cavity [15]. By solving the Helmholtz equation with the extended Helmholtz equation least squares (HELS) method, the sound pressure can be correctly reconstructed. However, because the acoustic modes involved in the calculation were obtained analytically for regular-shaped rooms, the method was not suitable for irregular-shaped rooms. Nevertheless, due to its good applicability to reverberation, the expansion type of method has attracted much attention in recent years [16][17][18]. According to different expansion forms, these methods can be further categorized into different types, such as spherical wave decomposition methods [19][20][21] and plane wave decomposition methods [22][23][24][25]. However, although the expansion type of method has achieved huge developments over the past decade, it still lacks good performance on global reconstruction under reverberation. These methods are often able to provide correct reconstructions in regions near the microphone array, but their accuracy degrades considerably in regions far from the array. This is because the expansions of sound fields in these methods usually cannot precisely describe the room's boundary effect. Under this condition, the expansion type of method requires more sample points distributed throughout the room to produce a global reconstruction. Under the classical method framework, supporting techniques have also achieved significant developments. The most representative technique is the sparse method. Whether for source recovery in NAH or for solving the coefficients of the sound pressure expression, the sparse method is a very effective way to reduce the measurement burden and improve recovery robustness. It has come to be considered a basic framework for accomplishing the recovery calculation over the past decade. The development of sparse Bayesian learning in recent years has further strengthened the wide application of the sparse method in this field [26][27][28][29][30]. In the aforementioned reconstruction methods, room knowledge is rarely involved in the calculations. In fact, however, the room itself plays an important role in forming the sound field. Its geometrical shape governs the propagation patterns of the sound waves, and the boundary impedance has an obvious influence on the attenuation of sound waves. Therefore, it is foreseeable that good reconstruction performance can be obtained if the room information is properly used as prior knowledge. This idea has recently been used in the inverse room acoustic problem. The image source method has been used to build the model in some source recovery methods.
By simulating the channel responses between the sensors and the virtual images of the source, the enclosure can be expanded into a large free space, which cancels the reverberation problem [31][32][33]. However, a basic rule in the image method is that the order of the image sources should be large enough to ensure that the enclosure can be considered as free space. Thus, the number of image sources increases sharply as the number of walls increases, which substantially degrades computational accuracy and efficiency. Wave-based theory, such as the finite element method and the boundary element method, is another commonly used modeling approach in acoustics. It divides the enclosure into smaller and simpler parts and yields responses by solving the system equation derived from the Helmholtz equation. By constructing a localization method according to the inverse procedure of wave modeling, it is potentially beneficial for giving robust results in reverberant environments. One study has reported that a method based on inverse finite element analysis is effective for sound source localization in a strongly reverberant environment [34]. However, classical finite element analysis realizes the simulation by assembling local system matrices into global system matrices. As a result, the recovered source parameter has no sparsity, so a large number of samples are needed to weaken the influence of the underdetermined problem. Inspired by this model-driven thought, an inverse wave modeling-based method is proposed in this study to globally and economically reconstruct the sound field in a complicated-shaped room environment. In this method, a wave-based simulation model is first constructed under the classical wave simulation framework [35]. A global and spatially sparse type of shape function is developed in this step and used to construct the system matrices, which implicitly contain the room's geometric information. Under the wave model's constraint, the sound source is sparsely recovered and the wave model's nodal pressure is obtained. The sound field over extended regions inside the room can then be estimated. The experiments validated that this method can provide correct global sound field reconstructions in complicated-shaped rooms. The study is organized as follows. In Section 2, a theoretical derivation of the proposed method is described. In Section 3, numerical simulations are conducted to preliminarily verify the proposed method's accuracy, and the proposed method is compared with reference methods. The factors influencing the proposed method's performance are also evaluated. A conclusion is provided in Section 4.

Overview of the Method. Sound field reconstruction is a typical kind of inverse problem in acoustics. The sound source is usually unknown in this problem, so traditional numerical simulation methods, such as the finite element method, the ray-tracing method, and statistical energy analysis, cannot be directly applied to predict the sound field. To realize the reconstruction, spatial sound pressure sampling should first be performed. Based on a spatial sound field conversion algorithm, the sound source or the expansion coefficients are recovered, which is the inverse procedure compared to the simulation. Then, the spatial sound field can be predicted. This procedure can be generally expressed by

y = Φs, (1)

where y is the sampling information, s is the source information or expansion coefficients to be determined, and Φ is the sensing matrix.
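To make the structure of equation (1) concrete, the following minimal Python sketch (not from the paper; the random sensing matrix, the sizes, and the noise level are illustrative assumptions) simulates noisy samples y from a known source vector, recovers s by Tikhonov-regularized least squares, and predicts the field at an unsampled position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (216 nodes / 28 microphones echo the paper's later example).
n_nodes, n_mics = 216, 28
Phi = rng.standard_normal((n_mics, n_nodes)) + 1j * rng.standard_normal((n_mics, n_nodes))

# A single point-like source on an arbitrary node.
s_true = np.zeros(n_nodes, dtype=complex)
s_true[50] = 1.0 + 0.5j

# Forward problem: sample y = Phi s (+ measurement noise).
y = Phi @ s_true + 0.01 * (rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics))

# Inverse problem: Tikhonov-regularized least squares, s = (Phi^H Phi + lam I)^-1 Phi^H y.
lam = 1e-2
s_hat = np.linalg.solve(Phi.conj().T @ Phi + lam * np.eye(n_nodes), Phi.conj().T @ y)

# Prediction: the field at an unsampled position is the transfer row for that
# position times s_hat (here another random stand-in row).
Phi_r = rng.standard_normal(n_nodes) + 1j * rng.standard_normal(n_nodes)
print(abs(Phi_r @ s_hat))
```

In the actual method, Φ is not random but is built from the wave model described next, and the recovery uses sparse (l1-regularized) inversion rather than Tikhonov regularization.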
By solving for s, the reconstruction of the sound field at other spatial positions can easily be realized. In this procedure, the sensing matrix Φ is the key factor for the reconstruction. To be available for a strongly reverberant environment, it is effective to involve the room information in constructing Φ. This is achieved by constructing a wave model in this study. The room environment is first divided into discrete subspaces as illustrated in Figure 1. By converting the Helmholtz equation into an integral equation spread over all subspaces, the influence of the room on forming the sound field is exploited in the model, and relations between any two subspaces are built. Thus, the transfer functions between the sampling points and the source can be combined into the matrix Φ, and the recovery of the source under a strongly reverberant environment can be accurately achieved.

Sound Field Modeling in the Room Environment. A steady-state sound field simulation problem in a room is considered as shown in Figure 1. A closed boundary Γ surrounds a fluid domain Ω, which is characterized by its speed of sound c₀ and ambient fluid density ρ₀. The fluid domain is excited at a circular frequency ω by an acoustic point source with a prescribed volume velocity q located at position s. It is assumed that the system is linear, the fluid is inviscid, and the process is adiabatic. The steady-state acoustic pressure p in the problem domain is governed by the wave equation as follows:

∇²p − (1/c₀²) ∂²p/∂t² = −ρ₀ ∂q/∂t, (2)

where ∇² is the Laplacian operator. To solve the sound field in the frequency domain, the Helmholtz equation can be derived from equation (2) as follows:

∇²p_ω + (ω/c₀)² p_ω = −jωρ₀ q_ω, (3)

where ω is the circular frequency, j is the imaginary unit, and p_ω and q_ω are the sound pressure and the sound source intensity, respectively, in the frequency domain. As the sound waves incident on the surface of the room are partially absorbed, the boundary condition can be described by

∂p_ω/∂n = −(jωρ₀/Z_s) p_ω, (4)

where Z_s is the specific acoustic impedance of the surface, and ∂p_ω/∂n expresses the pressure derivative along the boundary's normal direction n. Equation (3) can be analytically solved for a room with a regular boundary, while in an irregularly shaped room this equation usually cannot be directly solved. Under this condition, numerical analysis methods such as the finite element method (FEM) and the element-free Galerkin method are usually employed to provide approximate solutions. In these methods, to solve equation (3) under complicated boundary conditions, the space Ω must be divided into discrete forms, for example, the elements or nodes shown in Figure 1. Based on these predefined elements and nodes, the sound pressure at any position, for example r, can be expressed by the sum of a set of linear functions:

p_ω(r) = N_rᵀ p̂, (5)

where n is the total number of nodes distributed in the problem domain Ω, p̂ ∈ Cⁿˣ¹ is the vector of nodal sound pressures on the predefined nodes, and N_r = [N_r1, N_r2, ..., N_rn]ᵀ is the vector of interpolating functions indicating how much influence each node has on position r. Since N_r can be constructed using only the spatial geometric information, it is usually called the shape function [36]. Finally, by expressing the global sound pressure using the predefined nodes, a system equation can be obtained as follows:

(K + jωC − ω²M) p̂ = F, (6)

where K, M, and C are system matrices, which describe the essential characteristics of the room in forming its interior sound field, and F is the nodal excitation vector. The system matrices are related only to room parameters and are defined by the following:

K = ∫_Ω ∇N ∇Nᵀ dΩ,  M = (1/c₀²) ∫_Ω N Nᵀ dΩ,  C = (ρ₀/Z_s) ∫_Γ N Nᵀ dΓ. (7)

Figure 1: The initial room is divided into a set of subdomains. The wave model is numerically built based on these subdomains.
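As a deliberately simplified illustration of equations (2)-(7), the sketch below assembles and solves a one-dimensional discrete Helmholtz system with linear elements and impedance boundaries. It is not the paper's implementation: the 1D reduction, the textbook element matrices, and all parameter values are assumptions made for the example.

```python
import numpy as np

c0, rho0 = 340.0, 1.21        # speed of sound [m/s], air density [kg/m^3]
L_room, n_el = 3.0, 60        # 1D "room" length [m] and number of linear elements
h = L_room / n_el
n = n_el + 1                  # number of nodes
omega = 2 * np.pi * 100.0     # analysis frequency: 100 Hz
Zs = 10 * rho0 * c0 + 10j * rho0 * c0   # boundary impedance (value borrowed from Section 3)

# Assemble K and M from the standard linear-element matrices (M carries the 1/c0^2 factor).
K = np.zeros((n, n))
M = np.zeros((n, n))
Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Me = (h / (6.0 * c0**2)) * np.array([[2.0, 1.0], [1.0, 2.0]])
for e in range(n_el):
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += Ke
    M[idx] += Me

# The impedance boundary condition (4) contributes a damping matrix C on the end nodes.
C = np.zeros((n, n), dtype=complex)
C[0, 0] = C[-1, -1] = rho0 / Zs

# Point source of volume velocity q at the node nearest x = 1.0 m, following equation (3).
F = np.zeros(n, dtype=complex)
F[int(round(1.0 / h))] = 1j * omega * rho0 * 1e-3

# Solve the discrete system of equation (6): (K + j*omega*C - omega^2*M) p = F.
p = np.linalg.solve(K + 1j * omega * C - omega**2 * M, F)
print(np.abs(p).max())
```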
The derivation shows that the differential equation in equation (2) has been converted into an integral equation in which the relationship between the room and its interior sound field is built. By solving equation (6), the sound field can be globally calculated even in a highly reverberant environment. The global sound field reconstruction method proposed in this study was inspired by this theory.

Global Sound Field Reconstruction. In equation (6), the governing pattern of the space Ω on its interior sound field has been modeled. Based on this model, the transfer function between any two positions in the space can be calculated. This demonstrates that, under the constraint of the wave model, the global reconstruction of the sound field can be realized even if the samples are obtained entirely from the reverberant field. This constitutes the theoretical foundation of the reconstruction method in this study. The first step in the sound field reconstruction is the recovery of the vector F. In the same space illustrated in Figure 1, it is assumed that a sound pressure sampling point with a known position r has been arranged inside the room and that the sampled sound pressure is p_r. Then, according to equation (5), the sampled pressure can be related to the nodal sound pressures on the predefined nodes in the following form:

p_r = N_rᵀ p̂. (8)

By substituting equation (8) into (6), the system equation can be converted into the following:

p_r = N_rᵀ (K + jωC − ω²M)⁻¹ F. (9)

This equation builds a connection between the sampled signal and the wave model. By extending the sound signal sample from a single position to a number l of known positions r = [r₁, r₂, ..., r_l], the sampled sound pressures p_r = [p_r1, p_r2, ..., p_rl]ᵀ ∈ Cˡˣ¹ can be expressed by the following:

p_r = N_r (K + jωC − ω²M)⁻¹ F = Φx, (10)

where N_r = [N_r1, N_r2, ..., N_rl]ᵀ ∈ Cˡˣⁿ is the matrix consisting of the shape functions related to the sampling positions, and x = N_s q_ω ∈ Cⁿˣ¹ is the sound source parameter to be determined (the nodal excitation F is proportional to x). Solving equation (10) gives the recovered vector F, and by substituting F into equation (6), the global sound field can finally be reconstructed.

Construction of Global and Sparse Types of Shape Functions. The expressions of the system matrices in equation (7) demonstrate that the shape function N is a basic parameter for constructing the system matrices. It describes the contributions of spatial nodes to a target point by interpolation. It can also be used to distribute the original source energy to the spatial nodes, so the recovery of the original source in equation (10) is equivalent to the recovery of the shape function of the sound source. Therefore, it is very important to construct a global type of shape function for accurate sound field reconstruction. The moving least squares (MLS) method [37,38] was used to construct the shape function in this study. In MLS, the shape function is constructed from the entire set of spatial nodes, which establishes the foundation of global reconstruction. In addition, by constraining the effective values using a compactly supported domain, the shape function has sparse properties, which allows compressive sensing theory to be used to achieve high recovery accuracy from a limited number of sample points.

As shown in Figure 2, it is assumed that the space Ω has been discretized and expressed by n nodes s_Ω = [s_Ω1, s_Ω2, ..., s_Ωn]ᵀ and that the exact nodal field values at all nodes are known as u_Ω = [u_Ω1, u_Ω2, ..., u_Ωn]ᵀ. We assume that a target position is located at a random position r in the space. Since the exact field value is usually difficult to obtain directly, its approximating value is defined by

u^h(r, r̂) = Σ_{i=1}^{g} b_i(r̂) a_i(r) = bᵀ(r̂) a(r), (11)

where u^h(r, r̂) is the interpolant of u(r) defined at a neighborhood position r̂ of the target position r, b(r̂) is a set of complete basis functions at r̂, a(r) is the vector of unknown coefficients related to the target position r, and g is the number of basis units. Several types of functions can be used as the basis for MLS, such as monomial, trigonometric, and wavelet functions. In this study, the monomial function, which in three-dimensional problems is defined as

b(r̂) = [1, x̂, ŷ, ẑ]ᵀ, (12)

is taken as the complete basis, where (x̂, ŷ, ẑ) are the space coordinates of position r̂. To obtain the unknown coefficients a(r), by considering the predefined nodes s_Ω as neighborhoods of the target position r, a weighted discrete L2 norm over the n nodes is constructed as follows:

J = Σ_{i=1}^{n} w(d_i) [bᵀ(s_Ωi) a(r) − u_Ωi]², (13)

where n is the number of nodes, s_Ωi is the coordinate of the ith predefined node, u_Ωi is the nodal value on the ith discrete node, and w(d_i) is a weight function whose value is constrained by the distance d_i = |r − s_Ωi| between the ith node and the target position. To construct a sparse shape function, a compact-support technique is used to constrain the value of w in equation (13), as shown in Figure 2. The weight function is nonzero only in a compactly supported domain, which is a sphere in this study. Under this constraint, only a small number of nodes really have an influence on the target position, so a shape function with sufficient sparsity can be constructed. In this study, the quartic function was chosen as the weight function:

w(d) = 1 − 6(d/d_r)² + 8(d/d_r)³ − 3(d/d_r)⁴ for d ≤ d_r, and w(d) = 0 for d > d_r, (14)

where d_r denotes the radius of the compactly supported domain. Minimizing J with respect to a(r) provides

A(r) a(r) = B(r) u_Ω. (15)

Then, the vector a(r) can be obtained by solving equation (15):

a(r) = A⁻¹(r) B(r) u_Ω, (16)

where A and B are matrices defined as follows:

A(r) = Σ_{i=1}^{n} w(d_i) b(s_Ωi) bᵀ(s_Ωi),  B(r) = [w(d₁)b(s_Ω1), w(d₂)b(s_Ω2), ..., w(d_n)b(s_Ωn)]. (17)

Finally, substituting equation (16) into (11) and setting r̂ = r, the field value can be further expressed as follows [39]:

u^h(r) = bᵀ(r) A⁻¹(r) B(r) u_Ω = N_rᵀ u_Ω, (18)

where N_r is the shape function constructed at position r. The derivation demonstrates that the field value at any position in the space can be expressed by a set of predefined nodes and their nodal values. The shape function can be considered a governing factor that determines how much influence each predefined node has on the target position.
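The MLS construction of equations (11)-(18) condenses into a few lines of Python. The sketch below is an illustrative re-implementation under the assumptions stated above (linear monomial basis, spherical support of radius d_r); the node grid, support radius, and test field are invented for the demonstration. The final print statements check the two properties the text relies on: consistency of the interpolation and sparsity of N_r.

```python
import numpy as np

def quartic_weight(d, dr):
    """Quartic spline weight of equation (14): compact support of radius dr."""
    s = d / dr
    return np.where(d <= dr, 1 - 6 * s**2 + 8 * s**3 - 3 * s**4, 0.0)

def mls_shape_function(r, nodes, dr):
    """MLS shape-function row N_r over all nodes (equations (11)-(18)).

    Uses the linear monomial basis b = [1, x, y, z]^T; nodes outside the
    support sphere of radius dr contribute exactly zero."""
    d = np.linalg.norm(nodes - r, axis=1)
    w = quartic_weight(d, dr)                                  # (n,)
    Bb = np.column_stack([np.ones(len(nodes)), nodes[:, 0], nodes[:, 1], nodes[:, 2]])
    A = (Bb * w[:, None]).T @ Bb                               # A = sum_i w_i b_i b_i^T
    Bmat = (Bb * w[:, None]).T                                 # columns w_i b(s_i)
    b_r = np.array([1.0, r[0], r[1], r[2]])
    return b_r @ np.linalg.solve(A, Bmat)                      # N_r = b(r)^T A^{-1} B

# Toy check on a 6 x 6 x 6 node grid filling a 3 m cube.
g = np.linspace(0.0, 3.0, 6)
nodes = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
r = np.array([1.2, 0.7, 2.1])
N_r = mls_shape_function(r, nodes, dr=1.5)

u_nodes = 2.0 + 0.5 * nodes[:, 0] - nodes[:, 2]                  # a linear test field
print(N_r @ u_nodes, 2.0 + 0.5 * r[0] - r[2])                    # consistency: values match
print(np.count_nonzero(N_r), "of", len(N_r), "entries nonzero")  # sparsity from compact support
```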
Compressive Sensing with l1-Norm Optimization. To ensure that the sound source data can be recovered correctly and economically when solving equation (10), compressive sensing (CS) theory, which has been widely accepted as an efficient recovery tool in acoustic problems, is used. CS theory suggests that if a signal is sparse and the measurement matrix is highly incoherent with the dictionary, the signal can be reconstructed from a limited number of measurements by solving an underdetermined inverse problem. In equation (10), x = N_s q_ω describes the distribution of the discretized excitation. Based on the compactly supported domain constraint used in MLS for constructing the shape function, x has very few nonzero elements. Therefore, x can be considered a sparse vector, which benefits the use of CS theory to effectively reduce the sampling requirement. If measurement noise is present, the inverse problem in equation (10) can be solved as a LASSO-type problem as follows [40]:

x̂ = arg min_{x ∈ Cⁿˣ¹} ‖x‖₁ subject to ‖p_r − Φx‖₂ ≤ ξ, (19)

where the operator ‖·‖_n indicates the l_n norm. The parameter ξ is an estimate of the upper bound of the noise present during the sensing process.
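One simple way to solve the penalized form of equation (19) is the iterative shrinkage-thresholding algorithm (ISTA). The complex-valued sketch below is one possible implementation, not the solver used in the paper; the step size, penalty weight, and toy problem sizes are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Complex soft-thresholding: shrink magnitudes by t, keep phases."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-30)) * z, 0.0)

def ista(Phi, y, lam=1e-2, n_iter=500):
    """Minimize 0.5*||y - Phi x||_2^2 + lam*||x||_1, a penalized form of equation (19)."""
    L_step = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = Phi.conj().T @ (Phi @ x - y)
        x = soft_threshold(x - grad / L_step, lam / L_step)
    return x

# Toy demo: 28 "microphones", 216 "nodes", a 2-sparse complex source vector.
rng = np.random.default_rng(1)
Phi = (rng.standard_normal((28, 216)) + 1j * rng.standard_normal((28, 216))) / np.sqrt(28)
x_true = np.zeros(216, dtype=complex)
x_true[[40, 130]] = [1.0 + 1.0j, -0.7j]
y = Phi @ x_true + 0.001 * (rng.standard_normal(28) + 1j * rng.standard_normal(28))

x_hat = ista(Phi, y, lam=5e-3)
print(np.flatnonzero(np.abs(x_hat) > 0.1))      # expect the support [40, 130]
```

In practice the penalty weight plays the role of the noise bound ξ: larger values tolerate more noise at the cost of shrinking the recovered source amplitudes.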
Numerical Verification. The accuracy of the proposed method was preliminarily explored by reconstructing the sound field in an enclosed room. The sizes of the room and the coordinate system are shown in Figure 3. The specific acoustic impedance of the room's inner surfaces is set to 10ρ₀c₀ + 10ρ₀c₀j kg/(m²·s), where ρ₀ = 1.21 kg/m³ and c₀ = 340 m/s. To construct the wave model, the room is divided into 6 × 6 × 6 = 216 nodes as shown in Figure 3. In this verification, the sound source is a point source located at [2.55, 0.6, 0.35] m and a virtual array consisting of 28 microphones is used to sample the sound pressure signal. This planar array is placed in the plane x = 3 m as shown in Figure 3. In this study, the sampled signals and the sound fields used as real data are simulated using LMS Virtual Lab. Reconstructed sound fields on the surfaces x = 0 m, y = 0 m, and z = 1.8 m at 50 Hz, 100 Hz, and 150 Hz are illustrated in Figure 4. To evaluate the performance of the proposed method, two reference methods were also used to calculate the same problem. Reference method 1 is the spherical wave model (SWM) [19], which realizes the reconstruction by spherical wave expansion. Reference method 2 is a method developed from an indoor localization technique [31]. The sensor arrays used in the reference methods are the same as in the proposed method. The results in Figure 4 demonstrate that the distributions and amplitudes of the sound fields from the proposed method and the real data are very close. At 50 Hz and 100 Hz, the reconstructions are nearly identical to the real sound fields. This demonstrates that the proposed method can provide accurate reconstructions under strong reverberation at low frequencies. At 150 Hz, although the sound field is more spatially complicated than at low frequencies, the proposed method can still give a distribution pattern similar to the real data, but there are also more differences between the proposed method and the real values. This indicates that the proposed method's accuracy tends to decrease as the frequency increases. In this verification, reference method 1 fails to give satisfying results. SWM has been proved valid for sound field reconstruction in free space and cylindrical cavities, and it usually performs well near the sampling points. In this verification, however, the global sound field is mainly formed by multiple propagations of sound waves, which makes it very difficult for the SWM to reconstruct the global field. Reference method 2 gives much better reconstruction results, producing similar distribution patterns at the three example frequencies, but its accuracy also tends to degrade at high frequencies. This reference method is developed from an indoor sound source localization method. Like the proposed method, it achieves the sound field reconstruction by first recovering the sound source and then predicting the sound field at other spatial positions. In the source recovery step, the method builds an acoustic model based on the image source theory and then localizes the source by sparse recovery.
The comparisons demonstrate that building the wave model is a key and effective strategy for realizing global reconstruction. To quantitatively evaluate the proposed method's accuracy, a root mean square error (RMSE) is defined as follows:

RMSE = [ sqrt( (1/n) Σ_{i=1}^{n} |p_i^reconstruct − p_i^real|² ) / sqrt( (1/n) Σ_{i=1}^{n} |p_i^real|² ) ] × 100%, (20)

where p_i^reconstruct and p_i^real denote the reconstructed and real sound pressure at the same evaluation position i, and n is the number of positions included in the error evaluation. The RMSEs of the proposed method and reference method 2 from 30 Hz to 200 Hz are calculated on the 216 predefined nodes and are illustrated in Figure 5. Since the results of reference method 1 are obviously different from the real data, its RMSE is not evaluated here. Figure 5 shows that the RMSEs at most frequencies are less than 20%. There are only a few frequencies at which the errors are higher than 20%, and they remain below 25%. The reference method has similarly low errors at low frequencies, but its errors at mid and high frequencies are significantly higher than those of the proposed method. This is because the image source theory used in the reference method requires extending the enclosure into a large free space, which makes the source vector to be recovered much larger than in the proposed method. With the same number of sensors, the proposed method therefore has better recovery performance than the reference method. Figure 5 also reveals that the errors of the proposed method tend to increase as the frequency increases. Compared to low frequencies, the sound field in an enclosed room is usually more sensitive to spatial variation at high frequencies; even small spatial deviations of sound pressure lead to high errors. Besides this reason, the dispersion error that emerges as the wavenumber increases [41] also causes high errors. In the wave simulation, the system matrices K, M, and C essentially determine the numerical simulation's accuracy. Usually, these system matrices can correctly describe the features of sound fields in low-frequency ranges. However, as the wavenumber increases, these system matrices are no longer suitable for the problem due to dispersion errors that appear as phase shifts along the frequency axis. Under these conditions, decreasing the sizes of the elements and refining the discretization are necessary. A so-called "rule of thumb" to ensure the simulation's precision is expressed as follows:

k h ≤ 1, (21)

where k is the wavenumber and h is the mesh size. Based on this equation, it can be generally concluded that there must be at least six meshes within a wavelength corresponding to the upper calculation frequency (since λ/h = 2π/(kh) ≥ 2π ≈ 6.3). According to this rule, a finer discretization of the space can improve the proposed method's performance in higher frequency ranges. Moreover, from the reconstruction research perspective, sound field reconstruction focuses more on low-frequency ranges than on high-frequency ranges. This is because the number of acoustic modes becomes quite dense in high-frequency ranges, making the sound field distribution over the space too complicated to reconstruct. For the space in this case, the number of acoustic modes within each integer frequency interval starts to become quite large from 200 Hz, which means 200 Hz can already be considered a relatively high frequency. Under these conditions, the sound field usually needs to be reconstructed statistically rather than as an exact frequency response.
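Both quantitative tools introduced above, the RMSE of equation (20) and the kh ≤ 1 rule of equation (21), are straightforward to compute; the short sketch below (with assumed inputs) illustrates them. For a node spacing of 0.6 m, the rule places the upper frequency near 90 Hz, which is consistent with the accuracy degradation observed toward 200 Hz.

```python
import numpy as np

def rmse_percent(p_rec, p_real):
    """Relative RMS error between reconstructed and real pressures, in percent (eq. (20))."""
    num = np.sqrt(np.mean(np.abs(p_rec - p_real) ** 2))
    den = np.sqrt(np.mean(np.abs(p_real) ** 2))
    return 100.0 * num / den

def max_frequency_for_mesh(h, c0=340.0):
    """Upper frequency satisfying k*h <= 1 (eq. (21)), i.e. >= 2*pi elements per wavelength."""
    return c0 / (2 * np.pi * h)

print(f"f_max for h = 0.6 m: {max_frequency_for_mesh(0.6):.1f} Hz")   # ~90 Hz

p_real = np.array([1.0 + 0.2j, 0.5 - 0.1j, -0.3 + 0.4j])
p_rec = 1.1 * p_real                                   # a uniform 10% amplitude error
print(f"RMSE = {rmse_percent(p_rec, p_real):.1f} %")   # 10.0 %
```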
Numerical Analysis of Performance-Influencing Factors. The factors influencing the method's performance are further numerically analyzed. All analyses in this section are conducted on the cubic room shown in Figure 3.

Signal Sampling. Signal sampling is one of the key steps in reconstructing sound fields. It provides the input data for the reconstruction and directly determines the reconstruction quality. Evaluations of the number and position of the sensors were performed, and the RMSEs are shown in Figure 6. The first test evaluates the influence of the number of sensors on the performance. In this test, three numbers of sensors were tried, with the sensors in the different conditions placed in the same area as shown in Figure 6(a); the results are shown in Figure 6(c). In the second test, three sensor arrays with the same number of sensors but different sizes were used to evaluate the influence of the sampling area on the performance. The three arrays are shown in Figure 6(b), and the result for this test is shown in Figure 6(d). In these tests, the sensors were all placed in the plane x = 3 m. The results demonstrate that the size of the sampling area is an important factor influencing the performance of the proposed method: a large sampling area is beneficial for obtaining precise reconstruction. The number of sensors is also important for achieving good reconstruction, but based on sparse recovery, even a small number of sensors can give satisfactory accuracy if they are spread over a large area. Figure 6(d) shows that array 3 leads to a very large error at 140 Hz, so the sound field distribution at 140 Hz is illustrated in Figures 6(a) and 6(b) as backgrounds to show the sensors' positions. The sound field varies greatly at different spatial positions, and a larger array can ensure that the sound pressures acquired by the sensors differ sufficiently.

Signal-to-Noise Ratio (SNR). The performance of the proposed method under different SNRs is evaluated. Gaussian white noise with SNRs of 30 dB, 40 dB, and 50 dB is added to the pressure signals of each sensor to simulate actual measurement. The calculations are conducted with the same sound source and microphone array configuration described in Section 3. The RMSEs in this case are illustrated in Figure 7. Figure 7 shows that the errors for SNR = 50 dB and 40 dB are very low at all frequencies. When the SNR is 30 dB, the errors remain at low levels at frequencies below 100 Hz. This demonstrates that the proposed method can work stably even when the signals have low SNRs at low frequencies. However, its performance at high frequencies is much more sensitive to low SNR: the RMSEs in two narrow bands exceed 40%, but at most frequencies the proposed method still gives results with errors of less than 20%.
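The SNR test above can be reproduced by scaling additive white Gaussian noise against the signal power. The helper below is an illustrative sketch, not the authors' code; the test signal is invented.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Add complex white Gaussian noise so that 10*log10(Ps/Pn) equals snr_db."""
    if rng is None:
        rng = np.random.default_rng(2)
    p_signal = np.mean(np.abs(signal) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(signal.shape)
                                    + 1j * rng.standard_normal(signal.shape))
    return signal + noise

clean = np.exp(1j * np.linspace(0, 4 * np.pi, 28))   # invented unit-magnitude "mic" samples
for snr in (30, 40, 50):
    noisy = add_noise_at_snr(clean, snr)
    measured = 10 * np.log10(np.mean(np.abs(clean) ** 2)
                             / np.mean(np.abs(noisy - clean) ** 2))
    print(f"target {snr} dB -> measured {measured:.1f} dB")
```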
Complicated Excitation. The performance of the proposed method under multiple-source excitation was also evaluated, using three monopoles and the same microphone array configuration described in Section 3. Comparisons of the sound fields at exemplary frequencies are illustrated in Figure 8 and the RMSEs are shown in Figure 9. Figure 8 shows that under the excitation of multiple monopoles, the proposed method provides reconstructed results very similar to the real sound fields at 50 Hz and 100 Hz. At 150 Hz, the pattern of the sound field distribution obtained by the proposed method is still generally similar to the real data, but the differences are larger than at low frequencies. The RMSEs shown in Figure 9 demonstrate that the proposed method has larger errors under the excitation of three monopoles than under single-monopole excitation (Figure 5). Generally, the analysis demonstrates that the proposed method can be applied to sound field reconstructions at low frequencies even under complicated excitations, while the accuracy tends to decrease as the excitation complexity increases. The precision reduction under distributed excitation in the proposed method is mainly caused by the sound source recovery step in the reconstruction. As indicated in equation (10), a key step of the proposed method is that the sound source must be recovered first. In the theory of the proposed method, the sound source is assumed to consist of point sources. Owing to this assumption, the proposed method is capable of processing reconstruction problems with point sources; however, its accuracy degrades for problems with plane vibrations or other distributed excitations. To improve the performance of the proposed method for such problems, the theory of the equivalent source is a potential direction. This theory is frequently used in NAH for solving reconstruction problems with complex sources [3][4][5][6]. It employs a series of virtual point sources to replace a source with a complex boundary shape in order to express the radiated sound field. Based on this equivalent processing, the continuous and complex source can be converted into discrete and simple point sources, which are exactly the recovered source form in the proposed method. The main difference between the reconstruction problems in this study and classical ESM-based NAH is that the target reconstruction zone in this study is the interior space of the enclosed environment, while in NAH the exterior space is often the target zone. For this reason, the classical theory of the equivalent source cannot be directly used in the proposed method, but it still gives important inspiration for improving the performance on problems with complicated excitations. In future work, the equivalent source method will be considered as an important strategy to support better sound field reconstruction under complex excitations.

Conclusions. An inverse wave modeling-based method for globally reconstructing the sound field in the room environment was proposed. By building a wave-based model, the complicated propagation paths from the source to the field can be modeled, and then the sound field at any position in a room can be reconstructed under strong reverberation by inversely solving the wave-based model. Compared with traditional methods, the proposed method needs geometry information as prior knowledge to build the wave model. This means the method is best suited to a constant environment: if the problem needs to be solved in another room, or the current environment has significantly changed, a new wave model must be built. The numerical verification has proved that the proposed method is capable of reconstructing sound fields under strong reverberation. Further evaluations of influencing factors demonstrated that the proposed method performs better in low-frequency ranges. In addition, this method does not need to calculate the arrival time differences or phase differences among different microphones, so there is no need to design an array with a complicated shape, which improves the flexibility of setting the microphone positions in the space.
However, the verification also showed that sampling in which the signals have large differences remains beneficial for better reconstruction. The proposed method is sensitive to SNR in high-frequency ranges, while it is robust to low SNR in low-frequency ranges. It was also validated that the proposed method provides correct results for multiple source excitations.

Data Availability. The data that support the findings of this study are available from the first author by e-mail (wht@nwpu.edu.cn), upon reasonable request.
2022-03-21T15:14:03.642Z
2022-03-18T00:00:00.000
{ "year": 2022, "sha1": "370442753af5bcefb822a8c8c3057ad46283ca9d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/sv/2022/3137447.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "485b050112e5175dc521580bf11261daec94e0a2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
4357358
pes2o/s2orc
v3-fos-license
18F-Fluoride PET/CT tumor burden quantification predicts survival in breast cancer

Purpose In bone-metastatic breast cancer patients, there are no current imaging biomarkers to identify which patients have the worst prognosis. The purpose of our study was to investigate whether skeletal tumor burden determined by 18F-Fluoride PET/CT correlates with clinical outcomes and may help define prognosis throughout the course of the disease. Results Bone metastases were present in 49 patients. On multivariable analysis, skeletal tumor burden was significantly and independently associated with overall survival (p < 0.0001) and progression-free survival (p < 0.0001). The simple presence of bone metastases was associated with time to bone event (p = 0.0448). Materials and Methods We quantified the skeletal tumor burden on 18F-Fluoride PET/CT images of 107 female breast cancer patients (40 for primary staging and the remainder for restaging after therapy). Clinical parameters, primary tumor characteristics and skeletal tumor burden were correlated with overall survival, progression-free survival and time to bone event. The median follow-up time was 19.5 months. Conclusions 18F-Fluoride PET/CT skeletal tumor burden is a strong independent prognostic imaging biomarker in breast cancer patients. INTRODUCTION Bone metastasis is a common cause of serious morbidity in patients with breast cancer. It is associated with various debilitating skeletal-related events, which include bone fractures, hypercalcemia, nerve compression, and severe pain. The diagnosis of bone metastasis influences the patient's prognosis, reducing overall survival (OS) [1]. The early detection of bone metastases in newly diagnosed breast cancer patients is important because it changes the ideal treatment strategies [2][3][4]. Recent guidelines recommend that stage IIIA breast cancer patients undergo staging with either conventional bone scintigraphy or 18F-Fluoride PET/CT [5]. While both 18F-fluoride (PET/CT) and 99mTc-MDP (conventional bone scintigraphy) are bone-seeking tracers used to identify bone remodeling and detect areas of increased bone remodeling due to metastases [6], when comparing the two imaging modalities for staging and restaging breast cancer patients, 18F-fluoride PET/CT is clearly preferable due to its greater sensitivity, specificity and accuracy [7]. Furthermore, 18F-Fluoride PET/CT has been shown to alter the treatment plan in approximately 39% of breast cancer patients [8]. Beyond lesion detection and staging, it is feasible to quantify skeletal tumor burden using 18F-Fluoride PET/CT, and determination of skeletal tumor burden has been shown to be a strong prognostic biomarker in prostate cancer [9]. Studies have shown that calculation of the primary tumor metabolism using parameters such as total lesion glycolysis (TLG) and metabolic tumor volume (MTV) on 18F-FDG PET/CT images predicts survival in breast cancer patients at initial staging [10,11]. However, when breast cancer patients develop bone metastases, there are no means to foresee which patients will have a shorter survival time. Even though breast cancer bone metastases are 18F-FDG-avid, quantification of whole-body tumor burden with this tracer is unfortunately not practical because of the areas of normal biodistribution. Only one recent study investigated the prognostic role of 18F-Fluoride PET/CT in breast cancer patients semi-quantitatively [12]. While the authors did not find a significant correlation, the parameters that they used did not evaluate the entire extent of bone disease on 18F-Fluoride images.
To that effect, there have been no studies that calculated the entire skeletal tumor burden turnover on 18F-Fluoride PET/CT and correlated it with prognosis in breast cancer patients. The purpose of this study was to correlate skeletal tumor burden determined by 18F-Fluoride PET/CT with clinical outcomes in breast cancer patients. The patients underwent 18F-Fluoride PET/CT for the detection of bone metastases. Forty patients underwent 18F-Fluoride PET/CT for primary staging of breast cancer; the remainder underwent 18F-Fluoride PET/CT for suspicion of bone metastases prior to or after some modality of treatment. The treatment consisted of one or more of the following: chemotherapy (82 patients), radiotherapy (53 patients), surgery (57 patients) and hormone therapy (87 patients). Among the 107 patients enrolled, 49 patients (45.8%) were diagnosed with bone metastases. Analyzing only the population that underwent 18F-Fluoride PET/CT for staging, 32.5% (13 patients) were positive for bone metastasis. The tumor burden analysis of these 49 patients was undertaken and compared to the 58 patients without bone, visceral or nodal metastases. Nineteen patients (17.7%) had visceral metastases (15 patients with lung metastases and 4 patients with liver metastases) at the time of the 18F-Fluoride PET/CT examination. All patients with hepatic lesions and 12 patients with lung lesions also had bone metastases; thus, 16 patients (15%) had both bone and visceral metastases. Although all patients had undergone CT scans of the chest, abdomen and pelvis for detection of visceral metastases, 20 patients (18.7%) also underwent an 18F-FDG PET/CT study within 3 months of the 18F-Fluoride PET/CT. In these cases, the 18F-FDG PET/CT exams were also considered when evaluating for visceral metastases. 18F-Fluoride PET/CT images detected bone metastasis in 49 (45.8%) patients. The hSUV of the bone metastases for all patients (mean ± SD) was 46.7 ± 23.37 (range 12.6–96.5) and the Mean10 for all patients (mean ± SD) was 14.8 ± 5.2 (range 9.4–43.2). The mean FTV10 was 204.1 ml (range 0.5–1578 ml) and the mean TLF10 was 3395.3 (range 9.0–39410). TLF10 and FTV10 values were highly correlated (ρ = 0.95; P < 0.0001), and therefore only TLF10 was used for further analyses.
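As a schematic of how the volumetric parameters reported above can be computed, the sketch below derives FTV10 and TLF10 from a 3D skeletal SUV volume using the SUV threshold of 10. The synthetic array, the voxel size, and the exact TLF10 formula (SUVmean of supra-threshold voxels times their volume, by analogy with total lesion glycolysis) are assumptions made for illustration, not the study's published algorithm.

```python
import numpy as np

def fluoride_tumor_burden(suv, voxel_volume_ml, threshold=10.0):
    """Return (FTV10 [ml], TLF10) from a 3D SUV array of the skeleton.

    FTV10: total volume of voxels with SUV above the threshold.
    TLF10: SUVmean of those voxels times FTV10 (assumed definition,
    analogous to total lesion glycolysis)."""
    mask = suv > threshold
    if not mask.any():
        return 0.0, 0.0
    ftv10 = mask.sum() * voxel_volume_ml
    tlf10 = suv[mask].mean() * ftv10
    return ftv10, tlf10

# Synthetic volume: background bone SUV around 5 with one hot "lesion" block.
rng = np.random.default_rng(3)
suv = rng.normal(5.0, 1.0, size=(64, 64, 64)).clip(min=0)
suv[30:34, 30:34, 30:34] = 40.0                      # synthetic metastasis
ftv10, tlf10 = fluoride_tumor_burden(suv, voxel_volume_ml=0.064)  # 4 mm isotropic voxels
print(f"FTV10 = {ftv10:.1f} ml, TLF10 = {tlf10:.0f}")
```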
Patients with TLF10 > 3,700 had a significantly higher risk of death (median OS = 8.5 months), while patients with TLF10 < 3,700 had a median OS of 33.4 months (p = 0.0002; HR = 6.569; 95% CI = 2.419-17.835) (Figure 2).

TLF10 and PFS
At the end of follow-up, 32 patients (30%) progressed (eight had bone progression, four had nodal progression, 13 had visceral progression and seven had an increase in ECOG score by 2 points). Visceral metastases were located in the lungs and liver. Among these patients, 27 had bone metastasis prior to progression, one patient had a liver metastasis and the remaining four patients were disease-free. The most common site of progression of the 27 patients with known bone metastasis was visceral disease. Visceral (lung and liver) metastases occurred in 13 patients. The median PFS for patients with vs without bone metastases was 4.7 vs 12.2 months, respectively. Analyzing only the 49 patients with bone metastases at the baseline 18F-Fluoride PET/CT scan, the mean TLF10 was 2.5 times greater for patients that progressed when compared to those that did not progress (TLF10 = 4,670 vs 1,831).

DISCUSSION
Previous reports have demonstrated that skeletal tumor burden on 18F-Fluoride PET/CT, quantified by the simple method of obtaining the TLF10 (SUVmax threshold = 10), is a strong and independent prognostic biomarker in prostate cancer patients undergoing 223Ra therapy [9]. To our knowledge, there have not been prior studies describing that skeletal tumor burden on 18F-Fluoride PET/CT is an independent prognostic biomarker in breast cancer patients. Actually, the few studies conducted to identify PET parameters that predict survival in metastatic breast cancer were performed with 18F-FDG PET or PET/CT. These studies (with 18F-FDG PET/CT) have demonstrated that total lesion glycolysis bears a strong correlation to OS [13, 14].

The only other investigation evaluating the prognostic role of 18F-Fluoride PET/CT that we found was conducted by Piccardo et al. in 32 breast cancer patients [12]. Although the authors did not discover a strong and independent association of 18F-Fluoride PET/CT with OS, their study was the first to attempt to use semiquantitative parameters for this purpose. The discrepancy between their findings and ours may be due to the method of tumor burden quantification. We used the TLF10 parameter since we have conducted extensive studies with this metric. We established the ideal cut-off values to separate normal bone from lesions and proved it to be a valuable independent prognostic imaging biomarker to predict OS in prostate cancer patients [9, 15].

In the clinical setting, while it seems obvious that breast cancer patients with very low bone tumor burden will have better outcomes than those patients with high tumor burden, it is still relevant to increase awareness through a scientific approach as opposed to mere observation. We found that the median overall survival was 15.2 months for patients with bone metastasis vs 23.4 months for patients without bone metastasis. Visual analysis of the presence vs absence of bone metastases also demonstrates a significant and high likelihood of death in patients presenting with bone metastases (p = 0.007; HR = 6.461). However, in a multivariable model, visual analysis does not correlate with OS and only TLF10 can independently define which patients have the worst prognosis. We did not decide on performing 18F-Fluoride PET/CT over 18F-FDG PET/CT in these breast cancer patients.
We performed 18F-Fluoride PET/CT over conventional bone scintigraphy because of its higher sensitivity to detect bone lesions. In fact, 18.7% of these women were also submitted to 18F-FDG PET/CT scans during treatment, to evaluate response to therapy. However, the determination of whole-body tumor burden on 18F-FDG PET/CT scans using TLG and MTV parameters in metastatic breast cancer patients (especially with bone lesions) is not feasible on a daily basis. Since osteoblastic bone metastases predominate in breast cancer patients, we envisioned that the determination of skeletal tumor burden with 18F-Fluoride PET/CT might be a substitute for whole-body 18F-FDG tumor burden calculations in daily clinical practice.

Clinical, laboratory and imaging parameters are used to prognosticate patients with limited and advanced breast cancer. However, these parameters cannot be used independently. At initial staging of patients, ECOG status, primary tumor histology, serum laboratory measurements, tumor markers and conventional images have relevant prognostic value. Worse prognosis is associated with absence of hormone receptors, Her2-neu gene amplification and a high percentage of Ki-67 positive cells [16]. However, these variables (clinical, laboratory and imaging) could lose the ability to be independent prognostic biomarkers as the disease becomes advanced. For example, Piccardo et al. [12] have found that in breast cancer patients with bone metastases, the 18F-FDG PET/CT findings have a stronger, independently associated prognostic impact on OS than conventional clinical and biological prognostic factors. Likewise, we demonstrated that among all variables evaluated (such as ECOG status, pain score, treatments, presence of visceral metastases, patient age, and time of disease), only the PR status (at initial diagnosis) and the quantitative (i.e., objective) volumetric analysis (TLF10) of bone tumor burden (during the course of disease) independently separated survivors from non-survivors. The mean TLF10 of patients that were alive at the end of follow-up was four times lower than the TLF10 of the 19 patients that had died (1,562 vs 6,288). With a cutoff TLF10 value of 3,700 there was a significant difference in survival (specificity = 93.3%). Furthermore, the prognostic impact of skeletal tumor burden (TLF10) was high for both staging and restaging in patients with bone metastases. Therefore, since skeletal tumor burden calculation relates to OS and PFS (in both staging and restaging settings), it may help define future therapeutic strategies.

TLF10 was also an independent predictor of PFS in breast cancer patients, even among patients with visceral disease progression. A cutoff TLF10 value of 1,815 discriminated the patients that were more likely to progress.

Earlier studies report bone events occurring in nearly 50% of patients with breast cancer, with a median TTBE of 5.5 months [17, 18]. In our population, however, only 12 patients (11.2%) had bone events and, among these, nine of 49 (18%) had bone events due to bone metastasis; the remaining three patients (without bone metastases) developed pathological fractures because of osteoporosis during follow-up. The median TTBE in our study was 9.8 months. This discrepancy between the literature and our findings may be due to differences in the treatment of bone metastases, which nowadays includes more advanced drugs that protect bones from fractures. The TLF10 value (i.e., the determination of skeletal tumor burden) was not an indicator of TTBE.
However, the presence of bone metastases increased the risk of developing a bone event fourfold. One limitation of our study was its retrospective nature, with patients undergoing multiple treatment regimens. However, because of the large sample size (107 patients) we were able to evaluate the bone burden of breast cancer patients with a variety of lesions, ranging from none to a near super-scan.

Study design
The local Institutional Review Board approved this retrospective study (#46/2016) of patients with breast cancer that underwent whole-body 18F-Fluoride PET/CT imaging for investigation of bone metastases.

Patient population
Inclusion criteria consisted of histologically confirmed breast cancer patients, above 18 years of age, that underwent 18F-Fluoride PET/CT. All patients were followed up for at least 12 months or until death. We excluded patients whose imaging study could not be retrieved and also patients lost to follow-up after the collection of the 18F-Fluoride PET/CT data.

18F-Fluoride PET/CT
All patients underwent a true whole-body PET/CT acquisition on two PET/CT scanners (Siemens Biograph True-Point PET/CT 64 or Siemens Biograph PET/CT 16, Siemens Healthcare, USA) 45 minutes after intravenous injection of 3.7 MBq/kg of 18F-sodium fluoride. CT parameters included 5 mm axial reconstruction and 120 kV or dose care kV tube voltage. PET images were acquired in 3-dimensional mode using 90 s/bed position.

18F-Fluoride PET/CT Interpretation and Quantification
All 18F-Fluoride PET/CT images were blindly interpreted by three Nuclear Medicine physicians with over 12 years of experience with PET/CT images. All 18F-Fluoride PET/CT quantitative analyses were performed by two nuclear medicine physicians with 5 and 12 years of experience with PET/CT images, respectively. Quantitative interpretation was performed on all 18F-Fluoride PET/CT images to determine whole-body skeletal tumor burden. 18F-Fluoride PET/CT images were quantified using METAVOL® software [19]. To calculate the skeletal tumor burden, a threshold of SUVmax = 10 was used to exclude normal bone; the details of the quantification are described in our previous study [15]. After processing, the following parameters were automatically provided by the software:
- hSUV: the highest SUVmax among all the metastases;
- Mean10: the mean SUVmax of all metastases;
- FTV10: the total volume of fluoride-avid bone metastases (in milliliters); this calculation is equivalent to the calculation of metabolic tumor volume (MTV) on 18F-FDG PET/CT images;
- TLF10: the skeletal tumor burden (VOI10 x Mean10), i.e., the total activity of 18F-Fluoride-avid metastases; this calculation is comparable to the calculation of total lesion glycolysis (TLG) on 18F-FDG PET/CT images.

Statistical analyses
The following information for each patient was correlated with the skeletal tumor burden parameters: age, years of cancer, initial clinical stage, presence of bone metastases, presence of visceral metastases, primary tumor characteristics (Ki-67, hormone receptor status, HER-2, histology), previous treatments and clinical evaluation using the performance status scale (ECOG) [20] and pain scale [21]. We did not collect CA15-3 and CA27.29 values at diagnosis or to monitor recurrence because this is not recommended by the American Society of Clinical Oncology [22]. Visceral metastases were evaluated by conventional CT scans of the chest, abdomen and pelvis or by the PET/CT scans (whether with 18F-FDG or 18F-sodium fluoride).
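To make the quantification just described concrete, the following is a minimal, voxel-level sketch of the threshold-based burden metrics. It is illustrative only: the SUV array, the voxel volume, and the per-voxel application of the SUVmax = 10 threshold are hypothetical simplifications, since METAVOL operates on delineated lesion VOIs rather than raw voxels.

```python
import numpy as np

def skeletal_tumor_burden(suv, voxel_volume_ml, threshold=10.0):
    """Voxel-level sketch of the hSUV, Mean10, FTV10 and TLF10 metrics.

    `suv` is a hypothetical 3-D array of SUV values from a whole-body
    18F-Fluoride PET acquisition; the real software works on delineated
    lesion VOIs, so this per-voxel version is only an approximation.
    """
    mask = suv > threshold                    # exclude normal bone (SUV <= 10)
    if not mask.any():
        return {"hSUV": 0.0, "Mean10": 0.0, "FTV10": 0.0, "TLF10": 0.0}
    ftv10 = mask.sum() * voxel_volume_ml      # fluoride-avid tumor volume (ml)
    mean10 = suv[mask].mean()                 # mean SUV of supra-threshold voxels
    return {
        "hSUV": float(suv[mask].max()),       # highest SUV among the metastases
        "Mean10": float(mean10),
        "FTV10": float(ftv10),
        "TLF10": float(ftv10 * mean10),       # skeletal tumor burden
    }

# Toy example: a synthetic grid with two "hot" voxels of 0.08 ml each.
suv = np.zeros((50, 50, 50))
suv[10, 10, 10], suv[10, 10, 11] = 25.0, 18.0
print(skeletal_tumor_burden(suv, voxel_volume_ml=0.08))
```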
The primary end-point was overall survival (OS), established from the date of the 18F-Fluoride PET/CT until the date of death from any cause, censoring data at the last follow-up of living patients. Secondary end-points were progression-free survival (PFS) and time to bone event (TTBE). PFS was defined as the length of time from the 18F-Fluoride PET/CT image until the date of objective tumor progression or death from any cause. Objective tumor progression was defined as a new lesion (whether bone, soft tissue or visceral) or a lesion that increased in size (RECIST criteria), leading to a change in current therapy or initiation of another therapy. TTBE was defined from the date of the 18F-Fluoride PET/CT until the date of a bone event (surgical intervention, spinal cord compression, pathologic fracture, bone pain or rapid lesion progression requiring immediate intervention).

Numerical variables were described as mean, standard deviation, minimum, maximum and median values, and categorical variables were described with absolute and percentage frequencies. To evaluate the relationship between the variables and the outcomes as predictors of survival, Cox proportional hazards regression was applied. A ROC curve was used to determine the cutoff points for TLF10, and Kaplan-Meier survival curves were used to demonstrate survival time distributions. The level of significance was set at 5%.

CONCLUSIONS
The skeletal tumor burden determined with 18F-Fluoride PET/CT is a powerful prognostic biomarker of OS and PFS in breast cancer patients. While the simple presence of bone metastases is associated with worse prognosis, we have demonstrated that, among all patients with bone metastases, it is possible to objectively discriminate which ones will have the worst outcome. This may help improve treatment strategies for breast cancer patients. To understand the relevance of our findings, more studies are necessary to evaluate if the skeletal tumor burden metrics will ultimately alter these treatment strategies.

Ethical approval
The local Institutional Review Board approved this retrospective study (#46/2016). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
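As a rough companion to the statistical analyses this report describes, the sketch below shows how the univariable Cox model and the Kaplan-Meier comparison around the reported TLF10 cutoff of 3,700 could be run with the lifelines library. The synthetic data generated here are purely illustrative and do not reproduce the study's actual values.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 107
tlf10 = rng.gamma(shape=1.2, scale=2500.0, size=n)             # synthetic burden values
months = rng.exponential(scale=30.0 / (1.0 + tlf10 / 3700.0))  # higher burden, shorter OS
died = (rng.random(n) < 0.4).astype(int)                       # synthetic event indicator
df = pd.DataFrame({"months": months, "died": died, "tlf10": tlf10})

# Univariable Cox proportional hazards regression (hazard per unit of TLF10).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()

# Dichotomize at the reported cutoff of 3,700 and compare survival curves.
high = df["tlf10"] > 3700
result = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                      df.loc[high, "died"], df.loc[~high, "died"])
print("log-rank p =", result.p_value)

km = KaplanMeierFitter()
km.fit(df.loc[high, "months"], df.loc[high, "died"], label="TLF10 > 3,700")
ax = km.plot_survival_function()
km.fit(df.loc[~high, "months"], df.loc[~high, "died"], label="TLF10 < 3,700")
km.plot_survival_function(ax=ax)
```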
Exploratory Analysis of Pairwise Interactions in Online Social Networks

In the last few decades sociologists have been trying to explain human behaviour by analysing social networks, which requires access to data about interpersonal relationships. This represented a big obstacle in this research field until the emergence of online social networks (OSNs), which vastly facilitated the process of collecting such data. Nowadays, by crawling public profiles on OSNs, it is possible to build a social graph where "friends" on an OSN become represented as connected nodes. An OSN connection does not necessarily indicate a close real-life relationship, but using OSN interaction records may reveal real-life relationship intensities, a topic which has inspired a number of recent studies. Still, published research currently lacks an extensive exploratory analysis of OSN interaction records, i.e. a comprehensive overview of users' interaction via different ways of OSN interaction. In this paper we provide such an overview by leveraging the results of an extensive social experiment which managed to collect records for over 3,200 Facebook users interacting with over 1,400,000 of their friends. Our exploratory analysis focuses on extracting population distributions and correlation parameters for 13 interaction parameters, providing valuable insight into online social network interaction for future research in this field of study.

Introduction and related work
A social network is a structure composed of nodes and edges which represent people and their relationships, such as family bonds, friendships, etc. Social network analysis (SNA) is a research field which deals with analysing such networks and extracting useful information about the people described within, with the analysis being mostly focused on user interactions. There are numerous possible applications: by analysing social networks, sociologists and social psychologists are trying to explain how people's thoughts, feelings and behaviours are influenced by the presence of others [1, 2]; recommender systems can use it to make customized and novel recommendations [3, 4]; corporations are trying to improve relations between employees and their work effectiveness [5-7]; telecoms want to prevent user churn [8-10]; in the educational domain, information about connectedness between students may be used to enhance the learning process [11-13], etc.

Modern online social networks (OSNs) such as Facebook or Twitter are widely accepted as platforms for exchanging messages, sharing photos, links and other kinds of information. We can treat these OSNs as applications for social network management. Due to their nature as digital platforms, information about connectedness and interaction between users is usually stored in a structured fashion and is becoming more accessible than ever, which has vastly facilitated the ability to observe social networks for research purposes. One of the basic methods of gathering OSN information is creating software which uses the OSN's API to crawl public profiles and construct a social graph based on the publicly available "friendship" information contained within [14-16]. In that way it is possible to create a social graph with information on whether two users are connected, but usually without details about the nature or intensity of their real-life relationship.
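As a minimal illustration of the crawling approach just described, the sketch below turns hypothetical (profile, friend) pairs harvested from public friend lists into an undirected graph; the user names and edges are invented for demonstration, and the resulting graph carries no tie-strength information, which is exactly the limitation noted above.

```python
import networkx as nx

# Hypothetical output of an OSN crawler: pairs of publicly "friended" profiles.
crawled_edges = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("carol", "dave"),
]

G = nx.Graph()                  # OSN friendship is mutual, hence undirected
G.add_edges_from(crawled_edges)

# The graph encodes who is connected, but nothing about relationship intensity.
print(G.number_of_nodes(), "users,", G.number_of_edges(), "friendship edges")
print("alice's OSN friends:", sorted(G.neighbors("alice")))
```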
There are, however, some studies that introduce various models and algorithms which enable calculating friendship intensity and picking out real-life relationships from ego-users' total OSN friends by considering their interaction on the OSN [17-26]. Some papers aim simply to differentiate between strong and weak friendships of the ego-user [17-19], others classify the ego-user's friends into more than just two basic classes [20, 21], while some aim to determine the connection strengths between all OSN users and express them in a numerical fashion [20, 22-26].

Although OSN interaction records are frequently used as the basis for various research purposes, so far a comprehensive exploratory analysis of users' OSN interaction has not been published. Taking this into consideration, we have decided to invest a great effort in collecting a representative real-life OSN interaction dataset, followed by performing an extensive exploratory analysis in order to extract and describe its key properties. As Facebook is arguably the most popular OSN today with over 2 billion active users [27], we decided to focus on this particular social network. We have conducted a comprehensive Facebook social experiment NajFrend where we collected records that describe interaction between almost a million and a half pairs of Facebook users. We have then performed an exploratory analysis where we focused on extracting population distributions and correlation matrices for 13 Facebook interaction parameters such as posts, likes, comments, mutual photos, etc. (which we will call interaction parameters in the following sections). All these parameters were collected and summarized at the pairwise level, e.g. total likes, total comments, etc. between pairs of Facebook friends. The results of this user interaction exploratory analysis based on a huge empirical dataset represent the pivotal contribution of this paper.

The paper is organized as follows: in the Methodology section we provide details about the conducted social experiment, present the collected dataset and describe in detail the process of extracting population distributions and constructing the correlation matrix; the Results section contains tabular and visual results of the exploratory analysis; Discussion provides insight and interpretations of the gained results; finally, in Conclusion we give final remarks on this research.

Methodology
This section provides a brief description of the conducted social experiment NajFrend and the dataset collected in that experiment, which is the core dataset for our exploratory analysis. Also, we explain the steps undertaken in the exploratory analysis itself.

Social experiment NajFrend and the collected dataset
NajFrend is a comprehensive social experiment held in April and May of 2015. It involved 3277 examinees, mostly from Croatia and neighbouring countries. The majority of examinees were between 18 and 30 years old. Close to 80% of examinees were high school and university students. 57.7% of examinees were men and 42.3% were women. This experiment collected a dataset about interactions between the 3277 examinees and over 1,400,000 of their Facebook friends. All examinees gave explicit permission to use the collected data about their Facebook interaction for this research. For the following exploratory analysis, we have chosen 13 Facebook interaction parameters to describe user interactions, whose list and explanations can be found in Table 1.
Additionally, for each attribute in the table we have included an abbreviation, which will be used in figures with insufficient space for the full attribute names.

Exploratory analysis
The main goal of our exploratory analysis was to analyse the behaviour of the collected Facebook interaction parameters. We focused on extracting population distributions for each of the observed 13 interaction parameters and calculating Pearson's correlation coefficient for each pair of interaction parameters. For each distribution, we have provided a detailed quantile table and a theoretical distribution which has shown to be the best approximation of the empirical distribution of each interaction parameter. Since most Facebook users interact very little with a large portion of their Facebook friends, our dataset contains a lot of zero values. Taking this into consideration, we have chosen to focus on the best approximative theoretical distribution for the non-zero values and present the ratios of zero values for each interaction parameter. The following candidate distributions were tested for each parameter: beta, gamma, inverse gamma, normal, log-normal, skewed normal, geometric and uniform. For the theoretical distributions which are defined only on the interval [0,1], we first normalized the data according to Equation (1). Maximum likelihood estimation (MLE) was used for each listed distribution to find the distribution parameters which show the best fit. Using the chi-square test, we decided on the final theoretical distributions with the lowest corresponding chi-square values.

Analysis of the underlying distributions
In this section, we present the results of our underlying distribution analysis for each interaction parameter. Detailed quantile tables with over 10,000 records for each interaction parameter's empirical distribution are not included in this paper due to obvious size constraints, but can be found at r.lukahumski.iz.hr/EAPIOSN/quantiles.csv. For each interaction parameter, we have found the best approximative theoretical distribution of non-zero values and presented the ratio of zero values. Figure 1 shows the results of the chi-square tests (with the number of bins set to 50) for each interaction parameter for the different distributions. To give a simple graphical illustration of the differences between empirical distributions and the best approximate theoretical distributions, we also include a representative probability density function (PDF) of the empirical and approximative theoretical distribution for the friend_mutual parameter in Figure 2. The theoretical distribution is depicted as a dotted line, while the empirical distribution is shown with a solid line. In Table 2 we list all the interaction parameters, their best approximative theoretical distribution name, the parameters for best fit, the resulting chi-square value and the ratio of zero values. It is important to emphasize that, according to the chi-square test, no interaction parameter is unequivocally proven to be distributed according to a specific theoretical distribution; the highlighted theoretical distributions are merely the best approximations of the observed empirical distributions within the scope of the tested theoretical distributions.

Analysis of correlations between interaction parameters
Pearson's correlation coefficients between attributes in the dataset are shown in Figure 3.
The upper part of the figure shows the correlation intensity using the size and colour of the squares, while the lower part shows exact numerical values. For reasons of clarity, all attributes have abbreviated names (according to Table 1).

Discussion
The previous section presented the results of the exploratory analysis done using the dataset gained in the conducted social experiment. In the following paragraphs, we briefly review the gained results and try to provide some interpretations. Correlations show which interaction parameters are connected and how strong that connection is. Our analysis shows that feed_comment and feed_addressed have the strongest correlation. It is interesting to note that people who make a lot of comments on friends' posts will also write many standalone posts on their respective timelines. The analysis also shows a high correlation between the parameters photo_like and feed_like, which is logical concerning the nature of these parameters, i.e. users treat reacting to textual posts and pictures very similarly. The high correlation between the attributes photo_comment and feed_comment also supports this assumption.

The low correlation between parameters that show the numbers of mutual photos is slightly surprising. We had expected to see a relative similarity between the parameters mutual_photo_published_by_user, mutual_photo_published_by_friend and mutual_photo_published_by_others because all these parameters count the number of mutual photos between ego-users and their observed Facebook friend, with the only difference being the person who published the photo. The analysis, however, showed that photo sharing habits vary significantly between users. Another interesting finding is that there is no correlation between the number of mutual friends and the level of interaction on the OSN via the observed interaction parameters. An assumption can be made that people who have more friends in common belong to a certain clique, which would be reflected in more intensive online communication, but our analysis showed that this is not corroborated by the survey results.

When looking at the various distributions, the large number of zero values is apparent, meaning that ego-users generally interact very little with most of their Facebook friends. This is not so surprising if we refer to Dunbar's number [28], which states that people can comfortably maintain only 150 stable relationships, compared to the average number of Facebook "friends" in our survey, which was 429. The total lack of interaction further affirms this supposition, and this fact additionally motivates studies which aim to distinguish the OSN friends which truly are digital representations of actual real-life relationships. Finally, if one wants to model interaction parameter behaviour using theoretical distributions, the overall best approximative theoretical distribution for all interaction parameters is the gamma distribution, the sole exception being the inbox_chat parameter, for which the log-normal distribution gives the best results.

Conclusion
In this paper, we have presented the results of our exploratory analysis aimed at extracting the key properties of the data which describes interactions between pairs of connected Facebook users. For each interaction parameter, we have provided an empirical distribution as a detailed quantile table. Also, we discovered the best approximative theoretical distributions and associated parameters for all observed interaction parameters.
For all pairs of interaction parameters, we presented the level of correlation by calculating Pearson's correlation coefficient. The presented dataset was obtained in the massive social experiment NajFrend, which involved over 3000 participants and collected more than 1,400,000 records with summarized frequencies of interaction parameters between ego-users and their Facebook friends. The interaction records were collected using Facebook API 1.0. This dataset will also be the mainstay of our future research involving methods for discovering and visualizing real-life relationships based on observed social network interaction parameters.
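A minimal sketch of the fitting procedure from the Methodology section is given below, under stated assumptions: the candidate set is restricted to the continuous distributions available in scipy.stats (the [0,1] normalization required for the beta distribution, whose exact form in Equation (1) is not reproduced in this text, and the discrete geometric distribution are omitted), and synthetic counts stand in for the NajFrend records.

```python
import numpy as np
from scipy import stats

def best_fit(values, n_bins=50):
    """Fit candidate distributions by MLE to the non-zero values and rank
    them by a chi-square statistic over `n_bins` bins, as described above."""
    x = np.asarray(values, dtype=float)
    x = x[x > 0]                                  # non-zero interactions only
    candidates = {
        "gamma": stats.gamma, "inverse gamma": stats.invgamma,
        "normal": stats.norm, "log-normal": stats.lognorm,
        "skewed normal": stats.skewnorm, "uniform": stats.uniform,
    }
    counts, edges = np.histogram(x, bins=n_bins)
    chi2 = {}
    for name, dist in candidates.items():
        params = dist.fit(x)                      # maximum likelihood estimates
        expected = len(x) * np.diff(dist.cdf(edges, *params))
        expected = np.clip(expected, 1e-9, None)  # guard against empty bins
        chi2[name] = (((counts - expected) ** 2) / expected).sum()
    return min(chi2, key=chi2.get), chi2

# Synthetic "likes" counts: 70% zeros, gamma-shaped non-zero tail.
rng = np.random.default_rng(1)
likes = np.concatenate([np.zeros(700), rng.gamma(0.5, 30.0, size=300)])
print("zero-value ratio:", np.mean(likes == 0))
print("best fit:", best_fit(likes)[0])
```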
The Relationship Between an Alternative Form of Cognitive Reflection Test and Intertemporal Choice

The cognitive reflection test (CRT) has been popular because it has demonstrated good predictive validity for a variety of biases in judgment and decision making. Thomson and Oppenheimer (2016) further developed a second version of the cognitive reflection test, CRT-2. Although CRT-2 has been found to be associated with several biases in judgment and decision making, its relationship with intertemporal choice remains unclear. Previous studies have shown that intertemporal choice characterizes the competition between intuition and reflection, and can be predicted by the original CRT. To further validate CRT-2, the present study tests the relationship between CRT-2 and intertemporal choice. The study finds that better performance on CRT-2 is significantly associated with fewer impulsive intertemporal choices in both gain and payment conditions. Moreover, impulsive choices are related to intuitive errors but not non-intuitive errors generated from CRT-2. The study suggests that CRT-2 provides some more items for researchers to select to characterize individual differences in thinking style and judgment and decision making.

The Cognitive Reflection Test (CRT) is a popular test that is used to measure rational thinking and normative choice preference (Frederick, 2005). CRT contains three items, and an iconic item is the famous bat and ball problem: "A bat and a ball cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?" As one can imagine, a "10 cents" answer appears to be intuitive but is nevertheless incorrect. To find the correct answer, the respondent needs to override the intuitive impulse and perform reasoning deliberately (Frederick, 2005; Kahneman, 2011).

Researchers believe that CRT responses characterize the interaction between two competing mental processes as defined by the dual-process theory (Frederick, 2005; Kahneman, 2011; Sinayev & Peters, 2016). According to this theory, two processes (systems) exist in our mind: whereas System 1 is fast, intuitive and impulsive, System 2 is slow, deliberative and controlled (Sloman, 1996; Evans, 2008; Kahneman, 2011). To deliver a correct answer on a CRT item, System 2 needs to check, inhibit, and outperform System 1. The dual-process theory has long been used to address biased judgment and decision-making, and a variety of such biases are linked to System 1's impulse and intuition (Evans, 2008; Kahneman, 2011). Consistently, a series of studies have revealed an association between CRT and biased judgment and decision-making. For example, in the intertemporal choice task, participants with lower CRT scores displayed a stronger preference for immediate smaller rewards than for later larger rewards and hence were more impulsive in their choices (Białek & Sawicki, 2018; Frederick, 2005; Sinayev & Peters, 2016). In the gamble choice task, participants with lower CRT scores exhibited excessive risk aversion and hence were not able to maximize their potential earnings (Frederick, 2005). Additionally, fewer correct answers on CRT were associated with greater conjunction fallacy and base-rate neglect (Hoppe & Kusterer, 2011; Oechssler et al., 2009).
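For readers who want to verify the bat-and-ball item quoted above, the deliberate (System 2) solution can be made explicit in a couple of lines; only the arithmetic from the item itself is used here:

```python
# Let b be the ball's price in dollars. The bat costs b + 1.00,
# and together they cost 1.10, so b + (b + 1.00) = 1.10.
ball = (1.10 - 1.00) / 2            # solve 2b + 1.00 = 1.10  ->  b = 0.05
bat = ball + 1.00
assert abs((bat + ball) - 1.10) < 1e-9 and abs((bat - ball) - 1.00) < 1e-9
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")   # $0.05, not the intuitive $0.10
```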
Not surprisingly, performance on CRT also correlated with Scholastic Assessment Test (SAT, a popular test used for college admission in the United States) scores and grade point average (GPA, a classical measure to index overall academic performance), both of which require logical reasoning and deliberation (Frederick, 2005; Thomson & Oppenheimer, 2016). Thus far, the development of CRT has advanced our understanding of judgment and decision making; nonetheless, some concerns have also been raised. For example, Primi, Morsanyi, Chiesi, Donati, and Hamilton (2016) argued that CRT might be too difficult and hence lead to a floor effect, particularly in relatively poorly educated populations.

A more significant concern deals with CRT's overexposure. As CRT gains popularity in research and media reports, participants may learn the items and the answers before taking the test. For instance, in Thomson and Oppenheimer (2016, study 1), more than sixty percent of the participants had been exposed to at least one item before the study. Knowledge of the test can artificially inflate the score. In line with this, in Haigh (2016), those who had seen at least one item scored significantly higher than those without any prior knowledge of CRT. Similarly, Białek and Pennycook (2017) analyzed six previously published studies and found that in four studies participants with prior knowledge of CRT obtained a higher score than those who did not have such knowledge.

However, it is worth noting that although prior knowledge of CRT may increase test scores, CRT's predictive ability (its core ability) remains robust. For example, in Białek and Pennycook (2017), even though participants with prior knowledge of CRT scored better, there was no significant difference in CRT's predictive ability (correlations between CRT and other tasks) between experienced and inexperienced participants. Meyer, Zhou, and Frederick (2018) tracked mTurk workers who took CRT repeatedly and found that, on average, scores improved by merely 0.024 items per exposure. More importantly, CRT's predictions did not significantly vary with repeated exposure. In Stagnaro, Pennycook, and Rand (2018), CRT was correlated with religious belief measures, and such correlations were stable across years. Importantly, one recent study provided new insights into the impact of CRT's exposure on its predictive power. Šrol (2018) found that this impact was moderated by the need for cognition. In this study, CRT's predictive ability for performance on heuristics and bias tasks was improved by its exposure only in those with a high level of need for cognition. However, in that sample, only 16% of participants were categorized into the group with a high level of need for cognition. Thus, when combining all participants together, there was no overall difference in CRT's predictions between exposed and unexposed participants. Nonetheless, Šrol (2018) indicated that participants' metacognitive characteristics might moderate how exposure affects CRT's prediction.

Another concern pertains to the confounding effect of numeracy. Sinayev and Peters (2016) proposed and empirically demonstrated that both cognitive reflection and numeracy were needed to generate correct answers on CRT. Numeracy refers to the ability to comprehend and utilize numerical information (Peters & Bjalkebring, 2015; Sinayev & Peters, 2016). According to Sinayev and Peters (2016), to generate a correct answer, participants went through two steps.
In the first step, participants needed to inhibit the intuitive impulse (i.e., cognitive reflection). In the second step, participants engaged in math calculation (i.e., numeracy involvement). Consistent with their hypothesis, Sinayev and Peters (2016) found that the numeracy component, teased apart from the CRT response, could significantly predict the judgment and decision-making biases described above. Thus, the relationship between CRT and judgment and decision-making biases was confounded with numeracy.

Given these concerns, some researchers have introduced modified CRT measures (Baron, Scott, Fincher, & Metz, 2015; Primi et al., 2016; Thomson & Oppenheimer, 2016; Toplak, West, & Stanovich, 2014). For example, to mitigate the potential floor effect, Primi et al. (2016) added three new items and found that only a very small proportion of participants answered all items incorrectly. The new version performed well in younger and less educated populations. To address the overexposure problem, Toplak et al. (2014) added four more items to CRT. A subsequent study further tested this seven-item version with three formats: open-ended questions, two-option multiple choices, and four-option multiple choices. Both studies found that the extended CRT retained its predictive power, regardless of the question format.

Exploring a Second Version of CRT: CRT-2
Among the modified CRT measures, the present study specifically focuses on CRT-2, which was developed by Thomson and Oppenheimer (2016). We have two reasons. First, compared to the measures that contained both the original CRT and new items (Baron et al., 2015; Primi et al., 2016; Toplak et al., 2014), CRT-2 adopts a completely new set of items (the specific items are found in the Methods section). Our main goal is to further validate these items by testing the relationship between CRT-2 and intertemporal choice. More broadly speaking, the study aims to further investigate whether CRT-type trick questions can predict biased judgment and decision making. CRT-2 has the potential to provide more items for researchers to select to characterize individual differences in cognition.

Another reason to focus on CRT-2 is that CRT-2 might rely less on (though not exclude) numeracy. First, CRT-2 adopts items that aim to reduce such an effect. As can be seen in the Methods section, among the four items, the first and the third items do not appear to need any computation. Second, in Thomson and Oppenheimer (2016), the correlation between CRT-2 and numeracy was significantly weaker than the correlation between the original CRT and numeracy. Third, as demonstrated in Primi et al. (2016), numeracy was a significant covariate that mediated the gender effect on CRT. That is, the fact that males had better performance on CRT was in part because males performed better on numeracy. In Thomson and Oppenheimer (2016), males scored higher on both CRT and numeracy than did females. However, there was no difference in performance between females and males on CRT-2. Taken together, it is reasonable to believe that CRT-2 might rely less on numeracy than does the original CRT.

In Thomson and Oppenheimer (2016), CRT-2 was correlated with need for cognition, base rate neglect, college GPA, and SAT scores, indicating it could replicate some of the important findings generated by the original CRT. Nevertheless, the study did not find a significant relationship between CRT-2 and intertemporal choice. As described in that article, one reason might be that the intertemporal choice task was not reliable in the study.
The low reliability might be because there were only a few items. Moreover, only one relationship reached the statistical significance level when testing the correlation between CRT-2 and each of the intertemporal choice items separately. We note that with the limited number of items, the task might not have been able to capture a stable choice preference.

In the present study, we are interested in clarifying the relationship between CRT-2 and intertemporal choice for two reasons. First, intertemporal choice is related to a series of important life activities and consequences. For example, research has found that more impulsive intertemporal choices are associated with lower income, lower credit scores, lower college GPA, and a greater chance of having obesity and abusing substances (de Wit, 2008; Kirby, Winston, & Santiesteban, 2005; Meier & Sprenger, 2011; Reimers, Maylor, Stewart, & Chater, 2009; Schiff et al., 2016). Thus, it is of interest to examine a test that can characterize individual differences in intertemporal choice. Second and more importantly, researchers have demonstrated that making intertemporal choices reflects the competition between System 1 and System 2 as defined by the dual-process theory. For example, McClure, Laibson, Loewenstein, and Cohen (2004) identified two competing brain regions (part of the limbic system vs. the dorsolateral prefrontal cortex) when participants were making different selections in an intertemporal choice task. These two brain regions resembled the characteristics of System 1 and System 2 (e.g., intuition vs. calculation). Additionally, with modeling, Price, Higgs, Maw and Lee (2016) found that intertemporal choice could be well explained by a two-parameter model that depicted the dual-process theory. Moreover, recent studies with mouse-tracking demonstrated that trajectories were less direct when making less impulsive intertemporal choices, and concluded that participants had to inhibit the temptation of choosing the sooner smaller rewards in order to maximize their benefit in the long run (Cheng & González-Vallejo, 2017; Dshemuchadse, Scherbaum, & Goschke, 2013; Stillman, Medvedev, & Ferguson, 2017). Therefore, testing the relationship between intertemporal choice and CRT-2 helps to illustrate whether CRT-2 captures cognitive reflection (System 1 vs. System 2), as does the original CRT.

Overview of the Present Study
CRT-2 appears to provide some new items that pertain to cognitive reflection and judgment and decision making. Some recent studies combined CRT and CRT-2 and used the new composite to address honesty, analytical thinking style, and attitudes toward fake news (Capraro & Peltola, 2018; Pennycook & Rand, 2017; Yilmaz & Saribay, 2017). However, we believe the validity of CRT-2 needs to be addressed before its extensive application. The present study aims to test the validity of CRT-2 by examining its correlation with intertemporal choice. To address the reliability issue, we employed an intertemporal choice task that was recently used in other studies (Cheng & González-Vallejo, 2016; Dai & Busemeyer, 2014; Scholten, Read, & Sanborn, 2014). In this task, participants make repeated choices between a sooner, smaller reinforcer and a later, larger reinforcer. With a series of choice pairs, we hope to increase the reliability of the task and to obtain a stable choice preference from participants. Furthermore, for CRT scoring, most studies so far have used the number of correct responses.
Such a scoring method measures cognitive reflection and has demonstrated good predictive ability (Pennycook, Cheyne, Koehler, & Fugelsang, 2015). However, as implied in Pennycook et al. (2015), while greater cognitive reflection may predict more long-term oriented choices, this pattern is different from the concept that intuition can predict more impulsive choices. In other words, even if CRT-2's correct responses could predict intertemporal choice preference, the extent to which CRT-2 measures intuition in intertemporal choice remains unclear. From the perspective of face validity, if CRT-2 taps into the intuitive thinking style, two patterns should be revealed. First, among the errors, there should be at least a portion of intuitive errors. Too few intuitive errors among all errors would indicate that CRT-2 is unable to capture the intuitive thinking style. Second, the intuitive error should be able to predict intertemporal choice preference in the direction opposite to that predicted by the correct response. Following Pennycook et al. (2015) and Sinayev and Peters (2016), we employ the scoring method with the correct response, intuitive error and other error. For CRT-2, the intuitive and other types of errors can be found in the Methods section. The study aims to further examine whether the performance of CRT-2 is consistent with its face validity regarding both reflective and intuitive thinking styles.

One issue of CRT-2 is its relatively low reliability (Cronbach's α). In Thomson and Oppenheimer (2016), with the same group of participants, CRT-2's reliability was .51, lower than CRT's reliability (.62). In Primi et al. (2016), CRT's reliability was .65. Białek and Pennycook (2017) reviewed six past studies on CRT and found that the reliability ranged from .53 to .76. In Šrol (2018), CRT's reliability was as high as .78. Thus, it appears that the reliability of the original CRT varies across samples. For CRT-2, it is not clear whether its reliability also varies between studies. More importantly, consistently low reliability would reduce the merit of CRT-2. Thus, the present study tests CRT-2's reliability with a different sample.

It is worth noting that the majority of studies with intertemporal choice adopt only the gain condition. That is, participants make selections between two rewards. In such a condition, excessive preference for the immediate/sooner, smaller rewards over the later, larger rewards is considered impulsive, and lower CRT scores are supposed to be associated with more impulsive choices. To obtain a reliable relationship between CRT-2 and intertemporal choice, the present study also employs a payment condition where participants make selections between a sooner, smaller payment and a later, larger payment. In this condition, excessive preference for the later, larger payment over the sooner, smaller payment is regarded as the impulsive choice pattern, because participants have to pay more money in the long run (Cheng, Lu, Han, & González-Vallejo, 2012; Perry & Carroll, 2008). We hypothesize that lower CRT-2 scores and more intuitive errors are correlated with more impulsive choices in both gain and payment conditions.

Participants
Prior to data collection, this study was approved by the Institutional Review Board (IRB) to ensure it met the ethical guidelines. In the present study, all participants were recruited from the participant pool at the authors' institution.
The participant pool was composed of freshman and sophomore students who were taking Elementary Psychology. Data collection stopped at the end of the semester when the participant pool was closed. As a result, one hundred and forty-five college students participated in this study via Qualtrics to receive course credit. Three participants completed fewer than half of the items. Another three completed zero or only one item on CRT-2. Hence, these six participants were removed from the study. Among the remaining 139 participants, there were 68 females and 67 males, and four did not reveal their gender. We note that this sample size was comparable to the one tested in Thomson and Oppenheimer (2016). A sensitivity analysis was performed with G*Power 3.1.9 to estimate the detectable effect sizes with the current sample size. α was set at .05 and statistical power was set at .80. As a result, the study had sufficient power to detect a correlation coefficient of .23 (two-tailed) and differences between two independent means of d = 0.49 (two-tailed; one group of 68 females and another of 67 males).

Materials and Procedures
All participants completed CRT-2 and two conditions of the intertemporal choice task (gains vs. payments), as described below.

CRT-2 scale. The four items of CRT-2 were adopted from Thomson and Oppenheimer (2016, p. 101). To clarify the impact of intuitive error on decision preference, we adopted two kinds of scoring criteria (Sinayev & Peters, 2015; Thomson & Oppenheimer, 2016). The first one simply differentiated incorrect and correct answers. The second kind not only identified the correct and incorrect answers, but also teased apart the errors into two categories: intuitive errors and other errors. The items and the scoring keys are listed below. For each item, any answer that is different from the correct or intuitive answer is considered a non-intuitive incorrect answer.

1. If you're running a race and you pass the person in second place, what place are you in? (intuitive answer: first; correct answer: second)
2. A farmer had 15 sheep and all but 8 died. How many are left? (intuitive answer: 7; correct answer: 8)
3. Emily's father has three daughters. The first two are named April and May. What is the third daughter's name? (intuitive answer: June; correct answer: Emily)
4. How many cubic feet of dirt are there in a hole that is 3' deep x 3' wide x 3' long? (intuitive answer: 27; correct answer: none)

Intertemporal choice tasks. The intertemporal choice task employed in the present study was similar to those reported in some previous studies (Cheng & González-Vallejo, 2016; Scholten et al., 2014). The current study employed two conditions of the intertemporal choice task, with hypothetical gains and payments. In the gain condition, participants were asked to make forty choices between a sooner gain and a more delayed gain. All attributes, including magnitude and delay, varied across the choice pairs. To mimic earnings and payments (for the payment condition) in everyday life, where whole numbers rarely occur, the magnitudes in all choice pairs contained two decimal places. As an example, participants were asked to make a choice between $137.55 in 67 days vs. $90.29 in 34 days, and then moved to another choice pair: $205.05 in 55 days vs. $149.85 in 32 days. Across all choices, the averages of the sooner and later delays were 28.68 and 54.43 days, respectively. The averages of the smaller and larger gains were $195.97 and $345.75, respectively.
The delays and magnitudes used in the payment condition were exactly the same as those used in the gain condition. There were two differences between the conditions. First, in the payment condition, participants were asked to make choices between a sooner smaller payment and a more delayed larger payment (as opposed to selecting between gains in the gain condition). Second, the sequences of the choice pairs were different between the two conditions. Doing so aimed to reduce the memory effect, so that memory of choices in one condition would not affect choices in the other. In an earlier experiment performed by the authors, upon completing the task, participants were asked whether they noticed that the attributes were the same between the two conditions. None reported affirmatively. Following previous studies (Cheng et al., 2012; Scholten et al., 2014), the present study employed the proportion of choosing the long-term advantageous options (the later larger gain in the gain condition, and the sooner smaller payment in the payment condition) to index the choice preference. A higher proportion in both conditions indicates a less short-sighted (impulsive) choice preference.

Reliability of the Measures
In the current study, when only differentiating correct and incorrect answers, CRT-2's Cronbach's α was .60, with a 95% confidence interval between .48 and .70. When differentiating correct answers, intuitive errors and other errors, CRT-2's Cronbach's α slightly increased to .61, with a 95% confidence interval between .50 and .71. Given the confidence intervals, such reliability was comparable to the findings in other studies of CRT-2 (Thomson & Oppenheimer, 2016; Yilmaz & Saribay, 2017). For the gain and the payment conditions of the intertemporal choice task, the Cronbach's α were .93 (95% CI between .91 and .95) and .92 (95% CI between .89 and .93), respectively. Thus, choice preference in the current study was reliable and could be used for further analyses.

Performance of CRT-2
On average, participants answered 2.39 items correctly (a 59.8% correct rate), with an SD of 1.17. As seen in Figure 1, the percentages of participants who gave zero to four correct answers were 9.4, 12.2, 24.5, 38.1 and 15.8, respectively. Thus, based on the current sample, the distribution of CRT-2 scores was not severely skewed. Moreover, CRT-2 did not exhibit a floor or ceiling effect. Table 1 further presents the results regarding CRT-2 performance when differentiating intuitive and non-intuitive errors. As can be seen, when participants made errors, the majority of errors (73.6%) were intuitive ones. As displayed in Table 1, the last item was more difficult than the other three. Given the different levels of difficulty, one might ask whether including the last item decreased the reliability of CRT-2. This was not the case in the present study, as removing the last item resulted in a Cronbach's α of .60 (95% CI between .46 and .70). Moreover, as shown in Table 2, the items displayed significant inter-correlations, with the only exception being between Item 2 and Item 4.¹ Thus, all four items should be included in CRT-2.

CRT-2 and Intertemporal Choice
In the gain condition, the mean proportion of choosing the later larger gain over the sooner smaller gain was 0.64 (SD = 0.25). In the payment condition, the mean proportion of choosing the sooner smaller payment over the later larger payment was 0.67 (SD = 0.22).
Similar to other studies (Cheng et al., 2012; Estle et al., 2006), there was a trend for participants to select more long-term advantageous options in the payment condition than in the gain condition, t(138) = 1.64, p = .10, d = 0.14, although it was not statistically significant. Table 3 shows the Pearson correlations between CRT-2 responses and intertemporal choice preference. As shown, overall CRT-2 performance and intuitive error were significantly related to choice preference in both the gain and payment conditions. Following Lee and Preacher (2013), Fisher's z test was applied to examine whether the correlation strength differed significantly between using the CRT-2 total score and using intuitive error to predict choice preference. In the gain condition, there was no significant difference between the two correlations, Fisher's z = 1.25, p(two-tailed) = .212. A similar non-significant pattern was also found in the payment condition, Fisher's z = 1.05, p(two-tailed) = .295. Thus, the CRT-2 total score and intuitive error had similar predictive ability for choice preference in both gain and payment conditions. Contrary to the CRT-2 total score and intuitive error, error due to non-intuitive reasons was not associated with choice preference in either condition. The non-intuitive error was not related to the intuitive error, either. We did not apply Fisher's z test to compare the predictive ability between intuitive error and other error because the latter simply could not predict choice preference.

(Table note: ** p < .01; *** p < .001.)

¹ For all correlations in the present study (Tables 2 and 3), there was little difference in correlation coefficients between using Pearson correlation and Spearman correlation. The significance of the correlations remained the same with either type of correlation.

Table 4 exhibits the comparisons of CRT-2 and choice preference between female and male participants (those who did not report gender were excluded from this section). Similar to Thomson and Oppenheimer (2016), there was no difference in any of the CRT responses between females and males. Additionally, there was no gender effect on intertemporal choice preference.

Discussion
The present study examined the relationship between CRT-2 and intertemporal choice. The overall performance on CRT-2 (e.g., average total score and inter-correlations between items) was comparable between the present study and Thomson and Oppenheimer (2016). Primi et al. (2016) raised concerns about a potential floor effect for the original CRT. As illustrated in Figure 1, less than 10% of participants answered all items of CRT-2 incorrectly. Meanwhile, 15.8% of participants answered all items of CRT-2 correctly. Hence, the study did not detect any obvious floor or ceiling effect, indicating that CRT-2's difficulty appeared to be appropriate for college students. Compared to Thomson and Oppenheimer (2016) and Yilmaz and Saribay (2017), the internal consistency of CRT-2 in the present study was similar (when taking the 95% confidence intervals into account). As stated, on the surface, the first and the third items in CRT-2 did not need any computation, whereas the other two items were more related to mathematics. Thus, the items' inconsistent relationship with mathematics might decrease CRT-2's internal consistency. While a Cronbach's α of .60 was far from perfect, it was still close to CRT's Cronbach's α in some of the studies cited earlier.
Hence, we believe CRT-2's internal consistency should not be a fundamental problem that prevents its future usage. The present study computed three scores: CRT-2's total score (i.e., the correct answer rate), the percentage of intuitive errors, and the percentage of other errors. Similar to Thomson and Oppenheimer (2016), the majority of errors were intuitive errors. Moreover, there was no significant relationship between intuitive errors and other errors. Thus, intuitive errors and other errors appeared to capture different constructs of thinking style.

Most importantly, the present study employed a reliable intertemporal choice task and found that more CRT-2 correct responses were significantly related to fewer impulsive intertemporal choices in both gain and payment conditions. Additionally, we found that intuitive errors, but not other errors, were significantly positively related to impulsive choice preference. Furthermore, the strength of the correlation between choice preference and CRT-2 correct responses was similar to the strength of the correlation between choice preference and intuitive errors. The similar predictive ability of the correct responses and intuitive errors might be due to the fact that the intuitive errors accounted for 73.6% of total errors.

The findings stated above have a few implications. First, in addition to the correct responses, intuitive errors could also predict impulsive preference in intertemporal choices. By contrast, non-intuitive errors were not able to do so. While we admit that both CRT-2 and intertemporal choice tap into a variety of psychological constructs such as general intelligence and numeracy, we believe the current findings generated by CRT-2 are at least consistent with the notion of cognitive reflection and an intuitive thinking style. In other words, the performance of CRT-2 was in line with its face validity. To more clearly demonstrate that CRT-2 can capture cognitive reflection and intuition, future studies should employ more CRT-type scales, thinking style scales (for example, the Faith in Intuition scale used in Pennycook et al., 2015), and judgment and decision making tasks for cross-validation. Additionally, the study implied that for CRT and other similar scales, to examine their validity, researchers can go beyond the total score (i.e., the number of correct responses). The percentage of intuitive errors and the relationship between intuitive errors and other behavioral tasks should also be tested. Combined with the previous findings in Thomson and Oppenheimer (2016), the present study implied that CRT-2 could provide some more valid items for researchers to characterize individual differences. In a broader sense, the present study suggested that, in addition to the three original CRT items, CRT-type questions generally have good predictive power for biased judgment and decision making.

Limitations of the present study should also be addressed. First, we did not directly ask participants whether they had seen any of the CRT-2 items before. Thus, we could not assess to what extent CRT-2 was free of prior exposure. Second, Thomson and Oppenheimer (2016) found that, compared to CRT, CRT-2's correlation with objective numeracy scales was weaker. While teasing apart numeracy is appealing, the current study did not measure numeracy. Similar to Thomson and Oppenheimer (2016), the present study found that there was no gender effect on CRT-2, indicating that CRT-2 seemed to be more gender neutral than the original CRT.
Nonetheless, the gender effect on the original CRT may have resulted from not only objective numeracy (numerical skills) but also math anxiety, self-efficacy, and rational thinking (Primi, Donati, Chiesi, & Morsanyi, 2018; Ring, Neyse, David-Barett, & Schmidt, 2016; Sladek, Bond, & Phillips, 2010; Zhang, Highhouse, & Rada, 2016). Thus, the present study simply replicated the non-significant gender effect on CRT-2. However, we believe such a pattern did not provide sufficient insight into the relationship between CRT-2 and numeracy. Hence, future studies are needed to clarify whether CRT-2 is less affected by objective and/or subjective numeracy. Recently, a new version of CRT (termed verbal CRT) based on non-mathematical problems was developed. This version has a weaker relationship with numeracy and is more gender neutral (Sirota, Kostovičová, Juanchich, Dewberry, & Marshall, 2018). We believe developing such a version is the right step toward teasing apart cognitive reflection and numeracy. The third limitation pertains to the study's external validity. The current study employed college students from a participant pool. Although CRT-2 performed well with such a sample, further studies are needed to examine whether CRT-2 can also be applied to populations with different ages and education levels.

In sum, the present study reveals that with a reliable intertemporal choice task, CRT-2's correct responses and intuitive errors are able to predict choice preference in both gain and payment contexts. The study suggests that CRT-2 provides some more items for researchers to select from to characterize individual differences in thinking style and judgment and decision making.
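A minimal sketch of the two-correlation comparison reported above, in Python. This is the simplified Fisher r-to-z test treating the two correlations as independent; the dependent-correlations version used with Lee and Preacher's (2013) calculator additionally requires the intercorrelation of the two predictors, and the input values below are illustrative, not the study's data.

```python
import math

def fisher_z(r: float) -> float:
    """Fisher r-to-z transform."""
    return math.atanh(r)

def compare_independent_correlations(r1: float, n1: int, r2: float, n2: int):
    """Two-tailed Fisher z test for two independent correlations.

    Note: comparing two correlations that share a variable (as in the
    study) calls for the dependent-correlations variant, which also
    needs the correlation between the two predictors.
    """
    z1, z2 = fisher_z(r1), fisher_z(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-tailed normal p-value
    return z, p

if __name__ == "__main__":
    # Hypothetical values: r(CRT-2 total, choice preference) vs.
    # r(intuitive errors, choice preference).
    z, p = compare_independent_correlations(0.30, 140, 0.25, 140)
    print(f"Fisher's z = {z:.2f}, p(two-tailed) = {p:.3f}")
```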
2019-06-26T00:54:57.320Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "418c0a3fea7c3369ac7c91e96664b981a41bb463", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21909/sp.2019.02.774", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fd97ecee79b357b8b02314d89f1e692c5ed694f7", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
235369966
pes2o/s2orc
v3-fos-license
Barriers to axonal regeneration after spinal cord injury: a current perspective

Regeneration of long axons after spinal cord injury (SCI) will benefit patients with extensive traumatic damage to the white matter pathways who experience intolerable, permanent, neurologic deficits even after neuroprotective treatment with anti-inflammatory agents (Kwiecien, 2021a). This short paper attempts to synthesize the pathologic mechanisms or barriers involved in inhibition of axonal regeneration in the SCI and provides suggestions for therapeutic interventions enabling this regeneration in animal models.

Recently elucidated pathogenesis of the spinal cord injury (SCI) in the rat model indicates that destruction of axonal pathways by a traumatic event is further augmented by the severity of destructive inflammation over months following the trauma (Kwiecien et al., 2020a). The character of the inflammatory response in the spinal cord depends on the location of the area of necrosis and hemorrhage; if it is located deep, surrounded by the spinal cord, it results in formation of the cavity of injury (COI) where myelin-rich necrotic debris, hemorrhage and, from day 3, rapidly increasing numbers of pro-inflammatory, CD68+/CD163− macrophages are sequestered by a central nervous system (CNS) tissue response, particularly astrogliosis, within the first week post-SCI. Excess edema fluid from the surrounding spinal cord appears to be transferred to the COI with apparent participation of reactive astrocytes and aquaporin-4 (Rash et al., 2004; Kwiecien et al., 2021b). The other type of inflammatory response, arachnoiditis, involves large disruption of the surface of the spinal cord (pia limitans externa) and involves infiltration of macrophages, fibroblasts and blood vessels with concurrent obliteration of the spinal cord and exclusion of CNS glia including GFAP+ astrocytes (Kwiecien et al., 2020a). While arachnoiditis is essentially a severe form of granulomatous inflammation that resolves into a scar, the COI resolves into a macrophage-free syrinx (Kwiecien et al., 2020a). Both types of inflammation apparently expand at the cost of the destroyed spinal cord and are contained and inhibited by a progressively thickening wall of astrogliosis and its anti-inflammatory activity, the mechanisms of which are unknown (Kwiecien, 2013; Kwiecien et al., 2020a).
Given the above pathogenesis of the SCI, we have to consider the following barriers to regrowth of long axons in descending and ascending pathways: (1) the severity of inflammation initiated by trauma to the white matter (in the COI and in arachnoiditis) is not only destructive to the adjacent white matter but lasts for an extraordinarily long period of time, >16 weeks (Kwiecien et al., 2020a); (2) the COI, with its aqueous content and the resulting syrinx (Kwiecien et al., 2021b), is not crossed by axons unless they are supported by an implanted bridge (Kwiecien, 2013); (3) arachnoiditis and the resulting scar cease to be part of the spinal cord (Kwiecien et al., 2020a) and the CNS axons may not enter it.

Inhibition of inflammation in the SCI has been achieved recently with prolonged, continuous 1-2 week long subdural administration of dexamethasone and of two immunomodulatory proteins derived from Myxoma virus, Serp-1 and M-T7 (Kwiecien et al., 2019). Since infusion of dexamethasone resulted in severe toxicity (Kwiecien et al., 2019), this powerful synthetic glucocorticoid is not suitable for long-term sustained administration. An 8 week long subdural infusion of Serp-1 lowered the numbers of macrophages throughout the administration and essentially eliminated them from the COI, thus reducing the duration of inflammation by half, which is considered a neuroprotective effect (Kwiecien et al., 2020b). This is the first preclinical study indicating the required duration of sustained administration of an anti-inflammatory agent, 8 weeks, to eliminate inflammation from the COI. Although subdural infusion offers an effective route of administration for agents that do not readily pass the blood-spinal cord barrier (Kwiecien et al., 2019), it is an invasive administration that needs to be maintained for a long period of time. Anti-inflammatory agents administered orally and intravenously still need to be tested for their neuroprotective effectiveness, considering the damage to the blood-spinal cord barrier around inflammation confined to the COI with resulting persistent vasogenic edema (Kwiecien et al., 2020a, b, 2021b). While the COI can be implanted with materials secreting anti-inflammatory agents such as Serp-1 (Kwiecien et al., 2020c) or materials tested for the ability to support axonal regeneration (Kwiecien, 2016), arachnoiditis, a solid inflammatory tissue (Kwiecien et al., 2020a), is less amenable to neuroregenerative therapies and would probably require surgical resection to create a lesion leading to the COI for appropriate implantation enabling axonal regeneration.

Axonal regeneration in ascending pathways in the dorsal column can be conveniently studied in the myelin-lacking Long Evans Shaker (LES) rat with the dorsal column crush lesion implanted with the rat choroid plexus (Kwiecien, 2013) and dextran axonal tracer microinjected in both sciatic nerves (Kwiecien, unpublished). Ependymal cells derived from the implanted choroid plexus formed elaborate processes enveloping numerous axons in the COI, thus supporting their regrowth across a 1-2 mm wide lesion and supporting regenerated axons beyond 8 weeks.
Some of the regenerated axons received myelin from implanted ependymal cells that transdifferentiated into oligodendrocytes by 8 weeks post-SCI, indicating that these axons may have regenerated at full length and re-constituted synapses in the caudal brain stem, 4-5 cm from the site of the crush or 2/3 of the length of the spinal cord (Kwiecien, 2013). Although ependymal cells are very good at supporting axonal regeneration across the COI of the LES rat or in the crushed filum terminale of the LES and normally myelinated rats (Kwiecien and Avram, 2008), their availability is limited, and a variety of candidate synthetic materials have been studied in the SCI for their ability to bridge axonal regeneration. The implantation of the spinal crush injury in normally myelinated rats resulted in complete destruction of all implants by the severity of inflammation (Kwiecien, 2016), rendering the normally myelinated spinal cord not useful for such testing due to the severity of post-SCI inflammation (Kwiecien et al., 2020a). The same materials implanted in the spinal crush of dysmyelinated LES rats could be studied conveniently and in more detail following the 2 week survival. Most of the implants induced an inflammatory response from the spinal cord involving infiltration by macrophages and multinucleated giant cells and formation of liquid-filled cystic spaces between the spinal cord and the body of an implant, indicating rejection (Kwiecien, 2016). One material, however, a methacrylate hydrogel, did not induce an inflammatory response and adhered closely to the spinal cord around the lesion, but axons failed to enter it (Kwiecien, 2016). Observations from this and subsequent studies on other materials (Kwiecien, unpublished) indicate that in a normally myelinated animal model of human SCI, after the inhibition and elimination of inflammation, a candidate material for bridging activity in the COI should be: (a) inert, non-resorbable, not inducing inflammation in the implanted spinal cord; (b) liquid, with the ability to gel within 10-30 seconds after microinjection in the lesion (approximately 50 μL in the rat); (c) supporting axonal migration into its delicate, soft, porous structure and providing permanent support thereafter, including enabling of re-myelination. The last requirement may need the participation of three-dimensional chains of suitable cells, and therefore pre-seeding of a candidate hydrogel material in vitro prior to its micro-injection into the COI. The inhibition of inflammation as the first and necessary step in neuroregeneration leads to an exciting idea of using hydrogels loaded with an effective anti-inflammatory agent for sustained release in situ (Kwiecien et al., 2020c) that would also serve as the bridge for axonal regeneration across the lesion at the same time, a daunting challenge for a potentially fast and effective treatment of a devastating and currently untreatable disease, the SCI. Once the axons cross the acute lesion or the COI, they will face myelinated white matter on the opposite side, and myelin is not permissive to axonal regeneration; it needs to be removed in a gentle fashion not involving initiation of destructive inflammation, at least for the period of time required for axons to re-grow.
Such removal of myelin has been achieved in large areas of the white matter of the spinal cord by a 1 week long subdural infusion of a very high concentration (50 million times higher than physiological) of kynurenic acid, indicating a method for therapeutic removal of myelin in areas of the spinal cord targeted for neuroregeneration. Importantly, oligodendrocytes appeared remarkably affected by the treatment with kynurenic acid, with markedly retracted cytoplasmic processes and a small amount of organelle-poor cytoplasm, associated with a specific "weakening" of oligodendrocytes in vitro, the mechanism of which remains uncertain (Langer et al., 2016). It is also not known whether "weakened" oligodendrocytes would revive and remyelinate naked axons after a period of time following administration of kynurenic acid and, if so, what that period of time is. As outlined in Figure 1, therapeutic neuroregeneration following the SCI is a matter of properly designed pre-clinical experiments targeting the inflammation, involving hydrogels or other materials acting as the bridge for axonal regeneration across the COI, and the removal of myelin sheaths in the white matter areas targeted for axonal regrowth with infusion of kynurenic acid. Considering that axonal regeneration in the filum terminale (an integral part of the CNS in the rat) and in the spinal cord is about 2 mm a day at its fastest (Kwiecien and Avram, 2008; Kwiecien, 2013), it will be a slow process in a much longer human spinal cord. Therapeutic neuroregeneration in clinical trials and beyond will require in vivo imaging to monitor regenerating axons throughout the therapy, another challenge for in vivo pre-clinical studies. The present work was supported in part by VPC NeuroPath CONSULTING, Inc (to JMK).

Figure 1 | Conceptual presentation of cellular mechanisms involved in inhibition of axonal regeneration in the SCI with putative treatment directions. A SCI involving the dorsal column cuts ascending axons and results in a locally severe, destructive and extraordinarily protracted inflammation that can destroy any implant meant to serve as the bridge for axonal regeneration. The site of injury deep in the spinal cord is converted within the first week into a COI filled with water from excess edema fluid and by macrophages or, if it is at the surface of the spinal cord, into arachnoiditis, a type of severe solid, granulomatous inflammation. Both types of inflammation are walled off from the rest of the spinal cord by astrogliosis. Regenerating axons do not enter the COI because it is filled with water and axons do not swim across on their own, while arachnoiditis is devoid of glial cells and becomes a solid extra-neural tissue hostile to axons. The administration of anti-inflammatory agents is the first and necessary step in treating the SCI; the inhibition of the severe inflammation allows for neuroprotection and also for protection of cells and/or synthetic materials implanted into the SCI lesion, or later, into the COI. The implantation of the choroid plexus, rich in ependymal cells, allowed for axons (colored purple and not myelinated) to cross the COI. Myelin sheaths in the spinal cord form an impassable barrier for axonal regeneration beyond the COI. Their widespread and safe removal and the creation of large myelin-free areas can be accomplished by subdural infusion for 7 days of a very high concentration of kynurenic acid, which allows for axonal plasticity, sprouting and potentially for regeneration of axons (purple, not myelinated) after crossing the COI.
COI: Cavity of injury; SCI: spinal cord injury.
2021-06-09T06:18:30.313Z
2021-06-07T00:00:00.000
{ "year": 2021, "sha1": "613ec3823e31087eb60367261bdc9405b1874f24", "oa_license": "CCBYNCSA", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8451569", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "e58357f71728c2db21c17d23dd0ad812604a16fb", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
4593893
pes2o/s2orc
v3-fos-license
Modeling Unobserved Heterogeneity in Susceptibility to Ambient Benzo[a]pyrene Concentration among Children with Allergic Asthma Using an Unsupervised Learning Algorithm

Current studies of gene × air pollution interaction typically seek to identify unknown heritability of common complex illnesses arising from variability in the host's susceptibility to environmental pollutants of interest. Accordingly, single-component generalized linear models are often used to model the risk posed by an environmental exposure variable of interest in relation to a priori determined DNA variants. However, reducing the phenotypic heterogeneity, primarily represented by the modeled DNA variants, may further optimize such an approach. Here, we reduce phenotypic heterogeneity of asthma severity and also identify single nucleotide polymorphisms (SNPs) associated with phenotype subgroups. Specifically, we first apply an unsupervised learning algorithm and a non-parametric regression to find a biclustering structure of children according to their allergy and asthma severity. We then identify a set of SNPs most closely correlated with each sub-group. We subsequently fit a logistic regression model for each group against the healthy controls using benzo[a]pyrene (B[a]P) as a representative airborne carcinogen. Application of such an approach in a case-control data set shows that SNP clustering may help to partly explain heterogeneity in children's asthma susceptibility in relation to ambient B[a]P concentration with greater efficiency.

Introduction

Asthma is a complex heritable syndrome, which afflicts an estimated 300 million people worldwide [1]. A growing body of research suggests that particular subtype(s) of asthma arise from complex interactions of genetic and environmental factors during early life, prior to the onset of symptoms [2,3]. Even though several environmental contributors to asthma risk are established to date, a growing body of genome-wide association studies (GWAS) has also shown substantial contributions by genomic variability [4,5]. That is, the risk of asthma attributed to genetic variations has also risen in recent decades [5]. Since genetic variations of the populations are unlikely to have changed within a span of a few decades, an environmental exposure context-dependent increase in the penetrance of genetic susceptibility factors explains, at least partly, the underlying gene-environment interactions (GEIs) [4]. A recent review by Bønnelykke and Ober proposes examination of asthma subgroups with homogeneous phenotypes, in conjunction with a thorough measurement of environmental exposures, as the starting point of GEI investigations [4]. Childhood exposure to polycyclic aromatic hydrocarbons (PAHs), with benzo[a]pyrene as a representative PAH, is a known risk factor for reduced lung function [6], symptom aggravation [7], or the onset of new symptoms [8,9]. Yet, present evidence on the mechanisms of particular subtypes of childhood asthma, namely, endotypes, due to early-life exposures to PAHs remains incomplete [10]. To date, most investigations of gene-environment interaction focus on an a priori-driven hypothesis of well-recognized pathways and/or established candidate genes [10]. Such an approach inherently precludes the possibility of identifying novel pathways and/or candidate genes. An alternate strategy for identification of new or suspected alleles is needed for a deeper exploration of the mechanisms of PAH-associated asthma endotype(s).
Within the present investigation, we explore the feasibility of reducing the dimensionality of data complexity in the host (i.e., asthma cases and healthy controls), using cluster analysis and regression trees. We identify homogeneous subgroups of the children, based on a data-driven analysis of allergy and asthma symptom severity, and identify features (e.g., SNPs) associated with each asthma severity group. In recent decades, multiple algorithms and techniques have been developed to reduce data dimensionality, including hierarchical clustering [11,12], association analysis [13], or partition optimization methods, such as the k-means clustering algorithm [14,15]. Concurrent clustering of children and SNPs into homogeneous groups is known as biclustering (or block clustering, two-mode clustering, or co-clustering), in which the resulting sub-groups are expected to capture homogeneous subgroups of children. A hierarchical Bayesian procedure for biclustering [16], using mixtures, has been proposed for binary data [17][18][19], for count data [20,21], and for ordinal data [22][23][24]. In addition, biclustering models based on double k-means are proposed in Rocci and Vichi [25] and Vichi [26]. Specifically, we re-categorize asthmatic children (along with the associated SNPs) from six to three clusters, while retaining a single category for the healthy controls. Within each of the three sub-groups, a non-parametric regression tree was fit to reduce the original 619 single nucleotide polymorphisms (SNPs) to 16 SNPs of interest, which are most robustly correlated within each of the three clusters. The initial pool of 619 candidate SNPs was chosen based on their roles in xenobiotic metabolism, detoxification, induction and/or repair of oxidative damage, initiation and/or enhancement of the inflammatory immune responses, and DNA repair from exposure to air pollution [27][28][29][30]. Subsequently, we posit that while each individual SNP may pose a very small effect on its own, the SNPs jointly account for a significant proportion of variation in the children's susceptibility to asthma per unit exposure to ambient B[a]P, as proposed by polygenic inheritance [31]. We test whether the variant genotypes en masse, associated with a given sub-group, contribute to different odds of asthma diagnosis per the same unit increase in ambient B[a]P concentration. We compare the odds of asthma per unit increase in ambient B[a]P concentration after calculating polygenic scores for each subgroup.

Study Sites

An equal number of cases and controls were enrolled from a city with a historically high air pollution level (i.e., Ostrava) as well as five rural background sites (i.e., Southern Bohemian towns) in the Czech Republic [32]. Ostrava (i.e., the high exposure site), near the Polish border, has maintained a high concentration of coal mining, coal processing, and metallurgical industries since the 18th century [33]. Among the four districts within Ostrava, the cases and the controls were enrolled from the most polluted district (i.e., Radvanice-Bartovice), with the highest district mean for B[a]P during November 2008 (11.4 ng/m³), compared to the city's annual mean value (9.3 ng/m³) [34]. In contrast, the rural background region (with ~24,000 inhabitants) demonstrated a mean B[a]P concentration of 2.5 ng/m³ during the same period [32]. Within the rural site, predominant local air pollution sources include indoor heating and vehicle exhaust emissions [33].
In contrast, the urban sites are characterized by industrial sources, in addition to the residential and vehicular ones. The effects of ambient air pollution on the children's gene expression levels [32], micronuclei frequency [35], and protein and lipid peroxidation [36] have been published.

Case and Control Children

Case and control definitions have been published [37]. Briefly, all children were given lung function, bronchodilation, and skin prick tests by an allergist if they were flagged by a primary care physician as a possible/probable case. The asthma case definition was based on positive results for the following criteria. (1) The child has a positive diagnosis by an allergist of 'current' asthma using International Classification of Diseases (ICD), Tenth Revision [38] codes within their medical record. (2) The child currently receives asthma medication. (3) The child has a clinical lung impairment based on a spirometry test within the past 12 months. (4) The child was positively responsive to a bronchodilatation test during the last 12 months. (5) In addition, all children were given allergy skin tests (Phadiatop, Pharmacia & Upjohn Diagnostics, Uppsala, Sweden) for airborne allergens. The diagnostic reliability of ICD-10 diagnoses in physician visits or hospitalizations has been demonstrated in our earlier investigations [39]. In contrast, the control children were defined as those who were free from any of the above conditions. Each case was matched to a control according to the enrollment site, age group, and gender. In addition, all parents filled out a questionnaire on the children's medical history and life-style choices. The questionnaire included questions on the children's birth weight, length of gestation, duration of the breastfeeding period within the first six months of life, body weight and height, allergy, positive history of asthma, allergic rhinitis, and atopic dermatitis episodes within the past 12 months, any antibiotic use within the two weeks of sampling, and parental smoking history. The ethical committee of the Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, approved the study. The parents of the children signed an informed consent, according to the Helsinki II declaration.

Air Pollution Monitoring of Polycyclic Aromatic Hydrocarbons

PAHs were measured using a Versatile Air Pollution Sampler (VAPS) [40]. Particle-bound PAHs were extracted from filters, and a quantitative chemical analysis of PAHs was performed by high-performance liquid chromatography (HPLC) with fluorescence detection according to the US Environmental Protection Agency (EPA) method [41]. The mean daily level was measured once every three days for a total of 10 days/month in the background region, and once every six days for a total of 5 days/month in the high exposure region. The difference in measurement frequency was determined by the local government, according to the availability of funds, public demand, and the scientific opinion of the Czech Hydrometeorological Agency. Details regarding quality assurance and control have been described [39,42,43].

DNA Extraction

Children's sputum samples were incubated at 50 °C in a water bath for a minimum of 1 h, followed by incubation with ORAgene Purifier on ice for 10 min (DNA Genotek, Ottawa, ON, Canada). DNA was precipitated with 95% ethanol and then diluted in TE buffer (10 mM Tris, 0.1 mM EDTA, pH 8.0).
DNA concentrations were measured between 50 and 80 ng/µL using a NanoDrop 1000 Spectrophotometer (Thermo Fischer Scientific, Wilmington, DE, USA).

SNP Detection

Quantification of SNPs on Glutathione S-transferase Mu 1 (GSTM1) and Glutathione S-transferase theta-1 (GSTT1), plus 95 genes, has been described [42,43]. The SNPs from the 95 genes of interest were selected based on their known roles in protection against oxidative injury, metabolism of xenobiotics, DNA repair, and/or immune and inflammatory responses. The SNPs of interest were chosen from the SNP500 Cancer Database (http://snp500cancer.nci.nih.gov/). Only SNPs with a minor allele frequency >5% were included. Custom-designed 96-sample panels were employed to detect 768 SNPs using the GoldenGate genotyping assay on a BeadStation 500GX system (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. As a quality control measure, 12% of the DNA samples were randomly genotyped, yielding 100% concordance. We removed the samples and SNPs with a p10 GC score <0.40 and/or a call frequency score <0.60 for all SNPs from further analysis. Furthermore, polymorphisms with poor clustering quality (149 SNPs) were removed, yielding 619 SNPs for further analysis.

Cotinine and Vitamin Assays

Urine samples were used to validate self-reported tobacco exposure. Urinary cotinine levels were analyzed by radioimmunoassay [35]. Cotinine >450 ng/mg of creatinine was considered the cut-off value for active smoking status; 20-449 ng/mg was deemed the cut-off for passive smoking status. Blood samples (~40 mL) were drawn by venipuncture into vacuettes containing lithium heparin (for the vitamin C assay) or EDTA (for the vitamin A and E assays). All tissue samples were stored at 4 °C and transported within 18 h of collection to the Department of Genetic Ecotoxicology for vitamin A, C, and E analyses [44,45].

Allergy and Asthma Severity Index

We recoded the dichotomous asthma outcome into an ordinal variable, the allergy and asthma severity index (AASI) (see Table 1). The asthmatic children (yes/no) were subdivided according to the age at onset of atopic dermatitis, allergic sensitization, and wheezing symptoms, as well as the results of the spirometry tests as a proxy for the severity of current asthma, as shown below [46,47]. A Jonckheere-Terpstra (JT) test was used to assess a linear trend, without assuming an underlying normal distribution of age, at α = 0.05.

Biostatistical Methods

The ambient B[a]P concentration variable was transformed to a natural log scale as it was right-skewed. The children's self-reported cigarette smoke exposure was validated with the urinary cotinine concentration. Secondhand smoking status was defined as creatinine-adjusted cotinine 20 to 449 ng/mg, and current active smoking status as cotinine ≥450 ng/mg [48]. In addition, the parental report of the number of smokers at home was correlated with the child's creatinine-adjusted cotinine level, using Spearman's non-parametric ranked agreement. All active smokers (7% of asthmatics and 6% of controls) were removed from further analysis. The distributions of individual SNPs were checked for Hardy-Weinberg equilibrium (HWE) using a chi-square test.
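A minimal sketch of the per-SNP HWE check described above (a one-degree-of-freedom chi-square goodness-of-fit test for a biallelic SNP; the genotype counts are illustrative, not the study's data):

```python
from scipy.stats import chi2

def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
    at a single biallelic SNP (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele A
    q = 1.0 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

# Illustrative genotype counts (not from the study).
stat, p_value = hwe_chi_square(n_aa=180, n_ab=95, n_bb=15)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")  # departure from HWE if p < 0.05
```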
Exposure Window of Interest

The ambient B[a]P concentration at the monitor nearest to each child's home was matched. Consistent with our earlier analysis [39], the optimal exposure window was defined as an average over 30-day periods. However, as childhood susceptibility to asthma exacerbation is expected to depend on the age of the child, we compared the representativeness of the period of interest (i.e., 30 …).

Identification of Confounders

The same set of putative confounders of the B[a]P-asthma association was examined, including age, gender, total number of smokers in the family, body mass index, plasma levels of Vitamin C (mg/L), Vitamin A (mg/L), and Vitamin E (mg/L), season of delivery (indicator variables: fall, winter, and spring), and gestational age at delivery (see Table 2). That is, univariate associations of each of the above variables with ambient B[a]P and with the asthma and allergy diagnosis, respectively, were examined through linear correlation coefficients as well as the Pearson χ² test.

Table 2. Re-categorization of the ordinal asthma severity variable. The original equally spaced 7-level scale was estimated and transformed into a new non-equally spaced 7-level scale (highlighted in italic in row (a)). The final 4-level categories after collapsing their categories are highlighted in boldface in rows (b) and (c).

Multivariate Analyses

In the logistic regression model, we retained a variable as a possible confounder if the variable induced a >10% change in the regression coefficient of B[a]P. The largest initial logistic regression model included the following predictors: age, gender, total number of smokers in the family, obesity (body mass index ≥30), plasma levels of Vitamin C (mg/L), Vitamin A (mg/L), and Vitamin E (mg/L), season of delivery (indicator variables: fall, winter, and spring), birthweight, and gestational age at delivery. The final model adjusted for gender, age, obesity, and total number of smokers at home. Regression diagnostics to determine the robustness of the estimated odds ratios were performed by removing influential values, which were defined as values 3 times greater than the 75th percentile.

Double k-Means

This technique extends the well-known vector quantization (k-means) methodology [50]. The double k-means method [25,26] groups the individuals (e.g., children) and the features (e.g., SNPs) simultaneously. The model identifies clustering structures by numerical solution of a least-squares algorithm and highlights the dependence between children and SNPs. We tried a finite number of clusters for both children and SNPs, and the criterion to decide the optimal number of clusters is a trade-off between the sum of squares between clusters (SSB) and within clusters (SSW) over all numbers of clusters tested. The optimal number of clusters of children and SNPs was three, which can be summarized as: healthy control children (Cluster 1), mild-moderate case children (Cluster 2), and severe cases (Cluster 3).

Regression Trees

Environmental health analyses often involve modeling the relationship between a response (e.g., health outcomes) and a set of explanatory variables for the purposes of quantifying exposure-health outcome associations, describing patterns and processes, or making spatial or temporal predictions. The regression tree [51] is a non-parametric model suitable for data with a sample size >100, and its application requires fewer model assumptions than traditional parametric counterparts. Thus, regression trees can select, from among many predictors, those predictors and their possible complex interactions that are most important in determining the outcome variable to be explained. This is an advantage for our data set because the number of predictors is high, and the modeling of all possible interactions makes the parametric counterparts infeasible. We apply this regression to each of the clusters obtained with the double k-means approach to select the primary SNPs and their possible interactions. The optimal tree maximally reduces deviance, and its structure indicates the final set of SNPs.
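As a rough sketch of this tree-based variable selection, the following uses scikit-learn's DecisionTreeRegressor as a stand-in for the authors' implementation; the genotype coding, tree depth, and simulated data are assumptions for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Illustrative data: 200 children x 619 SNPs coded as 0/1/2 copies of
# the minor allele; the response is ln(B[a]P) at the nearest monitor.
X = rng.integers(0, 3, size=(200, 619)).astype(float)
ln_bap = rng.normal(loc=1.5, scale=0.8, size=200)

# A shallow tree as a variable-selection device: SNPs used in splits
# are the candidates retained for the cluster.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10, random_state=0)
tree.fit(X, ln_bap)

# Internal nodes have feature index >= 0; leaves are marked with -2.
used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
print("SNP columns selected by the tree:", used)
print("importances:", tree.feature_importances_[used])
```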
Re-Categorizing Ordinal Outcome

An ordinal outcome variable is one with a categorical data scale that describes order. Each ordinal level refers to a greater or smaller magnitude of a certain characteristic than another level. For ordinal outcomes, there is a clear ordering of the levels, but the absolute distances among them are unknown. Thus, the degree of dissimilarity among adjacent levels of the scale in an ordinal variable is not necessarily always the same. For instance, the difference in the severity of an injury expressed by level 2 rather than level 1 might be much greater than the difference expressed by a rating of level 10 rather than 9. In addition, the use of the first q positive integers as labels does not imply that there is equal spacing among the ordinal categories. We used the ordered stereotype model in our analysis to help us decide the correct number of categories and also to determine the spacing among categories. This model was proposed by Anderson (1984) [52] and is a paired-category logit model nested between the adjacent-categories logit model and the standard baseline-category logit model. One of the main advantages of using an ordered stereotype model to fit an ordinal outcome is that it can determine a new spacing among the ordinal levels, which is dictated by the data. This information is not provided by other common ordinal regression models such as the proportional odds model or the adjacent-categories logit model.

Polygenic Risk Score (PRS) Calculation

A PRS is typically estimated as a weighted sum of the number of risk alleles. However, a priori weight information (i.e., published ORs of asthma) is not available for most of the 619 SNPs. Therefore, we used equal weights for all high-risk alleles. The odds of asthma diagnosis per high B[a]P exposure were compared across the sub-groups.

Statistical Software

All analyses were performed with the statistical package R 3.2.3 (R Foundation for Statistical Computing, Vienna, Austria) and SPSS version 20.0.1 for Windows (SPSS Inc., Chicago, IL, USA).

Re-Categorizing Ordinal Outcomes

The fitting of a new spacing among ordinal levels allows two adjacent ordinal levels with "close" values to be collapsed. Initially, asthma severity was defined as a 7-level uniform and equidistant ordinal variable (Table 2). After fitting an ordered stereotype model to this variable, the fitted spacing dictated by the data is as shown in Table 2 row (a). The original equally spaced 7-level scale (1-7) is transformed into an estimated non-equally spaced 7-level scale whose values are (1, 4.954, 6.784, 6.988, 6.994, 6.994, 7). As shown in Table 2 row (a), the scores of the last four categories are very close to each other (from 6.988 to 7) and thus indistinguishable. Accordingly, categories 4-7 were merged into one, yielding a 4-level ordinal variable whose final categories are highlighted in boldface in Table 2, rows (b,c).
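For reference, a sketch of the ordered stereotype model behind this spacing estimate, written in assumed notation following Anderson's formulation:

```latex
% Ordered stereotype model (Anderson, 1984), sketched with assumed
% notation: response Y with q ordered levels, covariate vector x.
\[
  \log \frac{P(Y = k \mid x)}{P(Y = 1 \mid x)}
  \;=\; \mu_k + \phi_k\,\beta^{\top} x,
  \qquad k = 2, \dots, q,
\]
\[
  0 = \phi_1 \le \phi_2 \le \dots \le \phi_q = 1 .
\]
% The fitted scores \phi_k give the data-driven spacing of the ordinal
% levels; adjacent levels with nearly equal \phi_k (such as the scores
% 6.988 to 7 on the rescaled axis above) are candidates for merging.
```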
The results in Table 2 row (b) still assume equal spacing between ordinal categories. We replaced the original ordinal response values with the non-equal, yet continuous, spacing obtained from fitting the ordered stereotype model (Table 2, row (c)). We used this final adjustment of the number of categories in the asthma severity variable to apply the double k-means and, subsequently, the non-parametric regression trees. There are two main advantages of collapsing adjacent levels in an ordinal variable: (a) to avoid ordinal levels that are not distinguishable in terms of predictive power, and (b) to reduce the cost of collecting the same type of data in future similar studies.

Double k-Means

Based on the double k-means technique, sets of SNPs most closely associated with each severity level were identified. We considered from k = 2 to k = 10 groups of SNPs and children, taking 10 random starting points, and we used the recategorized ordinal scale of asthma severity. The goal of SNP clustering was to identify a set with the smallest within-group variability and the largest between-group variability. Figure 1 displays the evolution of the sum of squares between clusters (SSB) and within clusters (SSW) for all numbers of clusters tested. The best trade-off between SSB and SSW is observed at k = 3 (SSB < SSW) or k = 4 (SSB > SSW). Observing the resulting clustering structure according to asthma severity, the best solution is 3 clusters of children and SNPs. Thus, with k = 3, the children within each cluster are very similar to each other but very different from the children of the other clusters. Cluster 1 contains only the healthy control children (severity = 1); Cluster 2 contains mild-moderate case children with severity = 2.977 (i.e., level 2 in the 4-level scale; Table 2, row (b)); and Cluster 3 includes the most severe cases (i.e., levels 3 and 4 in the 4-level scale; Table 2, row (b)).
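A rough sketch of the SSB/SSW trade-off criterion, using ordinary k-means (scikit-learn) as a simplified stand-in for the full double k-means algorithm; the simulated genotype matrix is a placeholder:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Placeholder matrix: 200 children x 619 SNP genotypes coded 0/1/2.
X = rng.integers(0, 3, size=(200, 619)).astype(float)

grand_mean = X.mean(axis=0)
for k in range(2, 11):  # k = 2, ..., 10, as in the analysis above
    km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X)
    ssw = km.inertia_  # within-cluster sum of squares (SSW)
    sizes = np.bincount(km.labels_, minlength=k)
    # Between-cluster sum of squares (SSB): size-weighted squared
    # distances of the cluster centroids from the grand mean.
    ssb = float((sizes[:, None] * (km.cluster_centers_ - grand_mean) ** 2).sum())
    print(f"k = {k}: SSW = {ssw:,.0f}, SSB = {ssb:,.0f}")
```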
Demographic Traits of the Clusters of Children

Summary statistics of the three clusters (cluster 1, reference group; cluster 2, mild/moderate; cluster 3, severe outcome) are shown in Table 3. The severe asthma cases (cluster 3) had their first clinical diagnosis by 2 years of age, compared to a mean first-diagnosis age of 6 years among the controls (p < 0.001). Similarly, cluster 3 was also associated with the longest history of corticoid treatment, compared to the controls (p < 0.001). Clusters 2 and 3 were also respectively associated with significantly higher mean ambient B[a]P concentrations (p < 0.001). At the same time, cluster 3 was not associated with a significantly elevated urinary cotinine concentration (p = 0.329) or a higher number of cigarette smokers at home (p = 0.108). A significantly lower proportion of mild/moderate (cluster 2) and severe (cluster 3) cases were enrolled from the rural background site, compared to the controls (p = 0.001).

Regression Trees Using ln(B[a]P)

Non-parametric regression tree models were fit for each cluster, in which the response variable was the logarithm of benzo[a]pyrene, ln(B[a]P), and the predictors were the resultant SNPs from the cluster analysis. Using ln(B[a]P) > 2, which is the cut-off for selecting the levels of the tree having a statistically significant change in deviance, the significant SNPs within each cluster are shown in Table 4. Table 4 summarizes the biclustering ID of the original AASI categories, the count of children in each category, and the SNPs associated with each biclustering ID. A total of 16 SNPs were identified. While the mild/moderate outcome group (cluster 2) was associated with five SNPs, the severe outcome group (cluster 3) was associated with 11 SNPs, one of which had been identified in our earlier investigation (rs2070673 in gene CYP2E1-07) [37].

Application of Bi-Clustering Methods to Estimation of ln(B[a]P) Association with Asthma

As shown in Figure 2, the mild/moderate outcome group was associated with an overall lower polygenic risk score (3 ± 1, mean ± SD), compared to the children in the severe outcome cluster (mean polygenic risk score, 18 ± 2). Table 5 summarizes the distribution of the polygenic risk scores for the children in the moderate and severe outcome groups, respectively. In addition, we compared the demographic traits of the children within a particular cluster (see Table 6), according to their polygenic risk score. There was no indication of a trend in demographics, disease history, and/or exposure history according to the polygenic risk scores within the respective clusters (all p-values > 0.05).
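A minimal sketch of the equal-weight polygenic risk score described in the Methods; the genotype matrix and risk-allele orientation are illustrative assumptions:

```python
import numpy as np

def polygenic_risk_score(genotypes: np.ndarray) -> np.ndarray:
    """Equal-weight PRS: unweighted sum of risk-allele counts across
    the selected SNPs (used when no published ORs are available as
    weights).

    genotypes: (n_children, n_snps) array with entries 0/1/2 counting
    copies of the risk allele at each of the 16 selected SNPs.
    """
    return genotypes.sum(axis=1)

rng = np.random.default_rng(2)
# Illustrative: 16 tree-selected SNPs for 100 children.
g = rng.integers(0, 3, size=(100, 16))
prs = polygenic_risk_score(g)
print(f"PRS mean ± SD: {prs.mean():.1f} ± {prs.std():.1f}")
```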
As shown in Table 7, the adjusted odds of the asthma outcome per (ln) unit increase in ambient B[a]P concentration were overall similar between the moderate-risk (aOR, 2.4; 95% CI, 1.0-5.4) and severe outcome groups of children (aOR, 2.7; 95% CI, 0.8-9.3). However, following further stratification according to the polygenic risk scores, the same unit increase in ambient B[a]P concentration was associated with somewhat increased adjusted odds of the asthma outcome for both the moderate (aOR, 3.8; 95% CI, 0.5-31.5) and severe outcome groups (aOR, 5.2; 95% CI, 0.8-36.1), compared to the low polygenic score groups (Table 7).
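A sketch of the kind of adjusted logistic model behind these odds ratios, using statsmodels on simulated data; the covariate set mirrors the final model described in the Methods, and all variable names and values are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "asthma": rng.integers(0, 2, n),      # 1 = case, 0 = control
    "ln_bap": rng.normal(1.5, 0.8, n),    # ln(B[a]P), ng/m3
    "age": rng.integers(6, 16, n),
    "male": rng.integers(0, 2, n),
    "obese": rng.integers(0, 2, n),
    "smokers_home": rng.integers(0, 4, n),
})

# Final model: asthma ~ ln(B[a]P), adjusted for gender, age, obesity,
# and the total number of smokers at home.
fit = smf.logit("asthma ~ ln_bap + age + male + obese + smokers_home",
                data=df).fit(disp=False)
aor = np.exp(fit.params["ln_bap"])
ci = np.exp(fit.conf_int().loc["ln_bap"])
print(f"aOR per ln-unit B[a]P: {aor:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```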
Discussion

Allergic asthma, allergic rhinitis, and atopic dermatitis represent the most burdensome childhood diseases in the world [53]. In the US, an estimated 40 million (or 13%) people suffer from asthma during their lifetime. To date, clinical and public health interventions to mitigate symptom exacerbations have been met with limited success, because the existing strategies do not address the possibly heterogeneous pathogenesis of multiple asthmas with apparently common symptoms. In spite of growing evidence that asthma is a heterogeneous collection of diseases, most investigations to date have ignored the complex contributions by genetic, environmental, dietary, and/or social domains [54,55]. As of today, the predominant research approach targets a single risk domain (e.g., "asthma genes") for the identification of singular and potent causal factors. Thus, there is an urgent need to integrate age-sensitive, multi-axial functional genomic data in order to clarify how early-life exposures to air pollution, and c-PAHs in particular, bring about clinical and preclinical symptoms (i.e., phenotype) via specific molecular mechanisms (i.e., endotype). This paper introduces a two-step technique to reduce the heterogeneity of host traits, through clustering of single nucleotide polymorphisms (SNPs) associated with the symptoms. According to our methodology, the analysis suggests for the first time that children with a high polygenic risk score have markedly higher odds of the asthma outcome per unit increase in ambient B[a]P concentration, within each severity outcome group (i.e., biclustering ID). In the first step of the analysis, we apply an unsupervised learning algorithm (double k-means) and non-parametric regression trees to find a biclustering structure of children according to asthma severity and SNPs, whereby the dimensionality of the SNPs is reduced. In the second step, we apply conditional logistic regression for each polygenic risk score group, nested within severity groups, against the healthy controls, using B[a]P as a representative airborne carcinogen.

The k-means algorithm creates a partition of the data space into clusters, where each observation belongs to the cluster with the nearest mean. It allows an efficient implementation for obtaining clusters that are easy to interpret. The procedure yields computationally expedient results and is available in most statistical software. We implemented the double k-means algorithm in the statistical package R 3.2.3. Unfortunately, there is no general and well-established methodology to decide the unknown number of groups hidden in a data set. The criterion used in this article is a trade-off between the sum of squares between and within clusters. Although it is not the case for the data set analyzed here, one of the drawbacks of this technique is that it might be sensitive to outliers.

Regression trees [51,56] are commonly used in data analysis with the objective of creating a model that is robust, flexible, and easy to interpret, and that predicts the value of an outcome based on the values of several predictors. When the sample size is large, as in the data set used here, this technique has several advantages: latent nonlinear relationships between predictors do not affect the performance of the tree, the tree is easy to interpret, and it performs variable selection, which is our interest in this paper. One of the limitations of using this non-parametric tool is the lack of a standardized validation test to assess the goodness-of-fit of the model. One option would be to use cross-validation to obtain an R-square, which might be compared with equivalent parametric counterparts. Additionally, we chose non-parametric regression tree models for variable selection because they are easy to replicate by researchers and practitioners in the field, being implemented in most of the commonly used statistical software. There are other approaches that could be used for variable selection, such as the Least Absolute Shrinkage and Selection Operator (LASSO), Gibbs Variable Selection (GVS), and Stochastic Search Variable Selection (SSVS).

Childhood exposures to polycyclic aromatic hydrocarbons (PAHs) are associated with allergic sensitization and early-onset wheezing symptoms in children [8,9], and with acute aggravation of existing asthma [57]. However, the sources of variability in host susceptibility to PAHs remain unclear [57]. Here, extreme seasonal variations in ambient PAH concentrations contribute to an overall homogeneous distribution of ambient PAH concentration within the relatively confined geographic locations of interest [34]. For example, the mean B[a]P concentration in the polluted urban location during our study period (November 2008) was 11.4 ng/m³, approximately 5 times higher than that in the background location (2.5 ng/m³) during the same period [32]. Since the 75th percentile of B[a]P is <0.5 ng/m³ in both New York City, US [58], and London, UK [59], the distinct spatiotemporal B[a]P contrasts in the Czech Republic allow us to detect the health effects using a smaller sample size than that typically required for G × E studies [58,[60][61][62][63]. PAHs represent the most potent genotoxic fraction of inhaled PM2.5 and PM10 [64]. Our earlier work has shown that B[a]P concentrations >1.0 ng/m³ could induce DNA damage, oxidative damage [65], genomic translocations [66], micronuclei [36], and DNA fragmentation in sperm [67]. Prenatal exposure to ambient air polluted by PAHs is associated with a heightened risk of intrauterine growth restriction (IUGR) [58,60,68], preterm delivery [58], and reduced neonatal height and gestational age [69]. The PAH mixture represents an environmental risk factor not only for swift aggravation of asthma [57], but also for its early-life induction [9]. However, the evidence gathered to date is either model-based or indirect in terms of the human exposure estimation [57]. For example, in vitro studies of human cells have demonstrated that B[a]P suppresses both humoral (B-cell mediated) and cellular (T cell-mediated) immune responses [70].
B[a]P also impairs aryl hydrocarbon receptor (AhR)-regulated signaling pathways [70]. In human macrophages exposed to B[a]P, gene expression analysis indicated that biological functions linked to immunity, inflammation, and cell death were most severely affected, including AhR-mediated p53 pathways [71].

Strengths of the present investigation include the direct quantification of ambient B[a]P levels. As all children within both of our target regions are served by the primary care clinicians within our study, our sample captures a population-representative group of children. Furthermore, even though our study design is cross-sectional, and therefore enrolled existing asthma cases, this is unlikely to have influenced the identification of high-risk SNPs. That is, high-risk SNPs are unlikely to have been influenced by asthma severity. Additionally, assigning scores to ordinal categories in the asthma severity response gives an easy way to analyze the data. Future work on assigning a clinical basis to asthma symptom severity is warranted for the creation of ordered categories. By using the ordinal scoring scheme for the asthma outcome, ordinary linear models could be applied. However, if there were little clinical or biological rationale for the spacing between adjacent categories, the use of an ordered stereotype model would be expected to yield better results. This is because the ordered stereotype model does not assume equally spaced distances across the ordinal outcomes. The estimation of the spacing among ordinal responses is an improvement over other ordinal data models, such as the proportional odds model and the continuation-ratio model. Finally, by analyzing the data set, we found several clusters and thereby showed the presence of unobserved heterogeneity, which is most likely to exist in almost all data sets of this type. Therefore, the two-step technique presented in this article can assist practitioners in partly reducing heterogeneity in children's asthma phenotypes in relation to ambient B[a]P concentration.

At the same time, several limitations are noted. First, our sample size is modest. The associated 95% confidence interval of the (ln) unit odds of asthma was wide, likely due to the limited sample size. Furthermore, PAHs have been shown to have high mutual correlations with other air pollutants. Therefore, residual confounding by other correlates of B[a]P in air (e.g., metals) could not be ruled out. Future analysis should account for the effects of metal components within PM2.5 [37]. In addition, another limitation is that double k-means could fail to find the correct clustering structure if the number of local minima in the data set is large. A good way to avoid a poor local minimum is to try multiple well-separated starting points. As a possible extension of our work, a comprehensive simulation study could be set up to compare our approach against other polygenic risk score approaches such as PRSice, PredictABEL, and gtx. The reproducibility of the polygenic score, based on our identified SNPs, as an efficient predictor of asthma severity requires validation, and this would be another future direction to take.

Conclusions

Our approach demonstrates an efficient strategy to reduce host heterogeneity in underlying susceptibility. Validation of our observation through a future prospective cohort study design is warranted.
2018-04-03T00:00:38.028Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "86ca8ba5a1683a25bd37d107d490e90f78a99ba1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/15/1/106/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86ca8ba5a1683a25bd37d107d490e90f78a99ba1", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
122152455
pes2o/s2orc
v3-fos-license
NUMERICAL COMPUTATION OF THE VALUE FUNCTION OF OPTIMALLY CONTROLLED STOCHASTIC SWITCHING PROCESSES BY MULTI-GRID TECHNIQUES

By the dynamic programming principle the value function of an optimally controlled stochastic switching process can be shown to satisfy a boundary value problem for a fully nonlinear second-order elliptic differential equation of Hamilton-Jacobi-Bellman (HJB-) type. For the numerical solution of that HJB-equation we present a multi-grid algorithm whose main features are the use of nonlinear Gauss-Seidel iteration in the smoothing process and an adaptive local choice of prolongations and restrictions in the coarse-to-fine and fine-to-coarse transfers. Local convergence is proved by combining nonlinear multi-grid convergence theory and elementary subdifferential calculus. The efficiency of the algorithm is demonstrated for optimal advertising in stochastic dynamic sales response models of Vidale-Wolfe type.

INTRODUCTION

We consider a stochastic system operating in $m$ different regimes under state constraints in the form of either an exit problem or reflecting boundary conditions. In the first case the regimes can be described by the diffusion processes

$$dy_p(t) = b^p(y_p(t))\,dt + \sigma^p(y_p(t))\,dw_t, \qquad 1 \le p \le m, \tag{1.1a}$$

where $b^p = (b_1^p, \dots, b_d^p)^T$ and $\sigma^p = (\sigma_{ij}^p)_{i,j=1}^d$, $1 \le p \le m$, are assumed to be sufficiently smooth functions on $\mathbb{R}^d$. In the second case (1.1a) has to be replaced by the reflected diffusion process

$$dy_p(t) = b^p(y_p(t))\,dt + \sigma^p(y_p(t))\,dw_t - \chi_\Gamma(y_p(t))\,\gamma^p(y_p(t))\,d\xi_t, \tag{1.1b}$$

where $\chi_\Gamma$ is the characteristic function of $\Gamma = \partial\Omega$, $\xi$ is an increasing continuous adapted process and $\gamma^p = (\gamma_1^p, \dots, \gamma_d^p)^T$, $1 \le p \le m$, is supposed to be continuous and bounded on $\mathbb{R}^d$ satisfying $\gamma^p(x) \cdot n(x) \ge \delta > 0$ for all $x \in \Gamma$ and all $1 \le p \le m$, where $n(x)$ denotes the unit outward normal in $x \in \Gamma$. Then, given smooth running costs $f^p = f^p(x)$, $x \in \bar\Omega$, $1 \le p \le m$, and nonnegative discount factors $c^p = c^p(x)$, $x \in \bar\Omega$, $1 \le p \le m$, the control objective is to find an optimal switching control policy $V = (\tau_1, p_1; \tau_2, p_2; \dots)$ of random stopping times $\tau_i$ and regimes $p_i$ such that the total cost is minimized, where $p(t) = p_i$, $\tau_i \le t < \tau_{i+1}$, with given $p(0) = p_0$, $E_x$ is the expectation and $T = \tau_\Omega$ is the first exit time of the process for an exit problem while $T = +\infty$ in case of reflecting boundary conditions. The optimal cost function or value function $u(x)$, $x \in \bar\Omega$, is given by the infimum of the total cost over all admissible switching control policies.

We denote by $A^p$, $1 \le p \le m$, the second-order elliptic differential operators

$$A^p = -\sum_{i,j=1}^d a_{ij}^p \frac{\partial^2}{\partial x_i \partial x_j} - \sum_{i=1}^d b_i^p \frac{\partial}{\partial x_i} + c^p,$$

where $a^p = (a_{ij}^p)$, $a^p = \tfrac{1}{2}\sigma^p(\sigma^p)^T$, $1 \le p \le m$. Then, a formal application of the dynamic programming principle shows that the value function $u$ satisfies the following Hamilton-Jacobi-Bellman (HJB-) equation

$$\max_{1 \le p \le m}\,(A^p u(x) - f^p(x)) = 0, \qquad x \in \Omega, \tag{1.5a}$$

with homogeneous Dirichlet boundary conditions (1.5b) for exit problems and oblique derivative boundary conditions (1.5b)' in case of reflecting boundary conditions.

Remark: If the functions $f^p$, $1 \le p \le m$, represent profits, the optimal performance or utility function $u(x)$, $x \in \bar\Omega$, is given by the supremum of the total profit, and the associated HJB-equation takes the form

$$\min_{1 \le p \le m}\,(A^p u(x) - f^p(x)) = 0, \qquad x \in \Omega. \tag{1.5a'}$$
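To make the discrete problem concrete, here is a minimal one-dimensional illustration (not the paper's scheme): the operators $A^p$ are discretized by upwind finite differences of positive type and the discrete HJB residual $\max_p(A^p u - f^p)$ is evaluated; all coefficients are assumptions.

```python
import numpy as np

def assemble_operator(n, h, a, b, c):
    """Upwind finite-difference discretization of A u = -a u'' - b u' + c u
    on a uniform 1D grid with homogeneous Dirichlet boundary conditions.
    The upwind choice keeps the matrix of positive type (positive
    diagonal, non-positive off-diagonals)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * a / h**2 + abs(b) / h + c
        if i > 0:
            A[i, i - 1] = -a / h**2 + (b / h if b < 0 else 0.0)
        if i < n - 1:
            A[i, i + 1] = -a / h**2 - (b / h if b > 0 else 0.0)
    return A

# Illustrative two-regime problem on (0, 1).
n, h = 63, 1.0 / 64
x = np.linspace(h, 1 - h, n)
ops = [assemble_operator(n, h, a=1.0, b=0.5, c=1.0),
       assemble_operator(n, h, a=0.2, b=-1.0, c=1.0)]
rhs = [np.sin(np.pi * x), np.ones(n)]

u = np.zeros(n)  # trial value function
residual = np.max([A @ f_u - f for A, f_u, f in
                   [(A, u, f) for A, f in zip(ops, rhs)]], axis=0)
print("discrete HJB residual (max norm):", np.abs(residual).max())
```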
Krylov [23], [24], [25], while for reflecting boundary conditions C^{2,α} regularity has been proved by P.L. Lions and N.S. Trudinger [29]. As far as the numerical solution of HJB-equations is concerned, we mention the work done by Ph. Cortey-Dumont [11] and P.L. Lions and B. Mercier [28]. In particular, in [28] the authors consider an iterative scheme based on finite element discretizations of an HJB-equation of type (1.5a), (1.5b) where at each iteration step (1.5a) is linearized by locally choosing that p ∈ {1, …, m} for which the maximum is attained. Then, the resulting linear algebraic system can be solved by either direct methods or standard iterative solvers. However, it is well known that with decreasing step sizes h and thus increasing number N_h of unknowns, direct methods suffer from a computational work that grows superlinearly in N_h, while the convergence rates of standard iterative solvers deteriorate according to 1 − O(h²). These drawbacks can be overcome by the application of multi-grid methods, where the computational work is directly proportional to the number of unknowns and the convergence rate is typically independent of the step sizes (cf. e.g. A. Brandt [5] and W. Hackbusch [16]). Multi-grid algorithms for HJB-equations with homogeneous Dirichlet boundary conditions, based on the iterative schemes given in [28] and using analogous finite difference discretizations of positive type with respect to a hierarchy of grids, have been developed by R.H.W. Hoppe in [17]. In this paper, we will present a more direct multi-grid approach which uses nonlinear Gauss-Seidel iteration applied to (1.5a) as a smoother and an adaptive local choice of prolongations and restrictions in the coarse-to-fine and fine-to-coarse transfers of the multi-grid cycles. That multi-grid algorithm will be described in detail in §2. Then, in §3 we will derive a local convergence result using nonlinear multi-grid convergence theory in the spirit of W. Hackbusch [14], [15], [16] and elementary subdifferential calculus as basic tools. The idea of proof is very similar to the one used by R.H.W. Hoppe in [18], [19] and by R.H.W. Hoppe and R. Kornhuber [20] concerning multi-grid algorithms for variational inequalities, complementarity problems and free boundary value problems, respectively. Finally, in §4 some numerical results will be given for HJB-equations with Dirichlet and Neumann boundary conditions, the latter representing the maximal utility of profits in optimal advertising for a stochastic dynamic sales response model of Vidale-Wolfe type.
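Before turning to the multi-grid algorithm itself, it may help to see the linearization scheme of [28], recalled above, in executable form. The following Python sketch is ours, not part of the paper: it assumes the m discrete operators and right-hand sides are given as dense NumPy arrays, and it repeatedly freezes the componentwise maximizer and solves the resulting linear system (a policy-iteration-type loop).

    import numpy as np

    def howard_iteration(A, f, u0, tol=1e-12, max_iter=100):
        # Policy-iteration-type linearization for the discrete HJB equation
        #   max_p (A[p] u - f[p]) = 0   (componentwise).
        # A: list of m (N, N) arrays; f: list of m (N,) arrays; u0: (N,) start.
        u = u0.copy()
        for _ in range(max_iter):
            # residual of every regime at every grid point, shape (m, N)
            res = np.stack([A[p] @ u - f[p] for p in range(len(A))])
            policy = res.argmax(axis=0)          # locally maximizing regime
            # freeze the policy: row i of the linear system comes from regime policy[i]
            A_pol = np.stack([A[p][i] for i, p in enumerate(policy)])
            f_pol = np.array([f[p][i] for i, p in enumerate(policy)])
            u_new = np.linalg.solve(A_pol, f_pol)
            if np.max(np.abs(u_new - u)) < tol:
                return u_new
            u = u_new
        return u

For the min-form HJB-equation (1.5a)' one would take argmin instead of argmax; the dense solve here is exactly the step whose cost motivates the multi-grid approach developed next.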
THE MULTI-GRID ALGORITHM

In this section we will develop a multi-grid algorithm for the numerical solution of HJB-equations of type (1.5a) with either homogeneous Dirichlet boundary conditions (1.5b) or oblique derivative boundary conditions (1.5b)'. For notational convenience we will restrict ourselves to the two-dimensional case, i.e., we assume Ω ⊂ ℝ², and we consider a hierarchy of grids (Ω_k), 0 ≤ k ≤ ℓ, constructed in the following way: For step sizes h_{k+1} = h_k/2, 0 ≤ k ≤ ℓ−1, given some h_0 > 0, we take the uniform grid of width h_k and set Ω_k to be its intersection with Ω, which we refer to as the set of interior grid points (h_0 is assumed to be sufficiently small in order to guarantee Ω_0 ≠ ∅). Further, we define Γ_k as the set of boundary grid points. Then we discretize the elliptic differential operators A^p, 1 ≤ p ≤ m, with respect to Ω̄_k = Ω_k ∪ Γ_k, where the A_k^p's, 1 ≤ p ≤ m, represent the resulting N_k × N_k coefficient matrices. As far as these matrices are concerned, throughout the following we will assume:

(2.4) The matrices A_k^p, 0 ≤ k ≤ ℓ, 1 ≤ p ≤ m, have all positive diagonal elements, non-positive off-diagonal elements and are lower semistrictly diagonally dominant.

We recall that a matrix A = (a_{ij}) is called lower semistrictly diagonally dominant if it is diagonally dominant and satisfies, in addition, the semistrictness condition made precise in A. Berman and R.J. Plemmons [4]. In particular, under hypothesis (2.4) the A_k^p's are nonsingular M-matrices and hence the functions F_k : ℝ^{N_k} → ℝ^{N_k}, defined by means of

$$F_k(u_k) := \max_{1 \le p \le m}\big(A_k^p u_k - g_k^p\big) \tag{2.3}$$

(the maximum acting componentwise), can be easily shown to be continuous surjective M-functions, which in turn implies that for any choice of the right-hand sides g_k^p, 1 ≤ p ≤ m, the HJB-equations F_k(u_k) = 0 are uniquely solvable.

The multi-grid approach to the numerical solution of (2.3) on the highest level k = ℓ is based on the fact that the coarse grid corrections on the lower levels can also be formulated as HJB-type equations: Given u_ℓ^ν, ν ≥ 0, on level ℓ and having computed a smoothed iterate ū_ℓ^ν, the error e_ℓ = u_ℓ − ū_ℓ^ν satisfies an HJB-type defect equation, where d_ℓ^p = g_ℓ^p − A_ℓ^p ū_ℓ^ν, 1 ≤ p ≤ m, denotes the defect. This suggests to correct ū_ℓ^ν according to

$$u_\ell^{\nu,\mathrm{new}} = \bar u_\ell^\nu + p_{\ell-1}^{\ell}\big(u_{\ell-1} - r_\ell^{\ell-1}\,\bar u_\ell^\nu\big),$$

where p_{ℓ-1}^ℓ and r_ℓ^{ℓ-1} are a suitable prolongation and restriction, respectively, and u_{ℓ-1} ∈ ℝ^{N_{ℓ-1}} solves the lower dimensional HJB-equation with recursively defined right-hand sides. Using u_{ℓ-1}^0 = r_ℓ^{ℓ-1} ū_ℓ^ν as a startiterate on level ℓ−1, the above process will be successively repeated until the coarsest grid k = 0 is reached. Finally, after returning to the highest level, several post-smoothing steps will be performed with respect to the startiterate u_ℓ^{ν,new}, thus yielding a new iterate u_ℓ^{ν+1}.

We will now describe in detail both the smoothing process and the choice of restrictions and prolongations. As a smoother on levels 1 ≤ k ≤ ℓ and as an iterative solver for the coarse grid correction on the lowest level k = 0 we choose nonlinear Gauss-Seidel iteration applied to the HJB-equations

$$\max_{1 \le p \le m}\big(A_k^p u_k - g_k^p\big) = 0, \tag{2.9}$$

where g_k^p, 1 ≤ p ≤ m, 0 ≤ k ≤ ℓ, is recursively defined by the coarse grid correction process sketched above. We remind that nonlinear Gauss-Seidel iteration is known as a convergent iterative procedure for nonlinear algebraic systems involving M-functions (cf. e.g. W.C. Rheinboldt [31]). We denote by

$$A_k^p = D_k^p - L_k^p - U_k^p$$

the decomposition of the matrix A_k^p into its diagonal, lower diagonal and upper diagonal part. Then, performing κ nonlinear Gauss-Seidel iterations with respect to a lexicographic ordering of grid points amounts to the successive solution of

$$\big(u_k^{(i)}\big)_j = \min_{1 \le p \le m}\,\frac{1}{(D_k^p)_{jj}}\Big( (g_k^p)_j + (L_k^p u_k^{(i)})_j + (U_k^p u_k^{(i-1)})_j \Big), \qquad 1 \le j \le N_k,\ 1 \le i \le \kappa, \tag{2.10}$$

where the start vector u_k^{(0)} equals u_k^ν for pre-smoothing and u_k^{ν,new} for post-smoothing.

Remark: In case of an HJB-equation of type (1.5a)' the corresponding nonlinear Gauss-Seidel iteration is given by (2.10) with "min" replaced by "max", 1 ≤ p ≤ m.
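A minimal sketch (ours, not the paper's code) of one lexicographic sweep of (2.10): since the diagonal entries are positive, solving the scalar max-equation at each grid point reduces to a pointwise minimum over the m regimes; per the remark above, the maximum is used instead for equations of type (1.5a)'.

    import numpy as np

    def nonlinear_gauss_seidel_sweep(A, g, u, use_min=True):
        # One lexicographic sweep for max_p (A[p] u - g[p]) = 0: at point i the
        # scalar equation is solved exactly, which (positive diagonals) gives
        #   u_i = min_p (g[p][i] - sum_{j != i} A[p][i, j] u_j) / A[p][i, i].
        # use_min=False gives the max-variant for equations of type (1.5a)'.
        m, n = len(A), u.shape[0]
        for i in range(n):
            candidates = []
            for p in range(m):
                row = A[p][i]
                off = row @ u - row[i] * u[i]    # already-updated u_j for j < i
                candidates.append((g[p][i] - off) / row[i])
            u[i] = min(candidates) if use_min else max(candidates)
        return u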
In the coarse-to-fine and fine-to-coarse transfers a natural choice for p_{k-1}^k and r_k^{k-1}, 1 ≤ k ≤ ℓ, are prolongations based on bilinear interpolation and full weighted nine-point restrictions (cf. e.g. W. Hackbusch [16]). However, in the present situation full weighted restriction cannot be used globally, because otherwise false information would be transferred to the coarser grid, causing non-convergence of the multi-grid algorithm. To see this, for x ∈ Ω_k we denote by N_k(x) the set of grid points consisting of x itself and its eight nearest neighbours in Ω̄_k, i.e.,

$$N_k(x) = \{x\} \cup \{x \pm h_k e_i : 1 \le i \le 4\},$$

where e_1 = (1,0), e_2 = (0,1), e_3 = e_1 + e_2, e_4 = e_1 − e_2 (with appropriate modifications for grid points near the boundary Γ_k). Then, for a smoothed iterate ū_k^ν we denote by Ω_k^p(ū_k^ν) the set of all grid points for which the extremum in (2.9) is attained at p ∈ I_m = {1, …, m}, i.e.,

$$\Omega_k^p(\bar u_k^\nu) = \big\{x \in \Omega_k : F_{k,i(x)}(\bar u_k^\nu) = \big(A_k^p \bar u_k^\nu - g_k^p\big)_{i(x)}\big\}, \tag{2.12b}$$

where i stands for the bijective map which to each x ∈ Ω_k assigns its corresponding index i(x) ∈ {1, …, N_k}. Further, we define the corresponding grid point sets on the coarser levels. Now, let us suppose that we have a coarse grid point x ∈ Ω_{k-1} ∩ Ω_k^p(ū_k^ν), p ∈ I_m, such that there exists at least one fine grid point in N_k(x) associated with a different regime q ≠ p; full weighting at x would then mix information from different regimes across the discrete internal free boundary. Thus, denoting by r̊_k^{k-1} and r̃_k^{k-1} pointwise and full weighted restriction, respectively, we advocate the following local choice of the restrictions r_k^{k-1}, 1 ≤ k ≤ ℓ: full weighted restriction is used at those coarse grid points x whose neighbourhood N_k(x) is entirely contained in one of the sets Ω_k^p(ū_k^ν), and pointwise restriction is used otherwise. As far as the prolongations are concerned, the global use of bilinear interpolation may also cause instability in a vicinity of grid points associated with different p ∈ I_m. For that reason we propose an analogous local choice, where p̃_{k-1}^k denotes the prolongation based on bilinear interpolation used away from the discrete internal free boundary.

In summary, one cycle of the algorithm reads:

procedure MGHJB(k, u_k, g_k^1, …, g_k^m):
  if k = 0 then for i := 1 step 1 until κ_0 do u_0 := S_0(u_0; g_0^1, …, g_0^m)
  else begin
    for i := 1 step 1 until κ_1 do u_k := S_k(u_k; g_k^1, …, g_k^m);
    for p := 1 step 1 until m do d_k^p := g_k^p − A_k^p u_k;
    form the coarse grid data g_{k-1}^p from r_k^{k-1} u_k and r_k^{k-1} d_k^p;
    u_{k-1} := r_k^{k-1} u_k;
    for i := 1 step 1 until γ_k do MGHJB(k−1, u_{k-1}, g_{k-1}^1, …, g_{k-1}^m);
    u_k := u_k + p_{k-1}^k (u_{k-1} − r_k^{k-1} u_k);
    for i := 1 step 1 until κ_2 do u_k := S_k(u_k; g_k^1, …, g_k^m)
  end

Note that at each level 1 ≤ k ≤ ℓ within the cycle κ_1 ≥ 0 pre-smoothings and κ_2 ≥ 0 post-smoothings are performed, while the number of nonlinear Gauss-Seidel iterations for the approximate solution of the correction HJB on the lowest level k = 0 is κ_0 ≥ 0. The structure of the cycle is determined by γ_k, 1 ≤ k ≤ ℓ−1. (For γ_k = 1 we have a "V"-cycle and for γ_k = 2 a "W"-cycle.)

A suitable startiterate u_ℓ^0 on the highest level k = ℓ can be obtained by nested iteration, i.e., using suitable prolongations p̃_{k-1}^k, 1 ≤ k ≤ ℓ, an approximation u_0 on the lowest level k = 0 is prolongated onto the level k = 1, yielding u_1^0 = p̃_0^1 u_0, which is then used as a startiterate for the execution of one or several cycles MGHJB(1, u_1, g_1^1, …, g_1^m). This process will be repeated until the highest level k = ℓ is reached. Since nested iteration is a fairly standard procedure in the multi-grid business, for more details we refer to W. Hackbusch [16].
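The cycle just described can also be summarized structurally in Python. This sketch is our schematic rendering only: the grids[k] container bundling the level operators A[p], the nonlinear Gauss-Seidel smoother, and the adaptive transfer operators is an assumed interface, and the FAS-type coarse right-hand sides follow the recursive definition of the g_k^p sketched above.

    def mghjb(k, u, g, grids, kappa1=2, kappa2=2, kappa0=20, gamma=2):
        # Schematic MGHJB cycle. grids[k] is an assumed container with the
        # level-k operators A[p] (p = 0..m-1), the smoother, and the adaptive
        # restriction/prolongation described in the text.
        G = grids[k]
        if k == 0:
            for _ in range(kappa0):              # coarse "solve" by smoothing
                u = G.smooth(u, g)
            return u
        for _ in range(kappa1):                  # pre-smoothing
            u = G.smooth(u, g)
        d = [g[p] - G.A[p] @ u for p in range(G.m)]          # defects
        u_c = G.restrict_adaptive(u)             # pointwise near the free boundary
        g_c = [grids[k - 1].A[p] @ u_c + G.restrict_adaptive(d[p])
               for p in range(G.m)]              # recursive coarse right-hand sides
        v_c = u_c.copy()
        for _ in range(gamma):                   # gamma = 2: "W"-cycle
            v_c = mghjb(k - 1, v_c, g_c, grids, kappa1, kappa2, kappa0, gamma)
        u = u + G.prolong_adaptive(v_c - u_c)    # coarse-grid correction
        for _ in range(kappa2):                  # post-smoothing
            u = G.smooth(u, g)
        return u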
LOCAL CONVERGENCE

In this section we will prove local convergence of the multi-grid algorithm MGHJB(ℓ, u_ℓ, g_ℓ^1, …, g_ℓ^m) by using elementary subdifferential calculus and fundamental ingredients of nonlinear multi-grid convergence theory in the spirit of W. Hackbusch [14], [15].

We start with some basic assumptions on the continuous problem (1.5): The second order elliptic differential operators A^p, 1 ≤ p ≤ m, given by (1.4) are supposed to be uniformly elliptic with smooth coefficients a_{ij}^p, b_i^p, c^p ∈ C²(Ω̄), 1 ≤ i, j ≤ 2. Further, we assume f^p ∈ C²(Ω̄) and, in case of reflecting boundary conditions, γ_i^p ∈ C²(Γ), 1 ≤ i ≤ 2.

As mentioned in the Introduction, under the above hypotheses the value function u(x), x ∈ Ω̄, of the optimally controlled stochastic switching process under consideration can be shown to be the unique solution in C^{2,α}(Ω) ∩ C(Ω̄) resp. C^{2,α}(Ω) ∩ C¹(Ω̄), for some α ∈ (0,1), of the HJB-equation (1.5a) under the boundary conditions (1.5b) resp. (1.5b)' (cf. [8], [25], [29]). If u*(x), x ∈ Ω̄, is the unique solution to (1.5), for p ∈ I_m we set Ω^p(u*) to be the set of points at which the extremum in (1.5a) is attained at p. We assume that the boundary value problem (1.5) is nondegenerate, and we define Γ*, the common boundary of the sets Ω^p(u*), as the continuous internal free boundary. Likewise, in view of the definition of the sets Ω_k^p(·) by (2.12b), we refer to Γ_k*, 0 ≤ k ≤ ℓ, as the discrete internal free boundaries, and we also suppose nondegeneracy of the discrete problems (2.3).

Concerning the internal free boundaries, in the sequel we will assume:

(3.5a) The continuous internal free boundary Γ* is a one-dimensional manifold admitting a Lipschitzian parametrization.

(3.5b) The discrete internal free boundaries Γ_k*, 0 ≤ k ≤ ℓ, are situated in an O(h_k)-neighbourhood of the continuous internal free boundary Γ*, i.e., max over x ∈ Γ_k* of dist(x, Γ*) = O(h_k) as h_k → 0.

Remarks: (i) For a model obstacle problem, discretized by piecewise linear finite elements, under fairly general hypotheses F. Brezzi and L. Caffarelli [7] have established convergence of the discrete internal free boundaries to the continuous one of an order which is approximately the square root of the L^∞ convergence rate for the convergence of the solutions of the discrete problems to the continuous one. Taking into account both the relationship of HJB-equations of type (1.5a) to implicit obstacle problems and the optimal L^∞ convergence rate established by Ph. Cortey-Dumont [11] and F. Conrad and Ph. Cortey-Dumont [10], there is evidence that (3.5b) holds true in the situation considered in this paper. (ii) Note that under hypotheses (3.5a), (3.5b) the grid point sets Ω_k^p(u_k*), 1 ≤ p ≤ m, 0 ≤ k ≤ ℓ, satisfy "property C" in the sense of W. Hackbusch [14].

Moreover, throughout the following we will assume that the norms ‖·‖_p, 0 ≤ p ≤ 3, on ℝ^{N_k} (not necessarily all different) are such that ‖·‖_0 is a monotone vector norm and ‖v_k‖_1 ≤ ‖v_k‖_0 for all v_k ∈ ℝ^{N_k}. For a linear map T_k on ℝ^{N_k} the associated matrix norms will be denoted by ‖T_k‖_{p,q} = sup{‖T_k v_k‖_p : ‖v_k‖_q ≤ 1}. Since the A_k^p's result from finite difference discretizations of second order uniformly elliptic operators, it is convenient to choose the ‖·‖_p, 0 ≤ p ≤ 3, as discrete analogues |·|_s of the Sobolev norms in H^s(Ω), s ∈ ℝ_+.

We now consider the nonlinear functions F_k : ℝ^{N_k} → ℝ^{N_k}, 0 ≤ k ≤ ℓ, defined by (2.3). These mappings are not differentiable in the usual sense, but admit generalized Jacobians ∂F_k(u_k), u_k ∈ ℝ^{N_k}, in the sense of F.H. Clarke [9], where ∂F_k(u_k) is contained in the set of matrices whose i-th row is an element of the generalized gradient ∂F_{k,i}(u_k) of the i-th component of F_k. Since F_k is defined by means of a max-function with a finite number of arguments, the generalized gradients can be easily computed: Denoting by x_k(i) the grid point in Ω_k uniquely associated to i ∈ {1, …, N_k}, we have

$$\partial F_{k,i}(u_k) = \mathrm{co}\big\{(A_k^p)_{i\cdot} : p \text{ attains the extremum in } (2.3) \text{ at } x_k(i)\big\}.$$

In particular, for all ε_k > 0 there exists δ_k > 0 such that the generalized Jacobians at u_k and v_k are built from the same active regimes as at u_k* whenever ‖u_k − u_k*‖_2 ≤ δ_k and ‖v_k − u_k*‖_2 ≤ δ_k; moreover, ∂S_k(u_k*), the generalized Jacobian of the smoothing operator, is given by the Gauss-Seidel iteration operator associated with ∂F_k(u_k*).

Now, we are in a position to establish local convergence of the multi-grid algorithm MGHJB. We will proceed in the spirit of W. Hackbusch's convergence theory in [14], [15], [16], first obtaining a two-grid convergence result by linearization of the two-grid iteration operator at the solution and then verifying a basic approximation and smoothing property for that linearized operator. The first partial result deals with the smoothing process:

Proposition 3.1. Let u_ℓ^ν, ν ≥ 0, be the ν-th multi-grid iterate and let further ū_ℓ^ν be the result of the smoothing process obtained by κ nonlinear Gauss-Seidel iterations (2.10) applied to the HJB-equation (2.3); then the smoothing estimate (3.13) holds. In view of (3.9), (3.11) and the fact that the A_k^p's, 1 ≤ p ≤ m, result from finite difference approximations of uniformly elliptic second order differential operators with smooth coefficients, the estimates (3.16), (3.17) can be expected to hold true (cf. W. Hackbusch [14]).

Next, we provide the following preparatory stability result for (2.3), which is also of interest in its own right:

Lemma 3.2. Let u_k and ũ_k be solutions to the HJB-equations (2.3) with data g_k^p and g̃_k^p, 1 ≤ p ≤ m, 0 ≤ k ≤ ℓ, respectively. Then, under assumptions (2.4) and (3.9), the stability estimate (3.18) holds.

Proof: For all 1 ≤ i ≤ N_k it follows from (2.3) that the difference of the two solutions can be bounded componentwise by the difference of the data, and hence, taking into account that in view of (2.4) and (3.9) both ∂F_k(u_k) and ∂F_k(ũ_k) are nonsingular M-matrices, the corresponding resolvent bounds apply. Then, using the monotonicity of ‖·‖_0 and (3.17) (with u_k* replaced by u_k and ũ_k, respectively), (3.18) follows instantly from the preceding inequalities.

We now compute the two-grid iteration operator, where for simplicity we assume that only pre-smoothing is performed:

Proposition 3.3. Suppose that u_ℓ^ν, ν ≥ 0, are the iterates obtained by the two-grid method with κ pre-smoothing steps. Then the contraction estimate (3.19) holds with a constant C(κ) > 0 and η(ν) → 0 as ‖u_ℓ^ν − u_ℓ*‖_2 → 0.

Proof: First, we define a map F̃_{ℓ-1} : ℝ^{N_{ℓ-1}} → ℝ^{N_{ℓ-1}} associated with the coarse grid correction, where we have used the fact that F_{ℓ-1}(u_{ℓ-1}*) = 0. Taking advantage of (3.13) in Proposition 3.1, we obtain the required bound for the smoothed iterate. Then, observing that by the chain rule ∂F̃_{ℓ-1}(u_{ℓ-1}*) = r_ℓ^{ℓ-1} ∂F_ℓ(u_ℓ*) p_{ℓ-1}^ℓ, the assertion follows.

As far as the smoothing property (3.27) is concerned, we remark that according to (3.9), (3.12) ∂S_k(u_k*) is the Gauss-Seidel iteration operator associated with ∂F_k(u_k*). Recalling the fact that the A_k^p's are finite difference approximations of uniformly elliptic differential operators with coefficients in class C²(Ω̄), there is evidence that (3.27) holds true, in particular if we use red-black ordering of grid points instead of the lexicographic one (cf. [14, Chapter 4]). Using Lemma 3.4 we immediately obtain:

Proposition 3.5. Under the hypotheses of Proposition 3.3 and Lemma 3.4 the two-grid convergence estimate (3.32) holds.

Proof: The estimate (3.32) is a direct consequence of (3.19), (3.21) and (3.28).

Remark: The preceding results can be easily modified in order to cover the case where also post-smoothing is performed, i.e., κ_2 > 0 in MGHJB(ℓ, u_ℓ, g_ℓ^1, …, g_ℓ^m) (cf. e.g. W. Hackbusch [15]).

For more than two grids the multi-grid iteration operator can be recursively defined by means of the two-grid iteration operators on the levels 1 ≤ k ≤ ℓ. Then, the following local convergence result can be established:

Theorem 3.6. Let u_ℓ^ν, ν ≥ 0, be the iterates obtained by the multi-grid algorithm MGHJB(ℓ, u_ℓ, g_ℓ^1, …, g_ℓ^m) for a hierarchy of ℓ+1 grids k = 0, 1, …, ℓ assuming a "W"-cycle structure (i.e. γ = 2). Then, under the same hypotheses as in Proposition 3.5, there exists κ_min ≥ 1 such that for all κ_min ≤ κ ≤ κ_max(h_ℓ) the estimate (3.32) holds true. In particular, if the startiterate u_ℓ^0 is chosen in an appropriate neighbourhood of the solution u_ℓ*, we have ‖u_ℓ^ν − u_ℓ*‖_2 → 0 as ν → ∞.

Proof: Using Proposition 3.5 and the recursive structure of the W-cycle, the assertion follows as in the linear multi-grid theory (cf. [15], [16]).
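Read together, the results of this section combine in the usual two-factor manner. The display below is our paraphrase of how the approximation and smoothing properties bound the linearized two-grid operator in theories of this type; it is not a formula quoted verbatim from the paper.

$$
\big\|M_\ell^{\mathrm{TG}}(\kappa)\big\| \;\le\;
\underbrace{\big\|\,\partial F_\ell(u_\ell^*)^{-1} - p_{\ell-1}^{\ell}\,\partial F_{\ell-1}(u_{\ell-1}^*)^{-1}\, r_\ell^{\ell-1}\big\|}_{\text{approximation property}}
\cdot
\underbrace{\big\|\,\partial F_\ell(u_\ell^*)\,\partial S_\ell(u_\ell^*)^{\kappa}\big\|}_{\text{smoothing property}}
\;\le\; C\,\eta(\kappa), \qquad \eta(\kappa) \to 0 \ \ (\kappa \to \infty).
$$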
NUMERICAL RESULTS

We have tested the efficiency of the multi-grid algorithm MGHJB by applying it to an HJB-equation both with Dirichlet and with Neumann boundary conditions.

The first, rather academic example is an HJB-equation for two uniformly elliptic operators A¹, A² under homogeneous Dirichlet data which has already served as a numerical test example in [17] and [28]. The operators A¹, A² and the right-hand sides f¹, f² are chosen so that u is the exact solution of the Dirichlet problem (1.5a), (1.5b) on Ω = (0,1)×(0,1). With respect to a hierarchy of equidistant grids Ω_k with step sizes h_k = 2^{-k} h_0, 0 ≤ k ≤ ℓ (h_0 = 0.5), we have discretized the partial derivatives ∂²/∂x², ∂²/∂y² and ∂²/∂x∂y by h_k^{-2} D_{k,x}^+ D_{k,x}^-, h_k^{-2} D_{k,y}^+ D_{k,y}^- and h_k^{-2} [D_{k,x}^+ D_{k,y}^- + D_{k,x}^- D_{k,y}^+]/2.0, where D^+ and D^- denote the forward resp. backward difference in x resp. y on the grid Ω_k. It is easy to check that these discretizations yield difference schemes of positive type (cf. [17], [28]). Choosing the restricted exact solution as a startiterate on the coarsest grid Ω_0, we have determined an initial iterate on the finest grid Ω_ℓ by nested iteration. We have then performed several multi-grid cycles until either machine accuracy has been reached or the total number of work units has exceeded 100. Here, a work unit corresponds to a symmetric nonlinear Gauss-Seidel iteration on the finest grid. Denoting by ‖e_ℓ^ν‖_{0,ℓ}, ν ≥ 1, the discrete L²-norm of the difference e_ℓ^ν = u_ℓ^ν − u_ℓ^{ν-1} of two subsequent iterates, an asymptotic convergence rate, relating the gain in accuracy to the amount of work for implementation, has been computed, where N_wu is the number of work units for performing one multi-grid cycle and ν* denotes the last iterate before either ‖e_ℓ^ν‖_{0,ℓ} < eps or (ν−1)·N_wu > 100. Note that all computations reported in this section have been performed on a CRAY X-MP/24, where eps = 10^{-14}.

Figures 1 and 2 represent the asymptotic convergence rates for W-cycles with a different number of pre-smoothings and no post-smoothing (Fig. 1) and for W-cycles with a different number of pre- and post-smoothings (Fig. 2). For comparison, in both figures we have also plotted the convergence rates of the corresponding single-grid nonlinear SOR-iteration with suboptimal choice of the relaxation parameter. The plots clearly demonstrate the expected 1 − O(h_ℓ²) behavior of the single-grid convergence rates, while the multi-grid convergence rates seem to approach a constant value for an increasing number of grids in the hierarchy, thus confirming the theoretical results derived in the preceding section. Note that the multi-grid convergence rates are considerably higher than for standard linear second order elliptic boundary-value problems (cf. e.g. [16]), which is an effect commonly observed for free boundary problems. Indeed, the convergence rates are almost in the same range as those obtained for multi-grid algorithms applied to other types of free boundary problems (cf. e.g. [6], [19], [20] and [30]).

As a second, more realistic example we have dealt with the computation of the maximum utility of profits for a stochastic dynamic sales response model which has been analytically investigated by C.S. Tapiero, J. Eliashberg and Y. Wind in [32] and which represents the stochastic version of the classical deterministic Vidale-Wolfe model [33]. We consider a firm selling two products z = (z_1, z_2) with market potentials M = (M_1, M_2) at prices p = (p_1, p_2) while spending a = (a_1, a_2) for advertising. We denote by y = (y_1, y_2) the market shares y_i = z_i / M_i, 1 ≤ i ≤ 2, and by m = (m_1, m_2) and q = (q_1, q_2) the forgetting and advertising effectiveness effects, respectively. Modelling sales uncertainty by a diffusion term σ(y,a) = (σ_{ij}(y,a)), 1 ≤ i, j ≤ 2, and including reflection constraints (i.e. continuing the process when y leaves the admissible region Ω = (0,1)×(0,1)), the market shares evolve according to the following stochastic diffusion process with reflecting boundaries:

$$dy(t) = b(y,a,m,q)\,dt + \sigma(y,a)\,dw(t) - \chi_\Gamma(y)\,n(y)\,d\xi(t). \tag{4.4a}$$

In (4.4a) the drift term b(y,a,m,q) is given by

$$b_i(y,a,m,q) = q_i\,a_i\,(1 - y_i) - m_i\,y_i, \qquad 1 \le i \le 2, \tag{4.4b}$$

representing the deterministic Vidale-Wolfe model, w = w(t) stands for a normalized two-dimensional Wiener process, χ_Γ is the characteristic function of Γ = ∂Ω, n is the outward normal at Γ, and ξ is a continuous adapted process (for details see [32]). The admissible control set V consists of sequences (Θ, a) = (Θ_n, a_n), n ∈ ℕ, of random times Θ_n and advertising policies a_n = (a_{n,1}, a_{n,2}) chosen from a set A of four admissible advertising policies corresponding to "no advertising" (a¹), "advertising for product 1" (a²), "advertising for product 2" (a³) and "advertising for both products" (a⁴). Then, denoting by

$$\pi(a) = \sum_{i=1}^{2}\big(p_i M_i y_i - a_i\big)$$

the instantaneous profits and choosing a utility function U = U(π(a)), discounted at a constant discount factor c > 0, the optimal control problem is the following infinite horizon utility maximization problem:

$$u(x) = \sup_{(\Theta,a)\in V} E\left[\int_0^\infty U(\pi(a))\,\exp(-ct)\,dt\right]. \tag{4.6}$$

As utility function U(π(a)) and diffusion σ(y,a) we have taken U(π(a)) = 2π(a) and σ(y,a) = diag(σ_1(y,a), σ_2(y,a)) with variances σ_i(y,a) = (m_i y_i + q_i a_i (1 − y_i))^{1/2}, 1 ≤ i ≤ 2.
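The model data of this second example are simple enough to encode directly. The Python sketch below is ours, with placeholder parameter values (the values actually used for Figures 3 and 4 are not reproduced here).

    import numpy as np

    # placeholder parameters (NOT the values used for Figures 3 and 4)
    M = np.array([1.0, 1.0])      # market potentials
    P = np.array([1.0, 1.0])      # prices
    m = np.array([0.2, 0.2])      # forgetting (sales-decay) rates
    q = np.array([1.0, 1.0])      # advertising effectiveness
    s = 1.0                       # assumed advertising spend level per product

    # the four admissible policies: none, product 1, product 2, both
    policies = [np.array([0.0, 0.0]), np.array([s, 0.0]),
                np.array([0.0, s]), np.array([s, s])]

    def drift(y, a):
        # Vidale-Wolfe dynamics (4.4b): b_i = q_i a_i (1 - y_i) - m_i y_i
        return q * a * (1.0 - y) - m * y

    def sigma(y, a):
        # diagonal diffusion, sigma_i = (m_i y_i + q_i a_i (1 - y_i))^(1/2)
        return np.diag(np.sqrt(m * y + q * a * (1.0 - y)))

    def profit(y, a):
        # instantaneous profits pi(a) = sum_i (p_i M_i y_i - a_i)
        return float(np.sum(P * M * y - a))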
The corresponding HJB-equation is then of type (1.5a)', (1.5b)', with the operators A^p, 1 ≤ p ≤ 4, associated with the four advertising policies a^p. We have discretized A^p, 1 ≤ p ≤ 4, with respect to the same hierarchy of grids Ω_k, 0 ≤ k ≤ ℓ, as in the first example. Using standard central difference quotients h_k^{-2} D_{k,x_i}^+ D_{k,x_i}^- for the second order derivatives ∂²/∂x_i² and the forward resp. backward difference quotient h_k^{-1} D_{k,x_i}^+ resp. h_k^{-1} D_{k,x_i}^- for the first order derivatives ∂/∂x_i (according to the sign of b_i(x, a^p, m, q) in x ∈ Ω_k), we get a discrete HJB-equation with coefficient matrices A_k^p being lower semistrictly diagonally dominant M-matrices (a one-dimensional illustration of this upwinding is given at the end of this section).

Providing a startiterate on the finest grid Ω_ℓ by nested iteration, we have computed the optimal utility of profits by successive application of MGHJB with W-cycle structure. Figures 3 and 4 display the sets Ω_ℓ^p(u_ℓ^ν), 1 ≤ p ≤ 4, for ℓ = 5 (h_ℓ = 1/64) for given market potentials M_i, prices p_i, sales-decay rates m_i and advertising effectiveness rates q_i. Points x ∈ Ω_ℓ belonging to Ω_ℓ^1(u_ℓ^ν), Ω_ℓ^2(u_ℓ^ν), Ω_ℓ^3(u_ℓ^ν) and Ω_ℓ^4(u_ℓ^ν) are marked by "−", "o", "+" and "|", respectively. The probabilistic interpretation is that for any initial state x ∈ Ω_ℓ the markings tell us both the type and the value of control which has to be performed asymptotically (i.e. for t → ∞). The figures display a certain risk-averse behaviour of the advertiser, with essentially decreasing advertising expenses for increasing market shares.
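As referenced above, the sign-dependent differencing of the first-order terms is what guarantees the M-matrix structure (2.4). The following one-dimensional Python sketch (ours, for the model operator A u = −a₂u″ − bu′ + cu under one fixed sign convention) assembles a single stencil row.

    import numpy as np

    def hjb_row_1d(i, n, h, a2, b, c):
        # Row i for a 1-D model operator A u = -a2 u'' - b u' + c u (a2 > 0,
        # c >= 0): central differencing for u'', forward/backward differencing
        # for u' according to the sign of b, keeping the diagonal positive and
        # the off-diagonals non-positive as required by (2.4).
        row = np.zeros(n)
        row[i] = 2.0 * a2 / h**2 + c
        if i > 0:
            row[i - 1] -= a2 / h**2
        if i + 1 < n:
            row[i + 1] -= a2 / h**2
        if b >= 0.0:                   # -b u' ~ -b (u[i+1] - u[i]) / h
            row[i] += b / h
            if i + 1 < n:
                row[i + 1] -= b / h
        else:                          # -b u' ~ -b (u[i] - u[i-1]) / h
            row[i] -= b / h            # here -b/h > 0, so the diagonal grows
            if i > 0:
                row[i - 1] += b / h    # b < 0, so this entry stays non-positive
        return row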
REFERENCES

[1] A. Bensoussan, Stochastic control by functional analysis methods, North-Holland, Amsterdam, 1982.
Serum Potassium and Mortality Risk in Hemodialysis Patients: A Cohort Study

Rationale & Objective: Both hypo- and hyperkalemia can cause fatal cardiac arrhythmias. Although predialysis serum potassium level is a known modifiable risk factor for death in patients receiving hemodialysis, especially for hypokalemia, this risk may be underestimated. Therefore, we investigated the relationship between predialysis serum potassium level and death in incident hemodialysis patients and whether there is an optimum level. Study Design: Prospective multicenter cohort study. Setting & Participants: 1,117 incident hemodialysis patients (aged >18 years) from the Netherlands Cooperative Study on the Adequacy of Dialysis-2 study were included and followed from their first hemodialysis treatment until death, transplantation, switch to peritoneal dialysis, or a maximum of 10 years. Exposure: Predialysis serum potassium levels were obtained every 6 months and divided into 6 categories: ≤4.0 mmol/L, >4.0 mmol/L to ≤4.5 mmol/L, >4.5 mmol/L to ≤5.0 mmol/L, >5.0 mmol/L to ≤5.5 mmol/L (reference), >5.5 mmol/L to ≤6.0 mmol/L, and >6.0 mmol/L. Outcomes: 6-month all-cause mortality. Analytical Approach: Cox proportional hazards and restricted cubic spline analyses with time-dependent predialysis serum potassium levels were used to calculate the adjusted HRs for death. Results: At baseline, the mean age of the patients was 63 years (standard deviation, 14 years), 58% were men, 26% smoked, 24% had diabetes, 32% had cardiovascular disease, the mean serum potassium level was 5.0 mmol/L (standard deviation, 0.8 mmol/L), 7% had a low subjective global assessment score, and the median residual kidney function was 3.5 mL/min/1.73 m² (IQR, 1.4-4.8 mL/min/1.73 m²). During the 10-year follow-up, 555 (50%) deaths were observed. Multivariable adjusted HRs for death according to the 6 potassium categories were as follows: 1.42 (95% CI, 1.01-1.99), 1.09 (95% CI, 0.82-1.45), 1.21 (95% CI, 0.94-1.56), 1 (reference), 0.95 (95% CI, 0.71-1.28), and 1.32 (95% CI, 0.97-1.81). Limitations: Shorter intervals between potassium measurements would have allowed for more precise mortality risk estimations. Conclusions: We found a U-shaped relationship between serum potassium level and death in incident hemodialysis patients. A low predialysis serum potassium level was associated with a 1.4-fold stronger risk of death than the optimal level of approximately 5.1 mmol/L. These results may imply the cautious use of potassium-lowering therapy and a potassium-restricted diet in patients receiving hemodialysis.
Accumulating evidence points to the adverse impact of dyskalemia on life expectancy in patients receiving hemodialysis. 1 Both hypo- and hyperkalemia can cause potentially fatal cardiac arrhythmias and sudden death. 2 Potassium homeostasis is mainly regulated by the kidneys, which are responsible for excreting 90% of the dietary potassium intake. Patients receiving hemodialysis rely mainly on potassium removal during each dialysis session. 2 Hyperkalemia, defined as a serum potassium level of ≥5.5 mmol/L, is a common electrolyte disorder occurring in 12%-20% of patients receiving hemodialysis. [3][4][5][6] Nonadherence to dietary potassium restrictions and metabolic acidosis increase the risk of hyperkalemia. Relatively little attention is paid to hypokalemia and a low potassium level, defined as serum potassium levels of <3.5 mmol/L and <4.0 mmol/L, respectively, occurring in 2% and 13% of patients receiving hemodialysis, respectively. 3,5,6 Malnourishment, metabolic alkalosis, potassium binders, and low-potassium dialysate are risk factors for hypokalemia. 1,5 In addition, patients receiving hemodialysis have low intracellular potassium concentrations, despite their tendency to develop hyperkalemia. Total body potassium can be up to 10% lower in patients receiving hemodialysis than in controls, and this deficit is associated with an increased risk of hypertension, cardiovascular disease, and death. 7,8

The optimal predialysis serum potassium level is unknown. There are no randomized controlled trials that have examined the target predialysis potassium level with regard to long-term outcomes. Therefore, we have to rely on prospective cohort and registry studies. In the general population, serum potassium levels between 3.5 and 5.0 mmol/L are considered to be within the normal range. In patients with chronic kidney disease, the optimal range is between 4.0 and 4.5 mmol/L. 9 An important limitation of previous studies investigating the relationship between serum potassium levels and death on hemodialysis is the inclusion of mainly prevalent instead of incident patients, which may have resulted in survivor bias. 10 Furthermore, until now, cohort studies have estimated the risk of low serum potassium levels in comparison with relatively low levels (4.0-4.5 mmol/L) as a reference category and, therefore, most likely underestimated the mortality risk (hazard ratios [HRs], 1.03-1.14) related to hypokalemia. 4,11,12 Low predialysis serum potassium level is a potentially modifiable risk factor; however, its adverse effect on death in patients receiving hemodialysis may be underestimated. Therefore, we studied the relationship between serum potassium levels and death in a prospective cohort of incident hemodialysis patients and investigated whether there is an optimum level to pursue.
Results from this study may inform future guidelines for patients receiving hemodialysis.

Study Design and Population

The Netherlands Cooperative Study on the Adequacy of Dialysis-2 is a prospective multicenter cohort study of patients, aged >18 years, with incident end-stage renal disease, starting with their first dialysis treatment, as previously described in detail. 13 Briefly, enrollment occurred between 1997 and 2007 at 38 dialysis centers throughout the Netherlands. The maximum follow-up was 10 years after the start of hemodialysis, with the latest follow-up date on January 1, 2018. The institutional review board of the Academic Medical Hospital, Amsterdam, the Netherlands approved the study (approval number, MEC95/226a), and the institutional review boards of all participating hospitals confirmed this by an additional local approval. All patients gave written informed consent. Follow-up visits were scheduled at 3 months after the start of dialysis therapy, at 6 months, and subsequently at intervals of 6 months. All centers predominantly used a dialysate potassium concentration of 2 mmol/L and, if indicated, 3 mmol/L. Baseline was defined as 3 months after the start of hemodialysis treatment, when the patients' fluid and metabolic conditions had stabilized. Dates of mortality were immediately reported during follow-up and ascertained by a nephrologist.

The cohort comprised 1,117 patients receiving hemodialysis, without previous kidney replacement therapy, 3 months after starting dialysis. Survival time was defined as the number of days between 3 months after the start of the hemodialysis treatment (baseline) and the date of death, the date of censoring due to loss to follow-up, kidney transplantation, transfer to a nonparticipating dialysis center, a switch to peritoneal dialysis treatment, or the end of the follow-up.

Data Collection

Demographic and clinical data such as age, sex, ethnicity, primary kidney disease, current smoking, medication, a history of diabetes, cardiovascular disease, malignancy or chronic lung disease, Subjective Global Assessment (SGA), blood pressure, blood samples, and 24-hour urine samples were collected at the start of dialysis treatment and at all visits until the end of follow-up. Primary kidney disease was classified according to the codes of the European Renal Association-European Dialysis and Transplant Association. 13 We grouped patients into 4 classes of primary kidney disease: diabetic nephropathy, glomerulonephritis, renal vascular disease, and other kidney diseases. A history of diabetes was defined based on diabetes mellitus registered as a comorbid condition or diabetic nephropathy as primary kidney disease. Current smoking was defined as current cigarette smoking, including in those who had quit smoking in the past 3 months. Cardiovascular disease was defined as any history of a cerebral vascular accident, a myocardial infarction, or peripheral vascular disease. For blood pressure, the mean systolic and diastolic blood pressure values prior to dialysis over the previous 2 weeks were calculated. The nutritional state was measured using a 1-7 score on SGA, with scores of 6-7 indicating a normal nutritional state, scores of 4-5 indicating moderate protein-energy wasting, and scores of 1-3 indicating severe protein-energy wasting. 14 Serum potassium, albumin, creatinine, and urea levels were measured in predialysis samples drawn after the long interdialytic interval. In addition, urea and creatinine levels were also measured in the urine.
Residual kidney function was estimated using combined urea and creatinine clearance and corrected for body surface area (mL/min/1.73 m²).

Statistical Analysis

Variables are presented as means ± standard deviations (SDs), medians (interquartile ranges), or numbers (proportions) where appropriate and according to 6 predefined baseline predialysis serum potassium level categories: ≤4.0 mmol/L, >4.0 to ≤4.5 mmol/L, >4.5 to ≤5.0 mmol/L, >5.0 to ≤5.5 mmol/L, >5.5 to ≤6.0 mmol/L, and >6.0 mmol/L.

PLAIN-LANGUAGE SUMMARY: Both high and low serum potassium levels can cause fatal heart arrhythmias. In patients receiving hemodialysis, the mortality risk of low serum potassium levels, in particular, may have been underestimated. Therefore, we investigated the relationship between predialysis serum potassium level and 6-month mortality in 1,117 patients receiving hemodialysis during 10 years of follow-up. In our analysis, we adjusted for the potential confounders of age, sex, diabetes, cardiovascular disease, smoking, residual kidney function, and nutritional state. We found that low (≤4.0 mmol/L) and high (>6.0 mmol/L) predialysis serum potassium levels are associated with a 1.4- and 1.3-fold, respectively, stronger risk of death than the optimal serum potassium level of approximately 5.1 mmol/L. These results may imply the cautious use of potassium-lowering therapy and a potassium-restricted diet in patients receiving hemodialysis.

We assessed the relationship between predialysis serum potassium level and all-cause mortality during 10 years of follow-up using several methods. In all analyses, survival was measured from 3 months (baseline) after the start of hemodialysis. The survival probabilities for the 6 predialysis serum potassium categories at baseline were visualized using Kaplan-Meier curves for the first 10 years of follow-up. For all following analyses, the predialysis serum potassium level was included as a time-dependent variable and updated at a 6-month interval after the start of the first hemodialysis treatment. First, we used life tables to calculate absolute mortality rates during the follow-up within each of the 6 time-dependent predialysis serum potassium level categories. Second, we used a time-dependent Cox proportional hazards model to calculate crude and multivariable adjusted HRs for 6-month all-cause mortality during 10 years of follow-up. As normal serum potassium levels vary from 3.5 to 5.5 mmol/L and no optimum level has been definitively recommended within that range, the predialysis serum potassium category (>5.0 to ≤5.5 mmol/L) with the lowest mortality rate in our cubic spline analysis was considered the reference category. 15 Analyses were adjusted for potential confounders measured at baseline: age, sex, current smoking, history of diabetes, history of cardiovascular disease, residual kidney function, and SGA score (full model). We did not control for time-varying confounding because later values of time-dependent confounders may be influenced by earlier levels of predialysis serum potassium; adjustments for such time-dependent variables could therefore have introduced bias. 16 Third, the continuous relationship between time-dependent predialysis serum potassium level and mortality was explored by modeling a 4-knot restricted cubic spline with 95% confidence intervals (CIs), adjusted for the previously mentioned confounders. The knots were chosen at the 5th, 35th, 65th, and 95th percentiles of the predialysis serum potassium level distribution. 17 We performed 4 sensitivity analyses.
First, we repeated our full time-dependent Cox proportional hazards model with additional adjustments for serum phosphate level, serum bicarbonate level, serum albumin level, and normalized protein catabolic rate to assess any residual confounding by nutritional state. Second, we performed time-dependent Cox proportional hazards analyses considering as a secondary outcome cardiac death due to dyskalemia, cardiac arrest, myocardial infarction, and sudden death. We did not include cardiac death in our primary analysis, as specific causes of death are more likely to be misclassified. Third, we repeated our analyses without censoring for patients switching to peritoneal dialysis. In the primary analyses, we censored patients from the moment they switched from hemodialysis to peritoneal dialysis, since we aimed to investigate only the effect of the predialysis serum potassium level on the mortality risk in patients receiving hemodialysis. However, this might have resulted in informative censoring, since patients switching to peritoneal dialysis are often in better or worse clinical condition than the general population receiving hemodialysis. Fourth, we repeated our Cox proportional hazards model using the baseline predialysis serum potassium level category as a fixed variable to calculate the HRs for 10-year all-cause mortality and cardiac death. This enabled us to evaluate whether the relationship between predialysis serum potassium level and all-cause mortality is indeed a short-term effect that can be better estimated using a time-dependent analysis.

We assumed missing values to be missing at random. Missing data were handled using 2 different strategies. For missing predialysis serum potassium levels at baseline (n = 9; 1%), we carried the next observed predialysis serum potassium level backward, and for missing values during the 10-year follow-up (n = 869; 13%), we carried the last observed predialysis serum potassium level forward. For the following missing baseline data (smoking [n = 8; 1%], history of diabetes [n = 14; 1%], history of cardiovascular disease [n = 14; 1%], residual kidney function [n = 259; 23%], and SGA score [n = 82; 7%]), we used multiple imputation to avoid bias and maintain power, using 10 imputations and including all relevant baseline variables and the outcome in the model. 18 In the proportional hazards regression models, the proportionality assumption for each covariate was checked by adding a product term between that covariate and the logarithm of follow-up time. All analyses were performed using SPSS 23.0 (International Business Machines Corporation) and R version 3.5.1 (R Core Team). (A schematic sketch of this time-dependent model follows the baseline characteristics below.)

RESULTS

Baseline Characteristics

Of all Netherlands Cooperative Study on the Adequacy of Dialysis-2 study participants, 1,117 patients receiving hemodialysis survived the first 3 months after starting dialysis and were therefore included in the analysis. At baseline, the mean age of the study cohort was 63 years (SD, 14 years), 58% were men, 26% were current smokers, 24% had a history of diabetes, 32% had a history of cardiovascular disease, 7% had a low SGA score, and the median residual kidney function was 3.5 mL/min/1.73 m² (IQR, 1.4-4.8 mL/min/1.73 m²). Table 1 presents the baseline characteristics for all patients and according to the 6 predialysis serum potassium level categories.
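As referenced above, the time-dependent Cox design can be sketched compactly in Python with the lifelines package (the authors used SPSS and R; the long-format layout, file name, and column names below are our illustrative assumptions). Each patient contributes one row per 6-month interval, with the potassium category updated at each visit and dummy-coded against the >5.0 to ≤5.5 mmol/L reference.

    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # long format: one row per patient per 6-month interval
    # (file name and column names are illustrative assumptions)
    df = pd.read_csv("potassium_long.csv")
    # columns: id, start, stop, event, k_cat, age, sex, smoking, diabetes,
    #          cvd, rkf, sga

    # dummy-code the six categories against the >5.0 to <=5.5 mmol/L reference
    df = pd.get_dummies(df, columns=["k_cat"])
    df = df.drop(columns=["k_cat_5.0-5.5"])      # drop the reference level

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event",
            start_col="start", stop_col="stop")
    ctv.print_summary()     # exp(coef) gives the HR for each category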
Compared with the reference category (>5.0 to ≤5.5 mmol/L), patients in the lowest potassium level category were older, smoked less often, and had lower SGA scores, whereas those in the highest predialysis serum potassium level category were younger and had lower residual kidney function. Phosphate levels increased with higher potassium level categories, whereas bicarbonate levels decreased. Kaplan-Meier curves showing the survival probability for each baseline serum potassium level category are presented in Figure S1.

Table 2 shows the mortality rates (95% CIs) per 100 patient-years during 10 years of follow-up according to the 6 time-dependent potassium level categories. The absolute risk of death was clearly increased in the lowest predialysis serum potassium level category of ≤4.0 mmol/L compared with >5.0 to ≤5.5 mmol/L, which corresponded to an excess rate of approximately 9 deaths/100 patient-years. The highest predialysis serum potassium level category of >6.0 mmol/L corresponded to an excess rate of approximately 2 deaths/100 patient-years compared with the predialysis serum potassium level category of >5.0 to ≤5.5 mmol/L.

After checking the proportional hazards assumption, we found no sign of violation. Table 3 shows the crude and multivariable adjusted relationships between the 6 time-dependent predialysis serum potassium level categories and 6-month mortality during 10 years of follow-up. Additional adjustments for residual kidney function or SGA did not materially attenuate the relationship between predialysis serum potassium level and mortality. The HR for time-dependent predialysis serum potassium ≤4.0 mmol/L was 1.42 (95% CI, 1.01-1.99), implying that hypokalemia is a 1.4-fold stronger risk factor compared with predialysis serum potassium levels of >5.0 and ≤5.5 mmol/L, whereas predialysis serum potassium levels of >6.0 mmol/L resulted in an HR of 1.32 (95% CI, 0.97-1.81).

Figure 1 shows the U-shaped relationship between time-dependent predialysis serum potassium level and 6-month mortality during 10 years of follow-up, expressed using the multivariable adjusted HR, with a nadir at approximately 5.1 mmol/L. HRs for death increased substantially below a predialysis serum potassium level of 4.5 mmol/L and above 5.7 mmol/L, with the effects of lower predialysis serum potassium levels being more pronounced. For example, patients receiving hemodialysis with a predialysis serum potassium level of 4.0 mmol/L compared with the optimum level of 5.1 mmol/L had an almost 1.4-fold increased risk of death.

Sensitivity Analyses

We performed 4 sensitivity analyses. First, additional adjustments for the nutritional markers phosphate, bicarbonate, albumin, and normalized protein catabolic rate did not materially alter the results of our main analysis (Table S1). Second, considering cardiac death as an outcome attenuated the results due to the loss of power but showed a similar direction of the relationship (Table S2). Third, repeating our analyses without censoring for patients who switched to peritoneal dialysis also did not substantially change the effect between predialysis serum potassium level and death (Table S3).
Finally, considering predialysis serum potassium level as a fixed category at baseline did attenuate the strength of the relationship between predialysis serum potassium level category and 10-year all-cause mortality, as expected, due to the dilution of the effect of potassium over time (Tables S4 and S5).

DISCUSSION

This prospective study among exclusively incident hemodialysis patients showed a U-shaped relationship between predialysis serum potassium level and death during 10 years of follow-up, with an optimum level of approximately 5.1 mmol/L. Compared with this optimum level, low (≤4 mmol/L) and high (>6 mmol/L) predialysis serum potassium concentrations were 1.4- and 1.3-fold stronger risk factors for death after multivariable adjustment, respectively.

Our results are in line with the study by Torlén et al, 3 including >111,000 prevalent hemodialysis patients, which showed adjusted HRs for all-cause mortality for time-dependent serum potassium levels of <3.5 mmol/L and ≥5.5 mmol/L of 2.0 (95% CI, 1.8-2.1) and 1.2 (95% CI, 1.2-1.3) compared with the reference category of 4.0-4.5 mmol/L, respectively. A limitation of this previous study is the inclusion of prevalent patients, which may have resulted in selection bias. In addition, estimates were not adjusted for the potential confounders residual kidney function and nutritional status. In contrast, another study including 55,000 prevalent hemodialysis patients showed low adjusted HRs for death of 1.0 (95% CI, 1.0-1.1) and 1.1 (95% CI, 1.0-1.2) for serum potassium <4.0 mmol/L and >6.0 mmol/L, respectively, compared with the reference category of 4.0 to 5.0 mmol/L. 12 Besides selection bias and their relatively low reference category, another possible explanation for their low HRs could be the use of serum potassium as a fixed variable in their model over a median follow-up of 16.5 months. When considering predialysis serum potassium level as a fixed value at baseline, we had a similar finding, namely, a weakening of the observed effect over time, most likely due to dilution.

The relatively high optimum predialysis serum potassium level of 5.1 mmol/L that we found is consistent with 2 previous studies. Kovesdy et al 19 reported that predialysis serum potassium levels between 4.6 and 5.3 mmol/L were associated with the highest 3-year survival rate in >81,000 prevalent hemodialysis patients. Pun et al 20 found that a predialysis serum potassium level of 5.1 mmol/L resulted in the lowest risk of sudden cardiac arrest. Limitations of these 2 previous studies were again the inclusion of prevalent patients and, in the study by Pun et al, 20 failure to adjust for potential confounders such as smoking, residual kidney function, and nutritional status.

Serum potassium concentration rapidly decreases by approximately 1 mEq/L in the first hour of hemodialysis treatment, when the blood-to-dialysate gradient is greatest, and an additional lowering of 1 mEq/L occurs over the next 2 hours. A rapid rebound of serum potassium occurs because of efflux from the intracellular compartment after completion of the dialysis session. 7 Patients with a relatively low predialysis serum potassium level may experience more severe or prolonged hypokalemia after the session, which may explain the increased risk of death that we found. 21 Another explanation could be that low predialysis serum potassium level is a proxy for malnourishment, which is a strong risk factor for death.
22,23 However, adjusting for SGA score, the normalized protein catabolic rate, serum phosphate, bicarbonate, and albumin, as indicators of malnutrition, did not materially attenuate the relationship between predialysis serum potassium level and death. Finally, low total body potassium is also a risk factor for death. 8 Hemodialysis treatment may reduce total body potassium by depleting intracellular potassium stores. Although serum potassium level does not necessarily reflect intracellular potassium, a low predialysis concentration may serve as a proxy of a relatively low total body potassium level.

There are several strengths to our study. First, to our knowledge, this is the first study that included only incident hemodialysis patients. All previous studies investigating the relationship between predialysis serum potassium level and death mainly included prevalent hemodialysis patients, thus being susceptible to survivor bias. 3,4,11,12,19,23,24 This is a form of selection bias that occurs when the risk of an outcome is estimated from data collected at a given time point among survivors rather than from data gathered in a group of incident cases. 10 As with other biases, an increased study size cannot compensate for survivor bias. 25 Second, all measurements were performed according to the study protocol, and, therefore, information bias is unlikely. This is in contrast to the previous studies in which data were collected from clinical records, potentially resulting in information bias as data were collected for a clinical reason. Third, we adjusted for smoking, an important confounder that was unavailable in the majority of the previous studies and, therefore, not included in their models. 4,11,12,23,24 Reverse causation owing to inadequate control for smoking status can distort the true relationship between hypokalemia and the risk of death because smoking is associated with both decreased serum potassium level and an increased risk of death. 26 Finally, by modeling potassium freely in our restricted cubic spline analysis, we could establish the optimum predialysis serum potassium level as a reference category and incorporate it into our main time-dependent analysis. As this reference was found to be higher than those used in previous studies, this may have allowed for a more valid estimation of the HRs associated with dyskalemia, particularly hypokalemia. Using a lower reference category, as most previous studies did, could have resulted in an underestimation of the relative relationship between hypokalemia and mortality.

Nevertheless, our study has some limitations. First, as with most studies, we encountered missing data. To maintain power and minimize any bias, we used multiple imputation to account for these missing data. Second, even though we updated the predialysis serum potassium level as a time-dependent variable, we could only do so every 6 months. As the predialysis serum potassium level fluctuates, shorter intervals between measurements would have allowed for more precise estimations of the mortality risk and less "dilution" of the effect. Third, as we did not have information on peridialytic changes in serum potassium levels, the serum-to-dialysate potassium gradient, or postdialysis serum potassium levels, we could not consider the effects of these factors on the mortality risk. Fourth, we used all-cause mortality as the primary outcome, which is unequivocal, whereas the secondary outcome, cardiac death, can be nondifferentially misclassified.
Considering cardiac death as an outcome showed slightly weaker but similar results. In general, nondifferential misclassification results in an underestimation of the effect. 25

In conclusion, we found a U-shaped relationship between predialysis serum potassium levels and 6-month all-cause mortality in incident hemodialysis patients in the first 10 years of follow-up. Our results indicate an optimum predialysis serum potassium level of approximately 5.1 mmol/L. Low and high predialysis serum potassium levels were 1.4- and 1.3-fold stronger risk factors for death, respectively, compared with the optimum level. If proven causal, the clinical implication of these results is that potassium-lowering therapy should be used with caution in patients receiving hemodialysis with normal or low serum potassium levels before the dialysis session. Furthermore, as a low predialysis serum potassium level could result from malnourishment, the associated mortality risk emphasizes the importance of preventing nutritional disorders in patients receiving hemodialysis.

SUPPLEMENTARY MATERIAL

Supplementary File (PDF)

Figure S1: Kaplan-Meier curves showing the survival probability for each baseline serum potassium level category over 10 years of follow-up.

Table S1: Hazard ratios with 95% confidence intervals of 6-month all-cause mortality according to the 6 categories of time-dependent predialysis serum potassium levels in 1,117 incident hemodialysis patients during 10 years of follow-up, with additional adjustments for nutritional markers.

Table S2: Hazard ratios with 95% confidence intervals of 6-month cardiac death according to the 6 categories of time-dependent predialysis serum potassium levels in 1,117 incident hemodialysis patients during 10 years of follow-up.

Table S3: Hazard ratios with 95% confidence intervals of 6-month all-cause mortality according to the 6 categories of time-dependent predialysis serum potassium levels in 1,117 incident hemodialysis patients over 10 years of follow-up, without censoring at the time of a switch to peritoneal dialysis.

Table S4: Hazard ratios with 95% confidence intervals of 10-year all-cause mortality according to the 6 categories of predialysis serum potassium levels fixed at baseline in 1,117 incident hemodialysis patients.

Table S5: Hazard ratios with 95% confidence intervals of 10-year cardiac death according to the 6 categories of predialysis serum potassium levels fixed at baseline in 1,117 incident hemodialysis patients.
Follicular lymphoma: updates for pathologists

Follicular lymphoma (FL) is the most common indolent B-cell lymphoma and originates from germinal center B-cells (centrocytes and centroblasts) of the lymphoid follicle. Tumorigenesis is believed to initiate early in precursor B-cells in the bone marrow (BM) that acquire the t(14;18)(q32;q21). These cells later migrate to lymph nodes to continue their maturation through the germinal center reaction, at which time they acquire additional genetic and epigenetic abnormalities that promote lymphomagenesis. FLs are heterogeneous in terms of their clinicopathologic features. Most FLs are indolent and clinically characterized by peripheral lymphadenopathy with involvement of the spleen, BM, and peripheral blood in a substantial subset of patients, sometimes accompanied by constitutional symptoms and laboratory abnormalities. Diagnosis is established by the histopathologic identification of a B-cell proliferation usually distributed in an at least partially follicular pattern, typically, but not always, in a lymph node biopsy. The B-cell proliferation is biologically of germinal center cell origin and thus shows expression of germinal center-associated antigens as detected by immunophenotyping. Although many cases of FL are typical and the histopathologic features are straightforward, the biologic and histopathologic variability of FL is wide, and an accurate diagnosis of FL over this disease spectrum requires knowledge of morphologic variants that can mimic other lymphomas (and rarely non-hematologic malignancies), clinically unique variants, and pitfalls in the interpretation of ancillary studies. The overall survival for most patients is prolonged, but relapses are frequent. The treatment landscape in FL now includes the application of immunotherapy and targeted therapy in addition to chemotherapy.

FL most commonly involves peripheral lymph node sites, although central lymph nodes, including abdominal and thoracic, can also be involved. Extranodal sites that are commonly involved include bone marrow (BM), spleen, liver, and peripheral blood [7].

ETIOPATHOGENESIS

Biologic abnormalities that promote the development of FL can be broadly summarized as occurring at three stages of B-cell development: (1) BM events, (2) germinal center events, and (3) post-germinal center events.

BM events

BM precursor B-cells, usually at the pre- or pro-B-cell stage, acquire the t(14;18)(q32;q21) IGH/BCL2 because of repair failure during V(D)J recombination. The resulting overexpression of BCL2, an anti-apoptotic protein, promotes survival and discourages apoptosis of B-cells as they later mature during the germinal center reaction [8].

Germinal center events

B-cells harboring the t(14;18)(q32;q21) translocation retain germinal center functionality (e.g., BCL6-mediated pathways) and undergo somatic hypermutation and class switch recombination of immunoglobulin genes initiated by activation-induced cytidine deaminase, with retention of IgM/IgD surface expression. This last phenomenon is known as the allelic paradox and promotes proliferation and survival pathways in malignant B-cells [9].

Post-germinal center events

Additional chromosomal alterations and mutational abnormalities occur, transforming the pre-lymphomatous t(14;18)+ cell into bona fide lymphoma cells. Reentry of a subset of BCL2+ memory B-cells to the germinal center also occurs [10].

The etiology of FL is mainly unknown. The t(14;18)(q32;q21) IGH/BCL2 translocation alone is not sufficient to cause lymphoma [14].
Some essential factors that seem to play a contributing role are family history and inherited/genetic susceptibility (especially in first-degree relatives) and environmental factors (such as exposure to pesticides and herbicides) [15,16].

Clinical features

The median age of patients with FL is in the sixth decade [1]. The clinical presentation of FL is most commonly that of enlarged lymph nodes, frequently in the neck or abdomen. FL is a localized disease in about 10% to 20% of cases. The vast majority of FLs present with widespread nodal involvement (~80%) and advanced-stage disease (stages III-IV) at the time of diagnosis [7]. Some cases follow a chronic relapsing course. A subset progresses rapidly and transforms to aggressive lymphomas such as diffuse large B-cell lymphomas, double-hit large B-cell lymphomas, and lymphoblastic leukemia/lymphoma [17]. FL can also relapse as classic Hodgkin lymphoma, which is clonally related to the antecedent FL, with the Hodgkin lymphoma cells also harboring the t(14;18)(q32;q21) translocation [18].

Most patients with FL are asymptomatic. Symptomatic presentations may include fatigue, fever or night sweats, weight loss, or recurrent infections. Tissue biopsies (lymph node/extranodal sites), most frequently needle biopsies (both core needle and fine-needle aspiration biopsies) and occasionally excisional biopsies, are the most frequent diagnostic materials. Peripheral blood and BM biopsy are usually performed for staging purposes [10]. Abnormal laboratory findings are uncommon. Leukemic phase FL is identified in < 5% of patients [19]. Increases in lactate dehydrogenase (LDH) and β2-microglobulin are present in about 15% of patients [20]. FL rarely presents primarily outside lymph nodes. Extranodal FL can cause a variety of symptoms depending on its location. For example, in case of BM involvement, anemia is present (in about 10% of patients with FL), while leukopenia/thrombocytopenia is rarely seen [19,21]. Involvement of mucosa-associated sites may be asymptomatic or present with symptomatology related to the involved disease site.

HISTOLOGIC FINDINGS

The diagnosis of FL is made histologically in tissue sections obtained from surgically excised lymph nodes or, more frequently, a needle biopsy (both core needle biopsy and fine-needle aspiration biopsy). Gross examination of involved lymph nodes shows a vaguely nodular pattern on the cut section with bulging.

[Fig. 1. Follicular lymphoma, low-grade. A representative case of follicular lymphoma, low-grade. H&E-stained excisional biopsy (A-D) and immunohistochemical stains (C inset, E-I) show the classic morphology of follicular lymphoma, with increased, monotonous-appearing neoplastic follicles in an excisional biopsy of the lymph node. The borders of the follicles are ill-defined and lack well-preserved mantle zones. Foci of sclerosis are identified (A). The neoplastic follicles are expansile and arranged in a back-to-back fashion. The neoplasm extends into perinodal fat (B) and has attenuated to absent mantle zones (C). Immunostain for CD21 highlights follicular dendritic cell meshworks within neoplastic follicles, which is useful in establishing the presence of lymphoid follicles (C, inset). The neoplastic follicle comprises numerous centrocytes and fewer centroblasts, compatible with grade 1-2 of 3 (D). Immunostain for CD20 highlights B lymphocytes in neoplastic follicles and interfollicular (diffuse) areas (E). Immunostain for CD3 highlights reactive T-cells in follicular lymphoma.
The pattern of CD3 accumulating around neoplastic follicles can be used to highlight the nodular distribution of lymphoma cells (F). Immunostain for CD10 confirms that the neoplastic cells are of germinal center origin (lymphoma cells are positive within neoplastic follicles). Scattered interfollicular neoplastic cells are weakly stained with CD10. The reactivity is stronger in germinal centers than in interfollicular regions (G). Immunostain for BCL6 highlights neoplastic lymphoma cells of germinal center origin within neoplastic follicles (H). Immunostain for BCL2 is positive in neoplastic B lymphocytes (I).]

Microscopic examination shows partial or complete effacement of the lymph node architecture by numerous, similarly sized, nonpolarized neoplastic follicles with attenuated or absent mantle zones, typically present in a back-to-back fashion in involved sites (Fig. 1). The neoplastic follicles of FL may also contain many reactive T-cells and follicular dendritic cells (FDCs), but tingible body macrophages are usually few or absent. FL is typically composed of centrocytes (small and large cleaved B-cells) and larger centroblasts (large noncleaved B-cells).

Grading of FL is based on the number of centroblasts per high-power microscopy field (×40 objective, 0.159 mm²) [1]. Distinguishing between grades 1 and 2 (containing up to 15 centroblasts per field) is not recommended at this time. Grade 3 cases have > 15 centroblasts per field and are further subdivided into 3A (centrocytes present) and 3B (solid aggregates of centroblasts with no or very rare intervening centrocytes) (Fig. 2). Grade 3A (or 3B) FL with diffuse growth containing > 15 centroblasts per high-power field should be classified as diffuse large B-cell lymphoma (DLBCL) according to the WHO system (Table 1) [1].

The WHO classification recommends documenting the proportion of the neoplasm that is present in a follicular versus diffuse distribution, as follows: follicular pattern (> 75% of the sample has a follicular pattern); follicular and diffuse pattern (25%-75% of the specimen has a follicular pattern); focally follicular/predominantly diffuse pattern (< 25% of the specimen has a follicular pattern); and diffuse pattern (absence of follicular areas) [1]. Based on these criteria, a neoplasm with a purely follicular pattern is considered FL, even if composed of centroblasts alone. For patients with low-grade FL, a diffuse tumor pattern has no prognostic importance. Still, the possibility of sampling error in small biopsies should be considered or noted in the report. An area of increased subjectivity is when biopsies are small and show limited regions of a diffuse pattern with grade 3A morphology. In these cases, especially if most of the specimen shows low-grade FL and the clinical findings support low-grade FL, it is critical not to overcall the diffuse area as DLBCL [22].
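Since these grading and pattern rules are purely threshold-based, they can be summarized in a short sketch. The code below is illustrative only: the function names and inputs are our own and simply encode the numeric cutoffs quoted in the preceding paragraphs.

```python
# Illustrative encoding of the WHO grading and pattern-reporting rules
# described above; names and inputs are assumptions, not WHO terminology.

def fl_grade(centroblasts_per_hpf: float, centrocytes_present: bool) -> str:
    """Grade FL from centroblasts per high-power field (x40, 0.159 mm^2)."""
    if centroblasts_per_hpf <= 15:
        return "grade 1-2"   # distinguishing 1 from 2 is not recommended
    return "grade 3A" if centrocytes_present else "grade 3B"

def fl_pattern(percent_follicular: float) -> str:
    """Report the proportion of the neoplasm with a follicular pattern."""
    if percent_follicular > 75:
        return "follicular"
    if percent_follicular >= 25:
        return "follicular and diffuse"
    if percent_follicular > 0:
        return "focally follicular/predominantly diffuse"
    return "diffuse"

# Example: 20 centroblasts/HPF with centrocytes, 80% follicular pattern.
print(fl_grade(20, True), "|", fl_pattern(80))   # grade 3A | follicular
```

As emphasized above, a purely follicular neoplasm is reported as FL regardless of grade, and small diffuse foci with grade 3A morphology in a limited biopsy should not be overcalled as DLBCL.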
FL of the usual type can show a wide range of morphologic variability but is still classified as FL. Distinct from this, there are also WHO-defined FL variants, which are also FL but have consistent clinicopathologic and biologic nuances that separate them from usual FL. In terms of morphologic variability, some cases of FL, especially when neoplastic follicles invade beyond the lymph node capsule, particularly in retroperitoneal and mesenteric sites, can be associated with diffuse and prominent sclerosis, often associated with blood vessels.

FL of the usual type may also show scattered Hodgkin-like cells, which must be distinguished from classic Hodgkin lymphoma and from collision tumors with both FL and classic Hodgkin lymphoma, usually accomplished by careful and extensive immunophenotyping. FL can also have Castleman-like features, including concentric mantle zone cells, hyalinization and regression of follicles, and interfollicular vascular proliferation with penetrating vessels creating lollipop lesions. These cases may mimic hyaline vascular Castleman disease to the point that the FL is not identified, creating a diagnostic pitfall [23]. In the floral variant of FL, neoplastic follicles are irregular in shape and surrounded by expanded, prominent mantle zone lymphocytes that penetrate the neoplastic follicles. This variant resembles the non-neoplastic entity progressive transformation of germinal centers and may also mimic other lymphomas, including marginal zone lymphoma (with follicular colonization) or nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) [24,25]. Signet ring cell FL is a variant in which tumor cells have clear, vacuolated cytoplasm and an eccentric nucleus and should be distinguished from carcinoma cells. The vacuoles are composed of intracytoplasmic immunoglobulin deposits [26].

FL variants include in situ follicular neoplasia, duodenal-type FL, the diffuse variant of FL, and testicular FL.

In situ follicular neoplasia is diagnosed when lymph node biopsies show overall normal histologic findings with preservation of nodal architecture, but abnormal, brightly BCL2-positive B-cells are identified within lymphoid follicles [1]. These BCL2-positive B-cells are confined to follicles and represent colonization of pre-existing germinal centers by monoclonal BCL2-rearranged B-cells [27,28]. Because a subset of these patients will have FL in other sites at the time of diagnosis of in situ follicular neoplasia, these patients should be staged. Approximately 5% of these patients will subsequently develop FL or DLBCL [29]. Further supporting that these lesions are biologically in situ neoplasms, sequencing studies performed on these selected cells show mutational abnormalities similar to those seen in FL but at a lower variant allele frequency. Moreover, retrospective analysis of previously removed lymph nodes in patients diagnosed with FL can identify in situ follicular neoplasia in most of these patients [30,31]. As opposed to in situ follicular neoplasia, lymph nodes may show partial involvement by FL. Both neoplastic and reactive follicles are present in these cases, and the lymph node architecture is partially effaced. These cases are still classified as FL, not in situ lesions, and the presence of only partial nodal involvement is associated with lower stage and better prognosis, given adequate sampling [32].

Duodenal-type FLs are FLs that arise at extranodal mucosal sites within the small bowel, usually the second portion of the duodenum (Fig. 3). These neoplasms do not typically pose diagnostic challenges, since the histopathologic features are typical of low-grade FL. This variant is important to recognize clinically, since it occurs in younger patients and remains localized in nature [33]. Therefore, it is amenable to localized radiation alone and generally does not require systemic therapy. Biologically, this variant is intriguing in that it shows features overlapping between FL and extranodal marginal zone lymphoma.
Like usual FL, duodenal-type FLs have BCL2 translocations, somatically hypermutated immunoglobulin genes, and frequent mutations in KMT2D, CREBBP, and TNFRSF14 [34]. However, these lesions additionally show features that overlap with those of extranodal marginal zone lymphoma, including restricted usage of the immunoglobulin heavy chain variable region, suggesting development in the context of antigen stimulation [35].

The diffuse variant of FL has clinical, immunophenotypic, and molecular genetic differences from typical nodal FL. This variant typically presents at nodal sites, usually inguinal lymph nodes [36,37]. However, as opposed to usual FL, this variant is frequently localized without systemic involvement. Histologically, this variant may be challenging to recognize, given that the majority of the neoplasm is distributed in a diffuse pattern with only focal and usually small micronodular foci. Cytologically, the lymphoma cells have typical centrocytic and centroblastic morphologic features and typically express CD10 and other germinal center B-cell markers. Diffuse expression of CD23 is consistently identified. Like usual FL, these neoplasms frequently show 1p36 chromosomal abnormalities and/or TNFRSF14 and CREBBP mutations. However, unlike usual FL, these neoplasms lack the BCL2 translocation and harbor STAT6 mutations.

The testicular variant of FL is rare. This neoplasm was initially identified in children but has also rarely been reported in adults [38]. These neoplasms have several features similar to pediatric-type FL. Histologically, they have high-grade cytology (grade 3A or 3B histology), yet they are usually localized and associated with a good prognosis. Neoplastic cells in this variant do not express BCL2 protein and lack the BCL2 translocation, similar to pediatric-type FL.

Needle biopsy of BM is performed as part of staging procedures in patients with FL. In FL, focal or extensive BM involvement is found in most patients [39]. The most frequent pattern of involvement is paratrabecular aggregates of lymphoma cells, with or without interstitial or diffuse patterns. A pure follicular (nodular) pattern in the BM is present in about 5% of cases with BM involvement [40].

The liver and spleen are commonly involved in FL. In the liver, FL usually involves the portal tracts; however, large parenchymal nodules are present when there is extensive involvement by lymphoma. In the spleen, the white pulp is preferentially involved, with two predominant patterns: expansion of the white pulp in most cases and, less frequently, relatively preserved white pulp architecture [41,42].

Fine-needle aspirations typically show variable mixtures of centrocytes and centroblasts. In comparison to reactive follicular hyperplasia, tingible body macrophages are rare or absent (Fig. 4). Centrocytes are small to large, with angulated nuclei, dense chromatin, and scant cytoplasm. Centroblasts are large cells with oval nuclei, vesicular chromatin, 1-3 nucleoli, and moderate cytoplasm, and are more than three times the size of lymphocytes. Similar morphologic features may be seen in FDCs; however, FDCs have large round nuclei, dispersed and even nearly clear chromatin, a single eosinophilic nucleolus, and indistinct cytoplasm [43].

A subset of FL cases is truly negative for BCL2, especially in grade 3B [47-49]. These tumors also usually do not have the BCL2 translocation of FL and may express MUM1 and cytoplasmic immunoglobulin, thus appearing to have biology more similar to that of post-germinal center B-cells.
However, a subset of cases is falsely negative for BCL2 expression, which may be due to a point mutation in BCL2 that blocks binding of the BCL2 antibody clone 124, which is commonly used in clinical IHC. Other anti-BCL2 antibodies, such as clone E17 or SP66, will be positive [50]. FL cases are negative for the T-cell markers CD2, CD3, CD4, CD5, CD7, and CD8 and are usually negative for CD43 and cyclin D1 [1,51]. A small subset of cases is positive for CD5, including a subset of instances of the floral variant of FL [52].

The morphologic appreciation of follicular patterns in FL is usually adequate. Still, it can be supported by identifying FDC networks underlying the follicular aggregates, which are generally positive for CD21, CD23, and/or CD35 [22]. Another clue to a nodular pattern in FL is the accumulation of reactive T-cells at the periphery of the nodules. CXCL13 is another FDC-associated IHC marker that tends to be positive in most FL cases with FDC meshworks that are negative for the other FDC-associated IHC markers [53].

Ki-67 assesses the proliferation rate of follicular lymphomas, and the Ki-67 proliferation index correlates with FL grade. Most low-grade FLs have proliferative rates < 20%. However, about 20% of low-grade FLs have a high proliferation rate (> 30%). These follicular lymphomas appear to behave more aggressively, similar to grade 3A or grade 3B FL. We report these cases as FL grade 1-2 with a high proliferation fraction and include a diagnostic comment indicating that they may be more clinically aggressive than typical low-grade FL [54]. It is recommended to assess Ki-67 in the follicles. However, if the interfollicular areas are more extensive than the follicular ones, we estimate the Ki-67 proliferation based on an average of the entire neoplasm [22]. FLs with polarized follicles may have higher Ki-67 proliferative rates in the dark zone of the follicle, and this should not be interpreted as a high proliferation index in FL [22].

Peripheral blood is commonly involved (at a low level) in ~90% of FL cases. This feature can be detected by flow cytometry or molecular methods if not easily observed in a peripheral blood smear. Absolute lymphocytosis with a high count is present in 5%-10% of cases [19]. Neoplastic cells are typically small to intermediate in size with highly indented nuclei, known as buttock cells.

The absence of a BCL2 rearrangement in a suspected low-grade FL is unusual. In contrast, the absence of such a rearrangement in grade 3 cases should not be interpreted as evidence against a diagnosis of follicular lymphoma, particularly in grade 3B FL. Loss of 1p36, which contains TNFRSF14, is common in FL [11,13]. BCL6 rearrangement at 3q27 or BCL6 amplification is present mainly in grade 3B FL. In the absence of a BCL2 rearrangement in grade 3 cases, FISH studies for BCL6 rearrangements can be performed. MYC rearrangement/activation of MYC is rare in FL (< 5%). In the absence of histologic transformation, these cases are still called FL and are not classified as high-grade B-cell lymphoma. More extensive studies are needed to evaluate whether MYC rearrangement in FL has prognostic significance [58,59]. However, some FLs with MYC rearrangement are associated with transformation to DLBCL. These cases are categorized as double-hit lymphomas.

[Fig. 6. IGH/BCL2 dual-color fluorescence in situ hybridization (FISH). FISH on a fixed, paraffin-embedded tissue section of follicular lymphoma using dual-fusion probes for BCL2 (red) and IGH (green). The t(14;18)(q32;q21) IGH/BCL2 fusion gene appears as a yellow signal.]
Finally, monoclonal B-cell populations in FL can be detected by clonal Ig heavy and light chain gene rearrangements demonstrated with BIOMED-2 primer sets in multiplex polymerase chain reaction [64,65].

Reactive follicular hyperplasia

In reactive follicular hyperplasia, follicles are primarily located in the cortex, are more widely separated, and show variation in size and shape, polarization of germinal centers into light and dark zones (with higher proliferation), frequent mitoses, tingible body macrophages in germinal centers, and sharply demarcated mantle zones. There is usually no evidence of monoclonality by Ig rearrangement studies, and t(14;18)(q32;q21) is not identified [66]. Flow cytometric evidence of monotypic light chain expression, uniform expression of CD10, and decreased intensity of CD19, CD20, and CD38 are also features that would support the diagnosis of FL over reactive follicular hyperplasia. However, rare cases of reactive lymph nodes can show monotypic light chain expression among B-cells, especially in children [67-69].

Progressive transformation of germinal centers

Nodules are 3-5 times larger than the background reactive follicles in progressive transformation of germinal centers. It may be hard to separate this entity from the floral variant of FL on morphology (Fig. 7). By IHC, germinal center B-cells are negative for BCL2 in progressive transformation of germinal centers, and there is no evidence of monoclonality or t(14;18)(q32;q21) [25].

Nodular lymphocyte-predominant Hodgkin lymphoma

NLPHL has larger nodules than FL, and the nodules are more vaguely circumscribed. Most cells in the nodules are small round lymphocytes. Centrocytes and centroblasts are absent. The presence of lymphocyte-predominant (LP) cells, which are negative for CD10 and BCL2, is the clue to the diagnosis of NLPHL. Interestingly, LP cells might express some germinal center-associated markers, including BCL6, HGAL, and LMO2, which may add to the difficulty of differentiating the B-cell-rich nodular areas of NLPHL from FL [70]. In NLPHL, t(14;18)(q32;q21) is not identified [22].

Lymphocyte-rich classic Hodgkin lymphoma

There are commonly large nodules with eccentrically located, regressed germinal centers.

Nodal marginal zone lymphoma

In nodal marginal zone lymphoma (MZL), lymphocytes with monocytoid features, frequent plasmacytic differentiation, and colonization of germinal centers by a monotypic, monoclonal B-cell population are identified. The neoplastic cells express pan-B-cell markers and are BCL2 positive by IHC. However, they are negative for CD5, CD10, cyclin D1, BCL6, LMO2, and other germinal center-associated markers. Expression of germinal center cell markers, including dual expression of BCL2 and BCL6 in rare cases of MZL, especially extranodal MZL, is an essential diagnostic pitfall to be aware of [71]. Positive IHC stains for IRTA1 and MNDA favor the diagnosis of MZL [72].

PROGNOSTIC FACTORS AND THERAPEUTIC MODALITIES

FL patients have a median overall survival of > 15 years, with 5-year progression-free survival varying based on disease characteristics, comorbidities, therapies used, and therapeutic response [73]. Short remission after treatment portends a poor prognosis [74]. FL may transform into an aggressive lymphoma. In ~25% to 35% of cases, FL progresses to DLBCL, usually with the DLBCL showing biologic similarity to germinal center-derived DLBCL [75-77].
As mentioned previously, a small subset of FLs may progress to high-grade B-cell lymphomas with MYC, BCL2, and/or BCL6 translocations, and rare cases may progress to lymphoblastic lymphoma or relapse as classic Hodgkin lymphoma.

FL is generally very responsive to radiation and chemotherapy. Radiation alone can provide a long-lasting remission in some patients with limited disease. In more advanced stages, physicians may use one or more chemotherapy drugs or the monoclonal antibody rituximab (Rituxan), alone or with other agents. The bispecific antibody mosunetuzumab, which targets CD20 and CD3, redirects and recruits endogenous T-cells to the proximity of CD20-expressing B-cells; it has promising clinical activity in patients with relapsed or refractory FL [85].

CONCLUSION

FL is the most common indolent B-cell lymphoma and originates from the germinal center B-cells (centrocytes and centroblasts) of the lymphoid follicle. In summary, we discussed the importance of morphologic classification and the interpretation of ancillary studies in the accurate diagnosis of FL. Although many cases of FL are typical and the histopathologic features are straightforward, differentiating FL from its mimickers, whether other lymphomas or reactive conditions, requires awareness of the different morphologic patterns of FL and of pitfalls in the interpretation of ancillary tests. The overall survival for most patients is prolonged, but relapses are frequent. Understanding mutational abnormalities and signaling pathways, in addition to the t(14;18)(q32;q21) translocation and BCL2 mutation, will further help to identify innovative treatment approaches and the application of immunotherapy and targeted therapy in patients with FL.

Ethics Statement
Not applicable.

Availability of Data and Material
The datasets generated or analyzed during the study are available from the corresponding author on reasonable request.

Code Availability
Not applicable.

Author Contributions
Writing—review & editing: MK, JRC. Approval of final manuscript: all authors.

Conflicts of Interest
The authors declare that they have no potential conflicts of interest to disclose.

Funding Statement
No funding to declare.
Diabetes prevalence and metabolic risk profile in an unselected population visiting pharmacies in Switzerland

Background: Diabetes represents one of the major health challenges in Switzerland, and early diagnosis and treatment are mandatory to prevent or delay diabetes-related morbidity and mortality. For the purpose of identifying affected individuals, early screening in pharmacies is a valuable option. In this survey, we aimed to determine blood glucose and metabolic control in an unselected population of individuals visiting Swiss pharmacies.

Methods: The subjects responded to a short questionnaire and underwent a single capillary blood glucose test for screening purposes. They were classified as normal, indeterminate, impaired fasting glucose, or diabetes according to predefined blood glucose levels.

Results: A total of 3135 individuals (mean age 56 years) in 18 cantons were screened in November 2010; of these, 4.2% (95% confidence interval [CI] 3.5-4.9) had previously been diagnosed with diabetes. Diabetes was newly diagnosed in 1.9% (95% CI 1.5-2.4), and 11.5% (95% CI 10.4-12.6) had impaired fasting glucose. Subjects with impaired glucose control had an increased body mass index, a frequent family history of diabetes, hypertension, hypercholesterolemia, smoking, and a low level of physical activity. The prevalence of impaired glucose control differed between the French/Italian-speaking part of Switzerland (new diabetes 4.9%; impaired fasting glucose 12.7%) and the German-speaking part (new diabetes 1.9%; impaired fasting glucose 10.3%).

Conclusion: Our study shows a 6.1% prevalence of diabetes, of which about a third (1.9%) was previously undiagnosed, and an 11.5% prevalence of impaired fasting glucose. Therefore, screening initiatives in pharmacies may be suitable for detecting people with undiagnosed diabetes.

Introduction

There is an increasing prevalence of type 2 diabetes globally, which is attributable to a growing global population, an increase in life expectancy, increased diagnostic efforts, and reduced mortality attributable to recent advances in diabetes treatment.1 Therefore, diabetes is a major driver of health care costs.2 For Switzerland, a diabetes prevalence of 3.3% was reported for 1997, increasing to 4.8% in 2007.3 Other data from a large population-based study conducted in the French-speaking part of Switzerland point to an even higher prevalence.4 Therefore, it is not surprising that type 2 diabetes and its complications contribute to 2.2% of Switzerland's health care expenditure.5

Early diagnosis and treatment are mandatory to prevent or delay diabetes-related morbidity and mortality and to reduce health care costs. Within this context, measurement of blood glucose levels during screening initiatives may help to identify persons with unknown diabetes or impaired fasting glucose and to initiate preventive measures or appropriate therapy in a timely manner. Pharmacies are particularly suitable for this purpose, because they are an easily accessible point of contact for individuals not visiting physicians. Based on these considerations, we conducted a national survey in pharmacies of the AMAVITA chain, documenting more than 3000 individuals with respect to their random blood glucose values and further cardiometabolic parameters. Our aim was to assess glucose values and classify them as normal, indeterminate, impaired fasting glucose, or definite diabetes in a largely unselected population of pharmacy attendees.
Materials and methods

Individuals visiting a participating pharmacy in one of 18 cantons of Switzerland were enrolled in the present study. Participants were selected based on their willingness to undergo blood glucose testing and to provide information on their cardiometabolic risk profile, without any further predefined selection criteria. Individuals not willing to participate were neither documented nor counted, so participation rates could not be determined. Prior to enrollment, all study participants received a written information leaflet explaining the survey objectives and procedures. The study was conducted in accordance with applicable Swiss national regulations for epidemiologic studies and good epidemiologic practice, and informed verbal consent was obtained. A signed informed consent form was not required, because no personal data were registered and measurement of blood glucose is a standard procedure in Swiss pharmacies.

Documentation

On an anonymized one-page case report form, participants were asked to report the time of their last meal, as well as their gender, age, body weight, and height. The following parameters were also documented: presence or absence of diabetes, antidiabetic treatment, family history of diabetes or, in women, a birth weight of more than 4 kg in their newborns, a history of cardiovascular events (myocardial infarction, stroke), and cardiovascular risk factors (hypertension, hypercholesterolemia, smoking, physical exercise). Physical exercise was classified as no activity, moderate (at least 30 minutes/day on 5 days/week, eg, walking, dancing, gardening), or strenuous (at least 20 minutes/day on 3 days/week, eg, jogging, swimming, team sports).

Blood glucose classification

Capillary blood glucose was measured in all study participants using a uniform blood glucose monitoring device (Accu-Chek® Aviva; Roche, Basel, Switzerland) according to the manufacturer's instructions. Based on the time of their last meal, study participants were categorized as either fasting (last meal ≥ 8 hours earlier) or nonfasting (last meal < 8 hours earlier) and were further subclassified into four groups based on their blood glucose levels, as follows: normal blood glucose (< 5.6 mmol/L, fasting or nonfasting), indeterminate diabetes status (5.6 to < 11.1 mmol/L, nonfasting), impaired fasting glucose (5.6-6.9 mmol/L, fasting), and diabetes (≥ 7.0 mmol/L fasting; ≥ 11.1 mmol/L nonfasting).
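Because the classification above is a simple threshold rule, it can be expressed compactly in code. The following sketch is our own illustration of the stated cutoffs, not part of the study protocol; the function and label names are assumed.

```python
# Minimal sketch of the four-way blood glucose classification described
# above. Thresholds (mmol/L) and the 8-hour fasting cutoff are those
# stated in the text; the function and label names are assumptions.

def classify_glucose(glucose_mmol_l: float, hours_since_meal: float) -> str:
    fasting = hours_since_meal >= 8
    if glucose_mmol_l < 5.6:
        return "normal"                       # fasting or nonfasting
    if fasting:
        # 5.6-6.9 mmol/L fasting; diabetes at >= 7.0 mmol/L
        return "diabetes" if glucose_mmol_l >= 7.0 else "impaired fasting glucose"
    # nonfasting values from 5.6 to < 11.1 mmol/L cannot be classified
    return "diabetes" if glucose_mmol_l >= 11.1 else "indeterminate"

# A fasting 6.2 mmol/L is impaired fasting glucose; nonfasting it is not
# classifiable, which is why indeterminate results need fasting retests.
print(classify_glucose(6.2, 10))  # impaired fasting glucose
print(classify_glucose(6.2, 3))   # indeterminate
```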
Statistical analysis

We performed a descriptive analysis of the data. For this purpose, only participants with complete data regarding the individual parameters were considered. Continuous variables are shown as the mean ± standard deviation, and categorical variables as percentages. P values for comparisons between groups of participants were calculated using the two-sided t-test for quantitative variables and the χ² test for qualitative variables.
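As an illustration of the group comparisons just described, the sketch below applies a χ² test to a hypothetical 2×2 frequency table and a two-sided t-test to two small hypothetical samples using SciPy; all numbers are invented for demonstration and do not come from the survey.

```python
# Hedged sketch of the tests named above (two-sided t-test for
# quantitative variables, chi-squared test for qualitative ones).
# All counts and values below are invented for demonstration.
from scipy import stats

# Hypothetical 2x2 table: smokers vs. nonsmokers in two regions.
table = [[350, 1104],   # region A: smokers, nonsmokers
         [300, 1381]]   # region B: smokers, nonsmokers
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Hypothetical blood glucose values (mmol/L) in two groups.
group_a = [5.9, 6.1, 6.3, 5.8, 6.0, 6.2]
group_b = [6.4, 6.6, 6.2, 6.7, 6.5, 6.3]
t_stat, p_t = stats.ttest_ind(group_a, group_b)  # two-sided by default

print(f"chi-squared P = {p_chi2:.3f}; t-test P = {p_t:.4f}")
```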
These numbers are similar to those in Germany and Austria, but higher than in other northern European countries, such as Belgium, The Netherlands (5.3%), and France (6.7%). 6 However, accurate prevalences of diabetes can only be derived from structured, population-based studies. From a methodologic perspective, our investigation does not fullfill all the criteria of a proper population-based study and the willingness to undergo voluntary blood glucose testing might favor the enrollment of the more health-conscious subjects. Nevertheless, participants consulting a pharmacist can be considered as a largely unselected population due to the large age spectrum and the variety of reasons for a pharmacy visit. One population-based study was conducted in the Frenchspeaking region of Switzerland (Lausanne), and reported an overall diabetes prevalence of 6.6% in participants aged 35-75 years, of which only 66.3% were known cases of diabetes. 4 Even though the low number of study participants in the canton of Waadtland (Lausanne region) in our study does not allow for any conclusions, the overall prevalence of 6.8% in the SUI-FR/IT region (4.9% with known/1.9% with newly diagnosed diabetes) corresponds well to these published data, mostly driven by the comparatively high number of diabetic participants in the canton of Geneva (9.6% new and known diabetes combined). Interestingly, the overall prevalence for diabetes appears to be lower in the SUI-GER part of Switzerland, especially in the cantons of Bern, Basel-Stadt, and Graubünden, but there was considerable heterogeneity observed between cantons, which remains unexplained. Diabetes and risk factor control The recently published Swiss health survey shows selfreported diabetes control rates of 65.5% for the year 2007. 3 In line with these findings, data from our survey suggest the presence of uncontrolled diabetes in about 45% of participants with known diabetes (29% with a diabetic profile/15.3% with impaired fasting glucose) based on a single random blood glucose test and where the majority of participants were treated with oral antidiabetic drugs alone. However, these data have to be interpreted with extreme caution, given that a single capillary blood glucose test is not a suitable tool to establish the level of blood glucose control in participants with diabetes, but rather a prescreening tool. 7 Further, the total number of diabetic participants was low in our study, which also represents a major limitation for making conclusions on blood glucose control in diabetic participants in Switzerland. In our study, established cardiometabolic risk factors, such as elevated body mass index, a family history of diabetes, hypertension, hypercholesterolemia, smoking and a low level of physical exercise appeared to be associated with impaired glucose metabolism, confirming well known previous observations in this Swiss population. [8][9][10] Even though the prognostic relevance of treating diabetes and impaired fasting glucose with regards to microvascular and macrovascular complications are undisputed, there is a lack of prospective data supporting the cost-effectiveness of structured screening initiatives. [11][12][13] However, screening allows for early identification of individuals affected by diabetes or at risk of diabetes, and such initiatives, even when conducted at unconventional locations such as optometry practices, have been shown to be useful. 
14 Diabetes screening Screening for diabetes in asymptomatic patients has been challenged recently 15 for having no beneficial effect on overall health outcomes. To our knowledge, there are no randomized, controlled trial data and only one relevant casecontrol study, and neither showing a benefit when considering microvascular complications. However, there are data to support screening being effective in hypertensive persons when macrovascular benefits are considered. Therefore, targeted screening might be more effective than mass screening. The most recent guidelines of the American Diabetes Association for screening and diagnosis of diabetes require a glycosylated hemoglobin $ 6.5%, a fasting plasma glucose $ 126 mg/dL or a 2-hour plasma glucose $ 200 mg/dL, or symptoms of hyperglycemia in patients with a random plasma glucose $ 200 mg/dL. 16 Fasting is defined as no caloric intake for at least 8 hours. Just as there is less than 100% concordance between fasting plasma glucose and 2-hour plasma glucose tests, there is not full concordance between glycosylated hemoglobin and either glucose-based test. Analyses of National Health and Nutrition Examination Survey data indicate that, assuming universal screening of the undiagnosed, the glycosylated hemoglobin cut point $ 6.5% identifies one-third fewer cases of undiagnosed diabetes than a fasting glucose cut point $ 126 mg/dL (7.0 mmol/L). 16 In the clinical as well as in the research environment, the required fasting status is a challenging task. For clinicians and patients it is much simpler to obtain random glucose values irrespective of the fasting duration. In studies, especially epidemiologic ones, the fasting requirement influences the study design, complicates the field work, and increases the costs of the study. Moreover, and most important, it is not feasible to control reliably for self-reported fasting status. Therefore, random blood glucose values have been used as a type of prescreening measure to identify individuals in which a repeat testing under fasting conditions is necessary. 7,17 For the results of the present study, this means that a number of patients (ie, those with indeterminate blood glucose) may be either normal or have impaired fasting glucose or diabetes. The conclusion is that the prevalence rates reported in our survey for new diabetes are a conservative estimate of the true prevalence. submit your manuscript | www.dovepress.com Publish your work in this journal Submit your manuscript here: http://www.dovepress.com/vascular-health-and-risk-management-journal Vascular Health and Risk Management is an international, peerreviewed journal of therapeutics and risk management, focusing on concise rapid reporting of clinical studies on the processes involved in the maintenance of vascular health; the monitoring, prevention and treatment of vascular disease and its sequelae; and the involvement of metabolic disorders, particularly diabetes. This journal is indexed on PubMed Central and MedLine. The manuscript management system is completely online and includes a very quick and fair peer-review system, which is all easy to use. Visit http://www.dovepress.com/ testimonials.php to read real quotes from published authors. Vascular Health and Risk Management 2012:8 Limitations Despite the straightforward approach of testing patients for the presence for dysglycemia in a large number of pharmacies in Switzerland, a number of limitations have to be considered when assessing the results of this study. 
Confirmation of the random blood glucose results obtained in largely nonfasting subjects, either by retesting the same subjects after 8 hours of fasting or by assessing glycosylated hemoglobin, as suggested by the recent American Diabetes Association recommendation, would have been useful but was not possible for logistic reasons. We obtained no data on subjects who did not participate in the study, so we cannot rule out the possibility that males, for example, who comprised only 27% of the study population, were not visiting pharmacies because of having to work. Finally, the population visiting pharmacies may not be totally representative of the overall population, or may have been affected by a common cold, for example, which may introduce bias into blood glucose determination.

Conclusion

In our survey, a total of 59 participants with newly diagnosed diabetes were identified by 102 pharmacies. Given that there are about 1700 retail pharmacies in Switzerland, such an initiative may contribute to the identification of a substantial number of individuals with unknown diabetes. This would enable early treatment and prevention of diabetes and/or diabetes-related complications.
Quantitative Evaluation of Flood Control Measures and Educational Support to Reduce Disaster Vulnerability of the Poor Based on Household-level Savings Estimates

In developing countries, where budget constraints make it difficult to invest in disaster risk reduction, disasters worsen the poverty trap. To alleviate poverty by reducing the risk of disasters, not only the immediate direct impacts of disasters but also their long-term and indirect impacts should be considered. However, since the effects of individual policies are often evaluated based on the extent of damage reduction, the impact on the poor, who have few assets and thus small losses, is generally ignored. Here, we aimed to quantitatively evaluate the effects of flood control measures and educational support in terms of the flood vulnerability of the poor at the household level. We constructed a model to calculate the savings of individual households and used the flood damage-to-savings ratio to determine their flood vulnerability. Next, we estimated the extent to which the flood vulnerability is reduced by various policies. We found that educational support is suitable for reducing the flood vulnerability of the poor cost-effectively, especially when budgets are small. Gini coefficient predictions confirmed that educational support is effective in reducing income inequality. The novelty of this study is that it quantitatively links flood damage, savings, and education, which are factors that affect the flood vulnerability of the poor, and that it compares the effects of various flood control measures and educational support at the household level in terms of flood vulnerability. While the model was developed using household survey data from Bago, Myanmar, the framework should be applicable to other regions as well.

Introduction

As poverty eradication is the first of the Sustainable Development Goals (SDGs), improving the lives of the poor is necessary for sustainable development. However, the significant losses that the poor suffer from natural disasters are an obstacle to this goal (Guterres 2019). For example, it is estimated that, on average, more than 25 million people fall into extreme poverty each year owing to floods and droughts alone. The poor are often severely affected because they live on lands and in houses that are vulnerable to disasters (Brouwer et al. 2007). Moreover, they often do not have significant savings, which can cause them to fall into debt and make it difficult to invest in their businesses and education (Collins et al. 2009; Linnerooth-Bayer and Hochrainer-Stigler 2015; Janzen and Carter 2013). As a result, it becomes extremely difficult for them to break the cycle of poverty (Hallegatte et al. 2020; Rentschler 2013; Carter et al. 2007; Shimomura 2020; Sen 2003; Dube et al. 2018; Banerjee and Duflo 2007).

Floods are particularly devastating, having caused the largest cumulative global economic losses of any natural disaster since 1950 (Podlaha et al. 2020). Furthermore, it is estimated that global exposure to floods will more than triple by 2050, not only because climate change is expected to intensify floods but also because populations and economic assets in flood-prone areas are expanding (Jongman et al. 2012; McGranahan et al. 2007). Since climate change worsens the exposure of the poor in particular (Winsemius et al. 2018), there are concerns that future floods will lead to greater inequality.
However, since the effects of disaster risk reduction investments do not materialize over short periods and are difficult to evaluate quantitatively because of future uncertainties, disaster risk reduction measures are not prioritized as policies in developing countries with limited budgets (Watson et al. 2015; de Ruig et al. 2019). In addition, even when investments are made in disaster risk reduction, the effects of the individual policies are often evaluated in terms of benefits to the overall region (Ward et al. 2017), and the impact on the poor, whose assets and flood damages are small to begin with, is likely to be ignored (Masozera et al. 2007; Rao et al. 2017; Hallegatte and Rozenberg 2017). Moreover, although the poor fall into the trap of long-term poverty because of disasters, cost-benefit analyses of disaster risk reduction investments often focus only on the direct losses caused by disasters and do not consider the indirect effects on long-term improvements in the livelihoods of the poor (Kind et al. 2020; van Hattum et al. 2021). For these reasons, some policies may even increase the vulnerability of the poor and contribute to widening inequality (Pelling and Garschagen 2019). In addition, Thacker et al. (2019), who analyzed the impact of infrastructure on the SDGs, found that the indirect impact can be as high as three times the direct impact. Therefore, to quantitatively evaluate disaster risk reduction investments, it is necessary to determine their effectiveness not only in reducing losses in the event of a disaster but also with respect to long-term poverty eradication.

Furthermore, in recent years, it has become clear that, in addition to flood control measures, educational support is effective in alleviating the poverty caused by floods (Masozera et al. 2007; Fang et al. 2016; Tahira and Kawasaki 2015; Kawamura and Kawasaki 2018). The poor often have low levels of education and are therefore forced to work in unstable, low-income jobs. This is one of the main reasons for inequality. Therefore, if educational support were made equally available to households that have not been able to afford education, the poor would have a better chance of finding stable jobs. This would make it easier for them to increase their savings and reduce their flood vulnerability. In addition, since the poor are often forced to drop out of school because of disasters (Maccini and Yang 2009; Ferreira and Schady 2008; Baez et al. 2010; Cadag et al. 2017), educational support should be effective. However, no study has quantitatively confirmed this effect.

In this study, we quantitatively evaluated various policies for helping the poor decrease their flood vulnerability and compared the effects of flood control measures and educational support, which are considered effective strategies for reducing flood vulnerability. For this purpose, a model was constructed to calculate the savings related to the flood vulnerability of each household. The model estimates who would be affected by floods and to what degree by determining the changes in the household economy rather than changes in the economy of the entire region. In addition, instead of focusing on the damage caused by a single flood, we focused on the long-term changes in savings, considering the impact of frequent floods and the livelihood improvement of each household based on its savings.
By combining the savings determined using this model with the results of inundation calculations based on various flood control measures, we could assess the extent to which the flood vulnerability of each household, as defined in this study, would be reduced. Furthermore, we also estimated the changes in income inequality owing to individual policies.

Borgomeo et al. (2017) quantified the negative impact of floods on poverty. They analyzed the impact of floods on agricultural income as well as the resulting changes in household assets in the coastal areas of Bangladesh and found that floods exacerbate poverty. Based on this study, Barbour et al. (2022) estimated the long-term effects of infrastructural features such as embankments. However, these studies neither considered the changes in living conditions as determined from the assets of individual households nor addressed inequality at the household level. Another study also attempted to assess the losses experienced by the poor whose livelihoods were severely affected by disasters, by defining "well-being losses" based on the reduction in consumption during the recovery period after the disaster. Furthermore, case studies that used this index have shown that the magnitude of asset losses during disasters does not match that of the loss in well-being (Markhvida et al. 2019; Walsh and Hallegatte 2019). However, these studies did not attempt to estimate the effects of specific measures.

Poverty is a multifaceted problem that involves not only material poverty, represented by a lack of money, but also physical weakness, isolation, vulnerability, and powerlessness (Chambers 1983). This study aimed to approach poverty reduction from the aspects of both material poverty and disaster vulnerability.

Target Areas for Case Study

In this study, four villages (Tar Wa Bu Tar, Kun Paung, Let Pan Win, and Htan Pin Chaung) in Bago District, Myanmar, were selected as the case study areas. Myanmar is classified as a least-developed country, where poverty is a major problem. It is also the country second-most affected by natural disasters in the 20-year period from 1999 to 2018 (Eckstein et al. 2019), and natural disasters are an obstacle to poverty eradication. Furthermore, climate change is expected to increase the flood risk in Myanmar, worsening both the intensity and the frequency of floods (Hirabayashi et al. 2013). All four villages are in the Bago River basin, an area that experiences floods almost every year because of the monsoon in the rainy season. Yamagami (2020) demonstrated that some parts of the target areas experienced flooding of over 40 cm every year from 1985 to 2015, with flooding of over 1 m in 29 of the 31 years.

Development of Savings Estimation Model

To evaluate the flood vulnerability of individual households, a savings estimation model was developed, as shown in Fig. 1. The model estimates the following four socioeconomic factors annually for each household, in the order shown: income, flood damage, savings, and educational investment.

[Fig. 1. Overview of the developed savings estimation model.]

Initially, we used the survey data for 416 households collected in 2019 by Shimomura (2020), the future GDP projections reported by Riahi et al. (2017) and Dellink et al. (2017), and the future population projections reported by KC and Lutz (2017) to determine the annual income of each household. Based on the GDP and population projections, the annual rate of change in GDP per capita was calculated and multiplied by the income of each household in 2019.
Although there are various possible patterns for the changes in GDP and population, in this study we used the results of the projections for the intermediate scenario, SSP2. Table 1 shows the descriptive statistics on the socioeconomic factors of the sample used in the analysis.

Next, using the inundation calculations performed by Yamagami (2020) for the same area, we estimated the flood damage for each household based on its income as well as the house structure and materials reported in the household survey data. As shown in Eqs. (1a)-(1c), the damage functions developed by Win et al. (2018) were used to calculate each household's house, asset, and income losses, respectively. As for precipitation, we assumed RCP4.5, which is the intermediate scenario for climate change. In the equations, HD is the amount of damage inflicted on the household's house (kyats), HV is the price of the house (kyats), FD is the flood depth (m), FH is the floor height from the ground (m), and x and y are dummy variables for the building materials (x1: brick, x2: wood, x3: bamboo) and the presence/absence of soil erosion (y1: no, y2: yes). Furthermore, AD is the loss of each household's assets (kyats), AV is the price of the assets (kyats), and a is a dummy variable for the house structure (a1: one story, a2: two stories, a3: stilt). In addition, IL is the loss of income of each household (kyats), HI is the annual income of the household (kyats), FDR is the number of days of inundation (days), and b is a dummy variable for the job category (b1: daily/unstable job, b2: full-time/stable job).

The maximum annual inundation depth was used, regardless of the number of floods, because households tend not to repair or rebuild their houses during the monsoon season but wait until the end of the rainy season. With respect to the presence of soil erosion, we assumed that households within a 50-mile radius of the riverbank are affected by soil erosion, in keeping with Yamagami (2020). For the number of days of inundation, we used the number of days with inundation greater than 0.3 m, because we assumed that an inundation depth of more than 0.3 m would make it difficult to move safely (Kramer et al. 2016) and would thus prevent commuting.

In addition to the income and flood damages thus obtained, the savings rate was used to determine the annual savings of each household. This calculation was performed as shown in Eq. (2), using the same framework as that employed for asset estimation by Borgomeo et al. (2017):

S(t) = S(t-1) + σ·I(t) - L(t)    (2)

Here, S(t) is the household savings in year t (kyats), σ is the savings rate, I(t) is the income in year t (kyats), and L(t) is the flood damage in year t (kyats), i.e., the sum of the losses related to the house, assets, and income. Since there is a strong positive correlation between the savings rate and income (Dynan et al. 2004), it is necessary to define the savings rates based on the economic levels of the households. However, as there are no data available on savings rates by income in the target area, the savings rates were calculated based on the average savings rate for Myanmar (CSO et al. 2020) and the distribution of savings rates by income group for Cambodia (JBIC 2001), a neighboring country with a similar economic level. Based on the savings rates of the 10 income groups in Cambodia and their average value, we determined a multiple of how much each group could save relative to the average. This multiple was then multiplied by the average savings rate in Myanmar to establish the savings rate for each of the 10 income groups.

Finally, we determined whether a household invests in education based on the amount of savings. We assumed that, among the households whose savings exceeded the cost of middle school, a fraction corresponding to the investment choice rate would invest in education. In Myanmar, secondary education is free, but the cost of uniforms, books, and stationery, as well as donations and extracurricular tuition, is a significant burden for the poor (JETRO 2016). On average, the annual expenditure per student is 123,200 kyats (CSO et al. 2020). The investment choice rate was defined as the percentage of households that currently give up education because of a lack of money and was calculated based on the income levels reported by CSO et al. (2020). This is because households without access to education for non-financial reasons are not likely to invest in education even if they could save. Based on the same data, we assumed that the households that invested in education would see their average income increase by a factor of 1.26 after the completion of secondary education.
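To make the annual loop concrete, here is a minimal Python sketch of the recursion in Eq. (2) together with the education-investment step. This is our own illustration, not the authors' code: variable names are ours, initial savings are set to zero, the income gain is applied immediately rather than after the four school years, and the flood losses L(t) are supplied as precomputed inputs because the coefficients of Eqs. (1a)-(1c) are not reproduced here.

```python
# Minimal sketch of the savings recursion S(t) = S(t-1) + sigma*I(t) - L(t)
# (Eq. 2) with the education-investment step. Illustrative only.

SCHOOL_COST = 123_200 * 4    # four years of middle school, kyats per student
INCOME_GAIN = 1.26           # average income multiplier after education

def simulate_household(income_2019, sigma, growth, losses, may_invest):
    """Return the savings path of one household over len(losses) years.

    income_2019 -- survey income (kyats); sigma -- savings rate;
    growth -- yearly GDP-per-capita growth rates (SSP2-style path);
    losses -- yearly flood damage L(t), precomputed from Eqs. (1a)-(1c);
    may_invest -- household belongs to the investment-choice group.
    """
    income, savings, educated = income_2019, 0.0, False
    path = []
    for g, loss in zip(growth, losses):
        income *= 1.0 + g                      # scale income with GDP p.c.
        savings += sigma * income - loss       # Eq. (2)
        if may_invest and not educated and savings >= SCHOOL_COST:
            savings -= SCHOOL_COST             # pay the schooling costs
            income *= INCOME_GAIN              # simplification: applied now
            educated = True
        path.append(savings)
    return path

# Example: 2% annual growth with a large flood every fifth year.
growth = [0.02] * 31
losses = [300_000 if (t + 1) % 5 == 0 else 50_000 for t in range(31)]
print(round(simulate_household(1_500_000, 0.10, growth, losses, True)[-1]))
```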
Policy Evaluation Methods for Identifying Vulnerabilities of the Poor

To evaluate the effects of various flood control measures on long-term poverty alleviation, in addition to damage reduction at the time of disasters, the flood vulnerability at the household level was also considered. In this study, the "damage rate," defined as the ratio of each household's flood damage to its savings, was used as the indicator of flood vulnerability. Appropriate flood control measures naturally mitigate flood damage and reduce the damage rate. Furthermore, since they also make it easier for households to save, the resulting savings can be used to buffer the impact of future floods. In particular, for those households that are unable to access education because of a lack of money, the damage rate can be significantly reduced, because they can expect to increase their savings further when flood damage is reduced, and these savings, in turn, can be invested in education. In the same manner, educational support can also reduce the damage rate by increasing savings. Thus, we compared the effects of various flood control measures and educational support from the same perspective, namely, that of flood vulnerability.
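For concreteness, the vulnerability indicator and the policy metrics used below reduce to simple ratios. The sketch that follows is illustrative only; all names and numbers are invented, and the Gini helper reflects the inequality estimates mentioned in the introduction rather than any formula given in the text.

```python
# Sketch of the flood-vulnerability indicator ("damage rate") and the
# policy metrics used in this study, plus a Gini coefficient helper.
# All function names and numbers are illustrative, not from the paper.

def damage_rate(total_damage: float, savings: float) -> float:
    """Cumulative flood damage divided by household savings."""
    return total_damage / savings

def percent_reduction(baseline: float, with_policy: float) -> float:
    """Percentage reduction in the damage rate achieved by a policy."""
    return 100.0 * (baseline - with_policy) / baseline

def gini(incomes: list[float]) -> float:
    """Gini coefficient from pairwise absolute income differences."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

# Example: a measure that halves damage and doubles savings cuts the
# damage rate of this (hypothetical) household by 75%.
base = damage_rate(6_000_000, 100_000)          # 60.0
policy = damage_rate(3_000_000, 200_000)        # 15.0
print(percent_reduction(base, policy))          # 75.0
print(gini([1.0, 1.0, 1.0, 1.0]))               # 0.0 (perfect equality)
```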
Policy Evaluation Methods for Identifying Vulnerabilities of the Poor

To evaluate the effects of various flood control measures on long-term poverty alleviation, in addition to damage reduction at the time of disasters, the flood vulnerability at the household level was also considered. In this study, the "damage rate", i.e., the ratio of flood damage to each household's savings, was used as an indicator of flood vulnerability (a minimal computation of this indicator is sketched below). Appropriate flood control measures naturally mitigate flood damage and reduce the damage rate. Furthermore, since they also make it easier for households to save, the resulting savings can be used to buffer the impact of future floods. In particular, for those households that are unable to access education because of a lack of money, the damage rate can be reduced significantly, because they can expect to increase their savings further when the flood damage is reduced, and these savings, in turn, can be invested in education. In the same manner, educational support can also reduce the damage rate by increasing savings. Thus, we compared the effects of various flood control measures and educational support from the same perspective, namely, that of flood vulnerability. The five flood control measures evaluated in this study were embankment, retention area, dredging & widening, early warning, and building elevation. It was assumed that the embankments are raised by 3.0 m, over 12 km on the right bank and 5 m on the left bank near the target area. As for the retention area, it was assumed that a 10-km² retention area was set up upstream. With respect to dredging & widening, it was assumed that the riverbed is dredged to a depth of 1.5 m over a length of 6.5 km, and the river width is widened by 5.0 m over a length of 1.0 km. These three structural measures would affect the flood depth and the number of days of inundation as determined through the inundation calculations. Early warnings can reduce asset damage by 4.6% (Pappenberger et al. 2015), while building elevation can reduce the damage to houses and assets by raising the floor height of all the households by 50 cm. Although these two measures do not affect the flood depth or the number of days of inundation, they were considered in the calculations performed using Eqs. (1a) and (1b) to determine the amount of damage experienced by each household. The measures to be considered and where they should be introduced were the same as those in the study conducted by Yamagami (2020), which was designed after consulting the local planners and in reference to flood risk management projects with similar sizes of beneficiaries in low- or middle-income countries. For educational support, support for secondary education was considered. Although many students in Myanmar are enrolled in primary education, the enrollment rates drop sharply for secondary education. Therefore, for households where children are unable to attend middle school for financial reasons, we considered providing all the expenses necessary to attend middle school for four years. In this study, we assumed that these policies would be introduced in 2040 and calculated their impact until 2070. An overview of this methodology is shown in Fig. 2.

Figure 3 shows the total amount of damage caused by floods over 31 years for each household, while Fig. 4 shows the damage rate, which was calculated by dividing the total damage by the amount of savings. The horizontal axes of the two figures show the rank of each household, sorted by income in ascending order; in other words, the households on the right of the figures have higher incomes. Figure 3 shows that the amount of flood damage tends to be smaller for households with lower income. This can be attributed to the fact that the poor have less to lose in the event of a flood because they have smaller incomes and fewer assets and live in cheaper houses. On the other hand, as can be seen from Fig. 4, the damage rate is expected to be higher for poor households with smaller savings, which quantitatively indicates the high flood vulnerability of the poor. Specifically, many households with incomes in the bottom 10% have damage rates as high as 40-80, i.e., their cumulative losses are 40-80 times their savings; to buffer such losses, they would need to reduce their daily expenditures drastically, which would be a severe burden.

Comparison of Effects of Individual Policies

To begin with, the cost-benefit ratio for each policy is shown in Table 2. Here, the benefit is the sum of the reduction in the flood damage and the amount of increased income based on improvements in the education level. All six policies have a cost-benefit ratio of more than 1. In particular, the cost-benefit ratio of dredging & widening is the highest. In the case of educational support, the cost-benefit ratio is moderately high: although the benefit itself is small, the cost is extremely small too. Next, the percentage reduction in the damage rate for the individual households was calculated for each policy. Table 3 shows the average percentage reduction in the damage rate for all 416 households and for the 41 households with income levels in the bottom 10% (hereafter referred to as the poor). In addition, to compare the cost-effectiveness of the policies, the percentage reduction divided by the cost is also shown. The percentage reduction in the damage rate was also the greatest for embankment, which had the largest benefit. On the other hand, educational support, whose benefit itself was the smallest among the six policies, was found to be moderately effective in reducing vulnerability.
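For reference, the vulnerability indicator behind Fig. 4 and Table 3 reduces to a one-line computation; the handling of zero-savings households below is our convention, not stated in the text.

```python
def damage_rate(total_damage: float, total_savings: float) -> float:
    """Flood-vulnerability indicator: cumulative flood damage over the
    31-year horizon divided by cumulative savings. Households that
    could not save at all are assigned infinite vulnerability (our
    convention)."""
    return total_damage / total_savings if total_savings > 0 else float("inf")

# A bottom-decile household whose losses are 60 times its savings:
# damage_rate(60.0e6, 1.0e6) -> 60.0, within the 40-80 range reported above.
```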
In addition, embankment reduces the vulnerability of the non-poor more than that of the poor, since the percentage reduction in the damage rate for all households was higher than that for the poor. On the other hand, the quantitative results also show that educational support contributes significantly to reducing the vulnerability of the poor. The results of the cost-benefit analysis show that educational support is highly cost-effective: per unit cost, educational support is 13.5 times more effective than embankment and 6.0 times more effective than dredging & widening.

Verification of Results Through Comparison with Those of Previous Studies

To begin with, the results for flood damage are consistent with those of previous studies. Although the amount of damage caused to the poor by flooding itself is small, owing to their low assets and income, the negative impact of floods on their livelihoods is larger because of their low income and savings. These results also suggest that the negative impact of disasters on the poor is not readily reflected in macroeconomic analyses that focus on the extent of damage experienced by the region in general, while in fact the poor are the most severely affected. Therefore, to identify the policies suitable for mitigating the impact of floods on the poor, it is essential to focus not only on the damage reduction rate but also on flood vulnerability, as was done in this study. Next, the cost-benefit ratio calculations are compared with those of Yamagami (2020), who used the same inundation calculation results. The cost-benefit ratios calculated in this study were smaller than those reported by Yamagami (2020), largely because of how the benefits of improved income, accounted for as indirect benefits, were treated by Yamagami (2020). The following two factors explain the differences in the ratios. The first is that, in the above-mentioned study, the average income was used in the calculations instead of the income of each household, and the second is that it was assumed that living in an area that is less prone to floods improves the income regardless of one's financial situation. In contrast, in this study, the income of the group with the highest investment choice rate with respect to education was set very low based on the household survey data, and the poorest group was assumed to have no ability to invest in education even in the absence of flood damage. Therefore, it was difficult to improve livelihoods through investments in education in general. Furthermore, even when livelihoods were improved, the improvement in income was small and barely reflected in the overall benefits to the community. We believe that there is room for further study on the effects of investment choice rates and the corresponding rate of income growth. For the same reason, the relationships among the cost-benefit ratios of the various flood control measures were also different. In the previous study, the indirect benefits of all the policies were large, and the total benefits did not differ significantly; thus, the cost-benefit ratio was generally larger for those policies with lower costs. In this study, on the other hand, the cost-benefit ratio tended to be larger for those policies that significantly reduced flood damage, because the indirect benefits were not as large. Finally, the cost-benefit ratio of 3.4 for educational support is generally consistent with the ratios (2.2-3.7) proposed by Psacharopoulos (2014) for secondary education in Asia.
Effectiveness of Policy in Reducing Inequality

Using the savings estimation model developed in this study, we examined how economic inequality within the region changes as incomes increase because of improvements in education levels, as measured by the Gini coefficient. Using the survey data on the 416 households, the current (2019) Gini coefficient was calculated to be 37.9%, which ranks 92nd in the World Bank's ranking of 167 countries, indicating greater inequality than the global average. In addition, according to the World Bank, Myanmar's national Gini coefficient was 30.7% as of 2017, ranking 28th in the world. This means that the inequality in Bago, Myanmar, is particularly severe. It was estimated that the Gini coefficient would change to 37.3% if embankments, which can reduce the vulnerability of the poor the most, were to be built, and to 36.9% if educational support were to be provided. In addition, it was found that educational support reduced the Gini coefficient by 1.0 percentage point, placing the region 85th in the world ranking of inequality. As an additional analysis, we also calculated the change in the Gini coefficient while assuming that all the households received secondary education and support for higher education, instead of only those who wished to receive it, as was done in the initial part of this study. The results are shown in Fig. 5.

Fig. 5 Changes in Gini coefficient in region after educational support of different levels

Education contributes greatly to the reduction in inequality in the region. However, many households do not have access to education owing to reasons other than a lack of money, and very few households have access to higher education. Therefore, policies that encourage education in ways other than monetary subsidies are essential.
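The Gini coefficients quoted above (e.g., 37.9% before and 36.9% after educational support) can be computed from the 416 household incomes with the standard sorted-index formula; the sketch below is a generic implementation, not the authors' code.

```python
def gini_percent(incomes: list[float]) -> float:
    """Gini coefficient in percent via the sorted-index formulation:
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with x sorted ascending."""
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return 100.0 * weighted / (n * sum(xs))

# Perfect equality gives 0; e.g. gini_percent([1, 1, 1, 1]) == 0.0.
```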
Sensitivity Analysis

Finally, to investigate the uncertainty of the model, a sensitivity analysis was performed. Of the various model parameters, we targeted the savings rate and conducted the same analysis after doubling and quintupling it, as it is an important parameter even though it is not directly based on the household survey data collected from the target area. Figure 6 shows the cost-benefit ratio for each policy in Table 2 as recalculated using the different savings rates, while Fig. 7 shows the average percentage reduction in the damage rate for the poor corresponding to each policy in Table 3, likewise recalculated using the different savings rates. Figure 6 shows that the cost-benefit ratio increases as the savings rate increases. This is because, when the savings rate is high, more households can invest in education, which is expected to increase their income. Therefore, the cost-benefit ratio increases for those measures that are more effective in increasing the income of the poor by improving their education level, rather than for those that merely reduce the damage itself. This result also suggests that encouraging people to save more will have some effect. On the other hand, Fig. 7 shows that the percentage reduction in the vulnerability of the poor is almost independent of the savings rate. This indicates that the marginal effects of the flood control measures and educational support do not grow particularly large, because increasing savings alone already reduces the flood vulnerability. This suggests that the rate of reduction of the flood vulnerability as defined in this study is insensitive to the savings rate and therefore has a high degree of certainty.

Fig. 7 Results of sensitivity analysis of average percentage reduction in damage rate for the poor

Policy Implications

Next, we discuss the appropriate policies under the conditions investigated in this study, given the results described above. Embankments have the greatest monetary benefit and lead to the greatest reduction in the vulnerability of the poor as well as that of the entire region (Table 3). However, the cost of embankments is enormous. Therefore, dredging & widening, which has the largest cost-benefit ratio (7.4), is the most appropriate method for ensuring cost-effectiveness (Table 2). Dredging & widening is also expected to reduce the vulnerability of the poor by 16.4% (Table 3). On the other hand, in developing countries where budget constraints are particularly severe, educational support may be a good option. The cost-benefit ratio of educational support is 3.4, which is lower than those of dredging & widening, embankment, and building elevation (Table 2); however, for the entire region, the benefits exceed the cost. Educational support is a policy specifically designed to reduce the vulnerability of the poor who had previously given up on education. Not only does it reduce flood vulnerability, but it also contributes significantly to reducing income inequality within the region. Therefore, even though its absolute benefits may not be large, educational support may be effective in some areas because it can efficiently reduce the vulnerability of the poor with a small budget while also providing overall benefits.

Conclusions

Although natural disasters can cause serious damage and result in economic losses, especially to the poor, evaluations have rarely focused on the poor, whose losses may be small. Therefore, in this study, we aimed to quantitatively evaluate various disaster prevention policies in terms of the vulnerability of the poor and to compare, at the household level, the effects of both flood control measures and educational support, which are effective in reducing flood vulnerability. This comparison was possible only after quantitatively linking floods, savings, and education, which are the factors that affect the flood vulnerability of the poor, and quantifying the flood vulnerability based on the amount of savings. The results suggest that educational support is also effective in reducing the flood vulnerability of the poor and is particularly economical owing to its low cost. This is a conclusion one could not have arrived at through the cost-benefit analyses usually used for policy evaluation. In other words, even without the budget for large-scale flood control measures, we can proactively reduce the vulnerability of the poor by supporting education. In this study, owing to a lack of household survey data, some of the settings of the developed model were simplified or assigned values based on assumptions. Therefore, collecting more detailed household survey data would allow us to estimate the effects of disaster prevention policies more accurately. Furthermore, while the model employed in this study was developed using household survey data for Bago, Myanmar, the model framework and the method used for identifying the flood vulnerability should be applicable to other regions as well. Therefore, the next step would be to develop a more general assessment method using a larger household survey dataset and inundation calculation results for other regions.
2023-02-23T15:11:31.059Z
2022-04-26T00:00:00.000
{ "year": 2022, "sha1": "2fc1dd29661a8c1a088969fad9a12a77940dc989", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s41885-022-00112-y.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "2fc1dd29661a8c1a088969fad9a12a77940dc989", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
54047099
pes2o/s2orc
v3-fos-license
Origin and residence time of shallow groundwater resources in Lagos coastal basin, south-west Nigeria: An isotopic approach

Knowledge of the source of water in the Lagos coastal basin (LCB) groundwater system was found to be vital to the future development and management of the system. Stable and radioactive isotopic measurements were employed to unravel the source of recharge and the residence time of the shallow groundwater system, based on sampling of groundwater, surface water and rainfall conducted in 2016 and 2017. The concentrations of tritium in the groundwater samples were very low and ranged from less than 1 to 2.8 TU, while measured 14C contents ranged from 59.1 to 88 pMC. The δ18O values of the groundwater samples ranged from -4.81 to -3.98‰, while the δ2H values ranged from -24.75 to -19.70‰, for the wet and dry seasons, respectively. The results indicated the absence of paleo-recharge; rather, all groundwater in the basin was found to be essentially of meteoric origin with intermittent surface water contributions. Moreover, shallow groundwater and surface water show considerable variations in isotopic composition, reflecting evaporation and preservation of seasonal fluctuation. Although the tritium contents were generally low, they proved useful in the identification of recent active recharge taking place across the basin. The deduced radiocarbon ages reflected the presence of "modern water" and thus support the presence of present-day recharge to the groundwater system. Therefore, the source of the shallow groundwater recharge is actively renewable, particularly during the wet season, and thus water exploitation is potentially sustainable in the basin.

Introduction

Surface water and groundwater interaction studies are vital to implementing effective management of water resources (Winter, 1999; Sophocleous, 2002; Abiye, 2013). It is a known fact that well-managed water resources are vital for sustainable socioeconomic development (Abiye, 2013). Groundwater plays a vital role in determining the social and economic growth of Lagos and acts as the major source of potable water supply to the vast majority of the coastal city's inhabitants (Akoteyan and Soladoye, 2011; NERC, 2003). Hence, it is imperative to understand the hydrogeological environment (topography, geology, and climate) in order to investigate groundwater-surface water interactions (Sophocleous, 2002; Abiye, 2013). Lagos is located within the coastal domain of southwest Nigeria (6°34′50″N, 3°19′59″E), consisting dominantly of lagoons and coastal creeks developed by a barrier of beaches and situated on a series of stratified sedimentary rocks consisting of silts, clay, peats or coal associated with coastal plants (Oyedele and Momoh, 2009). The Lagos metropolis was until 1991 the Federal Capital of Nigeria and is still a commercial epicenter, despite the movement of the seat of power to Abuja. The rate of population growth is about 300,000 persons per annum, with an average density of 20,000/km² (Atakpo et al., 2011). With a total land area of about 3,600 km² and an annual growth rate of 4%, the coastal city was one of the world's five megacities in 2015 (UN, 2011). The exponential population growth, as well as the rapid industrial expansion in the metropolis, has consequently increased the demand for water for domestic and industrial purposes. In Lagos, there is pressure on groundwater resources, further worsened by anthropogenic pollution.
A large number of private wells (boreholes and hand-dug wells) have been sunk without recourse to the source and amount of replenishment of the aquifer (Oyeyemi et al., 2015). However, sustainable development of groundwater requires an understanding of its origin and renewability (Acheampong and Hess, 2000; Chen et al., 2006; Abiye, 2011). The study area (Fig. 1) falls within the southern coastal belt of Lagos, which is an integral component of the 1,000-km stretch of the Nigerian Atlantic coastline extending from Accra to Lagos within the Dahomey basin (Oyeyemi et al., 2015). This area constitutes a vulnerable ecosystem subject to severe anthropogenic and natural hazards, such as sea-level rise, land subsidence, flooding, coastal erosion and salinization of groundwater (Bear et al., 1999; Adelana and MacDonald, 2008; Akoteyan and Balogun, 2012). A coastal aquifer is particularly damaged by enhanced pumping for water supply, as this leads, as the case may be, to lowering of the water table, increased land subsidence and intrusion of saline water into fresh aquifers (Barlow, 2003; Kumar et al., 2007). Thus, a major problem in drinking water quality and management of domestic water supply in this coastal strip is saltwater intrusion into hand-dug wells and boreholes (Oladapo et al., 2014). Many wells have not been completed, while several existing hand-dug wells and boreholes have been abandoned or decommissioned due to saltwater intrusion over time (Oyeyemi et al., 2015). In order to enhance our understanding of the hydrodynamic nature of the shallow aquifers in this basin for sustainable management, two questions are pertinent to the sustainability of groundwater resources: How old is the rechargeable water supply, i.e., when did recharge happen? And what is the source of recharge for the shallow groundwater: are we 'mining' groundwater or otherwise? The most common approach used to address these questions is the environmental isotope technique, an important tool in hydrogeological studies that had not been applied to the Lagos coastal basin (LCB) prior to this research. The stable isotopes 2H and 18O occur naturally in precipitation and provide a seasonal meteoric signal in temperate, continental systems that is often attenuated in shallow groundwater (Clark and Fritz, 1997; Abiye, 2013). Environmental isotopes (stable and radiogenic) as a tool for hydrogeological studies have gained popularity in recent times and have been used to gain insight into subsurface flow and recharge conditions (Abiye, 2011). Naturally occurring stable (2H, 18O) and radiogenic (3H and 14C) isotopes of water have been widely used over the past 40 years to solve problems related to groundwater recharge and its residence time (Fontes, 1980; Gonfiantini, 1986; Clark and Fritz, 1997; Chen et al., 2006). Isotopes in combination with other hydrochemical and geophysical studies have a wide range of applications. These include, but are not limited to, the determination of the age and origin of different groundwaters, delineation of flow systems, quantification of mass-balance relationships, interaction between surface water and groundwater, the occurrence of groundwater, and the localization and delimitation of groundwater catchment areas (Fontes, 1980; Gonfiantini, 1986; Siegel, 1991; Gat, 1996; Clark and Fritz, 1997; Rozanski et al., 1997).
We herein present the results of a pioneering attempt at using environmental isotopes to investigate and improve our understanding of the shallow groundwater system in the Lagos coastal basin (LCB) of Nigeria, specifically with respect to the age and source of recharge and a possible interaction between surface water and groundwater (Fig. 1).

Geological and hydrogeological setting

The study area is situated in the coastal strip of the Dahomey basin, western Nigeria (Fig. 2). It is an extensive sedimentary basin in the Gulf of Guinea. It extends from southeastern Ghana (Keta basin) in the west, through southern Togo and the southern Benin Republic, to thin out at the Okitipupa ridge in Nigeria in the east (Billman, 1976). The basin was initiated during the Mesozoic in response to the separation of the African and South American land masses (Gondwanaland) and the subsequent opening of the Atlantic Ocean (Burke et al., 1971; Whiteman, 1982). Deposition began in fault-associated depressions developed in the crystalline basement complex as a result of the rift-generated basement subsidence during the Early Cretaceous (Neocomian). The subsidence gave rise to the deposition of a very thick sequence of various types of sedimentary rocks over the entire basin (Lehner and De Ruiter, 1977). Over 1,400 meters of these sediments are preserved in coastal areas in Nigeria and offshore in the Benin Republic (Billman, 1976; Omatsola and Adegoke, 1981). In the Santonian, both the basement rocks and the sediments in the basin were tilted and block-faulted, subsequently forming a series of horsts and grabens during the Maastrichtian. The basin became quiescent during this period and experienced only gentle subsidence (Omatsola and Adegoke, 1981). The coastal zone of Lagos is made up of creeks and lagoons developed by barrier beaches associated with sand deposition in the geologic past. The western limit of the basin is marked by faults, while it is bounded to the east by the Benin hinge line demarcating the western limit of the Niger Delta. The basin geology is composed of sedimentary rocks and surficial alluvial deposits. The lowland is essentially composed of loose, light grey sand with varying proportions of vegetation matter, while reddish and brown loamy soils characterize the upland (Olowofela et al., 2012). The area is essentially underlain by interbedded sands, gravelly sands, silts, and clays (Akoteyon et al., 2011). The subsurface geology reveals two basic lithologies, viz. an alternating sequence of clay and sand deposits (Akoteyon et al., 2011; Olowofela et al., 2012). These deposits are intercalated in places with sandy clay or clayey sand and occasionally with vegetable remains and peat (Ayolabi and Peters, 2005). Hydrogeologically, the water-bearing strata of Lagos consist of sand, gravel, or a mixture of the two, ranging from fine through medium to coarse sand and gravel (Adeleye, 1975). The four major aquiferous units in the Lagos metropolis include the Abeokuta group, the Ewekoro formation, the coastal plain sands (CPS) and recent sediments (Jones and Hockey, 1964). There is a general decrease in aquifer thickness from the north towards the coast in the south, and the percentage composition of sand also varies markedly from north to south (Longe et al., 1987). The aquifers vary from unconfined through semi-confined to confined occurrence with depth. The CPS is the most productive and most exploited aquifer in Lagos state.
However, aquifer depth estimates differ among authors. The first aquifer extends from ground level to about 12 m below the surface; this aquifer is clearly prone to various forms of pollution owing to its limited depth. The second aquifer exists between 20 m and 100 m below sea level. The third aquifer was intercepted in the central part of Lagos at depths ranging from 130 m to 160 m below sea level. The fourth aquifer can be accessed at a depth of 450 m below sea level, and only a few boreholes tap this aquifer (Jones and Hockey, 1964). This account, however, differs from Onwuka's (1990) observations. He classified the hydrogeologic units of the Dahomey basin groundwater into three main hydro-stratigraphic units, viz. the upper aquifer (alluvium and coastal plain sands), the middle aquifer (Ilaro and Ewekoro Formations) and the lower aquifer (Abeokuta Formation), the last of which is considered the aquifer best protected from pollution.

Materials and methods

Our sampling sites comprised shallow groundwater wells, mostly hand-dug, and surface water located around and across the study area. The choice and number of sampling sites were constrained by both the availability of and accessibility to dug wells. Thirty shallow groundwater samples, five surface water samples, and one rainwater sample were collected for environmental isotope analyses between September and October 2016 and in February 2017, i.e., in two hydrological seasons. Records of well completion were not available to us or were non-existent. In most cases, we were able to sample groundwater directly from boreholes, while in places we were compelled to take samples via household taps. The surface water samples were taken at least 250 m away from the shore to ensure even mixing and adequate representation. The sampled waters were collected unfiltered and stored unpreserved in tightly sealed plastic bottles. To ensure that the groundwater samples taken were representative of the aquifer at any particular location and depth, the wells were thoroughly mixed and pumped, as the case may be, prior to sampling. The water level depth was measured with the aid of a TLC meter. A total of 30 samples for the stable isotopes 2H and 18O, 21 samples for radioactive tritium and 9 samples for 14C and 13C isotopes were collected and analyzed. Samples for 18O and 2H were collected in 10-ml glass bottles with airtight caps. The samples for tritium analysis were collected in sealed 1-L plastic bottles. The samples for carbon isotope determination were collected by precipitating BaCO3, by adding BaCl2 to 50 L of water previously brought to pH ≥ 12 by the addition of NaOH. The stable isotopes of deuterium and oxygen were analyzed at the hydrogeology laboratory of the School of Geosciences, University of the Witwatersrand, South Africa, using a Liquid Water Isotope Analyzer (model 45-EP), while the tritium, 14C, and 13C analyses were carried out at the iThemba laboratory, Gauteng, South Africa. All samples were replicated. Results are represented in the conventional V-SMOW normalization. The precision obtained was 0.05‰ and 1‰ for 18O and 2H, respectively. The stable isotopic composition of a water sample is reported in δ notation as given by Eq. (i) (Gonfiantini, 1978):

δ (‰) = (R_sample / R_standard − 1) × 1000, (i)

where R denotes the ratio of the heavy to the light isotope (e.g., 18O/16O). A positive value connotes enrichment and a negative value depletion in the heavy isotope relative to the standard. Also, precipitation originating from higher altitudes is more isotopically depleted in 2H and 18O than precipitation at lower altitudes.
Therefore, these stable isotopic ratios are useful in evaluating the precipitation source areas of recharge to an aquifer (Mazor, 1991). Tritium was determined on electrolytically enriched water samples by low-level proportional counting. The results are reported in tritium units (TU) with a typical error of ±1 TU (Echinger, 1980), while 14C of dissolved inorganic carbon (DIC) was determined radiometrically by liquid scintillation counting after conversion to benzene (Fontes, 1971). The precision for 14C is between 0.7 and 1.0 pMC. The δ13C was determined spectrometrically and is expressed as δ-values relative to the V-PDB (Vienna Pee Dee Belemnite) standard. The precision for δ13C is ±0.5‰. Radioactive isotopes of tritium (3H) and carbon (14C) occur in natural waters in low but detectable amounts (Loehnert, 1988).

Results and discussion

The detailed physical parameters measured in situ in the field are contained in Tables 1 and 2, while the analytical isotope results are presented in Tables 3, 4 and 5, which summarize the isotopic data set for both the dry and wet seasons. It should be noted that the 13C, 14C, and 3H analyses represent only the wet season, as the results for the dry season were not available at the time of writing this paper.

Deuterium (2H) and oxygen-18

The isotope compositions (oxygen and hydrogen) of groundwater, surface water from the lagoon and creek, and seawater from the Atlantic Ocean are shown in Tables 3 and 4. On a global average, the general meteoric relationship between 18O and 2H was found to be linear for natural water and has been defined by the global meteoric water line (GMWL) (Clark and Fritz, 1997), with the following equation for fresh water (Eq. (ii)):

δ2H = 8 δ18O + 10 (ii)

The deuterium excess (d-excess) is given by Eq. (iii):

d = δ2H − 8 δ18O (iii)

The location of the data relative to the GMWL indicates the source of the air moisture. A local meteoric water line (LMWL), defined by Eq. (iv), was established by Loehnert (1988) in Ore Agbabu, southwestern Nigeria, whereas the LMWL for the Cotonou GNIP station, an extension of the Lagos coastal basin (LCB) in the immediately neighboring country, for the years 2005-2015 is defined by Eq. (v). This study adopts the Cotonou LMWL because of the common features shared by both basins and the reliability of the GNIP data. The LMWL reflects low vapor humidity relative to the GMWL, as shown by its lower slope value. The slope is a function of the humidity, temperature and other factors (Gat and Gonfiantini, 1981; Rozanski et al., 1997) of a particular groundwater territory. The plots in Figs. 3 and 4 reveal that the groundwater samples plot around and along the LMWL, indicating that the groundwaters in the coastal aquifer are of meteoric origin and that a large proportion is subject to evaporation effects. Furthermore, the samples that plot above the LMWL indicate rapid infiltration of recharge water before evaporation, while samples that plot below the LMWL were essentially subjected to evaporation prior to recharge. Moreover, the isotopic compositions of the groundwater in the dry season are generally enriched in 18O and 2H, resulting in a shift to the right of the meteoric line and indicating higher evaporation relative to the wet season.
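The displayed forms of Eqs. (iv) and (v) are not reproduced in the extracted text, so the sketch below implements only the generic relationships of Eqs. (ii) and (iii); the LMWL slope and intercept are parameters to be supplied from the Cotonou GNIP fit.

```python
def d_excess(d2h: float, d18o: float) -> float:
    """Deuterium excess, Eq. (iii): d = delta2H - 8 * delta18O,
    both values in permil vs V-SMOW."""
    return d2h - 8.0 * d18o

def gmwl_d2h(d18o: float) -> float:
    """Global meteoric water line, Eq. (ii): delta2H = 8*delta18O + 10."""
    return 8.0 * d18o + 10.0

def plots_above_mwl(d2h: float, d18o: float,
                    slope: float = 8.0, intercept: float = 10.0) -> bool:
    """True if a sample plots above a meteoric water line; pass the
    fitted Cotonou LMWL coefficients in place of the GMWL defaults."""
    return d2h > slope * d18o + intercept
```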
The seasonal variations can be attributed to the amount effect: as noted by Dansgaard (1964), at any given location the heavier rainfall is isotopically more depleted in composition than the light-intensity rainfall of the dry season, as the air moisture is subjected to less Rayleigh condensation. These results are in good agreement with several observations for low-latitude marine sites of the IAEA monitoring stations (IAEA, 1992). The low slopes of 3.48 and 3.89 exhibited by the wet and dry seasons, respectively, are also suggestive of the evaporation process. According to Gat and Gonfiantini (1981) and Sheppard (1986), evaporation from a freshwater surface commonly results in an evaporation line with a lower slope of 3.5-6 in a normal range of relative atmospheric humidities of 75-10. The position of the rainwater above the LMWL could be related to the condensation effect, which is controlled by regional air circulation from the South Atlantic Ocean. According to Clark and Fritz (1997) and Abiye (2013), this occurrence could be due to low humidity in the vapor. Generally, the surface waters are highly enriched with respect to 18O and 2H, with the higher values recorded in the seawater relative to the lagoon water (Fig. 3, Table 3). Specifically, the seawater samples collected in both seasons (wet and dry) close to the seashore south of the study area have isotopic composition values higher than 0‰ for both isotopes, as reported for modern oceanic seawater. In the dry season, surface water samples collected close to the sea, at the Apapa creek (Sw5) and the Lagos lagoon (Sw1), exhibit both marine and evaporation influences on the isotopic composition, with positive stable isotope values. In contrast, the Ajah lagoon (Sw3), at a greater distance from the sea, has isotopic compositions that are reflective only of strong evaporation (Table 5 and Fig. 4). In the wet season, on the other hand, both the Lagos and Ajah lagoons (Sw1 and Sw3) have depleted values similar to those of the groundwaters with respect to 2H and 18O and plot above the LMWL, in the same region as parts of the groundwater data on the LMWL (Fig. 3, Table 3). According to Clark and Fritz (1997), it is rare to find surface water and groundwater plotting above the GMWL, but in low-humidity regions re-evaporation of precipitation from local surface waters creates vapor masses with isotopic contents that plot above the local meteoric water line. The shift above the LMWL thus indicates such re-evaporated local moisture. The d-excess distribution has a range of variation from -2.29‰ to 13.94‰ and from 1.45‰ to 13.38‰ for the wet and dry seasons, respectively. These ranges of values reflect the influence of both local and regional moisture circulation, indicating highly enriched humidity in the study area. The occurrence of lower d-excess values at higher oxygen (δ18O) values connotes evaporative enrichment from regional circulation. On a global scale, the average d-excess value is known to be about 10‰ but differs with variations in humidity, wind speed, and sea surface temperature during evaporation; accordingly, the low d-excess values reflect high humidity during formation of the vapor mass (Clark and Fritz, 1997; Abiye, 2013).
The groundwater isotope contents of the LCB in the south are isotopically enriched relative to the isotope compositions of groundwater studied in the northern parts of Nigeria (Kehinde, 1993; Goni and Edmunds, 2001; Adelana et al., 2003), but are similar in isotopic composition to groundwaters from the basement and sedimentary basins of southwestern Nigeria (Loehnert, 1988) and to those reported from other West African countries, e.g. Ghana (Acheampong and Hess, 2000; Jorgensen and Banoeng-Yakubo, 2001). The observed south-north depletion of stable isotope compositions in groundwaters may be attributed to both the altitude and continental effects on the incident rain.

Tritium has been reliably employed to distinguish groundwater recharged during the pre-bomb era from younger water (Clark and Fritz, 1997). Its variation provides insight into local recharge and circulation mechanisms. Following the thermonuclear tests of the early sixties, which injected 3H into the atmosphere, the tritium content in precipitation increased up to 1000-fold, especially in the northern hemisphere. Since 1963, the peak tritium concentration has decreased to natural values in winter and to about double the natural values in summer. This event consequently affected the groundwater tritium content as aquifers were recharged. Therefore, the 3H content can often be used to determine dates ante quem and post quem. For example, water with 3H < 5 TU must have a residence time of more than 40 years, while waters having 3H > 20 TU must date after 1961 (Clark and Fritz, 1997). The tritium values ranged from 0.1 to 2.8 TU for the shallow unconfined aquifer and from 1.7 to 2.0 TU for surface water, with 2.2 TU for a single rainwater sample (Tables 3 and 5). Rainwater of this composition, when dominating recharge, would be considered as derived from the pre-bomb period (Loehnert, 1988). Shallow groundwater and surface waters, however, appear to be a proper mixture of infiltrated rainwater, with tritium contents close to that of the precipitation. Likewise, the closeness of the tritium values of the lagoons (Sw1 and Sw3) and the groundwater demonstrates intense and recurring interaction of these water bodies. Similarly, Loehnert (1998) reported a tritium value of 2 TU for rainwater during the August break in parts of southwestern Nigeria. The groundwater data, however, reveal an apparent clustering into two distinct groups: a group of relatively young, immature shallow groundwater having tritium values >1 TU, and an older group of more mature groundwater, or an admixture of old and recent recharge, with values of <1 TU. Based on the groundwater residence times proposed by Clark and Fritz (1997), two distinct recharges were discernible, viz. sub-modern water, recharged prior to 1952, with tritium values of <0.8 TU, and a mixture of sub-modern and recent recharge with tritium values between 0.8 and 4 TU. For all the samples, the tritium values for both groundwater and surface waters show significant contributions equal to or less than that found in precipitation, the exceptions being samples Gw13 and Gw37 with extremely low tritium values. This implies recent contributions or ongoing recharge to the aquifer system and thus a short transit time through the unsaturated zone. This assertion is supported by the plot of the tritium and 14C contents of the groundwater samples shown in Fig. 7.
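The Clark and Fritz (1997) residence-time classes applied above translate into a simple rule; the label used for values above 4 TU is our extrapolation, since the text discusses only the two lower classes.

```python
def classify_by_tritium(tu: float) -> str:
    """Qualitative recharge classes after Clark and Fritz (1997):
    <0.8 TU sub-modern (pre-1952); 0.8-4 TU mixed sub-modern and
    recent recharge; higher values treated here as recent recharge."""
    if tu < 0.8:
        return "sub-modern (recharged prior to 1952)"
    if tu <= 4.0:
        return "mixture of sub-modern and recent recharge"
    return "recent recharge"

# E.g. classify_by_tritium(2.8) -> "mixture of sub-modern and recent recharge"
```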
Sample Gw5, with a tritium value higher than the rainwater content, is also worth elaborating on. This could be attributed either to highly tritiated rainwater preserved in less permeable layers or to contamination by wastes, the well being open and abandoned. However, the observed background values of tritium in these groundwater samples connote recharge under modern climatic conditions. The tritium concentrations >1 TU, with correspondingly high 14C, characterizing the entire study area except for location Gw13, indicate that these waters are quite recent. In addition, the good permeability of the aquifer in the study area also ensures short residence times for the tritiated water, while the varying tritium concentrations may depict either variable residence times or infiltration of water with variable tritium content into the aquifer (Kehinde, 1993). However, the anomalous occurrence of poorly tritiated shallow groundwater (Gw13 and Gw37), regarded as sub-recent water, may suggest any of the following: a probable mixture of young and old (low-tritium) water, or at least an admixture of a certain amount of recent water; the presence of relatively impermeable sediments, which increases residence times and allows the tritium content to decay; or, additionally, recharge of the aquifer by younger but poorly tritiated rainwater (Kehinde, 1993). Groundwater samples with zero (or near-zero) tritium values, according to Abiye (2013), have been in circulation for a long time (>50 years) and are not derived from present-day rainfall. In addition, most of the groundwater samples are characterized by low total mineralization, with total dissolved solids (TDS) values <500 ppm and electrical conductivity (EC) values <750 μS/cm. These values are within the recommended standards of WHO (2006) for potable water and equally indicate recently recharged fresh groundwater in the unconfined shallow aquifer of the basin (Tables 1 and 2). However, a few samples exhibit higher values.

Carbon-14 and δ13C

The carbon isotopes 13C and 12C are essential and vital tools for quantifying the interaction between water and rock in the case of 14C age determination of groundwater. Carbon isotope analyses were carried out in the study area to establish an input function for dating the groundwater and unraveling the source of carbon in the basin (Table 3). The natural concentrations of both carbon-14 and tritium in waters increased as a result of the thermonuclear testing of the early 1960s, and therefore elevated concentrations of 3H and 14C in groundwater indicate recent recharge. The carbon-14 content, however, decreases in old waters by radioactive decay, making it a useful age determination tool, while carbon-13 determinations are useful in identifying the origin of carbon in groundwater (Loehnert, 1988). The δ13C values of the groundwater vary from -22.95 to -12.56‰. These values indicate a biogenic origin and the dominance of shallow water-soil interaction. Notable carbon-13 values for some plant materials range from -23 to -3‰, while those for carbonate minerals are between -2 and 0‰ (Faure, 1986; Mazor, 1991; Ferronsky and Polyakov, 1982). In addition, the depleted 13C values connote that little or no marine carbonate rock with enriched 13C values was available for dissolution in the subsurface. In the present study, the 14C activity was found to vary from 59.1 to 88 pMC for groundwater from the shallow interstitial aquifer.
In recently recharged water, the 14C content is expected to be close to or above 100 pMC, because the 14C is derived from soil CO2 and is likely to contain bomb 14C. The relatively high activity of 14C observed in the basin groundwater is indicative of a young, shallow and locally recharged source of water. According to Dorr et al. (1987), the 14C content of shallow groundwater ranges from 90 pMC to about 50 pMC (cf. Le Gal La Salle et al., 2001). Also, based on the carbon-14 and carbon-13 contents, most of the samples lack evidence of mixing, indicating that the variation in carbon-14 activity is essentially due to radioactive decay and represents the true residence time of the groundwater. Furthermore, diffusion of CO2 gas from the unsaturated zone to the groundwater could also affect the measured carbon activity in water (Fontes and Edmunds, 1989; Le Gal La Salle et al., 2001). When diffusion occurs, modern CO2 increases the carbon-14 activity of the groundwater and consequently reduces the estimated groundwater residence time (Le Gal La Salle et al., 2001). Diffusion is known to occur in modern groundwater with high pH, where dissolution of CO2 gas is enhanced (Fontes and Edmunds, 1989; Stumm and Morgan, 1981). Thus, the low pH characterizing the study area suggests that the diffusion process has no effect on the estimated carbon-14 activity. Similarly, the relative agreement between tritium and carbon-14 (Fig. 7) suggests that the carbon-14 signatures remain unmodified and still reflect the groundwater renewal rate. It can thus be considered that diffusion is not important here. The residence time t of the groundwater provides useful practical implications for water resource management and can be estimated through the decay equation (Eq. (viii)):

t = (1/λ) × ln(a_o 14C / a_t 14C), (viii)

where a_t 14C is the measured activity, a_o 14C is the initial activity, and λ is the 14C decay constant (λ = ln 2 / T1/2). Rock-water interaction enhances the dissolution of carbonate and reduces the 14C activity of groundwater through exchange with dead carbon in the aquifer (Bajjali et al., 1997). The initial 14C activity (a_o 14C) in the atmosphere is assumed to be 100 percent modern carbon (pMC); this is adopted as the 14C activity of modern CO2 during recharge. In addition, we can also determine the dilution factor for dating purposes. A dilution (water-rock interaction) factor known as 'q' reduces the initial activity of the sample for reasons other than decay; the reduction in 14C activity through geochemical reactions in the recharge water may be used to estimate the dilution factor by considering Eq. (ix):

q = A_recharge / A_atmosphere, (ix)

where A_recharge and A_atmosphere are the average values of 14C measured in the recharge area and the atmosphere, respectively. Generally, owing to the dominance of high 14C activity, the groundwater of this region is continuously renewed and can be qualitatively characterized as possessing a relatively high recharge/draft ratio.
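A minimal numerical sketch of Eqs. (viii) and (ix) follows; the 5,730-year half-life is the modern value (conventional radiocarbon ages often use the 5,568-year Libby half-life instead), and the function names are ours.

```python
import math

HALF_LIFE_14C = 5_730.0  # years (modern value; the Libby value is 5,568)

def residence_time(a_measured: float, a_initial: float = 100.0,
                   half_life: float = HALF_LIFE_14C) -> float:
    """Apparent residence time from Eq. (viii):
    t = (half_life / ln 2) * ln(a0 / at), activities in pMC."""
    return half_life / math.log(2.0) * math.log(a_initial / a_measured)

def dilution_factor(a_recharge: float, a_atmosphere: float) -> float:
    """Dilution factor q of Eq. (ix): mean 14C activity measured in
    the recharge area relative to the atmospheric value."""
    return a_recharge / a_atmosphere

# For the measured range 59.1-88 pMC: residence_time(88.0) ~ 1.1 kyr
# and residence_time(59.1) ~ 4.3 kyr, i.e. up to "a few thousand
# years", consistent with the conclusions below.
```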
Aquifer recharge

The stable isotope data show that the groundwaters were subjected to various degrees of evaporation and as such exhibit some variation in stable isotopic composition and distribution along the LMWL. These variations are attributed to the different storms that recharged the aquifers. Considering the size and altitude of the study area, the major process that may leave any significant imprint on the isotopic composition of the rainfall depends on its amount and intensity. Therefore, the mean annual rainfall of about 1,800 mm in the study area could have produced the relatively depleted isotopic signature preserved in the shallow groundwater. From Figs. 3 and 4, it can be deduced that the groundwater originated essentially from local rainfall and that variations in its isotopic composition may be related to the prevailing climatic conditions. This observation is supported by the groundwater fluctuation plot deduced from the water level data (Fig. 8), which reveals the aquifer's sensitivity and quick response to precipitation during the wet season, denoted by a sharp rise in water level. In addition, some of the hand-dug wells away from the surface water courses have negligible or absolutely no water content during the dry season, indicating precipitation as the main source of recharge to the groundwater. Furthermore, in the wet season, the spatial distribution of isotopes (Figs. 3 and 4; Tables 3 and 4) in some of the groundwater reflects isotopic signatures similar to those found in the lagoon water samples. This similarity, together with the similar d-excess values, suggests that the lagoon also acts as a notable source of recharge to the groundwater. The general conceptual model (Fig. 9) shows the interaction between surface water and groundwater in the wet season. The exceptionally high TDS values, ranging from >500 to 971 ppm and from >500 to 5,336 ppm, and EC values, ranging from >1,200 to 3,387 μS/cm and from >1,200 to 5,338 μS/cm, above the recommended drinking water limits of WHO (2006), suggest an increase in salinity and also confirm the surface water-groundwater interaction. This assertion is further supported by the increase in salinity in both surface water and groundwater towards the ocean (Fig. 1 and Table 2). Generally, groundwater salinity decreases away from the saline sources towards the northeastern and eastern limits of the study area. In the dry season, in contrast, the lagoon water diminishes and exhibits isotopic signatures similar to those of the seawater, indicating a significant marine influence on the lagoon water lying next to the shore. This, on the other hand, indicates that the surface water probably does not provide a significant contribution to the recharge of the groundwater system in the dry season (Table 4, Fig. 4). Most of the groundwater samples are identified as young and modern in age by tritium and carbon-14, while sample Gw13 in Lagos Island represents an exception and is classified as sub-recent water resulting from an admixture of old and young waters. This localized zone of mixing could not be delineated with the present paucity of data. In summary, replenishment of the phreatic aquifer is believed to occur from local rain, flood flows, and surface waters.

Conclusions

Radiogenic isotopes (14C and 3H) and stable isotopes (18O, 2H and 13C) of groundwater in the LCB of Nigeria have provided the basis for a better understanding of the shallow unconfined groundwater system in the basin with respect to age and recharge sources. Our findings show that shallow groundwater infiltration across the LCB is generally affected by evaporation. The regionally and locally prevalent climatic conditions are the factors responsible for the observed evaporation effect in the isotopic composition of the groundwater. The relationship between 18O and 2H identified rainfall as the main source of recharge to the basin's groundwater system. In addition, lines of evidence from the isotopic signatures and the water level map revealed the mutual interaction between the groundwater and the lagoon and creek water bodies, particularly during the rainy season, thus suggesting another source of recharge to the aquifer.
The generally low concentrations of tritium, roughly equal to its current concentration in rainfall, suggest the presence of modern recharge. In other words, the tritium data indicate that the groundwater systems are essentially modern in age, whereas the apparent age of the groundwater based on 14C activity reveals older water of up to a few thousand years. Therefore, the groundwater system in the LCB represents recharge under modern climatic conditions. Thus, the source of groundwater in this region is considered renewable, and water exploitation is potentially sustainable. However, groundwater exploitation would amount to 'mining', especially during the dry season, if the groundwater withdrawal rate were higher than the recharge rate. The facts and findings presented in this study not only reflect the meteorological and hydrological characteristics of the shallow groundwater in the basin, but also provide an essential and valuable isotopic database from which information can be generated for groundwater resource management and further future study in the basin.

Declarations

Author contribution statement

Mumeen A. Yusuf: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Tamiru A. Abiye: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data. Michael J. Butler: Analyzed and interpreted the data. Kehinde O. Ibrahim: Contributed reagents, materials, analysis tools or data.
2018-12-09T01:58:22.862Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "9f75c368d26d68bec2b33e80054eeba822053fe9", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844018344803/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f75c368d26d68bec2b33e80054eeba822053fe9", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine", "Environmental Science" ] }
270806075
pes2o/s2orc
v3-fos-license
A natural approach to combating antibiotic-resistant pathogens in livestock: Hibiscus sabdariffa-derived hibiscus acid as a promising solution

We examined the antibacterial efficacy of streptomycin, hibiscus acid, and their combination against multidrug-resistant Shiga-toxin-producing Escherichia coli (STEC) and Salmonella Typhimurium in mice. We determined the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of streptomycin, hibiscus acid, and their combination against STEC and Salmonella. Fifteen sets of six mice each were utilised: six groups were orally exposed to 4 log10 colony forming units (CFUs) of S. Typhimurium, another six to STEC, and three acted as controls. Six hours post-inoculation, specific groups of mice received oral solutions containing hibiscus acid at 5 or 7 mg/ml; streptomycin at 50 or 450 μg/ml; hibiscus acid/streptomycin (5 mg/ml hibiscus acid and 50 μg/ml streptomycin); or isotonic saline. The MIC and MBC values determined were 7 mg/ml for hibiscus acid; 300 and 450 μg/ml for streptomycin; and two concentrations for the hibiscus acid/streptomycin combination (3 mg/ml with 20 μg/ml, and 5 mg/ml with 50 μg/ml). Interestingly, the mice that were infected and subsequently treated with hibiscus acid at 7 mg/ml, alone or in conjunction with streptomycin, had neither STEC nor Salmonella in their faecal samples, and none of these mice died. In contrast, the untreated mice and those treated exclusively with streptomycin had the pathogens present in their stool, leading to the mortality of all the subjects.

https://doi.org/10.17221/105/2023-VETMED

The escalating crisis of antibiotic resistance is an imminent threat to both human and animal health worldwide (WHO 2023). Within the realm of veterinary medicine, the emergence and dissemination of antibiotic-resistant bacteria among livestock populations present a significant challenge, with pathogens such as Salmonella and Shiga-toxin-producing Escherichia coli (STEC) at the forefront of concerns (Karch et al. 1999; Boore et al. 2015; WHO 2023). As we face the looming prospect of a post-antibiotic era, the need for innovative approaches to combat antibiotic-resistant pathogens in veterinary medicine has never been more pressing. STEC and Salmonella have emerged as significant and concerning health hazards. These two microorganisms, classified as foodborne pathogens, pose a substantial risk to public health on a global scale. The prevalence of outbreaks attributed to these pathogens has garnered considerable attention, prompting intensified research and surveillance efforts to mitigate their impact (Karch et al. 1999; Boore et al. 2015). STEC, a subset of E. coli bacteria, is characterised by its ability to induce haemorrhagic colitis, a condition marked by severe abdominal pain and bloody diarrhoea. This bacterium is of grave concern due to its propensity to cause outbreaks, with instances of contamination often traced back to the consumption of undercooked ground beef, unpasteurised dairy products, or contaminated produce. The virulence of STEC lies in its production of Shiga-toxins, which can lead to haemolytic-uraemic syndrome (HUS), a condition characterised by renal failure and potential long-term health consequences (Beutin and Martin 2012).
Another notorious foodborne pathogen is Salmonella, which comprises a diverse group of bacteria capable of causing gastroenteritis in humans and animals. The impact of Salmonella infections can range from mild gastrointestinal discomfort to severe dehydration. The sources of Salmonella contamination are multiple, including poultry, eggs, raw meat, and even fresh produce (CDC 2013). The livestock industry, a key component of global food production, is critically impacted by the rise of antibiotic resistance. Conventional antibiotics, once effective in promoting animal health and ensuring food safety, are losing their potency due to the relentless adaptation of bacterial strains. The result is not only compromised animal welfare, but also a direct threat to human health due to the potential transmission of resistant pathogens through the food supply chain. Salmonella and STEC have become serious health threats as globally important foodborne pathogens causing numerous outbreaks (Chang et al. 2015). This dire situation necessitates novel approaches to treating bacterial infections. In this context, the rich biodiversity of plant species has captured the attention of researchers as a potential source of natural antibacterial agents. Such agents hold the promise of yielding innovative compounds that could be used to control infections on a global scale. The medicinal potency of plants lies within their complex array of secondary metabolites, including alkaloids, flavonoids, terpenoids, and phenolic compounds (Cruz-Galvez et al. 2013; Ma et al. 2019). Hibiscus sabdariffa is a species of subtropical plant that grows in countries such as Mexico, Sudan, India, and Thailand. Recently, Portillo-Torres et al. (2019) reported that hibiscus acid obtained from an acetonic extract of H. sabdariffa calyces is one of the compounds responsible for the antibacterial activity of H. sabdariffa. The antibacterial effect of hibiscus acid on different microorganisms, such as Salmonella serotypes (Portillo-Torres et al. 2019; Sedillo-Torres et al. 2022), E. coli (enteroinvasive, enteropathogenic, enterohemorrhagic and Shiga-toxin-producing) (Portillo-Torres et al. 2019), Streptococcus mutans, S. sanguinis, Capnocytophaga gingivalis, and Staphylococcus aureus (Baena-Santillan et al. 2022), and Pseudomonas aeruginosa (Cortes-Lopez et al. 2021), has since been reported. Although the complete mechanism of the effect of hibiscus acid on the bacterial cell is not well known, the evidence suggests that hibiscus acid alters the bacterial membrane (Portillo-Torres et al. 2019; Baena-Santillan et al. 2022; Sedillo-Torres et al. 2022), inhibits flagellar motility and cell invasion in Salmonella enterica (Sedillo-Torres et al. 2022), and interacts strongly with the active site of the LasR protein (Cortes-Lopez et al. 2021). However, more studies are necessary to elucidate the complete mechanism of the effect of hibiscus acid on bacteria. In addition, Baena-Santillan et al. (2022) reported that hibiscus acid is not toxic, and other authors have likewise reported a lack of toxicity (Zheoat et al. 2019; Sedillo-Torres et al. 2022).
Currently, there is no information on the possible antibacterial effects of hibiscus acid when administered in an animal model infected with pathogenic, antibiotic-resistant bacteria. Only one study is available in the literature on the antimicrobial effect of hibiscus acid in a mouse abscess/necrosis model, in which hibiscus acid at sublethal concentrations (15 and 31.2 μg/ml) impaired the establishment of infection by P. aeruginosa and prevented damage and systemic spread (Cortes-Lopez et al. 2021).

We recently reported on a study of the antimicrobial effects of an aqueous extract from calyces of Hibiscus sabdariffa in CD-1 mice infected with multidrug-resistant enterohaemorrhagic E. coli (EHEC) and S. Typhimurium (Portillo-Torres et al. 2022). In that study, the effect of the aqueous extract of H. sabdariffa calyces alone, at a concentration of 50 mg/ml, was tested in CD-1 mice orally infected with EHEC or S. Typhimurium. EHEC and S. Typhimurium were absent in the faecal samples of the mice that received the aqueous extract on the 2nd and 3rd days after treatment. Additionally, these mice showed signs of recovery from the infection. Conversely, in the untreated mice or those treated solely with chloramphenicol, the pathogens persisted in the faeces throughout the study, leading to the mortality of some mice (Portillo-Torres et al. 2022).

The objective of this study was to determine the minimum inhibitory concentrations (MICs) and minimum bactericidal concentrations (MBCs) for streptomycin, hibiscus acid, and their blend against multidrug-resistant STEC and Salmonella, and to evaluate the antibacterial effects of hibiscus acid when administered, alone or in combination with the antibiotic streptomycin, to mice infected with multidrug-resistant STEC and S. Typhimurium.

Isolation of hibiscus acid

A kilogram batch of dehydrated calyces of H. sabdariffa cultivated in the state of Guerrero, Mexico, was used to obtain hibiscus acid from an acetonic extract as described by Portillo-Torres et al. (2019). Briefly, samples (100 g) of dehydrated calyces were placed in glass flasks and 900 ml of acetone was added. The flasks were hermetically sealed and stored at room temperature for 7 days, with manual shaking for 1 min once a day. Afterwards, the liquid phase was filtered through filter paper. The filtered extracts were concentrated in a rotary evaporator. The acetone was completely removed from the rotary-evaporated concentrate by placing it in an air recirculation oven at 45 °C for 24 hours.
Two hundred and thirty grams (230 g) of the dry acetone extract of H. sabdariffa calyces was packed with silica gel in a chromatographic column. Hexane was used as the mobile phase to separate the oils in the extract, and 600 ml fractions were recovered in glass flasks. All the chromatographic fractions obtained were rotary-evaporated to remove the solvents and concentrate the separated compounds. After most of the oils had been discarded from the extract, a hexane-ethyl acetate solvent mixture (9:1 v/v) was used as the mobile phase to remove all the residual oils. The mobile phase was then changed to 8:2 (v/v) and passed through the packed column until small crystals were observed in the rotary-evaporated fractions, and it was subsequently used at a ratio of 7:3 (v/v) to obtain well-defined crystals in the rotary-evaporated fractions. The product was recrystallised using 7:3 (v/v) acetone-ethyl acetate in a separatory funnel and stored for 24 hours. Once the formation of crystals on the wall of the separation funnel was observed, the liquid was decanted, and the crystals were recovered. Finally, the residual acetone was removed in an air recirculation oven at 45 °C for 2 hours.

Bacterial strains

Four multidrug-resistant bacterial strains were isolated from different foods. S. Typhimurium C12 and S70 (both resistant to 12 antibiotics) were isolated from coriander (Rangel-Vargas et al. 2016) and tomatoes (Gutierrez-Alcantara et al. 2016), respectively; STEC CA1 (Stx2 gene and resistant to 14 antibiotics) and BJ22 (Stx2 and resistant to 12 antibiotics) were isolated from fresh cheeses made with unpasteurised milk (de la Rosa-Hernandez et al. 2018). All the bacteria were resistant to the same 11 antibiotics (amoxicillin-clavulanic acid, amikacin, ampicillin, colistin, erythromycin, gentamicin, kanamycin, neomycin, sulfisoxazole, trimethoprim-sulfamethoxazole and streptomycin) according to the protocol indicated by the Clinical and Laboratory Standards Institute (CLSI 2020). It is important to note that, for all the studies, rifampicin-resistant mutants obtained from the four multidrug-resistant STEC and Salmonella strains described above were used. These rifampicin-resistant (R+) mutant strains were obtained according to the method described by Castro-Rosas and Fernandez-Escartin (2000), using rifampicin (Sigma-Aldrich, Ciudad de México, Mexico). The mutant rifampicin-resistant strains were chosen specifically to ensure accurate tracking and analysis. To facilitate this monitoring, colony counts were conducted on agar plates containing rifampicin.

The use of the rifampicin-resistant strains served the dual purpose of incorporating rifampicin into the agar plates, thereby creating an environment where only the targeted bacteria could thrive, while simultaneously preventing the growth of other bacterial strains. It is worth noting that rifampicin is a restricted antibiotic, uncommonly utilised in both human and veterinary medicine. Consequently, naturally occurring bacterial resistance to rifampicin is minimal, further underscoring its suitability for this study's experimental design. This approach enhances the reliability of the microbiological results.

Inocula preparation

All four of the multidrug-resistant STEC+ and S. Typhimurium+ strains were inoculated in trypticase soy broth (TSB) and incubated at 35 °C for 18 hours.
The cultures were washed twice in sterile isotonic saline solution (ISS; 0.85% NaCl) by centrifuging at 2 000 g for 20 min, and the pellets were resuspended in sterile peptone water at about 9 log10 CFU/ml. An inoculum cocktail was prepared for each pathogen (S. Typhimurium+ and STEC+) by mixing 1 ml of each washed strain suspension.

Minimum inhibitory concentration and minimum bactericidal concentration

To determine the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC), we used the macrodilution method with the inocula cocktail of S. Typhimurium+ or STEC+ at 1 × 10⁵ CFU/ml, as described by Portillo-Torres et al. (2019). We applied some variations: we used trypticase soy broth (TSB) tubes containing hibiscus acid, streptomycin, and hibiscus acid/streptomycin mixtures at different concentrations. All the treatments were performed in triplicate.

Fractional inhibitory concentration index (FICI)

The fractional inhibitory concentration index (FICI) is a numerical value used to assess the interaction between two or more antimicrobial agents when administered in combination. We estimated the FICI value for hibiscus acid and streptomycin using the MIC values of each of the compounds on both pathogenic bacteria, using the following equation (Garvey et al. 2011):

FICI = (MIC of hibiscus acid in combination / MIC of hibiscus acid alone) + (MIC of streptomycin in combination / MIC of streptomycin alone)

The antimicrobial effect in CD-1 mice infected with STEC or S. Typhimurium

We investigated the antimicrobial effect of hibiscus acid, streptomycin, and the hibiscus acid/streptomycin mixture in CD-1 mice infected with the rifampicin-resistant (R+) mutants of the multidrug-resistant STEC or S. Typhimurium strains. The use of the CD-1 mouse strain is justified on several grounds. CD-1 mice are a commonly used outbred strain in biomedical research due to their genetic heterogeneity, which closely mimics the genetic diversity found in human populations (Aldinger et al. 2009). This genetic diversity can influence the host response to infections and treatments, making CD-1 mice suitable for studying complex interactions between pathogens and potential therapeutics (Aldinger et al. 2009). In addition, CD-1 mice have been extensively employed in infectious disease research, including studies involving Salmonella (Ramachandran et al. 2017) and STEC (Mohawk and O'Brien 2011).

We conducted this study as described by Portillo-Torres et al. (2022). Briefly, an inocula cocktail (1 × 10⁵ CFU/ml) of STEC+ or S. Typhimurium+ was used. Ninety healthy 8-week-old male CD-1 mice were used. The experimental protocol involving mice was analysed and approved by the University (UAEH) Ethics Committee for the Care and Use of Laboratory Animals. For inoculation into mice, the MBCs against both the STEC+ and S. Typhimurium+ cocktails were used. The concentrations of the test solutions were therefore 7 mg/ml, 450 µg/ml, and 5 mg/ml / 50 µg/ml for hibiscus acid, streptomycin, and hibiscus acid/streptomycin, respectively. The 90 mice were divided into 15 groups of six mice each (groups I to XV). All the groups were maintained for 1 week of adaptation, with standard food and water provided ad libitum. After this adaptation period, the mice were orally inoculated with an inocula cocktail of STEC+ or S. Typhimurium+.
The mice were held firmly by the scruff of the neck in a vertical position and inoculated with the R+ pathogen suspension, antibacterial solution, or saline solution using an oesophageal cannula attached to a sterile needleless syringe. Mouse group I was not infected with the pathogenic strains, and no treatment was administered (blank; only isotonic saline solution (ISS) was administered orally). Groups II and III were not infected with the pathogenic strains, but were administered streptomycin and hibiscus acid, respectively (uninfected, treated controls). Groups IV, VI, VIII, X, XII and XIV were inoculated orally (0.1 ml) with approximately 1 × 10⁴ CFU of the S. Typhimurium+ cocktail. Groups V, VII, IX, XI, XIII and XV were inoculated orally (0.1 ml) with approximately 1 × 10⁴ CFU of the STEC+ cocktail. Then, 6 h after infection, groups IV and V, VI and VII, VIII and IX, X and XI, XII and XIII, and XIV and XV were orally administered 0.5 ml of ISS, streptomycin (450 µg/ml), streptomycin (50 µg/ml), hibiscus acid (5 mg/ml), hibiscus acid (7 mg/ml) and hibiscus acid/streptomycin (5 mg/ml / 50 µg/ml) solutions, respectively. Each of the treatments with the test solutions and the ISS was administered to the mice every 12 h for 7 days.

The presence of STEC+ and S. Typhimurium+ in the mouse faeces was quantified, and the mouse mortality rate and pathological manifestations were examined as reported by Portillo-Torres et al. (2022). Briefly, under aseptic conditions, faeces were collected from each cage bed every 8 h and stored under refrigeration. The faeces of the test animals were taken directly from the sawdust in the base of each of the cages containing each group of mice. The faeces were taken with sterilised forceps and placed in plastic bags with a hermetic closure. Every 24 h, the bags containing the faeces of each group of rodents were transported to the laboratory under refrigeration and aseptic conditions. In the laboratory, the faeces from each 24-h period were mixed and enumerated for R+ pathogenic bacteria. The bacterial counts were determined for each of the study groups. The sawdust from each of the cages was changed, and the cages were sterilised daily during the collection of the stool samples to avoid cross-contamination. To prepare for the enumeration of STEC R+ and S. Typhimurium R+ in each stool sample, 9 ml of a sterile peptone diluent (0.1%) was added to the plastic bag containing 1.0 g of stool, and the faeces were then homogenised manually by rubbing the bag from the outside for 1 minute. The enumeration of the pathogenic R+ bacteria was performed by the pour plate technique using TSA supplemented with rifampicin (100 mg/l) and incubation at 35 ± 2 °C. Each dilution was inoculated in triplicate. To confirm the presence of the R+ mutant strains on the TSA-rifampicin plates, colonies from these plates were streaked onto eosin methylene blue (EMB) agar or brilliant green agar (BGA), both containing rifampicin (100 mg/l), for STEC R+ or S. Typhimurium R+, respectively.

The mortality rate of the mice in the different groups was calculated as the number of mice that died during the experiment divided by the total number of mice in each group. Throughout the study, the consistency of the faecal matter of each rodent was recorded. The animals were also observed daily for any physiological and pathological abnormalities (weight loss, loss of appetite, weakness/slow movement and mortality) during the period of the experiment.
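The enumeration arithmetic described above (1 g of faeces in 9 ml of diluent, triplicate pour plates on rifampicin-supplemented TSA) reduces to a standard CFU/g calculation, sketched below with hypothetical plate counts; the function names are ours, not the study's.

```python
# Sketch of pour-plate enumeration and mortality-rate arithmetic.
# All numbers shown are hypothetical, not data from the study.

def cfu_per_gram(colony_counts, dilution_factor, volume_plated_ml=1.0):
    """Mean CFU/g from replicate plate counts at one dilution.

    1 g of faeces in 9 ml of diluent is an initial 1:10 dilution, so a
    plate poured directly from that tube has dilution_factor = 10.
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / volume_plated_ml

def mortality_rate(dead, total):
    """Deaths during the experiment divided by mice in the group."""
    return dead / total

print(cfu_per_gram([52, 47, 55], dilution_factor=100))  # ~5.1e3 CFU/g
print(mortality_rate(dead=6, total=6))                  # 1.0
```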
Statistical analysis

The experiments for the MIC/MBC were repeated three times. An exploratory data analysis was performed to assess the assumptions of equality of variances and normal distribution of errors for the results obtained from the in vitro antimicrobial activity of hibiscus acid and streptomycin, which were analysed by one-way analysis of variance using the Statgraphics Centurion XVI statistical program (StatPoint Technologies USA software, 2009). Comparisons of the means with the Tukey test were performed for each experimental section, with a significance level of P < 0.05.

Minimum inhibitory concentration and minimum bactericidal concentration

The values obtained for the MIC and MBC of hibiscus acid, streptomycin, and the hibiscus acid/streptomycin mixture against S. Typhimurium+ and STEC+ are reported in Table 1. It should be noted that the MIC values obtained for hibiscus acid were 7 mg/ml for both S. Typhimurium+ and STEC+, while for streptomycin the MIC was 300 µg/ml for both pathogenic strains. However, when the mixture of both agents was tested, the MIC of the hibiscus acid/streptomycin combination was 3 mg/ml / 20 µg/ml for both STEC+ and S. Typhimurium+. A similar reduction was observed in the MBC values of hibiscus acid and streptomycin, both alone and in the mixture (Table 1).

In this study, the MIC and MBC for the STEC and S. Typhimurium strains resistant to multiple antibiotics were very high compared to the levels shown by strains sensitive to these antibiotics. This means that, to control an infection caused by these pathogenic strains in a human or animal, a very high level of streptomycin would be required, which would carry risk due to its toxicity. It is widely documented that streptomycin, even at the levels administered to control an infection by pathogenic bacterial strains not resistant to the antibiotic, presents a certain degree of toxicity (Peloquin et al. 2004).

Fractional inhibitory concentration index (FICI)

The mixture of hibiscus acid with streptomycin gave an FICI of 0.488 for both S. Typhimurium and STEC, showing a synergistic effect (Table 1). Synergistic and additive interactions between two antibacterial components have been reported to improve antibacterial efficacy compared to when they are used alone (van Gent et al. 2022).

The antimicrobial effect in CD-1 mice infected with STEC or S. Typhimurium

The results of the antibacterial activity of hibiscus acid, streptomycin, and the hibiscus acid/streptomycin mixture in the CD-1 mice infected with S. Typhimurium R+ or STEC R+ are reported in Tables 2 and 3. Both S. Typhimurium+ and STEC+ were able to colonise and replicate in the mice treated with ISS only, streptomycin at 450 µg/ml, streptomycin at 50 µg/ml, or hibiscus acid at 5 mg/ml (Table 2). By contrast, when the mice were administered hibiscus acid at 7 mg/ml or hibiscus acid/streptomycin at 5 mg/ml / 50 µg/ml, both pathogens were no longer detected in the faeces after day one (Table 2). It is important to note that when hibiscus acid was tested at a concentration lower than the MBC (5 mg/ml), it had no effect on the survival of either pathogenic bacterium. However, when it was tested at the same concentration but in a mixture with streptomycin, it reduced the concentration of both pathogens and inactivated them. This confirms the synergistic effect previously observed in the culture broth.
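As a worked illustration, the FICI defined in the Methods can be computed directly from the Table 1 MICs. The function and thresholds below are ours (the synergy cut-off of FICI ≤ 0.5 is the commonly used one, though exact cut-offs vary between authors); the small difference between the value computed from the rounded MICs and the reported 0.488 presumably reflects rounding of the underlying measurements.

```python
# Illustrative FICI calculation and classification; not the study's code.

def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(f):
    if f <= 0.5:
        return "synergism"
    if f <= 1.0:
        return "additivity"
    if f <= 4.0:
        return "indifference"
    return "antagonism"

# Table 1 MICs: hibiscus acid 3 mg/ml (combined) vs 7 mg/ml (alone);
# streptomycin 20 ug/ml (combined) vs 300 ug/ml (alone).
f = fici(3, 7, 20, 300)
print(round(f, 3), interpret(f))  # ~0.495 "synergism"; paper reports 0.488
```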
Table 1. MIC, MBC and FICI of hibiscus acid (HA), streptomycin (STR) and their mixture against S. Typhimurium+ and STEC+

Bacterium | HA MIC (mg/ml) | HA MBC (mg/ml) | Ratio | STR MIC (µg/ml) | STR MBC (µg/ml) | Ratio | HA/STR MIC (mg/ml / µg/ml) | HA/STR MBC (mg/ml / µg/ml) | FICI | Interpretation
S. Typhimurium | 7 ± 0.0 | 7 ± 0.0 | 1 | 300 ± 0.0 | 400 ± 0.0 | 1.8 | 3 ± 0.0 / 20 ± 0.0 | 5 ± 0.0 / 50 ± 0.0 | 0.488 | synergism
STEC | 7 ± 0.0 | 7 ± 0.0 | 1 | 300 ± 0.0 | 450 ± 0.0 | 1.8 | 3 ± 0.0 / 20 ± 0.0 | 5 ± 0.0 / 50 ± 0.0 | 0.488 | synergism

STEC = Shiga-toxin-producing Escherichia coli

An alternative approach for future research could involve elucidating the optimal dosing regimen and administration route for the hibiscus acid/streptomycin combination in livestock settings. This could entail conducting dose-response studies to determine the most effective concentrations of hibiscus acid and streptomycin for combating multidrug-resistant pathogens while minimising any potential adverse effects.

Additionally, exploring the pharmacokinetics and pharmacodynamics of the hibiscus acid/streptomycin combination in livestock would provide valuable insights into its absorption, distribution, metabolism, and excretion properties. This could involve studying the bioavailability and tissue distribution of both compounds, individually and in combination, as well as assessing their potential for drug-drug interactions.

Furthermore, conducting field trials or longitudinal studies on livestock farms to evaluate the practicality and effectiveness of administering the hibiscus acid/streptomycin combination under real-world conditions would be invaluable. This could involve monitoring the incidence of bacterial infections, antibiotic resistance patterns, and livestock health outcomes over an extended period to assess the long-term efficacy and sustainability of this treatment approach.

To the best of our knowledge, this is one of the first reports in the literature of the antimicrobial effect of hibiscus acid, both alone and in a mixture with an antibiotic, in an animal model.

Table 2. Effect of the treatments in the CD-1 mice on the faecal excretion of STEC+ and S. Typhimurium+: groups, treatments, and number of R+ bacteria excreted in the faeces of the mice each day throughout the study (CFU/g).

Table 3. Clinical signs and mortality observed in the groups of mice infected and not infected with S. Typhimurium+ and STEC+ during the experiment (number of affected mice/total number of mice in each group). + = rifampicin-resistant; HA/STR = hibiscus acid/streptomycin; ISS = isotonic saline solution; S. Typhimurium+ = Salmonella Typhimurium+; STEC+ = Shiga-toxin-producing E. coli+.

Particularly in the context of antibacterial activity, the literature lacks empirical evidence directly linking these compounds to livestock bacterial pathogens. Consequently, interpreting the study results in the context of the available literature becomes challenging. Without established antecedents or prior investigations examining the effects of hibiscus acid on livestock bacterial pathogens, it is difficult to provide a substantive discussion or draw meaningful comparisons. Acknowledging this limitation underscores the necessity for future research endeavours to address this gap and to conduct comprehensive investigations elucidating the potential impact of hibiscus-derived compounds on bacterial pathogens in livestock settings.
Mineral licks as environmental reservoirs of chronic wasting disease prions

Chronic wasting disease (CWD) is a fatal neurodegenerative disease of deer, elk, moose, and reindeer (cervids) caused by misfolded prion proteins. The disease has been reported across North America and recently discovered in northern Europe. Transmission of CWD in wild cervid populations can occur through environmental routes, but limited ability to detect prions in environmental samples has prevented the identification of potential transmission "hot spots". We establish widespread CWD prion contamination of mineral licks used by free-ranging cervids in an enzootic area in Wisconsin, USA. We show mineral licks can serve as reservoirs of CWD prions and thus facilitate disease transmission. Furthermore, mineral licks attract livestock and other wildlife that also obtain mineral nutrients via soil and water consumption. Exposure to CWD prions at mineral licks provides potential for cross-species transmission to wildlife, domestic animals, and humans. Managing deer use of mineral licks warrants further consideration to help control outbreaks of CWD.

Introduction

Chronic wasting disease (CWD) was first observed in 1967 [1] and was long thought to be a disease of minor scientific curiosity affecting mule deer (Odocoileus hemionus) and confined to the Rocky Mountains in northern Colorado and southern Wyoming, USA. Subsequently the disease was found in white-tailed deer (O. virginianus) and elk (Cervus canadensis). The geographic range of CWD has also expanded dramatically since 2000 [2] and it is now present in 25 U.S. states, two Canadian provinces (http://www.nwhc.usgs.gov/disease_information/chronic_wasting_disease/index.jsp), South Korea, Norway [3], and Finland (https://yle.fi/uutiset/osasto/news/first_case_in_finland_elk_dies_due_to_chronic_wasting_disease/10108115), and has been found in moose (Alces alces) and reindeer (Rangifer tarandus) [2,4]. In addition, CWD prevalence has continued to increase in some free-ranging herds.

Here, we test the hypothesis that mineral licks used by deer harbor CWD prions, thus serving as potential environmental reservoirs for these infectious agents. During 2012-2015 we collected soil and water samples from 11 mineral licks (10 man-made and one natural) frequented by free-ranging white-tailed deer in a large CWD enzootic zone west of Madison, Wisconsin, USA [6] (Fig 2). We adapted a 96-well microplate variant of PMCA that incorporates Teflon beads (mb-PMCA) [44] to detect CWD prions in soil and water samples. We optimized conditions to extract CWD prions from soils to enable reliable detection by mb-PMCA (S1 Text and S2-S4 Figs). We also tested deer feces collected in proximity to a mineral lick as a potential source of CWD prions. We previously detected CWD prions in fecal and urine samples from experimentally infected cervids by bead-assisted PMCA [22].

Ethics statement

Field studies conducted on public lands were done with the permission of the Wisconsin Department of Natural Resources. Field studies on private lands were done with the permission of the landowners. Field studies did not involve endangered or protected species.

Sample collection and preparation

Eleven mineral licks in the CWD-affected zone of south-central Wisconsin were located with the assistance of Wisconsin Department of Natural Resources personnel during 2013 (Fig 2). Deer visiting these mineral licks consume soil to supplement their mineral intake.
During rain events, rainwater often accumulates at the lick sites. Deer visiting rainwater-filled mineral licks often stand in the water and suspend sediment as they drink. We therefore collected soil samples from dry mineral licks and, after a rainfall event, water samples before and after disturbing the underlying sediment. We collected soil at each mineral lick a single time as follows: six soil samples (2.54 cm diameter, 2.54 cm depth) were collected using a 1-inch diameter galvanized steel LaMotte soil sampling probe when water was not present. For each mineral lick site, the upper and lower 1.27 cm halves of the soil samples were pooled separately to yield a single pooled sample for the upper soil layer and one for the lower soil layer. Soil from Site 2 was too wet to reliably separate into upper and lower layers and was therefore pooled and analyzed as one sample. Pooled soil samples were freeze-dried, homogenized, and sieved through a #18 U.S. standard testing sieve (VWR 57334-450) to remove water and non-soil components. We returned to each lick after a rainfall event and collected one water sample prior to, and a second after, disturbing the underlying sediment with a stick found in the immediate vicinity of each lick. Water collected prior to disturbance was clear of sediment, and water collected after disturbance was turbid. We opportunistically collected fecal samples in the proximity of one heavily used lick to determine whether prions were shed in feces by deer using this site.

Extraction of prions from environmental samples

We evaluated a series of extractant solutions for their ability to recover prions from amended soils and allow amplification by mb-PMCA (S1 Text and S1 Fig). Of the extraction solutions evaluated, 0.1 M sodium phosphate (pH 7.4) with 1% (w/v) N-lauroylsarcosine sodium salt (sarkosyl), followed by precipitation with sodium phosphotungstic acid (NaPTA), was selected. From each pooled upper soil sample and each pooled lower soil sample, two 25 mg soil subsamples were rinsed twice with ultrapure water (18 MΩ·cm; Barnstead GenPure Pro) before extraction of soil-bound prions. Each subsample extract was used to seed four mb-PMCA replicates. The upper or lower soil sample from each mineral lick was therefore represented by four analytical replicates for each of two subsamples (8 replicates for each soil layer, 16 soil replicates per lick). Water samples (50 mL) were centrifuged (2 min, 2 000 g) to separate particulate matter before 100 μL was removed and diluted 1:1 (v/v) with 3% sarkosyl in PBS. Following NaPTA precipitation, detailed below, each sample was used to seed four mb-PMCA replicates (4 replicates for each water sample, 8 water replicates per lick). Prions were extracted from two independent 100 mg replicates of each deer fecal sample using a protocol for ovine feces [45], modified to include additional rounds and a longer duration of centrifuging to fully clarify the samples. Each fecal sample was diluted 1:9 (w/v) in ultrapure water containing Roche cOmplete ultra mini EDTA-free protease inhibitor (Fisher Scientific 50-100-3269) and homogenized twice for 40 s at maximum speed (6 m/s) in bead-beater tubes containing glass beads and silicon carbide particles (MP Biomedical #116916100). Sodium dodecyl sulfate was added to a final concentration of 1% (v/v) before three further homogenizations by bead beating. Samples were rotated and incubated (60 min, room temperature), then centrifuged (60 min, 15 000 g, 10 °C) to clear the particulate matter.
Further clarification was achieved by transferring the supernatant to a fresh tube and centrifuging under the same conditions. The supernatant was then transferred to another tube and diluted 1:1 (v/v) with phosphate-buffered saline (PBS) containing 4% sarkosyl. Pierce Universal Nuclease for Cell Lysis (Thermo Scientific #88701) was added before heating the samples to 50 °C for 30 min.

All mb-PMCA experiments included eight replicates of unspiked NBH as a negative control for spontaneous formation of PrPres during mb-PMCA, a total of 42 replicates in 5.5 plates (2 replicates had incomplete PK digestion and were thus inconclusive). Because no extraction was performed on the water, unspiked NBH also served as a negative control for the water. In addition, to control for the possible effect of soil extracts on promoting formation of PrPres, we extracted Elliot silt loam (International Humic Substances Society, St. Paul, Minnesota, United States) and treated the extract in the same manner as the extracts from the mineral licks; two concurrent independent extracts were used to seed 4 replicates apiece and put through 2 rounds of mb-PMCA. Feces from a white-tailed deer negative for CWD were put through the fecal extraction protocol and two rounds of mb-PMCA to serve as a negative control and validate the fecal extraction methods; two concurrent independent extracts were used to seed 4 replicates apiece and put through 2 rounds of mb-PMCA. For positive controls, we included the 1:3.1×10⁴ and 1:1.2×10¹⁷ dilutions of a 10% (w/v in PBS) brain homogenate of an orally inoculated, clinically affected CWD-positive white-tailed deer (96 GG) on every plate. We reliably detected prions in the 1:1.6×10⁵ dilution of the CWD-positive brain homogenate after one round of mb-PMCA and in the 1:7.5×10¹⁹ dilution after two rounds.

Contamination controls

We took several precautions to prevent inadvertent introduction of CWD prions into the field samples, from sample collection through mb-PMCA testing. These precautions included changing gloves between collection of fecal samples; changing gloves between mineral licks; wiping the LaMotte soil sampling tool with 10% bleach before and after each mineral lick; discarding plastic bags used for soil and fecal collection after a single use; discarding 50 mL water collection tubes after a single use; double bagging of samples at the site; aliquoting stock solutions into single-use volumes; changing pipette tips after each use; wiping lab tools with 10% bleach before and after each use; using fresh disposable paper bench covering for every task; handling and loading plates with only one sample at a time and changing gloves between samples; loading plates with all wells capped except those being seeded with that replicate; and handling samples and loading plates in the following order: no-seed negative controls, matrix extract negative controls, experimental samples, followed by positive controls.

Results

We detected CWD prions in soil samples, water samples, or both collected from nine of the 11 mineral licks following two rounds of amplification with mb-PMCA (Table 1). We limited the number of mb-PMCA rounds to two for soil and three for feces to minimize the possibility of de novo prion generation [47]. Soil from seven licks contained CWD prions. No pattern was apparent in the presence of prions in the upper vs. lower 1.27 cm of soil. We sampled water from nine mineral licks.
Prions were detectable in the undisturbed water from four of the sites and in disturbed water from two of these sites. Two mineral licks (Sites 2 and 6) contained detectable CWD prions in both water and soil samples. The amounts of prions amplified from these soil and water samples were near the limit of detection for two rounds of mb-PMCA, likely due in part to co-extracted inhibitors of the PMCA reaction and incomplete extraction from soil particles. The detection of prions at 9 of the 11 sites sampled, however, demonstrates widespread contamination of mineral licks in the CWD outbreak zone. The generally higher detection frequency of CWD prions in water samples relative to the corresponding soil samples suggests either that prion concentrations were higher in the water samples or that co-extracted constituents from soil inhibited amplification by mb-PMCA.

At the mineral lick site with the highest detection of CWD prions in environmental samples (Site 6), we opportunistically sampled white-tailed deer fecal pellets. We detected CWD prions in six of the 10 fecal samples after three rounds of amplification by mb-PMCA. Of the eight replicates tested for each fecal sample, one fecal sample had four positive replicates, three had two positive replicates, and two had a single positive replicate. Importantly, no false positives were produced in any of our negative control samples. No PrPres was detected after two or three rounds of amplification by mb-PMCA in the negative control samples run when testing lick samples, which included at least one no-seed control, one negative soil extract control, and one no-seed fecal extract control from a CWD-negative white-tailed deer.

Discussion

Our results demonstrate that CWD-infected white-tailed deer deposit prions at mineral licks they visit. Although the mechanism of prion deposition is unknown, we suspect that deposition of saliva by infected deer during ingestion of soil and water at mineral licks has the highest potential to facilitate indirect transmission to susceptible deer. Saliva from white-tailed deer infected with CWD contains on the order of 1-5 infectious doses (ID50) per 10 mL as quantified by real-time quaking-induced conversion, where an ID50 is the dose of CWD prions capable of infecting half of the transgenic mice expressing cervid prion protein [48]. Frequent visitation by infected cervids could allow mineral licks to become potential "hot spots" for indirect transmission of CWD [49]. Currently, little is known about the relative importance of direct contact and environmental routes of CWD transmission in free-ranging cervids [10]. Thus, how artificial and natural mineral licks contribute to current and future CWD infection in cervids, and whether licks should be managed to control cervid use, are important questions for further research.

Despite the relatively recent detection of CWD in Wisconsin (2001) and the moderate incidence of infection (6-19% prevalence in adult deer in the area sampled at the time of sample collection), our results suggest contamination of mineral licks in the CWD outbreak zone is widespread. This finding suggests that mineral licks may serve as reservoirs of CWD prions that contribute to disease transmission to susceptible animals. Although the levels of CWD prions in the samples analyzed appear low, we note that the association of prions with clay minerals often present at mineral licks can dramatically enhance disease transmission via the oral route of exposure [30][31].
For hamster-adapted scrapie prions, binding to montmorillonite clay particles enhanced transmission by a factor of 680; an upper bound on the enhancement factor could not be assigned [30][31]. At present, the degree to which binding to clay mineral particles enhances CWD transmission to deer via the oral (or nasal) route of exposure is not known. Furthermore, repeated oral exposure to prions is associated with increased likelihood of disease transmission [50]. Differences in the sialylation status of N-linked glycans between brain-derived and secreted/excreted PrPCWD may impact oral infectivity [51]. Cervid species that avoid interspecific contact make use of the same mineral lick sites [49], potentially leading to interspecies transmission. Mineral licks also attract livestock and other wildlife that supplement mineral intake via soil and water consumption, exposing these animals to CWD prions. Exposure of predators and scavengers to CWD prions via consumption of infected tissue has been previously documented [23]; our results suggest that exposure of non-cervid animal groups can also occur via environmental routes.

We also detected CWD prions in fecal samples collected in proximity to a mineral lick, indicating that fecal excretion represents a route of CWD deposition into the environment with potential transmission to susceptible cervids [19]. Deposition of fecal pellets by white-tailed deer near bait sites increases with higher deer visitation [52], and similar patterns probably occur at mineral licks. Thus, increased local fecal deposition by CWD-infected deer likely contributes to increased environmental concentrations of prions in and around mineral licks. Deer generally avoid consumption of feces [52]; however, the apparent long-term persistence of prion infectivity in the environment [27][28][29] and the enhanced disease transmission by soil-bound prions, combined with the repeated visitation, long-term existence, and multi-generational use of mineral licks, suggest that the impact of concentrated environmental contamination on the dynamics of disease transmission warrants further investigation. Recent laboratory research indicates plants grown in prion-contaminated soil can accumulate prions [53]. Our data suggest that plants growing near contaminated mineral licks may warrant investigation as a source of prions for foraging animals. Areas where cervids congregate for mineral consumption, feeding and baiting sites, winter yarding, wallows [54], or other activities where CWD prions are deposited in the environment may also provide potential long-term reservoirs for transmission to cervid and non-cervid species.

Conclusions

We used mb-PMCA to detect CWD prions in soil and water from mineral licks naturally contaminated with prions and used by free-ranging deer, livestock, and non-cervid wildlife species. Detection of prions in environmental reservoirs represents an important first step in understanding the contribution of environmental transmission to CWD epizootics and the potential for cross-species transmission.
The present study characterized an environmental prion reservoir by (1) identifying an apparent "hot spot" of deposition and potential exposure for both cervid and non-cervid species; (2) indicating that CWD prions shed by free-ranging cervids are present in areas of frequent use, leading to environmental contamination and potentially plant uptake; and (3) motivating investigation of the exposure and susceptibility of non-cervid species to CWD-contaminated soil, water, and plant materials. Future research should be directed at quantifying CWD prion concentrations at mineral licks and other areas where cervids congregate, determining the persistence of prion infectivity at these sites, delineating spatial-temporal patterns of environmental prion deposition and accumulation, and assessing consumption by susceptible animals. Identifying additional environmental reservoirs of CWD prions and determining the contributions of direct and indirect transmission over the course of CWD outbreaks represent key aims in advancing understanding of long-term CWD infection dynamics.

Supporting information (figure legends)

S1 Fig. (A) Evaluation of extraction solutions. The indicated extraction solution was added (200 μL) to each of the soil pellets and vortexed (2 h, 1 200 RPM, room temperature). Soil particles were sedimented (1 000 g, 10 min), and the supernatants were retained. Extractions were done in triplicate, and 100 μL of each extract and the water rinses were used to assay the absorbance at λ = 465 nm. Absorbances were compared to dilution series of Elliot soil humic acid in each of the extraction solutions, and the concentration of NOM was estimated. Shown are the mean NOM concentrations for the three replicates with the standard deviations. (B) Extraction of PrPCWD from soil. PK-treated 10% brain homogenate from a CWD-positive white-tailed deer (40 μL) was adsorbed to Elliot soil (25 mg) in ultrapure water (100 μL, 24 h), followed by a 2-h desorption step in 100 μL water (to remove any non-adsorbed, unbound PrPCWD). The sorbed PrPCWD was extracted at room temperature with 200 μL of the indicated extraction solution and analyzed by immunoblotting. Abbreviations: A-G, extraction solutions (see descriptions above); M, molecular mass marker; mAb, monoclonal antibody; rinse, water rinse; S, supernatant from binding experiment.

S2 Fig. (A) […] were mixed with 2 μL from the second 5-fold dilution of 10% brain homogenate (BH) from an end-stage CWD-positive wt/wt deer. Normal brain homogenate (NBH; 90 μL) was added with two Teflon beads in a 200-μL thin-walled PCR tube. Samples were sonicated for 96 cycles (30 s sonication with 27:30 incubation at 37 °C between sonications). (B) Elliot soil or Defore soil (25 mg) was extracted using 100 μL of the indicated extraction solution. An aliquot (8 μL) of these extracts was mixed with 2 μL from the second 5-fold dilution of 10% BH from an end-stage CWD-positive wt/wt deer. NBH (90 μL) was added with two Teflon beads in a 200-μL thin-walled PCR tube. Samples were sonicated for 96 cycles (30 s sonication with 27:30 incubation at 37 °C between sonications). Extraction solutions are described in the text. Proteinase K (PK)-resistant prion protein was detected using Western blot with antibodies 8G8 and BAR224. (TIF)

S3 Fig. Effect of humic and fulvic acid on detection of PrPCWD by PMCAb. Prions (10% CWD brain homogenate) were diluted using four serial fivefold dilutions in normal brain homogenate (NBH) to obtain the seed dilution used in this experiment.
The seed dilution (2 μL) was transferred into 36 μL NBH with saponin, and 2 μL of humic or fulvic acid (dissolved in ultrapure water) was added to achieve a total mass of 5, 2.5, 1. […] Samples, normal brain homogenate (NBH; 90 μL), and two Teflon beads were added to 200-μL thin-walled PCR tubes. Samples were sonicated for 96 cycles (0.5 min sonication with 27.5 min incubation at 37 °C between sonications). Proteinase K (PK)-resistant prion protein was detected using immunoblotting with antibodies 8G8 and BAR224. (TIF)
Estimating parametric phenotypes that determine anthesis date in Zea mays: Challenges in combining ecophysiological models with genetics

Ecophysiological crop models encode intra-species behaviors using parameters that are presumed to summarize genotypic properties of individual lines or cultivars. These genotype-specific parameters (GSP's) can be interpreted as quantitative traits that can be mapped or otherwise analyzed, as are more conventional traits. The goal of this study was to investigate the estimation of parameters controlling maize anthesis date with the CERES-Maize model, based on 5,266 maize lines from 11 plantings at locations across the eastern United States. High performance computing was used to develop a database of 356 million simulated anthesis dates in response to four CERES-Maize model parameters. Although the resulting estimates showed high predictive value (R² = 0.94), three issues presented serious challenges for the use of GSP's as traits. First (expressivity), the model was unable to express the observed data for 168 to 3,339 lines (depending on the combination of site-years), many of which ended up sharing the same parameter value irrespective of genetics. Second, for 2,254 lines, the model reproduced the data, but multiple parameter sets were equally effective (equifinality). Third, parameter values were highly dependent (p < 10⁻⁶⁹¹⁹) on the sets of environments used to estimate them (instability), calling into question the assumption that they represent fundamental genetic traits. The issues of expressivity, equifinality and instability must be addressed before the genetic mapping of GSP's becomes a robust means to help solve the genotype-to-phenotype problem in crops.

Introduction

Finding methods to predict crop phenotypes from genotypes, often termed the G2P problem, is one of the highest priorities of applied biological research [1] and is central to addressing global food security issues that will otherwise become acute by mid-century [2]. Accurate G2P prediction will enable crop breeders to implement efficient crossing schemes that will increase […] potential seed size). When equifinality is present, the range of possible GSP values is wider, thus reducing or eliminating detection of significant genotypic effects. Unfortunately, many commonly used model inversion methods [15] only output a single, point GSP estimate, allowing equifinality to escape detection. Even when error bounds are determined [16], it may not be evident whether the cause is equifinality as opposed to biological variability. The potential for adverse effects of equifinality on genetic mapping warrants an investigation of its prevalence during estimation studies.

The stability of GSP estimates across environments is also of concern. When an estimate varies with the set of environments used in model inversion, there is no way to determine which value to use when weather conditions, soil properties, or crop management differ from those used in model training. Indeed, given that GSP's are, as a defining property, assumed to be free of G×E interactions, the detection of instability is prima facie evidence that a particular parameter is not immediately usable as a GSP. Therefore, instability should be assessed as part of any GSP estimation protocol.
The overall goal of this study was to investigate the estimation of GSP's controlling anthesis date in a large maize mapping population (>5,200 lines), applying an ECM inversion technique to data from eleven site-years of maize field experiments [5]. Not only is anthesis date a phenotype of major biological significance, but it was also studied in this same panel using conventional statistical genetic methods [4,17]. Parameter estimation is especially challenging when large numbers of lines are involved. Therefore, we sought a method that was efficient for large estimation tasks while explicitly supporting an investigation of equifinality and stability. Specific objectives were to 1) estimate CERES-Maize GSP's for each maize NAM line, 2) assess the estimated GSP's for evidence of equifinality and instability, and 3) examine the estimated GSP's for other potential issues or opportunities.

Experimental data

Anthesis date data for a total of 5,266 maize lines were obtained from the Panzea data repository (http://www.panzea.org). The lines were members of three genetic panels. In particular, 4,785 lines were from the 25 RIL panels comprising the maize NAM set described above; 200 RIL lines came from the IBM (Intermating B73 × Mo17) panel [18]; and a diversity panel [19] contributed an additional 281 lines. Various combinations of these lines were grown at six sites in the United States, providing a total of 11 site-years during 2006 and 2007 (Table 1). For the NAM lines and IBM population, trials were arranged as augmented incomplete block designs, having one replication per trial. For each trial, lines were grouped by family with augmented incomplete blocks within each family. Each incomplete block contained 20 RILs and two checks, B73 and the second parent of the family. A similar design was used for the diversity panel. Anthesis date was recorded as the date on which 50% of the plants in a plot had begun shedding pollen. Data on daily maximum and minimum temperatures for each site were provided by the maize NAM collaborators [5]. The weather stations followed standard instrument and siting guidelines, and the data were found comparable to 2.5 arc-minute (ca. 4 km) gridded data [20] (Figs A and B in S1 File and Discussion). As verified by inspecting the model source code, the calculated photoperiods include civil twilight, defined as when the sun is <6° below the rising or setting horizon; a sketch of this day-length computation is given below.
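To make the photoperiod convention concrete, the sketch below computes day length with a 6° solar depression angle from standard astronomical approximations. It is our own illustration, not code from CERES-Maize, and the declination formula is a common approximation rather than the model's exact one.

```python
import math

def day_length_hours(lat_deg, day_of_year, depression_deg=6.0):
    """Approximate photoperiod (h) including civil twilight.

    Twilight is counted while the sun is less than `depression_deg`
    below the horizon. Uses a standard approximation for solar
    declination; adequate for crop-model photoperiods, not ephemerides.
    """
    lat = math.radians(lat_deg)
    # Approximate solar declination (common Cooper-style formula).
    decl = math.radians(-23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365.0))
    # Hour angle at which the sun sits `depression_deg` below the horizon.
    cos_h = ((math.sin(math.radians(-depression_deg))
              - math.sin(lat) * math.sin(decl))
             / (math.cos(lat) * math.cos(decl)))
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp for polar day/night
    return 2.0 * math.degrees(math.acos(cos_h)) / 15.0

# Example: mid-June at ~42 deg N (roughly the latitude of the NY sites).
print(round(day_length_hours(42.0, 170), 2))  # ~16.4 h including twilight
```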
CERES-Maize model

The Crop Estimation through Resource and Environment Synthesis (CERES)-Maize v4.5 [21,22] is one of the oldest and most widely used maize ECM's. Four GSP's (P1, P2, P2O, and PHINT; Table 2) that control time to anthesis were considered [23,24]. The soil water and nutrient components and the tillage, pest, and disease options, none of which affect anthesis date in CERES-Maize, were switched off during the simulation runs to reduce the computing time required. Row spacing and planting depth were set to 0.5 m and 2.5 cm, respectively.

Parameter estimation

Search strategy. This study adapted a parameter space search algorithm developed by [25][26][27] (Fig 1). First, model simulations were run for each of the 11 site-years across a multidimensional set of parameter value combinations that were stored in a database along with the simulated anthesis dates. Second, for each line, the root mean square error (RMSE) [28], in days, between observed and simulated anthesis dates summed across site-years was computed as

RMSE = √[(1/n) Σᵢ (Y_p,i − Y_o,i)²],   (1)

where n is the number of observations for that line (i.e., one per site-year), and Y_p,i (Y_o,i) is the simulated (observed) anthesis date. For each line, the search engine then output any (ideally just one) parameter value combination that produced a minimal RMSE. The minimal RMSE criterion has also been used in GSP searches done by [29][30][31][32]. This paper calls this procedure the "Sobol database algorithm" after the method by which the parameter value combinations were produced. It is described in the next section.

Sampling the model parameter space with Sobol sequences. Unlike [25][26][27], who sampled the parameter space with a rectilinear grid, a Sobol sequence was used to avoid the combinatorial explosion in computational requirements that accompanies increasing dimensionality. Sobol sequences belong to a family of quasi-random processes that generate parameter set samples dispersed as uniformly as possible over a multidimensional parameter space [33]. Sobol sequences offer reduced spatial variation compared to other sampling methods (e.g., random, stratified, Latin hypercube), making them more robust [34]. The Sobol algorithm was coded in Python (a sketch of the sampling and the subsequent database search is given at the end of this section) and used to generate 32,400,070 GSP sets (Table 2). The resulting database had 356,400,770 entries, consisting of the CERES-Maize simulated anthesis dates for each of the 11 site-years × 32,400,070 GSP sets.

High performance computing. To execute the needed 356 million model runs, a simple wrapper was written that iteratively inserted the desired parameter values into the CERES-Maize input files. This is a very efficient procedure because, with minor modifications, the wrapper can be reused to analyze other CERES model parameters or outputs. The model runs were conducted using 112 processors on the "Stampede" supercomputer (https://www.tacc.utexas.edu/systems/stampede) at the Texas Advanced Computing Center (TACC), requiring 63,372 CPU hours. The predicted anthesis dates were transferred to the BeoCat computing cluster (https://support.beocat.ksu.edu/BeocatDocs/index.php/Compute_Nodes) at Kansas State University, where RMSE values were tabulated for each line × parameter value combination across all site-years. GSP combinations that gave the lowest RMSE values were recorded. This process took 7 h for all lines on 200 Xeon E5-2690 processors, with ca. 15 minutes of wall clock time per line. An advantage of the database approach is that, if needs or more refined interests dictate, searches using completely different objective functions can be executed without the need for additional CPU-time-consuming model runs. Subsequently, when it was determined that specialized analyses were needed, a more labor-intensive, manual undertaking extracted the anthesis date submodel, ported it to Python, and ran it independently on Beocat. Because such submodel extraction could easily introduce errors, checks were performed to ensure that the outputs of the standalone code matched those of the full CERES-Maize model.

Assessing estimate properties

Equifinality. The extent of equifinality for a line was quantified as one less than the number of Sobol parameter combinations that produced the identical, minimal RMSE value (i.e., the "number of ties"; see Table A in S2 File for an example tie).
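Both pieces just described, the quasi-random sampling of the parameter space and the minimal-RMSE search with its tie count, can be sketched as follows. This is illustrative code of ours, not the study's: the bounds are placeholders rather than the Table 2 ranges, scipy's quasi-Monte Carlo module is assumed, and the `sim_dates` matrix would come from the CERES-Maize runs.

```python
import numpy as np
from scipy.stats import qmc

# Placeholder bounds for (P1, P2, P2O, PHINT); the study's actual
# ranges are given in its Table 2 and are not reproduced here.
lower = np.array([100.0, 0.0, 10.0, 30.0])
upper = np.array([450.0, 4.0, 14.0, 75.0])

# Sobol sequence scaled onto the parameter box (powers of 2 keep the
# sequence balanced; the study used ~32.4 million sets).
sampler = qmc.Sobol(d=4, scramble=False)
gsp_sets = qmc.scale(sampler.random(2**15), lower, upper)

def first_best_found(sim_dates, obs_dates):
    """Scan a database of simulated anthesis dates for one line.

    sim_dates: array (n_gsp_sets, n_site_years) of simulated dates.
    obs_dates: array (n_site_years,) of observed dates for the line.
    Returns the index of the first minimal-RMSE GSP set, its RMSE,
    and the number of ties (the extent of equifinality).
    """
    rmse = np.sqrt(np.mean((sim_dates - obs_dates) ** 2, axis=1))
    best, best_idx, ties = np.inf, -1, 0
    for i, r in enumerate(rmse):
        if r < best:          # strictly better: new first-best-found
            best, best_idx, ties = r, i, 0
        elif r == best:       # equal RMSE: evidence of equifinality
            ties += 1
    return best_idx, best, ties
```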
During the database tabulation, the "best combination of parameter estimates seen so far" was updated only if its RMSE value was strictly better than all previously evaluated ones. Thus, the first single estimate encountered giving a minimum RMSE was reported. This is referred to below as the "first-best-found" estimate. The number of subsequently examined estimates having the same RMSE as the first-best-found is the extent of equifinality.

Relationships among parameter estimates. In genetics, one expects to see trait correlations, the architecture of which is central to understanding and prediction. Thus, a possibility was that correlations among GSP estimates might reflect biologically important differences among populations. Scatter plots and Pearson correlations were used to examine the relations among parameters.

Testing for parameter stability across environments. To determine whether the GSP estimates depended on the particular set of environments used to obtain them, a novel statistical approach was developed. A subset of 539 lines was identified that were present in all 11 site-years. Next, all 330 mathematical combinations of the 11 site-years chosen seven at a time were constructed. The number seven was selected because preliminary Sobol database tabulations revealed that equifinality increased dramatically when fewer than seven site-years were used in estimation (see Results). We conducted 177,870 (= 539 × 330) line × environment-subset parameter searches. Because equifinality might reduce the power of the statistical test used to detect instability (next paragraph), 114,314 searches were discarded because they had ties. Of the 330 site-year subsets run, 297 were identified that had at least 100 lines remaining after ties were removed. Each of the 539 lines was present in at least 28 site-year subsets. By this process, an overall total of 60,834 estimates were generated for each of the four GSP's in the study.

The following statistical model was used to test for stability in parameter estimates across environmental subsets:

ρ_l,e = μ + α_l + β_e + ε_l,e,   (2)

where ρ_l,e represents an estimate of the GSP ρ (i.e., either P1, P2, P2O, or PHINT) for the l-th line (l = 1, 2, ..., 539) obtained from the e-th site-year subset (e = 1, 2, ..., 297); μ is the intercept parameter, acting as an overall mean of GSP ρ across all lines and site-year subsets; α_l is the differential random effect of line l, assumed to be distributed α_l ~ N(0, σ²_l); β_e is the differential random effect of the e-th subset of site-years, assumed to be distributed β_e ~ N(0, σ²_e); and ε_l,e is the remaining residual unique to the (l,e)-th observed GSP estimate, assumed ε_l,e ~ NIID(0, σ²_ε).

The differential line effects α_l are considered to be random, as is common in field studies of plant population biology. Further, the differential effects of site-year subsets, β_e, were treated as random because the corresponding environmental subsets are combinations of 7 out of 11 plantings considered to be a representative, if not random, sample of the population of possible site-years to which we are interested in inferring. If the estimation of any GSP parameter ρ were stable across the site-year subsets, one would expect the variance of β_e, namely σ²_e, to be zero; alternatively, if estimation is unstable, one would expect σ²_e > 0.
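This boundary hypothesis test was carried out with lmer in R, as described next. A rough Python analogue using statsmodels is sketched below; it assumes a long-format data frame with columns `gsp`, `line`, and `env` (our names) and uses the statsmodels variance-component idiom for crossed random intercepts.

```python
import numpy as np
import statsmodels.formula.api as smf
from scipy import stats

def stability_lrt(df, gsp_col="gsp"):
    """LRT for the site-year-subset variance component in Eq (2).

    df: long-format frame with one GSP estimate per row and columns
    `line` and `env`. Both random effects enter as variance components
    under a single dummy group, the statsmodels pattern for crossed
    random intercepts.
    """
    ones = np.ones(len(df))
    full = smf.mixedlm(f"{gsp_col} ~ 1", df, groups=ones,
                       vc_formula={"line": "0 + C(line)",
                                   "env": "0 + C(env)"}).fit(reml=False)
    reduced = smf.mixedlm(f"{gsp_col} ~ 1", df, groups=ones,
                          vc_formula={"line": "0 + C(line)"}).fit(reml=False)
    lrt = 2.0 * (full.llf - reduced.llf)
    # Boundary test: 50:50 mixture of chi-square(0) and chi-square(1),
    # i.e., half the chi-square(1) tail probability.
    p_value = 0.5 * stats.chi2.sf(lrt, df=1)
    return lrt, p_value
```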
To test this hypothesis set, two competing versions of the statistical model in Eq (2) were fit, one with and one without the random effect of site-year subsets β_e, for each of the GSP's ρ = P1, P2, P2O, and PHINT. For each GSP, the two competing models were compared using a likelihood ratio test statistic against a central chi-square distribution with half a degree of freedom to account for the fact that the test was conducted on the boundary of the parameter space. Statistical models were fit using the linear mixed-effects model package lmer in R [35] with optimization based on the log-likelihood option. The lmer package also calculated the Akaike and Bayesian Information Criteria (AIC [36] and BIC [37], respectively), which allowed additional assessments of fit for the statistical models that included or excluded the random effects of site-year subsets.

Observations vs. simulations

The overall model fit was quite good. In plots of observed vs. simulated days to anthesis for the 49,491 line × site-year combinations across the 11 plantings (Fig 2), the symbols were concentrated along the identity line, with an overall estimated RMSE of 2.39 days. To put this value in context, other studies have reported RMSEs of 0.91 to 3.2 days [38,39], prediction errors of 6 to 12 days [40], a mean deviation of 10.1 days [41], and a standard deviation of 6 days [42]. The detailed scatter plots and RMSE for each site-year are presented in Fig […].

Equifinality

A total of 2,254 lines exhibited equifinality. Of these, 2,153 lines had 40 or fewer ties (Fig 3A) and the remaining 101 had from 41 to over 1 million ties (Fig 3B). The number of ties per line (traces in Fig 3A and 3B; right axes) was extreme when there were fewer than seven observations per line (Fig 3B). Anthesis dates for each line common to both NY6 and NY7 are plotted at coordinates corresponding to their paired simulated (Fig 4A) and observed (Fig 4B) values. The seemingly smaller number of data symbols in Fig 4A is due to identical simulated anthesis dates for many lines, leading to overlap in the plot. The symbol colors show the extent of P1 equifinality on a log10 scale. The symbol sizes encode the ranges of the equifinal P1 estimates for each line as a percentage of the mean; these vary from 0.36% for the smallest symbols to 65.68% for the largest. The association of redder colors with larger symbols indicates that the ranges of equifinal GSP estimates do, indeed, increase with the extent of equifinality. When the data were plotted at their observed dates (Fig 4B), the resulting cloud was more dispersed than that of the simulated symbols (Fig 4A), showing that the model responses to the environment were less variable than the responses of real plants. However, a large number of lines (blue symbols in Fig 4B) had observed anthesis dates that failed to overlap any of the simulated values (Fig 4A). This is indicated by the red line, which was drawn in Fig 4A and then copied exactly into Fig 4B. From here forward, the red line is referred to as the "expressivity frontier". It differentiates those observations that the model is able to reproduce (above the expressivity frontier) from those below the frontier, which the model cannot simulate.
To our knowledge, most previous phenology modeling studies, not only in maize [41] but in other crops as well (e.g., wheat [44], soybean [30], rice [45]), have reported results solely in terms of prediction accuracy without recognizing this second, quite distinct, and unexpected category of model misbehavior. The deleterious effect of this phenomenon on the ability to link ecophysiological models with genetics is examined further in the Discussion. Related details are discussed further in the section on "Model expressivity".

Relations among parameter estimates

In examining possible relations among parameters, two anomalous features were noted (Fig 5). First, a pronounced banding pattern appeared in all plots except, perhaps, P2O vs. PHINT. Most bands were linear except for those on the scatterplot of P2O and P2, which showed curvature. Second, a vertical gap appeared in all P2O scatterplots. These patterns proved to be symptomatic of serious issues in the estimation process, as described further in the sections below.

Model expressivity

To understand the patterns in Fig 4, we explored the "phenotype space" delimited by the observed and simulated anthesis date data for all sites where more than one year of data existed (Fig 6). Except for North Carolina, there were many lines for which no GSP values in the ranges examined allowed the model to reproduce the observed anthesis data. Such observations are hereinafter termed "inexpressible", and the remaining data are described as "expressible". Although the parameter ranges used were in general agreement with prior biological knowledge, the possibility remained that inexpressible observations resulted either because the ranges were too narrow or due to some artifact of the Sobol database method itself (e.g., its discrete character). To evaluate these possibilities in a computationally efficient way, the CERES-Maize anthesis date routine was ported to Python and fit to a single pair of site-years (NY6/NY7) using differential evolution (DE [46]), a well-established continuous optimization algorithm. Like the Sobol database algorithm, DE allows range limits to be set. Intentionally disregarding prior biological knowledge for the purposes of this test, these limits were specified to be much broader than expectations based on maize biology (Table 3). Despite this much larger parameter space, the expressivity range of the model was not extended. The results of the DE searches (Fig 7, dark blue) almost exactly reproduce the expressible data (yellow data symbols) but do not extend beyond the Sobol database region (light blue) to reach any previously inexpressible observations (red data symbols). This suggests that there might be intrinsic expressivity issues in the model, at least as used in this study, that go beyond the search algorithm used or the parameter space examined. During the review process, the utility of these extended parameter ranges was questioned. Reviewers also suggested that the NY6/NY7 results might be accounted for by not having optimized the base (Tbase) and optimum (Topt) temperature parameters. These parameters control the conversion of daily temperatures to thermal time increments. To explore these twin issues, a six-parameter scan was done that (1) included Tbase and Topt and (2) set the parameter ranges according to values recommended for CERES-Maize in the cultivar and ecotype files provided in DSSAT v4.5 (Table A in S4 File). These ranges are slightly narrower than those in Table 2.
The results were that (1) inclusion of Tbase and Topt had no qualitative impact on the results and (2) any quantitative improvements were more than offset by the parameter range narrowing. Specifically, expressivity declined at all sites. The parameter ranges used and the resulting analogs of Figs 5 and 6 are, respectively, shown in Table A and Figs A and B in S4 File. A deeper investigation (Fig 8) of the values estimated for expressible (yellow) and inexpressible (red) observations demonstrated a link to the scatterplot banding in Fig 5 for P1 and P2O. In particular, banding was very pronounced near P1 = 250. Tabulation of the Sobol database estimates (Fig 8A) revealed that 68.2% of the lines had P1 estimates ranging from 245 to 260. Of these, 31.7% (36.5%) of the lines were expressible (inexpressible). Similar proportions were found for the DE estimates (Fig 8B), reinforcing the kindred results of the two search methods. Despite their superficial visual differences, both graphs have 4,731 symbols, the number of lines planted in both NY6 and NY7. The number of apparent symbols is smaller in Fig 8A because of the discrete nature of the Sobol database; this contrasts with Fig 8B, where DE is a continuous search and the parameter range is wider. Furthermore, a phenotype space graphic in numeric form provided more detail (Fig 9). The black numbers in the blue region are the first-best-found P1 values that generate the corresponding row and column anthesis date combinations. Note that these tend to be close to 250 along and near the expressivity frontier. The red values are the numbers of lines whose anthesis date combinations were not expressible by the model. The RMSE for inexpressible observations was minimized by assigning the GSP values associated with the closest achievable dates.

P2O gap

Of the 11 site-years, three (FL6, FL7, and PR6) had decreasing day lengths during Stage 2, all of which were less than ca. 11.5 h. P2O estimates based on these site-years showed a gap (Fig 10A and 10E). In contrast, the other eight site-years all had Stage 2 photoperiods longer than the maximum allowed in the Sobol database search (14 h). P2O estimates obtained using these data exhibited no gap (Fig 10B and 10F). When estimates were computed using any data from the three southern site-years, a gap resulted, in particular as seen in the combination of all 11 site-years (Fig 10C and 10G). An optimizer will find any way that it can to reproduce the observations, i.e., minimize RMSE. In this case the "P2O gap" results from an interaction between the model's equations for anthesis dates and the range of parameter values allowed. For any given value of P1, the predicted anthesis date is determined by the combination of P2O, P2, and PHINT values. In southern states with short photoperiods, there were two cases to be considered: lines with longer vegetative periods and those with shorter ones. In the former case, the optimizer would select a P2O that was much shorter than the actual (already short) day lengths. It could then achieve the needed delays by selecting combinations of P2 and PHINT to create a best match with the dates observed across all sites. For lines with shorter vegetative periods, the optimizer selected P2O values greater than the actual day length so that Stage 2 only lasted for four days. Again, however, the needed intervals were obtained by adjusting PHINT.
Because of equifinality, all values of P2O in excess of the observed photoperiod were equally workable in the latter case. Similarly, there were multiple workable combinations of P2O and P2 in the former case. The result was a P2O gap bracketing the actual photoperiods at each of the southern sites. In the case of northern sites, the actual photoperiods exceeded the 14-h limit put on the Sobol database. Thus, in the north, all lines were analogous to short vegetative lines in the south. In the effective absence of two categories, no band was seen in the north. When the P2O limit was extended in the DE search, all southern cases with gaps were rendered equivalent to the northern sites and the gaps disappeared (Fig 10D and 10H).

Tests for stability of GSP estimates

The effect of including or excluding the effect of different subsets of site-years on each GSP estimate (Eq 2) was hugely significant (chi-square p-values, Table 4). The AIC and BIC values for all GSPs were considerably smaller for models that included the random effect of site-year subsets, β_e, therefore also suggesting non-negligible variability. To illustrate the size of the site-year subset effects, an Index of Variability (IoV) was calculated as the standard deviation of the β_e effect, normalized by the grand mean (the intercept μ in Eq (2)) and expressed as a percentage. The percentage of the total GSP variance (σ²_e + σ²_l + σ²_ε) attributable to site-year subsets was also calculated. Both descriptors indicated substantial variability between site-year sets, with indexes of variability ranging from 5.9% for P2O to 33.6% for P2 and over 20% of the total variance related to site-year sets for all GSPs. All of these statistics demonstrate that the GSPs based on this model structure are not, in fact, genotype-specific despite the goodness-of-fit displayed in Fig 2. This result is understandable given the range of artifacts due to equifinality and model expressivity issues identified above (Figs 4-10) along with the unevenness of their distribution across site-year × line combinations (Table 4). The possible causes and implications of these findings are discussed next.

Discussion

Since their inception, ecophysiological models have been evaluated in terms of predictive ability, which is often superb [47]. In such work, ECM parameters were considered to be inputs whose genesis was secondary as long as the model outputs proved useful. However, perceived needs, desiderata, and requirements escalate as technologies evolve. It is now expected that the model inputs, themselves, be the accurate outputs of processes at the genetic level that can be modeled by genomic prediction. It is not surprising, therefore, that modeling technologies that were adequate for past applications now require improvement. Usually, experimental GSP measurement requires intricate and/or intensive protocols. GSPs are also rather numerous in modern ECMs. In combination, these factors make their direct determination infeasible for more than a few lines. This mandates indirect inference of GSPs via model inversion [48]. There are, however, multiple ways that inverse studies can go awry that can affect the usability of the end results.
A non-exhaustive list of these includes (1) an inappropriate objective function for measuring goodness-of-fit, (2) inadequate sampling of the parameter space, (3) errors in the observed phenotype data, (4) errors in the model input data, and (5) structural issues and errors in model representation of biological/environmental process interactions. Relative to the first point, this study employed RMSE as the objective function, making the searches congruent with the virtually universal nonlinear least squares. The second potential pitfall, parameter space sampling, was explicitly addressed in three ways. First, a parameter range was used that is far wider than what might be deemed biologically reasonable (but see the discussion of P2O below). Second, the Sobol database approach guaranteed that the range was sampled at a uniform density limited only by the amount of computing power available. Conventional search algorithms, even global optimizers, can be inferior in this regard, especially when objective functions have highly deceptive goodness-of-fit landscapes [49]. Third and finally, the achieved sampling density was maximized by the use of supercomputing resources. Additionally, many traditional optimization algorithms are like DE in that they only produce single point estimates [46,50,51] and thus lack any ability to assess equifinality or model expressivity. This gives the Sobol database used herein a clear superiority in that it can evaluate the properties of both the parameter and phenotype spaces. Moreover, when used with parameter ranges that exceed those ascertained from prior knowledge, Sobol database methods reveal the upper, structural limits on model expressivity. The degree of expressivity shown when ranges are set by prior knowledge can easily depend on which "prior knowledge" is used (e.g., Fig 6 vs. Fig A in S4 File). A parameter space scan using an extended range establishes a benchmark against which such variation can be interpreted. With respect to trait scoring errors, anthesis date phenotyping is extremely common and, as described in the Methods section, standard sampling methods were used. While site-specific observer effects cannot be discounted as being present, these same data sets have been successfully used in a variety of other studies [4,5]. Even if adverse observer effects were to be documented at some later time, they would only reinforce the central point of this paper, namely that refined protocols for GSP estimation are required. Relatedly, an examination of the weather data inputs (germane to point 4) was instructive. As shown in Table A in S1 File, when compared to a gridded data set, there were small, site-dependent, near-constant, inter-annual offsets (i.e., biases) and/or swings (i.e., variation). Moreover, these had variably sized effects on predicted anthesis dates (Fig C in S1 File). Of course, it is not possible to tell which is more accurate, the gridded data or the local observations. Although soil and terrain data were not used in this study, their notorious spatial variability would only exacerbate whatever issues are attributable to weather data herein. Indeed, at least one GSP estimation study has documented improved prediction stability with increases in the amounts of soil data used [25].
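To make the sampling scheme just discussed concrete, a minimal sketch of uniform-density, quasi-random coverage of a four-parameter space using SciPy's Sobol generator (an assumption for illustration; the study's own tabulation code is not shown, and the parameter bounds below are invented rather than the ranges in its Table 2):

    import numpy as np
    from scipy.stats import qmc

    # Illustrative (hypothetical) bounds for P1, P2, P2O, PHINT.
    lower = np.array([100.0, 0.0,  9.0, 30.0])
    upper = np.array([450.0, 4.0, 15.0, 75.0])

    sampler = qmc.Sobol(d=4, scramble=False)
    unit_points = sampler.random_base2(m=10)     # 2**10 = 1024 points
    grid = qmc.scale(unit_points, lower, upper)  # map to the bounds
    print(grid.shape)                            # (1024, 4)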
More on these issues will be said below, along with discussion related to model structure (point 5). Inadequate expressivity is more damaging than equifinality because it is unlikely to be alleviated by optimization constraints in the form of more data or more data types. The only solution for a lack of model expressivity is to develop models that better represent the underlying physiological processes. For example, although CERES-Maize includes equations for estimating emergence dates, the corresponding GSPs for this component were left at default values because emergence data were available neither in the published dataset nor in the field trial reports (E. Buckler, personal communication). This effectively removed seedling emergence as a means to distinguish between lines. Extrapolating this example, one can readily imagine models that omit other critical processes altogether. Adding new processes to such models will likely require estimating more GSPs, similar to what would have been necessary to include emergence date data in this study. However, this will increase equifinality. Large numbers of equifinal points (e.g., a million ties in some cases) are a mathematical issue resulting from model structure (an aspect of point 5, above) and data. Stated graphically, model revisions that move the expressivity frontier down and rightward in Fig 4B will make more symbols redder and larger. The overall solution is first to increase model expressivity and then to include more observations of more model variables to reduce equifinality. Current research and development efforts aimed at high throughput phenotyping (HTP) technology will be helpful in adding new data types. For example, if one assumes that TOLN = SUMDTT/(PHINT×0.5) + 5 is the correct way to model the number of leaves at anthesis, then HTP data on total leaf number would allow optimizers to particularly favor PHINT trial estimates that approximated 2×SUMDTT/(TOLN−5), as illustrated in the sketch at the end of this passage. In real-world situations, equifinality concerns not only model parameters but also gene action. However, when gene-level equifinality occurs (i.e., multiple pathways produce the same phenotype), one would expect the mapping step to reveal all active, contributory genes. When alternative sets of genes are stably present and act toward similar ends across environments then, ipso facto, their markers will have strong associations with the GSP values. Thus, genetic equifinality would be detected during the GSP mapping phase. Alternatively, another approach to reduce equifinality is to simplify models. Simpler models would have fewer GSPs and fewer indirect mathematical pathways through which changes in one parameter could be exactly offset by changes in others. Of course, fewer GSPs, along with possible reductions in the range of processes modeled, might limit model plasticity to the detriment of expressivity. Whatever path is taken, the Sobol scheme herein can be used to assess trade-offs between model equifinality and expressivity, thus providing a valuable tool in facilitating the linkage of more fundamental biological traits with their underlying genetics. However, forging links between traits and genetics requires parameter stability.
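The PHINT relation referred to above can be made concrete with a small sketch; the thermal-time and leaf-number values are hypothetical, and the function simply inverts the stated leaf-number equation:

    def phint_from_leaf_number(sumdtt, toln):
        # Invert TOLN = SUMDTT / (PHINT * 0.5) + 5 for PHINT, so that an
        # HTP-observed total leaf number (TOLN) pins down PHINT given the
        # accumulated thermal time (SUMDTT, degree-days).
        return 2.0 * sumdtt / (toln - 5.0)

    # Hypothetical values for illustration only:
    print(phint_from_leaf_number(sumdtt=450.0, toln=20.0))  # 60.0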
Instability can occur when: (1) undiscovered equifinality is present, and the solutions found depend on low-level algorithmic idiosyncrasies of the optimizer; (2) a stable answer exists but the optimizer is insufficiently skilled to find it; (3) a stable and possibly even unique answer exists within the skill level of the optimizer to find but, because of a large number of parameters, the values obtained reflect noise signals that differ between environments; or (4) the model incompletely or incorrectly disentangles G × E. Explanation (1) is unlikely in this study, first because rampant equifinality was, in fact, discovered and ties were explicitly excluded from the evaluation of instability and, second, because two different optimizers (Sobol and DE) performed similarly. Explanation (2), unskilled optimization, also seems unlikely given the small RMSE values achieved. Explanations (3) and (4) are interrelated in that they are both additional examples of model structural issues, which was point (5) in the above list of GSP estimation frailties. In this study, there were detectable systematic differences in the weather data collected in different site-years (Fig A in S1 File). Moreover, although small (Table A in S1 File), these differences had a measurable effect on anthesis date predictions (Fig C in S1 File). However, the use of very large data sets confers an extraordinary and, perhaps, excessive power to detect GSP site-year dependencies (Table 4). For example, a visual comparison of Fig C in S1 File with the NY and FL panels in Fig 6 shows that the effects due to weather instabilities are insufficient to compensate for the lack of expressivity. Thus, it seems likely that there are remaining G × E disentanglement issues in this model. To the extent that the statistical instability test is deemed overly sensitive, the IoV might be a better index for practical interpretation. Even so, a clear implication is that field researchers must seek methods of abiotic measurement that better characterize the actual environments experienced by the plants. For example, by combining high temporal and spatial resolution canopy temperature data from UAV-mounted sensors with an ECM that simulates crop development responses to canopy temperature, external measurements of some environmental variables (e.g., air temperature) could, perhaps, be foregone. Whatever is done, of course, one cannot accurately estimate the controlling parameters without collecting data in settings wherein the relevant processes operate differentially. Another problem can arise from adverse interactions between model structure (point 5 yet again) and the specification of prior knowledge. This is clear from a comparison of the P2O results for the Sobol database and DE searches. In the former, the estimates were unrealistically compressed into two restricted ranges separated by a gap (Fig 10A, 10C, 10E and 10G). During the second run, however, because of equifinality in combination with wider permitted ranges, the optimizer (DE) found a different way to "explain" the observed anthesis dates. Specifically, it shifted the main explanatory burden from P2O, P2, and PHINT to PHINT alone. This eliminated the gap found in the first search but spread the P2O estimates out until they attained values considered biologically unrealistic (Fig 10D and 10H). This forcefully makes the point that unexpected, highly counterintuitive, and even counterfactual interactions can occur during estimation.
Such artifacts might not have been observed before because previous studies (e.g., [41]) have not explored the parameter space with methods able to reveal them. The debilitating influence of all of the behaviors seen herein on attempts to link parameter values to genes is, unfortunately, quite obvious. An additional concern with the quality of the data was that only 539 out of 5266 lines had anthesis date observations for all plantings and, where anthesis data were lacking, no information was given as to whether the field plot died prior to the expected anthesis date or failed to reach anthesis, presumably due to high photoperiod sensitivity. Besides documenting a need for more complete plot-level data, this imbalance in the representation of lines across environments suggests that some more global notion of balance needs to be established and applied for use in ECM inversion. However, given the expense of such large-scale trials and the multiple purposes they serve, "balance" cannot mean "orthogonality", with all lines planted at all sites. There is a large literature on methods for optimizing experimental designs [52-54]. Perhaps such methods should be applied at levels higher than the single field trial, with the needs of GSP estimation being a specific criterion receiving consideration.

Conclusions

The anthesis date component of the CERES-Maize model was fitted with data from 5266 maize lines including the maize NAM population. Despite the model's high predictive ability, issues of expressivity, equifinality, and instability were identified. Although the analysis of GSPs as crop traits still seems highly promising, the problems noted with CERES-Maize simulations of anthesis date were severe enough to preclude use of its estimated GSP values in mapping analyses. Model inversion using the Sobol database approach proved especially useful because, unlike other optimizers that find single point estimates of GSPs, this algorithm revealed both the extent of equifinality and the boundaries of the expressible phenotype region. It should be employed more broadly, for example, with additional models of maize phenology [41] and, beyond this, with complex traits in other crop models. The constraining issues fall mainly into three categories. The first arises in situations where the model is unable to express the observed data for some lines, even by a relatively small number of days. In this circumstance, a line is assigned the GSP associated with the nearest point on the model's expressivity frontier. The result is that many, even a majority, of lines are assigned the same GSP values independent of their genetics. The second issue arises when the model can reproduce the data but there are many combinations of GSP values that predict equally well, i.e., equifinality. When equifinality exists, there is no principled way to assign the line a genetically relevant value. The third issue, which can arise in either equifinal or inexpressible situations, is when GSP estimates are unstable, i.e., they vary depending on the set of environments used to determine them. In this case, simulation outputs will be suspect when the model is applied to environments not used in estimation. The importance of these issues cannot be overstated. Community interest in the GSP-fitting-and-mapping paradigm is high, as shown by the heavy citation rates for the seminal papers in this area.
For example, as of January 2018, the classic paper [7] had been cited 304 times, and those publications, themselves, had been cited by 9,713 others (source: Google Scholar). Indeed, it is quite unclear that there is an alternative approach for linking genotypes to phenotypes in situations involving non-constant environments and interacting, nonlinear biological processes. However, without the ability to obtain stable and unambiguous GSP estimates that fully reproduce the data's observational range, the paradigm breaks down. Therefore, it is mandatory that the GSP estimation issues raised herein be addressed and resolved. Of course, one cannot fix problems one cannot first detect. Doing so will require more and better data, but also improved metadata. For example, there is a need to consider better ways to quantify the abiotic conditions actually experienced by the plants, as well as the protocols and quality of that data. On the biotic side, we need more crop development and status data, as discussed in the emergence date example above. Current burgeoning research on high throughput phenotyping may help meet critical needs in this area by expanding the range of traits that can be quantified while also increasing their temporal frequencies and reducing the errors (especially observer effects) inherent in manual measurement. Additionally, more than 11 site-years are needed for this type of work. The CERES-Maize GSPs studied here had large values for the Index of Variability because only 11 site-years of data were used. Therefore, it seems highly unlikely that the values obtained will generalize. Even if the IoV values had been much smaller, it is hard to believe that 11 plantings adequately capture the environmental variability of US corn production. High throughput phenotyping will also help to meet this need. However, all of these improvements will increase computational loads despite the efficiencies of the Sobol database method when used for large numbers of lines. Therefore, strong consideration should be given to disaggregating comprehensive models into separate modules that can be studied independently at much lower computational cost. (This was done here when the anthesis submodel was ported to Python to study an expanded parameter space.) A good long-term strategy would be to program future models in a manner that supports single-module testing at the source code level. In conclusion, there is no doubt as to the importance of the ability to predict the behaviors of novel genotypes in novel environments while crosses are still in the planning stage. Indeed, this is precisely the genotype-to-phenotype problem, which has been declared by the National Research Council to be a top-priority goal for applied biology [1]. These impediments therefore need to be overcome. With methods like the ones advanced here for detecting adverse model behaviors under estimation, emerging technologies for collecting ever larger and higher quality data sets, and research that is probing ever more deeply into plant biological processes and their controls, and despite the huge amount of work to be done, there is no reason to believe that we will not be successful.

Acknowledgments

We acknowledge the Texas Advanced Computing Center (TACC; http://www.tacc.utexas.edu) at The University of Texas at Austin and Beocat at Kansas State University for providing high performance computing resources that have contributed to the research results reported within this paper. We also thank Hsiaoyi Hung (North Carolina State Univ.)
for providing the daily weather datasets. We also greatly appreciate the input of two reviewers, whose suggestions have strengthened the paper. Support for this effort was also supplied by the Department of Agronomy at Kansas State University. This paper is contribution number 17-134-J of the Kansas Agricultural Experiment Station, Kansas State University, Manhattan, KS. USDA and Kansas State University are equal opportunity providers and employers.
2018-04-27T04:56:13.396Z
2018-04-19T00:00:00.000
{ "year": 2018, "sha1": "f70e21a788687612c647f0cf0695d2394a676cc6", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0195841&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f70e21a788687612c647f0cf0695d2394a676cc6", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
257851454
pes2o/s2orc
v3-fos-license
A focus on coordination chemistry at chlorine

The first crystallographic characterization of chloronium cations stabilized by pyridine ligands (P. Pröhm, W. Berg, S. M. Rupf, C. Müller and S. Riedel, Chem. Sci., 2023, https://doi.org/10.1039/D2SC06757A) is discussed in the context of coordination chemistry at chlorine.

Coordination chemistry has most frequently been associated with classic transition metal-ligand complexes, where ligands donate two electrons to metals to form sigma bonds, often designated as coordinate or dative bonds. There is also rich coordination chemistry in the main group, with a classic example being the many adducts to boron or the heavier, more metallic, p-block elements. Generally, more electronegative atoms are more electron rich and less likely to act as Lewis acids. The halogens (F, Cl, Br, I) are an electronegative group of elements and their chemistry is dominated by gaining electrons and existing in the −1 oxidation state. As such, there is relatively less coordination chemistry for the halogens as compared to the other groups; nonetheless, they do act as Lewis acids in compounds where they are found in higher oxidation states. Charge transfer complexes of the type Nu-X-X are well studied. 1 They are most stable and common for iodine, and several compounds have been crystallographically characterized, including for pyridine 2 and phosphine ligands, 3 for example. They are less common for bromine but can be observed 4 and, in more rare cases, isolated. 5,6 For chlorine there are few examples where the charge transfer complex can be observed, 7 although they have been well studied computationally. 8 Unsurprisingly, iodine, as the least electronegative member of the group (radioactive astatine aside), has the best-known and most stable coordination compounds where the halogen is found in the higher +1 oxidation state. The bis-pyridine iodonium salt [Pyr-I-Pyr][BF4], known as Barluenga's reagent, is commercially available and widely used. 9 The bromine analogue has been known for a very long time and has been crystallographically characterized, 10-12 but rarely used, with a SciFinder search returning only a few dozen papers over 60 years compared to a few hundred for iodine. The chlorine analogue was first detected in solution at −80°C by Erdélyi and coworkers in 2014, and they found that, as predicted by theoretical studies, the analogue forms a symmetric [Pyr-Cl-Pyr]+ species. 13 [Pyr-F]+ is a commercially available electrophilic fluorinating reagent, and Erdélyi found that addition of a second pyridine to this compound resulted in an asymmetric environment at low temperature. Additionally, our group found that attempts to observe any bis-pyridine adduct at room temperature, or to perform ligand exchange, resulted in decomposition to complex mixtures. 14 This brings us to the remarkable recent results from Riedel and co-workers, who were able to crystallographically characterize both mono- and bis-pyridine adducts of chloronium cations [Cl]+, as well as a pyridine-Cl-Cl charge transfer complex, for the first time. 15 For the syntheses, the problem of obtaining stoichiometric amounts of Cl2 gas was solved by first condensing and weighing Cl2 into pressure tubes. This was then condensed onto the appropriate amount of pyridine or lutidine in propionitrile at −196°C. Solutions were warmed to −40°C to allow for reactions and cooled back to −80°C to obtain crystals for X-ray analysis. The care required to achieve this is remarkable! For the reactions of pyridine and lutidine with Cl2, differing results were observed.
For pyridine, a Pyr-Cl-Cl complex was crystallized, while for lutidine, a [Lut-Cl-Lut][Cl3] salt was obtained. This is a good illustration of the divergent results that can be obtained with subtle changes in systems containing weak bonds. The reactions begin with pyridine interacting with the σ* orbital of chlorine, forming the complex observed for pyridine. This also induces weakening and polarization of the Cl-Cl bond. For the slightly stronger Lewis base lutidine, the polarization likely becomes strong enough to allow another Cl2 to abstract a chloride, with onward reactivity to the cationic complex. Polyhalogen anions are not ideal counterions as they bring substantial reactivity into systems. For the iodine and bromine analogues this issue was solved using Ag+ cations to further polarize the Pyr-X-X complex, precipitating out AgX and delivering the chosen counterion paired with the silver cation. Using this strategy in the chlorine case, the bis-pyridine chloronium salt of the delivered counterion is the sole product observed. In all cases the N-Cl-N units show a shorter bond and a longer bond, but in solution the compounds appear symmetrical, as previously determined, 13 and theoretical calculations also indicate identical N-Cl bonds in the geometric minima. The asymmetry likely arises from solid-state packing effects, and this has a substantial influence due to the weak nature of the bonds in these complexes. All of the compounds described decompose if exposed to a temperature of −10°C. The team attempted to generate a mono-pyridine chloronium complex via abstraction of Cl− from Pyr-Cl-Cl with a variety of halide abstraction agents, but this resulted in complex mixtures. Aryl groups are often susceptible to decomposition in the presence of strong halogen electrophiles, and this is likely the mode of decomposition here. To achieve the target complex, pentafluoropyridine was employed, where the perfluorination protects the ring from electrophilic aromatic substitution reactions, alongside [Cl2F][AsF6] as the Cl+ source. This exotic salt, previously known but not structurally characterized until this report, was generated from ClF and AsF5 in anhydrous HF that had been dried (repeatedly) with elemental F2. The skills and techniques required to do this chemistry are becoming a dying art and this synthesis is likely not achievable in many laboratories, but it does nicely illustrate the lengths that researchers need to go to in the name of inventing new chemistry! The N-Cl bond in the perfluorinated [Pyr-Cl]+ cation is shorter than the corresponding bond for normal pyridine (1.69 Å vs. 1.76 Å). In the report this was indicated as surprising given the weaker Lewis basicity of the fluorinated pyridine, but it is likely due to the trans effect of the polarized chlorine being greater than the trans effect exerted by the weakly coordinating [AsF6]−. Like solid-state effects, trans effects have a substantial influence in systems containing weak bonds. In summary, this discovery by Riedel and co-workers is an outstanding example of the exquisite care and skill sometimes needed to bring something appearing so simple (pyridine + Cl2!) to the light of day. It will be exciting to see how this chemistry can be developed further, potentially, for example, as a source of electrophilic chlorine, or if derivatives/conditions can be found that allow for the "bottling" of such chlorine cations for convenient storage and later use.

Author contribution

Article written by J. Dutton.

Conflicts of interest

There are no conflicts to declare.
2023-03-31T15:15:28.401Z
2023-03-28T00:00:00.000
{ "year": 2023, "sha1": "f507dee91e280263deb9724d7421b2ee687a9037", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/sc/d3sc90047a", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "932167aa04ad867ad44c6f777460ba4158ac3d8f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
55602653
pes2o/s2orc
v3-fos-license
A COMPARATIVE STUDY OF PLAIN X-RAY MASTOIDS WITH HRCT TEMPORAL BONE IN PATIENTS WITH CHRONIC SUPPURATIVE OTITIS MEDIA

BACKGROUND: Chronic suppurative otitis media is inflammation of the middle ear cleft. Inflammation of the middle ear affects mastoid pneumatisation, and this can be assessed radiologically.

PURPOSE OF THE STUDY: The present study was undertaken to assess and compare the status of pneumatisation in patients with chronic suppurative otitis media on x-ray of both mastoids and HRCT of the temporal bone.

MATERIALS AND METHODS: A prospective study was done from January 2014 to December 2014 on 45 patients presenting with chronic suppurative otitis media.

RESULTS: The pneumatisation of air cells was better appreciated with HRCT when compared to x-ray of the mastoids.

CONCLUSION: A well-taken x-ray of the mastoids provides the mastoid pneumatisation status. Detection of air cells on HRCT is superior to that on x-ray of the mastoid. The extent of disease involving the mastoid air cells and mastoid antrum is also better assessed with HRCT.

INTRODUCTION: The mastoid air cell system forms an important contribution to middle ear ventilation and acts as a surge tank of air. Depending on the presence of air cells, the mastoid is classified as pneumatised, diploeic or sclerosed. In chronic suppurative otitis media there is concomitant involvement of the mastoid air cell system. The extent of mastoid air cell involvement depends on the type of CSOM. The relation between mastoid pneumatisation and middle ear disease has been controversial. This is explained by two theories. According to Diamant's hereditary theory, the extent of pneumatisation is genetically determined and reduced pneumatisation predisposes to acute or chronic otitis. 1 According to the environmental theory of Tumarkin, middle ear diseases are the cause of reduced pneumatisation in infants and children. 2 The tegmen of the mastoid and the attic is usually oriented in the horizontal plane, slightly lower than the arcuate eminence, which is formed by the top of the superior semicircular canal. In coronal sections, the floor of the middle cranial fossa deepens to form a groove lateral to the attic and labyrinth. Low-lying dura may cover the roof of the external auditory canal. The sigmoid sinus forms a shallow indentation on the posterior aspect of the mastoid. Occasionally, the sinus courses more anteriorly and produces a deep groove in the mastoid, best seen in axial sections. 3 Routinely, x-rays of the mastoids are advised as a pre-operative imaging modality. Only a few centres focus on computed tomography of the temporal bone as a pre-operative radiological imaging modality. The excellent resolution of computed tomography provides unprecedented detail of the temporal bone. 4

MATERIALS AND METHODS: A prospective study was done on 45 patients presenting with chronic suppurative otitis media from January 2014 to December 2014. After a thorough history and complete clinical examination, these patients were subjected to both x-ray of the mastoids and high resolution computed tomography of the temporal bone. X-rays of the mastoids were obtained by Law's view bilaterally, and high resolution computed tomography of the temporal bone was obtained with 1 mm cuts in axial and coronal planes.

Purpose of the study: 1. To compare the pneumatisation in chronic suppurative otitis media on x-ray of both mastoids and HRCT of the temporal bone. 2. To assess the presence of cavities, low-lying dura and an anteriorly placed sigmoid sinus.
Statistical analysis was performed using sensitivity and specificity, and the chi-square test.

RESULTS: Among the 45 cases of chronic suppurative otitis media, there were 20 male and 25 female patients. 34 patients (76%) were in the 20-50 years age group. The youngest patient was 9 years old and the oldest was 52 years old. The mean age was 31 years. Among the 45 patients, 36 were diseased unilaterally and 9 bilaterally, so there were 54 diseased ears. By clinical diagnosis, tubotympanic type of CSOM was seen in 37 cases (16 on the right side and 21 on the left side), and atticoantral type of CSOM was seen in 17 cases (10 on the right side and 7 on the left side).

Table 4: Similarity and difference of the x-ray and HRCT in normal and diseased ears
               Coinciding   Difference
Normal ear     97.2%        2.8%
Diseased ear   88.9%        11.1%

Fig. 2: Sclerosed mastoid on A) x-ray and B) HRCT

In normal ears, the x-ray and HRCT findings of mastoid status coincided in 97.2% of cases; in one case (2.8%) there was a difference, as shown in Table 4. In this case the mastoid was sclerotic on x-ray but diploeic on HRCT. In diseased ears, the x-ray and HRCT findings of mastoid status coincided in 88.9%, with a difference seen in 11.1%. The difference between x-ray and HRCT in the 6 discordant diseased ears is shown in Table 5.

Table 5: The difference in 6 cases between x-ray and HRCT
X-ray         HRCT
2 sclerosed   2 diploeic
4 diploeic    4 pneumatised

DISCUSSION: Pneumatisation may be defined as the process of air-space formation within the temporal bone. The process of pneumatisation begins with the resorption of mesenchyme early in the third foetal month. The potential air spaces do not contain air until the child is born. 5 Resorption of mesenchyme progresses rapidly during the first two months of infancy. It is practically complete in the middle ear by the sixth month and in the mastoid antrum by the first birthday. From this time onwards, pneumatisation of the mastoid is solely a matter of resorption of the haemopoietic marrow in the diploeic bone. 5 Radiological evidence of pneumatisation in the mastoid is not usually present until about the third year of childhood. 5 The pneumatisation of the mastoid region may be divided into three types: 1) pneumatic, 2) diploeic and 3) sclerotic mastoid. The non-pneumatised areas are the bone marrow (in the diploeic mastoid) and the dense bone (in the sclerotic mastoid). These patterns were well appreciated on HRCT. 7 The mastoid antrum may be the only air-filled space in the mastoid process, when the name acellular or sclerotic is applied. This occurs in 20% of adults with chronic suppurative otitis media. 8 Normally, pneumatisation is symmetrical in 72-99%. 9,10 When pneumatisation is affected, the ear is suspected to be diseased, with the possibility of new bone formation and hence sclerosis. In studying temporal bone pneumatisation, high resolution computed tomography (HRCT) must be used, because this technique has the advantage of showing the complete pneumatisation with excellent resolution, as observed by Virapongse et al. 7 Holmquist stated that the success of middle ear surgery depends on the degree of mastoid pneumatisation. 11 In our study, in diseased ears, x-ray of the mastoids revealed a pneumatised mastoid in 25.9%, diploeic in 7.4%, sclerosed in 57.4% and a diseased cavity in 9.3% of the cases. These findings were similar to a study by Tripti. 12 HRCT temporal bone revealed a pneumatised mastoid in 33.3%, diploeic in 3.7% and sclerosed in 53.7%.
These findings were similar to Tripti. 12 On the normal side, x-ray of the mastoids revealed a pneumatised mastoid in 61.1%, diploeic in 2.8% and sclerosed in 36.1% of the cases. HRCT temporal bone revealed a pneumatised mastoid in 61.1%, diploeic in 5.6% and sclerosed in 33.3% of the cases. In a study by Sethi et al, in the normal ear, 84% had a well pneumatised mastoid air cell system and 16% had poor pneumatisation on x-ray. 13 Of the 37 cases of tubotympanic type, x-ray of the mastoids revealed a pneumatised mastoid in 35.1%, diploeic in 5.4% and sclerosed in 59.5% of cases. HRCT temporal bone revealed a pneumatised mastoid in 40.5%, diploeic in 2.7% and sclerosed in 56.8% of the cases. In the atticoantral type, out of 17 cases, a pneumatised mastoid on x-ray was observed in 5.9%, diploeic in 11.8% and sclerosed in 52.9% of the cases. A sclerotic mastoid was seen on x-ray in all patients with cholesteatoma in a study by Santosh et al. 14 HRCT temporal bone revealed a pneumatised mastoid in 17.6%, diploeic in 5.9% and sclerosed in 47.1% of the cases. A cavity was seen in 5 (29.4%) cases. These findings were similar to Kataria et al. 15 A p-value of less than 0.05 on the chi-square test was considered statistically significant. A low-lying dura was seen in 4 cases and an anteriorly placed sigmoid sinus in 3 cases on HRCT. These findings were similar to Karaca et al. 16 A contracted antrum was seen in 3 cases. Among the pneumatised mastoids on HRCT, the mastoid air cell groups observed were squamomastoid air cells in all, with perilabyrinthine air cells in 25%, petrous apex pneumatisation in 10% and accessory pneumatisation in 25%. X-ray and HRCT findings correlated well in 48 cases; in the remaining 6 cases HRCT detected pneumatisation better. The diploeic mastoid on x-ray was detected as pneumatised on HRCT in 4 cases, and the sclerosed mastoid on x-ray as diploeic on HRCT in 2 cases. The sensitivity and specificity of x-ray and HRCT were 100%. Although x-ray provided the broad findings, HRCT gave thorough detail regarding the air cells. The types of air cells are better identified with HRCT. HRCT predicts the type of mastoid pneumatisation accurately, which correlates with the study by Vlastarakos et al. (2010), who found strong agreement for mastoid cell aeration. 17

CONCLUSION: A well-taken x-ray of the mastoids provides the mastoid pneumatisation status. However, the air cells and pneumatisation pattern, low-lying dura and an anteriorly placed sigmoid sinus are better detected on HRCT. The extent of disease involving the mastoid air cells and mastoid antrum is also better appreciated with HRCT. Initially an x-ray of the mastoids can be taken and, depending on the disease and the surgical plan, HRCT of the temporal bone can then be obtained.
2019-03-16T13:13:55.418Z
2015-04-13T00:00:00.000
{ "year": 2015, "sha1": "ab1b60fed596897833c7f760661b08822af08f06", "oa_license": null, "oa_url": "https://doi.org/10.14260/jemds/2015/758", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7aeb93b19e41572ae40c6ecfb9626ff900778496", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3210530
pes2o/s2orc
v3-fos-license
Exercise induction of gut microbiota modifications in obese, non-obese and hypertensive rats

Background: Obesity is a multifactor disease associated with cardiovascular disorders such as hypertension. Recently, gut microbiota was linked to obesity pathogenesis and shown to influence the host metabolism. Moreover, several factors such as host genotype and lifestyle have been shown to modulate gut microbiota composition. Exercise is a well-known agent used for the treatment of numerous pathologies, such as obesity and hypertension, and it has recently been demonstrated to shape gut microbiota consortia. Since exercise-altered microbiota could possibly improve the treatment of diseases related to dysfunctional microbiota, this study aimed to examine the effect of controlled exercise training on gut microbial composition in Obese rats (n = 3), non-obese Wistar rats (n = 3) and Spontaneously Hypertensive rats (n = 3). Pyrosequencing of 16S rRNA genes from fecal samples collected before and after exercise training was used for this purpose.

Results: Exercise altered the composition and diversity of gut bacteria at genus level in all rat lineages. Allobaculum (Hypertensive rats), Pseudomonas and Lactobacillus (Obese rats) were shown to be enriched after exercise, while Streptococcus (Wistar rats), Aggregatibacter and Sutterella (Hypertensive rats) were more abundant before exercise. A significant correlation was seen for the Clostridiaceae and Bacteroidaceae families and the Oscillospira and Ruminococcus genera with blood lactate accumulation. Moreover, Wistar and Hypertensive rats were shown to share a similar microbiota composition, as opposed to Obese rats. Finally, Streptococcus alactolyticus, Bifidobacterium animalis, Ruminococcus gnavus, Aggregatibacter pneumotropica and Bifidobacterium pseudolongum were enriched in Obese rats.

Conclusions: These data indicate that non-obese and hypertensive rats harbor a gut microbiota different from that of obese rats and that exercise training alters the gut microbiota of animals from an obese and a hypertensive genotype background. Electronic supplementary material: The online version of this article (doi:10.1186/1471-2164-15-511) contains supplementary material, which is available to authorized users.

Background

Exercise practice is a non-pharmacological treatment for a series of diseases [1]. Along with dietary control, appropriate exercise programs are proposed to treat and attenuate obesity [2] and its associated cardiovascular disorders such as hypertension [3]. It is known that hypertension and obesity frequently coexist [4], affecting millions of people worldwide [5]. Gut microbiota have recently been indicated as having a close relationship with obesity, where the microbiota of an obese subject presents an enhanced ability to harvest energy and accumulate fat [6]. The gut harbors the greatest density of these microorganisms in the body (e.g., up to 1.5 kg in the human gut) [7], with Firmicutes, Bacteroidetes and Actinobacteria constituting the dominant phyla [8]. Moreover, obesity is associated with reduced microbiota diversity at phylum level [9], as seen in rodents and in humans [10,11]. The gut is a dynamic environment, highly exposed to environmental factors such as diet [12], antibiotics [13], pathogens [14] and lifestyle [15], which constantly interact with microbial communities. In addition, gut microbiota is also shaped throughout life by host-related factors such as host genotype [16].
Disturbance within gut microbiota has been reported to influence host susceptibility to pathogens and pathological conditions such as gastrointestinal inflammatory diseases and obesity [17]. Moreover, hypertension induction was also seen to alter gut microbiota [18]. It has been proposed that dysbiosis and pathologies associated with unbalanced gut microbiota may be prevented or treated with prebiotics, probiotics and fecal microbiota transplantation [19,20]. In addition, controlled exercise intensities are related to protective effects on the gastrointestinal tract, including a reduced risk of colon inflammation and cancer [21]. It is proposed that exercise may reduce intestinal transit time, diminishing the contact between the colon and cancer-promoting agents [22]. Recently, exercise has also been shown to induce alterations within microbiota composition [22,23], which suggests that exercise may be included as a possible therapeutic factor along with diet, prebiotics and other treatments. Since exercise plays a prominent role in metabolic regulation and energy expenditure, it might modulate the host-microbiota interaction, affecting the host metabolism. Although these relations are still unknown, exercise may enhance the strategies for obesity control, along with other actions such as microbiota transplant [20]. Although voluntary exercise was shown to alter microbiota in non-pathological animals [22-24], its effects on gut microbiota still need to be further investigated in pathologic phenotypes and under controlled parameters such as exercise volume and intensity. Therefore, in the present study, we proposed to examine the effect of controlled moderate exercise intensity on gut microbial status in rats with different phenotypes, by using pyrosequencing of 16S rRNA genes from fecal microbiota samples. To our knowledge, this is the first study to use controlled exercise parameters and distinct animal strains with different obesity and hypertension genotypes. Analyzing 16S rRNA sequences revealed a similar microbiota profile shared between Wistar and Hypertensive rats, with both being divergent from Obese rats. Exercise was shown to enhance bacterial diversity and to alter microbial communities at the species level in all animal lineages. Thus, these data contribute to the emerging knowledge regarding the effect of exercise on gut microbiota, but further studies should be performed to establish the mechanism by which exercise signals in the bacterial community and to determine the impact of these modulations on host homeostasis.

Animals

Animals were obtained from the Federal University of São Paulo, Brazil (UNIFESP) and started the experiment at ~18 weeks of age. Three different strains from two different genotypes were used: an obese genotype, homozygous (fa/fa) obese Zucker rats [25] (Obese rats; n = 5, 389.4 ± 21 g), and a hypertensive genotype, spontaneously hypertensive rats (SHR; Hypertensive rats; n = 5, 227.4 ± 29.3 g), a strain obtained by the selective breeding of Wistar-Kyoto rats with high blood pressure [26]. A strain of Wistar rats (WR; n = 5, 223.2 ± 27.3 g) was used as a normotensive control for SHR and as a non-obese phenotype [26] (Additional file 1). All animals were allocated to collective cages according to their lineages, being kept in a 12 h light-dark cycle environment, with food and water ad libitum.
All experimental procedures and interventions in the present study involving animal welfare were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and were approved by the local ethics committee for standards in animal use at the Institute of Biological Sciences, University of Brasilia, Brazil (UnBDOC no. 48695/2010); these were also in accordance with international standards. After the experimental procedures, all animals were deeply anesthetized with 2% Xylazine (50 mg.kg−1) and 10% Ketamine (80 mg.kg−1) and euthanized by cervical dislocation. Throughout the entire experiment, all efforts were made to minimize animal suffering.

Exercise training

Before the training period, all animals underwent a familiarization period on a treadmill device (Li 870, Letica Scientific Instruments, Barcelona, Spain) for 2 weeks to reduce external stress. Duration and speed on the treadmill were increased progressively (up to 12.5 m.min−1 for Obese rats; 20 m.min−1 for Hypertensive and Wistar rats) as previously described [27,28]. In addition, blood pressure of Wistar and Hypertensive rats was measured by the tail-cuff method [29] at the beginning of the experiment to characterize the hypertensive phenotype of the SHR strain (171.4 ± 7.7 mmHg for Hypertensive rats and 128 ± 5.9 mmHg for Wistar rats) (Additional file 2). As regards the Obese rats group (obese (fa/fa) Zucker rats), these animals are homozygous for the fa allele and are one of the most common models of genetic obesity, in which rats normally exhibit hyperlipidemia, hyperinsulinemia, hyperphagia and significant weight gain by the 3rd to 5th week of age [25]. In this model, conflicting results have been reported on whether obese Zucker rats are hypertensive compared to lean control rats, with some studies showing that blood pressure is not elevated in this model [30,31]. However, the obesity phenotype may enhance arterial peripheral resistance, and the animal may develop hypertension secondary to obesity and to secondary mechanisms [32,33]. Considering that obese Zucker rats do not present a regular pattern of blood pressure, as reported by different studies, systolic blood pressure was not measured in this group. After the adaptation period, all animals were trained for 30 min per day, 5 days per week, for 4 weeks. Running intensity was set corresponding to the maximal lactate steady state (MLSS) previously identified in obese Zucker rats [28] and SHR rats [27]. Therefore, for Obese rats, running velocity was set at 12.5 m.min−1 and for Hypertensive and Wistar rats at 20 m.min−1. A new MLSS identification was performed after the fourth week of exercise training to assess each animal's cardiovascular adaptation.

Blood lactate analysis

Capillary blood samples (10 μL) were collected through a small incision in the distal portion of the tail of the animals at rest and every 5 min during the MLSS test. Capillary blood samples were placed in microtubes (0.6 mL) containing 20 μL of 1% sodium fluoride and stored at −20°C. Analyses were performed by an electroenzymatic method with a YSI 1500 Sport analyzer (Yellow Springs, OH, USA). MLSS was considered attained when there was no increase of more than 1 mmol.L−1 in blood lactate from 10 to 25 min of the exercise tests [34]. The bacterial community partial 16S rRNA gene was amplified with the primer pair 787F-1492R [35].
For pyrosequencing analysis, the primer set was modified: the forward primer included the Roche 454-A pyrosequencing adapter and a 12-bp barcode (unique to each sample), while the reverse primer included only the Roche 454-B pyrosequencing adapter. The 20 μL reaction mixture for the PCR contained approximately 10 ng of metagenomic fecal DNA, 1X PCR buffer (Invitrogen), 3.0 mM MgCl2, 10 pmol of each primer, 0.25 mM dNTP, and 1.5 U Taq DNA polymerase (Invitrogen). The cycling protocol started with an initial denaturation step of 3 min at 95°C, followed by 25 cycles of denaturation for 30 s at 95°C, annealing for 30 s at 58°C, and extension for 1.40 min at 72°C, followed by a final extension for 7 min at 72°C and cooling to 10°C. Finally, the rRNA amplicons from the bacterial communities were purified with the QIAquick PCR Purification Kit (Chatsworth, CA). The concentrations of the rRNA amplicons were measured with a Qubit fluorometer (Invitrogen), and subsequently massively parallel GS FLX Titanium sequencing was performed at the Roche 454 Life Sciences Corporation, Branford, CT, USA.

Analysis of 16S rRNA sequences

A total of 1,398,681 16S rRNA sequences were obtained on the 454 GS FLX Titanium sequencer. All 16S amplicons were processed with the Quantitative Insights Into Microbial Ecology (QIIME) pipeline version 1.6.0-dev [36]. Briefly, all 16S rRNA amplicons were sorted by their barcodes; reads with a length of less than 180 bp and ambiguous sequences were removed, as were bases with Phred values <30 and the primer, barcode and adaptor sequences. The remaining sequences were submitted to the Denoiser algorithm [37] to remove pyrosequencing errors. Operational taxonomic units (OTUs) were clustered at 97% similarity using an 'open-reference' OTU picking protocol, where sequences are clustered against the Greengenes database [38] using Uclust [39]. A representative, most-abundant read from each OTU was aligned using the PyNAST algorithm [36]. Chimeric OTUs were detected with ChimeraSlayer [40], and taxonomic classifications were assigned with the naïve Bayesian Ribosomal Database Project (RDP) classifier [41], applying an 80% confidence threshold. Shannon indices and observed richness were used to evaluate community richness, and the unweighted UniFrac algorithm was used to generate principal coordinate (PCoA) plots.

Statistical analysis

Statistical differences between the groups pre-exercise and post-exercise were tested using the analysis of similarities (ANOSIM) by permutation of group membership with 999 replicates [42], and bivariate relationships were measured with Pearson correlations and regression analysis available through QIIME [36]. Statistical tests on the taxonomic differences between samples were calculated by Welch's t-test combined with Welch's inverted method for calculating confidence intervals (nominal coverage of 95%), using the Statistical Analysis of Metagenomic Profiles (STAMP) software version 2.0.0 [43].

Accession number

The 454 FLX Titanium flowgrams (sff files) have been submitted to the National Center for Biotechnology Information (NCBI) Sequence Read Archive database, project number: PRJNA246617.
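A minimal sketch of the read-level quality criteria described above (illustrative only: the actual filtering was done within the QIIME 1.6 pipeline, which trims low-quality bases, whereas for brevity this sketch simply rejects whole reads):

    def passes_quality_filter(seq, quals, min_len=180, min_phred=30):
        # Reject reads shorter than 180 bp, reads containing ambiguous
        # bases (N), and reads with any Phred score below 30.
        if len(seq) < min_len:
            return False
        if "N" in seq.upper():
            return False
        if min(quals) < min_phred:
            return False
        return True

    # Hypothetical 200-bp read with uniform quality 35:
    print(passes_quality_filter("ACGT" * 50, [35] * 200))  # True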
Effect of exercise training on aerobic capacity
After four weeks of treadmill running exercise at moderate intensity, a new MLSS assessment showed that the animals in each group had enhanced their aerobic capacity, as demonstrated by the improvement in the MLSS-corresponding velocity (15 m.min−1 for Obese rats and 30 m.min−1 for Hypertensive and Wistar rats) (Figure 1A). Furthermore, when the initial and final velocities of exercise were compared, a significant reduction was evidenced in blood lactate concentration (BLC) of ~49% for Wistar rats, ~39% for Hypertensive rats and ~33% for Obese rats. This reduction in BLC (before vs. after exercise training) demonstrates the effectiveness of the proposed exercise training intensity for all animals (p < 0.01) (Figure 1B).

Composition of fecal microbiota in rat lineages
After quality filtering, 889,124 out of 1,398,681 sequences were obtained from fecal samples collected pre-exercise and post-exercise, after 4 weeks of moderate exercise training (Additional file 1). An average of 49,951 denoised sequences per animal was obtained (average read length of 524.8 bp), composing an average of 583.1 distinct observed OTUs. Post-training samples presented a higher Shannon index than pre-training samples (6.8 ± 0.2 vs. 6.4 ± 0.5) (Additional file 3). Detailed sequencing information for all individual rats is presented in Additional file 3. Bacterial diversity was assessed by rarefaction of observed species against the number of sequences per sample, and the observed OTUs were identified at 97% identity. Here, the rarefaction measure showed that species diversity (Additional file 4: A, Wistar rats; B, Hypertensive rats; C, Obese rats) reached a plateau tendency in all samples as the number of sequences increased, indicating that in the present study more sequences would be unlikely to yield many additional species. As demonstrated in Additional file 4, the rarefaction curves revealed that post-exercise fecal samples were more species-rich than pre-exercise fecal samples. This was further evidenced in the Hypertensive and Obese rats samples (Additional file 4B and C). The relative abundance of the main dominant bacterial phyla in all fecal samples collected before and after exercise training is shown in Additional file 5A. Here, Firmicutes and Bacteroidetes were the most dominant phyla, followed by Proteobacteria. Firmicutes was shown to be enhanced after exercise training (1.1-fold change, p < 0.05) (Additional file 5B), being most evident in Obese rats (Obese rats, 0.69 ± 0.03 vs. Exercised Obese rats, 0.78 ± 0.04; p < 0.05). On the other hand, Proteobacteria was shown to be 1.8-fold reduced after exercise training (p < 0.05) (Additional file 5C). The Bacteroidetes phylum was shown to be 1.3-fold reduced after exercise only in Wistar rats (Wistar rats, 0.23 ± 0.04 vs. Exercised Wistar rats, 0.17 ± 0.03; p < 0.05).

Composition of bacterial communities before, during and after exercise training
The relative abundance at the bacterial genus level for all animal lineages in response to exercise training is reported only for genera that presented variation at a significant level (p < 0.05) (Figure 2).
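The rarefaction procedure referred to above (observed OTU richness as a function of subsampled sequencing depth) can be sketched as follows. This is an illustrative re-implementation with invented reads, not the pipeline actually used in the study.

```python
import random

def rarefy_richness(otu_labels, depths, reps=10, seed=1):
    """For each depth, subsample reads without replacement and
    count distinct OTUs, averaged over `reps` draws."""
    rng = random.Random(seed)
    curve = {}
    for d in depths:
        counts = [len(set(rng.sample(otu_labels, d))) for _ in range(reps)]
        curve[d] = sum(counts) / reps
    return curve

# Invented sample: 5,000 reads spread over 600 OTUs
reads = [f"OTU{i % 600}" for i in range(5000)]
print(rarefy_richness(reads, depths=[100, 500, 2000, 5000]))
# richness climbs steeply, then plateaus, as in the rarefaction curves
```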
In Wistar rats (Figure 2A), Streptococcus was the only genus that presented a significant alteration in its abundance: untrained rats were more enriched with Streptococcus than post-exercise rats (p < 0.05) (Figure 2A). In Hypertensive rats, three genera (Allobaculum, Aggregatibacter and Sutterella) were shown to be altered by exercise training. Despite minimal variation in the relative abundance of Allobaculum between pre-exercise and post-exercise samples, this genus was enriched by exercise training (p < 0.05) (Figure 2B). In contrast to Allobaculum, Aggregatibacter and Sutterella were both more abundant in pre-exercise samples (Figure 2B). Aggregatibacter presented minimal variation in relative abundance between fecal samples pre- and post-exercise training; however, exercise was shown to reduce the abundance of this genus (p < 0.05) (Figure 2B). The Sutterella genus was also shown to be more enriched pre-exercise, with a greater relative proportion of this genus in Hypertensive rats (p < 0.05). In Obese rats, Pseudomonas and Lactobacillus were both significantly altered after exercise training (Figure 2C). Minimal variation in Pseudomonas relative abundance was observed between samples (p < 0.05), while Lactobacillus presented the highest relative abundance after exercise of all identified genera (p < 0.05) (Figure 2C). The proportion of sequences (%) of the main bacterial species in fecal samples collected before and after exercise training is shown in box plots (Figure 3). In pre-exercise fecal samples (Figure 3A and B), only two species (Bacteroides acidifaciens and Ruminococcus flavefaciens) presented a significant differential abundance, in contrast to fecal samples post-exercise training, where five species (Streptococcus alactolyticus, Bifidobacterium animalis, Ruminococcus gnavus, Aggregatibacter pneumotropica and Bifidobacterium pseudolongum) presented a differential abundance (Figure 3C-G). Of all samples (pre- and post-exercise), only one species (Ruminococcus flavefaciens) was less abundant in obese animals (Figure 3B), with all other species being significantly more enriched in the obese animals (Figure 3C-G). Regarding pre-exercise samples, the proportion of sequences from the Bacteroides acidifaciens species was significantly more abundant in Obese rats than in Wistar and Hypertensive rats (p < 0.05) (Figure 3A). However, an opposite profile was observed for sequences attributed to the Ruminococcus flavefaciens species, with greater abundance in Wistar rats followed by Hypertensive rats, and no abundance in Obese rats (p < 0.05) (Figure 3B). After exercise training, the proportion of sequences indicated that all listed species (Streptococcus alactolyticus, Bifidobacterium animalis, Ruminococcus gnavus, Aggregatibacter pneumotropica and Bifidobacterium pseudolongum) were more abundant in Obese rats than in the Wistar and Hypertensive rat lineages (Figure 3C-G, respectively). The relative abundance of Streptococcus alactolyticus in Obese rats diverged significantly from Hypertensive and Wistar rats (p < 0.05), with a diminished proportion of sequences seen in both of those strains (Figure 3C). The Bifidobacterium animalis species was seen to be highly enriched in Obese rats (p < 0.05) and absent in Wistar and Hypertensive rats (Figure 3D). In relation to Ruminococcus gnavus, this species was poorly represented in Wistar rats and almost absent in Hypertensive rats, but more abundant in Obese rats (Figure 3E).
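Per the methods, these taxon-level group comparisons were run as Welch's t-tests in STAMP. A minimal equivalent in Python, with invented relative-abundance triplicates rather than the study data, might look like this:

```python
from scipy import stats

# Invented relative abundances (%) of one genus in triplicate samples
pre_exercise  = [2.1, 1.8, 2.4]
post_exercise = [0.6, 0.9, 0.7]

# Welch's t-test: unequal variances assumed, as STAMP does
t, p = stats.ttest_ind(pre_exercise, post_exercise, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> differential abundance
```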
The Aggregatibacter pneumotropica species presented a similar profile to the previous species, also being more abundant in Obese rats than in Wistar and Hypertensive rats (p < 0.05) (Figure 3F). Lastly, from the Actinobacteria phylum, Bifidobacterium pseudolongum abundance was almost exclusive to Obese rats (p < 0.05), being almost absent in Hypertensive rats and completely absent in the Wistar rats group (Figure 3G).

Principal coordinates analysis (PCoA)
Principal coordinates analysis (PCoA) of unweighted UniFrac distances was calculated and compared between all fecal samples collected pre- and post-exercise from the three rat lineages in order to assess similarity in microbiota composition and the effect of exercise training (Figure 4). The three biological replicates from each animal lineage (Wistar, Hypertensive and Obese rats) were shown to cluster together with a high correlation (R = 0.79, p < 0.001). The UniFrac (PCoA) analysis showed that Wistar and Hypertensive rats share a similar bacterial composition, clustering far from Obese rats, indicating a distinct bacterial community composition between these rat lineages (Figure 4). It also indicated that the microbiota of Wistar, Hypertensive and Obese rats was significantly altered by exercise training, with pre-exercise samples clustering significantly far from fecal samples collected after four weeks of exercise training (Figure 4). However, even though exercise altered microbial community composition in every animal lineage, Wistar and Hypertensive rats still maintained a similar bacterial composition, still clustering far from the Obese rats microbiota (Figure 4).

Correlation of bacterial abundance and lactate concentration
As shown in Figure 1A, when the initial and final velocities of exercise training were compared, a significant reduction in BLC was evidenced in all rat lineages, where a lower BLC (mean of 2.3 mmol.L−1, Figure 1B) is associated with an improved aerobic capacity in the trained state compared to the higher BLC (3.8 mmol.L−1) of untrained rats (pre-exercise samples from all rat lineages). Therefore, fecal bacterial communities were plotted against BLC to establish a correlation between microbial abundance and training status (Figure 5). OTUs from two bacterial families (Clostridiaceae and Bacteroidaceae) and two genera (Oscillospira and Ruminococcus) were found to be significantly correlated with BLC. The OTU abundance of both bacterial families was negatively correlated with BLC (Clostridiaceae, R = −0.82, p < 0.01; Bacteroidaceae, R = −0.73, p < 0.01). In both cases, higher OTU abundance was observed to correlate with lower lactate concentrations, indicating that exercise training may be favorable to the proliferation of these OTUs from both bacterial families (Figure 5A and B). The relative abundance of OTUs from the Bacteroidaceae family was shown to be close to zero when BLC reached ~4 mmol.L−1 (Figure 5B), being associated with untrained status. Regarding the genera, the abundance of OTUs from the Oscillospira and Ruminococcus genera presented divergent correlations with BLC. OTUs from Oscillospira were shown to be positively correlated with BLC (R = 0.78, p < 0.01) and Ruminococcus negatively correlated (R = −0.75, p < 0.01) (Figure 5C and D).
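The R values quoted here are plain Pearson correlations between OTU abundance and BLC, with a regression line fitted through the same points. A sketch with invented paired measurements (not the study data):

```python
from scipy import stats

# Invented paired observations: relative OTU abundance (%) vs. BLC (mmol/L)
abundance = [5.2, 4.8, 3.9, 2.5, 1.4, 0.9, 0.3, 0.1]
blc       = [2.0, 2.2, 2.5, 2.9, 3.3, 3.6, 3.9, 4.1]

r, p = stats.pearsonr(abundance, blc)
slope, intercept, *_ = stats.linregress(abundance, blc)
print(f"R = {r:.2f}, p = {p:.3f}")   # strongly negative, as for Bacteroidaceae
print(f"BLC ~ {slope:.2f} * abundance + {intercept:.2f}")
```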
The OTUs from the Oscillospira genus were shown to be almost absent at lower lactate concentrations, increasing in abundance at concentrations over 3.5 mmol.L−1, which indicates that exercise training may be an unfavorable factor for the proliferation of a specific OTU from the Oscillospira genus in the gut environment (Figure 5C). However, the OTUs from the Ruminococcus genus were shown to be more abundant at lower lactate concentrations, with almost no abundance at higher BLC, indicating that exercise training may also influence proliferation in this genus.

Discussion
Several environmental [44] and host-related factors [16] are known to affect gut microbiota composition. This dynamic ecosystem, highly susceptible to external agents, has a symbiotic link with host health homeostasis. In this context, an imbalanced gut microbiota has been associated with the development of inflammatory gastrointestinal diseases, obesity and altered metabolic status [19]. Recently, physical activity was shown to modulate gut microbiota in diet-induced obesity [24] and in healthy rodents, altering microbiota diversity and composition [23] and increasing n-butyrate concentration in the cecum [22]. In contrast to some of these previous studies, which used PCR-TGGE [22] and PCR-DGGE of bacterial 16S rRNA genes [23], here robust pyrosequencing of the 16S rRNA genes was used, along with controlled exercise training parameters, to investigate this relationship in non-pathologic and pathologic rat models. Rarefaction measurements and the Shannon index indicated that exercise training enhances bacterial diversity in non-pathological Wistar rats as well as in Obese and Hypertensive rats (Additional files 3 and 4). Here, Firmicutes and Bacteroidetes were found to be the most predominant phyla in all animal lineages (Additional file 5A). This predominance was also seen in the mouse cecum [6] and in exercised rats [23]. Considering all rat lineages, exercise was shown to enhance Firmicutes abundance and to diminish Proteobacteria content (Additional file 5B, C). Thus, Firmicutes was more abundant in post-exercise than in pre-exercise samples in obese rats (Obese rats, 0.69 ± 0.03 vs. Exercised Obese rats, 0.78 ± 0.04; p < 0.05), while Bacteroidetes was shown to be reduced after training only in Wistar rats (Wistar rats, 0.23 ± 0.04 vs. Exercised Wistar rats, 0.17 ± 0.03; p < 0.05). Moreover, Bacteroidetes has been reported to be diminished in obese mice [10], while the ratio of Firmicutes to Bacteroidetes was shown to change in favor of Bacteroidetes in overweight and obese subjects compared to the lean group [45]. Thus, as previously stated by Harris et al. [7], studies of the gut microbiota and its relation to metabolic disorders have revealed no difference between obese and lean individuals at the phylum level. However, the data reported here showed a significant alteration in bacterial community abundance at the phylum as well as the genus level (Figure 2), which could be associated with the effects of exercise and/or pathological conditions. Furthermore, our study revealed a significant alteration in bacterial community abundance at the genus and species levels as an effect of exercise and/or pathological stimuli (Figure 2). In accordance with the PCoA analysis presented in this study (Figure 4), other studies have also reported a distinction between non-obese and obese microbiota from Zucker fa/fa rats [46] and ob/ob mice [10]. We also reported that Wistar and Hypertensive rats share a similar microbiota composition (Figure 4). In a similar way, it was reported that rats treated with the nitric oxide synthase inhibitor NG-nitro-L-arginine methyl ester (L-NAME) develop hypertension, with a variation in cecal microbiota compared to control normotensive rats [18]. Regarding the effect of exercise, the PCoA analysis demonstrated that four weeks of moderate exercise training significantly altered microbiota composition in all rat lineages (Figure 4). In line with our results, different exercise volumes (6 days [23], 5 weeks [22] and 12 weeks [24] of voluntary running exercise) were shown to alter microbiota composition, indicating that the microbial community is affected even by a few days of exercise. Together, these data suggest that, besides other well-known factors, exercise may be seen as a potential environmental factor capable of modulating gut microbiota. Here, exercise was also shown to significantly alter six bacterial genera (Figure 2). Fecal samples were more enriched with Allobaculum (Hypertensive rats), Pseudomonas (Obese rats) and Lactobacillus (Obese rats) after exercise training, while Streptococcus (Wistar rats), Aggregatibacter (Hypertensive rats) and Sutterella (Hypertensive rats) were more abundant before exercise training was performed (Figure 2). The Lactobacillus genus presented higher abundance after exercise only in Obese rats (13.4%, p < 0.05) (Figure 2C). In agreement with our data, the recent study of Queipo-Ortuño et al. [23] revealed that Lactobacillus was also enhanced with exercise in lean rats, with a longer exercise stimulus (6 weeks). Lactic acid bacteria (LAB), represented in our study by Lactobacillus (enriched after exercise), are associated with the mucosal surface of the small intestine and colon in animals, where they produce lactic acid through homo- or heterofermentative metabolism [47]. In the latter process, besides lactic acid, CO2, acetic acid and/or ethanol are produced [48], which may contribute to a more acidic environment [48]. It has been reported that LAB in the gastrointestinal tract lead to positive health benefits, with influence on the microflora, modulation of mucosal immunity and exclusion of pathogens [47].

[Figure 3 caption: Species abundance profile of fecal samples before and after exercise training. Box plots showing the distribution of the proportion of sequences (%) of the main species of each rat lineage (A, Bacteroides acidifaciens; B, Ruminococcus flavefaciens; C, Streptococcus alactolyticus; D, Bifidobacterium animalis; E, Ruminococcus gnavus; F, Aggregatibacter pneumotropica; G, Bifidobacterium pseudolongum) without exercise training (white box) and with exercise training (black box). The median value is shown as a line within the box and the mean value as a star; statistical significance was defined as p ≤ 0.05.]

[Figure 4 caption: Effect of exercise training on bacterial community. Principal coordinates analysis (PCoA) of unweighted UniFrac distances generated from fecal samples in Wistar rats (squares), Hypertensive rats (circles) and Obese rats (diamonds), collected from triplicate rats without exercise training (white symbols) and with exercise training (black symbols). The ANOSIM similarity analysis confirmed that the samples harbor distinct bacterial communities.]
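The ANOSIM test behind Figure 4 (999 permutations of group membership over a UniFrac distance matrix) has a standard implementation in scikit-bio. The sketch below assumes that package is available; the distance matrix, sample IDs, and grouping are invented stand-ins for the study data.

```python
import numpy as np
from skbio import DistanceMatrix
from skbio.stats.distance import anosim

ids = ["pre1", "pre2", "pre3", "post1", "post2", "post3"]
groups = ["pre", "pre", "pre", "post", "post", "post"]

# Invented symmetric matrix standing in for unweighted UniFrac distances
d = np.array([
    [0.0, 0.2, 0.3, 0.7, 0.8, 0.7],
    [0.2, 0.0, 0.2, 0.8, 0.7, 0.8],
    [0.3, 0.2, 0.0, 0.7, 0.8, 0.7],
    [0.7, 0.8, 0.7, 0.0, 0.2, 0.3],
    [0.8, 0.7, 0.8, 0.2, 0.0, 0.2],
    [0.7, 0.8, 0.7, 0.3, 0.2, 0.0],
])

result = anosim(DistanceMatrix(d, ids), grouping=groups, permutations=999)
print(result["test statistic"], result["p-value"])  # R near 1 -> distinct groups
```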
The enrichment of Lactobacillus in Obese rats after exercise may have some influence on gastrointestinal acidity through the production of acidic compounds (e.g., lactic acid, acetic acid); however, this parameter was not measured in the present study. Capillary blood lactic acid was measured in order to establish aerobic capacity and thus serve as a parameter of adaptation to exercise. Moreover, the lactate produced by Lactobacillus bacteria is converted into butyrate in the gut by bacteria such as B. coccoides and E. rectale, which were also found to be enhanced after exercise [23]. Furthermore, butyrate has been shown to be related to mucin synthesis and gut epithelium protection [49]. In another study, Matsumoto et al. [22] showed that exercise altered the microbiota and enhanced n-butyrate concentration in the rat cecum. Therefore, the enhanced Lactobacillus found in the Obese rats group may possibly play a positive role in the gastrointestinal environment of these animals. It has been reported that obesity-associated gut microbiota is enriched in some species of Lactobacillus (e.g., Lactobacillus reuteri) [50], while other species (e.g., Lactobacillus gasseri BNR17) are involved in metabolism regulation [51], presenting anti-obesity effects [52]. In our study, the Sutterella genus was more abundant before exercise training in Hypertensive rats (Figure 2B). The role of this genus in inflammatory bowel disease has recently been investigated, with no relation being found [53]. Nevertheless, more research is needed to understand the relation of Sutterella to exercise and its possible gastrointestinal protective effect. These alterations in genera are not consistent across all host phenotypes. We believe that part of this inconsistency may be related to the biologic differences peculiar to each host genotype used in the study. Obesity has been shown to modulate gut microbiota [10,11], while no study has yet shown this in a hypertensive phenotype. Furthermore, as a first exploratory study to use different genotypes with different pathologies, it is interesting to note that the gut microbiota of 3 different phenotypes (and 2 genotypes) was possibly altered by an external factor such as exercise. Seven bacterial species were shown to have a significantly different abundance between the three animal lineages (Figure 3). In fecal samples collected pre-exercise, only Bacteroides acidifaciens and Ruminococcus flavefaciens presented a significant differential abundance (Figure 3A and B). While Bacteroides acidifaciens was more enriched in Obese rats compared to Wistar and Hypertensive rats (Figure 3A), Ruminococcus flavefaciens showed an opposite profile, being more enriched in Wistar rats followed by Hypertensive rats and significantly depleted in Obese rats (Figure 3B). B. acidifaciens has recently been shown to have an important role in the production of immunoglobulin A (IgA) in the large intestine of mice [54]. This production plays an adaptive role in the intestinal mucosal immune system [55]. Since IgA is enhanced in metabolic disorders [56], the relative abundance of Bacteroides acidifaciens in Obese rats (Figure 3A) may be associated with the role of gut microbiota in the inflammatory signalling peculiar to obesity [57]. Otherwise, Ruminococcus flavefaciens is a cellulolytic bacterium present in the rumen of mammals, and it has been shown to be inhibited by probiotic supplementation (L. acidophilus NCFM) in young rats [58].
Our data indicate that the obesity phenotype of Obese rats may suppress this particular species. In samples collected after training, Streptococcus alactolyticus, Bifidobacterium animalis, Ruminococcus gnavus, Aggregatibacter pneumotropica and Bifidobacterium pseudolongum were all more abundant in Obese rats (Figure 3C-G); Streptococcus alactolyticus and Bifidobacterium animalis in particular were thus shown to be present in the gut of obese rats. In contrast to our data, Bifidobacterium is often associated with lean phenotypes [19]; however, our study showed that Bifidobacterium animalis was completely absent in non-obese Wistar rats and Hypertensive rats (Figure 3D). Regarding the relative abundance of Ruminococcus gnavus in Obese rats, this species is known to have an antibacterial effect and to protect the host from pathogens [59], and it has also been found to be reduced in colon cancer tissue [60]. However, this species was also shown to be enhanced in diverticulitis [61], which is commonly associated with obesity [62]. Bifidobacterium pseudolongum was another species almost exclusive to Obese rats (Figure 3G). Moreover, the content of this species was shown to be enhanced in diet-induced obese mice given probiotic administration, compared to a group of mice without probiotic supplementation [63]. In the present study, the MLSS was used to assess aerobic improvement as a result of four weeks of exercise training at moderate intensity [2]. Thus, after exercise training, a significant reduction in BLC was observed in all rat lineages (Figure 1B), which is associated with an improved aerobic capacity when compared to the higher BLC of untrained rats (Figure 1B). The OTUs from the Clostridiaceae and Bacteroidaceae families were found to be negatively correlated with BLC (Clostridiaceae, R = −0.82, p < 0.01; Bacteroidaceae, R = −0.73, p < 0.01), as were the OTUs from the Ruminococcus genus (R = −0.75, p < 0.01). In these three cases, the greatest relative abundance of OTUs was correlated with lower BLC, indicating that exercise training may be favorable to the proliferation of these specific OTUs (Figure 5A, B and D). Otherwise, OTUs from Oscillospira presented a positive correlation with BLC (R = 0.78, p < 0.01) (Figure 5C). The relative abundance of these OTUs is seen to increase when the lactate concentration goes over ~3.5 mmol.L−1. Since a lower BLC during the MLSS test was associated with a more trained status, this result may indicate that exercise training affects the abundance of the OTUs from this genus.

Conclusions
These findings suggest that exercise training is capable of altering gut microbiota at the genus level, with significant alterations in bacterial composition and diversity in obese, non-obese and hypertensive rats. Exercise was shown to enhance the relative abundance of three genera, with Lactobacillus being the most abundant, while another three genera (Streptococcus, Aggregatibacter and Sutterella) were shown to be more abundant before exercise training. Non-obese Wistar rats and spontaneously hypertensive rats were shown to share similar microbiota, unlike Obese rats. The rat lineages were also shown to harbor differential abundance at the species level, and six species were shown to be significantly more abundant in obese rats.
Two bacterial families (Clostridiaceae and Bacteroidaceae) and two genera (Oscillospira and Ruminococcus) were also shown to correlate significantly with blood lactate concentration; exercise appeared favorable to the two families and to the Ruminococcus genus, in opposition to Oscillospira. In conclusion, this is the first study to use controlled exercise parameters to assess gut bacterial community modification in three different animal lineages, which may reflect the potential of exercise to alter the gut microbial community. However, the effect of exercise on the acidity of the lumen or of the fecal samples was not measured, which limits our ability to establish a direct link between exercise and gut alteration by acidic induction. Thus, more studies are necessary to establish these modifications as possible therapeutic implications for obesity or hypertension treatment through the modulation of gut microbiota.
Patient-reported symptoms before adjuvant locoregional radiotherapy for breast cancer: triple-negative histology impacts the symptom burden

Background
Multimodal breast cancer treatment may cause side effects reflected in patient-reported outcomes and/or symptom scores at the time of treatment planning for adjuvant radiotherapy. In our department, all patients have been assessed with the Edmonton Symptom Assessment System (ESAS; a questionnaire addressing 11 major symptoms and wellbeing on a numeric scale of 0-10) at the time of treatment planning since 2016. In this study, we analyzed ESAS symptom severity before locoregional radiotherapy.

Patients and methods
A retrospective review of 132 patients treated between 2016 and 2021 (all comers in breast-conserving or post-mastectomy settings, different radiotherapy fractionations) was performed. All ESAS items and the ESAS point sum were analyzed to identify subgroups with a higher symptom burden and thus a need for additional care measures.

Results
The biggest patient-reported issues were fatigue, pain, and sleep problems. Patients with triple-negative breast cancer reported a higher symptom burden (mean 30 versus 20, p = 0.038). Patients assigned to adjuvant endocrine therapy had the lowest point sum (mean 18), followed by those on Her-2-targeting agents without chemotherapy (mean 19), those on chemotherapy with or without other drugs (mean 26), and those without systemic therapy (mean 41), p = 0.007. Those with pathologic complete response after neoadjuvant treatment had significantly lower anxiety scores (mean 0.7 versus 1.8, p = 0.03) and a trend towards lower depression scores, p = 0.09.

Conclusion
Different surgical strategies, age, and body mass index did not impact ESAS scores, while the type of adjuvant systemic therapy did. Previous neoadjuvant treatment and unfavorable tumor biology (triple-negative) emerged as important factors associated with symptom burden, albeit in different domains. ESAS data may facilitate identification of patients who should be considered for additional supportive measures to alleviate specific symptoms.

Introduction
During decades of successful adjuvant radiotherapy for breast cancer, reduction of acute and late side effects such as skin reactions and pneumonitis has been an important topic of research, aiming at a continuous improvement of the therapeutic ratio [1,2]. More recently, health-related quality of life (QoL) and patient-reported outcomes (PROs) have been evaluated as well, because physician-scored side effects are unable to mirror the complex patient experience during a typically multimodal sequence of different treatments [3]. Preceding treatment such as neoadjuvant drugs and definitive surgery (breast-conserving, or mastectomy with different approaches regarding axillary lymph nodes) may cause symptoms that are still present when patients move forward to the adjuvant phase, where new drugs and/or radiotherapy can impact QoL and PROs [4]. A wide range of symptoms, including but not limited to pain, fatigue, anxiety, and sleep disturbance, are reported by many patients before they start adjuvant radiotherapy. Many different instruments have been used to evaluate QoL and PROs [5]. Currently, mobile apps and other digital solutions are gaining increasing importance [6,7]. The Edmonton Symptom Assessment System (ESAS), originally developed in the palliative care setting [8], has occasionally been employed in curatively treated patients with breast cancer [9-14].
ESAS is a short, one-sheet questionnaire addressing major symptoms and wellbeing on a numeric scale of 0-10, which can easily be integrated into the routine workflow of radiation oncology departments [15]. The radiotherapy facility at Nordland Hospital started screening all palliatively irradiated patients with the ESAS tool in 2012, following the routine procedure that had already been in place for outpatients receiving systemic anticancer therapy for several years. From 2016, patients seen for treatment planning of adjuvant radiotherapy for breast cancer were also asked to provide a symptom assessment, which may facilitate the initiation of measures that contribute to better symptom control. Given that patients with more advanced disease who received more intense preceding treatment may be more likely to experience toxicity and related symptoms, we limited this initial study of ESAS data in our institution's breast cancer patients to those referred for locoregional radiotherapy. Potential correlations between patient-related parameters (e.g., age or body mass index (BMI) and comorbidity), disease-related parameters, and treatment parameters on one hand and pre-radiotherapy symptom severity on the other were assessed.

Materials and methods
We performed a retrospective analysis of 132 unselected, consecutive female patients who started locoregional adjuvant radiotherapy at our hospital during the period 2016-2021. Radiotherapy was administered after 3D planning, mostly with daily 2-Gy fractions with or without a sequential boost, in deep-inspiration breath-hold. Hybrid intensity-modulated techniques were employed on an individual basis. The same was true for hypofractionation (15 fractions of 2.67 Gy), e.g., in elderly patients. The ESAS tool was administered by a registered oncology nurse immediately before the radiation oncologist consultation and computed tomography imaging for treatment planning, approximately 1 week before radiotherapy. All medical records were available in the hospital's electronic patient record (EPR) system. Statistical analysis was performed with IBM SPSS Statistics 29 (IBM Corp., Armonk, NY, USA). In addition to the relevant ESAS items of interest (continuous variables expressed as mean with standard deviation [SD]), we analyzed a large number of categorical baseline variables (dichotomized present/absent or categorized by quartiles or treatment groups). Analysis of variance (ANOVA) tables were employed for inter-group comparisons. A p-value ≤ 0.05 was considered statistically significant.

Results
The mean age was 59 years (SD 13), range 24-88 years. The mean BMI was 27.7 kg/m² (SD 5), range 18-45 kg/m². Most patients had pT1 or pT2 node-positive (N1) disease. Table 1 shows additional baseline characteristics and Table 2 shows the ESAS data. The biggest patient-reported issues were fatigue, pain (while moving), and sleep problems. To reduce the likelihood of spurious findings from multiple testing, several items (appetite, nausea, constipation, dyspnea, dry mouth) were not carried forward to the individual analyses, unless special subgroups of interest appeared in the first round, in which all items were employed to create the point sum. In addition, the patient-reported item "overall wellbeing", which possibly integrates all the different aspects, was analyzed as a secondary outcome of interest.
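The ESAS point sum referred to throughout is simply the sum of the individual item scores, and the inter-group comparison is an ANOVA. A hedged sketch of the same computation in Python (SciPy rather than SPSS; the item list and all scores are invented for illustration, and the clinic's exact form may differ):

```python
from scipy import stats

# Illustrative set of 11 symptom items plus overall wellbeing, each 0-10
ITEMS = ["pain", "fatigue", "drowsiness", "nausea", "appetite", "dyspnea",
         "depression", "anxiety", "sleep", "dry_mouth", "constipation",
         "wellbeing"]

def esas_point_sum(scores):
    """Sum of all item scores; assumes a complete questionnaire."""
    return sum(scores[item] for item in ITEMS)

print(esas_point_sum({item: 2 for item in ITEMS}))  # 24

# Invented point sums for three systemic-therapy groups
endocrine = [12, 18, 20, 15, 22]
chemo     = [25, 30, 22, 28, 26]
none      = [38, 44, 40, 43, 41]

f, p = stats.f_oneway(endocrine, chemo, none)
print(f"F = {f:.1f}, p = {p:.4f}")  # small p -> group means differ
```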
All the parameters displayed in Table 1 were analyzed for correlations with the ESAS point sum. Only two significant correlations were identified: patients with triple-negative breast cancer reported a higher symptom burden (mean 30 versus 20, p = 0.038), and the point sum varied with the type of adjuvant systemic therapy. Patients assigned to adjuvant endocrine therapy (n = 72), which had typically not yet been started at the time of ESAS scoring, had the lowest point sum (mean 18), followed by those on Her-2-targeting agents without chemotherapy (n = 14, mean 19), those on chemotherapy with or without other drugs (n = 40, mean 26), and those without systemic therapy (n = 5, mean 41), p = 0.007. It should be noted that the patients without adjuvant systemic therapy had triple-negative breast cancer. However, some patients with triple-negative disease received adjuvant chemotherapy. After the primary analysis of the ESAS point sum, a second analysis was run with overall wellbeing as the endpoint, without identifying any significant correlations (all p-values > 0.1). Finally, we were interested in the prognostically favorable group with stage ypT0, i.e., pathologic complete response at surgery due to neoadjuvant treatment. These patients had significantly lower mean anxiety scores (0.7 versus 1.8, p = 0.03) and a trend towards lower depression scores (mean 0.6 versus 1.5, p = 0.09) compared to all other patients (T1-4 with or without neoadjuvant treatment; n = 19 versus 113).

Discussion
The present study addressed patient-reported symptoms in an unselected real-world cohort, which included patients with different disease characteristics and treatment pathways. At the time of treatment planning, 132 patients scored their symptoms with the ESAS tool. Canadian researchers have also published several studies on ESAS and curative breast cancer treatment. Chow et al. reported no statistical difference in ESAS scores between mastectomy and lumpectomy patients [9], a finding confirmed in our study. Neither re-resection nor the extent of surgical axillary treatment was associated with ESAS scores in the present cohort. Barbera et al. evaluated the impact of screening with ESAS on emergency department (ED) visit rates in women with breast cancer receiving adjuvant chemotherapy [10]. Interestingly, screening with ESAS was associated with decreased ED visits. Chow et al. performed a longitudinal radiotherapy study including both ESAS and QoL [11]. Of the ESAS symptoms identified as significant predictors of QoL, pain, fatigue, and anxiety correlated with overall wellbeing at all timepoints. Although such data provide reasons to believe that overall wellbeing to some degree integrates many of the individual items, our own data failed to show that overall wellbeing is of major importance as a global symptom-burden criterion. We would rather advocate employing the ESAS point sum when researchers try to summarize the symptom burden. While the latter may facilitate statistical comparisons, clinicians need to look at the complete picture, trying to identify the individual problems of each patient in order to increase care-associated satisfaction [16]. At our department, ESAS assessment during radiotherapy and follow-up has not been performed. Lam et al.
reported longitudinal data showing that patient-reported pain associated with breast irradiation peaked 1 week after treatment completion [12]. Younger patients (40-49 or 50-59 years of age) reported significantly more overall pain and breast pain than patients ≥ 60 years of age. Our own study did not specifically address different locations of pain, but rather collected overall pain data only. In principle, the latter may be influenced by comorbidities and age as well as by medications. It should also be noted that the Canadian studies were larger than the present one. Behroozian et al. compared breast cancer patients with and without regional nodal irradiation and included 781 patients in the longitudinal analysis [13]. Baseline symptom reporting was similar between cohorts. Across all timepoints, differences in outcomes between cohorts were minimal, except for lack of appetite (p = 0.03), which was significantly aggravated in patients treated with regional nodal irradiation. A major finding of the present study was the impact of adjuvant systemic treatment, which is connected to tumor type, because specific drugs are restricted to patients with Her-2-positive disease, while triple-negative histology restricts adjuvant options in a broader sense, especially in the earlier years of this study, before capecitabine was introduced. Akkila et al. studied patients treated between February 2018 and September 2020 [14]. They compared baseline scores between adjuvant and neoadjuvant chemotherapy patients (n = 338). Comparison of baseline ESAS scores revealed that patients who received adjuvant chemotherapy were more likely to report higher scores, reflecting a higher symptom burden, than patients receiving neoadjuvant chemotherapy, including fatigue (p = 0.005), lack of appetite (p = 0.0005), and dyspnea (p < 0.0001). So far, no other studies have examined all the baseline and treatment parameters we were able to include. Our results suggest that patients with triple-negative disease may require particular attention. Previous research showed that women with triple-negative tumors had worse QoL than those with non-triple-negative tumors [17]. Recently, a different study with less well-studied endpoints showed that chemotherapy, triple-negative tumor, reconstructive surgery, number of outpatient visits, and income were associated with prolonged sick leave [18]. The present study also showed a certain impact of pathologic complete response at surgery due to neoadjuvant treatment on some ESAS items. These highly chemosensitive patients had significantly lower mean anxiety scores (p = 0.03) and a trend towards lower depression scores (p = 0.09). Awareness of their excellent prognosis would explain why these patients were less worried in the adjuvant setting. Interestingly, triple-negative patients did not come close to reporting significantly higher anxiety or depression scores than their non-triple-negative counterparts. This fact would argue against prognostic worries as a reason for a mainly fatigue- and pain-driven higher ESAS sum. Additional research is needed to fully elucidate our findings.
When interpreting the present results, the following limitations must be acknowledged: our study cohort comprised Norwegian-speaking patients covered by the national publicly funded health care system. In a more diverse setting, socioeconomic factors may interfere with QoL and PROs. Since the study size, and consequently the statistical power, was limited, we may have overlooked additional correlations that a larger study could have revealed. Longitudinal data from various timepoints during radiotherapy may provide additional information for the design of comprehensive, individualized supportive measures such as physical exercise, physiotherapy, psycho-oncology referral, and rehabilitation [19,20], aiming at high rates of treatment completion, better QoL and role functioning, and minimal interference of breast cancer treatment with survivors' daily life.

Ethical standards
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This research project was carried out according to our institutions' guidelines and with permission to access the patients' data.

Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Funding
Open access funding provided by UiT The Arctic University of Norway (incl. University Hospital of North Norway).

Declarations
Conflict of interest: C. Nieder, S.K. Johnsen, A.M. Winther, and B. Mannsåker declare that they have no competing interests.

[Table 1 caption: Baseline characteristics before adjuvant radiotherapy in 132 female patients. a NATx: neoadjuvant systemic therapy; b missing information in some cases.]
[Table 2 caption: Edmonton Symptom Assessment System (ESAS) before adjuvant radiotherapy in 132 female patients.]
Impact of light-curing time and aging on dentin bond strength of methacrylate- and silorane-based restorative systems

Aim: To evaluate the impact of different light-curing times on the dentin microtensile bond strength of two restorative systems after 24 h and 6 months of water storage.
Methods: Standardized Class II preparations were performed in 56 freshly extracted human molars (n = 7), restored with methacrylate- or silorane-based restorative systems, and light-cured using a light-emitting diode at 1390 mW/cm² for the manufacturers' recommended time or double this time. After storage for 24 h at 37°C, the teeth were sectioned to yield a series of 0.8-mm-thick slices. Each slab was trimmed into an hourglass shape of approximately 0.64 mm² area at the gingival dentin-resin interface. Specimens were tested in a universal testing machine at a crosshead speed of 0.5 mm/min until failure, after 24 h and 6 months of storage. Data were statistically analyzed by three-way ANOVA and Tukey's test (α = 0.05).
Results: The highest bond strength values were recorded for the groups restored with the methacrylate system (p < 0.001) as well as for the extended light-curing time (p = 0.0034). There was no statistically significant difference between 24 h and 6 months of storage on bond strength (p > 0.05).
Conclusions: Bond strength was influenced by the material and the light-curing time, but the 6-month storage did not affect the bond strength of the restorations.

Introduction
Polymerization of methacrylate-based composites is characterized by volumetric shrinkage 1. These photo-activated restorative materials exhibit a significant proportion of unreacted methacrylate groups due to incomplete conversion of carbon double bonds 2. However, the higher the degree of conversion (DC), the higher the shrinkage strain 3. Polymerization stress may result in cuspal deflection 4, de-bonding at the composite-dentin interface, post-operative sensitivity 5,6, microleakage 5, secondary caries formation, marginal staining, and restoration and dental fractures 6, all reducing the longevity of the restoration. Recently, a low-shrinkage monomer, termed silorane, was developed from the reaction of oxirane and siloxane molecules 4,7. Silorane presents a cationic ring-opening polymerization mechanism instead of the free-radical cure of methacrylate monomers 4, and an extended light-curing time is necessary to form the cations that initiate the polymerization reaction 1,4. It exhibits lower polymerization shrinkage 4,8 and mechanical properties comparable to those of methacrylate dental composites 7.

[Table 1 fragment: Primer (Lot. 00955A): MDP, HEMA, water, CQ, hydrophilic dimethacrylate. Bond (Lot. 01416A): MDP, Bis-GMA, HEMA, CQ, hydrophobic dimethacrylate, N,N-diethanol p-toluidine, colloidal silica.]

In deep cavities, the irradiance that reaches the restorative material surface is decreased by the distance between the light guide tip of the curing unit and the material during the restorative procedure, reducing the degree of conversion and/or leading to the formation of polymers with more linear structures, which present inferior physical properties and weaken the restoration 9.
Improvement of the physical properties of resin-based materials with an increase of the curing time available for the conversion of monomers to polymers has been reported 3,10,11. However, few studies have assessed the bond strength of this new restorative system with different light-curing times and after aging. Therefore, the objective of this study was to evaluate the influence of different restorative systems and curing times on the microtensile bond strength (microTBS) after 24 h and 6 months. The research hypotheses tested were that: (1) there would be no difference between restorative systems, (2) extended light-curing time would increase bond strength, and (3) aging would decrease microTBS values.

Material and methods
This study was approved by the Institutional Review Board under protocol number 031/2010. Fifty-six freshly extracted non-carious, unrestored human third molars were collected and stored in 0.1% thymol solution at 4°C. The teeth were scaled, cleaned, stored in distilled water at 4°C, and used within 3 months after extraction. The tooth roots were embedded in polystyrene resin (Piraglass, Piracicaba, SP, Brazil) to facilitate handling, and the occlusal surfaces were ground wet on 320-grit SiC paper in a polishing machine (APL-4, Arotec, São Paulo, SP, Brazil) until the distance between the occlusal surface and the cemento-enamel junction was 5 mm. Standardized Class II vertical slot preparations were performed on one of the proximal surfaces of the molars with a regular-grit cylindrical diamond bur (no. 3100; KG Sorensen, Barueri, SP, Brazil) using a high-speed handpiece with water spray coolant. Cavity dimensions were 4 mm wide, 6 mm high (1 mm below the cemento-enamel junction), and 2 mm in axial depth (from the proximal surface to the axial wall). A custom-made preparation device allowed standardization of the preparation dimensions. The margins were not beveled, and burs were replaced after five preparations. Table 1 shows information about the materials used in this study. Methacrylate- [Clearfil SE Bond (Kuraray Medical Inc., Okayama, Japan) + Filtek Z250 (3M ESPE, St. Paul, MN, USA)] and silorane-based [Filtek LS system (3M ESPE)] restorative systems were used in the restorative procedures. The cavities were sequentially randomized into 8 groups (n = 7) (Table 2), and the following restorative protocols were carried out: for the methacrylate groups (1, 2, 5, and 6), Clearfil SE Bond primer (bottle A) was vigorously scrubbed with applicator brushes over the entire cavity for 20 s, a mild air stream was applied for solvent evaporation, and the bonding agent (bottle B) was applied, gently air-thinned, and light-cured for 10 s (G1 and G5) or 20 s (G2 and G6). For the silorane groups (3, 4, 7, and 8), Filtek LS primer (bottle 1) was actively applied for 15 s, gently air-thinned, and light-cured for 10 s (G3 and G7) or 20 s (G4 and G8), and the bonding agent (bottle 2) was applied, thinned with a gentle air stream, and light-cured for 10 or 20 s. After the bonding procedures, individual transparent matrices were placed to allow adequate filling of the proximal preparation. Three approximately 2-mm-thick horizontal composite resin increments were inserted, measured with a millimeter periodontal probe with Williams' markings (Golgran, São Paulo, SP, Brazil) positioned parallel to the tooth proximal surface, and light-cured for 20 or 40 s (Table 2).
The resin materials were light-cured at the occlusal surface using a second-generation light-emitting diode (LED) unit (Bluephase 16i; Ivoclar Vivadent, Amherst, NY, USA) at an output irradiance of 1390 mW/cm² (at 0 mm). The optical power (mW) delivered by the device was measured with a power meter (Ophir Optronics, Har Hotzvim, Jerusalem, Israel). The tip diameter, measured with a digital caliper (Mitutoyo Sul Americana, Suzano, SP, Brazil), was 7 mm, and the tip area was determined in cm². Irradiance (mW/cm²) was calculated by dividing the light power by the tip area. The irradiances were also calculated by positioning a spacer device (with heights of 4 and 6 mm) between the light guide tip of the curing unit and the surface of the power meter, and beneath 2-mm-thick resin disks of both composites made using a standardized Teflon matrix (simulating the first increment) at 4 mm from the top surface of the resin disk. The distance between the light guide tip and the bottom of the cavity was 6 mm, with an irradiance of 610 mW/cm², when the adhesive systems were cured. The composite increment was approximately 2 mm thick, totaling 990 mW/cm² on the top surface of the first composite increment at a 4-mm distance between the light guide tip and that surface. The irradiance on the bottom surface at 6 mm (beneath both 2-mm-thick composite resin disks) was 380 ± 5 mW/cm².

[Table 3 caption: Microtensile bond strength values [MPa (SD)] according to restorative system, aging, and curing time. Distinct letters (capital in the rows and lowercase in the columns) are statistically different (p < 0.05). * Differs from the silorane restorative system (p < 0.001). The longer light-curing time promoted greater bond strength (p = 0.0034). There was no statistically significant difference for aging (p > 0.05).]

After the restorative procedures, specimens were stored in distilled water at 37°C for 24 h. After this period, the proximal surface was finished and polished with Al2O3 abrasive discs (Sof-Lex Pop-On, 3M ESPE), from coarse to superfine, for 30 s with a rotating handpiece at approximately 10,000 rpm. Then, the restored teeth were serially sectioned to yield 3 series of 0.8-mm-thick vertical slices using a diamond saw (Isomet 1000; Buehler, Lake Bluff, IL, USA) at 300 rpm. Each slab was trimmed into an hourglass shape of approximately 0.64 mm² area at the gingival resin-dentin interface using a super-fine diamond bur (no. 1090FF; KG Sorensen). In the aged groups (G5-G8, Table 2), the hourglasses were stored in distilled water at 37°C for 6 months, with the water changed weekly. All specimens had direct exposure to the storage fluid 12. This procedure is commonly used and considered a type of accelerated aging 13.
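The irradiance bookkeeping above is a power-over-area division. The arithmetic below is a hedged sketch (helper names are ours); it back-solves the optical power consistent with the reported 7-mm tip and 1390 mW/cm², a value the paper does not state explicitly.

```python
import math

def irradiance_mw_per_cm2(optical_power_mw, tip_diameter_mm):
    """Irradiance = optical power / light-guide tip area."""
    radius_cm = (tip_diameter_mm / 10) / 2
    tip_area_cm2 = math.pi * radius_cm ** 2
    return optical_power_mw / tip_area_cm2

# 7-mm tip reported in the text; the optical power that would reproduce
# the stated 1390 mW/cm2 at 0 mm is about 535 mW:
tip_area = math.pi * 0.35 ** 2               # ~0.385 cm2
print(round(1390 * tip_area))                # ~535 mW optical power
print(round(irradiance_mw_per_cm2(535, 7)))  # ~1390 mW/cm2, consistent
```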
Twenty-four hours or 6 months after water storage at 37°C, the cross-sectional area of each hourglass was measured with a digital caliper to the nearest 0.01 mm and recorded for the calculation of the dentin bond strength. Each bonded slab was individually attached to a flat-grip Geraldeli device for microtensile testing with cyanoacrylate instant adhesive (Super Bonder Gel; Loctite-Henkel, São Paulo, SP, Brazil) and subjected to a tensile force in a universal testing machine (DL 500; EMIC, São José dos Pinhais, PR, Brazil) at a crosshead speed of 0.5 mm/min until failure. The number of slabs that de-bonded prematurely during specimen preparation was recorded, but no bond strength value was attributed to them for statistical analysis 14. The bond strength values obtained from the 3 slices of each tooth were used to calculate the microTBS of the specimen. Means and standard deviations were calculated and expressed in megapascals (MPa). After the microTBS test, the dentin side of the fractured specimens was dried with silica in an incubator at 37°C for 48 h, mounted on aluminum stubs, and gold sputter-coated under high vacuum (SCD 050; BAL-TEC AG, Balzers, Liechtenstein). A scanning electron microscope (SEM; JSM 5600 LV, JEOL, Tokyo, Japan) was used to evaluate the bond failure modes of the fractured specimens on the dentin side at magnifications between 70 and 1000X, classified as follows: (1) cohesive in dentin, (2) adhesive, (3) cohesive in the composite, and (4) mixed. The microTBS data, which satisfied the normality presuppositions, were analyzed by three-way ANOVA and Tukey's test at a 0.05 level of significance. The main factors were restorative system, curing time, and storage time.

Results
The methacrylate-based restorative system showed higher dentin bond strength than the silorane-based material (p < 0.001), the extended light-curing time resulted in higher microTBS values (p = 0.0034), and there was no statistically significant difference between 24 h and 6 months (p > 0.05) (Table 3). The descriptive analysis of the failure modes and the number of pre-testing failures for each experimental group are shown in Table 4.

Discussion
The first hypothesis tested was rejected, since the methacrylate materials presented greater dentin bond strength than the low-shrinkage restorative system (Table 3), in accordance with a previous study that showed higher microTBS for a methacrylate than for a silorane composite regardless of placement technique 15. Self-etch adhesives dispense with the rinsing and drying steps, maintaining ideal dentinal humidity and reducing technique sensitivity 13. A two-step self-etch adhesive consists of a self-etch primer with acid monomers that demineralize dentin and simultaneously penetrate monomers into the dentin subsurface, followed by the application of a solvent-free hydrophobic bonding agent, which provides better mechanical properties 16.
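Microtensile bond strength is the failure load divided by the bonded cross-sectional area, and 1 N/mm² equals 1 MPa, so the per-tooth value described above is a mean of three such quotients. A sketch with invented readings:

```python
def micro_tbs_mpa(failure_load_n, width_mm, thickness_mm):
    """Bond strength (MPa) = failure load (N) / bonded area (mm^2)."""
    return failure_load_n / (width_mm * thickness_mm)

# Invented failure loads (N) for the 3 slices of one tooth, each near
# the nominal 0.8 mm x 0.8 mm = 0.64 mm2 interface
slices = [(28.2, 0.80, 0.80), (25.0, 0.79, 0.81), (30.1, 0.81, 0.80)]
values = [micro_tbs_mpa(*s) for s in slices]
tooth_mean = sum(values) / len(values)   # one microTBS value per tooth
print([round(v, 1) for v in values], round(tooth_mean, 1))
```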
All-in-one adhesives contain a mixture of acid, hydrophilic, and hydrophobic monomers, water, and organic solvents in a single bottle 16. This type of adhesive is more hydrophilic, allowing deeper water penetration as the water content increases due to adhesive acidification in the presence of water, interfering with polymerization; this leads to uncured acid and aggressive monomers that continue etching the dentin, negatively affecting the bonding interface 16,17. Most one-step self-etch adhesives are severely affected by hydrolytic degradation 18. However, longevity over time has been related not to the number of steps of the bonding systems, but to their chemical compositions 19. Clearfil SE Bond consists of a hydrophilic self-etch primer and a hydrophobic bonding agent. This viscous hydrophobic resin-coating layer improves the mechanical properties and increases the longevity of the bonding interface 16. The Filtek LS low-shrinkage composite resin has a dedicated self-etching adhesive. Although the LS Adhesive System is classified by the manufacturer as a two-step self-etch adhesive, the hydrophilic LS primer is applied first and then light-cured, forming the hybrid layer 1. Thus, the bifunctional hydrophobic monomer (phosphorylated methacrylate) of the LS bond, applied after the cured primer, acts as a low-viscosity composite connection liner between methacrylate monomers (by reaction with the acrylate group) and the silorane monomer (by reaction of the phosphate group with oxirane) 8. Therefore, the LS primer behaves as a one-step self-etch adhesive, which could explain the lower bond strength values 1. The mild self-etch primer of Clearfil SE Bond has a pH of 2.0 16 and contains the functional acid monomer MDP, which adheres to tooth hydroxyapatite most readily and intensely 20. This stable chemical bond is left around the collagen fibrils within the hybrid layer 21. The self-etch LS primer has a pH of 2.7 6 and is classified as ultra-mild 1,6,21. Transmission electron microscopy (TEM) of the LS adhesive shows a thin nano-interaction zone, which is probably the combination of resin impregnation within the smear layer and actual hybridized dentin 1,6. Smear debris interferes with the interaction between mild and ultra-mild self-etching adhesives and the dentin tissue 22. The bonding effectiveness of an ultra-mild one-step self-etch adhesive is largely affected by the properties of the produced smear layer, because it interacts only superficially with smear layer-covered dentin 23. It has been reported that two-step self-etch adhesive systems perform better in bonding ability than one-step self-etch adhesives 13,19,21.

[Table 4 caption: Fracture pattern analysis, failure modes (%).]

A longer light-curing time increased the microTBS of the tested restorative systems (Table 3); therefore, the second hypothesis was validated. It is known that a distance increase of only 1 mm between the light guide tip and the restorative material decreases the light intensity by approximately 10% 24. Several studies have related the improvement of the physical properties of resin-based materials to an increase of the curing time, due to the higher DC 3,10,11. Bond strength correlates significantly with total curing time 25 and with a greater DC 26. Special care should be taken when performing the polymerization of resinous materials with lower-light-power curing units in deep cavities.
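The roughly 10% intensity loss per millimeter cited from reference 24 behaves like a geometric decay if treated as a constant rate, an approximation adopted here only for illustration:

```python
def irradiance_at_distance(i0_mw_cm2, distance_mm, loss_per_mm=0.10):
    """Apply a constant ~10% intensity loss per mm of tip-to-surface gap."""
    return i0_mw_cm2 * (1 - loss_per_mm) ** distance_mm

for d in (0, 4, 6):
    print(d, "mm:", round(irradiance_at_distance(1390, d)))
# 0 mm: 1390, 4 mm: ~912, 6 mm: ~739 -- the same downward trend as the
# measured 990 and 610 mW/cm2, though the simple rule is only approximate
```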
The onset of the cationic ring-opening polymerization of the silorane is slower because sufficient cations must first be formed to initiate polymerization; thus, a longer light-curing time is required compared with the radical cure of methacrylate monomer molecules into the polymer network 1,4. The curing device used in this investigation is a single-peak, second-generation LED. This unit presents a high optical power and a spectrum between 410 and 530 nm, with a peak at 454 nm that includes the maximum energy absorption peak of camphorquinone (468 nm) 27, the photo-initiator included in all tested resin-based materials.

The light-curing time recommended for the silorane composite using quartz-tungsten-halogen (QTH) units with irradiance between 500 and 1400 mW/cm² is 40 s, as it is for LEDs with output between 500 and 1000 mW/cm². For LEDs with irradiance between 1000 and 1500 mW/cm², a light exposure time of 20 s is advised. An irradiation of 10 s is recommended to cure the primer and bond of the LS Adhesive, with no minimum irradiance specified. In this study, an LED with an irradiance of 1390 mW/cm² was used, indicating 20 and 10 s of light polymerization for the composite and adhesive, respectively. However, the irradiance reaching the surface of the first composite increment was 990 mW/cm² at 4 mm from the tip, and that reaching the adhesive system (applied at the cavity bottom) was 610 mW/cm² at 6 mm from the light guide tip. Furthermore, at a 4 mm distance from the light guide tip to the top surface of the composite, when curing beneath the restorative material, the irradiance at the bottom surface was 380 mW/cm². Bond strength is influenced by monomer conversion 26; thus, the extended curing time may have increased the DC of the adhesives 11 and/or composites and improved the dentin microTBS.

The third hypothesis was rejected because the long-term water storage did not affect the bond interface of the restorations (Table 3). Interface components can be degraded by hydrolysis, and water may infiltrate, resulting in plasticization of the polymeric matrix by swelling and reduction of the frictional forces between the polymer chains, reducing the mechanical properties and, consequently, the integrity of the bonding interface 13. However, 6 months of water storage did not decrease the microTBS values, a result similar to that reported elsewhere 5. On the other hand, other studies showed a significant decrease in bond strength after shorter periods (within 3 months) 28, and even after longer periods (up to 4 years) 25.

In contact with the tooth, the MDP-containing Clearfil SE Bond adhesive system forms an MDP-calcium salt that hardly dissolves in water; therefore, the bond between MDP and hydroxyapatite should be stable 20. This chemical interaction improves the resistance to hydrolytic breakdown and de-bonding stress, keeping the restoration margins sealed for longer periods 21. Moreover, since the primer application is followed by a hydrophobic bonding agent containing mainly cross-linking monomers, this bonding agent provides better mechanical properties to Clearfil SE Bond 16. This fact, combined with the methacrylate Filtek Z250 composite and the high power density, could account for the long-term stability of the bond interface.
Single-bottle adhesives such as the one-step self-etch LS primer may act as permeable membranes and be more susceptible to aging 29. Moreover, these adhesives are strongly influenced by the light intensity of the photo-curing device 16. Thus, the second viscous hydrophobic coating layer (the LS bond), applied over the previously cured primer, seems to have reduced the vulnerability to water sorption resulting from the high HEMA content of the LS primer 6 after long-term water storage. Additionally, the active application of one-step self-etch adhesives has been related to improved bonding performance 30, along with the increased hydrophobicity of the silorane composite resin due to the presence of siloxane species 7. The high irradiance could also contribute to the bond longevity of this new restorative system.

The methacrylate restorative system showed more adhesive failures, while the silorane system exhibited more mixed failures (Table 4). Most silorane fractures occurred between the bonding agent and the composite, with part of the bonding agent also remaining on the dentin surface, perhaps due to the lower adhesion compared with that between the methacrylate-based materials. The longer irradiation time increased the occurrence of mixed failures and decreased the adhesive failures for the methacrylate restorative system, likely because of greater monomer conversion. Water storage increased the percentage of adhesive failures for both restorative systems, probably because of swelling of the polymer network and reduction of the frictional forces between the polymeric chains.

The quality and uniformity of the polymerization reaction is an important parameter that affects the conversion of the monomers into structured polymers and therefore improves the physical properties and clinical performance; however, this process depends on various factors, such as the design and size of the light guide tip, the distance of the light guide tip from the material surface, power density, exposure duration, shade and opacity of the composite, increment thickness, the materials' composition, and others 9. Thus, manufacturers should provide information such as the minimum irradiance and light-curing time required for optimal polymerization of their adhesive systems 27, and make clear in their instructions for use that the minimum irradiance indicated is the one that must reach the surface of the material, not the optical output power of the light-curing device. Higher irradiance is necessary to adequately cure photoactivated materials in deep cavities and contributes to improving the longevity of adhesive dental restorations.

The longer light-curing time improved the bond strength of both restorative materials, and the groups restored with the LS restorative system showed the lowest dentin microTBS values; however, long-term storage for 6 months in distilled water did not affect the bond durability of the tested restorations.

*Manufacturer's recommendation or double the recommended time: adhesive system (10 or 20 s) and composite resin (20 or 40 s).
Epibenthic Harmful Marine Dinoflagellates from Fuerteventura (Canary Islands), with Special Reference to the Ciguatoxin-Producing Gambierdiscus

The relationship between the ciguatoxin-producing benthic dinoflagellate Gambierdiscus and other epibenthic dinoflagellates in the Canary Islands was examined in macrophyte samples obtained from two locations of Fuerteventura Island in September 2016. The genera examined included Coolia, Gambierdiscus, Ostreopsis, Prorocentrum, Scrippsiella, Sinophysis, and Vulcanodinium. Distinct assemblages among these benthic dinoflagellates and preferential macroalgal communities were observed. Vulcanodinium showed the highest cell concentrations (81.6 × 10³ cells g⁻¹ wet weight of macrophyte), followed by Ostreopsis (25.2 × 10³ cells g⁻¹ wet weight of macrophyte). These two genera were most represented at a station (Playitas) characterized by turfy Rhodophytes. In turn, Gambierdiscus (3.8 × 10³ cells g⁻¹ wet weight of macrophyte) and Sinophysis (2.6 × 10³ cells g⁻¹ wet weight of macrophyte) were mostly found at a second station (Cotillo) dominated by Rhodophytes and Phaeophytes. The influence of the macrophyte's thallus architecture on the abundance of dinoflagellates was observed: filamentous morphotypes, followed by macroalgae arranged in entangled clumps, presented a greater richness of epiphytic dinoflagellates. Morphometric analysis was applied to Gambierdiscus specimens. By far, G. excentricus was the most abundant species, and G. australes occupied the second place. The toxigenic potential of some of the genera/species distributed in the benthic habitats of the Canary coasts, together with the already known presence of ciguatera in the region, merits future studies on the possible transmission of their toxins in the marine food chain.

Introduction

Ciguatera fish poisoning (CFP), the most important food-borne illness caused by fish consumption in the world, is produced by ciguatoxins (CTXs), which are suggested to be transferred from epiphytic dinoflagellates of the Gambierdiscus and Fukuyoa genera into the food web [1,2]. The incidence of CFP in tropical and subtropical areas has been extensively reported since antiquity [3], but a spread of both CFP cases and Gambierdiscus and Fukuyoa populations into more temperate regions has been reported in the last decade. In Europe, where CTXs are considered an emerging threat, the incidence of ciguatera episodes was first recorded in 2004 in the Canary Islands and Madeira [4][5][6][7]. There is awareness that global warming can cause the spread of CTX-producing dinoflagellates into higher latitudes not currently affected by CFP [8,9]. This concern has prompted the Intergovernmental Panel on Climate Change to warn about the effect of global warming on the increase of CFP occurrence [10]. In fact, recent studies report Gambierdiscus and Fukuyoa species in the more temperate waters of Japan, the Mediterranean Sea, the Canary Islands, and along the eastern coasts of North and South America [11][12][13][14][15][16]. Some authors have described that ciguatera incidence and the prevalence of Gambierdiscus and Fukuyoa cells are not always well correlated [17][18][19][20]. Different toxicities among Gambierdiscus species and changes in their interannual relative abundances were suggested to cause those differences between CFP outbreaks and Gambierdiscus detection [18]. Subsequent studies demonstrated high variability in the toxic potential among species. Higher toxicity has been reported, for example, for G.
polynesiensis in the Pacific [21][22][23], and for G. excentricus in the Caribbean Sea and the Canary Islands, in comparison with other species in the same regions [23,24]. This emphasizes the need for implementing adequate methodologies for the unequivocal identification of species in these genera, as well as for their quantification. The difficult morphological differentiation among species of Gambierdiscus and, consequently, the problem of obtaining species-specific cell counts by traditional microscopy-based methodologies have been abundantly mentioned in the literature [25,26]. Therefore, the unequivocal identification of Gambierdiscus cells relies in most occasions on molecular techniques, mainly on rDNA sequences of cultures. Furthermore, semi-quantitative techniques (qPCR) have been described for most species and ribotypes of Gambierdiscus [27][28][29]. However, such methods cannot always be implemented, whereas light microscopy, despite its limitations, can still provide useful information. In the present study, Gambierdiscus cells were morphologically characterized to determine to what extent their differences in morphology and size are useful for their specific identification. The methodology used was based on the parameters described by Bravo et al. [26] for the five species found in the Canary Islands so far, excepting the very recently reported G. belizeanus by Tudó et al. [30]. These morphological traits include the cell depth measurement and the shapes of the second apical (2′) and second antapical (2′′′′) plates, as well as the position of the Po plate.

For some time, while the responsible agent of ciguatera was unknown, other benthic dinoflagellates apart from Gambierdiscus were associated with this syndrome, like Ostreopsis and Prorocentrum. As was later discovered, this was due to the potential production of palytoxin and palytoxin-like compounds in Ostreopsis, and of okadaic acid, dinophysistoxins-1, 2, and 4, and prorocentrolide in several benthic species of Prorocentrum like P. lima (see references in [31]). P. hoffmannianum has been isolated from benthic communities in the Canary Islands and confirmed to produce okadaic acid and three analogs [32]. Furthermore, the single species of Vulcanodinium described so far, V. rugosum, has been described to synthesize potent bioactive compounds like pinnatoxins and portimine, though it has never been associated with human poisonings [33,34]. In consequence, although Gambierdiscus and Fukuyoa are the vector species for CTXs in fish and CFP outbreaks, other species of those dinoflagellates cannot be discarded as causes of some kind of harmful episode.

Studies in ciguatera-endemic areas have described the effects of the structural complexity of coral reefs on benthic harmful dinoflagellate communities. Thus, the different environmental driving factors that govern each community influence the benthic dinoflagellate assemblages [35,36]. Moreover, macrophyte host preferences as well as epiphytic dinoflagellate associations have been described in some regions [37,38]. The results, however, are sometimes contradictory due to the difficulty of understanding such complex benthic habitats [38].
The spatial distribution patterns of macrophytes depending on factors such as temperature, lighting, and wave exposure have been extensively studied on the coasts of the Canary Islands [39][40][41], and the composition and spatial distribution of marine macrophytes on the littoral of Fuerteventura exhibit a higher proportion of warm-water species than on the rest of the Islands of this archipelago [42][43][44]; it is not known, however, whether epiphytic harmful dinoflagellates are preferentially distributed on some of them. The specific genera examined in the present study comprised Gambierdiscus, Prorocentrum, Coolia, Sinophysis, Ostreopsis, Vulcanodinium, and Scrippsiella. All of them were surveyed from macrophytes from two locations in Fuerteventura Island with different macrophyte communities. The objectives of this study were: (1) to determine whether there are any preferential associations of benthic harmful dinoflagellates; (2) to advance the knowledge of the relationships of benthic dinoflagellate assemblages with different macrophyte communities; and (3) to know the most abundant species of Gambierdiscus in the benthic macrophyte communities examined, for which a morphological study was carried out.

Study Sites

Cotillo is located in the NW of Fuerteventura Island (Canary archipelago, Figure 1A,B). At this station (28°41′18.34″ N/14°0′48.14″ W), macrophytes were sampled in a "charco" (a type of coastal pool of medium size, quite abundant in the Canary Islands) of approximately 4 × 10³ m², located to the left of Marfolín beach, on an extensive rocky platform that extends just north of the town of El Cotillo. This pool (maximum depth of 2 m at low tide and 3 m at high tide) has a mostly rocky bottom alternating with small sandy spaces. This type of pool constitutes a particular environment with great environmental variability in a limited space that displays a high biodiversity. In Cotillo the rocky platform extends in depth up to more than one kilometer offshore, reaching bathymetric levels greater than 20 m. Samples were taken from the "charco" by snorkeling during low tide down to 1.5 m deep and by scuba diving down to 6 m deep.

Playitas is located on the eastern side of Fuerteventura Island (Figure 1A,B). The station (28°13′39.80″ N/13°59′1.69″ W) was situated just to the left of the port in the town of Las Playitas. The platform extends with a gentle slope into the infralittoral, where small tidal ponds were sampled on foot. Then, samples were taken by snorkeling down to 2-3 m and by scuba diving down to 6 m deep.
Field Sampling and Cell Enumeration

Sixty-seven samples of macrophytes were collected at the two stations in September 2016 (16th-17th in Playitas and 18th-19th in Cotillo) (Table 1, Figure 1). The macrophyte samples were carefully collected with the surrounding water in plastic bags, placed in a plastic bottle, and shaken to detach the epiphytes. Afterwards, the gross materials were removed through a 300 µm opening nylon mesh, and the remaining seawater was filtered again on a 20 µm nylon mesh to concentrate the cells. Aliquots from these samples were fixed in situ with formaldehyde for identification and enumeration in the laboratory. Formaldehyde-fixed epiphyte samples were stained with Fluorescent Brightener 28 (Sigma, St Louis, MO, USA) [45] for dinoflagellate identification and counted under UV light using an Axiovert 125 epifluorescence inverted microscope (Carl Zeiss AG, Germany) at 400× magnification. Quantitative data were obtained for the following genera of benthic dinoflagellates: Gambierdiscus, Prorocentrum, Coolia, Sinophysis, Ostreopsis, Vulcanodinium, and Scrippsiella. Cell abundance was expressed as cells per gram wet weight of host macrophyte (abbreviated as cells g⁻¹ in the results section). For this purpose, fresh macrophytes were weighed after being manually drained just after collection.
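A hedged sketch of the abundance normalization just described: cells counted in a settled aliquot are scaled to the whole detachment volume and divided by the drained wet weight of the host macrophyte. All numbers below are hypothetical, chosen only to illustrate the arithmetic.

counted_cells = 412        # cells counted in the settled aliquot (hypothetical)
aliquot_ml = 10.0          # volume of the counted aliquot
sample_volume_ml = 250.0   # total volume after shaking and filtering
macrophyte_wet_g = 8.4     # manually drained wet weight of the host

cells_total = counted_cells * (sample_volume_ml / aliquot_ml)
abundance = cells_total / macrophyte_wet_g
print(f"{abundance:.0f} cells per g wet weight")   # -> 1226 cells/g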
Macrophyte Sampling

The macrophyte communities of the two sampling stations presented remarkable differences in species composition. During sampling, the most representative species (or groups of species) of each station were collected. Although macrophyte composition was not the target of the study, different macrophyte communities in Cotillo and Playitas were clearly evidenced (Table 1). The intertidal zone of Playitas was characterized by red algae (Rhodophyceae) belonging to the orders Ceramiales, Corallinales, and Gigartinales, which formed a thick turf (named as turf in Table 1). There, the most representative species were Hypnea spinella, Jania adhaerens, Centroceras gasparrinii, Amphiroa fragilissima, and Palisada perforata. In addition, and frequently as an epiphyte, the filamentous cyanobacterium Blennothrix lyngbyacea was also found. Among the brown algae (Phaeophyceae), much less abundant in the intertidal zone, erect foliose species of Dictyotales such as Padina pavonica and Stypopodium zonale were collected (Table 1). Other species of Ceramiales, such as the erect, filamentous, and profusely branched Lophocladia trichoclados and Cottoniella fusiformis, as well as ribbon-like Dictyotales, such as Canistrocarpus cervicornis, Dictyota dichotoma, and D. humifusa, were sampled at depths of more than two meters.

In the "charco" sampled at the Cotillo station, the macrophyte community was mainly formed by a very diverse assemblage of erect brown and red algae. Dictyotales such as Dictyota spp., Stypopodium zonale, Padina pavonica, and the also foliose Lobophora schneideri were dominant (Table 1). Species of Sphacelariales forming erect arborescent tufts, such as Halopteris scoparia and H. filicina, were also collected. Among the Rhodophyceae, species from the Bonnemaisoniales (such as the arborescent, duster-like Asparagopsis taxiformis), Nemaliales (such as the cylindrical, dichotomously branched Galaxaura rugosa), and Ceramiales (such as Lophocladia trichoclados) were the most common macrophytes in the "charco". At 2 m depth, Asparagopsis taxiformis was the dominant species, and species of Dictyotales such as Lobophora schneideri prevailed deeper.

Since epiphyte abundances are clearly related to differences in the structure and wet weight to surface area ratios of macrophytes, and an estimate of the surface/weight ratio has not yet been established, the macrophytes were categorized into four types based on an external morphology classification modified from the definitions in Parsons and Preskitt [46]: (1) Type 1: Foliose (laminar thallus); (2) Type 2: Ribbon-like (several times forked, ribbon-shaped thallus); (3) Type 3: Entangled clumps (thallus with cylindrical axes, 0.2-2.0 mm diameter, branched and entangled); (4) Type 4: Filamentous (thallus with thin cylindrical axes, ≤0.2 mm diameter, profusely branched and tree-like).

Epiphytic Dinoflagellate Assemblages

A principal component analysis (PCA) was performed to analyze the data describing the composition of epiphytic dinoflagellates. It was conducted using logarithmically transformed cell concentrations and the statistical software package SPSS. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.66, and Bartlett's test of sphericity, which tests for the presence of correlations among variables, was significant at p < 0.001. In addition, a non-parametric rank-based test (Kruskal-Wallis) was performed using the statistical software package SPSS version 14.0 (SPSS Inc., Chicago, IL) to compare the distributions of the abundance values of the dinoflagellate species from the two stations.

Morphometric Analysis and Abundances of Gambierdiscus

Morphological analyses were performed on individual cells of Gambierdiscus isolated from the epiphytic samples. Measurements of the epitheca and hypotheca of the same specimen were made by placing individual cells between two coverslips, which allowed them to be observed and photographed from their apical and antapical views. The morphologies of a total of 30-40 cells from each sample were studied. Cell morphology determinations were based on measurements of two thecal plates: the second apical (2′) plate, located on the epitheca, and the second antapical (2′′′′) plate on the hypotheca, following the methodology described by Bravo et al. [26]. Three morphometric parameters were used, as follows: (1) R1, as an assessment of the rectangular vs. the hatchet shape of the 2′ plate; (2) R2, representing the position of Po on the lateral edge of the 2′ plate and, therefore, the degree of eccentricity of Po in the cell; and (3) R3, as an indicator of the elongation of the 2′′′′ plate.
In addition, cell depth (D), corresponding to the dorso-ventral diameter, was also used. These parameters were selected following the most relevant bibliography on Gambierdiscus morphology, as mentioned by Bravo et al. [26]. These authors define the parameter values for each species based on a study performed with cultured cells. All measurements needed for those morphometric calculations were made on Calcofluor-stained cells using digital imaging software (ZEN lite, ZEISS Microscopy) and an epifluorescence microscope (Leica DMLA, Wetzlar, Germany) equipped with a UV light source and an AxioCam HRc (Carl Zeiss, Jena, Germany) digital camera. Concentrations of the five species of Gambierdiscus were estimated from the percentages of cells identified for each species and the total concentration value of the genus counted in each sample, as explained above (section on field sampling and cell enumeration).

Abundances of Epiphytic Dinoflagellates

Cells of the genera Gambierdiscus, Prorocentrum, Coolia, Sinophysis, Ostreopsis, and Vulcanodinium were identified at the two sampled stations but appeared in different ratios. In Cotillo, Prorocentrum, Coolia, Gambierdiscus, and Vulcanodinium were present at percentages higher than 10% (26%, 25%, 17%, and 15%, respectively), whereas Sinophysis and Ostreopsis represented 10% and 8%, respectively (Figure 1B). On the other hand, Vulcanodinium and Ostreopsis prevailed in Playitas (58% and 29%, respectively), while Prorocentrum, Coolia, Gambierdiscus, and Sinophysis were less abundant (7%, 5%, 1%, and 0.2%, respectively; Figure 1B). The genus Scrippsiella, detected in only three samples from Cotillo (reaching up to 486 cells g⁻¹), was not included in the statistical analyses. Total dinoflagellate abundances were higher at the Playitas station. The abundance mean values for all genera are plotted in Figure 1C. The differences between stations were highly significant both for genus composition and for abundances. Statistical values (means, standard deviations, maxima, and minima) of the abundances of all species at both stations are shown in Table 2. Significant differences (p < 0.001) were found between the distributions of the abundances of Gambierdiscus, Sinophysis, Ostreopsis, and Vulcanodinium at the two stations; on the contrary, no significant differences were found for Prorocentrum and Coolia (Table 2).

Epiphytic Dinoflagellate Assemblages and Macrophyte Associations

Different assemblages among the six dinoflagellate genera were revealed through the principal component analysis (PCA) of their abundances. Component 1 (PC1) grouped four genera: Ostreopsis, Prorocentrum, Coolia, and Vulcanodinium, whereas component 2 (PC2) was more associated with Gambierdiscus and Sinophysis (Figure 2A). On PC2, these last two genera were negatively correlated with Ostreopsis. The two components explained 62% of the variance (31.6% for component 1 and 30.6% for component 2). Factor loadings of the genera projected on the PCA plot show a clear separation of Ostreopsis and Vulcanodinium from Gambierdiscus and Sinophysis (Figure 2A), whereas the relationship of Prorocentrum and Coolia was not so evident. The components were differently associated with the two stations. As shown in Figure 2A, while Ostreopsis and Vulcanodinium were more associated with Playitas, Gambierdiscus and Sinophysis were more associated with the Cotillo station. The different macroalgal composition of the two stations and its differential association with the different genera of dinoflagellates is shown in Figure 2B.
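For readers who wish to reproduce this kind of ordination, the sketch below mirrors the analysis pipeline (log-transformed abundances, two-component PCA, Kruskal-Wallis station comparison) with scikit-learn and SciPy in place of SPSS. The cell-count matrix is randomly generated and purely illustrative; only the processing steps correspond to the analysis described above.

import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import kruskal

genera = ["Gambierdiscus", "Prorocentrum", "Coolia",
          "Sinophysis", "Ostreopsis", "Vulcanodinium"]
rng = np.random.default_rng(0)
counts = rng.lognormal(mean=4.0, sigma=1.5, size=(67, 6))  # synthetic cells/g

log_counts = np.log10(counts + 1.0)    # log-transform, as in the study

pca = PCA(n_components=2)
scores = pca.fit_transform(log_counts)
print(pca.explained_variance_ratio_)   # the study reported 31.6% and 30.6%
print(pca.components_)                 # factor loadings per genus (cf. Figure 2A)

# Kruskal-Wallis comparison of one genus between the two stations
# (here the first 33 samples stand in for Cotillo, the rest for Playitas)
g = log_counts[:, genera.index("Gambierdiscus")]
print(kruskal(g[:33], g[33:]))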
Abundances of the different dinoflagellate genera in the main macrophytes of the two stations are plotted in Figure 3. The mean abundances of Gambierdiscus and Sinophysis presented the highest values in the macrophytes of Cotillo, in contrast to those of Ostreopsis and Vulcanodinium, which were higher in the macrophytes of Playitas. The study of the abundances of the dinoflagellate genera in the different types of macrophytes showed that those with a filamentous structure clearly presented the highest abundances for all the genera studied (Figure 4). However, within that type of macrophyte, remarkable differences were found depending on the dinoflagellates. Thus, Gambierdiscus and Sinophysis presented the highest abundance values in the filamentous Halopteris and Asparagopsis, whereas Spyridia and Lophocladia showed the highest abundances of Ostreopsis and Vulcanodinium (Figures 3 and 4). Macrophytes with an entangled-clump structure, such as the turf species characteristic of the Playitas station, ranked second for Prorocentrum, Coolia, and Ostreopsis (Figure 4). On the contrary, the foliose macrophyte Lobophora occupied the second position for Gambierdiscus and Sinophysis, and the ribbon-like Dictyota and Canistrocarpus did so in the case of Vulcanodinium. Concentrations of Prorocentrum and Coolia presented a more homogeneous distribution among all macrophyte species (Figure 3).

Morphological Characterization of Gambierdiscus Species

Based on cell sizes (cell depth, denoted as D) and the parameters R1, R2, and R3 (related to the plates' morphology, see Material and Methods), 91% of the specimens were classified within the five Gambierdiscus species previously detected in the Canary Islands: G. australes, G. caribaeus, G. carolinianus, G. excentricus, and G. silvae. The values of the parameters (D, R1, R2, and R3) and the corresponding classification are scattered in Figure 5.
G. excentricus was separated from all the other species by the eccentricity of Po (represented by parameter R2) (Figure 5A), excepting the overlap of some specimens with G. australes. In those cases, the R1-R3 relationship was useful for identification (Figure 5B). Figure 5A shows that G. excentricus and G. silvae were the most easily discriminated species based on size and R2. In addition, the scatter plot of R1 (denoting the shape of the 2′ plate) against R3 (shape of the 2′′′′ plate) efficiently separated G. silvae and G. caribaeus from the rest of the species (Figure 5B). For the classification of those species, size was also useful, following the description by the same authors mentioned previously. Low overlap percentages were observed between the groups of G. australes and G. excentricus (1.9% of the total cells) regarding the eccentricity of Po (represented by parameter R2), the most differentiating trait between those species (Figure 5A). Notwithstanding, the general appearance of the cell, as well as the general shapes of the 2′ and 2′′′′ plates, helped to classify them. G. australes and G. caribaeus were the most similar species. Both coincide in the three following traits: the rectangular shape of the 2′ plate (R1), cell size (D), and the position of Po (R2) (Figure 5). The shape of the 2′′′′ plate, more elongated in G. australes than in G. caribaeus, was the most useful trait to discriminate them (Figure 5B). However, the overlap in that parameter was also remarkable. Due to this, it was not possible to separate 9% of the total cells, which were comprised in the G. australes/caribaeus group.
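As a rough illustration of how such morphometric rules can be turned into an automatic classifier, the sketch below encodes decision thresholds of the kind described above. The numeric cut-offs are placeholders invented for illustration; the actual species-specific ranges are those published by Bravo et al. [26].

def classify(D, R1, R2, R3):
    """Rule-based Gambierdiscus classifier. D = cell depth; R1 = rectangular
    vs. hatchet shape of the 2' plate; R2 = eccentricity of Po; R3 =
    elongation of the 2'''' plate. All thresholds are hypothetical."""
    if R2 > 0.45:                    # strongly eccentric Po
        return "G. excentricus"
    if D < 60.0:                     # distinctly small cells
        return "G. silvae"
    if R1 < 0.35:                    # hatchet-shaped 2' plate
        return "G. carolinianus"
    if R3 > 0.55:                    # elongated 2'''' plate
        return "G. australes"
    if R3 < 0.45:
        return "G. caribaeus"
    return "G. australes/caribaeus"  # overlap zone, as for ~9% of cells here

print(classify(D=78.0, R1=0.55, R2=0.60, R3=0.40))   # -> G. excentricus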
Diversity and Abundances of Gambierdiscus Species

Total abundances of the genus Gambierdiscus reached up to 3.8 × 10³ cells g⁻¹ at the Cotillo station and 8 × 10² cells g⁻¹ in Playitas. Means and standard deviations are shown in Table 2. The abundance distributions of the two stations showed significant differences (p < 0.001) (Table 2). Individual morphometric analyses, as mentioned in the previous section, revealed at least five species of Gambierdiscus at the two stations: G. australes, G. caribaeus, G. carolinianus, G. excentricus, and G. silvae. Figure 6 shows the percent cell concentrations of the Gambierdiscus species. Significant differences were only detected in the distributions of the percentages of G. australes and G. excentricus between the two stations (p < 0.01). G. excentricus was the most abundant of the five species, representing on average 56% and 75% in Cotillo and Playitas, respectively, followed by G. australes (means of 24% and 18% in each station, respectively). The number of specimens that could not be identified unequivocally, denominated G. australes/caribaeus, was quite abundant in Cotillo but rare at the Playitas station (means of 12% and 3%, respectively). G. caribaeus and G. silvae presented mean abundances of 4% and 5% in Cotillo and 1% and 3% in Playitas, respectively. Finally, G. carolinianus was even less represented at the two stations (Figure 6).
Regarding depth in the water column, Canistrocarpus cervicornis and Dictyota implexa were the macrophytes collected at the greatest depths (3.5 m and 6 m, respectively), where concentrations of G. excentricus were estimated to be higher than 10³ cells g⁻¹ (Figure 7). That species accounted for 96% and 76% of Gambierdiscus spp. in those samples, respectively.

Discussion

A great deal of research and communication effort has been devoted during the last decade to the study of tropical and subtropical benthic HABs, mainly those associated with ciguatera outbreaks and Gambierdiscus. However, with the exception of a few areas and dinoflagellate genera, the knowledge on harmful benthic microalgae abundance and distribution is still very scarce [47]. That knowledge has become even more essential considering the current expansion of some harmful benthic dinoflagellate species into temperate regions. Ciguatera has been an emerging human poisoning in Europe since the first outbreak occurred in the Canary Islands archipelago and in Madeira in 2004 [3,48]. Since then, populations of the CTX-producing dinoflagellates Gambierdiscus and Fukuyoa have been documented both in those regions of Macaronesia and in the Mediterranean Sea, though no CFP episodes have been confirmed in the latter region [3].
Yet, there are few data in the literature on harmful benthic dinoflagellates in the Canary Islands other than Gambierdiscus. Studies on this topic have been increasing since the emergence of ciguatera on the Islands. It is remarkable that two species of Gambierdiscus, G. excentricus and G. silvae, and two of Coolia, C. canariensis and C. guanchica, have been described in the last decade from samples from the Canary archipelago [49][50][51][52]. In addition, genera such as Gambierdiscus, Ostreopsis, Prorocentrum, Coolia, and Vulcanodinium had already been reported in the same region [26,53].

Diversity and Abundance of Harmful Benthic Dinoflagellates

For the six epibenthic genera herein studied, both the mean and the maximum cell concentrations showed the following descending order: Vulcanodinium, Ostreopsis, Prorocentrum, Coolia, Gambierdiscus, and Sinophysis (Table 2). These genera are comparable to those reported in other studies in the Canary Islands [26,53], though Fernandez-Zabala et al. [53] limited their study to Gambierdiscus, Ostreopsis, Prorocentrum, and Coolia. As far as we know, there are very few reports on benthic dinoflagellates other than Gambierdiscus or Fukuyoa for the other Islands of the Macaronesia region. The genera Ostreopsis, Prorocentrum, and Coolia have also been reported for the Cabo Verde Islands [53]. Moreover, a list of phytoplankton taxa including Ostreopsis (O. cf. ovata), Prorocentrum (P. lima and P. hoffmannianum), Coolia sp., and Gambierdiscus excentricus has been reported for Madeira [54,55]. Cell abundance comparisons from the literature are controversial due to the methodological differences among studies. The main methodological problem is related to the differences in macrophyte surfaces and morphologies, which make standardization difficult. Methodologies based on quantifying benthic dinoflagellates on artificial substrates have been developed in the last decade in order to normalize cell abundance to a standardized surface [56]. This methodology has been tested in the Canary Islands by Fernandez-Zabala et al. [53], showing that, in most cases, cell abundances of epiphytic dinoflagellates showed lower variability on artificial substrates than on macroalgae. However, a well-defined methodology to quantify epiphytic cells on macrophytes is still needed. In order to make the pertinent comparisons between macrophytes and artificial substrates, there should be a consensus on the methodologies of both procedures. This issue is particularly relevant for quantifying the potential associations between epiphytic dinoflagellates and certain macrophyte taxa.

Maximum concentrations of Gambierdiscus of 4.9 × 10³ cells g⁻¹ blot-dry weight of host macrophyte (n = 128, from samples collected from five Canary Islands) were already reported in Fuerteventura by Rodriguez et al. [15]. No mention of macrophyte species was given by those authors. The blot-dry procedure consists of draining algae overnight over soft laboratory paper. A loss of 62% of the weight on average has been reported when blot-dry macrophyte weight is used compared with the manually drained wet weight of the macrophyte used in the present paper, obviously with the corresponding increase in cell concentrations when the blot-dried weight expression is used [26]. Taking this into consideration, the estimated maximum values for Gambierdiscus from those authors and from our results (3.1 × 10³ and 3.8 × 10³ cells g⁻¹ wet weight, respectively) are of the same order of magnitude.
On the other hand, blooms of Gambierdiscus with concentrations higher than 10⁴ cells g⁻¹ wet weight were reported in the port of La Restinga [53,57]. Further investigations carried out with standardized methodologies should be directed at linking dinoflagellate populations and their associated environmental conditions with CFP risk areas in the Canary Islands. Furthermore, the high heterogeneity in Gambierdiscus cell numbers in the region makes it essential to investigate the relationships between certain habitats and the detected hotspot areas. The maximum abundances of Ostreopsis found in the present study were lower than previously reported values in the region, since concentrations up to 2.2 × 10⁵ cells g⁻¹ wet weight of algae had been documented [53]. Even if these numbers are lower than those for the Ostreopsis blooms reported in the NW Mediterranean Sea and New Zealand, where they have been associated with human health problems caused by coastal aerosols [58], the risk of Ostreopsis proliferations in the Canary Islands should be investigated.

The genus Prorocentrum includes benthic species, such as P. lima and P. hoffmannianum, that produce okadaic acid and dinophysistoxins or derivatives, which have been associated with Diarrhetic Shellfish Poisoning [32,[59][60][61]. Although in the present study no taxonomic studies were carried out that allowed identification at the species level, the different morphologies observed in cell size and shape reveal a high specific diversity, which includes both P. lima-like cells and P. hoffmannianum-like specimens. Hence the great interest in carrying out taxonomic studies of this potentially toxic genus in the region. Regarding the genus Sinophysis (often observed at the Cotillo station), to our knowledge it has not been associated with toxin production. The only species reported so far in the Canary Islands, S. canaliculata, harbors cyanobionts of uncertain taxonomic position [62,63].

Vulcanodinium is not a genus usually included in studies of benthic dinoflagellates, although it has been documented in benthic communities of the Canary Islands [15,26]. Its high abundance in the present study is a remarkable new finding, given the high concentrations observed at the Playitas station. Vulcanodinium rugosum, the only species described so far from the genus, was described in 2011 from a French Mediterranean lagoon and is responsible for producing neurotoxic pinnatoxins (PnTXs), which have been recurrently detected in the shellfish of that region [64,65]. The morphology of the Vulcanodinium cells in the samples coincides with the motile cells described by Rhodes et al. and Zeng et al. [64,66]; however, their benthic/planktonic character should be studied. In the life strategy of this species, the phase in which vegetative division occurs is the benthic, non-mobile spherical cell, which is considered a cyst [66]. This type of cyst has been called a division cyst; division cysts have been described in species considered planktonic but with an intense relationship with the benthos [67,68]. No human poisonings by PnTXs are known; however, because of their high toxic potential, the European Food Safety Authority (EFSA) has pointed out the need for more information on the oral toxicity of these compounds for their risk assessment as seafood contaminants [65]. Therefore, future taxonomic, life cycle, and toxin studies are required for the organism found in the Canary Islands.
Associations of Benthic Harmful Dinoflagellates and Macrophyte Communities

Our data showed preferential associations of benthic dinoflagellates in the benthic communities of the Canary archipelago. The population distributions of Gambierdiscus and Sinophysis were significantly opposite to those of Ostreopsis and Vulcanodinium. Moreover, the two principal components from the PCA were preferentially associated with two different algal communities, those of Cotillo and Playitas, respectively. Our results agree with the distinct distributions of Gambierdiscus and Ostreopsis reported in other studies (e.g., [69]). These authors reported Ostreopsis spp. in greater concentrations in reef areas with high wave energy, coinciding with the observations in the Mediterranean by Vila et al. [70]. This is also supported by the results of Grzebyk et al. [71], who reported the highest abundances of Ostreopsis in turbulent coral reef habitats. However, blooms of this genus have also been registered in protected areas [31,72]. To better understand these patterns, proper identification of the Ostreopsis assemblages in each case, as well as more information about their ecology and the environmental factors associated with their proliferations, are needed. On the other hand, the distribution of Gambierdiscus has been more associated with sheltered zones protected from the wind, and it is adversely affected by terrestrial inputs [71]. These authors also cite Ostreopsis and Prorocentrum as being more tolerant of terrestrial loads and as exploiting different ecological niches than Gambierdiscus. These opposite niches can be determined by the spatial distribution of environmental factors, such as hydrodynamics and terrestrial contributions.

Macrophytes, as important elements of benthic niches, are interrelated with environmental factors. Among these, wave exposure integrates a wide variety of environmental factors and is critical for the biodiversity of coastal ecosystems. It is known that hydrodynamic conditions influence the distribution of intertidal and subtidal organisms [73,74]. In this way, the direct and indirect effects of waves have been reported as an important driver of the distribution and biodiversity of marine macrophytes in coastal ecosystems [75]. Comparing the characteristics of the habitats studied here, the Playitas station is more exposed to wave impact, with macrophyte communities mainly composed of mixed red turf algae. This is very different from Cotillo, a more protected habitat with a very different macrophyte composition. Our data suggest that the "charco" at the Cotillo station would provide a better niche for the development of Gambierdiscus and Sinophysis. Instead, the rocky platform exposed to waves in Playitas would configure a habitat more suitable for Ostreopsis and Vulcanodinium.

Our results suggest that dinoflagellate-macrophyte associations are determined by the characteristics of the studied habitats. The environmental conditions and the microhabitats found in each location would determine the dominant organisms. On the other hand, their populations and the resulting associations could change over time. It must be taken into account that this study represents a snapshot at a certain time of the year. In that sense, further studies integrating spatial and temporal scales are needed, as these dimensions are highly relevant for management purposes and sampling strategy [76]. The fact is that macrophytes serve as habitat and function as complex ecological systems depending on their size, structure, and longevity.
They harbor a great variety of epiphytic algae, as well as other microorganisms and associated mobile animals (including meiofauna, macrofauna, and fish). Therefore, given such complexity, discerning the relationship between macrophytes and epiphytic dinoflagellates still remains a difficult task. Notwithstanding, some trends appear in the literature about the substrate preferences of the main benthic dinoflagellate genera (Ostreopsis, Prorocentrum, Coolia, and Gambierdiscus), linked with macrophyte morphology and taxonomy (see Boisnoir et al. [38] and references therein). Regarding Gambierdiscus, this genus seems to be associated with a wide diversity of macrophyte taxa, although its epiphytic behaviour (growth and attachment) varies by species and host algae [77]. Recent authors have emphasized the importance of microhabitats in the benthic communities of ciguatera-endemic areas and the complexity of habitats as a determinant factor for the heterogeneity in the distributions of Gambierdiscus and other epiphytic dinoflagellates [36]. Environmental factors such as light and wave impact have a heterogeneous distribution and, therefore, generate a great deal of heterogeneity in the macrophyte communities and the associated dinoflagellates.

Since the beginning of studies on the communities where ciguatera-producing organisms thrive, many authors have remarked that the type of substrate plays an important role in their distributions. Yet, the role of some macrophytes as potentially preferential substrates is controversial. Some of the first ciguatera studies mentioned that rhodophytes were the most prone to harbor epiphytic harmful dinoflagellates [78,79]. However, other authors described opportunistic patterns regarding substrate interactions, with occurrences on rhodophytes, phaeophytes, chlorophytes, and vascular plants [20,80]. The examination of substrate preferences is controversial due to the difficulty of standardizing cell abundances calculated per weight of the host macrophyte. As far as we know, no estimates of the surface/weight ratio have been established, which prevents accurate comparisons among the different species of macrophytes. To avoid this handicap, we conducted comparisons between types of macrophytes depending on their thallus architecture. The thallus architecture determines the total surface available for epiphytic dinoflagellates and defines a range of microhabitats which offer shelter and facilitate survival. The available surface and the number of microhabitats increase progressively from the two-dimensional foliose thallus to the three-dimensional, flexible filamentous thallus with a high surface:volume ratio (types 1-4, respectively; see Material and Methods). Our results revealed filamentous macrophytes as the preferred substrates for all the dinoflagellate genera, suggesting that this morphology shapes a very heterogeneous habitat which increases the diversity and richness of the epiphytic communities. Macrophytes that formed entangled clumps also showed high concentrations of dinoflagellates, especially of the genus Ostreopsis. Nevertheless, this classification, aimed at defining general trends of host preferences, has some limitations. For example, the delimitation between the two macrophyte types mentioned is not strict. In our study, the "entangled clumps" type coincided mainly with turf algae that occasionally included some specimens of filamentous algae.
Despite these considerations, differences in macrophyte preferences between dinoflagellate genera were observed (i.e., the association between the "entangled clumps" type formed by turfs of rhodophytes and Ostreopsis vs. the preference of Vulcanodinium for ribbon-like macrophytes).

Gambierdiscus Results

In the Canary archipelago, ciguatera outbreaks could be related to local Gambierdiscus spp., including those identified to date: G. australes, G. caribaeus, G. carolinianus, G. excentricus, G. silvae, and G. belizeanus [15,30]. The morphometric study of the first five of these species performed by Bravo et al. [26] was applied in the present study with the aim of identifying them in samples from Fuerteventura; note that the publication of the detection of G. belizeanus in the region was almost coincident with that of the present manuscript. Despite their morphological similarity, 91% of the specimens were successfully classified at the species level. G. excentricus and G. australes were the most abundant species, in that order, representing 83% (61% and 22%, respectively) of the total Gambierdiscus spp. Taking into account that 9% of the analyzed specimens were classified within the group G. australes/G. caribaeus, G. australes is almost certainly underestimated. The dominance of G. excentricus and G. australes matches previous molecular results based on LSU rDNA and SSU rDNA sequences of cultures and single cells isolated from the eastern Canary Islands [15,26]. Quantification based on morphology is very time-consuming and not totally effective, but species-specific quantitative PCR assays have not yet been undertaken in this region, as has been done in other areas such as the Gulf of Mexico and the Pacific Ocean [28,81,82].

The species of the genus Gambierdiscus produce ciguatoxins (CTXs) and maitotoxins (MTXs), but only the transfer of CTXs up the food chain results in their metabolism and accumulation in fish tissues, thus potentially causing CFP in humans. Although highly toxic, MTXs do not induce CFP because of their low oral potency and inability to accumulate in the muscle tissue of fish [83,84]. It has very recently been reported that different species of Gambierdiscus contain different proportions of the two types of toxins and, therefore, have very different toxic potentials [23,24,85]. For this reason, in order to assess the potential risk of CFP occurrence, it is necessary to know the specific diversity and distribution of Gambierdiscus in the region, as well as the CTXs and MTXs contained by each species. G. excentricus displays the highest content of CTXs reported so far [23,24,50], and its CTX-like toxicity has been comparable to that of G. polynesiensis, the predominant CTX producer in the South Pacific, a ciguatera-endemic region. In contrast to the consistent toxicity characteristics of G. excentricus, analyses of G. australes have yielded variable results depending on the strains and their origins [24,30,85,86]. The toxicity of the rest of the Gambierdiscus species from the Canary Islands has been very scarcely studied; the neuroblastoma cell-based assay (neuro-2a CBA) revealed lower CTX-like toxicity than for the former species (or even none for G. caribaeus), although high intraspecific variability has also been reported [24,30].
Energy and angular distributions in 250 eV electron and positron collisions with argon atom

We present energy and angular differential cross sections for single ionization in collisions between electrons and positrons with argon atoms at 250 eV. We treat the collision classically using the three-body approximation, where the target atoms are described within the single-active-electron approximation using a Garvey model potential and only the outermost electron is involved in the collision dynamics. Our present classical trajectory Monte Carlo model is shown to describe the ionization cross sections reasonably well and to agree with existing experimental data. We show that the energy distributions, both for electron and positron impact, have the same shape and structure. In contrast, the angular distributions for electron and positron impact behave completely differently, which may be attributed to the projectile-target core interaction. We also present the ionization probabilities as a function of impact parameter. We found that for positron impact the distribution is symmetric, while for electron impact the distribution is asymmetric.

Introduction
The understanding of the ionization process in ion-atom collisions is of fundamental interest in fields ranging from atmospheric and interstellar physics to radiation damage of solids, surfaces, and biological systems. One area of interest that received much attention in the 1970s and 1980s is the angular and energy distribution of the ejected electrons, as this provides basic information about the collision dynamics and has direct application with respect to modeling radiation damage. Although the attention has shifted to triply (fully) differential studies, which test theory more rigorously, experimental doubly differential data are presented on an absolute scale rather than the relative scale used for triply differential data. Thus, being able to compare the theoretical and experimental results on an absolute scale is one important justification of the present work. Another is to compare results obtained for both electron and positron impact, since various interactions can be studied separately or deduced from the comparisons. Examples of this include being able to separate projectile scattering from electron emission using positron impact, and assessing the importance of post-collision processes by comparing electron and positron impact data.
During the past years the cross sections for electron and positron impact have been studied extensively, both experimentally [1-5] and with various models and methods, such as the R-matrix approach [6], various versions of the distorted-wave Born approximation (DWBA) [6-11], and the convergent close-coupling method [12]. Here, not aiming for completeness, we mention only a few representative studies. DuBois et al [1] investigated experimentally how the 1st- and 2nd-order mechanisms influence the differential electron emission in positron and electron collisions with Ar atoms. Ren et al presented experimental triple-differential cross sections of single ionization for 195 eV electron impact on Ar [2]. In a combined experimental and theoretical work of Ren et al, low-energy electron-impact single ionization of Ar(3p) was studied [3]. Babij et al measured the direct single-ionization cross section of Ar by positron impact just above the first ionization threshold [4]. DuBois and de Lucio presented experimental triply differential data for 200 eV positron and electron impact ionization of argon [5]. Theoretically, the R-matrix approach offers a powerful technique for the description of the cross sections. Bartschat and Burke calculated the total and single differential ionization cross sections of the argon atom, including inner shells, using a two-state R-matrix approach in combination with the DWBA [6]. Variants of the DWBA were used by many authors to calculate the ionization cross sections of Ar by electron and positron impact [7-11]. Recently the convergent close-coupling method was also used to calculate the ionization cross section of Ar by electron impact [12].

In addition to the various quantum mechanical calculations, many classical theoretical calculations have also been used for predicting the cross sections in collisions between electrons and positrons with atoms. One of the frequently used classical models is the classical trajectory Monte Carlo (CTMC) method. The CTMC method is used here because (1) it has been shown to be quite successful in dealing with a wide variety of processes in ion-atom collisions involving three or more particles [13-19], (2) it was shown that the method can also be successfully applied to ionization studies at light-particle impact, i.e. electrons and positrons [20-27], (3) it has the advantages of being non-perturbative and of tracking the projectile and target particles independently, (4) individual interactions can easily be turned on or off in order to study their importance, and (5) it provides information generally not available in quantum mechanical codes, such as the impact parameter and correlated information about the velocity components of the scattered projectile and the ejected electron. The key point of the calculations is the proper description of the collisions.
In this work, which is a prelude to follow-up studies of the impact parameters associated with various scattering and ejection angles, we concentrate on the energy and angular differential cross sections for single ionization in collisions between electrons and positrons with argon atoms. The present work concentrates on 250 eV primary energy, which was chosen because absolute cross sections for electron impact are available (see [28, 29], which present an overview of the data). We apply the CTMC method to separate and compare the projectile-scattering and target-emission components, and we go beyond previous studies by providing impact parameter information for the total ionization.

Theory
In the present work the CTMC simulations were made using a three-body approximation where the many-electron argon atom is replaced by a one-electron atom [23, 24]. Therefore, in our CTMC model the three particles are the projectile (P), one active atomic target electron (e), and the remaining target ion (T), consisting of the target nucleus and the remaining target electrons. Figure 1 shows the relative position vectors of the three-body collision system. The three particles are characterized by their masses and charges. We note that this model is the classical analogue of the quantum-mechanical effective single-electron treatment of the collisions, in which the electrons are treated equivalently. For the description of the interaction among the particles, a central model potential developed by Green [30], which is based on Hartree-Fock calculations, is used. For a test charge q at distance r from the nucleus, the potential can be written as

V(r) = (q/r) [Z − (N − 1)(1 − Ω(r))],    Ω(r) = [(η/ξ)(exp(ξr) − 1) + 1]^(−1),    (1, 2)

where Z is the nuclear charge and N is the total number of electrons in the atom or ion. The potential parameters ξ and η are obtained in such a way that they minimize the energy for a given atom or ion. Using this energy minimization, Garvey et al obtained the following parameters for Ar: η = 3.50 and ξ = 0.957 (in atomic units, au) [31]. We note that this type of potential has the further advantage of having the correct asymptotic forms for both small and large values of r:

V(r) → qZ/r for r → 0,    (3)
V(r) → q(Z − N + 1)/r for r → ∞.    (4)

The Lagrangian of the three-particle system can be written as

L = Σ_i (m_i/2) |ṙ⃗_i|² − Σ_{i<j} V_ij(|r⃗_i − r⃗_j|),    i, j ∈ {P, T, e},    (5)

where r⃗, Z and m are the position vector, the charge and the mass of the noted particle, respectively, and ṙ⃗ and similar quantities in the following equations are velocity vectors. The equations of motion then follow as

m_i r̈⃗_i = −∂/∂r⃗_i Σ_{j≠i} V_ij(|r⃗_i − r⃗_j|),    i = P, T, e.    (6-8)

Introducing the relative position vectors A⃗ = r⃗_e − r⃗_T, B⃗ = r⃗_T − r⃗_P, and C⃗ = r⃗_P − r⃗_e, such that A⃗ + B⃗ + C⃗ = 0⃗, after some elementary calculus the equations of motion can be rewritten as two coupled equations for Ä⃗ and B̈⃗, with C⃗ eliminated through C⃗ = −A⃗ − B⃗ (equations (9) and (10)); Ä⃗, B̈⃗ and similar quantities in the following equations are acceleration vectors.
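For concreteness, the screened core potential of equations (1)-(4) can be sketched numerically as follows. This is a minimal sketch assuming the standard Green/Garvey functional form with the Ar parameters quoted above; the function names and printed checks are illustrative only, not part of the original work.

```python
import numpy as np

ETA, XI = 3.50, 0.957    # screening parameters for Ar (atomic units)
Z, N = 18, 18            # nuclear charge and electron number of argon

def omega(r):
    """Screening function Omega(r) = 1 / ((eta/xi) * (exp(xi*r) - 1) + 1)."""
    return 1.0 / ((ETA / XI) * np.expm1(XI * r) + 1.0)

def v_core(r, q=-1.0):
    """Potential energy of a test charge q at distance r from the target core.

    Asymptotics: V -> q*Z/r for r -> 0 (bare nucleus) and
    V -> q*(Z - N + 1)/r for r -> infinity (net core charge +1).
    """
    z_eff = Z - (N - 1) * (1.0 - omega(r))
    return q * z_eff / r

# quick check of the two limits for an electron (q = -1)
print(v_core(1e-4) * 1e-4)   # ~ -Z = -18 (small r)
print(v_core(50.0) * 50.0)   # ~ -(Z - N + 1) = -1 (large r)
```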
These differential equations were integrated with respect to time as the independent variable by the standard Runge-Kutta method for a given set of initial conditions. Equations (9) and (10) constitute 12 coupled first-order differential equations. Therefore, we need to specify 12 initial conditions: the coordinates and velocities of the internal motion of the (T, e) atomic system and of the relative projectile-atom center-of-mass motion. The origin of our coordinate system in the laboratory frame is the center of mass of the target atom, and the z axis is parallel to the velocity vector of the projectile (see figure 1). The initial relative motion is specified by the velocity of the projectile, v_p, and the distance, R, between the projectile and the atomic center of mass. During our CTMC simulations v_p is fixed. The impact parameter must be chosen so that it reproduces a uniform flux of incident particles. Except for elastic collisions, we determine a maximum value of the impact parameter, b_max, such that for impact parameters above b_max the probabilities of the investigated processes are zero or negligible. The initial distance R between the projectile and the target atom is chosen at sufficiently large internuclear separations, where the projectile-target interactions are negligible; in practice we have used R = (4-5) b_max Z_P.

The initial electronic state of the target atom is obtained from the microcanonical distribution, selected in a similar fashion as described by Reinhold and Falcón [32] for non-Coulombic systems. A microcanonical ensemble characterizes the initial state of the target constrained to the initial binding energy of the given shell:

ρ(A⃗, Ȧ⃗) = K₁ δ(E₀ + (μ_Te/2)|Ȧ⃗|² + V(A)),    (13)

where K₁ is a normalization constant, E₀ is the ionization energy of the active electron, V(A) is the electron-target-core potential, A is the length of the vector A⃗, and μ_Te is the reduced mass of particles T and e. According to equation (13), the electronic coordinate is confined to the interval where the relation

−E₀ − V(A) ≥ 0    (14)

is satisfied. In the following we assume that equation (14) has only one root, A₀; the values of A are then confined to the single interval 0 < A < A₀. Potentials satisfying this condition represent the electron-core interaction. In order to generate an initial condition for the active electron, we must perform a transformation from the variables (A⃗, Ȧ⃗) to a set of uniformly distributed variables completely specifying the initial state of the system given by equation (13). This transformation is a combination of two successive changes of coordinates (see [32]); the required distribution is uniform in the independent variables w, ϑ_r, ϑ_v, φ_r, φ_v (equation (15)), where

w(A) ∝ ∫₀^A x² [2 μ_Te (−E₀ − V(x))]^(1/2) dx,    (16)

and, for A < A₀, w is always within the interval (0, 1). An initial condition for the active electron can now easily be generated: a random electronic state with binding energy E₀ is selected by five random numbers distributed in the ranges

w ∈ (0, 1),  cos ϑ_r ∈ (−1, 1),  cos ϑ_v ∈ (−1, 1),  φ_r ∈ (0, 2π),  φ_v ∈ (0, 2π).    (17)

The corresponding A⃗ and Ȧ⃗ are then obtained by placing the electron at the distance A that inverts w(A), in the direction (ϑ_r, φ_r), with a velocity of magnitude [2(−E₀ − V(A))/μ_Te]^(1/2) in the direction (ϑ_v, φ_v) (equations (18)-(20)). For practical reasons, at the beginning of our CTMC calculations the values of A and the corresponding values of w, computed numerically from equation (16), are tabulated. During the Monte Carlo simulations the particular values of A are selected from this table using interpolation. For a given set of initial conditions, the three-body, three-dimensional CTMC calculation is performed as described by Tőkési and Kövér [23].
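The tabulate-and-invert sampling procedure just described can be illustrated with a short sketch. It assumes the Ar(3p) binding energy of 0.581 au used in this work and the Garvey core potential from the previous sketch; all names are hypothetical, and the code is a minimal reconstruction, not the authors' implementation.

```python
import numpy as np

MU = 1.0        # reduced mass of the (T, e) pair in au (~ electron mass)
E0 = 0.581      # Ar(3p) binding energy used in the text (au)

def v_core(r):  # Garvey potential for Ar with q = -1 (see the previous sketch)
    om = 1.0 / ((3.50 / 0.957) * np.expm1(0.957 * r) + 1.0)
    return -(18.0 - 17.0 * (1.0 - om)) / r

def p_rad(a):
    """|p| = sqrt(2*mu*(-E0 - V(a))); zero in the classically forbidden region."""
    return np.sqrt(2.0 * MU * np.clip(-E0 - v_core(a), 0.0, None))

# tabulate w(A) once (equation (16)) on a grid covering 0 < A < A0
a_grid = np.linspace(1e-3, 5.0, 4000)
density = a_grid**2 * p_rad(a_grid)      # microcanonical radial density ~ A^2 p(A)
w_grid = np.cumsum(density)
w_grid /= w_grid[-1]                     # normalize so w runs over (0, 1)

def sample_electron(rng):
    """Draw (A_vec, Adot_vec) from the five uniform random numbers of equation (17)."""
    w = rng.random()
    cos_tr, cos_tv = rng.uniform(-1.0, 1.0, size=2)     # cos(theta_r), cos(theta_v)
    phi_r, phi_v = rng.uniform(0.0, 2.0 * np.pi, size=2)
    a = np.interp(w, w_grid, a_grid)                    # invert w(A) by table lookup
    sin_tr, sin_tv = np.sqrt(1 - cos_tr**2), np.sqrt(1 - cos_tv**2)
    A = a * np.array([sin_tr * np.cos(phi_r), sin_tr * np.sin(phi_r), cos_tr])
    v = (p_rad(a) / MU) * np.array([sin_tv * np.cos(phi_v), sin_tv * np.sin(phi_v), cos_tv])
    return A, v

A_vec, Adot_vec = sample_electron(np.random.default_rng(1))
```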
The energy and angular differential cross sections were computed with the following formulas:

dσ/dE = (2π b_max)/(T_N ΔE) Σ_j b_j(i),    (21)
dσ/dΩ = (2π b_max)/(T_N ΔΩ) Σ_j b_j(i).    (22)

The statistical uncertainty of the cross section is given by

Δσ(i) = σ(i) [(T_N − T_N(i))/(T_N T_N(i))]^(1/2).    (23)

In equations (21)-(23), T_N is the total number of trajectories calculated for impact parameters less than b_max, T_N(i) is the number of trajectories that satisfy the criteria for ionization, and b_j(i) is the actual impact parameter of the trajectory corresponding to the ionization process under consideration in the energy interval ΔE and the emission-angle interval ΔΩ of the electron. Note that here, unlike in previous studies, one of the goals is to investigate individual processes as a function of impact parameter and to see how this differs when the various Coulomb forces are reversed by changing the sign of the projectile charge. Our CTMC results are compared at the singly and doubly differential level with the absolute electron-impact measurements of DuBois [28, 29].

We performed a large number of classical trajectory simulations based on our three-body code. For the electron-Ar(3p) simulations, 3.1 × 10⁷ trajectories were followed; for the positron-Ar(3p) collisions, 1.0 × 10⁷ individual trajectories were calculated. The multi-electronic Ar atom was modeled by the model potential in equation (1). During the simulation we account only for ionization from the Ar 3p shell. The initial state of the target is characterized by a microcanonical ensemble constrained to an initial binding energy of 0.581 a.u., at a relatively large distance from the collision center, choosing the initial parameters randomly. The differential cross sections were calculated at large separations of the particles after the ionization occurs. The convergence of our calculations was tested at two values of the separation, i.e., the integration of the classical equations of motion was stopped at 1000 a.u. and at 100 000 a.u. from the collision center. We found no visible difference between the differential cross sections evaluated at these two distances. Therefore, to minimize computation time, the calculations were stopped at 1000 a.u.
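As a concrete reading of equations (21) and (23), a minimal histogram estimator over recorded trajectories might look as follows. The trajectory record format and all array names are assumptions made for this sketch, not the implementation used in this work.

```python
import numpy as np

def sdcs_energy(b, e_out, ionized, b_max, n_tot, e_edges):
    """dsigma/dE from CTMC trajectories: (2*pi*b_max / (N_T * dE)) * sum_j b_j."""
    b_ion = b[ionized]
    idx = np.digitize(e_out[ionized], e_edges) - 1      # energy-bin index per event
    n_bins = len(e_edges) - 1
    sigma = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for k in range(n_bins):
        in_bin = idx == k
        de = e_edges[k + 1] - e_edges[k]
        sigma[k] = 2.0 * np.pi * b_max / (n_tot * de) * b_ion[in_bin].sum()
        counts[k] = in_bin.sum()
    # statistical error of equation (23), here applied bin by bin
    err = sigma * np.sqrt((n_tot - counts) / (n_tot * np.maximum(counts, 1)))
    return sigma, err

# usage with toy trajectory records (not physical results)
rng = np.random.default_rng(0)
b = rng.uniform(0.0, 5.0, 100_000)            # impact parameters (a.u.)
e_out = rng.exponential(20.0, b.size)         # emission energies (eV)
ionized = rng.random(b.size) < np.exp(-b)     # toy ionization criterion
dd, err = sdcs_energy(b, e_out, ionized, 5.0, b.size, np.linspace(0, 200, 21))
```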
Results and discussions
Figure 2 shows the energy differential cross sections of single target ionization in collisions between 250 eV electrons and Ar atoms. Because the electrons are distinguishable within the CTMC calculations, we show the results separately, i.e., the contributions from the ejected target electron and from the scattered projectile electron. The sum of the electron yields is in reasonably good agreement with the experimental data, with the biggest deviation for energies between 30 and 130 eV. For the lower portion of this energy region, the blue curve shows that the discrepancy between experiment and theory is most likely due to the ejected-target-electron contribution, which decreases monotonically with increasing energy but is an order of magnitude larger than the scattered-projectile-electron contribution (the dashed red curve), which increases with energy. However, at the higher end of the region showing discrepancies, the ejected and scattered electron contributions are comparable, so it is unclear which is responsible. Overall, figure 2 shows that at low energies the measured electrons can be attributed to the target electrons, and at higher energies to the scattered projectile electrons. As a final note, the small oscillations seen in the blue curve, and also in some of the following figures, are due to the statistical nature of the Monte Carlo simulations, which becomes apparent for rare events, i.e. relatively small cross sections.

Figure 3 shows the angular differential cross sections of single target ionization in collisions between 250 eV electrons and Ar atoms. In this case, the sum of the target-electron and scattered-projectile-electron distributions agrees well with the measured data. The target-electron contribution has no strong angular dependence; it is almost uniform. In contrast, the scattered electrons have a strong angular dependence. Preliminary studies imply that the bump at 160° is associated with the projectile-target core interaction. We note further that the position of the bump changes with the incident electron energy. At present we do not understand what is occurring, but we plan to address this in detail later.
To test the effect of the sign of the projectile charge, we also performed simulations for 250 eV positron impact. Figure 4 shows the energy differential cross sections, which have shapes and structures similar to those obtained for electron impact, both for the target-electron contribution and for the scattered projectile. Figure 5 shows the angular differential cross sections for 250 eV positron impact on an argon target. The angular distributions are completely different from those obtained for electron impact. Here, the scattered-positron yield decreases monotonically with angle, whereas for electron impact there was an increase at large angles. A scenario that could possibly explain this is that for electron impact the scattered electron orbits around the positively charged ion core and exits in the backward direction, whereas for positron impact there is no attractive force from the ion core to make this happen. We also note that we do not observe any bump in the scattered-positron distribution, which further supports the idea of a scattered projectile-target core interaction. In contrast to the nearly isotropic distribution observed for electron impact, the ejected-target-electron distributions for positron impact show more variation as a function of the scattering angle. Figure 6 shows bP(b) for single ionization of Ar(3p) at 250 eV electron and positron impact energies. For comparison, the black dotted curve shows the radial distribution of the argon electrons. For positron impact, the ionization probability has a symmetric, almost Gaussian, shape with a maximum around 1.4 a.u., whereas for electron impact it is asymmetric with a maximum around 0.93 a.u. Thus, for electron impact, ionization occurs at the inner portion of the 3p lobe, whereas for positron impact it mostly takes place slightly outside the center of the lobe. However, the integrals of these curves, which are proportional to the total cross sections, are almost the same for electron and positron impact.

Conclusions
We have presented studies of the single differential ionization cross sections in collisions of 250 eV electrons and positrons with the Ar(3p) target. The calculations were performed classically using the three-body CTMC approximation. We found that our present CTMC model, in which the target atoms are described within the single-active-electron approximation, describes the ionization cross sections reasonably well and agrees with existing experimental data. We have shown that the energy distributions, both for electron and positron impact, have the same shape and structure. At the same time, the angular distributions behave completely differently, which we suggest is associated with a projectile-target core interaction; preliminary work suggests that the observed bump in the scattered-electron distributions for electron impact is also due to this interaction. The ionization probabilities as a function of impact parameter were also presented, and we found different probability distributions for electron and positron impact: for positron impact the distribution is symmetric, whereas for electron impact it is asymmetric. Further work, using different incident energies, is in progress to clarify and identify the source of the bump in the angular differential cross sections.
Figure 1. The relative position vectors of the particles involved in the three-body collision: A⃗ = r⃗_e − r⃗_T, B⃗ = r⃗_T − r⃗_P, C⃗ = r⃗_P − r⃗_e; r⃗_Te is the position vector of the center of mass of the target system, and b is the impact parameter.
Figure 2. Energy differential cross sections for 250 eV electron impact on an argon target. Solid red circles: experimental data [12, 13]; red dashed line: present CTMC results, scattered-projectile-electron contribution; blue line: present CTMC results, ejected-target-electron contribution; green line: present CTMC results, sum of the projectile and target electron contributions.
Figure 3. Angular differential cross sections for 250 eV electron impact on an argon target. Solid red circles: experimental data [12, 13]; red dashed line: present CTMC results, projectile-electron contribution; blue line: present CTMC results, target-electron contribution; green line: present CTMC results, sum of the projectile and target electron contributions.
Figure 4. Energy differential cross sections for 250 eV positron impact on an argon target. Red line: present CTMC results, positron contribution; blue line: present CTMC results, target-electron contribution.
Figure 5. Angular differential cross sections for 250 eV positron impact on an argon target. Red line: present CTMC results, scattered-positron contribution; blue line: present CTMC results, target-electron contribution.
Figure 6. bP(b) versus b for single ionization of Ar(3p) at 250 eV projectile impact energy. Blue line: electron projectile; red line: positron projectile. The dashed black curve shows the radial distribution of the argon electrons.
Blood–brain and blood–cerebrospinal fluid passage of BRICHOS domains from two molecular chaperones in mice

Targeting toxicity associated with β-amyloid (Aβ) misfolding and aggregation is a promising therapeutic strategy for preventing or managing Alzheimer's disease. The BRICHOS domains from human prosurfactant protein C (proSP-C) and integral membrane protein 2B (Bri2) efficiently reduce neurotoxicity associated with Aβ42 fibril formation both in vitro and in vivo. In this study, we evaluated the serum half-lives and permeability into the brain and cerebrospinal fluid (CSF) of recombinant human (rh) proSP-C and Bri2 BRICHOS domains injected intravenously into WT mice. We found that rh proSP-C BRICHOS has a longer blood serum half-life compared with rh Bri2 BRICHOS and passed into the CSF but not into the brain parenchyma. As judged by Western blotting, immunohistochemistry, and ELISA, rh Bri2 BRICHOS passed into both the CSF and brain. Intracellular immunostaining for rh Bri2 BRICHOS was observed in the choroid plexus epithelium as well as in the cerebral cortex. Our results indicate that intravenously administered rh proSP-C and Bri2 BRICHOS domains have different pharmacokinetic properties and blood–brain/blood–CSF permeability in mice. The finding that rh Bri2 BRICHOS can reach the brain parenchyma after peripheral administration may be harnessed in the search for new therapeutic strategies for managing Alzheimer's disease.

Many neurodegenerative disorders such as Parkinson's disease, Huntington's disease, and Alzheimer's disease (AD) are strongly linked with the accumulation of specific misfolded proteins (1). AD is the most common neurodegenerative disease, and it is characterized by the presence in the brain of extracellular plaques of β-amyloid (Aβ) peptide and of intracellular tangles of hyperphosphorylated tau proteins (2). The causes of AD are not clear, but the "amyloid cascade hypothesis" has been brought forward as one driving factor. The levels of Aβ start to increase 20 years before onset of the disease and lead to the formation of β-sheet oligomers and fibrils that contribute to the onset of AD (3, 4). The Aβ peptide is derived from the Aβ precursor protein (AβPP), an integral membrane protein, by sequential proteolytic processing by β- and γ-secretases. The γ-secretase cleaves AβPP in the transmembrane region, resulting in Aβ peptides of varying length, most commonly 40 or 42 residues, of which the AD-associated Aβ42 is the more neurotoxic and aggregation-prone variant. Mutations in AβPP and in the enzymatic component of γ-secretase, presenilin 1, increase the levels of Aβ42 and cause early-onset familial AD. Soluble oligomers of Aβ have been suggested as key components causing synaptic and cognitive dysfunction, because the correlation between synaptic loss and levels of soluble Aβ species is stronger than the correlation with plaque levels (5). To date, no curative therapies for AD exist; the current treatments only alleviate the symptoms of the disease and are effective merely in the early stages (6). The BRICHOS domain consists of about 100 amino acid residues and has been found in more than 10 human transmembrane protein families, which are processed by proteolysis into fragments with different biological functions (7, 8).
The different BRICHOS-containing proteins show overall low sequence conservation but have a conserved architecture (9) consisting of an N-terminal cytosolic part, a transmembrane part, a linker, the BRICHOS domain, and, in all cases except prosurfactant protein C (proSP-C), a C-terminal region with a high predicted β-sheet propensity (8, 10). ProSP-C instead has a uniquely β-prone transmembrane region (11). Recombinant BRICHOS domains from the proteins associated with amyloid lung disease (proSP-C) and with brain amyloid and dementia (integral membrane protein 2B (ITM2B or Bri2)) efficiently delay Aβ42 fibril formation in vitro (9, 12-15). The rh proSP-C BRICHOS domain specifically blocks the surface-catalyzed secondary nucleation step during Aβ42 fibril formation (16), whereas rh Bri2 BRICHOS inhibits both the secondary nucleation and the Aβ42 fibril elongation steps (17, 18). In recent in vivo studies, overexpression of proSP-C or Bri2 BRICHOS delayed Aβ42 aggregation and improved the lifespan and locomotor function in a Drosophila melanogaster AD model (19, 20). Although proSP-C is exclusively expressed in the alveolar epithelium (21, 22), Bri2 is ubiquitously expressed, and in the brain it is particularly abundant in CA1 pyramidal neurons (23, 24). Altered levels of Bri2, together with its processing enzymes and the homologue Bri3 (also called ITM2C), have been detected in the hippocampus of post-mortem AD cases (25, 26). Furthermore, the secreted Bri2 BRICHOS domain has been found to be associated with amyloid plaques (25). Based on these findings, the BRICHOS domain has emerged as a new candidate among therapeutic strategies against AD. However, the neuroprotective function of the blood–brain barrier (BBB), together with the blood–cerebrospinal fluid barrier (BCSFB), represents a potential major obstacle for the delivery of BRICHOS into the central nervous system (CNS). The BBB consists of brain capillary endothelial cells joined together by tight junctions and surrounded by pericytes, astrocytes, and neuronal cells (27). Choroid plexus epithelial cells form the BCSFB, which is more permissive than the BBB (28). The BBB and BCSFB, as physical barriers, restrict the compounds that reach the CNS and prevent many valuable therapeutic molecules from reaching their targets there. However, several peptides and proteins can cross the BBB by transendothelial diffusion (29-31), and antibodies can cross the BBB after peripheral injection by mechanisms related to adsorptive endocytosis/transcytosis (32, 33). In this study, we evaluated the serum half-life and BBB/BCSFB permeability of rh proSP-C and Bri2 BRICHOS after intravenous (i.v.) administration in WT mice. (Abbreviations used: AD, Alzheimer's disease; Aβ, β-amyloid; AβPP, Aβ precursor protein; rh, recombinant human; BBB, blood–brain barrier; BCSFB, blood–cerebrospinal fluid barrier; BRICHOS, domain initially found in BRI2, chondromodulin and prosurfactant protein C; CSF, cerebrospinal fluid; CNS, central nervous system; i.v., intravenous(ly); HRP, horseradish peroxidase; DAB, 3,3′-diaminobenzidine; AP, alkaline phosphatase.)
rh Bri2 BRICHOS forms different quaternary structures with distinct chaperone functions (18), and here we investigated rh Bri2 BRICHOS monomers, dimers, and oligomers individually. Both rh proSP-C and Bri2 BRICHOS were detected in the CSF, but only the rh Bri2 BRICHOS domains reached the brain parenchyma after peripheral administration.
Study design
The serum half-life and the permeability through the BBB and BCSFB of the rh proSP-C and Bri2 BRICHOS domains were studied by injecting them into the lateral tail vein of adult WT mice. Blood and CSF samples were collected and analyzed at different time points, as schematized in Fig. 1. The presence of injected rh BRICHOS domains in the brain parenchyma was evaluated by Western blots with or without prior immunoprecipitation, by ELISA, and by immunohistochemistry. The penetrance of rh BRICHOS domains into the CSF was assessed by Western blotting. The detection of rh proSP-C BRICHOS in the brain was not expected to be affected by the presence of the endogenous protein, because proSP-C is exclusively expressed in the lungs (21, 22). To avoid the risk of interference with endogenous Bri2 present in brain tissue, an AU1 tag was added to rh Bri2 BRICHOS. The six-amino-acid AU1 tag was placed at the C-terminal end, and rh Bri2 BRICHOS-AU1 showed very similar chromatographic behavior and inhibitory effects against Aβ42 fibril formation as WT rh Bri2 BRICHOS. In addition, Western blot analysis showed that there is no cross-reactivity between rh Bri2 BRICHOS and anti-AU1 antibodies (Fig. S1).
Serum half-lives of the rh Bri2 and proSP-C BRICHOS domains
The apparent serum half-lives of all tested rh BRICHOS domains were evaluated based on relative band intensities from Western blot analysis of blood samples withdrawn from the injected mice. The half-lives were calculated from the disappearance of the monomeric forms, identified by reducing SDS-PAGE at 18 kDa for rh proSP-C and 17 kDa for rh Bri2 BRICHOS-AU1 (Fig. 2). rh proSP-C BRICHOS showed a half-life of 68 min in serum, which is significantly longer than the 43 min for the rh Bri2 BRICHOS-AU1 mixture and the 32, 39, and 38 min for isolated rh Bri2 BRICHOS-AU1 monomers, dimers, and oligomers, respectively (Fig. 2, A-F). The differences in half-lives between the isolated rh Bri2 BRICHOS-AU1 forms were not statistically significant (Fig. 2F). Calculation of the half-lives from the disappearance of the monomer band, but from SDS-PAGE run under nonreducing conditions, showed shorter half-lives for all BRICHOS variants (Fig. S2). This suggests that disulfide-dependent oligomerization of BRICHOS monomers occurs in serum, as observed previously when rh Bri2 BRICHOS was incubated in mouse serum ex vivo (18).
Detection of rh proSP-C and Bri2 BRICHOS in the brain by Western blotting, immunoprecipitation, and ELISA
rh proSP-C BRICHOS was injected intravenously in doses from 10 to 50 mg/kg, and BBB permeability was evaluated after 2, 6, and 24 h (Fig. 1). In none of the treated mice could rh proSP-C BRICHOS be detected in the brain homogenate by Western blotting or in brain sections by immunohistochemistry (Fig. 3, A and F, and Figs. S3-S5). Rh Bri2 BRICHOS was injected in doses from 5 to 50 mg/kg, and brain samples were collected and analyzed 1, 2, 6, and 24 h after injection.
Western blot analysis of the brain homogenates from mice treated with the rh Bri2 BRICHOS mixture revealed, in the majority of samples, a band corresponding to the molecular weight of the monomeric injected recombinant protein, whereas in control samples the corresponding band was absent (Fig. 3B). Fig. 3C shows representative Western blot results obtained with brain samples collected 1, 2, and 6 h after injection of 20 mg/kg rh Bri2 BRICHOS-AU1 mixture. The band seen after 2 h (Fig. 3C) is detectable, although faint, after a short exposure time (Fig. S3C). A semiquantitative analysis of the rh Bri2 BRICHOS-AU1 band in samples collected 2 h after injection, compared with an internal standard, revealed a concentration of rh Bri2 BRICHOS-AU1 in the brain homogenate of about 20 ng/mg of total brain protein, which corresponds to 200 nM rh Bri2 BRICHOS-AU1. To increase the detection of the delivered rh Bri2 BRICHOS-AU1 in the brain homogenate, immunoprecipitation of the samples was performed prior to Western blot analysis using an antibody against the AU1 tag. Western blotting of the immunoprecipitated material showed a clear band corresponding to rh Bri2 BRICHOS-AU1 in brains collected 2 h after injection and a weaker band after 6 h (Fig. 3D). The entire gels of Fig. 3, A-D, shown in Fig. S3, reveal the presence of unspecific bands in samples from rh BRICHOS-injected mice and also in the PBS controls, but in all cases these migrated more slowly than rh BRICHOS. In support of the immunoprecipitation data, analysis of the brain homogenates by ELISA revealed the presence of rh Bri2 BRICHOS-AU1 in the brains 2 h after injection (Fig. 3E). The average concentration of BRICHOS detected by ELISA was about 390 nM, corresponding to 0.4-0.5% of the total amount of rh Bri2 BRICHOS-AU1 injected intravenously. The range of rh Bri2 BRICHOS-AU1 amounts detected in brain tissue by ELISA spans 0.1% to 1% of the total amount. At 1 and 6 h after injection, about 70 and 90 nM rh Bri2 BRICHOS-AU1, respectively, were detected in brain homogenates (Fig. 3E). Considering all mice treated with rh Bri2 BRICHOS or Bri2 BRICHOS-AU1, the injected protein was detected in the brain parenchyma in 64% of all cases (Fig. 3F). A higher detection rate (9 of 12 cases, 75%) was obtained in mice treated with 20 mg/kg of rh Bri2 BRICHOS-AU1 and analyzed 2 h after injection (Fig. 3F).
Figure 2. Serum half-lives of rh proSP-C BRICHOS (A), the rh Bri2 BRICHOS-AU1 mixture (B), and isolated rh Bri2 BRICHOS-AU1 monomers (C), dimers (D), and oligomers (E). The half-lives were determined by densitometry of Western blot bands corresponding to the monomers (indicated by arrows). Bands corresponding to BRICHOS dimers and oligomers, which resist complete reduction, in particular for oligomers of rh Bri2 BRICHOS-AU1, were observed. A nonspecific band between 25-35 kDa of unknown origin was observed in B, D, and E. Western blot analysis was performed using an anti-S tag and an anti-AU1 tag for the detection of rh proSP-C and Bri2 BRICHOS, respectively. The time points above the gels refer to minutes after injection; lanes marked with + refer to samples from mice injected with 20 mg/kg rh BRICHOS, and − denotes controls injected with PBS. F, summary of rh BRICHOS half-lives from two (rh Bri2 BRICHOS-AU1 dimers) or three (all other groups) different mice. The error bars show mean values and standard deviations. **, p < 0.001; ***, p < 0.0001 versus proSP-C BRICHOS.
Of the nine positive cases, rh Bri2 BRICHOS-AU1 was detected by ELISA in five mice, as reported in Fig. 3E, and by Western blotting in four mice. ELISA, Western blotting, and immunohistochemistry (see further below) were used for analysis of all three negative cases. Last, to investigate the role of Bri2 BRICHOS oligomerization in BBB passage, isolated rh Bri2 BRICHOS-AU1 monomers, dimers, and oligomers were injected intravenously at a dose of 20 mg/kg, and the presence of rh Bri2 BRICHOS-AU1 was analyzed in brain tissue. Rh Bri2 BRICHOS-AU1 was detected by immunoprecipitation after injection of the monomer and dimer (Fig. 4, A and B). A band corresponding to a slightly slower-migrating species of unknown identity is seen in mice injected with rh Bri2 BRICHOS monomer and also in one of the two PBS controls (Fig. 4A). In contrast, no rh Bri2 BRICHOS-AU1 was detected in brain homogenates after injection of oligomers (Fig. 4C). For Bri2 BRICHOS-AU1 monomers, three mice gave positive detection on one occasion (Fig. 4A), but on another occasion only a sample from one of these mice showed a positive signal (Fig. 4B). This shows that, in addition to the interindividual variability in the observed passage over the BBB (Fig. 3F), there is also intraindividual variability in the extent of detection.
Detection of rh proSP-C and Bri2 BRICHOS in CSF by Western blotting
The permeability through the BCSFB of both the rh Bri2 BRICHOS-AU1 and the rh proSP-C domains was evaluated by Western blot analysis of CSF.
Figure 3. Rh Bri2, but not proSP-C BRICHOS, is detected in the mouse brain after intravenous injection. A-D, Western blot analysis of brain homogenates collected 1, 2, or 6 h after injection of rh proSP-C BRICHOS using an anti-S tag (A) or rh Bri2 BRICHOS using an anti-Bri2 BRICHOS antibody (B) and an anti-AU1 tag antibody (C and D), and PBS-injected controls. Rh Bri2 BRICHOS-AU1 (D) from the same sample as in C was immunoprecipitated. Lanes marked Rec indicate the migration of purified rh proSP-C or Bri2 BRICHOS-AU1. E, sandwich ELISA quantification of rh Bri2 BRICHOS-AU1 in brain homogenates 1, 2, and 6 h after injection of 20 mg/kg. Each point represents data from one mouse, and the error bars show mean values and standard deviations. F, extent of detection of rh proSP-C BRICHOS, rh Bri2 BRICHOS, or rh Bri2 BRICHOS-AU1 in the brain tissue of all injected mice. The dose ranges and times between injection and sampling are given in parentheses above each circle. For the 5-50 mg/kg injections, both rh Bri2 BRICHOS and rh Bri2 BRICHOS-AU1 were used, whereas for the 20 mg/kg injection chart only rh Bri2 BRICHOS-AU1 was used. In the circles, the numbers in each sector refer to the total number of mice in which rh BRICHOS was detected (black sectors) or not detected (white sectors) by Western blotting, immunoprecipitation, or ELISA in brain tissue. The percentages in parentheses refer to the extent of rh BRICHOS detection in brain tissue. The entire gels for A-D are shown in Fig. S3.
Figure 4. Rh Bri2 BRICHOS-AU1 is detected in the mouse brain after intravenous injection of monomers and dimers but not oligomers. A-C, immunoprecipitation using a rabbit polyclonal anti-AU1 antibody coupled with protein A-Sepharose beads, followed by Western blot analysis with a goat polyclonal anti-Bri2 BRICHOS antibody, of brain homogenates collected 2 h after injection of rh Bri2 BRICHOS-AU1 monomers or dimers (A and B), oligomers (C), or PBS-injected controls.
A and B include results obtained for the same three mice injected with monomeric Bri2 BRICHOS-AU1. Lanes marked Rec show the migration of rh Bri2 BRICHOS-AU1.
Three mice were treated with 20 mg/kg rh proSP-C BRICHOS, and four mice were treated with the same dose of rh Bri2 BRICHOS-AU1. As shown in Fig. 5, proSP-C and Bri2 BRICHOS-AU1 were both detected in the CSF of all samples analyzed, although the proSP-C BRICHOS bands were at the border of detection in two cases (Fig. 5A). Potential blood contamination of CSF as a cause of BRICHOS detection was evaluated by examining the presence of hemoglobin, which should not be present in CSF (34). The results showed a lack of hemoglobin immunoreactivity in the CSF samples (Fig. S6A), which supports that the rh BRICHOS domains permeate the BCSFB. rh Bri2 BRICHOS-AU1 was also detected in brain homogenates by immunoprecipitation and Western blot analysis in all four mice used for CSF collection (Fig. S6B).
Immunohistochemical identification of rh proSP-C and Bri2 BRICHOS in the brain
Mouse brain slices from the prefrontal cortex, hippocampal area, striatum, and cerebellum were analyzed by immunohistochemistry to further evaluate and localize rh proSP-C and rh Bri2 BRICHOS after i.v. injection. Rh Bri2 BRICHOS-AU1 immunoreactivity was observed in the choroid plexus and, in most samples, in the cortex 2 h after i.v. injection (Fig. 6, A-L). In some cases, immunoreactivity was also observed in the striatum (Fig. 6, M-O, and Fig. S6) and in the hippocampal sulcus (Fig. 6P), whereas rh Bri2 BRICHOS immunoreactivity was never detected in the hippocampus or cerebellum. Some of the staining observed in the cortex and striatum was localized intracellularly in the perinuclear area (Fig. 6, K and L, and Fig. S7). Immunohistochemical staining for rh Bri2 BRICHOS-AU1 of mice injected with rh proSP-C BRICHOS was negative (Fig. S5), showing that the Bri2 BRICHOS-AU1 immunoreactivity is not an artifactual result of injecting a recombinant protein purified from bacteria. In agreement with the immunoprecipitation and Western blot results, rh proSP-C BRICHOS could not be detected in any brain region, including the choroid plexus, after i.v. injection (Fig. S5). Interestingly, positive staining for rh Bri2 BRICHOS-AU1 was identified in the choroid plexus of mice treated with the isolated dimeric species and also in one of the mice treated with the isolated monomers (Fig. 7, A-G). Staining was also found in the cortex of mice treated with monomeric and dimeric Bri2 BRICHOS-AU1 (Fig. 7, H-N). No rh Bri2 BRICHOS-AU1 was observed after injection of the oligomeric species (data not shown).
Discussion
The long row of failed AD clinical trials emphasizes the importance of developing new strategies able to counteract the onset and course of this disease (35). Protein therapeutic agents, and in particular chaperones, could potentially be used to treat AD and other neurodegenerative disorders characterized by the accumulation of aggregated protein (36). Unfortunately, many potential protein-based drugs cannot be used in therapy, as they have little or no capacity to reach the brain parenchyma after parenteral administration (37, 38). In this study, we examined the ability of two recombinant chaperone-like domains, proSP-C and Bri2 BRICHOS, to cross the BBB and BCSFB. When delivered by i.v. injection into WT mice, the rh Bri2 BRICHOS mixture was found in the CSF in 100% of the injected mice and in the brain parenchyma in about 70% of the cases.
Even though rh proSP-C BRICHOS showed a longer serum half-life compared with rh Bri2 BRICHOS, it was only found in the CSF and not in the brain parenchyma. The BRICHOS domains of proSP-C and Bri2 are potent inhibitors of Aβ40 and Aβ42 fibrillation and neurotoxicity in vitro and ex vivo (9, 12-18, 39). Transgenic co-expression of either the proSP-C or the Bri2 BRICHOS domain together with Aβ42 in the Drosophila CNS or eyes gives rise to an increase in soluble Aβ42, less aggregated Aβ, and attenuation of the AD-like phenotype, including improved locomotor function and increased longevity (19, 20). A delay in amyloid plaque formation and a complete absence of cognitive decline were observed when a Bri2-Aβ42 fusion protein was used to express Aβ42 in the mouse brain (40), suggesting that co-expression of Bri2 BRICHOS alleviates Aβ42 neurotoxicity (19). The rh Bri2 BRICHOS concentrations detected in brains after intravenous injection (120-880 nM) were higher than the soluble Aβ40 and Aβ42 concentrations reported in different regions of AD post-mortem brain tissues (30-120 pM) (41). Aβ42 aggregation is extensively delayed (9), and, most importantly, Aβ42 toxicity to hippocampal slice preparations is reduced, in the presence of substoichiometric concentrations of rh Bri2 BRICHOS (18); we therefore speculate that the amounts of rh Bri2 BRICHOS that reach the brain after parenteral administration could effectively inhibit Aβ fibril formation and toxicity. Rh proSP-C BRICHOS was not detected in brain tissue after i.v. injection, but it was found in the CSF. The rh proSP-C BRICHOS domain forms a homotrimer (15, 42, 43), which may influence its ability to pass through the BBB. Likewise, the larger oligomeric assemblies of rh Bri2 BRICHOS (18), consisting of 20-30 subunits, were apparently less prone to cross the BBB than the monomer and dimer, further suggesting that the assembly state affects the ability of BRICHOS to cross the BBB. Rh Bri2 BRICHOS was not found in all brain samples analyzed, which could be because the ratio of oligomers to monomers and dimers increases at higher total BRICHOS concentrations (Fig. S8) and because the equilibrium is probably altered after injection, since monomeric Bri2 BRICHOS re-forms large oligomers in mouse serum (18). This might also explain why higher doses of rh Bri2 BRICHOS (50 mg/kg) and longer times between injection and analysis (24 h) did not result in increased amounts of Bri2 BRICHOS in the brain. Notably, rh Bri2 BRICHOS monomers potently prevent the neuronal toxicity of Aβ42, whereas dimers most efficiently suppress Aβ42 fibril formation; high-molecular-weight oligomers are less efficient in reducing Aβ42 aggregation and toxicity but are very efficient inhibitors of nonfibrillar protein aggregation (18). Our results suggest that the different assembly states of rh Bri2 BRICHOS behave in different ways after in vivo administration, because monomers (17 kDa) and dimers (34 kDa) apparently pass the BBB more efficiently than the larger oligomers (340-510 kDa) despite similar serum half-lives for all species. Further studies are needed to clarify the underlying mechanism by which rh Bri2 BRICHOS crosses the BBB. A common view is that only molecules less than 400-500 Da in size can cross the BBB (44), but several examples of proteins that are able to cross the BBB have been described, including erythropoietin (34 kDa) (45), the exogenous tracer horseradish peroxidase (44 kDa) (30, 31), and serum proteins (29).
Moreover, anti-Aβ antibodies (about 150 kDa) are able to enter the brain, bind to amyloid plaques, and cause a reduction in plaque burden in AD mouse models (32, 33) and in AD patients (46). BBB permeability is not the same throughout the brain, and local mechanisms may specifically regulate the transport of different molecules and proteins (47). It has been shown that specialized brain regions, such as the choroid plexus, the circumventricular areas, and the subependymal zone, have higher permeability than the rest of the brain (31). These facts justify a brain-region-specific distribution of exogenous protein-based compounds (30, 31). Interestingly, positive staining for rh Bri2 BRICHOS was observed in the choroid plexus region. The choroid plexus contains more permeable capillaries than the rest of the brain and is involved in many aspects of blood-CNS exchange, including drug penetrance (48). These properties of the BCSFB probably explain the presence of both rh proSP-C and Bri2 BRICHOS in the CSF after systemic injection. A previous study has shown that CSF from AD patients has lower levels of the extracellular chaperones clusterin, haptoglobin, and α2-macroglobulin compared with healthy control samples. AD CSF samples were toxic to neuroblastoma cells in culture, and re-establishing the physiological concentrations of extracellular chaperones in AD CSF protected neuroblastoma cells from Aβ toxicity (49). In addition, AD is associated with morphological changes in choroid plexus epithelial cells and compromised production of CSF (50). Intracerebroventricular injection of Aβ42 oligomers in mice increased the levels of proinflammatory cytokines in the CSF; this also affected choroid plexus epithelial cell morphology and tight junction protein levels. These changes were associated with loss of BCSFB integrity, as shown by an increase in BCSFB leakage (50). Histological alteration of the choroid plexus has also been observed in post-mortem AD patients (51). The choroid plexus is therefore considered a possible target for AD treatment. The anti-Aβ42 toxicity properties of BRICHOS and the presence of rh proSP-C and Bri2 BRICHOS in the CSF after systemic administration lead us to believe that recombinant BRICHOS domains, in particular rh Bri2 BRICHOS, may have a beneficial impact on preventing CSF toxicity and BCSFB dysfunction. The BBB and the BCSFB are both rich in transport mechanisms by which solute molecules move across membranes; only small lipophilic molecules can be transported passively across the cell. Some plasma proteins and other constituents can be actively transported across endothelial cell membranes by carrier- or receptor-mediated transporters and transcytosis, but it is thought that the majority of BBB transporters have yet to be discovered (52). Bri2 is expressed in peripheral tissues and in the brain, in particular in the hippocampus, cortex, and cerebellum (23, 24). The expression of Bri2 in both peripheral tissues and the CNS may suggest the existence of a system for crosstalk between these sites. ProSP-C, in contrast, is expressed exclusively in the alveolar epithelium, and rh proSP-C BRICHOS could not be detected in the brain after systemic administration. The presence of rh Bri2 BRICHOS in the choroid plexus and CSF will contribute to the protein detected by Western blots and ELISA of brain homogenates.
However, our immunohistochemical results indicate that injected Bri2 BRICHOS is also localized in other brain regions and intracellularly. In conclusion, the results presented here provide a new incentive to explore the BRICHOS domains, and in particular rh Bri2 BRICHOS, as a potential therapeutic tool for the treatment of neurodegenerative disorders. Systemic administration of rh Bri2 BRICHOS in mouse models of AD will be necessary to evaluate the potential of BRICHOS therapy.
Recombinant proteins
Rh proSP-C BRICHOS: Cloning, expression, and purification were performed as described previously (15, 43). Briefly, the cells were lysed for 30 min with 1 mg/ml lysozyme and incubated with DNase and 2 mM MgCl2 for 30 min on ice. The cell lysate was centrifuged at 6000 × g for 20 min, and the pellet was suspended in 20 mM Tris (pH 8.0) containing 2 M urea and 0.1 M NaCl and sonicated for 5 min. After 30 min of centrifugation at 30,000 × g at 4°C, the supernatant was collected, filtered through a 4.5-µm filter, and loaded onto a nickel-agarose column (Qiagen, Ltd., West Sussex, UK). The column was washed with 50 ml of 20 mM Tris (pH 8.0) containing 0.1 M NaCl and urea at progressively decreasing concentrations, i.e., 2, 1, and 0 M. The target protein was finally eluted with 200 mM imidazole in 20 mM Tris (pH 8.0) containing 0.1 M NaCl, dialyzed against 20 mM Tris (pH 8.0) with 0.05 M NaCl, cleaved by thrombin for 16 h at 4°C (enzyme/substrate weight ratio of 0.002), and then reapplied to a nickel-agarose column to remove the released His6 tag. The cleaved-off protein was further purified using ion exchange chromatography as described previously (53).
Rh Bri2 BRICHOS: The expression and purification of the rh Bri2 BRICHOS domain, corresponding to residues 113-231 of full-length human Bri2, have been described previously (18, 20). To enable specific immunodetection of injected rh Bri2 BRICHOS, a six-residue AU1 tag (DTYRYI) was added C-terminally by PCR amplification. The AU1-tagged rh Bri2 BRICHOS was expressed in Escherichia coli Shuffle T7 cells as a fusion protein with an N-terminal His6-NT* tag. The cells were grown at 30°C in lysogeny broth medium containing 15 µg/ml kanamycin until A600 reached 0.7-0.9; the temperature was then lowered to 20°C, and 0.5 mM isopropyl 1-thio-β-D-galactopyranoside was added. After overnight induction, the cells were harvested by centrifugation at 7000 × g for 20 min at 4°C and resuspended in 20 mM Tris (pH 8.0). The cell suspension was sonicated on ice for 10 min (2 s on, 2 s off, 65% of the maximum amplitude) and centrifuged at 24,000 × g at 4°C. The supernatant was collected and loaded onto a nickel-agarose column equilibrated with 20 mM Tris-HCl (pH 8.0). The column was washed with 20 mM Tris-HCl (pH 8.0), followed by 20 mM Tris-HCl (pH 8.0) with 20 mM imidazole. The fusion protein was eluted with 300 mM imidazole in 20 mM Tris-HCl (pH 8.0) and dialyzed against 20 mM Tris-HCl (pH 8.0) containing thrombin (enzyme/substrate ratio of 0.001, Merck) for 16 h at 4°C. The cleaved protein was reapplied onto the nickel-agarose column, and the cleaved-off rh Bri2 BRICHOS protein was collected. After concentration in a 5-kDa Vivaspin 20 column (GE Healthcare) at 4000 × g, samples were applied onto a PD-10 desalting column (GE Healthcare) and eluted with filtered and autoclaved 1× PBS (pH 7.4). Endotoxins were removed using a 0.50-ml Pierce high-capacity endotoxin removal column (Thermo Scientific).
The final cleaved-off proteins were filtered through a 0.2-µm filter and stored at −20°C. The different rh Bri2 BRICHOS species were separated and analyzed on Superdex 200 PG, 200 GL, or 75 PG columns (GE Healthcare) using an Äkta prime system (18).
Animal procedures
8- to 10-week-old C57BL/6NTac (Taconic) male mice were used. All mice were kept under controlled conditions of humidity and temperature on a 12-h light-dark cycle. Animals were group-housed (seven per cage) with food and water available ad libitum. The animal procedures were approved by the ethical committees of Södra Stockholm's Djurförsöksetiska Nämnd (dnr S 6-15) and Linköping's etiska nämnd (ID855). The experimental scheme is shown in Fig. 1. Mice received a single i.v. injection into the lateral tail vein using a 0.3-ml syringe with a 30-gauge needle. Rh proSP-C or Bri2 BRICHOS in PBS was injected in a dose range of 5-50 mg/kg, and control mice received PBS only. The rh proSP-C or Bri2 BRICHOS fractions were kept frozen and not brought to room temperature until about 30 min before the injection, a procedure that has been shown not to cause significant formation of larger species (18). Before the injection, mice were placed in a single cage and put under a heat lamp for 5 min to dilate the tail veins. The mice were anesthetized with ketamine (100 mg/kg) and xylazine (20 mg/kg) and perfused intracardially with 40 ml of saline (0.9% NaCl) 1, 2, 6, or 24 h after injection. Brains were quickly removed and divided into left and right hemispheres. All tissues were snap-frozen on dry ice and stored at −80°C until analysis.
Serum collection
Blood samples were collected from the tail vein at different time points after the rh BRICHOS proteins were injected (Fig. 1). The lateral tail vein was punctured using a 27-gauge needle, and 50-100 µl of blood was collected at each time point. Blood samples were centrifuged for 10 min at 3000 rpm (4°C), and then the serum was collected, transferred into tubes, and stored at −20°C. Before analysis, the samples were diluted 1:5 in PBS.
CSF collection
CSF sampling was adapted from the method described by DeMattos (54). The mice were anesthetized and placed in the prone position in a stereotactic instrument. A sagittal incision of the skin was made inferior to the occiput. Under the dissection microscope, the subcutaneous tissue and neck muscles were separated through the midline. The dura mater was exposed, and the area was washed with PBS to remove blood and tissue contamination. The dura mater was punctured with a 27-gauge needle, and CSF was collected in a capillary tube. The volume of CSF obtained was ~5-8 µl/mouse. All samples were stored in polypropylene tubes at −80°C until analysis.
SDS-PAGE and Western blotting
Mouse tissues were homogenized in 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1.0% (v/v) Triton X-100, 0.1% (w/v) SDS, and 10 mM EDTA supplemented with protease inhibitors, as described previously (55). Brain samples were centrifuged for 30 min at 14,000 rpm (4°C), and then the supernatant was collected and stored at −20°C. The protein concentrations were determined by the Bradford method. Serum, brain, and CSF samples were prepared in denaturing buffer containing 2% SDS, 0.03 M Tris, 10% glycerol, and bromphenol blue, with 5% 2-mercaptoethanol (reducing conditions) or without 2-mercaptoethanol (nonreducing conditions), and heated for 10 min at 96°C.
The gel loading for serum and brain homogenates was normalized so that 100 µg of total protein was loaded per well, whereas for CSF the total sample amount obtained was loaded. The samples were separated on 10% or 13.5% SDS-PAGE gels and blotted onto nitrocellulose membranes (GE Healthcare). After blotting, the membranes were blocked in 5% milk/PBS for 1 h, followed by incubation with primary antibody in 5% milk, 0.1% Tween/PBS for 1 h at room temperature or overnight at 4°C. The membranes were washed three times with 0.1% Tween/PBS, and secondary antibodies in 5% milk and 0.1% Tween/PBS were added for 1 h at room temperature. After washing, enhanced chemiluminescence detection reagent (GE Healthcare) was added according to the manufacturer's protocol, and images were acquired using a CCD camera (LAS-3000) or a fluorescence imaging system (Li-Cor, Odyssey CLx).
Immunoprecipitation
5 mg of mouse brain homogenate was adjusted to a total volume of 400 µl in PBS and incubated by rotation at 4°C overnight in the presence of 1:100 rabbit anti-AU1 antibody. Protein A-Sepharose beads (100 µg/ml, GE Healthcare) were added to the samples, incubated by rotation for 1 h at 4°C, centrifuged at 400 × g for 3 min, washed with PBS, and centrifuged twice more. The pelleted material was boiled for 10 min in SDS loading buffer with β-mercaptoethanol and PBS, the supernatant was loaded on a 13.5% SDS-PAGE gel, and Western blotting was performed as described.
ELISA
For the sandwich ELISA, the capture antibody (goat anti-Bri2 BRICHOS) was loaded into 96-well plates (Nunc MicroWell™) and incubated overnight at 4°C. The plates were washed three times in 0.05% Tween/PBS and blocked with 1% BSA/PBS for 2 h. The samples (250 µg/ml) were incubated at room temperature for 2 h, followed by a primary antibody (rabbit anti-AU1) in 1% BSA and 0.05% Tween/PBS for 2 h at room temperature. The plate was washed, and a secondary anti-rabbit antibody in 1% BSA and 0.05% Tween/PBS was added for 2 h at room temperature. After the washing step, 3,3′,5,5′-tetramethylbenzidine solution (100 µl/well) was added and incubated for 30 min at room temperature in the dark. The reaction was stopped by adding 100 µl/well of 0.5 M H2SO4. The absorbance was measured at 450 nm, using the values for ELISA of brain homogenates from PBS-injected control mice as blank. A standard curve was generated from analysis of rh proSP-C and Bri2 BRICHOS in a range from 0.1 to 64 ng. The amount of rh Bri2 BRICHOS-AU1 that reached the brain was estimated by assuming that the protein was evenly distributed in the whole brain. For these calculations, a brain density of 1.04 g/ml (56), an average brain weight of 486 mg, and an average total protein content of brain homogenates of 75 mg were used.
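The back-calculation from a measured tissue content to a molar brain concentration follows directly from the assumptions just stated (even distribution, brain density 1.04 g/ml, brain weight 486 mg, 75 mg total homogenate protein) together with the ~17 kDa monomer mass taken from the gels; a minimal sketch:

```python
# Reproducing the ~200 nM estimate quoted in the Results from 20 ng of rh Bri2
# BRICHOS-AU1 per mg of total brain protein; all inputs are the assumptions
# stated in the text, and the monomer mass (~17 kDa) is taken from the blots.
content_ng_per_mg = 20.0        # measured tissue content (ng/mg total protein)
total_protein_mg = 75.0         # average total protein per brain homogenate (mg)
brain_weight_mg = 486.0         # average brain weight (mg)
brain_density = 1.04            # g/ml
mw_g_per_mol = 17_000.0         # monomer molecular mass (g/mol)

mass_ng = content_ng_per_mg * total_protein_mg          # 1500 ng in the whole brain
volume_ml = (brain_weight_mg / 1000.0) / brain_density  # ~0.467 ml brain volume
conc_nM = (mass_ng * 1e-9 / mw_g_per_mol) / (volume_ml * 1e-3) * 1e9
print(f"{conc_nM:.0f} nM")      # ~190 nM, i.e. the ~200 nM quoted in the Results
```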
AP staining was visualized with Permanent Red (Biosite) solutions. Slides were counterstained in hematoxylin, dehydrated through graded alcohols, cleared in xylene, and then mounted with Permount.

Statistical analysis
Statistical analysis was performed using the GraphPad Prism program. Data were analyzed using one-way analysis of variance followed by Tukey's post hoc tests. Serum half-lives were calculated after densitometric analysis of the BRICHOS monomeric band intensities with ImageJ. The concentrations were expressed as relative intensities and normalized for each curve to the sample intensity at 5 min. The apparent half-life was obtained using GraphPad Prism by a nonlinear one-phase decay analysis.
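The one-phase decay analysis can also be reproduced outside GraphPad. Below is a minimal Python sketch under the same model, I(t) = (I0 − plateau)·exp(−k·t) + plateau with half-life = ln 2/k; the band intensities are illustrative placeholders, not data from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

# One-phase decay model, as used by GraphPad Prism:
# I(t) = (I0 - plateau) * exp(-k * t) + plateau
def one_phase_decay(t, i0, plateau, k):
    return (i0 - plateau) * np.exp(-k * t) + plateau

# Hypothetical densitometry values: relative band intensity vs. time after
# injection, normalized to the 5-min sample (illustrative numbers only).
t_min = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 360.0])
intensity = np.array([1.00, 0.78, 0.55, 0.31, 0.12, 0.02])

popt, _ = curve_fit(one_phase_decay, t_min, intensity,
                    p0=(1.0, 0.0, 0.02), maxfev=10000)
i0, plateau, k = popt
half_life = np.log(2) / k  # apparent serum half-life in minutes
print(f"k = {k:.4f} /min, apparent half-life = {half_life:.1f} min")
```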
Photosynthesis and Bio-Optical Properties of Fluorescent Mesophotic Corals

Mesophotic coral ecosystems (MCEs) are light-dependent coral-associated communities found at 30-150 m depth. Corals inhabiting these deeper reefs are often acclimatized to a limited and blue-shifted light environment, enabling them to maintain the relationship with their photosynthetic algal symbionts (family Symbiodiniaceae) despite the seemingly suboptimal light conditions. Among others, fluorescent proteins produced by the coral host may play a role in the modulation of the quality and spectral distribution of irradiance within the coral tissue through wavelength transformation. Here we examined the bio-optical properties and photosynthetic performances of different fluorescence morphs of two mesophotic coral species, Goniopora minor and Alveopora ocellata, in order to test the photosynthesis enhancement hypothesis proposed for coral fluorescence. The green morph of G. minor and the low fluorescence morph of A. ocellata exhibit higher abundance in their natural habitats. The morphs also presented different spectral reflectance and light attenuation within the tissue. Nevertheless, chlorophyll a fluorescence-based and O2 evolution measurements revealed only minor differences between the photosynthetic abilities of three fluorescence morphs of the coral G. minor and two fluorescence morphs of A. ocellata. The fluorescence morphs did not differ in their algal densities or chlorophyll concentrations, and all corals harbored Symbiodiniaceae from the genus Cladocopium. Thus, despite the change in the internal light quantity and quality that corals and their symbionts experience, we found no evidence for the facilitation or enhancement of photosynthesis by wavelength transformation.

INTRODUCTION

Stony corals are sessile organisms and are considered to be highly dependent on the photosynthates derived from their algal symbionts (family Symbiodiniaceae; LaJeunesse et al., 2018) as their main energy source (Muscatine, 1990). Sustaining high rates of photosynthesis in the underwater light environment is challenging as (1) light attenuates rapidly with depth (Kirk, 2011a), (2) wave movement creates a flickering effect that causes rapid and extreme localized changes in light intensity (Walsh and Legendre, 1983), (3) the macro-structure of the reef may shade coral colonies (Stimson, 1985; Kaniewska et al., 2008), and (4) certain substrates may be highly reflective, consequently contributing to downwelling irradiance (Kirk, 2011b). Scleractinian corals in mesophotic coral ecosystems (MCEs; 30-150 m) experience light intensities that are up to 99% lower than those experienced by their shallow counterparts (0-30 m; Kahng and Kelley, 2007; Lesser et al., 2009; Tamir et al., 2019). Additionally, MCEs are also exposed to a restricted light spectrum centered around the blue region of the spectrum (Jerlov, 1968; Kahng et al., 2019). Coral photoacclimatization to MCEs is manifested at the coral-host level as changes in colony morphology (Kaniewska et al., 2008; Nir et al., 2011), skeletal features that modify the internal light environment (Enríquez et al., 2005; Kahng et al., 2012), changes in the populations of endosymbiotic Symbiodiniaceae expressed as a shift in their genetic identity (Cooper et al., 2011; Bongaerts et al., 2013; Einbinder et al., 2016), and modifications in the composition of photosynthetic pigments and structure of the photosynthetic complex (Einbinder et al., 2016).
These changes, among others, potentially assist corals in maintaining a successful symbiosis with Symbiodiniaceae. Accordingly, deeper corals will usually present a higher maximal quantum yield of photosystem II (Einbinder et al., 2016; Ben-Zvi et al., 2020), a lower Symbiodiniaceae density accompanied by a higher chlorophyll concentration within algal cells (Mass et al., 2007), and a reduced capacity to manage excess light (Einbinder et al., 2016; Ben-Zvi et al., 2020). Additionally, mesophotic corals may rely more on heterotrophy rather than autotrophy as their main strategy for acquiring energy (Mass et al., 2007; Lesser et al., 2010). Corals are recognized for being extremely colorful under short-wavelength lighting conditions, which is attributed to the phenomenon of fluorescence (Kawaguti, 1944; Matz et al., 1999). Fluorescence refers to the conversion of light wavelength, usually from short wavelengths into longer ones. In corals, fluorescence results from proteins belonging to the green fluorescent protein (GFP)-like family that are produced by the coral host. The fluorescent proteins (FPs) may exhibit a diversity of excitation and emission peaks (Alieva et al., 2008), and an individual coral may possess a single FP or multiple FPs (Dove et al., 2001; Ben-Zvi et al., 2014; Eyal et al., 2015). Fluorescence polymorphism within the same coral species has previously been described as resulting either from a difference in the expression levels of a single protein (Gittins et al., 2015; Takahashi-Kariyazono et al., 2018) or from the presence of different FPs (Eyal et al., 2015; Ben-Zvi et al., 2019). One of the suggested functional roles for coral fluorescence is that of photosynthesis enhancement. Earlier studies have contended that fluorescence may enhance photosynthesis where light is limited, as in deeper habitats, by conversion of short wavelengths into longer wavelengths capable of absorption by the photosynthetic pigments of the algal symbiont, using host-associated cyan FPs (Schlichter et al., 1986, 1994; Schlichter and Fricke, 1990). However, this notion is not fully supported, since a conversion of blue light, which is abundant in deeper habitats, usually results in green wavelengths that are the least efficient for photosynthesis in Symbiodiniaceae (Scott and Jitts, 1977; Kühl et al., 1995). Another mechanism for photosynthesis enhancement was suggested to rely on electron transport through fluorescence resonance energy transfer (FRET), which depends on a close proximity between the donor of electrons and the recipient (Förster, 1955). Despite some evidence indicating that FRET occurs between FPs (Cox et al., 2007), there is currently a lack of sufficient support for electron transfer between FPs and the photosynthetic apparatus (Gilmore et al., 2003; Cox and Salih, 2005). Nevertheless, recent studies suggest that FPs may alter (by wavelength conversion) and disperse (by scattering) the available light more efficiently through the tissue, enabling it to reach the Symbiodiniaceae residing in the deeper tissue layers of the coral host (Smith et al., 2017), and that symbiotic algae are attracted to green light (Hollingsworth et al., 2005) and green fluorescence (Aihara et al., 2019). The enhanced reflectance and scattering of light are also supported by direct measurements and are correlated with higher FP fluorescence (Salih et al., 2000; Lyndby et al., 2016).
Thus far, hypotheses regarding the role of coral fluorescence have mostly been tested on shallow corals (DeSalvo et al., 2008; Dove et al., 2008; D'Angelo et al., 2012) and among coral morphs that display different expression levels of the same FP, but not among morphs that differ in the FPs they contain (Gittins et al., 2015; Roth et al., 2015). In MCEs, the phenomenon of fluorescence polymorphism is common and multiple species can exhibit several color morphs occurring side by side at the same site and depth (Eyal et al., 2015; Ben-Zvi et al., 2019). Here we sought to investigate the photosynthetic and bio-optical properties of different fluorescence morphs of two mesophotic coral species, which differ either in the levels of expression (i.e., Alveopora ocellata) or in the emission peak of their FPs (i.e., Goniopora minor), in order to determine the potential links between the coral host's fluorescent pigments and coral photosynthesis in the unique mesophotic environment.

Coral Collection, Sampling, and Maintenance
Ten colonies of the mesophotic scleractinian coral A. ocellata were collected from the reef in front of the Interuniversity Institute for Marine Sciences in Eilat (IUI; 29°30′16″N, 34°55′7″E) and seven colonies of G. minor were collected from the Dekel Beach site (29°32′17″N, 34°56′56″E), Gulf of Eilat/Aqaba (GoE/A), northern Red Sea. All corals were collected at 45 m depth using open-circuit technical diving. Corals were transferred to a running seawater system at the IUI in dark containers and were subsampled and preserved by dipping the fragments in liquid nitrogen and storing them at −80°C until analyses (for Symbiodiniaceae density and genetic identification, and chlorophyll concentration analyses). The remaining corals (used for the scalar irradiance, chlorophyll fluorescence, and O2 evolution measurements) were kept for further analyses under a lighting filter ("Lagoon blue", Lee Filters, United States) providing a light regime similar to that of the natural mesophotic reefs at Eilat (Dishon et al., 2012; Ben-Zvi et al., 2021).

Field Survey
Eight line transects (each 10 m long) were deployed at 45 m depth at each collection site (i.e., IUI and Dekel Beach; 160 m in total). Each colony crossed by the transect was classified as either a high or low fluorescence morph (for A. ocellata), or as a green, yellow, or red morph (for G. minor), based on its appearance under ambient light. A given morph's abundance was calculated by dividing the number of colonies of that morph by the total number of colonies of the corresponding species found along the transect.

Host Fluorescence and Spectral Analysis
Representative fragments from each morph were imaged with a SONY RX100 camera under white illumination (for the non-fluorescent images) and under a blue excitation light source (BlueStar, NightSea, United States) with a commercial yellow barrier filter (Y12, Tiffen, United States) mounted on the camera (for the fluorescent images). Host fluorescence was excited by a light source peaking at 450 nm (BlueStar, NightSea, United States) positioned horizontally to the coral colony, and emission spectra were recorded with a flat-cut 600 µm core UV-Visible fiber (QP600-2-UV-VIS, Ocean Optics, United States) equipped with a long-pass barrier filter (cut-off <500 nm), positioned at a 45° angle to the excitation light and connected to a spectrometer (JAZ, Ocean Optics, United States).
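Emission peaks (λem) such as those reported below can be read off the recorded spectra programmatically. A minimal Python sketch, assuming a two-column wavelength/intensity text export from the spectrometer (the file name and smoothing window are illustrative assumptions):

```python
import numpy as np

# Load a hypothetical spectrometer export: wavelength (nm), intensity (counts)
wl, counts = np.loadtxt("emission_spectrum.txt", unpack=True)

# Keep only wavelengths passed by the long-pass barrier filter (>500 nm)
mask = wl > 500
wl, counts = wl[mask], counts[mask]

# Light boxcar smoothing to suppress shot noise before peak picking
kernel = np.ones(5) / 5
smooth = np.convolve(counts, kernel, mode="same")

lam_em = wl[np.argmax(smooth)]
print(f"Fluorescence emission peak: {lam_em:.0f} nm")
```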
Scalar Irradiance Measurements
G. minor fragments were placed in a black acrylic flow chamber supplied with fresh seawater. Incident irradiance was provided by a tungsten halogen lamp (Schott ACE 1, Germany) equipped with a collimating lens. Measurements (n = 3 for each morph) of scalar irradiance (E0) were collected using an 80 µm spherical light microprobe (Zenzor, Denmark) connected to a spectrometer (AvaSpec-UL2048XL, Avantes, United States). The microprobe was oriented at 45° relative to the vertical incident light and carefully positioned above the coral polyp mouth (Wangpraseurt et al., 2012). Light gradients were measured through the coral gastrovascular cavity until reaching the skeleton, at 100 µm increments, using a computer-controlled micromanipulator (Wangpraseurt et al., 2012). The scalar irradiance measurements were normalized to the downwelling spectral irradiance (Ed) provided by the collimated light, measured from a non-reflective black surface (Wangpraseurt et al., 2012). Integrated photon irradiance was calculated individually for wavelengths of interest (i.e., around the fluorescence emission peaks measured for each fluorescence morph; 500-530 nm, 530-560 nm, and 560-590 nm) by calculating the area under the curves of E0 and Ed using the "MESS" package in R software (R Core Team, 2013) and equation 1, where λa and λb are the wavelengths of interest:

integrated photon irradiance = ∫ from λa to λb of E(λ) dλ.  (1)

Chlorophyll Fluorescence Measurements
We used an imaging pulse-amplitude-modulated fluorometer (imaging-PAM; blue maxi-version, Walz GmbH, Germany) to perform rapid light curves (RLCs) and measure photosystem II (PSII) chlorophyll fluorescence on the intact corals. The photosynthetic quantum yield of PSII (ΦPSII), relative electron transport rate (rETR), and non-photochemical quenching (NPQ) were calculated following Kramer et al. (2004) as

ΦPSII = (F′m − F)/F′m,  rETR = ΦPSII × PAR × 0.84 × 0.5,  NPQ = (Fm − F′m)/F′m,

where F is the steady-state fluorescence yield, F0 is the minimal fluorescence yield, F′m is the maximal fluorescence yield following a saturating pulse in the light, Fm is the maximal fluorescence yield measured after dark incubation, PAR is the photosynthetically active radiation, and qL is the fraction of open PSII centers. It should be noted that the rETR calculation is based on the PAR absorbance (0.84) and photosystem ratio (1:1) of terrestrial plants (Björkman and Demmig, 1987), as these have not yet been determined for the coral species selected in this study. Following a 30 min dark incubation, measurements were taken using a saturation pulse intensity of 2,700 µmol photons m⁻² s⁻¹ for 800 ms after 2.5 min incubation at each light intensity (0, 20, 55, 110, 185, 280, 335, 395, 460, 530, 610, and 700 µmol photons m⁻² s⁻¹). The following PAM settings were used: measuring light intensity = 1, measuring light frequency = 1, actinic light intensity = 1, gain = 1. The initial slope (α), relative maximal electron transport rate (rETRmax), and minimum saturating irradiance (Ek) were calculated from a fitted double exponential decay function following Platt et al. (1980).
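For readers re-implementing the light-curve fitting, a minimal Python sketch of the Platt et al. (1980) model is shown below. The rETR values are synthetic placeholders; the model with photoinhibition is P(E) = Ps·(1 − exp(−αE/Ps))·exp(−βE/Ps):

```python
import numpy as np
from scipy.optimize import curve_fit

# Platt et al. (1980) photosynthesis-irradiance model with photoinhibition
def platt(E, Ps, alpha, beta):
    return Ps * (1.0 - np.exp(-alpha * E / Ps)) * np.exp(-beta * E / Ps)

# Light steps of the rapid light curves, with illustrative rETR values
E = np.array([0, 20, 55, 110, 185, 280, 335, 395, 460, 530, 610, 700], float)
rETR = np.array([0, 4.1, 10.5, 18.2, 25.9, 30.8, 32.1, 32.6,
                 32.0, 31.1, 29.8, 28.4])

popt, _ = curve_fit(platt, E, rETR, p0=(40.0, 0.25, 0.01), maxfev=10000)
Ps, alpha, beta = popt

# Derived parameters (Platt et al., 1980)
rETR_max = Ps * (alpha / (alpha + beta)) * (beta / (alpha + beta)) ** (beta / alpha)
Ek = rETR_max / alpha  # minimum saturating irradiance
print(f"alpha = {alpha:.3f}, rETRmax = {rETR_max:.1f}, "
      f"Ek = {Ek:.0f} umol photons m-2 s-1")
```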
O2 Evolution Measurements
Coral fragments were incubated in 85 mL acrylic closed-jacket respiration chambers containing 0.45 µm filtered seawater. The O2 concentration inside the experimental chambers was measured using a FireSting Pro meter (FSPRO-4, Pyroscience, Germany) connected to fiber-optic oxygen sensors (OPROB3, Pyroscience, Germany) placed at the top of each chamber, calibrated using a 1-point calibration with 100% O2-saturated water. Photosynthesis-irradiance (P-E) curves were performed by incubating coral fragments for 10 min under increasing light intensities (0, 5, 23, 36, 50, 78, 150, 250, 350, 450, and 550 µmol photons m⁻² s⁻¹). The incident downwelling irradiance was provided by a computer-controlled array of light-emitting diodes (LEDs), measured with a LI-1400 light meter (LI-COR, United States) equipped with a cosine-corrected quantum sensor (LI-190R, LI-COR, United States). The same procedure was also performed under blue illumination, with the LED array covered with a spectral filter ("Lagoon blue", Lee Filters, United States) mimicking the light spectrum at mesophotic reefs (45 m) in Eilat (Ben-Zvi et al., 2021). During the measurements, the experimental water was constantly stirred and kept at 25°C, as measured by a temperature probe (TDIP15, Pyroscience, Germany). O2 concentrations were normalized to the coral surface area, determined from top-view images, and to the volume of water in the chambers. Net and gross photosynthesis were fitted using a double exponential decay following Platt et al. (1980), and α, the maximal photosynthesis rate (Pmax), compensation irradiance (Ec), Ek, and dark respiration (Rd) were extracted from the fitted models.

Symbiodiniaceae Density and Chlorophyll Concentration
Frozen fragments of G. minor and A. ocellata were thawed, and the tissue was removed in the presence of 0.22 µm filtered seawater using an artist's airbrush into 50 ml tubes. Surface area was determined using the single dip wax method for normalization. The coral tissue was mechanically homogenized using a motorized homogenizer and centrifuged at 5,000 rpm for 5 min to separate the host (i.e., coral) supernatant from the algal pellet. The host fraction was discarded, and the algal pellet was used for the quantification of chlorophyll and Symbiodiniaceae cell density. A small subsample from each algal pellet was taken for Symbiodiniaceae genetic identification. Algal cells were counted in triplicate using a hemocytometer under a light microscope. The remaining algal pellet was used for chlorophyll analysis. Chlorophyll was extracted in 100% cold acetone for 15 h at 4°C, and chlorophyll a and c2 concentrations (pg chlorophyll cell⁻¹) were determined spectrophotometrically as in Jeffrey and Humphrey (1975).

Symbiodiniaceae Genetic Identification
DNA was extracted from the Symbiodiniaceae sub-samples using the DNeasy Blood and Tissue kit (Qiagen, Germany) following the manufacturer's protocol. A ∼700 bp sequence fragment of the internal transcribed spacer 2 (ITS2) was PCR-amplified using the primers SYM_VAR_FWD and SYM_VAR_REV following the procedure of Hume et al. (2013). PCR products were purified with ExoSAP-IT (Thermo Fisher Scientific, United States) and bi-directionally sequenced. Individual sequences were aligned using Geneious, and a consensus sequence was constructed for comparison against the GeoSymbio database (Franklin et al., 2012).

Statistical Analyses
All statistical analyses were performed using R software (R Core Team, 2013). Data were tested for normality using the Shapiro-Wilk test and for homogeneity of variance with Levene's test. G. minor data were tested using PERMANOVA (for repeated-measures ANOVA in the light-curve analyses), ANOVA (for normally distributed data), or permutational ANOVA when the data did not follow a normal distribution. A. ocellata data were tested with PERMANOVA (for repeated-measures ANOVA in the light-curve analyses), t-tests (for normally distributed data), or the Wilcoxon signed-rank test (for non-normal data). Results were considered significant if p < 0.05. Where appropriate, data were further examined using Tukey's post hoc test.
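The same decision tree (normality check, then a parametric or permutation-based test) can be sketched with Python's scipy.stats; this is an illustrative outline, not the authors' R code, and the group values are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-morph measurements (e.g., Fv/Fm) for three G. minor morphs
groups = [rng.normal(0.55, 0.03, 7),
          rng.normal(0.56, 0.03, 7),
          rng.normal(0.54, 0.03, 7)]

# Normality (Shapiro-Wilk) and homogeneity of variance (Levene)
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
homogeneous = stats.levene(*groups).pvalue > 0.05

if normal and homogeneous:
    p_value = stats.f_oneway(*groups).pvalue  # one-way ANOVA
else:
    # Permutation ANOVA: compare the observed F statistic against F values
    # recomputed after shuffling the group labels
    f_obs = stats.f_oneway(*groups).statistic
    pooled = np.concatenate(groups)
    split_at = np.cumsum([len(g) for g in groups])[:-1]
    f_perm = []
    for _ in range(5000):
        rng.shuffle(pooled)
        f_perm.append(stats.f_oneway(*np.split(pooled, split_at)).statistic)
    p_value = np.mean(np.array(f_perm) >= f_obs)

print(f"p = {p_value:.4f} (significant if < 0.05)")
```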
Fluorescence Polymorphism and Abundance
Fluorescence morphs of A. ocellata and G. minor visually differ from one another under white illumination, under blue light illumination, and in their fluorescence emission peak (λem; Figure 1). Three distinct fluorescence morphs are described for G. minor: a red morph (Figure 1A; λem = 580 nm), a green morph (Figure 1B; λem = 515 nm), and a yellow morph (Figure 1C; λem = 515 and 545 nm). Two fluorescence morphs are described for A. ocellata: a low fluorescence morph (Figure 1D; λem = 520 nm) and a high fluorescence morph (Figure 1E; λem = 520 nm), both of which present the same green emission peak, with the distinction that the low fluorescence morph appears red under both illuminations due to chlorophyll fluorescence of its symbionts (λem = 680 nm). The green morph of G. minor presents the typical brownish color of corals under white light (Figure 1B, top image), commonly attributed to the symbiotic algae, yet exhibits green fluorescence under the excitation light (Figure 1B, middle image); the yellow morph displays a yellowish appearance under white light illumination (Figure 1C, top image) and exhibits a bright green glow under blue illumination (Figure 1C, middle image). For G. minor, out of 63 surveyed colonies, the green morph was more prevalent than the yellow morph (74 ± 14.7% and 26 ± 14.7%, respectively). The red morph of G. minor was extremely rare and was not crossed in our field surveys. For A. ocellata, we found that out of 40 colonies, 84 ± 17.4% (mean ± SD) presented low fluorescence and only 15.15 ± 17.43% displayed high fluorescence.

In vivo Light Microenvironment
Scalar irradiance (E0) measurements revealed a strong light gradient within the coral tissue (Figure 2 and Supplementary Figure 1). At the tissue surface, irradiance is 1.5-fold higher than the incident irradiance for visible light (PAR; 400-700 nm), and at the tissue-skeleton interface the available light is greatly reduced, to 50% of the incident light for PAR. While this surface light enhancement is prominent between 400 and 700 nm for the yellow and green morphs, it only occurs above 580 nm for the red morph (the λem of this morph). Light absorbance by photosynthetic pigments (i.e., chlorophyll a) can be observed as a drop around 680 nm in all fluorescence morphs, and in the yellow morph the contribution of the host fluorescent proteins can be observed as a shoulder between 500 and 600 nm. The scattering of light at wavelengths above 700 nm was found to be 11-14% higher in the green morph compared to the other morphs. Comparing the integrated photon irradiance (Figure 2B) revealed significant differences in all morphs between the two areas in which the measurement was taken (i.e., tissue surface and tissue-skeleton interface; permutational ANOVA, p < 0.0001).
When examining the differences among morphs within each location, we found that the yellow morph differed from the green and red morphs in the surface measurements (permutational ANOVA, p < 0.01), while the green and red morphs did not differ from each other (permutational ANOVA, p > 0.05).

Chlorophyll a Fluorimetry
Relative electron transport rate values did not differ significantly between morphs of A. ocellata (Figures 3A,C; PERMANOVA, F = 0.85, p = 0.37) or G. minor (PERMANOVA, F = 0.41, p = 0.7). In A. ocellata, we found a significant effect of the fluorescence morph on ΦPSII (Figure 3B; PERMANOVA, F = 5.68, p = 0.04), being higher in the low fluorescence morph, while in G. minor it was found to be lowest in the green morph (Figure 3E; PERMANOVA, F = 58.13, p = 0.01). The maximum quantum yield of PSII (Fv/Fm; or ΦPSII measured after a dark incubation) was found to be similar among the fluorescence morphs of A. ocellata (Figure 3B). Non-photochemical quenching values were also similar between morphs of A. ocellata (Figure 3C; PERMANOVA, F = 0.1, p = 0.74) and G. minor (Figure 3F; PERMANOVA, F = 0.06, p = 0.09). The initial slope (α) and relative maximal electron transport rate (rETRmax) calculated from the RLCs were found to be similar among A. ocellata morphs (Supplementary Table 1; t-test, t = −0.05, p = 0.96 and t = −0.38, p = 0.71, respectively), while the minimum saturating irradiance (Ek) was slightly higher in the high fluorescence morph (t-test, t = −2.17, p = 0.06). In G. minor, the fluorescence morph had no significant effect on any of the parameters (Supplementary Table 1; permutational ANOVA, p > 0.05).

O2 Evolution
O2 evolution differed when measured under white or blue illumination (Supplementary Figure 3, Table 1, and Supplementary Table 2; PERMANOVA, p = 0.03). Since P-E curves performed under blue illumination resulted in fully extended polyps and smoother curves (i.e., higher R²), and blue illumination had previously been suggested to be more appropriate for mesophotic corals (Mass et al., 2010), we present in Figure 4 and Supplementary Figure 4 the measurements, and in Table 1 the parameters, derived from the curves performed under blue light (parameters derived from the white-illuminated curves can be found in Supplementary Table 2). We did not find differences in the P-E-derived parameters between species (permutational ANOVA, p > 0.05) or between A. ocellata and G. minor morphs (t-test for A. ocellata or permutational ANOVA for G. minor, p > 0.05).

Symbiodiniaceae Density and Chlorophyll Concentration
Symbiodiniaceae density and chlorophyll (a and c2) concentrations did not significantly differ between species or among morphs (Figure 5).

FIGURE 2 | In vivo spectral scalar irradiance of three fluorescence morphs of the mesophotic coral Goniopora minor. (A) Scalar irradiance (E0) was measured at the coral tissue surface (solid lines) and at the tissue-skeleton interface (dotted lines) of red (colored red), green (colored green), and yellow (colored yellow) fluorescence morphs of the mesophotic coral G. minor. The dashed black line indicates 100% of the incident irradiance (Ed). Colored lines represent the mean relative scalar irradiance (n = 3) of each morph, and confidence intervals are represented by transparent corresponding areas. Fluorescence emission peaks are indicated by arrows. The solid box indicates an area of interest corresponding to (B) the integrated photon irradiance (n = 3) for specific wave bands (500-530 nm, 530-560 nm, and 560-590 nm) for the three fluorescence morphs. Boxes represent the upper and lower quartile, center lines represent medians, and whiskers extend to data measurements that are less than 1.5 × IQR (interquartile range) away from the first/third quartile.
DISCUSSION

The spectral analyses revealed a range of fluorescence emission peaks for two mesophotic coral species (Figure 1). While A. ocellata presented one fluorescence emission peak (at 520 nm), G. minor presented three (515, 545, and 580 nm). In the GoE/A, A. ocellata mostly inhabits mesophotic depths (Eyal-Shaham et al., 2016), whereas G. minor is a depth generalist and can also be found in the shallower parts of the reefs. The differences in the zonation characteristics of these species may explain why G. minor possesses a range of FPs. A broader FP arsenal covers a broader spectrum of emission peaks, which potentially correspond to a wider range of excitations that may mediate excess light (at shallow depths) or provide wavelengths that are absent (at mesophotic depths). A. ocellata presents only one fluorescence emission, which has a narrower excitation range. Field surveys revealed that for A. ocellata the low fluorescence morph was the dominant one, while for G. minor the dominant morph was the green one. Despite the reported higher abundance of red FPs in deeper habitats compared to shallower ones, and their suggested advantage in the dispersal of light deeper into the coral tissue (Smith et al., 2017), the red morph of G. minor is extremely rare. Host fluorescence is known to play a critical role in the modification of irradiance intensity and the spectral tuning of the in-hospite irradiance environment (Salih et al., 2000; Mazel and Fuchs, 2003; Wangpraseurt et al., 2012; Lyndby et al., 2016; Smith et al., 2017; Quick et al., 2018; Wangpraseurt et al., 2019; Bollati et al., 2020). Our light microsensor measurements indicate that the optical environment within the coral tissue is influenced by the presence of different FPs (Figure 2 and Supplementary Figure 1).

TABLE 1 | Mean (±SD) values of the initial slope (α), maximal photosynthetic rate (Pmax), minimum saturating irradiance (Ek), compensation irradiance (Ec), and dark respiration (Rd) of high fluorescence (HF) and low fluorescence (LF) morphs of Alveopora ocellata and green, red, and yellow fluorescence morphs of Goniopora minor.

Despite the bright appearance of G. minor under white light (in the red and yellow morphs; Figures 1A,C) and under blue excitation light (all morphs), the contribution of the FPs to the spectral signature of the morphs was not as strong as expected and previously documented (Mazel and Fuchs, 2003; Wangpraseurt et al., 2012). This may be explained by the light source used for these measurements, which is poor in blue, FP-exciting photons, or by the absorbance of light by the photosynthetic pigments sharing these emission peaks of host fluorescence. For example, the yellow morph of G. minor has a fluorescence emission peak at 545 nm (Figure 1C), while peridinin has an absorbance peak at 540 nm (Niedzwiedzki et al., 2014). Additionally, coral-associated cyanobacteria may also contain phycoerythrin with absorption bands around 505 and 571 nm (Lesser et al., 2004), which correspond to the emission peaks of several FPs and may confound the interpretation of fluorescence spectra. Nonetheless, the in-hospite irradiance differed among the morphs (Figure 2).
The yellow morph presented the greatest scalar irradiance enhancement at the shorter wavelengths (i.e., 400-700 nm), and the green morph showed greater light enhancement at longer wavelengths (700-800 nm). The dominance of certain morphs over others and the differences found in the internal light environment within the coral tissue led us to hypothesize that certain FPs may be advantageous for photosynthesis within the mesophotic light environment. Chlorophyll fluorescence-based measurements revealed that the A. ocellata and G. minor fluorescence morphs mostly did not differ in the quantum yield of PSII (Figures 3A,B and Supplementary Figure 2). Likewise, Roth et al. (2015) did not find differences in ΦPSII between high and low fluorescence morphs of Leptoseris spp. at mesophotic depths. Although it has been shown that changes in the in-hospite irradiance environment can affect absolute electron transport rates in corals, we found no differences in rETR values among the examined morphs (Figures 3A,D), despite the enhanced irradiance measured within the tissue of G. minor's yellow morph (Figure 2). When the photoprotective role of FPs was examined in mesophotic Euphyllia paradivisa, no differences in the amount of UVR-induced DNA damage were found between fluorescence morphs. However, this does not exclude the putative photoprotective role of FPs, as NPQ values were slightly (but not significantly) higher for the red and yellow morphs of G. minor compared to the green morph (Figure 3F), indicating their potentially greater capability in mediating excess light and preventing damage to the photosynthetic apparatus. Respiration and photosynthesis data from mesophotic corals are very limited, and there is currently no agreed protocol for performing these measurements. Although Mass et al. (2010) provided evidence for the chromatic dependence of photosynthetic performance on the light provided during measurements, ex-situ measurements on mesophotic corals are still commonly performed under white light (Cooper et al., 2011; Ben-Zvi et al., 2020). Since corals are known to photoacclimatize to their natural light conditions, mesophotic corals are most likely acclimatized to blue light, as this is the prominent wavelength at depths of 30-150 m (Kahng et al., 2019). Moreover, the wavelength-dependent absorption cross-section for Symbiodiniaceae has already been established as being more efficient at shorter wavelengths than at longer ones. Furthermore, as coral FPs are largely excited by blue light, the FPs will in turn affect the internal light environment. We therefore compared measurements taken under white and blue light (Supplementary Figure 3 and Supplementary Table 2) and indeed found differences in coral response. Under blue light corals expanded their tentacles, while under white light they contracted them. Conducting the metabolic measurements under blue light resulted in smoother P-E curves (Supplementary Figure 3), which may represent a more natural performance of mesophotic corals.

FIGURE 4 | Net photosynthesis of fluorescence morphs of two mesophotic corals. Fitted (solid lines) and mean (circles) ± SE (error bars) values of O2 evolution (µmol O2 cm⁻² h⁻¹) measured under blue light at increasing intensities (0, 5, 23, 36, 50, 78, 150, 250, 350, 450, and 550 µmol photons m⁻² s⁻¹) of Alveopora ocellata [(A), n = 1 and 4 for high and low fluorescence morphs, respectively] and Goniopora minor [(B), n = 2, 1, and 3 for green, red, and yellow morphs, respectively].
Most of the previously reported values for P-E-derived parameters that we were able to compare to our measurements aligned with the current results (Cooper et al., 2011; Nir et al., 2014; Eyal et al., 2019). The results indicate that mesophotic corals usually present relatively low compensation irradiances, ranging between 15 and 96 µmol photons m⁻² s⁻¹, as well as low saturating irradiances, ranging between 28 and 80 µmol photons m⁻² s⁻¹, leaving a narrow window of light intensities that enable photosynthesis. Klueter et al. (2006) found that a highly fluorescent morph of Montipora digitata had higher Symbiodiniaceae densities and chlorophyll a concentration compared to a low fluorescent morph. We examined the Symbiodiniaceae densities as well as the chlorophyll a and c2 concentrations in all our studied morphs but found no differences among our samples (Figure 5). Our measured values of Symbiodiniaceae density are higher than previously reported values in mesophotic corals (Bongaerts et al., 2011; Cooper et al., 2011); however, this parameter can vary greatly between species, depth, and light availability [reviewed by Roth (2014)]. Chlorophyll concentration values determined in this study align with previously reported values (Lesser et al., 2010; Cooper et al., 2011; Eyal et al., 2019). The latter result also indicates that the brighter color of the yellow morph of G. minor is probably the result of a higher expression of FPs rather than of a low algal density or low chlorophyll concentration. Similarly, in A. ocellata, the red/brown appearance of the low fluorescence morph compared to the green appearance of the high fluorescence morph may also be a result of higher concentrations of the host FP and not of a change in the algal symbionts. Aihara et al. (2019) demonstrated that coral fluorescence may serve as an attractive signal for symbiotic algae, and specific Symbiodiniaceae genera/species were found to be correlated with "redder" juveniles of Acropora millepora (Quigley et al., 2018). We therefore sought to determine whether a specific fluorescent signal would indeed attract symbionts that differ genetically. The genetic identity of the Symbiodiniaceae revealed no significant effect of the fluorescence morph, and all corals harbored Symbiodiniaceae from the genus Cladocopium, as previously described in other coral species at the mesophotic reefs of the GoE/A (Nir et al., 2011; Einbinder et al., 2016; Eyal et al., 2019), as well as at other mesophotic reefs worldwide (Ziegler et al., 2015; Goulet et al., 2019). We therefore conclude that despite possibly serving as a general Symbiodiniaceae attractant, specific fluorescence emissions do not attract specific Symbiodiniaceae genotypes.

FIGURE 5 | Photobiology of fluorescence morphs of the mesophotic corals Alveopora ocellata and Goniopora minor. Areal Symbiodiniaceae density (A), cellular chlorophyll a (B), and cellular chlorophyll c2 (C) of low fluorescence (LF) and high fluorescence (HF) morphs of A. ocellata and green, yellow, and red fluorescence morphs of G. minor. Boxes represent the upper and lower quartile, center lines represent medians, and whiskers extend to data measurements that are less than 1.5 × IQR away from the first/third quartile. Outliers are represented by dots. Sample size (n) is indicated below each box.
Since all the species and morphs we examined share the same habitat and are found in close proximity to each other, and hence experience similar environmental conditions, there may be no apparent reason to attract different symbionts, which may or may not have an advantage in their photosynthetic performances or in their tolerance to stressors such as temperature. In this study, we investigated the bio-optical properties and photosynthetic performances of mesophotic corals exhibiting different host fluorescence emissions resulting from differential expression of the same FP (A. ocellata) or from multiple FPs (G. minor). Our results, showing only negligible differences in the photobiology of the different coral fluorescence morphs, do not support any of the prevailing mechanisms that have been suggested to enhance photosynthesis in corals by means of FPs in deeper habitats. Nevertheless, the bio-optical properties revealed changes among morphs, indicating that fluorescence may mediate the internal light environment of corals, potentially affecting other aspects of coral physiology through cellular mechanisms that are light-regulated, such as circadian clock entrainment (Levy et al., 2007), spawning (Sweeney et al., 2011), or growth (Roth et al., 1982). Future research should focus on depicting the pathways, such as gene regulation and expression, by which the diverse internal light regimes found among morphs may play a significant role.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS
OB-Z and YL conceived the study. GE and OB-Z collected the coral samples and conducted the field surveys. DW performed the bio-optical measurements. OB-Z performed all other analyses, visualized all data, and wrote the first draft of the manuscript. OB-Z, DW, and OB analyzed the data. All authors reviewed, commented, and approved the manuscript.
Electrical spectroscopy of the spin-wave dispersion and bistability in gallium-doped yttrium iron garnet

Yttrium iron garnet (YIG) is a magnetic insulator with record-low damping, allowing spin-wave transport over macroscopic distances. Doping YIG with gallium ions greatly reduces the demagnetizing field and introduces a perpendicular magnetic anisotropy, which leads to an isotropic spin-wave dispersion that facilitates spin-wave optics and spin-wave steering. Here, we characterize the dispersion of a gallium-doped YIG (Ga:YIG) thin film using electrical spectroscopy. We determine the magnetic anisotropy parameters from the ferromagnetic resonance frequency and use propagating spin wave spectroscopy in the Damon-Eshbach configuration to detect the small spin-wave magnetic fields of this ultrathin weak magnet over a wide range of wavevectors, enabling the extraction of the exchange constant α = 1.3(2) × 10⁻¹² J/m. The frequencies of the spin waves shift with increasing drive power, which eventually leads to the foldover of the spin-wave modes. Our results shed light on isotropic spin-wave transport in Ga:YIG and highlight the potential of electrical spectroscopy to map out the dispersion and bistability of propagating spin waves in magnets with a low saturation magnetization.

Yttrium iron garnet (YIG) is a magnetic insulator that is famous for its low Gilbert damping and long-range spin-wave propagation [1]. At low bias fields the YIG magnetization is typically pushed into the plane by the demagnetizing field [2], leading to an anisotropic spin-wave dispersion at microwave frequencies. For applications that rely on spin-wave optics and spin-wave steering an isotropic spin-wave dispersion is desirable [3], which can be achieved by introducing gallium dopants into the YIG: the presence of the dopants reduces the saturation magnetization and thereby the demagnetizing field [4], and induces a perpendicular magnetic anisotropy (PMA) [5,6], such that the magnetization points out of plane. Isotropic transport of forward-volume spin waves has been observed even at zero bias field [7], opening the door for spin-wave logic devices [8-10]. To harness isotropic spin waves it is essential to know the spin-wave dispersion, which is dominated by the exchange interaction for magnets with a low saturation magnetization [11]. Here, we use all-electrical spectroscopy of propagating spin waves to characterize the spin-wave dispersion of a 45-nm-thick film of gallium-doped YIG (Ga:YIG). Rather than looking at the discrete mode numbers of perpendicular standing spin waves [12], this method enables extracting the exchange constant by monitoring the spin-wave transmission for a continuously tunable range of wavevectors. We show that this technique has sufficient sensitivity to characterize spin waves in nanometer-thick Ga:YIG films, where perpendicular modes may be challenging to detect due to their high frequencies and small mode overlap with the stripline drive field. We extract the anisotropy parameters from the field dependence of the ferromagnetic resonance (FMR) frequency at different bias field orientations and find that the PMA is strong enough to lift the magnetization out of the plane. Next, we characterize the spin-wave dispersion from electrically detected spin-wave spectra.
We measure in the Damon-Eshbach configuration to boost the inductive coupling of the spin waves to the striplines [13], allowing the extraction of the spin-wave group velocity over a wide range of wavevectors, from which we determine the exchange constant. When increasing the microwave excitation power, we observe clear frequency shifts of the spin-wave modes. The shifts result in the foldover of spin waves, which we verify by comparing upward and downward frequency sweeps. These results benchmark propagating spin wave spectroscopy as an accessible tool to characterize the exchange constant and spin-wave foldover in technologically attractive thin-film magnetic insulators with low saturation magnetization and PMA.

We use liquid phase epitaxy to grow a 45-nm-thick film of Ga:YIG on a (111)-oriented gadolinium gallium garnet (GGG) substrate (supplementary material section 1). Using vibrating sample magnetometry (VSM) we determine the saturation magnetization Ms = 1.52(6) × 10⁴ A/m (Fig. 1a; the number in parentheses denotes the 95% confidence interval), which is approximately an order of magnitude smaller than that of undoped YIG films of similar thickness [14]. In addition to PMA, Ga:YIG films also have a cubic magnetic anisotropy due to their cubic unit cell. We start by determining the cubic and perpendicular anisotropy fields from the ferromagnetic resonance (FMR) frequencies ωFMR/2π using an out-of-plane (⊥) and an in-plane (||) magnetic bias field B0. For (111)-oriented films the out-of-plane and in-plane Kittel relations are given by [14,15]

ωFMR,⊥ = γ⊥ (B0 − µ0Ms + 2K2⊥/Ms − (4/3)K4/Ms),  (1)

ωFMR,|| = γ|| √[B0 (B0 + µ0Ms − 2K2⊥/Ms − K4/Ms)].  (2)

Here γ⊥,|| = g⊥,|| µB/ħ is the gyromagnetic ratio, with g⊥,|| the anisotropic g-factor, µB the Bohr magneton, and ħ the reduced Planck constant; µ0 is the magnetic permeability of free space, K2⊥ is the uniaxial out-of-plane anisotropy (e.g., PMA) constant, and K4 is the cubic anisotropy constant. During the in-plane FMR measurement we apply the magnetic field along the [110] crystallographic axis to minimize the out-of-plane component of the magnetization (supplementary material section 2). We neglect any uniaxial in-plane anisotropy, as it is known to be small in YIG samples [14]. By substituting the value of Ms that we obtained with VSM into equations 1 and 2, we can determine K2⊥ and K4 from the FMR frequencies (Fig. 1b).
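As a cross-check of this anisotropy extraction, the two Kittel relations can be inverted numerically. A minimal Python sketch under the relations as written above; the measured frequencies, the starting guesses, and g ≈ 2 (28 GHz/T) are illustrative assumptions, not the fitted values of this work:

```python
import numpy as np
from scipy.optimize import fsolve

MU0 = 4e-7 * np.pi          # vacuum permeability (T m/A)
MS = 1.52e4                 # saturation magnetization (A/m), from VSM
GAMMA = 2 * np.pi * 28e9    # gyromagnetic ratio (rad s^-1 T^-1), g ~ 2 assumed

def f_oop(B0, K2p, K4):
    """Out-of-plane Kittel relation (equation 1), frequency in Hz."""
    return GAMMA / (2 * np.pi) * (B0 - MU0 * MS + 2 * K2p / MS - (4 / 3) * K4 / MS)

def f_ip(B0, K2p, K4):
    """In-plane Kittel relation (equation 2), frequency in Hz."""
    return GAMMA / (2 * np.pi) * np.sqrt(B0 * (B0 + MU0 * MS - 2 * K2p / MS - K4 / MS))

# Hypothetical measured FMR frequencies at B0 = 200 mT (illustrative only)
f_meas_oop, f_meas_ip = 4.2e9, 5.1e9

def residuals(params):
    K2p, K4 = params
    return [f_oop(0.2, K2p, K4) - f_meas_oop,
            f_ip(0.2, K2p, K4) - f_meas_ip]

K2p, K4 = fsolve(residuals, x0=[50.0, -10.0])  # anisotropy constants in J/m^3
print(f"K2_perp = {K2p:.1f} J/m^3, K4 = {K4:.1f} J/m^3")
```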
Figure 2: All-electrical propagating spin wave spectroscopy. (a) Optical micrograph of a Ga:YIG film with two gold striplines that are connected to the ports of a vector network analyser (VNA). Port 1 applies a microwave current (typical excitation power: −35 dBm) that induces a radio-frequency magnetic field BRF at the injector stripline. This field excites propagating spin waves that couple inductively to the detector stripline at a distance s. The generated microwave current is amplified and detected at port 2. A static magnetic field B0 is applied in the Damon-Eshbach configuration and is oriented such that the chirality of BRF favours the excitation of spin waves propagating towards the detector stripline [19]. (b) Field-derivative of the microwave transmission |S21| between two striplines (w = 1 µm, s = 6 µm) as a function of B0 and microwave frequency. The colormap is squeezed such that fringes corresponding to low-amplitude spin waves are also visible. A masked background was subtracted to highlight the signal attributed to spin waves (supplementary material section 4).

We now use propagating spin wave spectroscopy to characterize the spin-wave dispersion in Ga:YIG. We measure the microwave transmission |S21| between two microstrips fabricated directly on the Ga:YIG as a function of static magnetic field B0 and frequency f (Fig. 2a). The magnetic field is applied in the Damon-Eshbach geometry to maximize the inductive coupling between the spin waves and the striplines [13]. We measure a clear Damon-Eshbach spin-wave signal in the microwave transmission spectrum when B0 overcomes the PMA and pushes the spins into the plane (Fig. 2b, supplementary material section 4). The signal appears at a finite frequency because the bias field B0 is applied along the [112] crystallographic axis with a finite out-of-plane angle of ∼1° (supplementary material section 2). The fringes in the transmission spectra result from the interference between the spin waves and the microwave excitation field [20,21]. Each fringe indicates that an extra spin wavelength λ fits between the striplines. We can thus use the fringes to determine the group velocity vg of the spin waves via [22]

vg = ∂ωSW/∂k ≈ 2πΔf / (2π/s) = Δf · s.  (3)

Here ωSW = 2πf and k = 2π/λ are the spin wave's angular frequency and wavevector, Δf is the frequency difference between two consecutive maxima or minima of the fringes (Fig. 3a), and s is the center-to-center distance between both microstrips. We extract the exchange constant of our Ga:YIG film by fitting the measured group velocity to an analytical expression derived from the spin-wave dispersion. The Damon-Eshbach spin-wave dispersion ωSW(k) for magnetic thin films with cubic and perpendicular anisotropy is given by equation 4 [15] (see supplementary material section 5 for its derivation), where we define ωB = γ||B0, ωM = γ||µ0Ms, and ωK = γ||(2K2⊥/Ms + K4/Ms) for notational convenience. Since we determined Ms and the anisotropy constants from the VSM and FMR measurements, the exchange constant is the only unknown variable in the dispersion. We determine the exchange constant from spin-wave spectra measured using two sets of striplines with different widths and line-to-line distances (w = 1 µm, s = 6 µm and w = 2.5 µm, s = 12.5 µm) at the same static field (Fig. 3a,b). First we extract vg as a function of frequency from the extrema in the spin-wave spectra using equation 3 (Fig. 3c). By then fitting the measured vg(f) using equations 4 and 5 (solid line in Fig. 3c), we find α = 1.3(2) × 10⁻¹² J/m and B0 = 117.5(3) mT (supplementary material section 3). The determined exchange constant is about 3 times smaller than that of undoped YIG [12], which is in line with earlier observations of a decreasing exchange constant with increasing gallium concentration in micrometer-thick YIG films [23]. Simultaneously, the spin stiffness is increased by about 3 times compared to undoped YIG [12] due to the strong reduction of the saturation magnetization. For large wavelengths the group velocity is negative as a result of the PMA in the sample.
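A minimal Python sketch of the fringe analysis of equation 3: given the frequencies of consecutive transmission maxima, the group velocity follows directly from the fringe spacing. The frequencies below are illustrative, not measured values:

```python
import numpy as np

s = 6e-6  # center-to-center stripline distance (m) for the w = 1 um device

# Hypothetical frequencies of consecutive fringe maxima (Hz)
f_max = np.array([2.10e9, 2.16e9, 2.23e9, 2.31e9, 2.40e9])

# Each fringe adds one wavelength between the striplines: dk = 2*pi/s,
# so v_g = 2*pi*df / (2*pi/s) = df * s  (equation 3)
df = np.diff(f_max)
v_g = df * s
f_mid = 0.5 * (f_max[1:] + f_max[:-1])

for f, v in zip(f_mid, v_g):
    print(f"f = {f / 1e9:.2f} GHz: v_g = {v:.0f} m/s")
```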
The spin-wave excitation and detection efficiency depends on the absolute value of the Fourier amplitude of the radio-frequency magnetic field BRF generated by a stripline, which oscillates in k with a period Δk = 2π/w (Fig. 3e) [20,21]. To verify that the spin waves we observe are efficiently excited and detected by our striplines, we substitute the extracted exchange constant into equation 4 and plot the spin-wave dispersion (Fig. 3f). The shaded areas correspond to the frequencies of the spin-wave fringes (Fig. 3a,b) and the dashed lines indicate the nodes in |BRF(k)| of both striplines (Fig. 3d,e). We conclude that the fringes in Fig. 3a correspond to spin waves excited by the first maximum of |BRF(k)| and that the fringes in Fig. 3b correspond to spin waves excited by the second maximum. Surprisingly, we do not observe fringes in Fig. 3b corresponding to the first maximum of |BRF(k)|, but rather see a dip in this frequency range (arrows in Fig. 3b,f). This can be understood by noting that the average frequency difference between such fringes would be smaller than the spin-wave linewidth (supplementary material section 6). Low-amplitude fringes corresponding to short-wavelength spin waves excited by the second k-space maximum of the 1-µm-wide stripline are also visible (Fig. 2b, supplementary material section 7). These results demonstrate that the spin-wave dispersion in weak magnets can be reliably extracted using propagating spin wave spectroscopy by combining measurements on striplines with different widths and spacings.

When strongly driven to large amplitudes, the FMR behaves like a Duffing oscillator with a bistable response. Such bistability could potentially be harnessed for microwave switching [24]. Foldover of the FMR and of standing spin-wave modes has been studied for several decades [24-26], but foldover of propagating spin waves was previously observed only in active feedback rings [27], spin-pumped systems [28], and magnonic ring resonators [29]. We show that we can characterize the foldover of propagating spin waves in Ga:YIG thin films using our spectroscopy technique. When increasing the drive power we observe frequency shifts of the spin waves (Fig. 4a,c). These non-linear shifts result from the four-magnon self-interaction term in the spin-wave Hamiltonian. For an in-plane magnetized thin film, the shifts are given by [30]

ω̃k = ωk + Wkk,kk |ak|².  (6)

Here ω̃k (ωk) is the non-linear (linear) spin-wave angular frequency, Wkk,kk is the four-wave frequency-shift parameter, and ak is the spin-wave amplitude. In our case Wkk,kk is positive as a result of the PMA in the sample, leading to positive frequency shifts of the spin-wave modes (supplementary material section 8). The low-frequency spin waves start shifting first, because the stripline is most efficient at exciting spin waves with small wavenumbers (Fig. 3d,e). The spin-wave modes start shifting at a surprisingly low drive power of ∼−30 dBm, potentially caused by reduced spin-wave scattering [26] due to the low density of states associated with the increased spin stiffness and reduced saturation magnetization of our sample. In the high-power microwave spectra we observe an abrupt transition at which the spin waves fall back to their unshifted low-power frequencies, indicating the foldover of the spin waves. As the spin-wave amplitude increases, the spin-wave modes shift to higher frequencies, until the maximum amplitude is reached and the spin waves fall back to their low-amplitude dispersion (Fig. 4b). To demonstrate the foldover behaviour, we compare upward and downward frequency sweeps (Fig. 4a,c). As expected, the spin waves fall back to their unshifted dispersion earlier when sweeping against the direction of the frequency shift than when sweeping along it. The spin-wave amplitude and wavevector are thus bistable at the frequencies where the foldover occurs: at these frequencies the stripline can excite two different spin-wave wavelengths at the same excitation power, depending on the sweep direction used in the past. The observed frequency shifts provide an extra knob for tuning the dispersion of spin waves. They give rise to strongly non-linear microwave transmission between the striplines as a function of excitation power, which may provide opportunities for neuromorphic computing devices that simulate the spiking of artificial neurons above a certain input threshold [29,31].
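The foldover can be illustrated with the generic Duffing-type steady-state response implied by equation 6: solving |a|²·[(δ − W|a|²)² + Γ²] = h² for the amplitude gives up to three solutions in a window of detunings δ, producing the hysteresis between up and down sweeps. A schematic numerical sketch with arbitrary parameters (not a model of the measured device):

```python
import numpy as np

# Steady-state Duffing-like response: |a|^2 * ((delta - W*|a|^2)^2 + Gamma^2) = h^2
W, Gamma, h = 1.0, 0.05, 0.05   # shift parameter, damping, drive (arbitrary units)

for delta in np.linspace(-0.2, 0.6, 9):
    # Cubic in x = |a|^2: W^2 x^3 - 2 W delta x^2 + (delta^2 + Gamma^2) x - h^2 = 0
    roots = np.roots([W**2, -2 * W * delta, delta**2 + Gamma**2, -h**2])
    phys = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)
    print(f"delta = {delta:+.2f}: |a|^2 solutions = {[f'{x:.3f}' for x in phys]}")
```

Where three positive solutions coexist, the middle branch is unstable; upward and downward frequency sweeps therefore jump between the outer branches at different detunings, which is the bistability observed in Fig. 4.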
In summary, we used propagating spin wave spectroscopy to characterize the spin-wave dispersion in a 45-nm-thick film of Ga:YIG. The gallium doping reduces the saturation magnetization of the YIG and introduces a small PMA that lifts the magnetization out of the plane and causes the spin-wave dispersion to become isotropic.

Competing interests: The authors declare that they have no competing interests.

Data availability: All data contained in the figures are available on Zenodo.org at http://doi.org/10.5281/zenodo.5494466, reference number [32]. Additional data related to this paper are available from the corresponding author upon reasonable request.

Supplementary material

The FMR frequency is calculated according to the Smit-Beljers formula [1]

ωFMR = (γ / sin θM) √(𝓕θθ 𝓕φφ − 𝓕θφ²),  (S1)

where the subscripts denote second partial derivatives evaluated at the equilibrium orientation of the magnetization. Here θM is the angle of the magnetization with respect to the film's normal, φM is the in-plane angle of the magnetization with respect to the [110] crystallographic axis, and 𝓕 = F/Ms, with F the free energy density and Ms the saturation magnetization (Fig. S1). γ = gµB/ħ is the gyromagnetic ratio, with µB the Bohr magneton and ħ the reduced Planck constant. The anisotropic g-factor is a combination of the in-plane and out-of-plane g-factors g|| and g⊥ [2]. For (111)-oriented films with cubic and uniaxial out-of-plane magnetic anisotropies, the normalized free energy density contains Zeeman, demagnetization, uniaxial, and cubic contributions [3,4],

𝓕 = −B0 [sin θM sin θB cos(φM − φB) + cos θM cos θB] + (µ0Ms/2 − K2⊥/Ms) cos²θM + 𝓕cub,  (S2)

with θB and φB the angles of B0 with respect to, respectively, the film's normal and the in-plane [110] crystallographic axis (Fig. S1), µ0 the vacuum permeability, and 𝓕cub the cubic (111) contribution given in [3,4]. 2K2⊥/Ms and 2K4/Ms are, respectively, the uniaxial out-of-plane and cubic anisotropy fields, with K2⊥ and K4 the perpendicular and cubic anisotropy constants. Note that to calculate the FMR frequency using equation S1 at a certain B0, θB, and φB, we first need to find the θM and φM that minimize the free energy by numerically solving ∂𝓕/∂θM = ∂𝓕/∂φM = 0. Using equations S1 and S2 we can calculate the FMR frequency for an out-of-plane magnetic field and magnetization (θB = θM = 0°), which gives

ωFMR = γ⊥ (B0 − µ0Ms + 2K2⊥/Ms − (4/3)K4/Ms).  (S3)

For an in-plane magnetic field and magnetization (θB = θM = 90°), we find equation S4, which contains a cos(3φM) term; the factor 3 in the cosine arises from the triangular in-plane symmetry of a cubic unit cell with its normal along the [111] direction (Fig. S1). In our measurements a large in-plane magnetic field is needed to overcome the perpendicular anisotropy and push the magnetization into the plane, such that generally B0 ≫ |2K4/Ms| = 8.2 mT and we can ignore this term [3], giving

ωFMR = γ|| √[B0 (B0 + µ0Ms − 2K2⊥/Ms − K4/Ms)].  (S5)

Equations S3 and S5 are the same as equations 1 and 2 in the main text. We can also calculate the FMR frequency at low bias fields by substituting the extracted parameters into equations S1 and S2. We obtain the black dashed line, which fits reasonably well to the measured FMR, even where the FMR frequency decreases with field. The red dashed line shows the calculated FMR frequency when B0 has a 1° out-of-plane angle (θB = 89°), which dramatically increases the minimum FMR frequency. This is because the magnetization turns only asymptotically into the plane when the angle is offset, instead of abruptly (Fig. S2b; black line: θB = 90°, red line: θB = 89°). We note that in Fig. S2a at large bias fields both the black and red dashed lines overlap with the white fit. Therefore, we conclude that the in-plane FMR at φB = 0° is quite robust to any small out-of-plane component of the static field that might be present in our experimental setup, validating the white fit using equation S5 [3].
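The minimize-then-differentiate procedure of equations S1 and S2 can be sketched numerically. The free energy below keeps only the Zeeman, demagnetizing, and uniaxial terms (the cubic term is omitted for brevity), and the anisotropy constant and g ≈ 2 are placeholder assumptions, so this illustrates the workflow rather than reproducing the full model:

```python
import numpy as np
from scipy.optimize import minimize

MU0 = 4e-7 * np.pi
MS, K2P = 1.52e4, 350.0        # Ms (A/m); K2P (J/m^3) is a placeholder value
GAMMA = 2 * np.pi * 28e9       # gyromagnetic ratio (rad s^-1 T^-1), g ~ 2 assumed

def F(th, ph, B0, thB):
    """Normalized free energy F/Ms in tesla: Zeeman + shape + uniaxial only.
    The cubic (111) term of equation S2 is omitted in this sketch."""
    zeeman = -B0 * (np.sin(th) * np.sin(thB) * np.cos(ph) + np.cos(th) * np.cos(thB))
    return zeeman + (MU0 * MS / 2 - K2P / MS) * np.cos(th) ** 2

def d2(f, x, i, j, h=1e-4):
    """Second partial derivative of f([th, ph]) by central finite differences."""
    ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
    return (f(*(x + ei + ej)) - f(*(x + ei - ej))
            - f(*(x - ei + ej)) + f(*(x - ei - ej))) / (4 * h * h)

def fmr(B0, thB):
    """Smit-Beljers frequency (equation S1) after minimizing F (equation S2)."""
    g = lambda th, ph: F(th, ph, B0, thB)
    th, ph = minimize(lambda x: g(*x), x0=[1.0, 0.1]).x
    x = np.array([th, ph])
    Ftt, Fpp, Ftp = d2(g, x, 0, 0), d2(g, x, 1, 1), d2(g, x, 0, 1)
    return GAMMA / (2 * np.pi) / np.sin(th) * np.sqrt(max(Ftt * Fpp - Ftp**2, 0.0))

print(f"FMR at B0 = 117.5 mT, theta_B = 90 deg: {fmr(0.1175, np.pi / 2) / 1e9:.2f} GHz")
```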
Fig. S2c shows a similar flip-chip FMR measurement as in Fig. S2a, but now with the field applied along the [112] direction (θB = 90°, φB = 90°; the white line is the same as in Fig. S2a and is added as a reference). The FMR reaches a minimum frequency of about 1 GHz, which is significantly larger than the minimum in the φB = 0° geometry. We reproduce this enhanced frequency minimum by calculating the expected FMR frequency using the parameters extracted in section 2.1 (black dashed line; we ignore any potential in-plane anisotropy of the g-factor). The calculated FMR frequency matches the measured FMR remarkably well for all magnetic field values, demonstrating the accuracy of the white fit. Again we attribute the enhanced FMR minimum to the fact that the magnetization only slowly turns into the plane, even for a perfect in-plane magnetic field θB = 90° (Fig. S2d, black line). As a result, the FMR frequency asymptotically approaches the in-plane Kittel relation (equation S5, white line). Similar to before, a change of 1° in θB lifts the minimum FMR frequency, explaining the minimum FMR frequency of about 1.25 GHz observed in Fig. 2b of the main text. Variations on the order of 1° in θB are expected in our measurement setup, since we manually place the sample between two permanent magnets (section 1). Fig. S2d shows that the magnetization does not point exactly in the plane during our propagating spin wave spectroscopy measurements, even though this is assumed in the data analysis. We derived the exchange constant from spin-wave spectra taken at approximately B0 = 117.5 mT, at which the magnetization points ∼3-6 degrees out of the plane (blue dashed line in Fig. S2d). We neglect this small out-of-plane angle, because we expect the induced error to be negligible compared to the ∼15% error obtained from the fit in Fig. 3c of the main text.

3 Systematic error in the applied bias field

In this section we calculate how a systematic error in the applied bias field affects the error of the anisotropy fields, which we extracted from the FMR frequency (Fig. 1b of the main text). Since we manually place our sample between the magnets (section 1), it may have a small offset of ∼1 mm with respect to the center position. Such an offset would cause a systematic error in the applied magnetic field B0, which enhances the error of the anisotropy fields. To obtain a conservative estimate of these errors, we determine the systematic error in the applied magnetic field from the change in B0(x) produced by a 1 mm displacement, where B0(x) is the magnetic field of a cylindrical magnet at a distance of x mm along its symmetry axis. Here Br = 1320 mT is the remanence, and L = 20 mm and r = 17.5 mm are the length and radius of the magnet. Fig. S3 shows the calculated error ΔB0(x) for a 1-mm offset plotted against the magnetic field.
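This estimate is straightforward to sketch in Python with the textbook on-axis field of a cylindrical magnet, B(x) = (Br/2)·[(x + L)/√((x + L)² + r²) − x/√(x² + r²)]; this standard expression and the symmetric ±0.5 mm evaluation of the 1 mm offset are our assumptions, as the paper's own equations are not reproduced in the extracted text:

```python
import numpy as np

BR, L, R = 1320.0, 20.0, 17.5   # remanence (mT), magnet length and radius (mm)

def b_axis(x):
    """On-axis field (mT) of a cylindrical magnet at distance x (mm) from its face."""
    return BR / 2 * ((x + L) / np.sqrt((x + L) ** 2 + R ** 2)
                     - x / np.sqrt(x ** 2 + R ** 2))

x = np.linspace(1.0, 60.0, 12)
delta_b = np.abs(b_axis(x - 0.5) - b_axis(x + 0.5))  # field change for a 1 mm offset
for xi, b, db in zip(x, b_axis(x), delta_b):
    print(f"x = {xi:5.1f} mm: B0 = {b:6.1f} mT, dB0(1 mm) = {db:4.1f} mT")
```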
4 Background-subtraction procedures of the spin-wave spectra

For the spin-wave spectra in Fig. 3a,b and Fig. 4a,c in the main text a background spectrum was subtracted consisting of the mean |S_21| transmission at 100 mT and 138 mT, for which there are no spin waves in the frequency range of interest. In Fig. 2b in the main text a background was subtracted using Gwyddion (Fig. S4).

Figure S4: Background-subtraction procedure of the microwave spectrum in Fig. 2b of the main text. The measured data (left figure) contains spurious signals attributed to small changes in the microwave transmission of the cables and connectors that attach the VNA to the striplines as a function of magnetic field. We filter these signals by first masking the high-curvature part of the measured data that contains the spin-wave fringes. Then we fit a fifth-order polynomial through each horizontal line, excluding the masked data, and subtract it as a background (middle figure). The resulting spectrum only contains the spin-wave fringes (right figure, same as Fig. 2b in the main text). The image processing was performed using Gwyddion (version 2.58).

5 The spin-wave dispersion of a magnetic thin film with perpendicular and cubic magnetic anisotropy

The spin-wave dispersion for magnetic thin films with perpendicular magnetic anisotropy (PMA) and cubic anisotropy was derived in reference [5]. Equation 30 of this work states the dispersion for an (111)-oriented film with in-plane magnetization, similar to our experiment. Here ω_SW is the angular frequency of a spin wave with wavevector k that propagates at an angle φ with respect to the in-plane magnetization. In our experiment we measure spin waves in the Damon-Eshbach configuration (φ = π/2), we apply the external field B_0 along [112] (φ_M = π/2), and the wavelengths of the detected spin waves are much larger than the thickness t of the film (kt ≪ 1), such that we can approximate f ≈ kt/2. This gives a simplified dispersion, where we defined ω_B = γ_∥B_0, ω_M = γ_∥μ_0M_s, and ω_K = γ_∥(2K_2⊥/M_s + K_4/M_s) for convenience of notation. Working out the brackets and rearranging the terms in orders of k gives the dispersion as a polynomial in k. For the spin-wave spectra taken at B_0 = 117.5 mT we find (ω_M t/2)² ≪ 2ω_B + ω_M − ω_K due to the low saturation magnetisation and thickness of our film, such that we can further approximate the dispersion, which gives equation 4 in the main text. We derive the group velocity v_g = ∂ω_SW/∂k by differentiating with respect to k, which gives equation 5 in the main text.

6 Comparing the frequency difference between fringes to the spin-wave linewidth

In this section we calculate the expected average frequency difference Δf between spin-wave fringes excited by the first maximum of the microwave driving field Fourier amplitude (|B_RF(k)|) in Fig. 3b of the main text. The stripline has a width w = 2.5 µm, such that |B_RF(k)| has its first node at k_min = 2π/2.5 µm⁻¹ [6]. Every time another wavelength fits within the center-to-center distance s between both striplines, another fringe is observed in the signal. Therefore the condition s = nλ applies for every nth fringe, with λ the spin-wave wavelength. This means that fringes occur every Δk = 2π/s = 2π/12.5 µm⁻¹ in k-space. In the first maximum of the excitation spectrum we would thus expect k_min/Δk = 5 fringes. According to the reconstructed dispersion (Fig. 3f of the main text) the frequency difference between spin waves with wavevector k_min and the minimum of the band is about 20 MHz, leading to an average frequency difference of 20/5 = 4 MHz between consecutive fringes. This is on the order of the FMR linewidth of undoped YIG films of similar thicknesses [4]. Assuming that Ga:YIG has a similar or larger linewidth, we argue that we cannot resolve fringes in the first maximum of the excitation field's Fourier amplitude because they are too narrow compared to the intrinsic spin-wave linewidth.
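The fringe-counting argument above is simple arithmetic; the sketch below reproduces it with the stripline geometry and band span quoted in the text:

```python
# Minimal sketch of the fringe-spacing estimate, using the device
# geometry from the text (stripline width w, center-to-center
# separation s, and the 20 MHz span between k_min and the band minimum).
import numpy as np

w = 2.5            # stripline width (um)
s = 12.5           # center-to-center stripline distance (um)
band_span = 20.0   # frequency span between k_min and the band minimum (MHz)

k_min = 2 * np.pi / w      # first node of |B_RF(k)| (rad/um)
delta_k = 2 * np.pi / s    # fringe spacing in k-space (rad/um)

n_fringes = k_min / delta_k       # expected fringes in the first lobe
delta_f = band_span / n_fringes   # average fringe spacing (MHz)
print(n_fringes, delta_f)         # -> 5.0 fringes, 4.0 MHz
```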
7 Zoomed-in spin-wave spectra displaying low-amplitude fringes

8 Calculation of the non-linear frequency-shift coefficient

For Damon-Eshbach spin waves with wavevector k and frequency ω_k/2π the non-linear four-magnon frequency-shift coefficient W_kk,kk is given by [7]

W_kk,kk = (1/2) [ (2ω_B + ω_M(N_xx,k + N_yy,k)) / (2ω_k) ]² [ 3ω_B + ω_M(2N_zz,0 + N_zz,2k) ] − (1/2) [ 3ω_B + ω_M(N_xx,k + N_yy,k + N_zz,2k) ],   (S14)

with N_ij,k the (i,j)th index of the spin-wave tensor N_k. The three-wave correction term vanishes since the spin waves propagate perpendicular to the magnetization. The precessional xyz-frame is defined such that z points in the plane along the magnetization, x along the film normal, and y points in-plane perpendicular to z and parallel to the wavevector of the spin waves. N_k is the Fourier transform of the tensorial Green's function N(r, r′) = N(r, r′)_dip + N(r, r′)_ex + N(r, r′)_ani, which has components due to uniaxial anisotropy and the dipolar and exchange interactions:

N_k e^(ikr) = ∫ N(r, r′) e^(ikr′) d³r′ = ∫ [N(r, r′)_dip + N(r, r′)_ex] e^(ikr′) d³r′ + ∫ N(r, r′)_ani e^(ikr′) d³r′.   (S15)

The contributions to N_k from the N(r, r′)_dip and N(r, r′)_ex components in the thin-film limit were derived earlier [7]. Following this work, N(r, r′)_ani due to uniaxial anisotropy in the out-of-plane x-direction is given by

N(r, r′)_ani = −(B_2⊥/(μ_0 M_s)) (x̂ ⊗ x̂) δ(r − r′).   (S16)

Here B_2⊥ = 2K_2⊥/M_s is the uniaxial out-of-plane anisotropy field, ⊗ denotes a dyadic unit vector product and δ(r − r′) is the Dirac delta function. As a result of the dyadic product only the (x, x) index of N(r, r′)_ani is non-zero, leading to a contribution −B_2⊥/(μ_0 M_s) on N_xx,k. By adding this contribution to the other components, we find the diagonal elements of N_k in the Damon-Eshbach configuration (equations S18), with f = 1 − (1 − e^(−kt))/kt and t the thickness of the film as before. We neglected the cubic anisotropy since it is small relative to the uniaxial anisotropy. By substituting equations S18 into equation S14 we can calculate W_kk,kk for the wavevectors relevant for this work (Fig. S6). For all these wavevectors W_kk,kk is positive, explaining the positive frequency shifts of the spin waves that we observe when increasing the drive power. This is in contrast to the frequency shift caused by the reduction of the saturation magnetization as a result of strong driving or heating. In this simple picture a downward frequency shift is expected for in-plane magnetization (Fig. S7), highlighting the value of the Hamiltonian formalism that was used to calculate the non-linear frequency-shift coefficient [7].

Figure S7: Field dependence of the FMR frequency of Ga:YIG for unreduced saturation magnetization (M_s = 1.52·10⁴ A/m, black line) and for 10%-reduced saturation magnetization (M_s = 1.37·10⁴ A/m, red line). The bias field is applied in the [112] direction and the magnetic anisotropy fields are the same for both curves. The dashed line indicates the field at which we performed our spin-wave spectroscopy measurements. Clearly a negative frequency shift is expected upon decreasing the saturation magnetization, which is in contrast to the positive frequency shifts we observe.
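As a sanity check of the reconstructed coefficient (S14), the sketch below evaluates it numerically. Because equations S18 are not reproduced in this text, the diagonal tensor elements use a generic thin-film form (dipolar factor f, an exchange contribution, and the PMA term on the xx component), and the exchange length and anisotropy field are assumed values; the sketch therefore illustrates the structure of the calculation rather than the paper's exact numbers.

```python
# Minimal sketch of evaluating the frequency-shift coefficient (S14) as
# reconstructed above. The diagonal tensor elements below are a generic
# thin-film ASSUMPTION (equations S18 are not reproduced in the text),
# as are the exchange length and anisotropy field values.
import numpy as np

MU0 = 4e-7 * np.pi
MS = 1.52e4          # saturation magnetization (A/m)
GAMMA = 1.76e11      # gyromagnetic ratio (rad/s/T)
T = 45e-9            # film thickness (m)
B0 = 0.1175          # bias field (T)
B2PERP = 0.095       # ASSUMED PMA field (T)
LEX = 120e-9         # ASSUMED exchange length (m)

wB = GAMMA * B0
wM = GAMMA * MU0 * MS

def n_diag(k):
    """Assumed diagonal elements (N_xx, N_yy, N_zz) of the spin-wave tensor."""
    f = 1 - (1 - np.exp(-k * T)) / (k * T) if k > 0 else 0.0
    ex = (LEX * k) ** 2
    return 1 - f + ex - B2PERP / (MU0 * MS), f + ex, ex

def omega_k(k):
    """Spin-wave angular frequency from the assumed tensor elements."""
    nxx, nyy, _ = n_diag(k)
    return np.sqrt((wB + wM * nxx) * (wB + wM * nyy))

def w_shift(k):
    """Four-magnon frequency-shift coefficient, equation (S14)."""
    nxx, nyy, _ = n_diag(k)
    _, _, nzz0 = n_diag(1e-3)      # k -> 0 limit of N_zz
    _, _, nzz2k = n_diag(2 * k)
    a = (2 * wB + wM * (nxx + nyy)) / (2 * omega_k(k))
    return (0.5 * a**2 * (3 * wB + wM * (2 * nzz0 + nzz2k))
            - 0.5 * (3 * wB + wM * (nxx + nyy + nzz2k)))

for k in (1e6, 3e6, 5e6):          # wavevectors (rad/m)
    print(k, w_shift(k) / (2 * np.pi * 1e6), "MHz")
```

With these assumed parameters the coefficient comes out positive over the sampled wavevectors, consistent with the sign of the frequency shifts reported above.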
Hashing It Out: A Survey of Programmers' Cannabis Usage, Perception, and Motivation

Cannabis is one of the most common mind-altering substances. It is used both medicinally and recreationally and is enmeshed in a complex and changing legal landscape. Anecdotal evidence suggests that some software developers may use cannabis to aid some programming tasks. At the same time, anti-drug policies and tests remain common in many software engineering environments, sometimes leading to hiring shortages for certain jobs. Despite these connections, little is actually known about the prevalence of, and motivation for, cannabis use while programming. In this paper, we report the results of the first large-scale survey of cannabis use by programmers. We report findings about 803 developers' (including 450 full-time programmers') cannabis usage prevalence, perceptions, and motivations. For example, we find that some programmers do regularly use cannabis while programming: 35% of our sample has tried programming while using cannabis, and 18% currently do so at least once a month. Furthermore, this cannabis usage is primarily motivated by a perceived enhancement to certain software development skills (such as brainstorming or getting into a programming zone) rather than medicinal reasons (such as pain relief). Finally, we find that cannabis use while programming occurs at similar rates for programming employees, managers, and students despite differences in cannabis perceptions and visibility. Our results have implications for programming job drug policies and motivate future research into cannabis use while programming.

INTRODUCTION

Cannabis sativa (hereafter, cannabis) is the world's most commonly used illicit substance, used by more than 192 million people in 2018 [54]. The global cannabis market in 2020 was estimated at 20.5 billion USD, and is estimated to grow to 90.4 billion by 2026 [65]. Globally, cannabis legality is changing rapidly, with many countries (e.g., the United Kingdom, Colombia, and Malawi) legalizing medical cannabis, and a subset (e.g., Uruguay, Mexico, and Canada) also legalizing cannabis for recreational adult use. In the US, for example, medical cannabis is allowed in 36 states, and 17 states have legalized cannabis for adult use despite its federal classification as a Schedule I drug, which criminalizes cannabis and defines it as having no accepted therapeutic value and a high abuse potential [47]. This classification has hampered research on its therapeutic effects [47], and prohibition is contrary to popular opinion: 91% of Americans believe cannabis should be legal for medical or recreational purposes [56]. Similarly, 81% of Americans believe cannabis has at least one benefit [35]. These benefits are mostly medical, but 16% and 11% cited improved creativity and focus, respectively.

Despite legal concerns, anecdotes of cannabis use intersecting with software engineering abound. Questions inquiring about cannabis's effects on programming are common on online forums such as Reddit, Quora, Hacker News, and Dev, often inspiring numerous conflicting answers. Similarly, popular tech-related media sites cover the topic, with one iTechPost article claiming that "folks in the tech mecca that is Silicon Valley are clearly getting their fair share of medical (or otherwise) marijuana", positing that cannabis may help with chronic pain associated with long hours of programming [36].
This claim aligns with epidemiological trends, as chronic pain is the most common reason for medical cannabis licensure in the US [7], and many people report that cannabis is useful for managing chronic pain symptoms [10]. Congruent with popular opinion, other posts claim that cannabis products enhance programming through increasing focus [4] or creativity [62]. However, to the best of our knowledge, no study has investigated motivations of focus, creativity, or wellness for cannabis use while programming.

Cannabis is often prohibited in the workplace, a policy frequently enforced through mandatory drug testing for metabolites of Δ-9-tetrahydrocannabinol (THC), which causes intoxication. A 2018 report found 63% of US organizations conduct drug screening, with two-thirds not accommodating medical cannabis use or lacking a medical cannabis policy [28]. In software engineering workplaces, drug tests remain common: we find that 29% of programmers have taken a drug test for a programming-related job (see Section 5.1). This prohibition of cannabis use in software engineering has contributed to a widely-reported hiring shortage for certain US government programming jobs [14,34,42]. In 2014, after struggling to meet hiring goals, then-director of the American Federal Bureau of Investigation James Comey stated that while he had "to hire a great work force to compete with . . . cyber criminals [,] . . . some of those kids want to smoke weed on the way to the interview", a behavior counter to the FBI's current policy that "prohibits anyone working for it who has used cannabis in the past three years" [34].

Despite such evidence highlighting the intersection of cannabis and programming, little empirical research has been conducted on cannabis use in software development. To our knowledge, previous related publications either combine cannabis use with other illegal substances such as cocaine [32], show data about computer scientists as a sub-population in a much larger survey [45], and/or include programming only as part of a much broader sector such as engineering or business services [45]. These previous studies do not sufficiently investigate the intersection of cannabis and programming for several reasons. First, substances such as cocaine have drastically different physiological effects than cannabis and are used for different reasons: grouping such disparate substances may result in ineffective blanket drug policies and misleading conclusions. Second, studies that group jobs by sector often focus on physical consequences of cannabis usage (e.g., impairment while operating a machine) that are less relevant to software (where our participants place more of a focus on, e.g., creativity). Thus, it remains unclear if, when, or why people currently use cannabis while programming, let alone how job-type or company policy may play a role. Without a grounded understanding of this intersection, companies and hiring managers cannot effectively evaluate the utility of extant anti-cannabis policies.

We present results from the first large-scale empirical survey of cannabis use in software engineering, reporting data from 803 programmers, including 450 full-time developers (via an online, anonymized and confidential survey):

• We find that some developers (18% of our sample) use cannabis while programming at least once a month, with many even choosing to use it for work-related tasks.
• We find that cannabis use while programming is more commonly motivated by perceived programming-related skill enhancement than by medical reasons. This aligns with perceptions among a subset of students and younger people that cannabis use may enhance creativity or cognitive performance [1,21].

• We find that cannabis use while programming occurs at similar rates for programming employees, managers, and students despite differences in perception of cannabis approval level and cannabis visibility between the three groups.

• Despite contrary anecdotal reports, we find that anti-cannabis policies and screening remain common for programming-related jobs, with 29% of respondents reporting having taken a drug test for a programming-related job.

We close with a discussion of the implications of our results on software company drug policies and future research directions.

BACKGROUND AND RELATED WORK

We provide context about the legality and uses of cannabis, focusing on cannabis use in the United States as we scoped our study to US-based developers. However, we note that recent cannabis legality shifts in the US align with global trends. We close with a discussion of prior work at the intersection of programming and cannabis.

General Cannabis Background-In the past century, cannabis legality has shifted dramatically in the US. While many cannabis preparations were listed in the US pharmacopoeia in the early 20th century, cannabis was officially criminalized under the Controlled Substances Act in 1970. Since 1996, however, 36 states have legalized cannabis for medical purposes, and 17 for adult use. The policy whiplash has been largely driven by politics and popular opinion rather than science, with the re-emergence of cannabis occurring partially due to: 1) acknowledgement of the cruel excesses and ineffectiveness of the War on Drugs [20,64]; 2) increasing acknowledgement of cannabis's therapeutic benefits [47]; and 3) the ongoing opioid crisis, which has driven home the need for alternative, less harmful pain medicines [15]. In conjunction with liberalizing policies, past-year cannabis use has increased among Americans 12 or older: 11% in 2002 to 17.5% in 2019 [26].

Cannabis is used for many reasons, including medical (e.g., pain, nausea) and recreational (e.g., social or perceptual enhancement, altered consciousness) purposes [9,41]. This wide variety of uses is enabled by the pharmacopeia of compounds in cannabis, which include more than 100 active cannabis-derived compounds (cannabinoids) such as cannabidiol (CBD) and Δ-9-tetrahydrocannabinol (THC) as well as terpenes and flavonoids, which are responsible for taste and odor [47]. The most common medical reason for cannabis use is chronic pain, demonstrated by patient surveys [3,51] and registry data for medical cannabis patients [7]. Within broad categories of medical vs. recreational use, however, there is subtle shading of motivations. Lee et al. developed the comprehensive marijuana motives questionnaire, in which they identified factors associated with cannabis use, including enjoyment, coping, experimentation, altered perception, sleep/rest, and availability [41]. These factors are quite broad, and may encompass recreational, medical, or other use patterns such as for creative work or performance enhancement.

Cannabis and Programming-Within this complex environment, cannabis interacts with software development.
For instance, cannabis use in Silicon Valley appears to be high, with area dispensaries reporting that around 40% of their clientele are tech workers [59]. Similarly, a qualitative study of coding bootcamps identified "lots and lots of [weed]" as one key element of support [11]. No previous empirical studies have directly investigated the intersection of software and cannabis. Reports either combine cannabis with other illegal substances or include programmers only as a sub-population [32,45]. For instance, one study has a table of the percentage of workers in various fields reporting illegal drug use [32, Tab. 3.5, p. 26]. Programmers appear in the table, with 10.4% reporting usage in the last year. However, this is the only mention of programmers, cannabis is not separated from other drugs, and the data is from the early 1990s, well before cannabis was legalized in numerous states. Thus, questions of cannabis prevalence, usage motivations, and culture in software engineering remain unanswered.

However, cannabis use has been studied in the context of creative problem solving, a key component of many software engineering tasks [23,24,43]. Generally, cannabis is seen as a creativity enhancer: in one study, 54% of participants believed it increased their creativity [22]. Similarly, Steve Jobs, who was regularly using cannabis when he founded Apple, stated that "the best way [he] would describe the effect of the marijuana . . . is that it would make [him] relaxed and creative" [66]. From a cognitive perspective, the link between creativity and cannabis use is more tenuous. LaFrance et al. found that cannabis users were more creative than non-cannabis users; however, this difference may be explained by underlying personality [39]. On the other hand, Kowal et al. found that highly potent cannabis can actually impair divergent thought, a type of creative thinking [37]. These conflicting results linking cannabis and creative problem solving may motivate future observational studies of cannabis and programming. Apart from creativity, cannabis has a range of other cognitive effects which could impact software development; long-term cannabis use can impair memory and attention control [38]. However, even for general cognitive effects, the current scientific literature is often insufficient; evidence regarding cannabis's long-term effects on skills used in software engineering such as decision making, motor ability, and working memory is inconsistent (see Kroon et al. [38] for a review of the current research on cannabis and cognition).

Programming and Other Mind-Altering Substances-While the intersection between programming and cannabis remains unexplored, there exists research into connections with other mind-altering substances such as alcohol and LSD. One study of IT professionals in India linked high levels of work-related stress to increased rates of alcohol abuse and depression [19]. Jarosz et al. linked alcohol and creativity, finding that intoxicated subjects completed more creative problem solving tasks than sober counterparts [31]. As for LSD, psychedelics have historical cultural connections with software development [44,61]: LSD was commonly reported by many early PC and internet developers as a source of innovation and creativity [61], a phenomenon likely connected to 1960s counterculture [44].
However, despite preliminary evidence that micro-dosing on psychedelics may increase creativity [49], connections remain cultural and anecdotal rather than causal, further motivating future studies of mind-altering substances and programming.

SURVEY DESIGN AND METHODOLOGY

To understand programmers' cannabis usage, perception, and motivation, we conducted an online survey of 803 programmers. We desired a design that was time-efficient (allowing for a large number of responses), low risk (free of legal or employment-related consequences), and structured to permit statistical comparisons (within sub-populations such as employment status, age, and gender). Thus, we designed our survey to take under 30 minutes, to be anonymous and confidential, and to use best practices from drug survey construction in other fields, including adapting questions from previously-validated surveys when possible.

We scoped this study to United-States-based developers. To obtain a diverse array of responses, we recruited participants from several sources including GitHub, social media, and the University of Michigan (a large public American university). We did not recruit participants directly from software companies to avoid risks of work retaliation. Recruitment materials indicated that previous cannabis use was not necessary to participate. We now describe the construction (Section 3.1), ethical considerations (Section 3.2), and distribution (Section 3.3) of our survey. Our replication package is available online with our full survey, recruitment materials, and IRB protocol.

Survey Construction

Our survey included questions on demographics, programming background, cannabis attitudes, and cannabis usage. For cannabis sections, we asked questions about cannabis in general and in relation to programming. We employed display logic as appropriate to minimize exposure to irrelevant questions. When possible, we based questions on previous studies about cannabis [18,26,56] or software development [55] to allow comparisons with prior work.

Demographics-Querying age, gender, and employment status, our demographics questions were adapted from those asked by Boehnke et al. in their recent survey of cannabidiol use for fibromyalgia [6]. To help ensure participant safety, we did not collect identifying information such as names or IP addresses. As a result, we included additional validity checks as described in Section 3.3.

Programming background-Participants reported how long they had programmed, their programming education, and their programming-related job history. We also asked participants with programming jobs if they self-identified as a manager, and we asked participants to indicate how often they conducted various software-engineering-related tasks such as brainstorming and requirements elicitation, adapted from those used by Tilley et al. [55, Sec. 2].

Cannabis attitudes-We asked questions regarding attitudes toward cannabis, both in general and also in programming-specific contexts. To gauge general cannabis attitudes, we asked questions about cannabis legalization and perceived risk drawn from previous American national surveys from the Pew Research center [56] and the National Survey on Drug Use and Health [13,26]. For programming-specific cannabis attitudes, we adapted questions previously used to assess cannabis attitudes or perceptions from other contexts, e.g., from a study of high school seniors' disapproval about individuals carrying out cannabis-related activities [2].
Cannabis usage-We asked questions regarding general and programming-specific cannabis use. For general cannabis use, we asked questions from the Daily Sessions, Frequency, Age of Onset, and Quantity of Cannabis Use Inventory (DFAQ-CU) [18], a validated measure of cannabis use behavior and history. We included questions that quantify current use frequency, past periods of heavy use, medicinal vs. recreational cannabis, and the percentage of the time participants use cannabis products with THC (see Section 2). We also asked how COVID-19 affected cannabis usage patterns. To measure cannabis use while programming, we adapted questions from the DFAQ-CU, adding the phrase "while programming, coding, or completing any other software engineering-related task?". We also asked for which types of programming projects or tasks participants are likely to use cannabis, and we asked participants about their motivation(s) for using cannabis with choices reflecting those we observed in anecdotal online posts as well as a free text option.

Ethical Considerations

In the US (the source of our population), cannabis remains illegal at a federal level and in many states, so use can result in fines or incarceration. Further, cannabis is often explicitly prohibited by corporate policy, potentially resulting in employee termination. As such, we worked closely with our IRB to minimize legal risks and ensure participants felt comfortable answering honestly. First, we made our survey anonymous and confidential: we did not collect names or IP addresses, even though doing so necessitates additional data-quality checks. Second, all participants gave informed consent, and all questions were optional. Third, we focused our recruitment of professional developers on open source projects rather than through software engineering company contacts to avoid the risk of workplace retaliation or coercion. Fourth, we collected emails for our optional incentive on a separate platform where they could not be connected back to survey responses. Finally, we are unable to publish our full data set: although anonymous, it contains demographic data which may inadvertently identify participants.

Survey Distribution

Survey platform-When choosing a survey platform, we wanted to ensure the data we collected was anonymous and confidential while also providing data quality assurance. Through consultation with our IRB, we used the Qualtrics XM Platform, which enabled anonymous and confidential collection as well as data-quality options such as preventing multiple submissions and bot detection. As mentioned in Section 3, we scoped this initial study of cannabis use and programming to United-States-based developers due to the high variance of cannabis laws and cultures worldwide. All recruitment materials stated that the survey was optional, anonymous, and confidential. To help mitigate participant self-selection bias, we also made it clear that prior cannabis use was not required to participate. Finally, survey participation was encouraged through an optional drawing for one of five $100 awards. All data was collected between March 31 and May 2, 2021.

Survey recruitment-To encourage diverse responses, we recruited from several populations: open-source GitHub developers, current and former computing students at a large public university, and social media users.
For participants recruited from GitHub, we used GitHub's REST API to obtain the top 1000 developers and top 100 repositories associated with each of 25 "popular" programming languages (as identified in the GitHub interface) and 8 additional common languages such as MATLAB and R. For each repository, we pulled the profiles of the top 25 contributors. We filtered for profiles with a public email, resulting in 31,259 potentials. Using regular expressions and manual review, we identified 7,372 with a US location. Eliminating an additional 1,613 using DNS verification, we sent 5,759 emails, of which 36 failed, and received 440 valid responses (7.7%), a rate similar to previous studies of open-source developers [29,53]. This use of GitHub profiles for research is permitted by the terms of service. An illustrative sketch of this recruitment pipeline appears at the end of Section 4.

As for the university-recruited participants, we sent 5,638 emails to all current and former undergraduates who took a programming course for CS-majors (e.g., CS2 or CS3) at the University of Michigan between Fall 2018 and Fall 2019, receiving 283 responses (5.0%, 12 failed). As this study was conducted in 2021, this strategy recruited a mix of more senior CS undergraduates and young industrial developers rather than only current students. We also emailed CS graduate students for 56 responses. Finally, we posted the survey on Twitter, yielding 24 responses. While our study is not a random sample, we note that convenience samples are common in the cannabis and hidden populations literature [46] (see Section 7).

Survey data validation-In total, 1045 participants started our survey. To ensure we analyzed only high-quality data, we implemented several post-collection checks. First, 236 partial responses were removed. While participants could skip individual questions due to the sensitivity of the survey topic, valid responses must have answered at least: 1) if they had programmed; 2) if they had used cannabis; and 3) if they had used cannabis while programming. To mitigate rushing through the survey, we removed responses completed faster than 1.5 standard deviations below the median. Finally, we checked for consistency between reported age, years of programming experience, professional programming experience, and cannabis use. Combined, our completion time threshold and consistency checks eliminated 6 participants, leaving 803 valid participants for analysis.

POPULATION CONTEXTUALIZATION

We now present indirect evidence that our participants, while not a random sample, are similar in many ways to previous random samples or studies. A true random sample would not have been ethically permitted, but we gain confidence in our results' generalizability by contextualizing participants' gender, age, and employment. Table 1 overviews the gender, age, and programming-related employment of our study population by recruitment pool (i.e., how they were contacted to participate in this study). The majority (83%) of our population are men. This percentage is higher in those recruited from GitHub (91%) than those from the university (72%) and social media (83%). While this gender gap is large, it is similar to what we would expect of our sample population as a whole. For instance, Vasilescu et al. found that, of public GitHub profiles with ascertainable gender, 91% were men [58], the same percentage observed in our sample. We see a similar correspondence with university-recruited participants: 26% are women, similar to the 24% of our CS undergraduate population overall.
Because they are similar to those of our recruitment populations, our observed gender ratios give confidence in the generalizability of our results even though we could not collect a random sample of all developers. The ages of participants also align with our sampled population. Our participants range in age from 15 to 70, with an average age of 29.2. As expected, GitHub-recruited developers were generally older than university-recruited participants, with an average age of 34.9 [40]. Similarly, the average age of our university-recruited participants matches a typical US senior undergraduate.

As for employment, the majority (56%) of our sample are currently full-time employees at a programming-related job while an additional 6% are part-time and 36% are students. Of those that currently have a programming-related job, we observe a wide range of reported titles. While the most common titles were software engineer and developer (30% and 34% of our sample respectively), a significant number of participants identified as a systems engineer, computer science researcher, computer science instructor, data scientist, project manager, or information technician (between 2-10% of our sample for each). This wide array of jobs indicates that our sample contains a diverse sample of programmers in various fields. Participants could select multiple job titles. We do note that 27 respondents (around 3% of our sample) self-reported as only CS Instructors. As suggested by our reviewers, we conducted an additional sub-population analysis removing these participants from our sample. This removal resulted in no changes to our overall significance results and analysis conclusions. For example, the number of developers who have tried cannabis while programming is 35.4% with educators removed and 34.8% with educators included. We include the full results of this additional analysis in our replication package.

As for self-identified students, the majority came from university-recruited emails. Due to survey time constraints, we did not include a functional test of programming ability for students. However, all university-recruited participants had taken a programming course for CS-majors 1-3 years before participating. This gap was intentional: it resulted in recruiting young computing professionals rather than only students. Since some in this set had graduated, the "University" descriptor in Table 1 refers to the email list source rather than current enrollment. To verify that students were indeed more advanced, we asked about general and professional programming experience. For general experience, results match expectations for more advanced undergraduates and graduates: both the first quartile and median student participant reported 3-5 years of experience while the third quartile reported 6-10 years. 67% of students also reported professional programming experience. Of these students, the median was 1-3 years, with some students reporting 6-10. This high level of professional experience may reflect the 21% who were graduate students.

Overall, the gender and age of our participants aligned with their populations. Also, even though we only collect data from university emails and open-source users, our sample's high percentage of professional developers and wide array of programming-related jobs indicates that our sample contains diverse types of programmers. We thus gain confidence in the generalizability of our findings.
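For concreteness, the GitHub recruitment pipeline described in Section 3.3 can be sketched as follows. This is an illustrative reconstruction rather than our actual scripts: the endpoints shown are documented GitHub REST API routes, but the exact queries, filtering rules, and DNS verification steps we used are not reproduced, and TOKEN is a placeholder.

```python
# Illustrative sketch (not our actual recruitment scripts) of gathering
# candidate GitHub profiles: most-starred repositories per language,
# then top contributors with a public email. All endpoints used here
# (search/repositories, repos/{owner}/{repo}/contributors, users/{login})
# are documented GitHub REST API routes; TOKEN is a placeholder.
import requests

API = "https://api.github.com"
HEADERS = {"Authorization": "token TOKEN",
           "Accept": "application/vnd.github.v3+json"}

def top_repositories(language, n=100):
    """Most-starred repositories for one language."""
    r = requests.get(f"{API}/search/repositories", headers=HEADERS,
                     params={"q": f"language:{language}",
                             "sort": "stars", "order": "desc", "per_page": n})
    r.raise_for_status()
    return [repo["full_name"] for repo in r.json()["items"]]

def contributor_emails(full_name, n=25):
    """(login, email, location) for a repository's top contributors
    whose email is public."""
    r = requests.get(f"{API}/repos/{full_name}/contributors",
                     headers=HEADERS, params={"per_page": n})
    r.raise_for_status()
    emails = []
    for contributor in r.json():
        user = requests.get(f"{API}/users/{contributor['login']}",
                            headers=HEADERS).json()
        if user.get("email"):            # keep public emails only
            emails.append((user["login"], user["email"], user.get("location")))
    return emails

for repo in top_repositories("python", n=5):
    print(contributor_emails(repo))
```

In our actual pipeline, the resulting list was then filtered to US locations (regular expressions plus manual review) and verified via DNS before any emails were sent.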
RESEARCH QUESTIONS

We organize our analysis around the following questions:

• RQ1-Usage: Do programmers use cannabis while programming? If so, how often?
• RQ2-Context: In what contexts do programmers use cannabis?
• RQ3-Motivation: Why do programmers use cannabis?
• RQ4-Perception: How do opinions of programming cannabis use vary between managers, employees, and students?

Statistical Methods-Our analysis was conducted in a Python Jupyter Notebook using Pandas [63]. For our statistical analyses, we primarily used SciPy [60] and Statsmodels [52]. When testing the significance of a difference between continuous variables (e.g., age) or Likert scores (e.g., 5-point scale) of two independent sub-populations, we use the Student's t-test. While Likert scores are ordered categorical variables, previous research shows that with large samples, parametric tests are sufficiently robust for analyzing Likert data even though it is ordinal and normality cannot be assumed [48]. Thus, the Student's t-test is best statistical practice. For testing the significance of a difference between two binary variables, we use the N−1 χ²-test (i.e., the proportions test) [12]. We consider results significant if p < 0.05. A minimal sketch of these tests is shown below. As this is a large survey study, we investigate multiple research questions and conduct multiple statistical tests. To avoid fishing and p-hacking, we defined our primary research direction when designing the survey and we report the results for all initial research questions and analyses. Within each research question, we also correct for multiple comparisons using a Benjamini-Hochberg False Discovery Threshold of Q = 0.05: unless otherwise noted, all significant results pass this multiple comparisons threshold. The majority of our findings produced p-values well below 0.0001, increasing confidence.

RQ1: Cannabis Usage While Programming

We first investigate if and how often programmers in our sample use cannabis while coding. We analyze usage trends in our sample overall and by gender, age, and recruitment pool.

Overall cannabis usage while programming-Overall, we find that 35% (280/803) of our participants have tried cannabis while programming or completing another software engineering-related task, approximately half of those who tried cannabis in general (69% = 557/803). Of those that have used cannabis while programming, 73% (205/280, 26% of our population overall) used cannabis while programming in the last year. While not a perfect comparison, we observe higher cannabis use than that in recent national surveys: 35% of Americans ages 18-25 and 15% of those 26 and older report using cannabis in the last year [26] compared to 54% of our sample. However, considering our population (and open source developers in general) skews young and male, higher reported use is expected.

Cannabis usage frequency-We also investigate how often participants currently use cannabis while programming. In the last year, 53% (147/280, 18% of our full sample) reported using cannabis while programming at least 12 times (monthly). Furthermore, 27% (76/280) reported using while programming at least twice a week (100 times per year) and 11% (30/280, 4% of our sample as a whole) reported using on a near daily basis. While those frequencies speak to current usage over the last year, these trends also occur over a longer term: 46% of our sample of cannabis-using programmers (128/280) also report that they have, at a point in the past, regularly used cannabis while programming (2 or more times per month for at least one six-month span).
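For transparency, the sketch below illustrates the statistical tests described under Statistical Methods on toy data (all counts are hypothetical, not survey results). The two-proportion z-test shown is a stand-in for the N−1 proportions test used in the paper; both compare two independent proportions.

```python
# Minimal sketch of the statistical pipeline on TOY data (all numbers
# hypothetical): a two-sample t-test for Likert scores, a two-proportion
# z-test, and a Benjamini-Hochberg correction across multiple tests.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# t-test comparing 5-point Likert approval scores of two sub-populations
students = rng.integers(-2, 3, size=200)     # toy Likert data in [-2, 2]
employees = rng.integers(-2, 3, size=300)
t_stat, p_likert = ttest_ind(students, employees)

# two-proportion test, e.g., "has tried cannabis while programming"
count = np.array([90, 60])                   # toy "yes" counts per group
nobs = np.array([250, 240])                  # toy group sizes
z_stat, p_prop = proportions_ztest(count, nobs)

# Benjamini-Hochberg correction across the tests within a research question
pvals = [p_likert, p_prop, 0.003, 0.04]      # last two are placeholders
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(reject, p_adj)
```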
Our results regarding cannabis use frequency while programming are visualized in Figure 1. These findings give the first formal insight into the prevalence of cannabis in programming communities, and have important implications for drug tests in software engineering. Urine-based drug tests detect cannabis up to 30 days after use [25], much longer than the interval between cannabis sessions reported by many developers. Additionally, programmers' cannabis products typically contain THC, the compound detected by most drug tests [25]: on average, developers reported that 87% of their cannabis had THC, with the median reporting 100% of products included the compound. Simultaneously, we found that drug tests remain common for software: 29% of our sample reported that they had taken a drug test for a programming-related job. Thus, cannabis-using developers may avoid applying to jobs with drug tests, limiting application pools.

Cannabis use demographic context-We further contextualize our results by investigating variance by gender, age, and recruitment pool. Regarding gender, we do not observe a significant difference between men and women in the percent of participants who have tried cannabis (70% vs. 69%, p = 0.87). However, we do find that men are more likely than women to have tried cannabis while programming (36% vs. 25%, p < 0.0001). This aligns with surveys of general populations not limited to programmers which find that men use cannabis more frequently than women [17]. Even though our sample size is smaller, we also observe that non-binary and transgender participants (not broken down in Table 1 for space) are significantly more likely to have tried cannabis while programming (57% vs. 34%, p = 0.01) than the rest of the population.

We also find a small but significant positive correlation between age and current frequency of cannabis use while programming (r = 0.21, p = 0.003): of those who currently use cannabis while programming, older programmers tend to use it slightly more often. When plotting this correlation, we observe the bulk of this increase can be attributed to increases in usage likelihood before the age of 35. After that, usage frequency appears to level off or drop slightly.

We also examine our two main recruitment pools: GitHub and university. GitHub participants were not significantly more likely to have tried cannabis (71% vs. 65%, p = 0.065). However, they were more likely to have used cannabis while programming (39% vs. 28%, p = 0.001). This aligns with observed demographics, as GitHub participants are more likely to be men and tend to be older. However, it also may indicate that cannabis use is particularly prevalent in open source communities, an observation that motivates future qualitative investigation of open-source cannabis culture.

Over one-third of our sample have used cannabis while completing a programming or software engineering-related task, of which half currently use cannabis at least once or more a month. Programmers typically use cannabis products that contain THC, and 11% of programmers who have used cannabis while programming report currently doing so on a near daily basis, behaviors very likely to be detected by most drug-test related policies. We find that cannabis use while programming is particularly common for non-binary or transgender participants (57%) and participants recruited from GitHub (39%).
RQ2: Cannabis Use Programming Context

We now investigate in what programming-related contexts programmers use cannabis, including both high-level project qualities (e.g., personal projects vs. work-related projects) and also software-engineering task types (e.g., refactoring, debugging, or requirements elicitation). We also analyze the impact of remote work.

In which programming contexts is cannabis used?-We first investigate which programming project types (e.g., personal or work projects) developers are most likely to choose to complete while using cannabis. We provide our full results in Table 2, but we emphasize our result that 95 participants (34% of cannabis-using programmers and 12% of our population overall) sometimes use cannabis for work-related tasks. While we anticipated our finding that personal programming projects would be the most common project type completed while using cannabis, we hypothesized that the percentage using cannabis for work-related projects would be lower than observed. This indicates that cannabis routinely interacts with professional software engineering environments, underlining the potential impact of corporate drug policies and motivating future studies of cannabis in software engineering.

For which software tasks do programmers use cannabis?-We now investigate how likely participants are to use cannabis while completing common software engineering tasks adapted from Tilley et al. [55]. Our full results are in Figure 2: programmers reported a higher likelihood of using cannabis while brainstorming or prototyping and a lower likelihood of using cannabis while performing quality assurance, requirements elicitation, or tasks with an imminent deadline. These results indicate that developers may self-regulate cannabis use for when it is most beneficial (i.e., for creative, open-ended tasks) while avoiding use for time- or safety-critical tasks. We note that participants who are unable to self-regulate cannabis use (e.g., are dependent on cannabis) may be unlikely to admit it in our survey. Even so, our results call into question the usefulness of blanket anti-cannabis policies. We investigate the motivations of these choices further in Section 5.3.

Cannabis use and remote work-Many developers work remotely. Also, the COVID-19 pandemic was ongoing during recruitment. We find that 52% (145/280) of cannabis-using programmers report they are somewhat or a lot more likely to use cannabis for work-related tasks when working from home compared to only 5% (13/280) who report that they are somewhat or a lot less likely to do so (p < 0.0001). Similarly, 29% (82/280) report increased programming cannabis use since the onset of COVID-19 compared to only 10% (27/280) who report a decrease (p < 0.0001), a result in line with other populations such as medicinal cannabis users [8]. This indicates that workplace culture, environment, and policies can tangibly affect the frequency of cannabis use while programming.

Developers most commonly choose to use cannabis during personal programming projects (63%). However, over a third of cannabis-using programmers also sometimes choose to use cannabis during work-related tasks, use that is more common during remote work. Programmers also self-regulate when they use cannabis: cannabis is more likely during creative, open-ended software tasks than during time- or safety-critical tasks.
RQ3: Cannabis Use Motivation

Having established that some developers regularly use cannabis while completing both personal and work-related programming projects, we now investigate why programmers use cannabis. Understanding why developers use cannabis is important because it can help inform company drug policies and developer support.

Overall Motivation Results-Our results on cannabis use motivations are reported in Table 3. Overall, we found that programmers were more likely to report enjoyment or programming enhancement motivations than wellness motivations: the most common reasons were "to make programming-related tasks more enjoyable" (61%) and "to think of more creative programming solutions" (53%). In fact, all programming enhancement reasons were selected by at least 30% of respondents. On the other hand, general wellness-related reasons (such as mitigating pain and anxiety) were all cited by less than 30% of respondents. Thus, while wellness does motivate some cannabis use while programming, it is not the most common motivation. This result is further corroborated by only 19% (54/280) of cannabis-using programmers indicating that they have a physician's recommendation to use cannabis medicinally. Additionally, of those that have such a recommendation, two-thirds report using cannabis for both medicinal and recreational reasons. This is important because it indicates that any cannabis policy should consider medicinal, recreational, and performance-enhancing marijuana use. We also investigated cannabis-use motivations by population pool (i.e., GitHub-recruited vs. University-recruited participants): while percentages varied slightly, the top four rationales were the same regardless of recruitment pool. Wellness responses were also consistently below programming-enhancement motivations.

Additional Qualitative Responses-While we leave a formal qualitative analysis to future work, we also observe the emphasis of programming-enhancement-related reasons for cannabis use while programming in the textual free-response section. For instance, one participant said that when using cannabis while programming, they are "able to better connect ideas and think about things on a broader level, which typically leads to more well-rounded solutions." Similarly, another participant stated that cannabis use while programming helped brainstorming, allowing them "to organize [their] thoughts better and keep them separate, which helps [them] follow threads further and come up with new paths to follow".

Beyond cannabis's apparent usefulness during programming itself, participants also cite its usefulness during adjacent tasks. For example, one participant said that they used cannabis "to stay awake/focused when . . . grading 70 programming assignments. If [I] have a deadline and need to binge work for many hours, it is easier to keep going if I periodically get high. When your life is mostly work, cannabis is something that makes it bearable."

Finally, some participants indicated reasons other than enjoyment, wellness, or programming-enhancement for using cannabis while programming. For instance, some participants indicated that cannabis use while programming was a coincidence of regular cannabis use, while a few others said they tried cannabis to enhance programming but did not observe any effect. Finally, while we observed little quantitative evidence of negative experiences of cannabis use while programming, we did observe a few qualitative reports.
For example, one participant stated: "I generally don't use [cannabis] while programming because [. . . ] it affects my short-term memory, which is a huge part of programming for me [. . . ] My managers wouldn't have an issue with me using cannabis during my job, but I do have an issue just due to the nature of cannabis." Similarly, another participant stated "I wanted to see if [cannabis] would help, but all it did was make it harder to keep track of what I was doing. I'm glad I tried it, but I wouldn't do it again." These quotes show that even among cannabis-using programmers, there is a wide array of experiences and reactions, a variance that invites further and more in-depth qualitative analysis.

Table 3: "Why do you use cannabis while programming, coding, or completing other software engineering-related tasks?" Participants could select multiple choices. "Cat" delineates a particular motivation as programming enhancement (P), enjoyment (E), or wellness (W). % is out of the 243 who selected at least one option.

Implications-As a result of these conflicting experiences, our findings raise the question of whether the programming-related benefits of cannabis perceived by some developers actually translate to verifiable programming enhancement. For example, even though many participants report using cannabis to enhance programming creativity, it is unclear if this actually occurs. We note this concern was raised by some non-cannabis-using programmers. For instance, one such programmer wrote that they "had a series of developers work for [them] that used cannabis to varying degrees. All of them fully believed that it made them better engineers, that it sparked creativity and capability. From the outside however the results have consistently not been that way. Not just from less code productivity, but far more often inability to work as well with others, and code quality issues." Thus, our results motivate future observational study of the effects of cannabis on programming. This motivation and considerations surrounding any such study are discussed further in Section 6.

We found that programmers who use cannabis were more likely to be motivated by potential programming ability enhancement or programming enjoyment than wellness-related reasons. This pattern was observed in both open source developers and university participants, and motivates future work investigating if the perceived programming benefits of cannabis manifest in a more rigorous observational study.

RQ4: Perception of Cannabis Use

We also investigate how programmers perceive cannabis use. Understanding this perception is important because if perception varies from actual usage, this may result in sub-optimal cannabis-related policies or biases in programming environments. We investigate cannabis perceptions in general and programming-specific contexts. For the former, we compare to national cannabis attitudes surveys. For the latter, we analyze attitude differences between programming students, employees, and software managers. We then compare any differences to each group's cannabis usage.

General Cannabis Perceptions-Programmers in our sample have more positive attitudes towards cannabis than the population overall. For example, 91% of our participants say that marijuana use should be legal for both recreational and medicinal use compared to 60% of the general United States population in 2021 [56].
Similarly, only 5% of our population views smoking marijuana once or twice a week to be of "great risk" as opposed to 29% of the US population [26]. This difference is likely explained by the demographic differences in age, gender, and political leaning between programmers in our sample and the population overall (see Section 4).

Programming and Cannabis Perceptions Setup-To understand cannabis perceptions in a programming context, we first ask participants to rate their approval or disapproval of someone who uses cannabis while working with them on software engineering tasks such as programming, brainstorming, debugging, or security testing, on a 5-point Likert scale (from -2 for disapproval to +2 for approval). We average these responses into an overall "cannabis approval score". This approval/disapproval format was adapted from previous research on cannabis attitudes [2]. Second, we indirectly asked about cannabis use visibility by asking if respondents knew of a colleague who regularly used cannabis while programming on a 5-point Likert scale (from -2 for a solid no to +2 for a solid yes). The exact wording varied for student, employee, or manager participants. Differentiating between groups admits investigating more nuanced perception differences. For example, we ask employees to indicate if they think their manager would disapprove of cannabis use while we ask managers if they actually would disapprove. Even though this is not a perfect comparison, as managers in our sample do not necessarily manage employees in our sample, it still allows a first investigation into perception differences between these two groups overall. Table 4 lists our population-specific questions.

Perceptions of Professionals vs. Students-For student programmers, we considered those who were currently a student in computer science, software engineering, or another programming-related field. For professional programmers, we consider those non-students who were either full-time employees, part-time employees, or self-employed with a programming job. We hypothesized that students would approve more of cannabis use. However, we find no evidence of a difference between employee approval of colleagues (question 2) and student approval of fellow students (question 1) for using cannabis while completing programming tasks: the average from both groups was between neutral (0) and slight disapproval (-1) (-0.25 for students on students and -0.36 for employees on colleagues, p = 0.26). This indicates that perceptions of cannabis use while programming are similar in academic and professional contexts.

For visibility, however, we do observe a significant difference. Responses to questions 5 and 6 show that students are significantly more likely to know another student who regularly uses cannabis while programming than a professional programmer is to know a colleague who does the same (p < 0.0001): 48% of students report knowing or probably knowing a fellow student who regularly uses cannabis while programming compared to only 23% of professional programmers. However, we observe no significant differences in cannabis usage prevalence or frequency between the two groups, though fewer students than professionals report cannabis use while programming (32% vs. 38%, p = 0.09).
This finding may represent cultural differences despite similar approval and usage levels between the two groups, differences perhaps driven by higher levels of support for cannabis legalization among younger Americans [56] or the fact that some students were under the age of 21, the most common age limit for legal cannabis use in the United States.

Perceptions of Managers vs. Employees-For programming employees, we consider full-time, part-time, and self-employed participants with a programming job. Managers were those who reported they were managers. The average level of expected manager disapproval of cannabis use by employees was between slight and strong. However, managers reported an actual disapproval level between neutral and slight, a significant difference (questions 3 and 4, p < 0.0001). We found no significant difference between manager disapproval of their supervisees and employee disapproval of their colleagues (questions 2 and 4, p = 0.15), both reporting between neutral and slight disapproval. We also found no significant cannabis usage differences between employees and managers. Additionally, managers rarely report witnessing negative cannabis-related effects: while 27% (51/189) of managers suspect a supervisee uses cannabis, less than 3% (5/189) report that such programmers were less productive, and only one reported reprimanding an employee for cannabis use. Thus, our results indicate a perception mismatch of programming cannabis use between employees and managers: employees expect managers to disapprove of such use more than they actually do.

This mismatch between managers and employees is further compounded by a difference in cannabis-use visibility. Professional programmers are more likely to know or probably know a colleague who regularly uses cannabis while programming than they are to know a manager who regularly uses cannabis while programming (23% vs. 8%, p < 0.0001). One potential implication is that if managers are ambivalent on cannabis use while programming, then corporate policies might be adjusted to avoid repercussions for cannabis use as long as work performance remains reasonable.

We found that US-based programmers have more favorable views on general cannabis use and legalization than the American population overall. In programming-specific contexts, we found differences in perception and/or visibility of cannabis use while programming between programming students, employees, and managers despite finding no significant differences in cannabis usage between the three groups. For example, we found that managers disapprove of cannabis less than employees expect them to. These results may indicate a mismatch between perception and reality.

Table 4: Student, employee, and manager perception questions (wordings modified slightly for clarity). Columns give the question ID, the perception question, answer choices, the population asked, the number of responses, and the score. For Q ID 1-4, participants gave approval for programming, brainstorming, debugging, and security testing, then aggregated for an overall participant score. The score column represents the average Likert score (from -2 to +2) for all "Response" participants.

DISCUSSION

In this paper, we report that cannabis frequently interacts with programming in both educational and professional software engineering contexts. However, our work is also only a first step toward understanding this intersection. In this discussion, we consider the impacts of our findings on programming workplace culture and policies.
We also discuss questions raised by our results and potential directions for future work, including focus groups and observational studies. We conclude with a broader discussion of the intersection of cannabis and software engineering in general.

Cannabis Use and Company Policies-Our findings have implications for software company anti-drug policies. While perhaps less prevalent than in other industries (notably, many FANG (Facebook, Amazon, Netflix, Google) companies do not drug-test employees), anti-cannabis regulations remain common in programming environments (e.g., [16, p. 12] and [30, pp. 12-13]), policies often enforced through drug testing: 29% of our participants reported taking a drug test for a programming job. At the same time, we found cannabis use is common among programmers, with 35% of our sample having used cannabis while programming, 34% of whom sometimes do so for work-related tasks (especially during remote work). Cannabis-using programmers typically use THC products, the chemical detected by most drug tests. Furthermore, perceptions of cannabis use while programming are mostly ambivalent from both managers and employees, and we observe limited reports of cannabis use negatively impacting programming workplaces: while 27% of programming managers suspect cannabis use by a supervisee, less than 3% report that those programmers were less productive, and only one manager in our sample reported reprimanding an employee for cannabis use. Thus, our results indicate that software company anti-drug policies may conflict with the preferred practice of many developers, a dissonance that can lead to smaller application pools and hiring difficulties for drug-testing jobs (as evinced by the 2014 FBI cybersecurity hiring shortage [34]). Our results support additional study and reassessment of extant anti-drug policies for programming jobs, including further contextualization of when and how often programmers currently take drug tests for programming-related jobs and whether such policies are actually necessary or effective.

Outstanding Questions and Future Directions-One direct question is functional: does cannabis use while programming actually affect code quality, and if so, how? From creativity to readability to defect density, there are a number of purported effects of cannabis use on code. This question is particularly compelling because programming cannabis use is often motivated by the desire to enhance programming skills such as brainstorming and focus (see Section 5.3). Observational studies, in which programmers complete various software tasks with and without cannabis, provide one way to investigate such questions. Both functional outcomes and medical markers (e.g., eye tracking, blood testing, etc.) are relevant. Such studies are currently legally challenging in most parts of the world. However, cannabis laws are increasing in leniency, and in the US, both CBD and THC have synthetic versions approved for medicinal use by federal regulatory boards. Finally, some observational studies have been conducted with real-world cannabis. For example, Bidwell et al. measured impairment with state-legalized cannabis (comparing THC dosages) using a series of cognitive tests [5], encouraging the pursuit of similar observational studies in a software context. Other research techniques may be more immediately applicable. Qualitative studies employing interviews or focus groups could deepen understanding of cannabis programming culture.
Such studies could also help identify if and when anti-drug policies impact job application or hiring decisions. Other directions include additional survey-based research on other programmer sub-populations (such as workers in public vs. private companies) or on programmers' cannabis usage in countries beyond the United States.

Broader Software Engineering Impacts-Our work also has broader implications for software engineering ecosystems and cannabis-connected human-computer interactions. The intersection of cannabis use and human-computer interaction remains unexplored, in both general and software contexts. As cannabis becomes more acceptable, it may interact more with socio-technical issues including accessibility, end-user security, data privacy, and programming identity. This possibility was emphasized with "CHI-nnabis", a recent panel at Computer Human Interaction (CHI) calling for more research at the intersection of cannabis and technology [33]. While we focused on cannabis use, there may be other understudied mind-altering substances that interact with programming. For instance, while psychedelics had connections to the nascent commercial computing industry [44,61], little is known about their current prevalence in software engineering, let alone the accuracy of any perceived benefits. Similarly, while college students often abuse prescription stimulants as "study drugs" [57], little is known about the use of and culture surrounding such drugs by computing students or in a programming context. Overall, we hope our work spurs research on the intersection of programming and mind-altering substances from multiple angles, such as company drug policies, programming productivity, and socio-technical considerations.

LIMITATIONS AND THREATS TO VALIDITY

Other limitations include self-selection bias and data quality concerns, common for survey research in general and for self-reported drug research in particular [27]. Due to interest in the topic, cannabis-using programmers may be more likely than non-users to participate. Simultaneously, cannabis users may be less willing to report use due to retaliation worries. To mitigate these biases, we made it clear that prior cannabis use was not necessary (see Section 3), and our survey was anonymous and confidential to encourage honesty. To check for self-selection bias, we validated that key demographics in our sample match those of our recruitment populations (see Section 4). We also note that online cannabis use surveys typically produce high-quality and internally consistent data [50]. Even so, these biases are important when interpreting our findings and may result in our findings overestimating the prevalence of cannabis use while programming. One additional limitation is timing during COVID-19: our findings may be influenced by this period of primarily remote work. We mitigate this threat by asking about cannabis-use behavior changes resulting from the onset of COVID-19 (see Section 5.2). We note, however, that while this is a potential limitation, it also admits timely and useful analysis in light of our finding that remote work increases cannabis use while programming for work-related tasks.

CONCLUSION

In this paper, we presented results of the first empirical study of cannabis's prevalence, perceptions, and usage motivations in programming environments. To do so, we conducted a survey of 803 programmers (including 450 full-time professional developers) recruited from open source, university, and social media programming communities.
We found that some programmers regularly use cannabis while programming (18% of our sample do so at least once a month), many choosing to use cannabis for both personal and work-related projects. Furthermore, we find that cannabis use while programming is primarily motivated by perceived enhancement of programming-related skills and increased enjoyment rather than by medicinal reasons. Finally, we find that programming employees, managers, and students use cannabis while programming at similar rates, despite differences in cannabis perceptions and visibility. Such cannabis usage, however, is in conflict with anti-drug policies currently enacted for many software engineering jobs: 29% of our sample reported they had taken a drug test for a programming-related job, a hiring practice that may limit developer application pools. Thus, our results have implications for programming workplaces that currently have anti-drug policies and motivate future research into the effects of cannabis use while programming.
Doorways do not always cause forgetting: a multimodal investigation

Background: The 'doorway effect', or 'location updating effect', claims that we tend to forget items of recent significance immediately after crossing a boundary. Previous research suggests that such a forgetting effect occurs both at physical boundaries (e.g., moving from one room to another via a door) and metaphysical boundaries (e.g., imagining traversing a doorway, or even when moving from one desktop window to another on a computer). Here, we aimed to conceptually replicate this effect using virtual and physical environments.

Methods: Across four experiments, we measured participants' hit and false alarm rates to memory probes for items recently encountered either in the same or the previous room. Experiments 1 and 2 used highly immersive virtual reality, without and with working memory load (Experiments 1 and 2, respectively). Experiment 3 used passive video watching and Experiment 4 used active real-life movement. Data analysis was conducted using frequentist as well as Bayesian inference statistics.

Results: Across this series of experiments, we observed no significant effect of doorways on forgetting. In Experiment 2, however, signal detection was impaired when participants responded to probes after moving through doorways, such that false alarm rates were increased for mismatched recognition probes. Thus, under working memory load, memory was more susceptible to interference after moving through doorways.

Conclusions: This study presents evidence that is inconsistent with the location updating effect as it has previously been reported. Our findings call into question the generalisability and robustness of this effect to slight paradigm alterations and, indeed, what factors contributed to the effect observed in previous studies.

Background

Our experience of the world is continuous and rich with information. To manage this constant stream of information, we segment our experience into events, which are stored in episodic memory for later retrieval [22]. Events are delimited by boundaries, denoting the beginning and end of a particular period of time. Salient environmental changes are thought to dictate the location of event boundaries (e.g., a change in location, a shift in goal, etc.; [23]). A commonly encountered event boundary is a doorway. Previous research has demonstrated that long-term memory for the temporal order of items is better for items presented within the same room [4] or context [2,22] than for items presented across different rooms or contexts. Short-term memory is also reduced for items that were presented before an event boundary. For example, while reading, memory for words preceding the phrase "An hour later" is worse than for words preceding the phrase "A while later", as the former functions more like an event boundary [21]. Similarly, research suggests that walking through doorways-in reality, in virtual reality, and even in our imagination-causes us to forget information obtained in the previous room. The effect of declined memory performance after passing through a doorway or another event boundary has come to be known as the location updating effect [16], but is also referred to as the doorway effect or the event horizon effect [14]. In the initial demonstration of the doorway effect [15], participants played a computer game in which they freely navigated a 3D environment. The environment consisted of a series of rooms, each containing a table with an object on top.
Participants were tasked with moving each object from one table to the next, which would be either in the next room connected by a door ("shift" condition) or in the same room ("no shift" condition). Halfway along the trajectory, participants' recognition memory was probed with an object description (a colour-shape pair; e.g., "red cube") that either matched the object they were carrying, the object they had set down on the previous table, or neither. The results of the study revealed that, after passing through a doorway, participants would more often fail to recognise the probes (reflected by a reduced hit rate) for the objects they were carrying than if they had not passed through a doorway [15]. Numerous iterations of this experiment have explored the robustness of the doorway effect. These studies have found that the effect persists regardless of the type of probe (text vs. images [18]; recognition vs. recall [10]), travel time [8,9], the level of immersion (small screens, big screens, or real-life environments [16]), the mode of interaction (active vs. passive [11]; real or imagined [6,12]), age [17], whether the dividing wall is transparent or opaque [8], whether there were additional items to remember [18], or whether participants were probed after returning to the room the item was first encoded in [16]. The underlying cause of the location updating effect is thought to relate to temporal prediction, such that the contents of working memory acquired during one event are highly predictive while still in that event and weakly predictive of any upcoming new event, which will have its own new set of statistical regularities [19]. Hence, the information is cleared from working memory when the event boundary is crossed [7]. Within this framework, it seems somewhat surprising that the doorway effect is so robust across the literature, as all the events are relatively similar, both in terms of their visual features and the participants' goals (i.e., the only task is to remember the set-down and picked-up objects). Why, then, does the doorway effect persist, even when the predictive validity of task information from previous events is relatively high? The aim of the current study was to examine whether boundaries created by doors induce forgetting under different experimental conditions, ranging from virtual reality to real-life movement (see Figs. 1, 2). First, in Experiments 1 and 2, we conceptually replicated key elements of Radvansky and colleagues' study design while controlling for a number of additional factors (see Experiments 1 and 2: Aim). Second, in Experiments 3 and 4, participants either passively (via video watching) or actively moved through an actual environment with or without a boundary (see Experiments 3 and 4: Aim).

Experiments 1 and 2

Aim

The aim of Experiment 1 was to conceptually replicate the original study demonstrating the doorway effect [15] under controlled conditions. We increased immersion by using a full virtual reality headset and designed the virtual environment so that all rooms were visually identical, as opposed to previous studies [15] where the walls were different colours. Thus, in our experiment, any forgetting could only be attributed to boundaries rather than salient changes in context or visual processing. We also more than doubled the number of trials (from 51 to 110) so as to maximise statistical power.
We hypothesised that, if the doorway effect is indeed solely attributable to door boundaries rather than extraneous experimental factors, we would observe impaired recognition memory in the form of fewer hits and more false alarms. In Experiment 2, we incorporated an additional task that increased working memory load, in which participants counted backwards aloud from a given number during the first half of the movement trajectory. The event horizon model stipulates that working memory is updated at boundaries, replacing the previous event model with the new event model [14]. By filling working memory capacity with an extraneous task, we hypothesised that the previous event model would be even more susceptible to being "flushed" from working memory when working memory is already overloaded [7].

Participants

We estimated the Cohen's d effect size using d = (M1 − M2) / s_pooled, where s_pooled = √[(s1² + s2²) / 2]. This revealed that the size of the doorway effect across a range of comparable studies was d = 0.66 [4, 6, 8-11, 15, 18]. A power analysis revealed that 27 participants would be required for a paired t-test with a typical α = 0.05 and a power (1 − β) of 0.9. Radvansky et al. [16] stated that 16 pairs of participants would be required to detect the doorway effect using an independent-samples t-test. Previous studies report significant effects from samples of between 40 and 60 participants [6,9,15,16,18], as well as from smaller samples of 16-30 [10,11,17]. For Experiment 1, we recruited 40 participants through the University of Queensland's paid research participation scheme, which draws from adults within the local community. Participants were compensated AUD$20 per hour for their time and provided written consent. Of the 40 participants, 9 aborted the experiment due to motion sickness and 2 participants were excluded due to poor performance (< 20% accuracy in any condition). This left a final sample of 29 participants, consisting of 13 males and 16 females aged between 18 and 33 years (M = 23, SD = 4.06; age data missing for 1 participant). For Experiment 2, we recruited 63 first-year psychology students from the University of Queensland who received course credit for their time. All participants provided written consent and were required to have normal or corrected-to-normal vision and normal colour vision. Of the 63 participants, 14 aborted the experiment due to motion sickness, and 4 participants were excluded due to poor performance (< 20% accuracy in any condition). This left a final sample of 45 participants, consisting of 20 males and 25 females aged between 18 and 45 years (M = 23.65, SD = 6.36; age data missing for 11 participants). This study was approved by the University of Queensland's Human Research Ethics Committee.
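To make the effect-size and sample-size computations above concrete, here is a minimal sketch in Python; it is an illustrative reconstruction under stated assumptions (placeholder inputs; the statsmodels package is assumed available), not the authors' analysis code.

import numpy as np
from statsmodels.stats.power import TTestPower

def cohens_d(m1, m2, s1, s2):
    # Pooled SD as defined above: sqrt((s1^2 + s2^2) / 2).
    s_pooled = np.sqrt((s1 ** 2 + s2 ** 2) / 2)
    return (m1 - m2) / s_pooled

# Required sample size for a paired t-test at alpha = .05 and power = .9,
# given the average doorway-effect size (d = 0.66) across comparable studies.
n = TTestPower().solve_power(effect_size=0.66, alpha=0.05, power=0.9,
                             alternative="two-sided")
print(n)  # roughly 26-27 participants; the text reports 27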
Stimuli and equipment

Two similar virtual environments (one map per block) were created using Unreal Engine 4 (Epic Games, 2019). Participants viewed the environment with an HTC Vive headset and interacted with the environment using left and right HTC Vive wireless controllers. Within the virtual environment, participants were situated inside a brick building containing Y-shaped rooms (see Fig. 1). Each room contained a white table with two grey circular platforms on top. The left platform was empty (for participants to put objects on) and the right platform had an object on top (for participants to pick up). While an object was present on a platform, a white shield became visible to hide its contents from the participant. The shield disappeared upon intersecting with the participant's virtual hand (controlled by a wireless controller). This was done to equate visual exposure to the set-down and picked-up objects as much as possible. The 3D objects were created in Blender v2.79 (Blender Institute, Amsterdam). There were 6 different shapes (cube, cone, pole, disc, cross, and wedge), approximately 10 × 10 × 10 cm in size, each of which could be one of 6 colours (red, blue, cyan, green, yellow, and purple), similar to previous studies [15]. The order of objects across trials was pseudorandom, so that the same colour or shape was not repeated on consecutive trials and so that there were roughly equal instances of each object shape and colour across a block (see the sketch at the end of this section). The table was always situated at the top of the Y-shaped room. The two forks of the room always consisted of a wall with a door on one side ("shift" condition) and no wall ("no shift" condition) on the other side. The doors were elevator-style, consisting of two vertical slabs that moved apart as the participant approached and passed through. Whether the door was on the left or the right was randomly counterbalanced across each map. This was done so that, before picking up the object, participants could not predict whether they would pass through a door or not (and thus there could be no influence of shift on initial memory encoding). This improves upon previous studies [15], where "shift" rooms were small and "no shift" rooms were large with a darkened section, such that a doorway effect could be attributable to either the boundary crossing or the way items were initially encoded. Two different maps were generated, with one used for each block (order counterbalanced across participants). There were 61 rooms in each block, giving 60 transitions (30 shift and 30 no shift).
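One way to generate such a constrained pseudorandom object order is sketched below; this is an illustration only (not the authors' code), under the assumption that the no-repeat rule applies to consecutive trials, and it leaves exact balancing of colour and shape counts to the pseudorandomisation.

import itertools
import random

SHAPES = ["cube", "cone", "pole", "disc", "cross", "wedge"]
COLOURS = ["red", "blue", "cyan", "green", "yellow", "purple"]

def object_sequence(n_trials, seed=0):
    rng = random.Random(seed)
    pool = list(itertools.product(COLOURS, SHAPES))
    seq = [rng.choice(pool)]
    while len(seq) < n_trials:
        colour, shape = rng.choice(pool)
        prev_colour, prev_shape = seq[-1]
        # Reject candidates repeating the previous trial's colour or shape.
        if colour != prev_colour and shape != prev_shape:
            seq.append((colour, shape))
    return seq

print(object_sequence(5, seed=1))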
Procedure

First, participants were seated at a desk where they provided written consent. The experimenter then verbally explained the task and showed the participants pictures of each object shape and colour (and their corresponding labels) before fitting the HTC Vive headset. Participants were virtually moved through the environment while in a seated position. At the beginning of each trial, participants faced a table with an object on top. Participants were instructed to use the right controller to pick up the object (by holding the back trigger button to 'grip' the object) and put it in their virtual "backpack" (by moving the controller behind their head and releasing the back trigger). Upon object release, participants were passively moved backwards, turned left or right (either towards a door or towards the other, open part of the same room), and then moved towards the next table. Upon reaching the next table, participants took the previous object out of their "backpack" (by reaching behind their head with their left controller and holding the back trigger) and placed it on the empty grey platform on the table (by releasing the back trigger). They then repeated the process by picking up the next object on the right, memorising both the object they had just set down (the "dissociated" object) and the one they next picked up (the "associated" object). Participants' memory for the associated and dissociated objects was probed by a screen that appeared halfway through the movement trajectory between tables (in a "shift" condition, this occurred immediately after passing through the doors). The screen presented text for a colour (e.g., "blue") and a shape (e.g., "cube"), followed by a question mark. Underneath were buttons for "yes" and "no", which participants could select with their left or right controller, respectively (no movement required). The colour-shape probes described either the associated object (e.g., "blue cube"), the dissociated object (e.g., "red cone"), or an incorrect combination of the two (e.g., "blue cone" or "red cube"). These latter probes are referred to as "negative" probes. Participants were instructed to answer "yes" if the colour-shape probe matched either the associated or the dissociated object, and "no" otherwise. Participants were encouraged to maximise accuracy, but also to keep response times short (no feedback was given). Participants were also instructed to keep their eyes open and not to say the object names out loud. For Experiment 2, a counting task was introduced to increase working memory load. After participants released the object into their inventory, the experimenter provided a random number between 20 and 100 (using a random number generator, with the result spoken aloud). Participants were required to count backwards from the number aloud in steps of 6 (e.g., from 60: "54, 48, 42, 36…") until the probe screen appeared. Participants were encouraged to count as far back as they could within the time frame (approximately 4 s) while still memorising the two objects as accurately as possible. In block 2, the subtraction value was changed to 7 to prevent repetition. In certain cases, the counting decrement was adjusted after the first block to account for individual differences in mathematical ability. Decrements were made easier (to steps of 4 or 5) if participants could only count back 2 numbers or fewer (seven participants), or harder (to steps of 13) if participants could count back 5 numbers or more (six participants). Thus, participants were typically able to count back 3 or 4 numbers before the probe appeared. The duration of each block was approximately 25 min.

Figure 1 (caption): (a) Upon reaching a new table, participants were required to first place the object acquired in the previous room on the new table, by reaching behind their head with the left controller, "taking the object out of their backpack" by holding the back trigger, and then releasing the trigger when positioned over the table. (b) Participants then picked up the next object and placed it in their backpack, by gripping the back trigger and reaching behind their head, and then releasing the trigger. (c) Upon releasing the object into their backpack, participants were passively moved backwards, then turned left or right (either towards a door or towards another part of the room) and moved towards the next table. Halfway along the trajectory, a probe screen appeared with an object description (colour and shape). Participants responded "yes" (right controller) or "no" (left controller) as to whether the probe described either the object that was most recently set down or the object that was most recently picked up. Probes were always a combination of the colour and shape of the set-down and picked-up objects (here, the probes could be: "green pole", "yellow cross", "green cross", or "yellow pole"). (d) A bird's eye view of an example map layout, with 6 trials ("shifts" indicated by solid red arrows and "no shifts" indicated by dashed purple arrows). All images in the figure have been created by the authors.
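As a concrete illustration of the serial-subtraction counting task described in the Procedure above (a sketch, not the authors' software; the step sizes follow the text):

def counting_sequence(start, step=6, n=4):
    # E.g., from 60 with steps of 6: [54, 48, 42, 36] -- roughly what a
    # participant could say aloud in the ~4 s before the probe screen.
    return [start - step * (i + 1) for i in range(n)]

print(counting_sequence(60))          # block 1: steps of 6
print(counting_sequence(60, step=7))  # block 2: steps of 7 to prevent repetition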
Experiment 1

In Experiment 1, we aimed to conduct a highly controlled conceptual replication of the doorway effect by using a highly immersive environment and controlling for elements such as context (all rooms were identical) and anticipation (it was not possible to know the doorway condition until after movement was initiated). We recorded accuracy and response time, excluding trials that were longer than 10 s (indicating a pause in the experiment) or shorter than 0.25 s (indicating an accidental button press), as well as the first 10 trials of the first block (due to ongoing instruction from the experimenter). We then removed any trials with outlying response times (± 3 SDs from each participant's mean). This left 100 to 110 trials per participant, with at least 14 trials per condition (M = 17, SD = 1.58). The mean hit rate across conditions per participant ranged from 81.57% to 100% (M = 94.67%, SD = 5.54%; see Fig. 3a and Table 1). The mean false alarm rate across conditions per participant ranged from 0% to 46.55% (M = 8.31%, SD = 11.05%). We drew upon signal detection theory and calculated the sensitivity index d' and the C bias parameter for the associated and dissociated probes, per shift condition, using the hit rate and false alarm data (see Fig. 3c). We corrected for extreme proportions (i.e., 1 and 0) by using the log-linear rule, whereby a constant of 0.5 was added [3]. Finally, we analysed the response time data (see Fig. 3d and Table 1). Overall, these results demonstrate evidence in favour of there being no effect of shift on signal detection sensitivity or response bias for associated or dissociated objects.
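For reference, the signal detection indices used above can be computed as follows; this is a minimal sketch with placeholder counts (not data from the study), applying the log-linear correction to avoid infinite z-scores at proportions of 0 or 1.

from scipy.stats import norm

def sdt_indices(hits, n_signal, false_alarms, n_noise):
    # Log-linear rule: add 0.5 to each count and 1 to each trial total [3].
    h = (hits + 0.5) / (n_signal + 1)
    f = (false_alarms + 0.5) / (n_noise + 1)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -(norm.ppf(h) + norm.ppf(f)) / 2
    return d_prime, criterion

print(sdt_indices(hits=17, n_signal=18, false_alarms=2, n_noise=18))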
Experiment 2

To address the ceiling effect observed in Experiment 1, we introduced a distractor task that would interfere with object memorisation and thus encourage forgetting. After picking up the object and releasing the trigger (initiating movement), participants had to verbally count backwards, in sixes, from a random number provided by the experimenter until the probe screen appeared. Hence, this task increased working memory load during the period between interacting with the objects on the table and being probed by the question screen. As expected, the mean hit rate in Experiment 2 was lower overall at 81.79% (SD = 13.13%, range = 41.71% to 98.68%; see Fig. 4a and Table 1), after removing trials according to the same criteria as in Experiment 1 (minimum 13 trials per condition, M = 18, SD = 1.94). Going through doorways significantly reduced sensitivity to object probes and induced an overall bias towards reporting "yes". To further investigate the nature of this effect, we performed paired t-tests on the hit rates and false alarms separately for associated, dissociated, and negative probes, similar to previous studies [15]. Although the accuracy data in Experiment 2 were not as highly negatively skewed (skewness ranged from -1.23 to -0.23) as in Experiment 1, owing to the reduction of the obvious ceiling effect, the residuals were still not normally distributed (Shapiro-Wilk tests for four out of six conditions were significant: p < 0.038). Accordingly, we computed non-parametric two-tailed exact sign tests. These tests revealed that the doorway effect was significant only for negative probes (33 participants had lower accuracy after a shift, 9 had higher accuracy, and 3 showed no difference, p < 0.001), but not for associated (28 participants had lower accuracy after a shift, 17 had higher accuracy, and 4 showed no difference, p = 0.136) or dissociated (19 participants had lower accuracy after a shift, 20 had higher accuracy, and 6 showed no difference, p = 1) probes. Paired Bayesian t-tests also provided extremely strong evidence for a shift effect on negative probes (BF10 = 68.563) and sufficient evidence for the null hypothesis of no shift effect on associated (BF01 = 4.322) or dissociated (BF01 = 6.115) probes. These results indicate that the reduction in d' after a shift was predominantly due to a higher false alarm rate for negative probes, rather than a reduced hit rate for associated and dissociated probes. Similarly, the significant shift in response criterion towards saying "yes" was primarily due to the increased false alarm rate for negative probes. Overall, the findings from Experiment 2 suggest that, under conditions of working memory load during memorisation time, doorways do impair mnemonic performance, but not due to forgetting (i.e., fewer hits and more misses, as typically reported by previous research [15]). Instead, doorways increased the false alarm rate to negative probes.
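The exact sign tests above reduce to a binomial test on the non-tied participants; here is a minimal sketch using the counts from the negative-probe comparison (scipy >= 1.7 is assumed for binomtest):

from scipy.stats import binomtest

lower, higher = 33, 9  # lower vs. higher accuracy after a shift (negative probes)
# The 3 participants with no difference (ties) are excluded from the sign test.
result = binomtest(lower, lower + higher, p=0.5, alternative="two-sided")
print(result.pvalue)  # < 0.001, as reported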
Experiments 3 and 4

Aim

In Experiments 3 and 4, we aimed to conceptually replicate previous experiments that demonstrated the doorway effect in real-life contexts [16]. Experiment 3 consisted of passively watching a video from a first-person perspective of someone traversing a corridor with or without curtain boundaries (see Fig. 2). Experiment 4 involved active navigation through the same corridor. The curtain set-up closely resembled that used by a previous study demonstrating the doorway effect during imagined navigation [6]. Also, similar to previous real-life investigations into the doorway effect [6,10,12,16], we increased working memory load (counting task) and had participants memorise multiple items to increase task difficulty. We hypothesised that, should the doorway effect be robust even to impermanent boundaries (e.g., curtains) and to returning to the original context (as has been shown previously [16]), then participants would demonstrate impaired memory performance after crossing a boundary.

Participants

Participants provided written consent, received partial course credit for their participation, and were also given the chance to win a $50 gift voucher after completing the study.

Stimuli

Both experiments used a hallway in the behavioural research building at Bond University as the spatial navigation environment. The hallway was 16.45 m long, 2.36 m wide, and 2.35 m high. The hallway contained no furniture, was brightly lit, and had task-irrelevant doors to other rooms on either side (see Fig. 2). To create boundaries for the "shift" condition, blue curtains were hung from the ceiling, segmenting the hallway into 3 sections (each approximately 3.5 m long). The curtains had a split in the middle that hung slightly open. In Experiment 3, participants viewed a video from a first-person perspective (i.e., filmed from eye level) that simulated the experience of walking down the hallway and back again, either with curtains for the "shift" condition (two event boundaries, each crossed twice) or without curtains for the "no shift" condition. To reduce stimulus repetition and maximise participant engagement, we recorded 5 different videos (approximately 45.2 s in duration) of the same walk for each condition (10 videos in total) using an iPhone 6 (f/2.2, 8 megapixels), turning either left (2 videos) or right (3 videos) at the end of the hallway. Participants were required to memorise photographs of butterflies. There were 16 photographs of unique butterfly species. The stimuli were printed in a 4 × 4 grid and subsequently cut out into 10 × 10 cm squares so that they could be arranged in different configurations when presented to the participant. Twenty-five grids were pseudorandomly generated for the experiment.

Procedure

In Experiment 3, participants were seated in a dark room on a swivel chair at a desk upon which the 16 butterfly stimuli were arranged in a specific grid layout (one of the 25 layouts). After the lights were turned on (revealing the stimuli on the desk), participants were given 30 s to memorise the location of each butterfly in the grid. After 30 s, the lights were turned off and participants were required to spin their chair around to face an open laptop on a desk behind them. The participant then used the laptop track pad to press play on the video of the hallway walk. Participants were encouraged to imagine they were the person walking down the hallway. As in Experiment 2, participants were also required to count backwards out loud in decrements of 3 from a random number between 90 and 100 (provided by the experimenter). While the participant watched the video, the experimenter stacked and shuffled the photographs behind the participant. After the video ended, the lights were turned back on. To ensure participants paid attention during the video, the experimenter asked the participant whether they had turned left or right at the end of the hallway (all participants answered 100% correctly). After this, participants were given 45 s to rearrange the butterflies into the grid formation they had memorised. Overall, there were 24 trials that alternated between the shift and no shift conditions (the starting condition was counterbalanced across participants). The procedure for Experiment 4 was essentially the same as for Experiment 3, except that the participants actually completed the walk themselves instead of watching a video. Participants memorised the butterfly stimuli for 30 s while seated at a desk at one end of the hallway. The experimenter then collected the stimuli while the participant stood up and completed the walk, counting backwards out loud in decrements of 3 from a random number between 90 and 100 provided by the experimenter. Participants freely chose to turn left or right at the end of the hallway. Upon return, the participants were given 45 s to rearrange the stimuli into the memorised grid formation. In both Experiments 3 and 4, participants completed an initial practice trial. On each trial, the experimenter recorded the number of stimuli placed by the participant in the correct grid location. Finally, once the experiment was complete, participants were questioned about which condition they believed was more challenging (i.e., which condition they personally believed had made it more difficult for them to remember the stimuli).
Results

For both Experiments 3 and 4, we conducted a two-way ANOVA (shift, no shift), with condition order (shift or no shift completed first) as a between-subjects factor, to determine whether memory for the butterfly grid positions had been impaired after passively (Experiment 3) or actively (Experiment 4) experiencing the doorway transitions. Accuracy was calculated as the percentage of correctly placed items out of a possible 16. We found that performance was not significantly different between the shift and no shift conditions in either experiment. There was no significant interaction between shift and condition order in Experiment 3 (F(1,24) = 2.762, p = 0.110, η² = 0.103, BF10 = 0.169). Crucially, condition order was counterbalanced across participants and thus did not confound the shift condition. Altogether, these results suggest that crossing event boundaries, either imaginatively through watching a video or by actually moving in a real-life environment, did not influence memory, even on a relatively more difficult task (mean accuracy was 36.30% ± 12.85% and 40.22% ± 12.17% for Experiments 3 and 4, respectively).
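A hedged sketch of the two-way mixed ANOVA above (shift as the within-subject factor, condition order as the between-subjects factor), using the pingouin package as one possible implementation; the data frame below is a placeholder, not the study data.

import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "pid":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "shift": ["shift", "no_shift"] * 6,
    "order": ["shift_first"] * 6 + ["no_shift_first"] * 6,
    "acc":   [35.0, 38.0, 40.0, 37.0, 33.0, 36.0, 42.0, 41.0, 39.0, 44.0, 30.0, 35.0],
})

# Within-subject factor: shift; between-subjects factor: condition order.
aov = pg.mixed_anova(data=df, dv="acc", within="shift",
                     subject="pid", between="order")
print(aov[["Source", "F", "p-unc", "np2"]])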
Discussion

The doorway effect has been reported by multiple previous studies, each demonstrating a robust medium-to-strong effect across various environmental and cognitive manipulations. The aim of the present study was to investigate the doorway effect under a particular set of constraints. In the first two experiments, a highly immersive and controlled virtual environment was used, with or without working memory load. In the last two experiments, a real-life environment was used, with active (navigation carried out by the participant) or passive (navigation observed via a first-person-perspective video) movement. Contrary to our hypotheses, we observed sufficient evidence for the null hypothesis in all but one of the four experiments. In Experiment 2, where the memory task was carried out in a virtual environment with additional working memory load, there was a significant effect of the door on mnemonic performance. Signal detection was reduced and there was a shift in response criterion towards saying "yes". Further examination, however, showed that doorways did not induce forgetting in the way that has typically been reported by previous studies [15]. In Experiment 2, hit rates were not significantly influenced by doorways, but there were significantly more false alarms to mismatched recognition probes (i.e., negative probes). The increased false alarm rate suggests that, rather than flushing working memory of the previous event model (which would result in fewer hits), the boundaries, in concert with a secondary counting task, created sufficient cognitive interference in the memory system to effectively reduce the discriminability across encoded objects (resulting in more false alarms). This is in line with some previous studies that also observed more errors to negative probes (e.g., [15,16]), although many studies have omitted negative probes from the design altogether and thus only report reduced hit rates on shift trials (e.g., [6,9-11,17]). The simplicity of the task was reflected by the results of Experiment 1, where accuracy was at ceiling (the mean error rate was 6.49%, which translates to approximately 1 mistake per condition). We demonstrated that even the worst-performing participants showed no shift effect, although there was insufficient evidence for dissociated probes. Notably, some previous studies report similarly high accuracies, between 80 and 99% [8,17], and yet still report significant shift effects on hit rates using parametric statistics. This is especially pertinent given that the trial numbers in these studies were fewer and unbalanced compared to the current study (e.g., 6 shift trials and 12 no shift trials per probe in [8], compared to 20 trials each in the present study). Hence, in previous studies, a 5-10% difference in performance translates to a difference of only 1 to 2 trials. Behavioural patterns generated from few observations are susceptible to spurious artefacts, especially when the same experimental procedure is used for all participants. Hence, our higher-powered counterbalanced design is less likely to be confounded by systematic noise introduced by experimental procedures. The results from Experiments 1 and 2 suggest that event boundaries interfere with mnemonic performance during passive movement only under conditions of high interference, such as while performing a concurrent counting task. This finding extends knowledge gained from previous studies in multiple ways. Firstly, it suggests that the location updating effect is dependent on working memory load capacity. This relates to the second principle of the event horizon model, which postulates that information from previous event models is less available than information from the current event model because the current event model has primacy in working memory [19]. Under minimal working memory load, there might be a primacy of the current event model but still enough capacity for previous event models, thus resulting in intact recognition memory for the held object within the same or the previous room, as seen in Experiment 1. When working memory load is pushed to capacity, however, the previous event model might become more difficult to access without explicit probing (i.e., the intact colour-shape pair on associated and dissociated probes, as opposed to the mismatched colour-shape pair on negative probes), resulting in impaired signal detection such as that seen in Experiment 2. The second contribution of our findings is that our highly controlled virtual environment highlights the significance of doorways. Every room in the experiment was essentially identical, meaning that there could be no effects of anticipating a shift during item encoding (i.e., participants could not predict whether they would transition through a door or not while they were seated at the table memorising objects), nor any visual prediction error or attentional capture due to an environmental change [19], such as a new wall pattern [15]. Therefore, Experiment 2 demonstrated that the simple and task-irrelevant visual addition of a doorway significantly increased false alarm rates. Notably, however, there was no significant effect of doorways on hit rates in either of the VR experiments. We speculate that, had the event boundaries been more salient (e.g., changes in room colour), we might have observed the same reduced hit rate after a shift as previous studies [15]. This highlights a potentially fruitful line of investigation into how varying the strength of event boundaries might differentially impact signal and noise distributions, resulting in different impacts on hits and false alarms. A notable difference between the present VR experiments (1 and 2) and previous studies is that the navigation experience was passive rather than active [8,9,12,15-18].
Previous research, unrelated to the doorway effect, has shown that active navigation enhances memory for the spatial layout, while passive navigation enhances memory for objects [13]. Thus, the effect of doorways on object memory might be reduced in passive navigation paradigms. This also dovetails with the observation by Pettijohn and Radvansky [11] that, during passive virtual navigation, the effect size of the doorway effect was approximately halved. A likely explanation is that active navigation increases engagement and heightens attention to the visuo-spatial environment, which in turn enhances the impact of the location updating effect by strengthening the saliency of event boundaries. In Experiments 3 and 4, we sought to replicate the doorway effect using a real-life environment, navigated either passively via watching a recorded video (Experiment 3) or actively via actually walking through the environment (Experiment 4). Despite previous research yielding the doorway effect in both forms of interaction [11], we found sufficient evidence in favour of the null hypothesis in both scenarios. This was surprising for a number of reasons. Firstly, we increased task difficulty by imposing greater working memory load (a counting task, similar to the maths problems given in similar studies [16]) and by increasing the amount of memorised information (16 visually similar items in a specific arrangement). As a result, the mean accuracy (38.26%) was in between chance level (6.25%) and near-perfect performance (one mistake = 87.50%), eliminating floor and ceiling effects. Secondly, previous studies have found that re-crossing multiple boundaries and returning to the original location in which items were encoded impairs memory further [16]. This was not the case in Experiments 3 or 4. Thirdly, interviews with the participants revealed that the majority (approximately 64%) perceived the shift condition as being more difficult than the no shift condition, while only approximately 22% perceived the no shift condition as being more difficult and 14% perceived the conditions as equally difficult. There are several potential explanations for why the doorway effect did not replicate in Experiments 3 and 4. One is that memory was probed in a different way. In previous studies (including Experiments 1 and 2 here), recognition memory was tested by providing the names of a colour and shape, to which participants responded "yes" or "no". In Experiments 3 and 4, participants were presented with a shuffled arrangement of the 16 stimuli and required to put them back in the memorised order. Hence, the spatial relations between stimuli were tested, rather than recognition of the stimuli themselves. Such a task is perhaps more akin to familiarity than to explicit recognition, the former of which has been shown to be relatively unaffected by the location updating effect [20]. Another potential explanation is that the curtains did not convincingly create event boundaries. Note, however, that this is at odds with other studies using even subtler event boundaries (e.g., transparent doors [8]; imagined navigation along a similar corridor with curtains as boundaries [6]; etc.).

Conclusions

Overall, our findings across all four experiments suggest that the renowned "doorway effect" is likely to be more nuanced than originally thought, as it only emerged in the form of increased false alarms under considerable working memory load.
The same task without working memory load produced no significant effects, nor did a similar memory task implemented in real life with either active or passive interaction. Indeed, this finding resonates more closely with real-life experience, where we might occasionally forget a single item we had in mind after walking into a new room but, crucially, this usually happens when we have other things on our mind, or when we have moved from one distinct context to another. Our findings reveal that a number of elements are likely crucial for spatial updating to impact recognition memory. In particular, comparing our findings to previous literature reveals that active versus passive navigation, as well as visual context changes, likely augment the doorway effect by increasing the salience of and attention to location changes. Finally, although the focus here was on spatial event boundaries, our findings suggest that other forms of boundaries (e.g., semantic and temporal) are likely to increase false alarm rates to ambiguous recognition probes, while they might more effectively reduce hit rates when the boundaries more clearly delineate between event models (e.g., via increasing the attention to and the salience of the shift).
A case of venous aneurysm of a splenorenal shunt

A 66-year-old man presented with liver cirrhosis due to non-alcoholic steatohepatitis and hyperammonemia. Contrast-enhanced CT showed a dilated and tortuous splenorenal shunt and a large venous aneurysm in the shunt. The venous aneurysm showed gradual enlargement over 10 years together with worsening hyperammonemia, so balloon-occluded retrograde transvenous obliteration was performed. Under balloon occlusion, 5% ethanolamine oleate was injected from a microcatheter into the venous aneurysm, which was subsequently embolized with microcoils. Contrast-enhanced CT after the procedure showed complete thrombosis of the venous aneurysm. Ten months later, the venous aneurysm had reduced in size and the hyperammonemia had improved.

INTRODUCTION

Portosystemic shunts (PSSs) form under conditions of portal hypertension due to cirrhosis and are frequently associated with hepatic encephalopathy (HE).1 Chronic recurrent HE (CRHE) due to PSS has recently been treated with balloon-occluded retrograde transvenous obliteration (B-RTO).2 In this case, B-RTO was performed for hyperammonemia due to a splenorenal shunt with localized aneurysmal change in the shunt. Venous aneurysm of the splenorenal shunt ("splenorenal shunt aneurysm") is rare, and we report herein a case with successful endovascular treatment of a splenorenal shunt aneurysm.

CASE PRESENTATION

Clinical course

A 66-year-old male presented with liver cirrhosis due to non-alcoholic steatohepatitis and hyperammonemia. Follow-up contrast-enhanced CT showed a dilated and tortuous splenorenal shunt and a large venous aneurysm in the hilus of the spleen (Figure 1A and B). Laboratory data on admission were as follows: erythrocyte count, 488 × 10⁴/mm³; hemoglobin, 15.0 g dl⁻¹; platelet count, 98 × 10⁴/mm³; total bilirubin, 2.9 mg dl⁻¹ (elevated); aspartate aminotransferase, 37 IU l⁻¹; alanine aminotransferase, 20 IU l⁻¹; alkaline phosphatase, 552 IU l⁻¹ (elevated); serum ammonia, 125 µg dl⁻¹ (elevated); total protein, 6.9 g dl⁻¹; serum albumin, 3.6 g dl⁻¹; blood urea nitrogen, 11 mg dl⁻¹; and creatinine, 0.74 mg dl⁻¹. Child-Pugh grade was B (score 7) and albumin-bilirubin (ALBI) grade was 2b (score −1.94). The splenorenal shunt aneurysm had been followed by annual CT for 10 years and had gradually enlarged from 20 mm × 27 mm × 24 mm to 65 mm × 55 mm × 58 mm, with an increase of 6 mm in 1 year. Exacerbation of HE was also noted over the previous year. Because the splenorenal shunt aneurysm tended to enlarge over time, with a consequent risk of rupture, and because HE worsened despite medical therapy, we judged that treatment was indicated. The increased splenorenal shunt flow was thought to be one of the causes of the aneurysm enlargement and the exacerbation of HE. Therefore, B-RTO was selected to decrease the shunt flow.

B-RTO

A coaxial double-balloon catheter system (Candis; Medikit, Tokyo, Japan) was inserted into the splenorenal shunt from the left renal vein via the right femoral vein under local anesthesia. Balloon-occluded retrograde venography showed that the portal vein was patent, with no thrombosis, and that hepatic blood flow was hepatopetal. The microcatheter was advanced into the venous aneurysm (Figure 2A), then 9 ml of 5% ethanolamine oleate (Oldamin; ASKA Pharmaceutical, Tokyo, Japan) was injected through the microcatheter under balloon occlusion. Finally, the draining vein was embolized with microcoils (Figure 2B).
We used coils 1.5 times the diameter of the shunt vein to prevent coil migration. A 5-Fr balloon catheter (9 mm diameter; Selecon MP Catheter II; Terumo, Tokyo, Japan) was inserted into the hepatic vein through the right femoral vein, and pressures were measured using a Polygraph MSC-7000 manometer (Fukuda Denshi, Tokyo, Japan). The measured parameters were right atrial pressure, hepatic venous pressure, and wedged hepatic venous pressure (WHVP). WHVP was 22 mmHg. Under balloon occlusion of the splenorenal shunt, WHVP was 32 mmHg. B-RTO was successfully performed, and no complications were observed.

FOLLOW-UP

Ten months later, the venous aneurysm was seen to have shrunk (Figure 3), and the hyperammonemia had improved. No esophageal varices or ascites were noted. Child-Pugh grade changed from B (score 7) to A (score 6) and ALBI grade changed from 2b (score −1.94) to 2a (score −2.47).

DISCUSSION

PSSs are common in patients with portal hypertension due to cirrhosis and develop as portal vein pressure increases.3 These shunts can be divided into intra- and extrahepatic shunts, such as gastrorenal shunt, splenorenal shunt, superior mesenteric vein-inferior vena cava shunt, and inferior mesenteric vein-inferior vena cava shunt, and these can also lead to HE.2 A splenorenal shunt causes HE due to reflux of venous blood and is the most common cause of HE (60%).3 CRHE is often controlled using drugs such as lactulose or rifaximin, but some cases prove refractory to pharmacotherapy. Surgical ligation is reportedly effective for the treatment of CRHE, but B-RTO has been widely adopted in Japan for the management of HE.4 No reports have described cases with localized aneurysmal changes in the splenorenal shunt, but several reports have described cases with HE due to a large splenorenal shunt. Venous aneurysms include portal system aneurysms (PSAs), and PSAs and splenorenal shunt aneurysms are very similar in terms of portal hypertension.5-8 PSA is associated not only with portal hypertension but also with an inherent weakness of the vessel wall.5 In this case as well, congenital wall weakness and thinning of the shunt itself were thought to be the main cause of the aneurysmal change, with the splenorenal shunt aneurysm subsequently enlarged by portal hypertension. Standard treatments for splenorenal shunt aneurysm with HE remain lacking. Careful observation without treatment is often selected for extrahepatic portal vein aneurysm (PVA).6 Surgical treatments for PSA are often indicated in cases with severe symptoms, thrombus formation, worsening of liver function, or enlargement during follow-up. Rupture of PVA has been reported.8 Sfyroeras et al reported that the diameter of the ruptured PVA was 2 cm.9 Similarly, if a splenorenal shunt aneurysm continues to enlarge, there is a risk of rupture. Splenorenal shunt aneurysm should be treated if symptoms such as HE are present or if the aneurysm tends to enlarge. Increased splenorenal shunt flow is thought to be one of the causes of aneurysm enlargement and exacerbation of HE. B-RTO is useful because it treats the aneurysm itself and the HE through splenorenal shunt closure at the same time. In this case, we treated the patient with B-RTO, resulting in thrombosis of the splenorenal aneurysm and shunt closure. The improvement of HE was mainly due to the effect of shunt closure. Thrombosis and reduction of the splenorenal aneurysm by B-RTO should prevent it from rupturing.
Conversely, increased portal blood flow after shunt embolization can lead to complications such as exacerbation of gastric varices, retention of ascites, and progression of hepatic failure.10 The indications for treatment of PSS remain unclear, but preoperative liver function is one of the most important factors in post-operative complications. This case showed a Child-Pugh score of 7 (class B), and the increase in WHVP before and after balloon occlusion of the splenorenal shunt was less than 60%. No post-operative complications such as varix exacerbation or retention of ascites were observed. Some recent reports have described portosystemic shunt syndrome, in which the presence of a PSS worsens liver function in the long term.3,11 B-RTO plays a protective role against the lowering of hepatic functional reserve in the long term because portal blood flow increases after B-RTO.11,12 In our case, Child-Pugh and ALBI grades changed from Child-Pugh grade B (score 7) and ALBI grade 2b (score −1.94) to Child-Pugh grade A (score 6) and ALBI grade 2a (score −2.47). B-RTO was feasible for improving liver function and preventing rupture of the venous aneurysm.

CONCLUSIONS

B-RTO was feasible as a treatment to improve liver function and prevent rupture of a splenorenal shunt aneurysm.

LEARNING POINTS

• A portosystemic shunt may show aneurysmal formation / aneurysmal change.
• B-RTO for a shunt aneurysm was feasible.

PATIENT CONSENT

Written informed consent was obtained from the patient for publication of this case report, including accompanying images.
Pisidia longimana (Risso, 1816), a junior synonym of P. bluteli (Risso, 1816) (Crustacea: Decapoda: Anomura: Porcellanidae) and a species distinct from P. longicornis (Linnaeus, 1767)

Pisidia longimana (Risso, 1816) and P. bluteli (Risso, 1816), both described from Nice, France, have been considered each other's synonyms or have been validated depending on successive taxonomic opinions. The validity of both with respect to P. longicornis (Linnaeus, 1767) has also been contradicted a number of times. The current lack of clarity in the use of the names P. longicornis, P. longimana and P. bluteli has resulted in nomenclatural instability, but also in unreliability and miscommunication as regards the available ecological and distributional information. The validity of P. bluteli and P. longimana is revisited herein based on a large number of specimens (241 males, 190 females and 33 juveniles) from many different localities. The latter species is confirmed as a junior synonym of the former, whereas P. bluteli and P. longicornis are herein considered separate species. Diagnostic characters and morphological variations are discussed and illustrated.

Key-Words. Biodiversity; Eastern Atlantic; Mediterranean; Porcelain crabs.

INTRODUCTION

The worldwide genus Pisidia Leach, 1820, currently consists of 15 species distributed across the tropical and subtropical zones of the eastern Pacific, western and eastern Atlantic and Indo-West Pacific Oceans, and the Mediterranean and Black Seas (Haig, 1978; Osawa & McLaughlin, 2010; Dong & Li, 2014; WoRMS, 2020). Three species are currently considered valid from the eastern Atlantic and the Mediterranean and Black Sea basins: Pisidia longicornis (Linnaeus, 1767), P. bluteli (Risso, 1816) and P. longimana (Risso, 1816) (Osawa & McLaughlin, 2010). However, P. longimana and P. bluteli have been lumped and split alternately with each successive taxonomic opinion, and the validity of both with respect to P. longicornis has also been contradicted a number of times (Zariquiey-Álvarez, 1951; Holthuis, 1961; García-Raso, 1987; d'Udekem d'Acoz, 1995, 1999; Koukouras et al., 2002; Osawa & McLaughlin, 2010). Thus, the current lack of clarity in the use of the names P. longicornis, P. longimana and P. bluteli has resulted in nomenclatural instability, but also in unreliability and miscommunication of the available ecological and distributional information. An ongoing phylogenetic analysis of the genus Pisidia and the examination of 241 males, 190 females and 33 juveniles from many different localities in the collections of the National Museum of Natural History, Smithsonian Institution (USNM) and the Museu de Zoologia, Universidade de São Paulo (MZUSP), prompted us to revisit the validity of P. bluteli and P. longimana. The latter species is confirmed as a junior synonym of the former, whereas P. bluteli and P. longicornis are herein considered two separate species. Diagnostic characters and morphological variations are discussed and illustrated.

MATERIAL AND METHODS

Abbreviations used include: cl, carapace length, taken from the front to the posterior median margin of the carapace; cw, carapace width, taken at the level of its widest point; P1, cheliped (pereopod 1); P2-P4, pereopods 2 to 4; St, station.

Pisidia bluteli was generally regarded as a junior synonym of P. longicornis (Linnaeus, 1767) until Zariquiey-Álvarez (1951) provided evidence that both species were morphologically distinct. While agreeing with Zariquiey-Álvarez, Holthuis (1961) argued that not only was P. bluteli valid, but so was P. longimana, and he therefore removed the latter species from the synonymy with P. longicornis.
bluteli valid, but so was P. longimana, and he therefore removed the latter species from the synonymy with P. longicornis. Holthuis' (1961) view, however, was challenged by the observations of Manning & Števčić (1982), who, without further details, commented that some specimens from the Piran Gulf (northern Adriatic Sea) showed intergradations between P. bluteli and P. longimana. García-Raso (1987) went further and moved P. bluteli and P. longimana back into the synonymy with P. longicornis. Conversely, Koukouras et al. (2002) once again considered P. bluteli and P. longimana as being distinct from each other and from P. longicornis. Arguments in favor of splitting P. bluteli from P. longimana are essentially those of Holthuis (1961): (1) the orbital margin shows a row of spines in P. bluteli, whereas the orbital margins are usually crenulate or minutely serrate, never spinous, in P. longimana; (2) there are several distinct spines on the dorsal surface of the carapace in P. bluteli, whereas the carapace spines are smaller and, in larger specimens of P. longimana, hardly visible; (3) the antennal basis-ischium and merus have a distinct spine at the distal end of the mesial margin in P. bluteli, whereas in P. longimana the antennal merus bears no spine, although a distinct spine is present in the antennal basis-ischium; (4) numerous spinules, arranged in more or less distinct longitudinal rows, are found on the dorsal surface of the carpus and the palm in P. bluteli, whereas in P. longimana the dorsal surface of the carpus and palm are smooth, although in the juveniles they may be provided with a median longitudinal row of granules or spinules; (5) a row of slender spinules usually is present along the lateral margin of the carpus in P. bluteli, whereas in P. longimana the lateral margin of the carpus is smooth in the adults, but may be provided with spinules in the juveniles; and (6) numerous strong dorsal spines are present on the merus, carpus and propodus of the walking legs in P. bluteli, whereas in P. longimana the carpus and merus of the walking legs do not show a row of spinules, although very few short and blunt granules or spinules may sometimes be observed on the merus (Holthuis, 1961). Additionally, Koukouras et al. (2002) submitted that P. bluteli and P. longimana could be further differentiated in that the branchial region, behind the cervical groove, is provided with 2 or more spines (rarely 1) in P. bluteli, whereas, in contrast, the branchial region bears 0 to 1 (rarely 2) spines in P. longimana. However, García-Raso (1987) opined that the characters used by Holthuis (1961) do not allow P. longicornis, P. longimana and P. bluteli to be distinguished, owing to intermediate forms in which all possible combinations of the purportedly distinguishing characters are commonly found, sometimes even in the same specimen. Consequently, García-Raso (1987) concluded that P. bluteli and P. longimana should be sunk into the synonymy with P. longicornis (see also d'Udekem d'Acoz, 1995, 1999). The large number of specimens examined herein from the collections of the USNM and MZUSP lends support to the view that P. bluteli and P. longimana are synonyms. The purportedly diagnostic characters for distinguishing between P. bluteli and P. longimana actually intergrade between specimens, even from the same locality. For instance, the specimen USNM 1278011 (Fig. 1F-J) presents the "bluteli type" of carapace with epibranchial spines (Fig.
1F) and the P1 carpus bears a row of spines laterally and mesially (Fig. 1G), but also presents the "longimana type" of P1 ischium with one ventrodisto-mesial spine (Fig. 1H); antenna merus without a spine mesially (Fig. 1I); and P2-P4 dorsal spines absent (Fig. 1J). Likewise, the characters proposed by Koukouras et al. (2002) clearly overlap with each other and therefore, cannot be used to distinguish among P. longicornis, P. bluteli and P. longimana. However, three diagnostic characters differentiate P. longicornis from P. bluteli: (1) P. longicornis (as already noticed by Holthuis, 1961) presents inconspicuous or absent spinulation compared to P. bluteli, whose spines in the carapace, antenna and P1 are always well-developed (Figs. 1A-O; 2A-E); (2) P. longicornis males present the major P1 broader and swollen, whereas in P. bluteli the P1 is long, slender and slightly flattened (present study); (3) the front in P. longicornis presents a deep longitudinal groove in the median lobe (so that the median lobe seems to be divided into two), whereas P. bluteli presents three conspicuous lobes, with the longitudinal one shallow and poorly visible (present study) (Figs. 1A, F, K; 2A). Pisidia longicornis s. str. is known from the Atlantic coast of Europe, from south Norway to Portugal, as well as from the Mediterranean Sea, where it inhabits greater depths, between 30 and 100 m (d'Udekem d'Acoz, 1999). Its record from the west African coast, from Mauritania to Angola (Chace, 1956) deserves further investigation.
2020-07-23T09:02:18.538Z
2020-07-16T00:00:00.000
{ "year": 2020, "sha1": "fc1d985d986daebce627e469ce99c64719bb3302", "oa_license": "CCBY", "oa_url": "https://www.revistas.usp.br/paz/article/download/169045/162040", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "38577dd56d1cf3179922a627ebc2e5757ccc2829", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Geography" ] }
253319203
pes2o/s2orc
v3-fos-license
Hyperprogressive disease after avelumab maintenance therapy in a patient with advanced ureter cancer: A case report In the early stages of immune checkpoint inhibitor administration, we should be aware of rapid cancer progression, known as hyperprogressive disease, in real-world clinical practice. We report a case of a 73-year-old man who presented with right abdominal pain and was diagnosed with advanced right ureteral cancer involving the duodenum. He received four cycles of chemotherapy with gemcitabine plus cisplatin, followed by maintenance with avelumab. After two cycles of avelumab within a month, his primary cancer dramatically progressed and he died. This is the first report of a case in which unresectable ureteral cancer caused hyperprogressive disease after avelumab maintenance therapy. Introduction The efficacy of avelumab maintenance therapy was demonstrated in the JAVELIN Bladder 100 trial for unresectable or metastatic urothelial carcinoma (UC) that has not progressed with platinum-based chemotherapy, as the PFS and OS were prolonged compared to those of the best supportive care group. 1 Similarly, the Japanese subgroup analysis showed a favorable benefit-risk balance, which supports maintenance avelumab as the new standard of care. 2 Thus, immune checkpoint inhibitors (ICIs) have been associated with long-term survival in several cancers. However, there is a small group of patients with rapid disease progression during the initiation of ICIs, known as hyperprogressive disease (HPD), which severely compromises the quality of life and prognosis of patients. 3 Herein, we report a case of HPD after avelumab maintenance therapy in a patient with advanced ureteral cancer despite achieving partial response (PR) with prior chemotherapy. Case presentation A 73-year-old hypertensive man presented with primary complaints of right abdominal pain and frequent vomiting. Abdominal computed tomography (CT) demonstrated a tumor surrounding his right upper ureter with hydronephrosis and duodenal invasion causing ileus (Fig. 1). On admission, his Karnofsky performance status (KPS) score was 90, and his laboratory parameters showed renal dysfunction and elevated inflammatory markers (neutrophils and CRP). Urine cytology was unremarkable at this time. The clinical diagnosis of a primary tumor was unchanged. Gastrointestinal endoscopic findings showed no abnormalities in the mucosa and duodenal stenosis due to compression by the tumor. Thereafter, we performed retrograde pyelography with a right ureteral filling defect, which revealed that the tumor was in the right ureter. Moreover, divided urine cytology suggested UC. We diagnosed the patient with unresectable advanced ureteral cancer (cT4N0M0) and considered starting systemic chemotherapy as soon as possible. To ameliorate renal dysfunction, a ureteral stent was implanted, which resulted in an improvement in renal dysfunction and hydronephrosis. However, we thought that it was first necessary to improve the patient's general condition, which was worsened by anorexia secondary to severe duodenal stenosis. In fact, he experienced frequent vomiting because of ileus. Since endoscopic duodenal stent insertion was difficult due to the risk of gastrointestinal perforation, palliative gastrojejunostomy was performed. His postoperative course was uneventful, and he could start eating and improve his general condition. One month after surgery, we started chemotherapy with gemcitabine and cisplatin.
After four cycles of chemotherapy with achievement of PR (Fig. 2a and b), we switched to maintenance therapy using avelumab. However, after two doses of avelumab, he experienced severe fatigue, anorexia, and frequent vomiting, and his KPS was 60. CT scanning revealed the rapid re-growth of an aggressive tumor invading the abdominal wall, along with the appearance of cancerous ascites and suspected intestinal tract compression causing ileus (Fig. 2c and d). We considered that these findings met the criteria for HPD. Thereafter, his general condition rapidly worsened, and he died of the disease 49 days after avelumab therapy was initiated. Discussion There is still no consensus on the definition of HPD because HPD is evaluated by several methods. At present, a tumor growth rate at least twofold that of the pretreatment period is the most widely used criterion for HPD in patients treated with ICIs. Moreover, a time-to-treatment failure of less than 2 months was also considered an alternative assessment method for HPD. 3 In the present case, we determined that our patient had experienced HPD because all of these criteria were met. Hwang et al. 4 reported that HPD occurred in up to 11.9% of UC patients treated with ICIs, a rate that was higher than that in RCC patients (0.9%). Multivariate analyses showed that UC and creatinine levels above 1.2 mg/dL were independent predictive factors for HPD in this study. In our case, laboratory data such as creatinine levels during avelumab treatment, which were predictive of HPD as reported in a previous report, 4 had not changed. Only one case of HPD after maintenance anti-PD-1 therapy following chemotherapy with proper disease control has been reported. In that case, however, pembrolizumab was used as maintenance therapy after third-line chemotherapy with platinum-doublet regimen re-challenge. 5 To our knowledge, this is the first report of a patient having advanced ureteral cancer with HPD after maintenance anti-PD-L1 therapy following chemotherapy with PR. Conclusion In this report, we demonstrated HPD after avelumab maintenance therapy for advanced UC. We should be aware of the possibility of HPD at the start of ICI therapy, regardless of a good response to prior chemotherapy. Consent Written informed consent to publish was obtained from the patient for the publication of this case report and any accompanying images. Author contributions KO and DI drafted the report and cared for the patient. ST, TT, and HF cared for the patient. DI and WO supervised the work and critically reviewed the report. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of competing interest The authors have no conflicts of interest to declare.
2022-11-05T15:35:37.335Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "1b0f9481dc320e962aeabfccfe5d45f51a122d43", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.eucr.2022.102278", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "94624fb6d053d08647595132de5517f6fb2f1c22", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
100067402
pes2o/s2orc
v3-fos-license
Assessment of the Ti-rich corner of the Ti-Si phase diagram using two sublattices to describe the Ti5Si3 phase The thermodynamic optimization of Ti-X-Si systems requires that their respective binary systems be constantly updated. The most recent assessments of the Ti-Si phase diagrams used three sublattices to describe the Ti5Si3 phase. The stable version of this phase diagram indicated the presence of Ti(β)+Ti5Si3→Ti3Si and Ti(β)→Ti(α)+Ti3Si reactions in the Ti-rich corner, while the metastable version featured the presence of a Ti(β)→Ti(α)+Ti5Si3 reaction. The present investigation assessed these phase diagrams using two sublattices to describe the Ti5Si3 phase in order to simplify the optimization of Ti-X-Si systems. Introduction There is a technological interest in the Ti-Si system promoted by the beneficial effect of Si addition on the oxidation and creep resistance of Ti-X-Si alloys (Azevedo, 1996). The earliest Ti-Si experimental phase diagram was obtained in 1952 (Hansen et al., 1952), indicating in the Ti-rich corner the presence of a eutectoid reaction at 1133 K, Ti(β) → Ti(α) + Ti5Si3. In 1954, another work confirmed the presence of this eutectoid reaction at 1129 K (Sutcliffe, 1954). In 1970, a new experimental version of this phase diagram was proposed (Svechnikov et al., 1970), indicating in the Ti-rich corner the presence of two new reactions (a peritectoid reaction at 1444 K, Ti(β) + Ti5Si3 → Ti3Si, and a eutectoid reaction at 1133 K, Ti(β) → Ti(α) + Ti3Si), instead of the eutectoid reaction previously observed. In the late 1970s, however, careful investigations of the eutectoid reaction of the Ti-Si system were performed without showing any evidence of the presence of the Ti3Si phase (Plitcha et al., 1977; Plitcha and Aaronson, 1978). They confirmed instead the presence of the Ti5Si3 phase at 1148 K, Ti(β) → Ti(α) + Ti5Si3. The first thermodynamic assessment of the Ti-Si phase diagram was performed in 1976 (Kaufmann, 1976), considering the Ti5Si3 phase as a stoichiometric intermetallic. Murray (Murray, 1987) assessed the Ti-Si system assuming the Ti5Si3 phase as a non-stoichiometric phase, and the calculated phase diagram was in agreement with one of the previous results (Svechnikov et al., 1970). In 1996, Seifert et al.
(Seifert et al., 1996) employed an optimization method for the determination of the variables used for the thermodynamic description of the phases in order to assess the Ti-Si phase diagram from selected experimental data. They described, for instance, the Ti5Si3 phase as a non-stoichiometric compound containing three sublattices, (Ti)3(Ti,Si)2(Si,Ti)3, to represent its D88 crystal structure. Their calculated phase diagram was in good agreement with previous calculated (Murray, 1987) and experimental (Svechnikov et al., 1971) phase diagrams, presenting Ti3Si as the stable phase of the eutectoid reaction. The dispute over the stability of the Ti3Si phase in Ti-Si and Ti-X-Si systems was, however, far from over. Azevedo (Azevedo, 1996; Azevedo and Flower, 1999; Azevedo and Flower, 2000; Azevedo and Flower, 2002) and Bulanova (Bulanova et al., 1997) identified the presence of the Ti5Si phase (instead of Ti3Si) after long isothermal heat treatments below the eutectoid temperature. By contrast, the presence of the Ti3Si phase was observed by other investigations (Kozlov and Pavlyuk, 2004; Ramos et al., 2006; Costa et al., 2010; Li et al., 2014). In 2010, the stability of intermetallic phases in the Ti-Si system was studied by ab-initio calculations, indicating that the stability of the Ti3Si phase was controversial (Colinet and Tedenac, 2010). A recent ab-initio calculation showed that the Ti5Si3 phase was actually more stable than the Ti3Si phase at 0 K (Poletaev et al., 2014). The present work will calculate and compare the Ti-rich corner of the stable and metastable Ti-Si phase diagrams, using two sublattices, (Ti,Si)5(Si,Ti)3, to describe the Ti5Si3 phase, assuming that Ti3Si is the stable phase in the eutectoid decomposition of the Ti(β) phase. These results will be compared to previous calculated phase diagrams using three sublattices to describe the Ti5Si3 phase (Cost, 1998; Fiori et al., 2016). Methodology The liquid, Ti(α) and Ti(β) phases are described using Equations 1 to 5. The Gibbs free energy of reference (G^ref) is described by Equation 2, while the Gibbs free energy of the ideal solution (G^id) is described by Equation 3 and the excess Gibbs free energy (G^ex) of the regular solution is described using the Redlich-Kister polynomial (see Equations 4 and 5) [23]:

G^φ = G^ref + G^id + G^ex, (1)
G^ref = x_Ti G_Ti^SER + x_Si G_Si^SER, (2)
G^id = RT (x_Ti ln x_Ti + x_Si ln x_Si), (3)
G^ex = x_Ti x_Si L^φ, (4)
L^φ = Σ_v (x_Ti − x_Si)^v · ^vL^φ, (5)

where G_i^ref = G_i^SER, x_Si and x_Ti are the molar fractions of the elements, and L^φ is the Ti-Si interaction parameter in the phase φ. Additionally, the Gibbs energy of formation of the stoichiometric Ti3Si phase is described using the Kopp-Neumann rule (see Equation 6), and the non-stoichiometric Ti5Si3 phase is described by the Compound Energy Formalism (Lukas, 2007), using two sublattices containing Ti and Si (see Equations 7 to 10):

G^Ti3Si = 3 G_Ti^SER + G_Si^SER + A + B·T, (6)
G^Ti5Si3 = Σ_i Σ_j y′_i y″_j G_(i:j) + RT (5 Σ_i y′_i ln y′_i + 3 Σ_j y″_j ln y″_j) + G^ex,Ti5Si3, (7-10)

where A and B are optimization variables, y′_i and y″_j (i, j = Ti, Si) are the site fractions on the first and second sublattices, G_(i:j) is the Gibbs energy of the end-member (i)5(j)3, and the excess term G^ex,Ti5Si3 contains the interaction parameters on each sublattice.
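For reference, the solution-phase expressions in Equations (1)-(5) can be evaluated as in the following minimal sketch; the lattice stabilities and interaction parameters below are hypothetical placeholders, not the assessed values of the present work:

```python
import numpy as np

# Minimal sketch (illustrative values only): molar Gibbs energy of a disordered
# (Ti,Si) solution phase built from the reference, ideal, and Redlich-Kister
# excess terms of Eqs. (1)-(5).
R = 8.314  # gas constant, J/(mol K)

def gibbs_solution(x_si, T, g_ti, g_si, L):
    """G = G_ref + G_id + G_ex per mole of atoms; L = [L0, L1, ...] (J/mol)."""
    x_ti = 1.0 - x_si
    g_ref = x_ti * g_ti + x_si * g_si
    g_id = R * T * (x_ti * np.log(x_ti) + x_si * np.log(x_si))
    g_ex = x_ti * x_si * sum(Lv * (x_ti - x_si) ** v for v, Lv in enumerate(L))
    return g_ref + g_id + g_ex

# Hypothetical SER lattice stabilities and interaction parameters:
print(gibbs_solution(x_si=0.2, T=1500.0, g_ti=0.0, g_si=0.0, L=[-250e3, 25e3]))
```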
The parameters and variables used for the thermodynamic description of the Ti5Si3 and Ti3Si phases are listed in Table 1. These variables were calculated from selected experimental data (see Tables 2 and 3) using the Parrot module of the Thermo-Calc software. The variables related to the Ti5Si3 phase were initially calculated during the assessment of the metastable phase diagram (suspending the presence of the Ti3Si phase). These variables were then fixed during the assessment of the stable phase diagram for the calculation of the variables related to the Ti3Si phase. These diagrams were compared to the stable and metastable Ti-Si phase diagrams obtained with the Thermo-Calc software using the COST 507 database (Cost, 1998), whose Ti-Si system was based on the version assessed by Seifert et al. (Seifert et al., 1996). (Meschel and Kleppa, 1998; Coelho et al., 2006) Table 2 Enthalpy for the formation of intermetallic phases, Ti-Si system (kJ/mol of phase). Results and discussion The calculated values of the variables are shown in Table 4. According to the Thermo-Calc User Guide (Thermo, 2015), the order of magnitude of Vi1-type variables should not be higher than 10^5 and that of the Vi2-type variables should not be higher than 10^1. In the present assessments, V11 presented an order of magnitude above 10^5, and V52 above 10^1. This Vi2-type variable, however, was used to describe the excess term of the enthalpy rather than the entropy for the formation of intermetallic phases. The values of the reduced sum of squares (~5 for both optimization procedures) exceeded the advisable maximum value of one (Thermo, 2015). These results indicate that the optimization procedures of the Ti-Si system using two sublattices to describe the Ti5Si3 phase were successful, but they can be further improved. Table 5 Main experimental and calculated values of the Ti-Si system. Table 5 compares the values of the experimental and the calculated equilibria and the enthalpies for the formation of the Ti3Si and Ti5Si3 phases. Six out of the 38 calculated values presented relative deviations above 5% in relation to the experimental data. Two of these deviations originated in the equilibria involving the liquid phase, and they could be decreased by the use of a more complex model for the thermodynamic description of the liquid phase (Lukas, 2007; Seifert et al., 1996; Fiori et al., 2016). The other values were found for the β+Ti5Si3→Ti3Si, β→α+Ti3Si and β→α+Ti5Si3 reactions, indicating that further experiments in these critical regions of the Ti-rich corner of the Ti-Si phase diagram are needed to improve the results of the present optimization procedures, and to define which one of the eutectoid reactions is actually the stable one (β→α+Ti3Si or β→α+Ti5Si3). Figure 1-a shows a general view of the calculated stable Ti-Si phase diagram, indicating that the positions of the phase boundaries are in fair agreement with previous results (Svechnikov et al., 1970; Fiori et al., 2016), except for the narrower solubility range of the Ti5Si3 phase field. Figure 1-b shows a detail of the Ti-rich corner near the eutectoid reaction, indicating that there are no experimental data to validate the position of the calculated Ti(α) and Ti(β) solvus lines. The present assessment showed lower Si solubility in the Ti(α) and Ti(β) phases when compared to the calculated phase diagram using the COST 507 database (Cost, 1998), without any change in the eutectoid temperature.
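Conceptually, the optimization performed by the Parrot module amounts to a weighted least-squares fit of the model parameters to the selected experimental data. The sketch below illustrates this idea generically; it is not the Parrot module, and the "experimental" enthalpies of mixing and starting values are placeholders, not the data of Tables 2 and 3:

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch (not the Parrot module): fitting Redlich-Kister interaction
# parameters to selected data by weighted least squares.
x = np.array([0.2, 0.375, 0.5, 0.6])             # Si mole fractions
h_exp = np.array([-38e3, -55e3, -58e3, -50e3])   # J/mol, hypothetical values
w = np.ones_like(h_exp)                          # weights (1/uncertainty)

def h_mix(params, x_si):
    """Temperature-independent excess enthalpy from L0 and L1."""
    L0, L1 = params
    x_ti = 1.0 - x_si
    return x_ti * x_si * (L0 + L1 * (x_ti - x_si))

res = least_squares(lambda p: w * (h_mix(p, x) - h_exp), x0=[-200e3, 0.0])
print(res.x)                                     # fitted L0, L1
```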
Figure 2-a shows the calculated metastable Ti-Si phase diagram, indicating that the positions of the phase boundaries are in good agreement with previous experimental (Hansen et al., 1952; Sutcliffe, 1954) and calculated (Fiori et al., 2016) phase diagrams, except for the narrower solubility range of the Ti5Si3 phase field. The shape of this phase field resembles a previous result, which described the Ti5Si3 phase as Ti3Ti2(Ti,Si)3 (Beneduce et al., 2016). Figure 2-b shows a detail of the Ti-rich corner near the eutectoid reaction, comparing the present assessment with previous experimental (Plitcha et al., 1977; Plitcha and Aaronson, 1978) and calculated (Cost, 1998; Fiori et al., 2016) phase diagrams. The present assessment showed smaller Si solubility in the Ti(α) and Ti(β) phases when compared to the calculated phase diagram using the COST 507 database (Cost, 1998) and a slightly higher value for the eutectoid temperature. The slope of the Ti(α) solvus line showed a typical inclination, unlike the one obtained with the COST 507 database (Cost, 1998), indicating that the Si solubility of the Ti(α) phase decreased with decreasing temperature. This result is in agreement with the most recent assessment of the metastable Ti-Si phase diagram (Fiori et al., 2016). The position of the Ti5Si3 phase field in both assessments was slightly shifted towards smaller Si contents. Additionally, its Si-solubility range was comparatively narrower and presented a maximum of 37.5 at%. This maximum Si-solubility value suggests that the present thermodynamic description of the excess terms of the (Ti,Si)5(Si,Ti)3 phase was not able to induce the presence of Si atoms on the Ti sublattice. In this sense, the hypothesis that the interaction between Si and Ti on each sublattice is independent of the occupation of the other sublattice (see Table 1) should be further analyzed. For instance, another hypothesis, assuming that the interaction parameters on the two sublattices are symmetrical, can be investigated. Finally, the description of the Ti5Si3 phase using only two sublattices presented promising results for the assessment of Ti-X-Si phase diagrams. Conclusions • The assessed versions of the stable and metastable Ti-Si phase diagrams, using only two sublattices to describe the Ti5Si3 phase, were in fair agreement with previous experimental and calculated phase diagrams. • The slope of the Ti(α) solvus line of the assessed metastable Ti-Si phase diagram showed a typical inclination, indicating that the Si solubility of the Ti(α) phase decreased with decreasing temperature. • The position of the Ti5Si3 phase field in both assessments was slightly shifted towards smaller Si contents. Additionally, its Si-solubility range was comparatively much narrower than expected and presented a maximum value of 37.5 at%. • The assessment of the Ti-Si phase diagram using two sublattices to describe the Ti5Si3 phase might be further improved by the inclusion of new experimental data near the eutectoid reaction of the Ti-rich corner of the Ti-Si phase diagram. In this sense, further experimental work is needed to define which eutectoid reaction (β→α+Ti3Si or β→α+Ti5Si3) is stable. • Finally, the use of a more complex description for the liquid phase and another thermodynamic description for the excess terms of the Ti5Si3 phase might be useful to improve the quality of the assessed phase diagrams. Table 3 Experimental values of the Ti-Si invariant reactions (X_Si^phase: atomic fraction of Si).
2019-04-08T13:13:07.494Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "2abdccd6e1208d656868c14810b19a084ebae739", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/remi/v70n2/2448-167X-remi-70-02-0201.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2abdccd6e1208d656868c14810b19a084ebae739", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
245124698
pes2o/s2orc
v3-fos-license
Quantum metrology based on symmetry-protected adiabatic transformation: Imperfection, finite time duration, and dephasing The aim of quantum metrology is to estimate target parameters as precisely as possible. In this paper, we consider quantum metrology based on symmetry-protected adiabatic transformation. We introduce a ferromagnetic Ising model with a transverse field as a probe and consider the estimation of a longitudinal field. Without the transverse field, the ground state of the probe is given by the Greenberger-Horne-Zeilinger state, and thus the Heisenberg limit estimation of the longitudinal field can be achieved through parity measurement. In our scheme, full information of the longitudinal field encoded on parity is exactly mapped to global magnetization by symmetry-protected adiabatic transformation, and thus the parity measurement can be replaced with global magnetization measurement. Moreover, this scheme requires neither accurate control of individual qubits nor that of interaction strength. We discuss the effects of the finite transverse field and nonadiabatic transitions as imperfection of adiabatic transformation. By taking into account finite time duration for state preparation, sensing, and readout, we also compare performance of the present scheme with a classical scheme in the absence and presence of dephasing. I. INTRODUCTION Precise estimation of parameters is desired for realizing upcoming quantum technologies such as quantum information processing. Quantum metrology is a promising method that offers higher precision sensing of target parameters than classical counterparts by exploiting entanglement [1][2][3]. Appropriate entanglement among probe qubits enhances sensitivity, surpassing the standard quantum limit (SQL) [4][5][6], which is known as the limit of classical sensors composed of independent qubits. In particular, the Greenberger-Horne-Zeilinger (GHZ) state [7,8] achieves the ultimate precision called the Heisenberg limit in the absence of noise [9,10]. Even under specific noise, the GHZ state can still beat the SQL [11][12][13][14][15][16][17]. Considerable effort has been devoted to the development of entanglement generation and interferometry for practical use. However, application of entanglement-enhanced sensing is still limited due to the following reasons. In a ferromagnetic Ising model with a transverse field, macroscopic entanglement can be created in the ground state by adiabatically decreasing the transverse field [33][34][35]. This process does not require accurate control of qubits. Moreover, this process is protected by symmetry, i.e., nonadiabatic transitions from even-parity energy eigenstates to odd-parity energy eigenstates do not take place because of parity conservation due to spin-flip symmetry [36][37][38][39]. This suppression of nonadiabatic transitions protects the macroscopic entanglement from spontaneous symmetry breaking. To use the macroscopic entanglement in the ferromagnetic Ising model for quantum metrology, parity measurement is required to extract information of a target parameter. Several ways to perform parity measurement exist. For example, we can obtain information of parity by post-processing data of single-qubit measurement on each qubit. However, operators to be measured in single-qubit measurement do not commute with the Hamiltonian (interaction term). 
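As a concrete illustration of the post-processing route mentioned above, the following minimal sketch (our own, not from the paper; N, shots, and theta are arbitrary illustrative values) samples single-qubit X-basis outcomes from a small phase-encoded GHZ state and recovers the parity signal cos(Nθ) by multiplying the ±1 outcomes:

```python
import numpy as np

# Minimal sketch: GHZ parity estimated by post-processing single-qubit
# X-basis outcomes. We sample bitstrings from
# |GHZ(theta)> = (|0...0> + e^{i N theta}|1...1>)/sqrt(2) measured in the
# X basis; the product of +-1 outcomes averages to cos(N*theta).
rng = np.random.default_rng(0)
N, shots, theta = 4, 20000, 0.3

psi = np.zeros(2**N, dtype=complex)     # state in the computational basis
psi[0] = 1 / np.sqrt(2)
psi[-1] = np.exp(1j * N * theta) / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: Z basis -> X basis
U = H
for _ in range(N - 1):
    U = np.kron(U, H)
probs = np.abs(U @ psi) ** 2

samples = rng.choice(2**N, size=shots, p=probs)
bits = (samples[:, None] >> np.arange(N)) & 1   # bit decomposition
parity = np.prod(1 - 2 * bits, axis=1)          # product of +-1 outcomes
print(parity.mean(), np.cos(N * theta))         # sample mean ~= cos(N*theta)
```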
In general, measurement of operators that do not commute with a given Hamiltonian is experimentally hard [40], and thus we cannot perform single-qubit measurement unless interactions are turned off. Recently, adiabatic transformation has been discussed [41][42][43] as a method of interaction-based readouts [44][45][46][47] to change readout protocols. In particular, adiabatic transformation of the transverse field was introduced for the ferromagnetic Ising model to replace parity measurement with global magnetization measurement [42]. However, to achieve the Heisenberg limit, complicated optimization of the transverse field is necessary to adjust a redundant relative phase, which may not be suitable for practical use. Moreover, the dynamical range is limited, i.e., the Heisenberg limit scaling is achieved only for specific values of a target parameter. It is also unclear for protocols based on adiabatic transformation whether or not they can beat the SQL when we take into account time duration for state preparation and readout. In this paper, we consider a scheme for quantum metrology, in which we use the macroscopically entangled state in the ferromagnetic Ising model. This state is prepared by adiabatically decreasing the transverse field. After exposing the macroscopically entangled state to a target longitudinal field, we adiabatically induce the transverse field again. This process is also protected by the symmetry, conserving the parity. Consequently, we can extract full information of the parity by global magnetization measurement. For the strong transverse field, an operator to be measured commutes with the dominant part of the Hamiltonian (transverse field term), and thus our scheme is feasible in experiments. We discuss the effects of the finite transverse field and nonadiabatic transitions as imperfection of adiabatic transformation. By taking into account finite time duration for state preparation, sensing, and readout, we also compare performance of the present scheme with a classical scheme in the absence and presence of dephasing. A. Quantum metrology In this section, we briefly review the theory of quantum metrology (for details, see Refs. [1-3] and references therein). A typical procedure of quantum metrology is as follows. We prepare a probe state |Ψ⟩ and expose it to a target parameter θ as |Ψ_θ⟩ = exp(iθĴ)|Ψ⟩, where we assume that the generator Ĵ is the summation of N local operators and its maximum (minimum) eigenvalue is N/2 (−N/2). Then, we measure an observable Ô of the probe and obtain a measurement outcome. By repeating this process many times, we estimate the target parameter θ. The uncertainty of the estimation is given by the error-propagation formula

δθ_est = ΔÔ / (√M |∂⟨Ô⟩_θ/∂θ|), (1)

where ΔÔ = (⟨Ô²⟩_θ − ⟨Ô⟩_θ²)^{1/2} with ⟨···⟩_θ = ⟨Ψ_θ|···|Ψ_θ⟩. Here, M is the number of measurements. According to the Cramér-Rao bound, the uncertainty of the estimation is lower bounded by the quantum Fisher information, i.e., δθ_est ≥ 1/√(M F_Q), where F_Q = 4⟨∂_θΨ_θ|(1 − |Ψ_θ⟩⟨Ψ_θ|)|∂_θΨ_θ⟩ is the quantum Fisher information. For a probe state satisfying ⟨Ψ|Ĵ²|Ψ⟩ = N²/4 and ⟨Ψ|Ĵ|Ψ⟩ = 0, the Cramér-Rao bound provides the ultimate limit δθ_HL = 1/(N√M), which is the Heisenberg limit. Note that the Cramér-Rao bound also provides the limit of classical sensors composed of separable states, δθ_SQL = 1/√(NM), which is the SQL. In physical setups, the target parameter θ is the product of a physical target parameter ω and the time duration for sensing (interaction with the target parameter) T_int, i.e., θ = ωT_int.
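The error-propagation formula (1) can be checked numerically for the ideal GHZ-plus-parity case: since ⟨Π̂⟩_θ = cos(Nθ) and Π̂² = 1, Eq. (1) reduces to the Heisenberg limit 1/(N√M) independently of θ. This is a minimal sketch of our own, with illustrative N and M:

```python
import numpy as np

# Minimal sketch: error propagation for GHZ parity readout.
# <Pi>(theta) = cos(N*theta) and Pi^2 = identity, so
# dtheta = sqrt(1 - <Pi>^2) / (sqrt(M) * |d<Pi>/dtheta|) = 1/(N*sqrt(M)).
N, M = 100, 1000
theta = np.linspace(1e-4, np.pi / (2 * N), 50)   # avoid the fringe extremum
mean_parity = np.cos(N * theta)
std_parity = np.sqrt(1.0 - mean_parity**2)       # since Pi^2 = identity
slope = np.abs(-N * np.sin(N * theta))           # |d<Pi>/dtheta|
dtheta = std_parity / (np.sqrt(M) * slope)
assert np.allclose(dtheta, 1.0 / (N * np.sqrt(M)))
print(dtheta[0], 1 / (N * np.sqrt(M)))           # Heisenberg limit 1/(N sqrt(M))
```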
Then, the uncertainty of the estimation is given by

δω_est = δθ_est / T_int = ΔÔ / (√M T_int |∂⟨Ô⟩_θ/∂θ|). (2)

The Heisenberg limit and the SQL are also rewritten as

δω_HL = 1/(N T_int √M) (3)

and

δω_SQL = 1/(√(NM) T_int), (4)

respectively. Moreover, for a given total time T, the number of measurements can be expressed in terms of the time durations for state preparation T_prep, sensing T_int, and readout T_read as

M = T/(T_prep + T_int + T_read). (5)

When T_prep → 0, T_read → 0, and T_int → T, these limits are minimized as

δω_HL,min = 1/(NT) (6)

and

δω_SQL,min = 1/(√N T), (7)

respectively. Note that the minimized Heisenberg limit (6) is not realistic because a statistical average for obtaining the expectation value of the observable is neglected. Therefore, another minimized Heisenberg limit,

δω*_HL,min = 1/(N √(T_int T)), (8)

is also used, where T_prep → 0 and T_read → 0, but T_int ≪ T so that M ≫ 1. B. Model As a probe system, we consider the following infinite-range Ising model with a transverse field:

Ĥ_0(h^x) = −J Σ_{i<j} Ẑ_i Ẑ_j − h^x Σ_i X̂_i, (9)

where we express the Pauli matrices as {X̂, Ŷ, Ẑ}, and J and h^x are the interaction strength and the amplitude of the transverse field, respectively. We assume that h^x is tunable, while J is fixed. This is a reasonable assumption for many physical systems. In addition, we assume N to be even for simplicity. Our purpose is to estimate a target longitudinal field h^z. In a sensing process,

Ĥ_1 = −h^z Σ_i Ẑ_i (10)

is added to the Hamiltonian (9). Here, the relationship between the target parameter ω in the previous section and the target longitudinal field h^z is given by ω = 2h^z. For convenience, we use the eigenvectors |N/2, m⟩_W of the collective spin operators

Ŝ_W = (1/2) Σ_i Ŵ_i, W = X, Y, Z, (11)

which satisfy

Ŝ_W |N/2, m⟩_W = m |N/2, m⟩_W, (12)

to express the energy eigenstates of the Hamiltonian (9). Here we suppose that the system is confined in the maximum spin subspace satisfying Σ_{W=X,Y,Z} Ŝ_W² = N/2 × (N/2 + 1), i.e., m = −N/2, −N/2 + 1, …, N/2. This system (9) conserves the parity

Π̂ = ∏_{i=1}^{N} X̂_i, (13)

i.e., the commutation relation between the Hamiltonian (9) and the parity operator (13) becomes zero. That is,

[Ĥ_0(h^x), Π̂] = 0 (14)

for any h^x (see, e.g., Refs. [36][37][38][39]). Therefore, the (N + 1) energy eigenstates of the Hamiltonian (9) in the maximum spin subspace are classified into two sets, {|ψ_n(h^x)⟩} with the parity Π = +1 and {|φ_n(h^x)⟩} with the parity Π = −1, in ascending order of energy, respectively. These energy eigenstates are given by

|ψ_n(∞)⟩ = |N/2, N/2 − 2n⟩_X, |φ_n(∞)⟩ = |N/2, N/2 − (2n + 1)⟩_X (15)

in the h^x → ∞ limit and

|ψ_n(0)⟩, |φ_n(0)⟩ = (1/√2)(|N/2, N/2 − n⟩_Z ± |N/2, −N/2 + n⟩_Z) (16)

for n = 0, 1, …, N/2 − 1, with |ψ_{N/2}(0)⟩ = |N/2, 0⟩_Z, in the h^x → 0 limit. Notably, the degenerate ground states |ψ_0(0)⟩ and |φ_0(0)⟩, which are known as the GHZ states, can achieve the Heisenberg limit (3) by parity measurement [9]. For example, we can obtain the expectation value of the parity (13) by implementing single-qubit measurement of X̂ on each qubit and multiplying the measurement outcomes, and by averaging the product over many independent and identically distributed samples. However, single-qubit measurement of X̂ is nontrivial for the present model because each X̂ does not commute with the interaction term of the Hamiltonian. If the interaction term is much smaller than the resonant frequency of the qubits, we can perform single-qubit rotation along the y-axis by π/2 and subsequent single-qubit measurement of Ẑ, which commutes with the interaction term of the Hamiltonian, for each qubit. The measurement outcome is equivalent to X̂ of the original state. However, when the interaction term is as large as or larger than the resonant frequency of the qubits, we cannot use this method. Other approaches are necessary to measure the parity. C. State preparation and readout based on symmetry-protected adiabatic transformation In this section, we explain our scheme with a reasonable readout protocol extracting full information of the parity.
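The conservation law (14) is easy to verify numerically in the collective-spin basis. The sketch below is our own; it assumes the normalization of Eq. (9) written in the symmetric subspace, where Σ_{i<j} Ẑ_iẐ_j = 2Ŝ_Z² − N/2 and the parity acts as (−1)^{N/2−m_x} on |N/2, m_x⟩_X:

```python
import numpy as np
from scipy.linalg import expm, eigh

# Minimal sketch (collective-spin / Dicke basis, S = N/2). In this subspace
# H0 = -J*sum_{i<j} Z_i Z_j - h^x sum_i X_i reads
# H0 = -J*(2*Sz^2 - N/2) - 2*h^x*Sx.
N = 10
S = N / 2
m = np.arange(S, -S - 1, -1)                     # Sz eigenvalues S, ..., -S
Sz = np.diag(m)
cp = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))  # raising-operator elements
Sp = np.diag(cp, k=1)
Sx = (Sp + Sp.T) / 2

J, hx = 1.0 / N, 1.0                             # J = O(1/N) keeps JN = 1
H0 = -J * (2 * Sz @ Sz - (N / 2) * np.eye(N + 1)) - 2 * hx * Sx

# Spin-flip parity Pi acts as (-1)^(N/2 - m_x) on |N/2, m_x>_X.
Pi = expm(1j * np.pi * (S * np.eye(N + 1) - Sx)).real.round(10)

print(np.abs(H0 @ Pi - Pi @ H0).max())           # ~0: parity is conserved
E, V = eigh(H0)
print(V[:, 0] @ Pi @ V[:, 0])                    # ground-state parity (+1)
```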
First, we generate |ψ_0(0)⟩ by adiabatic transformation, i.e., we prepare the trivial ground state |ψ_0(∞)⟩ as the initial state and adiabatically change the transverse field h^x from infinity to zero [33][34][35]. We then expose the system to the target longitudinal field h^z during a time interval T_int. As mentioned in the previous section, we do not assume a situation where the interaction term can be turned off during sensing. Finally, we adiabatically change the transverse field h^x again to infinity, and then the probe state becomes

|Ψ_θ⟩ = cos(θN/2)|ψ_0(∞)⟩ + e^{iα} sin(θN/2)|φ_0(∞)⟩, θ = 2h^z T_int, (17)

except for a global phase factor [42]. Here, α is a relative phase accompanying the adiabatic transformation of the transverse field h^x. In Ref. [42], global magnetization measurement of Ŝ_Z was discussed, but we consider global magnetization measurement of Ŝ_X (or projection measurement of Ŝ_X = N/2, i.e., measuring P = |⟨N/2, N/2|_X |Ψ_{θ=2h^z T_int}⟩|²). The uncertainty of the estimation achieves the Heisenberg limit (3),

δh^z_est = 1/(2N T_int √M). (18)

Here we explain key points of the present scheme. The first point is that adiabatic transformation for state preparation and readout is protected by symmetry. That is, owing to the spin-flip symmetry, nonadiabatic transitions between the ground state and the first excited state (the degenerate ground state for small h^x) do not take place [36][37][38][39]. It mitigates the adiabatic condition. The second point is that the full information of the target longitudinal field h^z, which is encoded on the amplitudes of different parity eigenstates with a factor N in the sensing process, is completely mapped to the amplitudes of different magnetization eigenstates of Ŝ_X because of the parity conservation due to the spin-flip symmetry. Therefore, the present scheme achieves the Heisenberg limit. It is also an important point that the observable Ŝ_X commutes with the dominant part of the Hamiltonian. Global magnetization measurement of Ŝ_Z discussed in Ref. [42] leads to similar results, but complicated nonlinear adjustment of the relative phase α is required and the dynamical range is limited (see Appendix A). Note that if we can apply the π/2 pulse along the y-axis, we can replace Ŝ_X measurement with Ŝ_Z measurement. D. Phase shift While the present scheme achieves the Heisenberg limit for any h^z, both the denominator and the numerator in Eq. (2) vanish for h^z ≪ 1 because the expectation value of Ŝ_X is a sine-squared function (the expectation value of the projection measurement P is a cosine-squared function). However, in noisy situations, the numerator typically has a finite value, while the denominator is infinitesimal for small h^z, resulting in divergence of the uncertainty. For example, the numerator becomes large when the readout measurement becomes noisy [48,49]. To avoid such a problem, we introduce a phase shift. The target parameter can be divided into two parts, h^z = h^z_k + h^z_u, where h^z_k is a known part and h^z_u is an unknown part. By performing prior estimation with a classical sensor, we can assume that an approximate value of h^z is known, i.e., h^z_k ≈ h^z and h^z_u ≪ 1. Then, we try to estimate h^z_u by entanglement-enhanced sensing for further improvement of precision. As a phase shift, we add an offset h^z_0 so that 2(h^z_k + h^z_0)N T_int = (2n + 1)π/2 with an integer n. Then, the denominator of Eq.
(2) turns into |∂⟨Ŝ_X⟩_{θ=2h^z T_int}/∂h^z| = N T_int |cos(2h^z_u N T_int)| for global magnetization measurement (|∂P/∂h^z| = N T_int |cos(2h^z_u N T_int)| for projection measurement), which does not vanish for small h^z_u. This phase shift is necessary for beating the SQL when we take into account the finite transverse field, dephasing, and nonadiabatic transitions. E. Dephasing Dephasing during the sensing process is a main obstacle for quantum-enhanced sensing. Here we explain the effect of time-inhomogeneous dephasing (non-Markovian dephasing) during the sensing process. Note that, in the following discussion, we always apply the phase shift discussed in the previous section. As a reference scheme, we consider an ensemble of N qubits without entanglement and assume that the time duration for state preparation and readout is negligibly small. In the presence of non-Markovian dephasing, the uncertainty of the estimation is given by

δh^z_est = e^{Γ²T_int²/2} / (2√N √(T_int T)), (19)

where Γ is the dephasing rate (the decay rate of the off-diagonal elements). This uncertainty of the estimation is minimized when T_int² = 1/2Γ², and then the reference scheme gives the minimized SQL under dephasing,

δh^z_SQL,deph,min = (2eΓ²)^{1/4} / (2√N √T). (20)

In schemes using the GHZ state, we also take into account time-inhomogeneous dephasing during the sensing process, and then the uncertainty of the estimation is given by

δh^z_est = e^{Γ²N T_int²/2} √(T_prep + T_int + T_read) / (2N T_int √T). (21)

When the time duration for state preparation and readout is negligibly small, it is minimized for T_int² = 1/2Γ²N, and then we obtain the Zeno limit scaling δh^z_est = (2eΓ²)^{1/4}/(2N^{3/4}√T) [11,12]. Moreover, with this sensing time, we can still beat the SQL in the sense of scaling when the time duration for state preparation and readout is T_prep + T_read < O(N⁰) [50], although such fast state preparation and readout may not be realistic for many-body entanglement creation. In the entanglement scheme with T_prep + T_read ≥ O(N⁰), constant-factor improvement over the SQL is possible when the equality is satisfied, and conditional improvement over the SQL is still possible when the number of qubits N is smaller than a certain threshold [50]. Even in this case, by preparing sub-ensembles consisting of N′ (< N) qubits, where the number of qubits in each sub-ensemble N′ satisfies the threshold, we can perform entanglement-enhanced sensing with a large number of qubits N [50]. A. Finite transverse field We considered the infinite transverse field in Sec. II C. In this section, we discuss the case of a finite transverse field, i.e., we change the transverse field h^x from h^x_0 (0) to 0 (h^x_0) in the state preparation (readout) process. In particular, we derive conditions on the transverse field for achieving the Heisenberg limit scaling. Let us discuss two approaches to prepare the initial state. The first approach is as follows: for h^x_0/JN ≫ 1, we prepare the ground state |ψ_0(h^x_0)⟩ as the initial state, which can be done by cooling the system because of the large energy gap. However, in this case, a long operation time is required to adiabatically change the transverse field from large h^x_0 to 0 and from 0 to large h^x_0. The other approach is as follows: we apply a strong magnetic field h^x/JN ≫ 1, perform projection measurement of |ψ_0(∞)⟩ = |N/2, N/2⟩_X, and implement a sudden quench to h^x_0 satisfying h^x_0/JN ≈ 1. In this case, the operation time to satisfy the adiabatic condition can be shorter than in the first approach, while the initial state becomes |ψ_0(∞)⟩. This state is not the ground state of the given Hamiltonian, but close to it, as discussed later.
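The optimal sensing times quoted above follow from minimizing the dephasing-dressed uncertainties over T_int. The following sketch is our own; it assumes Gaussian coherence decay e^{−Γ²t²/2}, negligible preparation/readout time, and illustrative values of N, Γ, and T, and it recovers the optima T_int = 1/(√2 Γ) and T_int = 1/(√(2N) Γ) and the Zeno-limit value numerically:

```python
import numpy as np

# Minimal sketch: sensing-time optimization under non-Markovian dephasing
# for the classical ensemble and the GHZ probe (GHZ dephases N times faster).
def uncertainty(N, Gamma, T_int, T, entangled):
    M = T / T_int                                   # number of repetitions
    if entangled:
        return np.exp(Gamma**2 * N * T_int**2 / 2) / (2 * N * T_int * np.sqrt(M))
    return np.exp(Gamma**2 * T_int**2 / 2) / (2 * np.sqrt(N) * T_int * np.sqrt(M))

N, Gamma, T = 100, 0.05, 1e4
T_grid = np.logspace(-2, 3, 20000)
for entangled, T_opt in [(False, 1 / (np.sqrt(2) * Gamma)),
                         (True, 1 / (np.sqrt(2 * N) * Gamma))]:
    vals = uncertainty(N, Gamma, T_grid, T, entangled)
    print(T_grid[np.argmin(vals)], T_opt)           # numerical vs analytic optimum

# Zeno-limit value for the GHZ probe, (2 e Gamma^2)^(1/4) / (2 N^(3/4) sqrt(T)):
print(uncertainty(N, Gamma, 1 / (np.sqrt(2 * N) * Gamma), T, True),
      (2 * np.e * Gamma**2) ** 0.25 / (2 * N**0.75 * np.sqrt(T)))
```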
Note that h^x/JN = 1 is the critical point in the thermodynamic limit, and thus we cannot prepare the ground state by cooling because of the small energy gap. Similarly to Sec. II C, we adiabatically turn off the transverse field from h^x_0 to 0, expose the system to the target longitudinal field h^z, and adiabatically turn on the transverse field from 0 to h^x_0. The probe state becomes in the former case of initial state preparation, and, in the latter case, where g_n = ⟨ψ_n(h^x_0)|ψ_0(∞)⟩ = ⟨ψ_n(h^x_0)|N/2, N/2⟩_X is the overlap between the initial state and the ground state. Here, α′, α_n, and γ_n are relative phases. Finally, we perform the projection measurement of |ψ_0(∞)⟩ = |N/2, N/2⟩_X and obtain, in the former case, and, in the latter case, the survival probability of this measurement. We can immediately find an upper bound for the uncertainty of the estimation (2) in the former case, and, after some calculations, we can also derive an upper bound in the latter case for |g_0|⁴ > 1/2 when the condition 0 ≤ 2h^z N T_int ≤ π/2 is satisfied (see Appendix B for the derivation). Notably, the factor sin(2h^z N T_int) becomes unity when we consider the phase shift discussed in Sec. II D, and the right-hand sides of Eqs. (25) and (26) exactly coincide with the Heisenberg limit when |g_0|² = 1. These bounds guarantee the Heisenberg limit scaling when the overlap between the initial state and the ground state, |g_0|² = |⟨ψ_0(h^x_0)|ψ_0(∞)⟩|², satisfies |g_0|² = Θ(N⁰) in the former case and 2|g_0|⁴ − 1 = Θ(N⁰) in the latter case, respectively. We plot the overlap |g_0|² and the latter threshold |g_0|⁴ = 1/2 in Fig. 1. We find that the initial condition h^x_0/JN ≈ 2 is large enough for achieving the Heisenberg limit scaling, and the condition h^x_0/JN = 1 is enough for beating the SQL when N ≤ 100. B. Finite time duration for state preparation and readout In this section, we take into account finite time duration for state preparation and readout, and discuss conditions for beating the minimized SQL (7) and for achieving similar scaling to the minimized Heisenberg limits (6) and (8). From the derivation of these minimized limits, faster implementation of state preparation and readout than that of sensing seems necessary, and then one may suspect that critical slowing down could spoil the effectiveness of our scheme, as in the case of criticality-based quantum metrology [51,52], i.e., long time duration for state preparation and readout based on adiabatic transformation restricts the sensing time T_int, and the total process may be beaten by even the SQL. However, our conclusion is that the time duration for state preparation and readout is not necessarily shorter than that for sensing. For simplicity, we assume that T_prep = T_read = T_a in the present paper. In the present model, the energy gap appears at the critical point h^x/JN = 1, and it scales as ΔE = O(N^{−1/3}) [53][54][55]. Therefore, according to the adiabatic condition, the time duration for state preparation and readout is roughly given by

2T_a = C N^{2/3} (27)

with an N-independent constant C (see also Appendix C). To satisfy T ≫ T_int + 2T_a, the total time T must scale as at least

T = C̃ N^{2/3} (28)

with an N-independent constant C̃ ≫ C. Then we find that the condition for beating the SQL, i.e., δh^z_est < δh^z_SQL,min, is given by
Then we find that the condition for beating the SQL, i.e., δh z est < δh z SQL,min , is given by Therefore, even if the interaction time with the target field is much shorter than state preparation and readout, i.e., T int < 2T a = O(N 2/3 ), we can beat the SQL by setting Next, by increasing time duration for sensing, we show how the uncertainty is improved and when the Heisenberg limit scaling is achieved. To elucidate these points, we rewrite the uncertainty of the estimation as δh z est = δh z SQL,min /η and δh z est = δh z * HL,min /η ′ , where η and η ′ are given by and respectively. We set the sensing time T int as where ǫ ≥ 0. Since T int ≤ T = O(N 2/3 ), we must keep ǫ ≤ 1/2. Then, we find for 0 < ǫ < 1/2, and for ǫ = 1/2. That is, we can beat the SQL and improve the uncertainty by N ǫ for 0 < ǫ < 1/2 and achieve the Heisenberg limit scaling for ǫ = 1/2. We can also find similar results for Eq. (6) (see, Appendix D). C. Dephasing As mentioned in the previous section, time duration for state preparation and readout in our scheme is given by T prep +T read = CN 2/3 . Therefore, our scheme cannot achieve even the SQL scaling in the presence of dephasing as discussed in Sec. II E. However, our scheme can still beat the SQL for specific number of qubits N . In the presence of dephasing, time duration for sensing must be much smaller than that for state preparation and readout, T int ≪ T prep +T read , and thus the uncertainty of the estimation is roughly given by δh z est ≈ T prep + T read e Γ 2 N T 2 int /2 /2N T int √ T , which is minimized for T 2 int = 1/Γ 2 N . For this sensing time, the condition for beating the mimimized SQL, i.e., δh z est < δh z SQL,deph,min , is given by For various values of ΓC, we plot the left-hand side of this equation with the right-hand side in Fig. 2. We find that, for small ΓC, we can beat the SQL for specific number of qubits N . Now, we discuss the value of ΓC. According to the analysis in Ref. [54], we find that the constant C is given by JN C = (h x 0 /JN )C with a dimensionless constantC (see, Appendix C). The dimensionless constantC is roughly given by O(1) or O(10) depending on the required fidelity. As we show in Sec. III A, h x 0 /JN = 2 is large enough for our scheme, and thus JN C can also be O(1) or O(10). Therefore, for beating the SQL with dozen or several hundreds of qubits, it is expected that Γ/JN should be O(10 −2 ) or O(10 −3 ) at worst. D. Nonadiabatic time scale Finally, we discuss performance of our scheme with small system size N = 10, 20, . . . , 100 in nonadiabatic time scale. We set h x 0 = JN and change the transverse field h x as h x = h x 0 cos(πt/2T a ) for 0 ≤ t ≤ T a , which was introduced as coherent driving in Ref. [35] and is similar to a geometrically optimal schedule [38]. Under this transverse field, we can shorten the operation time T a because nonadiabatic transitions and interference result in high fidelity to the GHZ state even in nonadiabatic time scale [35]. We also change the transverse field h x as h In the following numerical simulations, we set JN = 1 and omit M . First, we optimize the operation time T a for the sensing time T int = 0. We plot (red circles) the fidelity of the probe state to the GHZ state |ψ 0 (0) at the time t = T a and (green triangles) that to the initial state |ψ 0 (∞) at the time t = 2T a + T int = 2T a for N = 10 in Fig. 3. Here, interference appears when nonadiabatic transitions take place, and thus these quantities show oscillating behavior. 
We find a locally optimal operation time T_a ≈ 150(2JN²)^{−1} showing high fidelity to the GHZ state (∼0.97) and to the initial state (∼0.91). Now, we set T_a = 150(2JN²)^{−1} and study the uncertainty of the estimation (2) for the infinitesimal target parameter h^z_u with the phase shift discussed in Sec. II D. We calculate the denominator of Eq. (2) by finite difference, ∂P/∂h^z_u ≈ (P|_{h^z_u=10^{−10}} − P|_{h^z_u=0})/10^{−10}. The sensing time T_int contributes to relative phases between different levels, and it affects the uncertainty of the estimation (2) [see Eq. (24)]. Therefore, we plot the uncertainty of the estimation (2) with respect to T_int in Fig. 4. We find that the uncertainty is very close to the Heisenberg limit. Indeed, the uncertainty of the estimation (2) achieves δh^z_est ≈ 1.07/2NT_int on average for (2JN²)T_int = 1, 3, 5, …, 199. Here, (h^z_k + h^z_0)/JN = π/2. Note that 1.07 ≈ (0.93)^{−1}, and thus it is smaller than that expected from the fidelity to the GHZ state (∼0.97) and a little bit larger than that expected from the fidelity to the initial state (∼0.91) for T_int = 0. We also discuss these quantities for other system sizes below. Typically, the interaction strength J can be O(N^{−1}) [56], and thus it means that we can set T_prep(read) = T_a = O(N⁰) for small system sizes. This time duration is much faster than that for state preparation and readout satisfying the adiabatic condition, T_prep(read) = T_a = O(N^{2/3}), and thus we can use much longer time for sensing or increase the number of measurements. By using these locally optimal operation times, we calculate the uncertainty for several N against T_int (see Appendix E). We find that the uncertainty has some dependence on T_int and that it slightly deviates from the Heisenberg limit. We express the average uncertainty of the estimation as δh^z_est = 1/2pNT_int, where p (0 ≤ p ≤ 1) is an index denoting how close the uncertainty is to the Heisenberg limit. Here, the uncertainty is averaged over (2JN²)T_int = 1, 3, 5, …, 199. For the locally optimal times T_a in Fig. 5, the fidelity to the GHZ state (red circles), that to the initial state (green triangles), and the index p (blue squares) are calculated and plotted in Fig. 6. These quantities show complicated behavior against the number of qubits because of nonadiabatic transitions and interference. Remarkably, the uncertainty surpasses the SQL. Note that the shown performance is not the best; there exist other, longer operation times showing better performance. If the coherence time is long enough, we can choose those operation times. IV. SUMMARY We considered quantum metrology based on symmetry-protected adiabatic transformation. In this protocol, parity measurement, which is difficult to implement in experiments, is replaced with simple global magnetization measurement by adiabatic transformation of the transverse field. Here, we exploited the fact that the parity is a conserved quantity because of the spin-flip symmetry. We discussed the effects of the finite transverse field and nonadiabatic transitions as imperfection of adiabatic transformation. By taking into account finite time duration for state preparation, sensing, and readout, we also compared the performance of the present scheme with the classical scheme in the absence and presence of dephasing.
FIG. 6. System size dependence of (red circles) the fidelity of the probe state at time t = T_a to the GHZ state, (green triangles) that of the probe state at time t = 2T_a with T_int = 0 to the initial state, and (blue squares) the index p, which shows how close the uncertainty is to the Heisenberg limit on average. Here we use the locally optimal times T_a plotted in Fig. 5. The error bar in the index p represents the standard deviation, and the dotted curve represents the SQL. In this paper, we considered the finite transverse field, finite time duration for state preparation, sensing, and readout, dephasing, and nonadiabatic transitions as possible situations. We leave the effects of other errors and noises as future work, but we mention some evidence of robustness against various errors and noises. Our protocol utilizes the ground state, and thus decay from excited states to lower energy states during entanglement generation is less problematic than in conventional dynamical approaches. In addition, the offset discussed in Sec. II D makes our protocol robust against measurement imperfection, as in the case of the finite transverse field, dephasing, and nonadiabatic transitions. Robustness of dynamics against bias, which breaks symmetry-protected conservation laws, during symmetry-protected adiabatic transformation was discussed in Ref. [39]. Robustness of entanglement generation against a loss process, which breaks a symmetry-protected conservation law and confinement in a subspace of the Hilbert space, during (super)adiabatic transformation was discussed in Ref. [37]. Symmetry-protected superadiabatic transformation [37,57] based on shortcuts to adiabaticity [58] can also speed up the present protocol and reduce negative effects. Appendix A: Global magnetization measurement of Ŝ_Z discussed in Ref. [42] In Ref. [42], global magnetization measurement of Ŝ_Z was discussed for the probe state (17). Its measurement outcome is given in (A1) and its standard deviation in (A2). Therefore, the uncertainty of the estimation is given by (A3), i.e., it satisfies the Heisenberg limit, but cancellation of the relative phase α is necessary and its sensing range is limited due to the factor cos(2h^z N T_int) even in the ideal situation. We can obtain the same conditions even if we consider the other minimized Heisenberg limit (6), which is frequently used, but a statistical average is ignored.
2021-04-08T01:16:07.437Z
2021-04-07T00:00:00.000
{ "year": 2021, "sha1": "41c18dc187274da8c02c015054999a60355dbcde", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "41c18dc187274da8c02c015054999a60355dbcde", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236468849
pes2o/s2orc
v3-fos-license
Numerical Investigation of Droplet Properties of a Liquid Jet in Supersonic Crossflow The atomization process of a liquid jet in supersonic crossflow with a Mach number of 1.94 was investigated numerically under the Eulerian-Lagrangian scheme. The droplet stripping process was calculated by the KH (Kelvin-Helmholtz) breakup model, and the secondary breakup due to the acceleration of shed droplets was calculated by the combination of the KH breakup model and the RT (Rayleigh-Taylor) breakup model. In our research, the existing KH-RT model was modified by optimizing the empirical constants incorporated in this model. Moreover, it was also found that the modified KH-RT breakup model applies better to turbulent inflow of a liquid jet than to laminar inflow, as concluded from comparisons with experimental results. To validate the modified breakup model, the three-dimensional spatial distribution and downstream distribution profiles of droplet properties of the liquid spray in the Ma = 1.94 airflow were successfully predicted in our simulations. Eventually, abundant numerical cases under different operational conditions were launched to investigate the correlations of SMD (Sauter Mean Diameter) with the nozzle diameter as well as the airflow Mach number, and at the same time, modified multivariate power functions were developed to describe the correlations. Introduction Within the combustion chamber of the scramjet engine, a transverse liquid jet is injected into the supersonic airflow at a certain flow rate. Once exposed to the incoming airflow with a high Mach number, the jet deforms immediately and breaks apart into small fragments rapidly under strong aerodynamic forces [1]. No clear boundaries can be utilized to divide the whole breakup process into several specific stages. However, generally, the process of the liquid column breaking apart into pieces of initially large droplets can be recognized as "the primary breakup," while the process of initially large droplets further breaking up into small-sized droplets is usually recognized as "the secondary breakup." Gorokhovski [2] pointed out that the primary breakup has a significant impact on the subsequent droplet formation and dispersion, and yet it is hugely challenging to understand the whole physical mechanisms within, owing to the various complicated structures in the flow field as well as the large time and space spans, which make it even harder to investigate the microstructures both experimentally and numerically. On the other hand, since our goal is to produce small droplets downstream with appropriate sizes to increase the evaporation and mixing rates of the liquid fuel, one of the most significant practical motivations to investigate the atomization process is to determine the conditions in which the desired final fragment sizes can be acquired. As noted by Tryggvason [3], the highest airflow velocity does not always guarantee the smallest droplet diameter. Therefore, the secondary breakup process should also be clearly understood to meet actual engineering demands. Plenty of experimental investigations have been launched regarding the liquid jet in supersonic crossflow. With the application of laser technology and computer technology, many advanced testing technologies have been developed, such as the phase Doppler particle analyzer (PDA/PDPA/LDV), laser scattering/laser holography technology, laser-induced fluorescence technology (PLIF), and the particle image velocimetry instrument (PIV).
The liquid spray properties measured by different instruments turn out differently. Koh [4] found that the mass distribution measured by PDA is obviously smaller than that of optical imaging. Lin [5] contrasted the effects on the penetration height caused by different optical approaches and pointed out that PDA is more sensitive to thin liquid mist, thus measuring a higher penetration height than approaches such as high-speed photography and schlieren. Lin [6] studied the structures of water jets injected into a Ma = 1.94 crossflow by utilizing a two-component PDPA and discovered the S-type distribution profiles of droplet size at downstream positions. Moreover, the correlations between operational conditions and droplet properties can be investigated, and the physical mechanisms of the atomization process can be further understood, by numerical calculations. Currently, numerical approaches to investigating the whole gas-liquid mixing and interaction process in supersonic conditions can be generally divided into two types. One is to capture the movement of the gas-liquid interface based on the Eulerian scheme, such as the VOF (volume of fluid) and LS (level set) approaches. These approaches are highly accurate but come at huge computational cost. The other is to track the position of single droplets by integration in time based on the Lagrangian scheme. In this paper, we focus on the Lagrangian method because, compared with the Eulerian scheme, it remarkably reduces the computational cost and combines compatibly with breakup models, although the initial size and velocity distributions of large droplets need to be determined. In the early stages of investigation, owing to the restrictions of experimental conditions, most research focused on the dependence of deformation and breakup of droplets on dimensionless numbers such as the Reynolds number and Weber number [7][8][9]. At present, it is still extremely difficult to simulate the whole atomization process from continuous liquid column to uniform spray plume with one single physical model. Therefore, existing breakup models have been coupled and improved to calculate breakup processes of liquid jets and obtain good agreement with experimental results. Currently, the theories of surface waves induced by the Kelvin-Helmholtz (KH) and the Rayleigh-Taylor (RT) instabilities are most often invoked to explain the essential mechanisms of the liquid breakup process [10]. The KH-RT breakup model is based on these surface wave theories and takes both KH and RT instabilities into account. At present, it is considered the most appropriate breakup model for revealing the true breakup mechanisms of the liquid phase in the supersonic environment. Liu [11] studied the breakup parameters of the KH-RT model and pointed out that, in the supersonic environment, parameters controlling the breakup time have quite limited impact because the breakup time in supersonic airflows is very short. Yang et al. (Yang, Zhu, Sun, and Chen, 2017) [12] improved the KH-RT breakup model by considering the compressible effects of the gaseous environment and found that the penetration height of the improved model agreed well but the spread and size distributions of the liquid spray still differed from the experimental results. Li et al.
(Li, Wang, Sun, and Wang, 2017) [13] adopted the KH breakup model to simulate the droplet stripping process near the nozzle and coupled the RT breakup and TAB (Taylor analogy breakup) models to simulate the secondary breakup of droplets. In that research, an LES code for two-phase flow was run on a high-resolution grid, and the downstream atomization characteristic profiles obtained were in good agreement with the experimental results. The inflow turbulence of a liquid jet has been widely investigated both experimentally and numerically. It has been noted to control the primary breakup process of a liquid jet in supersonic crossflow and thus to further control the downstream droplet properties. Mazallon et al. (Mazallon, Dai, and Faeth, 1999) [14] and Sallam et al. [15] investigated the laminar liquid jet injected into a uniform gaseous crossflow by using pulsed shadowgraphy and pulsed holography. It was found that surface waves formed on the upstream side of the laminar jet column, and that the wavelength decreased with increasing gaseous Weber number, which indicated that the surface waves in the primary breakup process originate from RT instability. Xiao et al. (Xiao, Dianat, and McGuirk, 2013) [16] drew the same conclusion with a two-phase-flow large-eddy simulation. It was also concluded that it is liquid rather than gaseous turbulence that determines the initial liquid-jet instability and interface characteristics. Lee et al. (Lee, Aalburg, Diez, Faeth, and Sallam, 2007) [17] discovered that the SMD of droplets stripped from the surface of a turbulent liquid jet column was not influenced by the crossflow, and thus the liquid turbulence controls the primary breakup process. In this paper, the atomization process of a liquid jet in supersonic crossflow with a Mach number of 1.94 was investigated numerically using the Lagrangian method. In this method, large droplets with initial size and velocity distributions were injected into a supersonic crossflow in substitution for a continuous liquid jet. Then, the droplets broke apart into small fragments, calculated with the modified KH-RT breakup model, and eventually formed the spray plume. This paper is organized as follows: In Section 2, relevant mathematical models are presented or modified, including the dynamic equations, breakup models, and states of internal nozzle flow. In Sections 3 and 4, computational conditions are introduced, and our modified models are verified by comparing our numerical results with experiment; furthermore, the correlation functions between SMD and two important parameters are summarized. Eventually, several important conclusions are presented in Section 5. Mathematical Models Gas-Phase Dynamic Equations. The continuous phase is governed by the compressible Navier-Stokes equations with a source term accounting for momentum exchange with the droplets: ∂ρ/∂t + ∇·(ρu⃗) = 0 and ∂(ρu⃗)/∂t + ∇·(ρu⃗u⃗) = −∇P + ∇·τ̿ + F⃗, where ρ is the gas density, u⃗ is the gas velocity, P is the static pressure, the gas dynamic viscosity enters the viscous stress tensor τ̿, and F⃗ is the momentum source term of the droplets. The drag force per unit droplet mass is (u⃗ − u⃗_p)/τ_r, with droplet relaxation time τ_r = (ρ_p d_p²/18μ)(24/C_d Re) [18] and relative Reynolds number Re = ρ d_p|u⃗_p − u⃗|/μ, where C_d is the drag coefficient, u⃗_p is the droplet velocity, ρ_p is the droplet density, d_p is the droplet diameter, and F_other denotes other forces per unit gas mass. Liquid-Phase Dynamic Equation. The droplet equation of motion is du⃗_p/dt = (u⃗ − u⃗_p)/τ_r + g⃗(ρ_p − ρ)/ρ_p + F⃗′, where (u⃗ − u⃗_p)/τ_r is the drag force per unit droplet mass and F⃗′ is an additional acceleration per unit droplet mass, such as the "virtual mass" force. Integration in time of this equation yields the velocity of the droplet at each point along the trajectory, with the trajectory itself predicted by dx⃗_p/dt = u⃗_p. Holding the gas velocity and the non-drag accelerations constant over a step Δt, the new velocity and location can be computed from u_p^(n+1) = u^n + aτ_r + (u_p^n − u^n − aτ_r)e^(−Δt/τ_r) and x_p^(n+1) = x_p^n + Δt(u^n + aτ_r) + τ_r(1 − e^(−Δt/τ_r))(u_p^n − u^n − aτ_r), where a includes the accelerations due to all forces except drag, and u_p^n and u^n represent the droplet velocity and gas velocity at the old location.
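To make the stepping scheme concrete, here is a minimal sketch of that analytic per-step update; the Schiller-Naumann closure for the relaxation time is an assumption standing in for the drag law cited as [18]:

```python
import numpy as np

def relaxation_time(rho_p, d_p, mu_g, re_r):
    # tau_r = rho_p d_p^2/(18 mu) * 24/(Cd Re); Schiller-Naumann drag
    # (an assumed closure) gives Cd*Re = 24(1 + 0.15 Re^0.687) and
    # recovers Stokes drag as Re -> 0.
    cd_re = 24.0 * (1.0 + 0.15 * re_r**0.687)
    return rho_p * d_p**2 / (18.0 * mu_g) * 24.0 / cd_re

def advance_droplet(x_p, u_p, u_gas, tau_r, a_other, dt):
    # Exact solution of du_p/dt = (u - u_p)/tau_r + a over one step,
    # holding the gas velocity u and extra acceleration a frozen at
    # their old-location values.
    decay = np.exp(-dt / tau_r)
    u_eq = u_gas + a_other * tau_r            # drag-equilibrium velocity
    u_new = u_eq + (u_p - u_eq) * decay
    x_new = x_p + u_eq * dt + (u_p - u_eq) * tau_r * (1.0 - decay)
    return x_new, u_new
```

Freezing u and a over the step is what keeps the update explicit; in breakup-dominated supersonic cases the step size must also resolve the breakup times introduced below.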
Turbulence Model. In this paper, the k-ω-SST (shear-stress transport) model is used for its high prediction ability in the far-field and near-wall regions. In RANS simulations, an instantaneous quantity f is split into mean and fluctuating components, f = f̄ + f′. For compressible flow, the density ρ varies so widely that a mass-weighted average f̃ (the Favre average) is usually preferred, f̃ = (ρf)̄/ρ̄, so that any quantity may be split as f = f̃ + f″. The averaged balance equations of the continuous phase then take the standard Favre-averaged Navier-Stokes form, in which c_v and c_p are the specific heat capacities at constant volume and constant pressure, respectively, μ is the molecular viscosity, and K is the thermal conductivity; the turbulent kinetic energy is k = (1/2)(u″_i u″_i)~. To provide closure for the unknown turbulent kinetic energy k, Menter [19] proposed the k-ω-SST model. The classical k-ε model has high prediction ability in the high-Reynolds-number far field, while the classical k-ω model has better prediction ability and more stable numerical properties in the near-wall region. The k-ω-SST model combines the advantages of both, switching between them with a blending function F₁. Its transport equations are ∂(ρ̄k)/∂t + ∂(ρ̄ũ_j k)/∂x_j = P_k − β*ρ̄kω + ∂/∂x_j[(μ + σ_k μ_t)∂k/∂x_j] and ∂(ρ̄ω)/∂t + ∂(ρ̄ũ_j ω)/∂x_j = (γρ̄/μ_t)P_k − βρ̄ω² + ∂/∂x_j[(μ + σ_ω μ_t)∂ω/∂x_j] + 2(1 − F₁)ρ̄σ_ω2(1/ω)(∂k/∂x_j)(∂ω/∂x_j). The blending function is F₁ = tanh(arg₁⁴), where arg₁ = min[max(√k/(0.09ωy), 500ν/(y²ω)), 4ρσ_ω2 k/(D_kω y²)], y is the vertical distance from the wall surface, and D_kω, the positive part of the cross-diffusion term, is determined by D_kω = max[2ρσ_ω2(1/ω)(∂k/∂x_j)(∂ω/∂x_j), 10⁻²⁰]. All the coefficients σ_k, σ_ω, β, γ are calculated in the uniform form φ = F₁φ₁ + (1 − F₁)φ₂, where φ₁ and φ₂ are the corresponding coefficients of the k-ω and k-ε models, respectively. The eddy viscosity coefficient is defined by ν_t = a₁k/max(a₁ω, ΩF₂), where F₂ = tanh(arg₂²) and Ω is the vorticity magnitude. In this expression, the first branch is the eddy viscosity coefficient of the k-ω model, and the second is acquired from the one-equation turbulence model based on the characteristics of shear stress in laminar boundary layers. Modified KH-RT Breakup Model. The KH-RT breakup model is based on linearized instability theory and takes both KH and RT breakup into account. It is often used to simulate high-Weber-number sprays. Within this model, a length-limited liquid core in the near-nozzle region is assumed, as shown in Figure 1. The droplet stripping process within the liquid core is driven by the KH instability, and the acceleration-induced breakup of shed droplets is calculated by the competition of the KH and RT instabilities. For the KH breakup model, the propagation equations of unstable waves on the surface of a cylindrical jet are solved numerically, and the maximum growth rate Ω_KH and the corresponding wavelength Λ_KH are obtained as [20]: Λ_KH/a = 9.02(1 + 0.45Oh^0.5)(1 + 0.4T^0.7)/(1 + 0.87We_g^1.67)^0.6 and Ω_KH[ρ_l a³/σ]^0.5 = (0.34 + 0.38We_g^1.5)/[(1 + Oh)(1 + 1.4T^0.6)], where a is the parent radius, Oh is the liquid Ohnesorge number, T = Oh·We_g^0.5, and We_g = ρ_g u²a/σ is the gas Weber number. The radius of the child droplet stripped from the cylindrical jet is r = B₀Λ_KH, and the KH breakup time is τ_KH = 3.726B₁a/(Λ_KH Ω_KH), where B₀ is the KH breakup radius constant and B₁ is the KH breakup time constant.
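As a concrete illustration, the following sketch evaluates the KH stripping relations above. The correlation coefficients are the commonly published Reitz values, and the defaults B0 = 0.61 and B1 = 7 are literature values rather than the optimized constants reported in Table 2 below:

```python
import numpy as np

def kh_breakup(a, u_rel, rho_l, rho_g, mu_l, sigma, B0=0.61, B1=7.0):
    # Dimensionless groups for a parent drop/jet of radius a.
    we_g = rho_g * u_rel**2 * a / sigma        # gas Weber number
    we_l = rho_l * u_rel**2 * a / sigma        # liquid Weber number
    re_l = rho_l * u_rel * a / mu_l            # liquid Reynolds number
    oh = np.sqrt(we_l) / re_l                  # Ohnesorge number
    t = oh * np.sqrt(we_g)                     # Taylor number
    # Reitz's fitted correlations for the fastest-growing surface wave.
    lam = 9.02 * a * (1 + 0.45 * np.sqrt(oh)) * (1 + 0.4 * t**0.7) \
          / (1 + 0.87 * we_g**1.67)**0.6       # Lambda_KH
    omega = (0.34 + 0.38 * we_g**1.5) / ((1 + oh) * (1 + 1.4 * t**0.6)) \
            * np.sqrt(sigma / (rho_l * a**3))  # Omega_KH
    r_child = B0 * lam                         # stripped child radius
    tau_kh = 3.726 * B1 * a / (lam * omega)    # KH breakup time
    return r_child, tau_kh
```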
For the RT breakup model, the size of the child droplet and the breakup time depend on the fastest growing wave. The wavelength of the fastest growing wave is calculated by [21,22] Λ_RT = 2π/K_RT, with wavenumber K_RT = [α_d(ρ_l − ρ_g)/(3σ)]^0.5 and growth rate Ω_RT = {2[α_d(ρ_l − ρ_g)]^1.5/[3(3σ)^0.5(ρ_l + ρ_g)]}^0.5. The RT child droplet radius related to this wavelength is r = πC₁/K_RT, and the RT breakup time is τ_RT = C₂/Ω_RT, where α_d is the acceleration of the droplet, C₁ is the RT breakup radius constant, and C₂ is the RT breakup time constant. When the KH-RT breakup model is adopted to track wave growth on the surface of droplets, it is often beneficial to start from the breakup constants used in previous studies, which are listed in Table 1. On the basis of Yang's work, the optimal KH-RT breakup model constants were further investigated in this paper. Figure 2 illustrates that the downstream SMD distribution is mainly controlled by the KH radius constant B₀ rather than the time constant B₁. It is also found that the RT radius constant C₁ has much less influence on the downstream SMD than the KH radius constant B₀. This can be explained as follows: in the KH-RT breakup model, the KH instability participates in both the stripping and acceleration breakup processes, and the breakup time for a droplet in the supersonic environment is extremely short; thus, the impacts of the time constants can be neglected. Typically, the RT instability grows faster when the droplet acceleration is high, and this effect dominates for high-Weber-number sprays. Therefore, the weak control exhibited by C₁ is probably because the present liquid core length is larger than expected. Using the idea of controlling variables, the optimal values of the breakup constants B₀, B₁, C₁, C₂ for supersonic simulation of two-phase interactions calculated by the FLUENT software are shown in Table 2. Inflow Turbulence of a Liquid Jet. In this paper, the DPM (discrete phase model) is used under the Eulerian-Lagrangian frame via the fluid-mechanics simulation software FLUENT. In the DPM, the "Blob" model is applied by injecting large droplets into the free airflow in substitution for a continuous liquid jet. The behavior and trajectories of the droplets are then calculated by the breakup models and the interaction with the crossflow. Even with the exact same orifice diameter specified, the inflow turbulence of the liquid jet is found to have significant effects on the primary breakup process by affecting the initial droplet sizes and velocities. Therefore, to accurately predict the spray characteristics, the correct turbulent state of the internal nozzle flow must be specified. As can be seen from Figure 3, the internal nozzle flow can be divided into single-phase flow, cavitating flow, and flipped flow, with the intensity of turbulence decreasing in turn. The turbulence within the liquid jet is mainly determined by the head Reynolds number, Re_h = (ρ_l d/μ_l)[2(p₁ − p₂)/ρ_l]^0.5, where p₁ and p₂ are the controllable upstream and downstream pressures, respectively. In the rest of this section, the varied initial droplet sizes and velocities of the different flow-turbulence states are focused on; it is shown that, even with the same nozzle diameter and flow rate, the three states produce different initial conditions for the spray. Determination of the Initial Droplet Size Distribution. According to our investigation, the initial droplet diameter distribution is closely related to the nozzle state. To indicate the connection, a two-parameter RR (Rosin-Rammler) distribution is used to represent the droplet diameter distribution, characterized by the most probable droplet size d₀ and a spread parameter n. The mass fraction of droplets with diameters greater than d is calculated by Y_d = exp[−(d/d₀)ⁿ].
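A short sketch of how injected parcel diameters can be drawn from this law by inverting the mass-fraction relation; the d0 and n values below are illustrative stand-ins, not the Table 3 entries:

```python
import numpy as np

def sample_rr_diameters(d0, n, n_parcels, seed=None):
    # Invert Y_d = exp[-(d/d0)^n]: for Y uniform in (0, 1),
    # d = d0 * (-ln Y)^(1/n). Sampling uniformly in mass fraction
    # means each parcel carries an equal share of the liquid mass.
    rng = np.random.default_rng(seed)
    y = rng.uniform(1e-6, 1.0 - 1e-6, n_parcels)
    return d0 * (-np.log(y)) ** (1.0 / n)

# Illustrative only: a narrow laminar-like spectrum (large n) versus a
# broader turbulent-like one (small n).
d_laminar = sample_rr_diameters(d0=150e-6, n=8.0, n_parcels=2000, seed=1)
d_turbulent = sample_rr_diameters(d0=60e-6, n=3.5, n_parcels=2000, seed=1)
```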
The first parameter required to specify the droplet size distribution is the most probable droplet size d₀. For a single-phase nozzle flow, the correlation of Wu et al. (Wu, Tseng, and Faeth, 1992) [24] is applied to calculate the SMD (d₃₂) from the turbulence quantities of the liquid jet, d₃₂ = 133.0λWe^(−0.74), and Snyder [25] gives the most general relationship between the SMD and the most probable diameter for a Rosin-Rammler distribution, d₀ = 1.2726d₃₂(1 − 1/n)^(1/n), where We = ρ_l u²λ/σ, λ = d/8 is the radial integral length scale at the jet exit based upon fully developed turbulent pipe flow, and We is the Weber number of the liquid jet. For a cavitating nozzle flow, the correlation of Wu can still be applied, yet the length scale for a cavitating nozzle is λ = d_eff/8, where d_eff is the effective diameter of the exiting liquid jet according to Schmidt and Corradini [26]. For the case of a flipped nozzle flow, the initial droplet diameter is set to the diameter of the contracted liquid jet, d₀ = d√C_ct, where C_ct is a theoretical constant equal to 0.611, which comes from potential flow analysis of flipped nozzles. The second parameter required to specify the droplet size distribution is the spread parameter n. The values for the spread parameter are determined from past modeling experience and experimental observations. Table 3 lists the values of n for the three flow states. The larger the value of the spread parameter, the narrower the droplet size distribution. Having specified the most probable diameter and the spread parameter, the initial droplet size distribution can thus be determined. It should be noted that the actual size distribution may differ slightly from the theoretical one because of the limited number of droplets injected from the nozzle exit. Figure 4 shows the initial droplet size distributions of the three nozzle flow states. It can be seen that the injected droplets of turbulent internal flow tend to be smaller and more uniform, while those of laminar internal flow tend to be larger and more centralized. It has been experimentally established that the droplet size distribution in the near-nozzle region has a tremendous effect on downstream droplet properties. Figure 5 illustrates the SMD distribution at x = 50 and 100 mm in the central plane, distinguished by the flow turbulence, with different initial droplet diameter distributions. The turbulent liquid jet fits better with the experimental results in terms of the droplet size distribution. Determination of Initial Droplet Velocity. For a single-phase nozzle, the estimate of the exit velocity u comes from the conservation of mass and the assumption of a uniform exit velocity, u = ṁ/(ρ_l A). For a cavitating nozzle, an expression for a higher velocity over a reduced area derived by Schmidt and Corradini [26] is used instead of a uniform exit velocity: u = [2C_c(p₁ − p_v) + (1 − 2C_c)(p₁ − p₂)]/[C_c√(2ρ_l(p₁ − p_v))], where C_c is the contraction coefficient from Nurick's [27] fit, p₁ is the internal upstream pressure of the nozzle, p₂ is the internal downstream pressure of the nozzle, and p_v is the vapor pressure. For a flipped nozzle, the exit velocity is derived from the conservation of mass and the value of the reduced flow area, u = ṁ/(ρ_l C_ct A). The initial droplet velocity directly affects the penetration height of the liquid spray. According to our investigation, the penetration height of a turbulent liquid jet fits the experimental results [6] better than that of a laminar liquid jet under the same gaseous conditions, as shown in Figure 6. It can be explained that the laminar liquid jet has a larger initial droplet velocity at the specified mass flow rate due to the flow contraction in the nozzle, which is inconsistent with the actual situation.
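Collecting the three velocity estimates in one place, a sketch follows; the cavitating branch is written in the commonly implemented Schmidt-Corradini form, which should be treated as an assumption about the exact expression used here:

```python
import numpy as np

def exit_velocity(state, mdot, rho_l, d, p1=None, p2=None, pv=None,
                  Cc=None, Cct=0.611):
    A = np.pi * d**2 / 4.0                 # geometric orifice area
    if state == "single_phase":
        return mdot / (rho_l * A)          # uniform exit profile
    if state == "flipped":
        return mdot / (rho_l * Cct * A)    # contracted liquid core
    if state == "cavitating":
        # Higher velocity over a reduced area (Schmidt-Corradini form,
        # as commonly implemented; an assumption here).
        return (2.0 * Cc * (p1 - pv) + (1.0 - 2.0 * Cc) * (p1 - p2)) \
               / (Cc * np.sqrt(2.0 * rho_l * (p1 - pv)))
    raise ValueError(f"unknown nozzle state: {state}")
```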
Computational Conditions Lin et al. (Lin and Kennedy, 2002) [6] carried out experiments on a water jet injected into a supersonic crossflow with a Mach number of 1.94 and acquired abundant experimental data using a two-component phase Doppler particle analyzer (PDPA); this case has been commonly simulated to verify new numerical models. In this section, the same case was simulated with the FLUENT software to validate our physical models. Considering the computational cost, the calculation domain was set to be a rectangular region near the injector with L_x × L_y × L_z = 200 mm × 40 mm × 40 mm. The calculation domain was meshed by structured grids with a total of 761904 cells, as shown in Figure 7. The position of the injector was (x₀, y₀, z₀) = (50 mm, 0, 0), and its vicinity was locally refined as shown in Figure 8. The k-ω-SST turbulence model is applied to calculate the turbulence of the supersonic gas due to its high prediction ability in the far-field and near-wall regions. Default values of the coefficients of this model embedded in the FLUENT software are applied. Water was used as the simulated liquid, with a density of 998 kg/m³, viscosity of 2.67 × 10⁻³ kg/(m·s), and surface tension of 0.072 N/m. The momentum flux ratio q = ρ_l v_l²/(ρ_∞ v_∞²) is set to a constant value of 7, and other detailed parameters of the liquid and gas are given in Tables 4 and 5.
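As a quick sanity check on this operating point, the freestream and injection velocities implied by Ma = 1.94 and q = 7 can be estimated; the static temperature and freestream density below are assumed values standing in for the Tables 4 and 5 entries:

```python
import numpy as np

gamma, R, T = 1.4, 287.0, 300.0             # assumed air properties
v_inf = 1.94 * np.sqrt(gamma * R * T)       # ~ 673 m/s freestream speed

rho_l, rho_inf, q = 998.0, 0.5, 7.0         # rho_inf is an assumption
v_l = np.sqrt(q * rho_inf / rho_l) * v_inf  # ~ 40 m/s injection speed
```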
Results and Discussion 4.1. Model Validation 4.1.1. Spatial Distribution of the Liquid Spray. Spray penetration height is an important atomization characteristic for indicating the liquid-gas mixing effect. Figure 9 compares the numerical result with the experimental result. The black dots represent the averaged spray droplets of 100 instantaneous moments, and the red dashed line represents the experimental correlation function developed by Lin et al. (Lin and Kennedy, 2002) [6]. The cross-sectional distribution of the spray is another important parameter for evaluating the mixing characteristics of the liquid and gas phases. Figure 10 shows the spray spread at the position x = 50 mm compared with the experimental result. The black dots again represent the averaged spray droplets of 100 instantaneous moments, and the red dashed line is the experimental correlation function, shown in Figure 11, developed by Wu [24]. It is worth noting that although Lin also reported experimental results for the spray cross-sectional distribution, explicit empirical correlations were not summarized in that research as was done by Wu; therefore, our numerical result is compared here with Wu's experimental correlation function. As can be seen, the spray foot is not observed as expected, because the computed thin liquid spray in the near-wall region obstructs the gas much more weakly than in the experiment. In Eulerian-Lagrangian methods, it is assumed that the liquid phase is sufficiently dilute that droplet-droplet interactions and the effects of the droplet volume fraction on the gas phase are negligible. The result can be improved by increasing the droplet number density of the liquid spray, which enhances the entrainment effect of gas-phase vortices on the droplets but increases the computational cost. The consistency of the spatial distributions of the liquid spray with the experimental results firmly demonstrates that the present physical models are reliable. Figure 12 illustrates the normalized SMD distribution along the y-axis at different x positions in the central plane. It shows that the numerical calculation can basically reproduce the S-shaped profile observed in the experiment, even though the increasing trend of SMD with y-axis height in the upper periphery of the liquid spray is not successfully simulated, which can be attributed to a weaker liquid-gas interaction than in reality. Apart from the experimental results, our results with the modified models are also compared with Li's simulation results [13], which were obtained under the same numerical conditions but with a refined calculation region of 408 × 201 × 201 grid points in the x, y, and z directions. The comparison shows considerable consistency in the distributions of SMD and of absolute and relative velocities at the position x = 50 mm, as shown in Figure 13. All these comparisons validate the reliability of our modified models. To further investigate the difference between the numerical and experimental results, the filled contours of the SMD distribution at the position x = 50 mm are contrasted in Figure 14. It can be found that the simulated gas-phase vortex field is remarkably weaker than the actual vortex field because the droplet volumes are neglected in the Eulerian-Lagrangian scheme. As a result, in the simulations it is the aerodynamic shear force alone that controls the downstream SMD distribution; that is, the closer to the free airflow, the smaller the droplets. However, the experimental results show that the maximum droplet size appears both at the top of the spray plume and at the y/h = 0.3 location, owing to the complicated liquid-gas interaction and the complex gas-phase vortex field. 4.1.2. Grid Independence Verification. To investigate the reliability of the current grid, a higher-resolution grid with a total of 2093184 cells was used to calculate the same problem. The top view of this refined grid is shown in Figure 15. The penetration, spread-distribution, and downstream-property curves are compared between the two grids in Figures 16-21. It can be concluded that, despite trivial differences, grid independence is essentially verified. 4.2. Correlations of SMD with Airflow Mach Numbers and Nozzle Diameters. Although the S-shaped profile of the SMD distribution inevitably increases the modeling difficulty, a new modeling approach is presented in this section to help settle the SMD quantification problem. Modified power functions are established which show good accuracy in predicting the SMD distribution. Dozens of numerical cases were launched to acquire statistically sufficient data for mathematical analysis with the aid of the multivariate nonlinear fitting method. Detailed operational conditions are listed below, and the expressions for the SMD distribution in the free-stream direction, together with the fitting errors, are exhibited. Operational Conditions. In this section, a total of 54 sets of operating conditions are designed, in which the nozzle diameter d takes values of 0.3, 0.5, 1, 2, and 3 mm, and the airflow Mach number Ma takes values of 2.1, 3, and 4.
It is worth noting that, since the momentum flux ratio q and the airflow Weber number We have a significant effect on droplet properties, when considering the scale effects of the nozzle diameter and the airflow Mach number, the injection velocity of the liquid jet and the density of the compressible air are controlled accordingly to keep q and We constant. Detailed parameter settings are shown in Table 6. The classical power-law correlation is modified into a new-form power function of the nozzle diameter, the airflow Mach number, and the downstream distance to describe the SMD distribution in the free-stream direction, where δ is the boundary layer thickness, d is the nozzle diameter, Ma is the free-stream Mach number, and x is the distance from the nozzle; A, B, a, b, and c are parameters to be determined. These unknown parameters are optimized by using the multivariate nonlinear fitting approach to correlate the local SMD at different x positions with the various nozzle diameters and airflow Mach numbers. Eventually, the formulas for SMD are given separately at different y positions in Table 7, and the MSE (mean squared error) represents the fitting error. As can be seen, the fitting results are acceptably accurate. Data Availability The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
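The exact modified power function did not survive extraction, so the sketch below fits one plausible five-parameter form with SciPy's multivariate nonlinear least squares; the data are synthetic placeholders that exist only to demonstrate the fitting procedure, and the boundary layer thickness δ is folded into the prefactor for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

def smd_model(X, A, B, a, b, c):
    # One plausible parameterization: SMD = A (x/d)^a d^b Ma^c + B.
    x, d, Ma = X
    return A * (x / d) ** a * d ** b * Ma ** c + B

# Stand-in arrays shaped like the 54-case campaign (synthetic data).
rng = np.random.default_rng(0)
x = rng.uniform(0.02, 0.20, 54)                       # m downstream
d = rng.choice([0.3e-3, 0.5e-3, 1e-3, 2e-3, 3e-3], 54)
Ma = rng.choice([2.1, 3.0, 4.0], 54)
smd = 20e-6 * (x / d) ** -0.2 * Ma ** -0.5            # fake "measurements"

popt, _ = curve_fit(smd_model, (x, d, Ma), smd,
                    p0=[20e-6, 0.0, -0.2, 0.0, -0.5], maxfev=20000)
mse = np.mean((smd_model((x, d, Ma), *popt) - smd) ** 2)
```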
2021-07-28T06:00:42.461Z
2021-07-09T00:00:00.000
{ "year": 2021, "sha1": "33f51037492591a625795363e8948e3b9c9d5687", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijae/2021/8828015.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "33f51037492591a625795363e8948e3b9c9d5687", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
218528608
pes2o/s2orc
v3-fos-license
Gait speed and handgrip strength as predictors of all-cause mortality and cardiovascular events in hemodialysis patients Background Low physical performance in patients undergoing maintenance hemodialysis is associated with a high mortality rate. We investigated the clinical relevance of gait speed and handgrip strength, the two most commonly used methods of assessing physical performance. Methods We obtained data regarding gait speed and handgrip strength from 277 hemodialysis patients and evaluated their relationships with baseline parameters, mental health, plasma inflammatory markers, and major adverse clinical outcomes. Low physical performance was defined by the recommendations suggested by the Asian Working Group on Sarcopenia. Results The prevalence of low gait speed and handgrip strength was 28.2 and 44.8%, respectively. Old age, low serum albumin levels, high comorbidity index score, and impaired cognitive functions were associated with low physical performance. Patients with isolated low gait speed exhibited a general trend for worse quality of life than those with isolated low handgrip strength. Gait speed and handgrip strength showed very weak correlations with different determining factors (older age, the presence of diabetes, and lower serum albumin level for low gait speed, and lower body mass index and the presence of previous cardiovascular events for low handgrip strength). Patients with low gait speed and handgrip strength had elevated levels of plasma endocan and matrix metalloproteinase-7 and the highest risks for all-cause mortality and cardiovascular events among the groups (adjusted hazard ratio of 2.72, p = 0.024). Elderly patients with low gait speed and handgrip strength were at the highest risk for poor clinical outcomes. Conclusion Gait speed and handgrip strength reflected distinctive aspects of patient characteristics and the use of both factors improved the prediction of adverse clinical outcomes in hemodialysis patients. Gait speed seems to be a better indicator of poor patient outcomes than is handgrip strength. Background The increasing prevalence of end-stage renal disease (ESRD) is a major public health problem in most developed countries, including South Korea [1,2]. Despite remarkable advances in dialysis modality and patient care, the mortality rate of ESRD patients is still exceedingly high compared with that of the general population [3]. Well-established risk factors for major adverse events associated with ESRD include old age, preexisting cardiovascular disease, the presence of diabetes, and underdialysis [4][5][6][7][8][9][10]. Nonetheless, hemodialysis patients exhibit high interindividual variability, and it is frequently difficult to predict the clinical course accurately on an individual level. The identification and management of potential risk factors is of particular importance because individualized therapeutic interventions might improve the clinical outcomes of ESRD patients. Sarcopenia is defined as quantitative and qualitative loss of skeletal muscle that is frequently linked to adverse effects in patients [11]. Uremic toxins in chronic kidney disease (CKD) patients are often associated with not only the chronic catabolic state of inflammation, oxidative stress, and nutritional imbalance but also a high prevalence of cardiovascular events, all of which eventually lead to clinically evident sarcopenia. 
Recent studies have highlighted that reduced physical performance is independently associated with poor patient survival and poor quality of life among CKD patients [12,13], indicating the importance of physical activity in risk stratification among these patients. Currently, however, the optimal method of assessing physical performance in these populations has not yet been defined. Measurements of gait speed (GS) and handgrip strength (HS) are used as reliable tests to determine the functioning of skeletal muscle [14,15]. Both tests are simple, rapid, inexpensive, and can be performed in the geriatric population [16]. Accumulating evidence suggests that these parameters are useful for predicting outcomes in CKD [17][18][19] and ESRD patients [20][21][22][23][24]. Nonetheless, both tests have several limitations, such as a nonstandardized protocol or intraindividual variability. Moreover, performing either test alone may result in the misinterpretation of performance status because dialysis patients frequently exhibit isolated problems in their upper or lower extremities but not in other parts of their body. Therefore, it can be speculated that combining these two simple tests may compensate for the shortcomings of each individual test. The aim of this study was to determine whether GS and HS have distinctive clinical relevance and whether combining these tests could offer a better indicator of patient outcomes than performing a single test. Participants and study design This study was performed using the data obtained from the K-cohort, a prospective cohort of 460 hemodialysis patients who visited six hospitals between June 2016 and January 2018 (CRIS no. KCT0003281). Inclusion/exclusion criteria were described previously [25]. The patient recruitment strategy is illustrated in Fig. 1. In brief, after excluding 68 patients who were unable to be assessed for their physical performance because of their medical conditions and 115 patients who refused the tests, a total of 277 patients were finally enrolled in this study. We subsequently classified the enrolled patients into four groups based on their physical performance: normal GS and HS (n = 119, 43.0%), normal GS and low HS (n = 80, 28.9%), low GS and normal HS (n = 34, 12.3%), and low GS and HS (n = 44, 15.9%). Baseline demographics and clinical parameters, including the Charlson [26] and Liu [27] comorbidity indexes, were obtained at the time of study entry. All patients were monitored for major adverse events, which were defined as all-cause mortality and cardiovascular events, including acute coronary syndrome, symptomatic heart failure, cerebral infarction and hemorrhage, and peripheral artery disease, until June 2019. Measurements of gait speed and handgrip strength GS was measured after the end of a dialysis session on a treatment day with a short interdialytic interval (i.e., one-day interval) within 1 month of patient enrollment. We assessed GS by measuring the walking speed over a 4-m course at the participant's usual pace. The test was repeated three times, and the average speed was calculated. HS was measured by a Jamar hand dynamometer (Sammons Preston Inc., Bolingbrook, IL) on the dominant hand unless contraindicated during dialysis sessions. Each measurement was repeated three times, and the highest value was noted. Based on the suggestions made by the Asian Working Group for Sarcopenia [28], low GS was defined as less than 0.8 m/s, and low HS was defined as less than 26 kg for men and less than 18 kg for women.
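For concreteness, a minimal sketch of this four-group split, assuming a patient table with hypothetical column names; the cutoffs are the AWGS values quoted above:

```python
import pandas as pd

def classify_performance(df):
    # AWGS cutoffs: GS < 0.8 m/s is low; HS < 26 kg (men) or
    # < 18 kg (women) is low. Column names here are assumptions.
    hs_cut = df["sex"].map({"M": 26.0, "F": 18.0})
    low_gs = df["gait_speed_ms"] < 0.8
    low_hs = df["handgrip_kg"] < hs_cut
    group = pd.Series("normal GS and HS", index=df.index)
    group[~low_gs & low_hs] = "normal GS, low HS"
    group[low_gs & ~low_hs] = "low GS, normal HS"
    group[low_gs & low_hs] = "low GS and HS"
    return group
```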
Questionnaires related to physical performance and mental health The questionnaire items were grouped into physical, mental, and social components. The physical components included the domains of physical functioning, pain, general health, and energy/fatigue. The mental components included the domains of cognitive function, sleep, and emotional well-being. Finally, the social components included work status, quality of social interaction, social support, and social function. Statistical analysis All statistical analyses were performed with SPSS for Windows, version 20.0 (SPSS, Chicago, IL). Baseline characteristics and clinical parameters are expressed as the means ± standard deviations (SDs) or as the numbers of patients and percentages. Analysis of variance (ANOVA) with Bonferroni post hoc analysis, the chi-square test, and Fisher's exact test were used to compare these variables, as appropriate. Non-normally distributed variables, physical performance scores, comorbidity indexes, and quality of life scores were described as the median [first and third quartiles] and compared among the subgroups by the Kruskal-Wallis test with Bonferroni post hoc analysis. We used Pearson's correlation analyses to determine the relationship between GS and HS. Multiple logistic regression analysis was used to determine the risk factors for low GS and HS. Levels of plasma inflammatory markers were expressed as box-and-whisker plots, and their comparisons were made by ANOVA with Bonferroni post hoc analysis. Finally, Kaplan-Meier curves were generated to assess the probabilities of the patient outcomes according to GS and HS, and the Cox proportional hazards model was used for further multivariate adjustments with possible confounders, including age, sex, previous history of cardiovascular disease, serum albumin levels, and Charlson comorbidity index. P values less than 0.05 were considered to indicate statistical significance.
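To illustrate the survival analysis just described, here is a minimal Cox proportional hazards sketch in lifelines, using synthetic placeholder data and the adjustment covariates named above; it is a shape-only demonstration, not a reanalysis of the study data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 277                                      # cohort size, for shape only
df = pd.DataFrame({
    "age": rng.normal(62, 12, n),
    "male": rng.integers(0, 2, n),
    "prior_cvd": rng.integers(0, 2, n),
    "albumin": rng.normal(3.9, 0.4, n),
    "charlson": rng.integers(2, 7, n),
    "low_gs_and_hs": rng.integers(0, 2, n),  # indicator for the 4th group
    "months": rng.exponential(25, n),        # follow-up time
    "event": rng.integers(0, 2, n),          # death or CV event
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()                          # HR = exp(coef) per covariate
```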
Baseline clinical characteristics of patients The baseline demographics and laboratory parameters of patients stratified by physical performance status are shown in Table 1. The prevalence of low GS and HS was 78 (28.2%) and 124 (44.8%), respectively. Patients with low GS and HS were older and had a lower body mass index and a shorter duration of dialysis than those in the other groups. The prevalence of previous cardiovascular events and diabetes was also higher in these patients. The predialysis serum albumin and creatinine levels were significantly lower in patients with poor physical performance, while spKt/V was inversely correlated with GS and HS. Mid-arm muscle circumference (MAMC) was positively correlated with GS and HS, although the statistical significance was marginal. Finally, a higher rate of statin prescription was observed in patients with low GS than in those with normal GS. Associations among physical performance, comorbidity index scores, and mental health We performed a correlation analysis to determine the relationship between GS and HS and found that the two parameters were significantly correlated with each other, but the correlation was weak (R² = 0.070 and p < 0.001; Fig. 2). We next evaluated the relationships among physical performance, comorbidity index scores, and mental health. As shown in Table 2, GS and HS were significantly associated with comorbidity scores and poor physical status (Charlson comorbidity scores of 4 [2,4] vs. 4 [3,5] vs. 5 [3,5] vs. 5 [4,5] and Liu comorbidity scores of 4 [3,5] vs. 4 [3,6] vs. 6 [4,7] vs. 6 [4,7] for the normal GS and HS, normal GS and low HS, low GS and normal HS, and low GS and HS groups, respectively; p < 0.001 for both comparisons). In addition, patients with low GS and HS showed profoundly impaired cognitive functioning as assessed by the MMSE and the KDQOL-SF (28 [26,29] vs. 27 [24,28] vs. 27 [25,30] vs. 27 [23,29] for the four groups, respectively). The relationship between plasma inflammatory markers and physical performance We next measured various plasma inflammatory markers and compared their levels across the groups. Among the cytokines and chemokines, the levels of plasma endocan and MMP-7 were significantly higher in patients with low GS and HS than in those with normal GS and HS (Fig. 3a and b). In contrast, the levels of traditional inflammatory markers, including TNF-α, IL-6, and high-sensitivity C-reactive protein (hs-CRP), were not associated with physical performance (Fig. 3c-e). Impacts of gait speed and handgrip strength on all-cause mortality and cardiovascular events The mean duration of follow-up since the recruitment of patients was 25.3 months, and a total of 19 deaths (6.9%) and 30 (10.8%) cardiovascular events occurred during this period. Patients with low GS and HS showed the highest cumulative incidence rate of major adverse events (11.8, 15.0, 17.6, and 29.5% for the normal GS and HS, normal GS and low HS, low GS and normal HS, and low GS and HS groups, respectively; p = 0.004 for overall comparisons; Fig. 4). The observed hazard ratios (HRs) for major adverse events are shown in Table 4. Multivariate Cox regression analysis revealed that patients with low GS and HS had the highest risk of major adverse events (adjusted HR of 2.72, 95% CI of 1.14-6.46; p = 0.024) compared with patients with normal GS and HS, after multivariate adjustment for possible confounders. Patients with normal HS but low GS also exhibited a tendency toward an increase in major adverse events (adjusted HR of 2.38, 95% CI of 0.86-6.53; p = 0.084). In contrast, isolated low HS was not related to an increased risk of adverse outcomes, although the adjusted HRs were slightly elevated. Notably, low GS and HS together were associated with a significantly increased composite event rate, and there was a significant interaction between GS and HS for major adverse events (p = 0.019). Finally, we performed a subgroup analysis of the enrolled patients according to their age. As shown in Fig. 5, physical performance was not associated with composite outcomes in hemodialysis patients under 65 years of age. In contrast, the risk of major adverse events was significantly increased in elderly patients with low GS and HS (adjusted HR of 5.76, 95% CI of 1.78-18.62; p = 0.012). Discussion Although sarcopenia was originally described as an age-related structural and functional decline in skeletal muscle, recent investigations have consistently acknowledged that decreased kidney function is also involved in sustained muscle wasting and the subsequent development of sarcopenia. Compared with the elderly population, in which the prevalence of sarcopenia is 11% [33], CKD patients are likely to be much more prone to its occurrence, with an estimated prevalence of 30-60% [20,21,24,34-36]. The two main components of sarcopenia, muscle strength and mass, are dissociated in the setting of ESRD, and muscle strength is more important than muscle mass in terms of patient outcomes [20,35]. In line with these findings, a recent meta-analysis showed a strong association between CKD progression and slowing of walking speed [37].
In this context, we extensively investigated the effects of skeletal muscle dysfunction on major adverse events in hemodialysis patients. Our findings suggest that GS and HS represent different aspects of patient characteristics and that their combination could identify those at the highest risk for mortality and cardiovascular events. Of note, MAMC showed a tendency to be relatively lower in patients with poor physical performance but was not related to either clinical outcome (data not shown). Together, our data support the idea that the functional assessment of skeletal muscle is more important than its quantitative assessment and that measuring GS and HS is a suitable method for evaluating skeletal muscle function in hemodialysis patients. Based on the significant correlation between poor physical performance and high mortality in CKD patients, several prospective trials and meta-analyses have assessed whether exercise interventions could improve patient outcomes [38][39][40][41][42][43][44][45]. Although physical training significantly improved patient quality of life and inflammatory parameters in most studies, these benefits were not translated into better patient survival. One of the reasons for this discrepancy might be that the patients enrolled in these studies were highly heterogeneous in their baseline clinical characteristics, underlying comorbidities, and laboratory findings. Moreover, there is no consensus on the definition of adequate exercise for hemodialysis patients, thereby limiting the application of intradialytic exercise in routine clinical practice. Therefore, well-designed randomized controlled trials are needed to clarify the clinical significance of intradialytic exercise, especially in terms of improving patient mortality. We noticed that spKt/V, currently used as a standard method for the assessment of dialysis adequacy, was highest in patients with low GS and HS and lowest in patients with normal GS and HS (Table 1). The inverse relationship between Kt/V and physical performance was consistently shown in other studies, suggesting that this relationship is likely to be a universal phenomenon [24,34,36,46]. We speculate that the low muscle mass, and the consequently decreased volume of distribution of urea in the body (V), in patients with low GS and HS resulted in a relative increase in the value of Kt/V without affecting the true dialysis efficacy [47]. Therefore, sarcopenic patients may be underdialyzed if their dialysis time and dialyzer filter are selected solely on the basis of Kt/V. Further study is warranted to define the optimal target of Kt/V in dialysis patients based on the severity of sarcopenia. Although GS and HS are the two representative tests used to assess physical performance, direct comparisons of these parameters have rarely been made, especially in dialysis patients. Here, we examined their relationship and found that a substantial portion of patients exhibited low performance on one test while demonstrating normal performance on the other (114/277, 41.2%). Moreover, the correlation coefficient between GS and HS was very weak despite its statistical significance, suggesting that the factors contributing to these two conditions might be different. We consider that this finding is at least in part due to the differences in the muscles and neurologic systems involved during the execution of the HS and GS tests. In accordance with our data, Roshanravan et al.
showed a discrepancy in upper and lower muscle strength in a nondialysis CKD cohort study [19]. Thus, these data provide a rationale that the combination of the GS and HS tests could integrate these different patient components, thereby allowing us to predict future outcomes better. Despite the fact that the clinical relevance of GS and HS as predictors of mortality and cardiovascular outcomes was documented in previous studies, direct comparisons between these two tests have not been performed so far. Interestingly, patients with isolated low GS had a tendency to exhibit worse comorbidity indexes and physical functions than those with isolated low HS (Table 2). Furthermore, GS was significantly superior to HS for the prediction of all-cause mortality in the analysis of our cohort, implying that the muscle function of the lower extremities might be more important than that of the upper extremities in terms of patient outcomes. Several recent studies also revealed that skeletal muscle function in the lower extremities but not in the upper extremities was associated with overall physical performance and the hospitalization rate [48,49], emphasizing the clinical importance of lower extremity performance. Moreover, the GS test is still valuable because low GS is associated with increased HRs for death and cardiovascular mortality regardless of HS (Fig. 3 and Table 4). Johansen et al. investigated longitudinal trends in the physical performance of hemodialysis patients and found that GS frequently declined while HS did not change over time [50]. GS was the strongest individual predictor of future frailty and mortality among various physical activity assessment tools, including HS, which is in line with our findings. Therefore, we consider that monitoring gait functions has the potential to serve as a valuable tool for continuous risk stratification of dialysis patients. We found that the levels of endocan and MMP-7 were elevated in patients with low GS and HS. Endocan is a water-soluble proteoglycan consisting of amino acid polymers and a single dermatan sulfate chain [51]. Plasma endocan is known to originate exclusively from the vascular endothelium, and its levels reflect endothelial activation and systemic inflammation. Several previous studies have demonstrated the clinical value of plasma endocan in the prediction of cardiovascular mortality as well as the progression of kidney diseases [52][53][54][55]. It should be confirmed whether elevated levels of plasma endocan result from sarcopenia itself or from other confounding factors, such as vascular injuries or infection [56,57]. MMP-7 is an endopeptidase that belongs to the MMP family. In addition to its basic functions in cleaving extracellular matrix substrates, MMP-7 is also involved in the development of local and systemic inflammation [58][59][60]. Although MMP-2 and MMP-9 seem to play major roles in the degradation of the extracellular matrix that leads to muscle wasting, the pathophysiological relevance of MMP-7 in the development and progression of sarcopenia is still mostly unknown. Increased MMP-7 activity is observed in a hereditary form of muscular dystrophy [61], suggesting that upregulated MMP-7 might have detrimental effects on skeletal muscle. In contrast with a previous report [20], the levels of hs-CRP, IL-6, and TNF-α were not elevated in sarcopenic patients in our study.
We speculate that these inconsistent findings are attributable to differences in the degree of overall inflammation; the absolute concentrations of hs-CRP and IL-6 were lower and the levels of serum albumin were higher in patients in our study than in those in the previous study [20]. Although low GS or HS alone was not predictive of patient outcomes in our cohort (Table 4), several other studies showed that isolated low GS or low HS was an independent predictor of all-cause mortality in patients with CKD [19][20][21][22]. This discrepancy is, at least in part, because the number of cardiovascular events during follow-up in this study was low; the statistical power of the multivariable analysis to separately assess the prognostic impacts of GS and HS was therefore reduced. Moreover, the appropriate cutoff values for low GS and HS are still controversial, even though guidelines have already been established for Asian populations [28]. More rigorous validation is needed to determine the clinical relevance of these criteria as predictors of patient outcomes. The limitations of this study should be mentioned. There is a concern about selection bias because patients who were incapable of performing the GS and/or HS tests were excluded from our study. Indeed, a previous study reported that dialysis patients who could not complete a walking test had the highest comorbidity index and worst survival rate, even when compared to those who could walk very slowly (< 0.6 m/s) [21]. Plasma inflammatory markers were not adjusted for other clinical parameters; thus, our ability to assess the impacts of these markers on patient outcomes was substantially limited. Nonetheless, we believe that these results may help clinicians assess the overall status of hemodialysis patients, since the levels of these markers could reflect physical performance. Finally, we could not determine the possible mechanisms underlying the association between low physical performance and high mortality. We speculate that chronic sustained inflammation might be an essential mediator contributing to both phenomena (Fig. 3). This hypothesis should be explored in further studies. Conclusion Our data suggest that poor physical performance, as assessed by GS and HS, was significantly associated with high all-cause mortality and cardiovascular disease in hemodialysis patients. GS and HS seem to capture the function of different sets of skeletal muscles and the neurological impairments and malnutrition that develop in ESRD patients. Given that the measurements of GS and HS are relatively easy to perform, the combination of these two tests would provide clinicians with opportunities for better patient assessment and individualized care.
2020-05-07T09:08:45.099Z
2020-01-24T00:00:00.000
{ "year": 2020, "sha1": "7230863265735a9a94d84bce34dbbddafec65459", "oa_license": "CCBY", "oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-020-01831-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "673b44f5b8173076ec3e42aa3d36d6cbd42f8058", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
19709252
pes2o/s2orc
v3-fos-license
Statistics of Long Period Gas Giant Planets in Known Planetary Systems We conducted a Doppler survey at Keck combined with NIRC2 K-band AO imaging to search for massive, long-period companions to 123 known exoplanet systems with one or two planets detected using the radial velocity (RV) method. Our survey is sensitive to Jupiter mass planets out to 20 AU for a majority of stars in our sample, and we report the discovery of eight new long-period planets, in addition to 20 systems with statistically significant RV trends indicating the presence of an outer companion beyond 5 AU. We combine our RV observations with AO imaging to determine the range of allowed masses and orbital separations for these companions, and account for variations in our sensitivity to companions among stars in our sample. We estimate the total occurrence rate of companions in our sample to be 52 +/- 5% over the range 1 - 20 M_Jup and 5 - 20 AU. Our data also suggest a declining frequency for gas giant planets in these systems beyond 3-10 AU, in contrast to earlier studies that found a rising frequency for giant planets in the range 0.01-3 AU. This suggests either that the frequency of gas giant planets peaks between 3-10 AU, or that outer companions in these systems have a different semi-major axis distribution than the overall gas giant planet population. Our results also suggest that hot gas giants may be more likely to have an outer companion than cold gas giants. We find that planets with an outer companion have higher average eccentricities than their single counterparts, suggesting that dynamical interactions between planets may play an important role in these systems. INTRODUCTION The presence of a substantial population of gas giant planets on orbits interior to 1 AU poses a challenge to models of planet formation and migration. Standard core accretion models favor giant planet formation beyond the ice line, where core-nucleated accretion may proceed on a timescale substantially shorter than the lifetime of the disk (Pollack et al 1996;Alibert et al 2005;Rafikov 2006). In this scenario, gas giant planets on short period orbits most likely migrated in from their original formation locations (e.g., Lin et al 1996). Migration models for these planets can be divided into two broad categories. The first is smooth disk migration, in which exchanges of angular momentum with the disk cause the planet's orbit to gradually decay. This mechanism would be expected to produce close to, if not completely, circular orbits that are well aligned with the spin axis of the host star (Goldreich & Tremaine 1980;Lin & Papaloizou 1986;Tanaka et al 2002). The second migration channel is three-body interactions. These include the Kozai mechanism, in which the presence of a stellar or planetary companion causes the argument of periastron to undergo resonant librations, allowing the planet's orbit to trade mutual inclination for eccentricity. Alternatively, planet-planet scattering or long term secular interactions between planets could impart a large orbital eccentricity to the inner planet (Chatterjee et al 2008;Nagasawa et al 2008;Wu & Lithwick 2010). This highly eccentric orbit can then shrink and circularize at short periods via tidal dissipation. High eccentricity migration channels and dynamical interactions between planets are thought to frequently produce planets whose orbits are misaligned with the rotation axes of their host stars.
Over the past decade, Rossiter-McLaughlin measurements of spin-orbit alignment have found a number of hot Jupiter systems that are misaligned (Winn et al 2010;Hebrard et al 2011;Albrecht et al 2012). However, previous studies demonstrated that there is no correlation between the presence of an outer planetary or stellar companion and the spin-orbit angle of hot Jupiters (Knutson et al 2014;Ngo et al 2015). Furthermore, Batygin (2012) and Batygin & Adams (2013) have suggested that a distant stellar companion could tilt the protoplanetary disk with respect to the star's spin axis, in which case disk migration could lead to a misaligned orbit (Spalding & Batygin 2014). This scenario is supported by the discovery of apparently coplanar multi-planet systems with spin-orbit misalignments (Huber et al 2013;Bourrier & Hebrard 2014), although other surveys have suggested that such systems may be relatively rare (Albrecht et al 2013;Morton & Winn 2014). In either case, it appears that the cause of hot Jupiter misalignment is more complicated than the simple picture presented above. Measurements of orbital eccentricities for a large sample of single and multi-planet systems provide a more direct diagnostic of the importance of dynamical interactions in shaping the observed architectures of planetary systems. We expect dynamical interactions between planets to pump up the eccentricities of their orbits, a process that could result in migration if the periapse of an orbit gets close enough to the star for tidal forces to become significant (Rasio & Ford 1996;Juric & Tremaine 2008). However, previous radial velocity studies of gas giants indicate that high eccentricities are more common in apparently single systems (Howard 2013). It has been suggested that this enhanced eccentricity may be due to planet-planet scattering in which one planet was ejected from the system (Chatterjee et al 2008). This is consistent with the results of Dawson & Murray-Clay (2013), who suggest that higher eccentricities are more common when the star has a high metallicity, and infer that this is because higher metallicity stars are more likely to form multiple giant planets, which then interact and pump up planet eccentricities. Limbach & Turner (2014) also find that orbital eccentricities tend to be lower in systems with higher planet multiplicity. Conversely, Dong et al (2014) find that warm Jupiters with outer companions are more likely to have higher eccentricities than single warm Jupiters, albeit with a relatively small sample size of just 26 systems. We can test these trends by directly searching for outer companions at wide orbital separations in a large sample of known planetary systems, and checking to see if these companions are associated with a larger orbital eccentricity for the inner planet. In order to understand whether or not dynamical interactions between planets are responsible for the inward migration of a subset of these planets, it is useful to study systems where we can obtain a complete census of gas giant planets across a broad range of orbital separations. While large surveys have made it possible to understand the statistical properties of exoplanet populations, recent studies have focused on determining mass distributions and occurrence rates of short period, low mass planets around apparently single main sequence FGK stars (e.g. Howard et al 2012;Fressin et al 2013;Howard 2013;Petigura et al 2013).
Many of these surveys are primarily sensitive to short-period planets, making it difficult to evaluate the role that a massive distant planetary companion might have on the formation and orbital evolution of the inner planets. Early studies of hot Jupiters, which are among the best-studied exoplanet populations, indicated that they rarely contain nearby companions (Steffen et al. 2012, but see Becker et al. 2015 for a recent exception). In contrast, recent work by Knutson et al (2014) looked at 51 hot Jupiter systems and found that they are not lonely: the occurrence rate of massive, outer companions was 51 ± 10% for companions with masses of 1-13 M_Jup and separations of 1-20 AU. This implies that long period companions to hot Jupiters are common, and thus might play an important role in the orbital evolution of these systems. [Figure 1: Transiting hot Jupiters from our previous radial velocity study (Knutson et al 2014) are shown as red triangles, and the new sample of gas giant planets in this study as black circles; the blue diamonds represent the gas and ice giant planets in the solar system for comparison.] In this study we combine Keck HIRES radial velocity measurements with NIRC2 K-band adaptive optics (AO) imaging to search for massive, long period companions to a sample of 123 known exoplanet systems detected using the radial velocity (RV) method. Unlike our previous survey, which focused exclusively on transiting hot Jupiter systems, our new sample includes planets with a wide range of masses and orbital separations (Fig. 1). We present results from this survey in two papers. In this paper, we focus on long-term RV monitoring of the confirmed exoplanet systems, probing planetary and brown dwarf mass companions out to ∼100 AU. We test whether close-in gas giant planets are more likely to have outer companions than their long period counterparts, and whether planets in two-planet systems are more likely to have higher eccentricities than those in single planet systems. In the second paper, we will use our complementary K-band AO images to find and confirm low mass stellar companions in these systems in order to determine how stellar companions might influence the formation and evolution of the inner planets. In section 2 we describe the selected sample of systems, as well as the methods for obtaining the RV and K-band AO imaging data. In section 3 we describe fits to the RV data, generation of contrast curves from the AO data, identification of significant RV accelerations, calculation of two-dimensional companion probability distributions, and the completeness analysis that was performed for each individual system. Finally, in section 4 we discuss our occurrence rate calculations and analysis of eccentricity distributions. OBSERVATIONS Radial velocity measurements were made at Keck Observatory as part of more than a dozen PI-led programs falling under the umbrella of the California Planet Survey (CPS; Howard et al 2010). We observed each target star using the High Resolution Echelle Spectrometer (HIRES) (Vogt et al 1994) following the standard practices of CPS. Our selected sample includes all known one- and two-planet systems discovered via the radial velocity method with at least ten RV observations obtained using HIRES. We also excluded systems with a Keck baseline shorter than the published orbital period. The published planets in our resulting sample of 123 systems span a range of masses and semi-major axes, as shown in Figure 1.
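A sketch of those two selection cuts expressed as a table filter; the column names are hypothetical:

```python
import pandas as pd

def select_sample(systems: pd.DataFrame) -> pd.DataFrame:
    # Keep one- and two-planet RV systems with >= 10 HIRES epochs and
    # a Keck baseline longer than the published orbital period.
    keep = ((systems["n_planets"] <= 2)
            & (systems["n_hires_rvs"] >= 10)
            & (systems["baseline_yr"] * 365.25 > systems["max_period_d"]))
    return systems.loc[keep]
```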
RV baselines for these targets range from 5.02 to 18.18 years, making it possible to detect gas giant planets spanning a broad range of orbital semi-major axes. Properties of the target stars are described in Table 1. Figure 2 shows the distribution of stellar masses in our sample. While most stars are F and G stars, there are significant numbers of M, K, and A stars. The A stars in this sample are all moderately evolved, which facilitates precise radial velocity measurements.

Keck HIRES Radial Velocities

All of the target stars were observed using the High Resolution Echelle Spectrometer (HIRES) on Keck I (Vogt et al 1994). While the majority of the RV data used in this study was published in previous papers, we also obtained new observations that extend these published baselines by up to 12 years. To reduce the RV data, the standard CPS HIRES configuration and reduction pipeline were used. We measured Doppler shifts from the echelle spectra using an iodine absorption spectrum and a modeling procedure descended from Butler et al (1996). The set of observations for each star comprises a "template spectrum" taken without iodine and de-convolved using a reference point spread function (PSF) inferred from near-in-time observations of B-stars through iodine, and a set of dozens to hundreds of observations through iodine that each yield an RV. We used one of the 0.86″-wide slits ('B5' or 'C2') for the observations taken through iodine and a 0.57″ ('B1' or 'B3') or 0.86″-wide slit for the template observations. Using a real-time exposure meter, integration times of 1-8 minutes were chosen to achieve (in most cases) a signal-to-noise ratio of ∼220 in the reduced spectrum at the peak of the blaze function near 550 nm. All Doppler observations were made with an iodine cell mounted directly in front of the spectrometer entrance slit. The dense set of molecular absorption lines imprinted on the stellar spectra provide a robust wavelength fiducial against which Doppler shifts are measured, as well as strong constraints on the shape of the spectrometer instrumental profile at the time of each observation (Marcy & Butler 1992; Valenti et al 1995). The velocity and corresponding uncertainty for each observation is based on separate measurements for ∼700 spectral chunks each 2 Å wide. The RVs are corrected for motion of Keck Observatory through the solar system (barycentric corrections). The measurements span 1996-2015 (see Table 2). Measurements made after the HIRES CCD upgrade in 2004 August have a different (arbitrary) velocity zero point (not the star's systemic velocity) and suffer from somewhat smaller systematic errors. A summary of the radial velocity data used in this work is provided in Table 2. We include best-fit stellar jitter and RV acceleration "trend" values from our orbital solution fitting described in section 3.

NIRC2 AO Imaging

We obtained K-band images of all targets using the NIRC2 instrument (Instrument PI: Keith Matthews) on Keck II. We used natural guide star AO imaging and the narrow camera setting (10 mas pixel⁻¹) to achieve better contrast and spatial resolution. For most targets, we imaged using the full NIRC2 array (1024×1024 pixels) and used a 3-point dither pattern that avoids NIRC2's noisier quadrant. Because NIRC2 does not have neutral density filters, we used the subarray mode (2.5" or 5" field of view) to decrease readout time when it was necessary to avoid saturation.
We typically obtained two minutes of on-target integration time per system in position angle mode. We use dome flat fields and dark frames to calibrate the images. We identify image artifacts by searching for pixels that are 8σ outliers compared to the counts in the surrounding 5×5 box. We replace these pixels by the median value of the same 5×5 box. To compute contrast curves, we register all frames with the target star and then combine using a median stack. Table 3 summarizes the NIRC2 AO observations taken during this survey that were used in subsequent analysis.

Note (Table 2). - Systems with 3σ trends and above are listed in bold. (10) Because this system has a new outer planet whose period is just covered by the RV baseline, we fix the trend to zero. (11) Because the RV accelerations in systems HD 50499, HD 68988, HD 72659, HD 75898, HD 92788, and HD 158038 have some curvature, we fit them with a two planet solution. Since the partially resolved orbit and linear trend are degenerate, we fix the slope to zero in these fits. During these fits, we also fix the poorly constrained eccentricity of the outer planet to zero. One caveat is that we assume that the residual RV signals are due to a single body, even though they could be the sum of multiple bodies.

Note (Table 3). - The "Array" column denotes the horizontal width, in pixels, of the section of the detector used to capture the image. All PHARO images are taken in the full 1024×1024 array. The NIRC2 array dimensions used in this survey were 1024×1024 (the full array), 512×512, or 256×264. These dimensions are constrained by NIRC2's readout software. The Tint column indicates the total integration time of a single exposure, in seconds, and the Nexp column indicates the number of exposures used in the final stacked image. System HD 158038 was imaged using PHARO; the rest were imaged using NIRC2.

Radial Velocity Fitting

The presence of a distant, massive companion manifests as a long-term acceleration for observations with baselines significantly shorter than the companion's orbital period. To detect and quantify the significance of these "trends", we performed a uniform analysis of these systems using a Markov Chain Monte Carlo (MCMC) technique. The initial set of parameter values for the MCMC run were determined using a χ² minimization fitting procedure. For a single-planet system, the MCMC algorithm simultaneously fit eight free parameters to the RV data: six orbital parameters (the velocity semi-amplitude, the period of the orbit, the eccentricity of the orbit, the argument of periastron, the true anomaly of the planet at a given time, and the arbitrary RV zero point), a linear velocity trend, and a stellar jitter term (Isaacson & Fischer 2010). This additional error term is added to the internal uncertainty of each radial velocity measurement in quadrature. All parameters had uniform priors. While it is formally correct to use log priors for parameters such as the velocity semi-amplitude, jitter term, and linear trend, we find that our use of uniform priors has a negligible effect on our posterior PDFs. We initialize our MCMC chains using the published parameters for the inner planets in these systems, which are typically quite close to our final best-fit parameters.
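The single-planet model fit by the MCMC can be written compactly. The sketch below implements a standard Keplerian radial velocity curve plus a linear trend and zero point; the function names and the choice of time of periastron as the phase parameter are illustrative conveniences, not the exact parameterization used in the fits (which phase the orbit through the true anomaly at a given time).

```python
# A minimal sketch of a single-planet RV model with a linear trend.
import numpy as np

def solve_kepler(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e sin E for E by Newton iteration."""
    E = M.copy()
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, K, P, e, omega, tp, gamma, trend):
    """Stellar reflex RV: Keplerian term plus zero point and linear trend."""
    M = np.mod(2.0 * np.pi * (t - tp) / P, 2.0 * np.pi)   # mean anomaly
    E = solve_kepler(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + gamma + trend * (t - t[0])
```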
Furthermore, we note that the choice of prior should only affect the posterior probability distributions in the data-poor regime; in this case the data provide good constraints on the parameters in question, and as a result the posterior PDF is effectively independent of our choice of prior. The likelihood function used in this analysis is given in Equation 1, where σ_i is the instrumental error, σ_jit is the stellar jitter, v are the data, and m is the model:

L = ∏_i [2π(σ_i² + σ_jit²)]^(−1/2) exp[−(v_i − m_i)² / (2(σ_i² + σ_jit²))].   (1)

The confidence intervals on each parameter were obtained from their posterior distribution functions. On 2004 August 19, the HIRES CCD was upgraded, leading to a different RV zero point for data taken before and after this date. For systems with Keck HIRES RVs obtained prior to 2004, we include an offset parameter between the two datasets as an additional free parameter. Although there is some evidence that the post-upgrade jitter is lower than the pre-upgrade jitter by approximately 1 m/s (e.g. Howard et al. 2014), we find that this change is much smaller than the average jitter level for the majority of our targets, and our decision to fit a single jitter term across both epochs is therefore unlikely to have a significant effect on our conclusions. Approximately 30% of our targets have no pre-upgrade data at all, while an additional 50% have fewer than ten data points pre- or post-upgrade, making it difficult to obtain meaningful constraints on the change in jitter between these two epochs (e.g., Fulton et al. 2015). We therefore conclude that a uniform approach to these fits is preferable to a more customized approach in which we include two separate jitter terms for the approximately 20% of systems where such an approach is feasible. In addition to reproducing the published solutions of confirmed exoplanets, we detected eight new long-period planets with fully resolved orbits in systems GJ 317, HD 4203, HD 33142, HD 95089, HD 99706, HD 102329, HD 116029, and HD 156279. Trends were previously reported in the literature for several of these systems. We note that the two planets in HD 116029 are in 3:2 period commensurability. To assess whether a dynamical model fit was needed, we used the Mercury integrator to numerically integrate the orbits of both planets in HD 116029 in order to determine the magnitude of the change in orbital parameters. We found that over the observational window of ∼8 years, the orbital elements of both planets varied by less than a fraction of a percent. Thus we conclude that a Keplerian model fit is sufficient to characterize the planets in HD 116029. Relevant characteristics of the new outer planets are listed in Table 5, and the corresponding RV solutions are plotted in Figures 3 through 10. RV measurements for these eight systems are listed in Table 4.

Figure 10. RV measurements and best fit models for HD 156279. The first and second panels show the combined two planet orbital solution and the residuals of that fit, respectively. The third plot shows the orbital solution for the inner planet after the outer planet solution and trend were subtracted, while the fourth plot shows the outer planet orbital solution with the inner planet and trend subtracted.

We considered a linear trend detection to be statistically significant if the best-fit slope differed from zero by more than 3σ, and report best-fit trend slopes and stellar jitter values for all systems in Table 2. The nominal values quoted in this table are taken from the χ² fits, and the errors come from the MCMC analysis.
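A minimal implementation of this likelihood, with the jitter term added in quadrature to the internal uncertainties, might look as follows (assuming the reconstructed form of Equation 1 above):

```python
# Gaussian log-likelihood with a stellar jitter term added in quadrature
# to each internal measurement uncertainty (cf. Equation 1).
import numpy as np

def log_likelihood(v, m, sigma, sigma_jit):
    """v: measured RVs, m: model RVs, sigma: per-point errors, sigma_jit: jitter."""
    s2 = sigma**2 + sigma_jit**2
    return -0.5 * np.sum((v - m)**2 / s2 + np.log(2.0 * np.pi * s2))
```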
We detected 20 statistically significant trends due to the presence of an outer companion. We find that all but 16 of our orbital solutions for the known inner planets in these systems were consistent with the published orbits at the 2σ level or better. Of the solutions that changed, the majority were systems with long-period planets for which our newly extended baseline provided a more tightly constrained orbital solution. This longer baseline was particularly important for systems with both long-period planets and RV accelerations, such as HD 190360. We present updated orbital solutions for all of the planets outside 3 AU in Table 5. We defer the publication of updated orbits for planets inside 3 AU and individual radial velocities for all systems to future publications, as these systems are the subject of other research projects currently in progress.

3.2. Non-Planetary Sources of RV Trends

There were two scenarios in which systems with statistically significant trend detections were excluded from further analysis. In two systems, we found that the observed accelerations were correlated with stellar activity. We compared the RV trends in each system to the measured emission in the Ca II H&K lines, quantified by the S_HK index (Isaacson & Fischer 2010), to determine if the RV trends were caused by stellar activity instead of an outer companion (Santos et al 2010). Both HD 97658 and HD 1461 showed a clear correlation between the observed RV trend and the measured S_HK values, and we therefore excluded them from subsequent analysis. We also excluded systems with a linear acceleration that could have been caused by a nearby directly imaged stellar companion. We first examined our K-band AO images for all stars with statistically significant radial velocity trends in order to determine which systems contained a directly imaged stellar companion. HD 164509 has a companion 0.75″ away, and HD 195109 has a companion 3.4″ away. To determine whether these companions could have caused the RV trends in these systems, we compared the minimum mass estimate from the RV trend to the companion mass estimate from the AO image. We calculated the minimum companion mass using the equation from Torres (1999):

M_min = 5.34 × 10⁻⁶ M⊙ (d/pc · ρ/″)² (v̇ / (m s⁻¹ yr⁻¹)) F(i, e, ω, φ).   (2)

In this equation, d is the distance to the star, ρ is the projected separation of the companion and the star on the sky, v̇ is the radial velocity trend, and F(i, e, ω, φ) is a variable that depends on the orbital parameters of the companion that are currently unconstrained. We use a value of √27/2 for F, which is the minimum value of this function calculated in Liu et al (2002). HD 164509 is 52 pc away and has a companion located at a separation of 0.75″. With a radial velocity trend of 3.4 m s⁻¹ yr⁻¹, this trend corresponds to a minimum companion mass of 0.072 M⊙. To estimate the mass of the companion from the AO image, the brightness of the companion in K band relative to the primary is used, as described in section 3.4. With a relative K-band magnitude of 3.59, we find that the estimated mass from the AO data is 0.33 M⊙. Since the companion mass calculated from the AO data is greater than the minimum mass needed to explain the RV trend, we therefore conclude that this companion may indeed be responsible for the observed trend and exclude this system from subsequent analysis. HD 195109 is 38.5 pc away and has a companion located at a separation of 3.4″.
With a radial velocity acceleration of 1.9 m s⁻¹ yr⁻¹, a stellar companion at the observed AO separation must have a mass of at least 0.44 M⊙ in order to cause the observed trend. With a relative K-band magnitude of 2.66, we find that the estimated mass from the AO data is 0.58 M⊙. We conclude that the imaged companion could have caused the RV acceleration, and thus removed this system from future analyses. We note that this companion was previously reported in Mugrauer et al (2007). Howard et al (2010) imaged a faint M-dwarf companion located 489.0±1.9 mas from the primary star HD 126614. With an absolute K-band magnitude of 6.72, the authors estimated the mass of this companion to be 0.324 ± 0.004 M⊙. From Equation 2, the estimated minimum mass of the companion inducing the RV trend, given a distance of 72.6 pc and a trend of 14.6 m s⁻¹ yr⁻¹, is 0.26 M⊙. Since the minimum estimated RV mass is lower than the estimated AO mass, we conclude that the imaged AO companion could cause the RV trend, and thus remove this system from subsequent analyses. Note that none of these AO companions have second epoch data, and thus have not been confirmed as bound to their respective primaries. However, at these projected separations and contrast ratios the probability that the companion is a background star is relatively low, and we therefore proceed under the assumption that they are bound. We also carried out a literature search to determine whether any of the remaining trend systems had additional stellar or substellar companions. We found that HD 109749 has a known binary companion described in the published literature. HD 109749 has a companion with K-band magnitude of 8.123 separated by 8.35″ (Desidera & Barbieri 2007). This visual binary lies outside the field of view for our AO observations. After calculating the minimum companion mass from the measured RV trend and comparing this value to the estimated mass from the AO data found in the literature, we found that this companion cannot explain the acceleration observed in this system. After removing stellar sources of RV trends, we find 20 systems with accelerations that have slopes at least 3σ away from zero. The RV data and best-fit accelerations for each of these systems are plotted in Figure 11.

Contrast Curves

We used contrast curves from our AO observations to put limits on the masses and separations that a companion in each system could have. We calculate contrast curves for our target stars as follows. First, we measure the full width at half max (FWHM) of the central star's point spread function in the stacked and combined image, taking the average of the FWHM in the x and y directions as our reference value. We then create a box with dimensions equal to the FWHM and step it across the array, calculating the total flux from the pixels within the box at a given position. The 1σ contrast limit is then defined as the standard deviation of the total flux values for boxes located within an annulus with a width equal to twice the FWHM centered at the desired radial separation. We convert absolute flux limits to differential magnitude units by taking the total flux in a box of the same size centered on the peak of the stellar point spread function and calculating the corresponding differential magnitude at each radial distance.
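The contrast-curve recipe above can be sketched as follows. For brevity this version evaluates overlapping FWHM-sized boxes at every pixel via a uniform filter rather than stepping discrete boxes, which should give a comparable scatter estimate; registration, artifact rejection, and position-angle bookkeeping are omitted.

```python
# Sketch of a boxcar/annulus contrast curve, in delta-magnitudes vs. radius.
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_curve(image, center, fwhm, radii, nsigma=5.0):
    box = max(int(round(fwhm)), 1)
    # Total flux in a box of width ~FWHM centered on every pixel.
    box_flux = uniform_filter(image.astype(float), size=box) * box**2
    cy, cx = center
    star_flux = box_flux[cy, cx]            # box flux at the PSF peak

    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - cy, xx - cx)

    limits = []
    for r0 in radii:
        annulus = np.abs(r - r0) < fwhm     # annulus of width 2*FWHM at r0
        sigma = box_flux[annulus].std()     # 1-sigma scatter of box fluxes
        limits.append(-2.5 * np.log10(nsigma * sigma / star_flux))
    return np.array(limits)
```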
We show the resulting 5σ average contrast curve for these observations in Figure 13; although our field of view extends farther in some directions than the maximum separations shown here, we have limited our calculations to radial separations with data available at all position angles. We next use our contrast curves to place limits on the allowed masses of stellar companions as a function of projected separation. We interpolate the PHOENIX stellar atmosphere models (Husser et al 2013) in the available grid of solar metallicity models to produce a model that matches the effective temperature and surface gravity of the primary star. For the proposed low-mass main sequence companions, we create PHOENIX models with radii and effective temperatures drawn from Baraffe et al (1998). We then calculate the corresponding contrast ratio between the primary and secondary by integrating over the appropriate bandpass (either Kp or Ks), adjusting the mass of the secondary downward until we match the 5σ limit from our contrast curve. We discuss the merits of this approach as compared to other methods commonly utilized in AO imaging searches in Knutson et al. (2014).

Companion Probability Distributions

We combine our AO and RV observations in order to constrain the allowed range of masses and semi-major axes for the observed companions. The duration and shape of the RV trend places a lower limit on the mass and semi-major axis of the companions. Similarly, a non-detection in AO gives a complementary upper limit on these quantities. We create a two dimensional probability distribution for each companion by defining an equally spaced 50×50 grid of logarithmic companion mass (true mass) and semi-major axis ranging from 1-500 AU and 0.05-1000 M Jup. We then subtract off the orbital solutions of the confirmed inner planets, leaving only the trends due to the companions. At each grid point in mass and semi-major axis, we inject 500 simulated companions. While the semi-major axis and mass of the companion remain fixed at each point, we drew a new inclination of the orbit each time from a uniform distribution in cos(i), and a new eccentricity each time from the beta distribution (Kipping 2013). This distribution is defined in Equation 3, where P_β is the probability of a given eccentricity, Γ is the gamma function, and a = 1.12 and b = 3.09 are constants calculated from the known population of long period giant planets:

P_β(e) = [Γ(a + b) / (Γ(a) Γ(b))] e^(a−1) (1 − e)^(b−1).   (3)

Given this fixed mass, semi-major axis, and eccentricity for each simulated companion, we fit the remaining orbital parameters to the RVs using a least squares algorithm, and we calculate a corresponding χ² value.

Figure 11. Best fit accelerations to the radial velocity data with a 3σ trend. The best fit trend is shown as a solid blue line, the errors on the slope are presented as dashed purple lines. The solid red line marks the date when the HIRES detector was replaced, which caused an offset in the measured RVs for the stars in our sample. The confirmed planet orbital solutions have been subtracted from both the RV data and from the best fit orbital solution to yield the trends. Systems with curved trends include HD 50499, HD 68988, HD 72659, HD 75898, HD 92788, and HD 158038. The plots with the curved trends show the best fit one planet orbital solution to the data after the inner planet solution was subtracted.

We note that the probability distribution calculations are not particularly sensitive to the assumed eccentricity distribution.
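Drawing the injected companions' orbital elements as described, inclinations uniform in cos(i) and eccentricities from the beta distribution with the quoted shape parameters, takes only a few lines with NumPy:

```python
# Sampling of injected-companion orbital elements as described above
# (a minimal sketch; 500 draws per grid point).
import numpy as np

rng = np.random.default_rng(42)
n = 500
cos_i = rng.uniform(0.0, 1.0, n)
incl = np.arccos(cos_i)            # uniform in cos(i)
ecc = rng.beta(1.12, 3.09, n)      # beta distribution of Kipping (2013)
```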
We recalculated the probability distributions for 30 random systems within our sample assuming a uniform eccentricity distribution, and found that the 1σ semi-major axis and mass ranges, as presented in Table 6 for the 3σ trend systems, are generally consistent with each other to a couple of grid points. We incorporate the constraints on potential companions from our AO observations using a method identical to the one described above. Within each mass and semi-major axis box we first generate a set of 500 companions with randomly selected masses, semi-major axes, and an eccentricity drawn from Eq. 3. We then fit for the remaining orbital parameters using the RV data, and use this best-fit orbit to calculate a set of 1000 projected separations for the companion sampled uniformly across the orbit. We then use our AO contrast curve to determine whether or not a companion of that mass and projected separation could have been detected in our AO image for each of the 1000 time steps considered. If the companion lies above our contrast curve we assume that it would have been detected, and if it lies below the curve we count it as a non-detection. For companions with large enough projected separations our images do not span all position angles, and we therefore assume that companions that lie above our contrast curve would be detected with a probability equal to the fractional position angle coverage of our image at that separation. We can then calculate the probability that a given companion would have been detected by determining the fraction of our 1000 time steps in which the companion lies above the contrast curve for that star. The lower and upper limits on the mass/semi-major axis parameter space occupied by each companion can be combined to form a two dimensional probability distribution. After multiplying the χ² cube in mass, semi-major axis, and eccentricity from the RV trends by the detection probability cube from the AO contrast curves, we marginalize this new cube over eccentricity to yield a two dimensional probability distribution. Figure 14 shows the posterior distributions for the companions in each of the 20 systems with statistically significant RV trends. Table 6 lists the 1σ mass and semi-major axis ranges derived for each companion from this analysis. As expected, systems with strong curvature in the observed radial velocity accelerations have tighter constraints on the allowed mass and semi-major axis of the companion than those with linear trends.

Figure 14. Companion probability distributions. The three contours define the 1σ, 2σ, and 3σ levels moving outward. While the radial velocity trends constrain these distributions on the low mass, low semi-major axis end, AO imaging constrains the high mass, high semi-major axis parameter space. Note that the masses in these plots are true masses, not M sin i. Also note that the probability contours for HD 50499, HD 68988, HD 158038, and HD 180902 are not shown here. This is due to the fact that the grid is too coarse to resolve the contours of these well-constrained systems (the probability density is concentrated in only a couple of grid points). Finally, in some of these plots there is an apparent splitting of the contours at high mass and separation (e.g. HD 4208, HD 168443). This is due to the fact that the constraints from the AO images were modified by the percentage of position angles covered at wide separations.
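The AO side of this calculation amounts to interpolating the contrast curve at each sampled projected separation and counting detections, weighted by the fractional position-angle coverage. A minimal sketch, assuming the contrast curve is supplied as monotonically increasing arrays of separation and limiting ΔK, and that the companion's ΔK has already been computed from the stellar models:

```python
# Detection probability of a simulated companion from an AO contrast curve.
import numpy as np

def detection_probability(rho_arcsec, delta_k_companion,
                          curve_sep, curve_dk, pa_coverage=1.0):
    """rho_arcsec: sampled projected separations along the best-fit orbit.
    curve_sep/curve_dk: the 5-sigma contrast curve (sep must be increasing).
    pa_coverage: fraction of position angles imaged at this separation."""
    limit = np.interp(rho_arcsec, curve_sep, curve_dk)  # limit at each rho
    detected = delta_k_companion < limit                # brighter than limit
    return pa_coverage * np.mean(detected)
```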
Based on the probability contours in Figure 14 and the corresponding table of allowed companion masses, we conclude that the majority of companions are most likely gas giant planets, as field surveys indicate that the occurrence rate of brown dwarfs (13-80 M Jup) around sun-like stars is 3.2 +3.1 −2.7 % (Metchev & Hillenbrand 2009). We note that while the Metchev and Hillenbrand result is for brown dwarf companions to sun-like stars between 28-1590 AU, the brown dwarf parts of parameter space for our companions are typically outside of 28 AU. Therefore, the comparison to the Metchev and Hillenbrand occurrence rate is appropriate. For comparison, Cumming et al (2008) state that 17%-20% of solar type stars host a giant planet (0.3-10 M Jup) within 20 AU.

Completeness Maps

We quantified the sensitivity of this survey to companions over a range of masses and semi-major axes by determining the completeness of each system given the system's radial velocity baseline. Once again, we defined a 50×50 grid in log mass/semi-major axis space from 1-500 AU and 0.05-1000 M Jup. In each defined grid box, we injected 500 simulated planets, each with a random mass and semi-major axis uniformly drawn from the grid box. We draw the inclination of the orbit from a uniform distribution in cos i, the eccentricity from the beta distribution, and the remaining orbital elements from a uniform distribution. At each epoch that the star was observed, we calculated the expected RV signal caused by the injected companion. We generated errors for these simulated data by drawing randomly from a normal distribution of width √(σ_i² + σ_jitter²), where σ_i are the randomly shuffled measurement errors from the original radial velocities and σ_jitter is the best-fit jitter value. To determine if a simulated companion would be detectable, we fit either a one planet orbital solution, a linear trend, or a flat line to the simulated RV observations over the observed baseline. To determine which was the best fit, we used the Bayesian information criterion (BIC). This is defined as BIC = −2 ln L + k ln n, where L is the maximum likelihood of the model, k is the number of free parameters in the model, and n is the number of data points in the observed data set. While the likelihood can be increased by simply fitting models with more free parameters, the BIC selects against these with a penalty term. The lower the BIC value the better the model fit. Comparing two models, if ∆BIC > 10, this is very strong evidence for the model with the lower BIC (Kass & Raftery 1995). Thus if the BIC values for the trend or the one-planet models were lower by at least ten compared to the BIC value for the flat line, the simulated companion was "detected", whereas if the flat line was the best fit, that companion was "not detected". This process was repeated for 500 simulated companions injected into each grid box, producing a completeness map of detection probability as a function of mass and semi-major axis.

Figure 15. Average completeness map for all systems. Each color corresponds to a detection probability. For example, companions occupying parameter space in the white areas of the map had a 90% to a 100% chance of being detected by this survey.

Figure 15 shows the average completeness map of all of the systems. Figure 16 shows the 50% contour for the average of all the systems, for the least sensitive system, and for the most sensitive system.
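The detection decision for each injected companion can be phrased as a BIC comparison, as in the sketch below; the fit routines for the trend and one-planet models are assumed to exist, and the parameter counts k are illustrative.

```python
# BIC-based detection criterion for injected companions (illustrative).
import numpy as np

def bic(loglike, k, n):
    """BIC = -2 ln L + k ln n."""
    return -2.0 * loglike + k * np.log(n)

def gaussian_loglike(resid, sigma):
    return -0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

def is_detected(t, rv, sigma, fit_trend, fit_planet):
    """fit_trend/fit_planet: hypothetical callables returning best-fit model RVs."""
    n = len(rv)
    bic_flat  = bic(gaussian_loglike(rv - np.mean(rv), sigma), 1, n)
    bic_trend = bic(gaussian_loglike(rv - fit_trend(t, rv), sigma), 2, n)
    bic_kep   = bic(gaussian_loglike(rv - fit_planet(t, rv), sigma), 6, n)
    # "Detected" if the trend or planet model beats the flat line by >10 in BIC.
    return min(bic_trend, bic_kep) < bic_flat - 10.0
```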
Figure 16. Completeness contours corresponding to 50% probability of detection. The black contour corresponds to the average sensitivity for all the systems, the blue contour corresponds to HD 156668, the system with the greatest sensitivity, and the green contour corresponds to HD 5891, the system with the least sensitivity.

The sensitivity of each system to planets with varying masses and semi-major axes depends on the length of the RV baseline, the magnitude of the measurement errors, and the number of data points for the system. The longer the baseline, the smaller the errors, and the greater the number of data points, the more sensitive the system. The least sensitive system is HD 5891, while the most sensitive system is HD 156668.

The distribution of wide companions

Now that we have determined the parameter space where each detected companion is most likely to reside, we can determine the most likely underlying distribution for these massive, long-period companions in confirmed exoplanet systems. We assume that the companions are distributed in mass and semi-major axis space according to a double power law (e.g. Tabachnik & Tremaine 2002) of the form f(m, a) = C m^α a^β. The total likelihood for a set of N exoplanet systems is given by L = ∏_i p(d_i | C, α, β), where the factor on the right is the probability of obtaining the set of data d for a system i given values for C, α, and β. We assume that each system can have at most one companion, and that the probability of obtaining the measured RV dataset for an individual star is therefore the sum of the probability that the system does contain a planet and the probability that the system does not contain a planet for each set of C, α, and β values considered. The probability of a system having zero planets is given by the product p(d_i|0)(1 − Z). The quantity p(d_i|0) is the probability of obtaining the measured RV dataset given that there are no planets in the system. Z is the probability that the system contains a planet within the specified range in mass and semi-major axis space. Here, p(d_i|0) and Z are given by the following equations:

p(d_i|0) = ∏_j [2πσ_j²]^(−1/2) exp[−(d_j − m_j)² / (2σ_j²)],   (7)

Z = ∫∫ C m^α a^β dm da.

In equation 7, d_j is the jth datapoint in the dataset d for system i, m_j is the corresponding model point, and σ_j is the error on the jth datapoint. The probability of a system having one planet given values C, α, and β is ∫∫ p(d_i|a, m) C m^α a^β dm da, where p(d_i|a, m) is the probability of a companion at a given mass and semi-major axis, which we know from the previously calculated two dimensional probability distributions. We then combine these expressions in order to calculate the likelihood of a given set of C, α, and β values given the measured RV data for all the stars in our sample. Note that for this calculation we use the probability distributions for all systems, not just those with 3σ trends. To maximize L, we varied the values of C, α, and β using a grid search. The 16%-84% confidence intervals on these parameters were then obtained using the MCMC technique.

Occurrence Rates

The overall occurrence rate for the population of companions can be estimated by integrating f(m, a) over a range of masses and semi-major axes. In addition to the population of exoplanet systems described previously, we also included the 51 hot Jupiter systems published in Knutson et al (2014). While we adopted the published RV model fits for each of the hot Jupiter systems, we recalculated probability distributions with the same grid spacing used for the 123 new systems described in this study for consistency. In Knutson et al (2014), we utilized a conservative approach in which we defined a given planet as a non-detection with 100% probability whenever the measured trend slope was less than 3σ away from zero.
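Because the assumed f(m, a) = C m^α a^β is separable, the integrated occurrence rate over any mass and semi-major axis range has a closed form. A small sketch follows; the normalization C and exponents used in the example call are placeholders, not fitted values.

```python
# Integrated occurrence rate of a double power law f(m, a) = C m^alpha a^beta.
import numpy as np

def occurrence(C, alpha, beta, m_lo, m_hi, a_lo, a_hi):
    """Closed-form integral of C m^alpha a^beta over the given ranges."""
    def power_int(lo, hi, p):
        if np.isclose(p, -1.0):
            return np.log(hi / lo)
        return (hi**(p + 1) - lo**(p + 1)) / (p + 1)
    return C * power_int(m_lo, m_hi, alpha) * power_int(a_lo, a_hi, beta)

# Example with placeholder parameters, integrated over 1-20 M_Jup and 5-20 AU.
print(occurrence(C=0.05, alpha=-0.3, beta=0.3, m_lo=1, m_hi=20, a_lo=5, a_hi=20))
```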
Instead of using a binary picture of planet occurrence, our revised likelihood function is more statistically correct, as it considers the probability of hosting a planet in all of our systems. We note that integrated companion occurrence rates calculated using this approach are particularly sensitive to the estimated jitter levels in our fits, where an underestimate of the true stellar jitter levels could result in an over-estimate of the corresponding companion occurrence rates. As a test of this new method we recalculate the companion occurrence rate for the sample of 51 transiting hot Jupiters presented in Knutson et al (2014) and find a value of 70 ± 8% for companions between 1-13 M Jup and 1-20 AU. This is approximately 2σ higher than the value of 51 ± 10% obtained for this sample of stars using our older, more conservative likelihood function. We calculate the overall frequency of companions beyond 5 AU in our new expanded sample of 174 planetary systems by integrating over our best-fit probability distributions. We evaluate the companion frequency using a variety of different mass and period ranges in order to determine how sensitive this result is to the specific limits of integration selected. The resulting total occurrence rates are presented in Table 7, and the corresponding values of C, α, and β are shown in Table 8.

Note (Tables 7 and 8). - We note that the α and β values presented here are strongly influenced by the slope of the probability distributions for companions with partially resolved orbits, and therefore should not be taken as reliable estimates of the actual companion distribution. Please see the discussion below for further explanation.

We find that our values of α and β vary significantly depending on the integration range chosen, and are therefore not accurate estimates of the power law coefficients for this population of long-period companions. This dependence on integration range is due to the fact that many of the companions detected in our study have poorly constrained masses and orbits. When we vary the range of masses and semi-major axes used in our fits we truncate the probability distributions for these companions at different points, therefore biasing our corresponding estimates of α and β. Although it is difficult to obtain reliable estimates for the values of α and β for long-period companions, we can nonetheless investigate whether or not this population increases in frequency as a function of increasing mass and semi-major axis by calculating the occurrence rate of this sample of systems using equal steps in log space to increase the semi-major axis and mass integration ranges. When stepping in semi-major axis, we keep the mass range constant, 1-20 M Jup, and when stepping in mass, we keep the semi-major axis range constant, 5-20 AU. We then compare the observed changes in companion frequency per step in log mass or log semi-major axis in order to determine empirically how the overall distribution of companions compares to predictions from various power law models. For example, if the increase in frequency per log semi-major axis declines at larger separations this would imply a negative value for β, whereas the opposite would be true for a positive β. We calculate the uncertainties on the changes in occurrence rates by adding the individual uncertainties on the occurrence rates in quadrature.
We calculate the change in the integrated occurrence rate as a function of increasing semi-major axis (Figure 17) using a lower integration limit of 1 AU and including all planets in these systems, not just the outer companions. We find that for small separations these rates increase relatively quickly as compared to the predictions of a power law model with β = 0 (i.e. a uniform distribution in semi-major axis), whereas for large separations these rates increase relatively slowly. This suggests a positive β value for giant planets at smaller separations and a negative β value for outer companions at larger separations, with a broad peak in the distribution between 3-10 AU. When we examine the corresponding change in occurrence rate for companions beyond 5 AU as a function of planet mass (Figure 18), we find that these rates also increase slowly as compared to the predictions of a power law model with α = 0. This implies a negative α value. We next compare our constraints on the mass and semi-major axis distribution of long-period companions to predictions based on studies of short-period planets around FGK stars. Since values of α and β are broadly consistent among these studies (e.g. Bowler et al 2010), the results from Cumming et al (2008) will be taken as representative: α = −0.31±0.2 and β = 0.26±0.1. These values were derived for planet masses between 0.3−10 M Jup and periods less than 2000 days (approximately 3 AU). We would like to know whether or not the population of companions beyond 5 AU is consistent with predictions based on the power law coefficients from this study. We answer this question by repeating our previous calculation using the Cumming et al power law, where we determine the change in the integrated occurrence rate per log mass and semi-major axis step over the parameter range of interest.

Figure 17. This plot shows the change in occurrence rate between adjoining semi-major axis steps as a function of the upper semi-major axis integration limit. The results for the Cumming et al power law distribution are plotted in purple, while the results from this survey are plotted in blue. For the fits for our survey we include all planets in these systems outside 1 AU, not just outer companions as in the rest of our analysis. This allows us to study the relative distribution of planets in these systems across a broad range of semi-major axes. The sensitivity limit of the Cumming et al survey is ∼3 AU. For our survey, we are ∼50% complete between 1-20 M Jup and 5-100 AU. We note that the slight upward trend of the purple histogram bins corresponds to a β value that is 2.6σ away from zero.

Figure 18. This plot shows the change in occurrence rate between adjoining mass steps as a function of the upper mass integration limit. The results from the Cumming et al power law distribution are plotted in purple, while the results from this survey are plotted in blue. We note that Cumming et al only include planets with masses below 10 M Jup in their survey, whereas we include companions with masses up to 20 M Jup. The occurrence rates for larger masses shown in this plot are therefore an extrapolation based on our best-fit power law models. The slight downward trend in the purple histograms corresponds to an α value that is 1.6σ away from zero.
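The Cumming et al comparison amounts to integrating their power law over successive log-spaced bins and propagating Gaussian draws of the exponent, as in this simplified sketch; the normalization and the mass-integral factor m_term are placeholders, and the α-β correlation is ignored, as noted in the text.

```python
# Change in integrated occurrence per log semi-major axis step under a power
# law, with Monte Carlo uncertainties from a Gaussian beta (cf. Figure 17).
import numpy as np

def occ_steps_sma(C, beta, edges, m_term):
    """Occurrence added in each [edges[i], edges[i+1]] semi-major axis bin."""
    return np.array([C * m_term * (hi**(beta + 1) - lo**(beta + 1)) / (beta + 1)
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(0)
edges = np.logspace(0, np.log10(20), 6)     # equal log-steps over 1-20 AU
draws = np.array([occ_steps_sma(0.05, rng.normal(0.26, 0.10), edges, m_term=3.0)
                  for _ in range(2000)])    # beta = 0.26 +/- 0.1 (Cumming et al)
print(draws.mean(axis=0), draws.std(axis=0))
```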
We calculate the uncertainties on these changes in occurrence rate by assuming Gaussian distributions for α and β and using a Monte Carlo method to obtain a distribution of occurrence rates for each semi-major axis and mass integration range. We then determine the uncertainties on the changes in occurrence rates by adding the uncertainties on the occurrence rates in quadrature. We note that due to correlations between α and β these uncertainties are slightly overestimated. We then compare these results to those obtained by fitting to our sample of long-period planets in Figures 17 and 18. As shown in Figure 17, the Cumming et al. power law predicts an increase in the frequency of planets as a function of increasing semi-major axis, whereas our fits suggest a declining frequency for gas giant companions beyond the conservative 3-10 AU range. This implied disagreement between the integrated occurrence rates for our sample as compared to the extrapolated occurrence rates of Cumming et al is not surprising, as Cumming et al (2008) only fit gas giant planets interior to 3 AU. We speculate that this difference may indicate either a peak in the frequency of gas giant planets in the 3-10 AU range, or a difference between the population of outer giant planet companions in these systems and the overall giant planet population. In contrast to this result, Figure 18 indicates that the mass distribution of the long-period companions in our study is consistent with the negative α value (i.e. increasing frequency with decreasing planet mass) reported by Cumming et al. for the population of planets interior to 3 AU. We next consider how the frequency of companions in these systems varies as a function of other parameters, including the inner planet mass, semi-major axis, and stellar mass. We select an integration range of 1−20 M Jup and 5-20 AU for these companions; this range is large enough to include all known companions detected by our survey, while still remaining small enough to ensure that we do not extrapolate too far beyond the region in which we are sensitive to companions. We find that within this integration range, the total occurrence rate for massive, long-period companions is 52.4 +4.5 −4.7 %. Previous work has shown that planet occurrence rates and system architecture vary as a function of stellar mass. The A and M star systems are the high and low extremes of the sample's stellar mass range. To address the concern that including A and M star systems would influence our final results, we ran the entire grid search and MCMC analyses again excluding the 29 A and M star systems in the sample. The occurrence rate for this FGK-only sample is 54.6 +4.8 −4.8 %. We therefore conclude that the occurrence rates for the sample with and without the A and M stars are consistent with each other at the 0.4σ level. Following the total occurrence rate calculation, we calculated the occurrence rate of massive, long-period companions as a function of inner-planet semi-major axis. We divided the total sample up into three bins: systems with planets interior to 0.1 AU (hot gas giants), systems with planets between 0.1 and 1 AU (warm gas giants), and systems with planets between 1 and 5 AU (cold gas giants). For each bin, we repeated our fits to derive new values of C, α, and β, which we integrated over a range of 1−20 M Jup and 5-20 AU. Our results are presented in Figure 19. The hot gas giant companion frequency is 2.4σ higher than that of the warm gas giants, and 2.3σ higher than that of the cold gas giants.
This suggests that gas giants with orbital semi-major axes interior to 0.1 AU may have a higher companion fraction than their long-period counterparts, albeit with the caveat that this short-period bin is dominated by our transiting hot Jupiter sample. These planets typically have fewer radial velocity measurements than planets detected using the radial velocity technique, which could result in an underestimate of the stellar jitter for these stars. If this enhanced companion fraction for short-period planets is confirmed by future studies, it would suggest that three body interactions may be an important mechanism for hot Jupiter migration. Alternatively, this trend might also result from differences in the properties of the protoplanetary disks in these systems. Suppose that each disk that successfully generates gas giant planets produces them at some characteristic radius (e.g. the ice line; see Bitsch et al 2013), separated by some time span, and that these planets subsequently migrate inwards via type II migration. Gas giants that migrate early in the disk's lifetime will reach the inner magnetospheric cavity of the disk, and due to eccentricity excitation mechanisms (Rice et al 2008), will rapidly accrete onto the host star over a timescale that is short compared to the lifetime of the disk. As the disk ages, however, photoevaporation will grow the radius of the inner disk cavity. Accordingly, for those gas giants that arrive later in the lifetime of the disk, the inner disk edge will have been eaten away to the point that the eccentricity excitation mechanisms are no longer effective at shepherding the planets into the host stars, allowing migration to halt. We note that there is a very narrow window of time where the aforementioned processes allow for a successful formation of a hot Jupiter (which may self-consistently explain their inherent rarity; see Rice et al 2008). We would thus expect hot Jupiters to form primarily around stars that hosted disks that were especially efficient at giant planet formation, thus increasing the chances of having a planet reach the inner disk edge during the small window of time where hot Jupiter formation is possible. These highly efficient disks would also be expected to produce more than one gas giant planet, which leads to the expectation that hot Jupiters would be more likely to have companions. We also calculated the occurrence rate of companions as a function of inner planet mass. We divided the sample into three bins in inner planet mass; the results are plotted in Figure 20. We find that intermediate mass planets may be more likely to have a massive, long-period companion, although all three bins are consistent at the 2σ level. We note that our ability to discern trends in companion rate as a function of planet mass is limited by the relatively small sample sizes in the lowest and highest mass bins, which result in correspondingly large uncertainties on their companion rates. Finally, we calculated the occurrence rate of companions outside of 5 AU as a function of stellar mass. Once again, we divided the sample up into three bins: systems with stellar masses from 0.08-0.8 M⊙ (M and K stars), 0.8-1.4 M⊙ (G and F stars), and 1.4-2.1 M⊙ (A stars). Our results are plotted in Figure 21. We find that the occurrence rates for each stellar mass bin are consistent with each other at the 0.2σ level.
Earlier studies indicated that the occurrence rate for gas giant planets interior to 3 AU is higher around A stars than F and G stars; our results for companions beyond 5 AU suggest that these differences may be reduced at large orbital separations, albeit with large uncertainties due to the small number of A stars included in our sample. We note that while mass estimates for the evolved A stars have been debated in the literature (Schlaufman & Winn 2013; Johnson & Wright 2013; Johnson et al 2013; Lloyd 2011, 2013), this has a minimal impact on our conclusions in this study as we find that these evolved stars have the same frequency of companions as the main sequence FGKM stars in our sample.

Eccentricity Distribution

In addition to the results described above, we also seek to quantify how the eccentricity distribution of exoplanets in single planet systems might differ from that of exoplanets in two planet systems or systems with an outer body, as indicated by a radial velocity trend. We quantify these differences by fitting the set of inner planet eccentricities for each sample using the beta distribution (Kipping 2013; Equation 3). We account for the uncertainties in the measured eccentricities for each planet by repeating our beta distribution fit 10,000 times, where each time we draw a random eccentricity from the MCMC posterior probability distribution for each individual planet. The resulting distributions of best-fit a and b values therefore reflect both the measured eccentricities and their uncertainties. Figure 22 plots the distribution of best-fit eccentricities for the two groups of planets. We excluded from this plot, as well as from the beta distribution fits, planets interior to 0.1 AU, whose orbits might have been circularized by tidal forces from the primary star. Figure 23 compares the two-dimensional posterior probability distributions in a and b for each of the two groups, taking into account the uncertainties on each planet eccentricity. We find that the two-planet systems appear to have systematically higher eccentricities than their single planet counterparts, with a significance greater than 3σ. This result appears to contradict previous studies, which found that multi-planet systems have lower eccentricities (Chatterjee et al 2008; Howard 2013; Limbach & Turner 2014; Wright et al 2009). This difference may be explained if the separation between inner and outer planets is larger for cases where the inner planet has a large orbital eccentricity. Previous surveys were typically only sensitive to a 1 M Jup planet out to 3-5 AU, suggesting that many of the multi-planet systems detected by our survey would have been misclassified as single planet systems. The most detailed study of this correlation to date was presented in Limbach & Turner (2014). This study used 403 cataloged RV exoplanets from exoplanet.org (Han et al 2014) to determine a relationship between eccentricity and system multiplicity. 127 of these planets were members of known multi-planet systems, with up to six planets in each system. When the authors calculated the mean eccentricity as a function of the number of planets in each system, they found that systems with more planets had lower eccentricities. We note that the difference between our new study and this one may be due to the fact that the majority of their planets have relatively short orbital periods.
For systems with three or more planets, this means that the spacing between planets is typically small enough to require less eccentric orbits in order to ensure that the system remains stable over its lifetime. Furthermore, their analysis did not take into account the uncertainties on individual exoplanet eccentricities, which can be substantial. Howard (2013) reaches a similar conclusion in their simpler analysis of published RV planets. This study compared eccentricity distributions of single giant planets to giant planets in multi-planet systems, and found that eccentricities of planets in multi-planet systems are lower on average. Because Limbach & Turner (2014) did not carry out their own fits to the radial velocity data, they did not consistently allow for the possibility of long-term radial velocity accelerations due to unresolved outer companions. Previous studies by Fischer et al (2001) and Rodigas & Hinz (2009) demonstrate that undetected outer planets can systematically bias eccentricity estimates for the inner planet to larger values. This is also a problem for systems where the signal to noise of the planet detection is low or the data are sparsely sampled (Shen & Turner 2008). Although we use a smaller sample of planets for our study than Limbach & Turner (2014), our systems all have high signal to noise detections and long radial velocity baselines, which we use to fit and remove long-term accelerations that might otherwise bias our eccentricity estimates. In contrast to these other studies, Dong et al (2014) found that warm Jupiters with companions have higher eccentricities than single warm Jupiters. However, we note that this study relied on a relatively small sample of planets (9 systems with e > 0.4 and 17 with e < 0.2), and the authors did not report uncertainties on their estimated occurrence rates for either sample. In this study the authors also point out that in order to migrate a warm Jupiter inwards via dynamical interactions with an outer body, the perturber in question must be close enough to overcome GR precession of the inner planet. We use this constraint, presented in their Equation 4, to test this formation scenario for the warm Jupiter population in our sample. Of the 42 warm Jupiter systems in our sample, 15 have resolved companions and 4 have statistically significant linear trends. We find that for the resolved companions, 13 out of the 15 companions satisfy the criterion for high-eccentricity migration (namely that warm Jupiters must reach a critical periastron distance of 0.1 AU within a Kozai-Lidov oscillation). We take the best fit masses and semi-major axes for the companions causing the trends from their probability distributions, and use these values to calculate the upper limit on the separation ratio between the warm Jupiter and the companion. We find that zero out of the four systems satisfy the criterion for high-e migration. Combining the resolved and trend systems, 13 out of 19 warm Jupiter systems with companions satisfy the criterion. However, we note that the criterion presented in Dong et al (2014) is necessary but insufficient for high-eccentricity migration. While our observations in principle do not rule out Kozai-Lidov migration for the warm Jupiter population, in order to decide if migration is relevant the character of the angular-momentum exchange cycle must be understood. In order to do this to lowest order, the mass and semi-major axis of the perturbing orbit, as well as the mutual inclination, must be known.

Figure 23. The purple contours represent the 1σ and 2σ contours of the two planet systems and single planets with positive trend detections. The blue contours represent the 1σ and 2σ contours of the single planet systems with no outer bodies.
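The resampled beta-distribution fit described above can be prototyped with SciPy; here the per-planet eccentricity posteriors are mocked as clipped Gaussians, whereas the analysis in the text draws directly from the MCMC chains.

```python
# Sketch of the eccentricity-distribution comparison: fit beta shape
# parameters (a, b) to a set of eccentricities, repeating the fit with
# re-drawn eccentricities to propagate measurement uncertainties.
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(1)
e_best = np.array([0.05, 0.21, 0.43, 0.12, 0.33])   # illustrative values
e_err  = np.array([0.02, 0.05, 0.08, 0.04, 0.06])

fits = []
for _ in range(1000):
    e = np.clip(rng.normal(e_best, e_err), 1e-3, 0.999)
    a_fit, b_fit, _, _ = beta_dist.fit(e, floc=0.0, fscale=1.0)
    fits.append((a_fit, b_fit))
fits = np.array(fits)
print(fits.mean(axis=0), fits.std(axis=0))   # (a, b) and their scatter
```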
CONCLUSIONS

We conducted a Doppler survey at Keck combined with NIRC2 K-band AO imaging to search for massive, long period companions to a sample of 123 known one- and two-planet systems detected using the radial velocity method. These companions manifest as long term radial velocity trends in systems where the RV baseline is not long enough to resolve a full orbit. We extended archival RV baselines by up to 12 years for the stars in our sample, and found that 25 systems had statistically significant radial velocity trends, six of which displayed significant curvature (HD 68988, HD 50499, HD 72659, HD 92788, HD 75898, and HD 158038). We found that trends detected in HD 1461 and HD 97658 correlated with the Ca II H&K line strengths, indicating that these trends were likely due to stellar activity and not due to a wide-separation companion. These systems were removed from further analysis. We also checked each system for stellar companions, and found that HD 164509, HD 126614, and HD 195109 had stellar companions that could account for the linear RV accelerations. These systems were also removed from further analysis. For the remaining 20 trend systems, we placed lower limits on companion masses and semi-major axes from the RV trends, and upper limits from the AO contrast curves of the corresponding systems. We quantified the sensitivity of our survey and found that on average we were able to detect a 1 M Jup planet out to 20 AU, and a Saturn mass planet out to 8 AU, with 50% completeness. We fit the companion probability distributions with a double power law in mass and semi-major axis, and integrated this power law to determine the giant planet companion occurrence rate. We found the total occurrence rate of companions over a mass range of 1-20 M Jup and semi-major axis range of 5-20 AU to be 52.4 +4.5 −4.7 %, and obtained a comparable occurrence rate when the A and M star systems were removed from the calculation. The distribution of these long-period companions is best matched by models with a declining frequency as a function of increasing semi-major axis, and appears to be inconsistent with an extrapolation from fits to the population of gas giant planets interior to 3 AU described in Cumming et al. (2008). This suggests that either the radial distribution of gas giants peaks between 3-10 AU, or that the distribution of outer gas giant companions differs from that of the overall gas giant population. When calculating the occurrence rate as a function of inner planet semi-major axis, we found that the hot gas giants were more likely to have a massive outer companion as compared to their cold gas giant counterparts. This result suggests that dynamical interactions between planets may be an important migration mechanism for gas giant planets. When we compared the eccentricity distributions of single planets in this sample with no outer bodies to planets in two-planet systems and single planets with a positive trend detection, we found that in multi-body systems the eccentricity distribution was shifted to significantly higher values than in single planet systems with no outer bodies.
The higher average eccentricities in these systems suggest that dynamical interactions between gas giant planets play a significant role in the evolution of these systems. If we wish to better understand the role that dynamical evolution plays in these systems, there are several possible approaches to consider. First, continued RV monitoring would help to better constrain companion orbits and masses. Second, deep imaging of the trend systems could probe down to brown dwarf masses and determine whether any of the observed trends could be caused by stellar instead of planetary mass companions. If any brown dwarf companions are detected via direct imaging, the existence of complementary radial velocity data would allow us to dynamically measure their masses, which would provide a valuable test of stellar evolution models in the low mass regime. Finally, long term RV monitoring of systems with lower mass planets and/or systems with three or more short period planets detected by transit surveys such as Kepler could allow us to determine if the companion occurrence rate of these systems differs from that of their gas giant counterparts. A significant limitation of this last suggestion is the need to detect low mass planetary systems orbiting bright, nearby stars: most Kepler stars are time consuming to observe with RVs, but K2, and later TESS, should provide a good sample of low mass planets orbiting nearby stars.
A Nanoemulsion as an Effective Treatment Against Human Pathogenic Fungi

The emergence of immunocompromising diseases such as HIV/AIDS or other immunosuppressive medical conditions has opened an opportunity for fungal infections to afflict patients globally. An increase in antifungal-drug-resistant fungi has posed a serious threat to patients. Combined with the limited variety of antifungal drugs available to treat patients, this has left us in a situation where we need to develop new therapeutic approaches that are less prone to the development of resistance by pathogenic fungi. In this study we present the utilization of the nanoemulsion NB-201 to control human pathogenic fungi. We found that NB-201 exhibited in vitro activity against C. albicans, including both planktonic growth and biofilms. Furthermore, treatments with NB-201 significantly reduced the fungal burden at the infection site and presented an enhanced healing process after subcutaneous infections by multidrug-resistant C. albicans in a murine host system. NB-201 also exhibited in vitro growth inhibition activity against other fungal pathogens, including Cryptococcus spp., Aspergillus fumigatus, and Mucorales. Due to the nature of the activity of this nanoemulsion, there is a minimized chance for drug resistance to develop, and it thus presents a novel treatment to control fungal wound or skin infections.

Introduction

During the past decade there has been an exponential growth in discoveries and medical advances for the treatment of human disease. This has led to better treatment for patients, and as a result we have been able to prolong human life. While these recent medical advances have certainly been beneficial overall, procedures such as solid organ transplants and cancer treatments have left many patients in an immunocompromised state. The emergence of immunocompromising diseases such as HIV/AIDS or other immunosuppressive medical conditions have opened an opportunity for fungal infections to plague patients globally (1-4). Candida albicans is a human commensal fungus found on the skin, mucosal membranes, and the normal gut flora (5, 6). C. albicans is known to be an opportunistic fungus and the most common fungal pathogen, typically infecting immunocompromised patients (3, 4). Treatment for candidiasis currently relies on three major classes of antifungal drugs: echinocandins, azoles, and polyenes (7, 8). Prior to the introduction of echinocandins, fluconazole was the most common drug used to treat C. albicans infections (7). Recently there has been an increase in cases of drug-resistant C. albicans infections, resulting in an increase in morbidity and mortality of patients (4, 9-11). One explanation for drug resistance is the development of mutations in the target genes of the antifungal drug (9). Second, the overexpression of efflux pumps and multi-drug resistance genes can also lead to antifungal drug resistance (9). In addition, pathogenic fungi can form biofilms that are resistant to antifungal drugs (12, 13). Thus, it is of utmost importance to develop new therapeutic approaches that are less prone to the development of resistance by pathogenic fungi. Membrane-disruptive nanoemulsions have been developed to control pathogenic bacteria (14, 15).
One example is the nanoemulsion NB-201, which is an emulsification of refined soybean oil, water, glycerol, EDTA, Tween 20, and the surfactant benzalkonium chloride (BZK), which is commonly used as an antimicrobial preservative in drugs and as a topical antiseptic.
The MIC of NB-201 was determined by using a 100% killing point of the C. albicans planktonic cells, collected at 1, 24, 48, and 72 hours post addition of NB-201 to the media (Table 2). We observed that within 1 hour, a concentration of 1:512 of the NE was able to kill all of the planktonic cells plated. As the incubation time was increased, we observed that a lower MIC was required. At 24 hours, a concentration of 1:1024 was able to kill all the strains plated. Within 48 hours, the concentration of NB-201 required to kill all ten strains remained at an MIC of 1:1024. At 72 hours, the MIC for 100% killing of the strains plated was lowered to a concentration of 1:2048 (Table 2).
The ability of C. albicans to form biofilms, which increases antifungal drug resistance, is a major virulence factor observed in the clinical setting (12, 13). To test the efficacy of NB-201 on C. albicans biofilms, two multidrug-resistant clinical isolates, TW1 and TW17 (21), were chosen. The C. albicans clinical isolates TW1 and TW17 were plated on 96-well plates and allowed to form a biofilm over the course of 24 hours (preformed biofilms, PFBs). We then treated these PFBs with the NE added at various ratios ranging from 1:1 to 1:2048, followed by a second-generation tetrazolium (XTT) metabolic assay (Sigma-Aldrich) (22) to measure the change in metabolism, indicative of disruption of the biofilms, after NB-201 treatments. Within 2 hours, a NE concentration of 1:32 was able to inhibit 100% of the metabolism in the TW1 clinical isolate. For the TW17 PFBs, within 2 hours post addition of NB-201 a concentration of 1:16 was required for 100% metabolism inhibition, while we find it important to note that a concentration of 1:32 inhibited 95% of the metabolism in the PFBs (Figure 1G). Within 4 hours of exposure to the NE, the PFBs presented 100% metabolism inhibition at a 1:32 concentration, with >50% inhibition observed at a concentration of 1:64 (Figure 1H). After 6 hours of exposure, a 1:64 concentration of NB-201 inhibited 80% of the metabolism in the PFBs (Figure 1I). A 24-hour exposure to the NE at a concentration of 1:128 presented 70% inhibition of the metabolism in the TW17 PFBs, while a concentration of 1:64 inhibited 100% of the metabolism (Figure 1J). At 48 hours of exposure, a concentration of 1:256 presented 85% inhibition of metabolism (Figure 1K), while a 72-hour exposure increased that to 100% inhibition at the same concentration (Figure 1L).
The formulation of NB-201 was further tested to examine its ability to kill other pathogenic fungi (Tables 1 and 2).
Aspergillus fumigatus
We performed a checkerboard assay with ten different strains of Aspergillus fumigatus, including drug-resistant strains, all of which are known clinical isolates (Table 1). Within one hour, we observed that a concentration of 1:16 showed complete killing of all clinical isolates. We would like to note that a concentration of 1:128 was able to kill seven out of the ten A. fumigatus clinical isolates within the same timepoint (Table 2). As incubation time with the NE progressed, we observed a reduction in the MIC required to kill all of the A. fumigatus clinical isolates.
Within 24 hours, a concentration of 1:128 showed 100% killing of these clinical isolates (Table 2). Finally, at 48 and 72 hours, all ten of the A. fumigatus clinical isolates were killed at a concentration of 1:512 (Table 2).
Mucorales
We tested ten clinical isolates of varying Mucorales species (Table 1).
Cryptococcus neoformans
Within 1 hour, an MIC of 1:1024 was able to kill all four serotypes of C. neoformans (Table 2). This was followed by 24, 48, and 72 hours showing an MIC of 1:2048 (Table 2).
The strains used in this study are listed in Table 1. C. albicans and C. neoformans strains were grown in liquid or solid yeast extract peptone dextrose [YPD; 10 g/L yeast extract, 20 g/L peptone, 20 g/L dextrose, 20 g/L agar (for plates only)] at 30°C. Mucorales were grown on potato dextrose agar (PDA; potato starch 4 g/L, dextrose 20 g/L, agar 15 g/L) or yeast extract peptone glucose agar (YPG; 3 g/L yeast extract, 10 g/L peptone, 20 g/L glucose, 2% agar, pH = 4.5) at 30°C in the light for four days. A. fumigatus strains were grown on PDA at 30°C for 4 days. To collect spores of Mucorales and A. fumigatus, sterile water (2 ml per plate) was added to the plate and spores were collected by gently scraping the fungal mycelial mats.
In vitro efficacy of NB-201 against Mucorales spp., C. neoformans, and A. fumigatus
The respective fungal strains were inoculated at a concentration of 1×10^6 in a 96-well plate containing NB-201 serially diluted in RPMI (100 µl per well). Dilution concentrations ranged from 1:1 to 1:2048. Samples of 10 µl were taken at 1, 24, 48, and 72 hours from each well and plated on PDA agar plates, which were incubated for 48 hours. After incubation, every plate was examined for growth at the site of inoculation (the sketch below illustrates how an MIC is read out from such a dilution series).
In vivo efficacy of NB-201 in a murine subcutaneous infection model
CD-4 mice weighing between 19 and 23 g were housed together. C. albicans strains SC5314, TW1, and TW17 were grown in YPD liquid media, washed in PBS, and suspended in PBS at a concentration of 1×10^6. Under anesthesia, the dorsal fur of the mice was shaved. The exposed skin was washed with 70% ethanol and mice were infected with 1×10^6 CFUs via subcutaneous injection on the shaved dorsal side. Subsequent subcutaneous injections of NB-201, PBS, or fluconazole followed at 6, 24, and 48 hours. The mice were euthanized at 72 hours, and the skin of the infected area was collected immediately for analysis.
The collected mouse tissue was placed in PBS on ice and then homogenized with a tissue homogenizer. The homogenized tissue was then diluted 1:10 and plated on YPD agar plates treated with antibiotics to prevent unwanted bacterial growth. The plates were incubated at 32°C for 48 hours. The CFUs were then counted and quantified.
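The checkerboard readout referenced above can be made concrete with a short sketch. This is an illustration of the readout logic, not the authors' analysis code; the kill-table values below are hypothetical, chosen to mimic the reported 1-hour and 72-hour C. albicans MICs.

```python
# Minimal sketch of reading an MIC out of a serial-dilution kill matrix.
# In the study, 10 µl samples taken at each timepoint were plated and
# scored for any surviving growth; the scoring below is hypothetical.

DILUTIONS = [2 ** k for k in range(12)]  # dilution factors: 1:1, 1:2, ..., 1:2048

def mic_for_timepoint(kill_table: dict[int, bool]) -> str | None:
    """Return the most dilute NB-201 concentration (as '1:N') at which no
    strain grew, i.e. the 100% killing point; None if no dilution killed all."""
    killed_all = [d for d in DILUTIONS if kill_table.get(d, False)]
    if not killed_all:
        return None
    return f"1:{max(killed_all)}"  # highest dilution factor = lowest concentration

# Hypothetical scoring: True means every plated strain was killed at that dilution.
hour_1 = {d: d <= 512 for d in DILUTIONS}    # mimics the 1-hour MIC of 1:512
hour_72 = {d: d <= 2048 for d in DILUTIONS}  # mimics the 72-hour MIC of 1:2048

print(mic_for_timepoint(hour_1))   # -> 1:512
print(mic_for_timepoint(hour_72))  # -> 1:2048
```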
The in vitro susceptibility to NB-201 was observed in every tested fungus, with an exceptional killing efficacy observed in all four serotypes of C. neoformans. During our in vitro tests we observed a similar trend in the efficacy of NB-201: interestingly, longer incubation times with NB-201 resulted in a lowered MIC regardless of the fungal organism being tested. The top etiological agent of candidiasis, C. albicans, still ranks amongst the leading fungal organisms causing infection in immunocompromised patients around the world and causes >50% of bloodstream infections in the US (3). The biofilms produced by this fungal organism make it intrinsically harder to treat, a growing problem and concern in the clinical setting (12, 13, 26, 27). We found that NB-201 has in vitro antifungal activity against both the planktonic form and biofilms of C. albicans. Furthermore, this in vitro activity was also observed against drug-resistant clinical isolates. In an animal subcutaneous infection model, NB-201 also exhibited antifungal activity against two azole-resistant strains, TW1 and TW17 (Figures 2, 3, and 4). These results demonstrate that NB-201 has anti-C. albicans activity both in vitro and in vivo regardless of drug resistance, and C. albicans is less likely to develop resistance to NB-201 (Figure 2).
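The biofilm experiments above report percent inhibition of metabolism relative to untreated biofilms. The exact normalization is not given in the text; the sketch below assumes the common (1 − OD_treated/OD_untreated) × 100 form of the XTT readout, with hypothetical optical-density values.

```python
# Sketch of the percent-metabolism-inhibition calculation assumed for the XTT
# biofilm assay: inhibition relative to an untreated preformed-biofilm control.
# The OD values are hypothetical illustrations, not data from the study.

def xtt_inhibition_percent(od_treated: float, od_untreated: float) -> float:
    """Percent inhibition of metabolic activity, assuming the common
    (1 - treated/untreated) * 100 normalization of XTT absorbance."""
    return (1.0 - od_treated / od_untreated) * 100.0

od_untreated_control = 1.20   # hypothetical XTT OD of an untreated PFB
od_after_1_to_32_ne = 0.06    # hypothetical OD after a 1:32 NB-201 dose
print(f"{xtt_inhibition_percent(od_after_1_to_32_ne, od_untreated_control):.0f}% inhibition")
# -> 95% inhibition
```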
Knowledge about diabetes and its association with adherence to self-care and glycemic control in patients with type 1 diabetes in Southern Brazil

ABSTRACT
Objective: To evaluate the association between knowledge about the disease, adherence to self-care, and glycemic control in people diagnosed with type 1 diabetes mellitus. Subjects and methods: A cross-sectional study of patients aged over 18 years diagnosed with type 1 diabetes mellitus, treated at an outpatient clinic of a Brazilian university hospital. Participants with other types of diabetes, cognitive impairment, pregnancy, and outpatient discharge were excluded. Data were collected from January to March 2021 (by telephone call), with questions about the participants' profile, the diabetes knowledge questionnaire (DKN-A), and the self-care inventory revised (SCI-R), translated into and adapted for Brazilian Portuguese. Data analysis involved chi-square associations, Mann-Whitney U tests, and Poisson regression. Results: Among 198 adult participants, the mean age was 42 ± 12 years, 53.5% were women, the mean glycated hemoglobin was 8.6 ± 1.6%, 140 (70.8%) had satisfactory knowledge about diabetes, 65 (32.8%) had adherence to self-care, and 46 (23.2%) had adequate glycemic control. We found an association between knowledge and adherence to self-care (p < 0.001). Knowledge was not associated with glycemic control (p = 0.705). Conclusion: Knowledge about diabetes was associated with greater adherence to self-care in people with type 1 diabetes mellitus, but it was not reflected in better glycemic control.

INTRODUCTION
Diabetes mellitus is a chronic metabolic condition that affects 16.8 million people in Brazil and worldwide. Currently, Brazil ranks third regarding the prevalence of type 1 diabetes mellitus (T1DM) cases worldwide and has an estimated 92,300 cases in people under 20 years of age (1). People living with diabetes are at greater risk of developing acute and chronic complications (1,2). These patients need to perform complex self-care activities to obtain good metabolic control for preventing these outcomes (2).
The constant challenge that diabetes represents to those who live with it is a topic of paramount importance. Many patients have difficulties in adhering to the lifestyle changes necessary to promote effective glycemic control and self-care (3,4). Disturbances in glycemic control, with hyperglycemic peaks, can sometimes be related to lack of knowledge about the disease and negligence with self-care, compromising the health of people with diabetes (3,4). Interventions by healthcare providers are often insufficient to ensure the effectiveness of diabetes treatment and to prevent its complications, as they may depend on the individual's knowledge about their disease, as well as the care taken to maintain an adequate lifestyle with diabetes (2,5). Knowledge works together with motivational factors, driving self-care actions; thus, with a better understanding of the disease, interventions can become more effective and uncomplicated in achieving the goal of glycemic control (4,6).
Studies conducted in different countries show that patients with type 1 diabetes have low to medium knowledge about the disease (7,8). Brazilian studies have been carried out in people with type 1 and type 2 diabetes, involving both knowledge of the disease and of its complications. These studies reported that participants have low knowledge about the disease (4,5,9).
Important factors to be considered for adequate disease treatment are: analyzing the level of knowledge about the disease, understanding the extent of diabetes acceptance, establishing new ways of providing guidelines, and confirming the effectiveness of healthcare providers' actions aimed at people with T1DM. The use of validated instruments makes it possible to standardize the language among healthcare providers (4,10), in addition to allowing the assessment of responses to therapies and data comparison over time. Therefore, it is intuitive to think that the proper management of T1DM depends not only on the appropriate use of medications, but also on the patients' knowledge about their treatment, healthy eating habits, exercise, and self-monitoring of blood glucose (11).
Understanding the knowledge about diabetes in patients with T1DM can help to improve the quality of care and serve as a starting point for knowing how to involve the patients in their own care. Thus, healthcare providers ensure that patients receive the necessary support to understand, to assess, and to apply disease management guidelines in the process of managing their health. This study aimed to evaluate the association between knowledge about the disease, adherence to self-care, and glycemic control in people with T1DM.

Study design and setting
This is a descriptive cross-sectional study, guided by the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline, which contains items that should be included in observational studies (12). The study was carried out in a public tertiary university hospital. Around 395,826 outpatient consultations are performed each year at this hospital and, in 2021, more than 67,000 teleconsultations were conducted (13). Endocrinologists, nurses, social workers, and nutritionists work at the institution's endocrinology outpatient clinic, the research field.

Population and sample
The population consisted of patients diagnosed with T1DM, with regular follow-ups at the institution's endocrinology outpatient clinic. All patients with T1DM treated at the institution's endocrinology outpatient clinic in the last two years were selected by a query request from keyworded electronic medical records. For inclusion in the study, participants had to be aged over 18 years and diagnosed with T1DM. Exclusion criteria were having a record of another type of diabetes (type 2 diabetes, maturity-onset diabetes of the young (MODY), latent autoimmune diabetes in adults (LADA), or an uncertain type of diabetes), cognitive impairment, pregnancy, death, and outpatient discharge. To calculate the power of the sample, the online version of Power and Sample Size Health was used (14). Considering the 198 participants (Flowchart 1), a 5% significance level, a 0.3 Cohen's W effect size, and 1 degree of freedom as obtained by Borba and cols. (4), the power to test whether there is an association between knowledge and self-care in our study was 98.8%.
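This figure can be reproduced with the standard noncentral chi-square power formulation (an independent check, not the Power and Sample Size Health tool itself): with Cohen's W = 0.3 and N = 198, the noncentrality parameter is λ = N·W² = 17.82.

```python
# Check of the chi-square power calculation: N = 198, Cohen's W = 0.3, df = 1,
# alpha = 0.05. Power = P(noncentral chi-square exceeds the critical value).
from scipy.stats import chi2, ncx2

n, w, df, alpha = 198, 0.3, 1, 0.05
noncentrality = n * w**2             # 198 * 0.09 = 17.82
critical = chi2.ppf(1 - alpha, df)   # ~3.841
power = ncx2.sf(critical, df, noncentrality)
print(f"power = {power:.3f}")        # ~0.988, matching the reported 98.8%
```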
Data collection
Data collection was carried out from January to March 2021, by telephone, due to the social isolation measures implemented to reduce COVID-19 transmission. The calls were made by three researchers during business hours, that is, from 8 a.m. to 6 p.m. Patients were asked about their interest in participating in the survey by telephone and their availability to answer questions during the call, or whether they wished to schedule it for another occasion.
The questionnaires were answered by the participants during the phone calls, which were recorded, and the participants were asked, before the application of the questionnaires, whether they agreed to participate in the research. To facilitate the recording of the participants' answers, an online form was created to collect data on the studied variables, including: medical record number, telephone, sex, age, schooling level, time of diagnosis, smoking status, value of the last glycated hemoglobin (HbA1c), comorbidities (cardiovascular diseases, dyslipidemia, arterial hypertension, diabetic kidney disease, neuropathy, foot injuries, previous amputations, and psychiatric conditions), the Diabetes Knowledge Questionnaire (DKN-A), and the Self-Care Inventory-Revised (SCI-R) validated for Brazilian Portuguese (15,16).
The DKN-A is a 15-item multiple-choice questionnaire on different aspects related to general knowledge of diabetes. The scale ranges from 0 to 15, and each item is scored one (1) for correct answers and zero (0) for incorrect answers. Items one to 12 require a single correct answer. For items 13-15, several answers are correct, and all must be checked to obtain a score of one. A score greater than eight indicates knowledge about diabetes (15). Notably, in the presentation of results, participants with scores from 0 to 8 were classified as having "low knowledge" and those above 9 as having "satisfactory knowledge". The SCI-R has 14 items on a 5-point Likert scale (1 = never; 5 = always) that reflects how the participants followed the self-care recommendations during the last two months; higher scores indicate greater adherence, and the cut-off value to classify a patient as having a greater or lesser adherence score is 48 (16). In this case, when presenting the results, participants with scores below 48 were referred to as having lesser adherence to self-care and those with scores above 49 as having greater adherence.
To establish the adequacy or inadequacy of glycemic control, individualized goals were used. Participants with a history of ischemic heart disease, frequent episodes of hypoglycemia, severe visual impairment, those who underwent hemodialysis or peritoneal dialysis, and those who performed two or fewer capillary blood glucose tests per day were considered for a flexible target (HbA1c ≤ 8.0%). For all other participants, strict glycemic control was considered adequate (HbA1c target ≤ 7.0%). Patients who were within the glycemic target were considered to have good control and the others to have inadequate control.
The primary outcome of the study was the presence of an association between diabetes knowledge and self-care. The secondary outcome included the presence of an association between diabetes knowledge and HbA1c levels. The study included a pilot plan to identify possible errors in the questionnaires and to reduce biases. The pilot plan was carried out with four patients with type 2 diabetes, not included in the study sample.
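The classification rules described above (the DKN-A cutoff, the SCI-R cutoff, and the individualized HbA1c target) translate directly into code. The sketch below is an illustration of those rules as stated, not the study's analysis script; the example patient values are hypothetical.

```python
# Classification rules from the Methods: DKN-A > 8 = satisfactory knowledge,
# SCI-R at the cut-off of 48 = greater adherence, and an individualized HbA1c
# target (8.0% for patients meeting any "flexible target" criterion, else 7.0%).

def classify_knowledge(dkn_a_score: int) -> str:
    return "satisfactory knowledge" if dkn_a_score > 8 else "low knowledge"

def classify_self_care(sci_r_score: int) -> str:
    return "greater adherence" if sci_r_score >= 48 else "lesser adherence"

def glycemic_control(hba1c: float, flexible_target: bool) -> str:
    target = 8.0 if flexible_target else 7.0
    return "adequate" if hba1c <= target else "inadequate"

# Hypothetical patient: DKN-A 11, SCI-R 52, HbA1c 7.6%, on hemodialysis
print(classify_knowledge(11))                        # satisfactory knowledge
print(classify_self_care(52))                        # greater adherence
print(glycemic_control(7.6, flexible_target=True))   # adequate (target 8.0%)
```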
Data analysis
Data analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 22.0. Categorical variables were described by absolute number and percentage, and continuous variables were described as mean and standard deviation in the case of normal distribution; otherwise, data were described as median and interquartile range. Normality was assessed by the Shapiro-Wilk test. The analysis of the association between the results of the applied questionnaires (DKN-A and SCI-R) and glycemic control was performed using the chi-square test. To analyze the association between the DKN-A questionnaire and schooling level, the Mann-Whitney U test was performed for independent samples. Poisson regression with adjustment for robust variances was used to identify significant predictors of knowledge about diabetes in relation to its association with self-care and schooling level (a minimal sketch of this type of model is given below, after the results). The statistical significance level was 5%.

Ethical aspects
The study was approved by the Research Ethics Committee of the institution via Plataforma Brasil under CAAE number 20380919800005327, considering the prerogatives announced in Resolution 466/2012 of the Brazilian National Health Council. The researchers followed the institution's telephone call script for inviting participants to the research, which contained three options for the participant to choose from for receiving the informed consent form (email, WhatsApp, or message), with the document being sent according to their preference. When handling the information, the researchers preserved the participants' anonymity during the treatment and publication of the data.

In total, 198 patients answered the questionnaires; their mean age was 42 ± 12 years, 106 (53.5%) were women, and 140 (70.8%) had satisfactory knowledge about diabetes. Greater self-care was observed in 65 patients (32.8%). Table 1 summarizes the other demographic and clinical characteristics and the scores obtained by the participants in the questionnaires.
The analysis of the association between knowledge about diabetes and self-care by the DKN-A and SCI-R questionnaires, respectively, showed that among the participants with greater knowledge about the disease (n = 140), 58 (41.4%) had greater adherence to self-care (p < 0.001). Table 2 shows the evaluated variables (glycemic control, sex, time since diagnosis, and schooling level) according to the knowledge about diabetes (DKN-A) of the 198 participants.
By the Poisson regression model with adjustment for robust variances of the knowledge questionnaire against the self-care inventory and schooling, the prevalence of satisfactory knowledge in participants with better self-care was 44.7% higher than the prevalence in those who showed lower adherence to self-care (odds ratio: 1.447, confidence interval: 1.235-1.696, p < 0.001). The prevalence of satisfactory knowledge about the disease among those with higher education was 42.2% higher than the prevalence among those with elementary schooling (odds ratio: 1.422, confidence interval: 1.143-1.770, p = 0.006).
Table 3 shows the evaluated variables (glycemic control, sex, time since diagnosis, and schooling level) according to the patients' self-care (SCI-R). Among the 65 participants with a higher self-care score on the SCI-R, 18 (27.7%) had adequate glycemic control (p = 0.390).
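As referenced in the data analysis section, prevalence-type analyses of a binary outcome are often fitted as a Poisson regression with robust (sandwich) standard errors. The sketch below shows one way to set this up in Python with statsmodels; the toy data are randomly generated and do not reproduce the study's estimates.

```python
# Poisson regression with robust variance for a binary outcome
# (satisfactory knowledge), as used in the study; the data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 198
df = pd.DataFrame({
    "greater_self_care": rng.integers(0, 2, n),   # SCI-R at/above cut-off
    "higher_education": rng.integers(0, 2, n),
})
# Hypothetical outcome loosely tied to the predictors
logit = -0.4 + 0.8 * df["greater_self_care"] + 0.6 * df["higher_education"]
df["satisfactory_knowledge"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["greater_self_care", "higher_education"]].astype(float))
model = sm.GLM(df["satisfactory_knowledge"].astype(float), X,
               family=sm.families.Poisson())
fit = model.fit(cov_type="HC0")   # robust (sandwich) variance estimator
print(np.exp(fit.params))         # exponentiated coefficients = prevalence ratios
print(fit.summary())
```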
In our study, most participants gave correct answers in the DKN-A questionnaire, different from what was verified in an Indian study, in which the score of knowledge about the disease was medium (7), and in an Ethiopian study, in which the level of knowledge was low (8). Furthermore, in our study, having knowledge about the disease greatly influenced adherence to self-care, a behavior that is desirable for all patients with chronic diseases (17). Importantly, knowledge about diabetes encompasses the basic physiology of the disease, management of hypoglycemia, food groups and their substitutions, management of diabetes during intercurrent illness, and the general principles of care for the disease (4).
Self-care is measured here by a brief psychometric instrument capturing perceptions of adherence to recommended self-care behaviors in patients with diabetes (16). Following the self-care guidelines provided by the health team is essential for adhering to the treatment; however, patients need to be active and show caring attitudes towards their disease.
Our study showed that satisfactory knowledge about diabetes was more frequent among respondents with higher education, and greater self-care was also more frequent among patients with higher education. We also observed that satisfactory knowledge was more prevalent among women, and this patient profile was also observed in people with type 2 diabetes (2,11,18). Better knowledge scores were also associated with a higher schooling level in people with T1DM in India, Ethiopia, and Canada (7,8,19). People with a low schooling level tend not to value preventive actions, underestimating the severity of the disease and postponing the search for assistance, which impairs commitment to their treatment (9). Although knowledge of the disease alone does not guarantee the necessary changes in behavior, assessing the patients' knowledge of the disease is essential for designing educational health interventions (11).
Regarding glycemic control, the results are worrying, as most patients with inadequate control showed satisfactory knowledge about the disease. Knowledge and awareness of diabetes regarding the biology of the pathology, its ongoing health implications, and how to manage the condition are vital to understanding the need to maintain good glycemic control (20). However, based on the results, many people diagnosed with diabetes, even with good scores on the knowledge questionnaire, may not have a clear understanding of disease control goals or of how to effectively manage their health. We understand that either the questionnaire is not sensitive enough to capture the entire knowledge that the patient has about all aspects of diabetes, or, beyond having knowledge, other domains of these patients' attitudes need to be activated so that there is an effect on attitude changes that lead to better glycemic control.
As for the time since diagnosis, the results show that knowledge about diabetes was not associated with the duration of the disease, unlike what was reported in a study carried out in Ethiopia with the same population (8). This finding invites reflection on the effective communication of diabetes education to patients with T1DM. The strengthening of information, education, and effective communication on diabetes is of paramount importance (8). Adherence to self-care also showed no association with the duration of the disease. It would be expected that the longer the duration of the disease, the more knowledge about diabetes and its treatment the patients should have (4). However, we did not observe this trend in our study, nor was it observed in another study carried out in primary care in Northeastern Brazil (4), in which the negative attitude towards self-care also showed no difference between different durations of the disease. Notably, age and duration of diabetes are known but not modifiable risk factors for microvascular and cardiovascular outcomes and mortality in T1DM patients (21). In our study, adherence to self-care practice in diabetes was lower in patients who had inadequate glycemic control compared to participants with greater adherence to self-care.
A Brazilian multicenter study showed that inadequate glycemic control, common in Brazilians with T1DM, is associated with a lower schooling level, insufficient self-perception of adherence, and inadequate monitoring of glycated hemoglobin levels (22). A study carried out with the same population evaluated the practice of foot care, demonstrating a disagreement with the knowledge presented by the participants: among the interviewees, 32.7% had good knowledge about foot care, but only 12.2% practiced it (23). However, in patients with type 2 diabetes from another population, better self-care practices were associated with greater knowledge about diabetes and lower levels of HbA1c (11). Better self-care was also associated with a 0.4-fold increase in the quality of life of participants with type 2 diabetes in another study (24). However, it is known that in Brazil, despite the free supply of insulin and supplies for self-care (needles, glucometers, reagent strips), the patient cannot always access these supplies, because of lack of knowledge or because of differences in the provision of this support between different municipalities and states (25,26). These circumstances can affect adherence to self-care for the disease. Furthermore, in regions with a lack of medical resources, individuals with T1DM tend to die early from acute metabolic complications or infections (27).
Another factor to consider, in relation to the self-care result presented by the participants, is the date of data collection, which took place during one of the lockdowns imposed by the coronavirus pandemic. The COVID-19 outbreak caused many municipalities worldwide to have their routines completely changed by social isolation measures, suddenly changing the daily routine of people with diabetes, increasing sedentary behavior, and changing dietary patterns (28-30). These circumstances imposed by the pandemic resulted in changes in self-care and glycemic control, especially in patients on complex therapeutic regimens, such as people with diabetes (30,31).
This study has some limitations, such as its cross-sectional design and the population belonging to only one tertiary care center, although this center is a reference for this type of care and receives patients from all over the state. Therefore, the characteristics of the participants can be considered representative of the population served by public hospitals, and the results should be explored considering cultural and economic aspects, which affect disease management. Another limitation includes the possibility of self-report bias, as patients may not be willing to reveal deficiencies in self-care knowledge and practices and may not be accurate all the time. In any case, our results draw the attention of nurses, physicians, and other healthcare providers to the challenges of improving self-care and, consequently, health among T1DM patients. In addition, our study makes a strong case for diabetes educators to actively involve their patients in a more participatory position, in order to put their knowledge about the disease into practice, so that the necessary diabetes care is carried out, thus reducing the complications that health carelessness can bring.
In conclusion, knowledge about the disease was associated with greater adherence to self-care in people with T1DM, but this was not reflected in better glycemic control. Improvement in self-care, however, can be reflected in other health domains of the person with diabetes; therefore, this result should be valued.
Patients should be encouraged to incorporate their knowledge about the disease into their routine and to improve self-care at each follow-up appointment with healthcare providers. In addition, it is essential to seek alternatives to strengthen the information provided to these patients, which may be reflected in better glycemic control. Thus, this study contributes to the field of nursing by providing a relevant analysis of the disease knowledge of adult patients with T1DM and draws attention to the urgent need to search for new tools that can improve education in relation to self-care and glycemic control.
Antiviral Activity of an Indole-Type Compound Derived from Natural Products, Identified by Virtual Screening by Interaction on Dengue Virus NS5 Protein

Dengue is an acute febrile illness caused by the Dengue virus (DENV), with a high number of cases worldwide. There is no available treatment that directly affects the virus or the viral cycle. The objective of this study was to identify a compound derived from natural products that interacts with the NS5 protein of the dengue virus through virtual screening, and to evaluate its in vitro antiviral effect on DENV-2. Molecular docking was performed on NS5 using the AutoDock Vina software, and compounds with physicochemical and pharmacological properties of interest were selected. The preliminary antiviral effect was evaluated by the expression of the NS1 protein. The effect on viral genome replication and/or translation was determined by NS5 production, using a DENV-2 Huh-7 replicon, through ELISA and by viral RNA quantification using RT-qPCR. The in silico strategy proved effective in finding a compound (M78) with an indole-like structure and with an effect on the replication cycle of DENV-2. Treatment at 50 µM reduced the expression of the NS5 protein by 70% and decreased viral RNA by 1.7 times. M78 interferes with the replication and/or translation of the viral genome.

Introduction
Dengue virus (DENV) is a flavivirus transmitted by the bite of female mosquitoes of the genus Aedes spp., endemic in tropical and subtropical countries worldwide [1]. It is the causative agent of the infection known as Dengue or breakbone fever [2]. Approximately 400 million cases [3] and 22,000 deaths occur worldwide each year due to Dengue [4]. According to the World Health Organization (WHO), the global incidence of Dengue has increased dramatically in the last decade, and approximately half of the world's population is at risk [5].
DENV has four genetically distinct serotypes (DENV-1 to DENV-4). It is an enveloped virus with a single-stranded, positive-sense RNA genome that encodes three structural proteins (capsid [C], pre-membrane [prM], and envelope [E]) and seven non-structural proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) [4,6], each of which performs different functions during the virus's infectious cycle. The non-structural proteins are responsible for viral replication and host immune evasion. The NS5 protein plays an essential role in viral RNA replication, as the deletion of this protein from the viral genome inhibits replication [7], making it a promising pharmacological target [8,9]. This protein has two domains, the RNA-dependent RNA polymerase (RdRp) domain at the C-terminal end and the methyltransferase (MTase) domain at the N-terminal end. The latter is responsible for protecting the RNA at the 5′ end of new viral genomes [1]. Despite the significant economic and social impact of this disease and the important advances made against Dengue, there is currently no effective antiviral therapy available [10-12]. Considering these limitations, it has become increasingly important to continue the search for molecules, compounds, or drugs that can inhibit enzymatic targets or processes essential to the replication cycle of the virus. The development of and search for therapeutic molecules such as direct-acting antivirals (DAAs) has been shown to be a truly effective approach [2]. As a result, the use of computational techniques that have been employed in other research is considered a strategy of interest.
Although DAAs have not been approved for use in the treatment of DENV [13], they have shown great promise in in vitro assays. Furthermore, bioactive agents from natural resources have laid a great foundation for the design of new therapeutic drugs [14], allowing a return to the use of traditional medicine in the search for treatments for emerging diseases. Similarly, innovation in the X-ray structures of several DENV proteins has allowed the development of in silico computational screening strategies [11]. Therefore, the execution of screenings from databases and docking analyses is promising when selecting a target of action, such as proteins important in the virus's infectious cycle [15-19], with the viral polymerase NS5 protein standing out among these [7]. In this research, compounds derived from natural products with an interaction on the NS5 protein were identified through virtual screening. The in vitro antiviral effect of an indole-type compound, identified here as M78, was evaluated in a DENV-2 infected cellular model. This evaluation showed that its action is related to intervention during stages of replication and/or translation of the genome.

Virtual Screening of Natural Compound Derivatives on DENV NS5 Protein
The structures of the NS5 proteins from the four DENV serotypes were obtained as described by García et al. [20]. The selected cavities for interaction corresponded to the binding site of its natural substrate, S-adenosyl homocysteine (SAH), located in the methyltransferase (MTase) domain, and the entrance to the RNA tunnel, present in the RNA-dependent RNA polymerase (RdRp) domain. The ligands SAH and 68E, crystallized in the respective selected cavities [21], were subjected to re-docking onto their binding sites to find the coordinates and dimensions of the interaction boxes and to obtain a theoretical value of the binding energy as a starting reference point for the selection of the natural compounds with the strongest binding in each region. The Root Mean Square Deviation (RMSD) was calculated as the most commonly used quantitative measure of similarity between two superimposed sets of atomic coordinates [22]. In total, eight virtual screenings were performed (two per serotype). To perform these, the library of 190,090 natural compound derivatives available on the DrugDiscovery@TACC web portal (https://drugdiscovery.tacc.utexas.edu/#/) (accessed on 20 March 2019) [23] from the Texas Advanced Computing Center (TACC) was used. It is worth mentioning that all virtual screening calculations were executed using the AutoDock Vina 1.1 software [24], which is implemented in the DrugDiscovery@TACC web portal.

Selection of Compounds by Interaction on NS5 of DENV
The compounds were selected based on their ability to bind to NS5 in the four serotypes of DENV. Predictions of aqueous solubility were performed using the SwissADME web server (http://www.swissadme.ch/) [25]. This server delivers three predictions for this physicochemical descriptor with six possible outcomes: insoluble, poorly soluble, moderately soluble, soluble, highly soluble, and very highly soluble. We used a score of 0 for the descriptors insoluble and poorly soluble, a score of 1 for moderately soluble and soluble, and a score of 2 for highly soluble and very highly soluble. Thus, only compounds that obtained a value of two or higher (by summing the scores of the three predictions provided by the SwissADME server) were selected for the next filter.
A prediction was made of compliance with or violation of the four Lipinski rules (molecular weight ≤ 500, LogP ≤ 5, hydrogen bond acceptors ≤ 10, hydrogen bond donors ≤ 5) [26]; only compounds with a maximum of one violation were accepted for the next filter. This prediction was made using the SwissADME server. Predictions of possible toxicological risks, such as hepatotoxicity, carcinogenicity, immunotoxicity, mutagenicity, and cytotoxicity, were carried out using the ProTox-II web server (http://tox.charite.de/protox_II/) [27], and only those compounds that did not present toxicological risks after the prediction were selected. 3D visualizations of protein-ligand complexes were performed with Chimera v1.13.
The compounds identified and selected through the in silico assays were acquired through a synthesis service at MolPort (https://www.molport.com/shop/index). Subsequently, the cytotoxic effect on Huh-7 cells (ATCC HB 8065) was evaluated using a cell viability assay with resazurin as a metabolic indicator. For this purpose, 15,000 cells were cultured per well in DMEM medium (Dulbecco's Modified Eagle Medium, Life Technologies 12100-046, New York, NY, USA), supplemented with 10,000 units/mL of penicillin/streptomycin, 20 mM L-glutamine, and 2% (v/v) heat-inactivated fetal bovine serum (Eurobio, CVFSVF00-01, Les Ulis, France), in 96-well Multiwell plates (Costar 3590, New York, NY, USA). The cell monolayer was allowed to stabilize for approximately 24 h at 37 °C and 5% CO2. Subsequently, the cells were treated with the compounds at concentrations of 12.5, 25, 50, 100, 125, 250, and 500 µM. Cell viability was determined after 24 and 48 h of compound exposure, using resazurin at a final concentration of 44 µM and incubating for 2 h under the previously described conditions. Finally, absorbance was measured at 603 and 570 nm. The percentage of cell viability was calculated considering the absorbances for each treatment and the untreated cell control (CC), using the following formula:

Cell viability (%) = [Sample absorbance / Control absorbance] × 100    (1)

The mean cytotoxic concentration (CC50) was also established, defined as the concentration at which cell viability decreases by 50%.
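Formula (1) and the CC50 definition translate directly into a short computation. The sketch below applies formula (1) to hypothetical absorbance readings and estimates the CC50 by log-linear interpolation between the two doses bracketing 50% viability; this is an illustration, not the dose-response curve fitting used in the study.

```python
# Apply formula (1) and estimate CC50 by interpolation on a dose-response curve.
# Absorbance values are hypothetical illustrations.
import numpy as np

def viability_percent(sample_abs: float, control_abs: float) -> float:
    """Formula (1): Cell viability (%) = sample/control * 100."""
    return sample_abs / control_abs * 100.0

control = 1.50
doses = np.array([12.5, 25.0, 50.0, 100.0])       # µM
sample_abs = np.array([1.48, 1.41, 1.12, 0.45])   # hypothetical readings
viability = np.array([viability_percent(a, control) for a in sample_abs])

# CC50: dose at which viability crosses 50%, interpolated on log10(dose)
idx = np.where(viability < 50.0)[0][0]
x0, x1 = np.log10(doses[idx - 1]), np.log10(doses[idx])
y0, y1 = viability[idx - 1], viability[idx]
cc50 = 10 ** (x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0))
print(f"CC50 ≈ {cc50:.1f} µM")   # ≈ 73.3 µM for these hypothetical readings
```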
Antiviral Screening of Compounds on NS1 Protein Production in DENV-2 Infected Cells
For antiviral screening, 15,000 Huh-7 cells (ATCC HB 8065) were cultured per well in 96-well Multiwell plates (Costar 3590, New York, NY, USA) under the same conditions as described in item 2.2.1, and these were then infected with DENV-2 New Guinea for 2 h at a multiplicity of infection (MOI) of 1. The DENV-2 strain used was isolated and cultivated in C6/36 mosquito cells (ATCC® CRL-1660) and maintained in L-15 medium supplemented with 10% tryptose and 2% fetal bovine serum, incubated for seven days at 28 °C, and stored at −80 °C. This strain was provided by the Biomedical Research Center of the University of Quindío, Colombia. After infection, the supernatant was removed, and the compounds were added at non-cytotoxic concentrations (between 40 and 100 µM, previously determined) and incubated for 24 h. Mycophenolic acid at 20 µM [29] was used as an inhibition control, and 0.45% DMSO (vehicle) was used as a negative control. Subsequently, the cells were treated with 4% paraformaldehyde for 30 min and permeabilized for 5 min with 1X PBS, 0.5% Triton. The cells were then blocked with 5% fetal bovine serum in 0.05% PBS-Tween for 24 h at 4 °C. The primary monoclonal anti-Dengue Virus NS1 antibody (SAB2702307, Sigma-Aldrich, Saint Louis, MO, USA) (1:1000) was added and incubated for 1 h 30 min at 37 °C. The secondary antibody, Anti-Mouse IgG (whole molecule)-Alkaline Phosphatase antibody produced in goats (A3562, Sigma-Aldrich), was then added and incubated for 1 h at 37 °C. Finally, the alkaline phosphatase substrate pNPP (S0942, Sigma-Aldrich) was added and incubated for 30 min. Then, 0.1 M NaOH solution was added and the absorbance was read at 405 nm.
To determine the effect of the compound on protein expression, the production of NS5 was evaluated using the Huh-7 cell line, which carries a DENV-2 subgenomic replicon. The replicon includes a luciferase reporter gene, a geneticin resistance gene, and the coding region of the NS proteins (NS1 to NS5), allowing stable expression of these proteins. These systems contain the genetic elements necessary for autonomous genome replication in cells and have been useful for expressing viral genes in several flaviviruses, including DENV, WNV, YFV, and TBEV [30]. The cells were cultured in DMEM (Dulbecco's Modified Eagle Medium, Life Technologies 12100-046, New York, NY, USA), supplemented with 10,000 units/mL of penicillin/streptomycin, 20 mM L-glutamine, and 10% (v/v) heat-inactivated fetal bovine serum. Geneticin G418 (10131-035, Gibco, Grand Island, New York, NY, USA) was added at a final concentration of 0.2 mg/mL as a selection antibiotic for the cells transfected with the replicon. This cell line was provided by the Biomedical Research Center of the University of Quindío, Colombia.
To begin, 15,000 Huh-7 DENV-2 replicon cells were cultured per well in 96-well Multiwell plates (Costar 3590, New York, NY, USA). After reaching 70-80% confluency, they were treated with the compounds that had previously shown an effect on the viral cycle and were incubated for 24 h at 37 °C and 5% CO2. The NITD008 compound, an NS5 protein inhibitor [31], was used as a positive control, along with the other respective controls. The cells were treated with 4% paraformaldehyde for 30 min and permeabilized for 5 min with a 0.5% Triton X-100 solution in 1X PBS. The cells were then blocked with 5% fetal bovine serum in 0.05% PBS-Tween for 24 h at 4 °C. The cells were next treated with a primary anti-NS5 antibody produced in rabbits (SAB2700025, Sigma-Aldrich) (1:10,000) and incubated for 1 h 30 min at 37 °C. Then, a goat anti-rabbit (whole molecule) alkaline phosphatase-conjugated antibody (A3687, Sigma-Aldrich) diluted 1:30,000 was added and incubated for 1 h at 37 °C. Subsequently, alkaline phosphatase substrate (S0942, Sigma-Aldrich®) was added for 1 h at 37 °C, followed by the addition of 0.1 M NaOH solution, and the absorbance was measured at 405 nm using an Epoch spectrophotometer. The absorbance values were transformed into NS5 production percentages and compared to the viral control, using formula (2). The IC50 was estimated through the dose-response curve, using GraphPad Prism 6 software, and the selectivity index (SI) was also predicted by calculating the ratio CC50/IC50.

Determination of the Inhibitory Effect on Viral RNA Production of DENV-2
After treatments with the selected compounds on the Huh-7 cells previously infected with DENV-2, as described above, total RNA extraction was performed using the TRIzol LS Reagent® (Lot 50867000) following the manufacturer's recommendations. The concentration and purity of the RNA were determined by the absorbance ratio at 260 nm and 280 nm, read on an Epoch spectrophotometer.
Subsequently, the amplification of the NS5 protein gene was performed using the primers DENV 7764 Fwd 5′-CGTCGAGAGAAATATGGTCACACC-3′ and DENV 7844 Rev 5′-CCACAATAGTATGACCAGCCT-3′. The endogenous GAPDH gene was amplified using the primers hGAPDH Fwd 5′-TGTTGCCATCAATGACCCCTT-3′ and hGAPDH Rev 5′-CTCCACGACGTACTCAGCG-3′. RT-qPCR was performed using the Power SYBR® Green RNA-to-CT™ 1-Step kit (Ref. 4389986, Applied Biosystems™), following the manufacturer's instructions, for a total reaction volume of 20 µL. As a negative amplification control, a reaction mixture without genetic material was included. The RT was performed at 48 °C for 30 min, enzyme activation at 95 °C for 10 min, denaturation for 40 cycles of 95 °C for 15 s, and annealing and extension at 60 °C for 1 min. The relative expression of this gene was calculated using the comparative CT method (2^−ΔΔCT) [32], which makes several assumptions, including that the PCR efficiency is close to 1 and that the PCR efficiency of the target gene is similar to that of the internal control gene, using the following equation:

Relative expression = 2^(−ΔΔCT), where ΔΔCT = (CT,NS5 − CT,GAPDH)treated − (CT,NS5 − CT,GAPDH)viral control
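The comparative CT calculation is mechanical once the CT values are in hand. The following sketch uses hypothetical CT values, chosen so that the output matches the roughly 1.7-fold reduction in viral RNA reported for M78; they are not measured data.

```python
# Comparative CT (2^-ΔΔCT) method for relative expression, as in the equation above.
# CT values are hypothetical illustrations.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """2^-ΔΔCT with ΔCT = CT(target) - CT(reference) in each condition."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Treated with M78: NS5 CT 26.1, GAPDH CT 18.0; viral control: NS5 25.3, GAPDH 18.0
fold = relative_expression(26.1, 18.0, 25.3, 18.0)
print(f"relative NS5 expression ≈ {fold:.2f}")  # ≈ 0.57, i.e. a ~1.7-fold reduction
```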
Statistical Analysis
In all cases, treatments were compared with their respective controls. A Shapiro-Wilk test was performed to evaluate data normality. Parametric data were evaluated using one-way ANOVA with a multiple-comparisons test through Dunnett's method. Non-parametric data were evaluated using the Kruskal-Wallis test, with comparisons through Dunn's test. A t-test was performed to compare two groups of parametric data, the Tukey test to compare means, and the Mann-Whitney U test for two groups of non-parametric data. A p-value < 0.05 was considered statistically significant. The analyses were performed using GraphPad Prism 6 software.

The MTase and RdRp Domains Were Validated as Binding Sites
The crystallized structure of the NS5 protein from serotype 3, PDB 5JJR, containing two ligands in the regions of interest of the MTase and RdRp domains (SAH and 68E, respectively), was used to re-dock these two compounds onto the NS5 protein of all DENV serotypes. The defined dimensions for all boxes were 24 Å on all axes (x, y, and z). In Figure 1, the re-docking of the SAH and 68E ligands onto the NS5 protein of serotype 3 is presented; the crystallized and predicted poses are shown in green and cyan sticks, respectively. As shown, molecular docking was able to approximately reproduce the crystallographic poses of the control ligands. In the case of PDB 5JJR, a value of 1.2 Å was obtained for the RMSD between the crystallized SAH ligand and the predicted pose, and for the 68E ligand present in the RdRp domain, an RMSD of 1.7 Å was obtained. The calculated interaction energy for SAH was −7.6 kcal/mol, and for 68E it was −8.6 kcal/mol.

Selected Compounds by Interaction on MTase and RdRp of DENV NS5 Protein
Figure 2 shows the steps taken to select compounds from the ZINC natural compounds database that interact with the DENV NS5 protein. After selecting the binding sites and setting up the interaction boxes used in the compound search, virtual screening was performed. Initially, 190,090 natural compounds were docked to the two binding sites of interest (MTase and RdRp) in the four serotypes. This means that a total of eight virtual screenings and approximately 1,520,720 molecular dockings were performed on the NS5 protein models of DENV. The DrugDiscovery@TACC web portal [23] provided the top 1000 compounds for each screening, and the list was thereby reduced to 8000 natural compounds (4000 for each domain). In order to identify compounds that may exhibit anti-Dengue effects, only compounds that interacted with the NS5 protein domains in all Dengue serotypes were selected. After verifying the compounds at each binding site, a total of 479 compounds were obtained for the MTase domain and 127 compounds for the RdRp domain across all four serotypes. Finally, physicochemical and toxicological property screening was performed. The solubility evaluation in aqueous systems allowed the selection of 216 compounds for the MTase domain and 69 compounds for the RdRp domain. Then, Lipinski rule compliance was checked, resulting in 202 compounds selected for the MTase domain and all previously identified compounds for the RdRp domain. The next analysis was based on the predictions of possible toxicological risks, which resulted in 67 compounds for the MTase domain and 32 compounds for the RdRp domain. The final selection of compounds was based on the binding energies.
Five compounds were chosen for the MTase domain, five for the RdRp domain, and five with interaction in both domains (Table 1). The compounds were named according to the interaction site and the last two digits of the ZINC code. Based on these results, the compounds were acquired from MolPort (https://www.molport.com/shop/index) (accessed on 30 August 2019) for synthesis, in order to begin the in vitro evaluation of the molecules' activity against DENV-2.
Regarding the compounds identified with interaction on the RdRp domain, it was found that compounds R07, R32, and R55 did not reduce cell viability at any of the evaluated concentrations. On the other hand, treatment with compound R53 reduced cell viability at all evaluated concentrations, with evident cell death. The determined CC50 for the compound was 30 µM. On the other hand, for compounds with binding in both domains, CC50 of 70.57 µM was found for MR25 and CC50 of 46.22 µM for MR94, and no effect was evidenced for MR41 at any of the evaluated concentrations, so CC50 was considered >100 µM (Table 2). determined. On the other hand, after treatment with compound M78, cellular viability close to 100% was evidenced at concentrations of 12.5, 25, and 50 µM, while at 100 µM, damage to the monolayer and therefore loss of viability was observed, with statistically significant difference when compared to the control cells (**** p < 0.0001), finding a CC50 of 60.77 µM. Regarding the compounds identified with interaction on the RdRp domain, it was found that compounds R07, R32, and R55 did not reduce cell viability at any of the evaluated concentrations. On the other hand, treatment with compound R53 reduced cell viability at all evaluated concentrations, with evident cell death. The determined CC50 for the compound was 30 µM. On the other hand, for compounds with binding in both domains, CC50 of 70.57 µM was found for MR25 and CC50 of 46.22 µM for MR94, and no effect was evidenced for MR41 at any of the evaluated concentrations, so CC50 was considered >100 µM (Table 2). The Compounds Reduce the Production of NS1 Protein in Cells Infected with DENV-2 The antiviral activity of the compounds was determined for nine (9) out of the ten (10) compounds previously evaluated, excluding compound R53, which showed higher toxic effects with a CC50 of 30 µM. This first selection analysis was performed through an ELISA assay with detection of the DENV NS1 protein. According to these results, there was a reduction in NS1 production after treatment with eight out of nine compounds (M66, M76, M78, R07, R32, R55, MR25, MR41), with statistically significant differences The Compounds Reduce the Production of NS1 Protein in Cells Infected with DENV-2 The antiviral activity of the compounds was determined for nine (9) out of the ten (10) compounds previously evaluated, excluding compound R53, which showed higher toxic effects with a CC50 of 30 µM. This first selection analysis was performed through an ELISA assay with detection of the DENV NS1 protein. According to these results, there was a reduction in NS1 production after treatment with eight out of nine compounds The Compounds Reduce the Production of NS1 Protein in Cells Infected with DENV-2 The antiviral activity of the compounds was determined for nine (9) out of the ten (10) compounds previously evaluated, excluding compound R53, which showed higher toxic effects with a CC50 of 30 µM. This first selection analysis was performed through an ELISA assay with detection of the DENV NS1 protein. According to these results, there was a reduction in NS1 production after treatment with eight out of nine compounds 46.22 Viruses 2023, 15, 1563 9 of 16 The Compounds Reduce the Production of NS1 Protein in Cells Infected with DENV-2 The antiviral activity of the compounds was determined for nine (9) out of the ten (10) compounds previously evaluated, excluding compound R53, which showed higher toxic effects with a CC 50 of 30 µM. 
This first selection analysis was performed through an ELISA assay with detection of the DENV NS1 protein. According to these results, there was a reduction in NS1 production after treatment with eight out of nine compounds (M66, M76, M78, R07, R32, R55, MR25, MR41), with statistically significant differences when compared to the viral control (**** p < 0.0001), except for treatment with MR94. It is worth noting that the addition of M78 and MR25 reduced the expression of this protein, with production rates close to 38% and 45%, respectively. These rates were lower than that presented by the inhibition control (mycophenolic acid), which was around 63% (Figure 3).

Figure 3. One-way ANOVA analysis indicated statistically significant differences between the compounds and viral control (VC) (* p < 0.1; *** p < 0.001; **** p < 0.0001), except for compound MR94. t-test indicated no difference between M78 and MR25 (ns: non-significant).

Compound M78 Affects NS5 Protein Production by Interfering with Genome Replication and/or Translation
The compound M78 was selected because it showed the greatest reduction in NS1 protein production when compared to the other compounds, decreasing expression by approximately 60%, according to the previously described results (Figure 3). M78 was evaluated at lower concentrations to assess its effect on NS5 expression. The ELISA results are presented in Figure 4, indicating the percentage of protein production and showing a dose-dependent reduction with statistically significant differences when compared to the viral control (VC) (*** p < 0.001). Treatment with the compound at three different concentrations reduced protein production to values close to those observed in the positive inhibition control. Nevertheless, there were statistically significant differences between the three treatments, indicating a greater effect of M78 at 50 µM, where NS5 production was close to 30% compared to VC (100%). The IC50 was estimated at 24.61 µM, and, according to the determined CC50, an SI of 2.5 was found.
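As a quick consistency check using the two values just reported, the selectivity index is simply the ratio of the cytotoxic and inhibitory concentrations:

$$\mathrm{SI} = \frac{\mathrm{CC_{50}}}{\mathrm{IC_{50}}} = \frac{60.77\ \mu\mathrm{M}}{24.61\ \mu\mathrm{M}} \approx 2.47 \approx 2.5$$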
Figure 4. Effect of the M78 compound on the production of DENV-2 viral protein NS5 in Huh-7 replicon cells using an ELISA assay. CC: cell control, VC: viral control, IC: inhibition control (20 µM mycophenolic acid). Data represent mean and standard deviation (n = 8). One-way ANOVA analysis indicates statistically significant differences between all concentrations of M78 evaluated in relation to the viral control (*** p < 0.001). Tukey test indicates differences between the three M78 treatments: between 12.5 µM and 25 µM (*** p < 0.001), and between 12.5 µM and 50 µM (**** p < 0.0001).

The M78 Compound Affects the Production of DENV-2 Viral RNA
The results of the antiviral effect and protein expression after treatment with 50 µM of M78 led to the evaluation of the compound's action on viral RNA synthesis. The evaluation showed a decrease in the relative expression of the DENV-2 gene when compared to the viral control (Figure 5).

Figure 5. Kruskal-Wallis test analysis shows statistically significant differences between treatment with compound M78 and inhibition control (IC) compared to viral control (VC) (*** p < 0.001). Mann-Whitney test indicates no difference between inhibition control (IC) and M78 (ns: non-significant).
Figure 5 shows the relative expression levels for VC, IC, and M78 treatment. The results indicate a statistically significant difference between treatment with M78 and the viral control (**** p < 0.0001). Based on this, it is possible to state that expression of the evaluated gene was reduced 1.7-fold by the compound treatment, with behavior similar to that of the inhibition control.

Discussion
The use of bioinformatics tools for the study of molecular docking has become an important component in the drug discovery process [33]. Over the past two decades, computational technologies have played a crucial role in the development of antiviral drugs [34]. For DENV, the NS5 protein has been reported as an important target in the search for new molecules capable of inhibiting its function, given that it has no homologues in the eukaryotic cell, which decreases the probability of toxic effects. It is highly conserved in flaviviruses, and within this protein, the thumb subdomain in RdRp plays a crucial role in assisting in the synthesis of viral RNA [2]. Likewise, the MTase domain has essential enzymatic activity in replication and positively influences polymerase activity [20,35]. Considering this, molecules that bind to these sites may hinder the conformational changes required for RdRp activity [21]. Seeking compounds that bind in these regions remains an objective, since it has been demonstrated that the development of therapeutic molecules, such as direct-acting antivirals (DAA), is a truly effective approach [2].

The NS5 structures of the four serotypes were obtained from García et al. These models include most of the NS5 amino acids from all serotypes, and they are reliable and comparable based on validation of Z-score values against structures resolved by X-ray and NMR and of torsion angles [20]. Molecular docking with natural compounds was performed on the SAM and RNA tunnel regions of NS5, after re-docking of the SAH and 68E ligands on the NS5 protein of serotype 3. The RMSD values obtained between the computationally calculated and experimental poses for the control ligands SAH and 68E were 1.0 Å and 0.8 Å, respectively (Figure 1), which are small values on an atomic scale. A value equal to 0 indicates identical structures, and the value increases as the two structures diverge [36], so the two compared structures are very close to the same conformation. In other words, this result supports the interaction coordinates used in the re-docking, indicating that they are suitable for the search for natural compounds with interaction in DENV NS5, known as the most conserved protein and considered a promising pharmacological target due to its fundamental role in replication, viral RNA methylation, RNA polymerization, and evasion of the host immune system [8].

Molecular docking has become an essential component of drug development and represents an approach that can aid in therapeutic and pharmaceutical development. The rationale for evaluating the antiviral potential of compounds derived from natural products lies in their proven success in many pharmacological therapies. As such, and because they present antiviral properties, they may be an alternative source for drug development to combat the Dengue virus [37]. The binding energies of the compounds on the NS5 protein were found to be between −9.2 and −12 kcal/mol (Table 1), representing the global minimum energy of the complex formed between ligand and receptor [38].
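To put these docking energies in context, a binding free energy can be translated into an approximate dissociation constant through the standard relation ΔG = RT ln K_d; note that docking scores are only estimates of ΔG, so the resulting K_d values below are rough back-of-the-envelope figures rather than measured affinities:

$$K_d = e^{\Delta G / RT}, \qquad RT \approx 0.593\ \mathrm{kcal/mol}\ \text{at}\ T = 298\ \mathrm{K}$$
$$\Delta G = -9.2\ \mathrm{kcal/mol} \Rightarrow K_d \approx 1.8 \times 10^{-7}\ \mathrm{M}; \qquad \Delta G = -12\ \mathrm{kcal/mol} \Rightarrow K_d \approx 1.6 \times 10^{-9}\ \mathrm{M}$$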
The molecular docking results suggest that all our compounds have a higher binding-site affinity than their respective controls, based on re-docking (for SAH, this was −7.6 kcal/mol; for 68E, −8.6 kcal/mol). This condition was considered as an initial criterion when choosing the compounds. The binding energy is often used to determine the affinity of biomolecular interactions and drug efficacy [39]; the more negative the binding affinity, the stronger the ligand-receptor interaction and the better the molecular docking prediction [40]. Further, predictions of physicochemical and toxicological properties favored the selection of these compounds. Compliance with Lipinski's rules was established as an important criterion for this classification, as higher molecular weight is associated with a lower rate of permeability across lipid bilayer membranes, and a compound with a LogP less than five has approximately a 90% probability of being orally soluble. When a drug-like molecule satisfies the five fundamental principles, it exhibits better pharmacokinetic properties and bioavailability [37], making it feasible to consider the compounds as possible future drug candidates [20,41]. Likewise, predicting possible risks helped rank the compounds so that the most promising ones could be evaluated in vitro. This allowed 15 candidate compounds to be postulated for in vitro evaluation against DENV.

The experimental phase began with the determination of the cytotoxicity of 10 of the 15 identified compounds. The results showed that six of them had a CC50 in Huh-7 cells above 100 µM (M76, R07, R32, R55, MR25 and MR41), while the CC50 values of the others were less than or equal to that found for M78 (60.77 µM). No apparent cellular damage was found up to 50 µM (Supplementary Figure S1), so it was decided to evaluate these compounds at lower concentrations relative to the CC50 determined for each one (Table 2).

Regarding the chemical structure of the compounds, eight out of the ten evaluated in vitro contain a five-ring scaffold named 7,8,13,13b-tetrahydro-5H-benz[1,2]indolizino[8,7-b]indole. The indole moiety has served as an excellent scaffold in the discovery of antimicrobial, anticancer, antihypertensive, antiproliferative, and anti-inflammatory agents [42]. The activity of these compounds is associated with the high-affinity molecular interactions generated between the indole compound and the therapeutic target [43], which aids in the development of new biologically active compounds [44,45]. Medically useful or promising indole compounds span the entire structural spectrum, from simple indoles to highly complex indole alkaloids [43]. Recent studies have pointed out that these types of structures exhibit an effect against flaviviruses [46] such as DENV, and against other viruses such as HIV and influenza virus [47], and that fused tricyclic derivatives of indoline and imidazolidinone act on ZIKV and DENV infection [48].

The effect of the compounds against DENV was initially evaluated for nine out of the ten selected compounds (excluding R53) by measuring the expression of the NS1 protein of DENV-2, which was used as a model because it is one of the most prevalent serotypes [49]. The results indicated that all compounds, except MR94, reduced NS1 expression when compared to the viral control (Figure 3). The presence of NS1 confirms Dengue infection and serves as evidence of successful viral replication [9,50].
Among the compounds, M78 showed the best effect, decreasing protein production by over 60% (production close to 38%). In other studies, detection of DENV-2 NS1 through ELISA also showed that hydroalcoholic extracts of leaves (UGL) and bark (UGB) from the medicinal species Uncaria guianensis reduced the levels of this protein at concentrations of 0.5, 5, and 10 µg/mL [51]. Considering that NS1 is a multifunctional protein essential for virus production and that, in infected cells, it is necessary for the formation of virus-induced membrane structures that serve as replication sites for DENV [52], the findings presented here support the concept of an antiviral action by the evaluated compounds.

On the other hand, it was found that compound M78 induces a reduction in NS5 expression with a dose-dependent effect (Figure 4) and a predicted selectivity index (SI) of 2.5, indicating that the compound is effective and selective. This parameter is accepted as expressing the efficacy of a compound in inhibiting viral replication, although studies have shown that compounds with SI values < 10 have limited antiviral activity, as observed in the case of OA, a methylated flavone from Oroxylum indicum, with an SI of 2.66 against DENV-2 [53]. Furthermore, by using the replicon system, which allows studying aspects of viral replication due to the lack of structural genes [30], it is possible to consider that the intervention of M78 is directed towards viral replication and/or the expression of proteins associated with this process [54]. This is also evidenced by the reduction in viral RNA copies (Figure 5), where a 2^(−ΔΔCt) value less than 1 indicates a reduction in gene expression due to the treatment [32]; this reduction was estimated at 1.7-fold relative to the viral control. Based on the above, we can say that M78 intervenes in the viral cycle of DENV-2, although the mechanism through which this effect occurs requires further validation, using complementary techniques that allow for a deeper understanding of the role of this compound as an antiviral.

Other studies have reported the evaluation of compounds and plant extracts against DENV-2; among them, it has been indicated that the ethanolic extract of A. calamus root (Tatanan A) presented an effect similar to that found for M78, related to intervention in the initial stage of viral replication, inhibiting DENV-2 mRNA and protein levels [55]. On the other hand, it has also been published that hirsutine, an alkaloid from Uncaria rhynchophylla that shares structural similarity with M78 in relation to the indole nucleus, was identified as a potent anti-DENV compound against the four serotypes, inhibiting the viral particle assembly, budding, or release steps, but not translation or viral replication in the DENV lifecycle [56]. However, we have found a different dynamic for compound M78, demonstrating that the compound intervenes in DENV-2 viral replication by acting on RNA synthesis and/or translation of the viral genome, causing a decrease in the production of viral proteins, as we have observed (Figures 4 and 5). Furthermore, considering that indole is a potent basic pharmacophore present in a wide variety of antiviral agents [45], it is also known that some indole derivatives have been effective and selective inhibitors of this virus's replication [57].
In this sense, the design of antiviral drugs containing indole is useful for combating viral infections [42], and its application is all the more interesting when this type of compound is found in natural products, as compounds from these sources have prevented DENV from infiltrating the genome or act by reducing the structural and non-structural proteins that are produced [58]. This supports our findings. The identification of this naturally occurring compound is interesting, as similar bioactive compounds with antiviral properties could be combined with existing therapies, along with different administration methods, to enhance their efficacy [59]. Furthermore, our results indicate that the in silico strategy used in this research to search for compounds against Dengue proved to be effective, allowing the identification of 15 compounds with interaction in NS5 from DENV-1 to DENV-4 out of a total of 190,090 evaluated natural compounds. Among them, compound M78 showed in vitro antiviral activity, highlighting the utility of this methodology for future studies in the identification of compounds directed at viral targets. The findings suggest that the natural compound M78 could be considered a candidate against DENV-2. M78 interferes with viral replication, and it is recommended to study its action on NS5 in more detail, as well as its effect in pre-treatment and its virucidal effect against other Dengue serotypes and flaviviruses. It is important to note that its chemical structure, for which no other biological activity assays are currently known, possesses an indole core that could be associated with its antiviral effect, which increases the interest in further investigating this compound identified here through virtual screening.

Informed Consent Statement: Not applicable.

Data Availability Statement: The study did not report any data.
2023-07-19T15:15:55.380Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "72148bb9453bd953ac4c44e5928b25cab0a885b4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/15/7/1563/pdf?version=1689566755", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "02cdee7be66cb3d835ed9ea30a4b319e2fb3ae8b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
261111181
pes2o/s2orc
v3-fos-license
Prognostic Impact of Modified H2FPEF Score in Patients Receiving Trans-Catheter Aortic Valve Replacement

Background: H2FPEF is a recently introduced score for the diagnosis of heart failure with preserved ejection fraction (HFpEF). Many patients with severe aortic stenosis have clinical/subclinical HFpEF and have worsening heart failure even after trans-catheter aortic valve replacement (TAVR). We investigated the prognostic impact of the H2FPEF score in TAVR candidates. Methods: Patients undergoing TAVR procedures at a single academic center between 2015 and 2022 were included. The H2FPEF score was calculated using baseline characteristics before TAVR. The prognostic impact of the score on the post-TAVR composite endpoint, consisting of all-cause death and heart failure readmissions during the 2-year observation period, was evaluated. Results: A total of 244 patients (median age 86 years, 70 males) were included. The median value of the H2FPEF score was 3 (2, 4). The score was significantly associated with the primary outcome with a hazard ratio of 1.33 (95% confidence interval 1.02-1.74, p = 0.036). We constructed a modified H2FPEF score by adjusting the cutoffs of several items (i.e., age and body mass index) for better prognostic stratification. The modified score had a higher area under the curve than the original one (0.65 vs. 0.59, p = 0.028) and was independently associated with the primary outcome with an adjusted hazard ratio of 1.22 (95% confidence interval 1.01-1.49, p = 0.047). Conclusions: A modified H2FPEF score, which was originally developed to diagnose the presence of HFpEF, could be used to risk-stratify elderly patients receiving TAVR. The clinical utility of this score should be validated in future studies.

Background
Trans-catheter aortic valve replacement (TAVR) was introduced as a less invasive intervention for severe aortic stenosis, initially in patients at high risk for surgical valve replacement [1,2] and currently in lower-risk cohorts [3], as endorsed by the guidelines [4,5]. The clinical outcomes after TAVR have further improved due to improvements in sedation technique, smaller sheaths for vascular access, innovation of vascular closure devices, and more sophisticated peri-procedural management [6][7][8].

Nevertheless, heart failure recurrence after TAVR is one of the unsolved issues [9]. Many patients with severe aortic stenosis have heart failure with preserved ejection fraction (HFpEF), with left ventricular hypertrophy, impaired diastolic function due to a longstanding increase in afterload, and lower stroke volume [10][11][12]. Patients with reduced ejection fraction due to severe aortic stenosis also have diastolic dysfunction [13,14]. Such impaired cardiac function persists even after TAVR [15]. However, the detailed association between the pre-TAVR degree of diastolic dysfunction and clinical outcomes after TAVR remains unknown.

The diagnosis of HFpEF is sometimes challenging. Several scores have been introduced to help diagnose HFpEF, such as the H2FPEF score [16].
The H2FPEF score consists of several easily available clinical parameters, including body mass index, the presence of hypertension, the presence of atrial fibrillation, pulmonary hypertension, age, and the echocardiographic E/e' ratio. The H2FPEF score may be useful not only for the diagnosis of HFpEF but also for risk-stratifying heart failure patients [17]. We hypothesized that the H2FPEF score would be associated with clinical outcomes after TAVR. In this study, we evaluated the prognostic impact of the H2FPEF score in patients receiving TAVR. We further modified the score to improve its predictability in contemporary TAVR candidates, who are elderly with advanced sarcopenia.

Patient Selection
Consecutive patients with severe aortic stenosis undergoing TAVR procedures at a large academic center, University of Toyama, between 2015 and 2022 were prospectively included in the institutional registry database, and this study was retrospectively conducted using this database. All patients were followed for 2 years or until May 2023 unless lost to follow-up. The H2FPEF score, which was originally introduced to diagnose HFpEF [16], was calculated using baseline characteristics before TAVR. Patients with missing data were excluded. Written informed consent was obtained from all participants on admission. The institutional review board approved the study protocol.

Calculation of H2FPEF Score
The H2FPEF score was calculated in all participants using baseline characteristics before TAVR, by assigning a weighted score for each variable the patient satisfied: body mass index >30 (2 points), use of multiple anti-hypertensive medications (1 point), atrial fibrillation (persistent or paroxysmal) (3 points), pulmonary hypertension with estimated pulmonary artery systolic pressure by echocardiography >35 mmHg (1 point), age >60 years (1 point), and Doppler echocardiographic E/e' ratio >9.0 (1 point) [16]. The total score was calculated as the sum of these points, ranging between 0 and 9.

We further constructed a modified H2FPEF score in this study by updating the cutoffs of body mass index and age, because most TAVR candidates had advanced sarcopenia and probably did not satisfy body mass index >30. Also, most TAVR candidates were elderly, aged over 60 years. We believed that the original H2FPEF score should be updated to better fit contemporary TAVR candidates.

Other Baseline Characteristics
Baseline demographic, laboratory, echocardiographic, and medication data before TAVR were obtained as baseline characteristics.

TAVR Procedure
Patients with symptomatic severe aortic stenosis with peak velocity >4.0 m/s, mean pressure gradient >40 mmHg, or aortic valve area <1.0 cm² were eligible for TAVR. The indication for TAVR was determined by the heart valve team conference, consisting of cardiac surgeons, interventional cardiologists, anesthesiologists, nurses, and imaging specialists. Patients underwent a standard TAVR procedure using the Edwards Sapien XT/Sapien 3 Transcatheter Heart Valve (Edwards Lifesciences, Irvine, CA, USA) or the Medtronic CoreValve/Evolut R Revolving System (Medtronic, Minneapolis, MN, USA). The antithrombotic regimen following TAVR was chosen at the discretion of the attending physician.

Post-TAVR Course and Primary Outcome
Patients were followed at our center or affiliated centers by board-certified cardiologists every 1-2 month(s) at the out-patient clinic in a standard manner. Day 0 was defined as the time of the TAVR procedure. The observation period was 2 years or until May 2023 from day 0.
Clinical outcomes, including death and heart failure readmissions, were counted. The primary outcome was a composite of all-cause death and heart failure readmissions.

Statistical Analysis
Continuous variables were presented as median and interquartile range and compared using the Mann-Whitney U test. Categorical variables were presented as numbers and percentages and were compared using Fisher's exact test. A 2-tailed p value < 0.05 was considered statistically significant. Statistical analyses were performed using SPSS Statistics 23 (SPSS Inc., Armonk, NY, USA).

The independent variable was the H2FPEF score, which was modified as detailed below. Patients were followed for 2 years or until May 2023 from the TAVR procedure (day 0). The primary outcome was a composite of all-cause death and heart failure readmissions.

Cox proportional hazards regression analyses were performed to evaluate the prognostic impact of the H2FPEF score (and the modified H2FPEF score). Potential confounders were considered for inclusion in the multivariable analyses for adjustment after confirmation of statistical significance in the univariable analyses, including age, male sex, body mass index, serum albumin, estimated glomerular filtration rate, plasma B-type natriuretic peptide, left ventricular ejection fraction, heart failure history, and atrial fibrillation. Receiver operating characteristics analyses were performed to evaluate prognostic impact and to calculate cutoffs of variables for the primary outcome. Kaplan-Meier analysis with the log-rank test was performed for risk stratification using the modified H2FPEF score.

The H2FPEF score was modified by adjusting the cutoffs of body mass index and age, which were calculated using receiver operating characteristics analyses, to better fit the current TAVR candidates (i.e., elderly patients with advanced sarcopenia).

H2FPEF Score Calculation
The H2FPEF score was calculated in all participants using baseline characteristics. Of note, almost no patients satisfied body mass index >30, and almost all participants satisfied age >60 years, both of which are major components of the H2FPEF score. The H2FPEF score was distributed relatively narrowly, with a median value of 3 (3, 4) (Figure 1A). Examples of score calculation are displayed in Appendix A.

H2FPEF Score and Post-Procedural Clinical Outcome
All patients underwent successful TAVR. Patients were followed for a median of 730 (382, 730) days, with 730 days as the maximum observation duration. A total of 26 patients encountered the primary outcome defined as all-cause death and heart failure readmissions (12 death alone, 10 heart failure alone, and 4 both). The H2FPEF score was significantly associated with the primary outcome with a hazard ratio of 1.33 (95% confidence interval 1.02-1.74, p = 0.036).

Modified H2FPEF Score
Given the unique characteristics of TAVR candidates (elderly patients with advanced sarcopenia), the H2FPEF score was modified by updating the cutoffs of body mass index (from 30 to 23) and age (from 60 to 84), both of which were calculated using the receiver operating characteristics analyses for the primary outcome. The modified H2FPEF score was calculated in all participants and was distributed widely, with a median value of 3 (2, 4) (Figure 1B). Examples of the score calculation are displayed in Appendix A.
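To make the scoring concrete, below is a minimal sketch of both calculations in Python. The cutoffs come directly from the definitions above; the function signature and the example patient are illustrative inventions, not part of the study.

```python
def h2fpef_score(bmi, n_htn_meds, afib, pasp, age, e_over_eprime,
                 bmi_cutoff=30, age_cutoff=60):
    """Original H2FPEF score with default cutoffs; pass bmi_cutoff=23 and
    age_cutoff=84 for the modified version proposed in this study."""
    score = 0
    score += 2 if bmi > bmi_cutoff else 0        # obesity criterion: 2 points
    score += 1 if n_htn_meds >= 2 else 0         # multiple anti-hypertensives: 1 point
    score += 3 if afib else 0                    # atrial fibrillation: 3 points
    score += 1 if pasp > 35 else 0               # pulmonary hypertension (PASP, mmHg): 1 point
    score += 1 if age > age_cutoff else 0        # elderly: 1 point
    score += 1 if e_over_eprime > 9.0 else 0     # Doppler E/e' ratio > 9: 1 point
    return score                                 # total ranges from 0 to 9

# Hypothetical TAVR candidate: BMI 25, two anti-hypertensives, AF,
# PASP 40 mmHg, age 80, E/e' 12 (illustrative values only).
print(h2fpef_score(25, 2, True, 40, 80, 12))                                # original: 7
print(h2fpef_score(25, 2, True, 40, 80, 12, bmi_cutoff=23, age_cutoff=84))  # modified: 8
```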
Prognostic Impact of the Modified H2FPEF Score
The modified H2FPEF score was independently associated with the primary outcome with an adjusted hazard ratio of 1.22 (95% confidence interval 1.01-1.49, p = 0.047), adjusted for male sex and body mass index (Table 2). The predictability of the modified H2FPEF score, assessed using the area under the curve in the receiver operating characteristics analysis, was superior to that of the original H2FPEF score (0.69 vs. 0.59, p = 0.028; Figure 2).

The modified H2FPEF score was not significantly associated with 2-year mortality, with a hazard ratio of 1.17 (95% confidence interval 0.89-1.54, p = 0.25), whereas it was significantly associated with 2-year heart failure readmissions, with a hazard ratio of 1.36 (95% confidence interval 1.05-1.77, p = 0.021).

Stratification Using Modified H2FPEF Score
Patients were assigned to three groups according to their risk scores: a low-risk group (0-2 points, n = 63), an intermediate-risk group (3-5 points, n = 154), and a high-risk group (6-9 points, n = 27). The prevalence of patients who satisfied each item of the modified H2FPEF score is summarized in Table 3. All 20 patients with left ventricular ejection fraction <40% were assigned to the low- or intermediate-risk group, except for one patient at high risk, who had a heart failure readmission on day 312.

The cumulative incidence of the primary outcome during the 2-year observation period was significantly stratified across the three risk groups (6%, 12%, and 30% for the low-, intermediate-, and high-risk groups, p = 0.001; Figure 3). The hazard ratio of intermediate risk vs. low risk was 1.74 (95% confidence interval 0.63-4.85, p = 0.29).
The hazard ratio of high risk vs. intermediate risk was 2.80 (95% confidence interval 1.15-6.81, p = 0.023). In the high-risk group, the sensitivity was 0.21 and the specificity was 0.89; in the low-risk group, the sensitivity was 0.93 and the specificity was 0.27.
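For readers who want to reproduce this kind of stratified survival analysis, a minimal sketch using the lifelines package is shown below; the per-patient records are hypothetical, since the study's individual-level data are not public.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical per-patient table: follow-up days (capped at 730),
# event flag (1 = death or HF readmission), and modified H2FPEF score.
df = pd.DataFrame({
    "days":  [730, 120, 730, 312, 540, 730],
    "event": [0,   1,   0,   1,   1,   0],
    "score": [2,   7,   4,   6,   5,   1],
})

# Stratify into the three risk groups used in the study: 0-2, 3-5, 6-9 points.
df["group"] = pd.cut(df["score"], bins=[-1, 2, 5, 9],
                     labels=["low", "intermediate", "high"])

# Log-rank test across the three risk groups.
result = multivariate_logrank_test(df["days"], df["group"], df["event"])
print(f"log-rank p = {result.p_value:.3f}")

# Kaplan-Meier estimate per group (plotting omitted).
kmf = KaplanMeierFitter()
for name, sub in df.groupby("group", observed=True):
    kmf.fit(sub["days"], sub["event"], label=str(name))
```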
Discussion
In the present study, we evaluated the prognostic impact of the H2FPEF score [16], which was originally introduced to screen for HFpEF, on the composite primary outcome consisting of all-cause death and heart failure readmissions during the 2-year observation period after TAVR. The original H2FPEF score was significantly associated with the 2-year primary outcome after TAVR. The modified H2FPEF score was constructed by updating the cutoffs of body mass index and age for better suitability to the current TAVR candidates (i.e., elderly patients with advanced sarcopenia). The predictability of the modified H2FPEF score for the primary outcome was superior to that of the original one. The cumulative incidence of the primary outcome was significantly stratified according to the modified H2FPEF score.

HFpEF and H2FPEF Score
The accurate diagnosis of HFpEF is challenging [18]. The prevalence of HFpEF is increasing for several reasons, whereas HFpEF remains underdiagnosed so far. The gold standard for diagnosing HFpEF is a direct measurement of elevated intra-cardiac pressure at rest or during exercise [19]. The H2FPEF score has been introduced as a convenient screening tool for HFpEF [16]. The utility of the H2FPEF score to discriminate suspected HFpEF patients has been validated in various cohorts [20]. Furthermore, the H2FPEF score appears to be useful for the risk stratification of patients with various diseases, including HFpEF [17]. Given that patients with severe aortic stenosis have a similar pathology to HFpEF [13,14], we hypothesized that the H2FPEF score may also be useful for the risk stratification of TAVR candidates.

Prognostic Impact of H2FPEF Score
As hypothesized, the H2FPEF score had a prognostic impact on all-cause death and heart failure readmissions after TAVR. Several previous studies support our findings: elevated intra-cardiac pressure, invasively measured after TAVR, was associated with worse clinical outcomes [21,22]. In another study, more advanced diastolic dysfunction, graded by echocardiography, was associated with a higher risk of 1-year mortality after TAVR than milder degrees of diastolic dysfunction [15]. Aortic stenosis itself can be treated with TAVR, but extra-valvular cardiac damage, including damage to the left ventricle, left atrium, mitral valve, pulmonary artery, and right ventricle, can persist even after TAVR [23]. Thus, it is reasonable that demographic and baseline hemodynamic items of the H2FPEF score, such as the presence of atrial fibrillation and the E/e' ratio, were associated with post-TAVR clinical outcomes.

One previous study showed that the H2FPEF score served as an independent predictor of adverse cardiovascular and heart failure outcomes after TAVR [11]. That study used the original H2FPEF score, whereas we modified the score to better fit current TAVR candidates: elderly patients with sarcopenia [24]. Almost no patients had a body mass index above 30, and almost all patients were aged over 60 years, both of which are cutoffs of the original H2FPEF score's items. The modified H2FPEF score had greater predictability than the original one. We therefore recommend using the modified H2FPEF score, instead of the original one, in TAVR candidates.

Clinical Implication of the Modified H2FPEF Score
The H2FPEF score is convenient and can be calculated non-invasively using several simple parameters [16]. We further improved its predictability by modifying the cutoffs of several items: we reduced the cutoff of body mass index and increased the cutoff of age, because most TAVR candidates were elderly and had advanced sarcopenia.

The score can be used for shared decision making before TAVR among clinicians, patients, and their relatives. Given the high specificity of a high-risk score and the high sensitivity of a low-risk score, we can identify or rule out patients at risk of future adverse events. After TAVR, careful monitoring for worsening heart failure is highly recommended to prevent heart failure readmissions in the high-risk cohort. Post-TAVR prognosis may improve by intervening on several items of the H2FPEF score [25]. For example, cardiac rehabilitation may ameliorate the metabolism of visceral fat [26], catheter ablation for atrial fibrillation may improve atrial function [27], and aggressive titration of heart failure medication may optimize post-TAVR hemodynamics [28].
Limitations
This study included a moderate-sized cohort from a single center. Given the small number of events, the potential confounders included in the multivariable analysis were limited. The profiles of HFpEF may vary depending on region and ethnicity [29]; for example, few HFpEF patients in Asia have obesity. The applicability of our findings should be validated in larger multi-center studies including a variety of regions and ethnicities. The E/e' ratio, one of the items of the H2FPEF score, may not necessarily be measured routinely in all institutes; we highly recommend measuring such echocardiographic data routinely before TAVR to calculate the H2FPEF score. In this study, we preferred the H2FPEF score to the HFA-PEFF score: one of the limitations of the HFA-PEFF score is its requirement for more detailed echocardiographic data, including left atrial volume, left ventricular mass, and global strain, which may not necessarily be measured routinely before TAVR. Several variables in the H2FPEF score are modifiable by intervention; the prognostic impact of intervening on some of the items of the H2FPEF score remains a question for future work.

Conclusions
A modified H2FPEF score, which was originally constructed to diagnose the presence of HFpEF, could be used to risk-stratify elderly patients receiving TAVR. We constructed the modified H2FPEF score by updating the cutoffs of age and body mass index for better suitability to the current TAVR candidates (i.e., elderly patients with sarcopenia). The clinical utility of this score should be validated in future studies.

Figure 1. Distribution of the calculated H2FPEF scores: the original score (A) and the modified score (B).

Figure 2. Receiver operating characteristics analyses of the original (blue line) vs. modified (red line) H2FPEF scores for predicting the primary outcome. The AUC was significantly higher for the modified H2FPEF score than for the original H2FPEF score. AUC, area under the curve; CI, confidence interval. * p < 0.05.

Figure 3. Cumulative incidence of the primary outcome during the 2-year observation period after TAVR, stratified according to the modified H2FPEF score into three groups: high-risk group (score 6-9 points), intermediate-risk group (score 3-5 points), and low-risk group (score 0-2 points). * p < 0.05 via log-rank test.
Patients were stratified into three groups according to the modified H2FPEF score: low-, intermediate-, and high-risk groups. eGFR, estimated glomerular filtration rate; BNP, B-type natriuretic peptide; LVDd, left ventricular end-diastolic diameter; LVEF, left ventricular ejection fraction; MR, mitral regurgitation; AR, aortic regurgitation; TR, tricuspid regurgitation; RVSP, right ventricular systolic pressure. Continuous variables are stated as median and interquartile range and compared between groups using the Mann-Whitney U test. Categorical variables are stated as number and percentage and compared between groups using Fisher's exact test. * p < 0.05.

Table 2. Potential predictors of the primary outcome, including the modified H2FPEF score. Potential predictors of the primary outcome (2-year death or heart failure readmission) were included in the univariable analyses. Variables with p < 0.05 in the univariable analyses were included in the multivariable analysis. eGFR, estimated glomerular filtration rate; BNP, B-type natriuretic peptide; LVEF, left ventricular ejection fraction. * p < 0.05 via Cox proportional hazards regression analyses.

Table 3. Prevalence of patients who satisfied each item of the modified H2FPEF score. RVSP, right ventricular systolic pressure. Categorical variables are stated as number and percentage. * p < 0.05 via Fisher's exact test.
2023-08-25T15:26:28.436Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "9ce2effefde7d1c371beb65f22c7947434d0ed76", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/12/16/5396/pdf?version=1692436477", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "29c10032a5b3a4c0b108fa470f8572fcf396393a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
80922317
pes2o/s2orc
v3-fos-license
Congenital generalized lipodystrophy

Congenital generalized lipodystrophy (also called Berardinelli-Seip congenital lipodystrophy) is a rare condition characterized by an almost total lack of fatty (adipose) tissue in the body and a very muscular appearance. Adipose tissue is found in many parts of the body, including beneath the skin and surrounding the internal organs. It stores fat for energy and also provides cushioning. Congenital generalized lipodystrophy is part of a group of related disorders known as lipodystrophies, which are all characterized by a loss of adipose tissue. A shortage of adipose tissue leads to the storage of fat elsewhere in the body, such as in the liver and muscles, which causes serious health problems.

The signs and symptoms of congenital generalized lipodystrophy are usually apparent from birth or early childhood. One of the most common features is insulin resistance, a condition in which the body's tissues are unable to recognize insulin, a hormone that normally helps to regulate blood sugar levels. Insulin resistance may develop into a more serious disease called diabetes mellitus. Most affected individuals also have high levels of fats called triglycerides circulating in the bloodstream (hypertriglyceridemia), which can lead to the development of small yellow deposits of fat under the skin called eruptive xanthomas and inflammation of the pancreas (pancreatitis). Additionally, congenital generalized lipodystrophy causes an abnormal buildup of fats in the liver (hepatic steatosis), which can result in an enlarged liver (hepatomegaly) and liver failure. Some affected individuals develop a form of heart disease called hypertrophic cardiomyopathy, which can lead to heart failure and an abnormal heart rhythm (arrhythmia) that can cause sudden death.

People with congenital generalized lipodystrophy have a distinctive physical appearance. They appear very muscular because they have an almost complete absence of adipose tissue and an overgrowth of muscle tissue. A lack of adipose tissue under the skin also makes the veins appear prominent. Affected individuals tend to have prominent bones above the eyes (orbital ridges), large hands and feet, and a prominent belly button (umbilicus). Affected females may have an enlarged clitoris (clitoromegaly), an increased amount of body hair (hirsutism), irregular menstrual periods, and multiple cysts on the ovaries, which may be related to hormonal changes. Many people with this disorder develop acanthosis nigricans, a skin condition related to high levels of insulin in the bloodstream. Acanthosis nigricans causes the skin in body folds and creases to become thick, dark, and velvety.

Researchers have described four types of congenital generalized lipodystrophy, which are distinguished by their genetic cause. The types also have some differences in their typical signs and symptoms. For example, in addition to the features described above, some people with congenital generalized lipodystrophy type 1 develop cysts in the long bones of the arms and legs after puberty. Type 2 can be associated with intellectual disability, which is usually mild to moderate. Type 3 appears to cause poor growth and short stature, along with other health problems. Type 4 is associated with muscle weakness, delayed development, joint abnormalities, a narrowing of the lower part of the stomach (pyloric stenosis), and severe arrhythmia that can lead to sudden death.
Frequency
Congenital generalized lipodystrophy has an estimated prevalence of 1 in 10 million people worldwide. Between 300 and 500 people with the condition have been described in the medical literature. Although this condition has been reported in populations around the world, it appears to be more common in certain regions of Lebanon and Brazil.

Causes
Mutations in the AGPAT2, BSCL2, CAV1, and CAVIN1 genes cause congenital generalized lipodystrophy types 1 through 4, respectively. The proteins produced from these genes play important roles in the development and function of adipocytes, which are the fat-storing cells in adipose tissue. Mutations in any of these genes reduce or eliminate the function of their respective proteins, which impairs the development, structure, or function of adipocytes and makes the body unable to store and use fats properly. These abnormalities of adipose tissue disrupt hormones and affect many of the body's organs, resulting in the varied signs and symptoms of congenital generalized lipodystrophy.

Some of the genes associated with congenital generalized lipodystrophy also play roles in other cells and tissues. For example, the protein produced from the BSCL2 gene is also present in the brain, although its function is unknown. A loss of this protein in the brain may help explain why congenital generalized lipodystrophy type 2 is sometimes associated with intellectual disability.

In some people with congenital generalized lipodystrophy, no mutations have been found in any of the genes listed above. Researchers are looking for additional genetic changes associated with this disorder.

Inheritance Pattern
This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
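As a worked example of this inheritance pattern (standard Mendelian reasoning, not specific to this condition): each carrier parent passes on the mutated copy with probability 1/2, so for each pregnancy of two carrier parents,

P(affected) = 1/2 × 1/2 = 1/4, P(unaffected carrier) = 1/2, P(unaffected non-carrier) = 1/4.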
2019-12-10T06:23:20.263Z
2018-05-01T00:00:00.000
{ "year": 2019, "sha1": "746d6d59f6b585b226a85de3ee76acc69c1afc9f", "oa_license": "CCBY", "oa_url": "https://www.rarediseasesjournal.com/articles/congenital-generalized-lipodystrophy.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4146437b56dd878eadd0f88a7d4f88ae80d54512", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49324724
pes2o/s2orc
v3-fos-license
RISE: Randomized Input Sampling for Explanation of Black-box Models

Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/

Introduction
Recent success of deep neural networks has led to a remarkable growth in Artificial Intelligence (AI) research. In spite of the success, it remains largely unclear how a particular neural network comes to a decision, how certain it is about the decision, if and when it can be trusted, or when it has to be corrected. In domains where a decision can have serious consequences (e.g., medical diagnosis, autonomous driving, criminal justice, etc.), it is especially important that the decision-making models are transparent. There is extensive evidence for the importance of explanation towards understanding and building trust in cognitive psychology [15], philosophy [16] and machine learning [6,14,21] research. In this paper, we address the problem of Explainable AI, i.e., providing explanations for an artificially intelligent model's decision. Specifically, we are interested in explaining classification decisions made by deep neural networks on natural images. Consider the prediction of a popular image classification model (ResNet50 obtained from [32]) on the image depicting several sheep shown in Fig. 1(a). We might wonder, why is the model predicting the presence of a cow in this photo? Does it see all sheep as equally sheep-like? An explainable AI approach can provide answers to these questions, which in turn can help fix such mistakes. In this paper, we take the popular approach of generating a saliency or importance map that shows how important each image pixel is for the network's prediction. In this case, our approach reveals that the ResNet model confuses the black sheep with a cow (Fig. 1(c)), potentially due to the scarcity of black-colored sheep in its training data. A similar observation is made for the photo of two birds (Fig. 1(d)), where the same ResNet model predicts the presence of a bird and a person. Our generated explanation reveals that the left bird provides most of the visual evidence for the 'person' class. Existing methods [8,17,23,25,30,32,33] compute importance for a given base model (the one being explained) and an output category.
However, they require access to the internals of the base model, such as the gradients of the output with respect to the input, intermediate feature maps, or the network's weights. Many methods are also limited to certain network architectures and/or layer types [33]. In this paper, we advocate for a more general approach that can produce a saliency map for an arbitrary network without requiring access to its internals and does not require re-implementation for each network architecture. LIME [21] offers such a black-box approach by drawing random samples around the instance to be explained and fitting an approximate linear decision model. However, its saliency is based on superpixels, which may not capture correct regions (see Fig. 2).

Figure 2: Saliency maps for the base model's prediction along with 'deletion' scores (AUC). The top row shows an input image (from ImageNet) and saliency maps produced by RISE, Grad-CAM [23] and LIME [21] with ResNet50 as the base network (redder values indicate higher importance). The bottom row illustrates the deletion metric: salient pixels are gradually masked from the image (2(e)) in order of decreasing importance, and the probability of the 'goldfish' class predicted by the network is plotted vs. the fraction of removed pixels. In this example, RISE provides more accurate saliency and achieves the lowest AUC.

We propose a new black-box approach for estimating pixel saliency called Randomized Input Sampling for Explanation (RISE). Our approach is general and applies to any off-the-shelf image network, treating it as a complete black box and not assuming access to its parameters, features or gradients. The key idea is to probe the base model by sub-sampling the input image via random masks and recording its response to each of the masked images. The final importance map is generated as a linear combination of the random binary masks, where the combination weights come from the output probabilities predicted by the base model on the masked images (See Fig. 3). This seemingly simple yet surprisingly powerful approach allows us to peek inside an arbitrary network without accessing any of its internal structure. Thus, RISE is a true black-box explanation approach which is conceptually different from mainstream white-box saliency approaches such as Grad-CAM [23] and, in principle, is generalizable to base models of any architecture.

Another key contribution of our work is to propose causal metrics to evaluate the produced explanations. Most explanation approaches are evaluated in a human-centered way, where the generated saliency map is compared to the "ground truth" regions or bounding boxes drawn by humans in localization datasets [23,32]. Some approaches also measure human trust or reliability on the explanations [21,23]. Such evaluations not only require a lot of human effort but, importantly, are unfit for evaluating whether the explanation is the true cause of the model's decision. They only capture how well the explanations imitate the human-annotated importance of the image regions. But an AI system could behave differently from a human and learn to use cues from the background (e.g., using grass to detect cows) or other cues that are non-intuitive to humans. Thus, a human-dependent metric cannot evaluate the correctness of an explanation that aims to extract the underlying decision process from the network. Motivated by [8], we propose two automatic evaluation metrics: deletion and insertion.
The deletion metric measures the drop in the probability of a class as important pixels (given by the saliency map) are gradually removed from the image. A sharp drop, and thus a small area under the probability curve, are indicative of a good explanation. Fig. 2 shows plots produced by different explanation techniques for an image containing 'goldfish', where the total Area Under Curve (AUC) value is the smallest for our RISE model, indicating a more causal explanation. The insertion metric, on the other hand, captures the importance of the pixels in terms of their ability to synthesize an image and is measured by the rise in the probability of the class of interest as pixels are added according to the generated importance map. We argue that these two metrics not only alleviate the need for large-scale human evaluation or annotation effort, but are also better at assessing causal explanations by being human-agnostic. For the sake of completeness, we also compare the performance of our method to state-of-the-art explanation models in terms of a human-centric evaluation metric. Related work The importance of producing explanations has been extensively studied in multiple fields, within and outside machine learning. Historically, representing knowledge using rules or decision trees [26,27] has been found to be interpretable by humans. Another line of research focused on approximating the less interpretable models (e.g., neural networks, nonlinear SVMs, etc.) with simple, interpretable models such as decision rules or sparse linear models [3,28]. In a recent work, Ribeiro et al. [21] fit a more interpretable approximate linear decision model (LIME) in the vicinity of a particular input. Though the approximation is fairly good locally, for a sufficiently complex model, a linear approximation may not lead to a faithful representation of the non-linear model. The LIME model can be applied to black-box networks like our approach, but its reliance on superpixels leads to inferior importance maps as shown in our experiments. To explain classification decisions in images, previous works either visually ground image regions that strongly support the decision [18,23] or generate a textual description of why the decision was made [10]. The visual grounding is generally expressed as a saliency or importance map which shows the importance of each pixel towards the model's decision. Existing approaches to deep neural network explanation either design 'interpretable' network architectures or attempt to explain or 'justify' decisions made by an existing model. Within the class of interpretable architectures, Xu et al. [29] proposed an interpretable image captioning system by incorporating an attention network which learns where to look next in an image before producing each word of the caption. A neural module network is employed in [1,12] to produce the answers to visual question-answering problems in an interpretable manner by learning to divide the problem into subproblems. However, these approaches achieve interpretability by incorporating changes to a white-box base model and are constrained to use specific network architectures. Neural justification approaches attempt to justify the decision of a base model. Third-person models [10,18] train additional models from human-annotated 'ground truth' reasoning in the form of saliency maps or textual justifications.
The success of such methods depends on the availability of tediously labeled ground-truth explanations, and they do not produce high-fidelity explanations. Alternatively, first-person models [2,8,23,33] aim to generate explanations providing evidence for the model's underlying decision process without using an additional model. In our work, we focus on producing a first-person justification. Several approaches generate importance maps by isolating contributions of image regions to the prediction. In one of the early works [31], Zeiler et al. visualize the internal representation learned by CNNs using deconvolutional networks. Other approaches [17,25,30] have tried to synthesize an input (an image) that highly activates a neuron. The Class Activation Mapping (CAM) approach [33] achieves class-specific importance of each location of an image by computing a weighted sum of the feature activation values at that location across all channels. However, the approach can only be applied to a particular kind of CNN architecture where a global average pooling is performed over convolutional feature map channels immediately prior to the classification layer. Grad-CAM [23] extends CAM by weighing the feature activation values at every location with the average gradient of the class score (w.r.t. the feature activation values) for every feature map channel. Zhang et al. [32] introduce a probabilistic winner-take-all strategy to compute top-down importance of neurons towards model predictions. Fong et al. [8] and Cao et al. [2] learn a perturbation mask that maximally affects the model's output by backpropagating the error signals through the model. However, all of the above methods [2,8,17,23,25,30,32,33] assume access to the internals of the base model to obtain feature activation values, gradients or weights. RISE is a more general framework as the importance map is obtained with access to only the input and output of the base model. Randomized Input Sampling for Explanation (RISE) One way to measure the importance of an image region is to obscure or 'perturb' it and observe how much this affects the black-box decision. For example, this can be done by setting pixel intensities to zero [8,21,31], blurring the region [8] or by adding noise. In this work we estimate the importance of pixels by dimming them in random combinations, reducing their intensities down to zero. We model this by multiplying an image with a [0, 1] valued mask. The mask generation process is described in detail in section 3.2. Random Masking Let f : I → R be a black-box model that, for a given input from I, produces a scalar confidence score. In our case, I is the space of color images defined on the pixel lattice Λ = {1, . . . , H} × {1, . . . , W}, where every image I is a mapping from coordinates to three color values. For example, f may be a classifier that produces the probability that an object of some class is present in the image, or a captioning model that outputs the probability of the next word given a partial sentence. Let M : Λ → {0, 1} be a random binary mask with distribution D. Consider the random variable f(I ⊙ M), where ⊙ denotes element-wise multiplication. First, the image is masked by preserving only a subset of pixels. Then, the confidence score for the masked image is computed by the black box.
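To make this probe-and-weight procedure concrete before it is formalized below, the following is a minimal Python sketch of RISE. It assumes a black-box `model(image)` callable returning a vector of class probabilities, uses scipy for the bilinear upsampling, and assumes H and W are divisible by h and w (as in the 224/7 setting used later); all names and parameters are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of RISE, assuming a black-box callable
# model(image) -> vector of class probabilities. The scipy-based bilinear
# upsampling and all parameter names are illustrative, not the authors' code.
import numpy as np
from scipy.ndimage import zoom

def generate_masks(N, h, w, H, W, p=0.5, seed=0):
    """Sample N small binary grids, bilinearly upsample them, and randomly shift."""
    rng = np.random.default_rng(seed)
    cell_h, cell_w = H // h, W // w  # assumes H % h == 0 and W % w == 0
    grid = (rng.random((N, h, w)) < p).astype(np.float32)
    masks = np.empty((N, H, W), dtype=np.float32)
    for i in range(N):
        # Upsample to one cell larger than (H, W) on each side, then crop with
        # a random offset of up to one cell; this shifts the smooth pattern.
        up = zoom(grid[i], (cell_h * (h + 1) / h, cell_w * (w + 1) / w), order=1)
        dy = rng.integers(0, cell_h)
        dx = rng.integers(0, cell_w)
        masks[i] = up[dy:dy + H, dx:dx + W]
    return masks  # smooth masks with values in [0, 1]

def rise_saliency(model, image, masks, class_idx):
    """Weighted sum of masks; weights are the class scores on masked images."""
    N = masks.shape[0]
    scores = np.array([model(image * m[:, :, None])[class_idx] for m in masks])
    saliency = (scores[:, None, None] * masks).sum(axis=0)
    return saliency / (N * masks.mean())  # normalize by N * E[M]
```

With the settings reported later (h = w = 7, H = W = 224, N in the thousands), this amounts to a few thousand forward passes of the black box per explained image.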
We define the importance of pixel λ ∈ Λ as the expected score over all possible masks M, conditioned on the event that pixel λ is observed, i.e., M(λ) = 1:

S(λ) = E_M[f(I ⊙ M) | M(λ) = 1].

The intuition behind this is that f(I ⊙ M) is high when the pixels preserved by mask M are important for the model's decision. Combined with the fact that P[M(λ) = 1] = E[M(λ)], the conditional expectation can be written in matrix notation as a sum over masks:

S(λ) = (1 / E[M(λ)]) · Σ_m f(I ⊙ m) · m(λ) · P[M = m].

Thus, the saliency map can be computed as a weighted sum of random masks, where the weights are the probability scores that the masks produce, adjusted for the distribution of the random masks. We propose to generate importance maps by empirically estimating this sum using Monte Carlo sampling over N masks M_i:

S(λ) ≈ (1 / (E[M] · N)) · Σ_{i=1}^{N} f(I ⊙ M_i) · M_i(λ).

Note that our method does not use any information from inside the model and thus is suitable for explaining black-box models. Mask generation Masking pixels independently may cause adversarial effects: a slight change in pixel values may cause significant variation in the model's confidence scores. Moreover, generating masks by independently setting their elements to zeros and ones will result in a mask space of size 2^(H×W). A larger space size requires more samples for a good estimate of the expectation above. To address these issues we first sample smaller binary masks and then upsample them to larger resolution using bilinear interpolation. Bilinear upsampling does not introduce sharp edges in I ⊙ M_i and results in a smooth importance map S. After interpolation, masks M_i are no longer binary, but have values in [0, 1]. Finally, to allow more flexible masking, we shift all masks by a random number of pixels in both spatial directions. Formally, mask generation can be summarized as: 1. Sample N binary masks of size h × w (smaller than image size H × W) by setting each element independently to 1 with probability p and to 0 with the remaining probability. 2. Upsample all masks to a resolution slightly larger than H × W using bilinear interpolation, after which the masks take values in [0, 1]. 3. Shift each mask by a random number of pixels in both spatial directions by cropping an H × W region at a random offset. Experiments Datasets and Base Models: We evaluated RISE on 3 publicly available object classification datasets, namely, PASCAL VOC07 [7], MSCOCO2014 [13] and ImageNet [22]. Given a base model, we test importance maps generated by different explanation methods for a target object category present in images from the VOC and MSCOCO datasets. For the ImageNet dataset, we test the explanation generated for the most probable class of the image. We chose the particular versions of the VOC and MSCOCO datasets to compare fairly with the state-of-the-art reporting on the same datasets and same base models. For these two datasets, we used ResNet50 [9] and VGG16 [24] networks trained by [32] as base models. For ImageNet, the same base models were downloaded from the PyTorch model zoo. Evaluation Metrics Despite a growing body of research focusing on explainable machine learning, there is still no consensus about how to measure the explainability of a machine learning model [19]. As a result, human evaluation has been the predominant way to assess model explanation by measuring it from the perspective of transparency, user trust or human comprehension of the decisions made by the model [11]. Existing justification methods [23,32] have evaluated saliency maps by their ability to localize objects. However, localization is merely a proxy for human explanation and may not correctly capture what causes the base model to make a decision, irrespective of whether the decision is right or wrong as far as the proxy task is concerned. As a typical example, let us consider an image of a car driving on a road.
Evaluating an explanation against the localization bounding box of the car does not give credit for (in fact discredits) correctly capturing 'road' as a possible cause behind the base model's decision of classifying the image as that of a car. We argue that keeping humans out of the loop for evaluation makes it more fair and true to the classifier's own view on the problem rather than representing a human's view. Such a metric is not only objective (free from human bias) in nature but also saves time and resources. Causal metrics for explanations: To address these issues, we propose two automatic evaluation metrics: deletion and insertion, motivated by [8]. The intuition behind the deletion metric is that the removal of the 'cause' will force the base model to change its decision. Specifically, this metric measures a decrease in the probability of the predicted class as more and more important pixels are removed, where the importance is obtained from the importance map. A sharp drop, and thus a low area under the probability curve (as a function of the fraction of removed pixels), indicates a good explanation. The insertion metric, on the other hand, takes a complementary approach. It measures the increase in probability as more and more pixels are introduced, with higher AUC indicative of a better explanation. There are several ways of removing pixels from an image [4], e.g., setting the pixel values to zero or any other constant gray value, blurring the pixels or even cropping out a tight bounding box. The same is true when pixels are introduced, e.g., they can be introduced to a constant canvas or by starting with a highly blurred image and gradually unblurring regions. All of these approaches have different pros and cons. A common issue is the introduction of spurious evidence which can fool the classifier. For example, if pixels are introduced to a constant canvas and if the introduced region happens to be oval in shape, the classifier may classify the image as a 'balloon' (possibly a printed balloon) with high probability. This issue is less severe if pixels are introduced to an initially blurred canvas, as blurring takes away most of the finer details of an image without exposing it to sharp edges as image regions are introduced. This strategy gives higher scores for all methods, so we adopt it for insertion. For deletion, the aim is to fool the classifier as quickly as possible, and blurring small regions instead of setting them to a constant gray level does not help. This is because a good classifier is usually able to fill in the missing details quite remarkably from the surrounding regions and from the small amount of low-frequency information left after blurring a tiny region. As a result, we set the image regions to constant values when removing them for the deletion metric evaluation. We used the same strategies for all the existing approaches with which we compared our method in terms of these two metrics. Pointing game: We also evaluate explanations in terms of a human evaluation metric, the pointing game introduced in [32]. If the highest saliency point lies inside the human-annotated bounding box of an object, it is counted as a hit. The pointing game accuracy is given by #Hits / (#Hits + #Misses), averaged over all target categories in the dataset.

[Table 1: Comparative evaluation in terms of deletion (lower is better) and insertion (higher is better) scores on the ImageNet dataset. Except for Grad-CAM, the rest are black-box explanation models.]
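To make the deletion protocol concrete, here is a minimal sketch under the same assumptions as the earlier code (a black-box `model(image)` callable returning class probabilities); the per-step pixel count and the constant baseline are illustrative parameters rather than the paper's exact choices. Insertion is the mirror image: start from a blurred image and reveal pixels in the same order, integrating the rising probability.

```python
# A minimal sketch of the deletion metric, assuming the same black-box
# model(image) -> class probabilities callable; the per-step pixel count and
# the constant-value baseline are illustrative parameters.
import numpy as np

def deletion_auc(model, image, saliency, class_idx, pixels_per_step=1792, baseline=0.0):
    """Remove pixels in decreasing-saliency order and integrate the class
    probability over the fraction of pixels removed (lower AUC is better)."""
    H, W = saliency.shape
    order = np.argsort(-saliency.ravel())  # most salient pixels first
    flat = image.copy().reshape(H * W, -1)
    probs = [model(image)[class_idx]]
    for start in range(0, H * W, pixels_per_step):
        flat[order[start:start + pixels_per_step]] = baseline  # delete a batch
        probs.append(model(flat.reshape(image.shape))[class_idx])
    fractions = np.linspace(0.0, 1.0, len(probs))
    return np.trapz(probs, fractions)
```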
Experimental Results Experimental Settings: The binary random masks are generated with equal probabilities for 0's and 1's. For different CNN classifiers, we empirically select different numbers of masks; in particular, we used 4000 masks for the VGG16 network and 8000 for ResNet50. We have used h = w = 7 and H = W = 224 throughout. All the results used for comparison were either taken from published works or obtained by running the publicly available code on datasets for which reported results could not be obtained. Deletion and Insertion scores: Table 1 shows a comparative evaluation of RISE with other state-of-the-art approaches in terms of both deletion and insertion metrics on the val split of ImageNet. RISE reports an average value with associated standard deviations for 3 independent runs. The sliding window approach [31] systematically occludes fixed-size image regions and probes the model with the perturbed image to measure the importance of the occluded region. We used a sliding window of size 64 × 64 with stride 8. For LIME [21], the number of samples was set to 1000 (taken from the code). For this experiment, we used the ImageNet classification dataset, where no ground truth segmentation or localization mask is provided and thus explainability performance can only be measured via automatic metrics like deletion and insertion. For both base models and according to both metrics, RISE provides better performance, outperforming even the white-box Grad-CAM method. The values are better for ResNet50, which is intuitive, as it is a better classification model than VGG16. However, due to the large number of forward passes, RISE is computationally heavy. This can potentially be addressed by intelligently sampling a smaller number of random masks, which is left as future work. RISE sometimes provides noisy importance maps due to the sampling approximation, especially in the presence of objects of varying sizes. Fig. 4 shows examples of RISE-generated importance maps along with the deletion and insertion curves. The appendix contains additional visual examples, including a few noisy importance maps. Pointing game accuracy: The performance in terms of pointing game accuracy is shown in Table 2 for the test split of PASCAL VOC07 and the val split of MSCOCO2014 datasets. In this table, RISE is the only black-box method. The base models are obtained from [32] and thus we list the pointing game accuracies reported in the paper. RISE reports an average value of 3 independent runs; low standard deviation values indicate the robustness of the proposed approach against the randomness of the masks. For VGG16, RISE performs consistently better than all of the white-box methods, with a significantly improved performance for the VOC dataset. For the deeper ResNet50 network with residual connections, RISE does not have the highest pointing accuracy but comes close. We stress again that good pointing accuracy may not correlate with actual causal processes in a network; however, RISE is competitive despite being black-box and more general than methods like CAM, which is only applicable to architectures without fully-connected layers. RISE for Captioning RISE can easily be extended to explain captions for any image description system. Some existing works use a separate attention network [29] or assume access to feature activations [32] and/or gradient values [23] to ground words in an image caption. The most similar to our work is Ramanishka et al.
[20], where the base model is probed with conv features from small patches of the input image to estimate its importance for each word in the caption. However, our approach is not constrained to a single fixed-size patch and is thus less sensitive to object sizes as well as better at capturing additional context that may be present in the image. We provide a small example of RISE being applied for explaining image captions. We take a base captioning model [5] that models the probability of the next word w_k given a partial sentence s = (w_1, . . . , w_{k−1}) and an input image I:

f(I, s, w_k) = P[w_k | I, w_1, . . . , w_{k−1}].

We probe the base model by running it on a set of N randomly masked inputs f(I ⊙ M_i, s, w_k) and computing the saliency of pixel λ for each word in s as

S(λ) = (1 / (N · E[M])) · Σ_{i=1}^{N} f(I ⊙ M_i, s, w_k) · M_i(λ).

The input sentence s can be any arbitrary sentence, including the caption generated by the base model itself. Three such explanation instances for an MSCOCO image are shown in Fig. 5. Conclusion This paper presented RISE, an approach for explaining black-box models by estimating the importance of input image regions for the model's prediction. Despite its simplicity and generality, the method outperforms existing explanation approaches in terms of automatic causal metrics and performs competitively in terms of the human-centric pointing metric. Future work will be to exploit the generality of the approach for explaining decisions made by complex networks in video and other domains.
Exacerbation of Autoimmune Bullous Diseases After Severe Acute Respiratory Syndrome Coronavirus 2 Vaccination: Is There Any Association? Background and Aim There have been concerns regarding the potential exacerbation of autoimmune bullous diseases (AIBDs) following vaccination against COVID-19 during the pandemic. In the current study, vaccine safety was evaluated in patients with AIBDs. Methods In this study, patients with AIBDs were contacted via face-to-face visits or phone calls. Patient demographics, vaccine-related information, pre- and post-vaccine disease status, and complications were recorded. The exacerbation was considered either relapse in the remission/controlled phase of the disease or disease worsening in the active phase. The univariate and multivariate logistic regression tests were employed to determine the potential risk factors of disease exacerbation. Results Of the patients contacted, 446 (74.3%) reported receiving at least one dose of vaccine injection (54.7% female). Post-vaccine exacerbation occurred in 66 (14.8%) patients. Besides, there were 5 (1.1%) patients with AIBD diagnosis after vaccination. According to the analysis, for every three patients who received vaccines during the active phase of the disease, one experienced disease exacerbation. The rate of disease exacerbation increased by three percent with every passing month from the last rituximab infusion. Active disease in the past year was another risk factor, with a number needed to harm of 10. Conclusion The risk of AIBD exacerbation after the COVID-19 vaccine is not high enough to prevent vaccination. This unwanted side effect can be reduced if the disease is controlled at the time of vaccination. INTRODUCTION Autoimmune bullous diseases (AIBDs) are a group of blistering dermatoses of the skin and mucosa. Vaccinations have always been a challenging issue for patients with AIBDs and their physicians due to the possible risk of disease exacerbation (1). The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak began in Wuhan, China, in December 2019 and was declared a global pandemic by the World Health Organization in March 2020 (2). Patients with AIBDs may be at increased risk of severe complications of coronavirus disease 2019 (COVID-19), and they are in substantial need of vaccination (3-5). However, those with an underlying disease like AIBDs were excluded from vaccine clinical trials, and little is known concerning the safety profile of SARS-CoV-2 vaccines in this population. Some studies have reported cases of exacerbation or new-onset AIBDs post-SARS-CoV-2 vaccination (6-9), yet this has not been evaluated in larger series. Herein, we sought to determine whether SARS-CoV-2 vaccination might affect the natural course of AIBDs and to find the potential risk factors of disease exacerbation. Study Design and Participants This cross-sectional study was performed at the AIBD clinic of Razi Skin Hospital, Tehran, Iran, for 3 months between September 10 and December 10, 2021. To evaluate the SARS-CoV-2 vaccine outcome, AIBD patients were contacted, and those who had received at least one vaccine shot were enrolled. Informed consent was obtained from each patient verbally. The study was conducted according to the Helsinki Declaration, and ethical approval was obtained from the Tehran University of Medical Sciences Ethics Committee (IR.TUMS.MEDICINE.REC.1400.911). The patients were interviewed through face-to-face visits or phone calls.
Demographic data, vaccine-related information, pre- and post-vaccine disease status, and vaccine-induced complications were self-reported by the patients and completed by referring to their medical records. If the patients had received only the first vaccine dose, they were contacted again by a follow-up phone call 1 month after the scheduled vaccination time. To assess the post-vaccination disease status, an in-person visit was set for a thorough examination of the patients who reported new lesions after vaccination in the telephone interviews. Moreover, if this had occurred in the past, patients' medical records were used to extract the necessary information. It is noteworthy that patients whose information was not complete and accurate were not included. The SARS-CoV-2 diagnosis was based on a positive polymerase chain reaction (PCR) test result or lung involvement on chest computed tomography (CT) scan congruent with SARS-CoV-2 (2). The patients were enrolled based on the following inclusion and exclusion criteria: Inclusion criteria: • Definite diagnosis of AIBDs Definition of Exacerbation In patients with active disease, a 10-point increase in the Pemphigus Disease Area Index (PDAI)/Bullous Pemphigoid Disease Area Index (BPDAI)/Mucous Membrane Pemphigoid Disease Area Index (MMPDAI) was defined as disease worsening (10). For linear IgA disease and epidermolysis bullosa acquisita, BPDAI and MMPDAI scorings were employed, respectively. Relapse or newly diagnosed uncontrolled cases were considered to be in the active phase of the disease. Relapse in patients with disease remission corresponded to the appearance of ≥ 3 new lesions in a month that did not heal spontaneously within 1 week, and in those with controlled disease corresponded to extension of established lesions (11). To determine the severity of relapses, the emergence of ≥ 20 lesions on ≥ 3 body sectors was defined as major relapse, and the appearance of < 20 lesions on < 3 body sectors as minor relapse (12). New lesions that did not comply with any of these criteria were considered transient lesions. According to a previous study, the exacerbation must have occurred within 2 weeks of the vaccine to be considered vaccine-associated (13). In summary, the criteria for defining the vaccine-associated exacerbations were as follows: • A 10-point increase in PDAI/BPDAI/MMPDAI in the active phase of the disease • ≥3 new lesions or extension of established lesions in the remission/controlled phase of the disease • Occurrence within 2 weeks of the vaccine Statistical Analysis Absolute numbers and percentages were employed for reporting qualitative variables. The quantitative variables were presented as mean with standard deviation (SD) for normally distributed variables and median with interquartile range (IQR) for non-normal ones. Logistic regression analysis was employed to predict the dependent variable of post-vaccination disease exacerbation. Odds ratios (ORs) with 95% confidence intervals (95% CIs) were measured for demographic and clinical data of the patients. Factors that demonstrated statistical significance (p < 0.05) based on univariate analysis and had no obvious collinearity with other variables were incorporated into multivariate analysis. The number needed to harm was calculated for the detected categorical risk factors as the reciprocal of the absolute risk difference (the difference in event rates between two groups).
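As a worked illustration of these two measures, the sketch below computes the crude odds ratio and the number needed to harm from a 2×2 table, using the active-phase counts reported in the Results below (18 of 40 patients vaccinated in the active phase versus 48 of 401 in the remission/controlled phase); note that this crude OR differs from the adjusted multivariate OR reported there.

```python
# Crude odds ratio and number needed to harm from a 2x2 table; counts are
# taken from the Results section (active phase vs. remission/controlled).
exposed_event, exposed_no_event = 18, 22        # vaccinated in active phase
unexposed_event, unexposed_no_event = 48, 353   # vaccinated in remission/controlled

risk_exposed = exposed_event / (exposed_event + exposed_no_event)           # 0.450
risk_unexposed = unexposed_event / (unexposed_event + unexposed_no_event)   # ~0.120

odds_ratio = (exposed_event * unexposed_no_event) / (exposed_no_event * unexposed_event)
nnh = 1 / (risk_exposed - risk_unexposed)  # ~3: one extra exacerbation per ~3 patients

print(f"crude OR = {odds_ratio:.2f}, NNH = {nnh:.1f}")
```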
All analyses were performed using SPSS (version 24; IBM, New York, United States), and statistical significance was defined as P < 0.05. RESULTS Six hundred patients with AIBD diagnosis were contacted. The flow diagram of patients' enrollment is illustrated in Figure 1. Of the patients, 446 (74.3%) reported receiving at least one dose of vaccine injection (54.7% female). Supplementary Figure 1 compares the first-dose SARS-CoV-2 vaccination rates between the present study and the country's general population. The mean age of the 446 vaccinated patients was 50.2 ± 12.5 years. The types of AIBDs were pemphigus vulgaris in 361 (80.9%) patients, pemphigus foliaceus in 38 (8.5%), bullous pemphigoid in 29 (6.5%), mucous membrane pemphigoid in 13 (2.9%), linear IgA disease in 2 (0.4%), epidermolysis bullosa acquisita in 2 (0.4%), and paraneoplastic pemphigus in 1 (0.2%). Of the patients, 334 (74.9%) were on systemic medications, mostly prednisolone (73.1%), at the time of vaccination. Twenty-nine patients (6.5%) were above minimal therapy (>10 mg/d prednisolone or equivalent). There were 184 (41.3%) patients who had a previous history of COVID-19 infection, of whom 118 (26.5%) had a confirmed diagnosis of COVID-19 and 39 (8.7%) were hospitalized. Among these infections, 31 (6.9%) occurred within 3 months of the first vaccine injection, and the others occurred before that. The detailed data of clinical characteristics and vaccine-related information of the patients are depicted in Table 1. The three most common vaccine side effects were pain at the injection site in 221 (26%) of vaccine shots, fatigue in 139 (16.3%), and flu-like symptoms in 88 (10.3%). Within 1 month of vaccinations, 17 (3.8%) patients were diagnosed with COVID-19. Of them, 14 (82.4%) were treated as outpatients, and the other 3 (17.6%) were administered remdesivir. A 56-year-old man who developed COVID-19 after the first vaccine dose was reported, in the follow-up phone call, to have passed away after hospitalization due to that infection. He was a known case of pemphigus vulgaris for 2 years with hypertension and diabetes. He was in partial remission on minimal therapy (prednisolone 5 mg) and received the first vaccine dose of Sinopharm (BBIBP-CorV); no vaccine-related side effects were reported, and the second dose was postponed due to COVID-19 symptoms. A total of 66 (14.8%) patients experienced post-vaccine disease exacerbation. Of those 401 patients who were either in remission or controlled, 34 (8.5%) patients experienced minor, and 14 (3.5%) experienced major relapses. Among 40 patients who received the vaccine in the active phase of the disease, 18 (45%) patients reported disease worsening post-vaccination. There were also five patients who were diagnosed with AIBDs after the vaccination. Two of them reported occasional lesions before the vaccine, and the other three denied any history of skin diseases and were otherwise healthy. Of the 404 fully vaccinated patients, only 17 (4.2%) developed disease exacerbation after both vaccine shots. Figure 2 demonstrates the two patients who reported disease exacerbation after the SARS-CoV-2 vaccination. According to multivariate analysis, every passing month from the last rituximab infusion increased the odds of disease exacerbation by 3 percent (OR, 1.03; 95% CI, 1.01-1.05) (p = 0.03). Vaccination in those with a history of active disease in the past year had an OR of 2.11 (95% CI, 1.49-6.49) (p = 0.009) for exacerbation, with a number needed to harm of 10.
Moreover, vaccine injection in the active phase of the disease escalated the risk of exacerbation with an OR of 8.9 (95% CI, 3.6-22.0) (p < 0.001), corresponding to a number needed to harm of 3, suggesting that for every three patients who receive the vaccine in the active phase of the disease, one would experience disease exacerbation. There was no significant difference in the exacerbation rate among AIBD subtypes (p > 0.05), as the recorded rates were 16.5% in pemphigus vulgaris, 10.5% in pemphigus foliaceus, 3.8% in bullous pemphigoid, 23.1% in mucous membrane pemphigoid, and none in the others. Due to the importance of disease exacerbation in AIBD subtypes, the detailed information of each is presented in Supplementary Table 1. Other factors, including age, sex, vaccine type, disease duration, prednisolone dosage, previous COVID-19 infection, and having risk factors of relapse (comorbidity, major stress, medication reduction/cessation, infection), had no significant effect on post-vaccination disease exacerbation (Table 2). DISCUSSION The current study assessed the effect of SARS-CoV-2 vaccination on AIBDs' course in a large cohort of patients. Previous reports suggested exacerbation of AIBDs after tetanus and influenza vaccination (14-16). A similar phenomenon has been observed with regard to the SARS-CoV-2 vaccine (6,17). In a recent study, five patients with confirmed diagnosis of AIBDs were reported to develop disease exacerbation after the first COVID-19 vaccine dose. All patients had received mRNA vaccines in the remission phase of the disease, and only one experienced a flare following the second dose (6). In another report, a patient with pemphigus vulgaris in remission on maintenance therapy with 5 mg/day of prednisone for 10 months presented with new lesions after both doses of the BNT162b2 vaccine. The exacerbations occurred 5 days after vaccine injections and were confirmed using histological and serological evaluation (17). Given the need for vaccination in AIBD patients, it is very important to have an estimate of post-vaccination disease exacerbation in a large population of patients. Our results showed that less than one-fifth of the AIBD patients experienced disease exacerbation post-SARS-CoV-2 vaccination. Moreover, there is insufficient evidence to relate all the reported exacerbations to the vaccine with certainty, and some might be coincidental events. Therefore, while dermatologists encourage patients to be vaccinated against COVID-19, they should be aware of this possible event and inform their patients about it. It is noteworthy that exacerbation of the disease after the first dose does not preclude the administration of a second dose. According to our study, only a limited number of patients suffered from disease exacerbation after both vaccine shots, which is consistent with a previous report in this regard (6). In addition to disease exacerbation, we detected five cases (three bullous pemphigoid and two pemphigus vulgaris) with a new diagnosis after vaccination. Numerous studies have reported similar cases induced by SARS-CoV-2 vaccination (7, 18-20). A recent review summarized demographic, clinical, and immunological characteristics of 35 case reports with new diagnoses following the COVID-19 vaccine. Of them, 26 were bullous pemphigoid, 6 pemphigus vulgaris, 2 linear IgA disease, and one pemphigus foliaceus. Contrary to the common epidemiology of AIBDs, they reported a female-to-male ratio of 1:1.7.
The bullous lesions presented after a median of 7 days (IQR: 3-14) following vaccine shots (21). In the largest case series of patients with SARS-CoV-2 vaccine-associated bullous pemphigoid, characteristics of 21 patients were compared to idiopathic disease. They reported similar clinical presentations, with a predominance of male patients. Immunopathological features were typical, but anti-BP230 autoantibody was remarkably reduced (22). It should be noted that some authors argue against the causal relation between the SARS-CoV-2 vaccine and AIBDs, as they recorded no increase in disease incidence in the year of vaccination (23). The mechanisms triggering disease activity after vaccination are unclear, although both dysregulation of the immune system and molecular mimicry of vaccine adjuvants or antigens were previously suggested to play a role in this regard. A recently published study has argued against the cross-reactivity relation between SARS-CoV-2 immunization and AIBDs. The authors in that study found that neither the 12 individuals with recent COVID-19 infection nor the 12 individuals with SARS-CoV-2 immunization had any concomitant autoantibodies of pemphigus or pemphigoid (24). Therefore, the role of the immune response following the SARS-CoV-2 vaccine would be highlighted. It is supposed that excessive generation of type I interferons and proinflammatory cytokines following vaccination would induce innate and adaptive immune cell proliferation (25). Triggering of humoral immunity by regulatory T cell dysfunction in susceptible persons may contribute to autoantibody production in pemphigus and pemphigoid disorders (26). Cytotoxic T cell activation might also stimulate post-vaccination exacerbation of disease, especially in the pemphigoid group (27,28). In addition, it has been shown that the SARS-CoV-2 vaccine is associated with complement dysregulation, which could be another reason for exacerbation of bullous pemphigoid and mucous membrane pemphigoid (29,30). In this study, patients who were vaccinated in the active phase of the disease were more prone to experience post-vaccine disease exacerbation, with a number needed to harm of 3. Furthermore, the disease exacerbation rate increased with a longer duration between the last rituximab infusion and vaccination. These findings can be important in the routine practice of dermatologists caring for patients with AIBDs, implying that it would be best to prescribe the COVID-19 vaccine when the patients are in the remission/controlled phase of the disease. It has been shown that immunosuppressive drugs reduce the efficacy of SARS-CoV-2 vaccines (31). Patients with immune-mediated skin conditions treated with immunosuppressants had low levels of serum anti-SARS-CoV-2 IgG antibodies, according to a recent study (32). The higher doses of corticosteroids in patients with active disease would attenuate the response to the vaccine. Therefore, vaccine administration in the remission/controlled phase of the disease would improve vaccine-induced immunity along with lowering the risk of disease exacerbation. This was previously suggested in the expert recommendations for the management of AIBDs during the COVID-19 pandemic, which can now be relied on with greater certainty (33). With regard to rituximab, the same group of experts suggested that vaccination should be completed more than 4 weeks prior to rituximab or 12-20 weeks after the last infusion.
Although vaccination was found to be safe, vaccine efficacy, CD20 B-cell count, and seroconversion rate were not studied in the present cohort; therefore, we cannot draw any conclusion regarding the optimal timing for obtaining the best efficacy. The authors acknowledge the limitations of this study. It is important to mention that there might be multiple factors affecting AIBD exacerbation. Thus, merely observing an exacerbation of the disease after vaccination cannot be concrete evidence for a probable association. Since the beginning of the COVID-19 pandemic, countries have been affected differently, and each has chosen a particular approach for prevention, vaccination, and treatment. Pre-existing differences between countries may lead to variable results. Therefore, replicating the current study in further research from other regions would be crucial to reinforce the results. Taking the following measures could further determine the effect of the SARS-CoV-2 vaccine on disease exacerbation. First, performing laboratory assessments may help to elucidate the immunopathogenesis of this phenomenon, including different aspects of innate and adaptive immunity. Moreover, it could help detect patients who experience only serologic changes and minimal or no clinical findings after SARS-CoV-2 vaccination. Second, a comparison of the exacerbation rates in vaccinated and unvaccinated patients during the same period, while controlling for other confounding factors, would provide further information regarding the impact of SARS-CoV-2 vaccines on the natural history of AIBDs. Despite these limitations, the message of this study regarding the low rate of exacerbation after SARS-CoV-2 vaccination and the identification of individuals more susceptible to this phenomenon can still be trusted. CONCLUSION Together, these findings suggest that there might be an association between SARS-CoV-2 vaccination and exacerbation of AIBDs. Active disease was especially concerning, with a number needed to harm of 3. Still, the benefits of vaccination undoubtedly outweigh the potential risk of complications. Vaccination is therefore strongly recommended for patients with AIBDs, although preferably in the remission/controlled phase of the disease. DATA AVAILABILITY STATEMENT The original contributions presented in this study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. ETHICS STATEMENT The studies involving human participants were approved by the Tehran University of Medical Sciences Ethics Committee (IR.TUMS.MEDICINE.REC.1400.911). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS MD, HM, and KB contributed to conception and design of the study. NK, SD, and AS gathered data. NK and SD performed the statistical analysis and wrote the first draft of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Effect of Surface Treatment on Enamel Cracks After Orthodontic Bracket Debonding: Er,Cr:YSGG Laser-Etching Versus Acid-Etching. Objectives This study sought to compare enamel cracks after orthodontic bracket debonding in the surfaces prepared with erbium, chromium: yttrium-scandium-gallium-garnet (Er,Cr:YSGG) laser and the conventional acid-etching technique. Materials and Methods This in-vitro experimental study was conducted on 60 sound human premolars extracted for orthodontic purposes. The teeth were randomly divided into two groups (n=30). The teeth in group A were etched with 37% phosphoric acid gel, while the teeth in group B were subjected to Er,Cr:YSGG laser irradiation (gold handpiece, MZ8 tip, 50 Hz, 4.5 W, 60 μs, 80% water and 60% air). Orthodontic brackets were bonded to the enamel surfaces and were then debonded in both groups. The samples were inspected under a stereomicroscope at ×38 magnification to assess the number and length of enamel cracks before bonding and after debonding. Independent-samples t-test was used to compare the frequency of enamel cracks in the two groups. Levene's test was applied to assess the equality of variances. Results No significant difference was noted in the frequency or length of enamel cracks between the two groups after debonding (P>0.05). Conclusions Despite similar results for the frequency and length of enamel cracks in the two groups, and by considering the side effects of acid-etching (demineralization and formation of white spot lesions), Er,Cr:YSGG laser may be used as an alternative to acid-etching for enamel surface preparation prior to bracket bonding. INTRODUCTION Brackets are used in fixed orthodontics to force the teeth to move in three dimensions. The introduction of direct bracket bonding revolutionized orthodontic treatments; however, establishing a sufficiently strong bond to enamel to keep the brackets in place during the entire course of treatment, yet not so strong as to damage the enamel upon debonding, has remained a challenge [1]. The bond between the bracket and enamel is based on mechanical interlocking of the adhesive into the microporosities of the enamel surface. Therefore, a successful bond requires precise enamel surface preparation [2]. The introduction of the acid-etching technique enabled bonding of orthodontic brackets to the enamel surface. Some modifications have been made in this technique to accelerate the procedure and decrease the extent of enamel demineralization [3]. At present, enamel preparation with acid-etching is the gold standard in orthodontic treatments, and 30% to 50% phosphoric acid gel, applied for 30 to 60 seconds, is commonly used for this purpose. Although removing the interprismatic mineral structure of the enamel surface by acid-etching and creating a rough surface enhances the retention of adhesive resins, the treated enamel becomes more susceptible to caries. Acid-etching removes the superficial protective enamel layer, making the teeth more vulnerable to long-term acid attacks. This problem is magnified when the acid-etched surface is not entirely covered by resin or is exposed to saliva before resin application [4]. Thus, researchers have long been in search of alternative conditioning methods to overcome the disadvantages of acid-etching with a phosphoric acid etchant. Surface treatment with erbium, chromium: yttrium-scandium-gallium-garnet (Er,Cr:YSGG) laser has been suggested as an alternative method to achieve this purpose.
Although Er,Cr:YSGG laser was introduced to dentistry for ablation of hard and soft dental tissues, its sub-ablative irradiation has been proposed as an alternative to acid-etching of enamel and dentin. It seems that laser-etching is a suitable alternative to acid-etching of enamel since it is painless and creates no vibration or heat. Additionally, laser-etching of enamel creates microporosities that are perfect for resin penetration [5]. Due to the benefits of laser-etching over the acid-etching technique, the former is becoming increasingly popular for routine clinical use [6]. Carbon dioxide (CO2) laser irradiation alters the calcium-phosphate ratio and confers resistance to the enamel against acid attacks [7]. Moreover, laser-etching is time-saving since water-spraying and air-drying are not required in Er,Cr:YSGG laser-etching; therefore, the risk of salivary contamination during rinsing and drying is eliminated [8]. Previous studies on laser-etching have mainly focused on bond strength [6,9,10], and not on the frequency of cracks after debonding. Considering the gap of information in this respect, this study sought to assess and compare enamel cracks after orthodontic bracket debonding in the surfaces prepared with Er,Cr:YSGG laser-etching and conventional acid-etching techniques. MATERIALS AND METHODS This in-vitro experimental study was conducted on 60 human premolars, freshly extracted for orthodontic purposes. The sample size was calculated using the two-sample t-test power analysis procedure of the PASS 11 software program (SPSS Inc., Chicago, IL, USA). The inclusion criteria consisted of maxillary and mandibular premolars of patients aged 13-19 years with a normal anatomical form and sound enamel, without any cracks, fractures, caries or fluorosis, and with no history of surface treatment with chemical agents (such as bleaching treatment with hydrogen peroxide). The specimens were evaluated under a stereomicroscope (SNZ1000, Nikon, Tokyo, Japan) at ×38 magnification to ensure that all the teeth met the inclusion criteria. The teeth were stored in saline at 4°C for one month. Microscopic examination of the enamel surface before bracket bonding: To standardize the viewing conditions under the microscope, each tooth was mounted in a modeling dough on a plate while another plate of the same size was compressed over it in order to position the buccal surface parallel to the horizontal (Fig. 1). The cracks and their directions were observed under the stereomicroscope at ×38 magnification with light illumination. As recommended by Pickett et al. [15], the teeth were rotated 360° around the central point of their buccal surfaces; otherwise, the cracks in the same direction as the light rays could not be visualized. The length of the cracks on the surfaces of 10 samples was measured by a ruler on the images transferred to a computer. The cracks that were not in the form of a straight line were divided into smaller straight lines with different directions. The lengths of these small segments were measured and added to obtain the entire crack length. By considering the magnification parameters and the distance between the lens and the tooth surface, the length of each unit of the ruler was calculated to be 62.5 µm. Thus, the length of the cracks was initially calculated in microns and was then converted to millimeters.
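A small sketch of the segment-summing and unit conversion just described; the 62.5 µm per ruler unit is the calibration stated above, while the traced vertex coordinates in the example are hypothetical.

```python
# Sum the straight segments of a piecewise-linear crack trace (in ruler
# units), then convert to microns (62.5 um per unit, per the calibration
# above) and to millimeters. The example coordinates are hypothetical.
import math

UM_PER_RULER_UNIT = 62.5  # calibration at x38 magnification

def crack_length_mm(vertices):
    """vertices: list of (x, y) points along one crack, in ruler units."""
    units = sum(math.dist(a, b) for a, b in zip(vertices, vertices[1:]))
    return units * UM_PER_RULER_UNIT / 1000.0  # microns -> millimeters

print(crack_length_mm([(0, 0), (30, 12), (55, 40), (90, 52)]))  # ~6.7 mm
```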
After evaluating the structural pattern of the buccal surface of each tooth, the number and length of enamel cracks were recorded by two observers from the Anatomy Department and the Histomorphometry and Stereology Research Center of Shiraz University of Medical Sciences. Each crack was allocated a number (Fig. 2a). In order to standardize the conditions, the number and length of cracks after debonding were recorded by using the previously described method and by the same observers (Fig. 2b). The microscope was connected to a computer equipped with a digital camera (Sony, Tokyo, Japan). The brackets were debonded using bracket-removing pliers (Dentaurum, Ispringen, Germany), according to the manufacturer's instructions. A shear peeling force was applied by the pliers to the bracket wings until they were detached from the enamel surface. Microscopic examination of the enamel surface after bracket debonding: By using the digital camera connected to the stereomicroscope and the Stereolith software program, the bonding area on the tooth surface was divided into 96 smaller areas. Each small area represented one unit with a surface area of 0.126 mm². The total bonded surface area was 12.096 mm², which was equal to the base area of the bracket. The surface area covered by adhesive remnants was calculated in mm² and was reported as a percentage. The ARI score, described by Artun and Bergland [18], was calculated as follows: score 0 indicated no adhesive remnant on the enamel surface; score 1 indicated that less than half of the adhesive was remaining on the surface; score 2 indicated that more than half of the adhesive was remaining on the surface; and score 3 indicated that the entire adhesive was left on the surface. The composite and adhesive remnants were removed and the enamel surfaces were polished using a low-speed handpiece (operating at 30,000 rpm) and a tungsten carbide bur (Dentaurum, Ispringen, Germany) under water coolant [19]. The teeth were observed again under the microscope and the frequency, length, and direction of enamel cracks were studied by the same two observers. Statistical analysis: The data were analyzed using the SPSS version 20 software program (SPSS Inc., Chicago, IL, USA). The analysis of covariance was applied to compare the frequency and length of enamel cracks between the two groups after debonding by considering the baseline values as the covariate. The Mann-Whitney U test was applied to evaluate the differences in the ARI scores between the two groups. P<0.05 was considered statistically significant. RESULTS The means and standard deviations (SD) of the frequency and length of cracks before and after acid-etching and laser-etching are presented in Table 1. The mean±SD number of cracks in the acid-etched and laser-etched groups was 2.07±0.333 and 1.93±0.509, respectively. The mean±SD crack length in the acid-etched and laser-etched groups was 12734.41±4104.42 and 11557.1±5586.056 µm, respectively. No significant difference was noted in the frequency or length of enamel cracks between the two groups before debonding (P>0.05); therefore, the two groups were identical with regard to these characteristics before the intervention. The results of the analysis of covariance showed that there were no significant differences in the length and number of cracks between the groups after the intervention (P=0.356 and 0.199, respectively). The ARI scores are presented in Table 2.
The ARI scores of the acid-etched group were significantly higher than those of the laser-etched group (P<0.001). DISCUSSION Direct bracket bonding offers many benefits in contemporary orthodontics; however, the enamel surface preparation method and type of adhesive can significantly affect bracket bonding. As explained by Martinez-Insua et al. [4] in 2000, conventional acid-etching has several disadvantages, including removal of the superficial protective enamel layer and demineralization, which make the teeth more vulnerable to long-standing acid attacks. This is especially important when the acid-etched surface is not entirely covered by resin and is exposed to saliva. Considering the shortcomings of acid-etching, Ozer et al. [6] in 2008 and Lee et al. [8] in 2003 introduced laser-etching as a suitable alternative to acid-etching of the enamel surface. In the current study, the frequency and length of enamel cracks in the buccal surface and the ARI scores were compared between the two groups of teeth subjected to acid-etching and laser-etching. The results revealed no significant difference in terms of the length or number of cracks between the two groups after orthodontic bracket debonding. The fragility of enamel depends on the age of the patient, since the organic and mineral contents of the enamel surface change with aging; thus, the extracted teeth of 13-19-year-old patients were used in the current study due to their low susceptibility to fracture [20]. A search of the literature yielded no previous study on the effect of laser-etching of the enamel surface prior to bracket bonding on the frequency and length of enamel cracks after debonding. Thus, we compared our findings with those of the previous studies on the bond strength following laser-etching and acid-etching. Several studies have evaluated the efficacy of enamel surface preparation with laser prior to orthodontic bracket bonding. The morphological changes in the enamel caused by laser irradiation depend on the energy density of the laser, duration of exposure, distance of the laser handpiece from the surface and frequency of water and air spraying [21,22]. In the current study, the ARI scores were also compared between the two groups. According to the Mann-Whitney U test, a significant difference in the mean rank of the ARI score was found between the two groups, and a lower value was observed in the laser-etched group. In other words, less adhesive remained on the enamel surface in this group, which is in line with the results of the study by Hosseini et al. [17] in 2012, but in contrast to those of the study by Gokcelik et al. [23] in 2007, since the latter showed higher ARI scores in the Er:YAG laser-etched samples compared to those in the acid-etched group. The difference between our results and those of Gokcelik et al. [23] is probably due to the different types of lasers applied. In our study, based on the ARI scores, debonding mainly occurred at the resin-enamel interface, leaving less adhesive remnant on the enamel surface in the laser-etched group; therefore, less time is needed for resin removal, with a lower risk of damaging the enamel surface. Thus, this type of bonding is clinically favorable [1].
It should be noted that debonding at the resin-enamel interface has a higher frequency in the clinical setting compared to the in-vitro conditions, because factors in the oral environment such as thermal changes, humidity, temperature and microbial plaque compromise the enamel-etching and decrease its efficacy [24]. Moreover, the structural pattern of the bracket base is designed in such a way that debonding is uncommon at the resin-bracket interface [25]. In contrast to the current results, Lee et al. [8] observed that the teeth prepared with acid-etching or Er:YAG laser irradiation showed a higher frequency of adhesive fractures at the resin-bracket interface. Such a difference in the results may be attributed to the different types of tests, since Lee et al. [8] performed a tensile bond strength test. Similarly, Valletta et al. [26] reported that debonding occurred mainly at the bracket-resin interface during tensile bond strength testing and at the resin-tooth interface in shear bond strength testing, which is in line with our findings. In contrast to our results, Fernandez and Canut [24] observed a higher frequency of bond failure at the bracket-resin interface. Proffit et al. [27] stated that the greatest damage to the enamel occurs after debonding at the enamel-resin interface, which is in contrast to our findings. In previous studies [15,28], in order to observe enamel cracks and measure their lengths, the teeth had been fixed in only one direction and illuminated from another direction under a microscope; thus, only the cracks perpendicular to the direction of the light rays were visualized, while in the present study, the teeth were rotated 360° around the center of their buccal surfaces to detect all enamel cracks with different orientations. In this method, the whole length of enamel cracks, even curved cracks, was recorded. Also, we had a relatively large sample size, which increased the reliability of our findings. These were among the strong points of the current study. However, the current study had an in-vitro design. In-vitro studies cannot completely simulate the oral clinical environment in terms of thermal changes, humidity, acid attacks and microbial plaque. Moreover, the force applied to the brackets under the laboratory conditions is different from that in the clinical setting. Thus, the generalization of in-vitro results to the clinical setting must be done with caution. Adhesive failure at the enamel-adhesive interface, although favorable in terms of leaving minimal adhesive remnants on the enamel, may negatively affect the shear bond strength in the laser-etched samples. Thus, this issue must be investigated in future studies. Also, further studies are recommended to find the most suitable settings of Er,Cr:YSGG laser irradiation to obtain the most favorable results. CONCLUSION Within the limitations of this study, no significant difference was noted in the frequency or length of enamel cracks after bracket debonding between the two groups of laser-etching and acid-etching. Therefore, by considering the side effects of acid-etching (demineralization and formation of white spot lesions), Er,Cr:YSGG laser irradiation with the exposure settings applied in this study is recommended as an efficient alternative to acid-etching for enamel surface preparation prior to bracket bonding.
Dry matter production and accumulation in different plant parts of rice cultivars as influenced by irrigation regimes and systems of cultivation
Alternate wetting and drying (AWD) irrigation has been widely adopted to replace continuous flooding (CF) irrigation for saving water and increasing water productivity in irrigated rice systems. There is limited information on the performance of different rice cultivars under different establishment methods. A field experiment was conducted on a clay loam soil at the Indian Institute of Rice Research (IIRR), formerly the Directorate of Rice Research (DRR), Rajendranagar, Hyderabad, Telangana, during the kharif seasons of 2017 and 2018 to study the "productivity and water use efficiency of rice cultivars under different irrigation regimes and systems of cultivation". The treatments consisted of two irrigation regimes, alternate wetting and drying and saturation, as main plot treatments; three establishment methods, System of Rice Intensification (SRI), drum seeding (DS) and normal transplanting (NTP), as sub plot treatments; and four cultivars, namely DRR Dhan 42, DRR Dhan 43, MTU-1010 and NLR-34449, as sub-sub plot treatments, summing up to 24 treatment combinations laid out in a split-split plot design with three replications. Among the cultivars, DRR Dhan 43 registered higher dry matter production at 90 DAS/DAT and at harvest as compared to other cultivars, whereas MTU-1010 and NLR-34449 recorded on-par dry matter production values at all the crop growth stages during both the years of study. However, DRR Dhan 42 produced the lowest dry matter production compared to other genotypes. DRR Dhan 43 recorded higher dry matter accumulation (g m-2) in root, stem and leaves at all the crop growth stages, during both the years of the study, over other cultivars.
Introduction
Rice is one of the most important cereal crops, occupying the second position in global agriculture, and it is widely grown in India due to its wider adaptability. To safeguard and sustain food security in India, it is quite important to increase the productivity of rice under limited resources, especially land and water. Hence, the major challenge is to produce more rice per unit amount of natural resources. As per the concepts of water footprint and virtual water, 3000 to 5000 litres of water are required to produce one kg of rice. Being a water-intensive crop, cultivation of rice has been a big drain on water resources. Rice is a heavy water consumer, but water for rice production is becoming scarce and expensive due to the increased demand for water from the ever-growing population and industries (Choudhury et al., 2014) [4]. Rainfall patterns in many areas are becoming more unreliable, with extremes of drought and flooding occurring at unexpected times. Traditional planting has been the most important and common method of crop establishment under irrigated lowland rice ecosystems in tropical Asia. Irrigated lowland rice not only consumes more water but also causes wastage of water, resulting in degradation of land. In recent years, to tackle this problem, many methods of cultivation have been developed, and one among them is the System of Rice Intensification (SRI). Growth and yield characteristics of any cultivar depend on genetic and environmental factors. Among the different production factors, varietal selection at any location plays an important role. Proper crop management depends on the growth characteristics of various varieties to get maximum benefit from new genetic material.
Among the different water-saving irrigation methods in rice, the most widely adopted is alternate wetting and drying (AWD). Many rice cultivars vary in their performance under different systems of cultivation. Higher dry matter production per unit area is the critical prerequisite for higher yield. The amount of dry matter production and partitioning depends on effective photosynthesis and respiration of the crop. The total yield of dry matter is the total amount of dry matter produced, less the photosynthates used for respiration. Finally, the manner in which the net dry matter produced is distributed among the different parts of the plant determines the magnitude of the economic yield (Arnon, 1972) [3]. There was a progressive and conspicuous increase in root, stem and leaf dry matter accumulation (g m-2) with the advancement of crop growth stage up to 90 DAT.
Material and Methods
The experiment comprised two irrigation regimes (alternate wetting and drying, and saturation) as main plot treatments, three establishment methods (SRI, drum seeding and normal transplanting) as sub plot treatments, and four cultivars, namely DRR Dhan 42, DRR Dhan 43, MTU-1010 and NLR-34449, as sub-sub plot treatments, laid out in a split-split plot design with three replications. The area of each gross plot was 7 m × 3 m. Seedlings were transplanted with an average of one seedling per hill in the SRI method of planting. FYM @ 10 t ha-1 was uniformly applied to all the plots before final puddling and levelling. The recommended dose of phosphorus @ 60 kg P2O5 ha-1 as single super phosphate (SSP) was applied to all the treatments uniformly as basal, and potassium @ 40 kg K2O ha-1 as muriate of potash (MOP) was applied in two splits, 75 per cent as basal and the remaining 25 per cent at 75 DAS/DAT. The recommended dose of nitrogen (120 kg ha-1) was applied through urea in three splits: 50 per cent as basal, 25 per cent at 50 DAS/DAT and the remaining 25 per cent at 75 DAS/DAT.
Results and Discussion
Effect on dry matter production and accumulation
The increase in average total dry matter production of rice was rather slow up to 30 DAS; thereafter it increased linearly up to 90 DAS, and it continued to increase until maturity, but at a diminishing rate, in both the years of study (Table 1 and Fig. 1). There was a progressive increase in dry matter accumulation (g m-2) in different plant parts, viz. root, stem and leaves, with the advancement of crop growth stage up to 90 DAS/DAT (Fig. 2). Among the irrigation regimes, AWD recorded relatively higher total dry matter production (Table 1 and Fig. 1). It is because of rapid growth, through maintenance of adequate wetness with intermittent water supply to the crop, that good plant roots were maintained and varied metabolic processes performed higher nutrient mobilization. These results were also in line with the observations made by Lu et al. (2000) [8], Kumar et al. (2013) [7] and Chowdhury et al. There was no significant difference in dry matter accumulation (g m-2) of root, stem and leaves among the irrigation regimes during both the years of study (Fig. 2). Relatively higher dry matter accumulation (g m-2) of root was observed in AWD at 30, 60 and 90 DAS and at harvest during both years. It might be due to increased root oxidation activity and root-sourced cytokinins under intermittent irrigation in AWD. This finding was in conformity with the findings of Armstrong and Webb (1985) [2], who observed the possibility of extended growth of rice roots under the influence of oxygen. Among the systems of cultivation, SRI recorded significantly higher dry matter production at all growth stages (Table 1 and Fig. 1). Higher dry matter production of the above treatment may be attributed to better establishment of seedlings and more tillers m-2. Significantly lower dry matter was recorded with drum seeding at all the stages except at 30 DAS.
The lowest dry matter production in the drum seeding method may be attributed to non-uniform plant stand and fewer tillers m-2. This was supported by Anbumani et al. (2004) [1]. The higher dry matter production in the SRI method was attributed to planting of young seedlings at shallow depth in wider spacing and to cono-weeding, which leads to taller plants, higher leaf area, better root growth, and profuse and strong tillers with higher crop growth rate. Increased shoot:root ratio and production of more tillers hill-1 under wider spacing were the reasons for increased dry matter production (Rajesh and Thanunathan, 2003) [11]. In addition to that, cono-weeding increased the soil aeration, which enhanced the availability of dissolved oxygen in irrigation water, thereby increasing the shoot:root ratio and LAI and subsequently increasing dry matter production (Uphoff, 2002) [15]. The results obtained in this investigation are in conformity with the findings of Hussain et al. (2012) [5], Sridevi and Chellamuthu (2012) [14] and Rajendran et al. (2013) [10]. Dry matter accumulation in different plant parts was superior with the system of rice intensification over drum seeding and NTP during both the years of study at all the growth stages (Fig. 3). Less interplant competition would have enabled the plants to have more physiological activity. In square planting with wider spacing, more soil area was available for foraging, thus leading to improved root growth in SRI. This is in accordance with the observations of Jayakumar et al. (2005) [6], Priyanka et al. (2013) [9], Rani and Sukumari (2013) [12], and with earlier reports [13] and Vijay (2018) [16]. Among the cultivars, dry matter accumulation (g m-2) in root, stem and leaves at all the crop growth stages, in both the years of study and in pooled means, was statistically non-significant, except for DRR Dhan 43, which recorded significantly higher dry matter accumulation (g m-2) during both the years of the study over other cultivars.
Effect of interaction
The interaction was statistically non-significant among irrigation regimes, systems of rice cultivation and rice cultivars on dry matter production and accumulation at all the growth stages during both the years of study.
Conclusion
Results revealed that the increase in average total dry matter production of rice was rather slow up to 30 DAS; thereafter it increased linearly up to 90 DAS, and it continued to increase until maturity, but at a diminishing rate, in both the years of study. There was a progressive increase in dry matter accumulation (g m-2) in different plant parts, viz. root, stem and leaves, with the advancement of crop growth stage up to 90 DAS/DAT under semi-arid tropical climatic conditions in clay loam soil at the Indian Institute of Rice Research (IIRR), Rajendranagar, Hyderabad.
2020-02-13T09:12:34.449Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "04fdaf79146bcd743fc72bcc71b1d17529c7f994", "oa_license": null, "oa_url": "https://www.chemijournal.com/archives/2020/vol8issue1/PartX/8-1-132-476.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "765679132441f679855ea3573794332396c63add", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
10191635
pes2o/s2orc
v3-fos-license
A Preliminary Study of Three-dimensional Sonographic Measurements of the Fetus
Objectives: This study was aimed at establishing an ideal method for performing three-dimensional measurements of the fetus in order to improve the estimation of fetal weight. Methods: The study consisted of two phases. Phase I was a prospective cross-sectional study performed between 28 and 40 weeks' gestation. The study population (n=110) comprised low-risk singleton pregnancies who underwent a routine third-trimester sonographic estimation of fetal weight. The purpose of this phase was to establish normal values for the fetal abdominal and head volumes throughout the third trimester. Phase II was a prospective study that included patients admitted for an elective cesarean section or for induction of labor between 38 and 41 weeks' gestation (n=91). This phase of the study compared the actual birth weight to two- (2D) and three-dimensional (3D) measurements of the fetus. Conventional 2D ultrasound fetal biometry was performed measuring the biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), and femur diaphysis length (FL). Volume estimates were computed utilizing Virtual Organ Computer-aided AnaLysis (VOCAL), and the correlation between measured volumes and actual neonatal weight was calculated. Results: Overall, this longitudinal study consisted of 110 patients between 28 and 41 weeks' gestation. Normal values were computed for the fetal abdomen and head volume throughout the third trimester. Ultrasound examination was performed within three days prior to delivery on 91 patients. A good correlation was found between birth weight and abdominal volume (r=0.77) and between birth weight and head volume (r=0.5). Correlation between bidimensional measurements and actual fetal weights was found to be comparable with previously published correlations. Conclusion: Volume measurements of the fetus may improve the accuracy of estimating fetal size. Additional studies using different volume measurements of the fetus are necessary.
INTRODUCTION
Fetal measurements obtained by prenatal ultrasonography have become an integral part of fetal assessment. They are used for estimating fetal weight and for measuring fetal organs. Fetal weight estimation is obviously important for recognizing intrauterine growth restriction and macrosomia, both of which require planning the time and mode of delivery. Measurements of fetal organs are also important for diagnosing fetal abnormalities such as microcephaly and skeletal abnormalities. Therefore, different algorithms and tables have been established in the past for estimating fetal weight and for creating nomograms for fetal organ size throughout gestation. 1,2 However, the traditional methods for these measurements were based on the use of two-dimensional (2D) ultrasound. For example, even though the fetal body is a voluminous mass, its weight is traditionally calculated by using only two dimensions, with a 10%-15% deviation. We hypothesized that by using three-dimensional (3D) ultrasound we would be able to improve the accuracy of fetal measurements as well as the estimation of fetal weight. To that end, we initiated this preliminary study to determine the ideal method for performing 3D measurements of the fetal abdomen and head. This paper presents the results of that study.
METHODS
The study was performed in the Division of Ultrasound in Obstetrics and Gynecology at Rambam Medical Center, Haifa, Israel between January 2011 and July 2012.
The study consisted of two phases and two different study populations, respectively.
Phase I
Phase I was aimed at establishing the normal values for fetal abdominal and head volumes throughout the third trimester of pregnancy. A prospective cross-sectional study was performed between 28 and 40 weeks of gestation. All patients included in the study had low-risk singleton pregnancies; each patient underwent a routine third-trimester sonogram to estimate fetal weight.
Phase II
Phase II was a prospective study that included patients admitted for an elective cesarean section or for induction of labor between 38 and 41 weeks of gestation. This phase of the study compared the actual birth weights with the estimated 2D and 3D fetal measurements. The criteria for participating in the study included: well-defined gestational age based on embryonic/fetal crown-rump length measurement during the first trimester; normal fetal anatomy scans; delivery within three days of acquisition of the 2D measurements and 3D volumes. The study was approved by the Institutional Review Board, and all participating patients signed an informed consent. Maternal age, gestational age, and parity were recorded at the time of the scan. The subjects included in this study were mostly Caucasians from all socioeconomic backgrounds. Data on the gestational age at birth, mode of delivery, and clinical characteristics of the newborn were collected postpartum from the hospital records of the mothers and the neonates. All neonates were weighed immediately after birth in the delivery room.
Equipment Used for the Studies
Ultrasound examinations were performed using a Voluson 730 Pro (GE Healthcare, Solingen, Germany) machine with a RAB 4-8L probe. All ultrasound examinations were performed transabdominally by two physicians (U.E. and Z.W.).
Two-dimensional Ultrasound Measurements
Conventional 2D ultrasound fetal biometry was performed as follows: Head measurements were obtained in the axial view at the level of the cavum septi pellucidi, where both thalami could be seen symmetrically and the anterior and posterior aspects of the cerebral falx were equidistant to the parietal bones. The biparietal diameter (BPD) was measured from the outer edge of the proximal parietal bone to the inner edge of the distal skull table, in a line perpendicular to the orientation of the cerebral falx. The head circumference (HC) was calculated using the scanner's automatically generated ellipse including the outer margins of the fetal skull. Abdominal circumference (AC) was measured using the scanner's automatically generated ellipse on a transverse circular view of the abdomen at the level of the stomach and the portoumbilical vein complex. Femur diaphysis length (FL) was measured in a plane in which the full femoral diaphysis was almost parallel to the transducer surface, and the measurement was taken from one end of the diaphysis to the other.
Three-dimensional Ultrasound Measurements
Acquisition and storage of 3D data sets of the fetal head and abdomen were performed as follows: Initially, the transducer was held over the planes described above for the BPD and AC 2D acquisitions. Volumes were acquired using automatic sweeps; the sweep angle was set at 30°. The acquisition process was repeated if there was any maternal or fetal movement.
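Regarding the ellipse-based HC and AC measurements described above: the exact perimeter routine used by the scanner software is not specified in the text, but a common closed-form choice for an ellipse perimeter is Ramanujan's approximation, sketched below for illustration only (the dimensions are invented):

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Approximate the perimeter of an ellipse with semi-axes a and b
    using Ramanujan's first approximation; the error is far below 0.1%
    for axis ratios typical of fetal head/abdomen cross-sections."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Example: a fetal head modeled as an ellipse with diameters 95 mm x 78 mm
hc = ellipse_circumference(95 / 2, 78 / 2)
print(f"Approximate head circumference: {hc:.1f} mm")  # ~272 mm
```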
Head and abdomen volume acquisition by the VOCAL technique was performed as follows: the data set containing the fetal head or abdomen was displayed on the screen in the transverse view, and this image was rotated so that the head or the abdomen was identified in a perpendicular position. Volume estimates were computed using the Virtual Organ Computer-aided AnaLysis (VOCAL) program version 5.3 (GE Medical Systems, Solingen, Germany) with a manual trace at 30° of rotation, so that six planes were demonstrated. Traces of the scanned organ contours were performed manually using a touch-screen stylus pen directly on the displayed image. Figures 1 and 2 present the head and abdominal planes used for reconstructing the 3D images. Fetal abdominal volume was measured between the fetal diaphragm and pelvis. Fetal head volume was measured above the base of the skull.
Statistical Analysis
Normal values for the abdomen and head volumes were calculated throughout the third trimester of pregnancy. Pearson's correlation was used to compare the 2D and 3D measurements with the birth weights.
RESULTS
Phase I
A total of 110 patients participated in the longitudinal study between 28 and 41 weeks of gestation. Three patients who developed intrauterine growth restriction and three patients who developed gestational diabetes were excluded from the study. The mean maternal age of this study group was 30.45±4.9 years; 55% were primiparous. Mean birth weight was 3498.7±480 g, and mean gestational age at delivery was 40.23±1.3 weeks. The normal values calculated for the fetal abdomen and head volumes are presented in Table 1.
Phase II
A total of 91 patients had an ultrasound examination performed within three days prior to delivery. The mean maternal age of this study group was 29.65±3.9 years; 25% were primiparous. Mean birth weight was 3454.7±440 g, and mean gestational age at delivery was 39.33±1.3 weeks. Correlations between 2D and 3D measurements of the fetus and birth weight are presented in Table 2. As shown in this table, similar results were obtained using 2D or 3D measurements.
DISCUSSION
Prenatal 3D ultrasound has been widely used during the last decade for different purposes. Fetal volume measurements have been studied in the first trimester of pregnancy, suggesting that a small fetal volume may result in earlier detection of high-risk pregnancies. [2][3][4] During the second trimester, 3D ultrasound appeared to be valuable in the anatomical survey of the fetus. [5][6][7] There has been a clear benefit in using 3D ultrasound for detection of clinical situations such as facial cleft, brain anomalies, and spinal defects. One of the most studied fields using fetal 3D ultrasound has been fetal echocardiography. 8 Finally, volume measurements have also been used for estimating the amniotic fluid volume and placental size. 5 Estimation of fetal size is one of the most important goals in prenatal diagnosis. Prenatal diagnosis of intrauterine growth restriction allows early intervention and improvement of pregnancy outcome, while prenatal diagnosis of fetal macrosomia may avoid birth trauma. However, estimation of fetal weight based on the existing formulas is still limited, especially in macrosomic fetuses. 2,9 Hoping that 3D fetal measurements would improve estimation of fetal size, we initially focused on studying the fetal abdominal and head volume. There are few publications attempting to describe fetal measurements via 3D techniques. Bromley et al. used offline 3D reconstruction of the third-trimester fetus.
The authors concluded that this technique is a reliable method for estimating fetal weight. 10 Yang et al. have shown that the use of 3D ultrasound compared to 2D, even by an inexperienced operator, allows faster measurements of the fetus. 11 Nardozza et al. performed 3D measurements of the fetal upper arm and thigh and created formulas to predict birth weight. The authors concluded that the new formulas, based on 3D measurements, were not superior to 2D formulas. 12 We have used rotational measurements of volume using the VOCAL imaging program, which extends the 3D view. This technique allows rotation of the 3D data set around a central axis through a number of rotation steps. Volume calculation in the in vitro setting has been proved reliable and valid to within 4% of the "actual" volume. 13 It is noteworthy that comparison between contemporaneous sonographic and 3D magnetic resonance at late gestational age demonstrated an acceptable correlation between the two techniques for fetal head and abdomen measurements. 14 Our results in the Phase II study demonstrated a similar correlation with birth weight when comparing the conventional 2D and the 3D sonograms. However, the fetal abdomen and head volume measurements were not superior to the traditional 2D measurements of abdominal and head circumference, and biparietal diameter. In conclusion, fetal volume measurements may improve the accuracy of fetal size estimations. Future studies should use different volume measurements, which may improve the accuracy. 4. Smeets NA, Prudon M, Winkens B, Oei SG. Fetal volume measurements with three dimensional ultrasound in the first trimester of pregnancy outcome, a
2016-05-12T22:15:10.714Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "41af42ba1dbe524dbab95e774ece975681012e41", "oa_license": "CCBY", "oa_url": "https://www.rmmj.org.il/userimages/494/0/PublishFiles/494Article.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8bc4bd83aff4ef85d7fb8d4db2cc95b2cf14d947", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
30046837
pes2o/s2orc
v3-fos-license
Activation of fetal promoters of insulinlike growth factor II gene in hepatitis C virus-related chronic hepatitis, cirrhosis, and hepatocellular carcinoma.
Increased prevalence of hepatitis C virus (HCV) infection has been found in patients with hepatocellular carcinoma (HCC). The expression of insulinlike growth factor II (IGF-II) has been linked to hepatocarcinogenesis in the experimental animal and in humans. Since reactivation of fetal IGF-II transcripts has been observed in human HCC, we have analyzed the levels of adult P1 and fetal P3 and P4 IGF-II promoter-derived transcripts in the liver of patients with HCV-related chronic active hepatitis (CAH), cirrhosis, and HCC by means of a semiquantitative reverse-transcription polymerase chain reaction (RT-PCR) assay. Transcripts derived from adult P1 promoter were increasingly expressed from normals to patients with CAH and cirrhosis, but were undetectable in the tumorous area of 5 of 7 HCC patients and present at low levels in the nontumorous area of all HCC patients. Transcripts derived from fetal P3 promoter were not detectable in normal subjects, while they were expressed abundantly in most CAH and all cirrhotic patients. Transcripts from fetal P4 promoter were detected at high levels in 3 of 9 CAH patients and in the majority of cirrhotic patients. Increased expression of fetal promoter-derived transcripts was also found in the liver of HCC patients, although levels were lower than in cirrhosis. Also, the activity of fetal P3 and P4 promoters was higher in the nontumorous than in the tumorous area of the liver of HCC patients. The expression of IGF-II transcripts was correlated with the rate of cell mitotic activity by measuring the expression of the proliferating cell nuclear antigen (PCNA) gene. PCNA messenger RNA (mRNA) levels progressively increased from normals to CAH and to cirrhotic patients, and persisted at a high level in the tumorous and in the nontumorous area of HCC subjects, thus showing that the increase of IGF-II transcripts in CAH and cirrhosis is accompanied by an activation of cell mitosis in these samples. These data suggest that the activation of IGF-II gene expression from adult and fetal promoters may play a role in premalignant proliferation observed in HCV-related chronic liver disease. (HEPATOLOGY 1996;23:1304-1312.)
Abbreviations: HCV, hepatitis C virus; CLD, chronic liver disease; HCC, hepatocellular carcinoma; IGF-II, insulinlike growth factor II; IGF-IR, insulinlike growth factor type I receptor; RT-PCR, reverse-transcription polymerase chain reaction; CAH, chronic active hepatitis; PCNA, proliferating cell nuclear antigen; PCR, polymerase chain reaction; mRNA, messenger RNA.
Hepatitis C virus (HCV) is a positive-stranded RNA virus that plays a major role in the development of chronic liver disease (CLD). 1,2 Acute posttransfusion hepatitis due to HCV is followed by chronic hepatitis in more than 50% of cases, 3,4 and 20% to 50% of these patients eventually progress to cirrhosis. 3,5 Mounting evidence suggests that HCV infection may play a role in the development of hepatocellular carcinoma (HCC) in cirrhotic patients. 5-7 Chronic injury of liver cells and the associated inflammatory and regenerative response that occurs in CLD are known to represent a preneoplastic process that may evolve toward malignancy. 8
The IGF-II gene is transcribed from several promoters in the extrahepatic and hepatic tissues. 15-18 Each promoter is followed by one or more alternative untranslated exons, which are all spliced to the last three coding exons. Evidence demonstrates that the synthesis of IGF-II and the activation of its signaling pathway through the tyrosine kinase domain of the IGF-IR play important roles in tumorigenesis. 19 The expression of IGF-II and IGF-IR genes is activated in several human and experimental tumors, and, in some of them, an autocrine/paracrine growth mechanism has been proposed. … The activation of fetal promoters was more evident in cirrhosis than in HCC, and the activities of fetal promoters were higher in the nontumorous than in the tumorous area of the liver of HCC patients. The expression of IGF-II transcripts in patients with CLD was correlated with the rate of cell mitotic activity by measuring the expression of proliferating cell nuclear antigen (PCNA), which is a known marker of cell proliferation. 30,31
PATIENTS AND METHODS
Patients. The biochemical and histological characteristics of our patient population are described in Table 1. The study population consisted of 4 anti-HCV-negative, HCV-RNA-negative control subjects (age range, 44-66 years; 1 woman, 3 men) and 25 anti-HCV-positive patients (9 with CAH [age …], 9 with cirrhosis, and 7 with HCC). Liver samples were obtained by biopsy in patients with CLD and from surgical specimens in control subjects. In patients with HCC, liver biopsy was performed in the tumorous and nontumorous areas at distance from the tumor. In each case, a portion of the liver sample was fixed in 10% buffered formalin for immunohistochemistry and routine histological examination. The remaining sample was immediately washed with 0.3% NaCl, snap-frozen in liquid nitrogen, and stored at -80°C until assayed. Informed consent was obtained from the patients.
Table 1 notes: HAI, histological activity index. *Times the upper limit of the normal range (≤37 U/L). †All cirrhotic and HCC patients were in Child A class. All HCCs were well differentiated (Edmonson's class I-II) except case 25, which was poorly differentiated (Edmonson's class III-IV).
Histology. Diagnosis of CAH or cirrhosis was reached according to internationally accepted criteria. 32,33 The histological activity index was assessed according to Knodell. 34 All HCCs were graded histologically according to the criteria of Edmondson and Steiner. 35
RNA Extraction. Total RNA was extracted using the guanidinium thiocyanate method, and the high quality of the product was assured by analysis on 1% agarose/formaldehyde electrophoresis stained with 1% ethidium bromide. 36,37
Reverse Transcriptase. First-strand complementary DNA was prepared using 200 units of reverse transcriptase (Superscript RT, Gibco BRL, Gaithersburg, MD), 1 μg of total RNA as template, and 10 pmol/L of random hexamers in the presence of 0.1 mmol/L dithiothreitol, 0.5 mmol/L dNTP (Pharmacia, Milan, Italy), and 20 units of RNase inhibitor (Promega, Madison, WI), as previously described. 38 The reaction profile was 37°C for 10 minutes, followed by 42°C for 60 minutes. To control for contamination by genomic DNA, all RNA samples were run in duplicate with or without addition of reverse transcriptase.
PCR Analysis. Hybridization sites of primers used for polymerase chain reaction (PCR) analysis of IGF-II transcripts are shown in Fig. 1. P1-, P3-, and P4-specific IGF-II transcripts 15-17 were analyzed using primer A: 5'-AGAACTGAGGCTGGCAGCCA-3' (P1), primer B: 5'-CTGTTCGGTTTGCGACACGCA-3' (P3), and primer C: 5'-GAGCCTTCTGCTGAGCTGTAG-3' (P4) as 5' primers, and primer D: 5'-GTAGCACAGTACGTCTCCAG-3' (exon 7) as 3' primer. The GAPDH-specific primers 39 were 5'-CACCATCTTCCAGGAGCGAG-3' (fore) and 5'-TCACGCCACAGTTTCCCGGA-3' (reverse). The PCNA-specific primers 30 were 5'-CAAGAAGGTGTTGGAGGCAC-3' (fore) and 5'-TAC…-3' (reverse). Reagents were from … (Branchburg, NJ). MgCl2 was added at the final concentration of 1.5 mmol/L, except for the P1-GAPDH reaction, where the concentration was 1.75 mmol/L. After an initial denaturation step (97°C for 10 minutes), the PCR amplification was performed. PCR products were phenol-chloroform-extracted, ethanol-precipitated, and subjected to 5% polyacrylamide gel electrophoresis and autoradiography. Sizes of the amplified fragments were estimated from migration of the 1-kb ladder molecular-weight marker (Gibco-BRL), and identity was assessed by restriction-enzyme digestion. PCR products were quantified by densitometric scanning of the autoradiograms using a Howteck Scanmaster 3 densitometer with RFL Print-TM Software (Pharmacia).
Statistical Analysis. Correlation was evaluated by regression analysis. Significance of differences was evaluated by ANOVA, followed by Wilcoxon's rank sum test.
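The semiquantitative readout described in the Methods above — densitometric band intensities normalized to the co-amplified GAPDH signal and then correlated with PCNA levels — can be summarized in a brief sketch. This is an illustration only; the values below are invented and the code is not the authors' software:

```python
from statistics import correlation, mean  # correlation() needs Python >= 3.10

def normalized_signal(igf2_density: float, gapdh_density: float) -> float:
    """Densitometric IGF-II band intensity normalized to the GAPDH
    signal co-amplified in the same reaction tube."""
    return igf2_density / gapdh_density

# Hypothetical scans (arbitrary densitometric units) for five liver samples
scans = [(120, 400), (510, 390), (880, 410), (950, 380), (1400, 420)]
p3 = [normalized_signal(igf2, gapdh) for igf2, gapdh in scans]
pcna = [0.10, 0.35, 0.55, 0.60, 0.85]  # normalized PCNA/GAPDH ratios

# Pearson's r between promoter activity and the proliferation marker,
# analogous to the r values reported for P1, P3, and P4 in the Results.
print(f"mean normalized P3 signal = {mean(p3):.2f}, r = {correlation(p3, pcna):.2f}")
```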
RESULTS
Semiquantitative RT-PCR Analysis of IGF-II Promoters. We have studied the expression of the IGF-II gene from the adult (P1) and the two fetal (P3 and P4) promoters, which are more abundantly expressed in normal and transformed hepatocytes, 18,29 in patients with HCV-related CLD by using a semiquantitative RT-PCR assay. Oligonucleotides were designed from the sequences of exon 3, exon 5, and exon 6, and used as 5' primers to specifically amplify by RT-PCR transcripts originating from adult P1 or fetal P3 and P4 promoters, respectively (Fig. 1). An oligonucleotide complementary to the exon 7 sequence was used in all cases as 3' primer (Fig. 1). To compare the relative activities of P1, P3, and P4 IGF-II promoters in different RNA samples, each sample was simultaneously amplified with IGF-II promoter- and GAPDH-specific primers, and the intensities of IGF-II signals were normalized to those of GAPDH. To analyze the linearity of RT-PCR coamplification of IGF-II and GAPDH transcripts, aliquots withdrawn from the reaction tube after different numbers of cycles were electrophoresed and autoradiographed (Fig. 2A). Specific IGF-II and GAPDH transcripts were amplified with increasing efficiency up to 30 cycles, as shown by densitometric analysis of PCR products (Table 2).
Table 2: IGF-II promoter-derived transcripts in 4 anti-HCV-negative controls and in 25 subjects with HCV-related CLD.
IGF-II transcripts derived from adult P1 promoter were increasingly expressed from normal patients to patients with CAH and cirrhosis. The increase in P1 transcripts was evident in 6 of 9 CAH patients (Fig. 3, cases 6, 7, 8, and 13, and, to a lesser extent, …). IGF-II transcripts derived from fetal P3 promoter were not detectable in normal subjects, while they were detected with progressively increasing expression in most of the patients with CAH (with the exception of case 11) and in all patients with cirrhosis (Fig. 3 and Table 2). The median densitometric scan values for IGF-II transcripts derived from fetal P3 promoter were … (range, 2.14-13.51) in cirrhotic patients (P < .01 vs. normal patients and P < .001 vs. CAH) (Fig. 4). Five cirrhotic patients (cases 15, 18, 19, 20, and 22) had P3 mRNA levels 10-fold higher than the median densitometric scan values of CAH patients (Fig. 4B and Table 2). IGF-II transcripts from fetal P4 promoter were detected in all subjects except case 21 (Fig. 3 and Table 2). Increased expression of P4 transcripts was found in … (Fig. 4). These values were in the range of 8 … and cannot be used for comparing the activities of the different promoters in the same patient. The expression of transcripts derived from the fetal and adult human IGF-II promoters was also tested in the tumorous and nontumorous area of the liver of patients with HCV-related HCC (Fig. 5 and Table 2).
PCNA Expression in Normal Controls and HCV-Related CLD. Because of a certain degree of variability in the expression of IGF-II transcripts within each group of patients (Table 2), we evaluated whether this might be due to a different inflammatory activity. We did not find any significant correlation between the histological activity index (Table 1) and IGF-II transcript levels. The median PCNA mRNA values were … (P < .05 vs. normal patients) and 0.60 (range, 0.28-0.86) in cirrhotic patients (P < .01 vs. normal patients and P < .001 vs. CAH) (Fig. 4D). The level of expression of PCNA in HCC subjects was comparable in the tumorous and nontumorous areas (median densitometric scan values, 0.59 and 0.51; range, 0.36-1.25 and 0.30-1.07, respectively). These values were higher than in normal patients (P < .01) and CAH patients (P < .001), but comparable with those found in cirrhotic patients (Figs. 4D and 6). The increase in the expression of P1, P3, and P4 IGF-II promoters from normal patients to CAH and cirrhotic patients correlated positively with PCNA transcript levels in the same subjects (r = .46, P = .030; r = .57, P = .0054; r = .49, P = .018, for P1, P3, and P4, respectively).
DISCUSSION
HCV is a major causative agent of cirrhosis, which is a known risk factor for the development of HCC. 1-4,8 As well, a strong association between HCV infection and HCC has been described. 5-7 However, the mechanism by which HCV contributes to the development of HCC is unknown. Recently, two different transgenic mouse models of HCC (i.e., hepatitis B virus large envelope polypeptide and Z2 α1-antitrypsin transgenic mice) indicated that, independently of the causative agent, liver cell injury and chronic inflammation may stimulate mediators of hepatocellular proliferation, which in turn leads to the development of precursor lesions of HCC, 9,10,40 thus suggesting the existence of a common endogenous pathway for liver carcinogenesis. Evidence supports the hypothesis that IGF-II plays a role during liver carcinogenesis in rodents and humans. 22-28 In addition, a number of studies have shown increased expression of IGF-II RNA and/or protein levels in HBV-related CLD and HCC. 41-43,50 The role of IGF-II in HCV-related CLD and HCC has not yet been studied.
We hypothesized that, in the course of HCV-related CLD, there might be an increase in the expression of IGF-II, which might contribute to the proliferative hit ultimately leading to the development of HCC. Our data show a progressive increase in the expression of transcripts originating from the P1 adult and P3 and P4 fetal promoters of IGF-II from normal patients (i.e., subjects with normal liver histology not infected with HCV) to subjects with CAH and cirrhosis. The increase in the expression of IGF-II was evident mostly in patients with cirrhosis. P3, the most active IGF-II fetal promoter, 16-18,29 was not expressed in normal liver tissue, whereas it was activated in CAH and cirrhosis. Since P3-derived transcripts are expressed abundantly in human HCC, 28 we postulate that the activation of the IGF-II fetal promoter in CLD may represent a preneoplastic lesion. The increase in IGF-II expression significantly correlated with the expression of PCNA, which is a known marker of cell mitotic activity. 30,31 Therefore, our study indicates that there is a significant relationship between liver cell proliferation and IGF-II expression in the course of HCV-related CLD. IGF-II expression during hepatitis B virus-related CLD and HCC has been localized mainly to the hepatocytes. 41-43,50 Additionally, isolated and cultured rat hepatocytes and human hepatoma cell lines are known to express IGF-II and the IGF-IR that mediates IGF-II proliferative effects. 44,45 Therefore, the increase in IGF-II mRNA levels during HCV-related CLD suggests that IGF-II might contribute through an autocrine mechanism to the enhanced proliferative activity of liver cells that may ultimately lead to the development of HCC. One additional possibility is that the increased IGF-II expression in the cirrhotic liver is contributed to by nonparenchymal cells. In this case, IGF-II would act in a paracrine mechanism to stimulate the growth of hepatocytes and promote hepatocarcinogenesis.
We also studied the expression of transcripts derived from IGF-II promoters in patients with HCV-related HCC. In the tumorous area, transcripts were expressed from the P3 (but not P4) promoter, though to a lesser degree when compared with cirrhosis. In the nontumorous area of the same HCC patients, transcripts from the adult P1 promoter were expressed to the same extent as in normal subjects, while those from both the P3 and P4 fetal promoters were expressed more abundantly. IGF-II expression in the nontumorous area was higher than in the tumorous area but still lower than in cirrhosis. This data is in agreement with a recent report by Su et al., 43 demonstrating that IGF-II immunoreactivity was …
2018-04-03T01:35:33.576Z
1996-06-01T00:00:00.000
{ "year": 1996, "sha1": "ac15efeae8ee473716b274b788b70c1fd6c7595f", "oa_license": "CCBY", "oa_url": "https://aasldpubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hep.510230602", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "7ec5f59bf2c59e84231c53845744003deee3c812", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
55939818
pes2o/s2orc
v3-fos-license
The Role of the Computer in Learning Mathematics Through Numerical Methods
The paper exploits the results of experimental, formative-ameliorative research conducted by 30 3rd-year students from the Department of Mathematics, Vasile Alecsandri University of Bacău, during their teaching practice at the Ștefan cel Mare National Pedagogical College from Bacău, involving 150 students from six 11th-grade classes with a real specialization profile. The research was based on the following hypothesis: if we use numerical methods to solve linear equations systems and for the graphical representation of functions in the instructive-educational process, then we shall enhance the efficiency of these activities and increase the students' performance by enhancing intrinsic motivation. In order to achieve the objectives, various techniques were presented for solving linear equations systems and for the graphical representation of functions through numerical methods, followed by the application of sets of tests on the different methods for solving mathematical problems, integrated in the various moments of the lesson, either in teaching new content or in consolidating and checking it. The paper highlights the role and values of computer use in learning Mathematics, in the informative as well as formative-educational sense, in agreement with the taught objectives and contents, based on the tendencies of updating and upgrading the school activity, and enhancing its role in preparing students for life. The research objectives were: knowledge of the students' (initial) training level as a basis for implementing the experiment; presentation of the theory on numerical methods; evaluating the contribution of the methods for solving linear equations systems and the graphical representation of functions through numerical methods to the enhancement of school performance; recording progress following the application of the progress factor, respectively the various methods for solving linear equations systems and the graphical representation of functions through numerical methods.
Introduction
The algorithm for solving linear equations systems can also be computer programmed. This method relies on serially reducing unknowns, the system evolving into other equivalent systems whose number of equations diminishes step by step; this method is called Gaussian elimination or row reduction. For the gradual processing of the system, the following elementary transformations, which generate equivalent systems, are applied (Frumuşanu G., 2008):
- swapping two rows (equations);
- reordering the unknowns;
- multiplying a row by a non-zero number;
- adding a multiple of one row to another row.
By applying these operations, one of the following situations is generated:
- the final system is triangular, its solution being unique (compatible determined);
- the final system is trapezoidal, with several solutions (compatible non-determined);
- the final system contains a contradiction, with no solutions (incompatible).
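In matrix terms, the first two outcomes correspond to the shapes sketched below (standard textbook notation, added here for illustration; the sketch is not reproduced from the original paper):

```latex
% Triangular form: n pivot rows, unique solution (compatible determined).
\[
\left(\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1\\
0      & a_{22} & \cdots & a_{2n} & b_2\\
\vdots &        & \ddots & \vdots & \vdots\\
0      & 0      & \cdots & a_{nn} & b_n
\end{array}\right)
\]
% Trapezoidal form: only r < n pivot rows remain, so n - r unknowns are
% free parameters (compatible non-determined). A row of the form
% (0 ... 0 | c) with c \neq 0 signals an incompatible system.
```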
Practically, applying the Gaussian method consists in covering the following steps:
- writing the extended matrix of the system (namely, the system's matrix, to which we annex the column of free terms);
- applying elementary transformations to this matrix, until it takes a triangular or trapezoidal form;
- analysing the linear system to which this extended matrix belongs;
- if there occurs a contradiction within this system, then the system is incompatible;
- if there occurs no contradiction, then the system is compatible determined or compatible non-determined, according to whether it is triangular or trapezoidal;
- the system's solution is easily found by covering the path backwards (back-substitution), from the last equation (with the fewest unknowns) towards the first (with the most unknowns) (Mariș S., Brăescu L., 2007).
The matrix A of the coefficients is called the system's matrix or the matrix of the system's coefficients; the column B of the free terms is called the column matrix of the free terms; and the matrix obtained from the system's matrix through bordering to the right with the column of free terms is also called the extended matrix of the system.
Examples for Applying the Gaussian Method to Linear Systems
The Gaussian method consists in the equivalent transformation of the system, through elementary transformations, into systems where the unknown x occurs only in the first equation, being eliminated from the other equations. For the system thus formed, the first equation is kept unchanged, and to the other m - 1 equations the same procedure is applied for the next unknown y, keeping it in the second equation and eliminating it from the other m - 2 equations. The procedure is repeated until one equation of the system contains only one unknown. Its value is transferred to the other equations and the other unknowns are determined. By applying the Gaussian method, the unknowns are successively eliminated.
1. Solve, in the set of real numbers, the following linear system, using the Gaussian method: x - 2y + z = 0; 2x + y - z = 1; -3x + y + z = 2.
Solution: We write the extended matrix associated with the system and, by applying elementary transformations, we give it a triangular form. By solving the system, we finally reach the solution x = 1, y = 2, z = 3. The dotted line in the extended matrix has the role of rendering the system's free terms visible, and the matrix of the final system (except the column of the free terms) has a triangular form; hence, from that moment on, it is known that the system is compatible determined. (Lupu C., 2014)
2. Solve, in the set of real numbers, the following linear system by using the Gaussian method: x + 2y + z + 2t = -2; 3x - y + z + 2t = 1; 2x + y + z + t = 1; -3x - 3y + 2z - t = 6; 4x - 2y - z + 3t = 3.
Solution: We write the extended matrix associated with the system and, by applying elementary transformations, we give it a triangular form. From the finally obtained system, two contradictory equations are obtained: 35t = 71 and 96t = 186. Therefore, the system is incompatible.
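As a concrete illustration of the steps above, the following is a minimal Python sketch (not the program used by the authors) of Gaussian elimination with partial pivoting and back-substitution, applied to the first worked example:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination (with partial pivoting) and
    back-substitution; assumes the system is compatible determined."""
    n = len(A)
    # Build the extended (augmented) matrix of the system.
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n):
        # Pivot: swap in the row with the largest |entry| in column k.
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        # Eliminate the k-th unknown from the rows below.
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    # Back-substitution, from the last equation towards the first.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Example 1 from the text: x - 2y + z = 0, 2x + y - z = 1, -3x + y + z = 2
print(gauss_solve([[1, -2, 1], [2, 1, -1], [-3, 1, 1]], [0, 1, 2]))  # [1.0, 2.0, 3.0]
```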
Solving Linear Equations Systems Using Computers
The Gaussian method can also be applied to linear systems whose matrix of coefficients is a band matrix. Such a matrix attached to the system has all the elements null, except those from the main diagonal and some parallels to the main diagonal. The program uses a special method for memorizing the band matrix, briefly presented further on. Let B = (b(i,j)) be a matrix with n rows and n columns, having 2d+1 non-zero parallels to the main diagonal. The row i of this matrix is:
(0, ..., 0, b(i,i-d), ..., b(i,i), ..., b(i,i+d), 0, ..., 0).
The meaningful information from this row may be memorized in the row i of a matrix A = (a(i,j)), with n rows and 2d+1 columns: (a(i,1), a(i,2), ..., a(i,p), a(i,p+1), ..., a(i,2d+1)). There may be observed that p = p(i,j) = i + d + 1 - j. The classic Gaussian elimination algorithm is then applied to the matrix A = (a(i,p(i,j))).
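A minimal sketch of the storage scheme just described (again illustrative; the original program is not reproduced in the text): each band element b(i,j) with |i - j| <= d is packed into a(i,p) with p = i + d + 1 - j, using the 1-based indices of the paper.

```python
def to_band_storage(B, d):
    """Pack an n x n band matrix B (2d+1 non-zero diagonals) into an
    n x (2d+1) array A, storing b(i,j) at a(i,p) with p = i + d + 1 - j
    (1-based indices, as in the text); entries off the band are dropped."""
    n = len(B)
    A = [[0.0] * (2 * d + 1) for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(max(1, i - d), min(n, i + d) + 1):
            p = i + d + 1 - j               # column in the packed array
            A[i - 1][p - 1] = B[i - 1][j - 1]
    return A

# Tiny check: a tridiagonal (d = 1) 3x3 matrix
B = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
print(to_band_storage(B, 1))  # [[1.0, 2.0, 0.0], [1.0, 2.0, 1.0], [0.0, 2.0, 1.0]]
```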
Application Examples of Applying Numerical Methods to Solve the Equation System
Solve the equation system: …
Research Methods and Techniques
The research was of an experimental type, using the test method. Other research methods and techniques used were:
- pedagogical observation;
- communication;
- analysis of school documents and student work products;
- the interview;
- statistical techniques for data processing.
Research Description
The paper highlights the role and values of computer use in learning Mathematics, in the informative as well as formative-educational sense, in agreement with the taught objectives and contents, based on the tendencies of updating and upgrading the school activity, and enhancing its role in preparing the student for life.
Sample Description
The paper exploits the results of experimental, formative-ameliorative research conducted by 30 3rd-year students from the Department of Mathematics, Vasile Alecsandri University of Bacău, during their teaching practice at the Ștefan cel Mare National Pedagogical College from Bacău, involving 150 students from 11th-grade classes with a real specialization profile.
Research Objectives
The research objectives were:
- knowledge of the students' (initial) training level as a basis for implementing the experiment;
- presentation of the theory on numerical methods;
- evaluating the contribution of the methods for solving linear equations systems and the graphical representation of functions through numerical methods to the enhancement of school performance;
- recording progress following the application of the progress factor, respectively the various methods for solving linear equations systems and the graphical representation of functions through numerical methods.
Research Hypothesis
The research was based on the following hypothesis: if we use numerical methods to solve linear equations systems and for the graphical representation of functions in the instructive-educational process, then we shall enhance the efficiency of these activities and increase the students' performance by enhancing intrinsic motivation.
Research Variables
The research hypothesis generates two research variables:
- the independent variable, introduced through the numerical methods for solving systems and the graphical representation of functions;
- the dependent variable, related to enhancing the motivation for acquiring mathematical notions and school progress.
Research Stages
The research was conducted February 5th-May 30th, during the 2nd term of the 2014-2015 school year.
The Results and Their Interpretation
Through the experiment carried out on an initial test with second-year Math undergraduates of "Vasile Alecsandri" University of Bacău, it was proved that the teaching and the development of skills and abilities for assessment in high school are possible if we use various evaluation methods and procedures. This information was very useful in planning the following activities, taking into account the specificities of each student. Motivation for team learning consists (without the students being aware) of exciting activities, attractive and intuitive special materials, worksheets and modern teaching methods. In terms of the second-year students of the Faculty of Mathematics, it was found, through impact assessment, observed learning and assessment records, that there was active participation on the part of the students, increasing the degree of intellectual effort, interest and curiosity with regard to mathematics. This data was recorded in an observation grid. At the same time, the experiment results confirm the hypothesis that if we use various techniques for teaching-learning-assessment in all lesson stages, the teaching of mathematics in school will be more efficient, and the results of the pupils will improve. Analysing the results obtained by the students in the initial and final evaluation tests, relevant progress may be diagnosed in terms of problem-solving competences and skills, calculus abilities, as well as correct use of mathematical concepts. Of the 15 students who got initial marks below 5, 10 succeeded in getting marks above 5 in the final test, applied at the end of the term. The rise in the number of students who obtained the marks 7, 8, 9 and 10 is relevant and may be followed in the table and frequency polygons below. This comparative frequency polygon showing the results in the initial and final tests highlights the fact that, although the number of students who got 5 in the final test is much smaller than in the initial test, the number of students who got marks above 8 increased significantly compared to the initial test, up to 20 marks of 10.
Conclusions
The progress of the school results is obvious, both at the individual and class level, an aspect reflected in the frequency of the marks obtained and the general means of the class, calculated at the two stages of the research: pre-experimental and post-experimental. Regarding the initial evaluation, the mean of the class results was 6.42, an average that corresponds to below-standard performances. In the final evaluation, the class average was 7.52, showing a relevant increase of 1.10 points. It is worth mentioning that a first progress was observed as of the stage of applying the experimental factor, the mean of the marks obtained by students in the formative evaluation being 6.85.
In the final evaluation test, 74% of the students obtained marks at least equal to 7, and 22 of the 30 students who got marks below 5 in the initial test succeeded in obtaining marks above 5 in the final test. The comparative analysis reveals the relevant growth of the students' results in Mathematics, which validates the hypothesis of the experimental research. Besides the progress recorded at the level of the school results, it is also worth mentioning the progress on the motivational level, there being a greater number of active students, interested in the school activity, at the expense of the passive and disinterested ones.
2019-02-15T14:23:39.660Z
2016-03-24T00:00:00.000
{ "year": 2016, "sha1": "815f61e29ec0b6c8c1a9ab2d97a96539a3e48ac5", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.sjedu.20160402.13.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "196d69fbe87554664207b1c54a6ad8c6318615b3", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
12084494
pes2o/s2orc
v3-fos-license
GESDB: a platform of simulation resources for genetic epidemiology studies
Computer simulations are routinely conducted to evaluate new statistical methods, to compare the properties among different methods, and to mimic the observed data in genetic epidemiology studies. Conducting simulation studies can become a complicated task as several challenges can occur, such as the selection of an appropriate simulation tool and the specification of parameters in the simulation model. Although abundant simulated data have been generated for human genetic research, currently there is no public database designed specifically as a repository for these simulated data. With the lack of such a database, similar simulations may have been repeated for similar studies, which results in redundant work. Thus, we created an online platform, the Genetic Epidemiology Simulation Database (GESDB), for simulation data sharing and discussion of simulation techniques for genetic epidemiology studies. GESDB consists of a database for storing simulation scripts, simulated data and documentation from published articles, as well as a discussion forum, which provides a platform for discussing the simulated data and exchanging simulation ideas. Moreover, summary statistics such as the simulation tools that are most commonly used and the datasets that are most frequently downloaded are provided. The statistics will be informative for researchers choosing an appropriate simulation tool or selecting a common dataset for method comparisons. GESDB can be accessed at http://gesdb.nhri.org.tw. Database URL: http://gesdb.nhri.org.tw
Introduction
Computer simulations are routinely conducted in genetic epidemiology studies. For example, when a new statistical method is developed to test associations between genetic variants and a disease, it is important to evaluate the type I error rates for the method and compare the power of the method with other existing methods under different scenarios. Simulation studies are also important to evaluate the study design, such as a case-control or family-based design, and to calculate the number of samples required to achieve reasonable power when planning a genetic epidemiology study. Because of the complicated structures in human genomes and disease models, simulating realistic genetic variants and trait values can be challenging. A group consisting of population geneticists, genetic epidemiologists and computational scientists addressed several current and emerging challenges and opportunities in genetic simulation studies at the 'Genetic Simulation Tools for Post-Genome Wide Association Studies of Complex Diseases' workshop held at the National Institutes of Health in Bethesda, Maryland on 11-12 March 2014 (1). One of the challenges that was addressed is that researchers may have difficulties in choosing an appropriate simulation tool from a large number of existing tools. For example, the Genetic Simulation Resources (GSR) website (2) has collected >100 genetic data simulation tools, and each tool has unique properties; however, some tools also share common features. Because of the difficulties in choosing an appropriate simulation tool, researchers have ultimately developed their own tools with functions overlapping those of the existing tools, which has resulted in redundant work (3). Another challenge is that simulated data for a certain study may be generated in favor of the assumptions for the statistical models developed in the study.
This could lead to unfair comparisons of the method with other methods. One of the solutions is to create benchmark simulation datasets with detailed documentation of the simulation procedures so that the datasets can become standards for method comparisons (1,4). The opportunities discussed by the group included the creation of a server for sharing genetic simulation data, identification of common datasets for method comparisons, and encouragement of making simulated datasets publicly available. In response to the challenges and opportunities addressed above, we created the Genetic Epidemiology Simulation Database (GESDB). The platform consists of a multi-functional website with friendly web interfaces, an FTP server, and a database server. The platform was designed as a repository for simulated datasets generated from published articles or articles under peer review related to genetic epidemiology studies. GESDB has two important features. The first is that each dataset on GESDB can be voted on by the user, and the other is that summary statistics, such as the datasets with the most votes, the most frequently downloaded datasets, and the most frequently used simulation tools, are reported on the main page of GESDB. The summary statistics will be informative to help users select an appropriate simulation tool and a common dataset for method comparisons. Methods Architecture of GESDB Figure 1 shows the hardware architecture of GESDB. The hardware supporting GESDB includes a server-level computer, equipped with an Intel XEON quad-core 2.4 GHz CPU and 96 GB of memory, where the computer is connected to a disk array (with a storage of 50 TB) and a Network Attached Storage (NAS) system with an amount of storage equal to that of the disk array. The redundant array of independent disks (RAID) level 4 technique was applied to the disk array as a backup mechanism to protect the data in case of disk failure. The data are copied weekly to the NAS system, which serves as a secondary backup mechanism for the data in the disk array. The Web, FTP and MySQL servers were set up on the computer. Web server A person who registers on GESDB and deposits their simulated datasets into the database is referred to as the author, while a person who registers on GESDB and downloads the datasets from the database is referred to as the user. Friendly web user interfaces (UIs) were created for the author and the user on GESDB. The interfaces were tested by four internal and two external users and modified based on their feedback. The author uses an information form to specify the properties of the datasets. The information form collects some basic information about the datasets, including a general description of the data (e.g. study design and types of data) as well as a more detailed survey of the datasets (e.g. the tools and scripts used to generate the data and technical notes for generating the data). Datasets such as simulated raw data, scripts, and any other related files are uploaded to GESDB via the FTP server by the author. The datasets are then classified by the author on the web UI by adding the files to the five categories defined by GESDB. The five categories include 'Readme', 'Scripts', 'Result data', 'Raw data' and 'Other'. The 'Readme' category includes documentation such as a description of the simulation steps, while simulation scripts are classified as 'Scripts'. The results such as type I error rates and power are classified as 'Result data', while 'Raw data' refers to the simulated raw data.
Other file formats are also accepted by GESDB, such as presentation slides and links to published articles, and these are classified as falling into the 'Other' category. The user can search the datasets by data attributes (e.g. article title, keywords and author names) on the web UI and then use the FTP server to download the datasets. The user can also leave comments and vote for a dataset on the web UI. A discussion forum is also hosted on the Web server. The forum provides a platform for questions and answers between the authors and users. On the main page of the web server of GESDB, summary statistics such as the most frequently downloaded datasets, the most frequently used simulation tools, the datasets with the most votes and the most viewed datasets are provided. MySQL server Data attributes from the information form are saved in the MySQL database, and queries sent from the Web server are processed by the MySQL server. The author and user profiles, votes, paths to the files uploaded by the authors on the FTP server, summary statistics and forum discussions are also saved in the MySQL database. FTP server The FTP server handles downloading and uploading the data. Any user can download the data freely via the FTP server. Currently the author can upload files of a maximum size of 50 GB for each study. A folder is created for each author on the FTP server, and the author can create subfolders for different studies. Considering the current storage of 50 TB in the disk array, GESDB will be able to accommodate data from 1000 studies. However, because the size of the data for many studies may be significantly less than 50 GB, we expect that the actual number of studies that GESDB can host will be greater than 1000. At present, data on GESDB come from datasets deposited by the authors, replicated datasets generated by our group and curated web links to other websites containing simulated datasets. We selected articles that have clear descriptions of the simulation procedures and followed these procedures to generate replicated datasets. The curated web links were created by our group through a web search to identify websites that contain simulated data for genetic epidemiology studies; the web links, rather than the simulation data themselves, are saved in GESDB. The websites containing simulated data are usually those created by the authors of various published articles. Our curators check the web links once per month to ensure that the links are still valid, and they update the database if the links on the authors' websites change. The datasets deposited by the authors or the replicated datasets generated by our group are under the Creative Commons (https://creativecommons.org) BY-SA license, which allows licensees to use the datasets if the author is credited (i.e. the author's article is cited) and to distribute derivative works under an identical license. The usage of the datasets hosted on the authors' websites, to which GESDB has linked, is regulated by the authors. Each dataset in GESDB is assigned a unique identifier, which can be cited when the dataset is used for other studies. Figure 2 shows the flowchart for a general user to access GESDB. The general user first needs to register on the website to become an author and/or user. After the registration is approved, the author first uploads the datasets via the FTP server. Then, the author fills out the information form on the website to provide information about the uploaded datasets.
The user first searches for data on the website and then downloads the data via the FTP server. The author and user can participate in the discussion forum to ask and answer questions. Unregistered users will only be able to browse the datasets and the summary statistics on the website. Note that it is possible for the same person to register as multiple authors or users on GESDB, provided that the person enters different information in the registration form. To avoid repeated votes from the same person, multiple votes from the same IP address are counted as one vote in the voting system. Results Friendly web interfaces were created for the author to upload the data and for the user to search and download the data. Table 1 shows the entries of the information form that the author must fill in before uploading the data. Some entries, such as the simulated data type and trait type, are in the same format as those in GSR. Note that although GESDB aims to host simulated data from published studies, data from articles that are currently under review are also accepted in GESDB. This will provide opportunities for journal editors or reviewers to assess the simulation scripts and data as part of the review process. Stress tests were performed for both the Web and FTP servers. Both servers functioned normally with 100 simultaneous users performing regular tasks including web browsing, searching, uploading and downloading the data. The numbers of views as well as votes and comments from users are reported for each dataset on GESDB. On the main page, GESDB reports the summary statistics, including the most frequently used tools, the most frequently downloaded datasets, the most viewed datasets, the datasets with the most votes, and the most viewed and voted posts in the discussion forum. The summary statistics will be informative for other simulation studies, such as choosing a simulation tool that has been widely adopted in the research community. Moreover, the most frequently downloaded datasets may become benchmark datasets for method comparisons. Finally, the forum provides an important communication platform for exchanging simulation strategies and for discussing the simulated data. Table 2 shows the comparisons between GESDB and two other popular public data repositories, Dryad and figshare. Dryad and figshare are open to the general research community, while GESDB is designed specifically for simulations in genetic epidemiology studies. In terms of hosting genetic simulation data, GESDB has several advantages over these two repositories. GESDB provides a larger free storage space per study (i.e. 50 GB), considering that simulated data are generally large, compared with the 20 GB of free space offered by figshare and the 20 GB offered by Dryad for US$120. User statistics such as the number of views and downloads for a dataset are provided by all three repositories, while voting statistics for datasets are uniquely provided by GESDB. Moreover, several crucial summary statistics are also uniquely provided by GESDB, such as the datasets receiving the most votes and the most frequently used simulation tools. These statistics will help eliminate difficulties faced by the user in choosing an appropriate simulation tool and will help researchers identify common datasets for method comparisons. Moreover, a discussion forum is provided by GESDB, making GESDB not only a data repository but also a platform for exchanging simulation strategies.
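The vote de-duplication and summary statistics described above can be illustrated with a minimal sketch. The GESDB schema is not published, so the table and column names below are invented for illustration only; the sketch uses Python's standard-library sqlite3 rather than the MySQL server the platform actually runs.

```python
# Hypothetical sketch of GESDB-style vote de-duplication: multiple votes
# from the same IP address count as a single vote, as described in the text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE votes (dataset_id INTEGER, ip TEXT);
""")
conn.executemany(
    "INSERT INTO votes VALUES (?, ?)",
    [(1, "10.0.0.1"), (1, "10.0.0.1"), (1, "10.0.0.2"), (2, "10.0.0.3")],
)

# COUNT(DISTINCT ip) collapses repeated votes from one address into one.
for dataset_id, n in conn.execute(
        "SELECT dataset_id, COUNT(DISTINCT ip) FROM votes GROUP BY dataset_id"):
    print(f"dataset {dataset_id}: {n} vote(s)")
```

The same GROUP BY pattern, applied to a downloads table, would yield the "most frequently downloaded datasets" statistic shown on the main page.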
Discussion and Conclusions GSR mainly serves as a catalogue of existing genetic simulation tools. Another website, OMICtools (5), constructs a catalogue that covers a broader range of tools related to omic data analysis when compared with GSR; however, relatively few tools for genetic simulations are collected in OMICtools. The user can search and compare tools on GSR based on different features of the tools, such as the simulation method, input and output types and the type of traits. GSR provides certification for a simulation tool based on whether the tool is publicly accessible, is well documented, has been successfully applied to genetic epidemiology studies, and is actively supported by the developers. Because the GSR certification criteria were defined on the basis of discussions by experts in the field (1), it is expected that this type of certification will become the norm for genetic simulation software development. Moreover, similar to the purpose of the summary statistics on GESDB, the certification will help the user to determine the most appropriate simulation tools. When compared with GSR, the major advantage of GESDB is that a data repository with simulation data and scripts is included, which will prevent redundant work if the same simulation study is considered by the user and will facilitate statistical method comparisons. Therefore, GESDB can be a complementary resource to GSR. That is, the user can identify an appropriate simulation tool on GSR, and with this information, the user can search for and download the datasets simulated by the tool on GESDB. At present, the simulated data deposited to GESDB are expected to be generated by the author's local computing resources and uploaded to GESDB via the FTP server. As discussed by Chen et al. (1), a genetic simulation server with common application program interfaces (APIs) to different simulation tools would be helpful for authors to simulate data directly on the server. Such a server would have several advantages. For example, the server would reduce the local computing burden for the author. APIs would also allow for communication among different simulation tools, and modules of common functions, such as the generation of sequencing errors, could be developed based on the APIs. (Table note: for an example study hosted on GESDB, the datasets for type I error rates were simulated using the regular SeqSIMLA2, while the datasets for power were simulated using a modification of SeqSIMLA2, which can be downloaded as SeqSIMLA_SKATpower under 'Scripts'; the simulated data used in the original article were generated with a tool developed by the article's authors, whereas the datasets on GESDB are replicated datasets generated by our group using SeqSIMLA2 (12).) Moreover, the data simulated on the server would be available to both authors and users and could be stored for later analyses, including the selection of benchmark datasets. Furthermore, it would be easier for users to compare results, as all analyses would be stored on the server. However, as recognized by Chen et al. (1), several challenges still exist, including the creation of an ontology for genetic simulation to develop the APIs, maintenance and storage costs, computing resources and intellectual property issues. The creation of an ontology can be based on other related works such as HuPSON (6), an ontology for simulations in human physiology, but will require more discussion among the genetic simulation community.
Moreover, creating a cluster of a large number of computing nodes that fulfill the computing demand from the authors will require a significant amount of funding for purchasing and supporting the hardware. Before these challenges can be resolved, GESDB, which shares some advantages with the proposed server, including the storage of simulated datasets and the selection of benchmark datasets, is useful as a genetic simulation resource for the genetic simulation community. As discussed in Chen et al. (1), the Cancer Intervention and Surveillance Modeling Network (CISNET) group has developed standardized model documentation to facilitate the comparison of simulation or analytical models related to cancer interventions (7). In other research fields, guidelines for reporting simulation studies have also been developed. For example, the Minimum Information About a Simulation Experiment (MIASE) (8), proposed by a group of experts in the field of systems biology, where simulations are routinely performed, defines the minimum requirements for describing a simulation experiment. The MIASE guidelines include rules such as a clear description of each simulation model, a precise description of the simulation steps, and the availability of the numerical results. Languages such as SED-ML (9) or SBRML (10) have also been developed to formally describe simulation experiments following such guidelines in the field of systems biology, which can facilitate the exchange of data between users. Some of the MIASE guidelines are also applicable to genetic simulation studies, while some rules that are more specific to the field of genetic simulation, such as the minimum requirement to validate a statistical method, may be required. Similar to CISNET and MIASE, developing a guideline for describing a genetic simulation experiment will require discussion by a consortium of experts in the field. Other than the prespecified entries in the information form shown in Table 1, GESDB is also flexible in terms of adding new entries. Therefore, authors are encouraged to follow guidelines similar to MIASE to provide further detailed information for the data on GESDB. Furthermore, if a guideline for reporting a simulation experiment is developed by the genetic simulation community, it will be incorporated as a required entry on the information form on GESDB. Another database management system, SEEK (11), was also developed for data and model sharing in systems biology. GESDB and SEEK can both store heterogeneous datasets such as raw simulated data, documentation (e.g. simulation steps), simulation models, and publication information. SEEK provides versioning of datasets, and data access can be restricted to specific users, while the datasets on GESDB are publicly available to all registered users. However, voting and summary statistics are not provided in SEEK. If the simulation models follow the Systems Biology Markup Language (SBML) format (13), SEEK allows for direct simulations on the platform. Direct simulation on a common platform is similar to the concept of the simulation server discussed in Chen et al. (1). Again, this underlines the importance of developing a standard language for simulation models in genetic simulations. In conclusion, GESDB was created as a useful platform for genetic data simulations. With the information provided by GESDB, it will become straightforward for the user to identify the most appropriate simulation tool.
In addition, benchmark datasets can be selected, which can become common datasets for method comparisons. GESDB aims to promote simulation data sharing and improve transparency and efficiency in simulation studies for genetic epidemiology. GESDB is funded by an intramural grant, awarded for the period 2015-19, from the National Health Research Institutes in Taiwan. The five-year grant will allow us to continue the development of GESDB and to expand its hardware structure. Funding support for GESDB after 2019 will be sought through the same funding agency or another major funding agency, the Ministry of Science and Technology in Taiwan. GESDB can be accessed at http://gesdb.nhri.org.tw.
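The author workflow described above — upload via FTP first, then fill in the information form on the website — can be sketched from the author's side with Python's standard ftplib. The host name, credentials and folder layout below are assumptions for illustration, not taken from GESDB documentation.

```python
# Hypothetical author-side upload mirroring the GESDB workflow: one folder
# per author on the FTP server, with a subfolder per study.
import os
from ftplib import FTP

def upload_study(host, user, password, study, files):
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.mkd(study)           # one subfolder per study, as the text suggests
        ftp.cwd(study)
        for path in files:
            with open(path, "rb") as fh:
                ftp.storbinary(f"STOR {os.path.basename(path)}", fh)

# Example call (placeholder host and files):
# upload_study("gesdb.example.org", "author", "secret",
#              "study_001", ["readme.txt", "scripts.tar.gz", "raw_data.tar.gz"])
```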
2016-11-01T19:18:48.349Z
2016-05-30T00:00:00.000
{ "year": 2016, "sha1": "a30c650bd7bc219495f22cbfb5ba979cdc85ac69", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/database/article-pdf/doi/10.1093/database/baw082/8224693/baw082.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "85d8ed381868678806a78dbcf132c0033bbf12e1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
221837200
pes2o/s2orc
v3-fos-license
The Pattern of Presentation and Incidence of Tuberculosis in Patients on Chronic Hemodialysis Objective: The incidence of tuberculosis (TB) in the dialysis population is much higher than in the general population. Immunosuppression induced by end-stage renal disease (ESRD) modifies the clinical presentation of TB, resulting in atypical signs and symptoms and a more frequent extrapulmonary presentation. This study was undertaken to determine the pattern of presentation and incidence of TB in ESRD patients. Methods: This was a prospective observational study of 200 ESRD patients on chronic maintenance hemodialysis (HD) at Adichunchanagiri Institute of Medical Sciences, BG Nagara, India. TB was diagnosed using clinical, radiological, biochemical, microbiological, and histological findings. Results: The incidence of TB was found to be 12%. It was more common in females and most frequent in the age group 40-49 years. Pleural effusion was the commonest type of TB found (50%). The incidence of extrapulmonary TB was 87.5%. There was a high incidence of TB during the early years after initiation of HD. Patients with TB had a statistically significantly lower BMI compared to non-TB patients (18.42 kg/m2 vs. 22.63 kg/m2; P<0.001). TB had a significant impact on mortality among HD patients. Conclusion: Patients with ESRD on chronic maintenance HD are at increased risk for pulmonary and extrapulmonary tuberculosis, which should be diagnosed with a high index of suspicion. TB-infected patients generally presented worse mortality rates than non-TB-infected patients. INTRODUCTION Tuberculosis (TB) is a widespread infectious disease, most of which occurs in developing countries. People in the developing world contract tuberculosis partly because of impaired immune function [1]. End-stage renal disease patients who are on chronic maintenance hemodialysis have poor cell-mediated immunity. This has led to increased TB rates in the dialysis population [2]. An individual who is on hemodialysis has a 6.9 to 52.5 times increased risk of developing TB compared to a healthy individual [3]. The presentation of TB in dialysis patients differs from the usual, both in clinical features and in investigation findings. There is also a rise in the extrapulmonary presentation of TB [3]. Data on the incidence and prevalence of TB in dialysis patients in India are scanty. We undertook this study to determine the pattern of presentation of TB in the dialysis population and to ascertain the incidence of TB in dialysis patients. PATIENTS AND METHODS Study Setting: The study was done at Adichunchanagiri Institute of Medical Sciences, B.G. Nagar between January 2016 and December 2018. The patients for the study were recruited from the dialysis unit of the hospital. Sample Size: 200 patients on chronic maintenance hemodialysis were studied. Sampling Method: All consecutive ESRD patients on maintenance HD aged 18 years and above, who presented to the dialysis department within the study period and who met the selection criteria, were recruited into this study. Inclusion Criteria (1) All patients on maintenance HD aged 18 years and above who presented to the dialysis department within the study period. (2) All patients who consented to participate in the study. Exclusion Criteria (1) All chronic kidney disease patients who are not on dialysis. (2) Patients who did not consent to join the study.
Method of Data Collection All subjects who met the inclusion criteria were clinically assessed. This involved detailed history taking and a physical examination. A designed questionnaire was used as the study instrument. All subjects gave informed consent for participation in the study, which was approved by the human ethics committee. All of the procedures were in accordance with the Helsinki Declaration of 1975. The above information, together with the subject's biodata and the results of the investigations, was entered into the questionnaire. Method of Data Analysis Microsoft Excel was used to record the data. The Statistical Package for the Social Sciences (SPSS) version 13.0 was used to analyse them. Frequency tables were drawn to show the distribution of data within variables. Contingency tables were drawn to compare two discrete variables. RESULTS A total of 200 patients were recruited in the study. The ages of the respondents ranged between 19 and 74 years. The mean age was 57.2 years. Of all the 200 patients studied, 123 (61.5%) were males; the remaining 77 patients (38.5%) were females. Twenty-four (12%) of the 200 dialysis patients were diagnosed with TB. Among the patients diagnosed with TB, 14 were females while the remaining 10 were males, giving a male to female sex ratio of 1:1.4 (Table 1). The incidence of TB was highest in the age group 40-49 years (9 cases), followed by the age group 50-59 years (6 cases). The lowest incidence was seen in the age groups less than 20 years (1 case) and more than 60 years (2 cases) (Figure 1). There was a significant relationship between low body mass index (BMI) and TB in the dialysis population (Figure 3). The mean BMI of patients without TB was 22.63 kg/m2, while the mean BMI of patients with TB was 18.42 kg/m2 (P value <0.001). Duration on dialysis had a bearing on the incidence of TB. Twelve of the 24 TB cases (50%) were diagnosed in patients who had been on dialysis for <1 year; TB was diagnosed in 8 patients between 1-3 years of HD and in 4 patients with >4 years of HD (Table 3). No statistically significant association was found between the TB and non-TB groups with respect to other predisposing factors for TB such as diabetes, retroviral disease, smoking and occupational exposure. Furthermore, TB contributed significantly to the mortality among hemodialysis patients. On applying Kaplan-Meier survival analysis of TB status against survival time, a significant survival benefit was observed (p<0.001) in non-TB patients (Figure 4). DISCUSSION ESRD patients who are on chronic maintenance HD are prone to many infections, including TB, because of a poor cell-mediated immune response [4]. In our study the incidence of TB among HD patients was 12%. This agrees with the study by Ghulam Hassan Malik et al., in which 14.5% of the cases were found to have TB [5]. Diagnosing TB in the dialysis population is challenging because of its varied presentation, non-specific symptoms and high incidence of extrapulmonary involvement [6]. Extrapulmonary involvement has been reported in 38-80% of TB cases in the dialysis population, whereas in the general population extrapulmonary TB is reported to account for only 4.5% of the total cases of TB [7]. In our study, the incidence of extrapulmonary TB was 87.5%. We found tuberculous pleural effusion to be the most common type of extrapulmonary TB. Classical TB symptoms (cough and hemoptysis) seen in the general population are less frequently reported in dialysis patients [8].
These symptoms are reported on average in 22% of dialysis TB patients [9]. In our study, both of these symptoms were reported in only 8.3% of patients. The most common presenting symptom of TB in our study was weight loss, accounting for 95.8% of all TB patients. This is probably because TB, a chronic debilitating disease, coexists here with another chronic inflammatory condition (chronic kidney disease). The mean body mass index (BMI) of TB-infected patients (18.42 kg/m2) was found to be less than that of the non-TB-infected patients (22.63 kg/m2). This is expected, as the coexistence of TB with end-stage renal disease will ultimately affect the BMI negatively. In our study, most of the TB cases (50%) were diagnosed within the first year of dialysis. These results are similar to what was found in the studies done by Erkoc et al. [10]. This may be due to the generalized debility and profoundly depressed cell-mediated immunity during the early months after initiation of dialysis. Several studies have reported a high mortality of 17% to 75% in CKD patients with TB [11,12]. In our study, a significantly high mortality rate (70%) was observed in TB-infected patients. Poor nutritional status, delay in diagnosis because of the varied presentation, drug toxicity and poor compliance might have led to this increased mortality in these patients. Limitations of our study include: (a) the small number of the study population, (b) delay in diagnosing TB due to its varied presentation in the HD population and (c) the purely observational design. CONCLUSION In conclusion, TB is common in hemodialysis patients, and it worsens their clinical status. The presentation of TB in dialysis patients is mostly atypical, making diagnosis difficult. TB-infected patients generally presented worse mortality rates than non-TB-infected patients.
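The Kaplan-Meier comparison reported above can be reproduced in outline with the lifelines package. The arrays below are placeholders, not the study data, and the group labels are chosen for illustration; a minimal sketch of the analysis, not the authors' actual SPSS workflow.

```python
# Minimal Kaplan-Meier / log-rank sketch analogous to the reported analysis.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder follow-up times (days) and death indicators (1 = died).
days_tb,  died_tb  = [30, 90, 200, 400, 600], [1, 1, 1, 1, 0]
days_non, died_non = [100, 300, 500, 700, 900], [0, 1, 0, 0, 0]

km_tb, km_non = KaplanMeierFitter(), KaplanMeierFitter()
km_tb.fit(days_tb, event_observed=died_tb, label="TB")
km_non.fit(days_non, event_observed=died_non, label="non-TB")

# Log-rank test for a survival difference between the two groups.
res = logrank_test(days_tb, days_non,
                   event_observed_A=died_tb, event_observed_B=died_non)
print(res.p_value)
```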
2020-09-22T04:43:46.135Z
2020-09-20T00:00:00.000
{ "year": 2020, "sha1": "bd86a041bd753857c98f48e7444bbd7f9332132a", "oa_license": null, "oa_url": "https://doi.org/10.36347/sjams.2020.v08i09.029", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bd86a041bd753857c98f48e7444bbd7f9332132a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269817623
pes2o/s2orc
v3-fos-license
EFFECT OF PRE-HARVEST APPLICATION OF GA3 AND POTASSIUM NITRATE ON YIELD AND QUALITY OF PEACH FRUIT: The research study entitled "Effect of pre-harvest application of GA3 and potassium nitrate on yield and quality of peach fruit" was carried out at the Horticulture Research Farm, The University of Agriculture Peshawar, during 2019. The plots were arranged using a Randomized Complete Block Design (RCBD) with two factors and three replications. The experiment was conducted on an already established peach orchard of almost 14-year-old trees of the Early Grande cultivar. Uniform-sized trees were selected and tagged for the experiment, and the plants were managed under uniform cultural practices. The selected plants of peach cv. Early Grande were sprayed with various concentrations of gibberellic acid (GA3) (0, 20, 40, 60 and 80 ppm) and potassium nitrate (KNO3) (0, 1000, 2000 and 3000 ppm) at the fruit set stage. Peach plants sprayed with GA3 and KNO3 were compared with the control. The analysis of the data showed that the foliar application of GA3 and KNO3 significantly influenced the yield and quality attributes of peach. Maximum fruit weight (111.98 g), chlorophyll content (51.00 SPAD), TSS (14.60 °Brix), fruit juice content (85.80%), fruit firmness (2.16 kg.cm-2), yield plant-1 (54.56 kg) and yield ha-1 (15.548 tons) were recorded in plants treated with 80 ppm of GA3. In the case of KNO3, maximum fruit weight (101.20 g), chlorophyll content (49.46 SPAD), TSS (13.38 °Brix), fruit juice content (84.86%), fruit firmness (2.12 kg.cm-2), yield plant-1 (51.56 kg) and yield ha-1 (14.829 tons) were recorded in plants treated with 3000 ppm of KNO3. Minimum days to maturity (72.52) and the minimum number of fruits kg-1 (9.65) were found in plants sprayed with 80 ppm of GA3, while in the case of KNO3 minimum days to maturity (76.92) and the minimum number of fruits kg-1 (10.62) were found in plants sprayed with 3000 ppm of KNO3. The interactive effect of the levels of GA3 and KNO3 was also found significant for some of the parameters. It is therefore concluded that when the peach cultivar Early Grande was sprayed with 80 ppm of GA3 and 3000 ppm of KNO3, both the quality and the yield of the crop improved. INTRODUCTION Peach (Prunus persica) ranks third among the temperate fruits grown throughout the world. It belongs to the family Rosaceae and is mostly grown in the temperate zone between 30-40° N and S latitude; this area is regarded as the commercial production area for quality peach. It is commonly assumed that the peach was developed in Persia; however, China is considered its native country [1].
Peach is famous for its delicious taste, flavor and aroma; it comprises 10-14 percent sugar and 2% protein and is rich in ascorbic acid. Moreover, vitamins A and B, iron, phosphorus and calcium can also be obtained from it. Around the world the most important types are freestone types, while Early Grande, Florda King, 6-A and 8-A are the most popular cultivars dominantly grown in the Peshawar and Swat regions, whereas in Baluchistan Golden Early, Shah Pasand and Shireen are grown. On the basis of market availability, the Swat valley enjoys access not only to the national market but to the international market as well. At present, peach has shown adaptation to other environments, such as the subtropics, and some of the recent varieties even have low chilling requirements for reproductive growth. Despite their good productivity, peaches grown in a subtropical climate are reported to be highly perishable, which reduces their quality and ultimately their export. To overcome this situation farmers usually prefer harvesting before the fruit has reached full maturity, keeping in mind handling and transport facilities. This practice may give some financial relief, but these fruits never reach their full flavor, aroma and consumer acceptance. A considerable part of the crop is lost post-harvest. This problem could possibly be resolved through extensive training of farmers in mixed farming, preparation of pickles from a part of the fruit and other handling operations. Growers may prefer excessive fruit bearing, but this phenomenon severely influences size and quality, causing poor returns or profit [2]. Physiological maturity plays a key role in post-harvest quality and shelf life because peach fruits are susceptible to losses due to rapid softening after harvest. Some plant growth regulators, like gibberellins, have the ability to delay senescence naturally, which may also be used to prolong the harvesting and marketing seasons. Some researchers have reported that pre-harvest application of GA3 promotes growth, improves fruit size and extends the shelf life of peaches. Post-harvest storage life can be extended, and the marketing season prolonged, by delaying the picking of fruits and by using late-season cultivars [3]. To achieve this, the possible ways are conventional breeding of cultivars with potentially prolonged fruit storage and the characteristic of late ripening, or the appropriate use of plant growth regulators to enhance vegetative growth, maturity and fruit development, mainly in the indigenous cultivars [4]. Gibberellic acid (GA3) is an important and extensively used plant growth regulator for manipulating the development and ripening of fruits in a number of crops, including stone fruit [4]. It triggers several processes and pathways, depending on the plant's developmental stage and organ [5]. Recent studies have shown that GA3 is valuable in lowering flower density, which subsequently increases fruit size and decreases the crop load in nectarines and peaches [6]. Gibberellic acid interrupts the development of internal breakdown in nectarines when applied at pit hardening [7]. Moreover, at the pit hardening stage, it strengthens the cell wall, producing a higher percentage of cellulose in the cell wall than in control fruits [8].
Several factors in the pre-harvest stage enhance fruit quality. Therefore, pre-harvest cultural methods play a vital role in maximizing fruit quality. One such practice is the use of plant nutrients like potassium, boron and calcium as pre-harvest foliar sprays during fruit growth. Potassium increases fruit firmness [9]. Foliar sprays of K have been successfully tried to improve fruit quality in peach. The use of potassium sulphate as a foliar spray improves fruit appearance and maximizes the soluble solids content of the fruit [10]. The most important and common source of potassium is muriate of potash, although other sources also work well in comparison with muriate. Excess intake of potassium inhibits magnesium and calcium uptake and is thus undesirable. Mineral nutrition also affects the storage quality of fruit in several ways [11]. Potassium nitrate improves the effectiveness of photosynthesis in plants [12]. The increase in fruit size due to KNO3 treatments may be because K helped to increase the entry of water into the cells by osmotic processes and increased cell expansion, which affected fruit size [13]. The increase in fruit weight with KNO3 application might be due to the fact that N is extremely mobile and the developing fruit acts as a metabolic sink for nutrient elements. Further, nitrogen has been reported to prolong the phase of fruit cell division, resulting in a greater number of cells per fruit [14]. Potassium application increased fruit weight and fruit size in 'Kinnow' mandarin. Larger fruit size and fruit weight in 'Valencia' orange with dormant, post-bloom and summer foliar applications of potassium were reported by Boman [15]. Keeping in view the above-mentioned peach problems and the benefits of gibberellic acid and potassium nitrate, this research was designed to study the effect of pre-harvest application of GA3 and potassium nitrate on the yield and quality of peach fruit (cultivar Early Grande). MATERIAL AND METHODS Experimental site: An experiment was conducted to study the performance of foliar application of gibberellic acid and potassium nitrate on the yield and fruit quality of peach at the Horticultural Research Farm, Malakandher, The University of Agriculture Peshawar, during the year 2019. Experimental design: The experiment was conducted on an already established peach orchard of 14-year-old trees of the Early Grande cultivar. The trees were selected and tagged for the experiment, and the plants were managed under uniform cultural practices. Selected plants were sprayed with gibberellic acid and potassium nitrate separately and in combination. The treatments were applied at the fruit setting stage in three replicates. A two-factorial randomized complete block design (RCBD) was used for statistical analysis. Days to maturity: The number of days from fruit setting to physiological maturity was noted to find out the days to maturity, and the average was calculated. Single fruit weight (g): Five randomly selected fruits from each treated tree were collected, their weight was measured in grams with the help of an electrical balance and the mean was calculated. Chlorophyll content (SPAD): Chlorophyll content was determined in the leaves of randomly selected plants of all treatments with a SPAD meter, and the average was calculated. Number of fruits kg-1: One kilogram of fruit from each treatment was taken, the number of fruits was counted and the average was used for further analysis.
Fruit firmness (kg.cm-2): Fruit firmness was measured on two pared sides of each fruit using a penetrometer fitted with an 8-mm diameter plunger. Fruit juice content (%): Fruits from each treatment were weighed on an electric balance, their juice was extracted with a juicer and weighed, and the juice content was found using the formula: juice content (%) = [juice weight (g) / fruit weight (g)] x 100. Total soluble solids (°Brix): A wedge-shaped slice (approx. 5 g) was removed from each fruit. Slices were passed through an electric juicer, and the total soluble solids were measured using a hand refractometer. Yield tree-1: The fresh fruit yield of each tree was weighed on an electronic balance, and the yield was calculated in kg plant-1. Yield ha-1: The yield per hectare was found by using the following formula: yield ha-1 = number of plants ha-1 x yield plant-1. Statistical analysis: The computer package Statistix version 8.1 was used for analyzing the field and laboratory data by the ANOVA technique, and the means were compared by the LSD test of significance when the F-values were found significant [16]. RESULTS AND DISCUSSION The results for the studied parameters were analyzed, compared and discussed with the results of other researchers in this chapter. Tables 4.1 to 4.9 present the mean data, while Tables 4.1a to 4.9a show the analysis of variance (ANOVA). The original replicated data for the studied parameters are given in Appendices I to IX. Days to maturity: Data regarding the days to maturity of peach are presented in mean Table 4.1, whereas the ANOVA is shown in Table 4.1a. The original replicated data are given in Appendix-I. The analyzed data showed that the days to maturity of peach were significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 was found non-significant for days to maturity of peach. The foliar application of GA3 significantly affected the number of days to maturity of peach at various concentrations. The data for GA3 reveal that the fewest days to maturity of peach (72.52) were noted in plants sprayed with 80 ppm of GA3, which was statistically different from the days to maturity (77.17) of peach when plants were sprayed with 60 ppm of GA3. The highest days to maturity (85.92) were recorded in plants of the control treatment. The data regarding KNO3 show that minimum days to maturity (76.92) were noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the days to maturity (79.32) when KNO3 was sprayed on plants at 2000 ppm. The maximum days to maturity (83.94) were recorded in the control treatments. The above results are supported by the findings of Sankar et al. [17], who noted early maturity of fruits during their work on the effect of plant growth regulators on the growth and yield of "Le Conte" pear. This might be due to the fact that GA3 and KNO3 stimulate the conversion of starch into sugar and ripen the fruits earlier. Chlorophyll content (SPAD): Data on the chlorophyll content of peach are shown in mean Table 4.2, whereas the ANOVA is shown in Table 4.2a. The original replicated data are given in Appendix-II. The analyzed data showed that the chlorophyll content of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 also significantly influenced the chlorophyll content of peach.
Data regarding GA3 reveal that the chlorophyll content of peach was higher (51.00 SPAD) when plants were sprayed with 80 ppm of GA3, which was statistically different from the chlorophyll content (48.40 SPAD) when plants were sprayed with 60 ppm of GA3. The lowest chlorophyll content (43.65 SPAD) was recorded in plants that received no GA3. Data regarding KNO3 show that the maximum chlorophyll content (49.46 SPAD) was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the chlorophyll content (47.90) when KNO3 was sprayed on peach plants at 2000 ppm. The minimum chlorophyll content (44.50 SPAD) was recorded in the control treatments. The interaction of GA3 and KNO3 for chlorophyll content was significant. The highest chlorophyll content (55.80) was found when plants received 80 ppm GA3 and 3000 ppm KNO3. The minimum chlorophyll content (42.10) was noted in the control treatments. The enhancement of chlorophyll content with increasing GA3 and KNO3 may be due to the fact that these plant growth regulators play a role in improving the vegetative growth of the plant, which improves the green pigment in the leaves so that an increase in chlorophyll content takes place. The present results are supported by the findings of Ahmad and Sharma [18], who performed their experiment on the influence of GA3, KNO3 and IAA on the performance of strawberry. An increase in chlorophyll content was also noted by Ingram et al. [19] when they carried out an experiment on peach. The analyzed data showed that the fruit weight of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 was also found significant. Data regarding GA3 show that the fruit weight of peach was higher (111.98 g) when plants were sprayed with 80 ppm of GA3, which was statistically different from the fruit weight (103.88 g) when plants were sprayed with 60 ppm of GA3. The minimum fruit weight (85.10 g) was recorded in plants that received no GA3 and is statistically similar to the fruit weight (87.95 g) of plants that received 20 ppm GA3. Data regarding KNO3 show that the maximum fruit weight (101.20 g) was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the fruit weight (98.10 g) when KNO3 was sprayed on plants at 2000 ppm, while the minimum fruit weight (93.06 g) was recorded in the control treatments. The interaction of GA3 and KNO3 shows that the fruit weight was significantly affected. The highest fruit weight (119.00 g) was found when plants received 80 ppm GA3 and 3000 ppm KNO3. The lowest fruit weight (80.10 g) was noted in the control treatments. The increase in fruit weight with the application of high doses of GA3 and KNO3 recorded in the present experiment is in agreement with the results of Nomis et al.
[20], who found an improvement in fruit weight during their research on the improvement of growth, yield and chemical composition of apple (Pyrus malus) through plant hormones. GA3 and KNO3 help the plants to increase their photosynthesis, which improves the availability of food and hence increases fruit weight. The analyzed data showed that the fruit firmness of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 was found non-significant for fruit firmness. Data regarding GA3 show that the fruit firmness of peach was higher (2.51 kg.cm-2) when plants were sprayed with 80 ppm of GA3, which was statistically different from the fruit firmness (2.00 kg.cm-2) when plants were sprayed with 60 ppm of GA3, while the lowest fruit firmness (1.27 kg.cm-2) was recorded in plants of the control treatment. Data regarding KNO3 show that the maximum fruit firmness (2.12 kg.cm-2) was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the fruit firmness (1.95 kg.cm-2) when KNO3 was sprayed on peach plants at 2000 ppm. The minimum fruit firmness (1.33 kg.cm-2) was recorded in the control treatments. In the present study, an increase in fruit firmness was noted with the increase in the concentrations of gibberellic acid and potassium nitrate. The enhancement of fruit firmness will help to prolong the post-harvest life of peach. These findings are in line with those of Shukla et al. (2007), who recorded an increase in fruit firmness with increasing concentrations of PGRs after performing an experiment on the influence of PGRs on the growth, quality and yield of peach (Prunus persica) var. Florida King. This might be due to the fact that these PGRs slow down the metabolic processes of the fruit, which enhances its firmness. The analyzed data showed that the number of fruits per kg of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 was found non-significant for this parameter. Data regarding GA3 show that the smallest number of fruits per kg of peach (9.65) was recorded when plants were sprayed with 80 ppm of GA3, which was statistically different from the number of fruits per kg (10.25) when plants were sprayed with 60 ppm of GA3. The maximum number of fruits per kg (12.53) was recorded in plants that received no GA3. Data regarding KNO3 show that the minimum number of fruits per kg (10.62) was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the number of fruits kg-1 (10.40) when KNO3 was sprayed on plants at 2000 ppm, whereas the maximum number of fruits per kg (11.94) was recorded in the control treatments. It has been noted that fewer fruits per kilogram were found, owing to better fruit size, when high concentrations of GA3 and KNO3 were sprayed on peach plants. Similar results were found by Mosa et al.
[21] when they performed their research on quince. The present findings are also similar to the records of El-Ese and Finder, who worked on the effect of plant growth regulators on mango; foliar application of GA3 and KNO3 improved fruit productivity and quality as compared to the control [22]. The analyzed data showed that the TSS of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interactive effect was also found significant. The data for GA3 reveal that the TSS of peach was higher (14.60 °Brix) when plants were sprayed with 80 ppm of GA3, which was statistically different from the TSS of peach (12.58 °Brix) when plants were sprayed with 60 ppm of GA3. The minimum TSS (10.37 °Brix) was recorded in plants that received no GA3 and is statistically similar to the TSS (11.01 °Brix) of plants that received 20 ppm GA3. Data regarding KNO3 show that the maximum TSS (13.38 °Brix) was noted when plants were sprayed with 3000 ppm of KNO3, which was not significantly different from the TSS (12.45 °Brix) when KNO3 was sprayed on plants at 2000 ppm. The minimum TSS (10.90 °Brix) was recorded in the control treatments. In the present study an increase in TSS was recorded with increasing concentrations of plant growth regulators such as GA3 and KNO3. This may be due to the fact that GA3 and KNO3 enhance the metabolic conversion of starch and pectin into sugars, which improves the TSS. GA3 and KNO3 also promote the fast transformation of carbohydrates into sugars as well as the rapid movement of metabolites from source to sink, such as the fruits, which ultimately increases the TSS of peach fruits. The same results were found by Shahid and Tariq [23] after performing work on the effect of PGRs on apple, where they recorded an increase in TSS with increasing concentrations of plant growth regulators. The data for GA3 reveal that the fruit juice content of peach was maximum (85.80%) when plants were sprayed with 80 ppm of GA3, which was statistically different from the fruit juice content (84.65%) when plants were sprayed with 60 ppm of GA3. The minimum fruit juice content (80.17%) was recorded in the control treatment plants. Data regarding KNO3 indicate that the maximum fruit juice content (84.86%) was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the fruit juice content (84.18%) when KNO3 was sprayed on plants at 2000 ppm. The minimum fruit juice content (81.84%) was recorded in the control treatments. In the current study an increase in fruit juice content was found with increasing concentrations of GA3 and KNO3. These results are in line with the findings of Azlan et al. [24], who obtained high fruit juice content with an increase in the concentration of plant growth regulators. GA3 and KNO3 help the plants in the availability of nutrients, as a result of which the fruits attain good size, and the improvement in size enhances the pulp of the fruit, which gives a high fruit juice content. The analyzed data showed that the yield tree-1 of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 for yield tree-1 was found non-significant. The data for GA3 reveal that the yield tree-1 of peach was higher (54.56 kg) when plants were sprayed with 80 ppm of GA3, which was statistically different from the yield tree-1 (51.73 kg) when plants were sprayed with 60 ppm of GA3. The minimum yield tree-1 (45.80 kg) was recorded in plants that received no GA3.
Data regarding KNO3 show that the maximum yield tree-1 (51.56 kg) of peach was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the yield tree-1 (50.52 kg) when KNO3 was sprayed on plants at 2000 ppm. The minimum yield tree-1 (48.14 kg) was recorded in the control treatments. It has been found that application of GA3 and KNO3 improves the cropping of peach trees and consequently increases the yield [25]. It might be due to the fact that GA3 and KNO3 help in cell elongation and cell wall formation, which increases fruit size and hence improves the yield tree-1. Similar results were found by Essa et al. [26] after working on the effect of plant hormones on apple and pear. The analyzed data showed that the yield ha-1 of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate. The interaction of GA3 and KNO3 for yield ha-1 was found significant. Data regarding GA3 indicate that the yield ha-1 of peach was higher (15.548 tons) when plants were sprayed with 80 ppm of GA3, which was statistically different from the yield ha-1 (15.115 tons) when plants were sprayed with 60 ppm of GA3. The minimum yield ha-1 (13.137 tons) was recorded in plants that received no GA3. Data on KNO3 show that the maximum yield ha-1 (14.829 tons) was noted when plants were sprayed with 3000 ppm of KNO3, which was significantly different from the yield ha-1 (14.637 tons) when KNO3 was sprayed on plants at 2000 ppm. The minimum yield ha-1 (14.206 tons) was recorded in the control treatments. In the present research we recorded an increase in yield with the increase in the concentrations of GA3 and KNO3. These findings are supported by the findings of Maryam and Sana [27], who found an increase in yield per hectare when they carried out an experiment on the influence of plant growth regulators on the performance of plum. This may be due to the fact that the plant growth regulators GA3 and KNO3 act as ethylene inhibitors, due to which softening of the fruit does not occur and the fruits attain good size and weight, which ultimately enhances the yield per hectare. Table 4.2a: analysis of variance for the chlorophyll content (SPAD) of peach as affected by the application of GA3 and potassium nitrate. Fruit weight (g): Data regarding the weight of peach are presented in mean Table 4.3, whereas the ANOVA is shown in Table 4.3a; the original replicated data are given in Appendix-II. Table 4.3: fruit weight (g) of peach as affected by the application of GA3 and potassium nitrate; means followed by different letters are statistically dissimilar at the 1% significance level (LSD at the 1% level of probability: 4.10 for levels of GA3, 2.50 for levels of KNO3 and 2.10 for the interaction). Table 4.5: number of fruits kg-1 of peach as affected by the application of GA3 and potassium nitrate; means followed by different letters are statistically dissimilar at the 1% significance level (LSD at the 1% level of probability: 0.11 for levels of GA3 and 0.13 for levels of KNO3). Table 4.5a: analysis of variance for the number of fruits kg-1 of peach as affected by the application of GA3 and potassium nitrate. Data regarding the total soluble solids (°Brix) of peach are presented in mean Table 4.6, whereas the ANOVA is shown in Table 4.6a; the original replicated data are given in Appendix-VI. Table 4.6a:
analysis of variance for the total soluble solids (°Brix) of peach as affected by the application of GA3 and potassium nitrate. Data regarding the fruit juice content of peach are shown in mean Table 4.7, whereas the ANOVA is shown in Table 4.7a; the original replicated data are given in Appendix-VII. The analyzed data showed that the fruit juice content of peach was significantly affected by foliar application of gibberellic acid and potassium nitrate; the interaction of GA3 and KNO3 was found non-significant. Table 4.8a: analysis of variance for yield tree-1 (kg) as affected by the application of GA3 and potassium nitrate. Data regarding the yield ha-1 of peach are shown in mean Table 4.9, whereas the ANOVA is shown in Table 4.9a; the original replicated data are given in Appendix-IX. Table 4.9: yield ha-1 (tons) of peach as affected by the application of GA3 and potassium nitrate; means followed by different letters are statistically dissimilar at the 1% significance level (LSD at the 1% level of probability: 510 for levels of GA3 and 170 for levels of KNO3). Table 4.9a: analysis of variance for yield ha-1 (tons) of peach as affected by the application of GA3 and potassium nitrate. Keeping in view the results obtained from the experiment, it is concluded that the yield and quality of peach (cv. Early Grande) increased with increasing levels of GA3 and KNO3. Application of GA3 at 80 ppm and KNO3 at 3000 ppm increased the quality and yield of peach.
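The two-factor RCBD analysis described in the Methods (ANOVA on a 5 x 4 factorial with three blocks, reportedly run in Statistix) can be reproduced in outline with statsmodels. The data frame below is generated with invented values purely for illustration; it is not the experimental data, and the effect sizes in the toy response are arbitrary.

```python
# Sketch of a two-factor RCBD ANOVA (GA3 x KNO3 with blocks) in Python.
import itertools
import random

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

random.seed(1)
rows = []
for ga3, kno3, block in itertools.product([0, 20, 40, 60, 80],       # ppm GA3
                                          [0, 1000, 2000, 3000],     # ppm KNO3
                                          [1, 2, 3]):                # blocks
    # Invented response: mild additive effects plus noise, illustration only.
    y = 45 + 0.08 * ga3 + 0.002 * kno3 + random.gauss(0, 1)
    rows.append({"ga3": ga3, "kno3": kno3, "block": block, "yield_kg": y})

df = pd.DataFrame(rows)

# Fixed factors plus block term; the interaction corresponds to the
# "interactive effect" tested in the paper.
model = smf.ols("yield_kg ~ C(ga3) * C(kno3) + C(block)", data=df).fit()
print(anova_lm(model))
```

Pairwise LSD comparisons, as used in the paper, could then be carried out on the treatment means once the F-test is significant.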
2024-05-18T15:55:48.604Z
2023-07-20T00:00:00.000
{ "year": 2023, "sha1": "51d29d8052a633c4826e5b26c10c1a1009c0b1e3", "oa_license": "CCBYSA", "oa_url": "https://pjosr.com/index.php/pjs/article/download/873/786", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0943b979e7fc02102e7551e1545f88c518a73abb", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
5323909
pes2o/s2orc
v3-fos-license
The character table of a split extension of the Heisenberg group $H_1(q)$ by $Sp(2,q)$, $q$ odd. In this paper we determine the full character table of a certain split extension $H_1(q)\rtimes Sp(2,q)$ of the Heisenberg group $H_1(q)$ by the odd-characteristic symplectic group $Sp(2,q)$. Introduction In his paper [Gér], P. Gérardin constructed the Weil representations of the odd-characteristic symplectic groups using the properties of a certain split extension $H_t(q) \rtimes Sp(2t,q)$ of the Heisenberg group $H_t(q)$ of order $q^{2t+1}$ by the symplectic group $Sp(2t,q)$. In this paper we explicitly determine the character table of this extension, in the case where $t = 1$. A motivation lies in the fact that knowledge of this character table seems to be useful in the study of the restrictions to parabolic subgroups of certain unipotent characters of odd-dimensional orthogonal groups (see [DPW]). Let $V$ be the column vector space of dimension $2t$ over a finite field $F$ of order $q$, where $q$ is odd, and $V$ is provided with a non-degenerate symplectic form $j$. Given $w \in V$, we denote by $w^*$ the element of the dual space (we think of $w^*$ as a row) such that $w^* w_1 = j(w, w_1)/2$. Let $H_t(q)$ be the group consisting of the matrices $$h_{(w,z)} = \begin{pmatrix} 1 & w^* & z \\ 0 & 1_{2t} & w \\ 0 & 0 & 1 \end{pmatrix},$$ where $w \in V$ and $z \in F$. We call this group the Heisenberg group of $V$. $H_t(q)$ is obviously a central extension of $(V,+)$ by $(F,+)$. Furthermore, $H_t(q)$ is a two-step nilpotent group of order $q^{2t+1}$ whose center is isomorphic to $F$ (cf. [Gér, Lemma 2.1]). Let $S$ be the symplectic group associated to the form $j$ and, for each $s \in S$, denote by $sw$ the image of $w$ under the natural action of $S$ on $V$. Then, the map $h_{(w,z)} \mapsto h_{(sw,z)}$ defines an automorphism of $H_t(q)$ fixing pointwise $Z(H_t(q))$. Viewed as acting on matrices, this map is conjugation by the element $\bar{s} = \mathrm{diag}(1, s, 1)$. Let us denote by $G$ the semidirect product $H_t(q) \rtimes Sp(2t,q)$ defined by the above action of $S$. We want to construct the character table of $G$ in the case where $t = 1$. So, $G = H_1(q) \rtimes Sp(2,q)$. In this case, we can write in a unique way a generic element $g$ of $G$ as $$g = g_{(s,w,z)} = \bar{s}\, h_{(w,z)},$$ where $s \in S = Sp(2,q)$ (here we identify $s \in S$ with $\bar{s} \in G$), $w \in V$ and $z \in F$. The conjugacy classes In the sequel, we denote by $(g)$ the conjugacy class of $G$ containing the element $g$, and by $|(g)|$ the size of the conjugacy class $(g)$. The following lemma lists the conjugacy classes of $G$. Proof. Let $g_1 = g_{(s_1,w_1,z_1)}$ and $g_2 = g_{(s_2,w_2,z_2)}$ be two generic elements of $G$. Then the image of $g_1 g_2 g_1^{-1}$ under the natural projection $G \to S$ is $s_1 s_2 s_1^{-1}$. It easily follows that if $g_1$ is conjugate to $g_2$ in $G$, then $s_1$ is conjugate to $s_2$ in $S$. Moreover, if $z_1 \neq z_2$, then the elements $g_{(s_1,0,z_1)}$ and $g_{(s_2,0,z_2)}$ cannot be conjugate in $G$. Recall (e.g., see [Dor, §38]) that $S$ admits elements $b$ of order $q+1$, the so-called 'Singer cycles'. As observed before, for different values of $z$ and $s$ the elements $g_{(s,0,z)}$ belong to $q^2 + q$ distinct conjugacy classes of $G$. Now, an element $g_{(s_1,w_1,z_1)}$ belongs to $C_G(g)$ if and only if $$s_1 s = s s_1 \quad \text{and} \quad s w_1 = w_1. \qquad (1)$$ Since $s$ does not have eigenvalue 1, the condition $s w_1 = w_1$ implies $w_1 = 0$. It follows that $|C_G(g)| = q|C_S(s)|$, and using the information about the centralizers of elements of $S$ contained in [Dor, §38], we obtain the results listed in the statement of the lemma. Next, let us consider elements $g = g_{(s,0,z)} \in \{H(z), I(z)\}$. We argue as above, but note that this time $s$ does admit the eigenvalue 1. This implies that in (1) the condition $s w_1 = w_1$ now admits non-zero solutions $w_1$. Finally, let us consider elements $g = g_{(s,w,z)}$ with $w \neq 0$. Since the condition $w + s^{-1} w_1 = w_1 + s_1^{-1} w$ implies $a = 1$, it follows that $g_{(s_1,w_1,z_1)}$ can be chosen in $q^2$ different ways.
The character table

First of all, we observe that the character table of $SL(2,q) \cong Sp(2,q) \cong G/H_1(q)$ is well known (e.g., see [Dor, §38]), and we refer to that source for notation and for all the information needed in the sequel. Next, note that, as $Z(G) = \{A(z) : z \in F\}$, the values of any irreducible character $\chi$ of $G$ on the classes $(C(z))$ are determined, for all $z \in F$, by its value on $(C(0))$ together with its central character. The same holds for the classes $(D_k(z))$, $(E(z))$, $(F(z))$, $(G_m(z))$, $(H(z))$ and $(I(z))$. So, in the character table we only report the values of a character on $C(0)$, $D_k(0)$ and so on.

Since $G/H_1(q) \cong SL(2,q)$, knowledge of the character table of $SL(2,q)$ gives us $q+4$ characters by inflation.

Next, we construct $q-1$ distinct irreducible characters of $G$ having degree $q$. Denote by $\lambda$ a fixed non-trivial character of $Z(G) \cong (F,+)$. Clearly, each of the $q$ linear characters of $Z(G)$ can be parametrised as $\lambda_u$ ($u \in F$), where $\lambda_u(z) = \lambda(uz)$ for all $z \in F$. In particular, $\lambda_0 = 1_{Z(G)}$. We know by [Gér, Lemma 1.2] that $H_1(q)$ has exactly $q-1$ non-linear irreducible characters $\hat\lambda_u$. Furthermore, by [Gér, Theorem 2.4] the characters $\hat\lambda_u$ can be extended to $G$. We denote such extensions by $\omega_u$ ($u \in F^\times$). The values taken by the characters $\omega_u$ on the elements of $S$ can be found in [Sze, Proposition 2]. We are left to compute the values of the $\omega_u$'s on the classes $(L_m)$ and $(M_m)$, which we do by direct computation.

At this stage, $q$ irreducible characters of $G$ are still missing. We construct them as follows. In the same way, we can prove that the characters $\kappa_{\nu,n}$ are also irreducible. To conclude, we are left to show that the characters $\kappa_{1,n}$ and $\kappa_{\nu,n}$ are pairwise distinct. This can be obtained by proving that $(\kappa_{d,n}, \kappa_{d_1,n_1})_G = 0$ for $d, d_1 \in \{1,\nu\}$, $1 \le n \le \tfrac{q-1}{2}$ and $(d,n) \neq (d_1,n_1)$. As above, we exploit Mackey's formula. The double cosets $Ks(\beta)K$ are dealt with in the same way as before. In the case of the double cosets $Ks(\alpha)K$, for $d = d_1$ we can argue as before. In the case $(d,d_1) = (1,\nu)$, if the restrictions of $\mu^{1,\nu}_n$ and $\mu^{\nu,\nu}_{n_1}$ were the same, then $\lambda_1(a\alpha^2)\,\lambda_{\nu_n}\!\bigl(\tfrac{y}{\alpha}\bigr) = \lambda_\nu(a)\,\lambda_{\nu_{n_1}}(y)$ for all $a, y \in F$. In particular, for $y = 0$, we would get $\lambda_{\alpha^2} = \lambda_\nu$, a contradiction, since $\nu$ is not a square in $F$.
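Since the argument just given leans on Mackey's formula, it may help to recall the intertwining-number version being used; this is the standard statement (found in any character theory text), with $K$ and the characters $\mu$, $\mu_1$ as in the construction above:
$$\bigl(\mathrm{Ind}_K^G\,\mu,\ \mathrm{Ind}_K^G\,\mu_1\bigr)_G \;=\; \sum_{KgK \subseteq G} \bigl(\mu^g\big|_{K^g \cap K},\ \mu_1\big|_{K^g \cap K}\bigr)_{K^g \cap K},$$
where the sum runs over the double cosets $KgK$, $K^g = g^{-1}Kg$, and $\mu^g(x) = \mu(gxg^{-1})$. Showing that each double-coset term vanishes for $(d,n) \neq (d_1,n_1)$, as done above for the cosets $Ks(\alpha)K$ and $Ks(\beta)K$, then gives $(\kappa_{d,n}, \kappa_{d_1,n_1})_G = 0$.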
Triggering collective oscillations by three-flavor effects

Collective flavor transformations in supernovae, caused by neutrino-neutrino interactions, are essentially a two-flavor phenomenon driven by the atmospheric mass difference and the small mixing angle theta_13. In the two-flavor approximation, the initial evolution depends logarithmically on theta_13 and the system remains trapped in an unstable fixed point for theta_13 = 0. However, any effect breaking exact nu_mu-nu_tau equivalence triggers the conversion. Such three-flavor perturbations include radiative corrections to weak interactions, small differences between the nu_mu and nu_tau fluxes, or non-standard interactions. Therefore, extremely small values of theta_13 are in practice equivalent, the fate of the system depending only on the neutrino spectra and their mass ordering.

The transformation arising from an instability implies that the processed spectrum is independent of the mixing angle as long as it is small (collective transformations in the presence of matter are suppressed for maximal mixing because one projects on the interaction direction, but this $\cos 2\theta$ effect is irrelevant if $\theta \ll 1$ [20,22]). In a two-flavor treatment, $\theta$ enters only as a trigger of the subsequent evolution, so in the SN context a very small $\theta$ shifts the onset radius for collective transformations logarithmically [8,9]. In numerical studies, choosing $\theta$ as small as allowed by machine precision barely impacts the processed spectrum, although for $\theta = 0$ the system remains stuck in the unstable fixed-point solution defined by the initial conditions. Such a situation looks unphysical: it does not seem plausible that, even in principle, one can distinguish between $\theta$ being exactly zero and some arbitrarily small but non-zero value. One may speculate, for example, that quantum fluctuations could trigger the transformation even for $\theta = 0$ [8], noting that collective transformations actually preserve flavor lepton number.

The purpose of our paper is to show that in real life we do not need to worry about such subtleties. If $\theta_{13}$ is sufficiently small, three-flavor effects [20,23-27] trigger the instability and the logarithmic $\theta_{13}$ dependence saturates at a small but non-zero value.

Why are collective SN neutrino transformations an effectively two-flavor phenomenon anyway? In the outer layers the temperature is too low to support thermal $\mu$ or $\tau$ populations, obviating the possibility of distinguishing between $\nu_\mu$ and $\nu_\tau$ by charged-current reactions. Ignoring radiative corrections, these flavors are exactly equivalent, allowing us to define new flavors $\nu_x$ and $\nu_y$ such that effectively $\theta_{23} = 0$. If in addition $\theta_{13} = 0$, one of the new states, say $\nu_y$, becomes equivalent to $\nu_3$, decoupling entirely from the other flavors: we are left with a two-flavor system consisting of $\nu_e$ and $\nu_x$, governed by $\theta_{12}$ and the solar mass difference $\delta m^2$. This system can show collective transformations. However, the solar mass hierarchy is normal, suppressing the dominant transformation effect if the primary fluxes show the usual excess of $\nu_e$ and $\bar\nu_e$. Moreover, the collective oscillation region lies at larger radii because the solar mass difference is small, so multiple-split effects are more easily suppressed by adiabaticity violation [12,13]. However, collective oscillations driven by the solar mass difference do modify the spectra in some scenarios [21].
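The two-flavor reduction just sketched can be made explicit with the standard parametrization $U = R_{23}R_{13}R_{12}$ used later in the text; the following lines are a sketch of that bookkeeping rather than a quotation of the paper's equations:
$$\begin{pmatrix} \nu_e \\ \nu_x \\ \nu_y \end{pmatrix} = R_{23}^\dagger \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix}, \qquad \theta_{13} = 0 \;\Longrightarrow\; R_{23}^\dagger U = R_{12}(\theta_{12}),$$
so in the new basis the mixing matrix is a pure 1-2 rotation: $\nu_y$ coincides with the mass eigenstate $\nu_3$ and drops out, leaving the $\nu_e$-$\nu_x$ system governed by $\theta_{12}$ and $\delta m^2$, exactly as stated above.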
Once we allow for a small but non-vanishing $\theta_{13}$, collective $\nu_e \leftrightarrow \nu_y$ transformations become possible that are driven by the atmospheric mass difference $\Delta m^2$ and occur in the usual region of large neutrino flux densities. Our main point is that for $\theta_{13} = 0$ these transformations are triggered by small perturbations of the exact $\nu_\mu$-$\nu_\tau$ equivalence, because $\nu_e$ and $\nu_x$ then no longer form an exact two-flavor system. Such perturbations include radiative corrections to the $\nu_\mu$ and $\nu_\tau$ matter effect [28,29], or small $\nu_\mu$-$\nu_\tau$ flux differences. The latter can be caused by the presence of muons in deeper layers of the SN core and by radiative corrections to the interaction rates, modifying the relative $\nu_\mu$ and $\nu_\tau$ opacities. Non-standard interactions can also break the $\nu_\mu$-$\nu_\tau$ symmetry [27], a possibility that we will not pursue here.

We begin in Sec. II with a brief discussion of the equations of motion. In Sec. III we prove that for $\theta_{13} = 0$ and exact $\nu_\mu$-$\nu_\tau$ equivalence, collective oscillations driven by the atmospheric mass difference are not possible, justifying the usual two-flavor treatment. In Sec. IV we study concrete departures from $\nu_\mu$-$\nu_\tau$ equivalence in the limit $\theta_{13} = 0$ and show that collective transformations are triggered by these effects. In schematic models we compare them with an equivalent $\theta_{13}$ that would trigger collective transformations at the same onset radius. In Sec. V we consider a realistic SN and study the competition between a small $\theta_{13}$ and a small $\nu_\mu$-$\nu_\tau$ flux difference. We conclude with a brief summary in Sec. VI.

II. EQUATIONS OF MOTION

A. Matrix form

For our conceptual discussion it is sufficient to consider the simplest three-flavor system showing collective transformations. We take the neutrino ensemble to be homogeneous and isotropic, study its time evolution, and describe mixed neutrinos by matrices of densities $\varrho_E$ for each energy mode $E$. We use an overbar to represent the corresponding quantities for antineutrinos. Diagonal entries are the usual occupation numbers, whereas off-diagonal entries encode phase information. The equations of motion (EoM) are of the commutator form $\mathrm{i}\,\partial_t \varrho_E = [\mathsf{H}_E, \varrho_E]$. The Hamiltonian $3\times 3$ matrix is made up of the vacuum, matter, and neutrino-neutrino terms. Here $\mathsf{H}^{\rm vac}_E = U M^2 U^\dagger/2E$, with $U = R_{23}R_{13}R_{12}$ the neutrino mixing matrix and $M = \mathrm{diag}(m_1, m_2, m_3)$ the mass matrix. We use the standard notation $R_{ij}$ for the rotation matrix between the $i$ and $j$ mass eigenstates, with argument $\theta_{ij}$. For antineutrinos, the vacuum Hamiltonian picks up a relative minus sign ($\bar{\mathsf H}^{\rm vac}_E = -\mathsf H^{\rm vac}_E$), whereas all other pieces remain identical. For oscillation studies, we may neglect terms proportional to the identity and may write the vacuum term in the mass basis in terms of the two mass-squared splittings. The solar mass-squared difference $\delta m^2 > 0$, whereas the atmospheric one $\Delta m^2 < 0$ for the inverted mass hierarchy (IH) and $\Delta m^2 > 0$ for the normal hierarchy (NH). The matter term, due to neutrino interactions with the charged leptons, is diagonal in the flavor basis and proportional to the net charged-lepton densities, where $N_e$ is the net electron density (electrons minus positrons), and similarly for the other leptons. The second-order term is due to radiative corrections and can be non-negligible at high densities [28]. The contribution associated to $\nu$-$\nu$ interactions is built from the matrices of densities themselves; multi-angle effects are ignored in our isotropic system. Radiative corrections can be important in dense neutrino gases at the second order [29].
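As a concrete illustration of the commutator EoM just stated, the following minimal Python sketch integrates $\mathrm{i}\,\partial_t\varrho = [\mathsf H, \varrho]$ for a single energy mode with only the vacuum and leading matter terms. All numerical values, the toy mass spectrum, and the omission of the $\nu$-$\nu$ term are illustrative assumptions of this sketch, not inputs of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (units: km^-1); these are assumptions, not the paper's.
omega = 1.0          # atmospheric Dm^2 / 2E
eps_omega = 0.03     # assumed solar-to-atmospheric splitting ratio
lam = 10.0           # matter potential sqrt(2) G_F N_e
th12, th13, th23 = 0.6, 0.1, np.pi / 4

def rot(i, j, th):
    """Real rotation in the (i, j) plane of a 3x3 identity."""
    R = np.eye(3, dtype=complex)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = np.sin(th), -np.sin(th)
    return R

U = rot(1, 2, th23) @ rot(0, 2, th13) @ rot(0, 1, th12)   # U = R23 R13 R12
H = omega * U @ np.diag([0.0, eps_omega, 1.0]) @ U.conj().T \
    + np.diag([lam, 0.0, 0.0])                            # vacuum + matter

rho0 = np.diag([1.0, 0.0, 0.0]).astype(complex)           # pure nu_e mode

def rhs(t, y):
    rho = y.reshape(3, 3)
    return (-1j * (H @ rho - rho @ H)).ravel()            # i d(rho)/dt = [H, rho]

sol = solve_ivp(rhs, (0.0, 10.0), rho0.ravel(), rtol=1e-9, atol=1e-11)
print("final occupation numbers:",
      np.round(sol.y[:, -1].reshape(3, 3).diagonal().real, 4))
```

One convenient detail: scipy's `solve_ivp` accepts a complex state vector, so the density matrix can be evolved directly in flattened form without splitting real and imaginary parts.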
B. New interaction basis: e-x-y

Since we are concerned with a system where the $\nu_\mu$ and $\nu_\tau$ flavors are exactly or approximately equivalent, it is more useful to introduce new flavors $x$ and $y$ that simplify the mixing matrix [24]. Here $R^\dagger_{23}$ "unmixes" $\nu_\mu$ and $\nu_\tau$ with the angle $\theta_{23}$. For $\theta_{13} = 0$, $\nu_y$ is the mass eigenstate $\nu_3$. Henceforth the interaction basis is understood to be the $e$-$x$-$y$ basis. This basis is useful because it explicitly removes $\theta_{23}$ from the formalism if the Hamiltonian and the initial conditions do not distinguish $\nu_\mu$ and $\nu_\tau$. Naturally, the evolution of $\nu_e$ and $\bar\nu_e$ is independent of $\theta_{23}$ in this approximation.

C. Expansion in Gell-Mann matrices

The commutator structure of the equations of motion ensures that the trace of $\varrho_E$ is conserved, so we may re-define the matrices to be traceless by subtracting a term proportional to the identity matrix $I$. The traceless part can be expanded in Gell-Mann matrices $\Lambda_i$, with the expansion coefficients forming an 8-vector $X$. Thus one can project any matrix onto its 8-vector of components, where $\Lambda$ denotes the 8-vector of $\Lambda$ matrices with a fixed trace normalization. The neutrino matrices of density can now be decomposed, as in Eq. (7), in terms of an 8-dimensional polarization vector $\mathsf P_E$; here $n_E$ is the total neutrino density per unit energy interval. Analogous expressions pertain to antineutrinos.

The different parts of the Hamiltonian can be similarly decomposed. The vacuum Hamiltonian defines the "magnetic field" $\mathsf B$, where $\omega_E = \Delta m^2/(2E)$. Ignoring a term proportional to the identity, the matter term can be written in terms of the leptonic "magnetic field" $\mathsf L$. Here $\lambda = \sqrt{2}\,G_F N_e$ is the effective MSW potential. For later reference we have included $\epsilon_\lambda \ll 1$, encoding radiative corrections or small $\nu_\mu$-$\nu_\tau$ flux differences. The $\nu$-$\nu$ interaction term, finally, involves the effective neutrino-neutrino interaction energy $\mu = \sqrt{2}\,G_F (N + \bar N)$. Here $N = N_{\nu_e} + N_{\nu_\mu} + N_{\nu_\tau}$ is the overall neutrino density, and $\bar N$ the corresponding antineutrino density. The collective vector $\mathsf D$ is built explicitly from the polarization vectors. The EoM then take the precession form
$$\dot{\mathsf P}_E = \bigl(\,\omega_E \mathsf B + \lambda \mathsf L + \mu \mathsf D\,\bigr) \times \mathsf P_E\,,$$
where the 8-dimensional vector product is defined as $(\mathsf a \times \mathsf b)_i = f_{ijk}\, a_j b_k$. In this form, the problem resembles a set of polarization vectors $\mathsf P_E$ precessing under the influence of the combined magnetic fields $\mathsf B$, $\mathsf L$, and the mean field $\mathsf D$ due to all polarization vectors.
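The 8-dimensional cross product $(\mathsf a \times \mathsf b)_i = f_{ijk} a_j b_k$ is straightforward to implement numerically. The sketch below computes the SU(3) structure constants directly from the Gell-Mann matrices; the normalization $\mathrm{Tr}(\Lambda_i\Lambda_j) = 2\delta_{ij}$ is the conventional choice and is assumed here, since the paper's own normalization statement did not survive extraction:

```python
import numpy as np

# Gell-Mann matrices, assuming the standard normalization Tr(L_i L_j) = 2 d_ij.
L = np.zeros((8, 3, 3), dtype=complex)
L[0][0, 1] = L[0][1, 0] = 1
L[1][0, 1], L[1][1, 0] = -1j, 1j
L[2][0, 0], L[2][1, 1] = 1, -1
L[3][0, 2] = L[3][2, 0] = 1
L[4][0, 2], L[4][2, 0] = -1j, 1j
L[5][1, 2] = L[5][2, 1] = 1
L[6][1, 2], L[6][2, 1] = -1j, 1j
L[7] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

# [L_i, L_j] = 2 i f_ijk L_k   =>   f_ijk = Tr([L_i, L_j] L_k) / (4 i)
f = np.zeros((8, 8, 8))
for i in range(8):
    for j in range(8):
        comm = L[i] @ L[j] - L[j] @ L[i]
        for k in range(8):
            f[i, j, k] = (np.trace(comm @ L[k]) / 4j).real

def cross8(a, b):
    """The 8-dimensional vector product (a x b)_i = f_ijk a_j b_k."""
    return np.einsum('ijk,j,k->i', f, a, b)

print(f[0, 1, 2])   # f_123 = 1.0 in this convention
```

With `f` in hand, the precession EoM above becomes a one-liner, `dP = cross8(omega*B + lam*L8 + mu*D, P)`, which is convenient for checking the component bookkeeping used in the next section.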
III. EXACT $\nu_\mu$-$\nu_\tau$ EQUIVALENCE

In the approximation that nothing distinguishes between the $\nu_\mu$ and $\nu_\tau$ flavors, 2-3 mixing is physically irrelevant and we expect oscillations to reduce to a two-flavor problem. In fact, for $\theta_{13} = 0$, no collective effects driven by the atmospheric mass difference occur. To prove this point we study a simplified system consisting of two Bloch vectors, representing equal numbers of neutrinos and antineutrinos, with the single vacuum oscillation frequency $\omega$. For $\theta_{13} = 0$ the magnetic field simplifies to a combination of the unit vectors $\mathsf e_i$ of the 8-dimensional flavor space. Assuming exact $\nu_\mu$-$\nu_\tau$ equivalence implies $\epsilon_\lambda = 0$, and therefore the leptonic field is purely diagonal. Likewise, if the initial $\nu_\mu$ and $\nu_\tau$ densities are equal, the initial polarization vectors $\mathsf P = \bar{\mathsf P}$ are proportional to the same linear combination of $\mathsf e_3$ and $\mathsf e_8$. The static vectors $\mathsf B$ and $\mathsf L$ have components in the 1, 3, and 8 directions, whereas the only dynamical component of $\mathsf H$, the self-term $\mathsf D$, develops an $\mathsf e_2$ component. The EoM of the $\mathsf D$ vector derives from the difference of Eqs. (16) and (17),
$$\dot{\mathsf D} = -\epsilon_\omega\,\omega\,\bigl[\,S_{12}(P_3 + \bar P_3) + C_{12}(P_1 + \bar P_1)\,\bigr]\,\mathsf e_2\,. \tag{20}$$
In other words, the vector $\mathsf H = \omega\mathsf B + \lambda\mathsf L + \mu\mathsf D$ has components only in the 1, 2, 3 and 8 directions and thus cannot mix $\nu_e$ and $\nu_y$.

The same conclusion is reached if we consider the EoM in matrix form. The part consisting of the $e$ and $x$ flavors and the $y$ flavor form separate block matrices, both for the Hamiltonian matrix and for the matrices of densities. In the $e$-$x$-$y$ basis and with $\theta_{13} = 0$, the third mass eigenstate $\nu_3$ is not admixed to the $\nu_e$ and $\nu_x$ flavors.

IV. BROKEN $\nu_\mu$-$\nu_\tau$ EQUIVALENCE

Even in the absence of thermal $\mu$ or $\tau$ populations, the exact $\nu_\mu$-$\nu_\tau$ equivalence is broken by several subleading effects that distinguish between these flavors. In this case, the $\nu_3$ flavor does not fully decouple from the $\nu_e$-$\nu_x$ system, and collective transitions driven by the atmospheric mass difference are inevitably triggered. The first such effect is provided by radiative corrections to the neutrino matter effect, where charged leptons appear in the loop. Even in the absence of ordinary matter, similar radiative corrections arise for neutrino-neutrino interactions, although the detailed structure of the EoM becomes more complicated [29]. Since collective effects require a large density of neutrinos, radiative corrections and thus the breaking of $\nu_\mu$-$\nu_\tau$ equivalence are unavoidable. Finally, we note that differences in the initial $\nu_\mu$ and $\nu_\tau$ fluxes also provide the required instability.

A. Radiative corrections to the $\nu_\tau$ matter effect

The presence of matter (i.e., $\lambda \neq 0$) has a similar effect as decreasing the effective mixing angle, although in detail the dynamics is more complicated. In a frame rotating around $\mathsf L$ there is a fast-rotating transverse $\mathsf B$-field that disturbs the system and triggers the evolution [8]. However, if matter effects distinguish $\nu_\mu$ and $\nu_\tau$, they can play a more important role in triggering collective oscillations, particularly for a small mixing angle. The largest correction is for $\nu_\tau$, where a background of ordinary matter with baryon density $N_B$ has the same refractive effect on $\nu_\tau$ and $\bar\nu_\tau$ that would be provided by a density of real $\tau$ leptons ($N^{\rm eff}_\tau = 2.6\times 10^{-5}\,N_B$) [28]. This subleading correction is parametrized by $\epsilon_\lambda$ relative to the usual matter effect $\lambda$ in Eq. (13). Off-diagonal terms in the Hamiltonian generated by $\epsilon_\lambda$ will mix $\nu_e$ and $\nu_y$. In the limit $\theta_{13} \to 0$, when $\nu_3$ would otherwise have decoupled, these terms play a role similar to $\theta_{13}$, and recouple $\nu_3$ to $\nu_e$. We can estimate the effective angle $\tilde\theta_{13}$ generated by these subleading matter effects by diagonalizing the Hamiltonian instantaneously in matter, where $\mathsf M$ and $\mathsf U$ denote the mass and mixing matrices in matter. Using the standard parametrization, Eq. (21) can be solved for the parameters of $\mathsf U$ and $\mathsf M$. We find that the matter-induced 1-3 mixing is
$$\tan 2\tilde\theta_{13} = \frac{2\,\epsilon_\lambda\, \epsilon_\omega\, \lambda\, S_{12} S_{23}}{2\omega + 2\epsilon_\omega \omega C_{12} + (\epsilon_\lambda - 2)\lambda + \epsilon_\lambda \lambda C_{23}}\,, \tag{22}$$
where we ignore terms beyond the leading order in $\epsilon_\lambda$. This equation should be interpreted as providing the critical value of $\theta_{13}$: if $\theta_{13} \lesssim \tilde\theta_{13}$, the breaking of $\nu_\mu$-$\nu_\tau$ equivalence is more important than $\theta_{13}$ itself, and the two-flavor approximation is no longer valid.

To demonstrate this, we consider a toy model with one Bloch vector for neutrinos, $\mathsf P$, and one for antineutrinos, $\bar{\mathsf P}$, of equal length. In the two-flavor case this would be the simple flavor pendulum without intrinsic angular momentum. A non-vanishing mixing angle triggers an exponential growth of the misalignment between the force direction and the initial orientation. The time (or distance) after which an $\mathcal O(1)$ deviation from the initial orientation is achieved grows logarithmically with decreasing mixing angle.
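In the two-flavor limit, the toy model just described can be integrated directly. The sketch below evolves the two Bloch vectors $\mathsf P$ and $\bar{\mathsf P}$ and records the onset time, defined (as in the next paragraph of the text) via a 1% change in the $\nu_e$ content. The sign conventions, parameter values, and the choice of an unstable (inverted-pendulum-like) orientation are simplified assumptions of this sketch, not the paper's full 8-dimensional setup; the point is only the logarithmic growth of the onset time as the mixing angle decreases:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, mu = 1.0, 10.0          # illustrative frequencies, km^-1

def onset_time(theta, change=0.01):
    # B oriented so the flavour-aligned start is the unstable configuration
    # (an assumption of this two-flavour sketch).
    B = np.array([np.sin(2 * theta), 0.0, np.cos(2 * theta)])
    y0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])   # P and Pbar, pure nu_e

    def rhs(t, y):
        P, Pb = y[:3], y[3:]
        D = P - Pb
        return np.concatenate([np.cross(+omega * B + mu * D, P),
                               np.cross(-omega * B + mu * D, Pb)])

    # nu_e content is (1 + P_z)/2; a 1% change means P_z falling to 0.98.
    hit = lambda t, y: y[2] - (1.0 - 2.0 * change)
    hit.terminal, hit.direction = True, -1

    sol = solve_ivp(rhs, (0.0, 200.0), y0, events=hit,
                    rtol=1e-10, atol=1e-12, max_step=0.1)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

for th in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"theta = {th:.0e}:  onset time = {onset_time(th):6.2f}")
```

Each factor of 100 reduction in the angle should add a roughly constant increment to the onset time, which is the logarithmic dependence discussed above.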
We define the radius at which there is a change of 1% in the $\nu_e$ flavor content as the onset radius. In Fig. 1 we show the onset radius for this system and how it depends on $\theta_{13}$. We use $\lambda = 100~{\rm km}^{-1}$, $\epsilon_\lambda = 5\times 10^{-5}$, $\mu = 10~{\rm km}^{-1}$, $\omega = 1~{\rm km}^{-1}$, and the mixing angles $\theta_{12} = 0.6$, $\theta_{23} = \pi/4$. Using Eq. (22) for the chosen parameters, we expect the matter-induced mixing to become important at $\theta_{13} \approx 10^{-7}$. This is in good agreement with what we find: the logarithmic increase of the onset radius stops below this critical mixing angle.

B. Different primary $\nu_\mu$ and $\nu_\tau$ fluxes

Another way to break the exact $\nu_\mu$-$\nu_\tau$ equivalence is through an initial flux difference. Although this effect is inevitable, it has not been studied in detail. Deep in a SN core, the temperature is large enough to support a thermal muon population, slightly modifying the primary fluxes. Moreover, the same radiative effects that create a refractive difference between $\nu_\mu$ and $\nu_\tau$ also modify the scattering rates, so the two flavors have slightly different opacities and therefore different thermally driven fluxes. Obviously, the discrete nature of particle emission and thermal fluctuations of the regions emitting the neutrinos would in any case make the two spectra different.

As a toy example we again assume two equal Bloch vectors $\mathsf P$ and $\bar{\mathsf P}$. The difference between the initial densities of $\nu_\mu$ and $\nu_\tau$ is parameterized by a small asymmetry $\epsilon_N$, and the same for antineutrinos. Ignoring the matter effect, the explicit EoM dynamically generate components of $\mathsf D$ along $\mathsf e_5$ and $\mathsf e_7$, even though initially $\mathsf H$ has components only in the 1, 2, 3, and 8 directions. Therefore $\mathsf H$ develops a component along $\mathsf e_5$, leading to a mixing of $\nu_e$ and $\nu_y$.

It is not straightforward to define an effective mixing angle in this case. The effect of the different $\nu_\mu$ and $\nu_\tau$ fluxes is to provide terms proportional to $\epsilon_N\,\mu$ in the $\nu_x$-$\nu_y$ block of the Hamiltonian. These terms are themselves dynamical (time-dependent), and are communicated to the $\nu_e$-$\nu_y$ block by the mixing between $\nu_e$ and $\nu_x$. The effective mixing angle can be thought of as the initial misalignment of $\mathsf P$ from the Hamiltonian, which is approximately proportional to $\epsilon_N/(\omega + \lambda)$. As this is a three-flavor effect, it must vanish when $\epsilon_\omega \to 0$. We therefore expect the logarithmic increase of $r_{\rm onset}$ with decreasing $\theta_{13}$ to saturate at $\theta_{13}$ approximately equal to the effective mixing $\tilde\theta_{13}$ induced by the unequal $\nu_\mu$-$\nu_\tau$ fluxes. In Fig. 2 we plot $r_{\rm onset}$ for this system as a function of $\theta_{13}$ for different values of $\epsilon_N$, illustrating this effect. We use the frequencies $(\omega, \mu, \lambda) = (1, 10, 100)~{\rm km}^{-1}$, and the mixing angles $\sin^2\theta_{12} = 0.314$ and $\sin^2\theta_{23} = 0.5$. Using Eq. (25) for the chosen parameters, we expect the flux-asymmetry-induced mixing to become important at $\theta_{13} \sim \epsilon_N/(3\times 10^{-3})$. This is in good agreement with what we find: the logarithmic increase of the onset radius stops below the estimated value of the mixing angle.

V. REALISTIC SUPERNOVA

We finally consider a more realistic SN example in a single-angle treatment. The neutrinos are assumed to be emitted isotropically from the neutrinosphere at $R_\nu = 10$ km. We assume equal luminosities for all neutrino flavors and thermal spectra with average energies $\langle E_{\nu_e}\rangle = 10$, $\langle E_{\bar\nu_e}\rangle = 15$, and $\langle E_{\nu_{\mu,\tau}}\rangle = 20$ MeV. The electron density of the matter is the same as in [30] at $t = 1$ s after the bounce. For the neutrino mixing parameters we use $\Delta m^2 = 2\times 10^{-3}~{\rm eV}^2$, $\delta m^2 = 8\times 10^{-5}~{\rm eV}^2$, $\sin^2\theta_{12} = 0.31$, and $\sin^2\theta_{23} = 0.50$.
With these assumptions, we have calculated the onset radius for collective transformations as a function of $\theta_{13}$, assuming a flux difference $\epsilon_N = 10^{-5}$ and ignoring radiative corrections to the matter effect. Our results are shown in Fig. 3. For $\theta_{13} \lesssim 10^{-3}$, the onset radius is not sensitive to $\theta_{13}$, as expected from Eq. (25). In a realistic SN, even when the $\nu_\mu$-$\nu_\tau$ equivalence is perfect, the onset radius depends only very weakly on $\theta_{13}$. The unavoidable breaking of this symmetry by radiative corrections and by the presence of charged muons in the deep SN core almost completely removes the $\theta_{13}$ dependence in the three-flavor context. Of course, MSW transitions caused by the ordinary matter effect depend on $\theta_{13}$ in the usual way.

VI. CONCLUSIONS

Collective oscillations are an instability-driven phenomenon. The system transits from its initial unstable configuration to a stable one, triggered by the influence of a disturbance. Usually one thinks of this disturbance as being provided by the small offset between the relevant flavor and propagation eigenstates, encoded in the mixing angle $\theta_{13}$. When this mixing angle vanishes exactly, one would naively think that the oscillations do not take place. However, one should recognize that a system sitting on an unstable fixed point is bound to be disturbed, unless there are symmetries that forbid all perturbations capable of providing an initial disturbance. In the neutrino oscillation context, this symmetry happens to be the $\mu$-$\tau$ symmetry, which is explicitly broken. Consequently, collective oscillations are inevitable. This means that collective oscillations take place as usual even at $\theta_{13} = 0$, once triggered by subleading effects.

Another fundamental point is that SN neutrino oscillations are not sensitive to arbitrarily small values of the mixing angle. The fantastic sensitivity to an arbitrarily small mixing angle, as it appears in two-flavor analyses, disappears when one takes into account other subleading corrections. As a result, strategies outlined in Refs. [15,16] may be useful for the determination of the mass hierarchy if the relevant signals are observed, but not for the determination of a non-zero $\theta_{13}$ itself. On the other hand, in principle we could determine the neutrino mass hierarchy even if $\theta_{13}$ were exactly zero, which might end up being our only hope if $\theta_{13}$ is beyond the reach of laboratory-based oscillation experiments.
A Dominant-negative Form of Mouse SOX2 Induces Trophectoderm Differentiation and Progressive Polyploidy in Mouse Embryonic Stem Cells*

SOX2 plays an important role in early embryogenesis by cooperating with OCT4 in regulating gene expression in fertilized eggs, yet the precise mechanism through which SOX2 accomplishes this important function remains poorly understood. Here, we describe the identification of two nuclear localization signals (NLS) in SOX2 and the generation of a dominant-negative mutant (Dmu-mSox2) by mutating these two NLS in its high mobility group domain. Characterization of this mutant demonstrated that SOX2 shuttles between the cytoplasm and nucleus using these two NLS. The mutant has lost its ability to interact with OCT4, but remains competent to interact with wild-type SOX2. Functionally, Dmu-mSox2 is inactive and unable to cooperate with OCT4 in transactivating target promoters bearing its binding sites. However, Dmu-mSox2 is able to inhibit the activity of wild-type SOX2 and subsequently suppress the activity of downstream genes such as Oct4 and Nanog. When stably expressed in embryonic stem (ES) cells, Dmu-mSox2 triggered progressive doublings of cell ploidy (>8N), leading to differentiation into the trophectoderm lineage. Knockdown of Sox2 by small interfering RNA also induced trophectoderm differentiation and polyploid formation in mouse ES cells. These results suggest that SOX2 maintains stem cell pluripotency by shuttling between the nucleus and cytoplasm in cooperation with OCT4 to prevent trophectoderm differentiation and polyploid formation in ES cells.

Stem cell-based therapies are promising solutions to many current unmet medical needs such as diabetes and Parkinson disease. Given their potential to differentiate into virtually all types of cells in the human body, human embryonic stem (ES) cells may be used to replace any aged or damaged cells under various pathological conditions (1). To accomplish these therapeutic goals, formidable obstacles must be overcome in such areas as the procurement and maintenance of ES cells in pluripotent states and efficient methods to differentiate the ES cells to a specific cell type suitable for transplantation. Although progress has been made in the derivation of human ES cells, the lack of understanding of stem cell pluripotency and differentiation may hamper any serious attempt to harness the potential of stem cell-based therapies (2). Thus, investigations into the molecular and cellular mechanisms of stem cell biology such as self-renewal and pluripotency are essential steps in developing the necessary tools for utilizing ES cells in future therapeutic interventions (2-4).

One of the key properties of ES cells is their ability to undergo self-renewal indefinitely (2). This property appears to be regulated by a network of transcription factors (2,5). The first such factor recognized to play a critical role in stem cell pluripotency is the homeodomain transcription factor OCT4, a deficiency in which fails to support the development of the pluripotent inner cell mass in early embryogenesis (2,3). Most recently, another homeodomain protein, NANOG, has also been shown to be involved in maintaining the inner cell mass in early embryogenesis and the self-renewal of ES cells in culture (6,7). Given the common pathway these factors appear to regulate, recent studies have revealed that both factors indeed regulate similar sets of genes by co-occupying adjacent sites within their regulatory regions (5).
We (25) and others (8) have demonstrated recently that OCT4, NANOG, and FOXD3 are part of a regulatory network in that they regulate each other's activity at the transcriptional level. Furthermore, our work suggests that these factors may form a negative feedback loop that limits the activity of OCT4 and ensures the proper expression of these factors in a dynamic fashion in ES cells. This observation is consistent with the hypothesis that transcription factors play a critical role in controlling stem cell self-renewal in a cooperative fashion (2,3).

SOX2 is a transcription factor with a high mobility group (HMG) domain and has also been implicated in the regulation of stem cell pluripotency by maintaining the expression of Fgf4 in the inner cell mass (9). Gene targeting experiments revealed a cell-autonomous requirement for SOX2 in both the epiblast and extraembryonic ectoderm (10). Mechanistically, SOX2 functions to regulate downstream genes through cooperation with OCT4 by binding to adjacent sites (8, 10-14). In addition to its role in early embryogenesis, when SOX2 and OCT4 are coexpressed in pluripotent cells, SOX2 appears to play an additional role at later developmental stages, such as neural differentiation, by consolidating the neural identity of early neuroectoderm cells in Xenopus and supporting the proliferation and/or maintenance of neural stem cells in mice (15,16). Indeed, mutations in SOX2 have been identified in individuals with anophthalmia (17,18). SOX2 is also expressed in pluripotent cells in the extraembryonic ectoderm (10). These observations raise the possibility that SOX2 may be able to mediate gene expression independently of its known partner, OCT4, yet the mechanism through which SOX2 can regulate gene expression and cell differentiation remains poorly understood.

Unlike OCT4, SOX2 appears to be localized in both the nuclei and cytoplasm in pre-implantation embryos (10). This unique subcellular localization pattern suggests that SOX2 shuttles between these two subcellular compartments and that its nuclear localization regulates its transcription activity. Here, we demonstrate that SOX2 contains two nuclear localization signals (NLS). The ablation of these two NLS resulted in a dominant-negative form of SOX2 that we named Dmu-mSox2. When stably expressed in ES cells, this mutant protein triggered a dramatic differentiation of ES cells into the trophectoderm lineage and the formation of polyploid (>8N) cells. Our results not only suggest that SOX2 functions to prevent the differentiation of ES cells into the trophectoderm lineage, but also demonstrate that Dmu-mSox2 may be a potentially useful tool in controlling stem cell differentiation.

Plasmid Construction and Reporter Assays-The mouse Sox2 cDNA was amplified by reverse transcription-PCR. For mutant SOX2 expression without any tag, primers 5′-tcgcacatgatcgactgaaaggacgacgatgac-3′ (forward) and 5′-gtcatcgtcgtcctttcagtcgatcatgtgcga-3′ (reverse) were used. Oct4 and Nanog promoter fragments were amplified by PCR from mouse liver genomic DNA and inserted into the SmaI site of the promoterless luciferase reporter vector pGL-Basic (Promega, Madison, WI). Primers 5′-acagacaggactgctgggctgcagg-3′ (forward) and 5′-tggaaagacggctcacctaggg-3′ (reverse) for the Oct4 promoter (2170 bp) and primers 5′-ttgggcatggtggtagacaa-3′ (forward) and 5′-gtcagtgtgatggcgagggaagggat-3′ (reverse) for the Nanog promoter (926 bp) were used. The Oct4 reporter plasmid 6w-Luc and the control vector p37TK-Luc were kindly provided by Dr. Hans Schöler.
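As a quick sanity check on oligo pairs like the mutagenesis primers quoted above, the forward and reverse sequences of a complementary pair should be exact reverse complements of one another. A minimal Python sketch (the helper function is ours, not part of the paper's methods):

```python
# Verify that a forward/reverse oligo pair are exact reverse complements.
COMP = str.maketrans("acgt", "tgca")

def revcomp(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

fwd = "tcgcacatgatcgactgaaaggacgacgatgac"
rev = "gtcatcgtcgtcctttcagtcgatcatgtgcga"
print(revcomp(fwd) == rev)   # True for this pair
```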
6×O/S was provided by Dr. Lisa Dailey. For reporter assays, transfection efficiencies were normalized with a Renilla plasmid as an internal reference, and DNA concentrations were kept constant with an empty expression vector. Cells were harvested 48 h after transfection, and luciferase activity was measured using the Dual-Luciferase system (Promega). For cell cycle analysis, cells were stained with propidium iodide and analyzed by FACS with a MoFlo high performance cell sorter using a 488-nm argon laser excitation source and a 580/30-nm band-pass red filter. For Troma-1-positive cell analysis, cells were dissociated with 0.25% trypsin and 1 mM EDTA; resuspended in phosphate-buffered saline; and incubated with anti-TROMA-1 primary antibody (1:100; Institut Pasteur), followed by fluorescein isothiocyanate-conjugated secondary antibody (Santa Cruz Biotechnology, Inc.), using the MoFlo high performance cell sorter (excitation at 488 nm, measured at 530/40 nm with a band-pass green filter).

Cell Lysate Preparation and Western Blot Analysis-After 48 h of transfection, cells were washed with phosphate-buffered saline, lysed on ice in radioimmune precipitation assay buffer (50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 0.25% sodium deoxycholate, 0.1% Nonidet P-40, and 0.1% Triton X-100) for 10 min, and cleared of debris by centrifugation at 15,000 rpm for 15 min at 4°C. After boiling with an equal volume of 2× SDS loading buffer for 5 min, cell lysates were electrophoresed on 10% SDS-polyacrylamide gel and blotted onto polyvinylidene difluoride membranes (Millipore). The membranes were then blocked with 5% nonfat milk and incubated with anti-FLAG antibody (1:2500), followed by alkaline phosphatase-conjugated anti-mouse secondary antibodies (1:2500). The membranes were washed extensively and developed by incubation in a solution containing nitro blue tetrazolium/5-bromo-4-chloro-3-indolyl phosphate.

Co-immunoprecipitation-For co-immunoprecipitation, expression plasmids were transfected into 293T cells using calcium phosphate. After 24 h of transfection, cells were washed two times with phosphate-buffered saline and lysed in 600 µl of buffer containing 50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 0.5% Nonidet P-40, and 1 mM EDTA plus 10 µl of protease inhibitor mixture (Sigma). Cell lysates were cleared by centrifugation at 15,000 rpm for 5 min at 4°C. Cleared cell lysates (40 µl) were saved for direct Western blot analysis to determine protein expression levels, and the remaining samples were transferred to a new tube containing 30 µl of anti-FLAG antibody-conjugated agarose beads (Sigma), washed twice with Tris-buffered saline (20 mM Tris-HCl (pH 7.6) and 137 mM NaCl), before incubation for 6 h at 4°C. The anti-FLAG antibody-bound beads were then washed three times with Tris-buffered saline. The beads were eluted by boiling for 5 min in 2× SDS loading buffer with 5% β-mercaptoethanol. After centrifugation, supernatants were loaded onto 10% SDS-polyacrylamide gel and blotted onto polyvinylidene difluoride membranes for detection.

Microscopy and Cell Staining-The morphologies of cells and embryonic bodies were captured using a Nikon digital camera and then imported into Adobe Photoshop Version 6.0. For immunostaining, cells grown on coverslips were fixed with 2% paraformaldehyde in phosphate-buffered saline; washed; blocked in 10% normal goat serum; and then stained with primary antibodies, including anti-SOX2 (catalog no. sc-20088) and anti-OCT4 (catalog no.
sc-5279) antibodies (Santa Cruz Biotechnology, Inc.), anti-SSEA-1 antibody (Chemicon), and anti-TROMA-1 antibody. Secondary antibodies, including TRITC-conjugated goat anti-mouse IgM (SouthernBiotech), TRITC-conjugated goat anti-rabbit IgG, and fluorescein isothiocyanate-conjugated goat anti-rat IgG (Santa Cruz Biotechnology, Inc.), were used for detection. The images were captured using an Olympus FV500 confocal system.

FIGURE 1. Identification of two NLS in mSOX2. A, schematic presentation of mSOX2 NLS mutants and the two-NLS motif. The mSOX2 coding region and two NLS oligonucleotides were fused to GFP-FLAG (F) at the C terminus. B, Western blot analysis of these constructs. HeLa cells transfected with the indicated plasmids were lysed and analyzed by Western blotting using anti-FLAG antibody as described under "Materials and Methods." CK, blank. C, nuclear localization of wild-type (WT) and mutant mSOX2-GFP-FLAG and NLS-GFP fusion proteins. The images were taken 36 h after transfection in fluorescent fields (panels a, d, g, j, m, p, s, and v) and bright fields (panels b, e, h, k, n, q, t, and w). Both fields were combined to show the localization of each protein (panels c, f, i, l, o, r, u, and x). EGFP, enhanced GFP.

Mouse (m) SOX2 Contains Two NLS Required for Nuclear Localization and Transcription Activity-SOX2, a member of the HMG transcription factor family previously shown to be expressed early in embryogenesis and in neural stem cells, interacts with the POU transcription factor OCT4 through the HMG/POU domains to regulate the expression of many downstream genes, including Fgf4, Utf1, Nanog, and pouf5, for maintaining the pluri- or multipotent states both in vivo and in vitro (8,10,14,19). As a nuclear protein, SOX2 must enter the nucleus to regulate the expression of its downstream genes, yet little is known about how SOX2 is localized in the nucleus. Interestingly, it has been observed that SOX2 appears to be present in the cytoplasm and to shuttle between the cytoplasm and nuclei during early embryogenesis (10), suggesting that it may have a function outside the nucleus. To characterize the cellular distribution of SOX2 further, we fused a green fluorescent protein (GFP)-FLAG tag to its C terminus (Fig. 1A) and expressed the chimera in HeLa cells. As shown in Fig. 1C, although GFP alone diffused randomly throughout the whole cell (panels d-f), SOX2-GFP localized exclusively to the nuclei (panels g-i), confirming that SOX2 is indeed a nuclear protein. This agrees with SOX9, a close relative of SOX2, which was reported to localize to the nuclei via two functional NLS within the HMG domain: a traditional basic amino acid cluster and a bipartite motif (20). Apparently, these two NLS are well conserved in the HMG transcription factor family, as shown in Fig. 1A. To test the role of these two NLS in SOX2, we performed single or double mutagenesis on these sites (Fig. 1A). Upon transfection into HeLa cells, these mutants were expressed as stable proteins, like the wild-type protein (Fig. 1B). On the other hand, as shown in Fig. 1C, mutation of NLS1 (panels s-u), NLS2 (panels v-x), or both (panels j-l) caused the diffusion of SOX2 throughout the whole cell, suggesting that each NLS is partially effective in localizing SOX2 to the nuclei. Indeed, when fused to GFP, either NLS could drive GFP to the nuclei only partially (Fig. 1C, panels m-r), further demonstrating the requirement of both NLS for the nuclear localization of SOX2.
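The two NLS types mentioned above (a basic amino acid cluster and a bipartite motif) can be looked for with a simple pattern scan. The sketch below is a crude illustrative heuristic, not the analysis used in the paper; the regular-expression patterns, thresholds, and the demo sequence are all assumptions:

```python
import re

# Crude NLS heuristics (illustrative only):
#  - monopartite: a short run/cluster of basic residues (K/R)
#  - bipartite:   two basic clusters separated by a 9-12 residue linker
MONOPARTITE = re.compile(r"[KR]{2}[A-Z][KR]{2}|[KR]{4}")
BIPARTITE = re.compile(r"[KR]{2}[A-Z]{9,12}[KR]{3}")

def scan_nls(protein: str):
    """Return (type, 1-based position, matched motif) for each candidate NLS."""
    hits = []
    for name, pat in (("monopartite", MONOPARTITE), ("bipartite", BIPARTITE)):
        for m in pat.finditer(protein):
            hits.append((name, m.start() + 1, m.group()))
    return hits

# Hypothetical test fragment (not the real SOX2 sequence):
demo = "MSGDAAKRAAQAAGAAAKRKEMGSKKRKWS"
print(scan_nls(demo))
```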
Because the GFP tag may influence the subcellular localization of its fusion partners, we repeated the localization experiments with FLAG-tagged SOX2, as shown in Fig. 2. As expected, the mutant proteins were expressed at identical sizes compared with wild-type SOX2, as detected by Western blotting (Fig. 2B). The subcellular localizations of these proteins were then analyzed by immunofluorescent staining using anti-SOX2 antibody. Consistent with the data in Fig. 1, wild-type SOX2 localized exclusively in the nuclei (Fig. 2C, panels d-f), whereas SOX2 proteins with either single mutation (NLS1 or NLS2; panels j-o) or the double mutation (panels g-i) diffused throughout the whole cell. Together, these data demonstrate that both NLS are required for the nuclear localization of SOX2.

FIGURE 3. Wild-type mSOX2 interacts with mOCT4 and enhances mOCT4 transcription activity, whereas NLS mutant mSOX2 has lost this function. A, formation of a heterodimer between wild-type mSOX2 and wild-type mOCT4, but not between NLS mutant mSOX2 and wild-type mOCT4. After transfection with mSOX2-FLAG (F), Dmu-mSOX2-FLAG, and untagged OCT4, cell lysates were analyzed for protein expression (lanes 1-6) and dimerization by FLAG resin immunoprecipitation (IP; lanes 7-12). B, wild-type mSOX2 enhances mOCT4 transcription activity, whereas NLS mutant mSOX2 has lost this function, in the 6×O/S reporter. C, wild-type mSOX2 enhances mOCT4 transcription activity, whereas NLS mutant mSOX2 has lost this function, in the 6w-Luc reporter.

Our previous work on OCT4 demonstrated that the NLS is essential not only to its nuclear localization, but also to its transcription activity (22). To test whether the same applies to SOX2, we analyzed the transcription activity of wild-type SOX2 or its mutants in HeLa cells with the reporter construct 6×O/S, which harbors six copies of the OCT4/SOX2-binding site from the Fgf4 promoter (21). As shown in Fig. 2D, SOX2 activated the 6×O/S reporter strongly but had no effect on the control reporter p37TK (bar 5 versus bar 3). However, SOX2 mutants with either a single or the double NLS mutation lost some or most of the transcription activity (Fig. 2D, bars 6-8). These data demonstrate that the two NLS of SOX2 play dual roles in SOX2 function: nuclear localization and transcription activity, as observed for OCT4 (22).

Ablation of the NLS in SOX2 Also Impairs Its Interaction with OCT4-It is well established that SOX2 interacts with OCT4 to regulate downstream genes. To investigate whether the NLS in SOX2 also play a role in its interaction with OCT4, FLAG-tagged wild-type or NLS double mutant SOX2 was expressed either alone or with untagged OCT4 in 293T cells. The cell lysates were immunoprecipitated with anti-FLAG antibody and probed with anti-FLAG (Fig. 3A, panel a) or anti-OCT4 (panel b) antibody, respectively. As shown in Fig. 3A, OCT4 co-precipitated with wild-type SOX2 but not with the NLS double mutant (lanes 7-12). These data demonstrate that SOX2 interacts with OCT4 in an NLS-dependent fashion. We then performed reporter assays to determine the consequence of the impaired SOX2/OCT4 interaction on OCT4 activity. Two different reporters were used: 6w-Luc, with six copies of a conventional OCT4-binding site that can be bound only by OCT4 itself, and 6×O/S-Luc, as mentioned above, with six copies of the OCT4/SOX2-binding site capable of binding both OCT4 and SOX2. Both reporters were transfected with OCT4 alone or with either wild-type or NLS double mutant SOX2 into HeLa cells. The reporter activities were evaluated by luciferase assay as described previously (22). As shown in Fig.
3 (B and C), wild-type SOX2 strongly increased the transcription activity of OCT4 on both 6w-Luc and 6×O/S-Luc (bars 3, 5, and 6), suggesting that SOX2 can cooperate with OCT4 in gene regulation regardless of its binding context. On the other hand, NLS mutant SOX2 failed to enhance the activity of OCT4 on both reporters (Fig. 3, B and C, bars 3, 7, and 8), consistent with the co-immunoprecipitation results. These results demonstrate that the NLS of SOX2 are also required for its ability to cooperate with OCT4 in regulating downstream genes.

FIGURE 4. Dmu-mSox2 dimerizes with wild-type mSOX2 and suppresses its transcription activity. A, Dmu-mSox2 can form heterodimers with wild-type SOX2. Wild-type mSOX2-GFP-FLAG (F) and untagged Dmu-mSox2 were transfected into 293T cells as indicated. Anti-SOX2 antibody was used for Western blotting to detect the expressed proteins (L) and the dimers following immunoprecipitation using anti-FLAG antibody (IP). In lane 8, untagged Dmu-mSox2 co-immunoprecipitated with FLAG-tagged wild-type mSOX2 as indicated. CK, blank. B, Dmu-mSox2 can block the nuclear localization of wild-type SOX2. The control vector, wild-type mSOX2-GFP, and mutant mSOX2-FLAG were transfected into HeLa cells. As shown, part of the GFP signal of wild-type mSOX2 was trapped in the cytoplasm by coexpression of mutant mSOX2. C, Dmu-mSox2 suppresses the activity of wild-type SOX2: Dmu-mSox2 inhibited the activity of wild-type SOX2 on the 6×O/S reporter in a dose-dependent manner. D, Dmu-mSox2 suppresses the activity of mSOX2/mOCT4 on the 6×O/S reporter. E, Dmu-mSox2 suppresses the activity of mSOX2/mOCT4 on the 6w-Luc reporter.

NLS Mutant mSOX2 Forms Complexes with Wild-type SOX2, Suppresses Its Activity, and Down-regulates Its Target Genes-We have shown previously that ablation of the NLS in OCT4 generated a dominant-negative mutant that could suppress the activity of its wild-type protein and induce the differentiation of pluripotent cells (22). The data presented so far on the NLS mutant of SOX2 are also consistent with the idea that it may behave as a dominant-negative mutant. To test this hypothesis, we performed co-immunoprecipitation experiments to demonstrate that the mutant is capable of forming a complex with wild-type SOX2. GFP-FLAG-tagged wild-type SOX2 was transfected with the untagged NLS mutant of SOX2 into 293T cells, and the proteins were immunoprecipitated with anti-FLAG antibody and probed with anti-SOX2 antibody. As shown in Fig. 4A, the untagged NLS double mutant of SOX2 precipitated with wild-type SOX2 tagged with GFP-FLAG (lane 8), suggesting that the NLS double mutant of SOX2 remains competent in forming complexes with the wild-type molecule. Given that the same NLS mutant of SOX2 failed to interact with OCT4 (Fig. 3), we concluded that SOX2 forms homodimers not only independently of its NLS, but also differently from the way it interacts with OCT4. We then tested whether the NLS mutant of SOX2 can suppress the activity of wild-type SOX2. SOX2 was cotransfected with the NLS mutant, and the effects on the reporter were analyzed as shown in Fig. 4C. The NLS mutant suppressed the activity of wild-type SOX2 in both the absence and the presence of its cofactor OCT4, in a dose-dependent manner (Fig. 4, C-E). To probe the mechanism of suppression further, we then cotransfected GFP-tagged SOX2 with the FLAG-tagged mutant into HeLa cells and observed that the SOX2 mutant blocked the nuclear localization of the wild-type molecules (Fig.
4B), suggesting that the NLS mutant inhibits the wild-type molecules by sequestering them in the cytoplasm. To further investigate whether the NLS mutant of SOX2 can down-regulate the expression of SOX2 target genes, we analyzed its impact on the promoters of Oct4 and Nanog, both reported to be downstream targets of the SOX2-OCT4 complex (8,14). As shown in Fig. 5A, both promoter regions contain a SOX2-binding site. The activities of these two promoters were then evaluated in pluripotent and non-pluripotent cells. As shown in Fig. 5B, consistent with the endogenous expression levels of Nanog and Oct4, reporters bearing these two promoters were much more active in pluripotent cells (ES and F9) than in non-pluripotent NIH3T3 cells. Used as a control, the reporter bearing the Fgf4 minimal promoter plus six copies of the OCT4/SOX2-binding site had much higher activity in pluripotent cells than in non-pluripotent cells, as expected (Fig. 5B). To test whether the SOX2 mutant could suppress the activities of these promoters in pluripotent cells, each reporter was cotransfected with increasing amounts of the NLS mutant into ES cells. As shown in Fig. 5C, the NLS mutant suppressed the activities of all three promoters in a dose-dependent manner in ES cells. Taken together, our data demonstrate that the NLS mutant behaves as a dominant-negative mutant capable of suppressing the activity of both exogenous and endogenous SOX2, presumably by forming protein complexes, and subsequently suppressing its downstream targets such as Fgf4, Oct4, and Nanog.

Constitutive Expression of the NLS Mutant of mSOX2 Induces the Differentiation of ES Cells into Trophectoderm-SOX2 is known to cooperate with OCT4 in a combinatorial fashion to specify the three embryonic lineages in pre-implantation embryos (10). The absence of either factor in early embryos leads to outgrowth of trophectoderm from isolated embryos and the complete absence of pluripotent stem cells, suggesting that SOX2 may function to prevent the differentiation of ES cells into trophectoderm during early embryogenesis (2, 10). However, very little is known about its role in maintaining stem cell fate and trophectoderm differentiation, as previous analysis with SOX2 knock-out mice was potentially complicated by the presence of maternal SOX2 proteins in the cytoplasm of mature oocytes and stromal cells (10). To this end, we took advantage of the apparent dominant-negative effect of the NLS mutant of mSOX2 and analyzed the consequence of its expression in ES cells. The NLS mutant of SOX2 or a control vector was then transfected into mouse ES cells, which were selected in the presence of G418. After 2 weeks of selection, ES cell lines transfected with the NLS mutant of SOX2 or the control vector were obtained, and the expression level of SOX2 was confirmed by Western blotting using anti-FLAG antibody (Fig. 6A, lane 3) and immunostaining (Fig. 6B, lower panels). We observed a morphological difference between ES cells expressing the NLS mutant and those transfected with the control vector or the parental ES cells. First, the cells expressing the NLS mutant were much more spread out and larger than the control ES cells (Fig. 6C), indicating a differentiated phenotype. FACS analysis of the DNA content showed that these cells displayed progressive polyploidy compared with the control cells, which were mostly diploid or tetraploid (presumably prior to cell division into diploid cells) (Fig.
6C), indicating that these cells may have a reduced proliferation rate or reduced cell divisions. Indeed, cell proliferation analysis demonstrated that the cells expressing NLS mutant SOX2 grew at a much slower rate than the control cells (Fig. 6E). To determine whether these cells are pluripotent, we examined their capacity to form embryonic bodies. As shown in Fig. 6F, when cultured in suspension, the control cells formed typical embryonic bodies (panels a-d), whereas the NLS mutant-expressing cells did not (panels e-h). These data demonstrate that ES cells expressing the NLS mutant of SOX2 are no longer pluripotent and have undergone differentiation.

We then tested the expression of several cell cycle regulators in the Dmu-mSox2-transfected cells by real-time PCR. Among the regulators that we tested, cyclins D1 and D2 were significantly up-regulated; p21 and p27 were slightly up-regulated; and the others remained unchanged (Fig. 6D). ES cells have a unique cell cycle profile, with a long S phase and short gap phases (G1 and G2). Consistent with this property, ES cells have much lower expression levels of G1 phase regulators (cyclins D1 and D2) and cyclin-dependent kinase inhibitors (p21 and p27) compared with differentiated cells. Thus, up-regulation of these factors in Dmu-mSox2-transfected cells suggests that Dmu-mSox2 leads to the formation of progressive polyploidy by releasing the suppression of these factors in ES cells.

To further define the lineage of these differentiated cells, we analyzed the expression of pluripotency and differentiation markers by immunostaining and real-time PCR. As shown in Fig. 6G, pluripotency markers such as Ssea-1 and Oct4 were virtually absent in cells expressing the NLS mutant, but were highly expressed by the control ES cells. We then determined the differentiated state of the NLS mutant transfectants by examining the expression of Troma-1, a trophectoderm-specific marker. As shown in Fig. 6G, these cells expressed a relatively high level of Troma-1, whereas the control cells did not. FACS analysis demonstrated that >50% of the cells transfected with the NLS mutant were positive for Troma-1, in contrast to only a few positive cells among the controls (Fig. 6H). These data clearly demonstrate that the NLS mutant of SOX2 triggers the differentiation of ES cells into the trophectoderm lineage. Finally, we further confirmed this observation by analyzing more markers by real-time PCR. As shown in Fig. 6I, ES cells transfected with NLS mutant SOX2 had much reduced levels of pluripotency markers (including Oct4 and Nanog), yet had significantly elevated expression levels of trophectoderm markers (Pl-1, Cdx2, and Fgfr2) compared with the control ES cells, but no endoderm markers (Gata4 and Gata6) compared with the embryonic bodies derived from untransfected ES cells. We conclude that the NLS mutant of SOX2 induces the differentiation of ES cells into the trophectoderm lineage.
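The FACS-based ploidy readout used throughout this section amounts to binning propidium iodide intensities around multiples of the diploid G1 peak. A minimal sketch of that bookkeeping, with hypothetical intensities and an assumed 2N peak position (neither taken from the paper's data):

```python
import numpy as np

# Hypothetical propidium-iodide intensities (arbitrary units); in the real
# experiment these come from the FACS list-mode data.
rng = np.random.default_rng(0)
pi = np.concatenate([rng.normal(50, 4, 500),    # 2N peak
                     rng.normal(100, 8, 300),   # 4N
                     rng.normal(200, 16, 150),  # 8N
                     rng.normal(400, 30, 50)])  # >8N

g1_peak = 50.0                                  # assumed diploid (2N) G1 peak
edges = g1_peak * np.array([1.5, 3.0, 6.0])     # midpoints between 2N/4N/8N/...

labels = ["2N", "4N", "8N", ">8N"]
counts = np.histogram(pi, bins=[0, *edges, np.inf])[0]
for lab, n in zip(labels, counts):
    print(f"{lab}: {100 * n / len(pi):.1f}%")
```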
Knockdown of Endogenous Sox2 by Small Interfering RNA (siRNA) Also Induces Trophectoderm Differentiation and Polyploid Formation in Mouse ES Cells-Oct4 has been shown to maintain ES cell pluripotency by preventing trophectoderm differentiation (3). In this study, we have shown that impairment of its partner, Sox2, also caused trophectoderm differentiation, suggesting that both are required to prevent trophectoderm differentiation. To further confirm the role of Sox2 in ES cells, we constructed the Sox2 siRNA expression vector shown in Fig. 7A. This vector contains a U6 promoter that drives siRNA expression. A GFP expression cassette was included in this vector to monitor the transfected cells. First, we tested the efficiency of the Sox2 siRNA by reporter assays. We cotransfected the Sox2 reporter 6×O/S-Luc with wild-type Sox2 and the siRNA vector, or a control vector carrying a point mutation in the Sox2 siRNA, into ES cells. The regulatory activity of Sox2 was tested by luciferase assay. As shown in Fig. 7C, the regulatory activity of Sox2 was significantly suppressed by the Sox2 siRNA, but was not affected by the mutant siRNA, demonstrating the high efficiency of the Sox2 siRNA. We then transfected the Sox2 siRNA vector or the control vector into ES cells and sorted out the GFP-positive cells by FACS at different time points. Endogenous Sox2 was tested by RT-PCR. As shown in Fig. 7B, endogenous Sox2 mRNA was significantly suppressed 3 days after siRNA transfection, but was not affected by the control vector. To further characterize the Sox2 knockdown cells, we replated the GFP-positive cells and cultured them for several passages. As shown in Fig. 7D, the Sox2 siRNA-transfected cells showed a clearly differentiated phenotype compared with the control cells, which had a typical undifferentiated morphology. The expression of trophectoderm markers was detected by RT-PCR in these cells (Fig. 7E), suggesting that knockdown of Sox2 induces trophectoderm differentiation in ES cells. DNA content analysis revealed that most Sox2 siRNA-transfected cells were also polyploid (4N to 8N) compared with the control cells (2N to 4N) (Fig. 7D), consistent with the Dmu-mSox2-transfected cells. Together, these data demonstrate that Sox2 functions together with its partner, Oct4, to prevent trophectoderm differentiation and polyploid formation in mouse ES cells.
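The relative expression changes reported above (Cdx2 induction, loss of Oct4 and Nanog) are the kind of quantities real-time PCR yields via the comparative Ct method. A minimal sketch, assuming the standard 2^(-ddCt) formula; the Ct values and gene names in the example call are made up for illustration:

```python
# Comparative Ct (ddCt) fold-change sketch (assumed workflow, hypothetical values).
def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Relative expression of a marker in sample vs. control,
    normalized to a reference (housekeeping) transcript."""
    dct_sample = ct_gene_sample - ct_ref_sample
    dct_control = ct_gene_control - ct_ref_control
    return 2.0 ** (-(dct_sample - dct_control))

# e.g. a trophectoderm marker in Dmu-mSox2 cells vs. control ES cells:
print(fold_change(24.1, 18.0, 28.9, 18.2))   # > 1 means up-regulated
```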
DISCUSSION

Our observation that a dominant-negative form of SOX2 is able to trigger the differentiation of ES cells into the trophectoderm lineage and to generate trophoblast-like cells further confirms a general strategy we proposed for the transcription factors involved in the pluripotency of ES cells (22). In contrast to the results obtained with the dominant-negative form of OCT4, we obtained stable clones constitutively expressing Dmu-mSox2. These cells assumed a pattern of gradual differentiation toward trophoblast-like cells, reflected by the gradual shift from 2N/4N to 8N/nN in ploidy. These cells may be a good model in which to further investigate the uncoupling of chromosomal duplication from cell/nuclear division that is known for trophoblasts. This mutant may also be used to probe the role of SOX2 in neural function at late developmental stages using a transgenic approach.

Transcription factors that regulate the expression of gene programs associated with stem cell pluripotency or differentiation have recently become a focal point of interest (5,23,24). OCT4, SOX2, and NANOG have been implicated in stem cell pluripotency, initially through knock-out studies, and have recently been proposed to regulate overlapping sets of genes on the basis of chromatin immunoprecipitation analysis (5-7, 10). On the other hand, overexpression of Cdx2 has identified it as a key factor specifying the trophectoderm lineage by reciprocally suppressing the expression of target genes regulated by the pluripotent factor OCT4 (23).

One may argue that stem cell self-renewal or differentiation is regulated by a network of transcription factors such as OCT4, NANOG, and CDX2, yet the precise role of these factors remains very poorly understood. In this study, we have focused on SOX2, a transcription factor that has been implicated in maintaining stem cell pluripotency through its interactions with OCT4. Our results demonstrate that it contains two distinct NLS that are required for SOX2 to function as a transcription factor. Furthermore, we have generated a dominant-negative form of SOX2. This SOX2 mutant can interfere with endogenous SOX2 expressed in ES cells and triggers the differentiation of these ES cells into the trophectoderm lineage. This observation is consistent with data obtained in SOX2 knock-out experiments demonstrating the failure to derive SOX2-/- ES cells, the lack of epiblasts, and the presence of trophoblast giant cells and extraembryonic endoderm (10). However, because of the lack of SOX2-/- ES cells, the precise role of SOX2 in cell fate determination has not been analyzed at the molecular level. Our results recapitulate a portion of the phenotype generated in the knock-out embryos, i.e., trophectoderm differentiation of ES cells transfected with Dmu-mSox2, and especially the formation of trophoblast giant cells (Fig. 6). Because the trophectoderm is the first differentiated cell lineage of mammalian embryogenesis and forms the placenta, the molecular mechanisms that control this differentiation event have attracted considerable attention (23). The apparent role of Cdx2 in trophectoderm differentiation observed by Niwa et al. (23) suggests that multiple transcription factors are involved in the cell fate decision during the first cell lineage differentiation. Oct4 has been shown to maintain ES cell pluripotency by preventing ES cell differentiation into the trophectoderm lineage (3). Here, we demonstrated that impairment of its partner (Sox2) also triggers trophectoderm differentiation, suggesting that the cooperation of Oct4 and Sox2 is required to prevent trophectoderm differentiation, as in the model proposed in Fig. 8. Consistent with Oct4 down-regulation, Dmu-mSox2 and Sox2 siRNA can also induce the expression of Cdx2 (Fig. 6), suggesting that they trigger trophectoderm differentiation by inducing Cdx2. Thus, SOX2 may participate in the reciprocal inhibition between lineage-specific transcription factors observed by Niwa et al. (23).
Genome sequencing and multifaceted taxonomic analysis of novel strains of violacein-producing bacteria and non-violacein-producing close relatives

Violacein is a water-insoluble violet pigment produced by various Gram-negative bacteria. The compound and the bacteria that produce it have been gaining attention due to the antimicrobial and proposed antitumour properties of violacein and the possibility that strains producing it may have broad industrial uses. Bacteria that produce violacein have been isolated from diverse environments including fresh and ocean waters, glaciers, tropical soils, trees, fish and the skin of amphibians. We report here the isolation and characterization of six violacein-producing bacterial strains and three non-violacein-producing close relatives, each isolated from either an aquatic environment or moist food materials in northern California, USA. For each isolate, we characterized traditional phenotypes, generated and analysed draft genome sequences, and carried out multiple types of taxonomic, phylogenetic and phylogenomic analyses. Based on these analyses we assign putative identifications to the nine isolates, which include representatives of the genera Chromobacterium, Aquitalea, Iodobacter, Duganella, Massilia and Janthinobacterium. In addition, we discuss the utility of various metrics for taxonomic assignment in these groups, including average nucleotide identity, whole genome phylogenetic analysis and the extent of recent homologous recombination as estimated with the software program PopCOGenT.

In an effort to better understand the diversity and evolution of violacein-producing bacteria, we sequenced the genomes of nine bacterial isolates. Eight isolates were selected from a culture collection created and maintained at Sierra College; these were chosen because they either produced violet-coloured colonies (five isolates) or were inferred, based on 16S rRNA gene sequence analysis, to be closely related to known violacein-producing strains (three isolates). Additional criteria for selection of these specific isolates included culture viability and access to metadata about the strains. In addition to these eight isolates from the Sierra College collection, one violet-coloured strain isolated from refrigerated tofu (purchased fresh from a farmers' market, not packaged) was also selected for whole genome sequencing and analysis. In addition to genome sequencing, observations of cell and colony morphology plus metabolic testing were used to assist in characterizing the nine isolates. Proper bacterial species designation can be critical for many purposes, including public health, clinical, food safety and biosafety applications, because recognizing and mapping out evolutionary relationships can lead to a better understanding of the metabolic potential and survivability of organisms in a given niche [12]. Analysis of the genome data of the nine new isolates revealed that some of the strains could be relatively easily placed into a taxonomic and phylogenetic context with regard to other known bacteria. However, for some of the isolates, taxonomic and phylogenetic placement was not straightforward. In particular, it was challenging to make lower-level assignments for these isolates, and especially challenging to make any type of clear species identification.
We discuss various approaches to taxonomic determination for these new isolates, including BLASTn analysis of 16S rRNA genes, average nucleotide identity (ANI), whole genome phylogenies and measures of horizontal versus vertical gene flow (using the PopCOGenT program). Based on a combination of these diverse approaches, we propose novel species designations with high confidence for five of the isolates: Chromobacterium perflumen strain HSC-31F16, Aquitalea aquatica strain HSC-21Su07, Iodobacter violacea strain HSC-16F04, Massilia hydrophila strain HSC-2F05 and Duganella violaceicalia strain HSC-15S17.

Impact Statement

Bacterial pigments are a subject of investigation that can have a positive impact on economies, the environment and human health. Bacterial pigments are becoming important pharmaceutical and industrial chemicals because the production of the synthetic dyes used for textiles and foods requires hazardous chemicals, generating hazardous wastes and dangerous working conditions for employees. Due to their promising applications, the discovery of new bacteria and new bacterial pigments is at the forefront of scientific research and economic interest. The violet pigment violacein is gaining attention from the scientific community because of its various antibiotic activities, which can be applied to human and animal health, and its brilliant colour, which can be used for industrial purposes. Pigmented bacteria can be isolated from natural environments, especially freshwater and marine habitats, and should be studied in greater detail because, of all the secondary metabolites having antibiotic activity, pigments are an understudied group. Nine new strains of bacteria were isolated from the environment, including a purple Janthinobacterium sp. strain which was observed growing on tofu. Of the nine strains characterized, six showed purple growth, and genomic analysis revealed that at least one copy of each of the five genes (vioABCDE) needed to produce violacein was present in each of these genomes. The remaining three strains were closely related to violacein producers, but they do not contain violacein genes, nor did they exhibit pigmented colonies. The taxonomy and evolution of bacterial strains that are capable of expressing violacein are still not well understood even though the biosynthesis of the compound has been clearly demonstrated. There is currently no standardized method of taxonomic classification of bacteria to the species level, making correct species identification difficult. Classical methods include metabolic phenotyping and 16S rRNA gene sequence analysis. Current classification methods include whole genome sequence analysis using average nucleotide identity, and phylogenomic methods based on established gene marker sets. We used classical methods and the currently accepted genomic methods, with additional genomic population ecology approaches, to maximize confidence in taxonomic placement of the nine novel strains, to accurately assign strains to existing species and to describe five novel species of bacteria. This approach contributed to a better understanding of evolutionary relationships between pigmented bacteria and of how pigment genes such as those for violacein are transferred and retained between and within populations.

We tentatively classify the three new strains HSC-65S10, HSC-3S05 and UCD_MED1 as Janthinobacterium lividum because this group
presents problems in both taxonomy and annotation that require continued investigation, and we discuss the use of a mixed-method approach for more accurate taxonomic classification within this group.

Isolation of strains used in this study

Sierra College students and staff collected and isolated bacteria from various local environments over many years. A subset of these were then characterized in more detail and chosen for genome sequencing and analysis as a part of this study (the selection process is described in the Results and discussion section). General information about the selected isolates is presented in Table 1. The general growth conditions used for obtaining these isolates are summarized here. Bacterial growth conditions were similar to those previously used to isolate violacein-producing bacteria [13]. Specifically, environmental samples were inoculated onto media containing tryptophan such as tryptic soy agar (TSA), nutrient agar (NA) and/or Reasoner's 2A agar (R2A agar). One exception was strain HSC-15S17, which was initially isolated from mannitol-rich, nitrogen-free agar and subcultured on yeast extract malt (YEM) agar. Some water samples were collected in new Ziploc bags and transferred to TSA or NA plates using sterile pipettes (50-100 µl of water per plate). Other samples were collected with sterile cotton swabs that were used to inoculate plate media directly. Violet-coloured growth was taken from cooked rice using a sterile wire loop and inoculated onto NA plates. Violet-coloured growth was taken from tofu using a sterile loop and inoculated onto R2A plates. Cultures were incubated at room temperature (18-20 °C), approximating outdoor temperatures. After incubation at room temperature for 2-3 days, colonies were subcultured for isolation. The subculturing of selected colonies on new agar plates was repeated several times until the strains were purified. All strains were routinely maintained, grown on TSA, NA, R2A or YEM agar media, and preserved as glycerol stocks (final concentration 20-25 %) at −80 °C. The bacteria maintained during this investigation were frequently subcultured either from cryo-vials or isolated colonies and plated on TSA, NA or YEM media. Selective and differential plate media (5 % sheep blood agar, MacConkey's agar and Tergitol-7 agar) were used for determining specific characteristics, while Mueller-Hinton agar (MHA) was used to conduct antimicrobial susceptibility testing as described below. All cultures were allowed to grow at room temperature until isolated colonies became visible (typically 48-72 h). Refrigeration or incubation at 37 °C was utilized under some circumstances. Plate cultures were examined visually to determine purity before samples were taken for subculturing or enzymatic testing. Colony form, margin, elevation, surface texture, optical character, pigmentation and size (mm) were recorded, and variations developing over time were noted. Cell samples were aseptically transferred from isolated colonies on agar plates to new media (broth, agar deeps, agar slants or agar plates) using sterile wire loops. Specific media used for metabolic testing are described below (Bacterial phenotyping and metabolic testing). Tryptic soy broth (TSB) and yeast extract malt broth were used to establish broth cultures for antimicrobial susceptibility testing. Sequencing primers included the PCR primers listed above as well as Internal 533-forward (5′-CCAGCACGCCGCGGTAA-3′) and 907-reverse (5′-CCGTCAATTCMTTTRAGTTT-3′).
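As a point of reference, below is a minimal Biopython sketch of the kind of Sanger read handling described in the next paragraph: loading .ab1 traces, reverse-complementing the reads generated with reverse primers, and joining reads on shared overlap. The file names (standard 27F/1492R outer primers are assumed here) and the naive overlap merge are illustrative only, not the exact workflow used.

```python
# Sketch of manual 16S read assembly (assumptions: four reads named
# after their primers; real traces should also be quality-trimmed).
from Bio import SeqIO

def load(path, flip=False):
    seq = SeqIO.read(path, "abi").seq          # Biopython parses .ab1 traces
    return str(seq.reverse_complement() if flip else seq)

def merge(left, right, min_overlap=30):
    """Join two reads on the longest shared suffix/prefix."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-k:] == right[:k]:
            return left + right[k:]
    raise ValueError("no overlap found; check read order and orientation")

reads = [load("27F.ab1"),                      # hypothetical file names
         load("533F.ab1"),
         load("907R.ab1", flip=True),
         load("1492R.ab1", flip=True)]

consensus = reads[0]
for read in reads[1:]:
    consensus = merge(consensus, read)
print(f"assembled 16S fragment: {len(consensus)} nt")
```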
Raw sequence data (.ab1 files) were opened with 4Peaks (RRID:SCR_000015) and traces were observed to determine quality and length (typically 800-900 nt). Reads generated with reverse primers were 'flipped' to obtain reverse complementary sequences and overlapping regions were compared visually. Nucleotide sequences were copied to text files for additional comparison and editing. The four sequences were concatenated and overlapping sections deleted. These 16S rRNA gene sequences were compared with reference RNA sequences (refseq_rna) available in the NCBI database using BLASTn to tentatively assign each isolate to a genus [14].

Genome sequencing, assembly and annotation

DNA for genome sequencing was obtained by transferring single colonies from plate cultures into 5 ml aliquots of TSB and incubating for 24 h on an orbital shaker (220 r.p.m.) at room temperature. DNA was extracted from these cultures using a Qiagen DNeasy Blood and Tissue kit following the manufacturer's instructions. The resulting DNA samples were quantified using a Qubit 4 Fluorometer. Due to the highly viscous biofilm produced by strain HSC-15S17, this strain was submitted to Microbes NG for DNA extraction and sequencing as a subculture on a YEM agar plate. The genomes of all the HSC strains were sequenced by Microbes NG (http://www.microbesng.uk) using Illumina sequencing (HiSeq 2500 or NovaSeq 6000) with 2×250 bp paired-end reads. These genomes were put through a standard analysis pipeline by Microbes NG. Microbes NG identifies the closest available reference genome using Kraken and maps the reads to it using BWA-MEM to assess the quality of the data. Microbes NG performed a de novo assembly of the reads using SPAdes v. 3.7 (December 2019) and mapped the reads back to the resultant contigs, again using BWA-MEM, to obtain additional quality metrics. UCD_MED1 was sequenced at the UC Davis DNA Technologies Core Facility using Illumina HiSeq 4000 technology with a library preparation of PE150. Reads were trimmed for quality using BBDuk v. 37.02, and assembly was performed using the A5-miseq v. May-2019 assembly pipeline [15], which includes a universal Illumina adapter trimming step. CheckM was used to assess assembly quality and contamination. All genomes were eventually annotated with the Prokaryotic Genome Annotation Pipeline (PGAP) when submitted to NCBI [16-18].

Whole genome phylogeny

For phylogenetic analysis we created a data set containing the following: the nine new genomes introduced here and all Betaproteobacteria genomes from the NCBI genome assembly database that were listed as being in the genera Chromobacterium (n=85), Janthinobacterium (n=85), Duganella (n=77), Massilia (n=71), Aquitalea (n=15) and Iodobacter (n=6). This set includes all the genera that the nine new genomes were tentatively assigned to based on 16S rRNA gene sequencing, plus one genome to serve as an outgroup. The genome chosen to serve as an outgroup was that of Archangium violaceum (formerly Cystobacter violaceus) strain Cbvi76, in the order Myxococcales, which was selected because this bacterium is a distantly related member of the Deltaproteobacteria that is also known to produce violacein (and thus we were examining its genome for features related to violacein production). For this set of genomes, gene marker alignments were generated using GTDB-tk 1.3.0 [19]. Details of the GTDB marker set are available for download at https://data.ace.uq.edu.au/public/gtdb/data/releases/latest/.
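To make the pipeline concrete, here is a rough sketch of how the marker alignment step above and the tree inference described next might be scripted. Directory names and the alignment file name are assumptions, and the RAxML settings mirror those reported in the text.

```python
# Sketch: GTDB-tk marker alignment followed by RAxML tree inference.
# Assumes both tools are installed and genome FASTAs sit in genomes/.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Identify GTDB marker genes in each genome.
run(["gtdbtk", "identify", "--genome_dir", "genomes/",
     "--out_dir", "identify/", "--cpus", "8"])

# 2. Build the concatenated marker alignment across all genomes.
run(["gtdbtk", "align", "--identify_dir", "identify/",
     "--out_dir", "align/", "--cpus", "8"])

# 3. Maximum-likelihood tree with rapid bootstrapping: 250 bootstraps,
#    BLOSUM62+GAMMA on amino acids, parsimony seed 8, bootstrap seed 47.
run(["raxmlHPC", "-f", "a", "-m", "PROTGAMMABLOSUM62",
     "-p", "8", "-x", "47", "-N", "250",
     "-s", "align/gtdbtk.bac120.user_msa.fasta",   # output name assumed
     "-n", "violacein_tree"])
```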
Then, a maximum likelihood tree was inferred using RAxML 8.2.11 [20] with the following settings: 250 rapid bootstraps, the PROTGAMMABLOSUM62 substitution model applied to amino acids, and random seeds for parsimony inference and rapid bootstrapping set to 8 and 47, respectively. Phylogenetic trees were viewed using Dendroscope version 3.5.9 [21].

Average nucleotide identity (ANI)

The whole-genome similarity metric ANI was calculated for all genomes using FastANI v1.32 [22]. FastANI estimated the ANI between each new genome represented here and all genomes in the NCBI database that share the same genus by using alignment-free approximate sequence mapping. ANI was used with caution, as only one part of the entire process of taxonomic classification, because the ANI measure does not strictly represent core genome evolutionary relatedness, as orthologous genes can vary widely between pairs of genomes compared [23]. A heat map of the ANI results was generated using the R packages gplots v3.1.1 [24] and Ape v5.4.1 [25] to incorporate the phylogenetic tree, and dplyr v1.0.5 [26] for data preparation and manipulation.

PopCOGenT (Populations as Clusters Of Gene Transfer)

To help delineate populations based on gene flow, an analysis using PopCOGenT was performed on a select subset of clades within the genus Janthinobacterium (https://github.com/philarevalo/PopCOGenT) [27]. For the PopCOGenT analysis, 68 Janthinobacterium genomes were chosen based on their close phylogenetic similarity to the type strain. After creating a phylogenetic tree, all of the genomes within the two major clades encompassing the three Janthinobacterium genomes presented in this study were included in the analysis. Only the phylogenetically most distantly related Janthinobacterium clades were excluded. The subpopulation structure of genomes was investigated based on horizontal gene flow. The program PopCOGenT was used to compare the length of identical DNA sequences in genome pairs. The length bias parameter was calculated for each pair of genomes and a genome network was created to estimate horizontal gene flow between the nodes. The network was then clustered with Infomap to identify subpopulations with higher rates of horizontal gene flow. Specifically, in PopCOGenT (using the default window length of 1000 bp) the magnitude of the sum of squared differences (SSD) is measured to estimate genome homogenization. In addition to SSD, the 'initial divergence' is calculated, which is a measure of the total diversity seen in each genome comparison. The log of the SSD was plotted against the log of the initial divergence in ggplot2 (R version 4.2.1 [28]; RStudio 2022.02.3+492 'Prairie Trillium') to view distinct clusters of genomes with high and low rates of recombination [24,28]. Log10 values were used to address skewness towards large values, because very few points were much larger than the bulk of the data. We then ran a cluster module using Infomap [29] within PopCOGenT to observe network formations between clonal pairs or unknown groups. The cluster output was viewed and manipulated using the network visualization software Gephi, with the parameters: layout=ForceAtlas2 [30].

Bacterial phenotyping and metabolic testing

Bacterial morphology includes both cellular (shape, endospore, flagella, inclusion bodies, Gram staining) and colonial (colour, dimensions, form) characteristics.
The physiological and biochemical features include data on growth at different temperatures, pH values, salt concentrations or atmospheric conditions, growth in the presence of various substances such as antimicrobial agents, and data on the presence or activity of various enzymes, metabolization of compounds, etc. [31]. All media were initially maintained at room temperature, and data were collected and results determined according to published protocols [32]. Cell morphology was observed by using light microscopy with cells grown in sulfide indole motility medium (SIM) for 24-72 h at 18-24 °C. Observation of violet pigment production was used to infer violacein biosynthesis. Gram staining and KOH testing were performed as previously described [32,33]. Motility was observed in wet mounts (cells in deionized water) magnified 450× and by stab inoculation in SIM medium (where growth away from the inoculation line indicates motility). Metabolic testing for all isolates was conducted in triplicate using media and reagents prepared for a general microbiology teaching laboratory and as indicated by previously published species descriptions. Oxidase reactions were performed on filter paper moistened with a solution of N,N,N′,N′-tetramethyl-p-phenylenediamine, and catalase activity was demonstrated using 3 % hydrogen peroxide on glass slides. Mode of metabolism was determined using the oxidation/fermentation (O/F) test. Citrate utilization, urea hydrolysis and aesculin hydrolysis were determined on agar slants (Simmons' citrate agar, Christensen's urea agar and aesculin agar, respectively). Gelatin hydrolysis was determined in nutrient gelatin deeps, starch hydrolysis was determined on starch agar plates with the addition of Gram's iodine, and methyl red-Voges-Proskauer (MR-VP) tests were conducted in tubes of MR-VP broth, followed by transfer of 1 ml aliquots of 48-72 h growth to screw-top tubes, application of Barritt's reagents and vortex mixing to test for acetoin. Testing for acid production in MR-VP broth involved the addition of six drops of methyl red indicator to the culture remaining after removal of the sample for the VP test. The formation of hydrogen sulphide was determined through observation of cultures stabbed into SIM medium, and indole formation was determined in SIM medium with the addition of Kovac's reagent. The reduction of nitrate to nitrite or to nitrogenous gases was determined in nitrate agar deeps using nitrate reagent A (sulphanilic acid), reagent B (alpha-naphthylamine) and zinc powder (when required). The formation of acid aerobically from carbohydrates (arabinose, glucose, inositol, lactose, maltose, mannitol, raffinose, rhamnose, sorbitol, sucrose and xylose) was determined on the surface of agar deeps containing a bromothymol blue agar base with peptone [32]. Haemolysis reactions and growth on selective and differential media were determined on agar plates (5 % sheep blood, MacConkey's, Tergitol-7 and nitrogen-free agar). The presence/absence of genes associated with selected metabolic processes was investigated using the Protein Family Sorter tool of PATRIC-BRC (now BV-BRC) [34]. Antimicrobial susceptibility was determined using the disc diffusion method (Kirby-Bauer test) conducted on MHA plates inoculated with lawn cultures. Because strain HSC-15S17 would not grow on MHA, antimicrobial susceptibility testing for this strain was conducted on YEM agar. Ampicillin resistance was also determined on TSA plates containing ampicillin.
Antimicrobial discs containing ampicillin (AM-10), bacitracin (B-10), kanamycin (K-30), penicillin G (P-10), polymyxin (PB-300), rifampin (RA-5), streptomycin (S-10), tetracycline (TE-30) and vancomycin (VA-30) were applied with a disc dispenser (BD) or with sterile forceps. Following metabolic testing as described above, sole carbon source assimilation was investigated using BiOLOG EcoPlates [35]. Inocula were prepared for each strain by suspending one loopful of cell material (3 mm ball) from a 48 h plate culture in 15 ml of sterile water within a screw-top tube. The cellular material was thoroughly suspended by alternately shaking, vortex-mixing and allowing the tubes to stand over a period of 10-15 min. Each EcoPlate was warmed to room temperature, inoculated under aseptic conditions (100 µl of cell suspension per well), closed, sealed inside a clean Ziploc bag and stored at room temperature. Carbon source assimilation was determined by directly observing each EcoPlate placed on the surface of a portable LED lightbox (plate lid and plastic bag removed) at daily intervals over a period of 4 weeks.

Selection for genome sequencing

Eight strains out of many opportunistically collected environmental isolates were selected. These isolates were considered candidates for potential violacein production. Five were selected because they formed violet- or deep violet-coloured colonies. Three isolates that did not form violet-coloured colonies were selected because 16S rRNA gene sequence analysis indicated they were closely related to known violacein producers (see rRNA analysis details in the next section). In addition, one strain isolated at UC Davis that produced violet-coloured colonies was also included in this study. Table 1 provides additional details about the selected isolates. The isolates chosen came from a diversity of freshwater environments and prepared foods (Table 1).

Key growth features of selected isolates

The cultures grown on plate media all formed colonies that were initially well isolated (Fig. 1); for some strains colony expansion and fusion occurred over time. HSC-31F16 formed circular, entire, raised, smooth-shiny, semi-opaque, pale pinkish-cream colonies, 1-3 mm in diameter after 48 h on TSA. Colony surfaces became wrinkled with age. This strain also grew well on MAC and T-7 media but colonies were smaller and more translucent. HSC-77S12 formed circular, entire, low-convexity, smooth-shiny, opaque, deep violet colonies, 1-3 mm in diameter after 48 h on TSA. It also grew well on MAC and T-7 media, but colonies were somewhat smaller. HSC-21Su07 formed circular, entire, low-convexity, smooth-shiny, semi-opaque, milky-tan colonies 1-3 mm in diameter after 48 h on TSA. It also grew well on MAC and T-7 media but colonies were more translucent and smaller. HSC-16F04 formed circular, entire, flat, smooth-shiny, opaque to semi-translucent, violet colonies 2-4 mm in diameter after 48 h on NA (evidence of swarming appeared over time). It also grew well on TSA, MAC and T-7 media. Colonies formed on TSA were darker violet, more opaque and tended to swarm less than those grown on NA. HSC-15S17 formed circular, entire, high-convexity, smooth-shiny, semi-opaque, milky-beige colonies that became violet with age and were 1-3 mm in diameter after 48 h on YEM. Colonies became rubbery and difficult to sample over time. This strain grew poorly on NA and not at all on MAC, T-7, MHA or TSA.
HSC-2F05 formed circular, entire, low-convexity, smooth-shiny, semi-opaque, milky-beige colonies 1-3 mm in diameter after 48 h on TSA. It also grew well on MAC and T-7 media but colonies were more translucent and smaller. HSC-65S10 formed circular to irregular, entire to undulate, low-convexity, wrinkled-shiny, opaque, deep violet colonies 1-3 mm in diameter after 48 h on NA. It also grew well on MAC and T-7 media but new growth was cream-coloured when subcultured repeatedly on TSA. HSC-3S05 formed circular, entire, convex, wrinkled-shiny, opaque, deep violet colonies 1-3 mm in diameter after 48 h on NA. It also grew well on MAC and T-7 media but new growth was cream-coloured when subcultured repeatedly on TSA. UCD_MED1 formed circular to irregular, entire to undulate, low-convexity, wrinkled-shiny, opaque, deep violet colonies 1-3 mm in diameter after 48 h on NA. This strain also grew well on MAC and T-7 media but new growth became cream-coloured when subcultured repeatedly on TSA.

Cell morphology and metabolic testing

A summary of metabolic capabilities, including antimicrobial susceptibility and sole carbon-source assimilation as determined through the application of BiOLOG EcoPlates, is presented in Table 2.

HSC-31F16

Cells were motile, Gram-negative (KOH-positive), non-sporing, fermentative bacilli, 0.7-1×2-5 µm in size. This strain was positive for catalase, oxidase, citrate utilization, aesculin and gelatin hydrolysis, reduction of nitrate to nitrite (but not to nitrogenous gases), plus beta-haemolysis on 5 % sheep blood. It was negative for violacein formation, urea and starch hydrolysis, indole and hydrogen sulphide formation (SIM), formation of acid and acetoin in MR-VP medium and growth on nitrogen-free media. Acid was formed aerobically from inositol but none of the other carbohydrates provided; acid was formed through the fermentation of glucose (O/F).

HSC-77S12

Cells were motile, Gram-negative (KOH-positive), non-sporing, fermentative bacilli, 1-1.2×2-3 µm in size. This strain was positive for violacein formation, catalase, oxidase, citrate utilization, gelatin hydrolysis, the ability to reduce nitrate to nitrite (but not to nitrogenous gases) and beta-haemolysis on 5 % sheep blood. It was negative for urea, starch and aesculin hydrolysis, indole and hydrogen sulphide formation (SIM), formation of acid and acetoin in MR-VP medium and growth on nitrogen-free media. Acid was not formed aerobically from any of the carbohydrates utilized, but acid was formed through the fermentation of glucose and maltose (O/F).

HSC-21Su07

Cells were motile, Gram-negative (KOH-positive), non-sporing, fermentative bacilli, 0.5-1.0×1.5-2.5 µm in size. This strain was positive for catalase, oxidase, citrate utilization, urea hydrolysis, the reduction of nitrate to nitrite (but not to nitrogenous gases) and haemolysis on 5 % sheep blood. It was negative for violacein formation, aesculin, starch and gelatin hydrolysis, indole and hydrogen sulphide formation (SIM), the formation of acid and acetoin in MR-VP medium and growth on nitrogen-free media. Acid was formed aerobically from inositol but none of the other carbohydrates utilized; acid was formed through the fermentation of glucose in O/F medium.

HSC-16F04

Cells were motile, Gram-negative (KOH-positive), non-sporing, fermentative bacilli, 0.7-1.0×3-5 µm in size.
This strain was positive for violacein formation, catalase, gelatin hydrolysis, reduction of nitrate to nitrite (but not to nitrogenous gases) and beta-haemolysis on 5 % sheep blood. It was negative for oxidase, citrate utilization, urea, starch and aesculin hydrolysis, indole and hydrogen sulphide formation (SIM), the formation of acid and acetoin in MR-VP medium and growth on nitrogen-free media. Acid was formed aerobically from maltose but none of the other carbohydrates utilized; acid was formed through the fermentation of glucose in O/F medium.

HSC-15S17

Cells were motile, Gram-negative (KOH-positive), non-sporing, oxidative coccobacilli, 1.0-1.5×1.5-2 µm in size. This strain was positive for violacein formation, catalase, oxidase, gelatin liquefaction, starch hydrolysis, the reduction of nitrate to nitrite (but not to nitrogenous gases) and growth on nitrogen-free media. It was negative for citrate utilization, urea and aesculin hydrolysis, indole and hydrogen sulphide production (SIM) and the formation of acid and acetoin from glucose fermentation (O/F and MR-VP). It was not haemolytic on sheep blood agar and acid was not formed aerobically from any of the carbohydrates utilized.

HSC-2F05

Cells were motile, Gram-negative (KOH-positive), non-sporing, oxidative bacilli, typically 0.7-1.0×2-5 µm in size (some reached 20 µm in length). This strain was positive for catalase, oxidase, gelatin liquefaction, starch hydrolysis, the reduction of nitrate to nitrite (but not to nitrogenous gases) and haemolysis on sheep blood agar. It was negative for violacein formation, citrate utilization, urea and aesculin hydrolysis, indole and hydrogen sulphide production (SIM), the formation of acid and acetoin from glucose fermentation (O/F and MR-VP) and growth on nitrogen-free media. Acid was not formed aerobically from any of the carbohydrates utilized.

HSC-65S10

Cells were motile, Gram-negative (KOH-positive), non-sporing, oxidative bacilli, typically 0.5-1.0×3-5 µm in size (some reached 20 µm in length). This strain was positive for violacein formation, catalase, oxidase, citrate utilization, aesculin and gelatin hydrolysis, the ability to reduce nitrate to nitrite (but not to nitrogenous gases), beta-haemolysis on 5 % sheep blood and growth on nitrogen-free media. It was negative for urea and starch hydrolysis, indole and hydrogen sulphide formation (SIM), and the formation of acid and acetoin in MR-VP medium. Acid was formed aerobically from arabinose, glucose, inositol, maltose, mannitol, rhamnose, sorbitol, sucrose and xylose in O/F-type media, but not from lactose or raffinose.

HSC-3S05

Cells were motile, Gram-negative (KOH-positive), non-sporing, oxidative bacilli, typically 0.5-1.0×3-5 µm in size (some reached 15-20 µm in length). This strain was positive for violacein formation, catalase, oxidase, citrate utilization, aesculin, gelatin and urea hydrolysis, the ability to reduce nitrate and nitrite, beta-haemolysis on 5 % sheep blood and growth on nitrogen-free media. It was negative for starch hydrolysis, indole and hydrogen sulphide formation (SIM) and the formation of acid and acetoin in MR-VP medium. Acid was formed aerobically from arabinose, glucose, inositol, maltose, mannitol, rhamnose, sorbitol, sucrose and xylose in O/F-type media, but not from lactose or raffinose.

Genome sequencing and assembly

All isolates were sequenced by Microbes NG except UCD_MED1, which was sequenced at UC Davis (see Methods for details).
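As a small aid for readers recomputing assembly statistics of the kind reported next (Table 4), here is a minimal sketch for computing contig count, total length and N50 from a contigs FASTA; the input file name is hypothetical.

```python
# Sketch: basic assembly metrics from a contigs FASTA.
def contig_lengths(fasta_path):
    lengths, current = [], 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):        # header starts a new contig
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    return sorted(lengths, reverse=True)

def n50(lengths):
    """Length of the contig at which half the total assembly is reached."""
    half, running = sum(lengths) / 2, 0
    for length in lengths:
        running += length
        if running >= half:
            return length

lengths = contig_lengths("HSC-31F16_contigs.fasta")   # hypothetical file
print(f"contigs: {len(lengths)}, total: {sum(lengths):,} bp, "
      f"N50: {n50(lengths):,} bp")
```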
Results from the whole genome shotgun sequencing and genome assembly for each isolate are shown in Table 4.

Whole genome phylogeny

We created a phylogenetic tree that shows the placement of each of the nine new genomes within the phylogenetic context of a total of 349 genome assemblies. The tree consists of all bacterial strains of the same genera as the new strains presented here that are available within the NCBI database, plus one outgroup. This phylogenetic assemblage included genomes from 87 Chromobacterium strains, 16 Aquitalea strains, seven Iodobacter strains, 78 Duganella strains, 72 Massilia strains, 88 Janthinobacterium strains (incorporating three strains labelled as Janthinobacterium but of inconclusive genus identity) and one Archangium strain. Here we show relevant subsets of each major clade that a new strain is grouped within (Fig. 2a-g); taxonomic placements are indicated in each phylogenetic tree. The entire phylogenetic tree containing all publicly available genomes from the six genera in NCBI plus the outgroup used to root the tree, Archangium violaceum (349 taxa in total), can be found in Fig. S1. Implications of the patterns seen in the whole genome phylogeny are discussed in the Discussion section below.

Average nucleotide identity

ANI values were determined for the nine draft genome sequences by comparison with all available genome assemblies annotated as bacteria classified within the same genera. Table 5 shows the strains with the highest ANI values against the strains in this study, as well as the type strains with the highest ANI values. The full ANI dataset is available in Table S2.

Comparison of ANI to phylogenetic relationships

A heatmap of the ANI values, with strains ordered by their position in the whole genome phylogeny, was generated (Figs 3a-f and S2). This heatmap shows the ANI for each pairwise comparison of genomes in the dendrogram.

Identifying and differentiating distinct populations within the genus Janthinobacterium

For the strains that had been assigned to the genera Aquitalea, Chromobacterium, Iodobacter, Duganella and Massilia, ANI analysis and whole genome phylogenetics were considered sufficient for making taxonomic assignments (see Discussion). However, for the strains assigned to the genus Janthinobacterium, ANI analysis and whole genome phylogenetics were inconclusive due to ambiguous patterns of ANI values relative to clusters in the tree. We therefore applied an additional type of analysis, using the program PopCOGenT, to augment characterization of the Janthinobacterium group. PopCOGenT measures recent gene flow and recombination among genomes and thus serves as a 'reverse ecology' approach that can aid in the identification of genetic boundaries between isolates that are not discernible from standard ANI and phylogenomic methods [27]. PopCOGenT is designed to assess population structure under the assumption that organisms within the same population, compared to organisms that are genetically isolated from each other, should have higher rates of recombination and genomes with higher percentages of regions that have undergone recent recombination. It assesses the rates and extent of recombination by measuring levels of divergence between genomes as well as the lengths of identical stretches of DNA.
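To build intuition for why identical-tract lengths carry this signal, here is a toy simulation (purely illustrative, not PopCOGenT's actual statistic): under purely clonal divergence d, identical tracts between two genomes are roughly geometric with mean near 1/d, so a recently transferred block stands out as an improbably long identical run.

```python
# Toy illustration of the "length bias" idea behind PopCOGenT.
import random

random.seed(1)
L, d = 200_000, 0.01          # genome length, per-site divergence

# Clonal pair: each site differs independently with probability d.
diffs = [random.random() < d for _ in range(L)]

# Recombining pair: same background plus one recently transferred,
# still-identical 5 kb block.
diffs_rec = list(diffs)
for i in range(50_000, 55_000):
    diffs_rec[i] = False

def tract_lengths(diffs):
    """Lengths of maximal runs of identical sites."""
    runs, n = [], 0
    for is_diff in diffs:
        if is_diff:
            runs.append(n)
            n = 0
        else:
            n += 1
    runs.append(n)
    return runs

for label, dd in [("clonal", diffs), ("with recombination", diffs_rec)]:
    runs = tract_lengths(dd)
    print(f"{label}: mean tract {sum(runs) / len(runs):.0f} "
          f"(clonal expectation ~{1 / d:.0f}), max tract {max(runs)}")
```

Running this, the clonal pair's longest identical tract stays near what the geometric expectation allows, while the recombining pair's maximum jumps to the transferred block's length.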
Recent recombination between lineages should lead to an overabundance in number and length of identical regions between two genomes from those lineages (because there will not have been as much time to accumulate substitutions as compared to non-recombining regions). The method estimates recombination by comparing the lengths of identical regions between genome pairs versus an expectation under a Poisson model of purely clonal mutation. This parameter is referred to as 'length bias' and is reported as 'Observed SSD', with higher values indicating more recombination. Generally, length bias is positively (although not linearly) correlated with the amount of gene flow between each pairwise genome comparison [27]. In addition, PopCOGenT measures the 'initial divergence', which is a measure of the total diversity seen in each genome comparison. One way of assessing the results of PopCOGenT analysis is to examine the relationship between initial divergence and SSD, which is shown for the Janthinobacterium genomes in Fig. 4. Another way of assessing PopCOGenT analysis is to infer networks of recent gene flow, which are shown in Fig. 5. The implications of the PopCOGenT analysis are addressed in the Discussion. For all pairs of Janthinobacterium genomes, the log10 transformation of initial divergence (a measure of the total divergence between the genomes) is plotted versus the log of the SSD score (a measure of the length bias towards longer stretches of identical DNA, which is used as a measure of the extent of recombination). The pairwise comparisons represented by the points in the upper left quadrant of the plot are genomes that are very similar to each other and in some cases may be separate sequences of the same strain. They thus have low levels of initial divergence and very high levels of inferred SSD. The pairwise comparisons represented by the points in the lower right of the plot represent the genome pairs that are significantly more distantly related (note the large jump in log initial divergence), and these generally have lower SSD scores.

Taxonomic assignment based on a combination of analyses

A multifaceted approach, including a combination of chemotaxonomic, phenotypic and genotypic data, was used to determine the taxonomic and phylogenetic positions of the nine isolates introduced here. Initial 16S rRNA gene sequence analysis allowed strains to be assigned a putative genus-level taxonomic classification. Species-level classification was then confirmed or designated using a combination of whole genome phylogenetic placement and ANI analysis. ANI values of approximately 95 % are proposed by some to represent an accurate threshold for demarcating a species boundary for some bacteria [36,37]. However, measures of similarity, even when taken across a whole genome, can be misleading regarding relatedness due to factors such as selection, mutation bias, unequal rates of evolution and gene loss; therefore, ANI should be used with caution as a measure of possible species boundaries. It is thus critically important to supplement ANI measures with actual phylogenetic analysis, such as that based on whole genome sequences. We used the GTDB-tk-based system here to provide such a phylogenomic analysis.
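As a concrete (and deliberately naive) illustration of how a fixed cutoff behaves, the sketch below reads a FastANI all-versus-all table and groups genomes by single-linkage clustering at 95 % ANI. The file name is hypothetical, and, per the caveats above, the resulting clusters are a starting point rather than species calls.

```python
# Sketch: single-linkage "species" clusters from FastANI output.
# FastANI rows are: query, reference, ANI, mapped fragments, total fragments.
import networkx as nx

G = nx.Graph()
with open("fastani_all_vs_all.txt") as fh:       # hypothetical file name
    for line in fh:
        query, ref, ani, mapped, total = line.rstrip("\n").split("\t")
        G.add_node(query)
        G.add_node(ref)
        if query != ref and float(ani) >= 95.0:  # oft-cited species cutoff
            G.add_edge(query, ref)

clusters = sorted(nx.connected_components(G), key=len, reverse=True)
for i, cluster in enumerate(clusters, start=1):
    print(f"putative species cluster {i}: {sorted(cluster)}")
```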
In addition, morphological features and the results of metabolic tests and other phenotypic characteristics have been used extensively to characterize bacteria, and although these alone cannot be used for classification or phylogenetic placement, they do provide information about new strains and so were included. For most of the taxa found in this study, 16S rRNA gene sequence comparisons, ANI values and whole genome phylogenetic analyses provided a reasonable basis for inference of taxonomy and phylogenetic position.

Table 5. Summary of highest ANI (%) scores. ANI was calculated for each new strain presented here. A summary of the bacterial strains with the highest matching ANI scores to each of the nine new strains is listed, as is the ANI score with the type strain of the closest matching species. (Columns: Genome ID; highest ANI for any genome; ANI (%); highest ANI for any type strain; ANI (%).)

For the taxa determined to be in the genus Janthinobacterium, these methods alone were not fully sufficient, and we used PopCOGenT to provide additional information about evolutionary relationships as indicated by rate of gene flow. We discuss the isolates and their species designations in greater detail below. In addition, we also discuss possible needs for taxonomic reassignment of some of the genomes currently in NCBI.

Genus Chromobacterium: strains HSC-31F16 and HSC-77S12

Results indicate that Chromobacterium strain HSC-31F16 and the reference strain H4137_1 represent a novel species. Although strain HSC-31F16 initially appeared most similar to C. aquaticum strain CC-SEYA-1 [38] based on 16S rRNA gene sequences, it presented seven metabolic differences. It was catalase-positive, susceptible to rifampin, unable to assimilate galacturonic acid, glucosaminic acid, itaconic acid and phenylethylamine as single carbon sources, but able to assimilate γ-hydroxybutyric acid; these are all in opposition to the responses shown by C. aquaticum. Although genome data for C. aquaticum strain CC-SEYA-1 were not available for comparison, it is unlikely that strain HSC-31F16 is a representative of the same species. The whole genome-based phylogenetic tree (Figs 2a and S1) indicated strain HSC-31F16 might represent a C. haemolyticum strain; however, this designation was not strongly supported by ANI values. The genome from strain HSC-31F16 shared only 93.92 % ANI with the C. haemolyticum type strain MDA0585=DSM 19808 [36]. It also shared only 94.03 % ANI with a complete genome from C. haemolyticum strain Bb2. Because the genome from strain HSC-31F16 is located on a long branch with only one other genome, that from C. haemolyticum strain H4137_1, and because it shares 97.46 % ANI with that genome, both strains probably represent a species other than C. haemolyticum. Based on results from all methods, we propose that strain HSC-31F16 be designated as representing a novel species. Specifically, we propose that this strain and strain H4137_1 be named Chromobacterium perflumen (per.flu'.men; per, L. prep. through or by means of; flumen, L. n. river; perflumen, through the river); HSC-31F16 was isolated from water collected along a major river in northern California. Strain HSC-77S12 should be taxonomically classified as the species Chromobacterium piscinae. This assignment is based on the whole genome phylogeny, which shows that it groups into a clade with those from other C. piscinae strains, including the type strain C. piscinae DSM 23278 (Figs 2b and S1).
In addition, this genome shares >95 % ANI with those of other members of the species, including 98.82 % ANI with the type strain DSM 23278. We also suggest that the Chromobacterium 'vaccinii' strains A7 (GCF_019733135.1), A8 (GCF_019733095.1) and A15 (GCF_019732845.1) should be renamed as C. piscinae due to the 98.85, 98.72 and 98.82 % ANI similarity scores, respectively, of their genomes with that of HSC-77S12, and the phylogenetic grouping of these genomes within the C. piscinae clade containing the C. piscinae type strain. In addition, C. 'piscinae' strain ND17 (GCF_000812585.1) should probably be renamed as C. amazonense because the genome of this strain is in a clade with that of the C. amazonense type strain DSM 26508 (GCF_001855565.1). The metabolic characteristics of HSC-77S12 were generally consistent with the description provided for Chromobacterium piscinae LMG 3947 [39], with a few exceptions. Strain HSC-77S12 was positive for catalase and citrate utilization, while C. piscinae was described as negative for these traits. In addition, C. piscinae was described as being aerobic, while strain HSC-77S12 was facultative (able to ferment glucose). Chromobacterium piscinae was also described as being negative for putrescine and l-phenylalanine assimilation, while HSC-77S12 demonstrated weak assimilation of putrescine and positive assimilation of l-phenylalanine.

Genus Aquitalea: strain HSC-21Su07

Within the phylogenetic tree, the genomes of strains HSC-21Su07, MWU13-2470, MWU14-2217 and ASV1 form a small but separate clade, and when compared to one another have ANI values (~94 %) just below the current standard species threshold of 95 %. The sister clade to this group includes the genome of the type strain Aquitalea aquatilis THG-DN7_12, but the ANI value with this strain is moderately low (~92 %) and HSC-21Su07 should not be assigned to this species. Strain HSC-21Su07 appears even more distantly related, and has lower ANI values, when compared to the other Aquitalea species for which genomes are available (A. denitrificans, A. magnusonii and A. pelogenes) (Figs 2c and S1), and is not assigned to these species. The metabolic features of strain HSC-21Su07 were most consistent with those described for A. denitrificans [40], with the exception of glucose fermentation and urease formation (negative for strain 5YN1-3 and positive for HSC-21Su07). Features were less similar to those of A. magnusonii, which was described as being positive for urea hydrolysis and glycogen utilization, resistant to ampicillin, rifampin and streptomycin, but susceptible to tetracycline [41]. Strain HSC-21Su07 was also urease-positive but could not utilize glycogen, and was susceptible to ampicillin, rifampin and streptomycin but resistant to tetracycline. Based on these findings we propose a novel species name for strain HSC-21Su07 and the other strains within the same clade, namely MWU13-2470, MWU14-2217 and ASV1: Aquitalea aquatica (a.qua'.ti.ca; L. adj. aquatica, living, growing or found in water, aquatic). HSC-21Su07 was isolated from spring water.

Genus Iodobacter: strain HSC-16F04

Strain HSC-16F04 grouped with the type strain of Iodobacter fluviatilis in the whole genome phylogeny (Figs 2d and S1), but its genome shared an ANI value of only 92.1 % with that of this type strain. Not all existing Iodobacter species are represented in this phylogeny due to lack of genome availability; unrepresented Iodobacter species include I. limnosediminis and the proposed novel species I. arcticus. The characteristics of strain HSC-16F04 were generally consistent with those described for I. fluviatilis [42], except that unlike I.
fluviatilis, Iodobacter strain HSC-16F04 was oxidase-negative, did not assimilate malic acid and showed weak assimilation of Tween 40. Characteristics were less similar to those of Iodobacter arcticus, which was described as being strictly aerobic (respiratory), non-motile, citrate-positive and gelatinase-negative [43]. Strain HSC-16F04 was facultative (fermentative), motile, citrate-negative and gelatinase-positive. Since the genome from HSC-16F04 does not group with existing I. fluviatilis or I. ciconiae genomes phylogenetically and shares only low ANI values with these, we suggest that strain HSC-16F04 represents a novel Iodobacter species. The genome from this violet pigment producer and that of Iodobacter strain BJB302 appear closely related and are significantly unlike other Iodobacter whole genomes reported to date. The species name we propose for these two strains is Iodobacter violacea (vi.o.la.ce.a; L. adj. violaceus -a -um, violet; the colonies are deep violet when formed on TSA or R2A agar).

Genus Duganella: strain HSC-15S17

In terms of species-level assignment, the genome of HSC-15S17 is in a clade with genomes from two violet-coloured strains classified only to genus: Duganella sp. BJB475 and BJB476. A genome from the closest named relative to these is from D. aceris SAP-35; however, this taxonomic name has been effectively published but not validly published under the rules of the International Code of Nomenclature of Bacteria (Bacteriological Code). In terms of metabolism, the metabolic features of strain HSC-15S17 were only partially consistent with those of described species within this genus and the characteristics described for the related species Duganella zoogloeoides [44]. Strain HSC-15S17 produces large, gelatinous, violet-coloured colonies when grown on YEM, while the colonies of D. zoogloeoides were described as pale yellow to straw-coloured. In addition, D. zoogloeoides was described as forming acid aerobically from glucose and producing urease, which strain HSC-15S17 did not. We consider strain HSC-15S17 to represent a novel species and propose that this strain and the Duganella strains BJB475 and BJB476 be given the name Duganella violaceicalia (vi.o.la.ce.i.cal.i.a; L. adj. violaceus -a -um, violet, because the purple pigment violacein is produced by this strain; Gr. adj. kal or kall, beauty or beautiful; N.L. fem. adj. violaceicalia, violet beauty). The striking violet colour of Duganella strain HSC-15S17 colonies develops over time as they grow.

Genus Massilia: strain HSC-2F05

Results indicate that the Massilia strains HSC-2F05 and MS-15 represent a novel species. Strain HSC-2F05 was assigned to the genus Massilia based on 16S rRNA gene sequence comparison and showed 98.32 % identity with the same genes from Massilia varians strain CCUG 35299 (NR_042652). This assignment was supported by its placement within the GTDB-tk-aligned whole genome phylogeny and the ANI values (Figs 2f and S1; Table 5). At the time of writing, the NCBI genome assembly database contained 98 public genomes with the genus annotation Massilia, and these were incorporated into the phylogenetic tree and heatmap dendrogram. Analysis of the GTDB-tk-aligned heatmap dendrogram verified that almost every one of the Massilia genome assemblies is not closely related to its nearest phylogenetic neighbours (<90 % ANI). Many individual genomes can be considered to represent unique species. The genome of Massilia strain HSC-2F05 has a 96.37 % ANI value with that of its nearest phylogenetic neighbour, Massilia sp.
strain MS-15, a strain without species-level classification, but has less than 89 % ANI with sister taxa within the whole genome phylogenetic tree. The closest genome with a species-level classification, that of Massilia oculi strain CCUG_43427 (GCF_003143515.1), shares only 84.52 % ANI. The metabolic features of strain HSC-2F05 were only partially consistent with the characteristics described for the closest 16S rRNA gene match, Massilia varians, previously known as Naxibacter varians [45,46]. Morphological features were similar, but Massilia varians was described as being H2S-positive and negative for nitrate reduction, while strain HSC-2F05 was negative for H2S production and positive for nitrate reduction. In addition, M. varians was described as being positive for aesculin hydrolysis while HSC-2F05 was negative, and as susceptible to polymyxin while HSC-2F05 was resistant to that antibiotic.

Genus Janthinobacterium: strains HSC-65S10, HSC-3S05 and UCD_MED1

Strains HSC-65S10, HSC-3S05 and UCD_MED1 were tentatively assigned to the genus Janthinobacterium based on analysis of 16S rRNA gene sequences. This was confirmed by the whole genome phylogenetic analysis (Figs 2g and S1); however, making species-level assignments for these strains was complicated for multiple reasons discussed here. We describe some of the challenges presented by this genus and possible solutions below.

Challenge 1. Taxonomic misannotation in the genus Janthinobacterium

One major challenge in making species-level assignments for our strains related to problems in the naming of organisms with genomes listed as being from the genus Janthinobacterium in NCBI. There are currently five formally recognized species in the genus Janthinobacterium, including the original type strain: J. lividum DSM 1522T=NCTC 9796=H-24=ATCC 12473 [47]; the only pathogenic strain: J. agaricidamnosum DSM 9628T=NBRC 102515 [48]; and three species introduced by Lu et al. [49]: J. violaceinigrum strain FT13WT=GDMCC 1.1638=KACC 21319, J. aquaticum strain FT58WT=GDMCC 1.1676=KACC 21468 and J. rivuli strain FT68WT=GDMCC 1.1677=KACC 21469 [49]. One issue with this group is that there is some confusion in the NCBI database regarding the last two genomes. The 16S rRNA gene sequences for strains FT58W and FT68W are listed in the NCBI taxonomy database as J. aquaticum and J. rivuli, but their genomes are not included under their taxonomic entries. Instead, the genomes are listed as J. sp. with strain designations but no species identification. Another complicating factor for this group is that, according to the NCBI bacterial taxonomy database, three additional species names have been effectively published but not validly published under the rules of the International Code of Nomenclature of Bacteria (Bacteriological Code). In addition to these general naming issues, there are also many genome assemblies in the NCBI database that are included in the genus Janthinobacterium but for which the taxonomic assignment is almost certainly incorrect. For taxa for which we have inferred the names are probably incorrect, we write them with square brackets (e.g. [misannotated name]) around what we have found to be the incorrect parts of the names. One type of incorrect taxonomic naming involves multiple genomes that in the GTDB-tk phylogeny clearly do not belong in the genus Janthinobacterium. For example, the genome from the isolate listed as [Janthinobacterium] sp.
strain CG23_25 (RefSeq ID: GCF_001485665.1) falls within the Massilia branch of the whole genome phylogenetic tree (Fig. S1). Similarly, the genome labelled as [Janthinobacterium] sp. strain B9-8 (RefSeq ID: GCF_000969645.2) falls within the Iodobacter clade, and the genome labelled as [Janthinobacterium] strain HH01 (RefSeq ID: GCF_000335815.1) falls within the genus Duganella (Fig. S1). In addition, there are three genome assemblies annotated as 'Janthinobacterium' that are placed in the GTDB-tk tree outside of the genus but in a group for which no clear genus can be assigned. These genomes are: [Janthinobacterium] sp. 17J80-10 (GCF_004114795.1), [Janthinobacterium marseille] P9896 (GCF_903469655.1) and [Janthinobacterium marseille] (GCF_000013625.1). In addition to mis-assignment to the genus Janthinobacterium, there are many genomes correctly assigned to the genus Janthinobacterium but which have species names assigned that are inconsistent with the placement of type strains in the GTDB-tk tree. For example, the genome assembly labelled J. [agaricidamnosum] strain BHSEK (RefSeq ID: GCF_003667705.1) shares a more recent branch node with J. lividum strains than with the type strain of J. agaricidamnosum. The genome assemblies labelled J. svalbardensis PAMC 27463 and J. [tructae] strain SNU WT3T are sister taxa that group together within a clade consisting of strains named only to genus or labelled as J. lividum. These incorrect taxonomic assignments and the ambiguity regarding formal species names, as well as some taxa for which the 16S rRNA gene data are listed under one name and the genomes are listed under another, all serve to make assigning names to new isolates within this group challenging.

Challenge 2. Inconsistent results from 16S rRNA gene sequence analysis, ANI and whole genome phylogeny

Another key challenge in making species-level assignments for the genus Janthinobacterium is that the results from analysis of 16S rRNA gene sequences, ANI and whole genome phylogeny were somewhat ambiguous and contradictory; we summarize some of these results here. First, the 16S rRNA genes of all three of our strains showed >99.80 % identity to the 16S rRNA gene of the Janthinobacterium lividum type strain DSM 1522 (NR_026365), and equally high percentage identity scores with many other Janthinobacterium lividum strains in the NCBI database. Such high percentage identity of 16S rRNA genes is frequently used to indicate that strains are from the same species. However, the results of ANI analysis and whole genome phylogenetics did not support a simple assignment of these strains to J. lividum. In particular, while ANI generally tracked well with the whole genome phylogeny for the genus Janthinobacterium, the specific ANI levels for the new strains, and for strains that could potentially be assigned to J. lividum, were lower than the standard 95 % cutoff used by many to delineate species. Using strict ANI value cutoffs can be insufficient to identify species boundaries because of issues including unequal rates of evolution, differential gene loss and gain, and the fact that species boundaries are determined by ecological and genetic factors and not percentage identity. In addition, it is important to note that the 95 % species threshold is based on analysis of a limited number of genomes, generally with high levels of similarity [23].
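The disagreement just described (near-identical 16S rRNA genes alongside sub-threshold ANI) can be framed as a simple decision rule; the toy helper below is illustrative only, using commonly cited cutoffs (98.7 % for 16S, 95 % for ANI) that, as discussed, should not be treated as hard boundaries.

```python
# Toy check for marker disagreement; thresholds are conventions, not rules.
def marker_agreement(rrna_identity, ani):
    same_by_rrna = rrna_identity >= 98.7   # commonly cited 16S cutoff
    same_by_ani = ani >= 95.0              # commonly cited ANI cutoff
    if same_by_rrna == same_by_ani:
        return "consistent"
    return "conflicting: weigh phylogenomics and gene-flow evidence"

# e.g. the Janthinobacterium case here: 16S ~99.8 % but ANI ~93.7 %
print(marker_agreement(99.8, 93.67))       # -> conflicting
```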
Regardless of these caveats, the ANI values within this group suggest that drawing species boundaries using ANI in the same way as has sometimes been done for other taxa is potentially invalid.

Improving resolution in the genus Janthinobacterium using PopCOGenT

Because the genus Janthinobacterium does not clearly fit into the standard ANI model, and the strains investigated here have nearly identical phenotypes (Tables 1 and 2), we used PopCOGenT to investigate the evolutionary history and species-level classification of this group in greater detail. As mentioned above, PopCOGenT assesses population structure by inferring the extent of recombination relative to the amount of divergence between genomes, and this in turn can help determine where natural genetic boundaries exist between strains [27]. PopCOGenT has been used to distinguish recent gene flow events from historical ones and to identify ancestral nodes in phylogenomic trees that lead to speciation events [53]. PopCOGenT can be used to predict the recombination pattern and population structure within bacterial species and reveal distinct populations. Directionality of gene flow from population to population can also be determined [54]. We used PopCOGenT to analyse all genomes for a clade within the genus Janthinobacterium that included the type strains, the three new Janthinobacterium genomes presented in this study and other related strains (68 genomes in total; Table S1). The results were assessed in two ways: by examining the relationships between initial divergence and length bias (Fig. 4) and by inferring networks of recent gene flow (Fig. 5). The results shown in Fig. 4 highlight that the Janthinobacterium genome comparisons fall into two main categories. First, there are genome comparisons where the two genomes are very closely related (on the left along the 'initial divergence' x-axis). Some of these pairs appear to represent resequencing of the same strains, either because they were isolated from the same sample or because groups separately sequenced strains obtained from a common source. Other pairs appear to represent simply quite closely related separate isolates. One application of PopCOGenT is to detect gene flow in coexisting microbes of the same species. The program can also be applied to examine gene flow for ancestral strains. PopCOGenT can be used to measure the amount of relatively recent horizontal gene flow compared to the level of divergence among many sets of microbes, whether or not they coexist, and the data can provide information about historical genetic boundaries between lineages. In this study, PopCOGenT was used to provide information on patterns of gene flow in lineages of Janthinobacterium strains in order to supplement analysis of phylogeny, genomes and ANI. In addition, there are genome comparisons where the genome pairs are significantly more distantly related (along the right side of the x-axis), such that the initial divergence scores 'jump' multiple log values. The discontinuity along the x-axis is caused by the genomes that form distinct subpopulations (upper left side) having lower initial divergence than the genomes that do not form populations. This confirms that subpopulation clusters have lower rates of genetic divergence and that genomes with higher rates of genetic divergence do not form subpopulations with other genomes in the dataset.
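A Fig. 4-style diagnostic of this kind can be rebuilt from a pairwise results table along the lines sketched below. The file and column names are assumptions about the PopCOGenT output layout, and the marked vertical lines follow the approximate divergence gap discussed in the text.

```python
# Sketch: log-log scatter of initial divergence vs observed SSD.
# Column names are assumed; zero values are dropped before log10.
import math
import matplotlib.pyplot as plt
import pandas as pd

pairs = pd.read_csv("janthino_popcogent_pairs.tsv", sep="\t")  # hypothetical
pairs = pairs[(pairs["Observed SSD"] > 0) & (pairs["Initial divergence"] > 0)]

x = pairs["Initial divergence"].apply(math.log10)
y = pairs["Observed SSD"].apply(math.log10)

fig, ax = plt.subplots()
ax.scatter(x, y, s=12, alpha=0.6)
ax.axvline(-4, linestyle="--", color="grey")   # subpopulation-forming side
ax.axvline(-3, linestyle=":", color="grey")    # non-population-forming side
ax.set_xlabel("log10 initial divergence")
ax.set_ylabel("log10 observed SSD (length bias)")
fig.savefig("ssd_vs_divergence.png", dpi=200)
```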
The genome pairs with high levels of initial divergence almost always have lower SSD scores (and thus less inferred gene flow) than the genome pairs with low levels of initial divergence. Within this dataset, genomes that form subpopulations have an initial divergence score of <−4, and those that do not form subpopulations have an initial divergence score of >−3. To infer specific subpopulation structure from the initial divergence and SSD scores shown in Fig. 4, we generated a gene-flow network diagram (Fig. 5). Each genome is represented by a node, and the edges represent inferred gene flow, with edge thickness representing the amount of gene flow. Genomes without any inferred gene flow with other genomes are coloured grey; genomes that are part of gene-flow networks are coloured, with each colour representing a distinct gene-flow network. Overall, the only genome groups that formed strong gene-flow networks were those that were very closely related to each other, as indicated by low initial divergence scores and close placement in the whole genome phylogenetic tree. Of note, no networks were seen that included pairs of strains with ANI <95 %. Overall, the PopCOGenT analysis shows that, for the strains in the genus Janthinobacterium analysed here, significant gene flow occurs only among some (but not all) of the strains that the genome tree shows to be very closely related. In some cases, there is no detectable gene flow even between strains that are quite close. This could be due to physical or ecological separation of closely related strains, which would prevent gene flow from occurring. We note that the strains included in this analysis were isolated from a wide diversity of locations and ecosystems, so it is not surprising that they may be genetically isolated from each other.

Integrating information from 16S rRNA genes, ANI, whole genome phylogeny, phenotype and gene flow networks for the genus Janthinobacterium

In this section we discuss the implications for the taxonomic assignments of the new Janthinobacterium isolates by integrating the PopCOGenT results with the analysis of 16S rRNA sequences, ANI, whole genome phylogeny and phenotype. Phylogenetically, HSC-3S05 groups in a clade in the whole genome tree with three other strains classified only to the genus level, and has an ANI of >95 % with those strains. This clade is a sister group to a clade that includes both the J. lividum type strain and the J. violaceinigrum type strain. HSC-3S05 has 93.67 % ANI with the J. lividum type strain and 92.39 % ANI with the J. violaceinigrum type strain. The metabolic characteristics of HSC-3S05 are generally consistent with the description provided for 'typical' Janthinobacterium lividum [55], except that the assimilation of N-acetyl-d-glucosamine, cellobiose, α-lactose and l-phenylalanine was reported as negative for published strains whereas HSC-3S05 was positive or weakly positive for these. HSC-65S10 groups in a clade in the whole genome tree with four other strains classified only to the genus level, and has an ANI of >95 % with those strains. HSC-65S10 is phylogenetically more distant from the J. lividum and J. violaceinigrum type strains than HSC-3S05 is; however, its ANI with these two type strains (93.27 % and 92.29 %, respectively) is similar to that of HSC-3S05.
The PopCOGenT results show that HSC-65S10 is not part of a gene-flow network with any other strains in the dataset, providing evidence that strain HSC-65S10 may have a history of ecological isolation from other strains in the dataset. The metabolic characteristics of HSC-65S10 were generally consistent with the description provided for 'typical' Janthinobacterium lividum [55], except that the assimilation of N-acetyl-d-glucosamine and cellobiose was described as negative for all published strains whereas HSC-65S10 was positive. Strain UCD_MED1 has 97.58, 96.36 and 96.36 % ANI with the Janthinobacterium strains NCTC8861, EB271-G4-3-1 and EB271-G4-3-2, respectively, which are classified only to genus. The PopCOGenT results show that UCD_MED1 is not part of a gene-flow network with any other strains in the dataset, providing evidence that strain UCD_MED1 may also have a history of ecological isolation. Whether groups such as this one should be considered separate species or subspecies is debatable. The metabolic characteristics of UCD_MED1 were generally consistent with the description provided for 'typical' Janthinobacterium lividum [55], except that the assimilation of N-acetyl-d-glucosamine and cellobiose was reported as negative for published strains whereas UCD_MED1 was positive.

The three new Janthinobacterium strains presented here could represent novel species, as they (1) show little gene flow with the other strains analysed, (2) are phylogenetically somewhat distinct from the type strains of formally described species and (3) have ANI values of <95 % with any strain that can be readily assigned to a species. However, it may be premature to propose novel species designations here, given that (1) gene flow in the genus Janthinobacterium appears to drop off precipitously even between quite closely related strains (see above), (2) ANI is an indirect and imprecise measure of species boundaries, and there are other examples of ANI values of <95 % occurring within proposed species groups [23], and (3) the metabolic features of the three new strains are generally consistent with those described for typical J. lividum strains. We therefore propose that, in the absence of other evidence, all the Janthinobacterium strains shown in Fig. 2g be considered members of the same species, namely J. lividum, and furthermore that a 90 % ANI cutoff is more appropriate for species boundaries in the genus Janthinobacterium. This would mean that several recently published 'species' be designated subspecies within this group.

CONCLUSIONS

Escaping the mire of subjectivity in taxonomy

During this study, we determined the taxonomic status of nine bacterial strains using phylogenomic and population ecology approaches. We found that taxonomically classifying six of these strains using relatively standard methods (a combination of 16S rRNA gene sequence analysis, ANI and whole genome phylogeny) worked well, and we suggested that five of the strains be recognized as representing novel species. However, for the three strains in the genus Janthinobacterium, these standard approaches gave somewhat inconsistent and ambiguous results, and we were unable to determine where the species boundaries within the group lie.
This is not particularly surprising, given that these methods are somewhat indirect ways to assess species boundaries and do not directly address factors such as genetic isolation between groups, which is needed to formally determine where species boundaries exist. We therefore added an analysis using the program PopCOGenT to examine population structure within the Janthinobacterium group. PopCOGenT estimates the extent of recent genetic exchange between genomes by assessing the balance between shared identical genome regions and the total divergence of the genomes. This additional analysis showed that, among the genomes immediately surrounding the J. lividum type strain, low recombination rates dominate throughout the clades, suggesting that existing strains may comprise novel, unnamed species. Combining standard tools with population genetic analysis such as PopCOGenT was useful here, will presumably be useful in future studies, and should better integrate naming with species concepts for bacteria in general [56]. Experimenting with new methods of bacterial taxonomic classification opens the discussion on how taxonomists can escape the mire of subjectivity and agree on a definitive methodology for classifying bacteria to the species level.

The perils of misnaming

Our analysis here was complicated by the mis-classification of strains for which genome sequences are available. Overall, we suggest revising the taxonomic status of many genome assemblies within all six of the genera evaluated in this study, especially within the genus Janthinobacterium. This highlights a major issue for microbiology: published draft genome assemblies can sometimes be inaccurate [57,58]. Published mis-classified assemblies are problematic because they can lead to false conclusions and mis-identified strains [59], and misidentification can in turn lead to inappropriate ongoing research. We encourage more attention to be paid to the taxonomic annotations of new genomes uploaded to public databases.

Relevance of metabolic and physiological assays

For this study we recorded classical phenotypic characteristics, including cell and colony morphology, metabolic testing and antimicrobial susceptibility testing. Although not all phenotypic characteristics were determined, those that were are consistent with the characteristics included in previous descriptions of the genera and species investigated. Phenotypes are observable characteristics of cells [60], and although the term 'phenotype' is sometimes applied more broadly, the classical phenotypic characteristics of bacteria comprise morphological, physiological and biochemical features [61]. Growth phenotypes are of particular interest because they help us understand the selective forces that may have shaped the history of a particular organism, and because they provide insights into an organism's niche and how to work with it in the lab [60]. Many bacteria carry genes that are expressed only under certain circumstances, such as virulence factors regulated through quorum sensing and pigment-production genes influenced by temperature variation [62,63]. Although it is doubtful that individual phenotypes or small collections of phenotypes can consistently and correctly represent evolutionary relationships [64], organisms are described in phenotypic terms, and these descriptions help define taxonomic groups that may also be recognized at the phylogenetic level [65].
Even with the advent of rapid DNA sequencing, and its growing use in identifying and describing species, verification of bacterial species still requires phenotypic description, whenever possible based on experimental cell culture and phenotyping. Importantly, it is still not possible to accurately predict the phenotypes of bacteria from complete genome sequences, so characterizing and describing phenotypes remains a critical component of any species description. A minimal phenotypic description is not only the identity card of a taxon but also a key to its biology. Although they are accepted as necessary, differential phenotypic characters are often hard to find with a reasonable amount of effort and time [31]. This does not mean phenotypes are the only information needed to describe an organism: phenotypes are prone to convergent evolution and can differ greatly under varying conditions, even for strains that are genetically identical (e.g. due to epigenetic modifications). It is therefore important to use both phenotypic and genetic/genomic information in characterizing taxa.

Future directions for investigating violacein-producing bacteria

The violacein biosynthesis pathway relies on the amino acid tryptophan. The first two genes in the pathway, vioAB, are homologous to genes used for the indole compounds staurosporine and rebeccamycin, which are found in some actinomycetes. VioC is homologous to a monooxygenase housekeeping gene used in various processes across a wide range of prokaryotic and eukaryotic organisms, while vioD and vioE are used uniquely for the production of violacein and deoxyviolacein. The striking violet pigment violacein is the final product of a biosynthetic pathway that also produces prodeoxyviolacein (green) from the first three genes in the pathway (vioABE) and deoxyviolacein (violet) from the first four genes (vioABEC). The end product, violacein (navy-violet), is produced when all five genes are used (vioABEDC). Deoxyviolacein is not cytotoxic, whereas violacein is, and synthesizing deoxyviolacein can therefore be advantageous for developing products such as biosensors and food colorants [66]. Further research should focus on predicting violacein production in cultured or uncultured bacteria by developing models and using phylogenomic methods. Such investigations could elucidate how horizontal gene transfer of violacein genes may have contributed to the pigment's distribution throughout the bacterial tree of life. Models could be useful for future research in which production of the pigment can be inferred from any bacterial whole genome without the need for culturing and extraction. The construction of models that allow the user to discover new strains of violacein-producing bacteria, and the methods for creating and using these models, could readily be co-opted for searching for other pigments in bacterial whole genome assemblies. In this way, the vioABCDE genes can be searched for and compared, both to determine whether they are present in non-pigmented strains (e.g. in Janthinobacterium) and to determine whether they were probably acquired independently through multiple horizontal gene transfers by various strains or species. The taxonomy and evolution of bacterial strains capable of expressing violacein, such as Janthinobacterium, are not well understood, even though biosynthesis of the compound has been clearly demonstrated in multiple genera. Future studies should focus on understanding the concordance between biochemical phenotyping and metabolomic predictions.
Phenotyping and genome annotation are important for understanding whether particular bacteria will be useful, safe and economical for future applications. This evolutionary knowledge can help provide context for future synthetic biology and genetic modification studies using violacein and non-model bacterial species such as these. The evolutionary or adaptive reasons underlying the Janthinobacterium subpopulations inferred by PopCOGenT should be further investigated. Some metabolic tests can be used to define species and biotypes within a species, and the delimitation of biotypes within a specific group has been found to agree generally with the observed phylogenetic structure [67]. Future studies should use metabolic tests to determine whether splitting a specific species such as Janthinobacterium lividum into biotypes based on differences in niche occupancy reflects phylogenetic subdivisions within the species. Codon usage redundancy may cause GC content to shift within members of a genus, and differences in GC content among coding sequences may be caused by mutational bias due to adaptation to different lifestyles or niches among species isolated from very different environments. The variation in metabolic capabilities and GC content between species and lineages that have adapted to distinct niches suggests that these traits have evolved over long periods in response to the demands of their specific niches [67]. Using publicly available whole genome sequences, additional studies on genera containing violacein-producing bacteria should also compare the GC contents of coding sequences, as these are thought to reflect subtle differences in mutational bias resulting from long-term niche adaptation.

Funding information

The author(s) received no specific grant from any funding agency.
Uric acid is associated with adiposity factors, especially with fat mass reduction during weight loss in obese children and adolescents

Background: Current adult studies suggest that uric acid (UA) is associated with body fat, but the relationship in obese children is unclear. We therefore aimed to evaluate the association between uric acid and body composition in obese children.

Methods: A total of 79 obese children were included in this study, of whom 52 (34 boys and 18 girls) completed a 6-week weight-loss camp. Six-week weight-loss interventions were performed on all participants through aerobic exercise and appropriate dietary control. Laboratory tests and body composition were collected before and after the intervention.

Results: Before the intervention, correlation analysis demonstrated that uric acid was positively correlated with height, weight, body mass index (BMI), waist circumference, hip circumference, fat mass (FM) and free fat mass (FFM) after adjusting for age and gender (P < 0.05). After 6 weeks of intervention, the participants gained 3.12 ± 0.85 cm in height, body fat percentage decreased by 7.23 ± 1.97%, and weight decreased by 10.30 ± 2.83 kg. Univariate and multivariate analyses indicated that uric acid at baseline was associated with FM reduction during weight loss (P < 0.05).

Conclusions: This study is the first to report that uric acid is associated with BMI and FM, and that it may play an important role in the reduction of FM during weight loss in obese children and adolescents. The interaction between UA and adiposity factors and its underlying mechanisms needs further exploration.

Trial registration: This study was registered at ClinicalTrials.gov (NCT03490448) and approved by the Ethics Committee of Xinhua Hospital, Shanghai Jiao Tong University School of Medicine.

Background

In 2014, a large-scale study of the world's obese population revealed that the number and proportion of obese and overweight people worldwide had been increasing over the preceding 30 years [1]. The total global obese and overweight population increased from 857 million in 1980 to 2.1 billion in 2013, and the number of obese and overweight children increased by 47.1% [1]. Obesity has been defined as a worldwide epidemic metabolic disease by the World Health Organization (WHO), and it has become a global public health problem endangering human health [2]. Childhood obesity not only leads to an increased incidence of chronic diseases such as fatty liver, diabetes, lipid metabolism disorders and hypertension, but also readily leads to psychological problems such as low self-esteem and social difficulties. Hyperuricemia, previously thought to occur mainly in adults, is also increasing in children, especially obese children. The Bogalusa Heart Study in the United States found that the incidence of hyperuricemia in boys and girls of normal weight was 8.1% and 8.5%, respectively, whereas in obese children it increased to 24.6% and 23.9% [3].
At present, a large number of studies are exploring the relationship between serum uric acid and body mass index (BMI), glucose and lipid metabolism, and blood pressure in children [4][5][6][7]. However, BMI has limitations because it does not take body composition into account. Studies in adults have shown that blood uric acid levels are related to body composition and may be affected by fat mass (FM) and free fat mass (FFM), the major components of body weight [8,9]. However, no studies have explored the associations between FFM, FM and serum uric acid in obese children and adolescents. Therefore, this study explores the potential relationship between blood uric acid levels and FFM and FM, and the effect of serum uric acid on FFM and FM during weight loss in obese children and adolescents.

Participants

A total of 79 obese children and adolescents who participated in a 6-week weight-loss camp in Shanghai from July to August 2014 were selected as the research subjects. All participants and their parents signed written informed consent. None of the enrolled children had prior liver or kidney damage or a history of using drugs that affect uric acid. According to WHO standards, obesity is diagnosed when BMI equals or exceeds the 95th percentile (P95) for children of the same age and gender [10].

Interventions and methods

A 6-week weight-loss program was conducted for all participants under the guidance of professional weight-loss coaches and medical staff, and weight loss was carried out according to a unified exercise and diet program. Prior to the intervention, all subjects were required to undergo an exercise stress test to ensure safe and effective physical exercise. ① Sports programs: these mainly included ball sports such as basketball, table tennis and badminton, as well as aerobic sports such as jogging, brisk walking, power cycling and swimming. All sports were carried out indoors under the guidance of a full-time coach. Exercise was performed 6 days a week, once in the morning and once in the afternoon, for 2 h each time, with 15-20 min of preparatory activities before and after each session. During exercise, the participants' heart rate was monitored through a sports wristwatch to ensure that each participant performed small-to-medium load aerobic exercise. ② Diet plan: considering the needs of children's growth and development, the diet guaranteed the daily energy and physiological requirements; the basal metabolic rate (BMR) was calculated according to the Harris-Benedict formula to formulate the diet, which consisted of 20% protein, 30% fat and 50% carbohydrates. In the menu provided to the participants, we increased the amount of coarse grains and vegetables and reduced the intake of high-fat foods. All foods were cooked mainly by steaming, boiling or cold dressing with sauce, rather than frying. No subjects took any type of nutritional supplement during weight loss. In the fasting state, participants' serum uric acid (UA), fasting blood glucose (FBG), fasting insulin (FINS), triglyceride (TG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) were measured.
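The diet-planning arithmetic described above can be made explicit. The paper cites the Harris-Benedict formula without specifying the revision, so the sketch below uses the classic (1919) coefficients; the 20/30/50 energy split comes from the text, while the Atwater conversion factors (4, 9 and 4 kcal/g for protein, fat and carbohydrate) are a standard assumption not stated in the paper.

```python
# A sketch of the diet calculation: Harris-Benedict BMR (classic 1919
# coefficients -- the paper does not state which revision was used) plus the
# 20 % protein / 30 % fat / 50 % carbohydrate energy split described above.
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_y: float) -> float:
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_y
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_y

def daily_macros(energy_kcal: float) -> dict:
    # Atwater factors: 4 kcal/g protein, 9 kcal/g fat, 4 kcal/g carbohydrate
    return {
        "protein_g": 0.20 * energy_kcal / 4,
        "fat_g": 0.30 * energy_kcal / 9,
        "carbohydrate_g": 0.50 * energy_kcal / 4,
    }

bmr = harris_benedict_bmr("male", weight_kg=75.0, height_cm=160.0, age_y=13.0)
print(round(bmr), daily_macros(bmr))  # illustrative values, not study data
```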
All laboratory indicators were tested by the Aidikang Medical Testing Center: FBG was tested by the hexokinase (HK) method, FINS was detected by chemiluminescence, HDL-C and LDL-C were detected by homogeneous assay, TC was detected by the cholesterol oxidase method, TG was detected by the enzymatic GPO-POD method, and UA was analyzed using the uricase method. Homeostasis model assessment of insulin resistance (HOMA-IR) and of β-cell function (HOMA-β) were calculated as follows: HOMA-IR = FBG (mmol/L) × FINS (mIU/L)/22.5; HOMA-β = 20 × FINS (mIU/L)/[FBG (mmol/L) − 3.5] %. The relative change (Δ) = the value after intervention − the value before intervention; thus, in the expression of the results, a positive value indicates an increase and a negative value indicates a decrease.

Statistical analysis

SPSS V.25.0 statistical software was used for statistical processing. Normality of the parameters was assessed using the Kolmogorov-Smirnov test. For continuous variables, the independent-samples t test was used, with values expressed as mean ± standard deviation. For categorical variables, the chi-square test and percentages (%) were used. Non-normally distributed data were presented as median (P25, P75). First, univariate analysis was used to analyze the correlation between uric acid before the intervention (and the relative change in uric acid) and the other variables. Because children's uric acid is related to age and gender [11][12][13], age and gender were adjusted for when analyzing the correlation between uric acid and the various indicators. Second, to further analyze the correlation between the reduction of body fat and the baseline variables, participants were divided into two groups according to fat mass reduction during weight loss: a ΔFM < 10 kg group (n = 29) and a ΔFM ≥ 10 kg group (n = 23). The correlation between FM and other factors was assessed using Pearson correlation, and partial correlation analysis was used with adjustment for age and gender. Multiple linear regression analysis was conducted to further evaluate the impact of the associated variables. A two-sided P value < 0.05 was considered statistically significant.

General data and factors associated with uric acid before and after the intervention

Initially, a total of 79 obese children (52 boys and 27 girls) with a mean age of 13.13 ± 2.15 years (range 9-18 years) participated in the weight-loss camp. Before the intervention, correlation analysis demonstrated that serum uric acid was positively correlated with height, weight, BMI, WC, HC, FM, FFM, TC and TG (all P < 0.05, Table 1). Furthermore, after adjusting for age and gender, significant correlations remained between UA and BMI, HC, FM, FFM, TC and TG (P < 0.05, Table 1). However, only 52 children (34 boys and 18 girls), with a mean age of 13.21 ± 2.20 years (range 9-17 years), completed the 6-week weight-loss camp. After 6 weeks of intervention, the indicators were better than before the intervention. After adjusting for age and gender, UA was associated with BMI, HC and FFM (P < 0.05, Table 2), but not FM (P > 0.05, Table 2). As for the changes before and after the intervention, the participants gained 3.12 cm in height, body fat percentage decreased by 7.23%, and weight decreased by 10.30 kg (Table 3). Weight loss was dominated by a decrease in fat mass (9.92 kg), with only a small increase in free fat mass (0.28 kg) (Table 3).
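As a point of reference for the indices used throughout these results, the HOMA formulas and the relative-change convention defined in the methods translate directly into code; the example values are illustrative, not study data.

```python
# Direct transcription of the formulas given in the methods
# (FBG in mmol/L, FINS in mIU/L).
def homa_ir(fbg: float, fins: float) -> float:
    return fbg * fins / 22.5

def homa_beta(fbg: float, fins: float) -> float:
    return 20 * fins / (fbg - 3.5)  # expressed in %

def relative_change(after: float, before: float) -> float:
    return after - before  # positive = increase, negative = decrease

print(homa_ir(5.2, 18.0), homa_beta(5.2, 18.0), relative_change(4.9, 5.2))
```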
Correlation analysis indicated that serum uric acid before the intervention was negatively correlated with the relative changes in weight, BMI, TG, FM and BFP (all P < 0.05, Table 3). After adjusting for age and gender, serum uric acid remained negatively correlated with the relative changes in weight, TG, FM and BFP (all P < 0.05, Table 3), although the correlation coefficients decreased.

Table 3. Correlation between uric acid at baseline and relative changes of variables during weight loss.

General data and factors at baseline associated with fat mass loss

The ΔFM ≥ 10 kg group (13.83 ± 1.58 years) was older than the ΔFM < 10 kg group (12.72 ± 2.18 years), and its serum uric acid level (532.91 ± 98.50 vs 424.66 ± 88.53 µmol/L) was also higher (both P < 0.05, Table 4). Compared with the ΔFM < 10 kg group, the ΔFM ≥ 10 kg group had significantly higher height, weight, BMI, WC, HC, WHR, WHtR, FM, FFM and BFR before the intervention, and had worse glucose metabolism (all P < 0.05, Table 4). Furthermore, there were significant differences in FINS, TG, UA, SBP and DBP before the intervention between the two FM groups (all P < 0.05, Table 4). However, multiple linear regression analysis showed that only UA before the intervention was independently associated with the reduction of FM during weight loss (P < 0.05, Table 5).

Discussion

In this study, we investigated the correlation between serum uric acid and body fat before and after weight loss in obese children and adolescents. The results showed that a combination of aerobic exercise and appropriate caloric control can help reduce weight, especially FM. Importantly, we found that UA was associated with BMI, FM and FFM after adjusting for age and gender, and that it played an important role in weight loss. To date, research on weight-loss methods roughly covers dieting, exercise, meal replacement, drugs and bariatric surgery. Among these, exercise combined with dietary intervention is considered the most scientific and reasonable way to lose weight [14,15]. In terms of exercise methods, aerobic exercise is characterized by activities that are fun and of low exercise-load intensity, which makes it more suitable for children. In addition, when aerobic exercise lasts longer than 30 min, fat decomposition efficiency in the body increases and fat functions as the main energy-supply material, thereby achieving the effect of weight loss [16]. Aerobic exercise not only helps to reduce fat and retain muscle [12], but also improves glucose and lipid metabolism [17,18]. Likewise, our results showed that after the 6-week weight-loss intervention, participants' weight and body fat were significantly reduced, while muscle mass increased slightly. The average weight loss in these obese children was 10.30 kg (12.07%), of which 9.92 kg was FM, which is more pronounced than in many other studies. Globally, the incidence of hyperuricemia is estimated to be about 2.0-3.1%, and it is higher in men than in women [19][20][21][22]. The increasing incidence of childhood obesity is one reason for the increasing incidence of hyperuricemia in obese children [23]. Importantly, a 21-year follow-up study found that obese children were 3.25 times (males) and 3.55 times (females) more likely to develop hyperuricemia than individuals of normal weight [3].
However, since the normal range of uric acid depends on age and gender [11,12], the cut-off points for diagnosing hyperuricemia in children and adolescents also differ, which complicates the diagnosis of hyperuricemia for pediatricians. Previous studies have shown that elevated uric acid levels are associated with obesity, metabolic syndrome, hypertension and disorders of glucose and lipid metabolism in childhood [24,25]. In addition, studies in adults have indicated that hyperuricemia is positively associated not only with BMI, WC and body fat, but also with muscle mass [9,26]. Surprisingly, our results showed that uric acid in obese children and adolescents was not only positively related to BMI, FM and FFM, but also to the decrease in FM during weight loss. To the best of our knowledge, this result is reported here for the first time and is a highlight of our research. However, the pathophysiological mechanism underlying hyperuricemia in obese children has not been clearly established, and the mechanism of the effect of UA on body fat is also not yet fully understood. A previous study indicated that hyperuricemia induces inflammation of fat cells and oxidative stress, and that fat-cell inflammation and oxidative stress are key mechanisms by which mice develop obesity and metabolic syndrome [27]. In addition, regarding the association between blood uric acid and TC and TG, a significant relationship was found in this study after adjusting for age and gender both before and after the intervention. This might imply that high uric acid affects the redistribution of adipose tissue by inducing abnormal lipid metabolism. In light of these findings, close attention should be paid to blood uric acid levels during weight loss in obese children, as uric acid may be a sensitive and useful indicator. However, for obese children diagnosed with hyperuricemia, whether uric acid-lowering treatment is needed before weight loss should be considered and further researched. It is worth mentioning that our research found that a reasonable diet and exercise can reduce uric acid by 53 µmol/L. Of course, the limitations of this study should be considered. First, the sample size is small, making it difficult to elaborate the mechanism of uric acid's effect on body fat. Second, clinical data were only monitored and evaluated before and after the intervention. Finally, this study did not follow up on the effects of aerobic exercise combined with dietary weight loss on the children's long-term quality of life and health status.

Conclusions

In conclusion, reasonable and professionally supervised aerobic exercise combined with proper diet control is of great significance for the reduction of body fat and the retention of muscle in obese children. We found a positive correlation between uric acid and body fat, yet the higher the serum uric acid level before the intervention, the greater the reduction in body fat. Given these as-yet unexplained results, it is worth studying what range of serum uric acid is beneficial for the reduction of body fat at different ages and in each gender. Therefore, future research should further explore the exact mechanism of the effect of uric acid on body fat through basic and clinical studies with larger sample sizes.
Statistical analysis of synonymous and stop codons in pseudo-random and real sequences as a function of GC content

Knowledge of the frequencies of synonymous triplets in protein-coding and non-coding DNA stretches can be used in gene finding. These frequencies depend on the GC content of the genome or parts of it. An example of interest is provided by stop codons, which are relevant for the definition of Open Reading Frames (ORFs). A generic case is provided by pseudo-random sequences, especially when they code for complex proteins or when they are non-coding and not subject to selection pressure. Here, we calculate, for such sequences and for all 25 known genetic codes, the frequency of each amino acid and stop codon based on its set of codons and as a function of GC content. The amino acids can be classified into five groups according to the GC content at which their expected frequency reaches its maximum. We determine the overall Shannon information based on groups of synonymous codons and show that it becomes maximal at a GC content of 43.3% (for the standard code). This is in line with the observation that in most fungi, plants, and animals, this genomic parameter lies in the range from 35 to 50%. By analysing natural sequences, we show that there is a clear bias for triplets corresponding to stop codons near the 5′- and 3′-splice sites in the introns of various clades.

Interestingly, many non-coding sequences such as intergenic regions and constitutively spliced introns can also be considered pseudo-random if they are subject to low selection pressure. In this context, it is worth mentioning that several lines of evidence suggest that novel genes (or their precursors, sometimes called protogenes) can emerge also from non-coding regions [14-16]. Since start and end signals for transcription as well as splice sites are rather short, they are expected to occur frequently even in random sequences [14]. This facilitates de novo emergence of genes. Tautz and coworkers [15] expressed clones with synthetically generated random sequences (as equimolar mixes of A, C, G and T) in Escherichia coli and showed that transcribed and translated random sequences can indeed have a high potential to become functional. In view of all the above-mentioned observations, we consider it useful to analyse highly complex sequences, which we assume to be quasi-random.

Several decades ago, Temple Smith [17] (especially well known for the Smith-Waterman algorithm) calculated, for the standard genetic code (SGCode), the frequency of each amino acid based on its set of codons and as a function of GC content, and determined the inherent Shannon information [18,19] of this amino acid frequency distribution. Furthermore, he calculated the GC content at which the Shannon information has its maximum. Hasegawa and Yano [20] extended this work by considering stationary second-order Markov chains. Mir et al. [3] introduced a geometric model for evaluating several genome statistics in bacteria, such as ORF number and length distribution, in dependence on codon usage and GC content. For the special case of stop codons, we have previously presented a statistical analysis of codon distribution in dependence on GC content [2].
Here, we perform the above-mentioned statistical analyses [2,4,17] in more detail and extend them by considering 25 genetic codes. Although the SGCode and its codon assignments are predominantly used in almost all life forms [21,22], variations exist, for example, in some archaea, eubacteria (especially those with small genomes), yeasts as well as mitochondria and several types of plastids [23,24]. As of August 2022, the National Center for Biotechnology Information (NCBI) catalogued 24 alternative codes [25].

In particular, we determine several features in dependence on GC content, because that parameter differs from 50% in many genomes and the evolution of de novo genes depends on it [16]. Our analysis is aimed at two main applications: calculating the variability of proteins (expressed by Shannon's information) and determining the frequency of translation termination codons. We show where the frequency functions reach their maxima, that is, for which GC content a given amino acid would occur most often in pseudo-random sequences. We also calculate, using Shannon's entropy equation, how much information is contained in such sequences for each genetic code in dependence on GC content. In doing so, we consider the different codon numbers of the different amino acids and the stop signal. Therefore, the information content differs from what would be obtained by just considering the distribution of nucleotides. Additionally, we analyse the GC contents of the genomes of archaea, eubacteria, fungi, plants, protozoa, invertebrates, vertebrates, and viruses and compare them with the calculated GC content at maximum information.

The second, related goal of our paper concerns the distribution of stop codons. As mentioned above, in protein-coding sequences, those triplets occur less often than expected by chance. Thus, in the SGCode, for a GC content of 50%, a termination codon will appear less often than at every 64/3 ≈ 21st triplet [2,3]. Accordingly, de novo genes should emerge more frequently in genomic regions with elevated GC content because these tend to involve fewer AT-rich stop codons [16]. For our analysis, it is important that this average distance depends not only on GC content but also on the genetic code used. The thraustochytrium mitochondrial code, for example, includes an additional stop codon, UUA, so that at 50% GC content every 16th codon would encode termination purely by chance. In the alternative flatworm mitochondrial code, there is only the 'amber' triplet UAG, which would occur purely by chance at every 64th triplet. In the first part of our study, in which we analyse pseudo-random sequences, we neglect the property of stop codons to occur less often in protein-coding sequences than expected by chance.

Stop signals are relevant in the definition of ORFs. In their most basic definition, ORFs are nucleotide sequences that are enclosed by a start and a stop codon, whose lengths are divisible by three and that do not contain any other stop codons in between [3,28,29]. While this definition is sufficient as a first step for gene finding in prokaryotes [2,3], it often fails to be applicable in eukaryotes due to the presence of introns [28,30]. Most introns contain sequences that would be stop codons if they were in a coding region and/or cause shifts between reading frames. Henceforth, we use the term stop signal for the general case in which it is not yet clear whether or not the sequence codes for a protein.
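The 'every Nth triplet' figures quoted above follow from simple codon counting at 50% GC, where all 64 triplets are equally likely; a minimal check:

```python
# At 50 % GC every triplet has probability 1/64, so the mean spacing between
# stop signals is 64 divided by the number of stop codons in the code.
stop_codons = {
    "standard": ("UAA", "UAG", "UGA"),
    "thraustochytrium mitochondrial": ("UAA", "UAG", "UGA", "UUA"),  # extra UUA
    "alternative flatworm mitochondrial": ("UAG",),                  # amber only
}
for code, stops in stop_codons.items():
    print(f"{code}: one stop per {64 / len(stops):.1f} triplets on average")
```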
A further problem with the traditional ORF definition is the occurrence of alternative start codons [28,31]. A third problem is that the 5′ and 3′ untranslated regions are part of the gene and transcript while not being included in the start-to-stop stretch [32]. For all of these reasons, an alternative ORF definition is often used, especially in gene-finding software, saying that an ORF is delimited by two consecutive stop codons [4,28,33]. Extending our analysis from pseudo-random sequences to natural genomes, we here investigate, by empirical analysis, the stop signal distribution in introns of hundreds of genomes from several kingdoms of life. We compare those results to the predicted distribution for a given GC content. This is relevant for the question as to how far a predicted ORF according to the alternative definition extends into an intron, although mainly exonic sequences are searched for in gene finding.

Genetic codes

The mapping tables of the 25 known genetic codes are taken from the NCBI genetic code databank [25].

Frequencies of amino acids and stop signal in pseudo-random sequences

We determine the frequency of each amino acid according to the equations presented by Smith [17], Pohl et al. [2] and Mir et al. [3]. Since we consider pseudo-random sequences, our calculations are independent of the reading frame. According to Chargaff's second parity rule, the frequencies of the complementary bases in each strand are (almost) equal, that is, P(A) ≈ P(U) and P(G) ≈ P(C) [34-36]. Therefore, the frequency of each base depends only on the GC content, denoted here by g:

P_A(g) = P_U(g) = (1 − g)/2,   (1)

P_G(g) = P_C(g) = g/2.   (2)

For pseudo-random sequences, statistical independence of the nucleotide positions can be assumed. Thus, the probability of a codon can be calculated by multiplying the frequencies of the bases in the triplet. For example, the frequency of the 'amber' triplet UAG is as follows:

P_UAG(g) = P_U(g) · P_A(g) · P_G(g) = ((1 − g)/2)² · (g/2).   (3)

The expected frequency of an amino acid is calculated by summing the probabilities of the codons by which it is encoded. In the analysis of pseudo-random sequences, we neglect that stop codons usually occur less often in protein-coding sequences than expected by chance. Thus, for the SGCode, the expected frequency of the stop signal is as follows [2]:

P_SGCode,Stop(g) = P_UAA(g) + P_UAG(g) + P_UGA(g).   (4)

This is done analogously for all canonical amino acids and genetic codes by calculating the frequencies for all GC contents (Supplement S4).

Each codon-to-amino-acid assignment is usually unique. In our calculations, we take into account that in some alternative codes, codon assignment is non-unique for some canonical amino acids or for translation stop. For example, in the ascidian mitochondrial code, the codons AGA and AGG can code for glycine, arginine or serine. In the mitochondrial genome of Halocynthia roretzi, which uses that code, the tRNA with the anticodon UCU encodes glycine when the first uracil is a 5-carboxymethylaminomethyl-uridine (cmnm5U) [26,27]. In such cases, the codon frequency is split evenly between the respective signals for simplicity's sake. For the example mentioned above, AGA and AGG are assigned by 1/3 to each of the amino acids glycine, arginine and serine. All cases of non-unique assignments are outlined in Supplement S5.
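Eqs (1)-(4) are straightforward to implement; the sketch below reproduces the 1/64 probability of the amber triplet and the 3/64 stop-signal frequency of the SGCode at 50% GC.

```python
# Minimal implementation of Eqs (1)-(4): base frequencies from the GC
# content g (Chargaff's second parity rule), codon probabilities by
# positional independence, and the SGCode stop-signal frequency.
def base_freq(base: str, g: float) -> float:
    return g / 2 if base in "GC" else (1 - g) / 2  # A/U share 1-g, G/C share g

def codon_prob(codon: str, g: float) -> float:
    p = 1.0
    for base in codon:
        p *= base_freq(base, g)
    return p

def stop_freq_sgcode(g: float) -> float:
    return sum(codon_prob(c, g) for c in ("UAA", "UAG", "UGA"))

assert abs(codon_prob("UAG", 0.5) - 1 / 64) < 1e-12   # Eq. (3) at g = 0.5
assert abs(stop_freq_sgcode(0.5) - 3 / 64) < 1e-12    # Eq. (4) at g = 0.5
print(stop_freq_sgcode(0.148), stop_freq_sgcode(0.703))
```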
Shannon's entropy of genetic codes

Finally, we calculate the inherent information content of each code, given the frequencies of each amino acid and the stop signal as a function of GC content, using Shannon's entropy equation [18]:

H(g) = −Σ_{i=1}^{n} p_i · log₂(p_i),   (5)

where n is the number of all signals and p_i is the frequency of amino acid i or the stop signal in pseudo-random sequences based on its codon number. In addition, for each genetic code, we numerically calculate at which GC content the maximum entropy is reached. This is in accordance with an optimality principle saying that complex proteins should have as much variability as possible (measured by Shannon's information).

Impact on ORF definition

The probability of the absence of a stop codon in a stretch of c triplets for a given GC content is calculated starting from any given point [2] (see also [37]). Thus, the probability that a sequence of c triplets involves at least one stop signal for a given GC content is as follows (for the SGCode):

P(c, g) = 1 − (1 − P_SGCode,Stop(g))^c,   (6)

where P_SGCode,Stop(g) is given by Eq. (4). To obtain the minimum required triplet length for a given sequence probability with at least one stop codon, we solve Eq. (6) for c:

c = ln(1 − P) / ln(1 − P_SGCode,Stop(g)).   (7)

We perform the same calculations for all other genetic codes and for all proteinogenic amino acids (Supplement S6).

Genome data

The genome files in Fasta format and the genetic information files in GFF format of the genomes of all archaea, bacteria, fungi, plants, protozoa, invertebrates, vertebrates, and viruses currently available in the NCBI Genome RefSeq database were used.

Intron stop signal distribution

By data mining in the above-mentioned genomes, we examine the relative frequency of stop signals per triplet in all three frames of all introns near both splice sites (for an intron of length n → 5′-splice site: positions 1 to 3, 2 to 4 and 3 to 5; 3′-splice site: positions n − 4 to n − 2, n − 3 to n − 1 and n − 2 to n) as well as in the in-between (non-splice-site) intron sequence. For the 5′- and 3′-splice sites of each intron, the stop signal frequencies are calculated by counting the number of stop codons in the three frames divided by the overall number of introns for that organism. For the intermediate sequences, the number of initial nucleotide positions is determined for each reading frame in all introns. For example, for two hypothetical introns of lengths five and seven, we can write {0, 1, 2, 0, 1} and {0, 1, 2, 0, 1, 2, 0}, where the numbers indicate the three reading frames. We can ignore the last two positions in each intron because they cannot form triplets. Next, we count the number of stop signals over all introns, for each frame separately. These counts are divided by the overall number of triplets (over all introns together) in the respective frame. In the above example, that number equals three for frame zero (one coming from the first intron and two from the second), three for frame one and two for frame two. To be able to define the three intron regions sufficiently (near the 5′-splice site, near the 3′-splice site and the intermediate sequence), all introns with lengths less than 100 nt were removed for this analysis (Table 1).
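The counting scheme just described can be sketched for a single intron as follows; this is an illustration of the procedure, not the authors' pipeline, and it assumes DNA-alphabet introns and SGCode stop triplets.

```python
# Sketch of the per-intron census: stop triplets at the three 5'-flank
# positions (1-3, 2-4, 3-5), the three 3'-flank positions (n-4..n-2,
# n-3..n-1, n-2..n) and frame-wise frequencies over positions 6 to n-5.
STOPS = {"TAA", "TAG", "TGA"}

def intron_stop_census(intron: str) -> dict:
    n = len(intron)
    five_flank = [intron[i:i + 3] in STOPS for i in range(3)]
    three_flank = [intron[n - 5 + i:n - 2 + i] in STOPS for i in range(3)]
    mid = intron[5:n - 5]
    hits, triplets = [0, 0, 0], [0, 0, 0]
    for i in range(len(mid) - 2):       # every start position in the middle
        triplets[i % 3] += 1
        if mid[i:i + 3] in STOPS:
            hits[i % 3] += 1
    return {"5_flank": five_flank, "3_flank": three_flank,
            "mid_freq_per_frame": [h / t for h, t in zip(hits, triplets)]}

print(intron_stop_census("GTAAGT" + "ACGACGTTAGCA" * 10 + "TTTCTAG"))
```

In the full analysis, the flank booleans would be summed over all introns of an organism and divided by the intron count, as described above.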
Results

In the Results section, we focus on the amino acid (and stop codon) frequencies of the SGCode. The results for the 24 alternative codes are shown in Supplement S1.

Amino acid and stop codon frequencies for the standard genetic code

While Pohl et al. [2] only calculated the frequency of stop codons (as a function of GC content), Smith [17] did so for all amino acids. However, he only showed the calculation for phenylalanine explicitly. Here, we show the calculations, by way of example, for the amino acids serine and lysine in the SGCode:

P_SGCode,Ser(g) = P_UCU(g) + P_UCC(g) + P_UCA(g) + P_UCG(g) + P_AGU(g) + P_AGC(g) = 3g(1 − g)/8,   (8)

P_SGCode,Lys(g) = P_AAA(g) + P_AAG(g) = (1 − g)²/8.   (9)

The equations for the remaining amino acids of the SGCode can be found in Table 2. Note that four of the formulas are cubic functions, which is understandable because the frequencies of three nucleotides are multiplied. The remaining formulas, however, are quadratic functions because two cubic terms cancel each other (see Eq. (9)). Importantly, all amino acids for which the functions are quadratic are encoded by an even number of codons.

It is of interest to see where these functions reach their maxima. In the SGCode, the maximum is reached at five different positions for different amino acids, notably at GC contents of 0%, 33.33%, 50%, 66.67% and 100% (Fig. 1). The maxima in the interior of the admissible interval, at 33.33% and 66.67%, correspond to methionine and tryptophan, respectively. Both amino acids have only one codon. This means, for example, that in a random sequence, methionine (and, thus, the start codon) occurs with its highest frequency of 1.85% at a GC content of 33.33% (in contrast to 1/64 ≈ 1.56% at a GC content of 50%).

Table 2. Frequency equations for each amino acid (including the stop signal) in random sequences for the SGCode, their maximum frequencies, and the GC contents at which these are reached.

At 0% GC content, obviously, only those amino acids encoded by at least one codon involving only A and/or U, that is, asparagine, isoleucine, leucine, lysine, phenylalanine, tyrosine, and the stop signal, can occur. Their frequency then is 12.5%, except for isoleucine with 25%. Isoleucine is the only amino acid encoded by three different codons: the purely AU codons AUA and AUU as well as AUC. Aspartic acid, cysteine, glutamic acid, glutamine, histidine, serine, threonine, and valine reach their maximum at 50% GC content. Threonine and valine then have a frequency of 6.25% (encoded by four codons each), serine has 9.375% (encoded by six codons), and the remaining amino acids (encoded by two codons each) have 3.125%. Finally, at 100% GC content, only alanine, arginine, glycine, and proline can occur, each with a frequency of 25%. They are all encoded by four codons (two of which are pure GC codons), except arginine, which is encoded by six.

For gene finding and for the stop-to-stop ORF definition, the frequency of stop codons is of interest. As mentioned above, at 50% GC content, on average every 64/3 ≈ 21st codon in a random sequence would be a termination codon by chance alone. However, the distance fluctuates around this average value according to a monotonically decreasing exponential distribution with respect to the distance [4]. The curve of the function given in Eq. (7) is shown in Fig. 2. For GC contents tending to 100%, stop signals occur less and less often. Mathematically, the cubic polynomial in the denominator in Eq. (7) then tends to zero, so that the argument of the logarithm tends to one and the reciprocal of the logarithm diverges. Thus, the curve grows very steeply near 100% GC. We can calculate the minimum sequence length such that at least one stop codon occurs with a probability of 95%. At 50% GC content and with the SGCode, a length of 63 triplets (189 nt) is obtained (Fig. 2).
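Eq. (7) can be checked numerically; the sketch below reproduces the 63-triplet value for the SGCode at 50% GC, as well as the 47-triplet value quoted below for the two codes with the additional UUA stop.

```python
# Eq. (7) in code: the smallest number of triplets c such that a random
# sequence contains at least one stop signal with probability p_target.
import math

def min_triplets(p_stop: float, p_target: float = 0.95) -> int:
    return math.ceil(math.log(1 - p_target) / math.log(1 - p_stop))

print(min_triplets(3 / 64))  # SGCode at g = 0.5             -> 63 triplets
print(min_triplets(4 / 64))  # codes with the extra UUA stop -> 47 triplets
```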
Since the average length of introns in the human genome, for example, equals about 1806 triplets (5419 nt) [38,39] and is thus much longer than 63 triplets, introns practically always contain a stop signal in any reading frame. For the alternative codes, the values at a GC content of 50% are given in Supplement S1. The lowest length, 47 triplets, is calculated for the thraustochytrium mitochondrial and vertebrate mitochondrial genetic codes. These are the only two codes where the stop signal is encoded by an additional triplet (compared to the SGCode), namely UUA, which reduces the required length. This is in contrast to the karyorelict nuclear genetic code, where the stop signal is encoded only by the 'opal' triplet UGA, which can also be translated as tryptophan. In this case, the highest length, 382 triplets, is obtained. Besides those two, four additional lengths are calculated (see Supplement S1).

Figure 2. Number of triplets in a random sequence so as to contain at least one stop codon with a probability of 95%, using the SGCode, as a function of GC content. The horizontal and vertical dashed lines indicate the number of triplets for GC contents of 14.8% (28 triplets) as found in the protozoon Leishmania braziliensis, 50% (63 triplets) as found in E. coli, and 70.3% (159 triplets) as found in the slime mold Fonticula alba.

Maximum potential information at around 43% GC

Using the expected amino acid (including stop signal) frequencies of the various genetic codes as input for Shannon's entropy, we determined their potential information content and at what GC content the codes reach their maximum entropies (Fig. 3). The optimal GC content for the SGCode is 43.3%. The entropy value then amounts to 4.24 bits, which is near the maximum possible value of log₂(20) ≈ 4.32 bits achieved upon equal distribution of amino acids. For the alternative codes, the values are given in Supplement S2. The lowest GC content implying maximum information is found for the yeast mitochondrial code with 38.11% (the only code for which the optimum is reached at a GC content below 40%), while the highest is found for the alternative flatworm mitochondrial genetic code with 45.61%. Note that at 100% GC, the Shannon entropy equals two bits for all genetic codes because only the four amino acids alanine, arginine, glycine, and proline can then be encoded and are equally distributed. On the other hand, at 0% GC content, the entropies lie between 2.25 bits (for the alternative flatworm mitochondrial genetic code) and 3 bits (for the ascidian mitochondrial, invertebrate mitochondrial, vertebrate mitochondrial and yeast mitochondrial genetic codes).

GC contents of fungi, plants and metazoa cluster around 40%

Looking at the distribution of genomic GC contents across the clades, it can be seen that in complex organisms, notably fungi, plants, invertebrates, and vertebrates, the genomic GC contents lie mainly in the range from 35 to 50% (Fig. 4). Especially in non-mammalian and mammalian vertebrates, around 44.6% and 65.6% of genomes, respectively, have a GC content between 40 and 45%, which coincides with the maximum information content obtained for the SGCode. In contrast, GC contents in the genomes of less complex organisms, notably archaea, eubacteria, protozoa, and viruses, are distributed across a GC range from 10 to 70%. Extreme cases are the protozoon Leishmania braziliensis with a GC content of 14.8% and the slime mold Fonticula alba with a GC content of 70.3%. Fewer than 50% of the genomes of lower organisms have GC contents between 35 and 50%, except for viral genomes.
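The entropy optimum reported above for the SGCode can be reproduced with a short grid search. The sketch below encodes the standard code compactly (DNA alphabet, TCAG codon-table order, '*' for the stop signal) and, per the values reported here, should recover a maximum of roughly 4.24 bits near 43% GC.

```python
# Shannon entropy (Eq. 5) over the 20 amino acids plus the stop signal,
# with signal frequencies from Eqs (1)-(4), maximized over GC content g.
import math

BASES = "TCAG"  # DNA alphabet, standard codon-table order
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TO_AA = {a + b + c: AA[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def base_freq(base: str, g: float) -> float:
    return g / 2 if base in "GC" else (1 - g) / 2

def signal_freqs(g: float) -> dict:
    freqs = {}
    for codon, aa in CODON_TO_AA.items():
        p = base_freq(codon[0], g) * base_freq(codon[1], g) * base_freq(codon[2], g)
        freqs[aa] = freqs.get(aa, 0.0) + p
    return freqs  # 21 signals: 20 amino acids + '*' (stop)

def shannon_bits(g: float) -> float:
    return -sum(p * math.log2(p) for p in signal_freqs(g).values() if p > 0)

best_g = max((i / 1000 for i in range(1, 1000)), key=shannon_bits)
print(best_g, shannon_bits(best_g))  # expected: ~0.433 and ~4.24 bits
```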
5′- and 3′-splice sites are biased for stop signals

In view of the stop-to-stop definition of ORFs, we examined the stop signal frequencies in introns of fungi, plants, protozoa, invertebrates, and (mammalian and non-mammalian) vertebrates. All six groups of organisms show very similar results (Fig. 5). There is a clear bias in introns near the 5′- and 3′-splice sites (i.e., donor and acceptor splice sites, respectively) for the occurrence of a stop signal. In the genomes of invertebrates, non-mammalian vertebrates, and mammalian vertebrates, over 60% of introns contain a stop signal at nucleotide positions 2-4 downstream of the 5′-splice site (i.e., at the next triplet position in frame 1). In introns of fungi, plants and protozoa, such triplets are also enriched at the same position but at lower frequencies. Near the 3′-splice sites of introns of plants, invertebrates, non-mammalian vertebrates, and mammalian vertebrates, stop signals appear in frame 2 with frequencies between 20 and 30%. The frequencies are considerably higher in protozoan and fungal introns, at 39.4% and 38.8%, respectively. This finding corroborates the suitability of the stop-to-stop ORF definition. For the in-between sequences, the stop signal frequencies determined by data mining range from around 4% to around 6% per triplet. The probabilities calculated from the average GC content over all intermediate sequences and the SGCode range from 5.2% (mammals) to 7.5% (invertebrates).

Discussion

Here, we have calculated the frequencies of all groups of synonymous codons in pseudo-random sequences in dependence on GC content. We neglected any codon bias apart from that resulting from varying GC content. Following earlier approaches [4,17], we use pseudo-random sequences as a proxy for highly complex DNA sequences, such as those encoding enzymes or regulatory proteins (coding) or introns (non-coding). It should be noted, however, that a random sequence need not have maximum complexity (i.e., Kolmogorov complexity) [40]: a long random sequence can contain a repeat like AAAA, while this cannot occur in a maximally complex sequence because it can be compressed to 4A.

In our calculations, we used Chargaff's second parity rule, saying that the frequencies of G and C are equal in each strand, and so are those of A and T. However, this rule is not fulfilled in mitochondria, plastids, single-stranded viral DNA genomes and (single- or double-stranded) viral RNA genomes [41,42]. Therefore, that parity rule may not be valid for all alternative genetic codes. For simplicity's sake, we ignored this feature here.
Based on the calculated frequencies, we have determined the potential information entropies. In the Shannon formula, we used the overall frequencies of amino acids (summed over their synonymous codons). It is worth mentioning that the formula used by Zeeberg [11] differs in that a double sum over amino acids and over synonymous codons was used, which implies that the Shannon information is calculated on the basis of the frequencies of all codons. The mathematical difference is that the logarithm is taken of the frequencies of the different amino acids in our approach (tracing back to [17]) and of the frequencies of the different codons in the latter approach. Therefore, the maxima are reached at different GC contents. In our calculations, the entropies reach their maxima between GC contents of about 38% and 46%. The GC contents of several mammals, birds and reptiles using the SGCode are indeed between 40 and 50% [43]. For example, the GC content of the human genome is 40.9% [44] and is, therefore, only about 2% below the optimal value for the SGCode.

An interesting outcome is that the optimal GC contents do not differ considerably between genetic codes. Moreover, although the amino acids are not equally distributed, the maximum information content is very close to the maximum possible value of 4.32 bits, which would be achieved in the case of equipartition. Importantly, the region around the maximum entropy (at the amino acid level) is relatively flat for all genetic codes. For example, the calculated information in the SGCode for the plant Arabidopsis thaliana and the green alga Chlamydomonas reinhardtii, with GC contents of 36% and 64% [45], respectively, is still high, notably about 4.11 bits. A similar pattern can be seen for all the other genetic codes. Even at a GC content as low as 28%, none of the entropies of any genetic code fall below 4 bits. Due to the flat shape of the maxima, genetic codes allow some flexibility in the nucleotide composition of genomes while still providing high information encoding.

In addition to sorting out GC contents from the literature, we extracted such values from all the genomes in the NCBI Genome RefSeq database. Thus, we were able to show that in complex organisms, genomic GC contents cluster in the region where the SGCode reaches its maximum information content, namely in the range of 35% to 50% GC. These findings support the hypothesis put forward here that evolution has optimized GC content to maximize the variability of amino acid sequences.

However, it is unclear whether the GC content and the nucleotide structure of a genome have been adapted during evolution mainly to encode as much information as possible or whether other mechanisms play key roles. It is worth noting that there are species with GC contents lower than 20% or greater than 70%. For example, the values in bacteria can range from as low as 17% (Carsonella ruddii) to as high as 74% (Anaeromyxobacter dehalogenans) [46,47]. Low GC contents can be explained by GC-to-AT transitions due to methylation of cytosine and subsequent deamination to thymine. This has been shown to be one of the most common mutations in both prokaryotes and eukaryotes [48-50]. However, in many genes, this is counteracted by biased gene conversion, leading, on average, to a higher GC content than in non-coding regions [51,52]. In general, regions with high GC content are associated with increased transcription [53].
A further cause of GC drift may be related to viral defence mechanisms. Bacteria are able to discriminate between their own and foreign DNA based on differences in GC content 54. It has also been shown that bacteriophages mimic the GC contents of their hosts to evade this mechanism, whereas the same could not be seen for viruses that do not infect bacteria 55. For example, the GC contents of vertebrate viruses can range from 33 to 70% 56. At the same time, in our viral dataset, the majority of viruses have a GC content of 40-45%, which also coincides with the GC content of most vertebrates.

An important point is that different amino acids imply different metabolic costs in their synthesis (in terms of ATP and carbon). These costs can be computed by metabolic network analyses 57,58. A compromise needs to be found between maximum variability and minimum costs. Interestingly, there is an analogy to thermodynamics in that the minimization of free energy also implies a trade-off between maximum entropy and minimum energy 59. This factor is implicitly included in our analysis through the different codon numbers of the amino acids. Amino acids such as tryptophan and tyrosine that are "costly" in terms of carbon and energy have lower codon numbers and, hence, occur less frequently in proteomes than "cheap" amino acids such as glycine and alanine. A correlation between the metabolic costs of amino acids and codon bias has been found 58. In particular, it can be hypothesized that the factors influencing the number of codons during the evolution of genetic codes 60,61 include the metabolic costs of amino acids. It would be interesting to consider these costs more explicitly in future studies.

As a second application, we analysed the frequency of stop signals. Considering ORFs with a minimum length of 100 triplets, Pohl et al. 2 showed that, at a significance level of p = 0.05, random and non-random distributions of stop signals can be distinguished below a GC content of 61.8%. Here, we have calculated that in pseudo-random sequences, stop triplets occur often enough at those GC contents that any intron (in the typical length range) is very likely to contain at least one of them in any reading frame. This supports the ORF definition in terms of stop-to-stop 28. As mentioned in the introduction, Pohl et al. 2 used their method to search in prokaryotes, since their genomes do not contain any introns and, therefore, splicing is not an issue. It is worth noting that, using the stop-to-stop definition, the method is also applicable to eukaryotic genomes.

To compare our statistical analysis concerning the occurrence of stop signals with real sequence data, we performed data mining and looked at the distribution in the intron sequences of six clades. Although splicing and subsequent frameshifts pose a problem, we are able to show that there is a bias towards stop signals near the 5′- and 3′-splice sites. At the same time, the frequencies in the other frames and in the intermediate sequences clearly show a depletion of stop codons. This increases the applicability of the stop-to-stop definition of ORFs even more.
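The claim that a typical intron almost surely contains a stop triplet in every frame follows from a one-line calculation: if each triplet is a stop with probability p, a frame spanning m triplets lacks a stop with probability (1 − p)^m. A small sketch with illustrative numbers (ours, not from the study):

```python
def p_no_stop(p_stop: float, n_triplets: int) -> float:
    """Probability that none of n independent triplets is a stop signal."""
    return (1.0 - p_stop) ** n_triplets

# A 1000-nt intron spans roughly 333 triplets per reading frame; with a
# ~5% per-triplet stop probability, a stop-free frame is essentially
# impossible (probability ~ 4e-8).
print(p_no_stop(0.05, 333))
```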
Our results are further supported by the fact that a very common splice-site motif in introns is GT…AG 62. The thymine in the 5′-splice site is often followed by an adenine or guanine, which gives the canonical GTR motif 63. Since two of the three stop codons are TAA and TGA, two of the three required nucleotides are already provided by the 5′-splice-site motif. Thus, there is a considerable probability that a stop signal is formed by the triplet starting at the second nucleotide of the intron sequence just by chance alone. At the 3′-splice site, the adenine is often preceded by a cytosine or thymine, which gives the canonical YAG motif 64. Similar to the 5′-splice-site motif, there is a considerable probability that the YAG motif forms the remaining stop codon, TAG, in the last three nucleotides of the intron sequence just by chance alone. Overall, this fact can potentially be used in gene finding to 'hop' from exon to exon by following consecutive stop codons: the first one upstream of an exon (i.e., at the end of the preceding intron or in the 5′ UTR) and the next one at the beginning of the following intron, or the canonical termination codon of the final exon.

An interesting extension of our analysis would be to take into account that, in many species including humans, the GC content varies considerably along the genome. Moreover, simulating the dynamics of approaching the distribution of synonymous codons at a given GC content is an interesting topic for future studies. In addition to gene finding, our results may be relevant for applications in synthetic biology. For example, when synthetic genomes are constructed 65,66, it is advantageous to optimize the GC content so as to maximize their inherent information (in the sense of variability) or to enrich specific amino acids of interest.

Figure 1. Frequencies of all amino acids (including the stop signal) as encoded by the SGCode in random sequences as a function of GC content between 0 and 100%. For better visibility, the 20 amino acids and the stop signal were grouped into four sets. The dashed lines mark the maximum achieved frequency for each group of synonymous codons.

Figure 4. Percentages of organisms in the clades archaea, bacteria, protozoa, fungi, plants, invertebrates, non-mammalian vertebrates, mammalian vertebrates, and viruses with given genomic GC contents binned in 5% intervals. For the numerical data, see Supplement S3.

Figure 5. Frequencies of stop signals in the first and last three triplets as well as the remaining sequence for all three frames, derived from the introns of genomes of the six clades protozoa, fungi, plants, invertebrates, mammalian vertebrates, and non-mammalian vertebrates (Table 1). '5′-flank' (blue) indicates intron positions 1 to 3, 2 to 4, and 3 to 5. '3′-flank' (red) indicates intron positions n − 4 to n − 2, n − 3 to n − 1, and n − 2 to n. 'Rest seq' (green) indicates the average stop signal frequency in the intermediate intron sequences between both flanks, from positions 6 to n − 5. 'F0'-'F2' indicate the frames. The dashed line indicates the probability calculated with Eq. (4) given the average GC content of each in-between sequence averaged over all sequences. For the numerical data, see Supplement S3.
2023-12-29T06:16:52.855Z
2023-12-27T00:00:00.000
{ "year": 2023, "sha1": "13cdfff95fa2b58bb86a73a72e937a589067a293", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-49626-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6e886b80259b38ec2f55b44698cadfdd60fe771", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
258056187
pes2o/s2orc
v3-fos-license
Evaluation of the prevalence of viral bronchitis infection in broiler chickens using the ELISA technique

Introduction

Domestic chickens are estimated at over 18 billion worldwide. The majority of the chicken industry consists of commercial farms, while in developing nations it is dominated by village (local) chickens [1]. Many diseases infect chickens, and respiratory diseases, such as avian influenza virus, infectious bronchitis virus, Newcastle disease virus, and Mycoplasma gallisepticum, are very important because they can cause disease alone or in association with other viral or bacterial pathogens [2]. Avian infectious bronchitis (IB) is an acute, highly contagious disease that causes severe economic losses in the poultry industry around the world [3]. It mainly affects the respiratory tract and frequently causes damage to the reproductive system and kidneys; when it affects the proventriculus, mortality may reach 75% to 100% in chicks [4,5]. Strain typing of IBV is necessary for understanding the evolution and epidemiology of IBVs [6]. Accurate classification of isolates is difficult because of the high mutation rate of the RNA genome, multiple subtypes, insertions, deletions, and recombination among IBVs [7]; more than 50 variants and serotypes of the virus have been registered around the world [8]. In Iraq, IB has become endemic and is found in layer and broiler flocks; IBV has been reported in Sulaimaneyah [9], Duhok [10], Erbil [11], Mosul [12], Baghdad [13], Diyala [14], Hilla, Najaf, Muthane, Theqaar [15,16], Al-Diwaniya [17], and Basrah [18]. The ELISA is a convenient test for checking viral infection and immune levels in chicken flocks [19]. The aim of this study was to evaluate the prevalence of IBV infection in broilers via a serological technique in Kirkuk governorate.

Materials and methods

Chickens: For this study, a total of 900 broilers, 24-42 days old, from commercial broiler farms were randomly selected. During the period from April 2016 to May 2016, serum samples for the ELISA test were collected from 10 broiler farms located to the south, east, and west of Kirkuk city. Seven farms (1 farm in the Laylan region, 2 farms in the Daquq region, 2 farms in the Taza region, and 2 farms in the Yaychi region) were suffering from respiratory signs; none of these farms had been vaccinated against IBV, according to the supervisor's instructions at each farm. One farm, located in the Altun Kupri region, was without respiratory signs and had been vaccinated with a different vaccination program. Of the remaining farms, one (in the Dibis region) was not vaccinated and without respiratory signs, and one (in the Laylan region) was suffering from respiratory signs and had been vaccinated. All flocks were in contact with specialized veterinarians, and all owners agreed to participate in the study.

Sample collection: Blood samples of 3 ml were obtained from the wing (brachial) vein of each bird using sterile syringes, poured into clean plain tubes without anticoagulant, and centrifuged at 3000 rpm for 5-7 minutes. The serum was separated and stored in labeled tubes at 2-8 °C for the ELISA test.
Enzyme-linked immunosorbent assay (ELISA): An IBV ELISA kit (Symbiotic, USA) was used to measure IBV antibodies in individual chicken sera according to the manufacturer's instructions. Briefly, all serum samples and the positive and negative control sera were diluted in Dilution Buffer (1:50). 50 μl of Dilution Buffer was added to all wells of the test plate; 50 μl of diluted IBV positive control serum was added to wells A1, A3, and H11, and 50 μl of diluted IBV negative control serum was added to wells A2, H10, and H12. Then 50 μl of each diluted serum sample was transferred to the corresponding well of the IBV-coated test plate, and the plate was incubated for 30 minutes at room temperature. The liquid from each well was tapped out into a vessel containing a decontamination agent such as bleach. Each well was then filled with 300 μl of Wash Solution, and the wash procedure was repeated two more times. 100 μl of anti-chicken IgY(G) peroxidase conjugate was added to each well, and the plate was incubated for 30 minutes at room temperature. After another washing procedure, 100 μl of chromogen substrate reagent was added to each well and the plate was incubated for 15 minutes at room temperature. Finally, 100 μl of stop solution was added to each well. The test results were read with an ELISA plate reader based on the optical density at 405-410 nm. Antibody levels were expressed as the sample-to-positive (S/P) ratio, and endpoint titers were calculated using the equation described by the manufacturer. Samples with an S/P ratio of 0.2 or less (titer ≤ 396) were considered negative, and samples with an S/P ratio greater than 0.2 (titer > 396) were considered positive.

Results

The seroprevalence of IBV in broilers in some regions of Kirkuk province is given in Tables 1 and 2. The overall seroprevalence of IBV in the present study was 78.33% and 62% for the groups shown in Tables 1 and 2, respectively. The clinical signs in broilers were characterized by respiratory signs such as coughing, rales, and gasping (Figure 1). Conjunctivitis with wet, frothy eyes, cold, and depression were also noticed. Post-mortem examination showed congestion and hyperemia in the trachea, and caseated plugs in the trachea were also seen (Figure 2). Kidneys appeared swollen and filled with urate material. Five hundred and eighty-eight out of 900 sera were positive: 78/90 (86.6%) from the farm without respiratory signs and vaccinated, and another 63/90 (70%) from the farm suffering from respiratory signs and vaccinated (Table 1); 432/630 (68.5%) samples were from farms suffering from respiratory signs and not vaccinated, and 15/90 (16.6%) from the farm that was not vaccinated and without respiratory signs (Table 2).
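The positivity classification underlying these counts follows the S/P cut-off described above. The sketch below is our own illustration with made-up optical densities; the S/P formula and the log-linear titer conversion are the forms commonly used for commercial IBV kits and are assumptions on our part, since the paper only states the 0.2 cut-off and the corresponding titer of 396.

```python
import math

def sp_ratio(sample_od: float, neg_od: float, pos_od: float) -> float:
    """Sample-to-positive ratio from optical densities (kit-standard form,
    assumed here; the exact equation is given by the kit manufacturer)."""
    return (sample_od - neg_od) / (pos_od - neg_od)

def endpoint_titer(sp: float) -> float:
    """Assumed log-linear S/P-to-titer conversion for IBV ELISA kits:
    log10(titer) = 1.09 * log10(S/P) + 3.36, which maps S/P = 0.2 to ~396."""
    return 10 ** (1.09 * math.log10(sp) + 3.36)

def classify(sp: float) -> str:
    """S/P <= 0.2 (titer <= ~396) is negative; S/P > 0.2 is positive."""
    return "negative" if sp <= 0.2 else "positive"

sp = sp_ratio(sample_od=0.85, neg_od=0.12, pos_od=1.10)  # hypothetical ODs
print(f"S/P = {sp:.2f}, titer = {endpoint_titer(sp):.0f}, {classify(sp)}")
```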
Discussion

In the present study, we carried out a seroprevalence survey of IBV in broiler chicken farms in Kirkuk governorate. One farm, located in the Altun Kupri region (without respiratory signs and vaccinated), was tested for IBV antibodies; 78 out of 90 samples (86.6%) were positive by ELISA. This percentage of infection is lower than that detected in Duhok, where a survey conducted in broilers revealed that 100% of farms were seropositive [10]. A study using the hemagglutination inhibition test found a seroprevalence of 83.3% for three IBV strains (M-41, 4/91, D274) in broiler farms free from respiratory disease, all of which had been vaccinated against the M-41 strain; this is close to our result [20]. In Egypt, 19 broiler farms were examined for the presence of IBV by RT-PCR, and the virus was detected in 65.4%, a percentage lower than the results obtained in this study [24]; the reason for the lower percentage may be the low efficacy of the vaccines used in that region [9]. In the farm from the Laylan region (suffering from respiratory signs and vaccinated), 70% of the serum samples were positive for IB virus [21]. The percentage of infection detected in this study is higher than that seen in previous studies from other parts of the world. In Duhok, 41.6% of farms were seropositive for antibodies [10]. A low seroprevalence of 17.2% was reported in chickens in the Middle Euphrates [15]. A study done in Bangladesh reported that 79.38% of non-vaccinated broilers were seropositive, which is higher than the results obtained in this study [22]. In a serological survey conducted in Iran on samples collected from February 2010 to September 2010, the roles of IBV, Newcastle disease (ND), and avian influenza H9 subtype (AIV H9) in outbreaks of respiratory disease on broiler farms were studied; the seroprevalences of IB, ND, and AIV H9 were 82.43%, 31.2%, and 18.47%, respectively [23]. Finally, in the farm from the Dibis region (not vaccinated, without respiratory signs), 16.6% of samples tested seropositive for IBV, which is close to the results obtained by [25]. This prevalence rate is low compared with previous studies from other countries; in [26], 68% of local chickens were reported seropositive for IBV antibodies. As these chickens had not been vaccinated, the result indicates that broilers were exposed to low-attenuated and field strains of IBV circulating in the studied areas [27]. The high titers in some farms, such as those in Altun Kupri and Laylan, may be due to increased immunosuppressive factors or exposure of the birds to high doses of infectious agents [30].
2023-04-11T15:05:09.431Z
2018-08-05T00:00:00.000
{ "year": 2018, "sha1": "7deb0549f775ad7bf549957520f56cec0e320d15", "oa_license": "CCBY", "oa_url": "https://tjps.tu.edu.iq/index.php/tjps/article/download/520/184", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "85f6a45e2a843306b494f277d60080959fd2f157", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
52825517
pes2o/s2orc
v3-fos-license
Association of men's exposure to family planning programming and reported discussion with partner and family planning use: The case of urban Senegal

Background: Family planning programs increasingly aim to encourage men to be involved in women's reproductive health decision-making as well as support men to be active agents of change for their own and the couple's reproductive health needs. This study contributes to this area of work by examining men's exposure to family planning (FP) program activities in urban Senegal and determining whether exposure is associated with reported FP use and discussion of family planning with female partners.

Methods: This study uses data from two cross-sectional surveys of men in four urban sites of Senegal (Dakar, Pikine, Guédiawaye, Mbao). In 2011 and 2015, men ages 15-59 in a random sample of households from study clusters were approached and asked to participate in a survey about their fertility and family planning experiences. These data were used to determine the association between exposure to the Initiative Sénégalaise de Santé Urbaine (in English: Senegal Urban Reproductive Health Initiative) family planning program interventions and men's reported modern family planning use and their reported discussion of FP with their partners. Since data come from the same study clusters at each time period, fixed effects methods at the cluster level allowed us to control for possible program targeting by geographic area.

Results: Multivariate models demonstrate that religious leaders speaking favorably about family planning, seeing FP messages on television, hearing FP messages on the radio, and exposure to community outreach activities with a FP focus (e.g., house-to-house visits and community religious dialogues) are associated with reported modern family planning use and discussion of family planning with partners among men in the four urban sites of Senegal.

Conclusions: This study demonstrates that it is possible to reach men with FP program activities in urban Senegal and that these activities are positively associated with reported FP behaviors.

Introduction

Prior to the 1994 International Conference on Population and Development (ICPD), family planning (FP) efforts almost exclusively focused on delivery of clinical services to women [1,2].
Following the 1994 ICPD meeting, there was increasing attention to engaging men as partners to increase couple communication and encourage men's support for women's FP decision-making [3]. Engaging men is particularly important given that husbands and male partner opposition are often given as reasons for contraceptive non-use by women [4][5][6][7]. Programs that involve men can increase spousal communication frequency, address gender inequitable norms, and lead to greater FP use [8]. The evidence on male targeted FP programs has evolved over time. Several systematic reviews have examined programs targeting men to identify effective strategies for engaging men as partners and strategies to improve men's own reproductive health needs [3,9,10]. One earlier review of evaluations of men's involvement in sexual and reproductive health programs demonstrated that 10 years after the 1994 ICPD conference there was still little programmatic engagement of men and few evaluations of interventions engaging men in sexual and reproductive health programs [10]. The studies at the time of the review demonstrated that men's involvement was related to men's positive support for women's contraceptive use and that men were not necessarily a barrier to use [10]. A more recent review of evidence of changing gender norms among men to improve reproductive health outcomes demonstrates that programs that use a gender-transformative approach (i.e., promote gender-equitable relationships between women and men) and those that include multiple components were the most successful [9]. Finally, Hardee and colleagues [3] recently reviewed 47 interventions that reached men as users/clients (for condoms, vasectomy, withdrawal, and Standard Days Method) to provide recommendations to strengthen FP programming for men. Notably, an identified gap in the review was the need for more robust evaluations of programs that target men [3]. Among the programs identified, those that were considered "proven" (i.e., strong evidence) included social marketing and outreach with male motivators/peer educators [3]. Social marketing generally increases men's access to contraceptives whereas outreach by male motivators can improve men's knowledge and attitudes, address community norms around FP use, as well as increase access to methods [11]. Promising activities included community dialogue (i.e., community engagement), mass and social media, and clinic-level provision of information and services. Community dialogue and mass/social media can address men's knowledge and attitudes as well as community norms, whereas, clinic-level activities generally focus on access to methods. Finally, emerging areas for reaching men included mobile health interventions (mHealth), hotlines, and engaging religious leaders [3]. The Malawi Male Motivator project is an example of a peer-led program that used a randomized evaluation design to demonstrate that engaging men and promoting couple communication lead to greater reported family planning uptake among men [12]. In addition, spousal discussion of family planning has been found to be significantly related to male engagement and contraceptive uptake in varying contexts [13][14][15][16][17][18][19][20]. Recent qualitative studies from Nigeria and Togo demonstrate that men want and expect to be part of the decision-making process about family size and childbearing [21,22]. 
That said, while the Togo study demonstrated common misperceptions around FP among men, there were also clear socioeconomic motivations that led men to consider (or use) FP [22]. In both Nigeria and Togo, a key barrier to men's involvement in FP was the common thinking among the male participants that FP is the woman's domain [21,22]. The authors of these West African studies promote the need for community engagement strategies that reach men and couples to address myths and misperceptions and improve couple communication about FP [21,22].

In Senegal, the site of this study, between 2012 and 2015, significant increases were observed in modern contraceptive use, from 12% of married women to 21% of married women [23,24]. This impressive increase fell below the government's commitment made at the 2012 London Summit on Family Planning, which was to achieve a modern contraceptive prevalence rate of 27% by 2015 [23]. A key program that supported Senegal's increase in FP use between 2012 and 2015 was the Initiative Sénégalaise de Santé Urbaine (ISSU) or Senegal Urban Reproductive Health Initiative, launched in 2010 with funding from the Bill & Melinda Gates Foundation (BMGF). The ISSU project, with government support and engagement, undertook a multi-component program that included improving the quality and availability of contraceptive services by trained providers, integrating service delivery, and developing a reliable contraceptive supply system to reduce stockouts of methods. The program also undertook a number of activities to increase demand for modern FP, including mass media campaigns on the radio and television; community outreach activities that included one-on-one interactions at a person's home and community drama productions; and engaging religious leaders to speak favorably about FP in their sermons as well as part of a radio series. Notably, the Senegal ISSU program examined here used many of the proven, promising, and emerging program approaches for engaging men [3] with the goal of increasing modern contraceptive use in six cities, particularly among the urban poor.

In a separate impact evaluation, women exposed to ISSU-led community outreach activities were significantly more likely to report using modern contraception at endline than women not exposed to community outreach [25]. None of the other ISSU program activities were found to be related to women's reported use over time. All of the ISSU mass media activities reached men as well as women; some of the messages were targeted to men as key gatekeepers of FP within the household. In addition, community-based activities that involved drama on couple communication and community-based activities with religious leaders also sought to engage men and encourage couple communication.

This paper examines the associations between ISSU programming and men's reported modern contraceptive use and reported spousal discussion of FP in the ISSU study sites. We hypothesize that we will find a positive association between men's exposure to the ISSU program activities and their reported modern family planning use and discussion of family planning with their partner.

Materials and methods

The data for this study come from baseline (2011) and endline (2015) surveys collected as part of the evaluation of the ISSU program.
The Measurement, Learning & Evaluation (MLE) project at the University of North Carolina at Chapel Hill was responsible for evaluating the ISSU project and sister projects in Kenya, Nigeria, and the state of Uttar Pradesh, India. In Senegal, data from men were collected from four urban sites that are part of the wider region of Dakar: Dakar, Guédiawaye, Pikine, and Mbao. At baseline, a two-stage sampling design was used to obtain a representative sample of households and men. At baseline, we used the 2009 updated version of the 2002 General Population and Housing Census list of census districts (also called clusters) which served as our study primary sampling units-PSUs. In each city, a random sample of clusters was selected in the first stage with 64 clusters selected in Dakar and 32 in each of the smaller sites (Guédiawaye, Pikine, Mbao); the number of clusters selected was reflective of the census population size estimates for the sites. In each selected cluster, a full household listing and mapping was conducted. Following the listing and mapping, 11 households were randomly selected for the men's interview in each cluster with equal probability of selection; more details on the study design can be found in the baseline report [26]. At endline in 2015, we returned to the same clusters as baseline but a new listing and mapping exercise was undertaken and 11 households were again randomly selected for interview. At each round of data collection, in each selected household in the four sites, all men ages 15-59 were eligible for interview. All eligible men were approached by a trained male interviewer and asked for their signed consent to be interviewed. For this analysis, we pooled the data from the two rounds of data collection to permit making comparisons between the baseline and endline cross-sectional samples. This analysis examines two dependent variables. The first dependent variable is reported use of modern contraception. At baseline and endline, men were asked if they (or their partner) were using a contraceptive method to delay or avoid childbearing and those who reported yes were asked what method they used. Modern methods of contraception include male and female sterilization, daily pill, intrauterine device (IUD), implants, injectables, male and female condoms, emergency contraception, Standard Days Method, and lactational amenorrhea; these last two methods are coded as modern, in accordance with the Senegal Demographic and Health Survey [24]. Men who reported traditional method use (e.g., rhythm method, withdrawal, or folkloric methods) were coded as non-modern method users. The second dependent variable is specifically focused on men who were in union (married or living with a partner). Men who were in union at the time of interview were eligible to be asked about whether (and when) they discussed FP with their partner. Those men in union who reported that they discussed FP in the last six months were coded one and all others were coded zero. This analysis examines the association between exposure to various ISSU program activities and the outcomes of interest. Table 1 presents the description of the program exposure variables as measured in the survey as well as the percentage of men exposed to each program element at baseline and endline. 
Because the ISSU program activities had not begun before baseline data collection, ISSU-specific variables were coded as zero at baseline (e.g., exposure to ISSU community religious talks, ISSU community activities, and ISSU radio or television). Two types of radio exposure variables are included in the model: exposure to any FP message on the radio and exposure to ISSU-specific messages on the radio. Likewise, two types of television exposure variables are included in the analysis: general FP television exposure and ISSU-specific exposure. For both radio and television, we asked about specific shows and stations where the ISSU program was aired to more specifically measure ISSU media exposure. Most of the men who were exposed to the ISSU radio or television also reported general radio and television exposure. Numerous community-level exposure variables were measured as part of the evaluation, and these were specifically related to activities undertaken by the ISSU program. These included a) participating in a community-based religious talk on FP; b) hearing a religious leader speak favorably about FP (asked at baseline and endline); and c) participating in outreach activities (community activities). The community activities measured were those that ISSU implemented in the study cities and included community meetings on FP, community conversations about FP, small group discussions (niche) on FP, and an outreach worker visiting the home to discuss FP. Notably, outreach workers were generally targeting women, but men were not excluded from participation.

Given that we interviewed men from the same communities at baseline and endline, we use fixed effect methods to control for possible geographic program targeting. For example, the effect of community outreach programs could be biased if these programs were targeted to communities based on unmeasured characteristics of the communities. Our methods correct for this source of community-level bias. However, they do not correct for individual recall bias in the exposures, which is why our analysis is one of association rather than causality. All descriptive analyses use weights from the corresponding baseline or endline survey. To compare the two time periods, we present p-values from Pearson F-tests performed in Stata statistical software version 14.1. For multivariate analyses, we present the coefficients and standard errors from the fixed effect regression models and focus on the sign and significance of the results (a schematic version of this specification is sketched in code below). All analyses adjust for clustering based on the survey design using Huber-White type sandwich estimators for standard errors. All study procedures were approved by the relevant institutional review boards.

Results

At baseline, 2,270 men were successfully interviewed across the four cities. At endline, a total of 2,214 men were interviewed in the same cities and same survey clusters. For the analysis of men's modern contraceptive use, we focus on the men who reported that they had ever had sex, which reduces the baseline sample to 1,491 (66% of the full sample) and the endline sample to 1,490 (67% of the full sample). Most of the men dropped from the analysis sample have never been married. At baseline, 51% of the never-married men had never had sex, and at endline, 57% of the never-married men had never had sex. To examine spousal communication about FP, we focus on the sub-sample of men who ever had sex and were in union at the time of the baseline (n = 833) or endline (n = 978) surveys.
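The fixed-effects specification referred to above can be illustrated with a short script. The original analysis was run in Stata; the Python/statsmodels sketch below is our own approximation, and the file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled baseline + endline file, one row per respondent, with
# a binary outcome (modern_use), exposure indicators, controls, and the
# sampling cluster identifier.
df = pd.read_csv("men_pooled.csv")

# Cluster fixed effects enter via C(cluster); cluster-robust standard
# errors play the role of Stata's Huber-White sandwich estimator.
model = smf.ols(
    "modern_use ~ endline + tv_fp + radio_fp + religious_fp + community_fp"
    " + C(age_group) + C(education) + C(cluster)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})
print(model.summary())
```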
Men who were not in union were not asked the questions about partner communication and therefore were dropped from the analysis of this outcome. Table 2 presents the demographic characteristics of the cross-sectional analysis sample of men who had ever had sex at baseline and endline. Table 2 demonstrates that the endline sample is somewhat older than the baseline sample; this corresponds to a greater percentage of the endline sample being in union and the endline sample having higher parity compared to the baseline sample that includes more men without children and who have never been married. In the four urban sites, a quarter of men have no education or only Quranic education; another 30% have only a primary education level. A quarter of men at both time periods have secondary or higher education levels. As expected, most of the sample is Muslim (nearly 90%). Table 3 presents descriptive statistics for the outcomes at baseline and endline. First, examining contraceptive use, at baseline 40% of men who ever had sex report that they or their partner is using a modern method and 5% report using a traditional method. By endline, the percentage reporting modern method use has dropped slightly to 37% and traditional method use remains about the same; this difference is not significant. While use did not change significantly between baseline and endline, the reported method mix did change significantly towards a larger share of men reporting use of long-acting methods. At endline, a greater percentage of men report that their partner is using implants and injectables compared to baseline. This represents a decline in use of less effective methods such as male condoms and pills. Also presented in Table 3 is the answer to a question about discussion of family planning among men in union. We see that recent reported discussion of family planning (in the last 6 months) declined somewhat, although the difference is not significant. Multivariate analyses presented in Tables 4 and 5 control for the observed differences in demographic factors (e.g., age, marital status and parity) between the samples to better inform the differences in the outcomes over time. Table 1, presented earlier, provides a description of the program exposure variables including ISSU specific and general FP exposure. Also presented in Table 1 is the percentage of men exposed to each of the activities at baseline (if applicable) and endline. At baseline, about 15% of men reported that they had read about FP in the newspaper or in a magazine in the last three months; by endline this percentage had increased somewhat to 20% (p = 0.04). Exposure to FP messages on the radio increased significantly over time from 43% of men reporting exposure at baseline to 80% at endline. Some of the increase in exposure to FP messages on the radio is contributed by ISSU specific messages and programming; by endline, about 20% of men had heard the ISSU specific radio programming. FP is also being presented on the television such that at baseline 59% of men reported television exposure to FP and by endline, this had significantly increased to 88%. At endline, a high percentage of men (61%) had seen an ISSU specific television program that covered FP. A question was asked at baseline and endline about exposure to a religious leader speaking favorably about FP. At baseline, nearly 18% of men reported exposure to positive messages from religious leaders and by endline this had more than doubled to 47%. 
While this was not an ISSU-specific question, working with religious leaders on FP messaging was a key activity mainly implemented in the four cities by the ISSU program. In addition, 14% of men reported participating in community-based religious talks at endline, and 12% were exposed to another type of community-based activity with a FP theme; most of these activities are ISSU-specific community-based activities.

Table 4 presents the multivariate fixed effect regression coefficients and standard errors from the model examining the association between ISSU program exposure and men's reported modern method use among men who had ever had sex. In the full model, the endline dummy variable, coded 1 in 2015, is negative and significant; this is consistent with the decline in CPR seen in the descriptive results. Table 4 demonstrates that, controlling for the demographic factors and survey period, there are a number of program factors associated with men's reported modern method use. Men who were exposed to FP messages on the television (p < 0.001), men who heard a religious leader speak favorably about FP (p < 0.001), men who heard community-level religious talks on FP (p < 0.05), and men who were exposed to ISSU community-level activities (p < 0.05) were significantly more likely to be using a modern method than men who were not exposed to these activities. While a number of these exposure effects are not specific to the program, controlling for general exposure (e.g., radio and television) still results in significant associations between the ISSU community- and religious-based activities and modern contraceptive use. The control variables in this model are all in the expected direction: men with more children were more likely to use FP, and men in the prime reproductive years (ages 25-35) were more likely to use than men ages 15-24. Men who were in union and men who were divorced, widowed, or separated were less likely to be modern method users than sexually experienced men who had never been in union. Men who were Muslim were less likely to use than men who were Christian. In addition, men who were more educated were significantly more likely to use than men with no education or Quranic education only. The overall model p-value from the F-test is less than 0.001; this indicates that the model is significantly different from a model with just a constant term.

Table 5 presents the association between program exposure and the discussion outcome in the sample of men in union. Those men exposed to religious leaders speaking favorably about FP were significantly more likely to report recently discussing family planning with their spouse (p < 0.001) than those men who did not hear a religious leader speaking favorably about FP. Further, radio exposure, both general and ISSU-specific, was associated with greater recent discussion. Exposure to community-based activities was positively associated with reported recent family planning discussion. Finally, general television exposure was positively associated with reported discussion of FP in the last six months. The control variables show expected results. The time variable (endline vs. baseline) is negative and significant, indicating that after controlling for exposure and the demographic factors, reported discussion declined. Further, those men with more children were more likely to report discussion of FP. Those in the reproductive years were more likely to report discussion.
More educated men reported more recent discussion than men with no education or Quranic-only education, and richer men reported more discussion of FP than the poorest men.

Discussion

Senegal needs to identify ways to continue positive trends in contraceptive use to attain its FP goals and commitments. Including men in the FP equation as potential users or supporters of FP is an important step for meeting these goals, as men can be a barrier (perceived or real) to couples' use [9,27]. Programs targeting men are needed to address men's desire and expectation to be part of the decision-making process about family size and childbearing and to shift social norms around FP use [21,22]. This study demonstrates which of the ISSU program components were associated with reported FP use and discussion of FP among men. In particular, men who were exposed to a religious leader speaking favorably about FP were more likely to report using FP and discussing FP with their spouses. Further, radio activities (both ISSU-specific and general programming on FP) were associated with FP discussion, and television exposure (general) was associated with FP use. Finally, there was an association between community-based activities and these outcomes.

Interestingly, in the evaluation that examined the impact of the ISSU program on women's modern method use, only community-based activities were found to be significant [25]. There were no identified effects of radio, television, or religious leaders on women's likelihood to use modern contraception. This is in contrast to the findings here, which show that these other activities are associated with men's reported FP use and spousal discussion. The observed associations found here may reflect the program activities that are important for men and, potentially, these may influence women's choices indirectly. As discussed earlier, men do play a role in decision-making in the region, either directly or indirectly, and thus should also be engaged in FP programming in urban Senegal [21,22]. Programs should consider tailored interventions for men separately from women (and examine outcomes among both) since exposure and effects may differ by sex [3,21,22]. Thus, as part of developing program strategies, it is worth considering the direct and indirect effects of program activities on both women and men.

Earlier studies from urban West Africa show that program exposure is related to positive family planning outcomes among men. In particular, one study that used data collected from men in two cities in Nigeria (Kaduna and Ibadan) demonstrated that men who were exposed to the Nigerian Urban Reproductive Health Initiative (NURHI) media campaign had significantly greater contraceptive use ideation [28]. In the analysis, contraceptive use ideation was a summary measure capturing contraceptive awareness, myths and rumors, approval of government officials discussing FP, perceived self-efficacy to use family planning, spousal discussion of FP, and men's approval of FP [28]. Further, a recent analysis of men from urban areas of Kenya, Nigeria, and Senegal demonstrated which programmatic factors were associated with men's reported contraceptive use [29]. The authors showed that among men in Kenya, participation in community events and exposure to television programs related to FP were associated with modern contraceptive use [29].
Further, among men in Nigeria, the only program activity that was significantly associated with modern method use was exposure to program slogans in English (i.e., branding) [29]. Finally, the analysis also included men from three cities in Senegal and examined ISSU program activities based on data from midterm in 2013 (the current paper uses data from endline in 2015). The earlier analysis among men in Senegal showed that exposure to program-led radio and television programming and exposure to religious leaders speaking in favor of FP were associated with modern method use [29]. The results presented in this paper are consistent with these earlier findings; however, by endline there were also significant associations between men's exposure to community-level activities (community dialogues by religious leaders and community outreach) and men's reported contraceptive use and discussion of FP with their partner. These community-level activities, which are interpersonal in nature, may take longer to attain large enough coverage for associations to appear; the four-year follow-up may have provided the longer time period necessary for associations to be observed.

This study has a few limitations that need to be acknowledged. First, the study sample represents two separate cross-sections from the same sample clusters in the four cities. While we can use the clusters in the fixed effect analyses to control for possible program targeting at the cluster level, the results of the analyses are simply associations. We cannot show causal relationships between program activities and the outcomes of interest. Second, overall, we observe slight declines in the outcomes in the endline sample; this may reflect unobserved sample differences between baseline and endline but may also reflect true declines among men. In models that did not include the program variables, this negative time effect was attenuated or disappeared. In the women's longitudinal sample, reported modern method use increased by five percentage points [25]. Among the longitudinal sample of women, this may reflect life course factors that increase the need for FP with older age and continued childbearing. That said, repeated cross-sectional Demographic and Health Survey (DHS) data from Senegal suggest increases in contraceptive use over time as reported by women [24]. Notably, the 2015 DHS data from Senegal demonstrate that more women in union report modern method use (21%) than men in union (15%), and among users, only 4% of women in union report condom use, whereas 18% of men in union who report using a modern method report condom use (author calculations). Therefore, it is possible that men are misreporting their use, that men report less condom use as a FP method over time, that men do not know about increases in long-acting method use among their partners, or that there are true declines in use across the two cross-sectional samples. With the data available, we cannot determine which of these scenarios is the correct one. Another limitation of this analysis is that the general television and radio exposure variables that increase over time are picking up increases in ISSU programming but may also be picking up increases in other mass media programming taking place in Senegal; with the data available, it is not possible to make this distinction fully. Finally, there is some collinearity between the survey wave (time) and the variables that are only measured at endline; this is a consequence of the program not existing at baseline.
Conclusions

This study takes a first step to examine which types of program activities are associated with changes in men's reported FP behaviors and communication. Working with religious leaders, which was identified as an emerging strategy [3], was associated with modern FP use and spousal communication in the urban Senegal context. Further, we demonstrated that community-based activities and radio and television programs can lead to high exposure to FP messages among men. Identifying the best combination of mass media and interpersonal activities to increase men's engagement in FP is an important next step for programs seeking to meet men's own FP needs as well as those of their partners. The findings from this study were used by the ISSU program to strengthen their programming and can also be used to inform future programming in urban Senegal and in other parts of urban francophone Africa.
2018-10-02T01:19:39.456Z
2018-09-25T00:00:00.000
{ "year": 2018, "sha1": "d900ff25fdc158c8e9bdc124e305420ed6ac80de", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0204049&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "02688b3e20f097e097d04e0acdd81a0f8f889f7e", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
205314468
pes2o/s2orc
v3-fos-license
Experimental Observation of Quantum Chaos in a Beam of Light

The manner in which unpredictable chaotic dynamics manifests itself in quantum mechanics is a key question in the field of quantum chaos. Indeed, very distinct quantum features can appear due to underlying classical nonlinear dynamics. Here we observe signatures of quantum nonlinear dynamics through the direct measurement of the time-evolved Wigner function of the quantum kicked harmonic oscillator, implemented in the spatial degrees of freedom of light. Our setup is decoherence-free, and we can continuously tune the semiclassical and chaos parameters so as to explore the transition from regular to essentially chaotic dynamics. Owing to its robustness and versatility, our scheme can be used to experimentally investigate a variety of nonlinear quantum phenomena. As an example, we couple this system to a quantum bit and experimentally investigate the decoherence produced by regular or chaotic dynamics.

Introduction

Chaotic classical systems have the characteristic trait of being extremely sensitive to initial conditions. This behavior, together with the experimental imprecision of the initial conditions, causes these deterministic systems to be inherently unpredictable. The field of quantum chaos addresses the question as to how classical chaotic dynamics manifests itself in quantum mechanics. In addition to fundamental questions concerning the correspondence principle and the classical limit of quantum mechanics, a number of intriguing quantum-dynamical features have been unravelled. Prominent examples are dynamical localization [2], the quantum suppression of classical diffusion, and the enhancement of the tunneling rate in the presence of chaos in the corresponding classical dynamics [3,4]. These phenomena have been observed in several physical systems [3][4][5][6][7][8][9][10][11][12].

The simplest and most widely studied systems that present manifestations of classical chaos in their quantum dynamics are periodically time-dependent Hamiltonian (Floquet) systems [17]. The quantum evolution up to the discrete time t = nT is described by the quantum map |ψ(n)⟩ = U^n |ψ(0)⟩ (1), where n is an integer and the Floquet operator U describes the unitary quantum evolution over one time period T. In addition to extensive study from a theoretical viewpoint, the phenomena arising in these maps have been observed in experiments with atoms [3,4,6-9,18], Bose-Einstein condensates [10], and photonic lattices [12]. There have been a few theoretical proposals to realize quantum chaotic maps using paraxial optics [19][20][21] and, in fact, dynamical localization has been observed in an optical field sent through a sequence of phase gratings [11]. However, the realization of the quantum kicked harmonic oscillator (KHO), a paradigm of quantum non-linear dynamics with non-KAM (Kolmogorov-Arnold-Moser) behavior, and a model for charges moving in time-dependent fields [13], electronic transport in semiconductor lattices [14,15], and trapped ions in a periodic laser field [16], is still outstanding.

The quantum KHO is described by the map (1), where the iteration operator is U_KHO = R_α V_K (2). The operator R_α describes the evolution of a quantum harmonic oscillator, parameterized by α = ωT, where ω is its frequency and T the interval between the periodic perturbations. V_K describes a periodic perturbation corresponding to a potential K cos(Q + φ), where φ is a phase and Q is the dimensionless position variable defined below.
The dimensionless position and momentum coordinates of a particle of mass m subjected to the KHO evolution are defined as Q = νq and P = νp/(mω), respectively, where q and p are the position and momentum of the particle and ν is the spatial frequency of the kick. The dimensionless operators obey [Q, P] = iℏ_eff, where the effective Planck constant is ℏ_eff ≡ ℏν²/(mω). This model can present a rich variety of intriguing quantum-dynamical phenomena [22,23]. In the so-called quasicrystal condition, and also for irrational values of α, a quantum localization similar to that extensively studied and observed in the kicked-rotor model can appear [24]. For the crystal condition, where α ∈ {π/3, π/2, 2π/3, π, 2π}, the quantum system can present diffusion in energy for any value of K. In this case, the stroboscopic phase space of the corresponding classical system is characterized by the appearance of a "stochastic web" associated with the chaotic behavior, with periodic regions corresponding to essentially regular dynamics in between. Stochastic webs are typical structures of systems with non-KAM behavior, where chaotic dynamics appears even for an arbitrarily small perturbation (in our case, the kicks) [13], and they have many applications [15]. The size of the web and the perturbation of the regular dynamics inside the periodic regions are governed by the intensity of the perturbation K. For K < 1, the size of the web is considerably small, as is the perturbation of the regular dynamics inside the periodic regions. When K = 2, the KHO can be considered a weakly chaotic system.
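The stochastic web is easy to visualize by iterating the corresponding classical kick-to-kick map. The sketch below is our own illustration, assuming the kick acts first (P → P + K sin(Q + φ)) and is followed by a harmonic rotation of phase space by α; for α = π/3 and K = 2, a weakly chaotic web emerges.

```python
import numpy as np
import matplotlib.pyplot as plt

def kho_orbit(Q, P, K, alpha, phi=0.0, n_kicks=500):
    """Stroboscopic (kick-to-kick) map of the classical kicked harmonic
    oscillator: instantaneous kick, then rotation by the angle alpha."""
    c, s = np.cos(alpha), np.sin(alpha)
    qs, ps = [Q], [P]
    for _ in range(n_kicks):
        P = P + K * np.sin(Q + phi)            # kick changes the momentum
        Q, P = c * Q + s * P, -s * Q + c * P   # harmonic rotation
        qs.append(Q)
        ps.append(P)
    return np.array(qs), np.array(ps)

# Crystal condition alpha = pi/3 with K = 2: a stochastic web with regular
# islands in between, as described in the text
rng = np.random.default_rng(1)
for q0, p0 in rng.uniform(-4, 4, size=(60, 2)):
    q, p = kho_orbit(q0, p0, K=2.0, alpha=np.pi / 3)
    plt.plot(q, p, ",k")
plt.xlabel("Q")
plt.ylabel("P")
plt.show()
```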
Here we implement the quantum KHO dynamics in the spatial degrees of freedom of the photons of a monochromatic paraxial light beam. We observe the non-linear dynamics through direct measurement of the optical Wigner function. Controllable parameters tune the system from regular to chaotic dynamics, as well as the effective Planck constant, which is associated with the quantum-classical transition. Our scheme is decoherence-free and can be employed in a variety of studies of non-linear quantum systems, which we illustrate by investigating the decoherence induced by our system on a qubit.

Optical implementation of the chaotic quantum map

We implement an optical version of the operator (2) in the spatial degrees of freedom of monochromatic paraxial light, based on the isomorphism between the paraxial wave equation and the Schrödinger equation (see [25][26][27] and Supplementary Note 1). The light beam is sent n times through a combination of optical elements designed to implement the operator U_KHO, as illustrated in Fig. 1(a). The transverse positions in the near and far field correspond to the position and momentum of the photons in the beam and are analogous to the transverse position q and transverse momentum p of a quantum particle. The instantaneous "kick" perturbation is produced using a Holoeye spatial light modulator (SLM). The SLM imprints a programmable phase exp[if(x, y)] on an optical beam and thus can be used in the implementation of many dynamical maps. We define Q = νq as the dimensionless version of the near-field variable q (see Supplementary Note 2). The parameter ν is the spatial frequency of the cosine function, K is the kick strength, and the effective Planck constant ℏ_eff is defined below.

The harmonic evolution operator R_α produces a phase-space rotation that is equivalent to a fractional Fourier transform (FRFT) of order α [28], which can be implemented using a lens of focal length f placed between two sections of free space of length z_α = 2f sin²(α/2). The dimensionless momentum variable is P = νf̃θ, where θ is the angle of the paraxial ray and f̃ ≡ f sin α. The dimensionless operators Q and P obey the commutation relation [Q, P] = iℏ_eff [25,26], where ℏ_eff = ν²f̃/k is the dimensionless effective Planck constant (see Supplementary Note 2). It is straightforward to manipulate all the relevant parameters in the dynamics of the KHO: by changing the order α of the FRFT (harmonic evolution between kicks), the amplitude K of the cosine phase implemented with the SLM, and the spatial frequency ν of this phase (effective Planck constant).

Figure 1. The output of the HeNe laser is horizontally polarized using a polarizing beam splitter (PBS1) and reflected into n iterations of the KHO operation. Each iteration consists of a "kick", corresponding to a phase imprinted on the field with the spatial light modulator (SLM), and harmonic evolution, implemented with sections of free propagation and a cylindrical lens. The beam is reflected back and forth n times onto the SLM. Lenses L1 and L2 map the final state at transverse plane z0 onto the entrance of a Sagnac interferometer. Before entering the interferometer, a Dove prism (DP1) is used to perform a 90° spatial rotation of the beam profile, and a polarizing beam splitter (PBS2) and half-wave plate (HWP1) are used to balance the intensities of the vertical and horizontal polarization components. The interferometer is used for direct measurement of the optical Wigner function. Each phase-space point of the Wigner function is obtained from the interference between the horizontal (H) and vertical (V) polarization components of the beam emerging from the interferometer. A Dove prism (DP2) inside the interferometer is used to spatially rotate the counter-propagating H and V components of the field. Translation and tilting of the input mirror are used to select the phase-space point (Q, P) to be measured. A quarter-wave plate (QWP), a half-wave plate (HWP2), and a polarizing beam splitter (PBS4) are used to perform the required polarization measurement, and a large-aperture power meter is used to measure the beam intensity. See the Methods section for more information.

Figure 2. In all cases the harmonic evolution parameter is α = π/3. The black line outlines the phase-space manifold that is the skeleton of both the quantum and the classical distributions corresponding to the KHO Hamiltonian (see details in the text). The associated stroboscopic phase-space evolution of the classical map is illustrated in (g) and (h), corresponding to the cases in (b) and (c), respectively (the skeleton manifold of the evolved state is the yellow line). The theoretical plots (including the stroboscopic phase space) are obtained from the evolution of the KHO corrected by a linear transformation (equal for all the plots) that takes into account the spurious evolution due to technical imperfections and small alignment errors of the optical elements (see the Methods section).

The experimental setup is illustrated in Fig. 1(b). To characterize the chaotic dynamics, we perform a point-by-point direct measurement of the optical Wigner function [28,29] using an interferometric method [30].
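The hardware-to-model parameter mapping described above (z_α, f̃, ℏ_eff, and the SLM kick phase) can be made concrete with a few lines of code. This is our own illustration with made-up optical parameters, not the study's software; the overall sign convention of the kick phase is also our assumption.

```python
import numpy as np

# Illustrative optical parameters (not values from the experiment)
wavelength = 633e-9                 # HeNe laser wavelength [m]
k = 2 * np.pi / wavelength          # wavenumber
f = 0.25                            # lens focal length [m]
alpha = np.pi / 3                   # FRFT order (harmonic rotation angle)
nu = 1.0e4                          # spatial frequency of the kick [1/m]
K, phi = 2.0, 0.0                   # kick strength and phase

z_alpha = 2 * f * np.sin(alpha / 2) ** 2   # free-space section length
f_tilde = f * np.sin(alpha)                # scaled focal length
hbar_eff = nu ** 2 * f_tilde / k           # effective Planck constant

# Kick phase imprinted by the SLM along the transverse coordinate x
x = np.linspace(-5e-3, 5e-3, 1024)         # transverse positions [m]
kick_phase = -(K / hbar_eff) * np.cos(nu * x + phi)
print(f"z_alpha = {z_alpha:.3f} m, hbar_eff = {hbar_eff:.3f}")
```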
Details of the full experimental setup are given in the Methods section, and in the Discussion section we show how this scheme can be used as a building block to implement long-time dynamics.

Observing quantum signatures of chaos in phase space

Experimentally measured Wigner functions are shown in Figs. 2(a-c), together with the corresponding theoretical simulations in Figs. 2(d-f). All three cases correspond to the harmonic evolution α = π/3. The classical dynamics of the KHO map (controlled by the kick amplitude K) are illustrated with the usual stroboscopic kick-to-kick map in Figs. 2(g) and (h). Figs. 2(a, d) and (b, e) show results for the kick amplitude K = 7.4, which corresponds classically to essentially chaotic dynamics (see Fig. 2(g)). Figs. 2(c, f) show results for K = 2, which corresponds to mixed classical dynamics (see Fig. 2(h)). Extended quantum states (with uncertainty ∆Q∆P ≫ ℏ_eff/2) typically exhibit a fine oscillatory structure in their Wigner function, known as sub-Planck structure [31], which saturates at a scale ∼ ℏ_eff²/(∆Q∆P). Thus, for an almost fixed extension in phase space, the wavelength of the oscillatory pattern should decrease with ℏ_eff. This can be observed by comparing Figs. 2(a, d) and (b, e), which correspond to different values of ℏ_eff. The Wigner functions of some extended states, like energy eigenstates in integrable systems, have a classical manifold as their support. Every pair of localized regions of the Wigner function on this support can interfere, creating an oscillatory pattern at the middle of the chord that joins the localized regions, similar to the Wigner function of a superposition of two coherent states (a Schrödinger-cat-like state). When the classical dynamics is sufficiently non-linear, the evolution of even highly localized Gaussian states is supported by a phase-space manifold that evolves classically [32,33]. This manifold is also the skeleton of the classical probability distribution associated with the initial Gaussian state, whose evolution is determined by the Liouville equation of the classical system. In all the plots of Fig. 2 a curve (yellow in (g) and (h), black in the rest) indicates the classical manifold that is the support of the Wigner function. For our initial squeezed Gaussian state this is a straight line, as in Figs. 3(a) and (e). The interference pattern appears once the stretching and folding of the classical manifold begins due to the non-linear classical dynamics. Eventually, some portion of the interference pattern falls over the classical support, and the positive skeleton begins to disappear. This occurs very quickly when the underlying dynamics is chaotic. This can be seen in Fig. 3, which shows the Wigner function of the first three iterations of the KHO map (2) for the same initial state |Ψ(0)⟩ (shown in (a) and (e)), for K = 0.75 in (b) to (d) and for K = 2 in (f) to (h). The non-linear regular classical dynamics of a stability island around the origin for K = 0.75 is shown in the stroboscopic phase-space plot (i), and the weakly chaotic dynamics for K = 2 in plot (j).
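The chord-interference mechanism can be checked directly on a toy state. The sketch below (our own illustration; the grid size and the separation a are arbitrary) computes the Wigner function of a superposition of two displaced Gaussians via an FFT over the chord variable, and exhibits the oscillatory pattern midway between the two lobes:

    import numpy as np

    def wigner(psi, x, hbar=1.0):
        """W(x, p) of a 1D wavefunction sampled on the grid x (FFT over the chord)."""
        n, dx = x.size, x[1] - x[0]
        pad = np.concatenate([np.zeros(n, complex), psi.astype(complex),
                              np.zeros(n, complex)])
        j = np.arange(-n // 2, n // 2)        # half-chord index, y = j*dx
        W = np.zeros((n, n))
        for i in range(n):                    # W(x_i, p) from psi(x+y) psi*(x-y)
            corr = pad[n + i + j] * np.conj(pad[n + i - j])
            W[i] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr)))) \
                   * dx / (np.pi * hbar)
        p = np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
        return W, p

    # Schroedinger-cat-like state: two Gaussian lobes separated by 2a.
    x = np.linspace(-8.0, 8.0, 256)
    a = 3.0
    psi = np.exp(-(x - a) ** 2 / 2) + np.exp(-(x + a) ** 2 / 2)
    psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))
    W, p = wigner(psi, x)
    # Fringes at x = 0 oscillate in p with period ~ pi*hbar/a (chord interference).

Increasing the separation a shrinks the fringe wavelength, the same scaling that produces sub-Planck structure in extended states.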
The quantum KHO as a decohering environment

As an example of the utility and versatility of our approach, we measured the loss of coherence in a polarization qubit coupled to the quantum KHO, which acts as a decohering environment. By taking advantage of the fact that the SLM only imprints a phase on the horizontal polarization component, a polarization-dependent evolution, KHO or simple harmonic oscillator (SHO), is implemented in the spatial degrees of freedom (see Methods section). This corresponds to a dephasing-type interaction between qubit and environment. In this case, the off-diagonal elements of the qubit density matrix are suppressed by a factor f = |⟨Ψ(0)| (U_SHO^n)† U_KHO^n |Ψ(0)⟩|, where |Ψ(0)⟩ is the initial state of the environment. Thus, for an initial state of the qubit in the equatorial plane of the Bloch sphere, the temporal behavior of its purity is given by (1 + |f|²)/2. The quantity f can be seen as a fidelity amplitude, which has been extensively studied in the field of quantum chaos [34][35][36]. In general, theoretical studies predict an initial decay of f before saturation [34][35][36]. For fully chaotic underlying classical dynamics, f presents an exponential decay with different decay rates depending on the perturbation regime [34,35]. On the other hand, for regular dynamics, the decay of f is not generic and depends strongly on the localization of the initial state in the classical phase space [34,35]. For initial states well localized in a stability island, long-time oscillations with revivals are expected, where the oscillations can be understood in terms of the classical frequencies contained in |Ψ(0)⟩. The temporal mean value of f decreases with ℏ_eff, so, due to revivals, the size of the fluctuations increases in the semiclassical regime. When the underlying dynamics of the environment is chaotic, the temporal mean value of f and its fluctuations are inversely proportional to the effective Hilbert space dimension of the environment [36] and therefore tend to zero in the semiclassical limit (ℏ_eff → 0). In this limit the qubit becomes maximally entangled with the environment, so its purity → 1/2. In our experiment, an incoming beam was prepared in a linear diagonal polarization state. Figs. 4(a) and (b) show the purity of the polarization state as a function of the number of iterations of the KHO map. Fig. 4(a) is for essentially regular dynamics (K = 0.5), and Fig. 4(b) shows the case in which the KHO has chaotic dynamics (K = 2). The initial state |Ψ(0)⟩ is analogous to the one shown in Figs. 3(a) and (e), and in the case K = 0.5 it is localized in a stability island around the origin (not shown). Fig. 4(c) shows the purity for n = 3 kicks as a function of ℏ_eff for K = 2 (green triangles) and K = 0.5 (red diamonds). The dashed lines are the predictions given by numerical simulation of the composite system. Although the number of kicks is small, one observes a general behavior compatible with the theory described above. In the case of chaotic dynamics (K = 2), a rapid loss of purity occurs for all values of ℏ_eff, attaining saturation with very small fluctuations, indicating that the polarization state becomes nearly maximally entangled (purity = 1/2) for decreasing values of ℏ_eff (see Fig. 4(c)). Hence, the equilibrium state of the qubit is a totally mixed state. The total loss of coherence of the polarization state here is due to the underlying classical chaotic dynamics in the spatial degrees of freedom of the beam. On the other hand, for regular dynamics (K = 0.5), because |Ψ(0)⟩ is almost completely localized in a stability island, we observe what appears to be the beginning of oscillations for different values of ℏ_eff, with a revival of the polarization-state purity (for all values of ℏ_eff). The value of the purity at its minimum goes to 1/2 in the semiclassical limit (see Fig. 4(c)), indicating that the temporal mean value of f goes to zero when ℏ_eff → 0. This is compatible with the typical large fluctuations of the fidelity amplitude f in the semiclassical regime for the case of regular dynamics when the initial state is localized in a stability island.
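The qualitative behavior of f can be explored with a small numerical model. The sketch below (our own toy simulation, not the numerics behind the dashed lines of Fig. 4) evolves a vacuum state with the quantum KHO and with the bare SHO in a truncated Fock basis, using Q = √(ℏ_eff/2)(a + a†) and a kick operator of the assumed form exp[−i(K/ℏ_eff) cos Q]; all parameter values are illustrative:

    import numpy as np
    from numpy.linalg import eigh

    N = 300                      # Fock-space truncation (assumed large enough)
    hbar_eff = 0.5               # effective Planck constant (illustrative)
    K, alpha = 2.0, np.pi / 3    # kick strength and harmonic rotation

    n = np.arange(N)
    a = np.diag(np.sqrt(n[1:].astype(float)), k=1)   # annihilation operator
    Q = np.sqrt(hbar_eff / 2) * (a + a.T)            # dimensionless position
    w, V = eigh(Q)                                   # diagonalize Q for the kick
    U_kick = V @ np.diag(np.exp(-1j * K * np.cos(w) / hbar_eff)) @ V.conj().T
    R_alpha = np.diag(np.exp(-1j * alpha * (n + 0.5)))   # SHO step (rotation)
    U_kho = R_alpha @ U_kick

    psi0 = np.zeros(N, complex); psi0[0] = 1.0       # vacuum (Gaussian) state
    psi_kho, psi_sho = psi0.copy(), psi0.copy()
    for kick in range(10):
        psi_kho = U_kho @ psi_kho
        psi_sho = R_alpha @ psi_sho
        f = np.vdot(psi_sho, psi_kho)                # fidelity amplitude f_n
        print(kick + 1, abs(f), (1 + abs(f) ** 2) / 2)   # kick, |f|, qubit purity

Scanning K in this model reproduces the qualitative trends of Fig. 4: a rapid purity loss for chaotic dynamics and oscillatory behavior for regular dynamics with the state inside a stability island.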
Discussion

The optical KHO setup reported here can be used as a building block to implement a large number of iterations of the KHO operator. Fig. 5 illustrates an experimental scheme that can be used to implement N ≫ 1 kicks of the KHO. A pulse from a vertically polarized laser is reflected from a polarizing beam splitter (PBS), and the polarization is rotated to the horizontal direction by a Pockels cell (PC). The laser is sent to the SLM, and n iterations of the KHO operator are implemented, in the same manner as reported in the Results section. To minimize losses from multiple optical components, a single cylindrical mirror (CM) can be used in place of the cylindrical lens and plane mirror of Fig. 1. After n iterations, the output light is sent through a second PC, which can be used to switch the pulse out of the setup for measurement. The measurement system is the interferometer used for direct measurement of the spatial Wigner function. If the PC is left inactive, the pulse is reflected from the mirror and travels backwards through the KHO operation, resulting in another n iterations. The lenses are chosen with focal length equal to half the distance between the SLM and mirrors, so that two consecutive optical Fourier transforms are performed. The overall result is an imaging system, up to a reflection. In this way, the output state from each set of n iterations is mapped onto the input state for the next set of n iterations. Since the kick operator (3) and the harmonic evolution (4) are symmetric about the origin, the reflections can be absorbed into the definition of the coordinate system. The pulse is sent back to the first PC, which remains inactive, resulting in reflection of the pulse at the mirror and transmission back into the n KHO iterations. In this way, it is possible to implement a sequence of n × 2n × 2n × ··· kicks and switch the output state into the interferometer for measurement after the first n iterations, or at intervals of 2n after that. We note that the SLM allows us to control the number of kicks n per iteration by programming whether the kick phase or a quadratic phase is imprinted on the field. The quadratic phase can be used to implement or undo the harmonic evolution. The ultimate limit to the number of iterations that can be performed depends primarily upon the losses in the optical system. These arise predominantly from the SLM, lenses and mirrors used in the n iterations of the KHO. The total output intensity after n kicks of the KHO can be written as

I_out = t_o (t_l t_SLM)^n I_in,

where t_o is the combined transmission coefficient of the optical elements outside the KHO evolution, t_l is the combined transmission coefficient of the lenses and mirrors that implement the harmonic evolution between kicks, t_SLM is the transmission coefficient of the SLM, and I_in is the input intensity of the laser beam. In the Supplementary Discussion we estimate that with current technology it should be possible to perform about n ∼ 100 kicks.
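The scaling of this loss budget is easy to explore numerically. In the sketch below the transmission coefficients and the detection threshold are assumed, illustrative numbers (the actual device estimates are given in the Supplementary Discussion):

    # Rough loss budget for the scheme of Fig. 5.
    t_o, t_l, t_slm = 0.5, 0.95, 0.90   # assumed illustrative transmissions
    threshold = 1e-6                    # assumed minimum detectable I_out / I_in

    n = 0
    while t_o * (t_l * t_slm) ** (n + 1) > threshold:
        n += 1
    print("feasible number of kicks n ~", n)

With these assumed values the budget allows on the order of a hundred kicks, consistent with the n ∼ 100 estimate quoted above.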
In conclusion, our experiment allows for the study of the dynamics of a non-relativistic quantum system using an intense classical laser beam, due to the analogy between quantum mechanics and classical wave mechanics. A possible next step is to study the chaotic evolution of entangled photons. The optical realization of non-linear quantum dynamics should prove invaluable in the experimental investigation of quantum chaos, decoherence, and the quantum-classical boundary.

Experimental setup

The complete experimental setup is illustrated in Fig. 1(b). A 632.8 nm He-Ne laser is coupled to a single-mode optical fiber. This defines a Gaussian light beam as the initial state |ψ(0)⟩, which is then evolved by the quantum KHO propagator (2) in one dimension of the transverse spatial degrees of freedom. The state is sent through n iterations of the KHO operator U_KHO by reflecting n times between the SLM and a mirror. A polarizing beam splitter (PBS1) is used to polarize the beam parallel to the active axis of the SLM display. The harmonic evolution between kicks corresponds to propagation in the α-order FRFT system, which consists of free-space propagation and the cylindrical lens. For practical reasons, we actually implement two consecutive FRFTs of order α/2 before each incidence on the SLM. The focal length of the cylindrical lens is f = 150 mm, and the free-space propagation length is z = 75 mm, so that α = π/3 between two consecutive kicks (see Fig. 1(b)). Since the entire KHO evolution is made with optical elements that act only in one spatial dimension, the perpendicular direction evolves according to free-space propagation (see the Supplementary Discussion for details and a discussion of the accessibility of long-time dynamics). In order to measure the Wigner function, two spherical lenses (L1 and L2) of focal length f = 350 mm are used to map the output state of the KHO system |ψ(n)⟩ from the transverse plane at position z_0 to the input mirror of the Sagnac interferometer. A Dove prism (DP1) tilted at a 45° angle is used to swap the horizontal and vertical coordinates, because for convenience the quantum kicked Hamiltonian is implemented in the vertical axis, while the Wigner measurement is performed in the horizontal one. The second PBS (PBS2) and the first half-wave plate (HWP1) are used to keep the beam linearly polarized before entering the Sagnac interferometer.

Measurement of the optical Wigner function

The method used to directly measure the optical Wigner function is an interferometric scheme proposed in reference [30]. The interferometer is illustrated in Fig. 1. The displacement and tilting of a steering mirror (M1) at the entrance of a three-mirror Sagnac interferometer displace the optical field by Q and change its direction of propagation by P (which in the paraxial approximation corresponds to the addition of a phase). This produces the field exp(iPξ/2ℏ_eff)Ψ(Q + ξ/2, z). A polarizing beam splitter divides the field into two spatially identical components, and a Dove prism (DP2) placed inside the interferometer realizes opposite 90° spatial rotations in the two counter-propagating transverse spatial modes, resulting in a total relative rotation of 180°. The modes are recombined and projected onto the diagonal polarization direction before detection by an area-integrating "bucket" detector. The measured intensity is composed of three terms, I = I_1 + I_2 + I_int, where the sum I_1 + I_2 is constant and equals half the total input intensity. The term I_int originates from the interference between the counter-propagating beams. Due to the relative rotation implemented by DP2, I_int is proportional to

I_int ∝ ∫ dξ exp(iPξ/ℏ_eff) Ψ(Q + ξ/2, z) Ψ*(Q − ξ/2, z) ∝ W(Q, P).    (7)

The right-hand side of Eq. (7) is an integral over the spatial variable ξ of the overlap between the displaced field exp(iPξ/2ℏ_eff)Ψ(Q + ξ/2, z) and its complex conjugate with the transformation ξ → −ξ; this transformation corresponds to the 180° relative rotation implemented by DP2.
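Equation (7) can be verified numerically for a known test field. The sketch below (our own check, with ℏ_eff = 1 and a Gaussian field assumed) evaluates the interference integral by direct quadrature and compares its falloff with the expected Gaussian Wigner function:

    import numpy as np

    hbar_eff = 1.0
    x = np.linspace(-10, 10, 2001)
    psi = np.exp(-x ** 2 / 2) / np.pi ** 0.25      # Gaussian test field Psi(Q)

    def I_int(Q0, P0):
        # evaluate the integrand of Eq. (7) on the shifted arguments
        xi = x
        f_plus = np.interp(Q0 + xi / 2, x, psi, left=0.0, right=0.0)
        f_minus = np.interp(Q0 - xi / 2, x, psi, left=0.0, right=0.0)
        integrand = np.exp(1j * P0 * xi / hbar_eff) * f_plus * np.conj(f_minus)
        return np.trapz(integrand, xi).real

    # For the Gaussian, W(Q, P) is proportional to exp(-(Q^2 + P^2)); check:
    print(I_int(1.0, 0.0) / I_int(0.0, 0.0), np.exp(-1.0))

The printed ratio matches e^(−1), as expected for W(Q, P) ∝ exp[−(Q² + P²)/ℏ_eff].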
It is important to note that the slight polarization transformations introduced by the Dove prism [39] inside the Sagnac interferometer do not significantly alter the measured Wigner function. In this way, the amplitude of the optical Wigner function at the point (Q, P) can be obtained by measuring the intensity at the interferometer output for different settings of the tilt angle and displacement of the steering mirror M1. In our experiment these parameters were controlled with high-resolution motorized stages. At the exit of the interferometer we use a quarter-wave plate (QWP) that is tilted to correct the polarization aberrations introduced by DP2 inside the interferometer [39].

Analysis of the experimental data

The final state of the optical KHO is obtained at the output plane z_0, indicated in Fig. 1(b). However, the optical wavefunction propagates through free space and several linear optical elements before reaching the steering mirror at the entrance of the Sagnac interferometer, where the optical Wigner function is measured. One must therefore take this evolution into account before comparing measurements with the theoretical Wigner functions obtained from numerical calculation of the quantum KHO evolution. This is done by calculating the total linear transformation M resulting from propagation through all of the optical elements and free space between the output plane z_0 and the steering mirror shown in Fig. 1(b). The expected Wigner function in the measurement plane can be written as

W(x, p_x) = W_x^(KHO)(x', p'_x) W_y^(Gauss)(y', p'_y),

where (x', y', p'_x, p'_y)^T = M^(−1)(x, 0, p_x, 0)^T are the transformed coordinates. The function W_x^(KHO)(x, p_x) represents the Wigner function due to the KHO evolution of the initial Gaussian state, implemented in the x transverse spatial direction. The function W_y^(Gauss)(y, p_y) refers to the Gaussian state describing the y transverse spatial direction at plane z_0. This spatial mode does not undergo the KHO evolution. Experimental errors associated with the propagation through the optical elements between the output plane z_0 and the steering mirror result in scaling, skewing and rotation of the final measured Wigner function, W^(exp)(x, p_x), in comparison to W^(KHO)(x, p_x). These uncertainties are due principally to errors in lens placement, misalignment of the optical elements, and diffraction. Nevertheless, a single linear transformation E corrects these errors, such that W^(exp)(x, p_x) = W^(KHO)(E(x, p_x)). Both M and E depend only on the experimental setup, and are the same for all measurements of Wigner functions, including the case in which the spatial light modulator is turned off. In that case, the implemented evolution is that of a simple harmonic oscillator (SHO). With the KHO turned off, we determined the value of E, and used it to correct all of the KHO Wigner functions. This was repeated for each harmonic evolution used.
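The correction step W^(exp)(x, p_x) = W^(KHO)(E(x, p_x)) amounts to resampling a gridded Wigner map under a linear change of phase-space coordinates. A minimal sketch, with an assumed 2×2 matrix E standing in for the calibrated transformation:

    import numpy as np
    from scipy.ndimage import map_coordinates

    E = np.array([[1.05, 0.10],
                  [0.02, 0.95]])            # assumed scaling/skew correction

    def apply_linear(W, x, p, E):
        X, P = np.meshgrid(x, p, indexing="ij")
        Xn = E[0, 0] * X + E[0, 1] * P      # transformed phase-space coordinates
        Pn = E[1, 0] * X + E[1, 1] * P
        ix = (Xn - x[0]) / (x[1] - x[0])    # back to fractional grid indices
        ip = (Pn - p[0]) / (p[1] - p[0])
        return map_coordinates(W, [ix, ip], order=1, cval=0.0)

    x = np.linspace(-4, 4, 128)
    p = np.linspace(-4, 4, 128)
    W = np.exp(-(x[:, None] ** 2 + p[None, :] ** 2))   # stand-in Wigner map
    W_corr = apply_linear(W, x, p, E)

In the experiment E is calibrated with the kicks turned off (pure SHO evolution) and then applied to every measured map, as described above.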
Implementation of a dephasing-type decoherence channel

The spatial light modulator (SLM) imprints a phase on the horizontal polarization component of the light beam, but leaves the phase of the vertical polarization component unchanged. Without the SLM, the optical system is designed to implement the SHO evolution via the fractional-Fourier-transform systems composed of the cylindrical lens and free-space propagation. Therefore, if diagonally polarized light is used in the KHO optical setup, one obtains the transformation

|+⟩|Ψ⟩ → (1/√2)(|H⟩ U_KHO|Ψ⟩ + |V⟩ U_SHO|Ψ⟩),    (9)

where |+⟩ = (|H⟩ + |V⟩)/√2 and |Ψ⟩ designates the transverse spatial mode of the light beam. The evolution operators U_KHO and U_SHO act on the spatial degree of freedom, and correspond to the evolution of the KHO and the SHO, respectively. The total evolution given in Eq. (9) can be interpreted as the quantum evolution of a qubit interacting (at the instant of each kick) with a quantum KHO via a dephasing-type coupling of the form −(s − s_H) K cos Q/(s_V − s_H) |H⟩⟨H|, where s_H and s_V denote the linear polarization eigenvalues of |H⟩ and |V⟩, respectively, and s = s_H or s = s_V. Performing a polarization measurement of the output beam using a "bucket" detector is equivalent to tracing over the spatial degree of freedom, and yields

ρ = (1/2)(|H⟩⟨H| + |V⟩⟨V| + f|H⟩⟨V| + f*|V⟩⟨H|),    (10)

where f = ⟨Ψ|U†_SHO U_KHO|Ψ⟩ is the overlap between the SHO- and KHO-evolved states. The purity of the polarization state is given by Tr ρ² = (1 + |f|²)/2, and decays with |f| [36]. We use wave plates and a polarizing beam splitter to perform quantum polarization-state tomography following the standard recipe [37,38] to obtain the density matrix ρ, from which we calculate the results shown in Fig. 4.
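The final step, polarization-state tomography, follows the standard single-qubit recipe of [37,38]: the density matrix is assembled from the Stokes parameters, which in the laboratory come from projective intensity measurements. A minimal sketch (with an assumed value of f used to generate the "data"):

    import numpy as np

    # Build the dephased state of Eq. (10) for an assumed fidelity amplitude f.
    f = 0.4 * np.exp(1j * 0.7)
    rho_true = 0.5 * np.array([[1, f], [np.conj(f), 1]])

    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])

    # Stokes parameters S_i = Tr(rho sigma_i); in the lab these come from
    # measured intensities, e.g. S_1 = (I_D - I_A) / (I_D + I_A).
    S = [np.trace(rho_true @ s).real for s in (sx, sy, sz)]
    rho = 0.5 * (np.eye(2) + S[0] * sx + S[1] * sy + S[2] * sz)
    print(np.trace(rho @ rho).real, (1 + abs(f) ** 2) / 2)   # purity check

The reconstructed purity equals (1 + |f|²)/2, which is the quantity plotted in Fig. 4.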