Pathogenicity and Function Analysis of Two Novel SLC4A11 Variants in Patients With Congenital Hereditary Endothelial Dystrophy
Purpose: The purpose of this study was to explore the pathogenicity and function of two novel SLC4A11 variants associated with congenital hereditary endothelial dystrophy (CHED) and to study the function of a SLC4A11 (K263R) mutant in vitro.
Methods: Ophthalmic examinations were performed on a 28-year-old male proband with CHED. Whole-exome and Sanger sequencing were applied for mutation screening. Bioinformatics and pathogenicity analyses were performed. HEK293T cells were transfected with plasmids carrying the empty vector, wild-type SLC4A11, or the SLC4A11 (K263R) mutant. The transfected cells were treated with SkQ1. Oxygen consumption, cellular reactive oxygen species (ROS) levels, mitochondrial membrane potential, and apoptosis rate were measured.
Results: The proband had poor visual acuity with nystagmus since childhood. Corneal foggy opacity was evident in both eyes. Two novel SLC4A11 variants were detected. Sanger sequencing showed that the proband's father and sister carried the c.1464-1G>T variant, and the proband's mother and sister carried the c.788A>G (p.Lys263Arg) variant. Based on the American College of Medical Genetics (ACMG) guidelines, SLC4A11 c.1464-1G>T was pathogenic, whereas c.788A>G, p.K263R was a variant of undetermined significance. In vitro, the SLC4A11 (K263R) variant increased the ROS level and apoptosis rate, with remarkable decreases in mitochondrial membrane potential and oxygen consumption rate. Furthermore, SkQ1 decreased ROS levels and the apoptosis rate but increased mitochondrial membrane potential in the transfected cells.
Conclusions: Two novel heterozygous pathogenic variants of the SLC4A11 gene were identified in a family with CHED. The missense variant SLC4A11 (K263R) caused mitochondrial dysfunction and increased apoptosis in mutant-transfected cells. In addition, SkQ1 presented a protective effect, suggesting that this antioxidant might be a novel therapeutic drug.
Translational Relevance: This study verified the pathogenicity of two novel variants in the SLC4A11 gene in a CHED family and found that an antioxidant might be a new drug.
Introduction
Congenital hereditary endothelial dystrophy (CHED; MIM # 217700) is an inheritable disorder of the corneal endothelium that causes bilateral, symmetric, non-inflammatory corneal clouding (edema) at birth or soon after. Patients with CHED present with involuntary eye movements (nystagmus) and decreased vision. 1,2 CHED results in degeneration and dysfunction of the endothelial cells of the cornea and in corneal thickening, which can reach two to three times the normal thickness. CHED is characterized by thickening of the Descemet's membrane and stromal layers, severe disorganization, and destruction of the stromal structure. 1,3 Some cases of CHED may progress to Harboyan syndrome (CDPD, MIM # 217400), which manifests as CHED with sensorineural hearing loss. 4 Corneal transplantation is currently an effective treatment for CHED.
According to the International Classification of Corneal Dystrophies as updated in 2015, CHED refers only to the more severe phenotype of autosomal recessive CHED (originally CHED2). 5 SLC4A11, a causative gene of CHED, is located on the short arm of human chromosome 20. The SLC4A11 gene has 19 exons, all of which are coding, encoding 891 amino acids. 2 The SLC4A11 protein is widely expressed in the thyroid, trachea, cornea, kidney, salivary gland, and other tissues. 6 SLC4A11 was identified as a novel electrogenic NH3+/H+ co-transporter protein, 7 which plays an irreplaceable role in the survival, growth, and proliferation of corneal endothelial cells. 8,9 Ogando et al. found that SLC4A11 is localized in the inner mitochondrial membrane and that activated SLC4A11 is a mitochondrial uncoupler that can regulate mitochondrial membrane potential (MMP) and reactive oxygen species (ROS) levels. 10 It has been confirmed that SLC4A11 mutant cells are more sensitive to oxidative stress-mediated damage. 11 Loss of SLC4A11 activity induces oxidative stress and cell death, leading to CHED, corneal edema, and vision loss. 10 Han et al. constructed an Slc4a11 knockout mouse model that exhibited characteristic morphological changes of CHED, confirming that deletion of the SLC4A11 gene could lead to progressive cell damage and apoptosis of corneal endothelial cells. 12 In addition, Liu et al. found that the proliferation of human corneal endothelial cells (HCECs) was inhibited after knocking down SLC4A11 using short hairpin RNA (shRNA), further confirming that endothelial cell loss was associated with increased cell death caused by activation of apoptotic pathways. 13 However, the exact mechanism of CHED remains unclear.
So far, 127 mutations in the SLC4A11 gene have been reported, including 90 missense/nonsense mutations, 8 splicing mutations, 4 gross deletions, 16 small deletions, 2 small insertions, and 7 small indels (https://www.hgmd.cf.ac.uk/ac/gene.php?gene=SLC4A11). SLC4A11 plays an important role in corneal function. 14-17 SLC4A11 mutation-related CHED has rarely been reported in China. In this study, we identified a Chinese family with CHED. Genetic analysis of the target region by high-throughput sequencing revealed two novel pathogenic variants of SLC4A11, which expanded the mutation spectrum of SLC4A11. We investigated the effect of the K263R variant on mitochondrial function and apoptosis in cells transfected with the mutated SLC4A11 gene. Furthermore, we tested whether SkQ1, a mitochondria-targeted antioxidant, could protect the mutant gene-transfected cells.
Clinical Examinations
This study was carried out in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Henan Eye Hospital. After the risks and benefits of the study were explained in detail, all participants agreed to participate and signed an informed consent form. The proband, a 28-year-old man, underwent ophthalmic examinations, including best corrected visual acuity (BCVA), slit-lamp microscopy, and swept-source optical coherence tomography (SS-OCT; VG200D, SVision Imaging, Luoyang, Henan, China).
Cell Culture and Transfection
The HEK293T cell line was purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured in high-glucose DMEM containing 10% fetal bovine serum (35-081-CV; Corning, NY, USA) and 100 U per mL of penicillin/streptomycin (32105; Mengbio, Chongqing, China), and placed in a 37°C incubator with 5% CO2. 18,19 Mycoplasma contamination in HEK293T cells was excluded using a detection kit (AC16L061; Shanghai Life-iLab Biotech, Shanghai, China). 20 Transfection of HEK293T cells was performed with EZ Trans Cell Transfection Reagent (AC04L092; Shanghai Life-iLab Biotech, Shanghai, China) at a reagent-to-DNA ratio of 3:1 according to the manufacturer's instructions.
Western Blotting
HEK293T cells were seeded into 6-well plates and cultured in complete DMEM for 24 hours after transfection with the plasmids. The cells were lysed in RIPA lysis buffer (PC101; Epizyme Biomedical Technology, Shanghai, China) containing 1X general protease inhibitor (GRF101; Epizyme Biomedical Technology, Shanghai, China). To achieve complete lysis, the lysates were further treated with a sonicator (ZQ-650Y; Shanghai Zhengqiao Scientific Instruments, Shanghai, China). After centrifugation at 12,000 rpm for 30 minutes, supernatants were collected, and a BCA Protein Assay Kit (P0011; Beyotime, Shanghai, China) was used to determine protein concentrations. After mixing with 5X loading buffer, samples were boiled for 5 minutes at 95°C. Protein samples were resolved on 7.5% SDS-PAGE gels and transferred to a PVDF membrane. The membrane was blocked with 5% skimmed milk powder at room temperature for 2 hours, then incubated overnight at 4°C with specific primary antibodies against SLC4A11 (ER62838, HUABIO, Hangzhou, China) and β-actin (200068-8F10, ZENBIO, Chengdu, China). The membrane was washed 3 times with TBST, then incubated with secondary antibodies at room temperature for 2 hours. Finally, the membrane was developed by enhanced chemiluminescence using a chemiluminescence apparatus (GelView6000Plus, Guangzhou Biolight Biotechnology, Guangzhou, China). 20,22
ROS Measurement
Intracellular reactive oxygen species levels were measured via changes in the fluorescence intensity of the fluorescent dye DCFH-DA (2,7-dichlorodihydrofluorescein diacetate, S0033; Beyotime, Shanghai, China). According to the manufacturer's instructions, the detection working solution was obtained by diluting DCFH-DA 1:1000 in serum-free basal medium to a final concentration of 10 μM. HEK293T cells were seeded into 12-well plates and cultured in complete DMEM for 48 hours after transfection with the plasmids. The cells were washed twice with serum-free basal medium, covered with the detection working solution, and incubated at 37°C for 30 minutes in the dark. After washing three times with serum-free basal medium, the cells were quickly observed with a fluorescence microscope (Ex/Em = 488/525 nm). Image fluorescence intensity was analyzed using ImageJ software (version 1.52a; National Institutes of Health [NIH]). 22,23
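For readers who prefer scripting the quantification, the following is a minimal sketch of an ImageJ-style mean-intensity readout using scikit-image; the file names and group labels are hypothetical, and this is not the analysis pipeline used by the authors.

```python
# Minimal sketch of DCFH-DA fluorescence quantification, mimicking the
# ImageJ analysis described above. File names are hypothetical.
import numpy as np
from skimage import io, filters

def mean_ros_intensity(path):
    """Mean green-channel fluorescence above an Otsu background threshold."""
    img = io.imread(path).astype(float)
    green = img[..., 1] if img.ndim == 3 else img  # DCF emits ~525 nm (green)
    thresh = filters.threshold_otsu(green)         # separate cells from background
    cells = green[green > thresh]
    return cells.mean() if cells.size else 0.0

for group in ["vector", "wild_type", "K263R"]:
    vals = [mean_ros_intensity(f"{group}_{i}.tif") for i in range(1, 4)]
    print(group, np.mean(vals), np.std(vals, ddof=1))
```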
Detection of Mitochondrial Membrane Potential
The mitochondrial membrane potential assay was performed using the JC-1 assay kit (C2006; Beyotime, Shanghai, China). 22,23 Briefly, HEK293T cells were seeded into 12-well plates and cultured in complete DMEM for 48 hours after transfection with the plasmids. The cells were digested with EDTA-free trypsin, collected in centrifuge tubes, centrifuged, and incubated with JC-1 working solution for 20 minutes at 37°C in a cell incubator protected from light. The cells were then washed twice with JC-1 staining buffer (1X), and mitochondrial membrane potential was detected by flow cytometry and expressed as the ratio of JC-1 aggregates to monomer. 22,23
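The readout reduces to simple arithmetic on the two fluorescence channels. Below is a toy computation of the JC-1 aggregate-to-monomer ratio with placeholder median fluorescence intensities (a depolarized sample, as expected for K263R, shows a lower ratio):

```python
# JC-1 ratio: aggregates (red, ~590 nm) over monomers (green, ~529 nm).
# Median fluorescence intensities per group are placeholders.
samples = {
    "vector":    {"red": 850.0, "green": 300.0},
    "wild_type": {"red": 900.0, "green": 280.0},
    "K263R":     {"red": 320.0, "green": 610.0},  # depolarized: ratio drops
}
for name, mfi in samples.items():
    print(f"{name}: JC-1 ratio = {mfi['red'] / mfi['green']:.2f}")
```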
Detection of Apoptosis
Apoptosis was detected using the PE Annexin V Apoptosis Detection Kit I (559763; BD Biosciences, Franklin Lakes, NJ, USA). The 10X binding buffer was diluted to 1X with pure water. HEK293T cells were seeded in 12-well plates and cultured in complete DMEM for 48 hours after transfection with the plasmids. The cells were digested with EDTA-free trypsin and collected into centrifuge tubes. The cells were washed twice with cold PBS and resuspended in 100 μL of 1X binding buffer. Then, 5 μL of PE-annexin V and 5 μL of 7-AAD were added and gently mixed. The cells were incubated at room temperature for 15 minutes, protected from light. Then, 400 μL of 1X binding buffer was added, and the apoptosis rate was detected by flow cytometry. 22
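Conceptually, the Q2 + Q4 readout is a quadrant-gating computation. The sketch below gates simulated PE-annexin V/7-AAD intensities with fixed thresholds; real gating is done in instrument software on compensated data, and all numbers here are placeholders.

```python
# Toy gating of PE-annexin V / 7-AAD events into quadrants.
# Q2 = annexin+/7-AAD+ (late apoptotic), Q4 = annexin+/7-AAD- (early apoptotic).
import numpy as np

rng = np.random.default_rng(0)
# Simulated log-scale fluorescence for 10,000 events (placeholder data).
annexin = rng.normal(2.0, 1.0, 10_000)
aad7 = rng.normal(1.5, 1.0, 10_000)

ANNEXIN_GATE, AAD_GATE = 3.0, 3.0  # placeholder gate positions

q2 = np.mean((annexin > ANNEXIN_GATE) & (aad7 > AAD_GATE))   # late apoptosis
q4 = np.mean((annexin > ANNEXIN_GATE) & (aad7 <= AAD_GATE))  # early apoptosis
print(f"apoptosis rate (Q2 + Q4): {100 * (q2 + q4):.1f}%")
```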
Determination of Oxygen Consumption Rate
The cellular oxygen consumption rate, which reflects the function of the respiratory chain, was measured with an XFe analyzer (XFe96; Agilent Seahorse Technologies, Palo Alto, CA, USA). 22 HEK293T cells were seeded at a density of 10,000 cells per well in XF 96-well plates (102601-100; Agilent Seahorse Technologies, Palo Alto, CA, USA), left to stand for 1 hour, and incubated overnight. Pyruvate (1 mM), glutamine (2 mM), and glucose (10 mM) were added to XF basal medium (103334-100; Agilent Seahorse Technologies), mixed well, and preheated at 37°C. The XF 96-well plates were then moved to a CO2-free incubator at 37°C for 60 minutes. The XF Cell Mito Stress Test Kit (103015-100; Agilent Seahorse Technologies) was used to test the respiratory function and mitochondrial metabolism of the cells. The diluted reagents oligomycin (15 μM), FCCP (5 μM), and rotenone/antimycin A (5 μM) were loaded into the injection ports, and the plate was placed in the Seahorse XFe analyzer for testing. Finally, the protein concentration of each well was measured and used for normalization. The results were analyzed using Wave 2.6.3 software. 22
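The derived parameters reported below follow the standard Mito Stress Test arithmetic: non-mitochondrial respiration (after rotenone/antimycin A) is subtracted from the baseline and FCCP readings, and ATP-linked respiration is the drop after oligomycin. A sketch with made-up OCR readings for a single well:

```python
# Derived Seahorse Mito Stress Test parameters from one well's OCR trace.
# Readings (pmol O2/min) are illustrative and assumed protein-normalized.
ocr = {
    "baseline":   [120.0, 118.0, 121.0],  # before any injection
    "oligomycin": [45.0, 42.0, 43.0],     # ATP synthase inhibited
    "fccp":       [210.0, 205.0, 198.0],  # uncoupled, maximal respiration
    "rot_aa":     [20.0, 19.0, 21.0],     # non-mitochondrial respiration
}

non_mito = min(ocr["rot_aa"])
basal = ocr["baseline"][-1] - non_mito
atp_linked = ocr["baseline"][-1] - min(ocr["oligomycin"])
maximal = max(ocr["fccp"]) - non_mito

print(f"basal OCR: {basal:.0f}, ATP-linked: {atp_linked:.0f}, maximal: {maximal:.0f}")
```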
Statistical Analysis
Data were collected from 3 independent experiments and analyzed by one-way ANOVA followed by Bonferroni correction using GraphPad Prism version 7.0 (GraphPad, La Jolla, CA, USA). A P value < 0.05 was considered statistically significant. All quantitative data are displayed as mean ± standard deviation (SD).
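The same analysis can be reproduced outside Prism; a minimal sketch with SciPy, using placeholder replicate values (n = 3 per group):

```python
# One-way ANOVA with Bonferroni-corrected pairwise comparisons.
from itertools import combinations
from scipy import stats

groups = {  # placeholder replicate values (n = 3 per group)
    "vector":    [1.00, 1.05, 0.95],
    "wild_type": [0.98, 1.02, 1.01],
    "K263R":     [1.80, 1.95, 1.88],
}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p_pair = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(1.0, p_pair * len(pairs))  # Bonferroni correction
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```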
Clinical Features
The proband (II-1), a 28-year-old man, presented with binocular poor vision and nystagmus since childhood. The BCVA was 0.05 for the left eye and 0.1 for the right eye. Slit-lamp microscopy showed that the corneas of both eyes were foggy and edematous. The depth of the anterior chamber was moderate, and the pupil was round with a diameter of 3 mm. The remaining ocular structures could not be seen clearly. In the left eye, several vesicles were seen in the central cornea, and corneal leucoma was seen in the inferior temporal region (Fig. 1B). SS-OCT showed that the signal of the corneal stroma of the right eye was uneven. A low-reflection cavity was visible under the corneal epithelium of the left eye, with enhanced light reflection of the posterior corneal tissue (white arrow, Fig. 1C). Neither the father (I-1) nor the mother (I-2) of the proband had ocular signs, although the younger sister (II-2) of the proband had similar clinical manifestations.
Genetic Test Results
After sequencing, two new heterozygous variants of the SLC4A11 gene were detected, as shown in Figure 2A. The variant SLC4A11 c.1464-1G>T was a new splicing mutation located at chr20:3210907. It was not listed in gnomAD_exome (EAS), ExAC (EAS), 1000 Genomes, or other databases. Mutation Taster and CADD predicted that it was harmful. We used SpliceAI software (https://spliceailookup.broadinstitute.org/) to predict the damage of the splice mutation; the results showed that it caused acceptor loss with a prediction score of 0.99, indicating that the splice mutation was harmful. As a membrane transporter, the SLC4A11 protein may mediate signal transduction in the manner of a receptor, and mutation may lead to loss of this function. The variant SLC4A11 c.788A>G, p.K263R was a novel variant, which was not reported in gnomAD_exome (EAS), ExAC (EAS), 1000 Genomes, or other databases. Mutation Taster and CADD predicted that it was deleterious, and PolyPhen-2 predicted that it was probably damaging. However, SIFT predicted it was benign. GERP++ showed that the encoded amino acid position is highly conserved. We compared the protein sequences between different species with Clustal Omega (Fig. 2B) and analyzed them with WebLogo (Fig. 2C). The variant site was highly conserved. According to the ACMG guidelines, the variant was a variant of undetermined significance: PM3 + PM2_Supporting + PP3.
Protein Structure and Function Prediction of K263R
We used HOPE (https://www3.cmbi.umcn.nl/hope/) and Expasy software (https://www.expasy.org/) to predict the protein structure and used PyMOL to build 3D structural models of the SLC4A11 (K263R) protein (Fig. 3B). Figure 3A shows schematic structures of the original and the mutant amino acid. The backbone, which is the same for each amino acid, is colored red; the side chain, unique for each amino acid, is colored black. The mutant residue is bigger than the wild-type residue after the change of lysine to arginine at position 263. The wild-type residue was predicted to be located in an α-helix, whereas the mutation replaces it with a residue that does not prefer α-helices as a secondary structure. The mutant was predicted to be less stable than the wild type.
SLC4A11 Protein Expression Levels
To explore the expression levels of SLC4A11 protein, the cDNAs of wild-type SLC4A11 and the SLC4A11 (K263R) variant were subcloned into pcDNA3.1(+) expression vectors, transfected into HEK293T cells, and cultured for 24 hours before the Western blotting assay. The results showed higher SLC4A11 protein expression in the wild-type group and the K263R group after plasmid transfection.
K263R Mutation Caused Mitochondrial Dysfunction
To investigate whether K263R affected mitochondrial function, the cDNAs of wild-type SLC4A11 and the SLC4A11 (K263R) variant were subcloned into pcDNA3.1(+) expression vectors and transfected into HEK293T cells. After 48 hours of incubation, we used the Seahorse XFe analyzer to measure the oxygen consumption rate (OCR), maximum OCR, and ATP production of the cells. The results showed that the basal OCR, maximum OCR, and ATP production values of the K263R cells were significantly lower than those of the empty vector cells and the wild-type cells (Fig. 4B).
To further verify the mitochondrial function, we measured intracellular ROS and mitochondrial membrane potential. The DCFH-DA fluorescent probe was used to evaluate intracellular ROS levels. Fluorescent images showed that ROS-positive cells in the K263R group were significantly more numerous than those in the empty vector group and the wild-type group (Fig. 5A), indicating that the K263R cells were in a state of higher oxidative stress.
Mitochondrial membrane potential is an important indicator of mitochondrial function. The JC-1 assay showed that the K263R group had higher green fluorescence intensity, whereas its red fluorescence intensity was much weaker than that of the empty vector and wild-type groups (Fig. 5B), indicating that the mitochondrial membrane potential decreased significantly. These results further proved that K263R caused mitochondrial dysfunction.
K263R Mutation Promoted Apoptosis
To explore whether K263R affected cell fate, apoptosis was detected by flow cytometry 48 hours after plasmid transfection. The results showed that, compared with the empty vector cells and the wild-type cells, the apoptosis rate (Q2 + Q4) of the K263R cells was significantly increased (Fig. 5C).
SkQ1 Improved Mitochondrial Function and Inhibited Apoptosis
Transfected HEK293T cells were treated with SkQ1 (50 nM, HY-100474; MedChemExpress, Monmouth Junction, NJ, USA) for 6 hours and then incubated for another 48 hours. We measured the ROS level, mitochondrial membrane potential, and apoptosis rate. The results showed that SkQ1 treatment significantly decreased the ROS level and apoptosis rate but significantly increased the mitochondrial membrane potential in the K263R cells (Figs. 5A-C). The data suggested that SkQ1 might reverse the oxidative stress-related cell damage caused by the K263R variant.
Discussion
The cornea is a transparent membrane located in the anterior wall of the eyeball. It is divided into five layers from front to back: epithelium, Bowman's membrane, stroma, Descemet's membrane, and endothelium. 24 The mitochondria, as the powerhouses of cells, are responsible for energy production in the form of ATP. 25 Mitochondrial density in the corneal endothelium is very high, second only to that of photoreceptors in humans, ensuring production of the large amounts of ATP required to sustain Na+/K+-ATPase pump function. 26 The electron transport chain and oxidative phosphorylation (OXPHOS) system in the mitochondria are the principal sources of endogenous oxidative stress. 27 Oxidative stress, an imbalance between ROS production and scavenging in cells, plays an important role in the degeneration and apoptosis of human corneal endothelial cells.
Many corneal dystrophies, such as CHED and FECD, are associated with oxidative stress. 28 CHED mainly manifests as corneal endothelial hypoplasia or corneal endothelial cell degeneration and hypofunction. The clinical features are diffuse corneal edema and opacity in both eyes. Varying degrees of visual impairment occur early in life, seriously affecting the visual development of patients and resulting in nystagmus or amblyopia. In this study, the proband had poor visual acuity with nystagmus since childhood. Corneal foggy opacity in both eyes and several vesicle-like bulges in the center of the cornea of the left eye were evident (Fig. 1B). SS-OCT showed that the light reflection signal of the corneal stroma of the right eye was uneven, and a low-reflection cavity was visible under the corneal epithelium of the left eye, with enhanced light reflection of the posterior corneal tissue (Fig. 1C).
SLC4A11 is a dimer present in the plasma membrane, consisting of an NH2-terminal cytoplasmic domain of 374 amino acids and an integral membrane domain of 517 amino acids. The membrane domain of the SLC4A11 protein contains a total of 14 transmembrane regions. 11,29 SLC4A11 belongs to the SLC4 bicarbonate transporter family; 30 however, SLC4A11 does not have bicarbonate transport activity. 31,32 SLC4A11 was identified as a novel electrogenic NH3+/H+ co-transporter protein 7 and is basolateral in corneal endothelial cells. 33 Mutations in SLC4A11 can cause congenital hereditary endothelial dystrophy, Harboyan syndrome, Fuchs endothelial corneal dystrophy, and other diseases. 14,15,17 Loss of function of SLC4A11 is considered to be an important factor in the death of corneal endothelial cells. 13 In this study, two novel heterozygous mutations, SLC4A11 c.1464-1G>T and SLC4A11 c.788A>G, p.K263R, were detected. SLC4A11 c.1464-1G>T was a splicing mutation, which was predicted to be harmful. SLC4A11 c.788A>G, p.K263R is a novel missense mutation. According to the ACMG guidelines, the variant was of undetermined significance. Therefore, we focused on exploring whether SLC4A11 c.788A>G, p.K263R is a pathogenic variant. HEK293 cells carry out all human post-translational modifications and are capable of producing proteins similar to those naturally synthesized in humans. 34 Therefore, we used HEK293T cells to determine the pathogenicity of SLC4A11 c.788A>G, p.K263R and to explore the pathogenic mechanisms.
Under physiological conditions, mitochondria are a significant source of ROS, which are involved in both free radical metabolism and energy metabolism. 35,36 Low concentrations of ROS are necessary for cell function, but excessive ROS produce oxidative stress, leading to cell damage and apoptosis. 37,38 The basic characteristics of mitochondrial dysfunction are disorders of fundamental mitochondrial functions, such as bioenergetics, antioxidation, and regulation, which usually lead to decline of electron transport chain function, decreased ATP production, decreased mitochondrial membrane potential, and increased ROS production, eventually resulting in cell death. A growing body of evidence supports a strong link between mitochondrial dysfunction and ocular diseases. 39 It was reported that mutations in SLC4A11 affected mitochondrial activity, leading to corneal endothelial dysfunction. 11 We hypothesized that the SLC4A11 (K263R) mutation might also affect the cellular response to oxidative stress and lead to mitochondrial dysfunction. We found that the SLC4A11 (K263R) mutant increased ROS production, decreased mitochondrial membrane potential, and decreased the cellular oxygen consumption rate, but increased the apoptosis rate (Fig. 4B, Figs. 5A-C). Our data suggested that SLC4A11 c.788A>G, p.K263R is a pathogenic variant. Our study expanded the genetic mutation spectrum of SLC4A11 and provided a new reference for the molecular diagnosis of CHED. Although HEK293T cells do not fully represent the specific pathogenesis of CHED, they might still be a valuable model for studying this disease.
With reference to the previously reported topology model, 15 we found that amino acid 263 is located in the N-terminal domain. Comparison of protein sequences between different species showed that this amino acid is conserved, indicating that it may have an important function. Kodaganur et al. reported the pathogenic mutation p.Thr262Ile, but the exact pathogenic mechanism remains unclear. 40 Roy et al. investigated the physiological roles of SLC4A11 and 4 mutants associated with CHED2 (S213L, R233C, G418D, and T584K). They found that cells containing mutant SLC4A11 were more susceptible to oxidative stress and mitochondrial damage, and prone to apoptosis. 11 These findings are similar to ours.
SkQ1 is a derivative of plastoquinone, which is not only oxidized by ROS but also subsequently reduced by the charged electron transport chain in the mitochondria. This feature makes such antioxidants "rechargeable" compared with many other mitochondria-targeted antioxidants. 41 Because SkQ1 selectively accumulates in the inner mitochondrial membrane, it can work at very low concentrations, which means that SkQ1, as a potential drug candidate, can reduce the risk of side effects. 42-48 Evidence from clinical trials indicated the safety, tolerability, and efficacy of SkQ1 and suggested it may be a promising treatment for dry eye disease. 44 We used SkQ1 to protect against the mitochondrial dysfunction and apoptosis caused by the SLC4A11 mutation. The results showed that SkQ1 improved mitochondrial function and decreased the apoptosis rate. Thus, SkQ1 might be used for treating CHED, provided more favorable preclinical data are accumulated.
In conclusion, our results broadened both the genotype and the phenotype of SLC4A11-associated CHED. We identified two novel pathogenic variants (c.1464-1G>T and c.788A>G, p.K263R) of SLC4A11 in a Chinese family with CHED. We found that the mutant SLC4A11 (K263R) caused mitochondrial dysfunction and increased the apoptosis rate. In addition, SkQ1 could protect against the mitochondrial dysfunction and apoptosis caused by the SLC4A11 mutation, suggesting that SkQ1 might be a promising therapy for this hereditary corneal disorder.
Figure 1.
Figure 1. Pedigree and clinical examinations of the proband with congenital hereditary endothelial dystrophy (CHED). (A) Pedigree of the family; the proband is indicated by an arrow. (B) Slit-lamp microscopy photograph of the proband. (C) Swept-source optical coherence tomography (SS-OCT) image of the corneas.
Figure 2.
Figure 2. Two novel SLC4A11 variants (c.1464-1G>T and c.788A>G, p.K263R) were identified in the CHED family. (A) Confirmatory Sanger sequencing. (B) Multiple sequence alignment of the p.Lys263 locus among different species using Clustal Omega. (C) Conservation of the p.Lys263 locus analyzed using WebLogo. The horizontal coordinate represents the position of the amino acid and the vertical coordinate its conservation; the taller the letter, the stronger the conservation.
Figure 3.
Figure 3. Tertiary structure prediction of wild-type and mutant proteins. (A) Schematic diagram of the structure of the original amino acid (left) and the mutant amino acid (right). (B) Three-dimensional (3D) protein structure comparison of SLC4A11 wild type (left) with c.788A>G, p.K263R (right).
Figure 4.
Figure 4. K263R caused dysfunction of the mitochondrial respiratory system. (A) Protein expression levels of SLC4A11 detected by Western blotting. (B) Cellular oxygen consumption rate measured by the XFe analyzer. (C) Bar charts showing basal respiratory capacity, maximum respiratory capacity, and ATP production (values are expressed as mean ± SD; *P < 0.05, **P < 0.01, ***P < 0.001; n = 3).
Figure 5.
Figure 5. K263R caused impairment of mitochondrial function and apoptosis. (A) Detection of mitochondrial membrane potential using JC-1 staining. JC-1 aggregates produce red fluorescence, and the JC-1 monomer produces green fluorescence; the ratio of JC-1 aggregates to monomer represents the mitochondrial membrane potential. (B) ROS levels measured in HEK293T cells. (C) Flow cytometry used to quantitatively detect cell apoptosis. Q2 represents late apoptotic cells, and Q4 represents early apoptotic cells; the value of Q2 + Q4 represents the degree of apoptosis (values are expressed as mean ± SD; *P < 0.05, **P < 0.01, ***P < 0.001; n = 3).
Dynamics of Non-Autonomous Oscillator with a Controlled Phase and Frequency of External Forcing
The dynamics of a non-autonomous oscillator in which the phase and frequency of the external force depend on a dynamical variable is studied. Such control of the phase and frequency of the external force leads to the appearance of complex chaotic dynamics in the behavior of the oscillator. A hierarchy of various periodic and chaotic oscillations is observed. The paper studies the structure of the space of control parameters. It is shown that the dynamics of the system contains oscillatory modes similar to those of a non-autonomous oscillator with a potential in the form of a periodic function, but there are also significant differences.
Introduction.
Many systems, including radiophysical, biological, and others, exhibit oscillatory processes in which one system acts on another with a periodic signal, but the frequency of forcing changes when the operating conditions change. For example, to ensure high stability in information transmission systems, the so-called phase-locked loop is used [1][2][3]. The cardiovascular regulation system of living organisms increases or decreases the heart rate with a change in load [4][5][6][7][8]. In such interactions, the dependence of the phase or frequency on a dynamical variable can lead to the appearance of complex dynamics in the system. The control process in this case is very complex, and its investigation and modeling encounter a number of difficulties. One approach to the study of such systems and processes is the consideration of simpler objects in which the excitation of oscillations and frequency control are rather easily modeled. As such a system, it is convenient to use the classical model of the theory of oscillations: a linear oscillator under external harmonic forcing.
In the framework of this work, a study of the dynamics of a non-autonomous oscillator with a controlled phase and frequency of the external force is presented. The structure of the space of control parameters is investigated, and the role of the parameters is determined. The work is structured as follows. Section 2 describes the object of study: a linear oscillator under an external periodic force whose frequency and phase depend on a dynamical variable, which leads to the appearance of nonlinearity in the system. Section 3 discusses in detail the dynamics of the oscillator whose forcing phase depends on the dynamical variable. Section 4 presents the study of an oscillator with a forcing frequency depending on the dynamical variable.
2. Object of study: a linear oscillator under external periodic influence and an oscillator with a controlled phase and frequency.
As the simplest object of study, we choose the classical model of the theory of oscillations [9,10], an RLC circuit excited by an external signal, which is written in the following form:

$$\ddot{x} + \alpha\dot{x} + \omega_0^2 x = A\sin(\omega t + \varphi), \qquad (1)$$

where x, ẋ are the dynamic variables, α is the dissipation coefficient, ω₀ is the natural frequency of the oscillations of the circuit, and A, ω, and φ are the amplitude, frequency, and phase of the external forcing. Equation (1) describes the behavior of a linear non-autonomous oscillator, whose dynamics are well known: the external force in such a system excites periodic forced oscillations [9,10]. The complication of the dynamics of such a system is traditionally achieved by adding nonlinear terms. For example, adding a cubic nonlinearity results in the well-known Duffing oscillator [11][12][13][14][15]. By adding an exponential nonlinearity, we get a Toda oscillator [16][17], whose equation reproduces the dynamics of the RL-diode circuit [18][19][20][21]. When such systems interact, they exhibit complex dynamic behavior, including chaos, quasiperiodic oscillations, multistability, nonlinear resonance, etc. [22][23][24][25][26][27][28][29][30].
In the framework of this work, we consider the features of the system when the external signal is complicated, i.e. taking into account the dependence of the phase and frequency of the external influence on the dynamic variable.
The first version of the model corresponds to the case when the phase depends on the variable, as a result of which equation (1) becomes nonlinear. The dependence of the phase on the variable is taken to be the simplest one, i.e. a linear function:

$$\varphi = kx. \qquad (2)$$

After the transition to dimensionless time τ = ω₀t, equation (1) takes the form

$$\ddot{x} + r\dot{x} + x = A\sin(p\tau + kx), \qquad (3)$$

where r is the normalized dissipation coefficient and p = ω/ω₀ is the normalized frequency of the external force. Equation (3) contains a nonlinearity of the sin(kx) type. The second version of the model that we consider in the framework of this work is an oscillator whose forcing frequency depends on the dynamical variable. The equation of forced oscillations of the oscillator in this case has the form

$$\ddot{x} + \alpha\dot{x} + \omega_0^2 x = A\sin(\omega(x)t + \varphi), \qquad (4)$$

where x is the dynamic variable, α is the dissipation coefficient, ω₀ is the eigenfrequency of the oscillator, and A, ω(x), and φ are the amplitude, frequency, and phase of the external force, respectively. As in the previous case, we assume that the dependence of the frequency of the external force on the dynamic variable is linear; in normalized form,

$$p(x) = p_0 + kx. \qquad (5)$$

Then equation (1) takes the form

$$\ddot{x} + r\dot{x} + x = A\sin\big((p_0 + kx)\tau + \varphi\big), \qquad (6)$$

and setting φ = 0, we obtain an equation of the form

$$\ddot{x} + r\dot{x} + x = A\sin\big((p_0 + kx)\tau\big). \qquad (7)$$

Thus, by controlling the phase and frequency of the external force, the linear equation describing the forced oscillations of the linear oscillator is converted into a nonlinear one with a nonlinearity of the sin(kx) type.
An oscillator with a sine-type nonlinearity is one of the reference models of nonlinear dynamics [10]. It describes the oscillations of mathematical and physical pendulums, and also appears in other applied problems, for example, in the treatment of Josephson junctions [31][32][33][34], in the study of self-induced transparency in nonlinear optics, and in the analysis of the bending of an elastic beam. Dynamical systems of this type have very rich dynamics [35][36][37][38][39]. The richness of the dynamics of such oscillators is related to the form of the potential function, which is periodic and has an infinite number of maxima and minima. In the case of a symmetric potential function, dynamics accompanied by transitions from one well to another are observed. In addition to the well-known scenarios of transition to chaos and types of bifurcations, so-called metastable chaos takes place in such systems. In the case of an asymmetric potential function, so-called particle drift in the periodic potential is observed.
The aim of this work is to numerically study the forced oscillations of a linear oscillator when controlling the phase and frequency of the driving force using equations (3) and (7) as an example.
3. Numerical analysis of the dynamics of a controlled phase system.
The analysis of the nature of the forced oscillations was carried out on the basis of an assessment of the spectrum of Lyapunov exponents, as well as on the analysis of phase portraits in the stroboscopic section. The amplitude A, frequency p, and phase control coefficient k were used as control parameters. For stability analysis, equation (3) was transformed into a system of three first-order differential equations:

$$\dot{x} = y, \quad \dot{y} = -ry - x + A\sin(z + kx), \quad \dot{z} = p. \qquad (8)$$

We study the characteristic structure of various parameter planes in this case.
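As an illustration of how such an analysis can be set up, the sketch below integrates the reconstruction of system (8) given above with SciPy and samples the stroboscopic section once per forcing period; the number of distinct section points gives a crude period estimate (a large count indicates chaos). The parameter values are illustrative only, not taken from the paper's figures.

```python
# Integrate system (8) and sample its stroboscopic section.
import numpy as np
from scipy.integrate import solve_ivp

r, A, p, k = 0.1, 1.0, 1.0, 0.5  # illustrative parameter values

def rhs(t, s):
    x, y, z = s
    return [y, -r * y - x + A * np.sin(z + k * x), p]

T = 2 * np.pi / p                    # forcing period
n_transient, n_sample = 200, 100     # skip transients, then sample
sol = solve_ivp(rhs, [0, (n_transient + n_sample) * T], [0.1, 0.0, 0.0],
                t_eval=np.arange(n_transient, n_transient + n_sample) * T,
                rtol=1e-9, atol=1e-12)

pts = sol.y[0]                             # x at the stroboscopic section
period = len(np.unique(np.round(pts, 4)))  # crude period estimate
print("distinct section points (period):", period)
```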
Figure 1 presents charts of the dynamic regimes of system (8) on the plane of parameters (k, A) for three different values of the forcing frequency p. Different colors denote the regions of periodic regimes with different periods and of chaotic oscillations; the corresponding color palette is presented under the figure. Fig. 1a illustrates the structure of the parameter plane (k, A) at p = 0.25. In the dynamics of system (8), a sequence of period doubling bifurcations is observed, ending with a transition to chaos. In the region of the existence of chaos, its development is observed, associated with a decrease in the connectivity of the chaotic attractor, alternating with the appearance of zones of periodic oscillations. Fig. 1b illustrates the structure of the parameter plane (k, A) at p = 1. The structure of the parameter plane remains qualitatively unchanged; only the bifurcation values of the parameters A and k change. Fig. 1c illustrates the structure of the parameter plane (k, A) at p = 5. In general, the plane structure again represents an alternation of zones of periodic and chaotic oscillations; however, bands of periodic regimes corresponding to higher resonances appear. As can be seen from Fig. 1, an increase in the parameters A and k leads to a qualitatively identical change in the dynamics of the system. Figure 2a shows the limit cycle for small values of the amplitude of the external signal and of the coefficient k. With an increase in the amplitude of the external force, the limit cycle increases in size (Fig. 2b). As the coefficient k increases, the shape of the limit cycle changes and additional loops appear (Fig. 2c); however, in the stroboscopic section, this attractor still corresponds to a single fixed point. Figure 2d shows an example of a more complex attractor for large values of the parameters A and k. On the basis of such a limit cycle, a cascade of period doubling bifurcations occurs in the system. Fig. 2e shows an example of a doubled limit cycle, and the chaos that then develops is presented in Fig. 2f. Inside the chaos region, with a further increase in the parameters on the parameter plane, periodicity windows are observed, inside which a cascade of period doubling bifurcations again occurs and chaos again arises. For all cases shown in Fig. 2, the dynamics of the system develops in the vicinity of one of the potential wells located on one side of the unstable zero equilibrium state. The phase trajectory can also enter the region of the second, symmetric potential well, but then returns. This is clearly seen in the stroboscopic sections, which are located in the negative region of the dynamical variable x.
A similar scenario is observed with increasing frequency parameter p. However, with increasing frequency, the attractor grows in size and begins to visit other potential wells more distant from the equilibrium state. At the same time, the dynamic chaos developing at various frequencies has its own characteristic features. To analyze these features, stroboscopic sections of phase portraits and Fourier spectra of chaotic attractors were constructed for various values of the parameter p; they are presented in Fig. 3. For small values of the parameter p, which is responsible for the frequency of the external force (p = 0.25), the oscillator dynamics mainly develops in one of the potential wells close to the zero equilibrium point. The peak corresponding to the base frequency of the limit cycle from which this chaotic attractor was born is clearly visible in the Fourier spectrum. With an increase in the frequency of the external force (p = 1), the attractor becomes more developed, and jumps from one potential well to another are observed in the dynamics. The Fourier spectrum of such a regime is broadband and does not contain individual peaks, as it did in the previous case. At p = 5, the phase portrait looks even more developed; the phase trajectory visits about 7 potential wells. The Fourier spectrum is also broadband, but in this case a higher-amplitude band appears at low frequencies, which corresponds to filtering of the signal by the circuit at the resonant frequency. We now turn to the study of the parameter plane most classical from the point of view of synchronization: frequency versus amplitude of the external signal. As the frequency parameter, we use the parameter p. Figure 4 presents charts of the modes of oscillations of system (8) on the plane of parameters (p, A) for various values of the parameter k; the color palette is the same as in Fig. 1. Figure 4a illustrates the structure of the parameter plane (p, A) at k = 0.5. Here it is possible to distinguish separate zones of complex behavior associated with the so-called resonances at higher harmonics. At low frequencies on the parameter plane, a sequence of period doubling bifurcations is observed in the dynamics of the system, ending with a transition to chaos. The lines of period doubling bifurcations have the characteristic form of tongues with a certain threshold in the parameter k. Thus, for the first period doubling line (at the maximum frequency of the external signal p), the minimum is located at the doubled resonant frequency, which is typical for the structure of the space of control parameters of a non-autonomous nonlinear oscillator [9][10][11][12]. With a decrease in the forcing frequency, similar period doubling lines are observed at frequencies corresponding to subresonances. As the frequency decreases, the threshold for the period doubling bifurcation increases. With an increase in the parameter k, chaos develops through a decrease in the connectivity of the chaotic attractor, alternating with the appearance of zones of periodic oscillations. On the whole, the structure of the plane of control parameters (Fig. 4a) is in many ways similar to that of a non-autonomous nonlinear oscillator [18,19]: one can distinguish separate zones of complex behavior associated with the so-called resonances at higher harmonics.
An increase in the parameter k (Fig. 4b and Fig. 4c) leads to an increase in the range of variation of the forcing phase and, as a result, to an enlargement of the regions of complex behavior and a complication of their structure. Figure 5 presents charts of dynamic modes on the plane of parameters (p, k) for two values of parameter A: A = 1 and A = 10. Qualitatively, the structure of the parameter plane repeats the analogous one presented in Fig. 4, which also confirms that changes in the parameters A and k lead to qualitatively the same result. Figure 5 illustrates the diversity of the zones of existence of various modes of oscillation.
4. Numerical analysis of the dynamics of a system with a controlled frequency.
In the case of frequency control, the dynamics does not change qualitatively; the main difference in the system behavior is the structure of the space of control parameters. A change in the forcing frequency, which is a consequence of its dependence on a dynamical variable, means that the instantaneous frequency of the oscillations changes, and the character of the oscillations is more complex than when controlling the phase. The system of first-order equations in this case has the form

$$\dot{x} = y, \quad \dot{y} = -ry - x + A\sin\big((p_0 + kx)z\big), \quad \dot{z} = 1. \qquad (9)$$

It should also be noted here that, unlike the system of equations (8), in (9) the variable z explicitly represents the time τ, so the right-hand side depends on the time explicitly.
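Under the same reconstruction, system (9) can be integrated in exactly the same way; the only change is the forcing term, whose argument now mixes the dynamical variable x with the explicit time variable z. Parameter values are again illustrative only.

```python
# Integrate system (9): the forcing frequency depends on the variable x.
import numpy as np
from scipy.integrate import solve_ivp

r, A, p0, k = 0.1, 1.0, 0.25, 0.5  # illustrative parameter values

def rhs(t, s):
    x, y, z = s
    # instantaneous forcing frequency p0 + k*x; z plays the role of time
    return [y, -r * y - x + A * np.sin((p0 + k * x) * z), 1.0]

sol = solve_ivp(rhs, [0, 2000.0], [0.1, 0.0, 0.0], max_step=0.05)
x = sol.y[0][sol.t > 1000.0]  # discard the transient
print("x range on the attractor:", x.min(), x.max())
```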
The study of model (9) is carried out similarly to the controlled phase system. To analyze the dynamics when varying the parameters, we again use the method of charts of dynamic modes, verified by analyzing the spectrum of Lyapunov exponents. Figure 6 shows the charts of the modes of oscillations of system (9) on the plane of parameters (k, A) for various values of the parameter p0 and the dissipation parameter r = 0.1. The color palette used is the same as for Fig. 1. Fig. 6a illustrates the structure of the parameter plane (k, A) at p0 = 0.25. The overall picture remains the same: on the parameter plane, bands of oscillations with period 1 in the stroboscopic section are observed. On the basis of the basic limit cycles, a cascade of period doubling bifurcations occurs and a chaotic attractor arises. The difference from the case of phase dependence on the variable is that these structures are observed only at small values of the parameters A and k. For large parameter values, the periodicity windows become very narrow and the chaotic mode dominates. Fig. 6b illustrates the structure of the parameter plane (k, A) at p0 = 1. The structure of the parameter plane remains qualitatively unchanged: bands of periodic regimes are observed, from which chaotic oscillations arise through a cascade of period doubling bifurcations. However, for this choice of parameters, the regions of the limit cycle become more pronounced. With a further increase in the parameter p0, the chaos regions disappear and only periodic oscillations are realized in the system. This effect is due to the fact that with an increase in p0 the amplitude of the forced oscillations decreases and, accordingly, so does the amplitude of the change in the forcing frequency. Figure 7 shows examples of projections of phase portraits and their stroboscopic sections for the case p0 = 0.25. The phase portrait for the period-1 limit cycle is a multi-turn cycle, which corresponds to one point in the stroboscopic section (Fig. 7a). On its basis, a cascade of period doubling bifurcations occurs. The phase portrait in Fig. 7b corresponds to a doubled limit cycle; Fig. 7c presents an example of a chaotic attractor. For small values of the parameters A and k, the oscillations occur inside two potential wells of the oscillator, with the imaging point visiting the vicinity of each well. With increasing parameters A and k, a larger number of potential wells become involved in the dynamics of the system. Figure 7d shows an example of the limit cycle for such a case. The stroboscopic sections shown in Figs. 7e and 7f clearly show the gradual involvement of more potential wells in the dynamics. An increase in the frequency parameter p0 also affects the spectral characteristics of the dynamic mode. Figure 8 shows examples of phase portraits in the stroboscopic section and Fourier spectra of chaotic attractors at two different values of the frequency parameter p0. It is clearly seen in the phase portraits that with increasing frequency the attractor becomes more developed and the phase trajectory moves over a larger number of potential wells. In both cases the spectrum is broadband, but there is a pronounced component corresponding to the base oscillation frequency. For the regime shown in Fig. 8a, a relatively high uniformity of the spectrum should be noted. Perhaps, by selection of the control parameters, chaotic modes with a uniform spectrum can be realized.
Next, we consider the structure of other parameter planes for a non-autonomous oscillator whose forcing frequency depends on the dynamic variable. Figure 9 presents charts of the dynamic regimes of system (9) on the plane of the frequency and amplitude parameters of the external force (p0, A) for various values of the frequency tuning parameter k and the dissipation parameter r = 0.1. Fig. 9a illustrates the structure of the parameter plane (p0, A) for k = 1. The structure of the parameter plane bears some similarity to Fig. 4a for the case of phase control; however, there are significant differences. The region of chaotic dynamics, as for the model with phase control, is bounded by twice the resonant frequency. In the parameter plane, a clearly expressed structure is visible within which a cascade of period doubling bifurcations leads to the formation of a chaotic attractor located between the resonant and doubled resonant frequencies. The period doubling bifurcation line has a minimum near the doubled resonant frequency. At forcing frequencies below the resonance frequency and small forcing amplitudes, the periodic oscillations are destroyed and a chaotic attractor appears, and there is no system of period doubling cascades corresponding to subresonances, as there was for the phase-controlled system. With an increase in the frequency tuning parameter k (Fig. 9b), the threshold for the appearance of the period doubling bifurcation near the doubled frequency becomes smaller, and the region of chaotic oscillations expands towards higher frequencies of the external forcing. Moreover, in the region above twice the resonant frequency, new cascades of period doubling form at multiples of the resonant frequency, but with a large threshold in amplitude. With a further increase in the frequency tuning parameter k (k = 2, Fig. 9c), the cascade of period doubling bifurcations in the vicinity of the doubled resonant frequency expands to high frequencies and, when it crosses the tripled frequency, a single region of chaos is formed. Fig. 10 shows the charts of the dynamic modes of system (9) on the plane of the forcing frequency and the frequency control coefficient (p0, k) for various values of parameter A. Fig. 10a illustrates the structure of the parameter plane (p0, k) for A = 1. The structure of the parameter plane bears some similarity to Fig. 9a, as well as to the case of phase control. The region of chaotic dynamics is bounded by twice the resonant frequency. On the parameter plane, a clearly expressed structure is visible within which a cascade of period doubling bifurcations leads to the formation of a chaotic attractor located between the resonant and doubled resonant frequencies. At forcing frequencies below the resonance frequency and small forcing amplitudes, the periodic oscillations are destroyed and a chaotic attractor appears, and there is no system of period doubling cascades corresponding to subresonances, as there was for the phase-controlled system. At frequencies greater than twice the resonant frequency, periodic oscillations are observed. With an increase of the parameter to A = 10 (Fig. 10b), the structure of the parameter plane changes significantly: the period doubling bifurcation threshold near the doubled resonant frequency becomes much smaller.
In the vicinity of the tripled resonant frequency, one more period doubling line is observed; with an increase in the frequency tuning parameter k, a cascade of period doubling bifurcations is observed over a wide range of external forcing frequencies, and chaotic dynamics appears.
Conclusion.
Thus, the introduction of a linear dependence of the phase and frequency of the external force on the dynamical variable in a non-autonomous linear oscillator significantly complicates the dynamics of this simple system and leads to the emergence of a hierarchy of periodic and chaotic oscillations when the control parameters of the external forcing are varied. The dynamics of such a system becomes close to that of a system with a multi-well potential.
In the case of phase dependence on the dynamical variable, a hierarchy of chaotic attractors is observed, resulting from cascades of period doubling bifurcations, while the lines of period doubling bifurcations are located at the doubled resonant frequency and at subresonance frequencies.
An increase in the amplitude of the external force leads to an expansion of the regions of existence of complex modes of oscillations, which are then no longer bounded by twice the forcing frequency, as well as to the appearance of new zones of periodic oscillations inside the region of chaos. In this case, oscillation regimes appear in the dynamics of the system corresponding to the so-called dynamics of a nonlinear oscillator with a periodic potential.
In the case of the dependence of the frequency on the dynamical variable, a hierarchy of chaotic regimes is also observed; however, only the period doubling line in the vicinity of the doubled frequency of the external force remains. The system of period doubling bifurcation lines at subresonance frequencies is destroyed, with the formation of chaotic dynamics. However, an increase in the frequency of the external force in this case also leads to the formation of a picture with a cascade of period doublings and the emergence of a hierarchy of chaotic regimes at the so-called superresonant frequencies.
The chaotic dynamics resulting from the control of the phase and frequency by a dynamical variable is characterized by broadband spectra. The widest spectrum is observed in the case of a phase dependent on the dynamical variable, with the frequency of the external force near the resonance. For lower frequencies, the pronounced component of the base periodic signal is retained. For lower frequencies of the external forcing, the signal is filtered by the circuit and the spectrum has a limited band. In the case of a frequency dependence on the dynamical variable, the signal spectra are also broadband; however, the components of the basic limit cycle are pronounced.
Reasoning about quantum knowledge
We construct a formal framework for investigating epistemic and temporal notions in the context of distributed quantum computation. While we rely on structures developed earlier, we stress that our notion of quantum knowledge makes sense more generally in any agent-based model for distributed quantum systems. Several arguments are given to support our view that an agent's possibility relation should not be based on the reduced density matrix, but rather on local classical states and local quantum operations. In this way, we are able to analyse distributed primitives such as superdense coding and teleportation, obtaining interesting conclusions as to how the knowledge of individual agents evolves. We show explicitly that the knowledge transfer in teleportation is essentially classical, in that eventually, the receiving agent knows that its state is equal to the initial state of the sender. The relevant epistemic statements for teleportation deal with this correlation rather than with the actual quantum state, which is unknown throughout the protocol.
Introduction
The idea of developing formal models to reason about knowledge has proved to be very useful for distributed systems [2,3,4]. Epistemic logic provides a natural framework for expressing the knowledge of agents in a network, allowing one to make quite complex statements about what agents know, what they know that other agents know, and so on. Moreover, combining epistemic with temporal logic, one can investigate how knowledge evolves over time in distributed protocols, which is useful both for program analysis as well as formal verification.
The standard approach to knowledge representation in multi-agent systems is based on the possible worlds model. The idea is that there exists a set of worlds such that an agent may consider several of these to be possible. An agent knows a fact if it is true in all the worlds it considers possible; this is expressed by epistemic modal operators acting on some basic set of propositions. The flexibility of this approach lies in the fact that there are many ways in which one can specify possibility relations. In a distributed system, worlds correspond to global configurations occurring in a particular protocol, and possible worlds are determined by an equivalence relation over these configurations. Typically, global network configurations are considered equivalent by an agent if its local state in these configurations is identical.
Quantum computation is a field of research that is rapidly acquiring a place as a significant topic in computer science [5]. Logic-based investigations in quantum computation are relatively recent and few. Recently there have been some endeavours to describe quantum programs in terms of predicate transformers [6,7,8]. These frameworks, however, aim at modelling traditional algorithms that establish an input-output relation, a point of view which is not appropriate for distributed computations. A first attempt to define knowledge for quantum distributed systems is found in [9]. Therein, two different notions of knowledge are defined. First, an agent i can classically know a formula θ to hold, denoted $K^c_i\theta$; in this case the possibility relation is based on equality of local classical states. Second, an agent can quantumly know a formula to hold, denoted $K^q_i\theta$. For the latter, the possibility relation is based on equality of reduced density matrices for that agent. The authors argue that $K^q_i$ is an information-theoretic idealisation of knowledge, in that the reduced density matrix embodies what an agent, in principle, could determine from its local quantum state. However, there are two main problems with this approach. The first is that one cannot assume that the reduced density matrix is always known, because in quantum mechanics, observing a state alters it irreversibly. So, quantum knowledge does not consist of possession of a quantum state: it is not because an agent has a qubit in its lab that the agent knows anything about it. Indeed, consider the situation where a qubit has just been sent from A to B. Then B knows nothing about its newly acquired qubit; it is possible, even, that A knows more about it than B does. The second problem with the above approach is that one loses information on correlations between agents by considering only the reduced density matrix, a crucial ingredient in distributed quantum primitives.
What we need is a proper notion of quantum knowledge, which captures the information an agent can obtain about its quantum state. This includes the following ingredients: first, an agent knows states that it has prepared; second, an agent knows a state when it has just measured it; and third, an agent may obtain knowledge by classical communication of one of the above. While knowledge of preparation states is automatically contained in the description of the protocol, our notion of equivalence precisely captures the latter two items. As we shall see below, in doing this we arrive at a notion similar to K^c_i θ. Our main argument, then, is that there is no such thing as quantum knowledge in the sense of K^q_i θ; rather, quantum knowledge is about classically knowing facts about quantum systems.
The structure of this paper is as follows. In Sec. 2 we construct a framework for reasoning about knowledge in quantum distributed systems. Next, we analyse the important distributed primitives of superdense coding and teleportation in our epistemic framework in Sec. 3, investigating how agents' knowledge is updated as each protocol proceeds. We conclude in Sec. 4. This paper assumes some familiarity with quantum computation; for the reader not familiar with the domain, we refer to the excellent [5]. The present paper is also a continuation of earlier work by the authors [10,1]. However, most of the material presented here can be understood independently of the latter.
Knowledge in Quantum Networks
In this section, we develop the notion of knowledge for distributed quantum systems. The equivalence relation for agents, on the basis of which quantum knowledge is defined, is established in Sec. 2.1. Next, temporal operators are defined in Sec. 2.2, where we also briefly discuss how temporal and epistemic operators combine.
We phrase our results below in the context of quantum networks, an agent-based model for distributed quantum computation elaborated in [1]. We stress, however, that our notion of quantum knowledge is model-independent. That is to say, any agent-based model for distributed quantum computation would benefit from quantum knowledge as defined below, or slight adaptations thereof. Due to space limitations, only a short overview is given here; for more detailed explanations, we refer the reader to [1,11].
A network of agents N is defined by a set of concurrently acting agents together with a shared quantum state, that is

N = A_1(Q_1) : E_1 | ... | A_n(Q_n) : E_n ∥ σ ,

where σ is the network quantum state, | denotes parallel composition, and for all i, A_i is an agent with local qubits Q_i and event sequence E_i. The network state σ in the definition is the initial entanglement resource which is distributed among agents. Local quantum inputs are added to the network state σ during initialisation; in this way we keep initial shared entanglement as a first-class primitive in our model. Note that agents in a network need to have different names, since they correspond to different parties that make up the distributed system. In other words, concurrency comes only from distribution; we do not consider parallel composition of processes in the context of one party. Events consist of local quantum operations A, classical communication c? and c!, and quantum communication qc? and qc!. Quantum operations are denoted in the style of [10]: we have entanglement operators E, measurements M and Pauli corrections X and Z. All of this is clarified further in the applications in Sec. 3. A network determines a set of configurations C_N that can potentially occur during execution of N. Configurations extend the network notation with a local (classical) state Γ_i for each agent, in which measurement outcomes and classical messages are stored. C_N consists of all configurations encountered in those paths a protocol can take. More formally, C_N is obtained by following the rules for the small-step operational semantics of networks, denoted by transitions =⇒ and elaborated in [1].
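To make the bookkeeping concrete, here is a minimal Python sketch of how configurations might be represented in code. The field names (name, qubits, events, local_state) are our own illustrative choices, not notation from [1].

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class AgentView:
        """What a single agent contributes to a configuration."""
        name: str                      # agent name, e.g. "A"
        qubits: Tuple[str, ...]        # names of locally owned qubits
        events: Tuple[str, ...]        # executed event sequence, e.g. ("X1", "qc!1")
        local_state: Tuple[Tuple[str, int], ...]  # classical store, e.g. (("x1", 0),)

    @dataclass(frozen=True)
    class Configuration:
        """A global configuration: all agent views plus a symbolic network state."""
        agents: Tuple[AgentView, ...]
        network_state: str             # symbolic label for the global quantum state

        def view(self, name: str) -> AgentView:
            return next(a for a in self.agents if a.name == name)

Frozen dataclasses make configurations hashable, so they can later serve as nodes of the transition graph and as members of equivalence classes.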
Before we can actually define modal operators for knowledge or time, we need to clarify what the propositions are that these act upon. It is not our intention to define a full-fledged language for primitive propositions; rather, we define these abstractly. An interpretation of N is a truth-value assignment for configurations in C_N for some basic set of primitive propositions θ. Writing I(C, θ) for the interpretation of fact θ in configuration C, we then have

C, N ⊨ θ iff I(C, θ) = true .

The primitive propositions considered usually depend on the network under study, and are specified individually for each application encountered below. Composite formulas can be constructed from primitive propositions and the logical connectives ∧, ∨ and ¬ in the usual way. However, the formulas encountered in the applications below are usually about equality. For example, θ may be of the form x = v, meaning that the classical variable x has the value v, or q_1 = q_2, meaning that the states of qubits q_1 and q_2 are identical. We also allow functions init and fin for taking the initial and final values of a variable or quantum state. These formulas are currently defined in an ad-hoc manner.
Knowledge
In order to define quantum knowledge, we need to define an equivalence relation on configurations for each of the agents, embodying what an agent knows about the global configuration from its own information only. We deliberately do not say local information here, as, via the network preparation, an agent may also have non-local information, in the form of correlations, at its disposal. By considering only configurations in C_N we model that agents know which protocol they are executing. In a quantum network, each agent's equivalence relation has to reflect what an agent knows about the network state, the execution of the protocol and the results of measurements. All classical information an agent has is stored in its local state Γ_i; this includes classical input values, measurement outcomes, and classical values passed on by other agents. Just like in classical distributed systems, an agent can certainly differentiate configurations for which its local state is different. As for quantum information, an agent knows which qubits it owns, what local operations it applies on these qubits, and, moreover, what (non-local) preparation state it starts out with, i.e. what entanglement it shares with other agents initially. It can also have information on its local quantum inputs, though this is not necessarily so, as explained above. All of the above information is in fact captured by an agent's event sequence in a particular configuration, together with its local state. Therefore, we obtain the following definition.
Definition 1. Given a network N and configurations C, C′ ∈ C_N, we set C ∼_i C′ if and only if agent A_i has the same local state and the same event sequence in C and C′, that is, Γ_i = Γ′_i and E_i = E′_i.
Via possibility relations we can now define what it means for an agent A_i to know a fact θ in a configuration C in the usual way:

C, N ⊨ K_i θ iff C′, N ⊨ θ for all C′ ∈ C_N such that C ∼_i C′ .

Our choice of equivalence embodies that agents cannot distinguish configurations if they only differ in that other agents have applied local operations to their qubits; neither can they if other agents have exchanged messages with each other. While the global network state does change as a result of local operations, an agent not executing these has no knowledge of this, and no way of obtaining it. This is precisely what we capture with the relation ∼_i.
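As an illustration, the definition above is directly executable over a finite set of configurations. The following minimal Python sketch (our own stand-in, with an abstract equivalence predicate playing the role of ∼_i) evaluates K_i θ by quantifying over an agent's equivalence class.

    from typing import Callable, Hashable, Iterable

    # A configuration is any hashable value; a proposition maps it to a truth
    # value (the interpretation I(C, θ)); an equivalence predicate stands in
    # for the relation ~_i of Definition 1.
    Config = Hashable
    Prop = Callable[[Config], bool]
    Equiv = Callable[[Config, Config], bool]

    def knows(equiv_i: Equiv, c: Config, theta: Prop,
              all_configs: Iterable[Config]) -> bool:
        """C, N |= K_i θ iff θ holds in every configuration i cannot tell from C."""
        return all(theta(c2) for c2 in all_configs if equiv_i(c, c2))

    def knows_nested(eq_a: Equiv, eq_b: Equiv, c: Config, theta: Prop,
                     all_configs: Iterable[Config]) -> bool:
        """K_A K_B θ is just a nested call to `knows`."""
        configs = list(all_configs)
        inner = lambda c2: knows(eq_b, c2, theta, configs)
        return knows(eq_a, c, inner, configs)

With the Configuration class sketched earlier, a concrete equivalence predicate for agent i simply compares the local_state and events fields of the two views, exactly as in Definition 1.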
Special attention needs to be given to the matter of quantum inputs. Agents distinguish configurations corresponding to different values of their classical input via their local state, in which these input values are stored. Essentially, for each set of possible input values there is a group of corresponding configurations in C_N. However, this is not something we can do for quantum inputs, since these occupy a continuous space. Hence we choose to let configurations be parameterised by these inputs, writing C(|ψ⟩) whenever we want to stress this. But then what about an agent's possibility relation? Basically, either a quantum input is known, in which case it is just a local preparation state, so that there is only one possible initial configuration; or it is not. If a quantum input for agent A is truly arbitrary, or the agent knows nothing about it (as is the case for teleportation), then all values of |ψ⟩, and hence all configurations in the set {C(|ψ⟩) : |ψ⟩ ∈ I_A}, are considered equivalent by A. If A does know some properties of its input, then we model this by only allowing a certain set of input states. We do not explicitly mention the equivalence related to unknown quantum inputs in the examples below, for the simple reason that we are interested only in logical statements that hold for all quantum inputs. That is, we compare only configurations resulting from the same quantum inputs, and derive knowledge-related statements that are independent of this input. Nevertheless, whenever a configuration C(|ψ⟩) is written, it should be interpreted as a set of configurations, all considered equivalent by all agents of the network.
From this one can construct more complicated statements, such as for example C, N ⊨ K_A K_B θ for "agent A knows that agent B knows that θ holds in configuration C".
Time
One typically also wants to investigate how knowledge evolves during a computation, for example due to communication between agents. Thus, one also needs a proper formalisation of time. This is usually done by introducing a set of temporal modal operators, acting on the same set of propositions. The area of temporal logics is itself an active field of research, with applications in virtually all aspects of concurrent program design; for an overview see for example [12].
We use the approach of computation tree logic (CTL) to formalise time-related logical statements, providing state as well as path modal operators. The reason for this is that, since quantum networks typically have a branching structure, we need to be able to express statements concerning all paths as well as those pertaining to some paths. Typically, we want to say things such as "for all paths, agent A always knows θ", or "there exists a path for which A eventually knows θ". We can of course express this by placing restrictions on the paths we are considering in a particular statement; this is, in fact, precisely what we do in the definition of the modal path operators. Introducing these is more appealing since in this way we can abstract away from actual path definitions, which are determined by the formal semantics for networks elaborated in [1], and denoted abstractly as =⇒ below.
Concretely, we introduce the traditional temporal state operators □ ("always") and ◇ ("eventually") into our model, and combine these with the path operators A ("for all paths") and E ("there exists a path"). For the universally quantified versions this gives

C, N ⊨ A□θ iff C′, N ⊨ θ for every C′ with C γ=⇒ C′, for every path γ starting in C;
C, N ⊨ A◇θ iff every maximal path γ starting in C contains a C′ with C γ=⇒ C′ and C′, N ⊨ θ,

while the E versions quantify existentially over paths.¹ Obviously, we have that any formula with A implies the corresponding one with E, and likewise any formula with □ implies the corresponding one with ◇.
When investigating knowledge issues in a distributed system, one naturally arrives at situations where one needs to describe formally how knowledge evolves as the computation proceeds. This can be done adequately by combining the knowledge operators K_i with the temporal operators defined above. As usual, one needs to proceed with caution when doing this, since it is not always intuitively clear what the meaning of each of these different combinations is. For example, it is generally not the case that the formula A◇K_i θ is equivalent to K_i A◇θ. Typically, we want to prove things that are eventually known by an agent, no matter what branch the protocol follows; this is embodied by the former.
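These temporal-epistemic combinations can also be checked mechanically on a finite transition graph. The sketch below (again illustrative; the graph is given as an explicit successor map) evaluates A◇θ and E◇θ, relying on the fact that protocol graphs like those considered here are finite and acyclic, so plain recursion terminates.

    from typing import Dict, List

    # succ maps each configuration (a hashable label) to its successors;
    # theta is a predicate on configurations.
    def always_eventually(c, succ: Dict[object, List[object]], theta) -> bool:
        """C |= A◇θ : on every maximal path from c, θ holds at some point."""
        if theta(c):
            return True
        nxt = succ.get(c, [])
        if not nxt:                      # a maximal path ends here without θ
            return False
        return all(always_eventually(c2, succ, theta) for c2 in nxt)

    def exists_eventually(c, succ: Dict[object, List[object]], theta) -> bool:
        """C |= E◇θ : on some maximal path from c, θ holds at some point."""
        if theta(c):
            return True
        return any(exists_eventually(c2, succ, theta) for c2 in succ.get(c, []))

Plugging the knows(...) function from the previous sketch in for theta yields checks of combined formulas such as A◇K_Bθ directly.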
Applications
With epistemic and temporal notions for quantum networks in place, we are ready to evaluate the distributed primitives superdense coding [13] and teleportation [14] from a knowledge-based perspective. That is, instead of investigating how the global network evolves by deriving a network's semantics, we now use this semantics, or rather, the configurations encountered therein, to analyse how the knowledge of individual agents evolves. We start with superdense coding, which is simpler to analyse because it is deterministic and does not depend on quantum inputs. We move on to teleportation in Sec. 3.2. We note that an analysis of the quantum leader election protocol [15] was also carried out in [11].

¹ γ=⇒ is the closure of the small-step transition relation =⇒ mentioned above. That is, we have C γ=⇒ C′ if C′ can be reached from C by a series of consecutive small-step transitions, specified by the path γ.
Superdense Coding
The aim of superdense coding is to transmit two classical bits from one party to the other with the aid of one entangled qubit pair or ebit. The network SC for this task consists of two agents A and B sharing the ebit E_12, where x_1x_2 are A's classical inputs, subscripts stand for the qubits on which events operate, X and Z are Pauli operations, qc! and qc? stand for a quantum rendezvous, and M^{0,0}_{12} is a Bell measurement on qubits 1 and 2. In the first step of the protocol Alice transforms her half of the entangled pair, in a different way for each of the four possible classical inputs. Next, she sends Bob her qubit, who then measures the entangled pair. At the end of the protocol the measurement outcomes, denoted s_1 and s_2, are equal to A's inputs.
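The determinism claimed here is easy to confirm by direct simulation. The following self-contained numpy sketch (our own illustration, not part of the formalism of [1]) encodes the two classical bits as Pauli operations Z^{x_1}X^{x_2} on Alice's half of the ebit and decodes them with a Bell measurement, realised as a CNOT followed by a Hadamard.

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=float)
    Z = np.array([[1, 0], [0, -1]], dtype=float)
    H = np.array([[1, 1], [1, -1]], dtype=float) / np.sqrt(2)
    # CNOT with qubit 1 as control, qubit 2 as target (basis order |q1 q2>)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

    ebit = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Φ+> = (|00> + |11>)/√2

    for x1 in (0, 1):
        for x2 in (0, 1):
            # Alice encodes her two bits on qubit 1 only.
            encode = np.linalg.matrix_power(Z, x1) @ np.linalg.matrix_power(X, x2)
            state = np.kron(encode, I2) @ ebit
            # Bob's Bell measurement: CNOT, Hadamard, then read out both qubits.
            state = np.kron(H, I2) @ (CNOT @ state)
            outcome = int(np.argmax(np.abs(state)))   # state is a basis vector here
            s1, s2 = outcome >> 1, outcome & 1
            assert (s1, s2) == (x1, x2)
    print("superdense coding: s1 s2 = x1 x2 for all four inputs")

Each run collapses to a single computational basis vector, so the protocol is indeed deterministic: every branch yields s_1s_2 = x_1x_2.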
The configurations in C_SC are given in [11]; they are parameterised by j_1j_2, which ranges over the input values 00, 01, 10 and 11. The equivalence relation for both of the agents for configurations in C_SC is represented in Fig. 1, with arrows for computation paths, boxes for A's equivalence classes and dashed boxes for B's equivalence classes. Obviously, A distinguishes the 4 possible configurations at each time step (we refer to this below as horizontally) because A's local state [x_1, x_2 ↦ j_1, j_2] is different for each input value. Vertically, that is with respect to the evolution of time, configurations at the first three steps differ because A's event sequence has changed. However, we find that configurations at the third and fourth level are equivalent for A, since between these steps only B has applied a local operation, which is not observable by A.
The possibility relation for B is quite different. We find that all configurations occurring at the first two steps are considered equivalent by B. Furthermore, all configurations C_3 are equivalent to each other, though they are not equivalent to the previous ones because the event sequence of B has changed. Configurations C_4 differ from the previous ones because here B applies a local operation, and furthermore, here B finally distinguishes states horizontally via its local state [s_1, s_2 ↦ j_1, j_2]. The possibility relations of both agents allow us to derive several epistemic statements. First of all, however, let us note that the SC network is correct, since we have

C_1^{j_1j_2}, SC ⊨ A◇(s_1s_2 = j_1j_2) ,

or, if we want to stress that this occurs in the last step, we use C_3^{j_1j_2}, SC ⊨ A◇(s_1s_2 = j_1j_2). Note that, since there is no branching in the protocol, we may replace A by E in the above. Next, we trivially have that C, SC ⊨ K_A(x_1x_2 = j_1j_2) for every C ∈ C_SC with input j_1j_2, that is, A always knows its input values; in fact, agents always know their own input values in any protocol. We can also state this by saying that for all input values C_1^{j_1j_2}, SC ⊨ A□K_A(x_1x_2 = j_1j_2). On the other hand, it is only in the last step that B knows A's input values, that is

C_4^{j_1j_2}, SC ⊨ K_B(s_1s_2 = j_1j_2) , while ∀j_1, j_2, s < 4 : C_s^{j_1j_2}, SC ⊨ ¬K_B(s_1s_2 = j_1j_2) .
Interestingly, A never knows that B eventually knows A's input values:

C_1^{j_1j_2}, SC ⊨ A□¬K_A K_B(s_1s_2 = j_1j_2) .

The reason for this is that A cannot distinguish between configurations at the last two time steps, that is, A does not know whether B has applied its local measurement yet, and therefore A never knows whether B knows that s_1s_2 = j_1j_2.
Other statements can be made about the SC network; for example, one can play around with temporal operators to highlight when exactly the quantum message is sent. However, the essential features of the protocol are captured above.
Teleportation
The goal of the teleportation network is to transmit a qubit from one party to another with the aid of an ebit and classical resources. The network TP achieving this again consists of two agents A and B sharing an ebit, where c! and c? stand for a classical message rendezvous. In the first step of the protocol Alice executes a Bell measurement on her qubits. Next, Alice sends Bob her measurement outcomes, after which Bob applies Pauli corrections to his qubit dependent on these outcomes. The result is that Bob's qubit ends up in the same state as Alice's input qubit.
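Again the protocol can be checked by brute-force simulation. The numpy sketch below follows the standard textbook convention (Alice holds qubits 1 and 2, Bob qubit 3); it performs the Bell measurement by projecting after a CNOT and a Hadamard, applies the Pauli corrections, and verifies that Bob's qubit matches the input up to global phase in every outcome branch.

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

    rng = np.random.default_rng(0)
    a = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = a / np.linalg.norm(a)                # arbitrary unknown input |ψ> on qubit 1

    ebit = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Φ+> on qubits 2, 3
    state = np.kron(psi, ebit)                 # 8-dim vector, basis order |q1 q2 q3>

    state = np.kron(CNOT, I2) @ state          # CNOT: q1 control, q2 target
    state = np.kron(np.kron(H, I2), I2) @ state  # Hadamard on q1

    for m1 in (0, 1):
        for m2 in (0, 1):
            branch = state[4 * m1 + 2 * m2 : 4 * m1 + 2 * m2 + 2].copy()  # q3 part
            prob = np.vdot(branch, branch).real
            bob = branch / np.sqrt(prob)       # post-measurement state of qubit 3
            # Bob's corrections, conditioned on the classical message (m1, m2)
            bob = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ bob
            assert np.isclose(prob, 0.25)      # all four outcomes equally likely
            assert np.isclose(abs(np.vdot(psi, bob)), 1.0)  # |ψ> up to global phase
    print("teleportation: Bob recovers |ψ> in every branch")

Note that the simulation, unlike any agent in the protocol, has access to psi; the epistemic analysis below makes precise that neither A nor B ever acquires such knowledge.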
In this case, we have branching due to the Bell measurement. Moreover, configurations are parameterised by the quantum input |ψ⟩. As explained above, we do not explicitly show that configurations for different quantum inputs are equivalent for all agents. This feature is usually expressed by saying that |ψ⟩ is an unknown quantum state, that is, neither A nor B knows anything about it. The configurations occurring throughout the execution of the protocol are labelled by the measurement outcomes obtained in the first step of the computation.
The equivalence relation for both agents for the set of configurations C_TP is represented in Fig. 2. We find that C_1(|ψ⟩) is equivalent only to itself for agent A; once more, in effect we have a set {C_1(|ψ⟩) : |ψ⟩ ∈ ℂ²} of equivalent configurations with respect to ∼_A. After the measurement A distinguishes (sets of) configurations horizontally at all time steps via its outcome map. Just as for SC, and for the same reason, A considers configurations at the last two steps to be equivalent.
Again, the situation for agent B is quite different. We find that configurations at the first two levels are considered to be equivalent, while all other configurations are distinguished, horizontally via B's local state, and vertically by the change in B's event sequence.
The correctness of the TP network is stated in logical terms as follows:

C_1, TP ⊨ A◇(q_3 = init(q_1)) ,

where we have left out the parameterisation because the statement holds for all |ψ⟩. In other words, the final state of B's qubit q_3 is identical to the initial value of A's qubit q_1.² Interestingly, neither of the agents knows the actual quantum state at any point of the computation; that is to say, initially nobody knows that q_1 is in the state |ψ⟩, and there is no future point in the protocol at which either A or B knows that q_3 is in the state |ψ⟩. The basic reason for this is of course that for all input states |ψ⟩ the configurations C(|ψ⟩) are considered equivalent by all agents, and therefore they can conclude nothing about properties |ψ⟩ may have. Apart from statements about classical message passing, in TP the only knowledge transfer deals with the correlation between initial and final states of the network, not with the actual form of the quantum input. To be more precise, we have that

C_1, TP ⊨ A◇K_B(q_3 = init(q_1)) ,

since at the last step of the computation B knows that it must have the original input state. However, since A cannot distinguish the last two time steps, we also have that

C_1, TP ⊨ A□¬K_A K_B(q_3 = init(q_1)) .

The latter two statements may seem odd in that we are talking about states that the agents know nothing about. However, even without knowing a state, one may still have information about how it compares with other states. There is nothing strange about this, as this sort of thing happens with classical correlations too. What it does show, however, is that there is no actual quantum knowledge transfer in the TP network: there was no quantum knowledge about the input to begin with! We can only say something about the relation of the initial to final quantum states.

² We refer to the qubit named q_i as qubit i in semantical derivations.
Note that our analysis is in stark contrast with the one found in [9], which is jointly in terms of K^c_i and K^q_i. As mentioned above, the latter is based upon equality of reduced density matrices. Next to our earlier objections to this approach, such an analysis becomes increasingly awkward when applied to the teleportation protocol, since the basis of TP is that the initial state is unknown. In fact, the authors themselves note that their analysis leads to difficulties. Concretely, in their framework the conclusion is that initially A has quantum knowledge of |ψ⟩ (i.e. A knows its initial reduced density matrix, which is just |ψ⟩⟨ψ|) while B does not, and that eventually B knows the initial state |ψ⟩, i.e. the same reduced density matrix. However, if Alice teleports a single qubit to Bob she certainly has not transmitted a continuum of information. Indeed, Bob needs many such qubits to determine, via statistical analysis, which quantum state has been teleported. Moreover, as pointed out by the authors themselves, their notion of knowledge allows B to distinguish the four possible network states even before A has sent the measurement results through, i.e. at the second step of the computation. This is not the case: in fact the classical message passing is crucial for the success of the protocol, as without this information Bob's state is given by the maximally mixed state. All these arguments strengthen our point: analysing teleportation from an epistemic point of view has nothing to do with quantum states themselves, but rather with the relationships between them. Our point is that, although quantum mechanics can be used to transmit information in unexpected ways, there is no such thing as quantum knowledge; it is all classical knowledge, albeit about quantum systems.
Conclusion
We have developed a formal framework for investigating epistemic and temporal notions in the context of distributed quantum systems. While we rely on structures developed in prior work, our notion of quantum knowledge makes sense more generally in any agent-based model of quantum networks. Several arguments are given to support our view that an agent's possibility relation should not be based on the reduced density matrix, but rather on local classical states and local quantum operations. In this way, we are able to analyse distributed primitives from a knowledge-based perspective. Concretely, we investigated superdense coding and teleportation, obtaining interesting conclusions as to how the knowledge of individual agents evolves. We have explicitly shown that the knowledge transfer in teleportation is essentially classical, in that eventually, the receiving agent only knows that its state is equal to the initial state of the sender. The relevant epistemic statements for teleportation deal with this correlation rather than with the actual quantum state, which is unknown throughout the protocol.
Epigenetic modifications by polyphenolic compounds alter gene expression in the hippocampus
ABSTRACT In this study, we developed an experimental protocol leveraging enhanced reduced representation bisulphite sequencing to investigate methylation and gene expression patterns in the hippocampus in response to polyphenolic compounds. We report that the administration of a standardized bioavailable polyphenolic preparation (BDPP) differentially influences methylated cytosine patterns in introns, UTRs and exons of hippocampal genes. We subsequently established that dietary BDPP-mediated changes in methylation influenced the transcriptional pattern of select genes that are involved in synaptic plasticity. In addition, we showed dietary BDPP-mediated changes in the transcriptional pattern of genes associated with epigenetic modifications, including members of the DNA methyltransferase family (DNMTs) and the ten-eleven translocation methylcytosine dioxygenase family (TETs). We then identified the specific brain-bioavailable polyphenols effective in regulating the transcription of DNMTs, TETs and a subset of differentially methylated synaptic plasticity-associated genes. The study implicates the regulation of gene expression in the hippocampus by epigenetic mechanisms as a novel therapeutic target for dietary polyphenols.
INTRODUCTION
Epigenetic modifications of the genome are a critical mechanism that controls the expression and types of genes transcribed from DNA. Within the brain, epigenetic modifications orchestrate the development and plasticity of synapses (Bongmba et al., 2011). Polymorphisms of genes that facilitate specific epigenetic modifications are associated with the formation of improper synapses and increase one's susceptibility to developing psychiatric disorders (Murphy et al., 2013). Differentially methylated regions (DMRs) of DNA are defined by the presence or absence of 5-methylcytosine (5mC) groups within the DNA template. The methylation status of cytosine residues in DNA is dependent upon the activity of epigenetic modifiers, such as DNA methyltransferases (DNMTs) or ten-eleven translocation methylcytosine dioxygenases (TETs). These epigenetic modifications are known to regulate gene expression in a region-specific manner. Methylation of cytosine residues found in gene promoter regions is associated with suppression of gene expression (Schübeler, 2015). However, evidence to date has yet to establish a consistent relationship between the methylation of intronic, exonic, or untranslated regions (UTRs) and the expression pattern of the genes' corresponding proteins.
Previous studies have established that dietary polyphenols alter the epigenetic characteristics of DNA by regulating the enzymatic activity of DNMTs (Paluszczak et al., 2010) and histone deacetylases (Chung et al., 2010). For example, recent evidence suggests that bioavailable metabolites derived from dietary BDPP, such as malvidin glucoside (Mal-Gluc), decrease the expression of the inflammatory cytokine IL-6 from peripheral blood mononuclear cells, in part through mechanisms involving inhibition of cytosine methylation in intronic regions of the IL-6 gene (Wang et al., 2018). Here we report that a standardized bioavailable polyphenolic preparation (BDPP) differentially influenced methylation patterns at cytosine residues in introns, UTRs and exons of hippocampal genes associated with brain plasticity, along with the concurrent transcriptional patterns of these genes. In addition, we found BDPP-mediated regulation of the transcription of epigenetic modifiers, including TETs and DNMTs, in the hippocampus.
The BDPP has a complex composition of polyphenol compounds, which yield a variety of bioavailable derivatives following metabolism in vivo (Vingtdeux et al., 2010; Wang et al., 2014, 2015). Based on this, in combination with our preliminary BDPP pharmacokinetic studies (Ho et al., 2013), we further demonstrate that individual polyphenol metabolites regulate epigenetic modifiers, ultimately influencing the expression of hippocampal genes associated with synaptic plasticity. Our results implicate epigenetic modifications altering gene expression as a novel therapeutic approach for treatment with dietary polyphenols.
BDPP treatment influences the expression of methylation-related epigenetic modifying genes
In order to test whether dietary BDPP can contribute to synaptic plasticity through epigenetic mechanisms, C57BL6 mice were randomly divided into two groups: vehicle treated (control, ctrl) and BDPP treated (BDPP). Following two weeks' treatment, the hippocampus was isolated for DNA and total RNA extraction (Fig. 1). In a first set of studies using real-time PCR, we quantified the expression of the epigenetic modifiers DNMTs and TETs, enzymes that are important for adding or removing methyl groups to or from the DNA, respectively (Rasmussen and Helin, 2016; Robert et al., 2003; Robertson et al., 1999). We found BDPP treatment significantly reduced the mRNA expression of DNMT1, DNMT3A, DNMT3B, TET2, and TET3 and significantly increased the mRNA expression of TET1 in the hippocampus as compared to ctrl (BDPP versus ctrl, P<0.05, Fig. 2). These results suggest BDPP-mediated activation of the DNA methylation machinery.
Differential methylation of genes in the hippocampus of mice treated with BDPP

Based on the observation that dietary BDPP influences the methylation status of genes, we initiated a genome-wide methylation profile analysis using RRBS technology followed by differential methylation analysis. Comparing BDPP to ctrl, we found 15 genes with differentially methylated DNA sequences. The DMRs ranged in length from ∼30 to ∼300 nucleotides and were found on many different chromosomes. Among these DMRs, the relative amount of methylated CpG was significantly reduced in six genes, while in nine genes it was significantly increased in the BDPP treatment group as compared to ctrl (Table 1).
Gene expression of differentially methylated genes in the hippocampus by BDPP
Since transcription can be a function of CpG DNA methylation, we next quantified by qPCR the expression of genes containing DMRs in the hippocampus of mice from the BDPP and ctrl groups. Among the genes with DMRs that were significantly hypermethylated in BDPP when compared to ctrl, we found significantly increased mRNA expression of OCM, FIGF and ElF4G and significantly reduced mRNA expression of ENOPH1 and CHI3L1 in the BDPP group, as compared to ctrl (Fig. 3A, BDPP versus ctrl, P<0.05). Among the genes with DMRs that were significantly hypomethylated in BDPP when compared to ctrl, we found a significant increase in the expression of Grb10 and Brd4 and a significant decrease in the expression of ITPKA and CAMK2 in the BDPP group, as compared to ctrl (Fig. 3B, BDPP versus ctrl, P<0.05). Although the majority of the DMRs were found in intronic regions, DMRs were also found in coding regions and one was found in an untranslated region (UTR). The locations of the DMRs, the differential methylation within them, and the expression of these specific genes are summarized in Table 2.
Specific polyphenol metabolites alter the expression of epigenetic modifying genes and differentially methylated genes

High-throughput bioavailability studies indicated that select BDPP-derived polyphenolic metabolites accumulate in the brain following dietary BDPP treatment (Wang et al., 2014) (Table 3). To screen for metabolites that alter the expression of epigenetic modifying genes and differentially methylated genes, we treated primary embryonic mouse cortico-hippocampal neuron cultures with brain-bioavailable polyphenol metabolites and measured mRNA expression of the epigenetic modifiers DNMT1, DNMT3B, TET1, TET2 and the selected differentially methylated genes GRB10, ITPKA, CAMK2A, and ABPP2. These differentially methylated genes were chosen based on their contribution to synaptic plasticity (Guénette et al., 2017; Kim and Whalen, 2009; Shonesy et al., 2014; Xie et al., 2014).
We found that compared to DMSO-treated ctrl, primary embryonic mouse cortico-hippocampal neurons treated with R-GLUC had decreased expression of DNMT1 (Fig. 4A, R-GLUC versus ctrl, P<0.05) and increased expression of TET1 (Fig. 4C, R-GLUC versus ctrl, P<0.05) and TET2 (Fig. 4D, R-GLUC versus ctrl, P<0.05). In addition, treatment with DEL and HBA increased …

[Fig. 1. BDPP treatment alters the expression of epigenetic modifying genes in the hippocampus of C57BL/6 mice. Fold change of mRNA expression of DNMT1, DNMT3A, DNMT3B, TET1, TET2 and TET3 in hippocampal extracts from BDPP-treated mice (BDPP) relative to those from vehicle-treated control mice (ctrl), assessed by qPCR. Expression was normalized to that of the housekeeping gene HPRT. Data are means±s.e.m. of 5-11 mice per condition (*P<0.05, **P<0.005, unpaired two-tailed t-test). These results suggest BDPP-driven brain-bioavailable polyphenols contribute to the activation of the DNA methylation machinery.]

We then examined the expression of differentially methylated genes that associate with synaptic plasticity. The effect of the selected brain-bioavailable phenolic compounds on gene expression is summarized in Table 4. We found all brain-bioavailable phenolic metabolites significantly increase the expression of GRB10 in primary embryonic mouse cortico-hippocampal neurons compared to DMSO-treated ctrl (Fig. 4E, phenolic metabolites versus ctrl, P<0.05). Treatment with brain-bioavailable polyphenol metabolites (e.g. MAL, Q-GLUC, DEL, CYA, RES, R-GLUC), but not phenolic acids (e.g. HBA, HPP), significantly increases the expression of CAMK2A (Fig. 4G, MAL, Q-GLUC, DEL, CYA, RES, R-GLUC versus ctrl, P<0.05). In addition, treatment with R-GLUC decreased the expression of ITPKA (Fig. 4F, R-GLUC versus ctrl, P<0.05) and treatment with Q-GLUC, CYA or HBA increased the expression of ABPP2 (Fig. 4H, Q-GLUC, CYA, HBA versus ctrl, P<0.05).
The inconsistent manner in which individual polyphenol metabolites alter gene expression suggests additive or cancelling effects of different metabolite combinations.
DISCUSSION
Epigenetic regulation of gene expression plays a critical role in orchestrating neurobiological pathways. The disruption of epigenetic networks is implicated as the source of a number of human brain disorders including autism, major depressive disorder and schizophrenia (Egger et al., 2004; Small et al., 2011). Hippocampal function in particular is susceptible to alterations in epigenetic mechanisms, which result in deficiencies in long-term memory (Levenson and Sweatt, 2005; Sigurdsson and Duvarci, 2016) and synaptic plasticity (Yu et al., 2015). We have previously reported that dietary BDPP is effective in protecting against impaired performance in hippocampus-dependent cognitive tasks under conditions such as sleep deprivation, stress, and neurodegeneration (Pasinetti, 2012; Wang et al., 2012, 2014; Zhao et al., 2015). The principal objective of our study was therefore to explore the impact of BDPP on DNA methylation and the resultant gene expression in the hippocampus. We established that supplementation with dietary BDPP caused the differential expression of epigenetic modifiers, which are involved in the addition or removal of methyl groups from DNA cytosine residues. Through epigenetic profiling of hippocampal DNA, we present a list of hippocampal genes that had differential methylation of CpG sites following administration of BDPP and show that a number of these genes exhibit a concurrent change in their mRNA expression pattern. Furthermore, we identified specific brain-bioavailable polyphenol metabolites that caused differential expression of both epigenetic modifiers and a subset of the differentially methylated genes.
The methylation architecture of DNA is initially established by the de novo DNA methyltransferases DNMT3A and DNMT3B (Okano et al., 1999), and then maintained during DNA replication and in senescent cells by the maintenance methyltransferase DNMT1 (Robert et al., 2003). In order to maintain the steady-state equilibrium of methylated/non-methylated CpGs, active DNA demethylation is initiated by TET1 (Guo et al., 2011), TET2 (Ko et al., 2010) and TET3. Our finding that BDPP decreased the expression of DNMT3A, DNMT3B and DNMT1, concurrent with an increase in the expression of TET1 and a decrease of TET2 and TET3 in the hippocampus, indicates BDPP may elicit genome-wide changes in methylation patterns by altering the ratio of DNMTs to TETs. Alterations to the ratio of epigenetic modifiers skew the steady state of methylated DNA CpG sites towards hypermethylated or hypomethylated states (Pastor et al., 2013). In support of this principle, we show that BDPP treatment resulted in the hypermethylation of nine genes and the hypomethylation of six genes in the hippocampus. The differential methylation of genes induced by BDPP was nonspecific with regard to location in the gene; differential methylation was observed in intronic, exonic, as well as UTR regions. Only nine of the differentially methylated genes had simultaneous changes to their mRNA expression pattern. Separate mechanisms may therefore be involved in regulating gene transcription, such as the affinity of transcription factors for regulatory binding domains (Zaret and Carroll, 2011), cis-regulatory elements (Wittkopp and Kalay, 2011), or histone acetylation (Lawrence et al., 2016). Furthermore, hypermethylation or hypomethylation of CpG sites in a gene did not predict gene expression. Previous studies suggest that gene expression may be a function of the location of methylation within a gene. While increased methylation of gene promoter regions decreases gene expression (Schübeler, 2015), there is no defined or consistent relationship between methylation of intronic (Unoki and Nakamura, 2003), exonic (Jones, 1999) or UTR regions (Eckhardt et al., 2006; Reynard et al., 2011) and gene expression. For example, while methylation of upstream exon regions proximal to the 5′ transcription start site decreased gene expression (Brenet et al., 2011), the methylation of downstream exonic regions paradoxically increases gene expression (Jones, 1999; Kuroda et al., 2009). Our studies similarly found that hypermethylation of exonic regions resulted in either decreased gene expression or no corresponding change. In addition, hypermethylation and hypomethylation of intronic CpG sites yielded decreases, increases or no change in gene expression. The tenuous relationship between methylation of gene body regions and gene expression, as illustrated in our study, may reflect the putative role of CpG site methylation in determining splice variant production. Methylation of exonic regions and intronic regions can promote alternative splicing through regulating RNA polymerase inclusion of exons (Maunakea et al., 2013). The use of pan primers in our experiment may have masked the effects of methylation in mediating the production of specific splice variants. Methylation of gene body regions may also play a role in promoting chromatin structure (Choi, 2010). However, establishing a relationship between methylation and splice variants is beyond the scope of this study.

[Table 1. C57BL/6 mice were treated with a polyphenol-free diet for 2 weeks followed by a 2-week treatment with either vehicle (ctrl) or BDPP. Hippocampus genomic DNA was isolated and subjected to RRBS analysis. Fifteen genes, mapped to chromosome (chr), were found to have differentially methylated regions at their CpG sites. Mean methylation differences in ctrl versus BDPP were averaged from CpG sites within the defined region; positive values represent hypermethylation and negative values represent hypomethylation. The administration of BDPP to mice caused both hyper- and hypomethylation events in fifteen genes in hippocampal neurons: nine genes were hypermethylated and six hypomethylated. Importantly, differential methylation was neither locus specific nor chromosome specific.]
DNA methylation is crucial for memory formation, as demonstrated in a number of organisms (e.g. honey bees, mollusks and rodents) and learning paradigms (Zovkic and Sweatt, 2013). TET-mediated DNA demethylation is involved in the regulation of long-term memory formation as well (Kaas et al., 2013; Rudenko et al., 2013). Our finding of BDPP-mediated alteration of DNMT and TET gene expression suggests a mechanism for BDPP's beneficial effect on memory (Zhao et al., 2015). In addition, a subset of the hippocampal genes that were both differentially expressed and methylated, including BRD4, CAMK2A, ENOPH1, GRB10, ITPKA and ABPP2, have been previously implicated as regulators of neuronal activity or synaptic plasticity (Guénette et al., 2017; Kim and Whalen, 2009; Shonesy et al., 2014; Xie et al., 2014). The differential expression of both epigenetic mediators and plasticity-related genes following supplementation with BDPP may therefore influence synaptic plasticity, and implicates epigenetic mechanisms as a potential mediator of hippocampal function.
We showed that the brain-bioavailable polyphenolic metabolite R-GLUC can alter the expression of the epigenetic modifiers DNMT1, TET1 and TET2 in primary neuronal cultures, suggesting its ability to alter DNA methylation. Previous studies showed that the polyphenol metabolite MAL inhibits DNA methylation through increasing histone acetylation (Wang et al., 2018), suggesting that specific brain-bioavailable polyphenols may modulate DNA methylation through mechanisms other than DNMTs and TETs. In agreement with other studies showing the ability of specific polyphenol compounds to mediate the expression of genes involved in synaptic plasticity (Hsieh et al., 2012; Zhong et al., 2012), we showed that, when separately administered, the polyphenolic metabolites R-GLUC or MAL have either an enhancing effect or no effect on the expression of genes associated with synaptic plasticity, such as GRB10, ITPKA, CAMK2A, and ABPP2. Our results suggest that the net effect of BDPP on epigenetic mechanisms of gene expression results from the pleiotropic nature of the BDPP-derived bioavailable polyphenol metabolites and their cumulative effect on gene expression, which may be to promote, decrease or cause no change (Fig. 5). However, the pleiotropic effects of combinations of polyphenol metabolites should be further investigated to better understand how their interactions contribute to the expression of both epigenetic modifiers and synaptic plasticity-related genes.
Collectively, our results demonstrate that the administration of a dietary polyphenol preparation to mice alters the methylation status of the CpG islands of 15 genes in the hippocampal formation. Changes in gene methylation in the hippocampus occurred simultaneously with the differential expression of epigenetic modifiers in the TET and DNMT classes. An epigenetic mechanism may therefore be responsible for the observed changes in the mRNA expression of genes in the hippocampus that are associated with synaptic plasticity. Future studies will continue to investigate BDPP-mediated differential gene expression via epigenetic modification as a mechanism for resilience against hippocampal-dependent cognitive dysfunction. Given the safety and tolerability of BDPP, our preclinical study provides a basis for the potential translational application of dietary polyphenol compounds in promoting resilience to cognitive deficits by targeting epigenetic mechanisms.
Animals
C57BL6/J male mice (Mus musculus), n=24, were purchased from The Jackson Laboratory at 12 weeks of age and group housed (five mice per cage) in the centralized animal care facility of the Center for Comparative Medicine and Surgery at the Icahn School of Medicine at Mount Sinai. All animals were maintained on a 12:12 h light/dark cycle with lights on at 07:00 h, in a temperature-controlled (20±2°C) room. All mice were allowed to adapt to the new environment for at least 2 weeks and were tested at 4-5 months old. For assessing BDPP effects, mice were randomly assigned to a vehicle-treated control group (n=12) or a BDPP-treated group (n=12). The calculated daily intake was 200 mg/kg body weight (BW) for GSE, 400 mg/kg BW for resveratrol and 183 mg/kg BW for the total polyphenols from juice extract. Mice were given BDPP delivered through their drinking water for 2 weeks prior to the experiment and the drinking solution was changed once every 2 days. Mice were euthanized with CO2 and hippocampi from each hemisphere were separately dissected, gently rinsed in ice-cold PBS, snap-frozen and stored at −80°C until further analyses. For all experiments, mouse body weight and food consumption were assessed once a week (data summarized in Fig. S1). Liquid consumption was assessed every 2 days. Mouse maintenance and use were approved by the Mount Sinai Animal Care and Use Committee.
DNA and RNA extraction
For molecular investigation of the BDPP effect, mice were euthanized with CO2 following 2 weeks of treatment. Hippocampi from each hemisphere were separately dissected, gently rinsed in ice-cold PBS and snap-frozen on dry ice for DNA and RNA studies. DNA and RNA from mouse hippocampus were simultaneously extracted from homogenized tissue using the Qiagen AllPrep DNA/RNA kit according to the manufacturer's instructions. Samples were stored at −80°C before further use. Total RNA from primary embryonic cortico-hippocampal neuronal cultures was isolated and purified using the RNeasy Mini Kit (Qiagen) according to the manufacturer's instructions. Total RNA was eluted with nuclease-free water. The optical density (OD) 260/280 ratio was measured using a Nanodrop spectrophotometer (PeqLab Biotechnology, Erlangen, Germany) and ranged between 1.9 and 2.1. RNA samples were stored at −80°C before further use.
[Table 2. C57BL/6 mice were treated with a polyphenol-free diet for 2 weeks followed by 2 weeks of BDPP treatment. The hippocampi from vehicle- or BDPP-treated mice were isolated and total DNA and RNA were extracted. Genomic DNA was subjected to RRBS analysis and RNA was used for qPCR gene expression measurement. The administration of BDPP resulted in differentially methylated regions (DMRs), located in intronic, exonic or untranslated regions (UTR), and differential transcription of select genes in the mouse hippocampus. An upward arrow (↑) signifies an increase in either gene expression or hypermethylation of the DMR; a downward arrow (↓) indicates a decrease in gene expression or hypomethylation of the DMR. No significant changes are indicated by (-), and N.A. indicates not measured.]

[Table 3. List of six polyphenol metabolites and two phenolic acids found to accumulate in rats' plasma and/or brain following oral administration of (*) BDPP (200 mg GSE, 400 mg RSV and 183 mg CGJ/kg body weight/day), or BDPP dietary components including (**) all-trans resveratrol (400 mg/kg body weight/day) and (***) GSPE (250 mg/kg body weight/day). Phenolic compounds are clustered according to their polyphenol structural classes. a Cmax, mmol/l; b plasma concentration, µM; c brain concentration, nM; d brain concentration, µM; ND, not detectable. Values are mean±s.e.m.]
Gene expression
In this study, 1 µg of total hippocampal RNA and 400 ng of cell culture RNA were reverse transcribed with the SuperScript III first-strand kit (Invitrogen). Real-time PCR was performed to confirm or identify genes of interest. Gene expression was measured in four replicates by quantitative RT-PCR using Maxima SYBR Green master mix (Fermentas, Waltham, USA) on an ABI Prism 7900HT. The hypoxanthine phosphoribosyltransferase (HPRT) expression level was used as an internal control. Data were normalized using the 2^(−ΔΔCt) method (Livak and Schmittgen, 2001). Levels of target gene mRNAs were expressed relative to those found in ctrl mouse hippocampal tissue for the in vivo studies, and relative to untreated cells with BDNF induction for the cell culture studies, and were plotted in GraphPad Prism. The primers used for the gene expression studies are listed in Table 5.
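For reference, the 2^(−ΔΔCt) normalization reduces to a few lines of arithmetic. Below is a minimal Python sketch; the Ct values are made up for illustration and do not come from this study.

    def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
        """Relative expression by the 2^(-ddCt) method (Livak and Schmittgen, 2001)."""
        d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to HPRT
        d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
        dd_ct = d_ct_treated - d_ct_ctrl
        return 2.0 ** (-dd_ct)

    # Hypothetical Ct values for one target gene and the HPRT internal control:
    fc = fold_change(ct_target_treated=24.1, ct_ref_treated=20.0,
                     ct_target_ctrl=25.3, ct_ref_ctrl=20.1)
    print(f"fold change relative to ctrl: {fc:.2f}")   # > 1 means up-regulation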
Enhanced reduced representation bisulphite sequencing

Base call files generated from the sequencer were demultiplexed and converted to FASTQ files using the CASAVA software (CASAVA, RRID: SCR_001802). These reads were then aligned to the mm10 build of the mouse genome and post-processed to produce methylation calls at base-pair resolution using a previously described pipeline developed at the Epigenomics Core, Weill Cornell Medicine.
Differential methylation analysis
CpG sites within the defined regions in the resulting RRBS data were then interrogated for methylation patterns and differential methylation (q-value<0.01 and methylation percentage difference of at least 25%) using the methylKit package in R (methylKit, RRID:SCR_005177). The differential methylation data were then queried for differentially methylated regions (DMRs) using eDMR. Downstream statistical analyses and plots were generated using the R software environment for statistical computing.
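The per-site test behind such an analysis can be illustrated compactly. The Python sketch below is our own illustration, not the methylKit implementation: it applies a Fisher exact test per CpG on made-up read counts, uses Benjamini-Hochberg adjustment as a stand-in for q-values, and keeps sites passing the same thresholds used here (q<0.01 and ≥25% methylation difference).

    from scipy.stats import fisher_exact
    from statsmodels.stats.multitest import multipletests

    # Each CpG: (methylated, unmethylated) read counts in ctrl and BDPP (hypothetical).
    sites = {
        "chr1:1000": ((40, 60), (75, 25)),
        "chr2:2500": ((50, 50), (55, 45)),
        "chr7:9100": ((80, 20), (30, 70)),
    }

    names, pvals, diffs = [], [], []
    for name, (ctrl, bdpp) in sites.items():
        _, p = fisher_exact([list(ctrl), list(bdpp)])
        meth_ctrl = 100 * ctrl[0] / sum(ctrl)
        meth_bdpp = 100 * bdpp[0] / sum(bdpp)
        names.append(name)
        pvals.append(p)
        diffs.append(meth_bdpp - meth_ctrl)

    _, qvals, _, _ = multipletests(pvals, method="fdr_bh")
    for name, q, d in zip(names, qvals, diffs):
        if q < 0.01 and abs(d) >= 25:
            direction = "hyper" if d > 0 else "hypo"
            print(f"{name}: {direction}methylated in BDPP ({d:+.1f}%, q={q:.2e})")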
Effect of select bioavailable polyphenols on gene expression
C57BL/6 mice were treated with BDPP for 2 weeks. RNA was extracted from the hippocampus of vehicle- or BDPP-treated mice. The effect of brain-bioavailable, BDPP-derived polyphenols on gene expression was assessed in primary neuronal cell cultures treated with the polyphenol metabolites, including malvidin-glucoside (MAL).

[Fig. 5. Schematic of the design of the experiments aimed to examine BDPP-mediated altered gene expression through epigenetic mechanisms. C57BL/6 mice were treated with a polyphenol-free diet for 2 weeks followed by a 2-week BDPP treatment. The hippocampus was isolated and total DNA and RNA were extracted. Primary embryonic cortico-hippocampal neuronal cultures were treated with specific brain-bioavailable BDPP-derived polyphenol metabolites prior to RNA extraction. Mouse genomic DNA was subjected to RRBS analysis. Mouse and primary embryonic cortico-hippocampal neuronal culture RNA was used for qPCR gene expression measurements of epigenetic modifying genes and genes with differentially regulated DMRs.]
Infant mortality in the Flemish Region of Belgium 1999-2008: a time-to-event analysis
Background When calculating life expectancy, it is usually assumed that deaths are uniformly distributed within each of the age intervals. As most of the infant deaths are neonatal deaths, this calls for a better assessment for that age group. Methods The Flemish unified death and birth certificates database for all calendar years between 1999 and 2008 was used. A Kaplan-Meier survival analysis on a yearly basis was performed to assess the mean time-to-event and to compare survival curves between both genders. Results Over the last years, a slight though not steady decrease of the infant mortality rate is observed. In 2008, the probability among live births of dying before their first anniversary is 4.6‰ in boys and 3.5‰ in girls. The large majority (about 85%) of these have died in their year of birth. The mean survival time of deaths in their year of birth was found to centre around 1 month (about 30 days), which results in a 'mean proportion of the calendar year lived' (k1) close to 0.09. Among those who died in the year after their year of birth yet before their first anniversary, no such concentration in time of the deaths is observed. Differences between the gender groups are small and generally not statistically significant. Conclusion Statistics Belgium, the federal statistics office, imputes a value for k1 equal to 0.1 for infant deaths in their year of birth when calculating life expectancy. Our data fully support this value. We think such refinement is generally feasible in calculating life expectancy.
Background
Objective

When calculating life expectancy, it is assumed that deaths are uniformly distributed within each of the age intervals, which translates into the imputation of an additional 0.5 years of life for the deceased in their year of death. This generally holds for all ages, except for the youngest age group, and probably for the oldest age group as well (above 80) [1-3].
Looking at infant mortality, the striking feature is indeed that most of the deaths among live births are concentrated in the very first days. This urges us to adopt some factor k notably below 0.5 for the mean proportion of the calendar year lived by infants who die in their first year of life.
Our aim is to assess this factor k by analyzing data for the Flemish Region in Belgium. Which kind(s) of k-factor should be considered, however, depends on the sort of life table used.
Location of k-factors within the life table
Usually, life expectancies are derived from so-called period life tables, in which age-specific mortality risks, based on observations that occurred within successive birth cohorts in a given period of time (typically a calendar year), are applied to one hypothetical birth cohort under the assumption that the risks do not change over time.
Two models of period life tables can be distinguished, depending on the kind of age groups that are observed: a) one with the age at the start of the calendar year (or equivalently, the age 'attained' at the end of the calendar year), and b) one with the age at the last birthday [2,4]. This is also referred to as age expressed in completed years versus age in exact years, respectively. Figure 1 illustrates on a Lexis-diagram, with calendar year on the x-axis and age on the y-axis, the way in which the successive birth cohorts build up the hypothetical birth cohort in both models.
To calculate life expectancy (at birth), it is necessary to ascertain correct values for the person-years lived in each of the discerned parallelograms of the hypothetical cohort in both models, and in the case of the model with age reached on January 1 st , also in its base triangle a1. In doing so, it is noteworthy that in model (a) with age attained on January 1 st , each parallelogram depicting one age group or birth cohort actually covers 2 ages, whereas in model (b) with age at last birthday, each age group covers 2 birth cohorts (suitably projected on 2 calendar years in the hypothetical birth cohort).
In model (a), we assume that the newborns of year t who survive until the end of the year will on average have lived 0.5 years, insofar as births are uniformly spread over the entire calendar year. This can be deduced from the length of the midline connecting the midpoints of the rectangular sides in triangle a1. On the other hand, the newborns of year t who have died in the set time interval depicted by triangle a1 will on average have lived some observed time length equal to k1 years, with k1 less than 0.5, or even less than the expected value (0.25) for that time interval, given uniform distributions of births and deaths.
In model (b), parallelogram A' shows on the hypothetical cohort that the newborns of year t who reach their first anniversary will all have lived 1 year. The infants who died in their first year of life will either have died before the end of their year of birth (in triangle a1), or else in the next year before their first anniversary (in triangle a2'). The mean proportion of the calendar year lived by the deceased infants is then the weighted average of the mean proportions observed in both discerned periods, that is k = k1·w1 + k2·w2. Here k1 refers to the mean proportion of a calendar year lived by the deceased in the base triangle (a1), and k2 refers to the mean proportion of a calendar year lived since birth by the deceased in the next triangle (a2') during their imagined passage through parallelogram A' (comprising both triangles). The weights w1 and w2 then refer to the proportions of infant deaths in the year of birth and in the next year before the first anniversary, respectively. From the figure, it should be clear however that k2 and w2 are actually derived from observations made in the former birth cohort (that is, in triangle a2 within the base square of the observation year).
Database
The data source for this research is the Flemish unified death and birth certificates database, which is operated by the Flemish central administration. This contains data on all live births and all deaths of infants with a legal residence in the Flemish Region that were registered in either the Flemish or Brussels Capital Region. It includes births and deaths in the resident refugee population.
More particularly, our analyses include the following data for all years of birth between 1999 and 2008: • the number of registered live births for mothers with legal residence in the Flemish Region, by year of birth and by gender; • the number of registered deaths of infants with legal residence in the Flemish Region that died in their year of birth (excluding still births). This is broken down by the number of days lived, by year of birth and by gender; • the number of registered deaths of infants with legal residence in the Flemish Region that died in the year following their year of birth but before their first anniversary. This is broken down by number of days lived, by year of birth and by gender.
Survival analysis
To examine the time-to-event of interest, i.e. the number of days lived by the deceased either in their year of birth or in the following year before the first anniversary, a survival analysis was performed using SPSS 16.0 for Windows. More particularly, the Kaplan-Meier procedure was applied, which makes it possible to compare survival distributions among subgroups. To test the equality of the survival curves, the Breslow chi-square statistic is reported, in which time points are weighted by the number of cases at risk at that time point.
It is important to note that 'the number of days lived' was recorded as 'completed days', i.e. those who died on their birthday have 0 days on their record, those who died the very next day have 1 day on their record, etc. Assuming a uniform distribution of deaths within the calendar day, this (again) leads to the substitution of an average of 0.5 days lived for those who died on their birthday, an average of 1.5 days for those who died the following day, etc. These averages on a daily basis were duly taken into account. Graphs of the survival functions within the year of birth are pictured, both on a linear scale and a log scale, the latter being more apt to picture small differences between subgroups (Figure 2).
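To make the day-counting convention concrete, the following sketch applies the half-day midpoint correction to 'completed days' records, builds the Kaplan-Meier survival curve within the year of birth, and derives k1 as the mean survival time of the deceased divided by 365. It is a minimal numpy illustration on invented counts; the cohort size, the death tally and all variable names are our own assumptions, not registry data.

```python
import numpy as np

# Invented toy records: deaths in the year of birth, indexed by
# completed days lived (0 = died on the calendar day of birth).
deaths_by_day = {0: 18, 1: 9, 2: 6, 7: 4, 30: 3, 90: 2, 200: 1}
n_births = 10_000                      # assumed cohort size

days = np.array(sorted(deaths_by_day))
d = np.array([deaths_by_day[t] for t in days], dtype=float)

# Midpoint correction: 'completed days' t -> t + 0.5 days actually lived.
lived = days + 0.5

# Kaplan-Meier survival steps within the year of birth (survivors are
# censored at the year end, so the risk set shrinks only through deaths).
at_risk = n_births - np.concatenate(([0.0], np.cumsum(d)[:-1]))
survival = np.cumprod(1.0 - d / at_risk)   # S(t) just after each death time

# Mean survival time of the deceased, and k1 as a fraction of the year.
mean_days = np.sum(lived * d) / d.sum()
k1 = mean_days / 365.0
print(f"mean survival of deceased: {mean_days:.1f} days, k1 = {k1:.3f}")
```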
The mean proportion of the calendar year lived by the deceased in the year of birth (k1) was derived from their mean survival time (in days). Likewise, the mean proportion of the calendar year lived by the deceased in the year after the year of birth yet before the first anniversary (k2), was derived from their mean survival time since birth (in days).
Differences in proportions were tested with the usual independent samples t-test, assuming the validity of the central limit theorem for large samples. Only the P-value is reported. The usual level of significance is adopted (α = 0.05).
Infant mortality rates
Between 1999 and 2008, the number of registered live births per annum for mothers having their residence in the Flemish Region roughly ranged between 60,000 and 70,000, with boys slightly outnumbering girls (sex ratio close to 1.05). The lowest number of births was recorded in 2002 (60,161), the highest in 2008 (69,276). Figure 3 shows the probability of infant mortality by gender, i.e. the probability among registered live births of dying before the first anniversary. A slight though not steady decrease is observed over the years: from 5.4 in 1999 to 4.6 deaths per thousand live births in 2008 (-14%) for boys, and from 4.6 in 1999 to 3.5 per thousand live births in 2008 (-24%) for girls.
In addition, the figure displays (a) the probability of dying in the year of birth and (b) the probability of dying in the next year before the infant's first anniversary. This clearly shows that the large majority of those who died before their first anniversary actually did so in their year of birth. The average share for all observation years is 85% in males and 87% in females (P = 0.68).
The probability of dying in the year of birth decreases from 4.6 to 4.0 per thousand live births (‰) between 1999 and 2008 in boys and from 3.8 to 3.2‰ in girls, with some fluctuations over that period. Note that a significant difference between both gender groups was found only for the years 2001 (P = 0.0071) and 2002 (P = 0.019).
The probability of dying after the year of birth but before the first anniversary is much lower, with stable mortality rates well below 1 per thousand. The average for all observation years is 0.7‰ in males and 0.5‰ in females (P = 0.34).
Mean time-to-event in the year of birth

Differences in survival graphs of the gender groups are generally not statistically significant on a yearly basis, the one exception being the year of birth 2000.
Infant mortality rate
Infant mortality has reached very low levels. Our figures for the Flemish Region come close to 4.5‰ for boys and 3.5‰ for girls. Recent figures for Belgium show rates below 5‰ [5]. The low observed mortality levels testify to the important progress that has been made in this respect over the last century [6], particularly also in the more recent past [7]. Looking at Figure 3, however, it becomes clear that over the last decade the recorded level is stabilizing, as if a floor had been reached. For similar reasons, a long-term threshold of 3‰ was applied in the 2008 federal population forecasts, referring to the then lowest level ever attained in a European country, i.e. Finland in 2002 [7]. Nevertheless, since the starting level was already very low, an overall decrease of about 20% over the last 10 years may still be labelled as relatively important. Moreover, it remains to be seen whether a real threshold can be reached. Indeed, the infant death toll today largely consists of (extreme) preterm births that until recently would probably not have been considered as live births [8]. The definition of live births itself is changing within the high-tech context of perinatal care. This implies that today's observations are only valid for the present situation.
Mean time-to-event
The main concern of this paper was to find which values are valid for the mean survival time since birth. The survival time is expressed as a proportion of the calendar year, for either the infants dying in their year of birth (k1) or the infants dying in the following year but before their first anniversary (k2). The average time-to-event in our database for the Flemish Region for infants who died in their year of birth (see base triangle a1 in Figure 1) was found to centre around 1 month (about 30 days), which results in a value for k1 equal to 0.08 or 0.09, regardless of gender. This matches the mean survival time of 0.1 year for that group of infants as adopted by Statistics Belgium, the federal office for statistics in Belgium [9].
For the group of infants who died in the next year, yet before their first anniversary, the mean survival time since birth approximates 0.5 year (with an average value of 0.22 years in the time segment depicted by triangle a2' on the hypothetical cohort in Figure 1). Nevertheless, as the mortality rate here becomes very low, broader random fluctuations in the yearly mean survival time surface.
In life tables with age at birth (model (b) in Figure 1), it is necessary to plug in some value for k, i.e. the mean proportion of the calendar year lived by infants deceased in their first year of life. This value can be seen as the weighted average of k1 and k2. So, for the group of male infants who died under age 1 in our study population (perceived to belong to one birth cohort), we could write: k = 0.1*(0.86) + 0.5*(0.14) = 0.156. From a practical point of view, k equal to 0.15 would do, which is the value adopted by Statistic Belgium [9].
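The weighting itself is a one-line computation; the sketch below reproduces the worked example above and makes the roles of k1, k2 and the weights explicit. The numbers are the ones quoted in the text for male infants.

```python
def weighted_k(k1: float, k2: float, w1: float) -> float:
    """Mean proportion of the calendar year lived by infants who died
    under age 1: weighted average over the two Lexis triangles."""
    w2 = 1.0 - w1
    return k1 * w1 + k2 * w2

# Quoted values: k1 = 0.1 (deaths in the year of birth), k2 = 0.5
# (deaths in the next year before the first anniversary), w1 = 0.86
# (share of infant deaths falling in the year of birth).
print(weighted_k(0.1, 0.5, 0.86))   # -> 0.156, rounded to 0.15 in practice
```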
There are however some caveats. First, there is no complete coverage of all births and deaths. Indeed, the Flemish registration system does not cover those births and deaths of residents of the Flemish Region that took place in the Walloon Region or abroad, and of which the birth or death certificates were not presented to the Flemish authorities. These are rare events and their absence should normally have no impact on our results. The missed infant deaths might be considered as non-identified right-censored cases (i.e. lost to follow-up). To give an idea of its rarity: its share in the total number of infant deaths was 2.1%. In a more theoretical sense, this problem of censored cases also pertains to infants that have migrated out of the region and possibly have died within the observation year. From migration data for the Flemish Region at our disposal, we learn that the age-specific emigration rate in the year of birth is small (e.g. 5‰ in 2007). Considering that about half of the infant deaths occur within the first week after birth, in which time period we do not expect families to migrate much, the impact of the censored cases must be very small indeed. Besides, in population statistics the deaths of persons who officially left the population are generally no longer taken into account.
Secondly, the calculation in days lived since birth tends to over-estimate the person-years lived by those who died in the very first days of life. If we were to count in hours instead, it would turn out that (a) more infants died within the first period of twenty-four hours than recorded within the first calendar day (some deaths on the second calendar day are then classified within the first natural day of life), and (b) these infants lived much less than half a day on average. For instance, in the year 2007 we find in our study population that 21% more deaths occurred on the first natural day of life than on the first calendar day (86 vs. 71 infants, respectively). The mean survival time is 3.5 hours, which makes for 0.15 days instead of the hypothesized 0.5 days of life. As roughly a quarter of the deaths among the infants who died in their year of birth occurred on the calendar day of birth itself, this may have some impact. When putting this to the test for infants that died in the first three days of life in 2007, we find that the value for k1 starts to change at the 3-digit precision level (from 0.0795 to 0.0788). The test for 2008 gives similar results (k1 changes from 0.0827 to 0.0825). Obviously, such small changes are negligible in the context of determining life expectancy at birth.
A need for harmonized statistics
Today, there is an important demand within the European Union for benchmarking, often with ranking of Member States (or their smaller regions) on some policy indicator. Generally, this is an interesting exercise provided equals are being compared with equals. For this reason, the European Demographic Observatory (ODE) of Eurostat published a manual with guidelines designed to harmonize algorithms for demographic indicators [10], including an appendix with a great deal of information on how to construct life tables [a reprint from [11]].
ODE stresses the fact that in the field of general population statistics both statistical quality and simplicity are key, which may call for a compromise. As births are usually followed up well in the year of birth, we think it feasible that at least for that time period (i.e. the base triangle on the Lexis diagram) a more precise assessment of the mean duration of life of deceased infants be taken into account. Here indeed, most of the infant deaths are concentrated, random variation is low and departure from a uniform distribution over time is largest. In our opinion, an estimate based on observations with a 1-digit precision may therefore meet the compromise.
Gender differences
Female babies in our study population show a somewhat better risk profile than their male counterparts, although the differences are minimal and generally not statistically significant on a yearly basis. By the same token, our data do not support the need to specify different values for the k-factors according to gender. This is not to say that other factors have no predictive value. For instance, a study on stillbirths and infant mortality among hospital births by mothers aged 25 years and over in 1999 in the Flemish Region has shown that the educational level of the mother is an important determinant of foetal and, to a lesser degree, early neonatal infant mortality [12]. For certain social factors, it may be worthwhile performing a study to see to what extent this is reflected in the deceased infants' mean duration of life.
Conclusion
Infant mortality in the Flemish Region of Belgium has reached very low levels. It remains to be seen, however, to what extent further progress will be possible.
As most of the deaths in the year of birth are concentrated within the first week after birth, Statistics Belgium imputes a value equal to 0.1 for the mean proportion of the calendar year lived by the deceased in the year of birth (k1) when calculating life expectancy. Our data fully support this value. We think such refinement on the 1-digit precision level is generally feasible in calculating life expectancy.
Quantification of gas-accessible microporosity in metal-organic framework glasses
Metal-organic framework (MOF) glasses are a new class of glass materials with immense potential for applications ranging from gas separation to optics and solid electrolytes. Due to the inherent difficulty of determining the atomistic structure of amorphous glasses, the intrinsic structural porosity of MOF glasses is only poorly understood. Here, we investigate the porosity features (pore size and pore limiting diameter) of a series of prototypical MOF glass formers from the family of zeolitic imidazolate frameworks (ZIFs) and their corresponding glasses. CO2 sorption at 195 K allows quantifying the microporosity of these materials in their crystalline and glassy states, also providing access to the micropore volume and the apparent density of the ZIF glasses. Additional hydrocarbon sorption data together with X-ray total scattering experiments prove that the porosity features of the ZIF glasses depend on the types of organic linkers. This allows formulating design principles for a targeted tuning of the intrinsic microporosity of MOF glasses. These principles are counterintuitive and contrary to those established for crystalline MOFs but show similarities to strategies previously developed for porous polymers.
Supplementary Figure 12. Overlay of the XRPD patterns of agZIF-zni' and agZIF-4' which were obtained by attempting to prepare the desired glasses without an isothermal segment in the TGA/DSC program (see Section S5.2). The black tick marks indicate the allowed Bragg peak positions for ZIF-zni (CCDC code IMIDZB).
Supplementary Methods 3 - Fourier-transform infrared (FTIR) spectroscopy data
The activation of all materials is demonstrated by the absence of the carbonyl stretching band of DMF at 1675 cm⁻¹. DMF is found in the as-synthesized materials as the template for the porous channels (except for ZIF-zni). 1,2 For ZIF-4 and the amorphous phases derived thereof, a broadening of the vibrational band at 835 cm⁻¹ ascribed to the out-of-plane ring deformation of the imidazolate linker 3 is observed (see Supplementary Figure 13). For the zni phases (ZIF-zni and zniTZIF-4) a much sharper band is observed, which may be caused by the higher density of the material corresponding to fewer degrees of freedom for this vibration.
Supplementary Methods 4 - 1H NMR spectroscopy data
The activation of all materials is demonstrated by the absence of signals ascribed to DMF. 1,2 For TIF-4, ZIF-62 and their corresponding glasses, the ratio between the implemented linkers has been determined from the integral corresponding to the proton attached to the carbon atom between the two nitrogen atoms in the imidazolate-type linkers. The corresponding signals are integrated in the spectra below. A deeper inspection of the 1H NMR spectroscopy data unveiled additional small signals in the ranges of 9.0 ppm to 9.5 ppm and 7.5 ppm to 8.5 ppm for agZIF-4 and agZIF-zni, which are ascribed to aromatic or polyaromatic compounds formed as the result of partial framework decomposition during high-temperature treatment under inert atmosphere (see Figure 2c for a zoom into the aromatic region). For agZIF-4 this has already been observed in the literature. 4 However, as shown by the intensity of the 13C satellite signals assigned to the protons of imidazole, the amount of decomposition is rather low.

For all DSC measurements a heating rate of +10 °C min⁻¹ was applied. Samples of ZIF-4 and ZIF-zni were heated to a maximum temperature of 600 °C. Samples of ZIF-62, TIF-4 and their corresponding glasses were heated to a maximum temperature of 485 °C. Data analysis was performed with the TRIOS (v5.1.0.46403) software from TA Instruments. The melting temperatures (Tm) are determined as the peak offset, the glass transition temperatures (Tg) as the peak onset, whereas all other derived temperatures are defined as the peak temperature. The enthalpies are determined from the integral of the corresponding signal.
Supplementary Methods 5.2 - Simultaneous thermogravimetric analysis / differential scanning calorimetry (TGA/DSC)
Several different temperature programs were utilized for the preparation of the thermal products of the investigated ZIFs starting with material of the solvothermally synthesized corresponding crystalline precursor (see Supplementary Table 4).
Supplementary Methods 8 - X-ray total scattering data
X-ray total scattering data have been collected for all investigated materials. From these data, pair distribution functions in the form D(r) have been calculated, showing long-range order correlations (up to at least 50 Å) for all crystalline materials (ZIF-4, ZIF-zni, zniTZIF-4, ZIF-62 and TIF-4), whereas the last intense peak is found at approx. 5.9 Å for all amorphous materials (aTZIF-4, agZIF-4, agZIF-zni, agZIF-62, agTIF-4; see Figure 2e and Supplementary Figure 42). This peak equals the Zn-Zn distance in the materials. The data are in accordance with previously reported total scattering data.
Supplementary Methods 8.1 - Fitting of the first sharp diffraction peak (FSDP)
The FSDP of the total scattering data in the form S(Q) (see Supplementary Figure 44) has been fitted to a pseudo-Voigt function for all investigated amorphous materials. From these fits we obtained the position of the FSDP (QFSDP) and the peak width at half maximum (∆QFSDP) (see Supplementary Table 5).

The absolute CO2 uptake recorded at 298 K and 4130 kPa amounts to 2.55 mmol g⁻¹. By considering the molar mass of CO2 (44.009 g mol⁻¹) and the saturated liquid phase density of CO2 at 298 K (0.7128 g cm⁻³), we determine a pore volume of ZIF-62 of 0.16 cm³ g⁻¹. This is the same value as the one determined from the CO2 sorption isotherms recorded at 195 K (applying the density of the supercooled liquid of CO2 extrapolated to 195 K, see Supplementary Figure 62). Hence, the high-pressure CO2 sorption data verify the robustness of our data analysis of the CO2 sorption data recorded at 195 K.

Supplementary Methods 9.2 - Surface area and pore volume analysis

BET 21 surface areas were determined with the Quantachrome ASIQwin version 5.2 software. The applied relative pressure ranges and quality factors are given in Supplementary Table 8. We again note that the BET model is not applicable to microporous materials. Hence, the BET areas must be taken with great care and cannot be considered as absolute physical values. We provide the values of the BET areas for comparison purposes only.
The specific micropore volumes (Vpore) were calculated according to

Vpore = n_ads(195 K, 95 kPa) · M_CO2 / ρ_liq

with n_ads(195 K, 95 kPa) the specific molar amount of gas adsorbed (mmol of gas per g of material) at 195 K and 95 kPa, M_CO2 the molar mass of CO2, and ρ_liq the density of the supercooled liquid at 195 K (that is, 1.258 g cm⁻³). In analogy to previous reports 23, ρ_liq is obtained from the linear extrapolation of the tabulated liquid phase density of CO2 from its triple point temperature (216.592 K) to 195 K (Supplementary Figure 62). Reference data for the liquid phase density of CO2 as a function of temperature are taken from the NIST Chemistry Webbook (https://webbook.nist.gov/cgi/cbook.cgi?ID=C124389). The obtained values are summarized in Supplementary Table 8.

The experimental void fractions (eVF) were calculated according to

eVF = Vpore · ρ_solid

with ρ_solid the density of the solid, either obtained from the crystallographic data (crystalline ZIFs) or obtained from the apparent density approximation (glassy ZIFs, see Section 9.4).

Table 9. Comparison of the specific micropore volumes (Vpore) of agZIF-62 and agTIF-4 obtained from gas sorption isotherms of n-butane (@273 K) and CO2 (@195 K).
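As a numerical cross-check of these two definitions, the short sketch below evaluates Vpore and eVF from an uptake value. The constants and the ZIF-62 high-pressure numbers are those quoted in this section; the function wrappers themselves are our own illustrative additions.

```python
M_CO2 = 44.009          # g mol^-1, molar mass of CO2
RHO_LIQ_195K = 1.258    # g cm^-3, supercooled-liquid density of CO2 at 195 K

def micropore_volume(n_ads_mmol_per_g: float, rho_liq: float = RHO_LIQ_195K) -> float:
    """Specific micropore volume (cm^3 g^-1) from the molar uptake,
    assuming the adsorbate fills the pores at liquid density."""
    mass_adsorbed = n_ads_mmol_per_g * 1e-3 * M_CO2   # g CO2 per g material
    return mass_adsorbed / rho_liq

def experimental_void_fraction(v_pore: float, rho_solid: float) -> float:
    """eVF = Vpore * rho_solid, with rho_solid in g cm^-3."""
    return v_pore * rho_solid

# High-pressure check for ZIF-62 quoted above: 2.55 mmol g^-1 at 298 K,
# evaluated with the saturated liquid density at 298 K (0.7128 g cm^-3).
print(micropore_volume(2.55, rho_liq=0.7128))   # ~0.157 cm^3 g^-1 -> 0.16
```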
Supplementary Methods 9.3 - Pore size distribution analysis
In analogy to previous studies 1, 19 , experimental pore size distributions (PSDs) were derived from CO2 isotherms at 273 K with the nonlocal density functional theory (NLDFT 28 , carbon equilibrium transition kernel at 273 K based on a slit pore model 29 ) using the Quantachrome ASIQwin version 5.2 software. This is the only NLDFT kernel implemented in the software packages of commercial gas physisorption analysers and thus often used in the MOF literature to derive PSDs. Theoretical PSDs were calculated with the Zeo++ 30 software package using the default CCDC radii (-ha 'high accuracy' flag 31 ) and the implemented routine for pore size distributions. The probe size was 0.1 Å and 5000 Monte Carlo samples per unit cell were averaged. The structures for the calculations were taken from the CSD database (ZIF-4: CCDC code IMIDZB11; ZIF-zni: CCDC code IMIDZB). In both structures, missing hydrogen atoms have been added geometrically with Olex2 32 .
Supplementary Figure 63. Theoretical pore size distribution calculated with Zeo++ for crystalline ZIF-4 and ZIF-zni in comparison to experimental pore size distributions calculated from the CO2 isotherms recorded at 273 K using the NLDFT method for the crystalline and glassy phases of these materials.
Supplementary Figure 63 demonstrates that the PSDs derived from experimental CO2 sorption isotherms of crystalline ZIF-4 and ZIF-zni recorded at 273 K by the NLDFT model (carbon, slit pore) are inconsistent with the theoretical PSDs derived from their crystal structures. It is evident that the NLDFT model for carbon materials with slit pore geometry is inappropriate for the calculation of PSDs of ZIF materials. This must be ascribed to the very different surface electrostatics (non-polar carbon vs. appreciably polar ZIFs) and the different pore geometries of the ZIFs. Since the PSDs of the crystalline ZIFs are not described accurately by the utilized NLDFT model, we conclude that the PSDs previously derived for ZIF glasses via the same method 1, 19 also do not represent a meaningful description of their pore structure.
Supplementary Methods 9.4 - Density approximation
The apparent densities of the glasses were determined from the correlation of the density to the specific pore volumes determined by CO2 adsorption isotherms at 195 K.
Feasibility test

First, the feasibility of the correlation was proven with theoretical and experimental considerations for ZIF-4 and ZIF-zni. Therefore, the theoretical and experimental void fractions were calculated. The theoretical void fractions (tVFs), based on the crystal structures of both materials, were calculated with the implemented routine in Olex2 32 applying a probe radius of 1.6 Å and a grid spacing of 0.2 Å. The structures were taken from the CSD database. In both structures, missing hydrogen atoms have been added geometrically with Olex2. The theoretical accessible pore space for ZIF-4 (CCDC code IMIDZB11) amounts to 28.7% and for ZIF-zni (CCDC code IMIDZB) to 7.5%. a The experimental void fractions (eVFs) were calculated from the specific micropore volumes (Vpore) derived from the CO2 adsorption isotherms at 195 K given in cm³ g⁻¹ (see Supplementary Table 8 and Supporting Information Section S9.2) multiplied by the crystallographic densities b (rcryst) of the materials (eVF = Vpore · rcryst, see Supporting Information Section S9.2 for further details). The densities are 1.22 g cm⁻³ (ZIF-4) and 1.56 g cm⁻³ (ZIF-zni). The corresponding eVFs amount to 30.6% and 6.2% for ZIF-4 and ZIF-zni, respectively, which are in very good agreement with the tVFs, demonstrating the feasibility of the methodology.
Exponential fit
The Vpore vs. rcryst data for the crystalline compounds ZIF-4 and ZIF-zni were complemented with the corresponding values for ZIF-62 and TIF-4 and fitted with an exponential fitting function (see Figure 3; R²-value = 0.998).
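A minimal version of this correlation-and-inversion step is sketched below: fit an exponential to Vpore vs. density points for the crystalline phases, then read off the apparent density of a glass from its measured Vpore by numerical inversion. Only the ZIF-4 and ZIF-zni densities are quoted in this section; the remaining densities, all Vpore values and the glass uptake are placeholders standing in for Supplementary Table 8.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Crystalline reference points: density (g cm^-3) vs. Vpore (cm^3 g^-1).
rho = np.array([1.22, 1.33, 1.38, 1.56])     # ZIF-4, ZIF-62*, TIF-4*, ZIF-zni (*assumed)
vpore = np.array([0.25, 0.16, 0.13, 0.04])   # placeholder pore volumes

def model(r, a, b):
    """Exponential correlation Vpore(rho) = a * exp(-b * rho)."""
    return a * np.exp(-b * r)

(a, b), _ = curve_fit(model, rho, vpore, p0=(10.0, 3.0))

# Apparent density of a glass: invert the fitted curve at its measured Vpore.
vpore_glass = 0.10                            # placeholder CO2-derived value
rho_app = brentq(lambda r: model(r, a, b) - vpore_glass, 0.5, 3.0)
print(f"apparent glass density ~ {rho_app:.2f} g cm^-3")
```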
a The same calculation has also been performed for ZIF-62 (CCDC code SIWJAM) and TIF-4 (CCDC code QOSYAZ). Before the calculation, some disordered groups were resolved and solvent molecules were removed where present. tVF values are given in Figure 1. We note that the comparison of these values to the corresponding eVFs is not applicable, because of some unresolvable residual disorder leading to partially occupied secondary linkers (bim− or mbim−).
b All densities for crystalline materials have been calculated from the mass of atoms in one unit cell and the unit cell volume determined via profile fits of XRPD data (see Supplementary Table 1).
The ANAIS-112 experiment at the Canfranc Underground Laboratory
The ANAIS experiment aims at the confirmation of the DAMA/LIBRA signal at the Canfranc Underground Laboratory (LSC). Several 12.5 kg NaI(Tl) modules produced by Alpha Spectra Inc. have been operated there during the last years in various set-ups; an outstanding light collection at the level of 15 photoelectrons per keV, which allows triggering at 1 keV of visible energy, has been measured for all of them and a complete characterization of their background has been achieved. In the first months of 2017, the full ANAIS-112 set-up consisting of nine Alpha Spectra detectors with a total mass of 112.5 kg was commissioned at LSC and the first dark matter run started in August, 2017. Here, the latest results on the detectors performance and measured background from the commissioning run will be presented and the sensitivity prospects of the ANAIS-112 experiment will be discussed.
Introduction
The ANAIS (Annual modulation with NaI(Tl) Scintillators) experiment intends to confirm the DAMA/LIBRA modulation signal [1] using the same target and technique in a different environment, at the Canfranc Underground Laboratory (LSC, Laboratorio Subterráneo de Canfranc) in Spain. This goal imposes strong experimental requirements: an energy threshold at or below 2 keVee, a background as low as possible below 10 keVee, and very stable operation conditions. Since the nineties, NaI(Tl) detectors from different suppliers have been in operation in Canfranc [2,3]; Alpha Spectra Inc. (AS) detectors have shown the best performance and radiopurity (characterized in set-ups with two or three detectors, referred to as ANAIS-25 [4] and ANAIS-37 [5], respectively) and were therefore selected to be used in ANAIS. In 2017, the ANAIS-112 experiment was commissioned: the ANAIS-112 set-up consists of a 3×3 matrix of 12.5 kg NaI(Tl) modules, and data taking has been underway since August 2017, to continue for at least the next two years in the same conditions. A blind annual modulation analysis is foreseen, as well as the public release of the ANAIS data after their scientific exploitation. The set-up of ANAIS-112 is described in section 2. Results concerning the detector performance and background are presented in sections 3 and 4. Finally, the sensitivity expected in the search for an annual modulation signal is discussed in section 5.
Experimental set-up
The nine modules used in ANAIS-112 were produced by AS in Colorado and then shipped to Spain over several years, the first of them arriving at LSC at the end of 2012 and the last by March 2017 (see table 1). Each crystal is cylindrical (4.75" diameter and 11.75" length), with a mass of 12.5 kg. NaI(Tl) crystals were grown from selected ultrapure NaI powder and housed in OFE (Oxygen Free Electronic) copper; the encapsulation has a mylar window allowing low energy calibration. Two Hamamatsu R12669SEL2 photomultipliers (PMTs) were coupled through quartz windows to each crystal in the LSC clean room. All PMTs have been screened for radiopurity using germanium detectors in Canfranc. The shielding for the experiment consists of 10 cm of archaeological lead, 20 cm of low activity lead, 40 cm of neutron moderator, an anti-radon box (to be continuously flushed with radon-free air) and an active muon veto system made up of plastic scintillators designed to cover the top and sides of the whole ANAIS set-up (see figure 1). The hut housing the experiment is in hall B of LSC under 2450 m.w.e. DAQ hardware and software of ANAIS-112 were tested in previous ANAIS set-ups. For each module, individual PMT charge output signals are digitized and fully processed. Triggering is done by the coincidence (logical AND) of the two PMT signals of any detector at photoelectron level in a 200 ns window, enabling digitization and conversion of the two signals. There is redundant energy conversion by QDC modules, and the building of the spectra is done off-line by adding the signals from both PMTs. The muon detection system based on plastic scintillators is fully implemented, allowing muon-related events to be tagged and the muon flux to be monitored on-site. The slow control system is also operative, monitoring different parameters like radon activity, humidity, pressure, several temperatures, N2 flux and PMT high voltage. In addition, a blank module will be set up to monitor non-NaI(Tl) scintillation events and build a "blank" population for the study of annual modulation systematics.
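To illustrate the trigger condition just described (a logical AND of the two PMT signals of a module within a 200 ns window), here is a toy two-pointer scan over per-PMT pulse timestamps. It is a schematic of the coincidence logic only, not the actual DAQ firmware; the timestamps and names are invented.

```python
def coincidences(t0: list[float], t1: list[float], window_ns: float = 200.0):
    """Return pairs of pulse times (one per PMT) closer than window_ns.
    t0 and t1 are sorted pulse timestamps (ns) of the two PMTs."""
    out, i, j = [], 0, 0
    while i < len(t0) and j < len(t1):
        dt = t1[j] - t0[i]
        if abs(dt) <= window_ns:
            out.append((t0[i], t1[j]))   # trigger: digitize both signals
            i += 1
            j += 1
        elif dt > 0:
            i += 1                        # PMT0 pulse has no partner yet
        else:
            j += 1
    return out

# Invented pulse times (ns): only the first pair lies within 200 ns.
print(coincidences([1000.0, 5000.0], [1150.0, 9000.0]))  # [(1000.0, 1150.0)]
```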
Detector performance
The light output measured for all AS modules is at the level of ∼15 phe/keV, which is a factor of two larger than that determined for the best DAMA/LIBRA detectors [6]. The fourth column of table 1 shows the preliminary results for the total number of photoelectrons per keV using ANAIS-112 data, following the same method applied in the ANAIS-25 and ANAIS-37 set-ups, described in [7]. The new estimate is in very good agreement with the previous ones for the D0-D5 detectors. This high light collection, possible thanks to the excellent crystal quality and the use of high quantum efficiency PMTs, has a direct impact on the energy threshold. Triggering below 1 keVee is confirmed by the identification of bulk 22Na and 40K events at 0.9 and 3.2 keVee, respectively, thanks to coincidences with the corresponding high energy photons following the electron capture decays to excited levels.
Effective filtering protocols for rejecting non-bulk scintillation events, similar to those described in [3] and optimized for each detector, have been applied. Multiparametric cuts based on the number of peaks in the pulses, the temporal parameters of the pulses and the asymmetry in light sharing between PMTs are considered. Acceptance efficiency curves are obtained from external calibration data for each detector.
Radiopurity and background
Detailed background models for the first modules operated in the ANAIS-25 and ANAIS-37 set-ups were developed [8], based on Geant4 Monte Carlo simulations and an accurate quantification of background sources: the intrinsic crystal activity directly assessed, the cosmogenic activity in crystals (precisely quantified from ANAIS-25 data [9]) and the activity from external components measured with HPGe detectors in Canfranc. In the region of interest, crystal bulk contamination is the dominant background source. Contributions from the 40K and 22Na peaks and the continua from 210Pb and the considered cosmogenic 3H are the most relevant ones.
The ANAIS-112 data taken up to July 2017 in the commissioning run have been analyzed to make a first quantification of the relevant background sources. The activity of 40K and 210Pb in the nine NaI(Tl) crystals has been determined and preliminary results are reported in the fifth and sixth columns of table 1. As in previous set-ups, the potassium content has been deduced by identifying coincidences between the 3.2 keVee emissions and the 1460.8 keV gamma-ray following the electron capture decay of 40K [10]; the obtained values are compatible with estimates from previous set-ups when available. Some detectors have a content similar to that of the DAMA/LIBRA crystals [6]; the average 40K activity in ANAIS-112, although higher than that of DAMA/LIBRA, is more than one order of magnitude lower than in large, low-background crystals tested from other suppliers. The activity of 232Th and 238U in the crystals is quantified by the measured alpha rates, following Pulse Shape Analysis (to distinguish alpha interactions from beta/gamma ones) and analysis of BiPo sequences; it is at a level of a few µBq/kg, but 210Pb out of equilibrium has been observed for all the modules. The origin of a possible 210Pb contamination was studied in collaboration with AS, which made it possible to obtain lower activity in the last produced crystals (see table 1 and figure 2). Figure 3 presents the background spectra in the low energy region in the ANAIS-112 commissioning run; first and latest data are compared, showing the decay of cosmogenics in the last detectors. Preliminary background models for those modules (see one example in figure 4), considering the measured crystal activities and the ANAIS-112 configuration, point to equivalent relevant background sources in the very low energy region. The 210Pb contribution around 50 keV (see figure 3) is consistent with the measured alpha specific activity in all cases.
Sensitivity prospects
The prospects of ANAIS-112 for the identification of an annual modulation signal have been evaluated in [11] in terms of the a priori critical and detection limits of the experiment. The analysis is based on the detector response and the background level measured for the first modules operated in Canfranc. In particular, an average background (corrected for the cut efficiency) has been estimated in the regions of interest and five years of data taking have been assumed. Considering the variance of the estimator of the modulated amplitude, it is shown [11] that ANAIS-112 in 2-6 keVee has a detection limit for a model-independent annual modulation (not related to a dark matter origin) below the amplitude measured by DAMA/LIBRA [1]. As can be seen in figure 5 (taken from [11]), under the dark matter hypothesis, for a detection limit at 90% C.L. and a critical limit at 90% C.L., ANAIS-112 can detect the annual modulation in the 3σ region compatible with the DAMA/LIBRA result.

Figure 5. Annual modulation sensitivity prospects for ANAIS-112 after 5 years of measurement, as evaluated in [11].
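The role of the "variance of the estimator of the modulated amplitude" can be made tangible with a toy least-squares fit: simulate a constant rate plus an annual cosine, fit the design matrix [1, cos ωt, sin ωt], and read the amplitude uncertainty off the covariance. All numbers below are invented for illustration; the actual sensitivity estimate in [11] accounts for efficiencies, backgrounds and live time.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 5 * 365.0)                  # days, 5 years of daily bins
omega = 2.0 * np.pi / 365.0
R0, Sm = 100.0, 1.0                            # invented rate and amplitude
counts = rng.poisson(R0 + Sm * np.cos(omega * t))

# Linear least squares for R(t) = R0 + a*cos(wt) + b*sin(wt).
X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
beta, res, *_ = np.linalg.lstsq(X, counts, rcond=None)
sigma2 = res[0] / (len(t) - X.shape[1])        # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)
print(f"fitted amplitude a = {beta[1]:.2f} +/- {np.sqrt(cov[1, 1]):.2f}")
```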
Abnormal N-Glycosylation of a Novel Missense Creatine Transporter Mutant, G561R, Associated with Cerebral Creatine Deficiency Syndromes Alters Transporter Activity and Localization
Tatsuki Uemura,a Shingo Ito,a,b Yusuke Ohta,c Masanori Tachikawa,c Takahito Wada,d Tetsuya Terasaki,c and Sumio Ohtsuki*,a,b a Department of Pharmaceutical Microbiology, Graduate School of Pharmaceutical Sciences, Kumamoto University; 5–1 Oe-honmachi, Chuo-ku, Kumamoto 862–0973, Japan: b AMED-CREST, Japan Agency for Medical Research and Development; 1–7 Otemachi, Chiyoda-ku, Tokyo 100–0004, Japan: c Division of Membrane Transport and Drug Targeting, Graduate School of Pharmaceutical Sciences, Tohoku University; 6–3 Aoba, Aramaki, Aoba-ku, Sendai 980–8578, Japan: and d Department of Medical Ethics and Medical Genetics, Graduate School of Medicine, Kyoto University; Yoshida-Konoe-cho, Sakyo-ku, Kyoto 606–8501, Japan. Received July 18, 2016; accepted October 21, 2016
transferring creatine from blood to brain. This idea is supported by various lines of evidence. For example, CRT is expressed in the plasma membrane of brain capillary endothelial cells, which are components of the blood-brain barrier (BBB), and in neurons of the CNS. [16][17][18] In the Slc6a8 whole-body knockout mouse, the brain creatine level was reduced to below the detection limit, while the blood creatine level was unchanged. 19,20) Also, a neuron-specific CRT knockout mouse driven by the CamkIIα promoter exhibits cognitive defects and executive dysfunction, suggesting that neuronal CRT-mediated supply of creatine is necessary for neuronal function. 21) We have reported a novel missense mutation in exon 12 of the SLC6A8 gene [c.1681G>C; p.G561R] in three boys who showed intellectual disability in a single family. 22) The brain creatine contents of the patients measured by 1H-MRS were considerably reduced. 22) These clinical results strongly suggest that the G561R mutation attenuates the creatine transport function of CRT, although functional attenuation was not evaluated at the molecular level. Topology information from UniProt suggests that the G561R CRT missense mutation is localized in the 12th transmembrane domain. The 1st and 6th transmembrane domains of CRT are considered to be essential for creatine binding, based on the crystal structure of the bacterial (Aquifex aeolicus) leucine transporter. 23) Therefore, it is unlikely that the G561R missense mutation directly affects the binding of creatine.
N-Glycosylation plays an important role in maintaining the structure, proper folding, trafficking and transport function of SLC transporters. [24][25][26][27][28][29] CRT has two N-glycosylation sites (Asn192 and Asn197), and deletion of either or both of them impairs CRT trafficking to the plasma membrane. 30) Therefore, it is plausible that the G561R-CRT missense mutation causes failure of CRT maturation due to protein misfolding and abnormal N-glycosylation, leading to loss of proper trafficking. However, it is important to establish the precise mechanism in order to find potential therapeutic targets for CCDSs. To investigate the transport and localization of the G561R-CRT mutant in this study, we chose 293 cells, which have been used in previous CRT mutation studies, 30) probably because CRT is predominantly expressed in the kidney and it is easy to establish stably transfected 293 cell lines.
The purpose of the present study, therefore, was to characterize the functional attenuation of CRT with the G561R mutation, and to examine the effect of the G561R mutation on the subcellular localization and glycosylation of CRT by using 293 cells stably expressing the mutant.

Reagents and Antibodies

[14C]Creatine (55 mCi/mmol) was obtained from American Radiolabeled Chemicals (St. Louis, MO, U.S.A.). Anti-CRT antibody was produced as previously described. 16) Anti-Flag antibody was purchased from Wako Pure Chemical Industries, Ltd. (Osaka, Japan). Peptide:N-glycosidase F (PNGase F) was purchased from New England Biolabs (Roche, Basel, Switzerland). All other chemicals were commercial products of analytical grade.
Construction of 293 Cells Stably Expressing N-3x Flag Tagged Human CRT

The G1681C mutant of human CRT was generated with a site-directed mutagenesis kit according to the manufacturer's protocol (TaKaRa, Shiga, Japan). The open reading frames (ORFs) of human wild-type CRT and G1681C mutant CRT were subcloned into pEBmulti puro vector (Wako), which includes an N-terminal 3x Flag tag gene. Transfections of plasmid DNA were performed with ScreenFectA (Wako) according to the manufacturer's recommendations. The cells were cultured in the presence of 2.0 µg/mL puromycin for 1 week. The puromycin-selected cells were cultured on culture dishes at 37°C in an atmosphere of 5% CO2 in Dulbecco's modified Eagle's medium (DMEM) supplemented with 70 mg/L benzylpenicillin, 100 mg/L streptomycin, 2 µg/mL puromycin, and 10% fetal bovine serum.
Western Blot Analysis

Cytosol, crude membrane fraction and plasma membrane fraction of 293 cells stably expressing human CRT were separated with a Plasma Membrane Protein Extraction Kit (BioVision, Mountain View, CA, U.S.A.) according to the manufacturer's protocol. Amounts of proteins in each fraction were determined with a bicinchoninic acid (BCA) protein assay kit (Bio-Rad, Richmond, CA, U.S.A.) using bovine serum albumin (BSA) as a standard. Samples were incubated with sodium dodecyl sulfate (SDS) sample buffer containing 2-mercaptoethanol (2-ME) for 30 min at 37°C. Protein samples (5 µg) were resolved by 7.5% SDS-polyacrylamide gel electrophoresis and bands were electrotransferred to polyvinylidene difluoride membranes (Bio-Rad). After incubation with blocking buffer (5% skimmed milk in 25 mM Tris-HCl (pH 8.0), 125 mM NaCl, 0.1% Tween 20) for 1 h at room temperature, membranes were incubated with anti-Flag (Wako) or anti-β-actin antibodies at 4°C for 16 h. The membranes were washed three times with TTBS buffer (25 mM Tris-HCl (pH 8.0), 125 mM NaCl, 0.1% Tween 20) and incubated with horseradish peroxidase-conjugated antibody. Signals were visualized with an enhanced chemiluminescence kit (TaKaRa) and detected with an imager (Omega Lum G Imaging System, Aplegen Inc., San Francisco, CA, U.S.A.).
Statistical Analysis

Unless otherwise indicated, all data are presented as the mean ± standard error of the mean (S.E.M.). An unpaired, two-tailed Student's t-test was used to determine the significance of differences between means of two groups. One-way ANOVA followed by Dunnett's test was used to assess the statistical significance of differences among means of more than two groups.
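For readers reproducing this kind of analysis in Python, the equivalent tests look roughly as follows. This is an assumed SciPy-based workflow, not the software the authors used, and scipy.stats.dunnett is only available in recent SciPy releases (1.11+); the group values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mock = rng.normal(3.5, 0.4, 4)    # invented uptake values, n = 4 per group
wt = rng.normal(8.2, 0.5, 4)
mut = rng.normal(4.7, 0.5, 4)

# Two groups: unpaired, two-tailed Student's t-test.
t_stat, p = stats.ttest_ind(wt, mut)
print(f"t-test: P = {p:.3g}")

# More than two groups: one-way ANOVA, then Dunnett's test vs. control.
f_stat, p_anova = stats.f_oneway(mock, wt, mut)
dunnett = stats.dunnett(wt, mut, control=mock)   # requires SciPy >= 1.11
print(f"ANOVA: P = {p_anova:.3g}; Dunnett P-values: {dunnett.pvalue}")
```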
Functional Characterization of G561R-Mutant CRT
Creatine transport by G561R-mutant CRT was examined in 293 cells stably expressing N-3xFlag tagged G561R-mutant CRT, compared with cells stably expressing wild-type (WT) CRT. [14C]Creatine uptake by the mutant was significantly less than that by WT-CRT at all examined time points up to 40 min except for 0.25 min (Fig. 1). Furthermore, uptake by cells expressing the mutant was not significantly different from that by mock cells up to 10 min. At 40 min, the uptake of [14C]creatine by cells expressing G561R-mutant CRT was 1.32-fold greater than that by mock cells, while cells expressing WT-CRT exhibited 1.97-fold greater uptake than mock cells.
Initial creatine uptake clearance was calculated from the slope of [14C]creatine uptake up to 20 min (Fig. 1A), because the uptake of creatine by WT-CRT and G561R-mutant CRT expressing cells increased linearly up to 20 min (r=0.985 and 0.950, respectively, determined by linear regression analysis). The values of initial [14C]creatine uptake clearance by cells expressing WT- and G561R-mutant CRT were determined to be 8.18 and 4.71 µL/mg protein/min, respectively (Fig. 1B). Creatine uptake clearance by exogenous CRT was estimated by subtracting that by mock cells (3.46 µL/mg protein/min) from that by either WT- or G561R-mutant CRT. The [14C]creatine uptake rates by exogenous WT- and G561R-mutant CRT were estimated to be 4.72 and 1.25 µL/mg protein/min, respectively. This suggests that creatine transport activity in 293 cells was strongly attenuated by the G561R mutation.
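The clearance arithmetic in this paragraph amounts to a slope fit followed by mock subtraction; a compact numpy version is sketched below. The time points follow Fig. 1A, but the uptake values are placeholders constructed so the slopes land on the quoted clearances (8.18, 4.71 and 3.46 µL/mg protein/min).

```python
import numpy as np

t = np.array([0.25, 1.0, 5.0, 10.0, 20.0])     # min, linear phase of Fig. 1A

# Placeholder uptake data (µL/mg protein) consistent with the quoted slopes.
wt_uptake = 8.18 * t
mut_uptake = 4.71 * t
mock_uptake = 3.46 * t

def clearance(time, uptake):
    """Initial uptake clearance = slope of the linear uptake phase."""
    slope, _intercept = np.polyfit(time, uptake, 1)
    return slope

exo_wt = clearance(t, wt_uptake) - clearance(t, mock_uptake)    # ~4.72
exo_mut = clearance(t, mut_uptake) - clearance(t, mock_uptake)  # ~1.25
print(f"exogenous WT: {exo_wt:.2f}, exogenous G561R: {exo_mut:.2f} µL/mg/min")
```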
Creatine transport by CRT is saturable and Na+-Cl−-dependent. Therefore, to clarify the transport characteristics of G561R-mutant CRT, we further examined creatine uptake by WT- and mutant-CRT expressing cells in the presence and absence of unlabeled creatine or Na+ or Cl− at 30 min, when the creatine uptake by WT- and mutant-CRT expressing cells was significantly greater than that by the mock cells (open bars in Fig. 1C). The uptake of [14C]creatine by cells expressing WT- and G561R-mutant CRT, as well as mock cells, at 30 min was almost completely suppressed in the presence of excess unlabeled creatine, by 97.1 and 95.8%, respectively (Fig. 1C). The uptake of [14C]creatine by WT- and G561R-mutant CRT at 30 min was reduced in the Na+-free condition by 97.1 and 96.4%, respectively, and in the Cl−-free condition by 95.4 and 95.2%, respectively (Fig. 1C). Uptake by mock cells was also suppressed, by 94.2 and 94.6% in the Na+-free and Cl−-free conditions, respectively. These results suggest that creatine transport by the exogenous CRT was saturable and Na+-Cl−-dependent, like that of endogenous CRT.
Subcellular Localization of WT- and G561R-Mutant CRT Protein in 293 Cells
CRT functions as a transporter for creatine at the plasma membrane. To investigate whether the G561R missense mutation alters the localization of CRT in 293 cells, we examined the localization of CRT protein in WT- and G561R-CRT by means of immunohistochemistry using anti-CRT antibody (Fig. 2). The WT-CRT protein was predominantly expressed at the plasma membrane of 293 cells. In contrast, G561R-mutant CRT was visualized as dots in the intracellular compartment. This result suggests that the G561R missense mutation alters the subcellular localization of CRT protein in 293 cells.
Expression of WT- and G561R-Mutant CRT Protein in Subcellular Fractions of 293 Cells
To examine the subcellular expression of exogenous G561R-mutant CRT protein, cells were fractionated into cytosol, crude membrane and plasma membrane fractions, and protein expression of exogenous CRT in each fraction was determined by Western blot analysis using anti-Flag antibody. As shown in Fig. 3A, WT-CRT was predominantly detected at 68 kDa and faintly detected at 179 kDa in the plasma membrane fraction, which is consistent with the immunohistochemical results shown in Fig. 2. Total expression of G561R-mutant CRT was greater than that of WT-CRT. However, G561R-mutant CRT was detected at 55, 110 and 165 kDa in both the crude membrane and plasma membrane fractions (Fig. 3A). As shown in Fig. 3B, which was exposed for a shorter time to avoid signal saturation, the band intensities were greater in the crude membrane fraction than in the plasma membrane fraction, suggesting that G561R-mutant CRT proteins were predominantly localized in intracellular membranes. The larger-molecular-size bands appear to be oligomers of the 55 kDa protein. These results suggest that the G561R missense mutation increases intracellular levels of CRT protein, but induces protein structure changes and oligomerization. The band intensity at around 65 kDa of G561R-mutant CRT in the plasma membrane fraction was greater than that in the crude membrane fraction, suggesting that G561R-mutant CRT of similar molecular size to WT-CRT was expressed at the plasma membrane of 293 cells expressing the mutant CRT (Fig. 3).
Abnormal N-Glycosylation of G561R-Mutant CRT in 293 Cells

To examine whether the G561R mutant of CRT undergoes abnormal N-glycosylation in 293 cells, the proteins in cytosol, crude membrane fraction and plasma membrane fraction were treated with N-glycosidase F to remove N-linked oligosaccharides from glycoproteins. After this treatment, both the band at 68 kDa of WT-CRT and that of G561R-mutant CRT at 55 kDa in the crude and plasma membrane fractions were shifted to 50 kDa (Fig. 4). This result suggests that the 68 and 55 kDa CRT proteins were differently N-glycosylated.
In Fig. 3, several bands of G561R-mutant CRT protein were observed in the crude membrane and plasma membrane fractions. In contrast, a single band at 55 kDa of G561R-mutant CRT was detected in the crude membrane and plasma membrane fractions treated with N-glycosidase F or control for 17 h at 37°C (Fig. 4). This result suggests that protein complexes or oligomers of G561R-CRT dissociated during the treatment.

Fig. 4. N-Linked Glycosylation of WT- and G561R-Mutant CRT in 293 Cells

Equal amounts of protein of cytosol, crude membrane or plasma membrane were treated with or without N-glycosidase F for 17 h at 37°C, and then subjected to Western blot analysis with anti-Flag antibody.
DISCUSSION
The present study demonstrated that a novel CRT missense mutation in exon 12 of the SLC6A8 gene (c.1681G>C; p.G561R) causes suppression of creatine transport activity. G561R-mutant CRT is expressed predominantly in the intracellular compartment, whereas WT-CRT is localized at the plasma membrane. The G561R missense mutation in CRT alters N-glycosylation and promotes oligomer formation in intracellular organelle membranes. Therefore, our present findings suggest that G561R-missense mutation in CRT suppresses creatine transport due to defects in protein folding, N-glycosylation, and trafficking to the plasma membrane.
Patients with the G561R missense mutation in CRT showed a marked reduction of brain creatine level measured by 1H-MRS. 22) The physiological role of CRT expressed at the BBB is to supply creatine to the brain. Our present findings show that G561R-mutant CRT-expressing cells exhibit markedly reduced creatine uptake compared to WT-CRT-expressing cells, even though the total expression of exogenous CRT was greater in the former cells (Figs. 1, 3). Our in vitro study suggested that initial creatine uptake clearance by G561R-mutant CRT was reduced by 73% compared to that of WT-CRT (Fig. 1B). We previously reported that the brain creatine level in CCDSs patients was below the detection limit of 1H-MRS. 22) A previous study has shown that the brain creatine level was about 5 mmol/L in the gray matter of healthy persons and the detection limit of brain creatine was less than 0.8 mmol/L. 32) Thus, the brain creatine level in patients with the G561R-CRT mutation appears to be reduced by at least 84% compared to that in healthy persons. Therefore, the reduction of transport activity by G561R-mutant CRT appears to be comparable with the reduction of creatine concentration in the brain of CCDSs patients with the G561R-CRT mutation. Thus, we conclude that the G561R missense mutation in CRT attenuates creatine transport activity at the BBB, resulting in a reduction of brain creatine level due to suppression of creatine supply to the brain across the BBB.
We found that G561R-mutant CRT is localized mainly in intracellular membranes (Figs. 2, 3). The intracellular localization of the transporter was examined using N-3x Flag tagged protein. Previous studies have shown that N-terminally tagged CRT is predominantly localized at the plasma membrane, 30) as is untagged CRT. 33) Therefore, it seems plausible that the Flag-tag at the N-terminus would not affect the localization of WT-CRT, although the possibility that the N-terminal Flag-tag altered the localization of CRT cannot be completely excluded. Also, further study would be desirable to establish the localization of untagged G561R-mutant CRT.
We also found that the molecular weight of G561R-mutant CRT is about 13 kDa smaller than that of WT-CRT (Fig. 3). N-Glycosylation plays an important role in the regulation of transporter and channel functions, and in trafficking to the plasma membrane. [25][26][27][28][29][34][35][36] Several reports demonstrate that CRT is N-glycosylated at at least two sites. 30,37) N-Glycosidase F treatment caused a band shift of WT-CRT from 68 to 50 kDa, indicating that WT-CRT was N-glycosylated (Fig. 4). This is consistent with previous reports that treatment with N-glycosidase F or tunicamycin reduced the molecular weight of bovine and human creatine transporter expressed in 293 cells. 30,37) The molecular size of G561R-mutant CRT was 55 kDa, which is smaller than that of WT-CRT (Fig. 3). After N-glycosidase treatment, the molecular size of G561R-mutant CRT became the same as that of WT-CRT, at 50 kDa. This suggests that the N-glycosylated G561R-mutant CRT was an immature form. However, the possibility that post-translational modification of CRT is different at the BBB and in neurons cannot be excluded, and further study will be needed to examine this issue.
N-Glycosylation of protein is a post-translational process, which is initiated within the endoplasmic reticulum. Properly folded and N-glycosylated protein is then exported to the Golgi apparatus, where Man8(GlcNAc)2 oligosaccharide chains are added, and after further processing correctly decorated proteins are trafficked to the cell surface. 24) Our present findings suggest that G561R-mutant CRT formed dimers and trimers in the intracellular membrane, in contrast to WT-CRT, which is present as a monomer in the plasma membrane (Fig. 4). Thus, it seems likely that the G561R missense mutation alters the protein folding of CRT and causes aberrant glycosylation. The expression level of G561R-CRT was higher than that of WT-CRT (Fig. 3). This may be due to accumulation of the misfolded G561R-CRT protein in intracellular membranes.
CRT is classified into the Na+-Cl−-dependent neurotransmitter transporter family. It was reported that Na+-Cl−-dependent neurotransmitter transporters such as the human serotonin transporter 38) and the human dopamine transporter form oligomers, 39) and proper oligomer formation of Na+-Cl−-dependent neurotransmitter transporters is required to pass the endoplasmic reticulum quality control mechanisms for trafficking to the plasma membrane. 24) In the present study, oligomers might have remained partially intact under the mild denaturing conditions used to prepare the plasma membrane fraction (37°C for 30 min), and this would be consistent with the faint band at around 179 kDa in Western blot analysis of the plasma membrane fraction of WT-CRT.
It was reported that the 11th and 12th transmembrane domains participate in oligomerization of Na+-Cl−-dependent neurotransmitter transporters such as the human serotonin transporter. 38) The G561R CRT missense mutation appears to be localized in the 12th transmembrane domain. The alteration of G to R at residue 561 of CRT introduces a positive charge into the 12th transmembrane domain. Therefore, alteration of the electrostatic interaction of the 12th transmembrane domain may cause the improper protein folding of G561R-mutant CRT. Several CRT missense mutations associated with CCDSs have been reported to exhibit reduced creatine transport activity and smaller molecular size than WT-CRT. 30) Thus, it is possible that the reduction of creatine transport by other CRT missense mutations may also be caused by defects of protein folding and/or trafficking.
Western blot analysis showed that G561R-CRT protein is expressed mainly in the crude membrane fraction, as well as in the plasma membrane, but immunohistochemical analysis did not confirm protein expression of G561R-CRT in the plasma membrane (Figs. 2, 3). Fractionation with the BioVision kit or the sucrose density gradient method is a useful technique to enrich plasma membrane proteins, but the resulting fraction can be contaminated with proteins from other organelle membranes (ref. 33 and our unpublished data). Also, Western blot analysis showed no enrichment of G561R-CRT proteins in the plasma membrane compared with the crude membrane, except for the 65 kDa protein (Fig. 3). These findings suggest that G561R-CRT protein mainly accumulates in intracellular membranes.
It has been shown by Northern blot analysis that CRT mRNA is expressed in human kidney, 40) suggesting that the uptake of creatine by mock cells could be attributed to endogenous CRT. Thus, uptake of creatine by WT-CRT-expressing cells would be mediated by both endogenous CRT and exogenous WT-CRT, and that by G561R-mutant CRT expressing cells would be mediated by both endogenous CRT and exogenous G561R-mutant CRT. The greater uptake of creatine by G561R-mutant CRT-expressing cells than by mock cells suggests that exogenous G561R-mutant CRT at least partially retains creatine transport activity. Furthermore, the creatine uptake behavior of mutant CRT-expressing cells indicates that creatine uptake by exogenous G561R-mutant CRT was suppressed by excess unlabeled creatine and under Na+- and Cl−-free conditions in the same manner as in the case of endogenous CRT. Therefore, the characteristics of creatine transport by G561R-mutant CRT appear to be similar to those of endogenous CRT (Fig. 1). Exogenous CRT of similar molecular size (about 65 kDa) to that of WT-CRT was detected in the plasma membrane of G561R-mutant CRT-expressing cells, and may exhibit creatine transport activity to some extent. Impaired membrane trafficking of the bile salt export pump (BSEP/ABCB11) was reported to be caused by missense mutations, but these BSEP mutants retain transport function. 41) It was also reported that 4-phenylbutyrate, a U.S. Food and Drug Administration (FDA)-approved drug for urea cycle disorders, enhances the cell surface expression and transport capacity of mutated BSEP, thereby improving liver function and relieving pruritus in patients with progressive familial intrahepatic cholestasis type 2. 42,43) Thus, a compound that modulates a molecular chaperone involved in CRT protein folding and/or transport of creatine analogues via a CRT-independent pathway could be a therapeutic target for CCDSs with the G561R-mutant CRT missense mutation or other CRT missense mutations that exhibit altered protein folding and/or trafficking.
In conclusion, our present findings suggest that a novel missense mutant of CRT, G561R, shows impaired creatine transport activity as a result of defective protein trafficking to plasma membrane due to protein misfolding and altered N-glycosylation. Since creatine plays a pivotal role in brain energy homeostasis, G561R-mutant CRT likely causes chronic depletion of cerebral creatine due to inadequate CRT-mediated creatine transport at the BBB and into neurons, resulting in neurodevelopmental disorders.
On the Convergence of Ritz Pairs and Refined Ritz Vectors for Quadratic Eigenvalue Problems
For a given subspace, the Rayleigh-Ritz method projects the large quadratic eigenvalue problem (QEP) onto it and produces a small sized dense QEP. Similar to the Rayleigh-Ritz method for the linear eigenvalue problem, the Rayleigh-Ritz method defines the Ritz values and the Ritz vectors of the QEP with respect to the projection subspace. We analyze the convergence of the method when the angle between the subspace and the desired eigenvector converges to zero. We prove that there is a Ritz value that converges to the desired eigenvalue unconditionally but the Ritz vector converges conditionally and may fail to converge. To remedy the drawback of possible non-convergence of the Ritz vector, we propose a refined Ritz vector that is mathematically different from the Ritz vector and is proved to converge unconditionally. We construct examples to illustrate our theory.
Introduction
Consider the numerical solution of the large quadratic eigenvalue problem (QEP)

Q(λ)x ≡ (λ²M + λD + K)x = 0, (1.1)

where λ ∈ C, x ∈ C^n\{0}, and M, D and K are n × n complex matrices with M = M^H > 0, i.e., Hermitian positive definite. The scalar λ and the nonzero vector x in (1.1) are called an eigenvalue and a corresponding eigenvector of the quadratic pencil Q(λ) or (M, D, K), respectively. The pair (λ, x) is called an eigenpair of (M, D, K). Since M = M^H > 0 in (1.1), Q(λ) has 2n finite eigenvalues.
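As a small illustration of (1.1) (not from the paper; the test matrices, sizes, and random seed below are hypothetical), one can solve a dense QEP by the standard first companion linearization and check the residual of one computed eigenpair:

```python
import numpy as np
from scipy.linalg import eig

n = 4
rng = np.random.default_rng(0)
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = G @ G.conj().T + n * np.eye(n)   # Hermitian positive definite
D = rng.standard_normal((n, n)).astype(complex)
K = rng.standard_normal((n, n)).astype(complex)

# First companion linearization A z = lam * B z with z = [lam*x; x]:
A = np.block([[-D, -K], [np.eye(n), np.zeros((n, n))]])
B = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])

lams, Z = eig(A, B)                   # 2n finite eigenvalues (B is nonsingular)
x = Z[n:, 0] / np.linalg.norm(Z[n:, 0])
res = np.linalg.norm((lams[0] ** 2 * M + lams[0] * D + K) @ x)
print(lams[0], res)                   # residual should be near machine precision
```

The lower block of z recovers the QEP eigenvector, since the top block row of the linearization reads −Dλx − Kx = λ²Mx.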
QEP (1.1) arises in a wide variety of scientific and engineering applications [2,27]. The theoretical framework for general matrix polynomials and in particular for quadratic pencils can be found in books by Lancaster [18] and more recently by Gohberg, Lancaster and Rodman [5]. A good survey of mathematical properties, perturbation analysis, and a variety of numerical algorithms for QEPs can be found in the paper by Tisseur and Meerbergen [27].
In practice, a small number of eigenvalues that are nearest to a target τ or located in a prescribed region of the complex plane, and the corresponding eigenvectors, are often of interest. To this end, we exploit the shift transformation λ_τ = λ − τ with det(Q(τ)) ≠ 0 to transform (1.1) into a new QEP of the form

(λ_τ²M_τ + λ_τD_τ + K_τ)x = 0, (1.2)

where M_τ = M, D_τ = 2τM + D, and K_τ = τ²M + τD + K = Q(τ) is nonsingular. Throughout the paper, we assume that the eigenvalues to be sought are nonzero.
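Continuing the sketch above (τ is a hypothetical shift with Q(τ) nonsingular), the shift only reshuffles the coefficient matrices, and a one-line check confirms that the shifted eigenvalues recover the original ones:

```python
tau = 0.5                                   # hypothetical shift, Q(tau) nonsingular
M_tau = M
D_tau = 2 * tau * M + D
K_tau = tau ** 2 * M + tau * D + K          # K_tau equals Q(tau)

A_t = np.block([[-D_tau, -K_tau], [np.eye(n), np.zeros((n, n))]])
B_t = np.block([[M_tau, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
lams_tau, _ = eig(A_t, B_t)

# Substituting lam = lam_tau + tau into (1.1) gives exactly (1.2), so every
# shifted eigenvalue plus tau is an eigenvalue of (M, D, K):
print(np.allclose(np.sort_complex(lams_tau + tau), np.sort_complex(lams)))
```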
One class of classical methods for solving QEP (1.1) is to reformulate it as a certain standard (or generalized) eigenvalue problem via a so-called linearization process and then to apply Krylov subspace based methods or Jacobi-Davidson type methods to the resulting linear eigenvalue problem. Most of these methods fall into the category of the Rayleigh-Ritz method, which is widely used for the computation of eigenpairs of a linear eigenvalue problem from a given projection subspace. As is well known, under the assumption that the angle between a desired eigenvector and the projection subspace tends to zero, there exists a Ritz value that converges to the desired eigenvalue unconditionally, but its corresponding Ritz vector may fail to converge; furthermore, when one is concerned with eigenvectors, one can compute certain refined Ritz vectors whose convergence is guaranteed [10,12,13,15,16]; see also [25].
Over the years, a number of reliable numerical methods have been proposed to solve large and sparse QEPs directly. Based on the Ritz-Galerkin condition, various methods are designed to construct suitable lower dimensional subspaces. Then, the large QEP is projected onto a given subspace to produce a small sized dense QEP which can be solved by the standard QR or QZ algorithm. They fall into the category of the q-Rayleigh-Ritz method, as will be described in the next paragraph. Methods of this type include the residual iteration method [8,21,22], the Jacobi-Davidson method [23,24], Krylov type subspace methods [6,19], the nonlinear Arnoldi method [28], second-order Arnoldi (SOAR) type methods [1,17,20,29], the iterated shift-and-invert Arnoldi method [30] and the semiorthogonal generalized Arnoldi (SGA) method [7]. Now we describe the q-Rayleigh-Ritz method for the QEP. For a given orthonormal matrix Q ∈ C^{n×m} (m ≤ n), the q-Rayleigh-Ritz method is to find a scalar µ ∈ C and a unit-length vector x̂ ∈ C^m satisfying the Ritz-Galerkin condition

(µ²MQ + µDQ + KQ)x̂ ⊥ span{Q}, (1.3)

which amounts to solving the projected QEP

(µ²M̃ + µD̃ + K̃)x̂ = 0, M̃ = Q^H MQ, D̃ = Q^H DQ, K̃ = Q^H KQ. (1.4)

If (µ, x̂) with ‖x̂‖ = 1 is an eigenpair of (M̃, D̃, K̃), i.e., (µ²M̃ + µD̃ + K̃)x̂ = 0, then µ and x̃ = Qx̂ are, respectively, called a q-Ritz value and a corresponding q-Ritz vector of (M, D, K) with respect to span{Q}, and (µ, Qx̂) is a q-Ritz pair of (M, D, K). Since M is Hermitian positive definite, so is M̃ for any given Q. Therefore, we have 2m finite q-Ritz values.
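The q-Rayleigh-Ritz step itself is a few lines of linear algebra. A sketch (continuing the test matrices above; the basis Qb is an illustrative random subspace, not one from the paper):

```python
m = 2
Qb, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal basis ("Q" above)

Mt = Qb.conj().T @ M @ Qb    # projected m x m matrices as in (1.4)
Dt = Qb.conj().T @ D @ Qb
Kt = Qb.conj().T @ K @ Qb

At = np.block([[-Dt, -Kt], [np.eye(m), np.zeros((m, m))]])
Bt = np.block([[Mt, np.zeros((m, m))], [np.zeros((m, m)), np.eye(m)]])
mus, Zt = eig(At, Bt)                              # 2m q-Ritz values
xhat = Zt[m:, 0] / np.linalg.norm(Zt[m:, 0])       # unit eigenvector of small QEP
qritz_vec = Qb @ xhat                              # corresponding q-Ritz vector
```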
In this paper we study the convergence of the q-Ritz value and the corresponding q-Ritz vector, and extend some of the results in [15,16,25] to the q-Rayleigh-Ritz method. Although a number of q-Rayleigh-Ritz procedures with respect to different subspaces have been used, to the best of our knowledge, there has been no unified convergence result or general theory. As will be seen, carrying out this task is mathematically nontrivial. Similarly to the linear eigenvalue problem, it turns out that there exists a q-Ritz value that converges to the desired eigenvalue unconditionally, but the corresponding q-Ritz vector may fail to converge even if the corresponding projection subspace span{Q} contains a sufficiently accurate approximation to the desired eigenvector. It is thus necessary and significant to replace the q-Ritz vector by a refined q-Ritz vector that minimizes the residual and is mathematically different from the q-Ritz vector. We prove that the refined q-Ritz vector converges unconditionally provided that the angles between the desired eigenvector and the subspaces tend to zero. All convergence results are nontrivial generalizations of the known results on the Rayleigh-Ritz method and its refinement for the linear eigenvalue problem in [15,16,25].
This paper is organized as follows. In Section 2, we analyze the convergence for q-Ritz values and q-Ritz vectors and prove that the q-Ritz value is unconditionally convergent but the associated q-Ritz vector may fail to converge. To remedy this drawback, in Section 3, we introduce a refined q-Ritz vector and prove its unconditional convergence. Finally, we conclude the paper in Section 4.
Throughout this paper, the superscripts H and T denote the conjugate transpose and the transpose of a matrix or vector, respectively. I_n is the identity matrix of dimension n. We denote by ‖·‖ both the Euclidean vector norm and the spectral matrix norm.
Convergence of q-Ritz values and q-Ritz vectors
Throughout the paper, let (λ₁, x₁) with ‖x₁‖ = 1 be a desired eigenpair of (M, D, K) and assume that λ₁ ≠ 0 is simple.
For a given orthonormal matrix Q ∈ C^{n×m} with m ≤ n, define q₁ = QQ^H x₁/‖QQ^H x₁‖, the normalized orthogonal projection of x₁ onto span{Q}, and let [Q, Q⊥] be unitary with Q⊥ ∈ C^{n×(n−m)}. From now on, throughout the paper, let θ₁ be the acute angle between x₁ and the projection subspace span{Q}. Then it holds that cos θ₁ = ‖Q^H x₁‖ and sin θ₁ = ‖Q⊥^H x₁‖ [25, p. 249]. First of all, we want to show that there is a q-Ritz value µ₁ that converges to λ₁ unconditionally when sin θ₁ → 0. The first result towards this aim is the following perturbation theorem.
Theorem 2.1. With λ₁, q₁ and θ₁ defined as above, let M̃, D̃ and K̃ be defined in (1.4), and let E_M, E_D and E_K be the associated perturbation matrices.

Proof. Recalling (2.4) and (2.5), since ... So we have ... By (2.9) it is easily seen that E_M, E_D and E_K satisfy (2.6) and ...

We may deduce from this theorem that there exists an eigenvalue µ₁ of (M̃, D̃, K̃) that converges to λ₁ as θ₁ → 0. However, things are subtle and by no means trivial here. The difficulty is that, unlike a usual matrix perturbation problem where matrices are given and fixed and perturbations are allowed to change, here the matrix triple (M̃, D̃, K̃) and the perturbation triple (E_M, E_D, E_K) change simultaneously as θ₁ → 0. This means that there may be a possibility that, as θ₁ changes, the eigenvalue λ₁ of (M̃ + E_M, D̃ + E_D, K̃ + E_K) and the eigenvalues of (M̃, D̃, K̃) become ill conditioned so swiftly that no eigenvalue of (M̃, D̃, K̃) converges to λ₁ even though θ₁ → 0.
Fortunately, by exploiting a theorem of Elsner [4] (see also [26, p. 168]) we can prove that this cannot happen and that there is indeed an eigenvalue µ₁ that converges to the desired λ₁ provided that θ₁ → 0. Elsner's theorem states that, given matrices C and C̃ of order n, for any eigenvalue λ of C there is an eigenvalue λ̃ of C̃ such that

|λ − λ̃| ≤ (‖C‖ + ‖C̃‖)^{1−1/n} ‖C − C̃‖^{1/n}.

For our purpose, define the matrices A and B by ... Then the eigenvalues µ of (M̃, D̃, K̃) are equal to those of (A, B). Since B is Hermitian positive definite and its smallest singular value is uniformly bounded from below, B + E_B must be nonsingular for θ₁ small enough. Moreover, for θ₁ → 0, it follows from Theorem 2.1 that ... is uniformly bounded independently of θ₁. Finally, from Theorem 2.1 and ... Based on Elsner's theorem, we have the following corollary, which, together with the above discussion, proves the global unconditional convergence of q-Ritz values when θ₁ → 0. There is a q-Ritz value µ₁ such that ... (2.12). The corollary indicates that as θ₁ → 0 there is always a q-Ritz value µ₁ → λ₁ unconditionally. We should comment that bound (2.12) will in general be a gross overestimate, corresponding to the worst case. If, as usually happens in practice, the condition number of λ₁ as an eigenvalue of (B + E_B)⁻¹(A + E_A) is bounded, the convergence will be linear in θ₁, much better than predicted by bound (2.12).
Next, we analyze the convergence of the corresponding q-Ritz vector x̃₁. Based on the decomposition (2.2), we can establish the following theorem, which is an analogue of Theorem 3.1 in [9] for the standard linear eigenvalue problem. The result will be used when we prove the unconditional convergence of the refined q-Ritz vectors to be introduced in the next section. ... (2.14)

Proof. From (2.2), pre-multiplying (2.13) by Y^H leads to ... Therefore, it follows from ‖X^H ξ̃₁‖ = sin∠(ξ₁, ξ̃₁) that (2.14) holds.
In terms of the a posteriori computable residual r, Theorem 2.3 establishes the relationship between the eigenvector ξ₁ and its approximation ξ̃₁ for the generalized eigenvalue problem (2.1).
Let (µ₁, x̂₁) with ‖x̂₁‖ = 1 be the eigenpair of (M̃, D̃, K̃) that is supposed to approximate the desired eigenpair (λ₁, x₁) of (M, D, K). In terms of θ₁, we attempt to derive one of our main results, an a priori bound for the q-Ritz vector x̃₁ = Qx̂₁ as an approximation to the eigenvector x₁. Note that µ₁ is an eigenvalue of (A, B) and ..., where L, N ∈ C^{(2m−1)×(2m−1)} and µ₁ = αβ⁻¹. Under the only hypothesis that sin θ₁ → 0, it is possible that there is an eigenvalue of (L, N) that could be arbitrarily near or even equal to µ₁. For a multiple and derogatory µ₁, that is, when µ₁ has more than one trivial or nontrivial Jordan block, there is more than one x̃₁ = Qx̂₁ to approximate the unique eigenvector x₁ of (M, D, K). In this case, the method itself cannot tell us which one, or which linear combination of them, is meaningful, so that it fails. If µ₁ is near an eigenvalue of (L, N), we will get a unique x̃₁, but there is no guarantee that it converges to x₁. This leads us to postulate that x̃₁ will converge provided that sep(λ₁, (L, N)) is uniformly away from zero independent of θ₁, i.e., sep(λ₁, (L, N)) > c with c a positive constant independent of θ₁. We will show, quantitatively, that this is indeed the case. Before proceeding, we need the following lemma.
So we can regard (λ₁, q̂) with q̂ ≡ (λ₁q₁ᵀ, q₁ᵀ)ᵀ as an approximation of (µ₁, ξ̃₁). Then the residual of (λ₁, q̂) as an approximate eigenpair of (A, B) is ... By (2.9) in the proof of Theorem 2.1 we have ... From Theorem 2.5 we see that sep(λ₁, (L, N)) > 0 uniformly is a sufficient condition for the convergence of the q-Ritz vector x̃₁. Furthermore, from Theorem 2.1, since the q-Ritz value µ₁ approaches the eigenvalue λ₁ as θ₁ → 0, by a continuity argument we have sep(µ₁, (L, N)) → sep(λ₁, (L, N)). However, as we have argued above, sep(µ₁, (L, N)) can be arbitrarily small (and even be exactly zero) when µ₁ is arbitrarily near other eigenvalues (or is associated with a multiple eigenvalue) of (L, N). Consequently, while the q-Ritz value converges unconditionally once θ₁ → 0, the corresponding q-Ritz vector may fail to converge or may converge very slowly or irregularly.
In the following, we give an example (Example 2.6) to illustrate that the q-Ritz vector can fail to converge to the desired eigenvector. ... Since M̃ + D̃ + K̃ is zero, any vector x̂₁ with ‖x̂₁‖ = 1 is an eigenvector of (M̃, D̃, K̃) corresponding to the double eigenvalue one, a q-Ritz value equal to the desired eigenvalue exactly. However, the q-Rayleigh-Ritz method itself cannot tell us how to pick a suitable x̂₁. In practice, we might well take x̂₁ = [1/√2, 1/√2]ᵀ, and then the approximate eigenvector becomes Qx̂₁, which has no accuracy as an approximation of the desired eigenvector [0, 0, 1]ᵀ and is completely wrong. Thus the method can fail even though the projection subspace span{Q} contains the desired eigenvector exactly.
In practice, we would not expect span{Q} to contain x₁ exactly. Let us investigate the case where span{Q} contains a sufficiently accurate approximation to x₁, i.e., sin θ₁ is very small. We perturb Q by a matrix generated randomly from a normal distribution as 10⁻¹² × randn(3, 2), whose 2-norm is 2.2 × 10⁻¹², and the resulting sin θ₁ = 1.7 × 10⁻¹². The resulting sin∠(x₁, x̃₁) is at least nine orders of magnitude bigger than sin θ₁! So x̃₁ is a very poor approximation to x₁ for the given accurate subspace span{Q}. It is also justified that the residual norm of the q-Ritz pair (µ₁, x̃₁) is ... The poor accuracy of x̃₁ is due to the fact that there is another q-Ritz value µ = 1.000000000010143 that is very near to µ₁, so that sep(λ₁, (L, N)) in (2.16) is tiny.
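For completeness, the quantity sin θ₁ used in this example is cheap to compute from the orthogonal projector onto span{Q}; a small self-contained sketch (function and variable names are illustrative):

```python
import numpy as np

def sin_angle(Q, x1):
    """sin of the acute angle between x1 and span{Q} (Q has orthonormal columns)."""
    x1 = x1 / np.linalg.norm(x1)
    return np.linalg.norm(x1 - Q @ (Q.conj().T @ x1))  # = ||(I - Q Q^H) x1||
```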
Convergence of refined q-Ritz vectors
As we have seen in Section 2, the q-Ritz vector may fail to converge or may converge very slowly. Since the q-Ritz value is known to converge to the simple eigenvalue λ₁ when sin θ₁ → 0, this suggests dealing with a non-converging q-Ritz vector by retaining the q-Ritz value but replacing the q-Ritz vector with a unit-length vector z̃₁ ∈ span{Q} with a suitably small residual. Naturally, for a given q-Ritz value µ₁ we construct z̃₁ = Qẑ₁, where the unit-length ẑ₁ is required to be the optimal solution

ẑ₁ = arg min over ‖ẑ‖ = 1 of ‖(µ₁²M + µ₁D + K)Qẑ‖. (3.1)

The vector z̃₁ = Qẑ₁ is called a refined q-Ritz vector of (M, D, K) corresponding to µ₁ with respect to span{Q}. Obviously, ẑ₁ is the right singular vector of the n × m rectangular matrix (µ₁²M + µ₁D + K)Q associated with its smallest singular value. We can compute ẑ₁ reliably by a standard SVD algorithm or by generally cheaper but still numerically stable cross-product based SVD algorithms; see [11,17] and also [25]. For a detailed round-off error analysis of the latter, we refer to [14].
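A sketch of the computation just described (illustrative names; a plain SVD is used here rather than the cheaper cross-product variants the paper mentions):

```python
import numpy as np

def refined_qritz_vector(M, D, K, Q, mu1):
    """Refined q-Ritz vector for q-Ritz value mu1 (Q has orthonormal columns)."""
    S = (mu1 ** 2 * M + mu1 * D + K) @ Q   # n x m matrix of (3.1)
    _, _, Vh = np.linalg.svd(S)            # singular values sorted descending
    z_hat = Vh[-1].conj()                  # minimizer of ||S z|| over unit z
    return Q @ z_hat                       # unit length since Q is orthonormal
```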
Before establishing the convergence of the refined q-Ritz vector z̃₁, we need two lemmas.

Lemma 3.1. ...

Proof. By (2.3) and the definition of sin θ₁, we have ..., and the minimum is attained at ẑ₁.
Lemma 3.2. ...

Proof. Without the minimizations, for any m-dimensional vector z, it is direct to verify that the two sides are equal. So the assertion holds. ... sep(λ₁, (L, N)).
Proof. Let ξ₁ = (λ₁x₁ᵀ, x₁ᵀ)ᵀ and let P_W be the orthogonal projector onto the subspace span{W}, where W = diag(Q, Q). Then ... Therefore, we get ..., which is an approximate eigenvector of the desired form on the left-hand side of (3.3), and ... is a minimizer candidate for (3.3). Define ... Then from cos θ₁ = ‖Q^H x₁‖ we have ... From Lemma 3.1 we get ‖(I − P_W)ξ₁‖ = √(1 + |λ₁|²) sin θ₁. Therefore, we obtain ... Taking norms gives ... From Lemma 3.2, by the optimality property of z̃ we have ... Since ‖(A − µ₁B)z̃‖/√(1 + |µ₁|²) is a residual norm, it is direct from Theorem 2.3 that the desired bound in terms of sep(µ₁, (L, N)) holds.
We continue Example 2.6 to show the considerable merits of refined q-Ritz vectors. For the case that x₁ lies in span{Q} exactly, recall that µ₁ = λ₁ exactly. It is easy to verify that the smallest singular value of the matrix (µ₁²M + µ₁D + K)Q is both exactly zero and simple, the optimal solution in (3.1) is ẑ₁ = [1, 0]ᵀ, and the refined q-Ritz vector z̃₁ = Qẑ₁ = x₁ is exactly the desired eigenvector! So, in contrast to the q-Ritz vector, the refinement can pick up the desired eigenvector perfectly.
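Putting the earlier sketches together (this continues the hypothetical test matrices and the q-Rayleigh-Ritz snippet above), one can compare the two approximations numerically; by the optimality property in (3.1), the refined vector's residual is never larger:

```python
mu1 = mus[0]
r_qritz = np.linalg.norm((mu1 ** 2 * M + mu1 * D + K) @ qritz_vec)
z1 = refined_qritz_vector(M, D, K, Qb, mu1)
r_refined = np.linalg.norm((mu1 ** 2 * M + mu1 * D + K) @ z1)
print(r_qritz, r_refined)   # r_refined <= r_qritz by construction
```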
Conclusions
Theoretically, we have proved that there exists a q-Ritz value of (M, D, K) that unconditionally converges to the desired eigenvalue when the angle between the subspace span{Q} and the desired eigenvector tends to zero. However, the associated q-Ritz vector only converges conditionally. To remedy this, we have proposed the refined q-Ritz vector, which is guaranteed to converge unconditionally. We have presented some examples to demonstrate our theory.
The purpose of this paper is not to present efficient and reliable eigensolvers for QEPs, but rather to establish a general convergence theory of the q-Rayleigh-Ritz method and to show the unconditional convergence of q-Ritz values and refined q-Ritz vectors and the conditional convergence of q-Ritz vectors. Refined q-Ritz vectors may become a very valuable component of, and bring great improvements to, flexible eigensolvers for QEPs. Numerical experiments in [17] have shown that one can gain substantially by replacing q-Ritz vectors with refined q-Ritz vectors in second-order Arnoldi type methods and their implicitly restarted algorithms.
The Third Konstantin Ivanov Intercontinental Magnetic Resonance Conference on Methods and Applications ICONS-3
ICONS-3, organized during September 01–03, 2021, was the third edition of the online magnetic resonance conference series called the Konstantin Ivanov Intercontinental Magnetic Resonance Seminary, named after our untimely deceased colleague and friend. The ICONS conferences are an off-shoot of the weekly Intercontinental NMR Seminar Series that started on April 8, 2020. This seminar series enables communication and dissemination of research ideas among the magnetic resonance research community, especially in the times of the COVID-19 pandemic. In the framework of the ICONS series, until now, about one hundred scientists from five different continents have presented their recent results. While the weekly seminar series gives both early-stage and experienced researchers an opportunity to give seminar talks and interact with colleagues from all over the world, the ICONS-3 conference is a platform for experienced researchers. The ICONS-3 conference attracted registrations from more than 220 researchers from over 30 countries covering 4 continents and spanning 14 time zones. The ICONS seminar series is open to all areas of magnetic resonance and covers the full range of Magnetic Resonance, i.e., EPR, NMR, MRI, and their various
hybrids. The first ICONS conference in 2020 (see report in APMR [1] for details) covered the full bandwidth of magnetic resonance, and the second ICONS in spring 2021 focused on fields where the interaction of electron and nuclear spins plays a pivotal role (see report in APMR [2] for details).
Owing to the untimely death of our friend and co-organizer, Prof. Dr. Konstantin Ivanov (known to many as Kostya), we decided to devote a part of ICONS-3 to his memory. Kostya, who had a very broad range of interests in the field of magnetic resonance, sadly passed away on March 5, 2021 as a victim of COVID-19. He was instrumental in initiating the ICONS seminar and conference series. To honor his contributions to science, we decided to invite about one third of the speakers from among well-known scientists who were former mentors, colleagues, and collaborators of Kostya, and asked them not only to present information about their latest research but also to display some facets of Kostya's many contributions to their fields of magnetic resonance.
The presentations covered a wide range of topics, ranging from recent advances in the fields of NMR of proteins and nucleic acids, quadrupolar nuclei and ultra-wideline NMR, and time-resolved NMR of biomolecular systems, through the development and application of hyperpolarization techniques like Chemically Induced Dynamic Nuclear Polarization, Dynamic Nuclear Polarization and Parahydrogen-Induced Polarization, to the potential of hyperpolarized triplet states as spin labels for EPR and the investigation of radical pair reactions.
Alexandra Yurkovskaya, ITC, Novosibirsk, reported results on Time-Resolved Chemically Induced Dynamic Nuclear Polarization (TR-CIDNP), and how it can be exploited to learn details of chemical reaction mechanisms and the dynamics of fast radical reactions. In her presentation, she placed special emphasis on the linear relationship between the signal intensities observed in TR-CIDNP spectra and the value of the hyperfine interaction constants in short-lived states. This relationship, which was explained by Konstantin Ivanov in 2011, is now widely used to characterize the spin-density distribution in transient radicals in multistage reactions at biologically relevant conditions. Geoffrey Bodenhausen, Paris, reported on recent findings in the field of dissolution DNP experiments. In these experiments, a microwave-dependent change of the proton relaxation times was observed in the build-up curves of the nuclear spin polarization, resulting in an extension of the nuclear coherence lifetime upon switching off the microwave. After the presentation of the effect, he discussed the potential to employ this effect to measure the longitudinal relaxation times of the electron spins.
Matvey Fedin, ITC, Novosibirsk, discussed recent results on the measurement of spin-spin distances by pulsed dipolar (PD) EPR spectroscopy employing photoexcited triplet states as spin labels. Owing to their very high non-thermal polarization, which results from their optical excitation, they exhibit far superior sensitivity compared with common stable-radical-type spin labels.
Hans-Martin Vieth, FU Berlin and ITC, Novosibirsk, first gave an overview of the historical development and current state of the art of the role of Level Anti-Crossings (LACs) in manipulating the population of spin levels. In this overview, he focused on experimental schemes for utilizing LACs in optics and magnetic resonance. He then discussed a number of important contributions of Konstantin Ivanov to the field of LACs, who set the framework for understanding the spin dynamics of hyperpolarization experiments like SABRE as a result of level crossings of spin levels.
Robert Kaptein, Utrecht, reported on NMR investigations of DNA binding to proteins, in particular on DNA recognition by gene regulatory proteins and sliding of DNA fragments along the protein. After revealing the structure of the DNA-protein complex, he demonstrated how the sliding rate of the fragment on the protein can be determined from the analysis of the NMR line-width, and compared the result to data from laser spectroscopy.
Dmitry Budker, Mainz, first discussed recent developments in zero- to ultralow-field (ZULF) NMR and future prospects in single-molecule spectroscopy with a single-spin sensor. He then explained the potential of nuclear spins to search for galactic dark matter and described some details of how NMR is utilized in the Cosmic Axion Spin Precession Experiment (CASPEr) program as a potential monitor of dark matter.
Olivier Lafon, Lille, introduced novel methods to probe the local environment of quadrupolar nuclei, such as 17O, 47,49Ti, 67Zn, 95Mo, and others, in solids. These methods combine robust pulse sequences to transfer the polarization of protons to quadrupolar nuclei over a wide range of magic-angle spinning (MAS) frequencies with dynamic nuclear polarization (DNP) to detect the nuclei on or near the surface of heterogeneous catalysts, inorganic nanoparticles, or organic solids with very high sensitivity.
Marco Tessari, Nijmegen, showed an exciting application of reversible parahydrogen-induced polarization (SABRE) to boost the sensitivity of NMR in analytical chemistry. The reversible association of the analytes to the iridium catalysts selectively hyperpolarizes the analytes and permits their NMR detection down to nanomolar concentrations, while removing the signal background originating from the other species in solution.
Robert Schurko, NHMFL Tallahassee, discussed the challenges of ultra-wideline solid-state NMR spectroscopy of unreceptive isotopes and how these challenges are resolved by new experimental schemes, including improved pulses, pulse sequences, methodologies, and specialized hardware developments. In particular, he reported details on various applications of the broadband adiabatic inversion cross-polarization (BRAIN-CP) methods in NMR spectroscopy and relaxometry, novel numerical evaluation tools for data analysis of these systems, and indirect detection schemes.
Robert Tycko, NIH, Bethesda, reported on new developments and recent applications of low-temperature magic-angle spinning and dynamic nuclear polarization to obtain two-dimensional solid-state NMR measurements of frozen solutions of peptides, proteins, and other biological molecules at sub-millimolar concentration. The implementation of rapid negative temperature jumps and rapid freezing techniques enabled him to investigate, by solid-state NMR, transient intermediate states of biomolecular systems that exist under native conditions only on the order of 10 ms. In addition, the advantages of new tri-radicals for cross-effect DNP were discussed.
Robert Konrat, Vienna, discussed new developments in the structural characterization of intrinsically disordered proteins (IDPs) by a combination of NMR spectroscopy and novel computational protein sequence analysis tools.
Kiminori Maeda, Saitama, gave an exciting overview of recent advances in the investigation of radical pair reactions in photochemistry by the Magnetic Field Effect (MFE) and Reaction Yield Detected Magnetic Resonance (RYDMR). Employing a time-resolved version of the MFE called Switched External Magnetic Field (SEMF), he could measure, background-free, the decay kinetics of the radical pairs. Nikita Lukzen, ITC, Novosibirsk, introduced the application of multifrequency nuclear magnetic resonance as an efficient tool to investigate heterospin complexes of europium compounds in solution, to determine the paramagnetic shifts of the ligands and the equilibrium constants of the systems, and discussed the potential of these europium complexes as chemical-shift thermometers.
Finally, Lewis E. Kay, Toronto, discussed a number of NMR tools developed by his group for the characterization of intrinsically disordered protein regions (IDRs). Employing these tools, they could identify ATP and side-chain interactions with an RNA-binding protein (CAPRIN1) that influence the phase behavior of the protein.
Organization and Future Developments
The conference was organized by Daniel Abergel (ENS Paris, France), Gerd Buntkowsky (TU Darmstadt, Germany), and P. K. Madhu (TIFR Hyderabad, India). Suman Saurav and Sreenidhi, TIFR Hyderabad, provided technical assistance. The conference and seminar series were sponsored by the Alexander von Humboldt Foundation, Wiley, Springer, HyperSpin, and Adani. Following the scheme of a general MR conference in summer alternating with a specialized conference on cutting-edge topics in winter, there are already plans for a specialized ICONS-4 in winter 2022.
For updates and the schedule of upcoming talks, see the home page of the meeting, ICONS-Seminary.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Soft-Gluon-Pole Contribution in Single Transverse-Spin Asymmetries of Drell-Yan Processes
We use multi-parton states to examine the leading-order collinear factorization of single transverse-spin asymmetries in Drell-Yan processes. Twist-3 operators are involved in the factorization. We find that the so-called soft-gluon-pole contribution in the factorization must exist in order to make the factorization correct. This contribution comes from the corresponding cross-section at one loop, while the hard-pole contribution in the factorization comes from the cross-section at tree level. Although the two contributions come from results at different orders, their perturbative coefficient functions in the factorization are of the same order. This is in contrast to factorizations involving only twist-2 operators. The soft-gluon-pole contribution found in this work is in agreement with that derived in a different way. For the hard-pole contributions we find an extra contribution from an extra parton process contributing to the asymmetries. We also resolve a part of the discrepancy in the evolution of the twist-3 operator. The method presented here for analyzing the factorization can be generalized to other processes and can easily be used for studying factorizations at higher orders, because the involved calculations are of standard scattering amplitudes.
Introduction
Single transverse-spin asymmetries (SSA) have been observed in various experiments in which an involved hadron is transversely polarized. A review of the phenomenology of SSA can be found in [1]. In general, SSA can be generated if scattering amplitudes have nonzero absorptive parts and there are helicity-flip interactions. It has been shown that in the production of heavy quarks like top quarks there are sizeable SSA [2,3]. Because a top quark is heavy, the bound-state effects can be neglected. The helicity flip is due to the nonzero quark mass. Therefore, in the studies of [2,3] one essentially deals with point-like particles and can use perturbation theory in the standard way.
For SSA involving light hadrons, the origin of SSA is unclear because of bound-state effects and the helicity conservation of QCD with light quarks, which can approximately be taken as massless. However, certain predictions can be made for SSA in cases where large momentum transfers are involved. In these cases, one can use the concept of factorization in QCD. SSA can be factorized in the form of a convolution of nonperturbative matrix elements of hadrons with perturbative coefficient functions. With this form of prediction one is then able to explore hadron structure with experiment. The coefficient functions can be calculated as a perturbative expansion. In this work we study the collinear factorization of SSA in Drell-Yan processes. We will show that the factorization of collinear divergences corresponding to nonperturbative effects is not made in the usual way as in factorizations at leading twist.
The collinear factorization for describing SSA has been proposed in [4,5]. With collinear factorizations, SSA in various processes has been studied in [6,7,8,9,10,11,12]. In such a factorization, the nonperturbative effects of the transversely polarized hadron are factorized into twist-3 matrix elements, also called ETQS matrix elements. Taking SSA in Drell-Yan processes as an example, SSA is factorized as a convolution of three parts: The first part is the standard parton distribution function of the unpolarized hadron defined with twist-2 operators. The second part consists of matrix elements of the polarized hadron defined with twist-3 operators. The third part consists of perturbative coefficient functions. The differential cross-section is determined by those twist-2 and twist-3 matrix elements of the initial hadrons and a forward hard scattering of partons from the initial hadrons. The perturbative coefficient functions describe the forward hard scattering. If the factorization can be proven, the coefficient functions can be calculated safely as a perturbative expansion and are free from any soft divergence like collinear and infrared divergences. In this approach the effects of helicity flip are parameterized with twist-3 matrix elements, while the absorptive part is generated in the hard scattering of partons.
The above-mentioned collinear factorization has been derived in a rather formal way by using diagram expansion at hadron level, in which one divides diagrams into three parts. Two of them are related to the two initial hadrons, respectively; the remaining one is related to the parton scattering. By expanding the two parts related to hadrons according to the twist of operators, and the part for the parton scattering with large momentum transfers, one obtains the factorized form of SSA and the perturbative coefficient function at leading order of α_s. The forward hard scattering is participated in by three partons from the polarized hadron, i.e., two quarks with one gluon, and two partons from the unpolarized hadron, e.g., q̄ + (q + g) → γ* + X → q̄ + q, with the antiquark from the unpolarized hadron and the other partons from the polarized one. It seems difficult with this method to derive the perturbative coefficient function at higher orders and to prove the factorization. It is interesting to note that the contributions in the factorization consist of two parts. One part is with the gluon carrying a nonzero momentum, called hard-pole contributions, while the other part is with the gluon carrying zero momentum, called soft-gluon-pole contributions. The two parts are associated with perturbative coefficient functions starting at the same leading order of α_s.
It should be noted that QCD factorizations, if they are proven, are general properties of QCD. These factorizations hold not only with hadron states but also when one replaces the hadron states with parton states. This is in the sense that the perturbatively calculable parts in factorizations do not depend on hadrons and are completely determined by hard scattering of partons. The procedure for this in the case of SSA is the following: With partonic states one can calculate the differential cross-section related to SSA and those twist-2 and twist-3 matrix elements with perturbation theory. In general they will contain soft divergences, which usually appear beyond leading order. By writing the differential cross-section as a convolution of these matrix elements with a perturbative coefficient function, one can determine the function. Beyond the leading order of α_s, one may be able to show that the function is free from soft divergences. If this is true, then the factorization for SSA is proven. This procedure also provides a way to determine the higher-order corrections to the perturbative coefficient function.
In our previous works [13,14,15] we have made such an attempt to derive the factorization by replacing hadrons with partons. To have helicity flip with massless partons, we constructed a multi-parton state to replace the transversely polarized hadron in [15]. But with our partonic results at leading order of α_s we can only find the hard-pole contributions. This is apparently in contradiction with the earlier results.
In factorizations with only leading twist-2 operators, it is interesting to note the following fact: For a differential cross-section, the factorization at leading order is completely determined by partonic results at tree level, i.e., the perturbative coefficient functions at leading order are determined only by the differential cross-section and twist-2 matrix elements calculated at tree level with parton states. This has implications for one-loop results. If the factorization is right or proven, then the collinearly divergent part of the differential cross-section with parton states at one loop is completely determined by the convolutions of the leading-order perturbative coefficient functions with the one-loop matrix elements of twist-2 operators. This pattern for the collinearly divergent part of the differential cross-section can be iterated beyond one loop. Assuming this is also the case for the factorization involving twist-3 operators, one then expects that the collinearly divergent part of the differential cross-section related to SSA at one loop is completely determined by the convolution of the collinearly divergent parts of the twist-2 and twist-3 matrix elements at one loop with the perturbative coefficient functions at leading order. However, this assumption may not be correct. Factorizations involving twist-3 operators can be different from those with only twist-2 operators.
In order to clarify this issue we go beyond the leading order in this work. We find that certain contributions at one loop, which contain collinear divergences, cannot be factorized in the way expected above. To make the factorization with the partonic states correct, one has to introduce additional contributions into the derived factorization which only contains hard-pole contributions. These additional contributions are just the soft-gluon-pole contributions. This is an interesting fact, because a part of the leading-order perturbative coefficient function is determined by quantities at non-leading order. It will be important for calculating higher-order corrections and proving the factorization, following the above outlined procedure. In this work we restrict ourselves to the case where the forward hard scattering is of those partons consisting of two antiquarks from the unpolarized hadron and two quarks with one gluon from the polarized hadron. Besides the above-mentioned contributions, we also find an extra hard-pole contribution corresponding to the forward hard scattering where only the gluon is in the initial or final state. By calculating the twist-3 matrix element corresponding to this hard scattering, we can resolve a part of the discrepancies in the evolution of the matrix element studied in [16,17,18].
In this work we study SSA in Drell-Yan processes in the kinematic limit of small transverse momentum of the observed lepton pair. In this kinematic region there exists another factorization for SSA, called Transverse-Momentum-Dependent (TMD) factorization, similar to the TMD factorization for unpolarized cases studied in [21,22,23,24,25]. In the polarized case the nonperturbative effects of the polarized hadron are factorized into the Sivers function [19], which contains both helicity-flip and T-odd effects. The properties of the Sivers function and SSA with it have been studied extensively [26,27,28,29,30,31,32,33,34,35]. It should be noted that in the kinematic region of the small transverse momentum limit both factorizations apply, if the transverse momentum is much larger than the QCD scale Λ_QCD. It has been shown that the two factorizations are equivalent in this region [10,11,12]. Again, the TMD factorization here is derived in the formal way by using the mentioned diagram expansion at hadron level. In [13,14,15] we have examined the TMD factorization of SSA with parton states and found agreement with existing results. In this work we will only focus on the collinear factorization.
Our work is organized as follows: In Sect. 2 we give our notation for Drell-Yan processes and the factorization of SSA. In Sect. 3 we introduce our multi-parton states and give some relevant results for the twist-3 matrix elements. In Sect. 4 we calculate SSA with our multi-parton state from certain classes of one-loop contributions which are collinearly divergent. We then show that these contributions cannot be factorized in the usual way discussed above. These contributions in fact have to be identified as the mentioned soft-gluon-pole contributions in order to have finite corrections to the perturbative coefficient functions at higher orders. In Sect. 5 we study the extra contribution which should be added to the factorization formula. There we also show that a part of the discrepancy in the evolution of the twist-3 matrix element derived in [16,17] is resolved. Sect. 6 is our summary and outlook.
Collinear Factorization of SSA in Drell-Yan Processes
We will use the light-cone coordinate system, in which a vector a^µ is expressed as a^µ = (a⁺, a⁻, a⃗⊥) = ((a⁰ + a³)/√2, (a⁰ − a³)/√2, a¹, a²) and a⊥² = (a¹)² + (a²)². Other notations are ..., with the light-cone vectors l and n defined as l^µ = (1, 0, 0, 0) and n^µ = (0, 1, 0, 0), respectively. We consider the Drell-Yan process

h_A(P_A, s) + h_B(P_B) → γ*(q) + X → ℓ⁺ℓ⁻ + X,

where h_A is a spin-1/2 hadron with the spin vector s. We take a light-cone coordinate system in which the momenta and the spin are

P_A^µ ≈ (P_A⁺, 0, 0, 0), P_B^µ ≈ (0, P_B⁻, 0, 0), s^µ = (0, 0, s⃗⊥),

with P_A⁺ as the large component. The spin of h_B is averaged. The invariant mass of the observed lepton pair is Q² = q². The relevant hadronic tensor W^{µν} is defined as a matrix element of the forward scattering, and the differential cross-section is determined by the hadronic tensor. We are interested in the kinematical region where q⊥² ≪ Q². The hadronic tensor at leading-twist accuracy has the structure ... In the above, we only give the structures symmetric in µν. W_T contributes to SSA in the region q⊥² ≪ Q², which we will study. We introduce q⁺ = xP_A⁺ and q⁻ = yP_B⁻. All structure functions depend on x, y and q⊥². In the limit q⊥ → 0 only the structure function W_T^(1) gives the leading spin-dependent contribution to the differential cross-section and hence to SSA. The factorization of this structure function is the main subject studied in this work. In the collinear factorization, W_T^(1) is factorized with standard parton distributions of hadron h_B and twist-3 matrix elements of hadron h_A. There are two relevant twist-3 matrix elements, T_F(x₁, x₂) and T_{Δ,F}(x₁, x₂). In their definitions we have suppressed the gauge links along the direction n between the operators. These gauge links make the definitions gauge invariant. The general properties of these twist-3 matrix elements have been discussed in [4,7]. One can show that T_F(x, x) is in general not zero. This corresponds to the so-called soft-gluon-pole contributions, because the gluon field in T_F(x₁, x₂) with x₁ = x₂ = x carries zero momentum entering the hard scattering.
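The displayed definitions were lost in extraction. For orientation only, one common convention for the quark-gluon correlator found in the literature is sketched below; the paper's exact normalization, signs, and the companion definition of T_{Δ,F} may differ.

```latex
% A common ETQS-type convention (a sketch; not necessarily the paper's exact one):
\begin{equation*}
  T_F(x_1,x_2) \sim \int \frac{d\xi^-\, d\eta^-}{4\pi}\,
    e^{\,i x_1 P_A^+ \xi^- + i (x_2-x_1) P_A^+ \eta^-}\,
    \langle h_A, s\,|\,\bar\psi(0)\,\gamma^+\,
    g\,G^{+\mu}(\eta^- n)\,\tilde s_{\perp\mu}\,\psi(\xi^- n)\,|\,h_A, s\rangle,
\end{equation*}
% with \tilde s_\perp^\mu = \epsilon_\perp^{\mu\nu} s_{\perp\nu} and the gauge
% links along n suppressed, as stated in the text.
```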
For the case that SSA is generated through scattering in which an antiquark q̄ from the unpolarized hadron h_B scatters with a quark q, or with a quark and a gluon, from the polarized hadron h_A, i.e., the forward parton scattering q̄ + q + g → γ* + X → q̄ + q or the reversed one, the structure function in the limit q⊥/Q ≪ 1 can be factorized in the form given in Eq. (8) [10]. In Eq. (8), q̄(y₂) is the antiquark distribution function of h_B. The contributions to W_T can be divided into three parts. The part with A_h consists of the hard-pole contributions. In A_h, the first term, with T_F(x₁, x₂), was first derived in [10] and is also confirmed in [15], while the second term, with T_{Δ,F}, has been derived in [15]. The part with A_s and that with A_sq are the soft-gluon-pole contributions, because they are related to T_F(x, x). The soft-gluon-pole contributions in A_sq only appear in the limit Q ≫ q⊥.
Multi-parton State and Twist-3 Matrix elements
In [13,14] we studied SSA with single-parton states, where the helicity flip is caused by a finite quark mass m. In fact one can study SSA at parton level by taking the massless limit m = 0. For this one needs to take the effect of helicity flip as that of a correlation between the spin of quarks and the spin of gluons. For this purpose we consider the state, or the system, |n⟩ with total helicity λ given in Eq. (9), with p₁ + k = p. In the first term, λ_q = λ. For the qg-state, the total helicity is the sum λ_q + λ_g. We specify the momenta as ... The q-state and the qg-state carry the same color index i_c, as given by ..., where b†_i is the quark creation operator with i as the color index, and a†_a is the gluon creation operator with a as the color index. c₁ is taken as a real number.
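The displayed equations for the state were also lost in extraction; the following is a schematic reconstruction of what the prose describes (phase-space integrations, normalization, and the explicit creation-operator form are suppressed and should not be taken as the paper's exact expressions).

```latex
% Schematic reconstruction of Eq. (9) from the prose; normalization suppressed.
\begin{align*}
  |n,\lambda\rangle &= |q(p,\lambda_q)\rangle_{i_c}
      + c_1\,|q(p_1,\lambda_q)\,g(k,\lambda_g)\rangle_{i_c},
      \qquad p_1 + k = p,\\
  \lambda &= \lambda_q \quad\text{(q-state)}, \qquad
  \lambda = \lambda_q + \lambda_g \quad\text{(qg-state)},
\end{align*}
% with c_1 real; both components carry the same color index i_c and are built
% from the quark and gluon creation operators b^\dagger_i and a^\dagger_a.
```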
From standard textbooks we know that the transverse-spin dependent part of a matrix element, like the twist-3 matrix elements or the transverse-spin dependent part of W^{µν}, corresponds to the off-diagonal part of the matrix element in helicity space. Because of helicity conservation in QCD with massless quarks, the twist-3 matrix elements are always zero if we replace the hadron in the matrix elements with a single quark. However, if one replaces the hadron with the above multi-parton state, the twist-3 matrix elements receive nonzero contributions from the interference between the single-quark state and the state consisting of a quark and a gluon. In the interference, the quark always has the same helicity, while the helicity change is due to the helicity of the gluon. The structure function W_T^(1) also receives nonzero contributions from the interference.
By replacing the hadron with the multi-parton state, one can calculate those twist-3 matrix elements and the structure functions perturbatively for the purpose of factorization. At tree level, it is straightforward to obtain the twist-3 matrix elements, as given in Eq. (12). These functions are proportional to c₁, indicating that they come from the mentioned interference. For simplicity we will set c₁ = 1 in the following sections without confusion. It is noted that at tree level the function T_F(x, x) is zero. This is the reason why we could not reproduce in [15] the soft-gluon-pole contributions with the tree-level results of T_F and W_T^(1). However, the function becomes nonzero at one loop. To show this, we examine a particular contribution from the one-loop diagram given in Fig. 1 in Feynman gauge. The contributions contain a U.V. divergence and a collinear divergence. Regularizing these divergences and subtracting the U.V. one, we obtain a result in which the pole in ε_c = 4 − d represents the collinear divergence and µ_c is the scale associated with it. The scale µ is associated with the subtracted U.V. divergence. The collinear divergence appears because the gluon going through the cut can be collinear to the incoming gluon and the outgoing quark in Fig. 1.
In the above we have also given the contribution to T_{Δ,F} from Fig. 1. From this result we see that T_F is nonzero at x₁ = x₂. After examining all one-loop diagrams in Feynman gauge, we find that only the diagram in Fig. 1 gives a nonzero contribution at x₁ = x₂, i.e., as given in Eq. (14). This result is in agreement with our previous calculation in the light-cone gauge n·G = 0 [15]. The function T_{Δ,F} is always zero at x₁ = x₂ = x. To calculate the hadronic tensor, we replace the polarized hadron h_A with the parton state |n⟩ of Eq. (9), and the unpolarized hadron h_B with an antiquark with momentum p̄^µ = (0, p̄⁻, 0, 0). At tree level, W_T^(1) also receives contributions from the interference, i.e., from the forward scattering q̄ + q + g → γ* + g → q̄ + q and q̄ + q → γ* + g → q̄ + q + g. In the limit q⊥ ≪ Q, only the diagram given in Fig. 2 gives the contribution. In the diagram, the short bar means taking the absorptive part of the cut propagator with the momentum k_q. In fact, the short bar here represents a physical cut of the amplitude of the left part of the diagram. It has been shown in [15], with the tree-level results of the twist-3 matrix elements given in Eq. (12), that the tree-level result of W_T^(1) only produces the A_h term in Eq. (8). If one expects that the factorization involving twist-3 operators here happens in the same way as in factorizations with only twist-2 operators, one will conclude that, at leading order of α_s of the perturbative coefficient functions, W_T^(1) is predicted only by the hard-pole contributions of Eq. (8). This is obviously in contradiction with the results in Eq. (8).
Soft-Gluon-Pole Contributions
At tree level, all momenta carried by gluon lines in Fig. 2 are fixed and cannot be zero; e.g., the momentum k₁ of the gluon crossing the cut is fixed by total momentum conservation. Therefore, we cannot identify any gluon line in Fig. 2 corresponding to the gluon field with zero momentum in T_F(x, x) in Eq. (7). Now we consider the case in which there is an extra gluon exchanged and crossing the cut, as in the diagrams given in Fig. 3. In this case the momentum k₁ has to be integrated. The integration includes the collinear region, in which the gluon with k₁ is collinear to the incoming gluon and the outgoing quark. This will result in a collinear divergence. Taking Fig. 3a as such an example, there is an extra gluon exchanged in comparison with Fig. 2. The momentum k₁ hence will be integrated. In the collinear region of k₁, one can realize that the part of Fig. 3a including the three-gluon vertex and the vertex absorbing the gluon with k₁ is essentially given by Fig. 1. This indicates that the collinearly divergent part from the collinear region may be factorized with T_F(x₁, x₂) or T_{Δ,F} given by Fig. 1 in the form of a convolution of T_F or T_{Δ,F} with a perturbative coefficient function. If the function is not the same as those in A_h of Eq. (8) determined at leading order, then one has to add extra terms besides the term with A_h in Eq. (8) to make sure that the factorization is correct at the one-loop level. It is interesting to note that, by taking the absorptive part of the quark propagator in the left part of Fig. 3a, one finds that in the collinear region of k₁ the momentum of the gluon exchanged between the initial gluon and the initial antiquark is that of a soft gluon. More precisely, the exchanged gluon is a Glauber gluon with the momentum pattern k^µ ∼ (λ₀², λ₀², λ₀, λ₀) with λ₀ ≪ 1. This indicates that the possible extra terms may be soft-gluon-pole contributions factorized with T_F(x, x) from Fig. 1. In this section we show that this is indeed the case.
As discussed above, it is easy to find those one-loop diagrams which can give the soft-gluon-pole contributions. The diagrams are those where the incoming gluon emits a collinear gluon and the collinear gluon is absorbed by an outgoing collinear quark. Besides these gluons, there is an extra exchanged gluon crossing the cut, responsible for the finite q⊥. A class of these diagrams is given in Fig. 3. For our purpose we consider the contributions from the collinear region where the gluon crossing the cut and emitted by the initial gluon is collinear to the initial gluon and the outgoing quark.
We take Fig. 3a as an example to show how we obtain the collinearly divergent part. The contribution from Fig. 3a to the hadronic tensor can be written with standard Feynman rules as in Eq. (16).

Figure 3: The diagrams for the amplitude q̄ + q + G → γ* + G + G → q̄ + q, which gives a part of the contributions to SSA.
The color and spin of the initial antiquark q̄ are averaged, which gives the factor (2N_c)⁻¹. The initial quark has helicity λ_q, the initial gluon λ_g. The absorptive part of the scattering amplitude is generated by the cut through the quark propagator. This gives the δ-function δ((k₃ + p̄)²), with k₃ being the momentum carried by the gluon exchanged between the initial gluon and the initial antiquark in the left part of Fig. 3a. We will consider the collinear region where the momentum k₁ of the gluon emitted by the initial gluon is collinear. In the collinear region the momentum k₁ scales as in Eq. (17), with λ₀ ≪ 1. The on-shell condition from the quark propagator fixes k₁⁺ in the collinear region. This constraint implies that the gluon with k₃ is a Glauber gluon. It is soft and may be represented by the gluon field in T_F(x, x).
The evaluation of the contribution containing the collinear divergence from the collinear region given by Eq. (17) is rather straightforward. One first uses the δ-functions to perform the integrations over k₂, k₁⁻ and k₁⁺. The remaining integration is that over k₁⊥. The integrand is then, besides some trivial factors, a product of the terms in [···] in Eq. (16), the denominators of propagators, and the δ-function δ(k₂²). Now one can expand the integrand in λ₀. The leading order is at λ₀⁻³, which does not give the collinear divergence. The next-to-leading order is at λ₀⁻², which gives the collinear divergence after the integration over k₁⊥. Contributions from higher orders are finite. In the expansion we notice that the δ-function δ(k₂²) also depends on k₁ and needs to be expanded. The expansion will give a contribution proportional to the derivative of the δ-function. This contribution may correspond to the terms in Eq. (8) with the derivative of T_F(x, x).
After the integration over k₁⊥ one can take the limit q⊥ ≪ Q. To derive the limit we will use the following in the limit q⊥ → 0: ..., with s = 2p⁺p̄⁻. The calculation can be simplified as follows: We need only calculate the off-diagonal part of the matrix element in helicity space. In this part one always has λ_qλ_g = −1. We will set λ_qλ_g = −1 in our calculation for simplicity. We have the collinearly divergent part of the hadronic tensor in the limit q⊥ ≪ Q: ..., where ··· stands for the following contributions: the contributions at non-leading order in the expansion in q⊥/Q, the contributions which do not contain the collinear divergence, and the contributions of tensor structures other than g_⊥^{µν}. These contributions are irrelevant for our purpose. By adding the complex-conjugated contribution with different parton helicities as a part of the interference, one can then obtain the off-diagonal part of the hadronic tensor in helicity space. From the off-diagonal part we can extract the structure function: ...; again, ··· denotes the irrelevant contributions which do not contain the collinear divergence or are not at the leading order in the expansion in q⊥/Q. In this limit the leading order of W_T^(1) is of order q⊥⁻⁴. In the following we will only give the collinearly divergent contributions in the limit q⊥ → 0 explicitly. Performing similar calculations, we have the contributions from the other diagrams in Fig. 3.

Figure 4: The diagrams for the amplitude q̄ + q + G → γ* + G + G → q̄ + q, which gives a part of the contributions to SSA.
Besides the diagrams in Fig. 3, there is another class of diagrams which give the wanted contributions. The calculations for these diagrams are slightly different from those for Fig. 3. We illustrate this by taking Fig. 4a as an example. The contribution from Fig. 4a can be written as in Eq. (23). Again, the initial quark has helicity λ_q and the gluon λ_g. The cut of the quark propagator gives the δ-function δ((k₃ + p̄)²), with k₃ = k₁ − k the momentum of the gluon emitted by the antiquark in the right part of Fig. 4a; k₁ is the momentum of the gluon emitted by the quark in the right part of Fig. 4a. Unlike the gluon with k₁ in Fig. 3a, which is on-shell, the gluon with k₁ in Fig. 4a is in general off-shell. This difference means that the integration over k₁⁻ looks nontrivial at first, while the integration over k₁⁻ in Fig. 3a can simply be done with the on-shell condition δ(k₁²). Now we consider the collinear region where k₁ is collinear to k and p. The scaling of each of its components is given in Eq. (17). Then, from the δ-function δ((k₃ + p̄)²), k₁⁺ is fixed as k₁⁺ = k⁺ + O(λ₀²) after the integration over k₁⁺. The integration over k₁⁻ can be done with a contour in the complex k₁⁻-plane. With the fixed k₁⁺ one can show from Eq. (23) that there are poles from denominators of quark propagators only in the lower half-plane. These poles correspond to physical cuts. One can use these poles by taking a contour in the lower half-plane to perform the integration. However, we notice that there is only one pole in the upper half-plane. The pole is from the gluon propagator with momentum k₁. One can equivalently use this pole by taking a contour in the upper half-plane to perform the integration. Therefore, the integration over k₁⁻ can be done easily by the replacement given in Eq. (24). This also applies to the other three diagrams in Fig. 4. It is interesting to note that the gluons with k₁ and k₃ in Fig. 4a correspond to the gluons with k₁ and k₃ in Fig. 3a, respectively. The gluon with k₃ in Fig. 4a is also a Glauber gluon. The remaining calculations are similar to those for Fig. 3. We have the following results for W_T^(1) from Fig. 4: ... Summing the contributions from Fig. 3 and Fig. 4, we obtain the collinearly divergent contribution to W_T, denoted as W_{T,s}. If the factorization here takes the same pattern as those with only twist-2 operators, as discussed in the introduction, and assuming that there is only the hard-pole contribution at leading order, one then expects that the above W_{T,s} should obey a relation such that the collinear divergences caused by the collinear gluon in Fig. 3 and Fig. 4 do not appear in the perturbative coefficient functions at one loop. In the above we have already used the tree-level result q̄(y₂) = δ(1 − y₂). It is easy to see that this relation does not hold, because the color factor of W_{T,s} does not match that of T_F(x, y₁) and T_{Δ,F}(x, y₁) from Fig. 1. Therefore, the factorization at leading order must contain extra terms besides the hard-pole contributions.
With the discussion at the beginning of this section, parts in each diagram in Fig. 3 and Fig. 4 can be identified with Fig. 1. By deleting these parts, these one-loop diagrams reduce to those for the forward scattering q̄ + q* + g* → γ* + X → q̄ + q*. In the kinematic region considered here, the off-shell quark can approximately be taken as an on-shell quark. The virtual gluon is the mentioned Glauber gluon.

Figure 6: The diagrams for soft-gluon-pole contributions appearing in the limit q_⊥ → 0.

The contributions from Fig. 6e and Fig. 6f are non-leading in the limit q_⊥/Q ≪ 1. In Fig. 6a the gluon with k_1 corresponds to the gluon with k_1 in Fig. 4a. We observe that the integration over k_1^- here in Fig. 6a is different from that in Fig. 4a. In the lower half of the complex k_1^- plane there is only one pole, from the quark propagator in the right part of Fig. 6a, while there are three poles from the three gluon propagators in the upper half-plane. One may take a contour in the lower half-plane to perform the k_1^- integration, whose result comes only from the pole of the quark propagator. By taking a contour in the upper half-plane, this integration result can also be written as a sum of contributions from the three poles of the gluon propagators. In the limit q_⊥/Q ≪ 1, the terms in W_T^(1) proportional to δ(x_0 − x) come only from the contribution of the pole in the gluon propagator with k_1, i.e., these terms can equivalently be calculated with the k_1^- integration by taking Eq. (24) as for Fig. 4a. With this fact, the gluon exchanged between the two gluon lines is a Glauber gluon. This indicates that these terms may be factorized with T_F(x, x), according to the experience from Fig. 4. The above discussion also applies for the remaining diagrams in Fig. 6. In Eq. (29) the terms proportional to δ(1 − y), except the terms containing logarithms, can be identified as hard-pole contributions at one loop.
The sum of the diagrams in Fig. 5 and Fig. 6 is then obtained. With T_F(x, x) in Eq. (14) it is clear that the terms in [· · ·] reproduce the term A_sq in Eq. (8). The last term in the above is not a soft-gluon-pole contribution and will become relevant if one studies the perturbative coefficient functions at the next-to-leading order. Before ending this section an interesting observation can be made. The SSA calculated here at one loop is generated through the exchange of a Glauber gluon, and it is divergent. This is in contrast to factorizations only with twist-2 operators for Drell-Yan processes. Those factorizations are for differential cross-sections which do not contain T-odd effects. In proving them, the existence of Glauber gluons brings up the most difficult obstacle [36,37,38], but it can be shown that the divergences caused by Glauber gluons are canceled in differential cross-sections [36,37,38]. For the factorization studied here, such divergences are not canceled and need to be factorized into the twist-3 matrix element with x_1 = x_2. This will have some implications for the study of factorizations in the framework of soft-collinear effective theories of QCD [39].
Additional Contributions
In the previous sections we have used the multi-parton state in Eq. (9) to replace the polarized hadron and determined the factorization form of W_T^(1). After the replacement, we have studied SSA essentially in the partonic forward scattering process q̄ + (q + G) → γ* + X → q̄ + q, or the reversed scattering, where the helicity difference between the initial qg state and the final q state is ±1. For a real hadron scattering, it is possible that instead of the above scattering one has the forward scattering q̄ + (q + q̄) → γ* + X → q̄ + g, or the reversed one, where the final gluon and the initial qq̄ pair come from the polarized hadron. If the total helicity of the qq̄ state is zero, this forward scattering will also deliver an additional contribution to SSA besides those given in Eq. (8), because the helicity difference in the scattering is also ±1. This has been realized in the study of the evolution of the twist-3 matrix element T_F(x, x) [17].
The additional contribution can be factorized with the twist-3 matrix elements. The factorization can be studied with our multi-parton state in Eq. (9) by adding a qq̄q state as a component. Replacing the transversely polarized hadron h_A, one obtains SSA from the interference of the qq̄q state with the qg state. The interference with the single-quark state will not contribute to SSA because of helicity conservation. By taking one quark from the qq̄q state and the qg state as a spectator quark, one obtains the mentioned forward scattering q̄ + (q + q̄) → γ* + X → q̄ + g or the reversed scattering. Therefore, we effectively need to replace, e.g., in the twist-3 matrix elements the state |h_A⟩ with a qq̄ state and the state ⟨h_A| with a gluon, or the reverse. The total helicity of the qq̄ state should be zero, and the qq̄ state is in a color octet, in correspondence with the gluon. Taking the quark with k_2 as the spectator, one can simply work out those twist-3 matrix elements at tree level. The factor N is a normalization factor of the spectator quark state together with other trivial factors. The same factor will also appear later in W_T^(1) and T_F(x, x). All relevant quantities calculated with the state in Eq. (31) are the sum of the contributions studied in previous sections and those studied here. Because of this, our study of these contributions can be done separately. It should be noted that there are two possibilities to take one quark as a spectator in the interference with the state in Eq. (31). For simplicity we only present the results of the above possibility in this section, because this does not affect the factorization, i.e., the perturbative coefficient functions. We denote quantities calculated in this way with an index c_2.
We have performed detailed calculations for these two possibilities and obtained the same results for the factorization form of W_T and for the evolution of T_F(x, x), which will be discussed later. Similarly, when we perform the replacement of h_A in the hadronic tensor as done for the twist-3 matrix elements above, one can obtain a nonzero SSA. At tree level, in the limit q_⊥ ≪ Q, only one diagram gives a nonzero contribution. Calculating the contribution in a similar way as shown in [15] and in the previous sections, we obtain the structure function. Comparing with the above T_F we can obtain a factorized form. It should be noted that the contribution of T_F(x_1, x_2) calculated in Sect. 3 is not involved here, because that contribution is zero for x_2 < 0. It is clear that this part should be added to the factorized form in Eq. (8), i.e., the term A_h should be modified accordingly. We note that one argument of T_F(x_1, x_2), with x_2 = x − y_1, is negative, representing the fact that there is an antiquark from the polarized hadron.
In [16,17] the evolution of the twist-3 matrix elements has been studied, and the results are different. The evolution of the non-singlet part of T_F(x, x) with x > 0 is given with z = x/ξ in [17]. The discrepancy is the following: from [16] the evolution has only the terms in the first line, and the last term in the first line has a different sign than that of [17]. With our multi-parton state one can calculate T_F(x_1, x_2) to check the evolution. For this we calculate T_F(x, x) with the contribution from the interference with the qq̄q state, in the same way as shown above. For x > 0 there is only one diagram at one loop, shown in Fig. 8. Adding the results from Sect. 3 and comparing with the result in Eq. (36), we find agreement except for the terms with T_F(x, x) and the last term in Eq. (36). These terms are denoted with · · · in Eq. (38). They cannot be obtained at one loop with our results here; to verify them one has to study them at the two-loop level. With our one-loop results at least a part of the discrepancy is resolved. From the interference with the qq̄q state one also expects soft-gluon-pole contributions in Eq. (34). These contributions are generated in a similar way to those studied in Sect. 4. W_{T,c_2}^(1) receives contributions at the one-loop level from diagrams similar to those in Sect. 4, where one replaces the incoming gluon line going through the cut with an antiquark line and the outgoing quark line with an outgoing gluon line. Comparing the topology of these diagrams with that of Fig. 8 for T_{F,c_2}(x, x), one can expect that the contributions to W_{T,c_2}^(1) from those diagrams can be factorized as the soft-gluon-pole contributions in Eq. (8). We have checked these contributions, and the results confirm this expectation.
Summary and Outlook
Because of the helicity conservation of QCD, SSA of a single-quark state cannot be generated in the relevant forward scattering. We have used multi-parton states in order to have a nonzero SSA in the relevant partonic processes. Using the multi-parton states one can calculate SSA of Drell-Yan processes and the relevant twist-3 matrix elements. With these partonic results we can examine and derive the collinear factorization. The collinear factorization for SSA has been derived in the literature based on a diagram expansion of the relevant hadronic tensor. By using partonic results at tree level, only the hard-pole contributions can be obtained; the soft-gluon-pole contributions cannot. The reason for this has been discussed.
In this work we have performed the study with multi-parton states at the one-loop level in order to examine and identify these soft-gluon-pole contributions in Drell-Yan processes. If we assume that the factorization is correct and that there are only hard-pole contributions at the leading order of perturbative coefficient functions, we find that a class of one-loop contributions to SSA, which are collinearly divergent, cannot be factorized at one-loop order. To correctly factorize the collinear divergences appearing in these contributions, one has to add extra terms to the factorization derived with tree-level partonic results. Interestingly, these extra terms are just the soft-gluon-pole contributions. Therefore, with our multi-parton states we can re-derive the soft-gluon-pole contributions which have been derived with the diagram expansion in [10].
It is interesting to note that the hard-pole and soft-gluon-pole contributions in SSA, i.e., in the structure function W_T^(1), are at different orders of α_s, but the corresponding perturbative coefficient functions are at the same order. The perturbative coefficient functions associated with the soft-gluon-pole contributions, although derived from SSA at one loop, are at the same order as those associated with the hard-pole contributions, which are derived from SSA at tree level. This is in contrast with factorizations for differential cross-sections where only leading twist-2 operators are involved. In these twist-2 factorizations, perturbative coefficient functions at leading order are completely determined by differential cross-sections at tree level and tree-level matrix elements of the involved twist-2 operators. Therefore, our study here also shows an unusual feature of factorizations involving twist-3 operators.
By taking the multi-parton state we also find a new contribution to W_T. The new contribution comes from the parton process where a qq̄ pair with total helicity λ = 0 is converted into a gluon. This new contribution can be factorized with the twist-3 matrix element. One important twist-3 matrix element is calculated at one loop in this work; from the result one can derive the evolution of the matrix element. In this work we can resolve a part of the discrepancy between the evolutions derived in [16,17]. To resolve the remaining parts one has to calculate the twist-3 matrix elements with the multi-parton state at two loops.
In this work we have restricted ourselves to certain relevant partonic processes with one antiquark from the unpolarized hadron. Having succeeded in reproducing the soft-gluon-pole contributions with multi-parton states, one can start to analyze other relevant partonic processes with the method presented here, e.g., the processes involving one gluon from the unpolarized hadron. In such processes it is possible to have soft-quark-pole contributions represented by T_F(0, x). One can also use our method to analyze another type of SSA, appearing in the case when the momentum of each lepton is measured; in this case the factorization of SSA so far still differs among different works [40]. In principle, our method is not restricted to Drell-Yan processes and can be used in any case where SSA appears. We believe that our method has advantages for analyzing factorizations of SSA and for calculating higher-order corrections, because the involved calculations are those of standard scattering amplitudes.
Vitrimer Nanocomposites for Highly Thermal Conducting Materials with Sustainability
Vitrimers, as dynamic covalent network polymers, represent a groundbreaking advancement in materials science. They excel in applications such as advanced thermal-conductivity composite materials, providing a sustainable alternative to traditional polymers. The incorporation of fillers into vitrimers enhances filler alignment and broadens heat pathways, resulting in superior thermal conductivity compared to conventional thermosetting polymers. Their dynamic exchange reactions enable straightforward reprocessing, fostering the easy reuse of damaged composite materials and opening possibilities for recycling both matrix and filler components. We present an overview of current advancements in utilizing vitrimers for highly thermally conductive composite materials.
Introduction
Vitrimers are covalent adaptable network (CAN) polymers capable of undergoing dynamic covalent bond exchange reactions while maintaining their high crosslinking density [1-4]. They have received increasing attention as materials that combine the stability of thermosetting resins with the processability of thermoplastic resins. This innovative class of polymers is covalently bonded and possesses the unique ability to undergo dynamic bond exchange in response to external stimuli [5-7]. During this process, while CAN bonds break and form, a consistent bond density is maintained, allowing for a rearrangement of the network topology [8,9]. This bond exchange process proceeds slowly at room temperature, giving vitrimers mechanical properties similar to thermosetting resins. However, when the temperature surpasses the topology freezing transition temperature (T_v), the bond exchange reaction accelerates, endowing vitrimers with thermoplastic-like characteristics, including reprocessing, remolding, and recycling [10-12].
The versatility of vitrimers is underscored by their unique combination of stability and processability, leading to applications in reprocessable [34,35] and recyclable polymers [36,37] that contribute to sustainable practices. Additionally, vitrimers find use in coatings [38], adhesives [39,40], and reshapable polymers [41,42], showcasing easy processability for adjustments and repairs. Their shape memory properties offer utility in 3D printing [43-45]. Vitrimers play an especially significant role in the field of composites [46-50], showing a wide range of impacts in a variety of applications, thanks to a recyclability that traditional composite materials do not have. Consequently, vitrimers can serve as sustainable alternatives to the polymers currently used in composite materials. The versatile utilization of vitrimers across diverse fields highlights their broad impact on advancing scientific applications (Figure 1). Ongoing research endeavors are focused on exploring novel applications and optimizing the performance of vitrimer-based materials, contributing to sustained progress in materials science.
Recently, to propel the advancement of the next generation of compact, integrated, functional, and portable smart devices, swift and efficient heat dissipation is imperative. This is critical due to the substantial heat generated within these devices, which has the potential to adversely affect the safety and performance of electronic components throughout the operational lifespan of the device [51,52]. Polymer-based composites find extensive application as a solution to this issue. Polymer-based thermally conductive composite materials are fabricated with a polymer (matrix) and a highly thermally conductive ceramic material (filler), as shown in Figure 2. Utilizing polymer composites with high crosslinking density and oriented fillers is an effective strategy for producing composites with superior thermal conductivity [53-55]. By enhancing these two factors, the mean free path of phonons is extended, minimizing phonon scattering and ultimately improving the thermal conductivity of the composite [56]. In this review, we present a study of recent advancements in a novel academic domain. Specifically, we focus on excellent thermal-conductivity composites while simultaneously utilizing the characteristics of vitrimers, such as reshaping and recyclability, thereby pursuing eco-friendliness. We explore the latest advancements in the field, emphasizing the potential for environmentally friendly solutions through the creation of composites with enhanced thermal properties and the distinctive characteristics of vitrimers.
Vitrimer-Assisted Filler Orientation for the Highly Thermal Conducting Pathway of Nanocomposites
Research has been conducted on vitrimers used in high-thermal-conductivity nanocomposites, specifically focusing on composites involving the chemical bonding of 1,3,5-triazine. Within the 1,3,5-triazine chemical group, attention has been directed towards substances from the poly(hexahydrotriazine) (PHT) series. PHT, initially reported by IBM in 2014, is synthesized through the polycondensation of 4,4′-oxydianiline or p-phenylenediamine with paraformaldehyde (as shown in Figure 3), showcasing exceptional mechanical properties and mechanical strength resulting from a high crosslink density [57,58].
In this review, a nanocomposite is fabricated by utilizing PHT, synthesized from p-phenylenediamine and paraformaldehyde, with vitrimer properties; hexagonal boron nitride (h-BN) is chosen as the filler due to its advantageous plate-like structure, proving superior to spherical fillers [59]. The interfacial affinity between the filler and the matrix is crucial for maximizing properties such as thermal conductivity while minimizing molecular voids [58-61]. A computational analysis is employed to assess the intermolecular affinity with h-BN of both PHT and a comparative matrix (the geometry-optimized structures of the PHT matrix and analogous molecules in which nitrogen atoms are replaced with carbon atoms), comparing the oligomeric units of each molecule. The analysis revealed a favorable interaction between the nitrogen in the matrix and the boron in h-BN (Figure 4), leading to a flattening of the overall molecular structure of PHT and a reduced molecular distance between the heteromolecules.

The isotropic thermal conductivity of the h-BN/PHT composite materials is measured, showing a gradual increase with the rise in h-BN content, as shown in Figure 5a. At the highest h-BN content, the thermal conductivity reaches 13.8 Wm⁻¹K⁻¹, aligning with the graphical representation of the Nielsen model. The Nielsen model for the thermal or electrical conductivity of composites is a predictive model that considers the influence of both the composition and the structure of the composite material. It quantitatively models the conduction properties by incorporating factors such as the type and arrangement of components, providing a detailed understanding of how these parameters affect the overall conductivity of the composite. Consequently, this model indicates that enhancing the filler loading can improve thermal conductivity [59,61]. The Nielsen model is expressed as Equation (1) (a reconstruction is sketched after this paragraph), where K_c, K_p, and φ are the thermal conductivities of the composite and the polymer matrix and the filler volume fraction, respectively, and the geometry factors A, D, and λ relate to the filler orientation ratio, the maximum filler volume fraction, and the amount of voids. Following the Nielsen model suggests that the h-BN/PHT composite material easily achieves thermal conductivity within the range of 2 to 8 Wm⁻¹K⁻¹ suitable for high-thermal-dissipation applications. To visually demonstrate the enhanced thermal conductivity, a thermal IR camera monitors the temperatures of samples with different h-BN loadings (Figure 5b). Samples with higher thermal conductivity exhibit a faster temperature increase, indicating more efficient thermal energy conduction through these thermally conductive samples.
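The display form of Equation (1) is lost in extraction above. A minimal reconstruction, assuming the commonly used Lewis-Nielsen statement of the model (the paper's symbols A, D, and λ may map onto this notation differently), is:

    K_c/K_p = (1 + A·B·φ) / (1 − B·ψ·φ),   with   B = (K_f/K_p − 1)/(K_f/K_p + A)   and   ψ = 1 + ((1 − φ_m)/φ_m²)·φ,

where K_f is the filler thermal conductivity, A is a shape/orientation factor, and φ_m is the maximum packing fraction of the filler.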
The superior thermal conductivity observed is attributed to the exceptional alignment of fillers within the PHT matrix. An optimal filler aspect ratio is achieved when the h-BN fillers are perfectly oriented in the radial direction of the sample, as opposed to a random or axial orientation. Radial (K∥) and axial (K⊥) thermal conductivities are measured using the transient plane source method, indicating highly aligned h-BN within the sample in the radial direction (Figure 5c). The degree of filler orientation is estimated using the relationship (K∥ − K⊥)/(2K∥ + K⊥), in which the denominator is the sum of K in all directions, allowing an assessment of how aligned the material is in the radial direction compared to the axial direction [59-62]. The estimated filler alignment as a function of the filler loading is provided in Figure 5d; it would give 0.5 for perfect filler orientation in the in-plane direction. As depicted in Figure 5d, all composite materials, even those with the lowest filler loading investigated in this study, display a pronounced orientation of h-BN along the radial direction of the sample. To verify the alignment of the filler, scanning electron microscopy is utilized for a direct examination of cross-sections from selected samples with varying h-BN loadings. The field emission scanning electron microscope (FE-SEM) images, presented in Figure 6, unequivocally validate the consistent radial alignment of fillers regardless of filler content across all samples. This observational method serves to avoid redundancy and ensures a comprehensive understanding of the filler distribution in composite materials.
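As a worked illustration of this orientation estimate, the short sketch below (in Python, with hypothetical conductivity values rather than data from the cited study) evaluates (K∥ − K⊥)/(2K∥ + K⊥) and shows that it tends to 0.5 as the in-plane alignment becomes perfect (K⊥ → 0) and to 0 for an isotropic sample:

    def orientation_parameter(k_radial: float, k_axial: float) -> float:
        """Degree of in-plane filler orientation: 0 = isotropic, 0.5 = perfect."""
        return (k_radial - k_axial) / (2.0 * k_radial + k_axial)

    # Hypothetical values in W m^-1 K^-1, for illustration only:
    print(orientation_parameter(13.8, 13.8))  # isotropic -> 0.0
    print(orientation_parameter(13.8, 1.0))   # strongly aligned -> ~0.45
    print(orientation_parameter(13.8, 0.0))   # perfect in-plane alignment -> 0.5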
It becomes evident that nanocomposite fabrication with the vitrimer matrix PHT, assisted by the flattened molecules, allows facile radial orientation even with a minimal amount of added h-BN. Consequently, filler networks are established, creating an elongated heat transfer pathway and minimizing phonon scattering [60]. In summary, it can be conclusively asserted that the vitrimer PHT, in stark contrast to traditional polymers, plays an active and influential role in the orientation of fillers within the composite material. Further gains in filler alignment are anticipated when composites are created by incorporating diverse anisotropic fillers other than h-BN. As such, the exploration of anisotropy in composites by introducing various fillers holds promise for advancing the field, offering avenues for improved thermal conductivity in heat dissipation composites.
Reprocessability and Recyclability of Vitrimer-Assisted Filler Nanocomposites
From the research literature, findings unveil the presence of unreacted imines and primary amines [63]. This discovery implies the potential of PHT to exhibit vitrimer behavior through two dynamic bond exchange reactions. The first involves imine metathesis, occurring between imines, while the second is transamination, which encompasses the exchange between amines and imines; the respective reactions are illustrated in Figure 7. Above the temperature T_v, dynamic exchange reactions occur, making reprocessing possible.
Identifying the temperature T_v is crucial, as it plays a significant role in the reformation of vitrimer composites through exchange reactions. Determining this temperature is accomplished using dynamic mechanical analysis (DMA), where tan delta reveals the point at which exchange reactions occur [63]. Additionally, the relatively low activation energy (E_a) of these vitrimers was obtained through Arrhenius plots in relaxation tests using DMA (Figure 8b,c) [63,64]. The characteristic relaxation time τ follows Arrhenius' law and fits the Arrhenius equation upon variation in the temperature, as per Equation (2) (a reconstruction is sketched after this paragraph), where K is the reaction constant, R is the ideal gas constant, and T is the temperature (K). The linear relationship between the characteristic relaxation time and the temperature for each system was obtained by linear fitting of the Arrhenius equation, wherein the slopes of the straight lines give activation energies of 24 kJ/mol, making the materials easily reprocessable. Consequently, after creating the composite, reprocessing leads to reshaping, as illustrated in Figure 8d. Similarly, reshaping occurs even when h-BN is mixed in, demonstrating reformation through dynamic exchange reactions (Figure 8e) [63]. The composites can be easily reshaped at temperatures above the designated T_v threshold.
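The display form of Equation (2) is also lost in extraction. A minimal reconstruction, assuming the standard Arrhenius expression used for vitrimer stress relaxation (with the exchange-reaction rate constant K proportional to 1/τ), is:

    τ(T) = τ₀ · exp(E_a / (R·T)),   equivalently   ln τ = ln τ₀ + E_a/(R·T),

so that a linear fit of ln τ against 1/T yields E_a/R as the slope, which is how the 24 kJ/mol value quoted above would be extracted.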
The PHT matrix can be chemically broken down under low-pH conditions (≤2), which was employed to reclaim h-BN from the composite materials [61,63]. After soaking PHT in an acidic solution for more than 24 h, the absence of any solid residue affirmed the complete chemical breakdown (Figure 9a). Breaking down the composite materials in an acidic solution resulted in a translucent pink mixture, from which the h-BN filler, surpassing 99% in weight, was easily separated after multiple washes and vacuum drying. Assessing the recovered h-BN's quality involved comparing values from Raman spectra (14.9 cm⁻¹ and 15.0 cm⁻¹), indicating a sustained quality (Figure 9b). This was corroborated by XPS results, revealing similarities in the elemental composition (Figure 9c). Furthermore, PHT withstood dissolution in common organic solvents even after prolonged soaking, demonstrating a resilient resistance, with decomposition only occurring under acidic conditions. In conclusion, the nanocomposite for heat dissipation, leveraging the characteristics of the vitrimer PHT, stands as an advanced material with facile reprocessability and recyclability.
Natural Supramolecule-Based Vitrimer Nanocomposites Containing a Large Thermal Pathway
Tannic acid (TA), with its bio-based polyphenolic structure, can be considered a polymeric compound. Its polymer-like properties arise from the presence of multiple phenolic hydroxyl groups in its structure, allowing it to form complex networks through various interactions. Tannic acid is especially known for its ability to form strong and stable crosslinks [65,66]. This property is particularly useful in polymer chemistry, where crosslinking enhances the mechanical strength and stability of polymers. The phenolic hydroxyl groups in tannic acid can react with various substrates, creating a crosslinked network [67]. The intricate network of tannic acid can be utilized to form a large thermal pathway, enhancing the production of a nanocomposite with high thermal conductivity. Boronic ester bonds make vitrimers unique and distinguish them from traditional polymers, as they provide the material with properties like reprocessability, stress relaxation, and adaptability to changing conditions [24-26]. Here we introduce the incorporation of tannic acid's phenolic network into boronic ester vitrimers, which can create large thermal pathways.
To create a vitrimer with a high crosslink density, tannic acid, boric acid, and glycerol are utilized under basic conditions (OH⁻ generated by a NaOH solution) to form borate ions [68]. Subsequently, a vitrimer incorporating boronic ester bonds is generated. During the optimization process (Table 1), a vitrimer with an increased crosslink density is formed to enhance the thermal pathway and reduce phonon scattering during manufacturing. In addition, to further enhance the thermal pathway of the nanocomposite, glycerol and cellulose nanofibers (CNFs) are mixed in to enable intermolecular hydrogen bonding. CNFs, a nanoscale filler commonly used to enhance composite properties [69], are included in the system. A new system is prepared by adding CNF, aiming to create a continuous structure of CNFs with abundant hydroxyl groups, facilitating the formation of hydrogen bonds with both boric acid and tannic acid (refer to Figure 10). The results, as confirmed by FT-IR, indicate that system 2, which includes CNFs, forms the highest proportion of dynamic covalent bonds, as shown in Figure 11a,b [68]. Moreover, a DMA analysis reveals that system 2 exhibits the highest glass transition temperature (T_g), and when the crosslink density is calculated [68], it shows the highest value (0.0093 mol/cm³ for system 2) (Figure 11b). Therefore, the high-crosslink-density boronic ester-based vitrimer system 2 is utilized for the production of a high-thermal-conductivity nanocomposite.

In the pursuit of potential applications in thermal management materials, composite materials were created by blending the system 2 vitrimer with highly thermally conductive fillers such as Al₂O₃ and h-BN to advance its thermal conductivity [68]. The findings related to thermal conductivity, as depicted in Figure 12, showcase the thermally conductive properties of the composites created using system 2 with varying proportions of the Al₂O₃ and h-BN fillers. Significantly, the thermally conductive characteristics of the composites escalated in direct correlation with the filler content. Without fillers, the unaltered system 2 composite exhibited a thermal conductivity of 0.49 Wm⁻¹K⁻¹, representing a twofold increase compared to the thermal conductivity of the pure bisphenol A epoxy resin (0.24 Wm⁻¹K⁻¹). The thermal conductivity of system 2/Al₂O₃ rose to 1.58 Wm⁻¹K⁻¹ with the inclusion of 28 vol% of Al₂O₃, marking a threefold increase compared to the system 2-only composite and a twofold increase compared to the epoxy composite containing 28 vol% of Al₂O₃ (0.65 Wm⁻¹K⁻¹). Similarly, the system 2/h-BN composite achieved a remarkable thermal conductivity of 16.75 Wm⁻¹K⁻¹ containing 43 vol% of h-BN. This measurement represents a 34-fold increase compared to the thermal conductivity of the system 2-only composite and a 16-fold increase compared to the epoxy composite containing 43 vol% of h-BN (1.04 Wm⁻¹K⁻¹) [68]. A comparison between the theoretical (Nielsen model) and experimental data for the system 2/Al₂O₃ and h-BN composites is depicted in Figure 12a,d. Both sets of data show a progressive elevation with filler concentration, indicating that optimizing the filler loading can improve thermal conductivity [59,61,68]. The thermally conductive properties of system 2 and its composites surpass those of commercially accessible epoxy mold compounding materials, as evident from these outcomes.
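To make the Nielsen-model comparison concrete, the sketch below implements the Lewis-Nielsen form given earlier and sweeps the filler loading; all parameter values (shape factor, maximum packing, filler conductivity) are hypothetical placeholders, not fitted values from the cited studies:

    def lewis_nielsen(k_p: float, k_f: float, phi: float,
                      A: float = 1.5, phi_m: float = 0.637) -> float:
        """Composite conductivity K_c from the Lewis-Nielsen model.
        k_p: matrix and k_f: filler conductivity (W m^-1 K^-1),
        phi: filler volume fraction, A: shape factor, phi_m: max packing."""
        B = (k_f / k_p - 1.0) / (k_f / k_p + A)
        psi = 1.0 + ((1.0 - phi_m) / phi_m**2) * phi
        return k_p * (1.0 + A * B * phi) / (1.0 - B * psi * phi)

    # Hypothetical sweep: epoxy-like matrix (0.24 W m^-1 K^-1), h-BN-like filler.
    for phi in (0.1, 0.2, 0.3, 0.4):
        print(f"phi = {phi:.1f}: K_c = {lewis_nielsen(0.24, 300.0, phi):.2f}")

The monotonic rise of K_c with φ mirrors the experimental trend reported above.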
Furthermore, cylindrical composites (20 mm in diameter, 4 mm in height) composed of system 2/Al₂O₃ and system 2/h-BN (with 0-50 wt% filler) were positioned on a heating plate at a consistent temperature of 90 °C. The heat-conduction characteristics of every composite were directly examined employing an infrared thermal imaging tool, where brighter colors signal elevated temperatures on the surface of the composite. Figure 12b,c,e,f show temperature changes in the system 2/Al₂O₃ and h-BN composites from 1 to 180 s, over an identical duration. A persistent trend is observed, where augmented filler concentrations consistently result in elevated surface temperatures in all composite systems, underscoring the contribution of the fillers in enhancing the heat transfer and thermally conductive properties [68].
To understand the roles of TA and CNF in achieving this high thermal conductivity, the researchers examined the cross-sectional morphologies of system 2 + 30 wt% of Al₂O₃ and system 2 + 30 wt% of h-BN using FE-SEM, as shown in Figure 13a,b, respectively. The results reveal that the CNF, with its high aspect ratio, was uniformly dispersed in system 2, leading to increased crosslinking and a greater number of hydrogen bonds. This enhanced the thermal pathways for phonon vibration, reducing phonon scattering.
Furthermore, this confirmed that the fillers were horizontally oriented due to the vitrimer nature of system 2. This unique alignment, influenced by the vitrimer's elastic properties, resulted in a longer heat transfer pathway, reduced phonon scattering, and higher thermal conductivity. In particular, the plate-shaped h-BN exhibited a clearer alignment in the horizontal direction, with filler-filler interconnections contributing to an even larger percolation network. The high dispersibility of the filler and the formation of a good percolation network were identified as crucial factors contributing to the high thermal conductivities observed in these filler-containing composites. These findings propose a mechanism whereby the high thermal conductivity in these composites is due to the structure of system 2: TA and CNF, characterized by their large size and aromatic/cyclic structures, with plate- and rod-like shapes, have excellent thermal conductivities. The many hydroxyl groups on their surfaces create an effective network through B-O-C or hydrogen bonding, establishing the thermal pathway required for high thermal conductivity. Furthermore, the vitrimer with h-BN forms a highly horizontally oriented plate-like layer during the thermal process, further enhancing thermal conductivity (Figure 13c).
Thermal Grating Structure Using Reprocessability of Vitrimer
The boronic ester functional group exhibits dynamic exchange reactions, particularly through its boronic ester bonds. Determining the reprocessing temperature (T_v) is crucial for these bonds. Various techniques exist to evaluate T_v, and an innovative method involves employing the creep test [68]. During this test, the slope of the strain values increases non-linearly at a specific temperature, indicating a balance between the vitrimer intermolecular bond breakage and recombination rates as the temperature rises (see Figure 14a). The temperature at which the slope changes non-linearly signifies T_v. In Figure 14b, T_v for the current vitrimer system is approximately 40 °C, and this value remains constant irrespective of the CNF content. Notably, this thermally conductive composite cannot be reprocessed and reshaped without external pressure; however, reprocessing becomes feasible under pressure above the T_v temperature. Subsequently, the reprocessed composite sample is fabricated using the heat-press method, as depicted in Figure 14c.

In Figure 15, the advantage of reformation is utilized through the thermal pressure molding of a composite grating, formed by repeatedly connecting four segments of system 2 and system 2/h-BN (50 wt%), using the reversible properties of the boronic ester bonds under heat-press conditions. Placed on a heating plate at 90 °C, the sample shows the surface temperature change along its longitudinal direction, affirming the successful creation of a four-segment composite grating with varying thermal conductivities. In conclusion, utilizing the advantages of heat-pressure reprocessing allows for the easy creation of a grating structure. This enables efficient heat transfer only in the desired areas, facilitating effective reprocessing.
Recyclability of Vitrimer and Sustainability of Vitrimer Nanocomposites
The composite showcased notable dissolution in a citric acid solution after 21 h, stemming from the hydrolysis of the boronic ester bonds, which led to the collapse of the crosslinked network [68,70]. While achieving complete solubility in water after 96 h, the composite remained non-soluble in ethanol. These observations underscore the distinct solubility characteristics of the prepared composites in contrast to conventional thermosetting materials. As depicted in Figure 16a, pristine vitrimer compounds demonstrated the ability to dissolve in acidic solutions. Utilizing these features, the recyclability of a thermally conductive composite filled with h-BN was assessed by immersing it in a 1 mol/L citric acid solution. Within 21 h, the composite underwent total dissolution in the citric acid solution, as illustrated in Figure 16a. The non-soluble white powder, indicative of the h-BN filler, was effortlessly isolated via filtration, followed by iterative rinsing with deionized water and acetone and subsequent vacuum drying. The recovered h-BN and the reference h-BN underwent XPS analysis (Figure 16b). The results indicate that both samples exhibited comparable elemental compositions, confirming the successful recycling of the filler without substantial alterations to its composition.
Conclusions
Vitrimers, as covalent adaptable network polymers, have opened new avenues in materials science, offering a unique blend of stability and processability. The ability of vitrimers to undergo dynamic covalent bond exchange reactions in response to external stimuli has paved the way for applications in various industries. In particular, their role in high thermal-conductivity composite materials is significant, contributing to the development of alternatives to traditional thermoset composites. From the viewpoint of thermal conductivity, the incorporation of fillers into vitrimers can help in directional alignment, forming well-structured intermolecular networks that effectively minimize phonon scattering. Further enhancement of filler alignment is also anticipated when composites are created by incorporating fillers with anisotropic characteristics. In addition, by utilizing natural materials, vitrimers create a broad heat pathway for phonons through dynamic network interactions, resulting in significantly enhanced thermal conductivity compared to conventional thermosetting polymers.
Moreover, the dynamic exchange reactions of vitrimers enable easy reprocessing and adaptation to external forces. This distinctive feature allows damaged or fractured composite materials to be reused through reprocessing. The ability to break crosslinks presents the potential for recycling both the matrix and filler components of vitrimer-based high-thermal-conductivity composites. The material's adaptability to external forces, combined with the capability for reshaping and reprocessing, underscores its potential for developing high-performance, recyclable composite materials that surpass the thermal conductivity of traditional thermosetting polymers.
In the evolving field of industrial devices, as they trend towards becoming smaller, thinner, and lighter, effective heat dissipation is anticipated to emerge as a critical challenge.Addressing these concerns, the advantages of vitrimers, as highlighted in this review, position them as a promising solution for designing polymer composite materials with a focus on efficient heat dissipation and sustainable alternatives.
Figure 1. Diagrammatic representation of dynamic covalent linkages employed in vitrimer materials and their applications.
Figure 2. High thermal conductive polymeric composite materials are produced by integrating polymer and ceramic fillers with excellent thermal conductivity properties.
Figure 4. Geometry optimization is performed to model the interaction between nitrogen atoms in the PHT matrix and the h-BN surface. The comparative matrix involved examining the geometry-optimized structures of the PHT matrix and analogous molecules, replacing nitrogen atoms with carbon atoms near the h-BN surface, with oligomer units, for simplification, depicted in (a), and detailed views provided in (b). Reproduced from Ref. [61]. Copyright 2023, Elsevier.
Figure 5. (a) Thermal conductivity of h-BN/PHT composites with varying loadings of h-BN fillers. A predictive model, based on the Nielsen model, is depicted as a dashed blue line in the same plot. (b) Thermal infrared images capture the heating process of the composites at 90 °C, with the filler volume fractions indicated in parentheses, and the corresponding temperature values are provided for each image. (c) The measured radial (K∥) and axial (K⊥) thermal conductivities, along with their standard deviations, are presented to illustrate the anisotropy in thermal conductivity concerning h-BN loading. (d) Degree of filler orientation calculated from the anisotropic thermal conductivity. The red dotted line serves as a reference for perfect h-BN orientation in the radial direction of the composite. Reproduced from Ref. [61]. Copyright 2023, Elsevier.
The relaxation time of the vitrimer network follows the Arrhenius equation upon variation in the temperature, as per the following Equation (2): τ(T) = τ₀ exp(E_a / (RT)), where τ₀ is a pre-exponential constant, E_a is the activation energy of the bond exchange reaction, and R is the universal gas constant.
Figure 7. (a) Schematic representation of the imine exchange reaction, and (b) transamination providing the chemical rearrangements characterizing this reversible reaction. (c) Schematic illustration of the crosslink network rearrangement by dynamic bond exchange reactions above Tv.
Figure 8. (a) Dynamic mechanical analysis (DMA) is employed to investigate the behavior of PHT up to 150 °C. (b) The stress relaxation curves for poly(hexahydrotriazine) (PHT) across temperatures up to 140 °C. The dotted line indicates the constant e⁻¹. (c) Characteristic relaxation times (τ) for neat PHT are determined as part of the analysis. Photographs illustrate the preparation of a disk-shaped sample, its subsequent fragmentation into smaller pieces, and the reprocessing of (d) PHT and (e) h-BN/PHT composites. Reproduced from Refs. [61,63]. Copyright 2023, Elsevier and Wiley.
Figure 9. (a) PHT and h-BN/PHT composites undergo depolymerization by immersing them in an acidic aqueous solution (pH = 2) at room temperature for one day. (b) Raman spectra are then compared between the original h-BN (blue) and the recovered h-BN from the h-BN/PHT composite (gray). (c) XPS spectra of the recycled h-BN are compared with the reference h-BN, with normalization based on the maximum N1s peak intensity of each scan. Reproduced from Ref. [61]. Copyright 2023, Elsevier.
Figure 10. Schematic diagram of the chemical structures engaged in the creation of crosslinked networks in a vitrimer derived from natural sources. Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Figure 11. (a) The intensity ratio of FT-IR peaks from B-O-C to B-O for composites in systems 0-3. (b) Dynamic mechanical analysis plots for composites in systems 1, 2, and 3. Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Figure 12. A graph of experimental and theoretical thermal conductivity of (a) system 2/Al2O3. An increase in the surface temperatures of system 2/Al2O3 composites over time during heating, along with (b) corresponding infrared thermal depictions for the composites incorporating different Al2O3 concentrations. (c) Variations in the surface temperatures of system 2/h-BN composites during heating. Measured results in the same sequence corresponding to system 2/h-BN composites (d-f). Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Figure 13. Cross-sectional FE-SEM images of system 2 with (a) Al2O3 and (b) h-BN. The yellow circles represent Al2O3, the blue lines represent CNF, and the red lines represent h-BN. Mechanism of the thermal pathway established by the crosslinking networks involving hydrogen bonds and boronic ester bonds in composites of (c) system 2 and (d) the composites after incorporation of the h-BN filler into system 2. Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Figure 14. (a) Schematic representation of the network rearrangement process through a boronic exchange reaction. (b) The topology freezing transition temperature (Tv) for system 2, as determined through the creep test. (c) Reprocessing capability of system 2 utilizing a heating press. Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Figure 15. Photographic images of the reprocessed thermal grating composite, thermal infrared images, and a temperature-change graph along the length depict the composite consisting of four segments: system 2/h-BN (50 wt%) and system 2 in succession. Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Figure 16. (a) The specimens were dissolved by immersion. Top row, the pristine matrix; bottom row, the h-BN composite. (b) XPS spectra of the recovered and reference h-BN filler samples. The peaks were normalized using the maximum N1s peak intensity of each scan. Reproduced from Ref. [68]. Copyright 2023, Elsevier.
Transfer Learning-Based Vehicle Collision Prediction
Traffic accidents are an important problem in modern society. Vehicle collision prediction is one of the key technical points that must be broken through in future driving systems. However, due to the complexity of the traffic environment and differences in drivers' emergency-handling ability, vehicle collisions are very difficult to predict. Although experts and scholars have tried to monitor and predict accidents in real time according to environmental conditions, overly sensitive warnings or inaccurate predictions may cause serious consequences. Therefore, in order to predict the occurrence of vehicle collisions more accurately, this paper analyses and models the driving mode of the vehicle based on transfer learning, using the vehicle's previous performance data, so as to predict future collisions and even the collision time of the vehicle. Finally, using a real-world Internet of Vehicles data set, this paper carries out a large number of experiments to verify the effectiveness of the proposed model.
Introduction
With the development of society, the number of vehicles is gradually increasing, and traffic safety has attracted more and more attention. The frequent occurrence of traffic accidents is worrying: more than 10 million people worldwide are injured in road accidents every year. Among these accidents, vehicle collision is a serious safety problem, accounting for almost 30% of all accidents [1].
However, many accidents are closely related to improper operation by drivers and the lack of a timely and effective response to emergencies. In fact, with the development of technology, especially the Internet of Vehicles, automatic driving, and related technologies, the state of various parts of the vehicle can be tracked completely, making it feasible to predict whether the vehicle will collide, and even the specific collision time, based on these data. However, since the implementation and deployment of intelligent transportation systems and the Internet of Vehicles are still at an initial stage, such data remain very difficult to obtain. At present, aiming at the problem of vehicle collision prediction, some researchers have proposed to monitor the vehicle environment in real time through radar installed on the road and the vehicle's own infrared, camera, and other sensing equipment, so as to use these data to predict vehicle collisions and give timely warnings to drivers [2-4]. However, such data based on the external environment, such as radar signals and images that are vulnerable to weather, carry strong uncertainty. Therefore, some researchers have turned their attention to the relatively stable interior of the vehicle, for example assessing the possibility of collision by paying attention to the driver's behaviour [5,6]. However, vehicle interior data are usually difficult to model dynamically in an effective way, because drivers' driving habits are highly personalized.
Recently, the rapid development of deep learning has brought great technological changes to various fields. Among them, transfer learning, which makes full use of previously collected data so that the current model performs better on less data, has been favoured by more and more researchers [7-9]. Inspired by these works, this paper applies transfer learning to the complex task of vehicle collision prediction. In fact, compared with complex external data, real-time vehicle operation data from the interior of the vehicle, such as vehicle speed, accelerator pedal position, and brake pedal state, are less vulnerable to environmental interference and are directly related to the driving state of the vehicle. Transfer learning can discover more features related to vehicle collisions from limited vehicle operation data by means of knowledge transfer. In order to fully mine the correlation between vehicle running state and vehicle collision from these limited data, this paper uses the efficient modelling approach of transfer learning to build a model that can not only accurately predict whether a vehicle will collide but also clearly point out the possible collision time. Specifically, the contributions of this paper are as follows:
(1) This paper proposes a vehicle collision prediction model based on transfer learning, called TLVC. This method explores a new use of Internet of Vehicles data and provides strong technical support for safe driving in the future.
(2) This paper develops a dedicated feature analysis method for the operation data from the inside of the vehicle, which provides a reference for the dynamic behaviour modelling of drivers. Moreover, this method of using vehicle internal data is more reliable than previous methods based on images and radar signals.
(3) Using a small amount of Internet of Vehicles data and EfficientNet, this paper constructs a transfer learning model that is more accurate and clearer than previous vehicle collision prediction models. A large number of experiments confirm the effectiveness and accuracy of the proposed model.
The remaining sections of this paper are arranged as follows: the second section introduces research related to vehicle collision and transfer learning. The third section introduces the vehicle collision prediction model proposed in this paper. The fourth section presents the real data set and experiments. The last section summarizes the paper and discusses future work.
Related Work
In this part, we introduce previous work on vehicle collision prediction in detail and review prior research on transfer learning.
Vehicle Collision Prediction.
In recent years, many researchers have studied vehicle collision avoidance and collision prediction. For example, Wang et al. [3] proposed a pipeline from data collection to preprocessing to prediction based on convolutional neural networks and collected real trajectory data, but did not deeply explore how to use the collected data to make more accurate predictions. Lyu et al. [4] established a lane-change intention recognition model by tracking the driving direction of the vehicle and then built a collision early-warning model by comprehensively predicting the vehicle trajectory. Candela et al. [10] combined road layout information, statistical agent dynamics, and discrete Gaussian processes for future vehicle position estimation, so as to realize vehicle collision prediction. Peng et al. [5] constructed a comprehensive "driver-vehicle-road" data set for actual driver behaviour evaluation, mainly analysing driver behaviour and the relevant factors that significantly affect driving safety in emergency situations. Lee et al. [11] constructed a dynamic riding simulator that can control rolling motion, quantified driving behaviour using lateral control ability, the driver's head movement, and emotional state, and predicted overall collision avoidance ability using multiple regression analysis of driving behaviour. Zhang et al. [12] proposed a multi-pedestrian collision risk assessment framework according to the motion characteristics of vehicles, including a motion prediction module, a collision inspection module, and a collision risk assessment module. Katrakazas et al. [13] integrated network-level collision estimation and vehicle-based risk estimation in real time under the joint framework of an interactive perception motion model and a dynamic Bayesian network (DBN), using a machine learning classifier for real-time network-level collision prediction. Wang et al. [14] proposed a collision prediction method based on a bivariate extreme value theory framework, taking into account the driver's perception-response failure to take appropriate avoidance actions.
To sum up, current vehicle collision prediction is mainly based on road information, the external motion characteristics of vehicles, and vehicle trajectories, combined with various roadside sensing units. It pays little attention to the characteristics of the vehicle itself and the driver's operation state, mainly models vehicle collision as a classification problem, and lacks prediction of the collision time.
Transfer Learning.
Transfer learning improves the performance of a model in the target domain by transferring the knowledge contained in other source domains, which can greatly reduce the model's dependence on target-domain data. Due to its wide application prospects, transfer learning has attracted extensive attention recently [15]. For example, Ruder et al. outlined modern transfer learning methods in natural language processing (NLP), how models are pretrained and what information is captured in their learned representations, and reviewed examples and case studies on how these models are integrated and adjusted in downstream NLP tasks [16]. Raghu et al. discussed the characteristics of transfer learning for medical imaging [17]. Through a series of analyses of transferring to block-shuffled images, Neyshabur et al. separated the effect of feature reuse from the high-level statistical information of the learning data and showed that some benefits of transfer learning come from the latter [9].
To sum up, owing to its outstanding advantages, transfer learning has attracted wide attention in fields such as medicine and biology, but it has rarely been applied in the emerging field of the Internet of Vehicles. The work of this paper fills this vacancy: it explores the use of transfer learning to model the real-time driving data of vehicles, analyse the vehicle state, and learn a vehicle collision prediction model, ultimately improving the safety of the driving system.
Method
This part focuses on the vehicle collision prediction model based on transfer learning proposed in this paper. The overall structure of the model is shown in Figure 1.
Problem Definition.
Before introducing the model, we first give the definition of the vehicle collision prediction problem solved in this paper.
Definition (Vehicle Collision Prediction).
Given a set of n vehicles {c_1, c_2, ..., c_n}, each vehicle c_i has operation-status data S_i ∈ R^(h×d) obtained from the monitoring system inside the vehicle, where h is the number of historical running-state records of vehicle c_i and d is the dimension of the monitoring data. The vehicle collision prediction problem in this paper finally comes down to training a prediction model ∅(·) that predicts the future collision of a given vehicle c_i from its operation-state data S_i, as follows:

T = ∅(S_i), (1)

where T ∈ [0, 1] is the final prediction result: T = 0 means that the vehicle will not collide in the future, and T ∈ (0, 1] represents the specific time at which the collision occurs. Figure 2 shows an example of vehicle collision prediction. In this example, the vehicle running-state data in the Internet of Vehicles are first used to analyse the vehicle's operation pattern, the proposed TLVC model is used to fit this pattern, and the model is then used to predict whether there will be a collision in the future and the time of the collision.
For the real label G (the physical time of the collision) of the training data, we process it as follows:

Y = 0 if no collision; Y = (G − G_start) / 86400 otherwise, (2)

where G_start denotes the time point at which monitoring of the current vehicle starts and 86,400 is the number of seconds in one day. Y is the training label finally fed into the model, and the above formula makes Y ∈ [0, 1]. The specific details of the prediction model ∅(·) are described below.
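As a concrete illustration, the label construction of Equation (2) can be written as the following minimal sketch; the exact normalization (the offset from monitoring start divided by the seconds in one day) is our reading of the formula, and the function name is ours, not the paper's:

```python
def make_label(collided: bool, g: float, g_start: float) -> float:
    """Map a raw collision timestamp G (seconds) to a target Y in [0, 1]."""
    if not collided:
        return 0.0                      # Y = 0 encodes "no collision"
    return (g - g_start) / 86400.0      # fraction of a day until the collision
```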
Preprocessing of Vehicle Operation Status Data.
This part corresponds to the left half of Figure 1. The vehicle running data processed in this part include "accelerator pedal position," "collect time," "battery pack main negative relay status," "battery pack main positive relay status," "brake pedal state," "driver leaving prompt," "main driver seat occupancy status," "driver seat belt status," "driver demand torque value," "handbrake status," "vehicle key status," "low voltage battery voltage," "the current gear status of the vehicle," "the current total current of the vehicle," "the current total voltage of the vehicle," "vehicle mileage," "speed," and "steering wheel angle." Among them, features such as "battery pack main positive relay status" and "brake pedal state" are categorical features, while features such as "speed" and "steering wheel angle" are numerical features. For each categorical feature, this paper adopts three different coding methods to fully mine the relationship between these original features and vehicle collision. The first coding method is simple one-hot coding; the final coding result is H_1 ∈ R^(s×d_1), where s is the total number of samples and d_1 is the dimension of the final one-hot code. The other two types of coding are realized through probability distributions. Specifically, an original feature X takes possible values {x_1, x_2, ..., x_k, ..., x_c}, where c is the total number of categories.
The second coding method uses the co-occurrence probability of a feature value and a vehicle collision:

P = φ(Y = 1, X = x_k) / φ*,

where φ(Y = 1, X = x_k) is the number of vehicles that collided and whose value of feature X at the last monitored time point is x_k (the value currently being coded), φ* is the total number of vehicles in the sample set, and the probability P is the code for X = x_k. The final coding result of all categorical features obtained by this method is H_2 ∈ R^(s×d_2), where d_2 is the dimension of the co-occurrence-probability coding of the categorical features.
In the third coding method, we consider the frequency distribution of feature X over the vehicle's whole detection window:

P = F(Y = 1, X = x_k) / Σ_{j=1..c} F(Y = 1, X = x_j),

where F(Y = 1, X = x_k) represents the number of times the value of feature X equals x_k within the detection windows of vehicles that collided; the count is normalized by the sum over all c categories.
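To make the three encodings concrete, the following is a minimal pandas sketch; the data layout, column names, and function names are our illustrative assumptions rather than the paper's specification:

```python
import pandas as pd

def encode_last_state(vehicles: pd.DataFrame, feature: str) -> pd.DataFrame:
    """vehicles: one row per vehicle, with the categorical `feature` holding
    its value at the last monitored time point and 'y' = 1 if it collided.
    Returns the one-hot code (H1) plus the co-occurrence probability code (H2)."""
    enc = pd.get_dummies(vehicles[feature], prefix=feature)       # H1: one-hot
    n_total = len(vehicles)
    # H2: P = phi(Y=1, X=x_k) / phi*, the fraction of all vehicles that both
    # collided and ended their window with X = x_k.
    cooc = vehicles.loc[vehicles["y"] == 1, feature].value_counts() / n_total
    enc[f"{feature}_cooc"] = vehicles[feature].map(cooc).fillna(0.0)
    return enc

def window_frequency_code(records: pd.DataFrame, feature: str) -> pd.Series:
    """records: one row per monitored time point, with columns `feature` and
    'y' (collision label of the owning vehicle). Returns, per category x_k,
    F(Y=1, X=x_k) normalized by the total count over all categories (H3)."""
    counts = records.loc[records["y"] == 1, feature].value_counts()
    return counts / counts.sum()
```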
Prediction Model.
This part corresponds to the right part of Figure 1. In order to obtain better results on the limited vehicle operation data set, this paper reuses some parameters of a pretrained EfficientNet [24] to realize vehicle collision prediction through transfer learning. The transfer learning method is widely used in the field of image processing. In this paper, the two-dimensional vehicle running-state data within a time window W are treated analogously to the pictures in image processing, and the correlation between vehicle running data and vehicle collision is learned through EfficientNet. The model thus learns the dynamics of vehicle operation, realizing the modelling of drivers' dynamic behaviour habits. Figure 3 shows the structure of the B0 version of EfficientNet. EfficientNet achieves good results without consuming more computing resources by jointly scaling the depth, width, and input resolution of the network. This relatively small and refined model is quite cost-effective for applications in the Internet of Vehicles. EfficientNet has been widely used in transfer learning in recent years and has performed well in various tasks, so this model is used in this paper. In addition, the experiments below verify that, compared with other transfer learning models, EfficientNet is a better choice for this task.
In our task, in order to make effective use of the parameters of the pretrained model, we froze half of the parameters in EfficientNet during training and let the other half participate in training the vehicle collision prediction model, which helps to localize the model parameters as much as possible. That is, while making full use of the existing network parameters, the model learns the task characteristics of the current vehicle collision data set. The input of the transferred EfficientNet model is H. As shown in Figure 1, its output is fed into a final fully connected (FC) layer, which then outputs the prediction T.
Finally, the parameters of the whole model are updated by AdamW. Since the two tasks (whether the vehicle finally collides, and when the collision occurs) are combined into a unified regression problem through formula (2), MSE is used as the final optimization objective during model training.
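The following is a minimal PyTorch sketch of this setup. The torchvision backbone, the rule of freezing the first half of the parameter list, the single-output regression head, and the image-like input layout are our assumptions for illustration; the paper does not spell out these implementation details.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b4

# Load an ImageNet-pretrained EfficientNet-B4 backbone.
backbone = efficientnet_b4(weights="IMAGENET1K_V1")

# Freeze the first half of the parameters (keep_pro = 0.5); the remaining
# half stays trainable so the network can localize to the collision task.
params = list(backbone.parameters())
for p in params[: len(params) // 2]:
    p.requires_grad = False

# Replace the 1000-class classifier with a single regression output T.
in_features = backbone.classifier[1].in_features
backbone.classifier = nn.Linear(in_features, 1)

optimizer = torch.optim.AdamW(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
loss_fn = nn.MSELoss()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """x: (batch, 3, H, W) encoded state windows; y: (batch, 1) labels Y."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Squashing the output into [0, 1], for example with a sigmoid, is a further design choice left open here.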
Experiment
In this section, we will use the real vehicle signal data of the Internet of Vehicles to verify the effectiveness and accuracy of the proposed vehicle collision prediction model TLVC based on transfer learning.
Experimental Settings.
This study conducted all experiments on a computer with an Intel(R) Core(TM) i7-11700F @ 2.50 GHz and 16 GB DRAM. We implemented these algorithms in Python 3.6. The data used in this paper come from the 2021 Digital China Innovation Contest (https://www.datafountain.cn/competitions/500) and include a series of vehicle operation data, vehicle collision labels, and collision times. The data cover the operation of 120 vehicles over 2-5 days in total; the number of detection records per vehicle is at least 4324 and at most 114,460. To enrich the data set and handle the long vehicle state sequences, we truncate all vehicle detection data; that is, we take every w consecutive records as one sample. After this processing, we finally obtained 355,509 training samples and 42,058 test samples. The relevant parameter design of this paper is shown in Table 1.
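As an illustration of this truncation step, here is a minimal sketch; the non-overlapping windowing and the function name are our assumptions, since the paper only states that every w consecutive records form one sample:

```python
import numpy as np

def make_windows(records: np.ndarray, w: int = 100) -> np.ndarray:
    """Cut one vehicle's monitoring sequence (shape: T x d) into windows of
    w consecutive records, each becoming one sample of shape (w, d).
    Any leftover records shorter than w at the end are dropped."""
    n = (len(records) // w) * w
    return records[:n].reshape(-1, w, records.shape[1])

# Example: a vehicle with 4324 records and d = 18 signals yields 43 samples.
samples = make_windows(np.zeros((4324, 18)), w=100)
print(samples.shape)  # (43, 100, 18)
```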
To verify the effectiveness of the proposed TLVC method, this paper compares the proposed model with the following methods:

MLP: multilayer perceptron, a simple neural network model, used as a baseline in this paper.

RNN: recurrent neural network; compared with a plain neural network, this method can deal with data that change over a sequence.

LSTM: long short-term memory, a special RNN mainly designed to mitigate vanishing and exploding gradients when training on long sequences; compared with an ordinary RNN, LSTM can perform better on longer sequences.

BiLSTM: bidirectional LSTM, an extension of LSTM; because two LSTMs are trained in opposite directions, the performance of sequence prediction models can be improved.

Self-attention: this method is widely used in the field of natural language processing because of its excellent performance [25]. Vehicle running-state data is time-series data, which is very similar to text in natural language processing; therefore, this method is used as a comparison method in this paper.

CNN: convolutional neural network [26], a method widely used in the field of image processing, here used to process the two-dimensional vehicle running-state data.

ResNet: deep residual network, an excellent model that improves on CNNs in the field of image processing [27].

To measure the accuracy of the proposed vehicle collision prediction model, MSE is used; the results are shown in Figure 4. As shown in Figure 4, we compared the proposed model with the other benchmark models on the real-world Internet of Vehicles data set. MSE reflects the prediction accuracy of the model, and the results show that the prediction error of the proposed TLVC model is smaller than that of the other benchmark models. It can also be seen that the methods for processing serialized data, LSTM and BiLSTM, obtain large model errors; the results of these two models are even worse than RNN. This may be because these two models can only learn the characteristics of the time series, and with the sampling window w = 100 this length may lead to vanishing gradients during the learning process of these sequence models. Unexpectedly, the MLP, with its simpler structure composed of a two-layer fully connected neural network, obtained a lower prediction error than LSTM and BiLSTM in the experiment. This may be because the fully connected neural network learns more effective features over the whole monitoring window, rather than paying too much attention to the vehicle state transition process within the window, as the time-series models do. The prediction error of the CNN model is slightly better than MLP, but according to the R² results its stability is much lower than the other models. This may be because only local information about vehicle state changes in the monitoring window is extracted through convolution, so it is difficult to infer the final collision of vehicles. The MSE performance of the self-attention model is second only to the proposed TLVC model, because it can fully learn the dynamic changes and correlations of the various vehicle characteristic signals in the monitoring window. However, its R² value is not high, which may be due to the lack of training data. The R² of ResNet and of the transfer-based TLVC model is slightly higher than that of self-attention, mainly because transfer models are less dependent on the amount of data.
However, ResNet is similar to the CNN model: because it only focuses on the local features in the monitoring window, its final MSE is large.
On the one hand, the proposed TLVC has better model performance when only a small amount of training data is available, because transfer learning depends less on data. On the other hand, when using the transfer learning method with EfficientNet, we retain only half of the parameters of the original model, and the other half of the model parameters are learned during model training, which means the TLVC model does not rely too much on large quantities of data while still carrying out effective localized learning.
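For reference, the two evaluation metrics discussed above, MSE and R², can be computed as in this minimal sketch (standard textbook definitions, not code from the paper):

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error: average squared deviation of predictions."""
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 minus SSE over total variance."""
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - sse / sst)
```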
Ablation Experiment.
In order to verify the effectiveness of the vehicle state preprocessing part proposed in this paper, we compare the proposed TLVC model with a variant from which the feature preprocessing module is removed; the comparison results are shown in Table 2.
As can be seen from the data in the table, the accuracy of TLVC[-f], the variant without feature preprocessing, is slightly worse. This shows the effectiveness of the vehicle running-state data processing in this paper. In particular, this paper encodes a series of categorical features, and the ablation results show the effectiveness of these coded features in the proposed model, which provides ideas for the effective mining and application of Internet of Vehicles data.
Parameter Learning.
In order to make the model as effective as possible, in this part we analyse the influence of some parameters of the proposed model, such as the proportion of retained pretrained parameters and the learning rate. As shown in Figure 5, keep_pro is the proportion of pretrained transfer-learning model parameters retained by the model; keep_pro = 0.5 means that half of the parameters in EfficientNet-B4 are retained, while the other half are trained on the local vehicle collision data set. It can be seen from the figure that when keep_pro = 0.5, the model achieves a low prediction error. Figure 6 shows our discussion of the learning rate lr of the whole TLVC model. Through experiments, we found that the model achieves better results when lr = 0.0001.
Conclusion
In this paper, we use vehicle running-state data and propose TLVC, a transfer learning-based model that can predict whether and when a vehicle will collide in the future. Compared with previous methods, this method does not need to rely on unstable external environmental data: it only needs to process the vehicle operation signals with the proposed preprocessing method and then train the semi-parametric transfer model on local data to achieve high accuracy. In particular, because some parameters of the TLVC transfer model do not need training, learning on less vehicle operation data can achieve the purpose of task localization. Furthermore, by comparing with a series of state-of-the-art benchmark models in a large number of experiments, we verify the outstanding effect of the proposed method.
However, there are still some limitations. For example, the preprocessing of the vehicle operating state still relies on human understanding of the data, which is one of the issues we will explore further. In addition, traditional transfer learning usually transfers knowledge within the same field, whereas this paper, limited by the scarce sources of Internet of Vehicles data, transfers from a different field. Therefore, if we can obtain data of the same type in the future, we will investigate more effective transfer models.
Data Availability
The data set used to support the findings of this study is included within the article.
|
2022-08-31T15:21:09.480Z
|
2022-08-28T00:00:00.000
|
{
"year": 2022,
"sha1": "a05a974a6591d4fc94c6ef4da2a7ee767c615841",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/wcmc/2022/2545958.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a4b7988ab8db905e11bd0202906933490b64240e",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
}
|
233424416
|
pes2o/s2orc
|
v3-fos-license
|
Chronic High Fat Diet Intake Impairs Hepatic Metabolic Parameters in Ovariectomized Sirt3 KO Mice
High fat diet (HFD) is an important factor in the development of metabolic diseases, with liver as metabolic center being highly exposed to its influence. However, the effect of HFD-induced metabolic stress with respect to ovary hormone depletion and sirtuin 3 (Sirt3) is not clear. Here we investigated the effect of Sirt3 in liver of ovariectomized and sham female mice upon 10 weeks of feeding with standard-fat diet (SFD) or HFD. Liver was examined by Folch, gas chromatography and lipid hydroperoxide analysis, histology and oil red staining, RT-PCR, Western blot, antioxidative enzyme and oxygen consumption analyses. In SFD-fed WT mice, ovariectomy increased Sirt3 and fatty acids synthesis, maintained mitochondrial function, and decreased levels of lipid hydroperoxides. Combination of ovariectomy and Sirt3 depletion reduced pparα, Scd-1 ratio, MUFA proportions, CII-driven respiration, and increased lipid damage. HFD compromised CII-driven respiration and activated peroxisomal ROS scavenging enzyme catalase in sham mice, whereas in combination with ovariectomy and Sirt3 depletion, increased body weight gain, expression of NAFLD- and oxidative stress-inducing genes, and impaired response of antioxidative system. Overall, this study provides evidence that protection against harmful effects of HFD in female mice is attributed to the combined effect of female sex hormones and Sirt3, thus contributing to preclinical research on possible sex-related therapeutic agents for metabolic syndrome and associated diseases.
Introduction
The metabolic syndrome is a cluster of risk factors responsible for the development of cardiovascular diseases and many other health problems, and as such is one of the leading risks for global deaths representing a serious threat to public health [1]. A high fat diet (HFD) is an important factor in the development of many metabolic diseases, with liver as a metabolic center being highly exposed to its influence [2]. Metabolic syndrome can be effectively mimicked and studied in rodent models using various dietary interventions, including HFD [3], which then lead to mitochondrial dysfunction and other metabolic changes induced by oxidative stress (reviewed in [1]). Feeding mice with HFD results in one of the diet-induced models of non-alcoholic fatty liver disease (NAFLD), which is accompanied by liver inflammation and steatosis [4]. Indeed, hepatic steatosis occurs when high concentrations of circulatory fatty acids (FAs) reaching the liver and de novo lipogenesis are not counterbalanced by FA oxidation or lipid export as lipoproteins [5].
In most mammals, including humans, life expectancy is female-biased [6,7]. Females show a lower incidence of some age-related pathologies linked with oxidative stress, and this sex difference disappears after menopause, which leads to the conclusion that this protection is attributed to sex hormones (reviewed in [8]). Thus, one approach to studying age-linked pathologies is to investigate hormone-depleted or -augmented animals and their defense from metabolic stressors. Estradiol (E2) is an important regulator of energy homeostasis, thus making it a potential target for preventing or treating metabolic disorders. Previous studies established the association of metabolic syndrome and E2 loss during menopause in women (reviewed in [9]) and reported that E2 can alter hepatic proteins involved in de novo lipid synthesis [10]. However, the mechanism behind those observations, especially how the fat-lowering action of E2 is modulated in the liver, remains elusive, especially in a sex-related manner.
Sirtuin 3 (Sirt3) is a mitochondrial protein that integrates cellular energy metabolism and plays an important role in preventing metabolic syndrome [4]. Although Sirt4 and Sirt5 are also present in mitochondria, Sirt3 is the main mitochondrial deacetylase, because only Sirt3-knockout mice show hyperacetylation of mitochondrial proteins and less effective mitochondria. In addition, Sirt3 is involved in the regulation of all mitochondrial functions, including the tricarboxylic acid (TCA) cycle, the urea cycle, amino acid metabolism, fatty acid oxidation, oxidative phosphorylation (OXPHOS), ROS detoxification, mitochondrial dynamics, and the mitochondrial unfolded protein response (UPR) [11,12]. It promotes mitochondrial oxidative metabolism via deacetylation of numerous metabolic enzymes, including those involved in FA catabolism, as demonstrated earlier by an abnormal accumulation of FA oxidation intermediates in Sirt3 KO mice [4]. It was also shown that Sirt3 expression is reduced during chronic HFD in male mice [4,13]. Although E2-dependent protection includes improvement of mitochondrial function [14], it is not clear whether Sirt3, as a pivotal factor regulating mitochondrial biogenesis and reactive oxygen species (ROS) management, participates in these events.
In our recent study, we found significant sex differences in mice at the level of metabolic, oxidant, antioxidant, and mitochondrial parameters upon HFD. We also pointed towards a different role of Sirt3 in males and females under conditions of nutritive stress, with a higher reliance of males than females on the effect of Sirt3 against HFD-induced metabolic dysregulation [13]. These observations led us to the hypothesis that females' protection from HFD-induced metabolic dysregulation in vivo could be attributed to the complementary beneficial effects of Sirt3 and ovary hormones. However, the mechanism by which this combination operates still needs to be elucidated. Therefore, this study explored the metabolic, mitochondrial, oxidative, and antioxidative parameters following HFD and ovarian hormone deprivation in young adult Sirt3 WT and Sirt3 KO female mice.
Ovariectomy Increases Sirt3 and pgc1-α Expression in the Liver of Female Mice
Since ovaries are the main source of female sex hormone production in the body [15], we performed ovariectomy (ovx) to assess the effect of ovary hormones in our experiments. Loss of ovarian hormones was confirmed by reduced uterus size and by cytological examination of vaginal smears, showing estrous phase in control (sham) and anestrous phase in ovx mice (Supplemental Figure S1A-C). To assess the functional role of hepatic Sirt3 with respect to ovx and HFD, we examined gene and protein expression in female sham and ovx Sirt3 WT and KO mice after 10 weeks of feeding with SFD or HFD. Ovx increased sirt3 gene (*** p < 0.001) and protein (* p < 0.05) expression in both SFD and HFD conditions (Figure 1A-C). Thus, ovx upregulated Sirt3 irrespective of the type of diet in WT female mice. A similar pattern was observed with peroxisome proliferator-activated receptor-gamma coactivator-1 alpha (pgc1-α), a master regulator of mitochondrial function [16]. Ovx upregulated pgc1-α gene expression regardless of Sirt3 in SFD conditions (* p < 0.05) (Figure 1D). Following HFD, KO normalized the ovx-induced (** p < 0.01) pgc1-α gene expression ( a p < 0.01). These data indicate that ovx induces both pgc1-α and Sirt3 in WT mice irrespective of diet and that expression of pgc1-α in ovx KO mice depends on the type of diet.
The Effect of Sirt3 and Ovx on Body Weight Gain Depends on the Type of Diet
Fasting glucose level remained unchanged by either ovx or Sirt3 depletion in both SFD- and HFD-fed mice (Figure 2A). In agreement with our previous observations [13], we found no change in body weight in SFD- and HFD-fed sham mice (data not shown). However, body weight gain was affected differently in ovx KO mice depending on the type of diet: SFD-fed ovx KO mice gained less weight than WT ovx ( a p < 0.05) or sham KO mice (** p < 0.01) (Figure 2B). Contrary to SFD conditions, upon HFD, ovx KO mice gained more weight than WT ovx ( b p < 0.01) or sham KO mice (** p < 0.01). Also, HFD-fed ovx KO mice were the only group with increased body weight gain compared to their SFD littermates ( xxx p < 0.001). These data indicate that ovary hormone deficiency in the absence of Sirt3 makes these mice most resistant towards gaining weight on SFD but also most sensitive towards gaining weight upon HFD.
Sirt3 and Ovx have Combined Effect on the Expression of Genes Responsible for Lipid Metabolism and Oxidative Stress
Due to significant differences in body weight gain with respect to Sirt3 and ovx in SFD and HFD, we investigated whether the combination of Sirt3 depletion and ovx affected genes involved in lipid metabolism and oxidative stress, i.e., peroxisome proliferator-activated receptor alpha (pparα) [17], cytochrome P450 4a14 (cyp4a14), cyp2e1 [18], and heme oxygenase-1 (ho-1) [19]. In SFD-fed mice, ovx affected pparα expression differently depending on Sirt3: ovx increased pparα only in WT mice (*** p < 0.001), without affecting KO mice ( a p < 0.01) (Figure 3A). In HFD-fed mice, pparα was also increased only in ovx WT compared to sham WT mice (* p < 0.05). HFD generally increased levels of pparα in all groups ( x p < 0.05) except in ovx WT. In SFD conditions, cyp4a14 expression was increased in ovx KO mice compared to both ovx WT ( a p < 0.01) and sham KO mice (* p < 0.05) (Figure 3B). In HFD-fed conditions, ovx increased cyp4a14 expression only in KO mice ( ** p < 0.001). Also, cyp4a14 expression was higher in all HFD-fed mice compared to the respective SFD-fed groups ( x p < 0.05, xx p < 0.01). In SFD-fed mice, ovx increased cyp2e1 expression in both WT (* p < 0.05) and KO mice (** p < 0.01) (Figure 3C). Sirt3 depletion reverted the upregulated cyp2e1 expression in ovx mice ( a p < 0.05). Changes in cyp2e1 expression between SFD- and HFD-fed mice were observed in the sham KO ( xx p < 0.01) and ovx KO groups ( x p < 0.05), whereas HFD increased or decreased cyp2e1 in these groups, respectively. Ho-1 gene expression remained unchanged in SFD conditions (Figure 3D). Following HFD, ovx increased ho-1 gene expression in WT mice compared to WT sham mice (* p < 0.05), but Sirt3 depletion reverted it ( a p < 0.05). These data indicate that ovx induces pparα, cyp2e1, and ho-1 genes in WT mice, and Sirt3 depletion mostly reverses this effect. On the other hand, cyp4a14 is induced by HFD, and additionally by the combination of ovx and Sirt3 depletion.
Sirt3 KO Ovx Mice Have Reduced Lipid Accumulation in SFD Conditions
To determine whether the increase in expression of genes involved in lipid metabolism was associated with hepatic lipid accumulation with respect to Sirt3 and ovx in SFD and HFD conditions, we measured lipid content using Folch extraction and performed the immunohistochemical (IHC) analysis of hepatic tissue using oil red staining. In SFD conditions, ovx reduced lipid content in Sirt3-depleted mice (** p < 0.01) ( Figure 4A). Expectedly, HFD-fed mice had more lipid content than SFD-fed ( xxx p < 0.001), while ovx decreased lipid content in HFD conditions, irrespective of Sirt3 (* p < 0.05). IHC analysis showed interaction between Sirt3 and ovx in SFD-fed conditions: sham KO mice accumulated more lipids than WT ( a p < 0.01), and ovx WT mice accumulated more lipids than sham (* p < 0.05). Similar to Folch, oil red staining showed that SFD-fed mice depleted of both Sirt3 and ovary hormones accumulated less lipids compared to either ovx WT ( b p < 0.001) or sham KO mice (*** p < 0.001) ( Figure 4B,C). In HFD conditions, all groups had higher lipid accumulation compared to their SFD-fed littermates ( xxx p < 0.001).
Sirt3 KO Ovx Mice Have Reduced Scd-1 Ratio and Less MUFA in SFD Conditions

To determine global changes in FA composition, we determined total hepatic saturated FAs (SFA), monounsaturated FAs (MUFA), and polyunsaturated FAs (PUFA) by gas chromatography (GC). SFD-fed mice had a higher proportion of SFA compared to HFD-fed mice only in KO groups, irrespective of ovx ( xx p < 0.01) (Figure 5A). Proportions of MUFA were lowest in SFD-fed ovx KO mice, compared to both sham KO (** p < 0.01) and WT ovx mice ( a p < 0.01) (Figure 5B). HFD-fed mice had significantly higher proportions of MUFAs than SFD-fed mice ( xxx p < 0.001). The highest proportions of PUFAs were detected in SFD-fed ovx KO mice, compared to both sham KO (** p < 0.01) and WT ovx mice ( a p < 0.01) (Figure 5C). HFD-fed mice had lower PUFAs than SFD-fed mice ( xxx p < 0.001).

These results indicate that in SFD-fed mice the depletion of Sirt3 and ovary hormones was associated with more hepatic PUFA than MUFA content, and that HFD markedly shifted the dominant FAs in the liver from PUFAs to MUFAs following ten weeks of feeding.

The most abundant SFAs were palmitate (C16:0), followed by stearate (C18:0) (Supplemental Figure S2A,B). SFD-fed ovx KO mice displayed accumulation of stearate compared to WT ovx mice ( a p < 0.001). Moreover, stearate was increased in all groups of SFD-fed mice compared to HFD-fed mice ( xxx p < 0.001), but did not influence the total SFA content in WT SFD mice compared to HFD, as observed in Figure 5A. Since mice depleted of both ovary hormones and Sirt3 on SFD had lower MUFA and higher PUFA levels, we explored which FAs contributed to the increase in the PUFA to MUFA ratio. The main MUFAs were palmitoleic, oleic, and vaccenic acid (Supplemental Figure S3A-C). Oleic acid, the main product of the Scd-1 reaction, which links Scd-1 with the development of obesity and the metabolic syndrome [20], was significantly decreased in SFD-fed ovx KO mice (* p < 0.05), making all three main MUFAs reduced upon Sirt3 and ovary hormone deficiency. Furthermore, oleic acid levels were higher in all HFD-fed groups compared to SFD-fed groups ( xxx p < 0.001), which suggests that oleic acid is responsible for the higher MUFA proportions after HFD feeding.
The most abundant PUFA were linoleic acid (LNA), followed by arachidonic (AA) and docosahexaenoic acid (DHA) (Supplemental Figure S4A-C). Generally, lower total PUFAs in HFD-fed mice are the result of reduced levels of LNA, AA, and DHA.
Stearoyl-CoA desaturase-1 (Scd-1) plays an important role in lipogenesis and is expressed in metabolically active tissues, such as liver and adipose tissue [21]. The desaturation index (DI), the ratio of product to precursor FAs, is an indirect marker of tissue Scd-1 activity and is decreased in conditions of inhibited Scd-1 activity [4]. In our study, DI was determined as the palmitoleic/palmitic acid ratio. In SFD conditions, Sirt3 depletion significantly attenuated ( a p < 0.001) the ovx-mediated increase in Scd-1 ratio (** p < 0.01) (Figure 5D). HFD-fed mice displayed a similar Scd-1 ratio across all groups, which was significantly higher than in SFD-fed groups ( x p < 0.05, xx p < 0.01, xxx p < 0.001). The higher Scd-1 ratio in HFD-fed mice, indicating higher Scd-1 activity, may reflect the higher MUFA content of the HFD rather than being a direct product of Scd-1 activity.
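The DI is a simple ratio; a minimal Python sketch of the calculation, using hypothetical GC proportions rather than study data, is:

```python
# Minimal sketch of the Scd-1 desaturation index (DI) used here:
# DI = product FA / precursor FA, computed per animal as the
# palmitoleic (C16:1) over palmitic (C16:0) proportion from GC.
# All FA percentages below are hypothetical illustrations.

def desaturation_index(c16_1: float, c16_0: float) -> float:
    """Return the palmitoleic/palmitic Scd-1 desaturation index."""
    if c16_0 <= 0:
        raise ValueError("palmitic acid proportion must be positive")
    return c16_1 / c16_0

# Example: per-group mean FA proportions (% of total FAMEs), hypothetical.
groups = {
    "sham_WT_SFD": (2.1, 24.0),
    "ovx_WT_SFD": (3.4, 23.5),   # ovx-mediated increase in DI
    "ovx_KO_SFD": (1.6, 25.1),   # attenuated by Sirt3 depletion
}
for name, (c16_1, c16_0) in groups.items():
    print(f"{name}: DI = {desaturation_index(c16_1, c16_0):.3f}")
```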
Combination of Ovx and Sirt3 Depletion Increases Lipid Damage in SFD Conditions
Since ovx induces oxidative stress and Sirt3 ameliorates oxidative damage, we also determined, by lipid hydroperoxide (LOOH) analysis, the effect of ovx and Sirt3 depletion on oxidative damage to lipids with respect to the type of diet. In SFD-fed sham mice, no changes were observed in LOOH levels, whereas upon ovx, KO mice displayed higher LOOH than WT mice ( a p < 0.01), indicating that the combination of ovx and Sirt3 depletion resulted in increased lipid damage (Figure 6). Also, SFD-fed ovx WT mice had lower LOOH than sham WT (* p < 0.05). Within the HFD group, sham KO mice displayed lower LOOH than WT mice ( b p < 0.01). Ovx significantly reduced LOOH levels in WT mice (** p < 0.01), without an effect of Sirt3. SFD-fed KO mice had significantly higher lipid damage than HFD-fed KO mice, irrespective of ovx ( x p < 0.05, xx p < 0.01). Together, these data confirm that combined Sirt3 and ovary hormone depletion is associated with increased lipid damage only in SFD conditions.
Ovariectomized Females Maintain Mitochondrial CII-Driven Respiration in HFD Conditions
To determine if mitochondrial function was affected by ovx and/or Sirt3 depletion, we measured CI-driven (malate + glutamate, ADP added) and CII-driven (succinate + rotenone, ADP added) active mitochondrial respiration with a Clark-type electrode. In SFD conditions, KO mice exhibited lower CI-driven respiration than WT mice irrespective of ovx ( a p < 0.01) (Figure 7A). A similar effect was observed in HFD-fed mice ( b p < 0.001), indicating the importance of Sirt3 in active mitochondrial respiration. CII-driven respiration showed a similar trend as CI with respect to Sirt3 in SFD conditions, with KO mice having lower respiration than WT mice in both the sham ( a p < 0.05) and, to a greater extent, the ovx group ( b p < 0.001) (Figure 7B). Following HFD, KO mice also showed lower CII-driven respiration than WT mice ( c p < 0.001). HFD generally decreased CII-driven respiration in both sham WT ( xx p < 0.01) and KO mice ( xxx p < 0.001) compared to their SFD-fed littermates. Surprisingly, CII-driven respiration was maintained in HFD-fed ovx mice, being higher than in HFD-fed sham mice (*** p < 0.001). These data indicate that mitochondrial CI-driven respiration depends only on Sirt3. CII-driven respiration likewise depends on Sirt3, but is additionally diet- and ovary hormone-dependent, with nutritional stress repressing CII-driven respiration only in sham mice.
Antioxidative Enzyme Activities Are Affected by Ovx and Type of Diet

Since we previously found that the antioxidative enzyme system was affected in a sex-related manner with respect to the type of diet [13], we analyzed the activities of major antioxidant enzymes: catalase (Cat), manganese superoxide dismutase (MnSod), and copper-zinc superoxide dismutase (CuZnSod). Expectedly, Cat activity was unchanged within SFD-fed mice (Figure 8A). Within HFD, ovx mice had lower Cat activity than sham mice irrespective of Sirt3 (* p < 0.05), but it was still significantly increased compared to SFD-fed groups, indicating increased Cat activity following HFD in all groups ( x p < 0.05, xx p < 0.01). Interestingly, MnSod activity was increased by hormone depletion in SFD conditions (** p < 0.01), regardless of Sirt3 (Figure 8B). HFD-fed mice had no change in MnSod activity between groups, and only WT sham mice had increased activity compared to their SFD-fed littermates ( xx p < 0.01). Similar to Cat activity, CuZnSod activity remained unchanged within SFD-fed mice, but decreased in ovx mice compared to sham mice (** p < 0.01) following HFD (Figure 8C). This indicates that hormone depletion decreases both Cat and CuZnSod activity in HFD-fed mice. Overall, the activities of antioxidant enzymes were not affected by Sirt3, only by ovx and type of diet.
Discussion

Obesity and metabolic syndrome represent major health problems worldwide [22], and their development is associated with metabolic and hormonal changes occurring during the lifespan in both sexes. Since diet can additionally favor disease progression, understanding these disorders and their causes in both sexes is a high priority. Although many studies have shown that females have better protection against HFD-induced metabolic stress than males [23–26], the combined effect of the main mitochondrial deacetylase Sirt3 and ovary hormones in regulating metabolic stress in vivo has not yet been investigated. To test our hypothesis that females' protection from HFD is attributable to the synergistic effect of female sex hormones and Sirt3, we investigated the effects of Sirt3 depletion and ovarian hormone deficiency (ovx) on metabolic parameters, mitochondrial function, the antioxidant system, and the lipid profile in the liver of SFD- and HFD-fed female 129S mice.

Our results indicate that in SFD-fed Sirt3 WT mice, ovx resulted in the following compensatory response to stress: increased pgc1-α and its downstream target Sirt3, accompanied by maintained mitochondrial function, increased FA synthesis (higher Scd-1 activity [27]), and increased MnSod activity coupled with lower levels of LOOH. Ovx also induced pparα expression, known for upregulating hepatic lipid metabolism proteins [28]. Although it has been reported that Sirt3 downregulates Scd-1 in male mice [29], here ovx in females caused upregulation of Sirt3 and increased Scd-1. However, combined ovary hormone and Sirt3 depletion caused downregulation of Scd-1, indicating that Sirt3 participates in the regulation of Scd-1 in the absence of ovary hormones. It is possible that Sirt3 increases Scd-1 by deacetylating and inhibiting its negative regulator STAT3 [21]; however, this needs to be confirmed in future studies. We suggest that, although in SFD conditions ovx mice have increased FA synthesis, the upregulation of Sirt3 compensates for the loss of ovary hormones by maintaining mitochondrial metabolism and preventing oxidative damage. Consistent with previous reports associating increased Sirt3 expression with decreased oxidative stress [30,31], these results show that in SFD conditions Sirt3 protects ovx females from mitochondrial dysfunction and oxidative damage.
Previous studies have shown that the genetic background of the mouse strain plays a significant role in the progression of disease under the same HFD conditions. Although 129S mice develop features of metabolic syndrome with lower severity compared to some other strains [32–36], 129S Sirt3 KO mice are extremely useful in studying the role of FA oxidation in diabetes, steatosis, and life span [4,37]. However, these metabolism-associated studies involved male mice only. Although our strain of mice develops obesity with lower severity, weight gain was significantly affected in KO ovx mice: these females were the most resistant to gaining weight on SFD but the most sensitive upon HFD. Decreased body weight gain in SFD conditions was associated with reduced pparα expression, lower Scd-1 activity, and the lowest hepatic lipid content. The inhibition of Scd-1 has been found to shift FA metabolism towards an increased FA oxidation pathway [38], which participates in the alleviation of obesity by burning off excessive accumulated lipids [39]. This may explain their resistance to gaining weight under SFD conditions. Other parameters that indicate reduced Scd-1 activity are the lowered content of oleic acid in favor of stearic acid, and elevated PUFAs, known as powerful inhibitors of Scd-1 gene expression, specifically AA and DHA (reviewed in [21]). Also, ovx KO females had more compromised CII-driven mitochondrial respiration associated with the highest lipid oxidative damage. In conclusion, we propose that the combined loss of both Sirt3 and ovary hormones in SFD conditions results in the lowest body weight gain as a consequence of reduced pparα expression and Scd-1 activity, compromised CII-driven respiration, and higher lipid oxidative damage, thus pointing towards an unquestionable protective effect of combined Sirt3 and ovary hormones in maintaining metabolic and oxidative homeostasis in female mice.
HFD-fed ovx mice usually exhibit an obese phenotype with decreased energy expenditure, underscoring the importance of the estrogen signaling pathway in the maintenance of energy homeostasis [40]. However, the role of Sirt3 in these processes is not clear. In agreement with previous data regarding the protective role of Sirt3 and ovary hormones in obesity [11,41], HFD-fed ovx KO females displayed the highest body weight gain but, interestingly, lower lipid accumulation than sham mice. This is consistent with our previous study if we compare ovx KO females to male KO mice, which also had the lowest lipid accumulation on HFD, indicating their increased reliance on FAs, which probably compensated for their impaired glucose uptake [13]. At the time, we assumed that the observed sex-related differences in lipid accumulation were present only in Sirt3 KO mice, but we now demonstrate the same effect in both ovx WT and KO females, suggesting an important role of ovary hormones in these processes. In addition, HFD-fed ovx mice maintained their CII-driven respiration, possibly due to increased MUFAs (palmitoleic and vaccenic acids), known for enhancing FA oxidation, i.e., energy consumption, by raising mitochondrial respiratory complexes and ATP production [42]. Whilst mitochondria are the primary site of β-oxidation for energy production in the form of ATP, peroxisomal β-oxidation is involved in biosynthetic pathways, with acetyl-CoA and H2O2 as end products [43]. Based on the observed lower mitochondrial CII-driven respiration in HFD-fed sham mice, as well as higher activity of Cat, a common peroxisomal ROS-scavenging enzyme, we propose that these females depend more on peroxisomal FA oxidation. This indicates that HFD aggravates their metabolic parameters by compromising mitochondrial function and activating ROS-induced upregulation of antioxidant enzymes.
Although ovx females on HFD have lower lipid accumulation, depletion of ovary hormones alone or of both ovary hormones and Sirt3 changes the expression of cyp2e1 and cyp4a14. Higher expression of either of these genes indicates that these mice are prone to NAFLD, since both are known to induce hepatosteatosis, with an increase in ROS and oxidative stress [18,44]. In addition, the expression of the ho-1 gene, which has been reported to attenuate oxidative stress and prevent nonalcoholic steatohepatitis (NASH) [45], is upregulated in WT and decreased in KO ovx mice. Thus, in ovx females, Sirt3 may protect from NASH by upregulating ho-1, while in the absence of Sirt3 the expression of ho-1 is abolished, making these mice more susceptible to the development of NASH caused by HFD. Of the two analyzed cyp genes, cyp4a14 appears more involved in the induction of NAFLD [46] and cyp2e1 in alcoholic hepatitis [47]. This may be why cyp4a14 has higher expression in all HFD groups, and the highest in KO ovx groups in either SFD or HFD conditions. Overall, the expression of these genes indicates that, despite the lower lipid accumulation in ovx females, which may be due to differential dependence on peroxisomal and mitochondrial β-oxidation of FAs, mice lacking ovary hormones, especially in combination with Sirt3 depletion, are more prone to NAFLD.
A limitation of the study is that parameters such as the Scd-1 desaturation index, which are derived from the ratio of particular FAs, may not be fully reliable indicators of Scd-1 activity in HFD conditions, because their levels depend strongly on the fat composition of the diet. Several authors have stated that lipogenesis is reduced in mouse models of HFD feeding (reviewed in [48]). In addition, this study included only young adult females, and ovx might not have such adverse effects as it would have in older females. Further studies in senescent mice are needed to compare parameters involved in lipid metabolism, mitochondrial function, and the antioxidant system, which we plan to conduct in the near future.
Despite these limitations, this research builds on our previous studies and confirms our hypothesis that protection against the harmful effects of HFD in female mice is attributable to the combined effect of female sex hormones and Sirt3. With this, we add to the knowledge on the prevention of metabolic dysfunction, thus contributing to preclinical research and supporting future studies for the development of sex-related therapeutic agents for metabolic syndrome and associated diseases.
Animal Model and Experimental Design
129S1/SvImJ WT (Stock No: 002448) and Sirt3 KO (Stock No: 012755, Jackson Laboratory, Bar Harbor, ME, USA) female mice were housed in standard conditions (three females per cage, 22 °C, 50-70% humidity, 12 h light/12 h darkness cycle). Ovariectomy (ovx) and sham surgery were performed at 7 weeks of age under ketamine/xylazine anesthesia (Ketamidor 10%, Richter pharma Ag, Wels, Austria; Xylazine 2%, Alfasan International, Woerden, Netherlands). Since low levels of E2 are normally detected in ovariectomized females due to other endogenous E2 sources (reviewed in [49]), plasma E2 levels were not used as an indicator of the efficiency of ovx. Instead, the success of ovx was checked by analyzing vaginal smears during five consecutive days after the surgery [50]. After recovery, mice were placed on either a standard fat diet (SFD, 11.4% fat, 62.8% carbohydrates, 25.8% proteins; Mucedola, Settimo Milanese, Italy) or a high fat diet (HFD, 58% fat, 24% carbohydrates, 18% proteins; Mucedola, Settimo Milanese, Italy) for 10 weeks. Body weight was measured once a week, as well as the glucose level (glucometer StatStrip Xpress-I, Nova Biomedical, GmbH, Mörfelden-Walldorf, Germany) after 6 h of fasting, in a blood drop from the tail vein. After 10 weeks of feeding, mice were sacrificed and the liver was used either fresh or was stored in liquid nitrogen or at −80 °C, depending on the analysis. Animal experiments were done within the project funded by the Croatian Science Foundation, project ID: IP-014-09-4533, approved on 01/09/2015. All procedures were approved by the Ministry of Agriculture of Croatia (No: UP/I-322-01/15-01/25 525-10/0255-15-2 from 20th July 2015) and carried out following the EU Directive 2010/63/EU-associated guidelines.
Histology and Oil Red O Staining
A histological analysis of samples taken from the right liver lobe for all experimental groups was performed as described previously [13]. Fat vacuoles in hepatocytes of frozen sections were visualized with Oil Red O dye (Sigma Aldrich, St. Louis, MO, USA) according to the previous protocol [13], with the following modifications. Oil Red O dye was prepared in isopropanol (0.5% Oil Red O solution). Sections (8 µm) of tissues embedded in Optimal Cutting Temperature medium (O.C.T 4583, Sakura Finetek, Torrance, CA, USA) were air-dried for 1 h, fixed in ice-cold 10% formalin for 5 min, washed with dH2O, and conditioned for staining by briefly dipping the slides in 60% isopropanol. The tissue sections were stained with Oil Red O dye in the dark at room temperature for 15 min, washed by rinsing in 60% isopropanol, and incubated for 5 min in dH2O, followed by staining with Mayer's hematoxylin (Dako, Histological staining reagent S3309, Santa Clara, CA, USA) for 1 min and washing with tap water and dH2O; sections were finally mounted in aqueous mounting medium (Dako Faramount Aqueous Mounting Medium S3025, Agilent Technologies, Santa Clara, CA, USA). Analysis of the stained liver sections was done using an Olympus BX51 microscope (Tokyo, Japan) with associated analysis software.
Total Lipid Extraction and GC Lipid Analysis
Liver samples were snap-frozen and stored at −80 °C until analysis. Total lipids were extracted from liver tissue according to a modified Folch procedure as described previously [13,51]. The lipid extract was treated with 0.5 M KOH/MeOH for 20 min at room temperature, and the corresponding FA methyl esters (FAMEs) were formed and analyzed by gas chromatography (GC). GC analyses of total FAs were performed with a Varian 450-GC equipped with a flame ionization detector (Varian Medical Systems, Houten, Netherlands). A Stabilwax column (crossbond carbowax polyethylene glycol, 60 m × 0.25 mm) was used as the stationary phase at a programmed temperature with helium as the carrier gas. The heating was carried out at a temperature of 150 °C for 1 min followed by an increase of 1 °C/min up to 250 °C. Methyl esters were identified by comparison with the retention times of commercially available standard mixtures (Marine oil FAME mix, Restek Corporation, Bellefonte, PA, USA).
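As an illustration of the data reduction behind the reported FA classes, a minimal sketch is given below. The peak areas and the simplified FAME-to-class map are hypothetical; in the study itself, peaks were identified from the retention times of the standard mixture.

```python
# Minimal sketch: reduce identified GC-FID peak areas to the FA classes
# reported here (SFA, MUFA, PUFA as % of total identified FAMEs).
# Peak areas and the class map are hypothetical illustrations.

FA_CLASS = {
    "C16:0": "SFA", "C18:0": "SFA",
    "C16:1": "MUFA", "C18:1n9": "MUFA", "C18:1n7": "MUFA",
    "C18:2n6": "PUFA", "C20:4n6": "PUFA", "C22:6n3": "PUFA",
}

def fa_class_percentages(peak_areas: dict) -> dict:
    """Return the % of total FAME area falling in each FA class."""
    total = sum(peak_areas.values())
    out = {"SFA": 0.0, "MUFA": 0.0, "PUFA": 0.0}
    for fame, area in peak_areas.items():
        out[FA_CLASS[fame]] += 100.0 * area / total
    return out

sample = {"C16:0": 520, "C18:0": 210, "C16:1": 60, "C18:1n9": 410,
          "C18:1n7": 55, "C18:2n6": 380, "C20:4n6": 150, "C22:6n3": 90}
print(fa_class_percentages(sample))
```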
Lipid Hydroperoxide Analysis
Liver samples were snap-frozen and stored at −80 °C until analysis. Lipids were extracted according to the modified method previously described [52]. Briefly, 0.1 g of liver tissue was cut and homogenized in PBS. Lipid extraction started by adding 2.5 mL of CHCl3 to the homogenate, followed by vigorous shaking. The solution was washed with 0.75 mL of 0.034% MgCl2 and centrifuged (2 min, 3000× g), and the aqueous layer was drawn off by aspiration using a Pasteur pipette. The washing procedure was repeated with 1.25 mL of 2 M KCl/MeOH (4:1, v/v) to eliminate all proteins and non-lipid contaminants. The CHCl3 layer was finally washed with CHCl3/MeOH (2:1, v/v) and centrifuged (5 min, 3000× g). The organic layer containing lipids was carefully transferred to a glass tube and the solvent was removed on a rotary evaporator. After weighing, the lipids were stored at −20 °C until the analysis of lipid hydroperoxides (LOOH). A spectrophotometric ferric thiocyanate assay was used for the determination of LOOH concentration. The analyzed samples were prepared by diluting with a deaerated mixture of CH2Cl2/MeOH (2:1, v/v). The concentrations of LOOH were calculated using the molar absorptivity of the [FeNCS]2+ complex formed per mol of LOOH, 58,440 dm3 mol−1 cm−1, at 500 nm [53].
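The final concentration step is a Beer-Lambert calculation with the stated molar absorptivity. A minimal sketch, assuming a 1 cm path length and hypothetical assay values (neither is given in the text), is:

```python
# Minimal sketch of the ferric thiocyanate LOOH quantification above:
# Beer-Lambert with the molar absorptivity of [FeNCS]2+ formed per mol
# LOOH (58,440 dm3 mol-1 cm-1 at 500 nm, as stated in the text).
# The 1 cm cuvette path length and sample values are assumptions.

EPSILON = 58440.0   # dm3 mol-1 cm-1, from the text
PATH_CM = 1.0       # assumed cuvette path length

def looh_concentration(a500: float) -> float:
    """Return LOOH concentration (mol/dm3) in the measured solution."""
    return a500 / (EPSILON * PATH_CM)

def looh_per_g_lipid(a500: float, assay_volume_dm3: float,
                     lipid_mass_g: float) -> float:
    """Return nmol LOOH per g of extracted lipid."""
    mol = looh_concentration(a500) * assay_volume_dm3
    return mol * 1e9 / lipid_mass_g

# Example: A500 = 0.235 in a 0.003 dm3 (3 mL) assay from 0.05 g lipid.
print(f"{looh_per_g_lipid(0.235, 0.003, 0.05):.0f} nmol/g lipid")
```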
RNA Isolation and Quantitative Real-Time PCR Analysis
Liver samples were snap-frozen and stored in liquid nitrogen until analysis. Total RNA from liver samples was isolated using TRIzol reagent (Invitrogen, Waltham, MA, USA). RNA was treated with DNAse (TURBO DNA-free Kit, Thermo Fisher Scientific, Waltham, MA, USA), followed by reverse transcription using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific, Waltham, MA, USA). For real-time PCR analysis, an ABI 7300 sequence detection system (Foster City, CA, USA) was used. To quantify the relative mRNA expression of cyp2e1, cyp4a14, pparα, pgc1-α, and ho-1 (Supplementary Table S1), the comparative CT (ΔΔCT) method according to the TaqMan® Gene Expression Assays Protocol (Applied Biosystems, Foster City, CA, USA) was used. The data on the graphs are shown as the fold change in gene expression, normalized to the endogenous reference gene (β-actin) and relative to SFD-fed sham WT females.
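The comparative CT quantification reduces to a simple fold-change formula. A minimal sketch with hypothetical CT values is:

```python
# Minimal sketch of the comparative CT (delta-delta CT) quantification used
# here: target CT values are normalized to beta-actin and expressed relative
# to the SFD-fed sham WT calibrator group. All CT values are hypothetical.

def fold_change(ct_target: float, ct_actin: float,
                calib_ct_target: float, calib_ct_actin: float) -> float:
    """Return relative expression as 2^-(ddCT)."""
    d_ct_sample = ct_target - ct_actin              # normalize to reference
    d_ct_calib = calib_ct_target - calib_ct_actin   # calibrator dCT
    dd_ct = d_ct_sample - d_ct_calib
    return 2.0 ** (-dd_ct)

# Example: pparalpha in an ovx WT SFD sample vs the sham WT SFD calibrator.
print(f"fold change = {fold_change(24.1, 18.0, 25.6, 18.2):.2f}")
```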
Protein Isolation and Western Blot Analysis
Liver samples were snap-frozen and stored in liquid nitrogen until analysis. Liver proteins were prepared in RIPA buffer (supplemented with cOmplete™, EDTA-free Protease Inhibitor Cocktail tablets (Roche, Basel, Switzerland)) using an ice-jacketed Potter-Elvehjem homogenizer (1300 rpm; Thomas Scientific, Swedesboro, NJ, USA) according to our standard protocol [13]. Proteins (15 µg/µL) were resolved by SDS-PAGE and transferred onto a PVDF membrane (Roche, Basel, Switzerland). Membranes were blocked and incubated with primary antibodies (Supplementary Table S2) overnight at 4 °C. For chemiluminescence detection, an appropriate horseradish peroxidase (HRP)-conjugated secondary antibody was used. AmidoBlack (Sigma Aldrich, St. Louis, MO, USA) was used for total protein normalization. The Alliance 4.7 Imaging System (UVITEC, Cambridge, UK) was used for the detection of immunoblots using an enhanced chemiluminescence kit (Thermo Fischer Scientific, Waltham, MA, USA).
Analysis of Antioxidative Enzyme Activities
Liver samples were snap-frozen and stored in liquid nitrogen until analysis. Antioxidative enzyme activities were analyzed in liver homogenates prepared in PBS supplemented with cOmplete™, EDTA-free Protease Inhibitor Cocktail tablets (Roche, Basel, Switzerland) using an ice-jacketed Potter-Elvehjem homogenizer (1300 rpm; Thomas Scientific, Swedesboro, NJ, USA). Superoxide dismutase (Sod) activities were determined using a Ransod kit (Randox Laboratories, Crumlin, UK) according to the manufacturer's recommendations. The catalase (Cat) activity assay was done as previously described [54], by measuring the change in absorbance (at 240 nm) in the reaction mixture (10 mM H2O2 and 50 mM PBS (pH 7.0)) during the interval of 30 s following sample addition.
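A minimal sketch of the Cat activity calculation implied by this assay follows. The H2O2 molar absorptivity at 240 nm and the assay volume and protein values are assumptions for illustration, not values given in the text:

```python
# Minimal sketch: catalase activity from the decrease in A240 of H2O2
# over 30 s after sample addition. The molar absorptivity of H2O2 at
# 240 nm (~43.6 M-1 cm-1, a commonly used value) is an assumption here.

EPS_H2O2_240 = 43.6  # M-1 cm-1, assumed

def catalase_activity(delta_a240: float, interval_s: float,
                      assay_volume_ml: float, protein_mg: float) -> float:
    """Return Cat activity as umol H2O2 decomposed per min per mg protein."""
    d_conc_m = delta_a240 / EPS_H2O2_240           # M decomposed in interval
    umol = d_conc_m * 1e6 * assay_volume_ml / 1000.0
    return umol * (60.0 / interval_s) / protein_mg

# Example: dA240 = 0.090 over 30 s, 1 mL assay, 0.05 mg homogenate protein.
print(f"{catalase_activity(0.090, 30.0, 1.0, 0.05):.1f} umol/min/mg")
```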
Mitochondria Isolation and Oxygen Consumption
Mice liver mitochondria were isolated from fresh liver by differential centrifugation as described previously [13]. Isolated mitochondria were kept in the isolation buffer (250 mM sucrose, 2 mM EGTA, 0.5% fatty acid-free BSA, 20 mM Tris-HCl, pH 7.4) until the experiment on the Clark-type electrode (Oxygraph, Hansatech Instruments Ltd., Pentney, UK) in an airtight 1.5 mL chamber at 35 °C. Mitochondria (800 µg protein) were resuspended in 500 µL respiration buffer (200 mM sucrose, 20 mM Tris-HCl, 50 mM KCl, 1 mM MgCl2·6H2O, 5 mM KH2PO4, pH 7.0) for the determination of oxygen consumption. Complex I assessment samples were incubated with 2.5 mM glutamate and 1.25 mM malate. Complex II assessment samples were incubated with 2 µM rotenone and 10 mM succinate. Mitochondrial respiration was accelerated by the addition of ADP (2 mM final concentration) for state 3 respiration measurements. Then, ATP synthesis was terminated by adding oligomycin (6.25 nM final concentration) to achieve the state 4 rate. To inhibit mitochondrial respiration, 2 µM antimycin A was used. Oxygen consumption is calculated in nmol/min/mg protein.
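The rate calculation from the electrode trace can be sketched as follows; the trace values, segment boundaries, and effective volume are hypothetical illustrations:

```python
# Minimal sketch: state 3/state 4 respiration (nmol O2/min/mg protein)
# from a linear segment of a Clark-electrode oxygen trace.
# All numbers below are hypothetical, not study data.

def respiration_rate(o2_start_nmol_ml: float, o2_end_nmol_ml: float,
                     minutes: float, volume_ml: float,
                     protein_mg: float) -> float:
    """Return O2 consumption (nmol/min/mg protein) over a linear segment."""
    consumed = (o2_start_nmol_ml - o2_end_nmol_ml) * volume_ml
    return consumed / minutes / protein_mg

# Example: ADP-stimulated (state 3) and post-oligomycin (state 4) segments
# with 0.8 mg mitochondrial protein in an assumed 0.5 mL suspension.
state3 = respiration_rate(380.0, 290.0, 1.0, 0.5, 0.8)
state4 = respiration_rate(290.0, 265.0, 1.0, 0.5, 0.8)
print(f"state 3 = {state3:.0f}, state 4 = {state4:.0f} nmol/min/mg")
```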
Statistical Analysis
For the statistical analysis of data, SPSS for Windows (v.17.0, IBM, Armonk, NY, USA) was used. A Shapiro-Wilk test was used before all analyses to test the samples for normality of distribution. Since all data followed a normal distribution, parametric tests for multiple comparisons were performed: an unpaired Student's t-test for comparisons between SFD and HFD, and a two-way ANOVA for the interaction effect of Sirt3 and ovx within each diet. If a significant interaction was observed, all pairwise comparisons were made between groups, using Tukey's post-hoc test with Bonferroni's correction. Significance was set at p < 0.05. On graphical displays, the indicator of the differences between SFD and HFD was marked as x; the indicator of differences between WT and KO (the effect of Sirt3) was marked as a letter (a, b, etc.); the indicator of differences between sham and ovx (the effect of ovx) was marked as *.
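A minimal sketch of the within-diet two-way ANOVA in Python (statsmodels), using a hypothetical data frame, is shown below; Tukey's post-hoc comparisons would follow only where the interaction term is significant:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Minimal sketch of the two-way ANOVA (Sirt3 x ovx) run within one diet
# group, as described above. The data frame is a hypothetical illustration.
df = pd.DataFrame({
    "genotype": ["WT", "WT", "WT", "WT", "KO", "KO", "KO", "KO"],
    "surgery":  ["sham", "sham", "ovx", "ovx", "sham", "sham", "ovx", "ovx"],
    "weight_gain": [4.1, 4.4, 5.0, 5.3, 3.2, 3.5, 2.6, 2.4],
})

# Fit the interaction model and print the type II ANOVA table.
model = ols("weight_gain ~ C(genotype) * C(surgery)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```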
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijms22084277/s1, Figure S1: Observations of uterus deterioration and vaginal smears of control (sham) and ovariectomized (ovx) mice. Figure S2: Graphical display of hepatic SFA content in sham and ovx Sirt3 WT and KO mice after 10 weeks of feeding with SFD or HFD. Figure S3: Graphical display of hepatic MUFA content in sham and ovx Sirt3 WT and KO mice after 10 weeks of feeding with SFD or HFD. Figure S4: Graphical display of hepatic PUFA content in sham and ovx Sirt3 WT and KO mice after 10 weeks of feeding with SFD or HFD. Table S1: Assays (TaqMan®, Applied Biosystems, UK) used for the real-time quantitative PCR. Table S2: Antibodies used in this study for the Western blot analyses.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Knowledge and Acceptance Level of Vegetable Farmers on Organic Farming and Biological Control in Kampar, Perak
Organic farming practices and biological control (biocontrol) are relatively less adopted in agriculture compared to conventional agricultural practices globally. In recent years, they have become more common, but in Malaysia they are still disfavored; generally, not many Malaysian farmers practice organic farming or adopt biocontrol. This study aims to determine the level of knowledge and acceptance of vegetable farmers regarding organic farming practices and biocontrol in the Kampar district. Fifty farmers were selected using non-probability sampling, and face-to-face and telephone interviews were conducted to collect data with the aid of a questionnaire. The respondents had a good level of knowledge (mean score of 4.00) and a neutral perception of the economic benefits of organic farming (mean score of 2.65). They had a moderate level of knowledge of biocontrol (mean score of 3.24). The respondents' acceptance levels for organic farming practices (mean score of 2.65) and biocontrol (mean score of 3.13) were neutral, mainly due to the low local demand for organic vegetables and the low confidence in the effectiveness of biocontrol. The respondents possess moderate knowledge of organic farming and biocontrol, but conventional farming was still preferred, and the acceptance level for these practices remained neutral. Participatory programs such as farmer field schools can be introduced to increase the adoption of these practices.
I. INTRODUCTION
Before the advent of chemical fertilisers, organic farming had been practised since 13,000 BP, when humankind first domesticated and cultivated wild plants (Balter, 2007; Behera et al., 2011; Tomaš-Simin & Trbić, 2016). In the 1960s, the Green Revolution tried to address increasing global food demand using improved crop varieties and expanded use of chemical fertilisers and pesticides. It was a success in many developing countries (Andersen & Hazell, 1985; Pingali, 2012). This farming practice is known as the conventional farming method, where farmers focus on high-yield production while neglecting the possible health and environmental hazards (Chausali & Saxena, 2021).
Today, the persistent and indiscriminate use of mineral fertilisers and chemical pesticides in the agriculture sector has led to serious unintended consequences for the environment (e.g., loss of biodiversity and land degradation) and for socio-economic conditions (e.g., poorly developed input, credit, and output markets for small farmers, and policies that discriminate against small farmers' ability to apply information and resources effectively) (Pingali, 2012; Pisani, 2006; Kumar, 2017).

Increasing environmental awareness has led to a sustainable farming practice better known as organic farming, which focuses on sustainable crop production through conserving a healthy agroecosystem (Chausali & Saxena, 2021). Organic farming has been proven to be more sustainable than conventional farming, as it better preserves biodiversity and fertile soil (Ghabbour et al., 2017; Katayama et al., 2019; Wintermantel et al., 2019; Vellenga et al., 2018). There was a rapid global rise in organic farming in the late twentieth century, driven by increasing demand from consumers, especially the younger generation, higher net profit per hectare of farming area, and increased societal pressure for environmental protection, which has also led to more research on these farming methods (Siegner, 2017; Tanrivermis, 2006; Watson et al., 2006). Furthermore, some developed countries, such as Sweden and the US, have made efforts to encourage the conversion of conventional farming to organic farming by incentivising farmers through subsidies (Lohr & Salomonsson, 2000). The number of organic producers worldwide increased by more than 55% compared to the last decade, while organic farmland increased 2.9% from 2017 to 2018, reaching a total of 71.5 million hectares (Willer et al., 2020). Concomitantly, organic farming has encouraged non-chemical pest control through the adoption of biocontrol. Biocontrol relies on natural enemies, such as predators, parasitoids, and beneficial microbes, to manage and reduce pest damage. It is divided into three approaches: conservation, classical, and augmentative. Conservation biocontrol focuses on conserving naturally occurring enemies of the pest by improving the agroecosystem or farming practices. Classical biocontrol involves the introduction of exotic natural enemies of certain pests, usually for the control of invasive pests. Lastly, augmentative biocontrol involves the periodic release of natural enemies that have low populations in nature to suppress a pest and keep it under control (Sanda & Sunusi, 2014). This is an effective and environmentally friendly approach (Holmes et al., 2016). According to Lenteren (2012), biocontrol will play a key role in modern pest management because it is more sustainable and environmentally friendly. The adoption of biocontrol has grown rapidly across the globe due to the low investment cost and the increasing number of organic farmers (Eggar, 2020; El-Shafie, 2019). In Southeast Asia, countries such as Cambodia and Thailand have used biocontrol effectively for pest control. Examples include the successful management of the invasive cassava mealybug, Phenacoccus manihoti, using the parasitoid Anagyrus lopezi (Wyckhuys et al., 2018), and the use of the egg parasitoid Trichogramma spp.
in the management of multiple key Lepidoptera pests (such as Spodoptera spp.) in maize and paddy (Babendreier et al., 2019).
Unfortunately, organic farming in Malaysia is catching up more slowly than in other Asian nations, as Malaysia has far fewer certified organic producers (Willer et al., 2020). Based on the report presented by Iskandar (2018), the total number of farms with myOrganic certification in Malaysia was 211, only 54 of which were vegetable farms. Furthermore, biocontrol is mainly used in large-scale plantations, such as the management of rhinoceros beetles in oil palm plantations using the entomopathogenic fungus Metarhizium spp. and Oryctes nudivirus (OrNV) (Kamarudin et al., 2019), the control of rodent populations in plantations and paddy fields using barn owls (Wood & Fee, 2003), and the management of Bakanae disease in rice using Trichoderma species (Wan et al., 2015). In the case of Malaysian vegetable farming, the only well-known success story is the control of the diamondback moth, Plutella xylostella, by the parasitoids Diadegma semiclausum and Diadromus collaris in highland cabbages (Sarfraz et al., 2005).
Currently, there is limited documentation regarding the knowledge and acceptance of vegetable farmers concerning organic farming and biocontrol in Malaysia. Therefore, the objective of this study is to examine the level of knowledge and acceptance of vegetable farmers in the Kampar district towards organic farming and biocontrol. It is important to have this documented in order to communicate and promote these practices to them more effectively.
Thirty vegetable farmers participated in the pilot test, as a minimum of 12 respondents was recommended (Lancaster et al., 2002). The study was conducted in the Kampar district, Perak (Chai, 2020). The location was within the vicinity of the university, which allowed accessibility during the movement control order (MCO) restrictions of the COVID-19 pandemic.
C. The Study
In this study, 50 individual vegetable farmers in the Kampar district were selected. Non-probability sampling was used because it focuses on similar traits or characteristics shared among samples (Etikan et al., 2016). Data collection was conducted from August 2020 to October 2020. The survey was carried out face-to-face and through telephone interviews, and data were collected with the aid of a structured questionnaire. Face-to-face interviews were the main mode of data collection, as they allowed extra information to be gained through verbal and non-verbal communication (Opdenakker, 2006). Non-verbal communication such as body language, facial expression, and attitude are social cues observable during face-to-face interviews. Telephone interviews were used when in-person meetings were restricted, for example by accessibility to the farm, the time availability of the farmers for a face-to-face interview, and travel restrictions due to the pandemic (Glogowska et al., 2011; Thulasingam & Cheriyath, 2008).
A. Demographic Information
The demographic information of the respondents collected in this study is summarized in this section. Farming in Malaysia is widely regarded as an unattractive career for youth (Shaharudin & Rahim, 2020). Furthermore, it is also perceived by the majority of society as a labour-intensive career with no prospect of social mobility, reserved for the uneducated and poor (Dising & Puad, 2018; White, 2012).
This could be the main factor driving youth away from agriculture, as the effort needed to achieve lucrative profits is not considered proportional. Most of the respondents were farmers in Malim Nawar (37.74%), followed by Gopeng and Kota Bahru. The registered farmers in the Kampar district who were members of the vegetable association were primarily farmers from Malim Nawar (Chai, 2020).
In terms of land status, 69% of the respondents leased their land from the government, and the majority had a farm larger than 3 hectares. According to Chai (2020) and Chong (2020), the average rate for leasing vegetable cropland in Malim Nawar, Gopeng, Kota Bahru, and Jeram was RM105 per hectare per month, while land for oil palm plantations was rated at RM800 and above, depending on the number of palms planted. The affordable leases in the area are probably a factor in the willingness of the respondents to expand their farmland. Ahmad (2020) reported that the Perak State Government also strongly encourages farmers to register and use government lands gazetted for agricultural purposes.
Furthermore, approximately 34% of the respondents had less than 10 years of farming experience, followed by 21-30 years (28%).
Many farmers begin farming at a later stage of life or after retirement to support their retirement living, and young people who venture into agriculture often do so after suffering unemployment in other sectors (Abdullah & Sulaiman, 2013; Soon, 2017). Others inherit the family profession, where a child takes over the farm of his aging parents. Most of the respondents (60%) had farms larger than 3.01 hectares, followed by 11 respondents (22%) with farms between 1.01 and 2.00 hectares. Six respondents (12%) were farming 2.01 to 3.00 hectares of land, whereas only 3 respondents had farms smaller than 1.00 hectare.
The most cultivated crops among the respondents were sweet potato, Ipomoea batatas (21%), yam bean, Pachyrrhizus erosus (20%), and maize, Zea mays (22%) (Table 1). These are also the top three major crops in the Kampar district (Department of Agriculture, 2020). The land in this district mostly comprises sandy and sandy loam soils, which are suitable for these crops (Chai, 2020; Delp, 2018; Tong, 2017; Nedunchezhiyan et al., 2012). Such conditions had created a good environment and enabled farmers to achieve higher harvests and higher income (Corales et al., 2004).
About half of the respondents understood that the application of synthetic pesticides is prohibited in organic farming (mean score of 3.84). Another regulation in organic farming is the elimination of mineral inputs and chemical pesticides (Trewavas, 2001), which was understood by half of the respondents. The other half were either unsure about this or did not know of it.
This reflects the fact that there are still farmers who are unclear about organic farming practices. Some respondents considered bio-pesticides a kind of chemical input, even though most bio-pesticides are derived from organic sources, including beneficial microbes, namely entomophagous fungi such as Beauveria bassiana, Verticillium lecanii, and Metarhizium anisopliae (Milner, 1997). Organic farming has also been linked to broader environmental benefits (Meemken & Qaim, 2018). In addition, soil fertility and biodiversity can be improved or restored through organic farming practices due to the lower dependence on chemical inputs (Pimentel et al., 2005). According to Novara et al. (2019), organic farming practices contributed to healthier soil by increasing soil organic matter and lowering bulk density. The increases in soil organic matter and soil organic carbon were found to have a positive effect on crop yield. Healthy soil results in better nutrient uptake by housing diverse microbial communities that help the plant absorb available nutrients (Coyne & Mikkelsen, 2015). Eventually, such practices will lead to a yield comparable to conventional farming while preserving the environment.
Although the respondents were aware of the positive environmental impacts, they did not understand that these environmental benefits can translate into economic benefits.
Our findings show that they had only a neutral perception of the economic benefits (mean score of 2.65). Most of the economic-benefit aspects were perceived negatively by most of the respondents (Table 3). The majority of respondents (78%) see organic farming as more labour-intensive and time-consuming compared to conventional farming. More labour inputs and time are required to operate the farm, as it depends heavily on manual and mechanical control (Karyani et al., 2019). It is reported that the labour-intensiveness of organic farming is highly dependent on farm structure and system; therefore, a properly managed organic farm can achieve labour efficiency equivalent to conventional farms (Orsini et al., 2018). In addition, conventional farming is more capital-intensive per hectare than organic farming, due to the additional agrochemicals required (Yadav & Kumari, 2020). Some respondents had, however, seen local demand for organic vegetables decline, which led to their assumption that organic farming is not profitable (Chai, 2020). The Food and Agriculture Organisation (2007) has shown that organic farming can be more profitable than conventional farming with proper marketing strategies. The retail prices of organic products are typically higher than those of conventional products due to the premium charged for organic products (Seufert et al., 2017).
One of the farmers in this survey, Ramesh (2020), viewed organic vegetable farming as profitable, as it required less input compared to conventional farming, concurring with Yadav and Kumari (2020). The economic analysis of Loncaric et al. (2013) found that organic farming increases farm profitability while solving manure disposal problems.
Thus, this misconception about organic farming may explain its low popularity. The availability of organic farming inputs is not a cause for hesitation among the respondents, as they think these are easily available in their respective local communities. According to Chai (2020) and Chong (2020), organic inputs such as organic fertilisers and manure are easy to obtain, as each farming community has at least one supplier or agriculture shop. Furthermore, unprocessed manure can be easily acquired from the poultry industry at low prices.

Selected knowledge statements on organic farming were rated as follows (mean score ± SD; % disagree / neutral / agree):

Organic farming is a traditional practice that was used in ancient times: 4.12 ± 1.26 (14 / 6 / 80).
Organic farming practices can be practised on any farm: 4.42 ± 0.84 (4 / 10 / 86).
Organic farming practices will enhance soil fertility (e.g., more readily available nutrients and better nutrient retention) in the long run.
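For transparency, the item statistics reported throughout (mean ± SD and percentage agreement) can be computed as in the following minimal sketch; the responses and the 1-5 coding, with 1-2 counted as disagreement and 4-5 as agreement, are assumptions for illustration:

```python
# Minimal sketch of the Likert-item summary statistics used in this study.
# Responses are hypothetical, coded 1 (strongly disagree) to 5 (strongly
# agree); the disagree/neutral/agree grouping is an assumed convention.
from statistics import mean, stdev

def summarize(responses: list) -> str:
    n = len(responses)
    disagree = 100 * sum(r <= 2 for r in responses) / n
    neutral = 100 * sum(r == 3 for r in responses) / n
    agree = 100 * sum(r >= 4 for r in responses) / n
    return (f"{mean(responses):.2f} +/- {stdev(responses):.2f} "
            f"({disagree:.0f}% disagree, {neutral:.0f}% neutral, "
            f"{agree:.0f}% agree)")

item = [5, 4, 5, 3, 4, 2, 5, 4, 1, 5]  # ten hypothetical respondents
print(summarize(item))
```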
C. Knowledge of the Respondents on Biological Control
The respondents had a moderate level of knowledge about biocontrol with a mean score of 3.24 (Table 4). This partial understanding may be a factor in the hesitation of most of the respondents to adopt biocontrol. Only 44% of the respondents knew that biocontrol is not limited to organic farming. Baker et al. (2020) claimed that biocontrol can be adopted by both organic and conventional farming systems.
Biocontrol is a complementary element that, when integrated with a conventional farming system, is known as the integrated pest management (IPM) approach (Fountain & Wratten, 2013). It combines several farming practices, focusing on biocontrol, and uses chemical control as the last resort for pest suppression (Naranjo et al., 2015). A review by Samada and Tambunan (2020) explained that the development of bio-pesticides is aimed at replacing synthetic chemical pesticides, so as to produce safer food with less or no pesticide residue. This shows that biocontrol is not restricted to organic practices, as bio-pesticide application involves only beneficial microorganisms and biochemicals to suppress pests and diseases (Kumar & Singh, 2014).
Although many researchers know that biocontrol is divided into three approaches, this information is often not known by farmers, as 52% of the respondents still think that biocontrol relies solely on introducing biological agents.
Among the biocontrol approaches mentioned, only classical biocontrol requires the introduction of biological agents of an exotic origin to control invasive pests (Kenis et al., 2017; Lenteren, 2012; Badaluddin, 2020). Some farmers purchase Trichoderma inoculum and apply it directly on their farms to suppress soil pathogens (Chong, 2020). They usually apply it based only on advice from friends or sales representatives, without further understanding the microorganisms. Therefore, it is not surprising that most of the farmers in this survey were only familiar with classical biocontrol approaches. However, to achieve effective biocontrol, farmers do not need to introduce any of the control agents but simply conserve and provide a healthy agroecological system to allow it to happen naturally through conservation biocontrol (Graham et al., 2017). This further shows the limited understanding of the respondents about biocontrol.
Almost all respondents (88%) were aware that, unlike […]. The respondents unanimously agreed that the adoption of biocontrol required considerable knowledge. According to Kumar (2016), biocontrol is considered a complex form of pest management, and education in the execution of biocontrol is necessary for success. Farmers who decide to implement biocontrol need to fully comprehend and justify each decision during implementation (Barratt et al., 2017).
However, in practice, farmers can acquire necessary training through field education or farmer education, such as the farmer field school (FFS) approach (Ooi & Kenmore, 2005).
In the farmer field school approach, farmers were taught about biocontrol through the insect zoo approach. They were shown the predatory behaviour of natural enemies found in the field to help them grasp the principles of biocontrol (Pontius et al., 2002). Eventually, farmers would be able to differentiate "good" insects from "bad" insects. This is important because only 46% of the farmers believe that biocontrol agents protect their crops. Their doubt arises from having never observed the predatory interactions between insects and from making assumptions based on the proximity of an insect to the crop damage. One of the common misperceptions of farmers in this study is their generalisation of coccinellids (ladybird beetles) as pests, while in truth the majority of coccinellids are predators.
This indicates the inability of the respondents to identify insects in the field, making biocontrol impractical for them at present.
Most of the respondents (80%) realised that healthier vegetables can be produced with the aid of biocontrol. As biocontrol replaces synthetic chemical pesticides, there will be less or no chemical residue in crops (Kumar, 2016; Rebek et al., 2012). Despite the promise of healthier vegetables, most respondents (68%) believe that biocontrol is challenging to implement and less efficient in managing pests. Many farmers were found to hold several common key misconceptions about biocontrol, one of which is that biocontrol is unreliable and slow. This premise is founded on a minority of non-professionals who implement the approach without proper research (Lenteren, 2012). In contrast, the results of chemical control are more visible and immediate, and thus more convincing to farmers. However, chemical control leads to many negative externalities (such as pesticide resistance and environmental pollution) in the long run (Kumar, 2016; Pisani, 2006).
D. Level of Acceptance on Organic Farming Practices
The respondents had a neutral acceptance level (mean score of 2.65, Table 5) towards organic farming practices, and about half of them (48%) proposed the need for a support system to facilitate transitions to organic farming. The supporting resources include technical support, relevant training, seminars and workshops, accessible information through local agriculture departments and online, and financial assistance. More than half of the respondents (56%) would consider organic farming if there were higher market demand, but about two-thirds (66%) were not incentivised by the environmental benefits. Bouttes et al. (2018) found that farmers are willing to take short-term risks in adopting new practices if there is long-term consumer demand, rather than for environmental and health benefits. In addition, the most common challenges faced by farmers during the transition are also related to marketing strategies, such as logistics and facilities to market the produce and profitable pricing.
Negative peer pressure can also be a problem (Cranfield et al., 2009). Farmers are aware of the risks and technical challenges in implementing new and unfamiliar production practices, but they are willing to endure them if there are trusted consultants and experts to help (Bouttes et al., 2018).
This corroborates the results of this study, suggesting that most of the respondents were willing to adopt organic farming with the necessary support to fetch better market demand. Unfortunately, many Malaysian consumers remain resistant toward purchasing organic produce, probably due to the premium price tag (Meemken & Qaim, 2018; Seufert et al., 2017).
This leads most of the respondents (70%) to perceive the conversion to organic farming as highly risky even with more secure land titles, and most of them (58%) do not consider this conversion worthwhile due to limited local demand and a lack of marketing channels. Although global consumer demand for organic food is increasing, it remains spotty and scarce due to its recent introduction to the supply chain (Fagan, 2017). Consumers of organic food are typically high-income and willing to pay more for production methods that are environmentally friendly, safe for farmers, locally grown, synthetic pesticide-free and with better perceived flavour or nutritional value (Thompson, 2000). The social perception of organic food as more expensive due to low yield is unfounded but remains a concern for farmers attempting to attain financial security. Thus, risk-averse farmers would likely continue with their current practice (Mamuya, 2011). Clear policies on organic production could articulate farmers' eligibility for subsidies and incentives. Governments of developing countries possessing smaller organic market potential could help farmers facilitate the export of organic produce to countries with higher demand. Governments can assist farmers by helping them overcome technical challenges, enforcing certification to maintain standards and export value, providing necessary financial aid through subsidies and navigable land tenure policies, providing suitable platforms for farmers to market produce to suitable markets, raising public awareness of the health and environmental benefits of organic products, and subsidising retail prices of organic food to make it more affordable (Archana, 2013; Lockeretz, 2007; Scialabba, 2000; Thompson, 2000).
E. Level of Acceptance on Adopting Biological Control
The results in Table 6 revealed that the respondents do not reject the idea of biocontrol, but at this point they are still unsure about adopting this pest management approach (mean score of 3.13). Most of the respondents (60%) showed willingness to adopt biocontrol, but most of them (70%) do not want to take the risk of adopting it. If someone they knew used biocontrol, 86% of them would consider adopting it. According to Moser et al. (2008), factors that influence the adoption of biocontrol include positive publicity (word of mouth and advertising), personal hands-on experience and the promotion of these approaches by local research institutions, cooperatives or growers' associations. Farmers are more likely to adopt sustainable farming practices when they receive reviews from other farmers who have experienced the new practices (Dessart et al., 2019; Ghane et al., 2011). Unfortunately, these are not available to farmers in the Kampar district. Thus, the farmers' lack of confidence in biocontrol approaches is due to unfamiliarity, uncertainty about the level of control, higher confidence in chemical pesticides than in biocontrol agents, limited biocontrol companies and a lack of promotion by local research and agriculture institutions (Cullen et al., 2008; Moser et al., 2008). The lack of participation of farmers in the development of biocontrol methods has led to the failure to disseminate research findings to the farmers. There is a poor communication link between researchers and farming communities, which hinders the adoption of this approach (Noorhossein et al., 2010).
There is comparable efficiency between biocontrol and chemical control (Adly, 2015), but biocontrol in certain scenarios, especially open fields with higher abiotic pressure on biocontrol agents, may take a longer time to establish.
Most of the respondents (56%) cannot accept the longer duration needed to establish effective control, while 20% of them were unsure. This is understandable, as the risk of yield loss increases their financial insecurity (Cullen et al., 2008). Barratt et al. (2017) claimed that the implementation of biocontrol does not produce an immediate impact, so farmers perceive pesticides as a more reliable and financially guaranteed option. However, Sanda and Sunusi (2014) reported that the use of entomopathogenic nematodes, namely Steinernematidae and Heterorhabditidae, to suppress insect pest populations has a positive effect on crop yield. Similarly, conservation biocontrol is also found to reduce the cost of production and increase yield (Cullen et al., 2008). In a nutshell, biocontrol is effective for pest management in the production of pesticide-free crops and, at the same time, conserving the agroecosystem will help generate a comparable or even higher yield than conventional practices.
Although most of the respondents still prefer chemical control over biocontrol, 54% of the respondents agreed that the increase in agrochemical prices is an incentive to adopt biocontrol. Hoddle (2004) found that farmers would likely adopt biocontrol to reduce production costs, which include labour and agrochemical inputs. Rengam et al. (2018) inferred that prices and advice from licensed pesticide dealers influence farmers' choice of pesticide. In addition, pesticide resistance is becoming common with the increasing dependence on chemicals for pest control.
Farmers incur a higher cost by increasing the dosage or frequency of chemical pesticide applications to achieve equal efficacy. However, alternative approaches, such as bio-pesticides and conservation biocontrol, have recently been proposed to replace chemical pesticides due to their lower cost of implementation (Popp et al., 2013).
Generally, our findings suggest that the respondents do not adopt biocontrol mainly due to a lack of fundamental understanding. The lack of confidence in this approach is due to the lack of exposure and familiarity with biocontrol, leading to erroneous presumptions. In order to encourage adoption, the gaps between institutions and crop producers must be bridged through communication and interaction.
These interactions can take the form of participatory training, such as farmer field schools and seminars, which will help to increase awareness.
V. ACKNOWLEDGEMENT
The authors appreciate the support of the local farmers in the Kampar district for their participation in the survey.
Administrative Control and Evaluation
When all the administrative functions have run their course, the logical thing is that they must be controlled and evaluated in order to ensure that all operations were being undertaken according to plan and that policy results have been achieved as intended. This is the area of administrative control and evaluation.
Introduction to the Study of Control and Evaluation
Those who are skilled public administration scientists will realize that after planning has been done, structure has been provided to facilitate the achievement of the objectives formulated during the planning stage, the leading function has been performed, the organizational arrangements made, and all resources allocated and used to implement aims and policies, the predetermined objectives have not necessarily been attained. Poor accomplishment of any of the administrative functions increases the necessity of making appropriate adjustments, either in the means or resources used to attain the policy objectives or in the objectives themselves (S. B. M. Marume, 1988). The processes which enable the monitoring of the activities and the assessing of the results are called control and evaluation respectively.
According to Professor S. P. Robbins (1980: 376), the concepts of control and evaluation are defined as the final links in the functional chain of public administration. It is necessary to check activities in the public administration process; one of the interrelated processes is the process of control, which is to check activities to ensure that they are progressing as planned and, where there are significant deviations in undertaking these activities, to take the necessary and appropriate corrective action (Marume, 1988, and Professor J. J. N. Cloete, 1985).
Learning objectives
To be able to:
Define and explain key terms and concepts:
Control; Measuring; Personal observation; Comparing; Public accountability; Written reports; Correcting.
Brief Critical Review of the Definitions of Control and Evaluation
On the basis of a broader knowledge base, wider experience of public administration, and intimate knowledge of the extremely rational and practical works of world-renowned public administration luminaries such as Professor J. J. N. Cloete (1967, 1983, 1988 and 2015) and a few other elegant public administration scientists, the administrative process could quite successfully be made operational in any institutional frame of reference. The frame of reference could be an international institution such as the ILO, UNESCO or IMF/World Bank; a central government department; or a state university. The six main administrative categories listed by many of these scientists comprise policy, organization, finance, personnel, procedures and control (POFPPC). According to their line of reasoning, the concept of control is sufficiently comprehensive to embrace also the concept of evaluation. Therefore, the concept of control is re-defined to mean the process of: a. systematically monitoring (checking) the activities of public officials in implementing policy decisions, programmes and plans in order to determine whether individual units and the institution itself are obtaining and utilizing their resources efficiently to accomplish their objectives and, where this is not being achieved, taking corrective action; and b. eventually determining/assessing the actual practical results attained by a specific policy designed to accomplish some valued goals or objectives (S. B. M. Marume, 1988 and 2015).
For depth and analytical purposes, control measures and evaluation mechanisms are treated separately in the following paragraphs of this article.
Control Measures
Again clarity and unambiguity are needed in order to understand the concept of control measures much better.
Systems Analysis
The continuous process of reviewing system objectives, designing alternative methods to realize them, and weighing the effectiveness and costs of alternatives, mainly in an economic sense. Each alternative is regarded as a full programme of subsystems, known as homeostasis.
Although it is based on the principles of scientific research, systems analysis is an extension of, rather than a deviation from, the human relations movement.
Source: S. B. M. Marume: PhD (Public Administration) thesis, 1988
What is meant by the term 'control' in public administration? Marume (1988) states that in public administration at whatever governmental level, be it international, national, provincial (state) or local government level, it is important to recognize the fact that once the administrative functions have run their course, the practical policy results, that is, the outputs or outcomes, must be systematically evaluated in the light of the formulated and adopted policy and objectives. Specific administrative control measures are then required from time to time in order to ascertain whether or not the desired goal is being kept in sight. Control, then, is a process of monitoring that all activities (operations), at all times and at all levels of the public institution, be it a government department, a state university, or a university faculty of commerce and law or department of education and distance learning, are carried out in compliance with the plans adopted, the orders given, the instructions issued, and the principles laid down and, at the same time, assessing practical policy results. The three formal controls, consisting of measurement, comparison and correction, are further described as follows:
a. Measuring (measurement)
Four methods used to measure performance are: personal observation; statistical reports; oral reports; and written reports.
The aim of control at any level of a public institution is always to ensure that the practical results of all operations conform as closely as possible to the established policies, stated goals, agreed programmes and defined objectives or targets. The three basic elements in the control process are that: (i) standards represent desired performance; (ii) actual results are compared against the set standards; and (iii) corrective action is taken in cases of deviation in order to ensure conformity.
b. Comparing (comparison)
This is the determination of the degree of difference between actual performance and the desired performance. The comparison step in the control process requires that the measured standard is known, that the actual performance has been measured, and that guidelines exist for determining the extent of allowable tolerances.
c. Correcting (correction)
This is the third and final step in the control process and refers to the action that will correct the deviation. It entails adjusting the actual performance, correcting the standard, or both. There are two types of corrective action: one is immediate and deals predominantly with symptoms; the other is basic and delves into causes.
Control Criteria
Four performance characteristics in an institution can be controlled, namely:
Quantity: measurable outputs.
Quality: difficult to measure, for example, service to the community, health care, and educational services to the community.
Cost: organizational inputs and outputs, whether human or physical, can be translated into monetary terms.
Time: a scarce resource; deadlines.
Aims of Control Measures
Control means the systematic monitoring of all the activities at all times and at all levels of the public authority through two sub-processes: a) checking and ensuring that all operations are being carried out in accordance with the policies stated and adopted, the objectives defined, the orders given and the instructions issued; and b) methodically assessing actual practical policy results (S. B. M. Marume, PhD (Public Administration) thesis, October 31, 1988).
Typologies of Administrative Control Measures
Administrative control in the public sector culminates in formal meetings of the political policy-making institutions, that is, conferences or legislatures which are open to the public and which form the climax of the process of public administration and, in fact, of the political life of the citizenry. In order to ensure that the executive authorities do in fact answer for their deeds during the sessions of the legislatures, it has become necessary in modern public administration to introduce means of detecting any wrongful actions that they might have taken. In the public sector, administrative control is made up of two main categories, namely, internal control and control by the legislatures. In effect, according to one of the most celebrated public administration scientists, Professor J. J. N. Cloete (1986: 180), control in the public sector consists of two components: internal control, which is exercised by the executive functionaries, and external control, that is, giving account in the formal meetings of the legislatures (parliaments, provincial and metropolitan councils, and local government councils).
Internal Control Measures
Internal control, which is exercised by the executive functionaries themselves, is part of the work activities of all political office-bearers and appointed public administrators in charge of executive institutions. In this context internal control is intended to mean, firstly, […]. Secondly, control is exercised in the institutional situation by the use of formal control measures which ensure that everything the functionaries do is in fact aimed at achieving the set objectives. Examples of aids for formal internal control measures are the budget, reports, inspections and investigations, auditing, procedural arrangements, organizational arrangements, and instructions setting out clearly the minimum standard and volume of work expected of the functionaries as they provide services to the communities, as well as the work programmes which have to be adhered to.
Thirdly, internal control is exercised in an informal manner by the influence which functionaries exercise over each other. Of special significance in this regard is the continuing supervision which supervisors exercise over their juniors, the examples they set for them and the administrative leadership they give them.
Closer attention is given to the formal and informal internal control measures as shown:
The budget
The budget is a carefully designed programme of the work which the executive institutions intend to undertake. Parliament approves, and control is conducted on the basis of, the budget, which is a comprehensive financial statement specifying the purposes for which the money is required.
Auditing
Auditing is one of the traditional administrative control measures which are always useful. Auditing is done after the transactions have taken place in order to determine the legal correctness of all financial transactions and to check whether the monies have been wisely used and for the intended purposes.
Inspections and investigations
Inspections and investigations in loco by a single functionary or a group are also well-known traditional administrative control measures in the normal public sector. Internal and external auditors at times make separate inspections and investigations of the financial performance of the executive institution (government department) and prepare and submit reports on their observations to the Secretary of the Ministry of Finance and the Comptroller and Auditor-General.
Procedural arrangements
Fixed work methods and procedures, which determine the manner and speed with which service is rendered, are laid down for the executive institutions in procedural codes dealing with aspects such as records keeping, mail handling, personnel issues and financial matters. As already shown, all these are internal procedural measures which are designed to facilitate the work operations of the executive institution.
Organizational arrangements
The political policy directives/laws/declarations are implemented only by executive institutions which are constituted in an orderly manner. The executive institutions are arranged in such a manner that some form of hierarchical structure of officers and officials is obtained. One officer reports to another in the same way that subordinate officials have to give account to their superiors.
Political control of administration
In his article "The Study of Administration", Woodrow Wilson (1887) remarked that "it is getting to be harder to run a constitution than to frame one", implying that when legislation has been enacted, it does not automatically follow that execution will be in terms of the legislator's intention. S. X. Hanekom and C. Thornhill (1983: 176) state that from this remark it could also be deduced that the framers of a constitution (including the legislator) should, when passing an act, provide measures to guarantee that the executive attains the objectives set by the legislature. They write: "The political representatives constituting the legislative body should therefore make provision for control, and exercise final control over executive actions".
The identification of an objective does not imply that it will be attained; several reasons may be cited why objectives are not attained […]. In Western public administration, public accountability is a characteristic control measure.
Although the legislature is the final controlling body in a democracy, it does not itself make detailed investigations of all executive actions. It has, however, developed measures to assist it in performing its controlling action effectively. Some of these measures are as follows:
1) The accounting officer (departmental secretary)
2) A state treasury
3) A public service commission
4) A government auditor
5) Ministers of state
6) The state budget
7) Parliamentary select committees
8) Annual reports
9) An ombudsman
Thus, one of the philosophical foundations of public administration at any governmental level is that the legislature, for instance a parliament, provincial council or town council, all of which are political institutions, has control over the sphere of work of the public officials.
Control over policy
Parliament and the cabinet give general direction to the operations of the executive institutions. Political direction comes from the respective cabinet ministry, with the permanent secretary as the administrative head.
Establishment of executive institutions
Parliament views institutions as facilitating and consequential rather than as causative forces or ends in themselves. Government seeks to develop concrete areas of activity and to identify their actual servicing requirements first and only then to create institutional structures. Therefore, it has started with a minimum of bureaucratic machinery, thus both allowing for and requiring close governmental consultation. The overriding objective is to ensure coherent and concrete implementation of decisions.
Control over personnel arrangements
In terms of the secretariat's position descriptions, appointment conditions, terms and conditions of employment, and any other relevant national civil service commission conditions of service mutually agreed upon, the SADCC Council of Ministers has the power to lay down conditions on how personnel are appointed, remunerated, promoted and dismissed.
Control over auditing
The general rule applies that the executive institutions may incur expenditure only on the activities of the approved programme of action. The Council of Ministers bears the final responsibility for the manner in which public funds are spent. At the same time, both the executing institutions and the recipient governments have auditors who ensure that the monies have been used to the best advantage in all government departments. Next we examine evaluation methodology.
Evaluation methodology
In examining the importance of the concept of evaluation, this is what Ilene Nagel Bernstein and Eleanor Bernert Sheldon, quoted by R. B. Smith (1983: 93), say:
There is no necessity for working social scientists to allow the political meaning of their work to be shaped by the accidents of its setting, or its use to be determined by the purposes of other men. It is quite within their powers to discuss its meanings and decide upon its uses as matters of their own policy.
Source: C. Wright Mills: The sociological imagination: New York, 1959.
Evaluative research deserves full recognition as a social science activity which will continue to expand. It provides excellent and ready-made opportunities to examine individuals, groups, and societies in the grip of major and minor forces for change. Its applications contribute not only to a science of social planning and a more rationally planned society but also to sociological and psychological theories of change (Charles R. Wright, 1968, p. 202).
Source: Charles R. Wright: "Evaluation research", in International Encyclopedia of the Social Sciences, New York: Free Press, 1968.
The 1970s, a decade of rapid-paced social change, was marked by the proliferation of large-scale social action programmes: planned social interventions designed to ameliorate or solve existing social problems. In society today, particularly in the United States, huge expenditures of time, personnel, and funds are allocated to persons and organisations attempting to find solutions to problems at the local, state and national levels. Perhaps nothing is more important to the success of social action programmes than that we know whether or not they work and what effect they have. It almost goes without saying, or should, that in order rationally, intelligently and sensibly to modify or terminate programmes that are not achieving their objectives, and to continue and expand those that are, some evidence is needed of their efficiency and effectiveness.
In recent years interest has mounted in employing the research techniques of social science in efforts to determine the effectiveness of social action programmes and the means by which best to allocate economic and personnel resources to them. This interest has resulted in a demand for evaluative research, defined as the use of the scientific method for the purpose of judging the worth of some activity.
Definition of Evaluation
In addition to looking at evaluation research from the viewpoint of its rationale and focus, that is, as a policy perspective, the analytical or measurement perspective may be addressed. The many typologies of evaluation tend to resolve themselves into impact, relative effectiveness, and project performance. The methods used to measure programme outcomes depend partly on the type of evaluation intended and partly on the available time, resources and expertise.
Impact evaluation seeks to measure the extent to which a programme has met its legislative objectives. Relative effectiveness evaluation addresses the range of programme strategies, techniques, and processes available for accomplishing the legislative objectives. Programme evaluation measures the performance of project operations.
Evaluation
According to Professor Robert B. Smith (1983: 94), evaluation means different things to different people. Terms like assessment, appraisal, and judgment are often used synonymously with evaluation. As a result there is no clear-cut understanding of the basic requirements of evaluation research. Generally speaking, the term covers a wide range, from examination of intake records, surveys, testimonials, anecdotal material, and so on, all the way through to complex experimental designs.
It includes highly subjective impressions as well as detailed mathematical and statistical analyses.
Distinction between evaluation and evaluative research
Professor Edward Suchman (1967) makes a distinction between evaluation and evaluative research, which we have followed in this discourse. He does this in an attempt to distinguish between: a) evaluation as a process of judging the worthwhileness of some activity, regardless of the method employed; and b) evaluative research as the specific use of the scientific method for the purpose of making an evaluation. Thus he separates evaluation as a goal from evaluative research as a particular means of attaining that goal.
Evaluation as a Process
Professor Edward Suchman (1967: 31-32) writes that the range of variation of the meaning of evaluation can be indicated by clearly defining evaluation as the determination (whether based on opinions, records, or subjective or objective data) of the results (whether desirable or undesirable; pleasant or unpleasant; transient or permanent; immediate or delayed) attained by some activity (whether a programme or part of a programme, a drug or a therapy, an ongoing or one-shot approach) designed to accomplish some valued goal or objective (whether ultimate, intermediate, or immediate; effort or performance; long- or short-range).
Simply stated, evaluation is the determination of the results attained by some activity designed to accomplish some goal or objective (Marume, 1988).
Internal evaluation research techniques
Evaluation research, programme evaluation and productivity improvement constitute an approach to the control of administration. The primary aims of evaluation are: i. to assess the actual practical results produced by a specific public policy; and ii. to determine what the possible or probable economy, costs and perceivable benefits of the available public policy alternatives could be.
Evaluation Methods
General means (methods) that have been used to evaluate programmes are: monitoring; financial management auditing; investigatory journalism; and public debating.
Evaluation Process
The evaluation process consists of: defining decision-makers' needs; designing and structuring; implementing; reporting; and dissemination.
Evaluation Purposes
Evaluation research is commissioned for purposes of: compliance; process improvement; theory testing; and knowledge-building.
Evaluative Research
The scientific method with its accompanying research techniques then provides the most promising means for determining the relationship of the stimulus to the objective in terms of measurable criteria.
Critical comment on Edward Suchman's conceptual distinction
Professor Edward Suchman's conceptual distinction between evaluation as a process and evaluative research as the specific use of the scientific method does not rule out the use of nonscientific methods for evaluation. He mentions that many evaluation questions in programme planning, development, and actual operation can be addressed and answered without research and that many others cannot be answered even with the best techniques. He cautions that evaluators must be aware of which tool or technique they are using and careful not to substitute a subjective appraisal for an evaluation requiring a scientific research approach.
Methodological viewpoint of evaluation
From a methodological point of view, the key elements of evaluation are as follows:
5.3.1 Specification of a planned programme of deliberate intervention;
5.3.2 Statement of an objective or goal that is considered desirable or has some positive value;
5.3.3 A method for determining the degree to which the planned programme has been implemented and has achieved its objectives; and
5.3.4 An assessment of any unanticipated consequences of the planned intervention.
Evaluation Research
Evaluation research asks about the kind of change that is desired, the means by which this change is to be brought about, the criteria according to which such change can be recognized, and the related results and effects. Obviously the emphasis is on social change; conceiving of evaluation studies as studies of social change is underscored by Professors Herbert Hyman and Charles Wright (1967: 741) in their definition of evaluative research as the procedures of fact-finding about the results of planned social action.
Evaluation process
The evaluation process itself can be diagrammed as in Figure 1 below:
Commencement of Evaluation Process
The evaluation process commences with examination of a set of desired outcomes, such as income redistribution, adequate housing for the poor, or community mental health. A programme of planned intervention is then proposed wherein implementation of certain activities is expected to produce the desired outcomes, for example providing adequate housing for the poor. In good evaluation studies the evaluator makes certain that the urban renewal plan is specified in sufficient detail so that all of the components of the programme plan are clearly explicated. Detailed specification of the programme and any modifications also serves as a set of criteria against which the degree of implementation can be measured.
Moreover, should the programme prove to be effective in producing the desired outcomes, a detailed account of what the programme entailed is available for others to copy or modify. Once the programme components have been specified, the next step is to provide a rationale for the expectation that implementation of the programme will produce the desired outcomes; such a rationale may be conceived of as the theoretical underpinning of the evaluation. Next, it is important to measure the degree to which the programme has been implemented and whether it has reached the target population. After measuring programme process, the evaluator should proceed to identify the specific goals of the programme as defined by the programme administrators and programme participants, and to specify the way in which changes on these goals will be measured. This done, the next step is to implement a research design that provides maximum validity of findings in terms of the measurement of programme impact.
End of evaluation process
Finally, the evaluator should interpret all of the findings and recommend a next step in the reform experiment process that brings attainment of the desired outcomes closer. Given the new plan or programme, the evaluation process begins all over again.
Evaluation Cycle and Recommencement
Theoretically, the evaluation cycle stops only when the desired outcomes have been attained at the desired level. In this way, according to Lee J. Cronbach (1980: 2), evaluation is also the handmaiden to gradualism, both conservative and committed to change.
Control Measures
Control is the process of monitoring activities to determine whether individual units and the institution itself are obtaining and utilizing their resources effectively and efficiently in order to attain their objectives, and where this is not being achieved, implementing corrective actions.
Four common sources of performance measurement are: personal observation, statistical reports, oral reports, and written reports. Some political controls are exercised by the legislature through the accounting officer, a state treasury, a public service commission, a state auditor, ministers of state, state budgets, parliamentary select committees, annual reports, and an ombudsman.
Evaluation Mechanisms
The aim of evaluation is to assess the results produced by a policy and what the costs and benefits of alternatives would be. Programmes are scrutinized through monitoring, financial auditing, investigatory journalism and public debating.
The evaluation process consists of defining decision-makers' needs, designing and structuring, implementing, reporting and dissemination.
The measures to evaluate programme outcomes are: impact evaluation, relative effectiveness evaluation, and project performance.
Conclusion on Control Systems and Evaluation Mechanisms
In the government department, two types of control, namely internal control and control by the political (legislative) bodies, have been identified. These two types of control manifest themselves in the administration of the government department. In exercising the internal control function, several aids are made use of, for instance the budget, auditing, inspections and investigations, procedural arrangements and organizational arrangements.
The Continuous Motion Technique for a New Generation of Scanning Systems
In the present paper we report the development of the Continuous Motion scanning technique and its implementation in a new generation of scanning systems. The same hardware setup has demonstrated a significant boost in the scanning speed, reaching 190 cm2/h. The implementation of the Continuous Motion technique in the LASSO framework, as well as a number of new corrections introduced, are described in detail. The performance of the system, the results of an efficiency measurement and potential applications of the technique are discussed.
Nuclear photographic emulsions, also called nuclear emulsions, are the highest-precision tracking detectors used in particle physics 1 . They are rather cheap, compact and high-density devices that cumulatively record all the charged particles passing through them, produced or stopped inside them. Unlike other detector types, nuclear emulsions must be developed before the tracks can be observed. After exposure and development, single three-dimensional particle tracks can be observed and measured at a microscope, allowing the reconstruction of complex decay topologies, especially important for short-lived decays.
Nuclear emulsion typically takes the form of a layer coated on one or both sides of a transparent support, a glass plate or a thin plastic foil. It is made of silver halide micro-crystals immersed in a transparent organic gelatin compound. The energy loss of ionizing particles crossing the films induces along their path atomic-scale perturbations that, after a special chemical treatment called development, produce a sequence of silver grains in the emulsion visible through an optical microscope.
The history of the photographic method in particle physics began in 1896 with the discovery of radioactivity, and since then it has played an important role in particle and nuclear physics and astrophysics. Nuclear emulsions have set a milestone in the study of neutrino physics, in particular in the field of neutrino oscillations and for the first direct detection of the tau neutrino. The improvements in the technology of emulsion readout have allowed the application of nuclear emulsions to be extended to the dark matter search 2,3 . The possibility to measure nuclear emulsions with faster scanning devices has provided a complementary and, when real-time monitoring is not required, even an alternative technique to electronic detectors in the field of muon radiography/tomography 4 , in particular for the study of the inner structure of volcanoes 5 or the investigation of geological faults. Nuclear emulsions also find interesting applications in hadron therapy, where the use of carbon beams is limited by the poor knowledge of the secondary fragments emitted in the irradiated tissues. The added value of using nuclear emulsions for the study of carbon beams and their secondary particles has been demonstrated in several works [6][7][8] .
The first concept of an automatic scanning system was proposed in 1974 at Nagoya University in Japan 9 , but at that time digital technology was too primitive to implement it. It was not before 1990 that the first fully automated emulsion scanning system, named the Track Selector (TS) 10 , emerged in Japan. Upgraded versions of this system, called New-TS (NTS) and Ultra-TS (UTS), were used in the DONuT 11 and CHORUS 12 experiments. The latest version, called Super-UTS 13 (S-UTS), was used in the OPERA experiment 14 . Since the early 1990s an independent automatic microscopy R&D programme started in Italy, aimed at the development of systems for the analysis of large emulsion surfaces for neutrino oscillation experiments (CHORUS and later OPERA) designed to observe muon-to-tau neutrino oscillations in appearance mode [15][16][17][18][19] . The efforts were finalized with the development of the SySal 20 scanning system that later evolved into the European Scanning System 21, 22 (ESS). Both the ESS and the S-UTS, used in the OPERA experiment, had high efficiency (about 90%) for MIP particle search and similar tracking resolution (1 micron in position and 3 mrad in angle in a 300 μm thick film). The peak scanning speed achieved was 20 cm2/h for the ESS and 72 cm2/h for the S-UTS, with an angular acceptance up to 0.6 radian, satisfying the OPERA requirements. The data volume acquired by the automated microscopes for the OPERA experiment is unprecedented in the world, equivalent to several thousand square meters of emulsion surface (50 μm thick) completely analyzed by systems developed in the experimental physics groups.
A schematic view of an OPERA-like emulsion film 23 is shown in Fig. 1. It is composed of two emulsion layers, one top and one bottom, poured on either side of a plastic base. Due to the technological process of emulsion film production, each emulsion layer contains in the middle a thin insensitive layer of pure gelatin. A charged particle passing through leaves traces in both sensitive emulsion layers, visible as sequences of aligned silver grains and referred to as top and bottom microtracks. The measured direction of a microtrack may not coincide with the real charged particle trajectory due to the shrinkage effect: the change of the emulsion layer thickness during development. In OPERA-like emulsion films the shrinkage effect is minimized by filling them with glycerin after development, thus almost precisely restoring the original thickness. After reconstructing a microtrack it is possible to trace it to the point where it crosses the plastic base surface. This point is the least affected by distortions and, therefore, lies closest to the particle's trajectory. By interconnecting the least distorted points of the top and bottom microtracks one obtains the best approximation of a passing-through particle trajectory, referred to as a base track.
Since 2011 the Naples OPERA group has carried out a dedicated R&D on automatic scanning 24 to improve the performance of the ESS, developing the Large Angle Scanning System for OPERA framework (LASSO) 25 , able to work at up to 40 cm2/h on the current ESS hardware setup and to measure tracks in emulsion in an extended angular range. In 2015 the ESS was upgraded to the New Generation Scanning System (NGSS) 26 that can operate with high efficiency at the record speed of 84 cm2/h. Different developments were carried out by other groups 27 . In this paper we report a further scanning speed improvement through the development of the Continuous Motion (CM) scanning technique and its implementation at the NGSS. With the development of the CM technique we have boosted the NGSS's scanning speed to 190 cm2/h.
Results
Scanning techniques. The scanning method consists of taking a series of tomographic images while moving the focal plane of the objective inside the sensitive emulsion layer and in this way digitizing the full content of the sample. After the image processing and the three-dimensional reconstruction, one obtains shapes and positions of all silver grains. Images are taken with steps shorter than the objective's depth of field: in this way the digitization and analysis of each cubic micron of sample is performed without gaps. The huge amount of information (up to several GB/s) coming from fast digital cameras requires high performance hardware, together with advanced image processing and computing algorithms. The image taking is synchronized with the motion of the microscope motorized stage and the objective lens (XYZ axes).
The Stop&Go approach. The Stop&Go (SG) approach for the movement of the microscope stage and objective lens has been used for the automatic scanning of emulsion films since the very beginning of the automated microscope era. It has proven itself a very reliable and relatively easy technique. In some sense it is the most straightforward way to implement the stage motion, and it provides a set of images stacked vertically, which is, in turn, the most convenient format for processing. The SG technique is described in detail in ref. 26. The technique includes two steps, called the data acquisition (or DAQ) motion and the reset motion. The DAQ motion involves only vertical movement of the objective lens and, therefore, depends only on the camera frame rate and the desired sampling step (the distance along the Z-axis between two consecutive frames). The reset motion is intended to move the objective and the stage to the next field of view. Therefore, it involves movement along both vertical and horizontal axes, and the required time is defined by the longest movement. Since in most applications the emulsion thickness (tens of μm) is several times smaller than the field of view dimensions (hundreds of μm), the longest movement is the horizontal one. As shown in Fig. 2 (top scheme), during the DAQ phase of the SG (green solid arrow) the objective lens moves with a constant speed along the vertical axis only, producing a vertical image pile (blue thin horizontal lines), and then performs a long reset motion (red dotted arrow) to the next field of view. With wide fields of view and thin emulsions the reset motion takes even longer than the DAQ motion. In a certain sense the reset motion can be interpreted as a "dead time" of the microscope, since no measurements are carried out during it. With technological progress it becomes possible to construct automatic microscopes with even wider fields of view and shorter DAQ motion time by choosing a lower-magnification objective lens along with cameras having more (mega-)pixel sensors and higher frame rates. The use of the SG technique on such microscopes would only increase the "dead time" fraction of the new systems.
The Continuous Motion approach. In order to fully exploit the hardware components of the microscope we have developed a novel scanning approach, called the Continuous Motion (CM), reported in this paper. In this approach, shown schematically in Fig. 2 (bottom scheme), the vertical axis performs a periodic movement while the horizontal one moves at a constant speed, such that during one period of the objective lens oscillation the stage displacement amounts to exactly one field of view. The size of the overlap between two consecutive fields of view can then be increased by decreasing the horizontal speed.
If one considers an emulsion layer of thickness $d$ and a scanning system equipped with a stage capable of moving the objective lens with acceleration $a_z$ and a camera capturing $f$ frames per second, then the time $T_{CM}$ required to scan a single view with Z-sampling $s$ can be expressed with the formula:

$$T_{CM} = \frac{2 v_z}{a_z} + \frac{d}{v_z} + 2\sqrt{\frac{D}{a_z}},$$

where the first two terms correspond to the DAQ phase and the last one to the reset motion phase. The first term describes the time needed to accelerate to the speed $v_z = sf$, and the factor 2 accounts for the equal time needed for deceleration. The second term represents a movement with a constant speed $v_z$ through the emulsion thickness $d$, during which frames are actually grabbed. The last term corresponds to the acceleration time over half the reset distance, with the equal deceleration time accounted for by the factor 2. The formula can be simplified to take the form:

$$T_{CM} = \frac{D}{v_z} + \frac{v_z}{a_z} + 2\sqrt{\frac{D}{a_z}},$$

where $D = d + v_z^2/a_z$ is the overall scanning amplitude, including the emulsion thickness $d$ as well as the space above and below it needed to accelerate and decelerate the objective lens.
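To make the timing budget concrete, the following minimal sketch evaluates $T_{CM}$ and the resulting areal scanning speed. All numerical values (thickness, sampling step, frame rate, acceleration, field of view and overlap) are illustrative placeholder assumptions, not the actual NGSS parameters of Table 1.

```python
# Illustrative evaluation of T_CM and the resulting areal scanning speed.
# Every number below is a placeholder assumption, not an actual NGSS
# parameter from Table 1.
import math

d   = 50e-6   # emulsion layer thickness [m] (assumed)
s   = 2e-6    # Z-sampling step [m] (assumed)
f   = 400.0   # camera frame rate [frames/s] (assumed)
a_z = 30.0    # vertical stage acceleration [m/s^2] (assumed)

v_z = s * f                 # constant vertical speed during the DAQ phase
D   = d + v_z**2 / a_z      # overall scanning amplitude

# T_CM = DAQ phase (accelerate, cross the amplitude, decelerate) + reset phase
T_CM = D / v_z + v_z / a_z + 2.0 * math.sqrt(D / a_z)

# One field of view is covered per vertical period; the overlap between
# adjacent views is obtained by slightly lowering the horizontal speed.
fov_x, fov_y = 800e-6, 600e-6   # field of view [m] (assumed)
overlap      = 30e-6            # overlap along the motion axis [m] (assumed)
v_x = (fov_x - overlap) / T_CM  # required constant horizontal speed

speed = (fov_x - overlap) * fov_y / T_CM * 1e4 * 3600  # net speed [cm^2/h]
print(f"T_CM = {T_CM*1e3:.1f} ms, v_x = {v_x*1e3:.1f} mm/s, "
      f"speed ~ {speed:.0f} cm2/h")
```

With these placeholder values the cycle time comes out near the tens-of-milliseconds scale and the horizontal speed near 10 mm/s, consistent in order of magnitude with the working cycle quoted later in the text.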
Unlike the SG, in the CM a tilted set of images is produced during the DAQ step, with every image displaced horizontally by a certain distance with respect to its neighbors. A tilted image set requires more sophisticated processing compared to the SG, since most of the tracks will cross view boundaries, and grain images (referred to as clusters) belonging to the same track will appear at different positions within different images. Thus, the processing must take into account the effects of optical distortions, vibrations and view misalignment.
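A minimal sketch of the bookkeeping this implies, with hypothetical names: in a tilted pile, the absolute position of a cluster must fold in the stage travel accumulated up to its frame, assuming a constant horizontal speed during the DAQ phase (distortion and vibration corrections omitted here).

```python
# Hypothetical helper: absolute coordinates of a cluster found in frame k of a
# Continuous Motion image pile. Unlike the Stop&Go case, every frame carries a
# horizontal offset equal to the stage travel accumulated since the first frame.
def cluster_to_absolute(px, py, k, view_x0, view_y0, z0,
                        pixel_size, s, v_x, f):
    """px, py   : cluster position inside frame k [pixels]
       view_x0, view_y0, z0 : stage coordinates at the first frame [um]
       pixel_size : pixel size projected onto the emulsion [um/pixel]
       s : Z-sampling step [um]; v_x : horizontal stage speed [um/s]
       f : camera frame rate [frames/s]"""
    x = view_x0 + px * pixel_size + v_x * k / f   # per-frame stage shift
    y = view_y0 + py * pixel_size                 # stage moves along X only
    z = z0 + k * s                                # depth from the frame index
    return x, y, z
```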
Implementation of the Continuous Motion technique.
The idea of the CM is to let the horizontal stage move at a constant speed along one of the axes while the vertical stage performs the DAQ and reset motions. Thus, the most straightforward implementation would be to command the horizontal stage to start moving and then control only the vertical stage by sending fast commands to move up and down at the desired speed and acceleration.
Unlike the SG, the CM is very sensitive to any kind of delay in the command flow and processing. Indeed, with the typical working cycle of 80 ms the horizontal speed will be around 10 mm/s. Thus, a delay of 1 ms would lead to about 10 μm of extra stage displacement to be taken into account in the overlap. Windows XP, used on the stage control PC, is essentially not a real-time operating system. It has a task scheduler that is not under user control and a typical time slice of the order of 20 ms, so a delay of several tens of ms can easily occur, resulting in up to several hundreds of μm of extra stage displacement. This displacement must be taken into account in the overlaps, otherwise gaps between adjacent views can appear. Overlaps of several hundreds of μm would cancel out all the advantages of the CM, eliminating any significant gain in the scanning speed.
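As a back-of-the-envelope check of this delay sensitivity, the short sketch below uses the 80 ms cycle quoted above and an assumed 800 μm field of view along the motion axis.

```python
# Back-of-the-envelope check of the delay sensitivity discussed above,
# using the 80 ms working cycle and an assumed 800 um field of view.
cycle_ms = 80.0               # working cycle per field of view [ms]
fov_um   = 800.0              # field of view along the motion axis [um] (assumed)
v_x      = fov_um / cycle_ms  # horizontal speed: 10 um/ms, i.e. 10 mm/s

for delay_ms in (1.0, 20.0, 60.0):
    extra_um = v_x * delay_ms  # extra displacement the overlap must absorb
    print(f"delay {delay_ms:4.0f} ms -> extra displacement {extra_um:5.0f} um")
# 1 ms costs ~10 um; scheduler delays of tens of ms cost hundreds of um,
# forcing overlaps so large that the CM speed gain would vanish.
```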
To guarantee a delay-free command flow we use the programmable FPGA device present on the NI-Motion PCI-7344 stage controller board. Commands and their parameters can be stored in an onboard buffer without interrupting the ongoing stage motion. The next command is executed immediately after the previous one is accomplished, with no delay. The FPGA API provides functionality for looping and branching, as well as functions to query the current motion status. It allows writing simple standalone programs that can be executed onboard without interfering with the main CPU.
Following this approach, the FPGA is programmed to execute the DAQ motion followed by the reset motion in an infinite loop, making the objective oscillate up and down with a period stability better than 1 ms (see Fig. 3d). This introduces an uncertainty of less than 10 μm to be taken into account in the overlaps. The possibility to preload the next movement parameters before the current one is completed makes moves follow one after another with negligible delays, thus bringing the overhead time to practically zero. The comparison of the speed and displacement profiles of the SG and CM techniques is shown in Fig. 3a and b. The durations of the DAQ and reset phases are shown in Fig. 3c. The corresponding scanning parameters are reported in Table 1.
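The preloading pattern can be sketched as follows. The controller interface used here (preload_move, start_preloaded_move, wait_move_complete) is hypothetical and does not reproduce the actual NI-Motion FPGA instruction set; it only illustrates how queuing the next move during the current one removes the inter-move overhead.

```python
# Conceptual sketch of the onboard motion program (hypothetical controller
# API, NOT the NI-Motion FPGA instruction set): the vertical axis oscillates
# forever, and each next move is preloaded while the current one executes,
# so the DAQ and reset phases follow each other with negligible delay.
def vertical_oscillation(ctrl, z_bottom, z_top, v_daq, v_reset, a_z):
    ctrl.preload_move(axis="Z", target=z_top, vel=v_daq, acc=a_z)       # DAQ
    while True:
        ctrl.start_preloaded_move()
        # queue the reset while the DAQ move is still executing
        ctrl.preload_move(axis="Z", target=z_bottom, vel=v_reset, acc=a_z)
        ctrl.wait_move_complete()
        ctrl.start_preloaded_move()                                      # reset
        # queue the next DAQ move; in the real system the limits may be
        # updated by the focus feedback from the Image Processing module
        ctrl.preload_move(axis="Z", target=z_top, vel=v_daq, acc=a_z)
        ctrl.wait_move_complete()
```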
Continuous Motion workflow.
The CM technique was implemented within the LASSO framework 25 . The latter has a modular structure, with each module having a definite role: the Stage Module controls stage movements and position monitoring; the Camera Module performs image acquisition with the frame grabber and camera; the Image Processing Module processes images in real time using the available GPU boards; the Tracker Module performs real-time reconstruction of microtracks; the Guide Module ensures reliable cooperation of all modules and governs the scanning process. During operation, the different modules exchange data by issuing commands and waiting for responses. The workflow diagram is shown in Fig. 4. The scanning process is governed by the Guide module, which starts by issuing a command to the Camera module to start image acquisition. The Camera module grabs images and stores them in a circular buffer, overwriting the oldest images. The buffer length is adjustable; typically it contains 1000 images, large enough to hold about 1.75 seconds of data.
After image acquisition is started, the Guide module commands the Stage module to start oscillating along the vertical axis and requests the movement profile for 6 periods (the "6 Pr." operation in Fig. 4). Once the Stage module has provided the displacement profile (black solid line in Fig. 3b), the Guide module synchronizes with the vertical oscillations by isolating the individual DAQ and reset phases. From then on it can predict when DAQ phases will take place, even several periods in advance. To verify that the synchronization was calculated correctly, the Guide module determines the nearest DAQ window and requests the Stage module to provide the displacement profile for that period (the "Prof." operation). This check (the "Check Sync." operation) is carried out every time position data arrive, ensuring the validity of the acquired data.
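A minimal sketch of the DAQ window prediction is shown below; the timing numbers are illustrative and the function is our own illustration, not part of LASSO.

```python
def daq_windows(t0, period, daq_duration, n_windows):
    """Predict the next n DAQ time windows from one observed DAQ start.

    t0           -- start time of a previously isolated DAQ phase (ms)
    period       -- full oscillation period, DAQ + reset (ms)
    daq_duration -- length of the acquisition phase (ms)
    """
    return [(t0 + k * period, t0 + k * period + daq_duration)
            for k in range(1, n_windows + 1)]

# Illustrative numbers: an 80 ms cycle with a 58 ms acquisition phase.
for start, stop in daq_windows(t0=123.4, period=80.0,
                               daq_duration=58.0, n_windows=3):
    print(f"DAQ window: {start:.1f} - {stop:.1f} ms")
```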
After the completion of the synchronization routine, the movement along the horizontal axis is started. The Guide module calculates when the nearest DAQ phase occurs and issues requests for the motion profile ("Prof.") and images ("Grab") for the corresponding time window. As soon as the DAQ phase is over, it receives the requested images and coordinates, checks the synchronization, and calculates the image coordinates ("Calc Img Crd"). It then sends the images to the Image Processing module, which performs cluster reconstruction and locates the emulsion surfaces. The latter is required to verify that the data are taken inside the emulsion sensitive layer; if the vertical position of the emulsion changes due to local curvature or thermal expansion, the Image Processing module sends feedback to the Guide module that the oscillation limits of the vertical axis should be corrected. Reconstructed clusters are sent to the Tracker module, which performs the further processing: reconstruction of grains, alignments, and reconstruction of microtracks.
View reshaping by merging of the adjacent views. As mentioned before, the CM produces a tilted pile of images, leading to a reconstruction volume of non-rectangular shape. With such a shape, a significant fraction of vertical tracks is only partially contained in the reconstructed volume. For example, as shown in Fig. 5a, the three bottommost (black) clusters are detected only in the left view, while the two uppermost (white) clusters are detected only in the right view. The track is thus only partially contained in either view, becoming split into two rather short segments. Partially contained tracks have a higher probability of being lost during the reconstruction process. This problem can be minimized if the reconstruction volume has a rectangular shape with vertical sides. In this case, shown in Fig. 5b, if a quasi-vertical track exits through a side, then, owing to the overlap of volumes, it will be reconstructed in the adjacent volume. Similarly, an inclined track crossing the boundary will be reconstructed in at least one volume, since its travel path inside it is long enough.
In order to reshape the volume, we first reconstruct the grains inside it and then align it with the previously processed adjacent volume by matching grains in the overlapping area. All the grains belonging to the adjacent volume that fall inside the reshaped volume are then added to it. The precision of the grain reconstruction may degrade if a grain lies within 10 μm of the edge of the view, since part of it can lie outside and the grain is only partially contained; its duplicate, on the other hand, is fully contained in the area of overlap with the adjacent view. Marginal grains are therefore discarded and do not take part in the matching procedure. Matching grain pairs, detected in both volumes, are merged into single grains. The newly formed rectangular volume then undergoes the usual microtrack reconstruction procedure, the same as in the SG. The view reshaping procedure is illustrated in Fig. 6.
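A simplified sketch of the merging step is given below. It assumes the two views are already aligned, uses a single boundary plane in place of each view's own edge, and takes the 2 μm pairing tolerance as an assumed value; only the 10 μm edge margin comes from the text.

```python
import numpy as np

EDGE_MARGIN = 10.0   # um: grains this close to the boundary are discarded
MATCH_TOL   = 2.0    # um: pairing tolerance, an illustrative value

def merge_views(grains_a, grains_b, x_edge):
    """Merge grains (N x 3 arrays of x, y, z in um) from two adjacent,
    already aligned views overlapping around the plane x = x_edge."""
    a = grains_a[np.abs(grains_a[:, 0] - x_edge) > EDGE_MARGIN]
    b = grains_b[np.abs(grains_b[:, 0] - x_edge) > EDGE_MARGIN]
    merged, used_b = [], set()
    for g in a:
        d = np.linalg.norm(b - g, axis=1)        # distance to every b grain
        j = int(np.argmin(d)) if len(b) else -1
        if j >= 0 and d[j] < MATCH_TOL and j not in used_b:
            merged.append(0.5 * (g + b[j]))      # duplicate: merge into one
            used_b.add(j)
        else:
            merged.append(g)                     # grain seen in view A only
    merged += [g for j, g in enumerate(b) if j not in used_b]
    return np.array(merged)
```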
Optical distortion correction. Optical distortion, inevitably present in any optical system, can spoil microtrack reconstruction in emulsion. It leads to a discrepancy between the true position of a grain and its visible position inside the field of view. The effect is position dependent and reaches its maximal value (up to 1-2 μm) near the edges of the field of view, affecting all three coordinates. Since in the SG all the frames are piled up vertically, the grains of a moderately inclined track, which is usually the case, appear in more or less the same part of the field of view and thus experience roughly equal distortion (see Fig. 7a). All grains of the track are therefore equally shifted, giving rise to a coherent displacement of the microtrack from its true position without affecting its shape; in the SG the optical distortion thus creates no problems other than some degradation of the position accuracy. On the contrary, in the CM with its tilted image pile, grains of the same track, even a vertical one, appear in different parts of the field of view and undergo distortions of different magnitude and direction (Fig. 7b). If not corrected, the optical distortion therefore changes the shape of a straight track, curving it and even breaking it if it passes through the tilted side of the volume (Fig. 7c).
In order to get rid of optical distortion effects, we have introduced two correction matrices: one performs the correction in the horizontal (XY) plane, while the other acts along the vertical (Z) axis. The use of two matrices instead of a single three-dimensional one was dictated by implementation reasons. The distortion correction procedure is described in detail in ref. 25.
Camera time offset correction. The timestamp τ of an image is associated with the coordinate of the image centre ξ through a procedure described in the Methods section. Nevertheless, the timestamp is not perfectly synchronized with the moment when the image is centered around ξ, although the difference between consecutive timestamps is very precise. As long as the stage moves in one direction, the displacement of the view centers with respect to their true positions is always the same; as soon as the direction of movement changes, this displacement changes sign, causing a hysteresis effect. In the SG this effect was avoided by always performing the data acquisition in one and the same movement direction of the vertical axis. The same technique is used in the CM, but for the vertical axis only; applying it to the horizontal axis would lead to a significant drop in the scanning speed. The only way to get rid of the hysteresis is therefore to introduce a timestamp offset correction: a value added to every image timestamp that fixes it and gives it a definite meaning as the moment when the central pixel is acquired.
Fine vibration correction by view-to-view alignment. The existing vibration correction procedure aligns adjacent frames by taking advantage of grain shadows. It thus corrects vibrations inside a view only, leaving the coordinates of the view with respect to its neighbors uncorrected; in other words, the correction is local. To make it global, we developed a procedure that aligns each view with its available neighbors.
The alignment is performed at the grain level, i.e. it uses all three coordinates of the reconstructed grains. The procedure searches for matching grain patterns in the overlapping volume of neighboring views. To improve the alignment accuracy and stability, some redundancy can be added by also aligning with views from the previously scanned line; for this purpose, the entire line of views is kept in the computer memory. To save memory, the program keeps only the peripheral grains close to the view boundaries. The alignment is done online, and the offsets found are saved for later use in a dedicated offline procedure that merges all the views into a single volume before reconstructing tracks.
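The sketch below illustrates the basic idea of estimating a view-to-view offset from matched grain pairs; the nearest-neighbor pairing and the matching radius are simplifications of the actual pattern matching.

```python
import numpy as np

def estimate_offset(grains_ref, grains_new, tol=3.0):
    """Estimate the (dx, dy, dz) offset of a view relative to an already
    aligned neighbor by averaging the displacements of matched grain
    pairs; tol is an assumed matching radius in um."""
    deltas = []
    for g in grains_new:
        d = np.linalg.norm(grains_ref - g, axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:
            deltas.append(grains_ref[j] - g)     # shift that aligns the pair
    if not deltas:
        return np.zeros(3)                       # no overlap grains matched
    return np.mean(deltas, axis=0)

# Applying the offset moves the new view into the neighbor's frame:
# grains_new_aligned = grains_new + estimate_offset(grains_ref, grains_new)
```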
View alignment also improves the precision of the timestamp correction: after traveling a long distance, even a small discrepancy can accumulate view by view and produce a noticeable offset that degrades the position resolution. Due to this offset, microtracks in the overlapping area corresponding to the same track appear too far apart to be recognized as a matching pair, leading to the appearance of fake tracks. Aligning the current view with its closest neighbors helps reduce this effect.
Thermal expansion correction by alignment with reference views. In the case of double-sided emulsion films, thermal effects become important. While one emulsion layer is being scanned, the stage heats up and expands, displacing the vertical positions of the emulsion layers. If measured directly from the data, the distance between the layers is then not equal to the plastic base thickness, as it should be. After one hour of scanning, the discrepancy can be as large as 50 μm and, if not corrected, leads to incorrect measurement of track slopes and hence to degradation of the angular resolution and efficiency. Irregularities in emulsion film flatness and base thickness are usually much smaller than the thermal expansion and create problems neither during scanning nor in the analysis. The solution used in the SG is to divide the emulsion area into fragments small enough that, while one emulsion layer of a fragment is being scanned, the second one has no noticeable displacement. This solution does not affect the scanning speed in the SG, but in the CM it would decrease it significantly, because of the need to switch between emulsion layers and to change the scanning direction more often.
In order to correct the effect of thermal expansion without decreasing the scanning speed, we have introduced the concept of reference views: view pairs (one top and one bottom) separated horizontally by a few centimeters. The acquisition of the reference views is performed at the start of data taking and takes no longer than a minute, so the displacement of the emulsion layers during it is negligible. Once the reference views are taken, the emulsion film is scanned as a single fragment. Later, all the views are aligned with each other as well as with the reference views, providing the information needed to calculate a global transformation. Applied to the view coordinates, this transformation moves each view to the position it had during the reference view scanning, restoring the original form of the emulsion film and correcting the effect of thermal expansion.
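A toy version of this correction is sketched below: a linear map fitted to reference positions measured at the start of data taking is inverted to move later measurements back to their start-of-scan positions. The real procedure computes a global transformation from many view alignments; this one-dimensional fit and its numbers are illustrative only.

```python
import numpy as np

def fit_linear_correction(z_start, z_now):
    """Fit z_now ~ a * z_start + b from reference views measured at the
    start of data taking (z_start) and re-aligned later (z_now); the
    inverse map moves views back to their start-of-scan positions."""
    A = np.vstack([z_start, np.ones_like(z_start)]).T
    (a, b), *_ = np.linalg.lstsq(A, z_now, rcond=None)
    return a, b

# Toy numbers: the stage expanded, shifting layers by up to ~50 um.
z_start = np.array([0.0, 200.0, 40000.0, 40200.0])   # reference z, um
z_now   = z_start * 1.0012 + 3.0                     # drifted positions
a, b = fit_linear_correction(z_start, z_now)
z_corrected = (z_now - b) / a                        # undo the expansion
print(np.allclose(z_corrected, z_start))             # True
```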
Discussion
The presented work reports the development of the CM scanning technique. With this technique it has become possible to boost the effective scanning speed of the NGSS microscope from 84 cm²/h to 190 cm²/h, setting a new record. The increase in the scanning speed comes from the reduction of the reset motion time needed to move the objective and the stage to the next field of view. The high performance level was achieved thanks to the development and application of a number of correction procedures: optical distortion correction, timestamp offset correction, vibration correction by view alignment, and thermal expansion correction by alignment with reference views. Implementing the view reshaping by merging grains from the previous view made it possible to use the existing microtracking procedure without changes.
The efficiency test was performed using a stack of OPERA-like emulsion films. Due to the increased vibration level, the in-motion acquisition and the view reshaping, reconstructed grain positions are less accurate in the CM. Trying to compensate for this by loosening the parameters of the microtrack reconstruction procedure would significantly increase the chance coincidence level and degrade the purity. For this reason, the microtrack reconstruction was done with the same parameters as in the SG. Naturally, this led to some decrease in the reconstruction efficiency, which propagated through all the other reconstruction stages and is visible in Fig. 8a. On the other hand, the application of the view-to-view and reference view alignments at later stages recovered the angular and spatial accuracies of the base track reconstruction, as shown in Fig. 8b and c, resulting in a higher reconstruction quality. Angular residuals are calculated as the average angular difference between two consecutive base tracks associated with the same reconstructed track. Position residuals are calculated as the average difference between the position of a base track in one film and the projection of the consecutive base track onto the same film.
It is worth noting that the reconstruction efficiency of a single base track is not critical, thanks to the redundancy provided by having 10 or more emulsion films along the particle trajectory; the measured performance is therefore fully satisfactory and comparable to that in the SG.
The NGSS microscope described in this paper is currently in use for the final analysis of the OPERA films. The development and implementation of the proposed CM technique makes it possible to reach a scanning speed one order of magnitude higher than in OPERA, enabling the achievement of much more challenging goals. Unlike the SG, the CM allows the latest technological advances to be exploited for further scanning speed improvements, e.g. a piezo-driven vertical stage or multiple cameras.
The CM technique can be fruitfully applied to any application of nuclear emulsion film scanning in high energy, astroparticle and nuclear physics. In particular, we see the following applications:
• The FOOT 28 (FragmentatiOn Of Target) experiment, designed to study the interactions of carbon ion and proton beams in patient tissues in order to optimize hadron-therapy treatment planning systems; its detector is based on the use of nuclear emulsion to detect light fragments emitted at large angles.
• The NEWSdm 2 (Nuclear Emulsions for WIMP Search with directional measurement) experiment, designed to search for dark matter candidates in the underground Gran Sasso Laboratory using the innovative approach of detecting the nuclear recoil direction with an emulsion target.
• The neutrino detector of the SHiP 29 experiment, which will use a large amount of emulsion films as a tracking detector to study tau neutrino physics and to search for light dark matter produced in 400 GeV proton interactions.
• Muon radiography/tomography of the inner structure of volcanoes and geological faults, where emulsion films, unlike electronic detectors, can be easily installed and large surfaces are required.
• The CM technique can also be used with samples different from emulsion films, e.g. biological samples, where large volumes have to be analyzed with optical microscopes in the shortest possible time.
Methods
Microscope setup. The NGSS microscope setup is equivalent to that described in ref. 26.
Emulsion sample. The CM performance has been studied on a stack of OPERA-like emulsion films exposed at CERN to a 6 GeV/c π− beam. The films were exposed several times with different incident angles. The track density per angular peak was determined to be around 2000 particles/cm².
Scanning configuration. We have scanned an emulsion surface of 30 cm² on 8 consecutive plates. The final configuration of the scanning parameters giving the maximum efficiency was determined by choosing 28 layers of tomographic images taken with a Z-sampling of 1.75 μm, with a 5 × 5 high-pass convolution filter and a pixel-dependent binarization threshold applied. The effective scanning speed was 190 cm²/hour, taking into account the overlap of adjacent views (about 30 μm) and the time required to scan 9 reference views. For the other scanning parameters, refer to Table 1.
Track reconstruction. The track recognition procedure is quite a complex process executed by the tracking module of the LASSO software; all the steps of the microtrack reconstruction algorithm are described in ref. 25. A dedicated offline software, FEDRA 30 , then performs the track reconstruction in the full volume data: after the connection of microtracks across the plastic base (to form base tracks), which strongly reduces the number of fake tracks in a single film, a plate-to-plate alignment is done to connect base tracks in consecutive sheets and recognize the volume tracks. In this procedure, most of the instrumental background tracks are discarded. The base track efficiency was finally evaluated as the number of segments belonging to passing-through tracks divided by the number of crossed plates, excluding the first and the last plates, where each track starts and ends, respectively.
Horizontal distortion correction.
To obtain the horizontal (XY) matrix, we scan approximately 80 × 60 views at the same depth inside the emulsion with a horizontal step of 10 μm. In this way we obtain a dataset in which the image of a grain (a cluster) appears in different parts of the field of view and is thus distorted in different ways. We then identify that grain in all images by pattern matching between clusters; the identification procedure is repeated for every grain. It is then possible to compute the displacement of the grain from the position predicted by the stage encoder. This difference constitutes the desired local correction convoluted with the stage vibration; the scanning is therefore performed with the lowest possible speed and acceleration. The field of view is subdivided into cells, with the number of cells equal to the camera resolution. The effect of vibrations is further reduced by averaging the individual corrections of the grains that fall inside the same cell. These cells form the horizontal correction matrix, which is later applied to every found cluster, displacing it in the XY plane according to the values computed for the corresponding cell.
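The per-cell averaging can be sketched as follows; the array shapes, function name, and grid granularity are our own choices for illustration.

```python
import numpy as np

def build_xy_correction(obs_xy, pred_xy, fov, grid_shape):
    """Accumulate the per-cell mean displacement between observed cluster
    positions and positions predicted from the stage encoder.

    obs_xy, pred_xy -- (N, 2) arrays of positions inside the field of view
    fov             -- (width, height) of the field of view, same units
    grid_shape      -- (nx, ny) cells of the correction matrix
    """
    nx, ny = grid_shape
    corr = np.zeros((nx, ny, 2))
    counts = np.zeros((nx, ny))
    ix = np.clip((obs_xy[:, 0] / fov[0] * nx).astype(int), 0, nx - 1)
    iy = np.clip((obs_xy[:, 1] / fov[1] * ny).astype(int), 0, ny - 1)
    for i, j, d in zip(ix, iy, pred_xy - obs_xy):
        corr[i, j] += d            # displacement that restores true position
        counts[i, j] += 1
    counts[counts == 0] = 1        # empty cells keep a zero correction
    return corr / counts[..., None]

# A found cluster at (x, y) is then shifted by the entry of the matrix
# for its cell, which averages away residual stage vibration.
```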
Vertical distortion correction.
To calculate the vertical (Z) matrix, we take advantage of the OPERA emulsion film structure: for technical reasons, it is produced with a 1 μm insensitive layer of pure gelatin between the two sensitive layers (see Figs 1 and 6). Typically, an area of several cm² is scanned and the grains are reconstructed. To determine the distorted shape of the insensitive layer, we subdivide the field of view into vertical columns, each 10 × 10 μm² in XY and spanning the whole Z range. These columns are filled with the grain coordinates relative to their original views. The Z coordinate of the insensitive layer is then found for each column by looking for the minimum grain density therein. The set of values found provides the vertical distortion correction matrix.
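A compact sketch of the column-wise minimum-density search is given below; the histogram binning is an assumed implementation detail.

```python
import numpy as np

def find_insensitive_layer(grains, fov, cell=10.0, zbins=60):
    """Locate the z of the insensitive gelatin layer in each XY column.

    grains -- (N, 3) array of x, y, z in um, relative to the view origin
    fov    -- (width, height) of the field of view in um
    cell   -- XY column size in um (10 x 10 um^2 columns as in the text)
    """
    nx, ny = int(fov[0] // cell), int(fov[1] // cell)
    z_map = np.full((nx, ny), np.nan)
    zmin, zmax = grains[:, 2].min(), grains[:, 2].max()
    for i in range(nx):
        for j in range(ny):
            inside = ((grains[:, 0] // cell == i) &
                      (grains[:, 1] // cell == j))
            if not inside.any():
                continue
            hist, edges = np.histogram(grains[inside, 2],
                                       bins=zbins, range=(zmin, zmax))
            k = int(np.argmin(hist))            # depth of lowest grain density
            z_map[i, j] = 0.5 * (edges[k] + edges[k + 1])
    return z_map                                # vertical correction matrix
```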
Camera time offset correction. The calculation of the time offset is performed during the microscope setup and tuning; during normal scanning operation, only the offset value is used. To find the offset value, we scan the same area inside the emulsion twice, in opposite directions. Alignment is then performed by matching the grains reconstructed in both datasets. The time offset can be calculated with the formula Δt = dy/(2v_y), where dy is the offset in position of the matched grains and v_y is the movement speed.
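The offset calibration and the timestamp-based coordinate interpolation (described in the next paragraph) can be sketched together as follows; units and numbers are illustrative.

```python
def time_offset(dy_um, v_um_per_ms):
    """Camera timestamp offset from a bidirectional scan of the same area:
    matched grains appear displaced by dy in opposite directions, so the
    offset is dy / (2 v), with v the scanning speed."""
    return dy_um / (2.0 * v_um_per_ms)

def image_coordinate(profile, tau):
    """Interpolate the stage coordinate at image timestamp tau from the
    displacement profile, a list of (coordinate, timestamp) pairs."""
    for (x0, t0), (x1, t1) in zip(profile, profile[1:]):
        if t0 <= tau <= t1:
            return x0 + (x1 - x0) * (tau - t0) / (t1 - t0)
    raise ValueError("timestamp outside the recorded profile")

profile = [(0.0, 0.0), (10.0, 1.0), (20.0, 2.0)]   # um vs ms, illustrative
print(time_offset(dy_um=4.0, v_um_per_ms=10.0))    # 0.2 ms offset
print(image_coordinate(profile, tau=1.5 + 0.2))    # coordinate at fixed time
```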
Image coordinates calculation. The coordinates calculation is based on the timestamp information provided by the stage and camera modules. Sets of coordinate-timestamp pairs {x_i, t_i} for each axis constitute the stage displacement profiles shown in Fig. 3b. To calculate the image coordinate ξ, one searches for two points (x_i, t_i) and (x_{i+1}, t_{i+1}) with t_i < τ < t_{i+1}, where τ is the image timestamp, and then uses the simple linear interpolation ξ = x_i + (x_{i+1} − x_i)(τ − t_i)/(t_{i+1} − t_i).
Data availability statement. The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Marked accumulation of 27-hydroxycholesterol in the brains of Alzheimer's patients with the Swedish APP 670/671 mutation.
There is a significant flux of the neurotoxic oxysterol 27-hydroxycholesterol (27OHC) from the circulation across the blood-brain barrier. Because there is a correlation between 27OHC and cholesterol in the circulation and lipoprotein-bound cholesterol does not pass the blood-brain barrier, we have suggested that 27OHC may mediate the effects of hypercholesterolemia on the brain. We previously demonstrated a modest accumulation of 27OHC in brains of patients with sporadic Alzheimer's disease (AD), consistent with a role of 27OHC as a primary pathogenetic factor. We show here that there is a 4-fold accumulation of 27OHC in different regions of the cortexes of patients carrying the Swedish amyloid precursor protein (APPswe) 670/671 mutation. The brain levels of sitosterol and campesterol were not significantly different in the AD patients compared with the controls, suggesting that the blood-brain barrier was intact in the AD patients. We conclude that accumulation of 27OHC is likely to be secondary to neurodegeneration, possibly a result of reduced activity of CYP7B1, the neuronal enzyme responsible for metabolism of 27OHC. We discuss the possibility of a vicious circle in the brains of the patients with familial AD whereby neurodegenerative changes cause an accumulation of 27OHC that further accelerates neurodegeneration.
Ethical aspects
Permission to use autopsy brain material in the experimental procedures was granted by the Regional Human Ethics committee in Stockholm and the Swedish Ministry of Health. For all subjects included in the study, consent from next of kin was given for research studies prior to autopsy.
Oxysterol and sterol analyses
Brain tissue was homogenized according to Folch's method ( 28 ) with minor modifications. Approximately 0.1 g of each cortical region was homogenized in 1 ml of buffer (5 mM EDTA, 50 µg/ml BHT in PBS, pH 7.4) using a Polytron homogenizer. Three ml of chloroform:methanol (2:1, v/v) were added to the disrupted brain tissue homogenate. The homogenate was mixed by shaking (about 100 rpm) for 24 h at 4°C. The samples were centrifuged at 5000 × g for 5 min (room temperature) and the organic phase was collected in a new glass tube. The extraction procedure was repeated and both organic phases were combined. The extract was evaporated under argon gas, redissolved in 5 ml Folch, and stored at −20°C until analysis of the oxysterol content.
A volume corresponding to 10 mg of the extract was analyzed with respect to 24OHC, 27OHC, and 5-cholestene-3β,7α,27-triol using GC/MS and deuterium-labeled internal standards as previously described ( 26 ). Prior to GC/MS analysis, the residue was silylated with 200 µl pyridine:hexamethyldisilazane:trimethylchlorosilane (3:2:1, v/v/v) for 30 min at 60°C. The solvent was evaporated under argon gas, and the residue was redissolved in 80 µl hexane and subjected to GC/MS. The ions at m/z 413, 456, and 544 were used for tracing of unlabeled 24OHC, 27OHC, and 5-cholestene-3β,7α,27-triol, respectively, and the ions at m/z 416, 461, and 550 for tracing of 2H3-24OHC, 2H5-27OHC, and 2H6-5-cholestene-3β,7α,27-triol, respectively. Analysis of 5-cholestene-3β,7α,27-triol by isotope dilution has not been described previously. Replicate analyses of the same biological sample gave a coefficient of variation of 7%. Cholesterol, cholesterol precursors, and plant sterols were analyzed by isotope dilution mass spectrometry as described ( 29 ) using the same extract as above.
Hypercholesterolemia is regarded to be a risk factor for Alzheimer's disease (AD) ( 12,13 ), in particular in midlife ( 14 ). Because there is a close correlation between cholesterol and 27OHC in the circulation ( 15 ), hypercholesterolemia is likely to result in an increased uptake of this oxysterol. We have discussed the possibility that 27OHC mediates the effect of hypercholesterolemia and that this oxysterol may be a primary pathogenetic factor in the development of AD ( 16,17 ). The findings from a number of in vitro studies are thus consistent with the possibility that 27OHC may accelerate neurodegeneration. In cultured neuroblastoma cells, 27OHC counteracts the inhibiting effect of 24OHC on the generation of amyloid ( 18 ), and in retinal pigment epithelial cells it increases β-amyloid and oxidative stress ( 19 ). It has also been shown to increase β-amyloid formation in hippocampal slices and in neuronal preparations from rabbits ( 20 ). In addition, 27OHC reduces the production of the "memory protein" activity-regulated cytoskeleton-associated protein (Arc) in mouse brain ( 21 ).
We have demonstrated modestly increased levels of 27OHC both in the brain ( 8 ) and in the cerebrospinal fluid (CSF) ( 22 ) of patients with the sporadic form of AD. There are three different possibilities for the increased levels: 1) increased uptake of 27OHC from the circulation due to hypercholesterolemia; 2) reduced efficiency of the blood-brain barrier; and 3) reduced metabolism of 27OHC due to loss of CYP7B1 activity as a consequence of neuronal loss. The first two possibilities are consistent with the influx of 27OHC as a primary pathogenetic factor, whereas the last possibility means that the accumulation of 27OHC is secondary to the neurodegeneration.
In order to discriminate between the different possibilities, we measured 27OHC in brain autopsy samples from patients with a familial form of AD in which the primary pathogenetic factor is a double mutation in the gene encoding the amyloid precursor protein (APP) ( 23 ). In the present study, we found a marked accumulation of 27OHC in the brains of these patients, and the levels of 27OHC were considerably higher than the levels we previously observed in the brains of patients with the sporadic form of the disease ( 8 ). We conclude that accumulation of 27OHC is likely to be secondary to the neurodegeneration. Evidence is presented that the accumulation is more likely to be due to reduced CYP7B1 activity than to increased permeability of the blood-brain barrier or increased synthesis in the brain.
Human brain postmortem tissues from AD patients carrying the APPswe mutation and age-matched control subjects
Autopsy brain tissue from the temporal, parietal, and occipital cortexes was obtained from three AD patients carrying the Swedish APP 670/671 mutation (age 63 ± 4 years, mean ± SEM) at specific request from the Huddinge University Hospital Brain Bank (courtesy of Dr. Nenad Bogdanovic). These AD patients had received the clinical diagnosis of dementia according to the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association criteria ( 24 ), and the diagnosis of definitive AD was confirmed by neuropathological examination using Consortium to Establish a Registry for Alzheimer's Disease criteria ( 25 ). Cerebral cortical tissue from four age-matched control subjects (age 65 ± 2 years, mean ± SEM) with no clinical history of psychiatric or neurological disorders and no neuropathological changes indicative of dementia was included in this study. The main cause of death was myocardial infarction for the controls and pneumonia or cachexia and dehydration for the AD patients. In compliance with Swedish law, the deceased were kept first at room temperature for a minimum of 3 h and then transferred to a cold room until the clinical autopsy was performed. At autopsy, samples from the different brain regions were collected from the left hemisphere, with a postmortem interval of 5-22 h for the AD patients and 11-44 h for the controls. The tissue samples were frozen immediately and stored at −80°C until use.
Accumulation of 27OHC in the brains of AD patients carrying the APPswe mutation
As shown in Fig. 1A, the levels of 27OHC in brain homogenates from the temporal cortexes of the AD patients were 7.8 ± 1.8 ng/mg tissue as compared with 1.8 ± 0.8 ng/mg tissue in the corresponding material from the controls (P = 0.02, Student's t-test). The level of 24OHC was 18 ± 1 ng/mg tissue in the AD patients as compared with 24 ± 3 ng/mg tissue in the controls (P > 0.05). The ratio between 27OHC and 24OHC was 0.44 ± 0.11 in the temporal cortexes of the AD patients and 0.07 ± 0.02 in the corresponding material from the controls (P = 0.02, Student's t-test).
As shown in Fig. 2, the levels of 27OHC in brain homogenates from the parietal cortexes of the AD patients were 9.4 ± 3.3 ng/mg tissue as compared with 2.6 ± 1.1 ng/mg tissue in the corresponding material from the controls (P = 0.03, Student's t-test). The corresponding figures for 24OHC were 18 ± 1 ng/mg tissue in the patients and 19 ± 1 ng/mg tissue in the controls (P > 0.05).
Similar results were obtained in the analyses of brain homogenates from the occipital cortexes of the AD patients and controls (results not shown).
Levels of 5-cholestene-3β,7α,27-triol in the brains of AD patients carrying the APPswe mutation
If the activity of the metabolizing enzyme CYP7B1 is the limiting factor for elimination of 27OHC from the brain, and if this activity is reduced in the brains of patients with AD, reduced levels of the product 5-cholestene-3β,7α,27-triol would be expected, or at least a reduced ratio between this product and 27OHC. The levels of 5-cholestene-3β,7α,27-triol were found to be 31 ± 5 pg/mg tissue in the brain homogenates from the parietal cortexes of the AD patients as compared with 25 ± 4 pg/mg tissue in the corresponding material from the controls (P > 0.05). The corresponding figures obtained in the analysis of 5-cholestene-3β,7α,27-triol in brain homogenates from the temporal cortexes of the AD patients and controls were 63 ± 8 pg/mg tissue and 26 ± 2 pg/mg tissue, respectively (P < 0.05). The absolute levels of the product are thus similar or higher in the brains of patients with AD than in the controls, whereas the ratio between product and substrate is lower in AD. The data are consistent with, but do not prove, a reduced capacity to convert 27OHC into 5-cholestene-3β,7α,27-triol in the brains of the AD patients.
Western blotting
Microsomes prepared from the above brain material were subjected to electrophoresis on a 10% SDS or Bis/Tris polyacrylamide gel and transferred to nitrocellulose membranes. The membranes were incubated for 2 h at room temperature in blocking buffer (5% milk in PBS, 0.05% Tween) followed by incubation overnight in a cold room with an anti-CYP46 antibody (a generous gift from Prof. D. Russell, University of Texas Southwestern Medical Center, Dallas, TX) and an anti-β-actin antibody. As a secondary antibody, goat anti-rabbit coupled with HRP (Pierce) was used, incubated at room temperature for 2 h. The membranes were incubated in SuperSignal West Dura Extended Duration Substrate (Prod. #34075) from Thermo Scientific (Pierce) according to the manufacturer's instructions. The signal (around 50 kDa) was detected with equipment from Bio-Rad (Universal Hood II).
Mitochondria were also prepared from the same brain material and subjected to electrophoresis and the same procedures as above, with the exception that the anti-CYP46 antibody was replaced by an anti-CYP27 antibody ( 8 ).
Levels of CYP46A1 and CYP27A1 in the brains of AD patients carrying the APPswe mutation
Theoretically, the high levels of 27OHC could be due to increased synthesis. As shown in Fig. 6, Western blotting with antibodies toward human CYP27A1 gave similar signals in the analysis of the mitochondrial fraction of brain tissue from the AD patients as in the analysis of the corresponding fraction from the brains of the controls. Western blotting was also performed with antibodies toward human CYP46A1 and β-actin on the microsomal fraction of brain homogenates from the patients and the controls. No significant difference was observed between the patients and the controls (not shown). The results are consistent with unchanged levels of CYP27A1 and CYP46A1 in the brains of the AD patients.
Levels of plant sterols and lathosterol in the brains of AD patients carrying the APPswe mutation
As shown in Fig. 3A, the levels of sitosterol in the brain homogenates from the parietal cortexes of the AD patients were 6.3 ± 0.8 ng/mg tissue as compared with 5.0 ± 0.8 ng/mg tissue in the controls (P > 0.05). The levels of campesterol in the same area of the brains of the AD patients were 6.2 ± 0.8 ng/mg tissue as compared with 5.0 ± 0.8 ng/mg tissue in the controls (Fig. 3B, P > 0.05).
The levels of sitosterol in brain homogenates from the occipital cortexes of the AD patients were 7.2 ± 2.8 ng/mg tissue as compared with 6.5 ± 1.6 ng/mg tissue in the controls (Fig. 4A, P > 0.05). The levels of campesterol in brain homogenates from the same area were 6.9 ± 2.7 ng/mg tissue in the AD patients as compared with 7.5 ± 2.6 ng/mg tissue in the controls (Fig. 4B, P > 0.05).
Thus, there were no significant differences between AD patients and controls with respect to the levels of plant sterols in the brain. Because the plant sterols are of exogenous origin, the results are consistent with an intact blood-brain barrier in the AD patients.
Lathosterol is a marker for cholesterol synthesis. As shown in Fig. 5, the levels of this steroid in the parietal cortex were 19 ± 4 ng/mg tissue as compared with 28 ± 2 ng/mg tissue in the controls (P < 0.05). In the brain homogenates from the occipital cortex, the corresponding levels were 19 ± 2 ng/mg tissue and 22 ± 3 ng/mg tissue (P > 0.05) (not shown). The results indicate a reduced synthesis of cholesterol in the parietal cortexes but not in the temporal cortexes of the AD patients.
DISCUSSION
The brain levels of 27OHC and 24OHC in the present control subjects were found to be almost identical to those of the control subjects studied previously using the same methodology ( 8 ). In that study, we reported that the levels of 27OHC were increased by 40-80% in the temporal, parietal, and occipital cortexes of sporadic AD patients. In the present work, we demonstrate that brain levels of 27OHC are increased by a factor of about four in AD patients carrying the Swedish APP 670/671 mutation. The APP 670/671 mutation on chromosome 21 in this Swedish family is considered to be the cause of the disease because it alters APP metabolism, leading to an accumulation of Aβ peptides and Aβ plaque formation (30-33). A history of AD has been traced through eight generations of this family, and the onset of the disease occurs between 44 and 61 years of age ( 34 ).
In our previous investigation, there was a slight (15-20%) decrease in the levels of 24OHC in the brains of the patients with the sporadic form of AD ( 8 ). In the present work, a similar decrease of these levels was observed in the temporal cortexes but not in the parietal cortexes of the APPswe patients. The level of CYP46A1 as measured by Western blotting was not significantly different between patients and controls. Due to the location of CYP46 in the neuronal cells, the loss of such cells in the brains of AD patients would be expected to be associated with a reduction in the level of 24OHC. On the other hand, we have shown that the loss of CYP46 in the neuronal cells may at least in part be compensated for by an abnormal expression of CYP46 in glial cells in the brains of patients with AD ( 35 ).
It is of interest to compare the present findings in patients with the familial form of AD with a transgenic mouse model for AD expressing the same Swedish APP mutation. Up to the age of 9 months, we found that APP23 transgenic mice have levels of 27OHC identical to those found in the control wild-type mice ( 8 ). At the ages of 12 and 18 months, there was a modest (20-30%) increase in the levels of 27OHC. This increase was associated with a similar increase in the levels of plant sterols in the brain. Because the latter increase is likely to be due to an increased permeability of the blood-brain barrier, it seems likely that the slight accumulation of 27OHC in the brains of the animal model is due to a less efficient blood-brain barrier rather than to a direct effect of the neurodegeneration. There are obvious differences between patients with an APP mutation and the corresponding mouse model. Part of the explanation for the lower accumulation of 27OHC in the brains of the APP23 mice may be the 4-fold lower levels of this oxysterol in the circulation as compared with the corresponding levels in humans. It may be speculated that this difference in circulating levels is related to a higher rate of metabolism in mice compared with that of man.
It is evident that the high accumulation of 27OHC in the brains of the patients with familial AD and a known pathogenetic mechanism causing the disease is a consequence rather than a cause of the neurodegeneration. However, given the properties of 27OHC, it cannot be excluded that the accumulation of this oxysterol may promote the disease process and that a vicious circle is created in which the neurodegeneration causes increased accumulation, which may further exacerbate neurodegeneration. The considerably higher accumulation of 27OHC in the brains of the patients with an autosomal dominant disease-causing mutation may reflect the early and more clinically aggressive disease progression in these patients compared with that of sporadic AD ( 36 ). The age of the patient at the time of death may be of importance for the accumulation of 27OHC, and the present patients were relatively young (mean age 63 years). It may be mentioned, however, that in our previous studies on brain autopsy samples from patients between 66 and 90 years of age (mean age 82 years) there was no correlation between the age at the time of death and the accumulation of 27OHC ( 8 ).
The reason for the increased accumulation of 27OHC in the AD patients bearing the Swedish APP mutation is not likely to be increased permeability of the blood-brain barrier, since there was no significant accumulation of plant sterols in the brains of the patients. Because the protein level of the sterol 27-hydroxylase enzyme (CYP27) was similar in the brains of the AD patients and the controls, a local production of 27OHC in the brains of the AD patients is also excluded. The most likely explanation seems to be a reduced level of CYP7B1, the neuronal enzyme responsible for metabolism of 27OHC. It has been reported previously that the levels of CYP7B1 are reduced in the brains of patients with AD ( 11 ). A number of attempts in our laboratory to measure CYP7B1 by Western blotting in human brain samples using different antibody preparations have failed thus far. We found a reduced ratio between 5-cholestene-3β,7α,27-triol and 27OHC in the brains of the APP 670/671 AD patients. The absolute levels of the product were, however, similar or higher in the brains of the AD patients. The data are consistent with, but do not prove, a reduced activity of CYP7B1 in the AD patients.
Recently, it was shown that a small subgroup of patients with hereditary spastic paresis (HSP SPG5) have a mutation in the gene coding for CYP7B1 ( 37 ). As a consequence, these patients have high levels of 27OHC in plasma and CSF ( 38 ), and most probably also in the brain. Whether or not the accumulation of 27OHC in these patients is part of the pathogenesis is not known at this stage. Interestingly, most of these patients do not develop neurodegeneration, but a more detailed characterization is lacking. The possibility has been discussed that motor neurons may be more sensitive to 27OHC than other neuronal cells, but definite evidence for this is lacking ( 38 ).
In summary, patients with the present familial form of AD have a 4-fold accumulation of 27OHC in the brain, considerably higher than the corresponding accumulation in the brains of patients with the sporadic form of the disease and in the brains of a transgenic mouse model for AD. Here, we provide evidence that the accumulation is a consequence of the neurodegeneration. We discuss the possibility of a vicious circle in the brains of patients with AD whereby neurodegenerative changes cause an accumulation of 27OHC that further accelerates neurodegeneration.
Recommendations From a Chinese-Language Survey of Knowledge and Prevention of Skin Cancer Among Chinese Populations Internationally: Cross-sectional Questionnaire Study
Background: There is a paucity of studies assessing awareness and prevention of skin cancer among Chinese populations. Objective: The aim of the study is to compare attitudes and practices regarding skin cancer risks and prevention between Chinese Asian and North American Chinese populations and between Fitzpatrick scores. Methods: A cross-sectional, internet-based, 74-question survey in Chinese was conducted, focusing on Han Chinese participants internationally. The survey included Likert-type scales and multiple-choice questions. All participants were required to read Chinese and self-identify as being 18 years or older and Chinese by ethnicity, nationality, or descent. Participants were recruited on the internet over a 6-month period from July 2017 through January 2018 via advertisements in Chinese on popular social media platforms: WeChat, QQ, Weibo, Facebook, and Twitter. Results: Of the 113 completed responses collected (participation rate of 65.7%), 95 (84.1%) were from ethnically Han Chinese respondents, of whom 93 (96.9%) were born in China and 59 (62.1%) were female. The mean age of these 95 participants was 35.8 (SD 13.3) years; 72 (75.8%) participants were born after 1975. Annual skin checks were rare, but more North American Chinese than Chinese Asian participants reported receiving them (4/30, 4.2% vs 0/65, 0%; P=.009) and believed that their clinician provided adequate sun safety education (13/30, 43.3% vs 15/65, 23.1%; P=.04). Participants with higher Fitzpatrick scores less frequently received sun safety education from a clinician (4/34, 11.8% vs 22/61, 36.1%; P=.02). More participants with lower Fitzpatrick scores used sunscreen (41/61, 67.2% vs 16/34, 47.1%; P=.05), but rates of alternative sun protection use were similar across groups. Conclusions: Cultural differences and Fitzpatrick scores can affect knowledge and practices with respect to sun protection and skin cancer among social media-using Chinese Asian and North American Chinese communities. Most participants in all groups understood that people of color have some risk of skin cancer, but >30% of all groups across regions and Fitzpatrick scores were unaware of current skin protection recommendations, received insufficient sun safety education, and did not use sunscreen. Outreach efforts may begin broadly with concerted public and private efforts to train and fund dermatologists to perform annual total body skin exams and provide more patient education. They should spark community interest through mass media and empower Chinese people to perform self-examinations and to recognize risks and risk mitigation methods.
China is the most populous country, with approximately 1.4 billion people; Han Chinese is the largest ethnic group native to China, making up 92% of the Chinese population and 19% of the global population. Despite the attention to skin in the market for skin products and treatments among Chinese consumers, skin cancer incidence rates continue to increase in Chinese populations [14,16-23].
One study noted that, between 2004 and 2011, the overall incidence of melanoma in China increased from 0.4/100,000 to 0.48/100,000 [21]. A study of long-term trends of skin malignant melanoma in China between 1990 and 2019 reported annual incidence net drifts of 3.523% and 3.779% for males and females, respectively [23]. In tandem with China's rapidly aging population, the age-standardized incidence rate of melanoma in China has increased as well [23], rising by 110.3% from 1990 to reach 0.9/100,000 in 2017 [17]. Among the Chinese population of Hong Kong, between 1990 and 1999, the incidence of basal cell carcinoma increased from 0.32/100,000 to 0.92/100,000, and the incidence of squamous cell carcinoma from 0.16/100,000 to 0.34/100,000 [14]. In Singapore, between 1968 and 2016, the age-standardized incidence rate of cutaneous basal cell carcinoma among ethnically Chinese people increased from 2.7/100,000 to 6.9/100,000 [22].
Nonetheless, compared to the wealth of research on skin cancer risks, incidence, awareness, and prevention among Western White and people of color populations [24-29], there is a paucity of publications concerning Chinese populations. A recent review identified only 9 papers across numerous Western and Chinese English-accessible databases that studied knowledge, attitudes, beliefs, and behaviors related to skin cancer and sun protection in China [30]. In Chinese, a limited collection of peer-reviewed and non-peer-reviewed research has been published in the past decade [31-37]. Chinese people internationally are often [8,10,28,38], but not always [16,27,39], grouped into Asian and East Asian categories in studies related to skin cancer rates and behavioral risks, impeding extrapolation for subgroups. Only 2 studies were identified comparing the perspectives on the risks of skin cancer among specific ethnic Chinese populations from different sociocultural backgrounds [38,40].
In this study, Han Chinese participants were recruited internationally through social media and surveyed anonymously in simplified Chinese about their perspectives on skin cancer. Trends and gaps in knowledge of risk factors and preventative measures were identified and used to determine the educational measures needed for developing future interventions with patients, educators, and providers.
Participant Recruitment
Participants were recruited on the internet over a 6-month period via advertisements in Chinese on popular social media platforms: WeChat (Tencent Holdings Limited), QQ (Shenzhen Tencent Computer System Co, Ltd), Weibo (Sina Corporation), Facebook (Meta Platforms, Inc), and Twitter (Twitter, Inc). All participants were required to read Chinese and self-identify as being 18 years or older and Chinese by ethnicity, nationality, or descent. No financial compensation was provided.
Survey Details
An internet-based survey was adapted into Chinese from an English survey used in previous studies for assessing current knowledge and management of skin cancer in English-speaking populations [41][42][43][44]. The survey contained 74 questions in Chinese (Multimedia Appendix 1) and took approximately 30 minutes to complete. The University of Central Florida Institutional Review Board approved the Chinese-adapted version. Participants completed the survey on SurveyMonkey cloud-based software (SVMK Inc). Survey contents and results are reported in English.
Data Analysis and Visualization
Data analysis and visualization were completed in Excel (Microsoft Corporation) and R (R Foundation for Statistical Computing). Comparisons were made with chi-square tests (for query responses with all counts of 10 or greater) and Fisher exact tests (when at least one count was less than 10), between the responses of Chinese participants in Asia (group 1) and those of Chinese participants in North America (group 2), and between those with modified Fitzpatrick scores ≤14 (modified Fitzpatrick group 1 [FG1]) and those with modified Fitzpatrick scores ≥15 (modified Fitzpatrick group 2 [FG2]).
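The test selection rule described above can be expressed compactly. The authors worked in Excel and R; the following Python/scipy version is only an equivalent illustration, and the example counts are derived from the sunscreen-use rates reported later in this paper (41/61 vs 16/34).

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_groups(table):
    """Pick the test as described above: chi-square when all cell counts
    are 10 or greater, Fisher's exact test when any count is below 10.
    table is a 2x2 contingency array of response counts."""
    table = np.asarray(table)
    if (table >= 10).all():
        _, p, _, _ = chi2_contingency(table)
        return "chi-square", p
    _, p = fisher_exact(table)
    return "Fisher exact", p

# Illustrative counts: sunscreen users vs non-users in FG1 and FG2.
print(compare_groups([[41, 20], [16, 18]]))
```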
Due to the relative homogeneity in ethnicity, hair color, and eye color among ethnically Chinese people, scores were determined as a summation of points from questions modified to be more specific to skin phototyping for Chinese skin types [45] and more granular than the conventional Fitzpatrick scoring system:
1. How dark is your skin normally? (0=albino; 7=very dark brown)
2. How dark do you get if you tan? (0=albino; 7=very dark brown)
3. How easy is it for your skin to tan? (0=never tans; 7=always tans)
4. How easy is it for your skin to sunburn? (0=always sunburns; 7=never sunburns)
Since the maximum possible score was 28, half of the maximum (14) was chosen as the dividing value between FG1 and FG2. In short, participants in FG1 had paler skin or experienced more sensitivity to burning than their FG2 counterparts.
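A minimal sketch of the scoring and grouping just described is shown below; the function name is ours, and the example answers are hypothetical.

```python
def modified_fitzpatrick_group(answers):
    """Sum the four 0-7 phototype answers described above and assign the
    group split used in this survey: scores <= 14 -> FG1, >= 15 -> FG2."""
    if len(answers) != 4 or not all(0 <= a <= 7 for a in answers):
        raise ValueError("expected four answers scored 0-7")
    score = sum(answers)
    return score, ("FG1" if score <= 14 else "FG2")

# Example: fairly light skin that tans poorly and burns easily.
print(modified_fitzpatrick_group([2, 3, 2, 2]))   # (9, 'FG1')
```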
Ethical Considerations
This study was reviewed and exempted by the UCF Institutional Review Board (IRB No. SBE-17-12900).
Results
Participant characteristics are summarized in Table 1, and comparisons between the regional groups are presented in Table 2. A modified Fitzpatrick scale was used to determine whether the variable darkness of skin among Chinese people contributes to skin cancer risks and perceptions; these comparisons are presented in Table 3. Among all participants, 37 (39%) reported not using any sunscreen; the primary cited reasons for not doing so are summarized in Table 4. The use of alternative options to sunscreen is shown in Table 5 (participants could select more than one option as listed).
Discussion
Overview
Skin lightening is a multibillion-dollar industry among Chinese people. Despite Chinese culture's well-known and generally strong preferences for whiter, lighter-toned skin [46-48], limited research has been done on Chinese knowledge and practices with respect to sun protection and skin cancer.
Given that skin cancer incidence rates and mortality continue to increase among Chinese people [14,16-19,49], it is imperative to understand and identify optimal strategies that synergize with consumer interests for effective UV radiation protection. This study is the first to compare Chinese attitudes and practices between Chinese Asian and North American Chinese populations, as well as between modified Fitzpatrick scores.
In this study, most participants were Han Chinese, consistent with Chinese ethnic demographics. Most participants were born in China after 1975 (Table 1), the year of a generational paradigm shift from traditional to modern Chinese culture and economics. Thus, our findings and recommendations are focused on the perspectives and knowledge of younger generations, who are more highly educated and actively use social media, through which they were recruited. Consistent with the participants' age distribution, only one participant had a history of precancerous or cancerous lesions (Table 1). The incidence rate of skin cancer increases with age across races [2,18,50,51], and Chinese patients are more likely to be diagnosed with skin cancer after 40-60 years of age [17,52].
On the Risk Factors of Race and Ethnicity
While participants across all groups predominantly (>89%) believe that people of color (POC) can get skin cancer, most participants believe that Chinese people are less at risk than Caucasian people (Tables 2 and 3). North American Chinese people may believe more often than their Chinese Asian counterparts that their skin cancer risk is lower (P=.06, Table 2). However, there is no significant difference between FG1 and FG2.
The potential significance of geographic location could be linked to experience bias. The higher awareness of skin cancer among Chinese people in North America may be related to living in heterogeneous communities, in which non-Asian counterparts are subject to skin cancer. Nonetheless, there is consistent recognition across regions and Fitzpatrick scores that skin color does not guarantee immunity to skin cancer.
On Knowledge of Melanoma and Skin Care
Knowledge about skin cancer is limited in Chinese communities; only 50.0%-64.4% of participants could define melanoma as a type of skin cancer (Tables 2 and 3). Neither modified Fitzpatrick score nor geographic location yielded statistically significant differences in this lack of knowledge. Group 1 and group 2 have read the latest skin care recommendations at comparable rates (P>.99, Table 2). Acquiring knowledge of skin cancer risk may thus be more associated with interest and motivation than with resource access (Tables 2 and 3).
On Interest and Clinical Care Related to Skin Cancer
Across location and modified Fitzpatrick score groups, 37.7%-47.1% of respondents in each group lacked concern regarding the risk of skin cancer in their lifetime (Tables 2 and 3).
While most participants are either likely or very likely to see a clinician for a new lesion, participants consistently reported low rates of annual skin checks; significantly more annual skin checks occurred in North America than in Asia (P=.009, Table 2). This rate does not appear to be affected by the modified Fitzpatrick score (Table 3).
Most Chinese people across all groups had neither received sun safety education from their clinicians nor, when it was provided, been generally satisfied with it (Tables 2 and 3). However, within these findings, significantly more North American Chinese people felt satisfied with the education provided (P=.04, Table 2). Furthermore, significantly more participants in FG1 than in FG2 received sun safety education from clinicians (P=.02, Table 3).
On Tanning Practices
While no participants used sunbeds, outdoor sun tanning practices tended to be more popular among North American Chinese than Chinese Asian people (P=.08, Table 2), in agreement with the existing literature [38]. Consequently, Western clinicians should recognize this behavior and be proactive in initiating sun safety discussions with ethnically Chinese people living in North America. In Asia, the monitoring of trends should continue, and dedicated educational programs on sunbathing and tanning should be proactively implemented.
On Sun Protection Practices
Sunscreen use was reported by 47.1%-67.2% of participants across locations and modified Fitzpatrick score groups (Tables 2-4). Participants in FG1 may use sunscreen more frequently than their counterparts in FG2 (P=.05, Table 4).
Nonetheless, a sizeable minority do not use sunscreen. Efforts are needed to confirm that these individuals are using other forms of UV protection, such as hats, umbrellas, and sunglasses; in this study, 35 (37%) of all 95 Han Chinese participants stated that they used no form of UV protection at all (Table 4). It is furthermore pertinent to conduct additional surveys to confirm that sunscreen is being applied at appropriate intervals and in appropriate volumes. No significant differences concerning the lack of sunscreen use were found between group 1 and group 2, or between FG1 and FG2 (Table 4), suggesting similar viewpoints between groups.
It is worthwhile to note that some people reported a lack of knowledge of correct sunscreen use (Table 4). Perhaps sunscreen manufacturers could add QR codes linked to instructional videos for proper sunscreen application to their products [53].
In terms of alternative methods of sun protection, group 1 and group 2 used wide-brimmed hats and long-sleeved clothing at similarly high rates (39/65, 60% vs 19/30, 63%; Table 5). Sunglasses were used for sun protection by a much higher proportion of group 2 than of group 1 (24/30, 80% vs 35/65, 54%; P=.02, Table 5). This statistically significant difference is understandable given the popularity of sunglasses in 20th-century North American culture [54]. Although sunglasses are growing in popularity, their use for sun protection remains minimal in Asia [30,55-57]. On the other hand, "sun umbrellas" are frequently used among Chinese Asian people, especially Chinese Asian women, to maintain a white complexion [30,55-58].
Between FG1 and FG2, the rates of protective clothing, sunglasses, and umbrella use were similar (Table 5). As such, modified Fitzpatrick scores are less likely to affect sun protection practices beyond sunscreen.
Closing the Educational Gap
It is imperative to educate and motivate Chinese communities to intervene in the growing severity and incidence of skin cancer. Given the similarities in responses between groups, it is not unreasonable to begin with a standard guide, translated into various languages, and a methodology for addressing skin cancer knowledge and behavior between clinicians and Chinese patients. Effective dissemination of educational messages can be achieved via social media and other forms of mass media [30,59]. Additional research should be conducted to identify viewpoints shared among participants and to develop effective media-based outreach for skin cancer prevention campaigns, which may be accomplished using a method like that of Shi et al [60].
Moreover, Chinese communities have expressed interest in skin exams and increased breadth and depth of sun safety education. Efforts should be made in dermatology residency programs internationally to emphasize skin cancer risks, signs, and symptoms across all skin types, including Chinese skin types; review specific techniques for skin protection to aid in patient education; and train residents to complete total body skin exams (TBSEs). We recommend that annual TBSEs be conducted by a dermatologist.
Current screening guidelines complicate this recommendation: the US Preventive Services Task Force states that there is insufficient evidence to determine the effectiveness of visual skin exam screenings in US patients without obvious related signs or symptoms [61]; however, the methods behind this recommendation have been extensively critiqued [62], and notably, the conclusions are based on data that pool primary care clinicians with dermatologists without a direct means of comparing screening accuracy [61]. Recommendations from organizations in other regions vary from no recommendation, to self-examination twice a year for specific high-risk populations, to 2-year screening intervals for all individuals from the age of 35 years onward [63,64]. Nevertheless, emerging data demonstrate that TBSEs conducted by dermatologists are a low-cost and efficacious screening tool for detecting skin cancer [65], and they detect skin cancer at significantly higher rates than partial skin exams; for malignant melanoma, it has been suggested that a dermatologist-conducted TBSE is 23.5 times more likely to identify a malignant lesion than a Pap smear is to identify a cervical cancer lesion [66].
Differences in health care systems provide another challenge to implementing TBSEs. In China, traditional Chinese medicine (TCM) is practiced alongside Western medicine, each with its own set of diagnostics, interpretations, therapeutic principles, and treatments [67,68]. Different logic systems are in place, including for cancer, with some analogous language and principles [69]. Both forms of medicine recognize the value of preventive medicine through early detection and treatment [67][68][69]; this shared perception should be used to adequately reach out to Chinese communities in China and abroad that preferentially rely on TCM. Collaborating with TCM universities to reconcile and integrate knowledge related to skin cancer risks, prevention, and screening into their curricula, and improving the cultural competency of allopathic clinicians so that they can draw parallels to TCM concepts, will improve the care and patient education of TCM patients [68,[70][71][72][73][74].
Furthermore, China's multitiered health care system is intended to coordinate primary health care, provided by general practitioners, with secondary and tertiary care at hospitals, where more complex levels of care and more resources are available at higher-tiered hospitals. Currently, primary health care services in China see limited use, with hospitals used preferentially for medical services [75]. The type of first-visit hospital and socioeconomic status have also been shown to significantly affect the time to diagnosis of melanoma [76]. In addition to ongoing reform efforts, one way to address these limitations would be to fund annual TBSEs by dermatologists as part of routine primary care, which would lessen the financial burden and incentivize more patients to receive timely screenings.
Supplementarily, self-examination techniques should be taught through private and public health organizations and conducted at regular intervals appropriate to individuals' genotypic, phenotypic, and environmental risk levels; all communities should be encouraged to seek clinical evaluation for lesions identified by tools such as the ABCDE rule (asymmetry, border irregularity, color nonuniformity, diameter >6 mm, and evolution) or the "ugly duckling" sign. Resources for POC to recognize their risks of malignancy and methods to protect against UV radiation, such as how to properly apply sunscreen, should be commonplace; some examples can be found on the American Academy of Dermatology [77], American Cancer Society [78], and other organization websites. Mind the Gap, an extensive open-source handbook, compiles clinical signs in black and brown skin [79], and such efforts further broaden understanding and awareness among patients, educators, and clinicians. A selection of sun safety and skin cancer-related resources is available within China from government and nongovernment sources, some of which are highlighted in Multimedia Appendix 2 [80].
For the individual patient, culture, phenotype, and lifestyle can significantly influence responses at every step of the process, from information intake to application. All of these factors should therefore be considered in individualized educational programs and in clinicians' care for Chinese patients, both in Asia and in North America.
Limitations and Future Directions
Aside from the limitations of recall bias inherent to survey-based research, future comparisons of groups by demographics of sex, age, and level of education would elucidate further stratifications of attitudes and practices and may suggest ways of tailoring educational programs more specifically for individual patients [19,30,55,81]. While these demographic data were collected, their distributions were insufficient to support meaningful inferences, except that our findings and recommendations are primarily directed toward Chinese populations that use Chinese-language social media. At the time of surveying, Chinese-Asian participants were on average 32.1 years of age, 8.5 years younger than the North American Chinese participants at 40.6 years of age. Although this age difference is a limitation of our study, individuals in these groups are not devoid of skin cancer risks. Future studies in other languages, as discussed below, will better encapsulate younger populations.
Contextual exposure to UV radiation was not accounted for in this study. Though it certainly influences sun protection practices and the risk of skin cancer, everyone has a non-negligible exposure risk to UV radiation. Among melanoma cases, the most common subtypes in China are acral and mucosal, followed by superficial spreading [6,64,82]. While UV radiation damage is primarily identified as the etiology of superficial spreading melanoma and has only been associated with a subset of mucosal and acral melanomas [82][83][84], reducing the risk of UV-induced damage would nonetheless relieve the disease burden of skin cancer among the large population of Chinese people worldwide.
We consequently plan to expand our survey questions and recruit more participants to gain further insight into awareness of, exposure to, and behavior related to vocational and avocational UV radiation risks and their effects on skin health [36,37,[85][86][87][88]. We then plan to develop community-specific best-practice recommendations, adapted from existing methods [89], to mitigate these exposure risks. These include occupational and public health policies for communication, training, and protective equipment, with use of these materials encouraged by making them conveniently accessible, informed by survey responses. Depending on the responses, we will recommend different interventions for various situations, such as onboarding training, protective clothing appropriate to the climate and conditions, shade structures, and sunscreen dispensers placed in locations frequented by workers in a given industry; based upon the findings of Walkosz et al [89], these types of interventions will likely reduce the incidence of sun damage. Regular targeted free awareness-raising and screening events, following the structure detailed by programs such as the American Academy of Dermatology "SPOT Skin Cancer" initiative [90,91], would also benefit populations with an identified high risk of skin cancer.
Given the population size of Han Chinese people and their diaspora across the globe, new surveys will capture additional demographics and clarify regional geographical differences in cancer incidence and burden across different Chinese provinces [17,92]. Maintaining the criterion of Chinese ethnicity while including different translations of the survey would provide better insight into the effects of geographical and cross-cultural differences. Thus, surveys should be available in both traditional and simplified Chinese, as well as in the languages of countries that currently have the largest overseas Chinese populations, including but not limited to English, Russian, Spanish, French, Italian, Indonesian, Thai, and Malay [93].
Furthermore, we will collaborate with more Chinese dermatology researchers and clinicians to expand our outreach. This collaboration would facilitate surveying more participants born prior to 1975, allowing us to compare viewpoints between generations.
Conclusions
In conclusion, our Chinese-language survey was used to assess and compare Han Chinese attitudes and practices related to skin cancer risks and prevention. We identified manifestations of cultural differences between Chinese Asian and North American Chinese communities that use social media, and we determined that opinions and behaviors among Han Chinese people may differ by modified Fitzpatrick score.
From our findings, we proposed several aims for educational programs by clinicians and health care organizations in Asia and North America for the largest ethnic group in the world. Through a collective and adaptive effort across all levels of health care, knowledge and practices with respect to sun protection and skin cancer among Chinese populations globally can be improved.
The relevance of manufacturing flexibility in the context of Industrie 4.0
Manufacturing companies have to withstand growing global competition on different strategic dimensions like production costs, product quality and product innovation. To cope with this increased competition, companies in high-wage countries often employ a differentiation strategy to meet individual customer needs, as it becomes increasingly challenging to justify higher production costs through superior product quality alone. Manufacturing flexibility as a strategic orientation has been discussed in the engineering and management literature for several decades, with growing interest in the recent past. As a result of this development, the scientific literature has covered a multitude of topics, including flexibility as a reactive and proactive strategy. This paper summarizes the different research streams associated with production flexibility, building on the groundbreaking work of Donald Gerwin [1], who introduced flexibility as a strategic perspective and developed a framework that illustrates the relationship between manufacturing strategy, environmental uncertainty and methods for delivering flexibility. To the best of our knowledge, this is the first study to identify the relationships between flexibility and performance by systematically charting empirical findings from the literature and to link this development to the advancements of manufacturing schemes of Industrie 4.0. We use our findings to situate the literature stream of production and manufacturing flexibility within the framework of Industrie 4.0 proposed by Schuh et al. [2]. The relevance of the discussed relationships is verified with different research groups in the Cluster of Excellence "Integrative Production Technology in High-Wage Countries" of RWTH Aachen University.
Introduction
Flexibility is widely accepted as one of the four operational capabilities of a firm, alongside quality, dependability and cost [3]-[5]. While quality has been the top priority of German manufacturing firms for a long time, flexibility has recently gained increasing attention. A persistent trend of globalization intensifies competitive pressure, while significant market fluctuations and increasing demand for individualized products prevail. Increasingly heterogeneous markets, accompanied by shorter product lifecycles, force companies to provide great product variety while at the same time maintaining excellent product performance at low costs [6].
One of the biggest challenges within these boundaries is sustainable competitiveness. The complementary theories of Cumulative Capabilities and Trade-Offs within the Operations Management field provide a solid basis for analyzing the strategic importance of flexibility for producing companies [7], [8]. Recent developments in production technology increase producing companies' focus on means of flexibility and question the boundaries of traditional production theory. Our approach of integrating the key aspects of Industrie 4.0 into the development of production theory shows an increased need to advance our understanding of flexibility in the production context. This paper aims at presenting a comprehensive overview of the development of the flexibility literature in Operations Management. In a second step, the authors transfer the key findings to the implementation of iterative product lifecycles enabled by the technological advancements of Industrie 4.0. The paper closes with a discussion and concluding remarks.
Theoretical foundation
Unlike other research fields, Operations Management does not build on a common set of theories of its own; rather, it applies theoretical frameworks from adjacent research fields like management science, microeconomics and the natural sciences [8]. One of the major frames used to derive theory in Operations Management research is the Resource Based View (RBV), which states that valuable, rare, inimitable, and non-substitutable resources can lead to competitive advantage [9], [10]. In the corporate context, a firm controls a variety of resources like organizational processes, capabilities, assets, knowledge, attributes etc. to implement strategies and to secure its competitive position [11], [12]. According to the contingency theory, which builds on the RBV, the optimal set of a corporation's resources depends on its specific internal setup and environment [13]. This assumption challenges neoclassical production functions [4], as it claims that there is no universal function that holds for all eventualities and can be applied to different companies [14]. In the Operations Management literature, two conflicting research streams dealing with cumulative capabilities and trade-offs have evolved [8]. The theory of trade-offs, also known as the theory of competitive capabilities, is based on the observations of Wickham Skinner that there are trade-offs to be made in designing or operating a production system [15], [16]. A given state of technology defines an outer boundary within which a production system can operate. Hence, one system cannot provide the highest level in various operational dimensions like quality, flexibility, delivery and cost at the same time [17]. Especially when decisions for initial designs of green-field plants are made, several dimensions conflict with each other, such as the flexibility to produce high product variety versus cost effectiveness, or the implementation of lean versus agile manufacturing systems [15]. As a result of these trade-offs, the manufacturing strategy should always be aligned with the strategy of the overarching corporation [18].
In contrast to the theory of trade-offs, the theory of cumulative capabilities describes the achievement of high performance in multiple capabilities at the same time, because the simultaneous pursuit of capabilities can lead to superior overall performance [8], [19]. This theory is based on the observation that certain manufacturing plants outperform their rivals in multiple dimensions at the same time. The underlying assumption is that improvements in certain manufacturing capabilities are a prerequisite for further improvements in other capabilities [4]. The widely cited sand cone model (Fig 1) of Ferdows and de Meyer suggests quality as a foundation for all other capabilities, as less rework and waste facilitate delivery dependability, speed and cost efficiency [4]. Whereas the supporting influence of quality and dependability on speed and cost has been the subject of several empirical studies, it is questioned whether there is a cumulative relationship between cost and flexibility performance [14]. The theory of performance frontiers (Fig 2), also referred to as the theory of production function frontiers, subsumes the laws of cumulative capabilities and trade-offs. In contrast to the production functions or frontiers known from neoclassical production theory, the theory of performance frontiers differentiates between performance and asset frontiers. Further, the dimensions of product variety, quality and cost effectiveness are incorporated as outputs. The performance frontier is defined as the maximum performance achieved by a manufacturing unit given a set of operating choices, whereas the asset frontier represents the technical maximum output of a system [11]. The shape and position of the performance frontier are affected by the firm's business policy and thereby depend on the business environment and available resources. On a given performance frontier, companies can only improve one dimension by trading off a degradation of another. Hence, improvements result from a firm's choices about its competitive priorities and are bound by the unique resources and environment specific to that firm [12]. In order to escape these trade-offs, the asset frontier needs to be advanced by introducing new production technology. Integrated machines or additive production technologies enable radically short value chains and make iterative product lifecycles feasible [20], [21].
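To make the frontier logic concrete, the following is a toy formalization of our own (not taken from the cited works), with flexibility f and cost efficiency e trading off along a circular frontier:

```latex
% Toy trade-off frontier: flexibility f and cost efficiency e on circles.
% r_P is set by operating choices (performance frontier); r_A by the
% installed technology (asset frontier). All symbols are illustrative.
\[
  f^{2} + e^{2} = r_{P}^{2} \quad (\text{performance frontier}), \qquad
  f^{2} + e^{2} \le r_{A}^{2} \quad (\text{asset frontier}), \qquad
  r_{P} \le r_{A}.
\]
% Along a fixed performance frontier, one dimension can only improve at
% the expense of the other (the trade-off):
\[
  \left.\frac{\mathrm{d}e}{\mathrm{d}f}\right|_{f^{2}+e^{2}=r_{P}^{2}}
  = -\frac{f}{e} \;<\; 0 .
\]
% New production technology (e.g., additive manufacturing) enlarges r_A,
% so f and e can improve simultaneously, escaping the trade-off.
```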
Development of flexibility literature
There has been growing interest in flexible manufacturing and mass customization in recent years [22]-[24]. The literature has focused on various aspects of the topic like definitions of manufacturing flexibility, dimensions [25]-[27], classifications and taxonomies [28], [29], measurement of flexibility [30], the relationship between uncertainty, flexibility and performance [3], [31], [1], or the dimension of supply-chain flexibility [32], [33]. Groundbreaking contributions to the topic of manufacturing flexibility have been made by Gerwin [1] through the development of a conceptual framework based on the proposal of Swamidass and Newell [3], who introduced flexibility as a manufacturing strategy. Gerwin shows the interdependencies between environmental uncertainty, manufacturing strategy, manufacturing flexibility, and performance [1] (Fig. 3). Manufacturing strategy is the core of the framework, as it is the place where the other elements are put into context. Swamidass and Newell define manufacturing strategy as the effective use of manufacturing strengths as a competitive weapon for the achievement of business and corporate goals [3]. Generally, the manufacturing strategy literature comprises the following four dimensions: cost, quality, flexibility and dependability [3], [5].
Figure 3: Conceptual Framework, adopted from Gerwin [1].
In the context of his framework, Gerwin introduces three generic strategies, classified by whether they are defensive or proactive in use. Adaptation, the reactive use of flexibility, refers to the adjustment of the manufacturing setup as a reaction to perceived uncertainty, meaning that more random changes in the environment lead to higher investments in the abilities to vary the production process. A proactive approach, however, is the redefinition of market uncertainties, meaning that a company can try to "bend the environment to its will" [1]. A company can, for example, create additional uncertainties for its competitors by making their customers accustomed to a faster introduction of new products. The third strategy is the reduction of uncertainties, leading to a decreased need for flexibility. Such a reduction of uncertainties can be achieved, for example, through long-term contracts with customers, with the effect of lower fluctuations in demand, or through preventive maintenance. While Gerwin [1] identified these different strategies, the flexibility literature stream focuses on the two aspects of reactiveness and adaptation (Fig 4). Building on Gerwin's theoretical framework, several researchers have composed empirical investigations of manufacturing flexibility as a strategy to cope with environmental uncertainty. Pagel and Krause suggest, on the basis of empirical data, that the key driver for flexibility is not environmental uncertainty but rather the products a company produces [31], [34]. More recent results from a study analyzing flexibility in a supply chain context, however, suggest, in line with Gerwin's framework, that companies should achieve a fit between their flexibility strategy and the level of environmental uncertainty in order to reach high levels of performance [35]. Wong et al. provide empirical evidence that environmental uncertainty positively moderates the relationship between several integration practices and production flexibility [36].
Criticizing that studies like Gerwin's solely focus on the relationship between environmental uncertainty and manufacturing flexibility, Vokurka and O'Leary-Kelly recommend expanding the scope of analysis to three additional variables influencing manufacturing flexibility, namely strategy, organizational attributes and technology [24]. The latter implies that technological advancements can be used to provoke a shift in customers' expectations towards the degree of flexibility of the goods offered. Feigenbaum and Karnani suggest that flexibility is an advantage especially of small firms [37], meaning that smaller companies can realize greater performance gains as a result of output flexibility than large corporations. In order to increase their benefit, large corporations can assimilate certain elements of process management from small companies [38].
According to Upton, there is a significant link between the vintage of process technology, the experience of workers and manufacturing flexibility, suggesting that production flexibility can be increased by technology upgrades and the employment of qualified staff [39]. Kuzgunkaya and ElMaraghy give a comprehensive overview of the economic implications of flexible manufacturing systems [40]. Further, there is a positive relationship between production and supply chain flexibility, operational performance and firm performance [41], [42]. The effect of flexibility on performance, however, depends on the fit between manufacturing flexibility and the overall strategic orientation. Parthasarthy and Sethi claim that the impact of flexibility on performance is larger if it is incorporated as part of the company's strategy, and Chang et al. suggest, in accordance with the trade-off theory, that companies have to derive flexibility means in line with their strategic positioning [43], [44]. In the year 2000, Sawhney proposed a framework in which the two strategies of adaptation and redefinition can be employed simultaneously to counter uncertainty and open up opportunities at the same time [45] (Fig 5). Wong et al. built on this framework and introduced the aspect of supply-chain integration to the research field of production flexibility [36]. According to their empirical findings, both supplier integration and customer integration lead to increased production flexibility performance. Further, they show evidence that this relationship is positively moderated by environmental uncertainty, suggesting that companies aiming to provide high product variety as a means to cope with uncertainty need to integrate both with their suppliers and with their customers.
Iterative product lifecycles to drive flexibility in Industrie 4.0
Advanced manufacturing techniques allow radically short lifecycles and an intensified customer orientation with individualized products [21]. For many years, additive manufacturing technology was used in the context of rapid prototyping, where products are not produced for end customers but rather for use in an internal product development process [46]. However, in the recent past, the employment of additive manufacturing techniques has developed from rapid prototyping towards rapid manufacturing as a result of increasing process rates and thereby falling marginal production costs [47]-[49]. The advancement of these technologies lays the ground for iterative development processes based on incremental product adjustments. With additive manufacturing technologies, products can be developed on the basis of existing parts with the aid of digital design tools such as 3D scanners, or altered by the simple variation of digital drawings [50]. Weller et al. give a comprehensive overview of the economic impact of additive manufacturing technologies based on a payoff function [20]. In comparison to the traditional deterministic product lifecycle, consisting of the phases development, introduction, growth, maturity, and decline, an iterative development process includes an evaluation phase with the possibility to integrate with customers and thereby gather field data (Fig 6) [51], [52]. The iterative product lifecycle is divided into several steps called macrotacts, each consisting of two development phases, namely product conceptualization and product and process design, and a market entry step. The stages of growth, maturity, and decline are seen as intermediate steps of the evaluation phase and as a pre-step of the next macrotact. Iterative and agile product development processes increase development productivity and allow companies to handle high complexity under uncertainty [53]. The approach of developing products on the basis of customer feedback is very similar to the lean startup approach, known to provide high productivity with very limited resources [54]. According to the reference system of collaboration productivity, both return on engineering and return on production are positively affected by the iterative product lifecycle [2]. Additive manufacturing technologies, as a basis for the iterative approach, reduce the cost of producing a minimum viable product to enter the market for data generation and thereby increase the return on engineering. Implementing the gained market knowledge and experience into further product variants enables gains in the return on production, as wastage is reduced to a minimum.
This transformation towards iterative product lifecycles has several implications both for reactive and for proactive flexibility strategies as proposed by Gerwin [1]. The establishment of the evaluation phase as an integral part of the product lifecycle decreases the risk of missing market trends and thereby increases responsiveness to changing market needs. Ideally, there is a constant assimilation of market-based feedback in the sense of customer integration, which allows companies to direct their manufacturing processes towards market changes [36]. Further, the employment of agile technologies like additive manufacturing can be seen as a proactive flexibility strategy. The employment of additive manufacturing techniques has significant effects on the cost structure of manufacturing companies, as it enables companies to develop product varieties at very limited marginal costs [20]. According to Gerwin's framework, this change in cost structure can be used to apply pressure on competitors in the sense of redefining the product lifecycles and the degree of product individualization that customers in a specific market are used to [1].
Discussion
We have shown that the Operations and Production Management field offers a broad body of literature dealing with the strategic relevance of flexibility. In particular, we have presented the development of a research stream dealing with reactive and proactive flexibility strategies, based on a framework proposed by Gerwin in 1993 [1]. In high-wage countries, where quality and dependability merely function as order qualifiers, the dissolution of the flexibility-cost dilemma is of vital strategic importance. The theory of performance frontiers illustrates the need to move the asset frontier to break free of the trade-off between these two dimensions. With additive manufacturing techniques as keystones of Industrie 4.0, in combination with iterative development processes, we present an approach to employ manufacturing flexibility as both a reactive and a proactive manufacturing strategy.
Cariporide Attenuates Doxorubicin-Induced Cardiotoxicity in Rats by Inhibiting Oxidative Stress, Inflammation and Apoptosis Partly Through Regulation of Akt/GSK-3β and Sirt1 Signaling Pathway
Background: Doxorubicin (DOX) is a potent chemotherapeutic agent with limited usage due to its cumulative cardiotoxicity. The Na+/H+ exchanger isoform 1 (NHE1) is a known regulator of oxidative stress, inflammation, and apoptosis. The present study was designed to investigate the possible protective effect of cariporide (CAR), a selective inhibitor of NHE1, against DOX-induced cardiotoxicity in rats. Methods: Male Sprague-Dawley rats were intraperitoneally injected with DOX to induce cardiac toxicity and CAR was given orally for treatment. The injured H9c2 cell model was established by incubation with DOX in vitro. Echocardiography, as well as morphological and ultra-structural examination were performed to evaluate cardiac function and histopathological changes. The biochemical parameters were determined according to the manufacturer’s guideline of kits. ROS were assessed by using an immunofluorescence assay. The serum levels and mRNA expressions of inflammatory cytokines were measured by using ELISA or qRT-PCR. Cardiac cell apoptosis and H9c2 cell viability were tested by TUNEL or MTT method respectively. The protein expressions of Cleaved-Caspase-3, Bcl-2, Bax, Akt, GSK-3β, and Sirt1 were detected by western blot. Results: Treatment with CAR protected against DOX-induced body weight changes, impairment of heart function, leakage of cardiac enzymes, and heart histopathological damage. In addition, CAR significantly attenuated oxidative stress and inhibited the levels and mRNA expressions of inflammatory cytokines (TNF-α, IL-6, IL-18, and IL-1β), which were increased by DOX treatment. Moreover, CAR significantly suppressed myocardial apoptosis and Cleaved-Caspase-3 protein expression induced by DOX, which was in agreement with the increased Bcl-2/Bax ratio. Also, DOX suppressed phosphorylation of Akt and GSK-3β, which was significantly reversed by administration of CAR. Furthermore, CAR treatment prevented DOX-induced down-regulation of Sirt1 at the protein level in vitro and in vivo. Finally, Sirt1 inhibitor reversed the protective effects of CAR, as evidenced by reduced cell viability and Sirt1 protein expression in vitro. Conclusion: Taken together, we provide evidence for the first time in the current study that CAR exerts potent protective effects against DOX-induced cardiotoxicity in rats. This cardio-protective effect is attributed to suppressing oxidative stress, inflammation, and apoptosis, at least in part, through regulation of Akt/GSK-3β and Sirt1 signaling pathway, which has not been reported to date.
INTRODUCTION
As an anthracycline drug, doxorubicin (DOX) is one of the most extensively used chemotherapeutic agents for the treatment of various cancers including leukemia, lymphoma, breast cancer, and other solid tumors (Rivankar, 2014). Unfortunately, despite its remarkable anticancer activity, the clinical application of DOX is markedly limited by its serious cardiotoxicity, which is characterized by electrocardiographic changes, cardiac arrhythmia, and irreversible degenerative cardiomyopathy (Smith et al., 2010). DOX-induced cardiotoxicity can occur immediately, within months, or even years after DOX treatment (Alkreathy et al., 2010). The exact molecular mechanisms responsible for DOX-induced cardiotoxicity are still not fully understood, and few drugs have been tested clinically to alleviate it. Numerous studies have convincingly shown that reactive oxygen species (ROS) generation, mitochondrial dysfunction, inflammation, apoptosis and various signaling pathways are crucial in DOX-induced cardiotoxicity (Mukhopadhyay et al., 2007; Meeran et al., 2019; Chen et al., 2020).
Sirtuin 1 (Sirt1), an NAD+-dependent histone deacetylase, plays important roles in multiple biological processes including longevity, stress response, and cell survival. It has been well established that Sirt1 is involved in redox regulation, cell apoptosis, and inflammation (Hwang et al., 2013), and the protein level of Sirt1 increased in response to DOX injection (Zhang et al., 2011). Some evidence supports the notion that the PI3K-Akt-GSK-3β signaling pathway is necessary for endoplasmic reticulum stress-induced Sirt1 activation (Koga et al., 2015), and a protective role of the PI3K/Akt signaling pathway has been found in DOX-induced cardiac dysfunction (Wen et al., 2019). The activation of the PI3K/Akt signaling pathway can suppress DOX-induced cardiomyocyte apoptosis. In particular, GSK-3β is a downstream effector of the PI3K/Akt signaling pathway and can lead to mitochondrial permeability transition pore opening and subsequent apoptosis (Townsend et al., 2007). As DOX still remains a mainstay of many chemotherapeutic regimens, further investigation of its cardiotoxicity and how to prevent it is warranted.
Cariporide (CAR) is a selective Na+/H+ exchanger isoform 1 (NHE1) inhibitor, which can significantly improve DOX sensitivity in a xenograft model, specifically enhancing tumor growth inhibition and reducing tumor volume (Chen et al., 2019).
CAR also reverses burn-induced intracellular Na+ accumulation and cell apoptosis through the PI3K/Akt and p38 MAPK pathways. Inhibition of NHE1 exerts potent cardioprotective effects against ischemia/reperfusion-induced heart injury through activation of the Akt/GSK-3β survival pathway (Jung et al., 2010). Additionally, gene inactivation of NHE1 attenuates transient focal cerebral ischemia-induced apoptosis and mitochondrial injury (Wang et al., 2008). NHE1 inhibition also ameliorates peripheral diabetic neuropathy, as well as alleviates atherosclerotic lesion growth and promotes plaque stability by inhibiting the inflammatory reaction (Li et al., 2014). Moreover, inhibition of NHE1 by its inhibitor amiloride significantly enhances the intracellular accumulation of DOX in DOX-resistant human colon cancer cells and thereby increases their sensitivity to treatment (Miraglia et al., 2005). A recent study reported that citronellal could ameliorate DOX-induced cardiotoxicity by inhibiting NHE1-mediated oxidative stress and apoptosis in rats (Liu X. et al., 2021). Considering the effects of NHE1 on oxidative stress, inflammation, apoptosis and DOX sensitivity, we hypothesized that its selective inhibitor CAR not only enhances the anticancer effect of DOX, but might also protect against DOX-induced cardiotoxicity.
Therefore, the present study was undertaken to investigate the protective effects of CAR against DOX-induced cardiotoxicity in rats, and to elucidate the underlying mechanisms of its cardioprotective effects. Our findings demonstrated that CAR could alleviate DOX-induced cardiotoxicity via suppression of oxidative stress, inflammation and apoptosis, at least partially through regulation of the Akt/GSK-3β and Sirt1 signaling pathways. The potent protective effects of CAR against DOX-induced cardiotoxicity in rats have not been reported to date, and this is the first report that the Sirt1 signaling pathway is involved in the cardioprotective effect of CAR. These results reveal a novel role of CAR in DOX-induced cardiotoxicity and suggest that NHE1 may be a therapeutic target for lessening DOX-induced cardiac damage.
Animals and Experimental Design
Thirty-two adult male Sprague-Dawley rats weighing 280-310 g were obtained from the Experimental Animal Research Center of Hubei Province (Certificate No. SCXK [E] 2015-0018, Wuhan, China). All animal procedures and experiments described in this study were approved by the Review Committee for the Use of Human or Animal Subjects of Hubei University of Science and Technology. The animals were housed under controlled environmental conditions of humidity (40%-50%) and temperature (25 ± 2°C) with natural light and dark cycles (12 h: 12 h) and were allowed free access to food and water.
Rats were randomly divided into four groups (8 per group): Control group (CON), DOX group (DOX), DOX with CAR treatment group (DOX + CAR), and CAR group (CAR). Rats in the CON group received a standard laboratory diet and drinking water. Rats in the DOX group were intraperitoneally (i.p.) injected with DOX (2.5 mg/kg, every other day) over a period of 12 days for a cumulative dose of 15 mg/kg as described previously (Siveski-Iliskovic et al., 1994). Rats in the DOX + CAR group were injected with DOX (i.p.) at a dose of 2.5 mg/kg every other day for a cumulative dose of 15 mg/kg and simultaneously treated with CAR (1 mg/kg/day) once a day over a period of 12 days. Rats in the CAR group were treated with CAR (1 mg/kg/day) once a day over a period of 12 days.
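As a quick arithmetic check of this schedule (our illustration, not part of the paper):

```python
# Cumulative DOX dose for the stated schedule: 2.5 mg/kg i.p. every other
# day over 12 days (injections on days 1, 3, 5, 7, 9, 11 -> 6 injections).
dose_per_injection = 2.5              # mg/kg
n_injections = len(range(1, 13, 2))   # 6
print(dose_per_injection * n_injections)  # 15.0 mg/kg cumulative, as stated
```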
Detection of Myocardial Injury Markers
The serum activities of the myocardial enzymes CK-MB and LDH, which are considered pivotal diagnostic indicators of myocardial injury, were tested following the manufacturer's protocols (Nanjing Jiancheng Biotechnology Institute, China). The level of cTnT was measured according to the manufacturer's guideline (Milliplex Company, Darmstadt, Germany).
Assessment of Left Ventricular Function
At the end of the experiment, transthoracic echocardiography was performed in all groups under isoflurane (1%-3%) anesthesia using an echocardiography system (Vevo 2100, VisualSonics, Canada). The echocardiography parameters were as follows: left ventricular end-diastolic diameter (LVEDD) and left ventricular end-systolic diameter (LVESD). To assess left ventricular systolic function, the ejection fraction (EF) and fractional shortening (FS) were also calculated.
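For reference, FS is conventionally derived from the two M-mode diameters above; a minimal sketch (the diameter values are hypothetical, since the paper reports only group-level results) is:

```python
# Fractional shortening from M-mode diameters (standard definition; the
# echo system computes EF internally, commonly via the Teichholz formula).
def fractional_shortening(lvedd_mm: float, lvesd_mm: float) -> float:
    """FS (%) = (LVEDD - LVESD) / LVEDD * 100."""
    return (lvedd_mm - lvesd_mm) / lvedd_mm * 100.0

print(fractional_shortening(7.2, 4.1))  # hypothetical diameters in mm, ~43%
```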
Morphological and Ultra-Structural Examination
The left ventricles of the heart samples were removed and fixed by immersion in 10% formalin. Subsequently, parts of the left ventricles were embedded in paraffin wax, cut into 3-μm-thick sections, stained with hematoxylin-eosin (HE) or Masson's staining, and examined under a light microscope (CKX41, Olympus, Tokyo, Japan) at a total magnification of ×400 by a pathologist blinded to this study. For ultra-structural examination, the samples were immersed in 2.5% glutaraldehyde for 2 h and fixed in 1% osmic acid for 3 h. After embedding, ultra-thin sections (60-80 nm) were stained with 3% uranyl acetate and lead citrate, and then examined by transmission electron microscope (TEM, HT7700 120 kV, HITACHI, Japan). The qualitative analysis of histopathological changes was graded from none (-) to severe (+++) according to the degree of inflammation, myocardial disorganization, and myofibrillar loss, as listed in Table 1. The scoring system was as follows: (-) no damage, (+) mild damage, (++) moderate damage, and (+++) severe damage.
Assessment of Biochemical Parameters
The content of MDA and the activities of antioxidant enzymes including SOD, GSH-Px, and CAT in heart homogenates were measured according to the manufacturer's instructions (Nanjing Jiancheng Biotechnology Institute, China). The levels of TNF-α, IL-6, IL-18, and IL-1β in serum were determined with ELISA test kits from Dakewe Biotec Company (Beijing, China) following the manufacturer's protocols.
Cell Culture and Detection of Cellular ROS
The H9c2 cell line was purchased from the China Center for Type Culture Collection (CCTCC, China) and cultured in Dulbecco's modified Eagle's medium with fetal bovine serum (10%), streptomycin (1%), and penicillin, in a humidified atmosphere of 95% air and 5% CO2 at 37°C. The H9c2 cells were incubated with DOX (1 μmol/L) for 72 h with or without CAR (5 μmol/L) and the Sirt1 inhibitor nicotinamide (10 μmol/L). Cell viability was detected with a CCK-8 kit. ROS levels were measured with dihydroethidium (DHE, Beyotime Biotechnology, China), a fluorescent probe used to detect intracellular superoxide anions.
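The paper does not spell out how viability was computed from the CCK-8 readout; a commonly used formula, sketched here with hypothetical optical densities, is:

```python
# Percent viability from CCK-8 optical densities (a standard formula;
# all OD values below are hypothetical).
def viability_percent(od_sample: float, od_control: float, od_blank: float) -> float:
    return (od_sample - od_blank) / (od_control - od_blank) * 100.0

print(viability_percent(od_sample=0.82, od_control=1.35, od_blank=0.10))  # ~57.6%
```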
Quantitative Real-Time PCR
Total RNAs were extracted from cardiac tissues by using the TRIzol reagent (TaKaRa, Japan) according to the manufacturer's instructions. cDNA synthesis was performed with HiScriptIIQ RT SuperMix (Vazyme, China) according to the manufacturer's instructions. Quantitative RT-PCR was performed with ChamQ SYBR qPCR Master Mix (Vazyme, China) according to the protocol. GAPDH was used as the reference gene.
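The quantification model is not stated; if, as is common for SYBR-based qRT-PCR normalized to a reference gene such as GAPDH, the 2^(-ΔΔCt) method were used, the calculation would look like this sketch (all Ct values hypothetical):

```python
# Relative mRNA quantification by the 2^(-ddCt) method (Livak), assuming
# GAPDH as the reference gene as stated; Ct values below are hypothetical.
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct_sample = ct_target - ct_gapdh          # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # normalize control to GAPDH
    return 2 ** -(d_ct_sample - d_ct_control)   # fold change vs control

print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold up vs control
```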
Terminal Deoxynucleotidyl Transferase-Mediated dUTP Nick End-Labelling Assay
Cardiac apoptotic cells were detected by TUNEL staining, which is commonly used to detect DNA fragmentation. The TUNEL assay was performed according to the manufacturer's protocol and instructions provided with the In Situ Cell Death Detection Kit supplied by Roche Company (Mannheim, Germany). The results were examined and the apoptotic index was calculated under a light microscope (CKX41, Olympus, Tokyo, Japan) at a total magnification of ×400 by a pathologist blinded to this study.
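The formula for the apoptotic index is not given in the text; it is commonly computed as the percentage of TUNEL-positive nuclei, as in this hypothetical sketch:

```python
# Apoptotic index from TUNEL staining (a common definition; the paper does
# not give its exact formula): TUNEL-positive nuclei / total nuclei * 100.
def apoptotic_index(tunel_positive: int, total_nuclei: int) -> float:
    return tunel_positive / total_nuclei * 100.0

print(apoptotic_index(37, 500))  # hypothetical counts from one field: 7.4%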
Western Blot Analysis
Equal amounts of protein from heart homogenates were separated on 10% SDS-PAGE gels and then transferred to PVDF membranes. The membranes were blocked for 1 h at room temperature in 5% nonfat milk. After being incubated overnight at 4°C with the appropriate primary antibodies, including Bcl-2, Bax, phospho-Akt, Akt, phospho-GSK-3β, GSK-3β, Sirt1, and GAPDH (Santa Cruz, CA, United States), the membranes were washed three times with TBST for 15 min and then incubated with the secondary antibody for 1 h at room temperature. The blots were then imaged using ECL assay kits (Dalian Meilun Biotech Co., Ltd, China). The band intensities were quantified using NIH ImageJ 1.50 software and normalized to the quantity of GAPDH in each sample lane. All assays were performed at least three times.
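A minimal sketch of the normalization step described above (band intensities divided by GAPDH, then expressed relative to the control lane; all intensity values are hypothetical):

```python
import numpy as np

# Normalizing band intensities to GAPDH and expressing them as fold of
# control, mirroring the ImageJ workflow described in the methods.
target = np.array([1850.0, 1210.0, 1640.0])   # e.g., Sirt1 band intensities
gapdh = np.array([2010.0, 1995.0, 2050.0])    # loading-control intensities

normalized = target / gapdh                    # correct for loading
fold_of_control = normalized / normalized[0]   # lane 0 = CON
print(fold_of_control)
```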
Statistics
The values are expressed as means ± SEM. Data were analyzed by one-way ANOVA followed by post hoc Tukey's test using GraphPad Prism version 5.0. p values of 0.05 or less were considered statistically significant.
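The analysis was done in GraphPad Prism; an equivalent open-source sketch of the same procedure (one-way ANOVA followed by Tukey's post hoc test, on made-up values for three groups) is:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA + Tukey post hoc, as stated in the methods; the values
# below are hypothetical CK-MB-like readings for three groups of 8 rats.
con = np.array([410, 395, 430, 420, 405, 415, 400, 425])
dox = np.array([880, 910, 845, 905, 870, 860, 895, 915])
dox_car = np.array([560, 540, 585, 555, 570, 545, 600, 565])

f_stat, p = f_oneway(con, dox, dox_car)
print(f"ANOVA: F = {f_stat:.1f}, p = {p:.2g}")

values = np.concatenate([con, dox, dox_car])
groups = ["CON"] * 8 + ["DOX"] * 8 + ["DOX+CAR"] * 8
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```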
CAR Reinstated Body Weight and Myocardial Marker Enzymes in DOX-Induced Cardiotoxicity in Rats
As compared to CON group, the body weight dramatically decreased in DOX group (p < 0.01 vs. CON, Figure 1A).
Administration of CAR resulted in a significant prevention of the DOX-induced decrease in body weight (p < 0.01 vs. DOX, Figure 1A). To further confirm the protective effects of CAR against DOX-induced cardiotoxicity, we tested the serum levels of cardiac enzymes (CK-MB, LDH, and cTnT), which represent biochemical markers of myocardial injury. As shown in Figures 1B-D, treatment with DOX caused significantly elevated serum levels of CK-MB, LDH, and cTnT as compared to the CON group (p < 0.01 vs. CON for all). Administration of CAR (10 mg/kg/day) caused a reversal of the DOX-induced increase in serum cardiac enzymes (p < 0.01 vs. DOX for all). Moreover, no significant changes in body weight and serum cardiac enzymes were observed in the CAR group as compared to the CON group, demonstrating that the dose of CAR used in this study (10 mg/kg/day) did not affect the body weight and myocardial marker enzymes of the rats.
CAR Prevented DOX-Induced Left Ventricular Dysfunction
Next, we conducted an echocardiographic analysis of left ventricular function. Representative images of heart function are shown in Figure 1E. Administration of DOX resulted in a significant increase of LVEDD and LVESD (p < 0.05, p < 0.05 vs. CON), both of which were attenuated by CAR treatment (p < 0.05, p < 0.05 vs. DOX, Figures 1F,G). Furthermore, FS and EF, indices of left ventricular systolic function, decreased dramatically in the DOX group (p < 0.05, p < 0.01 vs. CON). CAR treatment significantly ameliorated the reduction of FS and EF caused by DOX (p < 0.05, p < 0.05 vs. DOX, Figures 1H,I). In addition, CAR alone had no effect on left ventricular function. These data suggest that CAR can ameliorate the impairment of heart function induced by DOX.
Histopathological and Ultra-Structural Analysis by HE Staining, Masson's Staining, and TEM
To assess cardiac morphological alterations, sections of rat heart tissue were stained with HE or Masson's staining and examined by light microscopy. As shown in Figure 3, DOX-induced cardiotoxicity was characterized by mild focal inflammation, myofibrillar loss, swelling, and fibrosis. Our results revealed that CAR treatment significantly ameliorated DOX-induced lesions of myocardial morphology (Figure 3A) and mitigated cardiac fibrosis (Figure 3B). TEM of the myocardium showed that treatment with DOX caused marked ultra-structural aberrations, leading to myofibrillar disintegration, damage of the Z-band and M-band, and irregular mitochondria (Figures 3C,D). The qualitative analysis of histopathological changes is shown in Table 1. Treatment with CAR markedly improved the irregular and disintegrated sarcomeres, restored the Z-band and M-band, and reduced the number of damaged mitochondria.
CAR Alleviated Oxidative Stress Induced by DOX
It is well established that oxidative stress plays a critical role in DOX-induced cardiotoxicity. Therefore, we investigated the effects of CAR on markers of oxidative stress, including MDA and the antioxidant enzymes (SOD, GSH-Px, and CAT). As expected, DOX administration resulted in a significantly increased level of MDA and significantly decreased activities of SOD, GSH-Px, and CAT as compared to the CON group (p < 0.01, p < 0.01, p < 0.01, p < 0.05 vs. CON). However, treatment with CAR significantly prevented the DOX-induced increase in MDA level and decrease in the activities of SOD, GSH-Px, and CAT (p < 0.01, p < 0.01, p < 0.01, p < 0.05 vs. DOX, Figures 3A-D). Administration of CAR alone had no significant effect on the level of MDA or the activities of SOD, GSH-Px, and CAT as compared to the CON group. The DOX-induced ROS level in vitro was also detected using DHE as a fluorescent probe. H9c2 cells treated with DOX alone showed greater red fluorescence, indicating that DOX treatment led to a clear increase in intracellular ROS levels. In contrast, the red fluorescence in cells treated with the combination of DOX and CAR was largely reduced when compared with cells treated with DOX only, demonstrating that CAR could prevent intracellular ROS production (Figure 3E). These results suggest that CAR may protect against DOX-induced cardiotoxicity at least partially through suppression of oxidative stress.
Effect of CAR on Inflammation Following DOX-Induced Cardiotoxicity
Since several inflammatory cytokines are associated with the pathological injury caused by DOX-induced cardiotoxicity, the serum levels and cardiac mRNA expressions of TNF-α, IL-6, IL-18, and IL-1β were then tested. As expected, the serum levels and cardiac mRNA expressions of TNF-α, IL-6, IL-18, and IL-1β were dramatically elevated in the DOX group (p < 0.01 vs. CON for all; panels A-D show the serum levels, and panels E-H the mRNA expressions, of TNF-α, IL-6, IL-18, and IL-1β). On the other hand, treatment with CAR protected against the DOX-induced increases in serum levels and cardiac mRNA expressions of TNF-α, IL-6, IL-18, and IL-1β (serum levels p < 0.01, p < 0.01, p < 0.01, p < 0.05; mRNA expressions p < 0.05 vs. DOX for all). Administration of CAR alone did not significantly alter the serum levels or cardiac mRNA expressions of these inflammatory cytokines. Therefore, in addition to preventing DOX-induced oxidative stress, CAR may also prevent DOX-induced cardiotoxicity via downregulation of pathological inflammatory cytokines.
CAR Inhibited Cardiomyocyte Apoptosis Induced by DOX
As DOX-induced cardiotoxicity is known to involve the induction of apoptosis in cardiomyocytes, we tested whether the protective effect of CAR included the ability to prevent cardiomyocyte apoptosis by TUNEL staining of heart tissues. As shown in Figure 5A, DOX caused a significant increase of apoptotic cells (pink staining) compared to the CON group (p < 0.01 vs. CON, Figure 5B). CAR treatment partially prevented the DOX-induced increase of cardiomyocyte apoptosis (p < 0.05 vs. DOX), while CAR treatment alone had no significant effect. To further investigate the ability of CAR to prevent DOX-induced apoptosis, we analyzed the apoptosis-related proteins Bcl-2, Bax and Cleaved-Caspase-3 in heart tissues using Western blot. As shown in Figure 5C, administration of DOX resulted in significantly increased expressions of Bax and Cleaved-Caspase-3, whereas the Bcl-2 protein level was significantly decreased. In contrast, CAR treatment markedly reversed the DOX-mediated protein changes of Bcl-2, Bax and Cleaved-Caspase-3, and partially normalized the Bcl-2/Bax ratio (p < 0.05, p < 0.05 vs. DOX). No significant differences were detected in the protein levels of Bcl-2, Bax, and Cleaved-Caspase-3 between the CAR and CON groups. It is tempting to speculate that this phenomenon may partially explain CAR's ability to prevent DOX-induced cardiotoxicity by reducing apoptosis in cardiomyocytes.
Effect of CAR on Akt/GSK-3β Signaling Following DOX-Induced Cardiotoxicity
As previous studies have indicated that the cardioprotective and anti-hypertrophic effects of NHE1 inhibition are mediated by activation of the Akt/GSK-3β signaling pathway (Javadov et al., 2009; Jung et al., 2010), we investigated the phosphorylation levels of Akt and its downstream target protein GSK-3β by Western blot. As shown in Figures 6A,B, the phosphorylation levels of Akt and GSK-3β decreased significantly after administration of DOX.
In contrast, treatment with CAR attenuated the DOX-induced decrease in phosphorylation of both Akt and GSK-3β (p < 0.05, p < 0.05 vs. DOX, Figures 6C,D). Neither DOX nor CAR significantly altered the total Akt and GSK-3β protein levels.
These data suggest that CAR may protect against DOX-induced cardiotoxicity via activation of the Akt/GSK-3β survival signaling pathway.
Effect of CAR on Sirt1 Signaling Following DOX-Induced Cardiotoxicity
In DOX-treated rats and H9c2 cells, the protein expression of Sirt1 decreased significantly and was restored by CAR treatment (p < 0.05, p < 0.05 vs. DOX, Figures 7A-D). Nicotinamide, an inhibitor of Sirt1, reversed the protective effect of CAR by downregulating the protein expression of Sirt1 and reducing H9c2 cell viability (p < 0.05 vs. DOX, Figures 7E,G; p > 0.05 vs. DOX + CAR, Figure 7F). These data suggest that activation of the Sirt1 pathway may mediate the cardioprotective role of CAR against DOX-induced cardiotoxicity.
DISCUSSION
Here, we demonstrated for the first time that CAR treatment suppressed DOX-induced cardiotoxicity, as indicated by improved cardiac function, reduced oxidative damage, an inhibited inflammatory response, and alleviated myocardial apoptosis. In addition, we found that the Akt/GSK-3β signaling pathway was involved in the protective effects of CAR. Furthermore, the results of the present study demonstrated that these protective effects of CAR were mediated by the activation of Sirt1 in vivo and in vitro, and that Sirt1 inhibition abolished CAR treatment-mediated cardiac protection. Collectively, our data suggest that CAR may be a potential therapeutic drug and NHE1 a potential therapeutic target for the prevention and treatment of DOX-induced cardiotoxicity.
Doxorubicin is a well-established chemotherapeutic agent widely used in the treatment of hematological malignancies and solid tumors. Unfortunately, numerous serious toxicities are associated with DOX treatment, particularly in the cardiovascular system, which severely limits its clinical use. The dose of DOX used in the present study (15 mg/kg) was comparable with the typical dose given to cancer patients (Venturini et al., 1996; Xiao et al., 2012). Our data demonstrated that this dosage of DOX resulted in cardiotoxicity in rats, as evidenced by decreased body weight, elevated serum levels of cardiotoxicity biomarker enzymes, as well as alterations in heart function and cardiac histopathological damage. On the contrary, administration of CAR increased body weight, attenuated the increased levels of serum CK-MB, LDH, and cTnT, improved the impairment of left ventricular function, and ameliorated the histopathological damage of the heart caused by DOX treatment, suggesting that CAR exerts a prominent protective effect against DOX-induced cardiotoxicity. In addition, the dose of CAR used in this experiment (10 mg/kg/day) did not cause any cardiotoxicity, thus warranting further investigation into the possible molecular mechanisms underlying its cardioprotective effects.
Given the established role of oxidative stress in DOX-induced cardiotoxicity (Xu et al., 2001; Zhang et al., 2017), we first investigated the effect of CAR on biomarkers of oxidative stress in heart tissues. Excessive production of free radicals can cause oxidative damage to nearly all types of biological molecules, leading to numerous disease states (Halliwell and Gutteridge, 1990). Overproduction of ROS leads to damage of nuclear and mitochondrial DNA, altered calcium homeostasis, decreased protein synthesis, and cardiomyocyte death. Numerous oxidative stress markers, including lipid peroxidation products and lipid aldehydes, are found in heart tissues after DOX treatment (Chaiswing et al., 2004; Zhang et al., 2017). In agreement with this, our results showed that DOX-induced cardiotoxicity in rats was associated with a significant increase of the cardiac MDA level as well as reduced activities of the cardiac antioxidant enzymes (GSH-Px, SOD, and CAT). Notably, treatment with CAR significantly inhibited MDA production and prevented the decreased activities of cardiac GSH-Px, SOD, and CAT in rats subjected to DOX. A recent study indicated that DOX-induced ROS production in H9c2 cells, quantified with a fluorescent probe, was inhibited by treatment with pinocembrin (Sangweni et al., 2020). In our results, H9c2 cells treated with DOX exhibited a clear increase in intracellular ROS level, which was reversed by CAR treatment. Therefore, suppressing oxidative stress may be one of the primary mechanisms by which CAR protects against DOX-induced cardiotoxicity.
In addition to directly causing deleterious effects, oxidative stress in the heart can also lead to inflammation, which is known to be involved in a variety of cardiovascular diseases, including atherosclerosis, atrial fibrillation, and inflammatory cardiomyopathies. Prior research has suggested a strong association between cardiac oxidative stress and inflammatory cytokine release after DOX treatment (Bien et al., 2007). TNF-α, IL-6, IL-18, and IL-1β are common proinflammatory cytokines involved in DOX-induced cardiotoxicity and are increased in individuals with cardiac dysfunction (Miettinen et al., 2008; Tamariz and Hare, 2010). In agreement with prior studies (Wang et al., 2016), we found that DOX application in rats led to increased serum levels and cardiac mRNA expressions of TNF-α, IL-6, IL-18, and IL-1β. CAR treatment prevented the increased serum levels of these proinflammatory cytokines and their cardiac mRNA expressions. It is interesting to speculate that, alongside its antioxidant effects, anti-inflammatory properties may also be responsible for the protective effect of CAR against DOX-induced cardiotoxicity. Beyond the damage inflicted by ROS and inflammation, myocardial apoptosis is believed to be the fundamental basis of DOX-induced cardiotoxicity (Kalay et al., 2006). β-Hydroxybutyrate, a small molecule derived from free fatty acids, could protect against DOX-induced cardiotoxicity by inhibiting cell apoptosis and oxidative stress and maintaining mitochondrial membrane integrity (Liu et al., 2020). Also, CAR treatment has been found to significantly suppress the induction of TUNEL-positive cardiomyocytes in cardiac hypoxia/reoxygenation. Therefore, we also examined the effect of CAR on myocardial apoptosis induced by DOX. As differential induction of apoptosis in cardiomyocytes after DOX treatment has been reported in different experimental animal models (Hou et al., 2006; Ruan et al., 2015; Argun et al., 2016), we observed increased TUNEL-positive cardiomyocytes following DOX treatment over a period of 12 days in rats, consistent with a previous study (Argun et al., 2016). Importantly, CAR treatment significantly inhibited DOX-induced myocardial cell apoptosis, thereby ameliorating its myocardial toxicity.
In many cell types, apoptosis is regulated by the Bcl-2 protein family. The pro-apoptotic Bax protein and anti-apoptotic Bcl-2 protein play a major role in determining whether or not cells undergo apoptosis. The translocation of Bax from the cytoplasm to mitochondria results in cytochrome c release from the mitochondria to promote apoptosis, while Bcl-2 can prevent the release of cytochrome c from mitochondria and suppress apoptosis progression (Scorrano and Korsmeyer, 2003). A recent study found that DOX administration increased Bax expression and decreased Bcl-2 expression in H9c2 cells (Liu et al., 2016). Prior research also suggested that CAR reduced mitochondrial Ca2+, the number of PI- and TUNEL-positive cells, cytosolic cytochrome c, caspase-3 activity, and the Bax/Bcl-2 ratio in primary cultured neonatal rat cardiomyocytes subjected to hypoxia/re-oxygenation (Sun et al., 2004). Our results revealed that treatment with DOX significantly reduced the Bcl-2/Bax ratio and elevated cleaved caspase-3 protein expression in the heart, both of which were significantly reversed by administration of CAR. These results strongly suggest that the cardioprotective effect of CAR is also due to its ability to regulate the levels of apoptosis-related proteins. However, non-caspase-dependent apoptotic pathways can also be activated under DOX or cardiac IR conditions, as evidenced by AIF translocation to the nuclei in H9c2 cardiomyoblasts treated with DOX (Moreira et al., 2014) and intracytosolic translocation of AIF and endonuclease G during cardiac ischemia (Yang et al., 2017). Therefore, whether AIF and mitochondrial endonuclease G are involved in the cardioprotective effect of CAR against DOX-induced cardiotoxicity needs further study. Cell survival signaling pathways, including Akt, GSK-3β, and ERK1/2, are known regulators of myocardial cell survival (Li et al., 2009), suggesting that they may be pharmacological targets for the prevention of myocardial cell apoptosis under stress conditions. Notably, it has previously been shown that DOX treatment causes a remarkable reduction of Akt and GSK-3β phosphorylation in mouse heart (Prysyazhna et al., 2016), and erythropoietin protects against DOX-induced cardiotoxicity by activating the PI3K-Akt-GSK-3β signaling pathway, thereby reducing cardiomyocyte apoptosis (Kim et al., 2008). Shenmai injection, a patented traditional Chinese medicine, prevents DOX-induced cardiotoxicity through activation of the PI3K/Akt/GSK-3β signaling pathway (Li et al., 2020). In agreement with these studies, we observed that Akt and GSK-3β phosphorylation was reduced in DOX-treated hearts, and treatment with CAR significantly attenuated this DOX-induced decrease in phosphorylated Akt and GSK-3β.
Sirt1 has been suggested to play key roles in redox regulation, cell apoptosis, and inflammation (Hwang et al., 2013). A previous study showed that exposure of H9c2 cells to DOX leads to cellular injury and a reduction of Sirt1 (Liu et al., 2016). Sirt1 is also highly expressed in cardiomyocytes and involved in DOX-induced cardiotoxicity (Dolinsky, 2017). Consistent with these findings, we also found that DOX reduced Sirt1 protein levels in vivo and in vitro. However, Zhang et al. (2011) reported that the Sirt1 level was slightly increased by DOX treatment, and that resveratrol attenuated DOX-induced cardiomyocyte apoptosis in mice through up-regulation of Sirt1. The inconsistent expression of Sirt1 under DOX treatment may be related to differences in the animals used and in the duration of DOX exposure, which warrants further investigation. Of note, restoration of Sirt1 expression by CAR treatment could improve cardiac function and attenuate DOX-related cardiac injury in mice. Moreover, inhibition of Sirt1 by nicotinamide offset the protective effects of CAR treatment against DOX-induced H9c2 injury. These findings suggest that CAR exerts cardiac protection partially via activating the Sirt1 signaling pathway. Sirt3, another NAD+-dependent histone deacetylase, restores mitochondrial respiratory chain defects and the cell viability of DOX-treated cardiomyocytes (Blasco et al., 2018). Also, the NHE-1 inhibitor EMD-87580 improves cardiac mitochondrial function through regulation of mitochondrial biogenesis during post-infarction remodeling (Javadov et al., 2006b), and attenuates the hypertrophic phenotype by improving mitochondrial integrity and decreasing the generation of mitochondria-derived reactive oxygen species (Javadov et al., 2006a). This evidence points to a relationship between DOX cardiotoxicity and Sirt3 or mitochondrial biogenesis. Further studies are needed to clarify the precise roles of Sirt3 and mitochondrial biogenesis in the cardioprotective effect of CAR against DOX-induced cardiotoxicity.
CONCLUSION
We demonstrate for the first time that CAR, a selective inhibitor of NHE1, exerts protective effects against DOX-induced cardiotoxicity via its antioxidant, anti-inflammatory, and anti-apoptotic activities. The results of the present study also demonstrate a role for the Akt/GSK-3β and Sirt1 signaling pathways in the cardioprotective effects of CAR, and suggest that CAR may be a potential therapeutic drug and NHE1 a potential therapeutic target for the prevention and treatment of DOX-induced cardiotoxicity.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The animal study was reviewed and approved by Review Committee for the Use of Human or Animal Subjects of Hubei University of Science and Technology.
AUTHOR CONTRIBUTIONS
CL and YC conceived and designed the experiments; WL performed the experiments; LW contributed reagents; CL and ZR wrote the manuscript. All authors gave final approval and agree to be accountable for all aspects of the work, ensuring its integrity and accuracy.
Domestic Violence on Women and its Implications on their Health
Violence against women is a severe violation of human rights, ranging from domestic and intimate partner violence to sexual harassment and assault, and is widely recognized as a serious human rights abuse. Violence has substantial consequences for women's health. To evaluate the effects of domestic abuse and violence on the physical and mental health of women, a study was conducted in district Faisalabad, Punjab, Pakistan. Data were drawn from 222 women. Sampling was carried out using multi-stage random sampling. The survey method was used for data collection, and statistical methods such as chi-square, correlation, and linear and multiple regression were used to analyze the data. The findings showed a significant relationship between domestic violence and the physical and mental health of women. This study emphasizes the need for women's empowerment and a multidisciplinary approach to develop health measures that will effectively address the problem of domestic violence.
Studies report that violence affects women's sexual and reproductive health and causes health problems through depression, mental illness, and other conditions that make women vulnerable to depression and disability (World Health Organization, 2009). There are some policies in Pakistan (United Nations, 2020) that work against different types of violence; however, problems persist in the execution of these acts. In multiple sectors, such as justice, health, and social provision, women still face a lack of access to the affordable services required to ensure their safety and protection.

Literature Review

Eswaran and Malhotra (2009) found that illiterate women were more likely to face domestic violence than literate women, and that illiterate men were more likely to inflict violence on their wives and daughters. Toufique and Razzaque (2007) found that women's education and socio-economic status were associated with a better living state and acted as barriers against domestic violence. Similarly, Koenig et al. (2003) reported that female education and economic status were violence-hindering factors. Ellsberg et al. (1999) studied the incidence of emotional distress among women and discovered that abused women were six times more likely than non-abused women to experience emotional distress. Rodriguez et al. (2008) reported that pregnant women who faced violence during pregnancy suffered from post-traumatic stress disorder and depression, with some victimized women exceeding the clinical cutoff. Similarly, Coker et al. (2002) found that physical violence was associated with depressive symptoms, drug abuse, and many chronic diseases. The Disease Control Priorities Project (2008) found that women face some health issues due to gender inequality rather than biological factors, with reported rates of partner abuse among adult women ranging from 15% to 71%; it is stated that domestic violence is a significant detriment to the emotional, physical, sexual, and reproductive health of women. Bower (2011) identified several factors that lead to a lower quality of life for women, owing to women's lack of access to health services and unequal access to information and treatment; gender discrimination is a major factor in such circumstances. This includes physical and sexual assault, sexually transmitted diseases such as HIV/AIDS, chronic pulmonary disease, and malaria. Yeganeh (2011) claimed that countries with more conservative values are more likely to exhibit gender inequality, holding socioeconomic variables constant. The World Bank (2001) studied the causes of gender inequality and concluded that it is strongly associated with household choices, which are in turn shaped by rituals and by institutional and cultural norms.
Theoretical Framework
This study's theoretical framework was drawn from Moss (2002), which provides a comprehensive structure covering gender inequality, gender equality, crime, socio-economic inequalities, and women's health. This structure illustrates the factors that affect the well-being of women, and these factors were prepared for empirical evaluation via the model's assumptions.
Conceptual Framework

[Figure: conceptual framework linking the independent variables (I-V) and dependent variables (D-V) through gender inequality (G-I)]

Methodology
The study sample consisted of 222 females from the city of Faisalabad. Multi-stage sampling was carried out to obtain the overall sample, with simple random sampling applied at each stage in the recruitment of union councils, regions, and participants. The data collected through the survey were examined using univariate, bivariate, and multivariate analyses in SPSS 19. Univariate analysis was conducted first, classifying female respondents according to their age, education, and income levels, as specified in the study's conceptual framework. Bivariate analysis was conducted with correlation and chi-square tests, and multiple regression was performed as the multivariate analysis.
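As an illustrative sketch of the bivariate analysis described above, the following Python snippet runs a chi-square test of independence and a Pearson correlation. The study reports only summary statistics, so the contingency counts and respondent-level scores below are invented for demonstration.

```python
# A minimal sketch of the bivariate analysis, assuming hypothetical data:
# the cross-tabulation counts and index scores below are NOT the study's
# raw data, which were not published.
import numpy as np
from scipy import stats

# Hypothetical 3x2 cross-tabulation:
# rows = gender-inequality level (high / medium / low),
# columns = faced violence (yes / no)
table = np.array([[44, 20],
                  [32, 43],
                  [27, 57]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")

# Pearson correlation on hypothetical index scores for ten respondents
inequality = [3, 3, 2, 2, 2, 1, 1, 3, 2, 1]   # 3 = high, 1 = low
violence   = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]   # 1 = faced violence
r, p_r = stats.pearsonr(inequality, violence)
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")
```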
Women's abuse and gender inequality were strongly associated, with a chi-square value of 8.221 and a correlation of .181. This indicates that gender inequality was strongly correlated with violence against women: the experience of abuse was shaped in part by gender differences. Results for the gender-inequality index show a positive relationship between gender inequality and facing violence: 68.8 percent of women in the 'High' inequality category faced violence, and as inequality shifts from high to medium and low, the percentage of women facing violence decreases to 42.7 percent and 32.2 percent, respectively. A higher gender-inequality score thus indicates a higher likelihood of facing violence; in other words, women face less violence where there is more gender equality, suggesting that abuse occurs where gender inequality exists.

The results on violence and health problems show that abuse inflicted on women is closely related to women's health issues. The chi-square value of 123.305 shows a strong association between women's health problems and violence against them, and the Pearson correlation of .745 shows a highly significant positive relationship between the two variables. Respondents who experienced violence reported both physical and emotional effects on their health, while a high percentage of women in the 'No Abuse' category (91.7 percent) reported no health conditions. The positive relationship between health problems and domestic abuse indicates that as domestic violence increases, health problems increase, and as violence decreases, women's health problems decrease. This indicates that abuse inflicted on women has had a negative impact on women's health in their households.
Conclusion
The present study suggests that accurate, precise, and careful evaluation should be carried out to identify and quantify the various determinants of gender differences and how they affect women's health. It is also important to examine health services that tackle poverty and gender inequality, and how they can be structured to be more effective and competitive in achieving the desired outcomes. Strong awareness campaigns in schools, colleges, and universities, as well as community-level discussions, should be encouraged to promote recognition of fundamental women's rights. The government should improve gender equity in employment, grant fair employment opportunities to women, and make women aware of their legal and customary rights to inherit land. Women should be granted representative and governing positions in the institutions that govern women's rights (including women's unions and organizations) and at the highest levels of legislative bodies (ministries and parliaments). The poor social status of women is the result of inadequate treatment of and independence for women; therefore, government, state, national, and international organizations, as well as the private sector, should initiate programs to empower women.
Limitations of the Study
The first limitation of the study was that the results were drawn from a small sample, which limits the generalization of the findings to Pakistani society as a whole. Secondly, during data collection it was observed that participants were reluctant and lacked the confidence to respond to questions regarding abuse and mental illness. Thirdly, respondents' privacy was difficult to preserve while collecting responses because of the presence of other family members; participants were guarded and could not easily provide the needed answers. The researcher, however, managed to obtain the best information possible.
An unusual location of a cavernous hemangioma: a case report
Hemangiomas are benign vascular tumors that most often affect the skin, mucous membranes, subcutaneous tissues, and bone, and on rare occasions muscles. In the head and neck region, the masseter and trapezius muscles are most often affected; temporalis muscle involvement is extremely rare. It is a childhood pathology that rarely occurs in adults. We report a case of a cavernous hemangioma in a 37-year-old female. Through this case, and in light of the literature, we focus on the clinicopathological aspects of this tumor and the rarity of this location.
Introduction
Intramuscular hemangiomas are benign vascular neoplasms that represent less than 1% of all hemangiomas and are most often localized in the trunk and extremities. There are three types of hemangioma depending on the size of the affected vessel: capillary hemangioma, cavernous hemangioma, and compound hemangioma [1]. We report a case of a rare location of a cavernous hemangioma in the temporalis muscle with an extension to the infra-temporal fossa.
Patient and observation
A 37-year-old female without significant personal or family medical history presented with a swelling of the left temporal fossa that had been evolving for 5 years, gradually increasing in volume and leading to facial asymmetry. The physical examination revealed a soft, painless, non-pulsatile mass (4 x 2.5 cm in size), without thrill, that did not increase in volume in the declivity position, was fixed to the deep plane and mobile relative to the skin, and showed no inflammatory signs or lymphadenopathy.
Contrast-enhanced computed tomography was performed, showing a temporalis muscle mass extending to the infra-temporal fossa, measuring 33 x 17 x 39 mm, isodense and unencapsulated, with polylobed contours, enhancing after contrast injection and respecting the subcutaneous fat (Figures 1 and 2).
The patient underwent surgery via a hemi-coronal incision. The mass was found in the temporalis muscle and was excised completely without damaging the frontal branch of the facial nerve. The postoperative course was uneventful, with excellent functional and aesthetic results (Figure 3).
Macroscopic examination of the mass showed smooth, nodular tumor tissue measuring 5 x 2.5 x 1 cm with a purplish-brownish appearance. The histological study found a benign proliferation of dilated blood vessels of variable shape and size, often cavernous, trapped between muscle fibers; they were bordered by a well-differentiated endothelium and supported by fibrous interstitial tissue, in favor of a cavernous hemangioma. Over a two-year follow-up period, no signs of recurrence were detected (Figure 4).
Discussion
Hemangiomas are benign tumors characterized by abnormal proliferation of blood vessels. Intramuscular locations represent less than 1% of all hemangiomas, and only 14% are located in the head and neck region. The masseter (36%), trapezius (24%), and sternocleidomastoid muscles are the most affected, while temporalis muscle involvement remains extremely rare; to our knowledge, only 29 cases have been reported in the literature. A slight female predominance has been described, with an average age of 33 years [1,2].
The etiopathogenesis of these tumors is unknown, though several theories exist: congenital origin, hormonal factors, and trauma. There are three types depending on the size of the vessels involved: capillary (small vessels), the most common with an estimated incidence of 68%; cavernous (26%, large vessels); and compound (6%). The cavernous type is the one most frequently found in the temporalis muscle [2,3]. Clinically, these tumors appear as a slow-growing, mobile, painless mass, without pulsation or staining of the skin. The differential diagnosis includes lipoma, neurofibroma, dermoid cyst, enlarged lymph nodes, soft-tissue sarcoma, myositis ossificans, and temporal arteritis [1,4].
Imaging helps establish a positive diagnosis and guide the management of this tumor. Computed tomography is useful for defining the form, size, eventual bone damage, and infiltration of the surrounding tissues. However, magnetic resonance imaging (MRI) is the method of choice for defining the vascular nature of the tumor and providing further information on its exact extent. Hemangiomas are generally isointense to muscle on T1-weighted images and hyperintense on T2-weighted images. Arteriography allows identification of the feeding vessels of the mass for preoperative embolization if necessary [1,3,5].
Several therapeutic modalities are available, ranging from simple observation, injection of sclerosing agents, corticosteroid treatment, radiotherapy, and embolization (especially preoperative, to reduce intraoperative bleeding) to complete surgical excision, which remains the method of choice for the definitive treatment of this tumor. In some cases of voluminous tumors, injection of sclerosing agents, corticosteroid treatment, and radiotherapy can be indicated as alternatives or adjuvants to surgery [6,7]. In our case, under general anesthesia, a hemi-coronal incision was used to control the region, and careful surgical dissection prevented injury to the temporal and auricular branches of the facial nerve. The therapeutic indications are determined according to age, tumor location and size, growth rate, vascularity, refractory pain, cosmetic deformity, and suspicion of malignancy [8,9].
Local recurrence is possible after incomplete resection, with rates estimated at 28% for the compound type, 20% for the capillary type, and 9% for the cavernous type. Close and prolonged clinical and radiological follow-up is recommended for at least 2 years to ensure prompt diagnosis of any local recurrence [9].
Conclusion
Cavernous hemangioma of the temporalis muscle is a benign and rare entity. The clinical presentation is nonspecific. Imaging plays a very important role in the diagnosis, especially MRI, which remains the examination of choice. Despite the multitude of therapeutic means, surgery retains its place in the definitive treatment of cavernous hemangiomas.

Figure 1: computed tomography scan on the coronal plane shows the mass in the left temporal region with no erosion of the bone
Figure 2: computed tomography scan on the axial plane shows a homogeneous contrast enhancement of the mass
Figure 3: intraoperative view of the hemangioma approached through a hemi-coronal flap
Figure 4: control computed tomography scan performed two years later was normal, without mass
Characterization of a vraG Mutant in a Genetically Stable Staphylococcus aureus Small-Colony Variant and Preliminary Assessment for Use as a Live-Attenuated Vaccine against Intramammary Infections
Staphylococcus aureus is a leading cause of bovine intramammary infections (IMIs) that can evolve into difficult-to-treat chronic mastitis. To date, no vaccine formulation has shown high protective efficacy against S. aureus IMI, partly because this bacterium can efficiently evade the immune system. For instance, S. aureus small colony variants (SCVs) have intracellular abilities and can persist without producing invasive infections. As a first step towards the development of a live vaccine, this study describes the elaboration of a novel attenuated mutant of S. aureus taking advantage of the SCV phenotype. A genetically stable SCV was created through the deletion of the hemB gene, impairing its ability to adapt and revert to the invasive phenotype. Further attenuation was obtained through inactivation of gene vraG (SACOL0720), which we previously showed to be important for full virulence during bovine IMIs. After infection of bovine mammary epithelial cells (MAC-T), the double mutant (ΔvraGΔhemB) was less internalized and caused less cell destruction than ΔhemB and ΔvraG, respectively. In a murine IMI model, the ΔvraGΔhemB mutant was strongly attenuated, with a reduction of viable counts of up to 5 log10 CFU/g of mammary gland when compared to the parental strain. Complete clearance of ΔvraGΔhemB from glands was observed, whereas mortality rapidly (48 h) occurred with the wild-type strain. Immunization of mice by subcutaneous injections of live ΔvraGΔhemB raised a strong immune response, as judged by the high total IgG titers measured against bacterial cell extracts and by the high IgG2a/IgG1 ratio observed against the IsdH protein. Also, ΔvraGΔhemB had sufficient common features with bovine mastitis strains that the antibody response also strongly recognized strains from a variety of mastitis-associated spa types. This double mutant could serve as a live-attenuated component in vaccines to improve cell-mediated immune responses against S. aureus IMIs.
Introduction
Staphylococcus aureus is a major human and animal pathogen that can cause high morbidity, acute infections, as well as difficult-to-treat chronic forms of disease. Among the factors that can explain the failure of antibiotherapy and the tendency to cause chronic infections, many have noted the pathogen's multifaceted virulence, predominantly its abilities to impair or elude host immune responses through toxin secretion [1,2], biofilm formation [3], and survival in nonphagocytic host cells, which may shield the pathogen from the action of the host immune system and antibiotics [4]. Furthermore, the incidence of S. aureus infections is becoming more worrisome with the emergence of multiply antibiotic-resistant strains [5,6]. Consequently, there is an urgent need to find potent new strategies to control this pathogen.
To this day, bovine mastitis remains an important problem for the dairy industry, and S. aureus is the most frequent pathogen across all combined cases of clinical and subclinical intramammary infections (IMIs) [7]. Subclinical IMIs in particular can be a real concern: they often go unnoticed by producers, are highly transmissible during milking, and thus result in chronic infections that can persist for the life of the animal [8]. Over time, they can generate tissue damage that rapidly leads to a decrease in milk production and quality [9].
The development of vaccines for the prevention and control of S. aureus IMIs has been extensively investigated, although no formulation has demonstrated high protective efficacy to date. According to several reviews of the different commercially available and experimental vaccine formulations, this lack of protection is possibly caused by inadequate vaccine targets [10,11], high diversity among strains capable of provoking mastitis [10,12,13] or the failure to elicit an appropriate immune response [14][15][16]. It is increasingly understood that immunity solely based on vaccine-induced antibodies may be important, but is however insufficient for inducing protection against S. aureus [10,11]. It appears that cell mediated immunity (CMI) based on Th1 and Th17 type responses may be necessary to complete the protection [15][16][17][18].
In a previous study, we used a DNA microarray approach to uncover S. aureus genes that were highly expressed during bovine IMIs [19]. One gene (guaA) was shown to be a good target for a new drug therapy [20], and other genes were further investigated as vaccine candidates. Gene vraG (SACOL0720) was shown to be likely induced by the growth of S. aureus in fresh milk both in vitro and in vivo. The importance of gene vraG in S. aureus virulence was also demonstrated by the significant attenuation of growth observed for the gene inactivation mutant during bovine IMI [19].
It is now recognized that S. aureus small colony variants (SCVs) add important contributions to chronic infections and therapy failures. This may be attributed to the particular features of SCVs that make this phenotype adapted for long-term persistence in host tissues via expression of a distinct set of virulence factors [21], and that also allow survival in host cells [22,23]. Since SCVs have an improved ability for internalization into cells [4,24,25] and can colonize the host without generating invasive infections or tissue destruction [26,27], we hypothesized that these features could be of value in the development of genetically attenuated S. aureus strains. The use of S. aureus live-attenuated bacteria as vaccines represents an interesting approach to improve immune responses. Live-attenuated organisms that mimic natural infections stimulate the immune system in a powerful manner, eliciting broad and robust immune responses that increase serum and mucosal antibodies as well as effector and memory T cells which act synergistically to protect against disease [28,29].
In this study, we generated a vraG mutation in a SCV background to create an attenuated strain for vaccine purposes. Inactivation of gene vraG should prevent cationic peptide resistance [30-32] and reduce virulence [19], while inactivation of gene hemB creates a stable SCV and prevents reversion to the invasive phenotype, a phenomenon normally seen during S. aureus infections [33]. We evaluated the persistence of the double mutant in bovine mammary epithelial cells and demonstrated its attenuation and safety in a murine IMI model. We also report some immunogenic properties of this vaccine strain. This work is a first step in the proof of concept needed for the development of a live-attenuated vaccine for immunization and protection against S. aureus IMIs.
Ethics statement
The animal experiments were conducted following the guidelines of the Canadian Council on Animal Care and the institutional ethics committee on animal experimentation of the Faculté des Sciences of Université de Sherbrooke. The institutional ethics committee on animal experimentation of the Faculté des Sciences of Université de Sherbrooke specifically approved this study.
Bacterial strains and growth conditions
Strains used in this study are listed in Table 1. S. aureus ATCC 29213 and its isogenic mutant Δ720 were previously described [19]. Strain Δ720 is an intron insertion mutant of gene vraG that was renamed ΔvraG in this study for clarity. For the immunological tests, we selected four different bovine mastitis isolates corresponding to some of the predominant S. aureus spa types found in Canadian dairy herds and elsewhere in the world [13,34]. Strain SHY97-3906 (spa t529) was isolated from a case of clinical bovine mastitis that occurred during the lactation period, and CLJ08-3 (spa t359) was originally isolated from a cow with persistent mastitis at dry-off [19]. Strains Sa3151 (spa t13401) and Sa3181 (spa t267) were obtained from the Canadian Bovine Mastitis and Milk Quality Research Network (CBMMQRN) Mastitis Pathogen Culture Collection, and were isolated from cases of subclinical IMIs. Unless otherwise stated, S. aureus strains were grown in tryptic soy broth (TSB) and agar (TSA) (BD, Mississauga, ON, Canada), and Escherichia coli DH5α was grown in LB and LBA medium (BD). The ability of S. aureus strains to produce biofilm in vitro was evaluated as described before [13]. Whenever required, ampicillin (100 μg/ml) (Sigma-Aldrich, Oakville, ON, Canada), chloramphenicol (20 μg/ml) (ICN Biomedicals, Irvine, CA), and erythromycin (10 μg/ml) (Sigma) were added to culture media.

Table 1 (excerpt). Strains and plasmids used in this study:
- Sa3151: isolate from a dairy cow subclinical IMI occurring during the lactation period; spa type t13401 (this study)
- Sa3181: isolate from a dairy cow subclinical IMI occurring during the lactation period; spa type t267 (this study)
- E. coli DH5α: cloning host (Invitrogen Life Technologies)
- pBT2: shuttle vector, temperature-sensitive; Ap^r Cm^r [36]
- pBT-E: pBT2 derivative with an inserted ermA cassette; Ap^r Cm^r Em^r (this study)
- pBT-E:hemB: pBT2/pBT-E derivative for hemB deletion, carrying ~1000 bp of hemB flanking regions on both sides of ermA; Ap^r Cm^r Em^r (this study)
DNA manipulations
Recommendations from the manufacturers of kits were followed for genomic DNA isolation (Sigma), plasmid DNA isolation (Qiagen, ON, Canada), extraction of DNA fragments from agarose gels (Qiagen) and purification of PCR products and of digested DNA fragments (Qiagen). An additional treatment of 1h with lysostaphin (Sigma) at 200 μg/ml was used to achieve efficient lysis of S. aureus cells in genomic and plasmid DNA isolations. Primers were designed to add restriction sites upstream and downstream of the amplified products. PCRs were performed using the Taq DNA Polymerase (NEB, Pickering, ON, Canada) for routine PCR or the Q5 high fidelity DNA Polymerase (NEB) for cloning, and cycling times and temperatures were optimized for each primer pair. Plasmid constructs were generated using E. coli DH5α (Invitrogen, Burlington, ON, Canada), restriction enzymes (NEB), and the T4 DNA ligase (NEB). Plasmid constructs were validated by restriction digestion patterns and DNA sequencing before electroporation in S. aureus RN4220 [35] and in final host strains. Plasmids used in this study are listed in Table 1.
Generation of pBT-E:hemB and insertional deletion of hemB
Isogenic hemB mutants of the ATCC 29213 and ΔvraG strains were constructed, in which the hemB gene was deleted and replaced by an ermA cassette through homologous recombination. The temperature-sensitive plasmid pBT2-hemB:ermA (pBT-E:hemB) [36] was used in a strategy previously described [37], with some modifications. Briefly, the pBT-E plasmid was constructed by inserting an ermA cassette between the XbaI and SalI sites of the temperature-sensitive shuttle vector pBT2. DNA fragments of the flanking regions of gene hemB [38] were amplified from S. aureus ATCC 29213 and cloned on both sides of the ermA cassette in plasmid pBT-E. The plasmid was then transferred for propagation into S. aureus RN4220 (res-). After bacterial lysis with lysostaphin (200 μg/ml for 1 h at room temperature), plasmid DNA was isolated and used to transform ATCC 29213 and Δ720 by electroporation. For plasmid integration and mutant generation, bacteria were first grown overnight at 30˚C with 10 μg/ml of erythromycin and 1 μg/ml hemin supplementation (Sigma). Bacteria were then diluted 1:1000 and grown overnight at 42˚C with 2.5 μg/ml of erythromycin and 1 μg/ml hemin; this step was repeated twice. Finally, bacteria were diluted 1:1000 and grown overnight at 42˚C without antibiotics. Mutants with the inactivated hemB gene were selected as resistant to erythromycin and sensitive to chloramphenicol, together with a SCV phenotype that can be complemented (i.e., reverted to the normal growth phenotype) by 5 μg/ml hemin supplementation on agar plates. The deletion of hemB in the ATCC 29213 and ΔvraG strains was confirmed by PCR and DNA sequencing of the PCR product.
Hemin supplementation in broth culture
To evaluate the capacity of hemin to restore optimal growth kinetics of S. aureus ΔhemB and the double mutant ΔvraGΔhemB, overnight bacterial cultures were diluted to an A600nm of approximately 0.1 in culture tubes containing fresh BHI supplemented with hemin (Sigma) at various concentrations. The A600nm of the cultures was monitored at different time points during the incubation period at 35˚C (225 rpm).
S. aureus infection of bovine mammary epithelial cells
An established bovine mammary epithelial cell (BMEC) line, MAC-T, was used as a cell culture model of infection [39] for the characterization of the intracellular infectivity and persistence of S. aureus ATCC 29213 and its isogenic mutants. The MAC-T cells were routinely cultured and maintained in Dulbecco's modified Eagle's medium (DMEM) containing 10% heat-inactivated fetal bovine serum (FBS), supplemented with 5 μg/ml insulin (Roche Diagnostics Inc., Laval, QC, Canada) and 1 μg/ml hydrocortisone (Sigma), and incubated at 37˚C in a humidified incubator with 5% CO2. Cell culture reagents were purchased from Wisent (St-Bruno, QC, Canada). Forty-eight hours before infection, 1 × 10^5 MAC-T cells per ml were seeded on treated Cell-BIND® 24-well plates (Corning) to obtain 30% confluence. Monolayers were grown to confluence under 5% CO2 at 37˚C. Six hours prior to infection, monolayers were washed with DMEM and incubated with an invasion medium (IM; growth medium without antibiotics containing 1% heat-inactivated FBS). Overnight bacterial cultures were diluted 1:20 in fresh TSB and grown to mid-logarithmic growth phase, then washed with PBS and diluted in IM to a multiplicity of infection of 10. Invasion was achieved by incubating monolayers with bacteria for 3 h. Monolayers were then washed with DMEM and incubated with IM containing 20 μg/ml lysostaphin to kill extracellular bacteria. The use of lysostaphin to kill extracellular normal and SCV S. aureus was previously validated in cell invasion assays [24,39]. The treatment was allowed for 30 min before the determination of intracellular CFUs after 3 h of infection, or was extended for an additional 12 or 24 h for those later time points. For CFU determination, following extensive washing with Dulbecco's Phosphate-Buffered Saline (DPBS), monolayers were detached by trypsinization and lysed with 0.05% Triton X-100 before PBS was added to obtain a final 1X concentration. The lysate was serially diluted and plated on TSA for CFU determination.
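The following Python sketch illustrates two routine calculations behind this protocol: the inoculum needed for a multiplicity of infection of 10, and the back-calculation of CFU/ml from dilution plating. The cell number, colony count, and plated volume in the examples are illustrative assumptions, not values from the study.

```python
# A minimal sketch of the MOI and CFU/ml calculations used in the
# invasion assay; all numeric inputs below are illustrative assumptions.

def inoculum_cfu(cells_per_well: float, moi: float = 10.0) -> float:
    """CFUs to add per well to reach the desired MOI."""
    return cells_per_well * moi

def cfu_per_ml(colonies: int, dilution_factor: float,
               plated_volume_ml: float = 0.1) -> float:
    """Back-calculate CFU/ml of the lysate from a countable plate."""
    return colonies / plated_volume_ml * dilution_factor

# Example: a confluent well of ~2e5 MAC-T cells infected at MOI 10
print(f"Inoculum: {inoculum_cfu(2e5):.1e} CFU per well")

# Example: 150 colonies on the 10^-4 dilution plate, 100 ul plated
print(f"Lysate titer: {cfu_per_ml(150, 1e4):.2e} CFU/ml")
```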
BMECs viability and metabolic activity assay
To determine the cytotoxic damage inflicted by S. aureus ATCC 29213 and its isogenic mutants on MAC-T cells, a cell metabolic activity assay measuring the reduction of 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) into an insoluble formazan product in viable cells was performed. The assay followed the method of Kubica et al. [40] with some modifications. Briefly, S. aureus infection of cells was achieved as described in the persistence assay, but instead of inducing cell lysis after 12 h or 24 h, cells were incubated with 100 μl of the MTT reagent (5 mg/ml) (Sigma) in DPBS for 2 h at 37˚C. Following this, an acidic solvent solution of 16% SDS and 40% DMF, pH 4.7, was added to lyse the cells and solubilize the formazan crystals overnight. The samples were read using an Epoch microplate reader (Biotek Instruments Inc.) at a wavelength of 570 nm. All assays were performed in triplicate, and control wells with uninfected cells (high-viability control) or lysed bacteria-infected cells (bacteria-background control; treated with 0.05% Triton X-100 for 10 min before MTT addition) were included on each plate. The level of metabolic activity was calculated using the following formula:

Metabolic activity (%) = ((Absorbance of the sample − Absorbance of the bacteria-background control) / Absorbance of the high-viability control) × 100
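A short sketch of this calculation is given below; the three absorbance values used in the example are illustrative placeholders, not measured data.

```python
# A minimal sketch of the metabolic-activity formula above; the A570
# readings used in the example are illustrative, not measured values.

def percent_metabolic_activity(a_sample: float,
                               a_bacteria_background: float,
                               a_high_viability: float) -> float:
    """Percent viability relative to the uninfected control wells."""
    return (a_sample - a_bacteria_background) / a_high_viability * 100

# Example: infected well A570 = 0.62, bacteria-background control = 0.08,
# uninfected high-viability control = 1.10
print(f"{percent_metabolic_activity(0.62, 0.08, 1.10):.1f}% metabolic activity")
```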
Virulence in the mouse mastitis model
The mouse mastitis model of infection was based on previously described work [4,41]. Briefly, one hour following removal of 12- to 14-day-old offspring, lactating CD-1 mice (Charles River Laboratories) were anesthetized with ketamine and xylazine at 87 and 13 mg/kg of body weight, respectively, and mammary glands were inoculated under a binocular. Mammary ducts were exposed by a small cut at the near ends of the teats, and a 100 μl bacterial suspension containing 10^2 CFUs in endotoxin-free phosphate-buffered saline (PBS, Sigma) was injected through the teat canal using a 32-gauge blunt needle. Two glands (fourth on the right [R4] and fourth on the left [L4], from head to tail) were inoculated for each animal. Mammary glands were aseptically harvested at the indicated times, weighed, and visually evaluated for inflammation. Bacterial burden was evaluated after mechanical tissue homogenization in PBS, serial dilution, and plating on agar for CFU determination. In additional experiments, homogenized glands were preserved for protein extraction and myeloperoxidase (MPO) activity assays.
Mammary gland protein extraction
Total protein extraction from mammary glands was performed using an optimized method previously described [42], with some modifications. Mammary tissues were homogenized in a buffer containing final concentrations of 50 mM potassium phosphate, pH 6.0, and 50 mM hexadecyltrimethylammonium bromide (CTAB) (Sigma). The samples were then sonicated, freeze-thawed in liquid nitrogen, and centrifuged at 2,000 × g for 15 min at 4˚C. The fat layer was removed by aspiration, and supernatants were saved for a final centrifugation of 15 min at 15,000 × g to discard all cellular debris. Supernatants were distributed in aliquots and kept at -80˚C until used for the enzymatic assays or for protein concentration determination with the bicinchoninic acid (BCA) Protein Assay Kit (Thermo-Scientific).
MPO activity assay
Neutrophil recruitment in mammary tissues was measured by quantification of MPO enzyme activity using the o-dianisidine-H2O2 method, modified for a microplate format [43]. In a 96-well microplate, 10 μl of tissue extraction supernatant was incubated with a solution of o-dianisidine hydrochloride (167 μg/ml) (Sigma) and 0.0005% H2O2 (Sigma) in 50 mM CTAB-phosphate buffer, pH 6.0. The MPO activity was measured kinetically at 15-s intervals over a period of 5 min in an Epoch microplate reader at 460 nm. A unit of MPO was defined as the amount of enzyme that degrades 1 μmol of H2O2/min at 25˚C, assuming an absorption coefficient of 11.3 mM^-1 cm^-1 at 460 nm for o-dianisidine [44]. Results were expressed as units of MPO per g of gland.
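The conversion from the kinetic absorbance readings to units of MPO per gram of gland can be sketched as follows. Only the absorption coefficient (11.3 mM^-1 cm^-1) comes from the text; the reaction volume, optical path length, sample volume, and homogenate yield are assumptions made for illustration.

```python
# A minimal sketch of converting the kinetic A460 slope into MPO U/g.
# Only the absorption coefficient is from the protocol; the reaction
# volume, path length, sample volume, and extract yield are assumed.

EPSILON_MM = 11.3  # mM^-1 cm^-1 for oxidized o-dianisidine at 460 nm

def mpo_units_per_g(delta_a460_per_min: float,
                    reaction_vol_ml: float = 0.2,   # assumed well volume
                    path_length_cm: float = 0.55,   # assumed for 200 ul/well
                    sample_vol_ml: float = 0.010,   # 10 ul of supernatant
                    extract_ml_per_g: float = 5.0   # assumed homogenate yield
                    ) -> float:
    # Beer-Lambert: dC/dt in mM/min (= umol/ml/min) from the dA/dt slope
    rate_umol_per_ml_min = delta_a460_per_min / (EPSILON_MM * path_length_cm)
    units_in_well = rate_umol_per_ml_min * reaction_vol_ml  # 1 U = 1 umol/min
    return units_in_well / sample_vol_ml * extract_ml_per_g

print(f"MPO activity: {mpo_units_per_g(0.12):.2f} U/g of gland")
```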
Mouse immunizations
The immunogenic properties of the attenuated strain ΔvraGΔhemB administered as a live vaccine were evaluated in mice. In preliminary studies, mice tolerated intramuscular and subcutaneous (SC) injections of the attenuated strain well. Doses of 10^6, 10^7, and 10^8 CFUs and the SC route were selected for subsequent experiments. For the preparation of the bacterial inoculum, S. aureus ΔvraGΔhemB colonies previously grown on BHIA plates were washed twice in ice-cold PBS and suspended in PBS containing 15% glycerol, then aliquoted and kept at -80˚C until use. The viable bacterial counts in the inoculum preparation were validated by serial dilution plating on BHIA. CD-1 mice were randomly divided into 3 groups: group 1 (n = 3) received a dose of 10^6 CFUs; group 2 (n = 3), 10^7 CFUs; and group 3 (n = 3), 10^8 CFUs. Mice were immunized by two subcutaneous injections of bacteria in PBS (100 μl), in the neck, two weeks apart. This live-attenuated formulation was also compared to a subunit vaccine using only the purified staphylococcal IsdH protein as the antigen. The recombinant S. aureus IsdH protein was produced in E. coli as previously described [45]; mice (n = 6) were immunized by two subcutaneous injections in the neck, three weeks apart, using 20 μg of IsdH combined with EMULSIGEN®-D (25% v/v) (MVP Laboratories, Inc., Omaha, NE) in a volume of 100 μl. Blood samples were taken just before the priming injection (preimmune serums) and 10-21 days after the boost immunization (immune serums). Blood aliquots were allowed to clot at room temperature for an hour and then centrifuged at 10,000 × g for 10 min at 4˚C. The serums were collected and kept at -20˚C until analysis.
Preparation of S. aureus cell extracts
Preparation of S. aureus whole-cell extracts was done as previously described, with some modifications [46]. Briefly, overnight bacterial cultures were diluted 1/1000 in fresh BHI broth and incubated at 35˚C (225 rpm) until an absorbance (OD600nm) of ~0.8 was reached. Bacterial cells were centrifuged, and pellets were washed twice in ice-cold PBS and suspended by the addition of 5 ml of PBS per ml of pellet. Bacterial suspensions were first treated with lysostaphin (Sigma) (100 μg/ml of pellet) for 1 h at 37˚C; then 3 μg of protease inhibitor cocktail (Sigma), 8 μg of RNase A (Sigma), and 8 μg of DNase (Qiagen) per ml of pellet were added to the suspension. After 30 min at room temperature, cells were mechanically disrupted by 3 to 4 passages in an SLM Aminco French pressure cell disrupter, and then centrifuged at 12,000 × g and 4˚C for 10 min to remove unbroken cells. The supernatant was collected, and total protein concentration was determined as previously described with the BCA Protein Assay Kit.
Detection of mouse IgG by ELISA
Detection of serum total IgG against the ΔvraGΔhemB vaccination strain and each of the bovine IMI isolates was performed to demonstrate and measure the systemic humoral response generated by the immunization of mice. For target antigens, Nunc MaxiSorp™ 96-well plates (Thermo Fisher Scientific Inc., Rochester, NY) were coated with 100 μl of each of the whole S. aureus cell extracts or of the recombinant IsdH protein (10 μg/ml diluted in carbonate/bicarbonate buffer, Sigma) and incubated overnight at room temperature. The plates were then saturated with PBS containing 5% skim milk powder for 1 h at 37˚C, followed, in the case of whole-cell extracts, by a second blocking step with the addition of 5% porcine serum to prevent unspecific S. aureus protein A interactions. One hundred microliters of two-fold serial dilutions of the sera in dilution buffer (PBS with 2% milk and 0.025% Tween™ 20) were loaded into the plates and incubated for 1 h at 37˚C. Plates were then washed three times with PBS containing 0.05% Tween™ 20, and loaded with 100 μl of horseradish peroxidase (HRP)-conjugated goat anti-mouse IgG, IgG1, or IgG2a (Jackson ImmunoResearch Laboratories Inc., West Grove, PA) diluted 1/5000 in dilution buffer. After 1 h of incubation at 37˚C followed by washes, peroxidase activity was detected using 3,3′,5,5′-tetramethylbenzidine (TMB) reagent (KPL Inc., Gaithersburg, MD) according to the manufacturer's recommendations.
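Endpoint titers from such two-fold serial dilutions are typically read as the reciprocal of the last dilution whose signal exceeds a cutoff. The sketch below assumes a fixed OD cutoff and invented readings; the study does not state its exact cutoff rule.

```python
# A minimal sketch of endpoint-titer determination from two-fold serial
# dilutions; the cutoff value and OD readings below are assumptions.

def endpoint_titer(ods, start_dilution=100, cutoff=0.2):
    """Reciprocal of the last dilution whose OD exceeds the cutoff."""
    titer, dilution = 0, start_dilution
    for od in ods:          # ods ordered from lowest dilution upward
        if od > cutoff:
            titer = dilution
        dilution *= 2       # two-fold serial dilution
    return titer

# Example: TMB ODs for one immune serum, starting at a 1:100 dilution
ods = [2.1, 1.9, 1.5, 1.0, 0.6, 0.3, 0.15, 0.08]
print(f"Endpoint titer: 1:{endpoint_titer(ods)}")
```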
Statistical analysis
Statistical analyses were carried out with GraphPad Prism software (v.6.02). Intracellular bacterial CFUs and bacterial CFUs/g of gland (IMI in mice) were log10-transformed before statistical analysis. The statistical tests used for each experiment and the significance levels are specified in the figure legends.
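The log10 transformation mentioned above can be illustrated with a small sketch; the CFU values and the two-group t-test shown are invented for demonstration (the actual tests used are named in the figure legends).

```python
# A minimal sketch of the log10 transformation applied to CFU data
# before testing; the CFU values below are illustrative, not study data.
import numpy as np
from scipy import stats

wild_type_cfu = np.array([2.1e9, 8.7e8, 3.4e9, 1.2e9])  # CFU/g of gland
mutant_cfu    = np.array([4.5e4, 1.1e5, 2.3e4, 7.8e4])

log_wt, log_mut = np.log10(wild_type_cfu), np.log10(mutant_cfu)
t, p = stats.ttest_ind(log_wt, log_mut)
print(f"difference = {log_wt.mean() - log_mut.mean():.1f} log10 CFU/g, "
      f"p = {p:.4f}")
```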
Validation of the SCV phenotype
Homologous recombination was used to generate hemB mutants in the S. aureus wild-type and ΔvraG isogenic backgrounds. The hemB deletion was confirmed by PCR and by sequencing of the PCR product. The gene hemB codes for a δ-aminolevulinate dehydratase, an essential enzyme in porphyrin biosynthesis converting δ-aminolevulinic acid to porphobilinogen [38]. Lacking this enzyme, the hemB mutant does not synthesize heme, resulting in a defective electron transport system and ATP synthase activity. The hemB mutant thus produces much less energy, and its secondary metabolism is impaired, which phenotypically translates into slow growth. In vitro characterization of the mutants confirmed the expected small-colony phenotype of SCVs. After 48 h of incubation at 37˚C on TSA, colonies of S. aureus ΔhemB and ΔvraGΔhemB were approximately 0.5 mm in diameter and appeared non-pigmented, whereas colonies of the parent and ΔvraG strains were 4 mm or greater in diameter with a bright yellow pigmentation. The lack of pigmentation in SCVs was previously documented [27]. Growth of the S. aureus ΔhemB mutants reached a plateau at a lower bacterial density in broth culture compared to wild-type S. aureus, but chemical complementation by the addition of hemin (1 μg/ml) to TSB restored the capacity of S. aureus ΔhemB to reach a bacterial density equivalent to that of the parent strain (data not shown). Similar results were obtained for the ΔvraGΔhemB double mutant compared to its isogenic strain ΔvraG. The wild-type and ΔvraG strains showed no difference in growth in broth cultures using TSB or milk as the cultivation medium, as shown in a previous study [19]. Finally, the ATCC 29213 strain, the single mutants, and the double mutant produced amounts of biofilm equivalent to those measured for the majority of bovine mastitis isolates examined in a previous study [13].
These results validate the SCV phenotype of the hemB mutants and demonstrate that chemical complementation with supplemental hemin fully restores the wild-type phenotype.
A mutation in gene vraG impairs S. aureus internalization in BMECs
We compared the infectivity of the wild-type, ΔvraG, ΔhemB, and ΔvraGΔhemB strains in infection and persistence assays using MAC-T cells. By comparing the three mutant strains to their isogenic parent, distinct effects of the mutations in genes hemB and vraG were observed. A short 3-h incubation of bacteria with cell monolayers followed by the addition of lysostaphin to eliminate extracellular bacteria demonstrated good levels of internalization into MAC-T cells for both the wild-type and ΔhemB strains, based on the recovery of intracellular CFUs. On the other hand, the single ΔvraG mutant showed significantly less (P ≤ 0.01) internalization compared to its parental strain (Fig 1A). The reduction in internalization seen with ΔvraG was even more pronounced when comparing the double mutant ΔvraGΔhemB to ΔhemB, with a 10-fold reduction of inoculum recovery in the 3-h internalization assay (P ≤ 0.001, Fig 1A). This initial reduction of the internalized bacterial load was still apparent 12 and 24 h post invasion (PI) for the double mutant strain ΔvraGΔhemB (Fig 1B), as illustrated by the 1-log10 reduction of CFU/ml at both time points compared to that observed for ΔhemB (P ≤ 0.001). The difference in initial intracellular bacterial loads between the single ΔvraG mutant and wild-type strains (Fig 1A) gradually vanished with longer incubation times (Fig 1B), as neither strain persisted well in MAC-T cells (Fig 2). On the contrary, the intracellular CFUs recovered for the single ΔhemB strain were significantly higher than those recovered for the three other strains at 24 h PI (Fig 1B, P ≤ 0.001 against all). Globally, and as expected for the SCV phenotype, the ΔhemB strain showed better intracellular persistence than any of the other strains over time (Fig 2).
These results suggest that the ΔvraG mutation greatly reduces the internalization process into MAC-T cells. Results further demonstrate that the ΔvraGΔhemB mutant is still capable of internalization and persistence into BMECs, but to a lesser degree than that seen with the single ΔhemB mutant.
ΔvraGΔhemB and ΔhemB SCVs cause low BMEC disruption

As reported above, SCV strains showed a greater persistence over time in MAC-T cells, as illustrated by their sustained intracellular viability at 12 and 24 h PI in comparison to the wild-type and ΔvraG strains (Figs 1B and 2). The percent of inoculum recovered from cells stayed nearly the same from 0 to 24 h after lysostaphin addition for both the double and single hemB mutants, with a slight increase at 12 h indicating intracellular growth (Fig 2); both strains started to decrease at a slow rate after this 12-h time point. However, the apparent reduction of intracellular CFUs for the WT and ΔvraG strains was concomitant with the visual observation of increasing damage to cell monolayers over time, in comparison to that observed with strains of the SCV phenotype. This prompted us to evaluate MAC-T cell viability following infection by each of the four strains studied. MAC-T cell viability was evaluated by the MTT method under the exact same conditions used for the determination of intracellular bacterial counts.
As expected, both SCV strains caused significantly less MAC-T cytotoxicity in this assay than the wild-type and ΔvraG strains: when compared to ΔhemB, the wild-type strain nearly halved the viability of cells at 12 h (Fig 3A: wild-type, 25.4%; ΔhemB, 48.4%). This difference was still apparent at 24 h (Fig 3B: 16.25 vs. 34.55%, respectively), even though the bacterial load was 10 times higher for the ΔhemB mutant (Fig 1B). The MAC-T cells were more damaged by ΔhemB than by the double mutant ΔvraGΔhemB, but the difference was only significant at 24 h (P ≤ 0.01). The double mutant sustained epithelial cell viability 2.3 times more than the wild-type strain at 12 h (Fig 3A) and 2.7 times more at 24 h (Fig 3B) (P ≤ 0.0001 for both time points). Therefore, the greater intracellular persistence of both SCV strains compared to the wild-type and ΔvraG strains over time (Fig 2) was likely attributable to a lower toxicity of the SCVs for MAC-T cells (Fig 3). Taken together, the results from the BMEC infection assays provide evidence of an additive effect of the ΔhemB and ΔvraG mutations in attenuating the wild-type strain, the vraG mutation mainly lowering the intracellular bacterial load and the hemB mutation creating the SCV phenotype that preserves MAC-T cell viability.
ΔvraGΔhemB double mutant is strongly attenuated in a mouse IMI model and is efficiently cleared from mammary glands
To confirm the attenuation of ΔvraGΔhemB in an in vivo model of infection, the virulence of the double mutant was evaluated and compared to that of the wild-type strain in a murine IMI model. For both strains, the exponential phase of infection took place mainly within the first 12 h post-infection, while the maximal bacterial burden was reached at 24 h for the double mutant and 48 h (day 2 [D2]) for the wild-type strain (Fig 4). At 24 h, the double mutant showed a reduction of 1.9 log10 in mean CFU/g of gland compared to the wild-type (P ≤ 0.05). After 24 h, the mutant bacterial burden showed a constant decline until complete bacterial clearance was reached at day 12 (shown by the asterisk in Fig 4). In contrast, the parental strain provoked severe invasive infections compared to the mutant, killing 3 of the 9 remaining mice at day 2 and 2 of 3 mice at day 7 (Fig 4; arrows) before glands could be harvested for those groups. Mice surviving the WT infection maintained high viable counts (9 log10 CFU/g of gland) at day 7, an approximately 5-log10 difference in bacterial burden compared to the double mutant. These results clearly demonstrate a markedly reduced capacity of the double mutant ΔvraGΔhemB to multiply and survive in the mammary gland.
Inflammatory response to ΔvraGΔhemB and WT strains following IMI
To monitor the inflammatory response of the mice to infections with the wild-type and mutant strains, neutrophil infiltration in the glands was evaluated through MPO enzymatic activity in gland homogenates. MPO activity in biological samples has previously been correlated with the absolute number of neutrophils [47], and is hence an adequate representation of neutrophil infiltration. During the first hours after infection, neutrophil recruitment followed similar profiles in the double mutant- and wild-type-infected glands (Fig 5), with an exponential intensification of apparent neutrophil infiltration from 12 h to 24 h post infection coinciding with bacterial growth, albeit with a certain delay. We indeed previously showed that the absolute number of polymorphonuclear cells in relation to the bacterial load in mammary glands does not always peak at the same time [48]. No significant difference in MPO activity could be observed at 6, 12, and 24 h between glands infected by the mutant and wild-type strains (Fig 5). This equivalence in apparent neutrophil infiltration did not, however, correlate with the visual observation of inflammation at 24 h, at which point the wild-type infection generated extensive redness of the glands in comparison to the double mutant (photographs of Fig 6). In contrast, mutant-infected glands were not visually altered at the macroscopic level compared to non-infected controls. The disparity between the visual assessment of inflammation and the neutrophil infiltration results could be attributed to the differences in bacterial loads (Fig 4) and the cytotoxic activity of the wild-type strain (Fig 3). Hence, these results indicate that neutrophil recruitment in the glands infected by the double mutant ΔvraGΔhemB was equivalent to that seen with the wild-type strain, and that this was sufficient to allow the subsequent decline and clearance of the mutant bacterial loads.
The inflammatory response of ΔvraGΔhemB-infected glands returns to normal levels with bacterial clearance
In order to attest to strain safety, keeping in mind the possible use of the double mutant as a live-attenuated vaccine, and to confirm that this inflammatory response was not the consequence of an unacceptably reactogenic strain, we continued monitoring MPO activity in ΔvraGΔhemB-infected glands 4 and 12 days after infection. The level of MPO activity was then compared to levels obtained for glands from non-infected mice. As illustrated in Fig 7, the apparent neutrophil presence in mutant-infected glands was still high 4 days after infection, with MPO activity ranging from 8 to 21 units/g of gland. The MPO levels at this time point might be a direct consequence of mammary gland involution, the process by which the lactating gland returns to a morphologically near pre-pregnant state. Indeed, involution is normally associated with neutrophil recruitment allowing phagocytosis of apoptotic cells during tissue remodelling [49]. Later on, however, the MPO levels in the mutant-infected glands went through a substantial decline between days 4 and 12 (P ≤ 0.01). The MPO concentration was considered to be back to normal levels at day 12, showing no significant difference from the non-infected glands.
Immunizations with ΔvraGΔhemB generate a strong humoral response against several S. aureus bovine IMI isolates

To confirm that immunization with the attenuated strain ΔvraGΔhemB can indeed generate a strong immune response suitable for its use as a putative live vaccine against S. aureus IMIs, mice were immunized with different doses of the mutant, and serum total IgGs were assayed by ELISA for detection of antigenic components present in whole-cell extracts of a variety of S. aureus bovine isolates. A specific detection of the staphylococcal iron-regulated IsdH protein was also attempted by ELISA. Doses of 10⁶, 10⁷ and 10⁸ CFUs, when administered subcutaneously in the neck, triggered no adverse effect such as modification of mouse behavior, signs of inflammation, or necrosis at the immunization site throughout the immunization period. Additionally, immunizations using increasing quantities of the live double mutant ΔvraGΔhemB yielded increasing titers of systemic IgG antibodies against its own whole-cell extract (Fig 8A). The titers of the immune sera were significantly higher than those of the preimmune sera, demonstrating specificity of antibody production against the S. aureus antigens present in the live vaccine. Most importantly, increasing the doses of ΔvraGΔhemB also generated a consequential rise of antibody titers against a variety of S. aureus strains isolated from bovine mastitis, including strains of the major spa types found in Canada and elsewhere in the world (Fig 8B). Interestingly, it was also possible to generate specific IgGs against the cell wall-associated and iron-regulated protein IsdH, as demonstrated in the ELISA using this protein as the antigen (Fig 8C). These results clearly show that (i) immunization with the double mutant can raise a specific immune response against S. aureus, and that (ii) the strain background (ATCC 29213) shares sufficient common features with bovine mastitis strains that the antibody response also strongly recognizes strains of major spa types. Additionally, the presence of IgG2a and IgG1 isotypes specific to IsdH, indicative of Th1- and Th2-oriented immune responses, respectively, was assayed in sera collected from mice immunized with the double mutant and compared to that obtained from mice immunized with the purified IsdH protein. Significantly higher IgG2a/IgG1 titer ratios (P < 0.05) were found for sera from mice immunized with the live-attenuated double mutant compared to the ratios obtained from mice vaccinated with the purified IsdH protein (Fig 8D).
Discussion
The ability of Staphylococcus aureus to express multiple virulence factors permitting host colonization, tissue destruction, immune evasion, intracellular persistence and biofilm production makes it a very challenging pathogen to fight. Vaccines designed to prevent IMI in bovine mastitis therefore have to take into account the complexity of S. aureus pathogenesis as well as the diversity of strains capable of causing mastitis, including strains with the SCV phenotype. SCVs are known to be somewhat attenuated but have intracellular abilities that allow persistence in the host without producing invasive infections [24]. In this study, we further attenuated the SCV phenotype to demonstrate that this phenotype could be used as a live-attenuated vaccine.
One of our recent research endeavors has been to identify genes that are highly expressed by multiple S. aureus strains in vivo. The proteins encoded by these genes represent good targets as vaccination agents or in drug development, as they are more likely to be important in virulence and, being expressed, to be efficiently targeted by the immune response. In a previous study, we used a DNA microarray approach to uncover S. aureus genes that were …

Fig 8. Mice were immunized as previously described: sera were collected before the priming immunization (preimmune, open circles) and ten days after the boost immunization (immune, blue squares). A. IgG titers rise with increasing immunization doses (10⁶, 10⁷, 10⁸ CFU) of the live-attenuated mutant ΔvraGΔhemB: each dot represents the total IgG titer of one mouse against a ΔvraGΔhemB whole-cell extract. Medians are represented by thick lines for immune titers and dashed lines for preimmune titers. Titers were compared to their corresponding preimmune titers (two-way ANOVA and Tukey's multiple comparisons test: ****: P < 0.0001). B. Immunization with the live-attenuated mutant ΔvraGΔhemB confers high IgG titers against components that are shared by mastitis strains of commonly found spa types. Each dot represents the total IgG titer of one mouse against the whole-cell extract of the indicated strain. Medians are represented by thick lines for immune titers and dashed lines for preimmune titers. All immune titers were compared to their corresponding preimmune titers (two-way ANOVA and Tukey's multiple comparisons test: P < 0.0001 for all groups). C. Immunization with the live-attenuated mutant ΔvraGΔhemB confers specific IgG titers against the cell wall-associated protein IsdH. Each dot represents the total IgG titer of one mouse against recombinant IsdH. Compared groups were immunized with 10⁸ CFU of the live-attenuated ΔvraGΔhemB (ΔΔ) or 25 μg of the purified recombinant IsdH protein (IsdH). D. IgG isotype ratios (IgG2a/IgG1) of mice immunized with the live-attenuated mutant ΔvraGΔhemB (open diamonds) or immunized with recombinant IsdH (black diamonds), against whole-cell extracts of strain ΔvraGΔhemB (vs ΔΔ) or against the recombinant IsdH protein (vs IsdH). Each diamond represents the IgG2a/IgG1 titer ratio for one mouse. Medians are represented by thick lines (one-way ANOVA and Dunn's multiple comparison test: *: P < 0.05).
The operon vraFG codes for an ABC transporter-like system with a role in resistance to antibiotics [32,50-52] and to several cationic antimicrobial peptides (CAMPs) such as indolicidin isolated from bovine neutrophils [30,53], human cathelicidin LL-37 [54] and class I bacteriocins such as nisin A and nukacin ISK-1 [55]. Notably, vraFG was shown not only to be under the regulation of the two-component regulatory system graXRS, but also to play an essential role by sensing the presence of CAMPs and signaling through graS to activate graR-dependent transcription, including its own transcript [30]. Moreover, vraFG does not act as a detoxification module as previously believed [32], as it cannot confer resistance when produced on its own [30]. It was also reported that the expression of two key determinants, mprF and dlt (needed for the modification of charged residues at the bacterial surface), is dependent upon graXRS-vraFG, and that these effectors are responsible for making the surface charge globally less negative [32], thus promoting resistance. When the sensing system or its effectors are altered, an increased susceptibility to vancomycin [32], daptomycin, polymyxin B [52] and several host-defense CAMPs [56] is observed.
Our previous studies revealed that the gene vraFG (SACOL0718-720) was up-regulated both in fresh milk in vitro and in milk recovered from infected cows. More significantly, this gene was shown to be a key factor in S. aureus virulence in cows, since a ΔvraG mutant was greatly attenuated in experimental bovine IMIs [19]. Consequently, this mutation was selected to further attenuate the SCV phenotype by generating the double mutant (ΔvraGΔhemB) investigated in the present work.
This study is the first, to our knowledge, to consider the use of classical respiratory-deficient SCVs as the foundation of a non-virulent, genetically defined attenuated vaccine for the delivery of S. aureus antigens. Live-attenuated strains of S. aureus have long been of great interest and have been studied for immunization of cows since the '80s [57]. Some teams have managed to produce attenuation by chemical mutagenesis [58] in order to elicit a high specific humoral response in cows, but unfortunately this caused only a weak reduction in the shedding of bacteria, and no difference in the reduction of somatic cell counts (SCC) in milk when vaccinated groups were challenged. Moreover, the genetic basis for the attenuation of this strain was still unknown, which may be a concern considering the necessity of obtaining a stable and safe vaccine. Alternatively, transposon mutagenesis was used to generate an aromatic amino acid auxotrophic aroA mutant of S. aureus for testing in a mouse IMI model [59]. Both Th1 and Th2 responses were elicited, and a certain degree of protection was observed against homologous and heterologous S. aureus. The mutant was also demonstrated to be safe in leukopenic mice in a model of nasal colonization [60], but its immunogenicity in cows remains unknown.
In this study, genetic stabilization of the SCV phenotype (i.e., the hemB deletion) along with inactivation of an important effector of the resistance to cationic compounds (i.e., the vraG deletion) generated an attenuated S. aureus strain that still exhibited a low, transient internalization in epithelial cells. Since SCVs are expected to show a high capacity for invasion and intracellular persistence [23], the reduction we observed in post-invasion intracellular bacterial loads was attributed to the disruption of gene vraG. Since inappropriately high intracellular invasion and persistence might not be suited for a strain intended to be used as a live vaccine (even if a low level of internalization in cells might help stimulate cell-mediated immunity), this second mutation was considered relevant for attenuation, especially in the SCV background.
More specifically, the lower degree of internalization and intracellular persistence in BMECs observed for ΔvraG and especially for ΔvraGΔhemB, as well as the total clearance of the latter mutant from glands in mice, suggested an additive deleterious effect of the two mutations. Because of their reduced membrane potential (Δψ), respiratory-deficient SCVs (having an altered electron transport chain) are generally expected to be more resistant to cationic compounds or antibiotics that require membrane polarization for their mode of action [27,61,62]. However, other unknown mechanisms and factors can also lead to a decreased [63], or even a higher, susceptibility of SCVs to such compounds, as previously shown with the frog-derived CAMP dermaseptin [64]. Also, electron transport-deficient SCVs have been shown to be more susceptible to oxidant damage to their membrane, because of their limited ability to generate a Δψ [65]. Therefore, it is likely that disruption of the graXRS-vraFG regulon via the vraG mutation in the SCV background (i.e., ΔvraGΔhemB) may be more deleterious than that seen with the normal phenotype (i.e., ΔvraG) because of the lack of membrane potential, which is required for active detoxification and protection against reactive oxygen species (ROS) [66].
Another explanation for the strong attenuation seen for the double mutant is the possibility that graXRS and vraFG act as key regulators in the stress response of SCVs. The alternative transcription factor sigma B (SigB) is known to affect the expression of several genes encoding virulence factors and stress-response systems specific to SCVs [21]. This regulator has a permanent activity in hemB mutants [67] and was shown to play a role in biofilm production and in the intracellular persistence of SCVs [21]. VraFG may act in concert with the constant influence of SigB, or possibly through another mechanism involving PhoU. PhoU is a global negative regulator of genes involved in central carbon metabolism and cytochrome expression and is therefore connected to the SCV phenotype [61]. In S. aureus, PhoU is important for resistance to CAMPs [68] and has been shown to regulate dlt, which is also under the control of graXRS-vraFG. In addition, GraSR has been linked to virulence and stress-response pathways, which could help the SCV and normal phenotypes persist in the host environment [69].
The low expression of invasive virulence factors such as α-hemolysin and other toxins, associated with the reduced quorum-sensing activity of SCVs [39], probably accounts for the relatively low BMEC cytotoxicity of SCVs observed in this study. Nevertheless, the precise mechanisms by which non-SCV S. aureus strains kill epithelial cells are not completely understood and could be attributed to induction of apoptotic pathways and/or pore-forming-related lysis [23,27]. One of the most prominent results of this study is the high attenuation of virulence attained with the double mutant in the mouse IMI model. The parental strain was highly virulent and resulted in considerable mortality in this model, whereas a 5-log10 reduction in CFU/g of gland followed by total bacterial clearance from the glands was observed for the double mutant. The double mutant strain showed a good capacity to stimulate the recruitment of neutrophils in the gland and, most importantly, this inflammatory response was not associated with tissue damage. Histopathological examination of inoculated glands in future investigations will help to further support innocuity at the microscopic level.
This pro-inflammatory response was a first clear indicator of the potential of the double mutant strain as a live-attenuated vaccine. When administered through the subcutaneous route, the marked attenuation of the double mutant permitted the use of relatively high doses of live bacteria to immunize mice without provoking any sign of local inflammation or adverse effect. At the same time, this immunization triggered a broad systemic response that translated into high IgG titers against whole S. aureus cell components. This humoral response was also broad enough to react against several bovine mastitis isolates representing the most prevalent S. aureus spa types found in Canadian dairy herds [13] and elsewhere in the world [34]. Furthermore, the response was found to include significant IgG titers against the staphylococcal iron-regulated and cell wall-associated IsdH antigen. The IgG isotypes produced against this antigen also demonstrated a more balanced Th1 and Th2 response compared to that obtained when immunizing with the purified IsdH antigen. This feature might help to improve protection against S. aureus, for which control is increasingly thought to require cell-mediated immunity [15-18]. Notably, although this proof of concept demonstrated that the genetic background of the double mutant (ATCC 29213) shares many common features with bovine mastitis strains, such mutations (i.e., ΔvraGΔhemB) and attenuation can be created in any desired background if one wishes to cover specific types of strains. The demonstration of protection elicited by such a vaccine against experimental IMI in mice, and then in cows, will need to be examined in future work, along with investigations on the best possible route of administration. The use of cows is clearly important for future studies; nevertheless, we have recently shown that results from our mouse model of IMI [70] translate very well to those obtained in cows [71].
As a final note, the administration route of such vaccines will undeniably influence the qualitative properties of the immune response and the efficacy of protection. On this matter, it was previously reported that intramammary, but not intraperitoneal, application of live temperature-sensitive S. aureus could stimulate murine mucosal responses against a challenge with a homologous virulent strain [72]. A different study used formalin-killed whole cells of planktonic and biofilm S. aureus to immunize mice [73]. It was shown that the biofilm vaccine performed better in immunogenicity and protection when administered by the intramammary route, despite the fact that the planktonic subcutaneous vaccine triggered a significantly higher humoral response. In more recent work, the same team reported that subcutaneous immunizations with staphylococcal protein A could elicit higher humoral responses against the antigen, but that the response was more balanced (humoral and cellular) when administered by intramammary injection [74]. However, this subunit vaccine failed to protect immunized mice challenged (IMI) with a strongly biofilm-producing and encapsulated S. aureus strain, regardless of the route of immunization. The route of administration of our genetically defined live-attenuated vaccine will therefore definitely impact the level of its protective efficacy, and additional practical aspects will need to be considered (e.g., subcutaneous administration vs. intramammary perfusion into four quarters for a whole herd) in upcoming studies.
Analysing the Impact of Anthropogenic Factors on the Environment in India
Using carbon dioxide as a surrogate measure of various environmental impacts, this paper analyses the effects of anthropogenic factors on CO2 emissions in India. The paper uses the STIRPAT model with data for 1960-2007. The results show that urbanisation has the largest potential negative effect on the environment, followed by population, the service sector, the industrial sector and GDP per capita. When the potential effects of the various anthropogenic factors on the environment are weighted by their average annual growth rates, population emerges as the single largest factor contributing towards emissions, followed by urbanisation (the degrees of contribution to the change in CO2 emissions by these factors are 33.8% and 29.7% respectively). Hence, there is a need for serious consideration of policy changes with regard to demographic and urban planning in India in order to reduce the effects of these factors on the environment. For instance, India could adopt a two-child policy, analogous to the one-child policy in China, to control population growth. Unplanned and haphazard urbanisation can lead to inefficient use of energy resources that may hinder efforts to reduce carbon dioxide emissions in India. Hence, sustainable urban planning across Indian states is essential for better management of environmental resources.
Introduction
Assessing the environmental impact of various anthropogenic factors has been a topic of active research for quite some time (Dietz & Rosa, 1994; Madu, 2008; Lin et al., 2009). It has now become quite apparent that anthropogenic activities are modifying the global environment on an unprecedented scale and have already altered the chemical composition of the atmosphere (e.g., the emission of greenhouse gases and ozone-depleting substances), dramatically changed the land cover over vast areas of the globe, altered major biogeochemical cycles and accelerated the extinction of species (York et al., 2003b). In recent years much of the discussion has been on the issue of climate change, which is believed to have been caused by global warming due to emissions of carbon dioxide and other greenhouse gases (GHGs) into the atmosphere. There is widespread consensus among the scientific community that these GHGs are the consequence of human activities (IPCC, 2007; Stern, 2006).
With increasing levels of GHGs in the atmosphere and their cumulative effects on the economy as well as the global ecosystem, there is an urgent need to better understand the influence of various anthropogenic factors on the environment, to help find appropriate policy solutions. One important question that arises in this context is which anthropogenic factors are driving this environmental change. A large body of research has attempted to understand the relative importance of the various anthropogenic factors that drive environmental change. According to some studies, economic growth and population size are the primary determinants of a wide variety of environmental impacts (York et al., 2001; Rosa & York, 2002; Dietz et al., 2007). Along with population and economic growth, other factors such as technology, political and economic institutions, and attitudes and beliefs are also found to be relevant factors that affect the ecosystem at large (Stern et al., 1992; Dietz & Rosa, 2004).
There is consensus in existing empirical studies that the relationship between population, economic growth and environmental quality is complex, that these factors are intricately interconnected, and that thorough research is necessary for a deeper understanding of this relationship. In this context, a first attempt was made by Ehrlich and Holdren (1972) to formulate a fundamental identity that shows how environmental impact results from the number of people living in an area, their affluence and the implemented technology. The identity is popularly known as IPAT:

I = P × A × T (1)

A detailed discussion on the origin and evolution of this identity is presented in the next section.
The objective of the present study is to identify the important anthropogenic factors and assess the magnitude of their impacts on the environment in India, the most populous and second-fastest-growing economy in the world, using a modified version of the IPAT identity. Ever since India embarked on the path of economic reform in 1991, it has recorded phenomenal economic growth with massive expansion of the industrial and service sectors, and has registered record inflows of FDI, an increased share of domestic investment, sprawling urbanisation and increased consumption levels as a result of rising per capita income. All these economic activities have put tremendous pressure on the country's environmental resources, putting the future growth of the economy in danger. India is the third-largest emitter of GHGs in the world, after China and the USA, in absolute terms (UN, 2007), and considering the rapid economic growth the country has achieved in recent years, this can be expected to increase at an exponential scale if not checked. Though the per capita emissions of the country remain at a very low level (in fact among the lowest in the world), India cannot afford to evade the responsibility of reducing GHG emissions and hence arresting damage to ecosystems. According to the Millennium Assessment, damage to ecosystems affects the welfare of human beings (MA, 2005). More specifically, India's people are often more vulnerable to such effects because a large proportion of the population (more than 60 percent) is directly dependent on primary economic activities such as agriculture, forestry, and fisheries for their survival and well-being.
Little progress has been made in identifying the various anthropogenic factors, and their magnitudes, that drive GHG emissions and other environmental change in India. Empirical studies from countries across the world show varying effects of anthropogenic factors on environmental change, ranging from the dominant effects of population and urbanisation in China (Lin et al., 2010) to economic growth and population in Nigeria (Madu, 2008). Interestingly, Madu (2008) found that urbanisation reduces environmental impacts, implying that modernisation brings about a reduction in environmental impacts. With increasing urbanisation and per capita income, and hence per capita consumption, it would be interesting to assess the relative importance of various factors on environmental impacts in India. Moreover, a large knowledge gap appears to exist as far as the testing of the IPAT model in the Indian context is concerned. To our knowledge, not a single study exists on India in this respect, and hence there is a need to bridge this gap.
The paper is structured as follows. Section 2 presents a brief historical background of the IPAT model and its gradual development. Section 3 explains the data and variables used in the model and discusses the estimation techniques. Empirical results and discussion are presented in Section 4. Section 5 provides some concluding remarks with policy implications.
Methods
As indicated above, Ehrlich and Holdren (1971, 1972) were the first to use the IPAT model to describe the impact of a growing population on the environment. A detailed discussion on the importance of the model and its variants is presented in Chertow (2001). The equation basically explains how a growing population, the affluence of citizens and technological progress affect the environment. Impact on the environment is expressed as the multiplicative product of population, affluence and technology. Hence the identity takes the form

I = P × A × T (2)

where I denotes the environmental impact, P represents the population size, A represents affluence and T stands for the level of technology.
Ever since this identity was developed, researchers have used the model with modifications for a variety of reasons. For a comprehensive discussion on the development of this model see Lin et al. (2009). Dietz and Rosa (1994) suggested a reformulation of the IPAT model, considering the monotonic and rigid nature of the identity. They suggested an alternative stochastic version which allows the use of modern statistical tools used in the social sciences. The stochastic equation is known as STIRPAT: STochastic Impacts by Regression on Population, Affluence and Technology. York, Rosa and Dietz (2003b) introduced an additive regression model in which all variables are in logarithmic form, which facilitates estimation and hypothesis testing. Their paper points out that in the typical application of the basic STIRPAT model, T is included in the error term rather than estimated separately, making it consistent with the IPAT model, where T is solved to balance I, P and A:
ln I = a + b(ln P) + c(ln A) + e (3)

This makes the variables as well as the parameters easier to interpret. The following interpretations can be made based on the coefficients obtained in the model. Impacts that yield a coefficient equal to 1.0 are referred to as unit elastic, which indicates a proportional relationship between the driving force and the impact; a percentage change in the driving force produces an identical percentage change in impact. Coefficients greater than 1.0 suggest an elastic relationship, indicating that an impact increases more rapidly than the driving force. Coefficients less than 1.0 (but greater than 0) are indicative of an inelastic relationship, where impact is less responsive to changes in the driving force. Coefficients may also be negative. Values equal to -1.0 indicate negative unit elasticity, meaning that impact decreases proportionately in response to an increase in the driving force. Values less than -1.0 indicate negative elasticity, meaning that impact decreases in greater proportion to an increase in the driving force. Values less than 0.0 but greater than -1.0 indicate negative inelasticity, meaning that impact decreases in lesser proportion to an increase in the driving force.
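This taxonomy is simply the usual elasticity reading of a log-log coefficient; written out as a generic identity (not an equation taken from the paper):

```latex
b \;=\; \frac{\partial \ln I}{\partial \ln P}
  \;\approx\; \frac{\Delta I / I}{\Delta P / P}
```

so a 1% change in the driving force P is associated with roughly a b% change in the impact I, which is exactly the reading applied above.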
In another paper, York et al. (2003a) suggest that the basic variables can be split further. A presentation by Rosa and Dietz (2010) suggests the use of 11 other independent variables in addition to the basic ones suggested previously. They also suggest that the error term should include not merely the technology variable but also various other factors, such as social, political and cultural ones. This may amount to the "B", the behavioural variables suggested by Schultz (2002).
Data and Variable Description
The data used in this paper were obtained from the data bank of the World Bank (http://databank.worldbank.org). The data are available for the period 1960 to 2007.
As mentioned above, the paper uses the following modified version of the STIRPAT model, which is based on the basic IPAT identity:

ln I = ln A + b(ln P) + c(ln UrP) + d(ln InP) + e(ln GDPpC) + f(ln Ser) + g(ln Ind) + ln U (4)

where I is environmental impact, using carbon dioxide emissions as a proxy (appropriate justification is provided below), P is population, UrP is the percentage of urban population, InP is the percentage of the population in the age group 15-60, GDPpC is per capita income, Ser and Ind are the percentages of GDP from the service and industrial sectors respectively, A is a constant term, U is an error term, and b, c, d, e, f and g are the parameters of the independent variables to be estimated. All variables were converted to natural logarithms for ease of estimation.
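Since the model above is just a linear regression on log-transformed series, the estimation step is easy to sketch. The snippet below is an illustration, not the authors' code: the CSV file and column names are hypothetical placeholders, and statsmodels stands in for the software actually used.

```python
# Minimal sketch of estimating equation (4) by OLS on log-transformed data.
# Hypothetical inputs: an annual CSV with CO2 emissions and the six regressors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("india_1960_2007.csv")  # placeholder file name

y = np.log(df["co2"])  # ln I: CO2 emissions as the impact proxy
X = np.log(df[["pop", "urb_pct", "work_pct", "gdp_pc", "ser_pct", "ind_pct"]])
X = sm.add_constant(X)  # the intercept plays the role of ln A

ols = sm.OLS(y, X).fit()
print(ols.summary())  # the coefficients b..g are the ecological elasticities
```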
The literature suggests that carbon dioxide emissions are highly correlated with ecological footprints (Cole & Neumayer, 2004). In the absence of reliable data on ecological footprints, annual carbon dioxide emissions data are used here. This proxy has been widely used by researchers as an overall measure of impact on the environment (Madu, 2008; Lin et al., 2009). In addition to the population variable in STIRPAT, two more variables, viz. the percentage of urban population and the percentage of working-age population, have been added to the model. The justification for the inclusion of these variables is to control for the effects of urbanisation and the working-age population in particular on the environment. One of the primary reasons for rapid environmental degradation in India is increasing urbanisation, which is driven by the increasing migration of workers from rural areas to urban centres. Similar trends of rural-urban migration have been observed in China and elsewhere (Lin et al., 2009). GDP per capita is used as a proxy for affluence. The percentage shares of GDP from the service and industrial sectors are used to capture the technology as well as affluence aspects of STIRPAT. More importantly, the inclusion of the service sector share as a variable in the model is justified on the grounds that the service sector contributes more to GDP than the manufacturing sector and is generally assumed to pollute less than other sectors of the economy. Hence, it is interesting to examine the effects of the service sector on the environment. We assume that the other aspects of technology, as well as cultural and behavioural aspects, will be captured by the error term. The trends of these variables are presented below.
Figure 1 shows carbon dioxide emissions in India for the period 1960 to 2005. The figure clearly suggests that there has been a steady increase in carbon dioxide emissions in India over the period. It is also true that carbon dioxide emissions are largely driven by energy consumption, in addition to land-use changes and other economic activities. India's energy requirements are met primarily by coal and other fossil fuels, which are carbon-intensive and considered dirty energy. There has been an average annual increase of 5.55% (over 1960-2007), with a total increase of 1237% over the period. Overall, it can be said that the environmental impact of India has increased sharply, especially since the 1980s, and shows no sign of declining.
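As a quick arithmetic cross-check (an illustration, assuming the 1237% total increase refers to the full 1960-2007 span), the implied compound annual growth rate can be computed directly:

```python
# Sanity check: does a 1237% total increase square with ~5.5% annual growth?
total_increase = 12.37            # +1237% means final/initial = 13.37
years = 2007 - 1960               # 47 years
cagr = (1 + total_increase) ** (1 / years) - 1
print(f"implied compound annual growth rate: {cagr:.2%}")  # about 5.7%
# Close to the reported 5.55%, which is presumably an arithmetic mean of
# the year-on-year growth rates rather than a compound rate.
```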
India's population has been growing at an annual growth rate of 2% (Table 1), which is quite high compared to other growing developing countries such as China (only 1%). This is likely to have adverse impacts on the environment, as population growth is likely to increase energy consumption and raise the demand for industrial and agricultural products, which can put more pressure on the environment. The urban population in India has increased at an average annual rate of 1.03% (Table 1), which is reasonably high.
Figure 4 shows the trend in GDP per capita over the years. The GDP of India has grown significantly, especially after 1991. With an average annual growth rate of 2.82%, GDP per capita has recorded an overall increase of almost 280% from 1960 to 2005. Figures 5 and 6 show the trends in the manufacturing and service sectors as percentages of GDP. The industry share has an average annual growth rate of 0.83% and the service sector share an average annual growth rate of 0.71%. The overall growth of the service and industrial sectors is 48% and 100% respectively.
Table 1 presents a summary of the average annual growth rates of all the variables used in the model. Among all the variables, the growth rate of carbon dioxide emissions is the highest (5.5%), followed by the growth rate of GDP per capita.
Results and Discussion
Equation (4) was estimated to assess the impacts of anthropogenic factors on the environment in India using the ordinary least squares (OLS) regression technique. A correlation test of the independent variables was performed and is presented in Table 2. The test results indicate that all the independent variables suffer from high multicollinearity, as can be observed in the correlation matrix in Table 2. The OLS estimate of the STIRPAT model (presented in Table 3) gives Variance Inflation Factor (VIF) values for population, percentage of urban population, population in the age group 16-60, GDP per capita, and percentage of GDP from the service and industry sectors that are higher than the commonly used threshold of 10 (see Table 4), which indicates strong multicollinearity among these variables. In the presence of high multicollinearity, the OLS estimation of the STIRPAT model may not be a good representation of the real relationship between the dependent and explanatory variables. Hence, eliminating the problem of multicollinearity is necessary to obtain better parameter estimates. One way to address the multicollinearity problem is to delete some of the highly correlated variables from the model, which would essentially result in information loss and thereby may affect the reliability of the estimates. However, as suggested by many researchers, one way to overcome the problem of multicollinearity without deleting any variables is to use ridge regression (Hoerl, 1962; Lin et al., 2009).
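The VIF screen described above can be reproduced directly; a sketch follows, reusing the hypothetical design matrix X from the earlier OLS example.

```python
# VIF for each regressor of equation (4); values above 10 flag the strong
# multicollinearity reported in Table 4. Assumes X from the OLS sketch above.
from statsmodels.stats.outliers_influence import variance_inflation_factor

for i, name in enumerate(X.columns):
    if name == "const":
        continue  # the intercept is not a regressor of interest
    vif = variance_inflation_factor(X.values, i)
    print(f"{name:10s} VIF = {vif:8.1f}")
```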
In addition, Björkström (2001) lists the following advantages of ridge regression over other methods for removing multicollinearity: (1) it does not require deletion of variables; (2) it offers better predictability even within the framework of regression models; (3) in ridge regression, several principles are known for selecting the best parameter value, and to the extent that the consequences of these principles have been explored, it has been within the framework of the standard regression model.
This method requires careful selection of an appropriate ridge regression coefficient K. As ridge regression is a biased estimation method, K should be chosen as small as possible while simultaneously yielding small VIFs and stable regression coefficients. In this case, K was varied with a step length of 0.001 within [0, 1]. This was done with the help of SPSS 15.0. The table shows the change in the various parameters with K.
While selecting K, it is also necessary to ensure that the VIF values are reduced to an acceptable range (<10). Stabilization of the various parameters with the value of K is also necessary. The value of K is chosen such that all these conditions are satisfied; the value chosen in this paper is 0.040. The graphs and table below depict the selection procedure.
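The paper performs this sweep in SPSS; the sketch below mirrors the same ridge-trace logic in Python. Note that scikit-learn's alpha is not on the same numerical scale as SPSS's standardized K, so it is the shape of the trace, not the specific value 0.040, that carries over.

```python
# Ridge-trace sketch: sweep the ridge parameter and watch coefficients and
# R^2 stabilise. Assumes X (with constant) and y from the OLS sketch above.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

Xs = StandardScaler().fit_transform(X.drop(columns="const"))
ks = np.arange(0.001, 1.0, 0.001)
traces, r2 = [], []
for k in ks:
    model = Ridge(alpha=k).fit(Xs, y)
    traces.append(model.coef_)
    r2.append(model.score(Xs, y))
# Plotting traces and r2 against ks gives the ridge trace; pick the smallest
# k at which the coefficients flatten out and the VIFs drop below 10.
```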
After the careful selection of K, parameters are estimated using SPSS 15.0. The regression results are presented in Table 6.
Figure 9 shows the variation of R-squared with K, which justifies the selection of K, as R-squared decreases only slightly as K increases.
With regard to the results of the ridge regression, the model as a whole is highly significant, with an R-squared value of 99%. As for the individual variables, except for one (population in the age group 16-60), all the variables turned out to be significant with theoretically consistent positive coefficient signs, which suggests that all the variables have impacted the environment to varying degrees (see Table 7). For instance, population size has an impact coefficient of 0.88. The urban population variable, a proxy for urbanisation, contributes a major part, with an elasticity coefficient of 1.29. Similarly, GDP per capita, a proxy for affluence, has a relatively smaller regression coefficient of 0.38. Interestingly, the service sector has an impact coefficient of 0.70, which is higher than that of the industrial sector (0.417). The higher coefficient of the service sector seems slightly contradictory, as one expects this sector to be cleaner and less polluting than the industrial sector, but given the size of the sector and its contribution to India's total GDP, such a result is plausible.
However, it is interesting to compare the ecological elasticities obtained in this model for India with those found in other studies and countries. The ecological elasticity of population in India is 0.88, which is close to 1 as suggested by the IPAT identity. This is also similar to the global values of 0.972 and 0.922 obtained by York et al. (2003) and Cole and Neumayer (2003) respectively. This suggests that the pressure exerted by population in India is more or less similar to the world average. For China, however, the value is 1.5, which is relatively very high (Lin et al., 2009). The percentage of urban population in India has an ecological elasticity of 1.5, which is quite high compared with the global value of 0.663 (Neumayer, 2003). Since the urban population is expressed as the percentage of the population living in urban areas in India, we have estimated the percentage change in carbon emissions caused by a one percent change in the percentage of urban population, which is equivalent to 0.01 percentage points (0.01 × 1%) of urban population. For example, if the urban population of a country is 25%, a one percent increase of that 25% (i.e., 0.25 percentage points, a change to 25.25%) is expected to increase carbon emissions by 1.5%. The working-age population variable for India has no significant elasticity coefficient, which is again in line with the global trend suggested by Cole and Neumayer (2003).
GDP per capita has a relatively small elasticity coefficient (0.36), whereas IPAT suggests it should be one. This is in contrast to the values of 0.91 and 0.86 found by York et al. (2003) and Neumayer (2003) respectively. This small value might be due to the erratic behaviour of Indian GDP per capita over the years and the relatively large average ratio of carbon emissions to GDP. However, our value is comparable to the value of 0.23 obtained by Lin et al. (2009) for China. As mentioned before, the percentage shares of the service and manufacturing sectors are partly indicative of technological advancement; they have elasticity coefficients of 0.71 and 0.40 respectively.
Though the ecological elasticities give us an idea of the effect of each factor on carbon emissions, the real contribution of each of these determinants can be calculated only by analysing them together with the average annual growth rates of these factors. This is done using a simple formula: the contribution of each factor to the change in emissions is its ecological elasticity multiplied by its average annual growth rate, and its degree of contribution is this product expressed as a share of the sum over all factors. It can be observed from Table 7 that the main driving forces of environmental impact in India are population, urbanisation and GDP per capita, with degrees of contribution of about 32%, 29% and 19% respectively. In combination, they have resulted in an overall environmental impact increase of 4.47%.
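A worked version of this calculation, plugging in the rounded elasticities and growth rates quoted in the text, is sketched below; because the prose quotes slightly different coefficient values in different places, the resulting shares only approximate the Table 7 figures.

```python
# Contribution = elasticity x average annual growth rate; the degree of
# contribution is each factor's share of the total. Values are the rounded
# figures quoted in the text, so results only approximate Table 7.
elasticity = {"population": 0.88, "urban": 1.29, "gdp_pc": 0.38,
              "service": 0.70, "industry": 0.417}
growth_pct = {"population": 2.00, "urban": 1.03, "gdp_pc": 2.82,
              "service": 0.71, "industry": 0.83}

effect = {k: elasticity[k] * growth_pct[k] for k in elasticity}
total = sum(effect.values())
for k, v in sorted(effect.items(), key=lambda kv: -kv[1]):
    print(f"{k:10s} effect = {v:4.2f}%/yr  share = {v / total:5.1%}")
print(f"combined annual impact growth ~ {total:.2f}%/yr")
```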
Conclusions
This paper uses a multiple regression model to study the various anthropogenic determinants of carbon dioxide emissions in India. The results largely confirm those suggested in the literature, with some variations that can be accounted for by reasons specific to India. The following conclusions can be drawn from the results obtained.
First, we found that population is one of the important factors affecting the environment, which is expected given the size and growth of India's population over the last few decades. Moreover, these results, as mentioned earlier, are found across studies in different countries. But this has tremendous significance in the context of India. With an average annual fertility rate of 2.68 percent, the population of India is likely to increase in the coming years. Even if this rate is reduced significantly, the Indian population is likely to increase because of the momentum it has built up over the years. This may have an adverse impact on the environment, as the demand for resources is also likely to increase very rapidly; India already has 1.2 billion people (according to Census 2011 data) and an average population density of 363.5 persons per square km. The pressure exerted by India's population on the environment is important not just for India but for the entire world. This calls for serious thinking about India's demographic planning. A two-child policy, similar to China's one-child policy, could be implemented to control population growth. However, in a democratic country like India this is likely to generate a huge debate among various constituencies, and such draconian measures may not be adopted. Of course, access to education and improved health services, provision of social security measures, and increased awareness will go a long way towards stabilising population growth.
Second, urbanisation in India has the highest ecological elasticity among all the determinant factors considered in the model, meaning that urbanisation is adversely impacting the environment. As mentioned above, for three decades or so India's economic activities have been concentrated in cities, which have become an important attraction for people in rural areas wishing to improve their livelihoods. Rural-urban migration accelerates urbanisation as the demand for housing, transportation and other infrastructure increases. However, it should be noted that unplanned and haphazard urbanisation can lead to inefficient use of energy resources that may hinder efforts to reduce carbon dioxide emissions in India. Hence, sustainable urban planning across Indian states is essential for better management of environmental resources. In addition, the development of infrastructure in rural areas and the generation of employment and livelihood opportunities may help reduce migration and the pressure on the environment.
Third, GDP per capita, which is used as a proxy for affluence, also contributes significantly to environmental impact; carbon dioxide emissions are found to increase with affluence. Though the elasticity coefficient for GDP per capita is relatively low, considering the fast pace of GDP growth, its degree of contribution to environmental impact is found to be high. Fourth, the service and manufacturing sectors contribute significantly to environmental degradation in India. This also suggests that technological change in India has not been very environmentally friendly and that there is tremendous scope for improvement in the technology used in these sectors. More importantly, to achieve sustainable use and management of environmental resources there is a need for a radical change in human behaviour with regard to ecologically sensitive factors. These changes must take place across different economic agents such as consumers, producers and policy makers.
In terms of the limitations and scope of the present study, it should be pointed out that the study uses carbon dioxide emissions as an indicator of overall environmental impact, considering data at an aggregate level. To gain more insight and an in-depth understanding, the study could be extended to the state and even regional levels, which would enable more specific policy suggestions. The study could also be extended by incorporating environmental impacts such as land degradation and deforestation. Though this may be a deviation from the conservative IPAT model, the approach could prove effective and worth attempting.
Table 1. Growth rates of variables from 1960-2005
C-terminal sequences outside the tetratricopeptide repeat domain of FKBP51 and FKBP52 cause differential binding to Hsp90.
Hsp90 assembles with steroid receptors and other client proteins in association with one or more Hsp90-binding cochaperones, some of which contain a common tetratricopeptide repeat (TPR) domain. Included in the TPR cochaperones are the Hsp70-Hsp90-organizing protein Hop, the FK506-binding immunophilins FKBP52 and FKBP51, the cyclosporin A-binding immunophilin CyP40, and protein phosphatase PP5. The TPR domains from these proteins have similar x-ray crystallographic structures and target cochaperone binding to the MEEVD sequence that terminates Hsp90. However, despite these similarities, the TPR cochaperones have distinctive properties for binding Hsp90 and assembling with Hsp90·steroid receptor complexes. To identify structural features that differentiate binding of FKBP51 and FKBP52 to Hsp90, we generated an assortment of truncation mutants and chimeras that were compared for coimmunoprecipitation with Hsp90. Although the core TPR domain (approximately amino acids 260-400) of FKBP51 and FKBP52 is required for Hsp90 binding, the C-terminal 60 amino acids (approximately 400-end) also influence Hsp90 binding. More specifically, we find that amino acids 400-420 play a critical role for Hsp90 binding by either FKBP. Within this 20-amino acid region, we have identified a consensus sequence motif that is also present in some other TPR cochaperones. Additionally, the final 30 amino acids of FKBP51 enhance binding to Hsp90, whereas the corresponding region of FKBP52 moderates binding to Hsp90. Taking into account the x-ray crystal structure for FKBP51, we conclude that the C-terminal regions of FKBP51 and FKBP52 outside the core TPR domains are likely to assume alternative conformations that significantly impact Hsp90 binding.
Hsp90, typically the most abundant cytoplasmic chaperone in vertebrate cells, serves a vital role in cellular signaling by regulating the folding, activity, and stability of a wide range of client proteins, as exemplified by steroid receptors and protein kinases (1). Client protein complexes contain not only Hsp90 but also one or more cochaperones that partner with Hsp90. One class of Hsp90-binding cochaperone is composed of proteins with a characteristic tetratricopeptide repeat (TPR) domain that forms an Hsp90 binding site (2). Among the TPR cochaperones of Hsp90 are Hop/Sti1, an adaptor chaperone that also binds Hsp70 (3,4), protein phosphatase PP5 (5), and members of both the FK506- and cyclosporin A-binding families of immunophilins (6-9). The TPR cochaperones compete for binding the C-terminal region of Hsp90 (10-12), and the highly conserved MEEVD sequence that terminates eukaryotic Hsp90 is a common target for TPR interactions (13-16). Mutation of the MEEVD sequence typically inhibits binding by a TPR cochaperone, although mutations in other C-terminal sequences impact binding by TPR cochaperones differentially (13,17).
The immunophilin-related TPR cochaperones contain peptidylprolyl isomerase domains that also serve as the binding site for immunosuppressive drugs, yet immunophilins can have differential effects on the function of Hsp90 client proteins. In particular, FKBP52 and FKBP51, which share ~70% amino acid sequence similarity, affect hormone binding by the glucocorticoid receptor (GR) in opposing manners. A study in the yeast Saccharomyces cerevisiae, which lacks endogenous counterparts to FKBP52 or FKBP51, has shown that FKBP52, specifically, can elevate GR responsiveness to hormone (18). Although FKBP51 alone does not reduce GR activity in yeast, it can effectively block the potentiation mediated by FKBP52 when coexpressed (18). Scammell and his colleagues (19-21) have shown that cortisol insensitivity observed in New World primates is facilitated by a constitutive overexpression of FKBP51, suggesting a physiological relevance for FKBP interactions with GR·Hsp90 complexes. Bourgeois and colleagues (22) first noted that the gene for FKBP51 is up-regulated by glucocorticoids, and this observation has been confirmed repeatedly by recent gene expression profiles of steroid-responsive genes. The inducibility of FKBP51 by glucocorticoids could provide a mechanism for partial desensitization of cells subsequent to an initial exposure to hormone (23).
It is not clear which structural features are responsible for the differential effects of FKBP52 and FKBP51. Three-dimensional crystallographic structures for FKBP51 have recently been solved (24), but there is no corresponding structure for FKBP52. Based on the 70% amino acid sequence similarity of FKBP51 and FKBP52 and the conservation of apparent domain sequences, there is no reason to suspect a dramatic structural difference. Nonetheless, our previous studies showed that mutations in the C-terminal half of Hsp90 have different effects on binding by FKBP52 versus FKBP51 (13). Also, we observed that an exchange of sequences between the FKBP TPR domains had no effect on Hsp90 binding by FKBP52 but blocked binding by FKBP51 (25). These observations suggested that each FKBP has a distinctive interaction with Hsp90. In the current study, we have taken advantage of the x-ray crystallographic structure of FKBP51 to functionally map sequences of FKBP51 and FKBP52 that are important for Hsp90 binding. We find that sequences in the C-terminal region, both inside and outside the TPR domain, greatly influence FKBP binding to Hsp90.
EXPERIMENTAL PROCEDURES
Preparation of Mutant cDNAs-In vitro expression plasmids containing either human FKBP51 or FKBP52 cDNA inserted into pSPUTK (Stratagene, La Jolla, CA) were used to generate the mutants used in these studies. To construct the C-terminal truncations, stop codons were introduced by site-directed mutagenesis (QuikChange kit, Stratagene). To clarify our naming convention for many of the mutants generated for this study, note that FKBP52, relative to FKBP51, contains a two-amino-acid insert in the loop connecting the FK1 and FK2 domains; thus, the positions of corresponding mutations in the C-terminal halves of the two FKBPs differ by two residues. For example, the FKBP51 truncation mutant N404 contains an engineered stop codon at position 405; the equivalent FKBP52 truncation mutant, N406, contains a stop codon at position 407. The FK mutants were generated by introducing stop codons at position 258 or 260, respectively, which lie in the linker region between FK2 and the TPR domain. The TPR mutants were created by first introducing an F252M or F254M substitution and then removing all upstream coding sequences. FKBP chimeric cDNAs were constructed by two different approaches. The FKBP51-395 (positions 1-395 code for FKBP51 and positions 396-end code for FKBP52) and the converse FKBP52-397 chimeric cDNAs were originally cloned into yeast expression vectors (18). These cDNAs were PCR-amplified from the yeast vectors and subcloned into pSPUTK. Other chimeras were created as follows. A gene fragment encoding the N-terminal portion of the desired chimeric protein was generated by PCR. A second gene fragment encoding the C-terminal portion of the desired chimera was separately generated by PCR. Primers for the two fragments were designed such that the sequences surrounding the desired fusion site were complementary. The resulting DNA products were gel-purified and used as megaprimers in a reaction with the appropriate 5′- and 3′-primers to generate the full-length chimeric cDNA. The final PCR product was subcloned into pSPUTK. Sequence changes in mutated cDNAs and all PCR-generated products were confirmed by automated DNA sequencing.
In Vitro Hsp90 Binding and PR Complex Assembly-The Hsp90 and PR binding abilities of wild-type proteins and mutants were analyzed by a coimmunoprecipitation approach as described previously (25). Briefly, radiolabeled immunophilins were synthesized from plasmid DNA templates in an in vitro transcription/translation system (TNT lysate, Promega). Radiolabeled products were quantitated by densitometry of gel autoradiographs, and an equimolar amount of each radiolabeled product was added separately to 100 μl of rabbit reticulocyte lysate (RL; Green Hectares, Oregon, WI). Each RL sample was added to a 10-μl pellet of protein G-Sepharose (Amersham Biosciences) prebound with 10 μg of H90-10, a specific anti-Hsp90 mouse monoclonal antibody. After incubation on ice for 1 h, unbound proteins were removed by washing three times in 1 ml of wash buffer (20 mM Tris-HCl, pH 7.4, 50 mM KCl, 1% Tween 20). Bound proteins were then extracted with SDS sample buffer and separated by SDS-PAGE. The assembly of PR complexes was analyzed in a similar manner, except that 200 μl of RL was supplemented with an ATP-regenerating system, samples were incubated with 1 μg of recombinant PR bound to PR22-protein A-Sepharose, and incubations were at 30°C for 30 min. After electrophoresis, gels were stained with Coomassie Brilliant Blue to visualize total proteins. Gels were then dried and autoradiographed to visualize radiolabeled proteins, and bands were quantitated by densitometry using a Fluor-S Imager (Bio-Rad).
Yeast Strains, Plasmids, and Methods-FKBP functional assays were performed in S. cerevisiae essentially as described previously (18). All experiments were performed using a GR reporter strain in the W303a background (MATa leu2-112 ura3-1 trp1-1 his3-11, 14 ade2-1 can1-100 GAL SUC2). Parental cells were transformed with a plasmid that constitutively expresses rat GR, a second plasmid that expresses β-galactosidase from a glucocorticoid-regulated promoter, and a third plasmid constitutively expressing one of the human immunophilins. Cells were grown in minimal medium containing 0.67% (w/v) yeast nitrogen base without amino acids, 2% (w/v) glucose, and the appropriate SC supplement mixture (Q-biogene, Carlsbad, CA); for growth on plates the culture medium was supplemented with 1.6% (w/v) agar.
Hormone induction assays were performed as described previously (18). Briefly, yeast strains were grown in selective medium at 25°C, and the absorbance at 600 nm (A600) of the culture was monitored to ensure exponential growth. Reporter gene expression was induced by adding deoxycorticosterone (25 nM final concentration) to log-phase cultures. Starting 70 min later, cells were sampled at 10-min intervals over the next 40 min for reporter activity. β-Galactosidase activity was measured by adding 100 μl of culture to an equal volume of Gal-Screen assay solution (Tropix, Bedford, MA) according to the manufacturer's instructions. The reporter expression rate was calculated as the linear slope of relative light units versus A600/1,000. For each strain tested, the hormone-induced reporter expression rate was determined with two to four independent transformants to ensure consistency of results.
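As an aside, the rate calculation described here is a simple linear fit; a minimal sketch with invented numbers, reading "A600/1,000" as the x-axis scaling, would be:

```python
# Illustrative reporter-rate calculation: slope of relative light units
# (RLU) against A600/1000 over the 40-min sampling window. Data invented.
import numpy as np

a600 = np.array([0.52, 0.55, 0.58, 0.61, 0.64])           # culture density
rlu = np.array([1200.0, 1900.0, 2600.0, 3300.0, 4050.0])  # Gal-Screen signal

slope, intercept = np.polyfit(a600 / 1000.0, rlu, deg=1)
print(f"reporter expression rate = {slope:.3g} RLU per A600/1000")
```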
RESULTS
The three-dimensional crystallographic structure for FKBP51 (Protein Data Bank code 1KT1) depicted in Fig. 1A reveals three major structural domains. Of the two FKBP12-like domains, peptidylprolyl isomerase and FK506 binding activities reside in FK1 (24). The Hsp90-binding TPR domain has a structure similar to that of other described TPR domains, in particular the Hsp90-binding domains from PP5 (26), Hop (16), and CyP40 (27). The consensus TPR motifs terminate with helix 6 (H6, Fig. 1B); however, similar to the structures for PP5 and CyP40, there is a seventh helix that extends beyond the core TPR domain for a minimum of 23 amino acids. How much further H7 may extend is unclear because the final 36 amino acids could not be resolved from the crystal data. Although a crystal structure for FKBP52 has not been reported, it seems likely that it will share the same overall domain organization as FKBP51.
FKBP Regions Required for Hsp90 and Receptor Interactions-As presented in Fig. 2, an initial set of FKBP51 and FKBP52 mutants (Fig. 2A) was surveyed to compare interactions with Hsp90 and steroid receptor complexes. Coimmunoprecipitations of radiolabeled FKBP51 mutants (Fig. 2B) and FKBP52 mutants (Fig. 2C) were used to monitor Hsp90 binding (left panels) and assembly into PR complexes (right panels). We consistently found that incorporation of FKBP51 into Hsp90 complexes exceeded that of FKBP52 by 2-3-fold (compare lanes 1 and 2 in each data set). In line with previous observations (25), FKBP51 recovery in PR complexes exceeded the recovery of FKBP52 by 5-fold or more (compare lanes 8 and 9 in each set). The TPR truncation mutants, which included the TPR domain plus C-terminal sequences, were sufficient for binding Hsp90 (lane 3) and assembling with PR complexes (lane 10). The TPR region was necessary for binding Hsp90 and assembling with PR, because the FK domains showed no interactions (lanes 4 and 11 in each set). The importance of the TPR domain is further demonstrated by point mutation of one of the carboxylate clamp residues (K352A for FKBP51 and K354A for FKBP52), which abrogates Hsp90 binding and PR association (lanes 5 and 12). To test whether C-terminal sequences influence protein interactions, truncation mutants (N404 and N406) were generated that lacked sequences within the H7 extension (see Fig. 1B) and beyond. Despite retention of the core TPR domain in these constructs, Hsp90 binding (lane 6 in both sets) and PR association (lane 13 in both sets) were largely abrogated. The final samples in this mutant series were a pair of chimeric constructs (395 for FKBP51 and 397 for FKBP52) in which the region from the H7 extension through the C terminus was exchanged between the FKBPs. This exchange had little effect on FKBP52 (Fig. 2C, lanes 7 and 14) but greatly diminished interactions of FKBP51 with Hsp90 and PR (Fig. 2B, lanes 7 and 14). In an earlier study (25) in which the C-terminal region was retained but the TPR motifs were exchanged, we observed a similar phenomenon: the FKBP52-based construct functioned normally, but the FKBP51-based construct lost the ability to bind Hsp90 or assemble with PR complexes.
FIG. 2. Mutant FKBP interactions with Hsp90 and steroid receptor. A, the diagrams depict a series of mutant forms generated for either FKBP51 or FKBP52, respectively. In descending order, these are the full-length wild type protein (wt), a truncation mutant containing the TPR domain through the C terminus (TPR), a truncation mutant containing the N terminus through the FK domains (FK), a point mutant in the TPR domain (K352A or K354A), a truncation mutant that terminates just beyond the TPR domain (N404 or N406), and a chimera in which the C-terminal sequences were exchanged (chimera 395 or 397). B, the ability of mutant forms of FKBP51 to bind Hsp90 (left panels) and assemble in vitro with PR complexes (right panels) is shown in comparison with human FKBP52. Radiolabeled proteins were generated by in vitro expression, and equivalent amounts of each form were added separately to RL before immunoprecipitation of Hsp90 complexes or assembly of PR complexes. The top panel in each set is a Coomassie-stained gel image of total proteins recovered in each sample. Major protein bands, including antibody heavy chains (HC), are identified. The middle panel (35S bound) is an autoradiograph of the stained gel which reveals bound FKBP forms. The bottom panel (35S input) is an autoradiograph of a separate gel that demonstrates the relative level of radiolabeled protein added to each sample. C, same as B except the corresponding FKBP52 mutants replaced the FKBP51 mutants. D, the ability of FKBP52 mutant proteins to enhance GR signaling was measured in a yeast strain that constitutively expresses rat GR and expresses β-galactosidase in a hormone-inducible manner. The reporter strain was transformed with an empty vector (Vector) or vector constitutively expressing FKBP51 or the indicated FKBP52 form. For each strain the linear rate of increase in β-galactosidase activity was measured over a 40-min period beginning 70 min after the addition of deoxycorticosterone (25 nM final concentration). Reporter activities were normalized to that seen with yeast expressing FKBP51, and the results shown are typical for each of four independent isolates of each strain. Western immunostains confirmed that each FKBP form was present at approximately equivalent levels in total cell extracts (not shown).

Taking advantage of the yeast model for FKBP52-dependent potentiation of GR signaling (18), we further tested FKBP52 constructs for function in vivo (Fig. 2D).
First, note that FKBP52 significantly enhances hormone-induced β-galactosidase activity compared with yeast expressing FKBP51 or lacking either FKBP (Vector). None of the FKBP52 mutants displayed potentiation of GR signaling except the C-terminal chimera 397. This is consistent with the dual requirement for Hsp90 binding and FK1 peptidylprolyl isomerase activity for FKBP52 to elevate GR hormone binding affinity (18). Because 397 retains wild type activity in this assay, it appears that specific sequences in the tail region of FKBP52 are not critical for its function in GR potentiation.
Hsp90 Binding by C-terminal Truncation Mutants-The defect in Hsp90 interactions apparent with FKBP51-N404 and FKBP52-N406 was unexpected and raised the question of which sequences downstream from the core TPR domain are minimally required for Hsp90 binding. To begin addressing this question, a series of FKBP truncation mutants was generated which focused on this region, and these were tested for coimmunoprecipitation with Hsp90. Densitometric measurements were taken from gel autoradiographs similar to those in Fig. 2, and these data were plotted as shown in Fig. 3. Significant recovery of FKBP51 mutants in Hsp90 complexes (solid circles) began with constructs that included residues 415-420, near the downstream end of H7. Full Hsp90 binding was only observed with constructs containing sequences beyond 430.
Truncations of FKBP52 (open circles) differed from FKBP51 mutants in two ways. First, note that there is a leftward shift in the Hsp90 binding curve for the FKBP52 series compared with the FKBP51 series. Thus minimally sized FKBP52 truncations that retain measurable levels of Hsp90 binding are 5-8 amino acids shorter than the smallest FKBP51 constructs that retain Hsp90 binding. The second difference pertains to the influence of sequences beyond position 430. Whereas the C-terminal region heightened Hsp90 binding by FKBP51, there appears to be a modest inhibition of Hsp90 binding by the corresponding region of FKBP52. Mutants terminating at approximately position 430 bind Hsp90 equally. However, the inclusion of sequences beyond this point moderated binding of FKBP52 to Hsp90 but enhanced binding by FKBP51.
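The binding curves discussed here come from normalizing band densitometry so that full-length FKBP51 defines 100% binding. A minimal sketch of that normalization, with hypothetical band intensities standing in for the measured densitometry values, might look like this:

```python
# Illustrative sketch only: arbitrary-unit band intensities are rescaled so
# that the wild-type FKBP51 signal defines 100% Hsp90 binding, as in Fig. 3.
def normalize_binding(densities, reference_key="FKBP51_wt"):
    ref = densities[reference_key]
    return {name: 100.0 * value / ref for name, value in densities.items()}

# Hypothetical intensities for a truncation series (not the study's data)
raw = {"FKBP51_wt": 8400, "FKBP51_N430": 7900, "FKBP51_N420": 3100, "FKBP51_N404": 250}
print(normalize_binding(raw))  # e.g. {'FKBP51_wt': 100.0, 'FKBP51_N430': 94.0, ...}
```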
FIG. 3. Binding of FKBP truncation mutants to Hsp90. Truncation mutants of FKBP51 (solid circles) and FKBP52 (open circles) terminating at the indicated amino acid were compared for interactions with Hsp90. Radiolabeled protein products were added to RL before immunoprecipitation of Hsp90 complexes. Binding levels were quantitated by autoradiography of gel-separated samples and plotted relative to full-length FKBP51 (defined as 100% binding). The diagram above the plot illustrates the corresponding positions for H7 and the C-terminal sequences of FKBP51.

Hsp90 Binding by FKBP51 Chimeras Containing C-terminal Portions of FKBP52-In marked contrast to the corresponding FKBP52 chimeras, swapping either the TPR domain (25) or the adjacent C-terminal tail (chimera 395 in Fig. 2B) abrogated FKBP51 binding to Hsp90. Swapping both regions together had no effect on Hsp90 binding. These observations suggest important intramolecular interactions between the C-terminal tail region and the TPR domain which are distinctive in FKBP51 and FKBP52. As shown in Fig. 4, additional FKBP51 chimeras focusing on the TPR domain boundary from H5 through H7 were constructed to probe more thoroughly for potential interactions. Comparing hydrophobicity patterns in this region from FKBP51 and FKBP52, the two proteins are similar (Fig. 4A), although there is only 50% amino acid sequence identity. Another series of FKBP51 chimeras was generated in which an exchange occurred at one of eight positions along H6, H7, or just beyond H7 (seven of these sites are indicated in Fig. 4A). Radiolabeled products were synthesized and compared for coimmunoprecipitation with Hsp90 (Fig. 4B). As observed consistently, recovery of wild type FKBP51 in Hsp90 complexes exceeded that of wild type FKBP52 (compare first two lanes). Chimeras containing the H6 region and beyond from FKBP52 (365, 374, and 378) retained Hsp90 binding. When the exchange occurred within H7 (395, 404, and 413), there was a complete loss of Hsp90 binding. Hsp90 binding was again observed when the exchange was restricted to sequences beyond 422. To reiterate, we observed no defect in Hsp90 binding by FKBP52 chimeras that contained similar tail regions from FKBP51 (for example, for results with FKBP52 chimera 397, see Fig. 2C). Collectively, these observations suggest that there is an interaction within the region from H6 through H7 extended which is uniquely required by FKBP51 for Hsp90 binding.

FIG. 4. Binding of FKBP chimeric proteins to Hsp90. A, the H5-H7 regions (amino acids 348-421, which is the final resolved residue in the crystal structure) of FKBP51 and FKBP52 are illustrated with coloring to denote hydrophobicity (blue indicates hydrophobic side chains; red, hydrophilic). The FKBP52 structure is modeled on the solved structure for FKBP51. The numbers on FKBP51 indicate six of the eight positions at which C-terminal sequences from FKBP52 were placed to generate chimeric proteins. Downstream chimeras started at 422 and 438. B, FKBP51 chimeras containing the tail sequences from FKBP52 were compared with wild type FKBPs for coimmunoprecipitation with Hsp90. Radiolabeled products were added to RL before immunoprecipitation of Hsp90 complexes. The top panel is an image from a Coomassie-stained gel separation of each immunoprecipitate. The numbers above the lanes indicate the final FKBP51 residues prior to the switch to FKBP52 sequences. The middle panel (bound) is an autoradiograph of the stained gel used to detect bound FKBP forms. The bottom panel (input) is a separate autoradiograph that demonstrates the relative levels of radiolabeled protein added to each sample. C, the Hsp90 binding data were plotted for FKBP51 chimeras terminating with FKBP52 sequences (solid circles) and for FKBP52 and the FKBP52-397 chimera (open circles). Also plotted are binding data for truncation mutants of the FKBP51-395 chimera (solid triangles). The level of Hsp90 binding to full-length FKBP51 was defined as 100%. The diagram above the plot illustrates the corresponding positions of H5, H6, and H7 in the FKBP51 crystal structure.
An additional analysis was undertaken in which a series of C-terminal truncation mutants was generated from the defective chimera FKBP51-395. Similar to previous binding assays, truncations of 395 were examined for coimmunoprecipitation with Hsp90 on gel autoradiographs. The quantitated data were plotted in Fig. 4C (solid triangles). Also plotted in this figure are the quantitated data for FKBP51 chimeras (solid circles) that were examined in Fig. 4B and for FKBP52 and the FKBP52-397 chimera (open circles). Consistent with the behavior of FKBP52 truncation mutants (Fig. 3), the FKBP52-397 chimera displayed greater binding than wild type FKBP52 (compare open circles), presumably because of the loss of inhibitory sequences beyond position 430 of FKBP52. As seen from the gel data in Fig. 4B, the FKBP51 chimeras (solid circles) had a pattern of loss then recovery of Hsp90 binding as the fusion point in chimeras progressed from H6 through H7. Interestingly, truncations of FKBP51-395 (triangles) showed that Hsp90 binding could be partially restored in constructs that terminate between 410 and 430. The sharp boundary at approximately 410 corresponds well with the Hsp90 binding boundary observed with FKBP52 truncation. Likewise, the trailing boundary near 430 corresponds to the inhibition boundary deduced from FKBP52 truncation mutants.
Functional Analysis of FKBP52 Sequences in the 400-420 Region of H7-Experimental results with FKBP52 and FKBP51 mutants point to the extended portion of H7 (amino acids 400-420) as being important for Hsp90 binding and FKBP function. This region of FKBP52 was analyzed further, as shown in Fig. 5. Alignment of FKBP sequences in this region highlights the conservation of amino acids 406-415 (Fig. 5A, FKBP52 numbering), a 10-amino acid stretch consisting of a highly charged segment followed by YANMF. A series of FKBP52 truncation mutants that target this region was generated and tested for Hsp90 binding and assembly with PR complexes (Fig. 5B). As seen previously (Fig. 2), binding to Hsp90 and assembly into receptor complexes were greatly reduced with N406, but both interactions were restored when the C terminus was extended to 414 and beyond. Yeast GR reporter strains were generated to correlate protein-protein interactions in vitro with FKBP52 function in vivo (Fig. 5C). Corresponding with weak Hsp90 and PR interactions, N406 and N410 lacked the ability to potentiate GR signaling; however, potentiation was significantly boosted with N414 and larger mutants in parallel with Hsp90 and PR interaction patterns.
FIG. 5. Minimal FKBP52 sequences required for protein interactions and function. A, FKBP sequences that correspond to the extended portion of H7 are aligned. Arrows denote the positions for a series of FKBP52 truncation mutants that terminate in this region. B, FKBP52 wild type (wt) and H7 truncation mutants were compared for coimmunoprecipitation with Hsp90 and assembly with PR complexes. Shown are images from Coomassie Blue-stained gels (top panels), the corresponding autoradiographs (middle panels), and separate autoradiographs of synthesis products (bottom panels). C, the yeast GR reporter strain was separately transformed with empty vector (Vector) or plasmids expressing each form of FKBP52, as indicated. Hormone-induced reporter gene activity was measured in each strain, and results were normalized to the activity observed in the vector strain. The results are typical for multiple independent strain isolates.

Sequence databases were accessed to determine whether other TPR-dependent Hsp90 cochaperones share a motif similar to the 406-415 segment. For each cochaperone we selected sequences that lie in the same relative downstream juxtaposition to TPR motifs. These juxta-TPR sequences are aligned in Fig. 6. Included in the comparison are the Hsp90-binding FKBP family members FKBP52, FKBP51, FKBP36 (28), and Xap2/AIP (29,30). CyP40 and PP5 sequences are present, as well as sequences from Hop (31) and CHIP (32). As shown above the alignment, an 11-amino acid motif, what we term the charge-Y motif, was identified with the consensus organization −+−+XΦYXXMF, where − represents Glu or Asp, + represents Lys or Arg, Φ represents a hydrophobic amino acid, and X represents any amino acid. There is also a negatively charged amino acid 5 positions further downstream which may relate to the consensus. Six of the nine Hsp90 cochaperones match the consensus in at least four of nine positions (indicated by left arrowhead).
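To make the consensus concrete, the following sketch scores a candidate juxta-TPR window against the charge-Y consensus. The residue classes follow the definitions above; the scoring function, the hydrophobic alphabet, and the example sequence are illustrative assumptions of ours, not tools used in the paper.

```python
# A hedged sketch of scoring a sequence window against the charge-Y consensus
# -+-+X(Phi)YXXMF, plus an acidic residue 5 positions beyond the 11-mer.
# Per the text: "-" is Asp/Glu, "+" is Lys/Arg, Phi is hydrophobic, X is any.
ACIDIC = set("DE")
BASIC = set("KR")
HYDROPHOBIC = set("AVLIMFWYC")  # assumed hydrophobic alphabet

# (index within the 11-mer, allowed residues); "X" positions are unscored.
CONSTRAINTS = [(0, ACIDIC), (1, BASIC), (2, ACIDIC), (3, BASIC),
               (5, HYDROPHOBIC), (6, {"Y"}), (9, {"M"}), (10, {"F"})]

def charge_y_score(window):
    """Count consensus matches out of nine in a 16-residue window:
    eight constrained motif positions plus the downstream acidic residue."""
    motif = window[:11].upper()
    score = sum(motif[i] in allowed for i, allowed in CONSTRAINTS)
    score += window[15].upper() in ACIDIC  # acidic residue 5 beyond the 11-mer
    return score

# Hypothetical window built around the conserved YANMF stretch noted above;
# by the criterion in the text, a score of 4 or more would match.
print(charge_y_score("EKEKAVYANMFGSGLE"))  # -> 9 for this idealized example
```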
DISCUSSION
FKBP52 and FKBP51 are closely related Hsp90-binding cochaperones, yet they have distinctive patterns of interaction with Hsp90 and Hsp90 client proteins such as steroid receptors. In an effort to understand better the structural basis for distinctive interactions with Hsp90, we performed a mutagenic analysis of sequences in FKBP51 and FKBP52 which impact Hsp90 binding. Both immunophilins have a core TPR domain that is necessary for binding to Hsp90, but this core domain is not sufficient for full binding to Hsp90 (Fig. 2). We have identified a conserved region, the charge-Y motif, which lies immediately downstream from the TPR domain and is required for Hsp90 binding (Figs. 3 and 5). A similar sequence is found in some other Hsp90-binding TPR proteins (Fig. 6). Further downstream in either FKBP, unique sequences within the final 30 amino acids appear to distinguish the relatively higher Hsp90 binding affinity of FKBP51 compared with FKBP52 (Figs. 3 and 4). The FKBPs are distinguished further by an intramolecular interaction peculiar to FKBP51 which involves H7, the final helix in the core TPR domain, and the adjacent charge-Y motif (Fig. 4). Thus, the Hsp90 binding properties of these two TPR cochaperones result from a combination of the core TPR domain and the influence of C-terminal sequences outside this domain. Consistent with the notion of alternative modes of interaction with Hsp90, point mutations in the C-terminal region of Hsp90 have distinct effects on the binding of individual TPR cochaperones (13,17). This suggests that cochaperones might interface with distinct structural features of Hsp90 in addition to a common interaction with the C-terminal MEEVD.
Jackknife Model for Charge-Y Motif Participation in Hsp90 Binding-According to the crystal structure for FKBP51 (Fig. 1), the charge-Y motif lies in a portion of H7 that extends beyond the core TPR domain. As depicted in Fig. 7, we hypothesize two alternative conformational states of H7 sequences that could account for our mutagenic data. In the first state (Fig. 7A), H7 exists in the extended conformation that is consistent with the FKBP51 crystal structure. Here, charge-Y could directly contact Hsp90 and complement or facilitate binding through the core TPR domain. However, it is difficult to reconcile this conformational state with data from FKBP51 chimeras which argue for matching of sequences between the core and extended portions of H7 (Figs. 2 and 4). Yet based on the FKBP51 crystal structure, there is unlikely to be direct contact between these regions. We propose the alternative possibility (Fig. 7B) that H7 may be disrupted in some circumstances, forming an eighth helix that continues the anti-parallel pattern of interactions observed with H1 through H7. In this conformational state, the putative H8 may contribute to core TPR domain interactions that enhance TPR affinity for Hsp90.
Although an anti-parallel H8 is not observed in the static FKBP51 crystal structure, crystal packing may not have favored a conformation of FKBP51 that exists in solution or that is induced by binding to Hsp90. Furthermore, there are precedents for alternative conformational states in TPR domains such as proposed in Fig. 7. Walkinshaw and colleagues (27) obtained two crystal forms for CyP40 from which two TPR conformations were resolved. One structure contained a TPR domain similar to PP5, Hop-TPR2a, and FKBP51. In the alternate structure, the loops separating H2-H3 and H3-H4 have been reorganized into α-helical forms, thus resulting in a single, greatly extended H2. A similar phenomenon was observed in the crystal structures for the peroxisomal TPR protein PEX5. The structure for human PEX5 has the canonical anti-parallel helix arrangement (33). In contrast to this structure, Kumar et al. (34) obtained a structure for trypanosome PEX5 in which H5 and H6 were fused into a single extended helix. They proposed the possibility that certain TPR domains may naturally assume alternative conformations through extension versus folding back of helices, what they termed the "jackknife model" for TPR motif rearrangement (34). The alternate FKBP structures proposed in Fig. 7 follow this jackknife model. Chimeric data suggest that FKBP51 may assume the closed conformation (Fig. 7B), whereas FKBP52 binds Hsp90 in an open conformation (Fig. 7A). One can visualize in the closed conformation how the charge-Y motif and adjacent amino acids may interact with amino acids in the core portion of H7 or even with side chains extending from H6. The C-terminal 30-35 amino acids that are unresolved in the FKBP51 crystal structure are not illustrated in Fig. 7. However, this region also impacts Hsp90 binding (Figs. 3 and 4), enhancing binding by FKBP51 and moderating binding by FKBP52. In the jackknife model, this tail region would have to undergo a dramatic positional swing, but such a difference perhaps contributes to influences of the tail on Hsp90 binding.
Further structural studies are needed to test the jackknife model, and alternative explanations for our observations remain viable. For example, the charge-Y motif lies within a region that is purported to be a calmodulin binding site in FKBP52 (35). We considered whether Ca2+/calmodulin interactions could either influence the conformational state of this region or provide an alternative mechanism for distinguishing FKBP51 and FKBP52 interactions with Hsp90. However, we think a role for calmodulin is unlikely for several reasons. First, the putative calmodulin binding motif scores very weakly for FKBP52 and FKBP51 when analyzed in the Calmodulin Target Database (calcium.uhnres.utoronto.ca/ctdb/flash.htm). Second, we have never observed calmodulin in FKBP complexes with Hsp90 or steroid receptors. Finally, neither Ca2+ nor the Ca2+ chelators EGTA or EDTA alter FKBP binding to Hsp90 (results not shown).
General Significance of TPR Cochaperone Interactions with Hsp90 and Client Proteins-Hsp90 serves a large number of client proteins that regulate cellular pathways, and in every case examined one or more cochaperones accompanied Hsp90 in client protein complexes. Individual clients displayed distinct preferences for certain Hsp90 cochaperones. For example, among the TPR cochaperones, Xap2/AIP is found in aryl hydrocarbon receptor complexes but not steroid receptor or kinase complexes (29,30). FKBP51, FKBP52, PP5, and CyP40 associate differentially with progesterone, estrogen, and glucocorticoid receptor complexes in a receptor-specific manner (25,36,37). Preferential assembly with a client may be caused by direct interactions between client and the Hsp90 cochaperone (24), but the cochaperone may also influence how Hsp90 interfaces with client, either stabilizing or destabilizing client interactions relative to other Hsp90/cochaperone pairs. Thus, the TPR cochaperones that competitively interact with the MEEVD terminus of Hsp90 might, through unique contacts with Hsp90, induce distinct structural/functional changes in Hsp90 which elaborate the chaperoning of client proteins.
Influence of Soil Type and Temperature on Nitrogen Mineralization from Organic Fertilizers
Organic vegetable producers in Georgia, USA, utilize a range of amendments to supply nitrogen (N) for crop production. However, differences in soil type, fertilizers and environmental conditions can result in variability in N mineralization rates among commonly utilized organic fertilizers in the region. In this study, the effects of temperature on N mineralization from three commercial organic fertilizers [feather meal (FM), pelleted poultry litter (PPL) and a mixed organic fertilizer (MIX)] in two soil types from Georgia, USA (Cecil sandy clay loam and Tifton loamy sand) were evaluated for 120 d. Net N mineralization (Net Nmin) varied with soil type, fertilizer and temperature. After 120 d, Net Nmin from the FM fertilizer ranged between 41% and 77% of total organic N applied, the MIX fertilizer ranged between 26% and 59% and the PPL fertilizer ranged between 0% and 22% across all soil types and temperatures. Incubation at higher temperatures (20 °C and 30 °C) impacted Net Nmin of FM fertilizer in the Tifton series soil. Temperature and soil type had a relatively minor impact on the potentially mineralizable N of the PPL and MIX fertilizers after 120 d of incubation; however, both factors impacted the rate of fertilizer release shortly after application, which could impact the synchronicity of N availability and plant uptake. Temperature-related differences in the mineralization of organic fertilizers may not be large enough to influence a grower's decisions regarding N fertilizer inputs for vegetable crop production in the two soils. However, organic fertilizer source will likely play a significant role in N availability during the cropping season.
Introduction
Continued growth in certified organic vegetable production in the USA [1] has led to increased demand for alternatives to chemical fertilizers for plant nutritional needs. A range of organic materials are commonly used in agricultural systems to supply plant-available nitrogen (N), including cover crop residues, composts, manures and commercial organic fertilizers derived from various animal or plant sources [2,3]. However, unlike readily soluble synthetic fertilizers, the availability of nutrients from organic amendments and fertilizers can be highly variable [4]. Organic fertilizers may include some readily available inorganic N, ammonium (NH4) and nitrate (NO3), but the majority of N present is in the organic form and must undergo N mineralization to supply plant-available nitrogen during the growing season [5]. Failure to coordinate the timing of N release from organic fertilizers with the nutritional demands of crops may result in N loss and reduced productivity. Thus, it is important to accurately predict N mineralization rates from organic fertilizers and evaluate what factors may impact mineralization [6][7][8].
Organic fertilizers differ widely in physiochemical characteristics, and N mineralization rates can vary depending on these physiochemical properties. Total particle size, total N concentration, initial inorganic N levels and carbon to nitrogen (C:N) ratio can all affect the N mineralization of organic fertilizers [5][6][7][8][9][10]. In vitro incubation and field-based studies have shown that net N mineralization (Net Nmin) among different organic fertilizers can vary between 20% to 93% of the total N applied [5,[11][12][13]. However, the physiochemical characteristics of the materials themselves do not explain all the variability observed in N mineralization among fertilizers, and further information is needed to properly predict N release after application [14,15].
Nitrogen release from organic fertilizers has also been shown to be impacted by environmental conditions and the properties of the soils to which they are applied [14]. Microorganisms in the soil promote N mineralization through enzymatic reactions, which are primarily controlled by temperature and water content [16][17][18]. In general, the rate of mineralization in the soil increases with temperature up to a maximum and then declines [11,19,20]. Therefore, the season in which crops are planted and weather fluctuations during the season or across regions may impact the rate of N mineralization in the soil [17]. Several studies have evaluated the effects of temperature on N mineralization from soil organic matter [17,18,21,22]. However, research on N mineralization from organic fertilizers suggests that N transformations at different temperatures are likely to be dependent on a complex interaction between soil microorganisms and the mineralizable substrates in the amendment [11,23,24]. Factors such as the C:N ratio of the organic materials have been reported to be closely linked to the N mineralization of a range of organic amendments [3]. Temperature has also been shown to influence N availability, depending on material composition [14]. Studies suggest that N mineralization of high-N-containing organic materials is primarily influenced by temperature during the initial days after incorporation, whereas low-N-containing materials are more susceptible to temperature variations over an extended period [11,12,14,25].
In addition to environmental conditions, soil physical and/or chemical properties can alter the N mineralization of organic materials due to changes in the soil microbiota and the enzymatic reactions that drive the N release [10,14,26]. Further, soil type can significantly affect N mineralization [27][28][29]. In general, fine-textured soils with high clay contents have been reported to have lower N mineralization rates than coarse-textured, sandy soils, which is likely due to factors such as physical isolation of organic matter by clay particles or entrapment in small pores, where a substrate may be inaccessible to microorganisms [28,30]. Cassity-Duffey et al. [7] determined that, after 100 d of incubation, soil texture had an impact on the rate of N release but had minimal effects on total mineralizable N for two organic fertilizers, a feather meal (FM) and a pelleted fertilizer blend. Similarly, Lazicki et al. [14] found that soil texture and management history did not consistently affect total Net Nmin from 22 different organic materials but may have influenced the rate of N release. Recently, de Jesus et al. [31] reported that location (soil type and weather conditions) impacted plant N accumulation for onions (Allium cepa L.) grown with a mixed organic (MIX) and pelletized poultry litter (PPL) fertilizers. In addition to N uptake by plants, residual soil inorganic N was greater at harvest when using a MIX fertilizer compared to a PPL product. Increased N uptake and soil accumulation were also linked to increased yields for onions grown with a MIX fertilizer in some locations and years. These results suggest that N mineralization may differ between organic fertilizer source, soil type and weather conditions, ultimately impacting the yield of the crop grown [31]. Further, mineralization of organic fertilizers at soil temperatures encountered during the production of onion during winter months in the Southeastern USA (<10 °C) has not been previously studied.
Understanding the factors affecting total mineralization and the rate of N release from organic fertilizers is important to determine the rate and timing of application [32]. Models have been proposed to simulate the N release from organic soil amendments in the field, but data are lacking specific to commonly used commercial organic fertilizers [11,23,[33][34][35]. Although different kinetics may be utilized to predict N availability, first-order kinetics are often used to predict the N mineralization potential (N0) from organic amendments, where the rate at which N mineralizes is assumed to be proportional to the amount of soil N available for mineralization [26,36]. An accurate estimate of N mineralization from organic fertilizers and the impact of temperature on the rate of mineralization would be a useful tool for organic vegetable growers. Thus, the objectives of this study were to determine the effects of four incubation temperatures and two soil textures on three commercially available organic fertilizers used for vegetable production in Georgia, USA through a 120 d laboratory incubation.
Soil and Organic Fertilizers
The soils used in this study were representative of two common agricultural soils in Georgia, USA. Soils were collected from US Department of Agriculture (USDA) certified organic land at the University of Georgia (UGA) Durham Horticulture Farm in Watkinsville, GA, USA (33°53′4.812″ N, 83°25′9.9876″ W) and at the UGA Horticulture Farm in Tifton, GA, USA (31°28′8.112″ N, 83°32′54.348″ W). The soil of the Watkinsville, GA, USA location is a Cecil sandy clay loam series (0% to 2% slope), while the Tifton, GA, USA location was a Tifton loamy sand series (2% to 5% slope) [37]. Approximately 20 kg of soil was collected from each location (0-15 cm depth) and passed through a 4-mm sieve. Soil samples were stored in 5-gallon buckets and kept aerated at room temperature prior to initiation of the incubation.
The maximum water holding capacity (WHC) of the soils was estimated through saturation and draining over a sand bath for 48 h [38,39], and was 0.25 g H2O·g−1 soil and 0.30 g H2O·g−1 soil for the Tifton and Cecil soils, respectively. Soils were analyzed for total N, total C, phosphorus (P), potassium (K), magnesium (Mg) and calcium (Ca) concentrations, pH and cation exchange capacity (CEC) at a commercial laboratory (Waters Agricultural Laboratories, Camilla, GA, USA) (Table 1). Three commercial fertilizers, a feather meal (FM) (13N-0P-0K; Mason City By-Products, Mason City, IA, USA), a pelleted poultry litter (PPL) (5N-1.8P-2.5K; Harmony Organic Fertilizer; Environmental Products LLC, Roanoke, VA, USA) and a mixed organic fertilizer (MIX) (10N-0.88P-6.6K; All Season Organic Fertilizer; Nature Safe, Irving, TX, USA) composed of feather-meat-bone-blood meal, were sourced from a commercial supply company (7 Springs Farm Supply, Check, VA, USA). The total N and C of the organic fertilizers were determined by combustion according to the AOAC 993.13 method, while NO3 and NH4 were determined by distillation using AOC 2.068 and AOC 2.065 methods, respectively, at a commercial laboratory (Waters Agricultural Laboratories, Camilla, GA, USA) (Table 2).
Laboratory Incubation Study
To determine the rate of mineralization from the three organic fertilizers, a soil incubation study was performed for 120 d. Soil was wetted to 50% of estimated maximum WHC (0.12 g H2O·g−1 soil and 0.16 g H2O·g−1 soil for the Tifton and Cecil series soils, respectively) and allowed to pre-incubate under aerobic conditions for 48 h to mitigate the initial flux of mineralization that typically occurs during soil rewetting [40]. Organic fertilizers were applied to soils at a rate to supply 100 mg N·kg−1 soil, based on the fertilizer labels, and Net Nmin was later determined using the N values determined for each fertilizer by combustion (Table 2). The organic fertilizers were added to 300 g dry equivalent soil in resealable polyethylene bags (1 Quart Ziploc Freezer Bag, SC Johnson, Racine, WI, USA) and mixed thoroughly. There were 96 individual bags representing each fertilizer treatment (three fertilizers and a non-treated control), two soil types and four incubation temperatures, with each combination replicated three times and arranged in a split plot randomized complete block design, with incubation temperature being the main plot and fertilizer and soil type the subplots. Bags were incubated at 4, 10, 20 and 30 °C for 120 d. These temperatures are reflective of the range of soil temperatures commonly encountered during year-round field production of vegetables in the Southeastern USA. The bags were aerated, and water content was maintained gravimetrically every 2 to 3 d. To determine the rate of release of inorganic N, 5 g subsamples of soil were taken at 0, 2, 4, 7, 14, 35, 56, 85 and 120 d. Subsamples were extracted with 40 mL 1 M KCl, shaken for 30 min and passed through filter paper (Whatman 42, Maidstone, Kent, UK). Prior to subsampling, soils were thoroughly mixed to ensure that fertilizers were homogenously distributed. Inorganic N was determined colorimetrically using the soil KCl extract method [41,42].
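For orientation, the application rate above implies simple label-based arithmetic. The sketch below is our own illustration, using the label N percentages given in the fertilizer descriptions, of the fertilizer mass required per kilogram of soil and per 300 g bag.

```python
# Back-of-envelope sketch (illustrative only) of fertilizer mass needed to
# supply 100 mg N per kg of dry soil, based on the label N analyses.
TARGET_N_MG_PER_KG = 100.0
label_n_pct = {"FM": 13.0, "PPL": 5.0, "MIX": 10.0}  # % N from fertilizer labels

for fert, n_pct in label_n_pct.items():
    mg_fert_per_kg = TARGET_N_MG_PER_KG / (n_pct / 100.0)
    # Each incubation bag held 300 g dry-equivalent soil, per the setup above.
    per_bag = mg_fert_per_kg * 0.3
    print(f"{fert}: {mg_fert_per_kg:.0f} mg/kg soil -> {per_bag:.0f} mg per 300 g bag")
```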
Mineralization Kinetics and Statistics
Cumulative Net N mineralized was calculated for the control soils (unamended), where:

Control Net Nmin (t = x) = Inorganic N control (t = x) − Inorganic N control (t = 0)

Cumulative Net N mineralized (Net Nmin) from the materials was calculated, where:

Net Nmin (t = x) = Inorganic N (t = x) − Inorganic N control (t = x) − Inorganic N of material (t = 0)

where Net Nmin from the fertilizers (mg inorganic N·kg−1 dry material) was calculated as a function of the inorganic N measured at each time (t) point, the initial ammonium and nitrate present at time 0, and the N mineralization measured from the control soil (control Net Nmin). Net Nmin was calculated as a percentage of total organic N applied from each fertilizer. The Net Nmin values for each time of incubation were analyzed using the ANOVA method by least-squares fit using JMP Pro 16.0 (SAS Institute Inc., Cary, NC, USA). Soil type, fertilizer treatment and temperature were treated as fixed effects, and replication was treated as a random effect. When statistically significant differences existed according to ANOVA (p < 0.05), mean separation was performed with Tukey's test at α = 0.05.
To determine the N mineralization kinetics, the Net Nmin (mg inorganic N·kg−1 dry fertilizer) was fit to first-order kinetics using a non-linear model:

Net Nmin (t) = N0 × (1 − e^(−k·t))

where N0 is the potentially mineralizable pool of N predicted from the model, k is the rate coefficient of net N mineralization (mg inorganic N·kg−1·d−1) and t is time (days) [5,36].
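As an illustration of this model, the sketch below fits N0 and k to hypothetical Net Nmin data with a standard nonlinear least-squares routine; the data values are invented, and SciPy's curve_fit stands in for the Gauss-Newton procedure used in JMP.

```python
# A minimal sketch (illustrative data, not the study's measurements) of
# fitting the first-order model Net_Nmin(t) = N0 * (1 - exp(-k * t)).
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, n0, k):
    return n0 * (1.0 - np.exp(-k * t))

t = np.array([0, 2, 4, 7, 14, 35, 56, 85, 120], dtype=float)  # sampling days
net_nmin = np.array([0, 14, 24, 35, 48, 62, 68, 71, 72], dtype=float)

(n0, k), _cov = curve_fit(first_order, t, net_nmin, p0=(70.0, 0.05))
print(f"N0 = {n0:.1f} mg N/kg, k = {k:.3f} per day")
```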
The iterations for N0 and k calculations and curve fitting of equations were carried out using a non-linear method with the Gauss-Newton iteration using JMP Pro 16.0 (SAS Institute Inc., Cary, NC, USA). The performance of the fit of Net Nmin modeled versus measured data was evaluated using the root mean squared error (RMSE), ratio of the RMSE to the standard deviation of measured data (RSR), Nash-Sutcliffe efficiency (NSE) and percent bias (PBias). The RMSE is a commonly used error index statistic that quantifies the average magnitude of errors between predicted values and observed values [43][44][45]. A lower RMSE indicates a better model fit:

RMSE = [ Σ (Oi − Pi)² / n ]^(1/2)

where Oi are observed values, Pi are predicted values and n is the number of observations. The RSR is another way to assess the goodness of fit. The RSR standardizes the RMSE by considering the standard deviation of the observed data [43]:

RSR = RMSE / SD(O)
The NSE assesses the deviation between model predictions and observed values relative to the scattering of the observed data [43]. It ranges from negative infinity to 1, where an NSE of 1 indicates a perfect fit:

NSE = 1 − [ Σ (Oi − Pi)² / Σ (Oi − Ō)² ]

The PBias quantifies the systematic error or bias in a model's predictions by calculating the average difference between predicted and observed values. If the model underestimates, on average, the PBias is positive; conversely, if the model overestimates, on average, the PBias is negative:

PBias = 100 × Σ (Oi − Pi) / Σ Oi
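The four statistics can be computed directly from observed and predicted values. The following sketch implements the standard formulations given above; the obs/pred arrays are placeholders, not the study's measurements.

```python
# Compact sketch of the four goodness-of-fit statistics described above.
import numpy as np

def fit_statistics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid ** 2))
    rsr = rmse / np.std(obs, ddof=1)              # RMSE / SD of observations
    nse = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(resid) / np.sum(obs)   # positive = underestimation
    return {"RMSE": rmse, "RSR": rsr, "NSE": nse, "PBias": pbias}

# Placeholder observed/predicted Net Nmin values for demonstration
obs = [0, 14, 24, 35, 48, 62, 68, 71, 72]
pred = [0, 12, 22, 34, 50, 64, 69, 71, 71]
print(fit_statistics(obs, pred))
```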
Laboratory Incubation
There was a linear correlation between incubation time and Net Nmin in control (unamended) soils (Figure 1). Net Nmin in the soil alone was significantly higher in the Cecil series soil compared to the Tifton series soil, which is likely to be related to the higher total C and total N content in the Cecil series compared to the Tifton soil (Table 1). In both soils, the Net Nmin was significantly higher when incubated at 30 °C, while minimal differences were observed in Net Nmin at 4 and 10 °C.
For the organic fertilizers, there was a trend of consistent significant differences for Net Nmin among treatment combinations beginning at 35 d of incubation. Therefore, data from selected time points during the incubation (2, 4, 14, 35 and 120 d) for both Tifton and Cecil soils are presented. Complete results for Net Nmin are available (Supplemental Tables S1 and S2). There were significant interactions between soil series, organic fertilizer and incubation temperature on Net Nmin at several time points over the 120 d incubation (Tables 3 and 4). In the Cecil series soil, Net Nmin differences between combinations of fertilizer and temperature were evident within the first 4 d of incubation (Table 3). In the Cecil soil, FM had a greater Net Nmin compared to PPL and MIX when incubated at 20 and 30 °C from 4 to 35 d of incubation (Table 3). In contrast, minimal effects of temperature were observed with PPL and MIX fertilizers in the first 35 d of incubation, with the exception of the PPL fertilizer at 14 d. In the Cecil soil, results suggest that high temperatures may have impacted the Net Nmin immediately after application for the FM but less for the other fertilizers. Faster rates of mineralization soon after application might have favored the occurrence of alternative processes, such as ammonia (NH3) volatilization [46,47]. At 120 d of incubation in the Cecil soil series, incubation temperature did not affect Net Nmin of FM or PPL. However, the Net Nmin of MIX fertilizer was greater at 20 °C than at the other incubation temperatures at 120 d. These findings align with previous results reported by [14], which suggest that temperature has a larger impact during the first few days after incorporation of organic fertilizers, but the effects of temperature on N mineralization decrease after a long period. Further, higher temperatures for some materials may also favor loss mechanisms such as NH3 volatilization [46].

Feather meal is derived from hydrolyzed, dried and ground poultry feathers. During mineralization of FM, long chains of keratin molecules are cleaved into smaller and more accessible components, which may enhance N release after relatively short incubation times [48,49]. Conversely, PPL and MIX have previously been shown to have slower mineralization rates than FM when incubated at 30 °C [5]. The PPL fertilizer had relatively low mineralization of the organic N (18 mg inorganic N·kg−1 dry material), which might be due to factors like the presence of bedding materials, moisture content and processing, which may affect the pool of mineralizable nitrogen [47]. The processing of the PPL can involve drying, grinding and compressing the material into uniform pellets, which can affect the availability and release of N [47,50,51]. It is important to note that the PPL fertilizer contained an initial 1.09% inorganic N (ammonium and nitrate) (Table 2), which was a relatively higher percentage of total N present compared to both FM and MIX sources. This inorganic N would be readily available for crop production and contribute to plant-available N in a field situation, allowing for production of crops using PPL despite its relatively low rate of N mineralization. Several studies have reported the successful production of organic crops using PPL with a similar nutrient composition in the region where this study was conducted [31,52,53]. Total available N from the PPL fertilizer was notably greater than the organic mineralized N, suggesting that initial inorganic N from PPL may contribute to the total available N for plant uptake.
The effect of temperature on Net Nmin of different fertilizers in the Tifton series soil was variable at different time points (Table 4). The FM fertilizer after 4 d of incubation had a significantly greater Net Nmin at 20 and 30 °C (15 and 19 mg inorganic N·kg−1 material, respectively) than at 4 and 10 °C (2 and 0 mg inorganic N·kg−1 material, respectively). A similar trend in mineralization was observed with FM in the Cecil soil as well (Table 3). By 35 d of incubation, Net Nmin of FM had increased at lower temperatures. Temperature had a lesser impact on the Net Nmin of MIX throughout the incubation period, with the greatest Net Nmin occurring at 20 °C after 120 d. The Net Nmin of PPL at higher temperatures in the Tifton soil was low, with a complete absence of Net Nmin at 30 °C starting from 14 d until 120 d, suggesting that higher temperatures favored the occurrence of other N processes in the PPL fertilizer instead of mineralization. Our results suggest that there was potential N loss from fertilizers during incubation, resulting from NH3 volatilization or N immobilization, which may be favored by high temperatures [47]. Losses during early incubation periods from FM in the Cecil soil may be the result of a pattern of quick immobilization followed by gradual mineralization, reflecting the dynamic nature of microbial activity in response to the availability of labile carbon compounds in the substrate [14,46,54]. It is also possible that NH3 volatilization occurred in the PPL at 30 °C, resulting in a lack of measurable N mineralization. However, temperature is just one of several factors that influence NH3 volatilization [46,55,56]. Soil properties such as moisture, pH, CEC and organic matter content also play a role [47,57]. In this context, NH3 volatilization may be more likely to occur in the Tifton soil rather than the Cecil soil due to its relatively higher pH, low organic matter and high sand content [58].
Mineralization Kinetics
A first-order model was fitted to the measured Net Nmin during the 120 d incubation for each soil, fertilizer and temperature treatment (Table 5, Figure 2). This model allowed the determination of the rate of mineralization (k) and the predicted pool of mineralizable N (N0) as a function of the temperature and the fertilizer applied. Goodness of fit of the first-order model was determined for each temperature, soil and fertilizer interaction and included RMSE, RSR, NSE and PBias.
Table 5. First order fit characteristics, root mean squared error (RMSE), ratio of the root mean square error to the standard deviation of measured data (RSR), Nash-Sutcliffe efficiency (NSE) and percent bias (PBias) for net mineralized nitrogen (Net Nmin) for feather meal (FM), pelleted poultry litter (PPL) and mixed source (MIX) organic fertilizers incubated in Cecil sandy clay loam (Cecil) and Tifton loamy sand (Tifton) soils at several temperatures. 1 Net mineralized N after 120 d of incubation as a percentage of organic N in the fertilizer applied. 2 Net measured mineralized N after 120 d of incubation. 3 N0 is the amount of net mineralized N predicted from a first order kinetics model. 4 k is the rate coefficient of the first order model.
The goodness of fit for the first-order kinetic model varied with different treatment combinations. The model demonstrated good efficiency in predicting the N mineralization of the FM and MIX fertilizers under most temperature and soil conditions, with NSE values ranging from 0.74 to 0.95 and RSR values between 0.22 and 0.51. There was an exception for the model fit of the MIX fertilizer in Cecil soil at 20 °C, where the NSE value dropped below 0.50 and RSR exceeded 0.70, suggesting a weaker correlation between observed and predicted values (Table 5; Figure 2). In this specific fit, there was a tendency for the model to overestimate N mineralization, resulting in a negative PBias of −15.47. In contrast, the model efficiency for predicting the N mineralization of MIX was satisfactory and consistent across different soil and temperature combinations, with NSE values ranging from 0.59 to 0.93 and RSR values between 0.26 and 0.64. The model's performance was unsatisfactory for predicting the N mineralization of PPL fertilizer. In all first-order kinetic fits, RSR values exceeded 0.70, which may be attributed to the considerable variability in Net Nmin observed among the samples of this material during the 120 d incubation (Tables 4 and 5). This variability negatively affected the correlation between observed and predicted values, which also resulted in NSE values between 0.06 and 0.46. Furthermore, the PBias of the models for the mineralization of PPL ranged from a positive value of 35.48 to negative values as low as −11.74, which indicates a tendency of both under- and overestimation by the model. The high initial inorganic N present in the PPL combined with low mineralization from the organic N pool (Figure 2, Table 5) also likely contributed to the poor fit of the first-order mineralization model.

The interactions between soil and temperature are important for understanding the rate of N mineralization of different organic fertilizers. In a previous study, Lazicki et al. [14] observed variability in potential plant-available N among 22 different organic materials trialed under warm and moist conditions (23 °C and 60% water holding capacity), with some undergoing N immobilization while others released up to 90% of applied N. The authors suggested that, while the Net Nmin was well correlated with the C:N ratio of each material, the soil texture and management history of the soils influenced the timing of N release [14]. Different soil attributes, such as pH, texture, organic matter and cation exchange capacity (CEC), create distinct physical and chemical environments for microorganisms, which may influence when and how quickly microorganisms decompose organic N compounds [14]. Furthermore, the degree to which the temperature can influence microbial growth and, consequently, the N mineralization rate is likely to vary across different environmental systems [17].
FM Fertilizer Mineralization Kinetics
The N0 at 120 d for FM ranged from 62.75 to 79.89 mg inorganic N·kg−1 fertilizer applied in the Cecil soil (Table 5), and from 41.85 to 64.54 mg inorganic N·kg−1 fertilizer applied in the Tifton soil (Table 5). Similar N0 values have been previously reported for FM [5,14,15]. In the present study, N mineralization of FM over the 120 d incubation in the Cecil soil showed two distinct phases, starting with a rapid release during the first 14 d of incubation, followed by a slow-release phase and a plateau at approximately 80 d (Figure 2A). In the Tifton soil, a similar pattern of rapid release occurred during the early stages of incubation, followed by a steady and slow release at the temperatures of 4 and 10 °C. At 20 and 30 °C, the rapid phase was followed by a plateau at approximately 35 d (Figure 2B). The rapid mineralization observed from FM fertilizer in the first 14 d of incubation (Figure 2B) corresponds to previous reports [5,12], and may be attributed to the presence of readily available N-containing compounds, such as urea, and simple organic forms of N that can be easily broken down by soil enzymes, even at lower temperatures. In contrast, the slower N mineralization after 14 d indicates the predominance of microbial degradation of complex organic N forms [12]. A model proposed by Geisseler et al. [15] predicted that 61% of total N from FM would be in the mineral form after 100 d of incubation, with half of that being in the mineral form after only 5 d of incorporation.
For FM in the Cecil series soil models, the k and N0 values were not significantly different among temperatures, with constant k values ranging between 0.04 and 0.10 and N0 values between 62.75 and 79.89 mg inorganic N·kg−1 fertilizer applied (Table 5). In contrast, in the Tifton soil, N0 was greater at the lower temperatures (4 °C and 10 °C), suggesting that temperature might affect N mineralization of FM in the Tifton soil to a greater degree compared with the Cecil soil. This agrees with results reported by Dessurealt-Rompré et al. [16], who indicated that N mineralization rates in sandy loam soils were affected by temperature changes to a greater degree than predominately clay soils. Further, k and N0 values were not significantly different between soils, with the only exception being at the highest temperature of 30 °C, suggesting that overall differences in the rate of mineralization and potentially mineralizable N of FM in different mineral soils may be minimal at lower temperatures.
PPL Fertilizer Mineralization Kinetics
The N0 for the PPL fertilizer after 120 d was less than 20 mg inorganic N·kg−1 fertilizer applied, regardless of soil type or temperature (Figure 2E,F). The model fit for both soil and temperature combinations showed most of the N release occurred during the first few days of incubation, followed by a plateau after the second week. On average, 10% of total N applied is predicted to be available from PPL after 7 d of incubation. Previous studies predicting N release from PPL reported higher values, with potential available N ranging between 25% to 50% of applied N [5,47], with most of the N released during the initial 7 d of incubation.
Model fits of the PPL product had a wide variability of k rates, with values ranging between 0.03 and 0.84 among treatments of soil and temperature (Table 5). Further, N0 values ranged between 8.38 and 19.21, and no significant differences were observed among treatments of soils and temperatures. This agrees with other studies evaluating the N mineralization of organic fertilizers, which suggest that temperature affects the k rate more than the N0 [14,59]. Thus, changes in the N mineralization of PPL in response to environmental conditions are likely to be minimal.
MIX Fertilizer Mineralization Kinetics
The N0 for the MIX fertilizer in the Cecil and Tifton soils after 120 d ranged between 37.41 and 55.57 mg inorganic N·kg−1 fertilizer applied and 30.15 and 48.93 mg inorganic N·kg−1 fertilizer applied, respectively (Table 5; Figure 2C,D). The N release throughout the 120 d incubation steadily increased, but at a lower rate after 35 d, except when incubated at 30 °C in the Tifton series soil, in which a plateau phase was observed starting at 35 d (Figure 2D). In a previous study, Cassity-Duffey et al. [7] investigated the N mineralization of organic fertilizers across four distinct soils over a 100 d incubation period. Their results suggested that most of the N mineralized from a pelletized MIX fertilizer, composed of animal-based proteins, occurred within the initial 2 weeks of incubation, with approximately 53% to 58% of the initially applied N mineralized after 14 d.
The first order model fits for the MIX fertilizer had rate k values between 0.02 and 0.09, while N0 values ranged between 30.61 and 55.57 in both soils (Table 5). Further, k and N0 values for the MIX product were not significantly different between soil types or incubation temperatures, suggesting that we may expect minimal differences in the N release of MIX fertilizer between the two soils. Cassity-Duffey et al. [7] also reported that there were no significant effects of soil texture between Cecil and Tifton series soils on the Net Nmin of a MIX-type fertilizer. Interestingly, their research suggested that pH levels may have a more influential role than soil clay content in determining the potential N mineralization [7].
Conclusions
This study investigated N mineralization from three organic fertilizers in two soil types and at soil temperatures that would be encountered during field vegetable production throughout the year in Georgia, USA. The Net Nmin varied with soil, fertilizer type and temperature, indicating that accurately predicting N mineralization from organic fertilizers may be specific to individual farms and fertilizer sources. In general, a first-order kinetic model demonstrated good efficiency in predicting the N mineralization of FM and MIX fertilizers but was inefficient for predicting N mineralization of PPL. Temperature and soil type had a relatively minor impact on the potentially mineralizable N of the MIX and PPL fertilizers at 120 d. However, incubation at higher temperatures (20 °C and 30 °C) impacted the Net Nmin of FM fertilizer in the Tifton series soil. Temperature-related differences may not be substantial enough to influence grower decisions regarding N fertilizer inputs for vegetable crop production in the two soils. However, organic fertilizer source will likely play a significant role in N availability during the cropping season. Further research is needed to determine how temperature may impact NH3 volatilization and other loss mechanisms, as these processes may play a role in plant-available N at high soil temperatures.
Figure 1. Net nitrogen mineralization (Net Nmin) from control Cecil (A) and Tifton (B) series soils at 4, 10, 20 and 30 °C over 120 d of incubation.
Figure 2. Nitrogen mineralization from the feather meal (FM) (A,B), mixed source (MIX) (C,D) and pelleted poultry litter (PPL) (E,F) fertilizers incubated at 4, 10, 20 and 30 °C for 120 d in Cecil and Tifton series soils. Regression lines show predicted first order regressions for N mineralization (N0) and symbols show actual mineralized N (Net Nmin) (error bars indicate standard deviation).
Table 3. Net N mineralized (Net Nmin) from the feather meal (FM), pelleted poultry litter (PPL) and mixed source (MIX) fertilizers incubated at 4, 10, 20 and 30 °C for 120 d in a Cecil sandy clay loam soil. Differences in sampling time are noted by uppercase letters, while differences among temperatures are noted by lowercase letters.
Table 4. Net N mineralized (Net Nmin) from the feather meal (FM), pelleted poultry litter (PPL) and mixed source (MIX) fertilizers incubated at 4, 10, 20 and 30 °C for 120 d in a Tifton loamy sand soil. 1 Net Nmin = Inorganic N (d = x) − Inorganic N control (t = x) − Inorganic N of material (t = 0). 2 Different uppercase letters represent significant differences within fertilizer type at each sampling point, and different lowercase letters represent significant differences among temperatures for each fertilizer and incubation time according to Tukey's honest significant difference test (p < 0.05).
A Scale Development Study to Measure Smartphone Satisfactions of Adolescents
Smartphones are the most popular technological devices, enabling users to do multiple things at once, such as connecting to the Internet, gaming, messaging, e-mailing and taking photographs in one device. In recent years, there has been a substantial rise in the sales of smartphones, and the average age of their owners and their replacement cycle have decreased. In this context, it is important to determine the motives and reasons why people replace their smartphones by spending high budgets, and the smartphone satisfactions of the users when purchasing new ones. The aim of this study, which derived from the importance of customer satisfaction, is to develop a scale determining the smartphone satisfaction of adolescents between the ages of 15 and 22. In the scale development process, an item pool consisting of 46 items was constituted as a result of the compositions written by the participants and a literature review. A pre-test of the draft scale was conducted with 70 students after the review of 7 field experts. In the pilot phase of the study, exploratory factor analysis (EFA) was carried out on the data obtained from 716 students in order to assess construct validity. The draft scale, which consisted of 42 items before EFA, turned into a single-factor structure scale consisting of 22 items after the analysis to determine the smartphone satisfactions of adolescents. Cronbach's Alpha (α) internal consistency of the 5-point Likert-type scale was .932 and the explained total variance was 39.360%. In order to confirm the factor structure resulting from EFA, confirmatory factor analysis (CFA) was carried out with 316 participants. All the indicators, brought to ideal levels by means of modification indices, revealed that the single-factor structure smartphone satisfaction scale was valid and reliable, and that it could be used to identify the smartphone satisfactions of adolescents in further studies.
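For readers unfamiliar with the reliability statistic reported here, the sketch below computes Cronbach's alpha from an item-response matrix. The data are randomly generated purely for illustration and do not reproduce the reported .932.

```python
# A brief sketch of Cronbach's alpha for a respondents-by-items Likert matrix:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Random demo data (50 respondents, 22 items scored 1-5); uncorrelated random
# items yield alpha near 0, whereas a real scale with coherent items scores high.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(50, 22))
print(round(cronbach_alpha(demo), 3))
```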
Introduction
In the last half-century, information technologies have been the leading innovations among the technologies developing at an unprecedented pace. Thanks to these technologies, human life has become easier and the daily lives of people have steadily changed. Accessing information, sharing it, and communicating have become as easy as touching the device in our hands.
Smartphones have been regarded as the latest milestone in the field of information technologies since the invention of the radio.
Smartphones give people the opportunity to accomplish, beyond telephony, several tasks that a computer can perform, thanks to the integration of various devices in them (Güvenç, 2013). Smartphones are undoubtedly the most popular technological devices of our age.
Smartphones enable users to do multiple things at once, such as connecting to the Internet, gaming, messaging, e-mailing, and taking photographs, on a single device. According to We Are Social (2015), smartphones account for an increasingly large proportion of mobile use (38%) among more than 7 billion active mobile subscriptions worldwide. About one billion active Facebook users access their accounts via smartphones (69%). However, as the number of smartphone users increases, smartphone addiction has become an annoying problem, harming work efficiency and social relations. In order to determine the degree of smartphone addiction of individuals, the "Smartphone Addiction Scale" was developed by Kwon et al. (2013). The scale was adapted to Turkish by Demirci, Orhan, Demirdaş, Akpınar and Sert (2014). Demirci et al. (2014) found that 13% of students defined themselves as smartphone addicts.
According to Turkish Statistical Institute data [TÜİK] (2014), the rate of mobile phone penetration, including smartphones, in households increased from 54% in 2004 to 96% in 2014. Furthermore, mobile phone use among children aged between 6 and 15 starts at 10 on average (TÜİK, 2013). Studies conducted with university students revealed that most students acquire their first mobile phones between the ages of 12 and 16: according to Karaaslan and Budak (2012) this rate was 67%, while it was 75% for Uzgören, Şengür and Yiğit (2013). The age of first ownership among children is thus quite low, and users replace their mobile phones very often. The factors behind why users replace their devices so frequently can be explored by determining user satisfaction. Oliver (1997) defined customer satisfaction as "a judgment that a product or service feature, or the product or service itself, provided (or is providing) a pleasurable level of consumption-related fulfillment" (as cited in Sandıkçı, 2007). Furthermore, new customers are mostly influenced by the recommendations of existing customers rather than by marketing tools and advertising activities; thus, customer loss results from dissatisfaction and from failing to meet the needs and demands of customers (Sandıkçı, 2007). Customer satisfaction is an important concept both for producers and for customers; this concept, the subject of several studies, has been used as a model in many countries for years.
The Customer Satisfaction Index [CSI] model is a cause-and-effect model that places "customer satisfaction" at the center and organizes the "factors affecting customer satisfaction" and the "outcomes resulting from satisfaction". The purpose of such indices is to evaluate the performance of companies through the eyes of their customers and to create a comparison tool for both companies and customers (Türkyılmaz & Özkan, 2005). Despite some variation among National Customer Satisfaction Index [NCSI] models, for Grigoroudis and Siskos (2004) expectations, company image, and quality and value perceptions are the basic factors affecting customer satisfaction (as cited in Türkyılmaz & Özkan, 2005). The most distinctive features of these models are the use of questionnaires as the data collection instrument, the rating of items from 1 to 10 (1 the lowest, 10 the highest score), and the use of the telephone in the application process (Türkyılmaz & Özkan, 2005).
The current study aimed to develop a measurement scale to determine the smartphone satisfaction of high school and university students between the ages of 15 and 22. Such a scale makes it possible to determine the motives and reasons why people replace their smartphones at considerable expense, and how satisfied users are when purchasing new ones.
Method
In this study, the scale development process was composed of three phases: the writing of compositions by the participants for the item pool, the pre-test application, and exploratory and confirmatory factor analyses for construct validity. In this process, the survey model was used to explore the views of the participants. Studies that aim to collect data in order to identify the features of a group depend on the survey model (Büyüköztürk, Kılıç-Çakmak, Akgün, Karadeniz & Demirel, 2008).
Participants
The study was conducted with a total of 1160 high school and university students between the ages of 15 and 22 in Balıkesir, Turkey, in the 2014-2015 academic year. In the scale development process, 58 students took part in the writing of the compositions, 70 in the pre-test application, 716 in the exploratory factor analysis (EFA), and 316 in the confirmatory factor analysis (CFA).
Development of the Smartphone Satisfaction Scale (SSS)
The scales existing in the literature attempt to determine the extent to which a given psychological feature is present in an individual on the basis of his or her responses, whereas the aim of scale development is to structure items that determine what the related psychological feature is (Erkuş, 2014). Therefore, the existing scale instruments in the related literature were reviewed, and an item pool was then constructed in order to develop the "Smartphone Satisfaction Scale" [SSS], which measures the satisfaction of adolescent smartphone users in high schools and universities.
Constitution of the Item Pool
In order to construct the items of the scale, 27 high school and 31 university students, a total of 58 students, were asked to write a composition on their satisfaction with their smartphones. Content analysis was applied to the collected texts, and 103 statements that were directly related, or thought to be related, to the topic were obtained. After similar statements were consolidated, 34 items remained, and 6 items were added as a result of the literature review. Moreover, 6 more items were added by the researchers, and finally an item pool consisting of 46 items was constructed.
Opinions of Experts for Content and Face Validity
To ensure content validity, the opinions of three academics, two teachers who are experts in the field of information technologies, and two salespersons responsible for smartphones in a technology market were sought on the items obtained. Based on the opinions of the 7 experts, 19 items were reorganized, 3 items were excluded, and a newly proposed item was added. All the items in the draft scale were examined by two Turkish language teachers in order to ensure lucidity and accuracy in Turkish.
The 44 items were listed in random order in the draft scale prepared for the pre-test application.
An instruction defining the purpose of the study and the expectations of the participants was prepared. The satisfaction statements were administered with 5-point Likert-type response categories ranging from strongly agree to strongly disagree.
The draft scale, now ready for the pre-test application, was evaluated by two academics holding Ph.D. degrees. In line with the experts' reviews, all items were examined in terms of expression, content validity, and whether the study was goal oriented.
Conducting Pre-test Application
The number of participants must not be fewer than 50 for a pre-test application in scale development research (Karasar, 1995). Therefore, in the current study the pre-test application was conducted with 70 participants, of whom 38 were high school students and 32 were university students. The average time the respondents spent on the scale was 10 minutes. After the pre-test application, 8 items were reorganized and 2 items were excluded. The final draft scale was composed of 42 items to be administered to the sample in order to establish construct validity.
Data Collection
Four different types of high schools and a vocational school were identified using the maximum variation sampling method for administration of the data collection instrument. The necessary permissions were obtained from the Balıkesir Directorate of National Education by the researchers. The analyses were carried out on 716 data collection instruments for the EFA of the 42-item draft scale and on 316 data collection instruments for the CFA of the final 22-item scale obtained after the EFA.
Data Analysis
In order to explore the construct validity of the scale, EFA was carried out using SPSS version 18.0. For Tavşancıl (2006), factor analysis, one of the multivariate statistical analysis techniques, reveals the order between each item in the scale and the responses of the participants, and is used to identify the content of psychological dimensions. In order to confirm the factor structure resulting from the EFA, CFA was carried out using LISREL 8.7 software.
Çokluk, Şekercioğlu and Büyüköztürk (2012) state that CFA is used to examine the accuracy of a structure that has previously been identified and constrained as a model.
Findings
The draft scale was administered for the EFA to 565 students from four high schools and 151 students from a vocational school, a total of 716 students. For Comrey and Lee (1992), adequate sample sizes in such an analysis run as follows: 100 is poor, 200 is fair, 300 is good, 500 is very good, and 1000 is excellent; thus, the number of participants in the current study can be said to be almost excellent (as cited in Akbulut, 2010). To examine the adequacy of the sample, the KMO and Bartlett statistics were calculated (KMO = .957, χ² = 880.862, p < .001). The KMO value was higher than .600 and the result of Bartlett's test of sphericity was significant, so the sample was suitable for factor analysis (Cohen, Manion & Morrison, 2007).
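As an aside for readers working outside SPSS, the same adequacy checks can be scripted; the sketch below is a minimal illustration assuming the third-party Python package factor_analyzer (the items data frame is hypothetical, not the study data):

import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(0)
# Hypothetical 716 respondents x 42 Likert items (1-5); not the study data.
items = pd.DataFrame(rng.integers(1, 6, size=(716, 42)),
                     columns=[f"M{i}" for i in range(1, 43)])

chi_square, p_value = calculate_bartlett_sphericity(items)  # Bartlett's test of sphericity
kmo_per_item, kmo_total = calculate_kmo(items)              # Kaiser-Meyer-Olkin measure
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}; overall KMO = {kmo_total:.3f}")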
As the item-total correlations of 12 items in the 42-item draft scale were lower than .40, these items were removed before the factor analysis. The values of the remaining items varied between .49 and .72; according to Büyüköztürk (2009), items with values above .40 are very good discriminators. In addition, EFA was carried out after excluding two more items that raised the Cronbach's alpha (α) value when they were removed from the scale.
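The item screening described here (corrected item-total correlations with a .40 cut-off, plus Cronbach's alpha) can likewise be sketched in a few lines; everything below is illustrative, with synthetic data:

import numpy as np
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    # Correlate each item with the total score of the remaining items.
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(716, 42)),
                     columns=[f"M{i}" for i in range(1, 43)])

cit = corrected_item_total(items)
print(f"items passing the .40 cut-off: {(cit >= 0.40).sum()}")
print(f"Cronbach's alpha (all items): {cronbach_alpha(items):.3f}")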
The Maximum Likelihood method was preferred for the EFA; for Stevens (1996), this method considers the appropriate common variance under the factors and retains a strong structure even at lower variance. In the unrotated analysis with 28 items, five factors with eigenvalues higher than 5 were observed and 44.867% of the total variance was explained. However, the eigenvalue of the first factor was 12 times that of the second factor, and the scree plot in Figure 1 likewise pointed to a single-factor structure. In the analyses based on a single factor, the communalities of five items were lower than .30, and the explained total variance increased when they were excluded. For Büyüköztürk (2009), factor load values of .45 and higher are a good criterion; in the current study the cut-off point was set at .50, and one item below this value was excluded. The factor loads of the single-factor structure ranged from .512 to .755; the values are shown in Table 1. The proportion of total variance explained by the 22-item, single-factor SSS was 39.360%. According to Büyüköztürk (2009), an explained total variance of 30% and above is considered sufficient for a single-factor analysis, so the explained total variance of the single-factor structure can be regarded as sufficient. In the current study, the Cronbach's alpha (α) internal consistency of the scale was .932, above the high-reliability limit recommended by Kalaycı (2009) (Brown, 2006; Tabachnick & Fidell, 2001). The results of the analysis are given in Table 2. Regarding the fit indices in Table 2, the difference between the expected and observed covariance matrices was significant (χ²(207) = 595.34; p < .05). A p value > .05 would be expected; however, this value becomes significant with larger sample sizes in most confirmatory factor analyses (Çokluk et al., 2012). The χ²/df value was calculated as 2.88, indicating a perfect fit (Kline, 2005; Sümer, 2000).
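A maximum-likelihood, single-factor extraction of the kind reported above can be sketched as follows (again assuming the third-party factor_analyzer package and hypothetical data):

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(2)
items = pd.DataFrame(rng.integers(1, 6, size=(716, 22)),
                     columns=[f"M{i}" for i in range(1, 23)])

fa = FactorAnalyzer(n_factors=1, method="ml", rotation=None)  # maximum likelihood, one factor
fa.fit(items)

loadings = pd.Series(fa.loadings_.ravel(), index=items.columns)
_, proportion, _ = fa.get_factor_variance()
print(loadings.round(3))
print(f"explained total variance: {proportion[0]:.1%}")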
Regarding the fit of the model, an RMSEA value close to zero is considered an indication of fit (Steiger, 2007). In the current study this value was calculated as .077, which is considered an indication of good fit (Hooper, Coughlan & Mullen, 2008; Sümer, 2000).
The SRMR value of .043 indicates a perfect fit (Brown, 2006). The NFI value was .97 and the NNFI value .98; these values indicate a perfect fit (Hu & Bentler, 1999; Sümer, 2000). The CFI value, another criterion, was calculated as .98, which several studies regard as an indication of a perfect fit (Hu & Bentler, 1999; Sümer, 2000; Thompson, 2004). The GFI value was .85 and the AGFI value was calculated as .82; as the AGFI and GFI values are close to .90, they are considered close to a good fit (Schumacher & Lomax, 1996; Kelloway, 1989; Sümer, 2000).
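For completeness, a comparable single-factor CFA with global fit statistics can be sketched outside LISREL; the code below assumes the third-party Python package semopy, and the model string, data, and output are all hypothetical:

import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(3)
data = pd.DataFrame(rng.normal(size=(316, 22)),
                    columns=[f"M{i}" for i in range(1, 23)])

# One latent satisfaction factor measured by all 22 items (lavaan-style syntax).
desc = "satisfaction =~ " + " + ".join(data.columns)
model = semopy.Model(desc)
model.fit(data)

stats = semopy.calc_stats(model)  # chi-square, RMSEA, CFI, GFI, AGFI, NFI, TLI, ...
print(stats.T)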
All the indicators revealed that the model had a good fit and that the scale could be explained well in one dimension. To reinforce the fit of the model, error covariances were added and the item pairs M7-M16 and M20-M22 were associated. The path diagram for the model is given in Figure 2.
Figure 2. Diagram regarding the structural equation model
The correlation coefficient and error variance of each item are given in Figure 2. The correlation coefficients of the items ranged from .58 to .80. Moreover, the values of all the items in the scale were significant at the level of p < .05.
Results and Suggestions
Smartphones are undoubtedly the most popular technological devices, providing users with Internet access, photography, e-mail, messaging, gaming, and more. In recent years there has been a substantial rise in the sales of smartphones, and the average age of their owners and the replacement cycle have decreased. In this context it is important to identify the motives and reasons why people replace their smartphones at considerable expense. As Erkuş (2014) states, the measurement of a psychological variable stems from a necessity. In the current study, a scale was developed to identify the smartphone satisfaction of high school and university students between the ages of 15 and 22. Thus, an important data collection tool has been contributed to the literature for exploring adolescents' replacement reasons, replacement frequencies, and smartphone satisfaction when purchasing a new device.
At the item pool stage, 27 high school and 31 university students, a total of 58 students, were asked to write a composition on their satisfaction with their smartphones. Technology that changes and develops over time will give rise to the production of new devices in the field of communication, as in the other areas of our lives. Since the needs and usage habits of individuals may change over time, it should be taken into consideration that, like other scales, this scale may not remain valid and reliable in the long term.
The sample for the study consisted of adolescents between the ages of 15 and 22; therefore, the SSS was developed only for this age group. Different samples could be used for other age groups, and it is recommended that the construct validity and reliability of the scale be re-established for different professional groups.
APPENDIX A - Smartphone Satisfaction Scale (SSS) (Translated Form for Readers)
Dear students, the following data collection tool was developed in order to determine your smartphone satisfaction. The data obtained will be used only for scientific purposes and there will be no individual assessment, so you do not need to write your names. The scale consists of 22 items, and the response time for the whole scale is about 5 minutes.
Please read all the items carefully and mark the choice that best fits you.
Thanks for your participation.
(Items are rated on 5-point response categories ranging from (5) strongly agree to (1) strongly disagree.)
1. Quality of the images taken by the rear camera is high.
2. Call features (contacts, call history, etc.) are good.
3. Screen size meets my needs.
4. Has an aesthetic appearance.
5. Sound quality of the speaker is ideal.
6. I like to use my phone.
7. I prefer the same brand if I replace my phone.
8. Allows me to type easily.
9. Quality of the images taken by the front camera is high.
10. Processing speed is good.
11. Display quality of the screen is high.
12. Has ideal dimensions (height, width, depth).
13. Video shooting meets my needs.
14. Sound quality of the caller is ideal.
Cumulant Expansion in Gluon Saturation, and Five and Six-Gluon Azimuthal Correlations
Correlations between the momenta of the final state hadrons measured in proton or nucleus collisions contain information that sheds light on the initial conditions and evolutionary dynamics of the collision system. These correlation measurements have revealed the long-range rapidity correlations in p-p and p-Pb systems, and they have also made it possible to extract the elliptic flow coefficient from hadron correlation measurements. In this work, we calculate five- and six-gluon correlation functions in the framework of saturation physics by using superdiagrams. We also derive the cumulant expansion of the gluon correlators that is valid in the gluon saturation limit. We show that the cumulant expansion of the gluon correlators that is used for counting the number of diagrams to be calculated does not follow the standard cumulant expansion. We also explain how these findings can be used in obtaining experimentally relevant observables such as flow coefficients calculated from correlations as well as ratios of the correlation functions of different orders.
I. INTRODUCTION
The measured correlations between the final state hadrons in collisions involving protons and nuclei contain information about the dynamics of the collision and the evolution of the produced particles [1]. In nucleus-nucleus (A-A) collisions, it has been observed that particle pairs with azimuthal angles φ1 and φ2 are maximally correlated when φ1 − φ2 ∼ 0 (collimation) and φ1 − φ2 ∼ π (anticollimation) [2]. Also, this correlation is maintained even if the pair is separated by several units of rapidity. Such long-range azimuthal correlations in A-A collisions have been ascribed to the collective radial flow of the quark-gluon plasma [3]. Radial flow gives rise to collectivity in the hadronic spectrum, where the momenta of the detected hadrons are not random but correlated.
In small systems such as p-p and p-A, collectivity in the produced hadrons had not been observed in experiments or Monte Carlo simulations. Also, on the theory side there was no expectation of collectivity in p-p and p-A collisions, since it had been thought that fluid behavior would not emerge in such small collision systems. However, the two-particle correlation measurements in p-p collisions at √s = 7 TeV at the LHC revealed for the first time the existence of collimation and anticollimation effects, long ranged in rapidity, appearing in high-multiplicity events: the so-called double ridge [4-7].
The observation of collectivity in p-p and p-Pb collisions later sparked an interest in applying hydrodynamics to such small systems [17-22]. An alternative program that we pursue in this work, however, does not assume hydrodynamical evolution. Instead, our approach tracks the origin of the collectivity in small systems to gluon saturation in the target and projectile in p-p or p-A collisions [23-31]. Gluon saturation is expected to increase with increasing beam energy; therefore, the fact that ridge correlations appear only in high-multiplicity events at top LHC energies seems to be evidence supporting the onset of gluon saturation. The way gluon saturation affects particle production in proton or nucleus collisions is studied via glasma diagrams. Such calculations indicate that the two-hadron correlation function calculated from the glasma diagrams explains the systematics of the ridge signal in the LHC data [32-37].
Whether the origin of the collectivity observed in experiments is due to hydrodynamic evolution of the system or due to gluon saturation is still under discussion. The two-hadron correlations alone are not enough to settle this dispute. For this purpose, correlations between more than two hadrons must be measured, and these measurements need to be compared with the results from hydrodynamics and glasma diagrams separately [16,38].
Hadron correlation measurements are used to obtain the flow coefficients v_m{nPC}, where nPC denotes that the flow coefficient is found from n-particle correlations [39-41]. Currently, the elliptic flow coefficient v_2 is measured from n = 2, 4, 6, and 8 particles in experiments [42-45]. On the theory side, such coefficients can be calculated from glasma diagrams. By using glasma diagrams, one can calculate the correlations of n gluons, and convolving these with the fragmentation functions yields the hadronic correlation functions that can be compared with the ones measured by the experiments. In this work, we calculate five- and six-gluon correlation functions from glasma superdiagrams towards this goal.
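For orientation, the chain from measured multi-particle correlations to v_2{nPC} follows the standard Q-cumulant relations; the toy sketch below is our own background illustration of those conventions, not a result of this paper, and uses synthetic events:

import numpy as np

# Higher-order relations (for reference): c2{4} = <<4>> - 2<<2>>^2 with
# v2{4} = (-c2{4})^(1/4), and c2{6} = <<6>> - 9<<2>><<4>> + 12<<2>>^3 with
# v2{6} = (c2{6}/4)^(1/6); they follow the same pattern once <<4>> and <<6>>
# are built from Q-vectors.
rng = np.random.default_rng(0)
v2_in, n_events, mult = 0.08, 2000, 300

corr2 = []
for _ in range(n_events):
    # Accept-reject sampling of dN/dphi ~ 1 + 2 v2 cos(2 phi).
    phi = rng.uniform(-np.pi, np.pi, size=10 * mult)
    keep = rng.uniform(size=phi.size) < (1 + 2 * v2_in * np.cos(2 * phi)) / (1 + 2 * v2_in)
    phi = phi[keep][:mult]
    m = len(phi)
    q2 = np.exp(2j * phi).sum()                        # flow vector Q_2
    corr2.append((abs(q2) ** 2 - m) / (m * (m - 1)))   # <2>, self-correlations removed

c2 = np.mean(corr2)                                    # c2{2} = <<2>>
print(f"v2{{2}} = {np.sqrt(c2):.3f} (input v2 = {v2_in})")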
Another important result of this paper, in addition to the derivation of these two correlation functions, is the cumulant expansion of the gluon correlation functions in the gluon saturation limit. An n-gluon correlation function is a cumulant that can be expanded in terms of lower-order cumulants and the nth moment [see, for example, Eq. (1)]. However, the standard cumulant expansion needs to be modified if one wants to use it to determine the number of glasma diagrams to be calculated. This was realized for the first time in the calculation of the four-gluon correlation function in Ref. [16]. Here we derive the formula that generates the modified cumulant expansion for the n-gluon correlation function obtained from the glasma diagrams. The importance of this modified cumulant expansion in the context of this work is to find the number of glasma diagrams to be calculated at a given order and to use it to verify independently that the number of terms in the general formula that produces the correlation function at nth order is correct.
In the next section, we derive for the first time the formula that generates the modified cumulant expansion. Then, we review the recipe developed in Ref. [38] that yields the n-gluon correlation function. Following that, we derive the five- and six-gluon correlation functions and verify, by using the modified cumulant expansion, that the number of terms, each of which corresponds to a connected glasma diagram, in these correlation functions is correct.
II. CUMULANT EXPANSION FOR RAINBOW GLASMA DIAGRAMS
The three- and four-gluon azimuthal correlation functions with full rapidity and transverse momentum dependence were calculated in Ref. [16] by using 16 and 96 glasma diagrams, respectively. The observable to be calculated, and later compared to the experimentally obtained correlation function, for n gluons is the connected azimuthal correlation function C_n. Since this function includes only the connected diagrams, it is a cumulant, not a moment.
The number of connected diagrams to be calculated can be determined via the cumulant expansion. An important realization was that the glasma correlation functions at higher orders, starting with the four-gluon correlation function, obey the standard cumulant expansion, but one has to modify this expansion if it is to be used to determine the number of glasma diagrams at that order [16]. Hence, the cumulant expansion must be modified for the rainbow glasma diagrams when it is used to count glasma diagrams. To illustrate via the example of the four-gluon correlation function, we first write the standard cumulant expansion at the fourth order,

\kappa_4 = \mu_4 - 4\kappa_3\kappa_1 - 3\kappa_2^2 - 6\kappa_2\kappa_1^2 - \kappa_1^4,    (1)

where the κ's denote the cumulants (connected correlations) and μ_4 denotes the fourth moment, which includes all connected and disconnected glasma diagrams involving four gluons. The number of glasma diagrams that each term contains is given by μ_4 = 209, κ_3 = 16, κ_2 = 4 and κ_1 = 1 [16]. From this, we find κ_4 = 72, which is incorrect, since one needs to calculate 96 connected rainbow glasma diagrams instead of 72, as explained in detail in Ref. [16]. This error occurs because in Eq. (1) the term 3κ_2² = 3(κ_2^{up} + κ_2^{low}) ⊗ (κ_2^{up} + κ_2^{low}) mixes the upper and lower glasma diagrams (see Fig. 1). The subtraction of the two mixed terms, κ_2^{up} ⊗ κ_2^{low} and κ_2^{low} ⊗ κ_2^{up}, gives rise to wrong counting, since in the leading rainbow glasma approximation such diagrams are never considered in the first place.
In other words, the moment μ_4 already does not contain such correlations, so one should not attempt to subtract the mixed diagrams (κ_2^{up} ⊗ κ_2^{low} and κ_2^{low} ⊗ κ_2^{up}) from μ_4; only the terms κ_2^{up} ⊗ κ_2^{up} + κ_2^{low} ⊗ κ_2^{low} should be subtracted.

Figure 1 (caption): ... for the term κ_2^{low} ⊗ κ_2^{low}, which is a rainbow diagram altogether; this pair of diagrams is therefore required to be subtracted from the moment μ. However, the pair on the right (κ_2^{low} ⊗ κ_2^{up}) mixes lower and upper rainbow diagrams. Such terms are already nonexistent in the fourth moment μ_4, so subtracting such diagrams would lead to wrong counting.
Therefore, we replace the term 3κ_2² in Eq. (1) with 3κ_2²/2 and write the rainbow cumulant for rainbow glasma diagrams as¹

\kappa_4 = \mu_4 - 4\kappa_3\kappa_1 - \tfrac{3}{2}\kappa_2^2 - 6\kappa_2\kappa_1^2 - \kappa_1^4.    (2)

This gives the correct counting κ_4 = 96, which matches the number of connected diagrams calculated explicitly in Ref. [16]. The rainbow cumulant at the fifth order is written in Eq. (3), and at the sixth order in Eq. (4).

Now we derive the formula for the rainbow cumulants. The standard moment of order n in terms of cumulants is given in terms of the partial Bell polynomials B_{n,k},

\mu_n = \sum_{k=1}^{n} B_{n,k}(\kappa_1, \kappa_2, \ldots, \kappa_{n-k+1}),    (5)

and the standard κ_n is found by solving this equation for κ_n. From the discussion above, we can write the expression for the rainbow moment as²

\mu_n = -\kappa_1^n + 2 \sum_{k=1}^{n} B_{n,k}\!\left(\kappa_1, \frac{\kappa_2}{2}, \ldots, \frac{\kappa_{n-k+1}}{2}\right),    (6)

and the rainbow cumulant κ_n is found by solving this equation for κ_n. The formula in Eq. (6) is the first result of this paper.³ After we derive the five- and six-gluon correlation functions later in this work, we will use the rainbow cumulant expansion in Eq. (6) to check that the five- and six-gluon correlation functions include the correct number of terms, where each term corresponds to one connected diagram.

¹ We shall not add any specific identifier index for the rainbow cumulants to distinguish them from the standard ones; the rainbow cumulants are recognized by the 2's in the denominators of some of the terms in the expansion.
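As an independent cross-check of Eq. (6), the diagram counting can be verified symbolically. The sketch below is our own illustration using SymPy's partial Bell polynomials; it parallels, but is not, the authors' Mathematica code in the Appendix:

import sympy as sp

n = 4
kappa = sp.symbols('kappa1:5')   # kappa_1 ... kappa_4
x = sp.symbols('x1:5')           # placeholder arguments for the Bell polynomials
mu4 = sp.Symbol('mu4')

# Eq. (6): mu_n = -kappa_1^n + 2 * sum_k B_{n,k}(kappa_1, kappa_2/2, ..., kappa_{n-k+1}/2)
scaled = {x[0]: kappa[0], **{x[j]: kappa[j] / 2 for j in range(1, n)}}
rhs = -kappa[0]**n + 2 * sum(sp.bell(n, k, x[:n - k + 1]).subs(scaled)
                             for k in range(1, n + 1))

kappa4 = sp.solve(sp.Eq(mu4, rhs), kappa[3])[0]
print(kappa4.subs({mu4: 209, kappa[2]: 16, kappa[1]: 4, kappa[0]: 1}))  # -> 96

With the inputs quoted in the text (μ_4 = 209, κ_3 = 16, κ_2 = 4, κ_1 = 1), the script returns 96, the number of connected rainbow diagrams computed in Ref. [16].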
III. FORMULA OF n-GLUON CORRELATION FUNCTION
In Ref. [38], we derived the formula for the n-gluon correlation function C_n with full momentum and rapidity dependence by using the glasma superdiagrams we developed. Here we quote the formulas that we will use in the next section to derive the five- and six-gluon correlation functions.
The n-gluon correlation function is given by Eq. (7). Here α_s is the QCD strong coupling constant, N_c = 3 is the gluon color factor, S_⊥ is the transverse area of overlap between the target and projectile during the collision, and p_⊥i are the transverse momentum variables of the produced gluons. N_{1,2}, which include the unintegrated gluon distribution (UGD) functions Φ_{A,p}(p_⊥), are given by Eqs. (8) and (9).⁴ The indices of Φ_{A,p}(p_⊥) are as follows: A stands for the target or projectile index (A = 1, 2), the subscript p is the rapidity variable of the gluon, and p_⊥ is the transverse momentum variable of the gluon.

² The Mathematica code for both the standard and rainbow cumulants is given in the Appendix. ³ To emphasize, the non-standard rainbow cumulant expansion is developed here for correctly counting the glasma diagrams; the correlation functions C_n still obey the standard cumulant expansion. This can be understood from the discussion following Eq. (46) in Ref. [16], which is clearly in the form of the standard cumulant expansion at order n = 4. ⁴ The formulas for N_{1,2} given in Ref. [38] included a typo and missed the prefactor f_n, and the prefactor in the second brackets mistakenly read 2^h instead of 2h. These mistakes in Ref. [38] originated from the miscalculation of the κ_5 therein.
The coefficient f_n is given by Eq. (10), where n is the same n as in C_n, i.e., the number of gluons.
It is important to note that the rapidity indices, and the question of which UGD appears with which prefactor in the formulas above, are nontrivial. The former are found by means of superdiagrams, and the latter by means of the rainbow cumulant expansion. In this work, we show only the relevant superdiagrams for C_5 and C_6; for an explanation of how they are drawn, we refer the interested reader to our earlier work [38].
IV. FIVE-GLUON AZIMUTHAL CORRELATION FUNCTION
The five-gluon correlation function C_5 can be written by using the formulas given in Eqs. (7)-(10), with the five gluon momenta labeled p, q, l, w, and r in Eq. (14).
The nth-level moment μ_n includes both the connected and the disconnected glasma diagrams for the n-gluon correlation functions, and it is given by μ_n = 2(2n − 1)!! − 1. It would be practically impossible to calculate such a number of diagrams separately without the superdiagrams, whereas one needs only three superdiagrams for C_5 (see Fig. 2).
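To make the size of the problem concrete, the moment counting μ_n = 2(2n − 1)!! − 1 is easy to tabulate; a quick check (our own illustration):

from sympy import factorial2

# Diagram counting mu_n = 2*(2n - 1)!! - 1.
for n in (4, 5, 6):
    print(n, 2 * factorial2(2 * n - 1) - 1)   # n=4 -> 209, n=5 -> 1889, n=6 -> 20789

The n = 4 value reproduces the μ_4 = 209 quoted in Sec. II.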
VI. CONCLUSION
In this paper, we derived the modified cumulant expansion that was to be used for counting glasma diagrams correctly. This expansion is essential in deriving the correlation functions and verifying that they are found from the correct glasma diagrams. We also derived the five-and six-gluon correlation functions. The six-gluon correlation function, particularly, can be used to calculate the elliptic flow v 2 {6}, and then it can be compared with the measurements. Another use of the correlation functions we calculated could be studying correlations between flow coefficients [46] as well as obtaining other observables from the ratios of the cumulants (C n 's) [47]. These studies are underway.
APPENDIX: CODES FOR THE CUMULANTS IN MATHEMATICA
The Mathematica function that gives the standard cumulant expansion of κ_n [see Eq. (5)] can be written as

mu[n_] := Sum[BellY[n, k, Table[kappa[j], {j, 1, n - k + 1}]], {k, 1, n}]

and that which gives the rainbow cumulant expansion of κ_n [see Eq. (6)] as

muRainbow[n_] := -kappa[1]^n + 2 Sum[BellY[n, k, Table[If[j == 1, kappa[1], kappa[j]/2], {j, 1, n - k + 1}]], {k, 1, n}]

In each case the cumulant κ_n is obtained by solving mu[n] == μ_n (respectively muRainbow[n] == μ_n) for kappa[n]. (The listings above are a reconstruction from the surviving fragment "BellY[n, k, Table[" and Eqs. (5)-(6), not the authors' original code, which did not survive extraction.)
VII. ACKNOWLEDGEMENT
This work is supported by the grant TUBITAK BIDEB 2232-117C008.
On emotions as a condition for morality
The idea that we must free ourselves from the mastery of our emotions in order to act morally has been challenged over the past decades, as Kant scholars have turned to the Metaphysics of Morals and the Critique of Judgment to regain the centrality of emotions in this tradition. I want to expand the claim about the positive role of emotions in Kant's moral theory by arguing that certain emotional states should be understood as having an even more fundamental role, namely, as an empirical condition for morality. Therefore, I will show that the structure Kant provides to explain the human mind conceives of our moral experience as relying on what he calls the lower faculty of feeling. After sketching Kant's approach to cognition, I will show how some feelings are indissociable from the human moral experience, and notably from the ability to act in accordance with our predispositions. I will discuss textual evidence for this view and explain that, although Kant himself failed to devise an explicit taxonomy of emotions, there is a sense in which pathological feelings are to be regarded as a condition for morality.
Introduction
The traditional view that we must free ourselves from the mastery of our emotions in order to act morally has been challenged over the past decades. It has been argued, contra this view, that despite the inclinations' negative status, not all emotional states play the same role for Kant. 1 Furthermore, misconceptions resulting in the idea that Kant's moral theory is cold and overly focused on duties have been addressed (cf. Baron, 1995), and there has been a considerable number of scholarly works on the topic. 2 As a result, the positive role of emotions has finally been brought to the fore. I want to expand the claim about the positive role of emotions in Kant's moral theory by suggesting that certain feelings should be understood as playing an even more fundamental role, namely, as empirical conditions for morality. 3 Along with contemporary criticism, my claim gives strength to the idea that, even from a rationalist perspective, emotions must not be overlooked.
In this discussion, I focus on pathological feelings, 4 since the feeling of respect for the moral law has already been discussed by several scholars. When it comes to morality, I adopt an action-based standpoint, meaning that I assume that determining the possibility of moral action is interchangeable with determining the possibility of morality as a whole. Thus, whereas stating that feelings are conditions 5 for morality does not amount to authorising their presence in moral actions, I will engage in making clear in which sense pathological feelings render morality possible.
Textual evidence allows us to claim that it was not among Kant's aims to develop a theory of emotions. 6 However, it is clear that underlying his theory there is an overall structure that unifies different emotional states into a systematic unit. From this structure emerges Kant's consistency in assigning different roles, weights and valences to human emotions. The threefold theory of the mental faculties, and the functioning of the faculties of feeling and desire are evidence of this. Understanding these divisions and the centrality of the faculty of feeling is essential for determining in which sense emotions are conditions for morality.
I begin by outlining the nature of some emotional states, mapping their function within our mental faculties. Specifically, I will discuss the connection between the faculties of feeling and desire. Next, I offer an account concerning the relevance of certain particular emotional states to morality by looking at ways that pathological feelings are seen to play a positive role in specific settings. Finally, I will be able to show how those feelings connect to our predispositions and how this enables them to be qualified as empirical conditions for morality.
The faculties of the human mind
According to Kant's account of cognition, representations, desire, and feelings of pleasure and displeasure belong to our mental faculties (V-Met/Heinze, AA 28: 228). Our representation of the world takes place via sensibility in combination with understanding (KrV, A 19 / B 33). Sensibility, as a passive faculty, refers to representations received from that which affects us, or which is given to us. On the other hand, understanding produces representations from that which was intuited by the former faculty. Therefore, an objective representation of something relies on the exercise of both powers, which is why Kant claims that "thoughts without content are empty and intuitions without concepts are blind" (KrV, A 51 / B 75; Kant, 1998, p. 107).
The Faculty of Cognition is responsible for our being affected by objects in the world, the Faculty of Feeling renders pleasure or displeasure with regard to what objects trigger in us, and the Faculty of Desire motivates actions towards those objects. The three main faculties work together, resulting in our objective perception of the world. As a result, human action in general, as well as moral action, becomes possible.
In contrast to the other technical terms used by Kant to frame his theory, defining the concept of feeling is difficult since, as Kant famously states, feelings cannot be understood but must be felt (EEKU, AA 20: 232). 7 Given the special relationship between feelings and desires, no definition Kant offers about feelings is independent. They can be explained "by means of the effects that the sensation produces on our state of mind. What directly (through sense) urges me to leave my state (to go out of it) is disagreeable to me -it causes me pain. Just as what drives me to maintain my state (remain in it) is agreeable to me, I enjoy it" (Anth, AA 07: 231; Kant, 2007a, p. 334). Here, feelings are being addressed from a purely functional definition. As emotional states, they refer to particular instances, i.e. "they serve only as pointers to our phenomenology" (Deimling, 2014, p. 110). For example, the unpleasant feeling of hunger that leads me to perform the action of seeking food refers to the specific representation of that specific action. However, Kant also often refers to the very faculty of pleasure and displeasure for the same concept, i.e. the "capacity of having pleasure or displeasure in a representation" (MS, AA 06: 211; Kant, 1996b, p. 373;my emphasis).
Meanwhile, the faculty of desire "is the faculty to be by means of one's representation the cause of the objects of these representations" (MS, AA 06: 211; Kant, 1996b, p. 373). Thus, as mentioned earlier, this faculty corresponds to the power to initiate actions towards the objects represented. While Kant offers us criteria for distinguishing feelings from desires, he also points out the importance of the connection between them.
To illustrate this connection, we might consider the concept of habit. For Kant, actions carried out by habit involve little or no reflection, thereby leading to a lack of the agent's freedom in adopting his maxims (MS, AA 06: 409; Kant, 1996b, p. 536). Thus, since a free action must follow from one's deliberation, actions performed by habit have no moral worth. Consider, for example, the habit of eating meat. It is said that an agent who regularly eats meat without reflecting on the reasons for such behaviour does so out of habit. It involves little or no reflection and is likely an outcome of one's desire to eat meat. Apart from leading us to highlight the relevance of reflection for the performance of moral deeds, this example prompts us to observe that desires are often connected with feelings of pleasure or displeasure, whether as a cause or an effect.
But even when a feeling causes a desire, it is still possible for us to act in disagreement with this desire. In fact, even if our desire points to the performance of certain actions, it is always possible for us to decide whether or not to engage in those actions. 8 The ground of moral obligation is, according to Kant, the ability of the moral law to bind us regardless of previous desires, meaning that we must be able to act in another way, endorsing or opposing our desires. 9 In short, desires do not decisively ground our deeds. Although they are often connected to feelings, we must be able to choose whether or not our actions will conform to them.
Accounting for pathological feelings
The relevance of the faculty of feeling is often stressed by arguments concerning the positive role of certain affective states for morality. Affects (and specifically, reason-caused affects) can play this role by helping us meet the requirements of the moral law. 10 Rescuing a drowning person requires, for example, an action which, though caused by reason, does not need to involve a complex reflection at the very moment it is performed. 11 There are several feelings that make moral life more meaningful while orienting us towards our moral ends. Sympathy (sympathia moralis), for example, refers to sensibility "at another's state of joy or pain" (MS, AA 06: 457; Kant, 1996b, p. 575). Accordingly, this feeling helps us to understand the moral saliencies and identify when action should be taken. It can, for example, give us insight into someone's suffering, where recognising that suffering is a condition for the adoption of the generosity required in such situations. Emotional nuances are also required when we act morally, so that we are able, for example, to communicate our intentions appropriately since, for Kant, "we can and should perform difficult but necessary work in good humor [...]: for all these things lose their value if they are done or endured in bad humor and in a morose frame of mind" (Anth, AA 07: 236; Kant, 2007a, p. 339). 12 In sum, certain feelings guide us in the moral field, providing data that enable us to determine the best means of going along this path. 13 However, such emotional states cannot be conditions for morality, as they hold a purely instrumental worth, meaning that, depending on the circumstances, these feelings may suit shady interests. The courage that motivates one to throw oneself into the water to save another can, at the same time, provide the means for an agent to perform actions contrary to duty. Someone may be sympathetic to someone else's diabolical plan and thereby support its implementation. In addition, still on the subject of sympathy, the potential self-indulgence of the agent is a concern of Kant, for "the sympathiser can easily become engaged by his own sympathy: there you are, in distress and pain, and here I am, offering you the magnanimous gift of my sympathetic attention and care" (Sorensen, 2018, p. 210). Thus, not even the fact that "reasoning about the moral law strikes down this propensity" (ibid.) would suffice to guarantee feelings such as sympathy the status of a condition for morality. Now I want to turn to a thought experiment we usually engage in when it comes to determining whether x is a condition for y. If we imagine a life free of feelings such as sympathy, would it still be possible for an agent to perform moral actions? A possible answer may find its expression in Kant's account of apathy. For Kant, the motivational ability of certain emotional states 14 hinders us from being able to govern ourselves. Moral apathy, i.e., a life without affects and other sensuous feelings, is therefore "highly commendable, insofar as it consists in freedom from those mental propensities" (Refl 1526, AA 15: 940; Kant, 2007b, p. 184). 15 This sharpens the point: is it possible to predicate morality of the actions performed by agents lacking these predispositions? Without them, how should we conceive moral experience? These topics will be discussed in the following.
10 Sorensen (2011) discusses the positive contributions of certain reason-caused affects to morality by stressing that there are cases where these emotional states are necessary to the practice of actions required by the moral law. 11 The lack of reflection is an outstanding feature of affects, as they are "precipitate or rash (animus praeceps)" (MS, AA 06: 408; Kant, 1996b, p. 535; see also Anth, AA 07: 252; KU, AA 05: 272n; and VM, AA 28: 256). 12 The argument about the appropriate emotional tone is drawn by Sherman (1997). 13 As Sherman (2014) argues, it is in this sense that certain emotions have a positive status for morality. Although contingent upon human nature, emotions are intrinsic to our moral responses. 14 Remarkably, passions and affects. They are pathological emotional states of the faculties of desire and feeling, respectively (cf. VM, AA 28: 256). 15 Thus, because virtue presupposes apathy, the principle of apathy (apathia moralis) is acknowledged by Kant as a duty (cf. MS, AA 06: 408-409).
Lower feelings as empirical conditions
As human beings, sensible and intelligible, our will is dually determined. Unlike a holy will, which is already in conformity with its subjective constitution, the human will is governed both by the moral law and by the dispositions resulting from our sensible nature (GMS, AA 4: 414). Thus, when performing moral actions, our sensible nature, viz. our inclinations, is in place. This difference explains the normative character of human moral experience: whereas for a holy will the moral law is descriptive, for sensibly affected rational beings the law presents itself through categorical imperatives. 16 Since our cognition depends both on sensibility and intellect, we can divide our three mental faculties into lower and higher. "All lower faculties," claims Kant, "constitute sensibility, and all higher faculties constitute intellectuality" (VM, AA 28: 229; Kant, 1997, p. 48). Thus, the lower faculty of pleasure and displeasure refers to the capacity "to find satisfaction or dissatisfaction in the objects which affect us", while "the higher faculty of pleasure and displeasure is a power to sense a pleasure and displeasure in ourselves, independently of objects", and is therefore formal (ibid.). Our lower feelings result to a great extent from what Kant calls our predispositions to the good, which correspond to our predispositions as living beings, as rational beings, and finally, as beings who are not only rational but also responsible (RGV, AA 06: 26-28; Kant, 1996a, pp. 74-76).
Nonetheless, granted that those emotional states often prevent us from "bringing all [one's] capacities and inclinations under (reason's) control" (MS, AA 06: 408; Kant, 1996b, p. 535), it does not follow that they are not meant to exist. In fact, "considered in themselves, natural inclinations are [...] not reprehensible, and to want to extirpate them would not only be futile but harmful and blameworthy as well" (RGV, AA 6: 58; Kant, 1996a, p. 102). Feelings of pleasure and displeasure, (as well as habitual sensible desires/inclinations) are not evil in themselves. As Merritt points out, "human thought irremediably involves mechanisms of habit, inclination and imitation [and] we cannot change this about ourselves" (Merritt, 2018, p. 40). Moreover, the foundation for evil must always be rational in such a way that those emotional states alone are insufficient to provide us with reasons to act. Instead, they require a rational force resulting from a free will in order to be incorporated into the maxim of an action.
This leads us to the fact that free action, and therefore the possibility of moral action, does not relinquish our sensible nature. 17 Sensible impulses are, conversely, intrinsic to our power of choice, "for otherwise it would be a pure power of choice <arbitrium purum>, a pure self-dependent being, which can determine itself only according to the laws, not against them" (VM, AA 29: 1016; Kant, 1997, p. 485). Our sensible nature renders it possible for the moral law to impose itself on our will in such a way that it can be incorporated as a maxim. Adopting the moral law as a maxim, in other words, is only possible due to our sensible structure.
Clearly, morality relies on our sensible nature, in that without sensible experience we would be morally dead, 18 for every action resulting from a will requires a determination through feelings. Yet, in light of the relationship between desires and feelings discussed earlier, it turns out that our lower feelings have a peculiar connection with our predispositions. For example, actions concerning our self-preservation, actions that we practice in pursuit of happiness and, of course, actions whose maxims are moral require lower feelings to be brought about. While higher feelings provide us with principles such as "love your neighbour", lower ones provide us with the empirical content necessary to motivate us in performing specific deeds. 19 To exemplify this, consider distinctive characteristics of human moral experience such as the idea of happiness. 20 Lacking the lower feelings, human experience would not encompass happiness, a prominent feature of our moral experience, as all finite rational beings want as a matter of fact to be happy. 21 Therefore, a life free from lower feelings would not conform to our moral experience. To say that, for Kant, actions must be performed in cold blood would not imply that the existence of those feelings is unauthorised, but rather that, given the makeup of our human nature, these emotional states must be brought into harmony in such a way that the moral law can present itself as normative for us.

The actions we take to satisfy what is determined by our predispositions depend on pathological feelings, as they provide empirical material for our predispositions. Thus, to maintain that lower feelings account for the empirical, and therefore necessary, structure of our moral experience is to take a step back and secure these emotional states as elements without which our moral experience would not be possible.

16 Every human decision about how to act based on the categorical imperative has the moral law as its ground of determination [Bestimmungsgrund]. 17 Taking reason away from us, we would be animals whose actions would correspond to direct responses to our stimuli. As rational animals, however, our stimuli do not impose themselves with necessity, but only urge us to perform certain deeds. It is known that free choice, for Kant, is the result of overcoming these impulses as elements that determine action. This is not equivalent, however, to dismissing these elements as needless. 18 For "no man is entirely without moral feeling, for were he completely lacking in susceptibility to it he would be morally dead; and if [...]"
Conclusion
As I have shown, the approach Kant provides regarding the functioning of the human mind leaves plenty of room for emotions when it comes to morality. Not only do the three faculties have to work in agreement if we are to be able to represent objectively the world around us, but the lower and higher parts of each faculty are closely related.
Contemporary accounts regarding the role of emotions in Kant's moral theory highlight the common intuition that a life devoid of emotional nuances does not accurately describe human moral experience. Along with those views, I have argued that we neither can nor should try to get rid of our natural inclinations, given their key role for moral motivation. For they yield content for us to make sense of our moral experience; pathological feelings do play a part as an empirical condition for morality. Without them we would not be able to fulfil the particular ends arising from the propensities determining our human nature.
Neuromuscular Junction Abnormalities in Mitochondrial Disease
Objective To determine the prevalence of neuromuscular junction (NMJ) abnormalities in patients with mitochondrial disease. Methods Eighty patients with genetically proven mitochondrial disease were recruited from a national center for mitochondrial disease in the United Kingdom. Participants underwent detailed clinical and neurophysiologic testing including single-fiber electromyography. Results The overall prevalence of neuromuscular transmission defects was 25.6%. The highest prevalence was in patients with pathogenic dominant RRM2B variants (50%), but abnormalities were found in a wide range of mitochondrial genotypes. The presence of NMJ abnormalities was strongly associated with coexistent myopathy, but not with neuropathy. Furthermore, 15% of patients with NMJ abnormality had no evidence of either myopathy or neuropathy. Conclusions NMJ transmission defects are common in mitochondrial disease. In some patients, NMJ dysfunction occurs in the absence of obvious pre- or post-synaptic pathology, suggesting that the NMJ may be specifically affected.
and several articles have described the difficulty of distinguishing between them on clinical grounds alone. [5][6][7] The picture is further complicated by reports of abnormal neurophysiologic indices of NMJ dysfunction in patients with mitochondrial disease. [8][9][10] However, these have been case reports or small series in patients in whom a diagnosis of mitochondrial disease was not always genetically proven. Furthermore, no attempt has been made to investigate the mechanism of the observed NMJ abnormalities. The prevalence of NMJ abnormalities in mitochondrial disease remains unclear, and it remains unknown whether these abnormalities are a direct consequence of mitochondrial dysfunction at the end plate or merely reflect NMJ damage from the neuropathy and myopathy commonly found in these patients. 11,12 To address these questions, we present a large systematic study of NMJ function in a cohort of patients with genetically proven mitochondrial disease. This work provides evidence that NMJ dysfunction is independent of coexisting neuropathy and myopathy, raising the possibility that the NMJ may be specifically vulnerable to mitochondrial dysfunction.
Methods
Eighty participants (mean age 49.5 years, range 18-81 years, 29.8% males) with a confirmed genetic diagnosis of mitochondrial disease were recruited from a specialized mitochondrial disease clinic held in the Highly Specialised Rare Mitochondrial Disorders of Adults and Children Service, Newcastle Hospitals National Health Service Foundation Trust. This is one of 3 nationally commissioned mitochondrial disease clinics in the United Kingdom. Referrals were either based on a clinical suspicion of neuromuscular system involvement or were performed as routine screening in accordance with the Newcastle Mitochondrial Disease Neuropathy Guideline. 13 Participants were assessed clinically using the Newcastle Mitochondrial Disease Scale for Adults (NMDAS), a semiquantitative clinical rating scale designed for all forms of mitochondrial disease. 14 Each subsection was scored between 0 and 5 and included assessments of exercise tolerance, gait, ptosis, external ophthalmoplegia, proximal weakness, and deep tendon reflexes. Blood was also taken for creatine kinase (CK) level and glycosylated hemoglobin (HbA1c).
All neurophysiology studies were performed and reviewed by a consultant neurophysiologist (R.G.W.). Nerve conduction studies were performed using surface electrodes (Natus Medical) on a Keypoint electromyography (EMG) machine (Dantec). For motor studies, the median, ulnar, common peroneal, and tibial nerves were studied; for sensory studies, the median, ulnar, sural, and superficial peroneal nerves were assessed. Results were compared with published reference data and classified as normal, motor-sensory axonal polyneuropathy, sensory axonal neuropathy, sensory neuronopathy, and motor-sensory demyelinating neuropathy.
EMG was performed using a 25-G concentric needle (Natus Medical) on the muscle(s) studied in single-fiber electromyography (SFEMG): extensor digitorum communis (EDC) and/or orbicularis oculi (OO). In addition, EMG was performed on at least 2 other muscles (deltoid, biceps brachii, vastus lateralis, and tibialis anterior). Analysis of spontaneous activity at rest, individual motor unit potentials, and of interference pattern was made, and the results classified as normal, neurogenic, or myopathic.
Repetitive nerve stimulation was performed in the first 33 participants (42.3%) and was normal in all. Patients found this to be very uncomfortable, and for this reason, we limited the examination to the more sensitive single-fiber EMG in the remaining participants.
SFEMG was performed on the EDC and/or OO muscles using a 25-G facial concentric needle (bandpass 1-10 kHz). SFEMG was not possible in 2 patients because of severe myoclonus or tremor. SFEMG was performed on the EDC muscle in all of the remaining 78 patients and in the OO muscle in 15 patients. The proportion of pairs with increased jitter and/or blocking fibers was recorded, and the study was considered abnormal when ≥10% of fiber pairs showed increased jitter or blocking. 15

Statistical analysis was performed using IBM SPSS Statistics V24. Parametric data were compared between groups using the unpaired Student t test, and nonparametric data were compared using the chi-square statistic.
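As a concrete illustration of the two analysis steps just described, the sketch below applies the ≥10% abnormality rule and runs a chi-square comparison on a 2x2 table. It is a minimal sketch in Python: the function names are ours, the counts are invented for the example, and whether the original analysis used a continuity correction is not stated in the text.

```python
# Sketch of the SFEMG abnormality rule and a chi-square group comparison.
# Counts below are illustrative only, not study data.
from scipy.stats import chi2_contingency

def sfemg_abnormal(n_pairs_total: int, n_pairs_increased: int) -> bool:
    """Abnormal when >=10% of fiber pairs show increased jitter or blocking."""
    return n_pairs_increased / n_pairs_total >= 0.10

print(sfemg_abnormal(20, 3))  # True: 15% of fiber pairs abnormal

# 2x2 table: rows = NMJ dysfunction yes/no, columns = myopathy yes/no.
table = [[16, 4],
         [25, 33]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```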
Standard Protocol Approvals, Registrations, and Patient Consents
The study was approved by Newcastle and North Tyneside Local Research Ethics Committee. All patients gave written consent before study enrollment.
Data Availability
Anonymized study data will be made available on request.
Results

The highest percentage of both jittering and blocking fiber pairs was found in participants with the Twinkle mutation (up to 40% and 13.6%, respectively). No association was found between severity of jitter or blocking and overall disease severity assessed using the NMDAS scale, CK, or HbA1c level.
Correlation With Coexisting Neuromuscular Disorders
Neurophysiologic evidence of neuropathy was detected in 18 of 78 participants (23%); in 11 of these, only sensory fibers were involved; in 6, a mixed motor and sensory axonal neuropathy was found; and in 1 patient, a mixed demyelinating neuropathy was found. Two participants had evidence of carpal tunnel syndrome but not of a polyneuropathy. Myopathy was found in 41 of 78 participants (52.6%). Ten participants had both neuropathy and myopathy; in 8 of these, the neuropathy affected sensory fibers only.
Treating neuropathy, myopathy, and NMJ dysfunction as independent conditions, we compared the frequency of neuropathy and myopathy in participants with and without NMJ dysfunction. The frequency of neuropathy was similar in participants with and without NMJ dysfunction (chi-square statistic 0.97, p = 0.325). Furthermore, no participants with neuropathy alone had evidence of NMJ dysfunction. In contrast, the frequency of myopathy differed significantly between groups, being higher in the group with NMJ dysfunction (chi-square statistic 7.16, p = 0.0074). NMJ dysfunction was also found in 9 patients with myopathy alone.
We found no difference in the overall NMDAS score, exercise tolerance, gait, ptosis, external ophthalmoplegia, CK level, or HbA1c level between participants with and without NMJ dysfunction.
Three of the 20 participants (15%) with NMJ abnormality had no neurophysiologic evidence of either neuropathy or myopathy (1 patient with a single, large-scale mtDNA rearrangement and 2 patients with dominant TWNK variants). The mean percentage of abnormal fiber pairs in participants with neither myopathy nor neuropathy was higher than in participants with myopathy alone or both myopathy and neuropathy (33.3% in participants with neither vs 20.5% myopathy alone vs 17.0% in both).
Discussion
We present a large systematic study of NMJ function in a cohort of patients with genetically proven mitochondrial disease. We find that 20 of 78 (25.6%) participants included in the study population had abnormal NMJ transmission, with jitter values similar to patients with myasthenia gravis. NMJ abnormalities were seen in a wide range of mitochondrial genetic defects including nuclear gene defects, single large-scale mtDNA rearrangements, and mtDNA point mutations.
Our cohort was recruited from one of 3 national referral clinics for mitochondrial disease, and all had genetically proven mitochondrial disease. The spread of genotypes is broadly similar to published population-based studies in that the pathogenic m.3243A>G variant and single, large-scale mtDNA deletions represent the 2 most common mitochondrial genotypes presenting with multisystem disease. 16 However, our cohort was referred for neurophysiologic testing based on a clinical suspicion of coexisting neuromuscular disease. Consequently, it is likely that the prevalence of NMJ dysfunction in our cohort overestimates that in all patients with mitochondrial disease, which include a considerable number of asymptomatic carriers. However, to perform such detailed and time-consuming neurophysiologic testing in asymptomatic carriers presents significant logistical and ethical barriers.
Mitochondria are abundant in both the presynaptic motor nerve terminals and the postsynaptic junctional folds. 17 Given the key role of mitochondria in vesicle release and recycling, 18 it is perhaps surprising that we find no association between the presence of neuropathy and NMJ dysfunction. This may be because the majority of patients had a pure sensory neuropathy, which would not be expected to affect the NMJ. In contrast, we find a strong association between NMJ dysfunction and the presence of myopathy across all genotypes. This may be coincidental, given that both NMJ dysfunction and myopathy are common features in mitochondrial disease. However, abnormal jitter and blocking are described in several myopathies, including myotonic dystrophy type 1, 19 polymyositis, 20 and inclusion body myositis. 21 The mechanism remains unclear, although destruction of the postsynaptic folds has been described in a rodent model of Duchenne muscular dystrophy. 22 Whether similar changes occur in mitochondrial disease is unknown, and to our knowledge, no ultrastructural studies of NMJ morphology in mitochondrial disease have been performed.
Arguing against a causal relationship is the observation that in 15% of patients with NMJ dysfunction, no evidence was found of either myopathy or neuropathy. Of interest, the percentage of fibers with increased jitter was higher in this group than in those with myopathy alone or both myopathy and neuropathy. These patients with a primary defect of neuromuscular transmission in what are presumably structurally normal NMJs may be a particularly suitable group to target with therapies that boost NMJ transmission.
Unfortunately, we found no simple means of identifying these patients before neurophysiologic testing. Patients with significant NMJ dysfunction did not differ in their disease severity as assessed using the NMDAS scale or in any of the relevant subsections. There was also no difference in routine blood tests (CK and HbA1c level). The only predictor of NMJ dysfunction was electromyographic evidence of myopathy; however, relying on this alone would fail to identify those patients with a "pure" NMJ disorder. This means that at present, the only means of reliably detecting NMJ dysfunction is with detailed neurophysiology including single-fiber EMG. The time taken to perform this technique, the need for specialist expertise, and the limited availability in some countries limit the applicability of these findings. Nevertheless, a case can be made that patients being seen in a specialist center in which these are available should be considered for SFEMG at least once.
The distinction between genetically proven mitochondrial disease and autoimmune myasthenia gravis (particularly ocular myasthenia) is not always obvious on clinical grounds alone, and our data show that SFEMG abnormalities can be seen in both patient populations. Patients with autoimmune myasthenia gravis typically show marked fluctuation in symptom severity, often have bulbar symptoms early in the disease, and an association with other autoimmune diseases. 23 In contrast, patients with mitochondrial disease typically have limited symptom fluctuation, develop bulbar symptoms late on, and have a pattern of multisystem disease including other neurologic symptoms such as seizures and ataxia. 24 Autoantibody testing (including antibodies to the acetylcholine receptor, muscle specific kinase, LRP4, and agrin) can help confirm an autoimmune etiology, but up to 15% of patients are seronegative for known autoantibodies. Perhaps the most important point is that mitochondrial disease should be considered in
Bandwidth, expansion, treewidth, separators, and universality for bounded degree graphs
We establish relations between the bandwidth and the treewidth of bounded degree graphs G, and relate these parameters to the size of a separator of G as well as the size of an expanding subgraph of G. Our results imply that if one of these parameters is sublinear in the number of vertices of G then so are all the others. This implies for example that graphs of fixed genus have sublinear bandwidth or, more generally, a corresponding result for graphs with any fixed forbidden minor. As a consequence we establish a simple criterion for universality for such classes of graphs and show for example that for each γ > 0 every n-vertex graph with minimum degree (3/4 + γ)n contains a copy of every bounded-degree planar graph on n vertices if n is sufficiently large.
Introduction
There are a number of different parameters in graph theory which measure how well a graph can be organised in a particular way, where the type of desired arrangement is often motivated by geometrical properties, algorithmic considerations, or specific applications. Well-known examples of such parameters are the genus, the bandwidth, or the treewidth of a graph. The topic of this paper is to discuss the relations between such parameters. We would like to determine how they influence each other and what causes them to be large. To this end we will mostly be interested in distinguishing between the cases when these parameters are linear or sublinear in the number of vertices in the graph.
We start with a few simple observations. Let G = (V, E) be a graph on n vertices. The bandwidth of G is denoted by bw(G) and defined to be the minimum positive integer b, such that there exists a labelling of the vertices in V by numbers 1, . . . , n so that the labels of every pair of adjacent vertices differ by at most b. Clearly one reason for a graph to have high bandwidth is the presence of vertices of high degree, as bw(G) ≥ ⌈∆(G)/2⌉, where ∆(G) is the maximum degree of G. It is also clear that not all graphs of bounded maximum degree have sublinear bandwidth: Consider for example a random bipartite graph G on n vertices with bounded maximum degree. Indeed, with high probability, G does not have small bandwidth since in any linear ordering of its vertices there will be an edge between the first n/3 and the last n/3 vertices in this ordering. The reason behind this obstacle is that G has good expansion properties (definitions and exact statements are provided in Section 2). This implies that graphs with sublinear bandwidth cannot exhibit good expansion properties. One may ask whether the converse is also true, i.e., whether the absence of big expanding subgraphs in bounded-degree graphs must lead to small bandwidth. We will prove that this is indeed the case via the existence of certain separators.
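To make the definition concrete: computing bw(G) exactly is NP-hard in general, but the bandwidth of any one labelling is trivial to evaluate, which is all that arguments like the one above need. The following Python sketch (our naming, illustrative graph) does exactly that.

```python
# Minimal sketch: the bandwidth of a *given* labelling is the largest
# label difference over the edges; bw(G) minimises this over all labellings.
def labelling_bandwidth(edges, labels):
    """edges: iterable of (u, v); labels: dict vertex -> position 1..n."""
    return max(abs(labels[u] - labels[v]) for u, v in edges)

# A path on 4 vertices labelled along the path has bandwidth 1.
path_edges = [(0, 1), (1, 2), (2, 3)]
print(labelling_bandwidth(path_edges, {0: 1, 1: 2, 2: 3, 3: 4}))  # -> 1
```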
In fact, we will show a more general theorem in Section 2 (Theorem 8) which proves that the concepts of sublinear bandwidth, sublinear treewidth, bad expansion properties, and sublinear separators are equivalent for graphs of bounded maximum degree. In order to prove this theorem, we will establish quantitative relations between the parameters involved (see Theorem 5, Theorem 6, and Proposition 7).
As a byproduct of these relations we obtain sublinear bandwidth bounds for several graph classes (see Section 4). Since planar graphs are known to have small separators [19] for example, we get the following result: The bandwidth of a planar graph on n vertices with maximum degree at most ∆ is bounded from above by bw(G) ≤ 15n/log_∆(n). This extends a result of Chung [8] who proved that any n-vertex tree T with maximum degree ∆ has bandwidth at most 5n/log_∆(n). Similar upper bounds can be formulated for graphs of any fixed genus and, more generally, for any graph class defined by a set of forbidden minors (see Section 4.1).
In Section 4.2 we conclude by considering applications of these results to the domain of universal graphs and derive implications such as the following. If n is sufficiently large then any n-vertex graph with minimum degree slightly above (3/4)n contains every planar n-vertex graph with bounded maximum degree as a subgraph (cf. Corollary 19).
Definitions and Results
In this section we formulate our main results which provide relations between the bandwidth, the treewidth, the expansion properties, and separators of bounded degree graphs. We need some further definitions. For a graph G = (V, E) and disjoint vertex sets A, B ⊆ V we denote by E(A, B) the set of edges with one vertex in A and one vertex in B and by e(A, B) the number of such edges.
Next, we will introduce the notions of tree decomposition and treewidth. Roughly speaking, a tree decomposition tries to arrange the vertices of a graph in a tree-like manner and the treewidth measures how well this can be done.
Definition 1 (tree decomposition, treewidth). A tree decomposition of a graph G = (V, E) is a pair ({X_i : i ∈ I}, T = (I, F)) where the sets X_i ⊆ V cover V, every edge of G has both endpoints in some X_i, and T = (I, F) is a tree such that for every i, j, k ∈ I the following holds: if j lies on the path from i to k in T, then X_i ∩ X_k ⊆ X_j. The width of ({X_i : i ∈ I}, T = (I, F)) is defined as max_{i∈I} |X_i| − 1. The treewidth tw(G) of G is the minimum width of a tree decomposition of G.
It follows directly from the definition that tw(G) ≤ bw(G) for any graph G: if the vertices of G are labelled by numbers 1, . . . , n such that the labels of adjacent vertices differ by at most b, then I := [n − b], X_i := {i, . . . , i + b} for i ∈ I and T := (I, F) with F := {{i − 1, i} : 2 ≤ i ≤ n − b} define a tree decomposition of G with width b.
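Since this construction is fully explicit, it can be written out directly. The sketch below (our naming, in Python) builds the bags X_i = {i, ..., i+b} and the path-shaped tree and confirms the width is b; it assumes the labelling is the identity on 1..n.

```python
# Sketch of the tree decomposition obtained from a bandwidth-b labelling:
# bags X_i = {i, ..., i+b} for i in [n-b], tree edges {i-1, i}.
def band_decomposition(n: int, b: int):
    bags = {i: set(range(i, i + b + 1)) for i in range(1, n - b + 1)}
    tree_edges = [(i - 1, i) for i in range(2, n - b + 1)]
    return bags, tree_edges

bags, tree = band_decomposition(n=6, b=2)
# Any edge {u, v} with |u - v| <= b lies inside some bag, and the
# width is max |X_i| - 1 = b.
print(max(len(x) for x in bags.values()) - 1)  # -> 2
```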
A separator in a graph is a small cut-set that splits the graph into components of limited size.
Definition 2 (separator, separation number). Let 1/2 ≤ α < 1 be a real number, s ∈ N, and let G = (V, E) be a graph on n vertices. A subset S ⊆ V is an (s, α)-separator of G if |S| ≤ s and there are disjoint sets A, B ⊆ V with V = A ∪ B ∪ S, |A|, |B| ≤ αn, and E(A, B) = ∅. We also say that S separates G into A and B. The separation number s(G) of G is the smallest s such that all subgraphs G′ of G have an (s, 2/3)-separator.
A vertex set is said to be expanding, if it has many external neighbours. We call a graph bounded, if every sufficiently large subgraph contains a subset which is not expanding.
Definition 3 (expander, bounded). Let ε > 0 be a real number, b ∈ N and consider graphs G = (V, E) and G′ = (V′, E′). We call G′ an ε-expander if every set U ⊆ V′ with |U| ≤ |V′|/2 satisfies |N_{G′}(U)| ≥ ε|U|. The graph G is called (b, ε)-bounded if no subgraph of G on more than b vertices is an ε-expander, and b_ε(G) denotes the smallest b for which G is (b, ε)-bounded. There is a wealth of literature on expander graphs (see, e.g., [15]). In particular, it is known that for example (bipartite) random graphs with bounded maximum degree form a family of ε-expanders. We also loosely say that such graphs have good expansion properties.
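Reading Definition 3 as reconstructed above, membership can be checked by brute force on tiny graphs. The Python sketch below is ours and purely illustrative (it enumerates all subsets and is exponential); it assumes the adjacency map describes the candidate subgraph itself.

```python
# Brute-force epsilon-expander check for a candidate subgraph G' = (V', E').
# Exponential in |V'|; for illustration only.
from itertools import combinations

def is_epsilon_expander(vertices, adj, eps):
    """adj: dict vertex -> set of neighbours within the subgraph."""
    vs = list(vertices)
    for k in range(1, len(vs) // 2 + 1):
        for subset in combinations(vs, k):
            u = set(subset)
            boundary = set().union(*(adj[v] for v in u)) - u
            if len(boundary) < eps * len(u):
                return False
    return True

# A 4-cycle: every single vertex has 2 outside neighbours, every pair >= 1.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_epsilon_expander(c4.keys(), c4, eps=0.5))  # -> True
```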
As indicated earlier, our aim is to provide relations between the parameters we defined above. A well known example of a result of this type is the following theorem due to Robertson and Seymour which relates the treewidth and the separation number of a graph. 1 Theorem 4 (treewidth→separator, [20]). All graphs G have separation number s(G) ≤ tw(G) + 1.
This theorem states that graphs with small treewidth have small separators. By repeatedly extracting separators, one can show that (a qualitatively different version of) the converse also holds: tw(G) ≤ O(s(G) log n) for a graph G on n vertices (see, e.g., [4,Theorem 20]). In this paper, we use a similar but more involved argument to show that one can establish the following relation linking the separation number with the bandwidth of graphs with bounded maximum degree.
Theorem 5 (separator→bandwidth). Let ∆ ≥ 2 and let G be a graph on n vertices with maximum degree ∆(G) ≤ ∆. Then bw(G) ≤ 6n/log_∆(n/s(G)).

The proof of this theorem is provided in Section 3.2. Observe that Theorems 4 and 5 together with the obvious inequality tw(G) ≤ bw(G) tie the concepts of treewidth, bandwidth, and separation number well together. Apart from the somewhat negative statement of not having a small separator, what can we say about a graph with large tree- or bandwidth? The next theorem states that such a graph must contain a big expander.
Theorem 6 (treewidth→expander). Let ε > 0 and let G be a graph on n vertices. Then tw(G) ≤ 2εn + 2b_ε(G).

A result with similar implications was recently proved by Grohe and Marx in [13]. It shows that b_ε(G) < εn implies tw(G) ≤ 2εn. For the sake of being self-contained we present our (short) proof of Theorem 6 in Section 3.3. In addition, it is not difficult to see that conversely the boundedness of a graph can be estimated via its bandwidth, which we prove in Section 3.3, too.

Proposition 7 (bandwidth→boundedness). For every ε > 0, every graph G satisfies b_ε(G) ≤ 2 bw(G)/ε.
A qualitative consequence summarising the four results above is given in the following theorem. It states that if one of the parameters bandwidth, treewidth, separation number, or boundedness is sublinear for a family of bounded degree graphs, then so are the others.
Theorem 8 (sublinear equivalence theorem). Let ∆ be an arbitrary but fixed positive integer and consider a hereditary class of graphs C such that all graphs in C have maximum degree at most ∆. Denote by C_n the set of those graphs in C with n vertices. Then the following four properties are equivalent: (1) for all β > 0 there is n₀ such that tw(G) ≤ βn for all G ∈ C_n with n ≥ n₀; (2) for all β > 0 there is n₀ such that bw(G) ≤ βn for all G ∈ C_n with n ≥ n₀; (3) for all β > 0 and all ε > 0 there is n₀ such that b_ε(G) ≤ βn for all G ∈ C_n with n ≥ n₀; (4) for all β > 0 there is n₀ such that s(G) ≤ βn for all G ∈ C_n with n ≥ n₀.

The paper is organized as follows. Section 3 contains the proofs of all the results mentioned so far: First we derive Theorem 8 from Theorems 4, 5, 6 and Proposition 7. Then Section 3.2 is devoted to the proof of Theorem 5, whereas Section 3.3 contains the proofs of Theorem 6 and Proposition 7. Finally, in Section 4, we apply our results to deduce that certain classes of graphs have sublinear bandwidth and can therefore be embedded as spanning subgraphs into graphs of high minimum degree.
3.2. Separation and bandwidth.
For the proof of Theorem 5 we will use the following decomposition result which roughly states the following. If the removal of a small separator S decomposes the vertex set of a graph G into relatively small components R_i ∪ P_i such that the vertices in P_i form a "buffer" between the vertices in the separator S and the set of remaining vertices R_i in the sense that dist_G(S, R_i) is sufficiently big, then the bandwidth of G is small.
Lemma 9 (decomposition lemma). Let G = (V, E) be a graph and S, P, and R be vertex sets such that V = S ∪ P ∪ R. For b, r ∈ N with b ≥ 3 assume further that there are decompositions P = P_1 ∪ ... ∪ P_b and R = R_1 ∪ ... ∪ R_b of P and R, respectively, such that the following properties are satisfied: (i) |R_i| ≤ r for all i ∈ [b]; (ii) e(R_i ∪ P_i, R_j ∪ P_j) = 0 for all i ≠ j ∈ [b]; (iii) N_G(S) ⊆ S ∪ P. Then bw(G) < 2(|S| + |P| + r).
Proof. Assume we have G = (V, E), V = S ∪ P ∪ R and b, r ∈ N with the properties stated above. Our first goal is to partition V into pairwise disjoint sets B_1, ..., B_b, which we call buckets, and that satisfy the following property: every edge of G runs inside a single bucket or between two consecutive buckets. (1) To this end all vertices of R_i are placed into bucket B_i for each i ∈ [b] and the vertices of S are placed into bucket B_{⌈b/2⌉}. The remaining vertices from the sets P_i are distributed over the buckets according to their distance from S. (2) This placement obviously satisfies |B_i| ≤ |S| + |P| + r for all i ∈ [b] (3) by construction and condition (i). Moreover, we claim that it guarantees condition (1). Indeed, let {u, v} ∈ E be an edge. If u and v are both in S then clearly (1) is satisfied. Thus it remains to consider the case where, without loss of generality, u ∈ R_i ∪ P_i for some i ∈ [b]. By condition (ii) this implies v ∈ S ∪ R_i ∪ P_i. First assume that v ∈ S. Thus dist(u, S) = 1 and from condition (iii) we infer that u ∈ P_i. Accordingly u is placed into bucket B_{j(u)} ∈ {B_{⌈b/2⌉−1}, B_{⌈b/2⌉}, B_{⌈b/2⌉+1}} by (2) and v is placed into bucket B_{⌈b/2⌉} and so we also get (1) in this case. If both u, v ∈ R_i ∪ P_i, on the other hand, we are clearly done if u, v ∈ R_i. So assume without loss of generality, that u ∈ P_i. If v ∈ P_i then we conclude from (2) that u and v lie in the same or in consecutive buckets. Thus we also get (1) in this last case. Now we are ready to construct an ordering of V respecting the desired bandwidth bound. We start with the vertices in bucket B_1, order them arbitrarily, proceed to the vertices in bucket B_2, order them arbitrarily, and so on, up to bucket B_b. By condition (1) this gives an ordering with bandwidth at most twice as large as the largest bucket and thus we conclude from (3) that bw(G) < 2(|S| + |P| + r).
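The final step of this proof, turning a bucket partition into a vertex ordering, is simple enough to spell out. A minimal Python sketch (our naming) follows; it assumes, as condition (1) guarantees, that every edge joins vertices in the same or in consecutive buckets, so the resulting labelling has bandwidth below twice the largest bucket size.

```python
# Sketch: concatenate buckets B_1, ..., B_b into one vertex labelling.
def order_by_buckets(buckets):
    """buckets: list of lists of vertices; returns dict vertex -> label 1..n."""
    labels, pos = {}, 1
    for bucket in buckets:
        for v in bucket:              # arbitrary order inside a bucket
            labels[v] = pos
            pos += 1
    return labels

# If every edge stays within one bucket or joins consecutive buckets,
# adjacent vertices get labels fewer than 2 * max(len(B)) apart.
labels = order_by_buckets([[0, 1], [2], [3, 4]])
print(labels)  # {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}
```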
A decomposition of the vertices of G into buckets as in the proof of Lemma 9 is also called a path partition of G and appears, e.g., in [11].
Before we get to the proof of Theorem 5, we will establish the following technical observation about labelled trees. Proof. Let L ⊆ V be the set of leaves of T and I := V \ L be the set of internal vertices. Clearly which implies the assertion.
The idea of the proof of Theorem 5 is to repeatedly extract separators from G and the pieces that result from the removal of such separators. We denote the union of these separators by S, put all remaining vertices with small distance from S into sets P i , and all other vertices into sets R i . Then we can apply the decomposition lemma (Lemma 9) to these sets S, P i , and R i . This, together with some technical calculations, will give the desired bandwidth bound for G.
Proof of Theorem 5. Let G = (V, E) be a graph on n vertices with maximum degree ∆ ≥ 2. Observe that the desired bandwidth bound is trivial if ∆ = 2 or if log_∆(n) − log_∆(s(G)) ≤ 6, so assume in the following that ∆ ≥ 3 and log_∆(n) − log_∆(s(G)) > 6. Define β := log_∆(n) − log_∆(s(G)) and b := ⌊β⌋ ≥ 6 (4) and observe that with this choice of β our aim is to show that bw(G) ≤ 6n/β.
The goal is to construct a partition V = S ∪ P ∪ R with the properties required by Lemma 9. For this purpose we will recursively use the fact that G and its subgraphs have separators of size at most s(G). In the i-th round we will identify separators S_{i,k} in G whose removal splits G into parts V_{i,1}, ..., V_{i,b_i}. The details are as follows.
In the first round let S_{1,1} be an arbitrary (s(G), 2/3)-separator in G that separates G into V_{1,1} and V_{1,2} and set b_1 := 2. In the i-th round, i > 1, consider each of the sets V_{i−1,j} with j ∈ [b_{i−1}]. If |V_{i−1,j}| ≤ 2n/b then let V_{i,j′} := V_{i−1,j}, otherwise choose an (s(G), 2/3)-separator S_{i,k} that separates G[V_{i−1,j}] into sets V_{i,j′} and V_{i,j′+1} (where k and j′ are appropriate indices, for simplicity we do not specify them further). Let S_i denote the union of all separators constructed in this way (and in this round). This finishes the i-th round. We stop this procedure as soon as all sets V_{i,j′} have size at most 2n/b and denote the corresponding i by i*. Then b_{i*} is the number of sets V_{i*,j′} we end up with in the last iteration. Let further x_S be the number of separators S_{i,k} extracted from G during this process in total.
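The recursion just described can be sketched directly. In the Python sketch below, find_separator is a hypothetical oracle standing in for the (s(G), 2/3)-separators whose existence the proof assumes; it is not implemented here, and the names are ours.

```python
# Sketch of the recursive splitting: keep separating pieces until every
# remaining piece has at most `threshold` (= 2n/b in the proof) vertices.
def split_until_small(vertices, threshold, find_separator, separators=None):
    """find_separator(U) -> (S, A, B): hypothetical oracle returning a set
    S with |S| <= s(G) and parts A, B with |A|, |B| <= (2/3)|U| and no
    edges between A and B."""
    if separators is None:
        separators = []
    if len(vertices) <= threshold:
        return [vertices], separators
    s, a, b = find_separator(vertices)
    separators.append(s)
    pieces_a, _ = split_until_small(a, threshold, find_separator, separators)
    pieces_b, _ = split_until_small(b, threshold, find_separator, separators)
    return pieces_a + pieces_b, separators
```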
Claim 11. We have b_{i*} ≤ b and x_S ≤ b − 1.
We will postpone the proof of this fact and first show how it implies the theorem.
We claim that V = S ∪ P ∪ R is a partition that satisfies the requirements of the decomposition lemma (Lemma 9) with parameter b and r = 2n/b. To check this, observe first that for all i ∈ [i*] the separators guarantee that distinct pieces V_{i,j} and V_{i,j′} are joined by no edges. Trivially e(R_j ∪ P_j, R_{j′} ∪ P_{j′}) = 0 for all j ∈ [b] and b_{i*} < j′ ≤ b and therefore we get condition (ii) of Lemma 9. Moreover, condition (iii) is satisfied by the definition of the sets P_j and R_j above. To verify condition (i) note that |R_j| ≤ |V_{i*,j}| ≤ 2n/b = r for all j ∈ [b_{i*}] by the choice of i* and |R_j| = 0 for all b_{i*} < j ≤ b. Accordingly we can apply Lemma 9 and infer that bw(G) ≤ 2(|S| + |P| + 2n/b).
In order to establish the desired bound on the bandwidth, we thus need to show that |S| + |P| ≤ n/β. We first estimate the size of S. By Claim 11 at most x_S ≤ b − 1 separators have been extracted in total, which implies |S| ≤ x_S · s(G) ≤ (b − 1) s(G). (5) Furthermore all vertices v ∈ P satisfy dist_G(v, S) ≤ ⌊b/2⌋ − 1 by definition. As G has maximum degree ∆ there are at most |S|(∆^{⌊b/2⌋} − 1)/(∆ − 1) vertices v ∈ V \ S with this property and hence |P| can be bounded accordingly, where the second inequality holds for any ∆ ≥ 3 and b ≥ 6 and the third inequality follows from (4) and (6). It is easy to verify that for any ∆ ≥ 3 and x ≥ ∆^6 we have (∆ − 3/2)√x ≥ (9/8) log_∆²(x). This together with (4) gives (∆ − 3/2)√(n/s(G)) ≥ (9/8)β², and as 6 ≤ b = ⌊β⌋ it is not difficult to check that, together with (5) and (7), this gives our bound. It remains to prove Claim 11. Notice that the process of repeatedly separating G and its subgraphs can be seen as a binary tree T on vertex set W whose internal nodes represent the extraction of a separator S_{i,k} for some i (and thus the separation of a subgraph of G into two sets V_{i,j} and V_{i,j′}) and whose leaves represent the sets V_{i,j} that are of size at most 2n/b. Clearly the number of leaves of T is b_{i*} and the number of internal nodes x_S. As T is a binary tree we conclude x_S = b_{i*} − 1 and thus it suffices to show that T has at most b leaves in order to establish the claim. To this end we would like to apply Proposition 10. Label an internal node of T that represents a separator S_{i,k} with |S_{i,k}|/n, a leaf representing V_{i,j} with |V_{i,j}|/n and denote the resulting labelling by ℓ. Clearly we have Σ_{w∈W} ℓ(w) = 1. Moreover we claim that inequality (8) holds, where L(w) denotes the set of leaves that are children of w. Indeed, let w ∈ W, notice that |L(w)| ≤ 2 as T is a binary tree, and let u and u′ be the two children of w.
If |L(w)| = 0 we are done. If |L(w)| > 0 then w represents a (2/3, s(G))-separator S(w). In the case that |L(w)| = 2 this implies (8). If |L(w)| = 1 on the other hand then, without loss of generality, u is a leaf of T and |U′(w)| > 2n/b. Since S(w) is a (2/3, s(G))-separator however we know that |V(w)| ≥ (3/2)|U′(w)| and hence (8) in this case. Therefore we can apply Proposition 10 and infer that T has at most b leaves as claimed.
3.3. Boundedness.

In this section we study the relation between boundedness, bandwidth and treewidth. We first give a proof of Proposition 7.

Proof of Proposition 7. We have to show that for every graph G and every ε > 0 the inequality b_ε(G) ≤ 2 bw(G)/ε holds. Suppose that G has n vertices and let σ : V → [n] be an arbitrary labelling of G. Furthermore assume that V′ ⊆ V with |V′| = b_ε(G) induces an ε-expander in G. Define V* ⊆ V′ to be the first b_ε(G)/2 = |V′|/2 vertices of V′ with respect to the ordering σ. Since V′ induces an ε-expander in G there must be at least εb_ε(G)/2 vertices in N* := N(V*) ∩ V′. Let u be the vertex in N* with maximal σ(u) and v ∈ V* ∩ N(u). As u ∈ N* and σ(u′) > σ(v′) for all u′ ∈ N* and v′ ∈ V* by the choice of V*, we have |σ(u) − σ(v)| ≥ |N*| ≥ εb_ε(G)/2. Since this is true for every labelling σ we can deduce that b_ε(G) ≤ 2 bw(G)/ε.

The remainder of this section is devoted to the proof of Theorem 6. We will use the following lemma which establishes a relation between boundedness and certain separators.
Lemma 12 (bounded→separator). Let ε > 0, b ∈ N, and let G = (V, E) be a (b, ε)-bounded graph on n > 2b vertices. Then G has a (2εn/3, 2/3)-separator.

Proof. If the current set still has at least (2/3)n vertices then set i := i + 1 and go to step (2). (6) Set i* := i and return. This construction obviously returns a partition V = A ∪ B ∪ S with |B| < (2/3)n. Moreover, |V_{i*}| ≥ (2/3)n and |W_{i*}| ≤ |V_{i*}|/2 and hence |A| ≤ (2/3)n as well. The upper bound on |S| follows easily from the construction. It remains to show that S separates G. This is indeed the case as N_G(A) ⊆ S by construction and thus E(A, B) = ∅.
Now we can prove Theorem 6. As remarked earlier, Grohe and Marx [13] independently gave a proof of an equivalent result which employs similar ideas but does not use separators explicitly.
Proof of Theorem 6. Let G = (V, E) be a graph on n vertices, ε > 0, and let b ≥ b_ε(G). It follows immediately from the definition of boundedness that every subgraph of G is (b, ε)-bounded as well. We now prove Theorem 6 by induction on the size of G. The relation tw(G) ≤ 2εn + 2b trivially holds if n ≤ 2b. So let G have n > 2b vertices and assume that the theorem holds for all graphs with less than n vertices. Then G is (b, ε)-bounded and thus has a (2εn/3, 2/3)-separator S by Lemma 12. Assume that S separates G into the two subgraphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2). Let (X_1, T_1) and (X_2, T_2) be tree decompositions of G_1 and G_2, respectively, such that X_1 ∩ X_2 = ∅. We use them to construct a tree decomposition (X, T) of G as follows. Let X = {X_i ∪ S : X_i ∈ X_1} ∪ {X_i ∪ S : X_i ∈ X_2} and T = (I_1 ∪ I_2, F = F_1 ∪ F_2 ∪ {e}) where e is an arbitrary edge between the two trees. This is indeed a tree decomposition of G: Every vertex v ∈ V belongs to at least one X_i ∈ X and for every edge {v, w} ∈ E there exists i ∈ I (where I is the index set of X) with {v, w} ⊆ X_i. This is trivial for {v, w} ⊆ V_i and follows from the definition of X for v ∈ S and w ∈ V_i. Since S separates G there are no edges {v, w} with v ∈ V_1 and w ∈ V_2. For the same reason the third property of a tree decomposition holds: if j lies on the path from i to k in T, then X_i ∩ X_k ⊆ X_j as the intersection is S if X_i, X_k are subsets of V_1 and V_2 respectively.
We have seen that (X, T) is a tree decomposition of G and can estimate its width as follows: tw(G) ≤ max{tw(G_1), tw(G_2)} + |S|. With the induction hypothesis we get tw(G) ≤ 2ε(2n/3) + 2b + 2εn/3 = 2εn + 2b, where the second inequality follows from |V_i| ≤ (2/3)n and |S| ≤ (2εn)/3.
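The merging step in this induction is itself constructive: add S to every bag on both sides and join the two trees by a single edge. A minimal Python sketch (our data layout) follows.

```python
# Sketch of the induction step: merge tree decompositions of G_1 and G_2
# across a separator S. Width grows by at most |S|.
def merge_decompositions(bags1, tree1, bags2, tree2, separator):
    """bags: dict index -> set of vertices; tree: list of index pairs;
    separator: set of vertices added to every bag."""
    bags = {("L", i): bag | separator for i, bag in bags1.items()}
    bags.update({("R", i): bag | separator for i, bag in bags2.items()})
    tree = [(("L", i), ("L", j)) for i, j in tree1]
    tree += [(("R", i), ("R", j)) for i, j in tree2]
    # one arbitrary edge connecting the two trees
    tree.append((("L", next(iter(bags1))), ("R", next(iter(bags2)))))
    return bags, tree
```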
Applications
For many interesting bounded degree graph classes (non-trivial) upper bounds on the bandwidth are not at hand. A wealth of results however has been obtained about the existence of sublinear separators. This illustrates the importance of Theorem 8. In this section we will give examples of such separator theorems and provide applications of them in conjunction with Theorem 8.
4.1. Separator theorems.
A classical result in the theory of planar graphs concerns the existence of separators of size 2√(2n) in any planar graph on n vertices proved by Lipton and Tarjan [19] in 1977. Clearly, together with Theorem 5 this result implies the following theorem.
Corollary 13. Let G be a planar graph on n vertices with maximum degree at most ∆ ≥ 2. Then the bandwidth of G is bounded from above by bw(G) ≤ 15n/log_∆(n).

It is easy to see that the bound in Corollary 13 is sharp up to the multiplicative constant: since the bandwidth of any graph G is bounded from below by (n − 1)/diam(G), it suffices to consider for example the complete binary tree on n vertices. Corollary 13 is used in [7] to infer a result about the geometric realisability of planar graphs G = (V, E) with |V| = n and ∆(G) ≤ ∆.
This motivates why we want to consider some generalisations of the planar separator theorem in the following. The first such result is due to Gilbert, Hutchinson, and Tarjan [12] and deals with graphs of arbitrary genus. 2

Theorem 14 ([12]). An n-vertex graph G with genus g ≥ 0 has separation number s(G) ≤ 6√(gn) + 2√(2n).
For fixed g the class of all graphs with genus at most g is closed under taking minors. Here H is a minor of G if it can be obtained from a subgraph of G by a sequence of edge deletions and contractions. A graph G is called H-minor free if H is no minor of G. The famous graph minor theorem by Robertson and Seymour [21] states that any minor closed class of graphs can be characterised by a finite set of forbidden minors (such as K_{3,3} and K_5 in the case of planar graphs). The next separator theorem by Alon, Seymour, and Thomas [2] shows that already forbidding one minor enforces a small separator.

2 Again, the separator theorems we refer to bound the size of a separator in G. Since the class of graphs with genus less than g (or, respectively, of H-minor free graphs) is closed under taking subgraphs however, this theorem can also be applied to such subgraphs and thus the bound on s(G) follows.
Theorem 15 ([2]). Let H be an arbitrary graph. Then any n-vertex graph G that is H-minor free has separation number s(G) ≤ |H|^{3/2}√n.
We can apply these theorems to draw the following conclusion concerning the bandwidth of bounded-degree graphs with fixed genus or some fixed forbidden minor from Theorem 5.
Corollary 16. Let g be a positive integer, ∆ ≥ 2 and H be an h-vertex graph and G an n-vertex graph with maximum degree ∆(G) ≤ ∆. If G has genus at most g then bw(G) ≤ 6n/log_∆(n/(6√(gn) + 2√(2n))), and if G is H-minor free then bw(G) ≤ 6n/log_∆(n/(h^{3/2}√n)).
4.2. Embedding problems and universality.

A graph H that contains copies of all graphs G ∈ G for some class of graphs G is called universal for G. The construction of sparse universal graphs for certain families G has applications in VLSI circuit design and was extensively studied (see, e.g., [1] and the references therein). In contrast to these results our focus is not on minimising the number of edges of H, but instead we are interested in giving a relatively simple criterion for universality for G that is satisfied by many graphs H of the same order as the largest graph in G.
The setting with which we are concerned here are embedding results that guarantee that a bounded-degree graph G can be embedded into a graph H with sufficiently high minimum degree, even when G and H have the same number of vertices. Dirac's theorem [10] concerning the existence of Hamiltonian cycles in graphs of minimum degree n/2 is a classical example for theorems of this type. It was followed by results of Corrádi and Hajnal [9], Hajnal and Szemerédi [14] about embedding K_r-factors, and more recently by a series of theorems due to Komlós, Sarközy, and Szemerédi and others which deal with powers of Hamiltonian cycles, trees, and H-factors (see, e.g., the survey [17]). Along the lines of these results the following unifying conjecture was made by Bollobás and Komlós [16] and recently proved by Böttcher, Schacht, and Taraz [5].

Theorem 17 ([5]). For all r, ∆ ∈ N and γ > 0, there exist constants β > 0 and n₀ ∈ N such that for every n ≥ n₀ the following holds. If G is an r-chromatic graph on n vertices with ∆(G) ≤ ∆ and bandwidth at most βn and if H is a graph on n vertices with minimum degree δ(H) ≥ ((r−1)/r + γ)n, then G can be embedded into H.

The proof of Theorem 17 heavily uses the bandwidth constraint insofar as it constructs the required embedding sequentially, following the ordering given by the vertex labels of G. Here it is of course beneficial that the neighbourhood of every vertex v in G is confined to the βn vertices which immediately precede or follow v.
Also, it is not difficult to see that the statement in Theorem 17 becomes false without the constraint on the bandwidth: Consider r = 2, let G be a random bipartite graph with bounded maximum degree and let H be the graph formed by two cliques of size (1/2 + γ)n each, which share exactly 2γn vertices. Then H cannot contain a copy of G, since in G every vertex set of size (1/2 − γ)n has more than 2γn neighbours. The reason for this obstruction is again that G has good expansion properties.
On the other hand, Theorem 8 states that in bounded degree graphs, the existence of a big expanding subgraph is in fact the only obstacle which can prevent sublinear bandwidth and thus the only possible obstruction for a universality result as in Theorem 17. More precisely we immediately get the following corollary from Theorem 8.
Corollary 18. If the class C meets one (and thus all) of the conditions in Theorem 8, then the following is also true. For every γ > 0 and r ∈ N there exists n₀ such that for all n ≥ n₀ and for every graph G ∈ C_n with chromatic number r and for every graph H on n vertices with minimum degree at least ((r−1)/r + γ)n, the graph H contains a copy of G.
By Corollary 13 we infer as a special case that all sufficiently large graphs with minimum degree (3/4 + γ)n are universal for the class of bounded-degree planar graphs. Universal graphs for bounded degree planar graphs have also been studied in [3,6].
Corollary 19. For all ∆ ∈ N and γ > 0, there exists n₀ ∈ N such that for every n ≥ n₀ the following holds: (a) Every 3-chromatic planar graph on n vertices with maximum degree at most ∆ can be embedded into every graph on n vertices with minimum degree at least (2/3 + γ)n. (b) Every planar graph on n vertices with maximum degree at most ∆ can be embedded into every graph on n vertices with minimum degree at least (3/4 + γ)n. This extends a result by Kühn, Osthus, and Taraz [18], who proved that for every graph H with minimum degree at least (2/3 + γ)n there exists a particular spanning triangulation G that can be embedded into H. Using Corollary 16 it is moreover possible to formulate corresponding generalisations for graphs of fixed genus and for H-minor free graphs for any fixed H.
Acknowledgement
The first author would like to thank David Wood for fruitful discussions in an early stage of this project. In addition we thank two anonymous referees for their helpful suggestions.
Monitoring of canonical BMP and Wnt activities during postnatal stages of mouse first molar root formation
Abstract Objective This study aimed to explore the precise temporospatial distributions of bone morphogenetic protein (BMP) and Wnt signaling pathways during postnatal development of mammalian tooth roots after the termination of crown morphogenesis. Methodology Two transgenic mouse lines, BRE-LacZ mice and BAT-gal mice, were used. The mice were sacrificed on every postnatal (PN) day from PN 3d up to PN 21d. Then, the first lower molars were extracted, and the dissected mandibles were stained with 5-bromo-4-chloro-3-indolyl-β-d-galactopyranoside (X-gal) and fixed. Serial sections at 10 µm were prepared after decalcification, dehydration, and embedding in paraffin. Results We observed BMP/Smads and Wnt/β-catenin signaling activities in the dental sac, dental pulp, and apical papilla with a certain degree of variation. The position of activation of the BMP/Smad signaling pathway was located more coronally in the early stage, which then gradually expanded as root elongation proceeded and was associated with blood vessels in the pulp and developing complex apical tissues in the later stage. However, Wnt/β-catenin signaling was highly concentrated in the mesenchyme below the cusps in the early stage, gradually expanded to regions around the root in the transition/root stage, and then disappeared entirely in the later stage. Conclusions These results further confirmed the participation of both BMP and Wnt canonical signaling pathways in tooth root development, as well as formed the basis for future studies on how precisely integrated signaling pathways regulate root morphogenesis and regeneration.
Introduction
Analogous to ectodermal organogenesis, a series of reciprocal and sequential epithelial-mesenchymal interactions are essential for tooth root development, which occurs after crown formation. 1 Multiple signaling cascades have been implicated in these processes to guide root development at different stages. 2 The outer enamel epithelium is confluent with the inner enamel epithelium at the cervical loop and grows in the apical direction, forming the bilayer of an epithelial extension called Hertwig's epithelial root sheath (HERS), a transient structure that occurs before the complete formation of the tooth root. 3 The formed HERS extends apically and envelops the dental papilla to form a barrier separating it from the dental sac. The HERS is known to induce the differentiation of odontoblasts and cementoblasts, to guide root growth, and to determine the number of tooth roots.
HERS disintegration orchestrates the deposition of root dentin, which subsequently initiates dental sac cell cementogenesis onto the dentin surface. 4,5 Moreover, it may also participate in cementum formation via epithelial-mesenchymal transition. 6,7 Thus, HERS plays a vital role in root development.
BMP and Wnt signaling pathways are crucial for root formation, including cell fate, growth, and patterning. The BMP-Smad4-Shh-Gli1-Sox2 signaling cascade works in root development, where the disruption of BMP signaling in the dental epithelium causes delays in HERS formation and affects the stem cell niche environment. 8 Moreover, the ablation of Smad4, the key intracellular mediator of the TGF-β/BMP signaling pathway, in odontoblasts disrupts odontoblast differentiation and causes abnormal root odontogenesis. 9,10 Previous studies also revealed the essential role of Wnt signaling during root formation; we detected the canonical Wnt activity in the mesenchyme and odontoblasts in BAT-gal/TOPgal reporter mice. 11,12 Furthermore, recent reports showed that the disruption of β-catenin in immature odontoblasts generates molars without roots, whereas the constitutive stabilization of β-catenin can also lead to aberrant short roots with excessive cementum. 13,14 The cervical loop structure of mouse molars disappears after crown formation, and then HERS is formed to guide root development postnatally, which is similar to human tooth organogenesis. Thus, the first lower mouse molar (M1) is an ideal model for studying tooth root development. In the past decades, the molecular regulatory network of early tooth morphogenesis was studied extensively, with research mainly concentrating on prenatal stages. 15 The experiments involved CD-1 background mice. The mice were kept in a 12-hour light/dark period with a temperature of 22 ± 2 °C and a room humidity of 45% ± 15%.
Tissue preparation
Pups from mice of both sexes were sacrificed on every PN day from day 3 (PN3) up to PN21 (n: 5 pups).
Results
We harvested the M1 together with the whole dental sac at each stage of M1 crown development. Since the newly formed cusp was fragile and soft, it was difficult to separate the dental sac or epithelium from the crown until PN14 (Figures 1A-1C). The whole-mount X-gal staining results detected BRE-LacZ activity in the dental sac around the tooth, the pulp, and the apical papilla (around the apical foramen) in the initial stages.
During the following stages of M1 morphogenesis (PN6-PN8), the teeth rapidly increased in size and length. BRE-LacZ activity became lower in the crown pulp but intensified in the root pulp, dental sac, and apical papilla (Figures 1D-1F). Between PN9 and PN14, significant root growth and further tooth mineralization occurred with crown calcification (Figures 1G-1L).
Meanwhile, the eruption path for M1 was established, and the tooth was ready to erupt into the oral cavity, reach its occlusal position, and perform its function after PN14 (data not shown). We detected strong X-gal staining at the dental sac around the root and the apical region (Figures 1G-1L). We traced Wnt/β-catenin signaling in the developing molar using the BAT-gal allele, a reporter mouse line that expresses β-galactosidase in the presence of activated β-catenin. On PN3, we observed enamel production. Different from BMP/Smad signaling activity, active Wnt/β-catenin signaling was restricted to the crown pulp region beneath the cusps and did not appear in the dental sac, root pulp, or apical papilla (Figures 2A-A2). Wnt/β-catenin signaling activity was highly concentrated under the tip of the molar cusps during PN3-PN6, a stage before dentinogenesis in the root region (Figures 2A and 2B). We found some activated canonical Wnt signaling in the apical papilla and dental sac after rapid tooth growth and elongation on PN7 (Figures 2C-C2). Then, X-gal staining intensity reduced in the crown but increased in the apical area and dental sac (Figures 2C-2F). When M1 was ready to erupt on PN14, the intensity of X-gal staining was weak at the zones below the cusps (Figure 2F). Thereafter, staining was mainly confined to the crown and apical papilla, and the activity was barely detectable in the pulp (Figures 2G-2J).
We chose several typical postnatal tooth development stages (PN3, PN7, and PN10) to assess the histomorphology. Figure 3A showed that we primarily detected BMP/Smad-dependent signaling in the dental epithelium on PN3. Then, BMP signaling activity moved downward to the cervical loop, becoming confined to the HERS and apical papilla. On PN7, we observed BRE activity in the odontoblasts in the crown pulp and blood vessels (Figure 3B). The developing apical complex (DAC) comprises the apical papilla, dental sac, and HERS, which are regarded as an inseparable entity.
On PN10, the level of BMP/Smad signaling activity was higher in the odontoblasts of the dental pulp, vascular tissues, and DAC, which was associated with significant root elongation and advanced tooth mineralization (Figure 3C). We observed Wnt/β-catenin signaling activity in a different manner. On PN3, we detected a high β-galactosidase expression level in the mesenchymal cells under the cusps where the odontoblasts or pre-odontoblasts existed (Figure 3D). The expression was associated with the tips of the molar cusps but was excluded from the ameloblasts.
With continuing root development, the number of X-gal-stained cells beneath the cusps decreased.

Figure 3. BMP/Smad signaling (A, B, and C, ×40) and Wnt/β-catenin signaling (D, E, and F, ×40) in M1 is shown. Panels A1, B1, C1, D1, E1, and F1 are high-magnification views (×100) of the red rectangular block as shown in panels A, B, C, D, E, and F, respectively. Panels A2, B2, C2, D2, E2, and F2 are high-magnification views (×100) of the blue rectangular block as shown in panels A, B, C, D, E, and F, respectively. On PN3, BMP/Smad-dependent signaling was primarily detected in the dental epithelium (A, A1, and A2), whereas active Wnt/β-catenin signaling was mainly expressed in the odontoblasts or pre-odontoblasts below the cusps and in the dental epithelium on the cusps; some expression was found in the apical region (D, D1, and D2). On PN7, BRE activity was detected in the pre-odontoblasts/odontoblasts in the crown pulp, blood vessels, and DAC (B, B1, and B2); Wnt/β-catenin activity was observed below the cusps, but none in the dental papilla and sac, nor in the vessels in the pulp (E, E1, and E2). On PN10, the same expression pattern was identified, associated with significant root elongation and advanced tooth mineralization (C, C1, and C2). Meanwhile, in BAT-gal mice, the number of X-gal-positive cells beneath the cusps decreased on PN10 (F, F1, and F2). (EE, enamel epithelium; OE, oral epithelium; DS, dental sac; CS, cusp; C, crown; R, root; DP, dental papilla; AP, apical papilla; P, pulp; CL, cervical loop; E, enamel; D, dentin; OD, odontoblast; V, vessel; HERS, Hertwig's epithelial root sheath)

Discussion

Root growth certified that postnatal epithelial BMP signaling was dispensable in the regulation of root development. 22 Similar to many other developmental processes, epithelial-mesenchymal interactions were previously demonstrated to be essential for tooth root development. 4 Here, we showed that activated BMP signaling (positive X-gal staining) was specifically present in dental follicles and HERS at the early postnatal stage; then, the HERS became perforated, contributing to cementum regeneration and root formation (Figures 1 and 3). When M1 root formation was almost completed, BMP/Smad activity appeared restricted to the apical part of the root and the pulp.
During molar root formation, BMP signaling seemed to influence odontogenesis in light of the dynamic spatial and temporal sites of canonical BMP signaling activity. In developing M1, differentiated odontoblasts align parallel to the basement membrane after birth and are then ready to secrete dentin matrix.
Next, rapid tooth formation and root growth originate at PN6. 23 Figure 3 showed that activated BMP signaling (positive X-gal staining) was localized more coronally, correlating the BMP signaling with dental mesenchyme cells committed to the odontoblast fate. These cells probably generated primary odontoblasts that were responsible for root dentin production. 26 The results suggested that active BMP signaling was essential for the apical growth of the molar root.
Meanwhile, Wnt/β-catenin signaling is also crucial for root odontogenesis, as evidenced by the conditional inactivation of β-catenin in immature odontoblasts, which induces molars without roots. 13 Moreover, we observed the phenotype of ablation of Wnt ligands or overexpression of the inhibitor Dkk-1 in odontoblasts in the mandibular molars, with aberrantly short roots and dentin defects. 27,28 Thus, Wnt signaling in root formation needs to be tightly controlled. 4,15 In this study, we also observed regions with active Wnt/β-catenin signaling. In mouse M1, rapid tooth growth and root elongation begin on PN6. 23 During the transition/root stage of root formation in mice, we detected highly active Wnt/β-catenin signaling in odontoblast-lineage cells, HERS cells, and periodontal ligament cells. 31 We observed activated Wnt/β-catenin signals in the apical papilla and dental sac from PN7. Subsequently, we noted a decline of signaling activity in these cells as they underwent terminal differentiation from PN15 onward, and the signaling eventually disappeared in M1. Axin2, known as a component of the β-catenin degradation complex, was also tightly linked with the developing roots. We mainly localized Axin2 expression surrounding the root sheath and dental papilla on PN10. 11 However, unlike the high Axin2 levels in the dental papilla/pulp (as this part proceeded to develop and differentiate), the Axin2 expression was
Conclusions
In short, the results of this study showed that BMP and Wnt signaling activities exhibited different and dynamic patterns during mouse M1 root development.
The position of activation of the BMP/Smad signaling pathway was located more coronally in the early stage, which then gradually expanded as root elongation proceeded and was associated with blood vessels in the pulp and DAC tissues in the later stage. However, Wnt/β-catenin signaling was highly concentrated in the mesenchyme below the cusps in the early stage, gradually expanded to regions around the root at the transition/root stage, and then nearly disappeared in the later stage. These findings emphasized the importance of spatial and temporal epithelial-mesenchymal signaling, such as BMP and Wnt signaling pathways, for postnatal dentinogenesis, as well as provided some clues for future studies to precisely explore the cellular and molecular mechanisms that regulate tooth root development.
Conflicts of Interest statement
The authors declare no competing or financial interests.
Mapping Brain Response to Pain in Fibromyalgia Patients Using Temporal Analysis of fMRI
Background Nociceptive stimuli may evoke brain responses longer than the stimulus duration, often only partially detected by conventional neuroimaging. Fibromyalgia patients typically complain of severe pain from gentle stimuli. We aimed to characterize brain response to painful pressure in fibromyalgia patients by generating activation maps adjusted for the duration of brain responses. Methodology/Principal Findings Twenty-seven women (mean age: 47.8 years) were assessed with fMRI. The sample included nine fibromyalgia patients and nine healthy subjects who received 4 kg/cm² of pressure on the thumb. Nine additional control subjects received 6.8 kg/cm² to match the patients for the severity of perceived pain. Independent Component Analysis characterized the temporal dynamics of the actual brain response to pressure. Statistical parametric maps were estimated using the obtained time courses. Brain response to pressure (18 seconds) consistently exceeded the stimulus application (9 seconds) in somatosensory regions in all groups. fMRI maps following such temporal dynamics showed a complete pain network response (sensory-motor cortices, operculo-insula, cingulate cortex, and basal ganglia) to 4 kg/cm² of pressure in fibromyalgia patients. In healthy subjects, response to this low intensity pressure involved mainly somatosensory cortices. When matched for perceived pain (6.8 kg/cm²), control subjects also showed comprehensive activation of pain-related regions, but fibromyalgia patients showed significantly larger activation in the anterior insula-basal ganglia complex and the cingulate cortex. Conclusions/Significance The results suggest that data-driven fMRI assessments may complement conventional neuroimaging for characterizing pain responses and that enhancement of brain activation in fibromyalgia patients may be particularly relevant in emotion-related regions.
Introduction
Nociceptive stimulation can trigger complex behavioral responses involving both local pain sensations and general affective phenomena [1]. Responses to painful mechanical stimuli typically persist after their application for a time that largely depends on stimulus features and the individual's receptive state [2,3].
Functional imaging has notably contributed to delineating the functional anatomy of the brain network mediating pain responses [4]. The most consistent activations in this "pain matrix" involve somatosensory and adjacent parietal cortex, the operculo-insular region and the anterior cingulate cortex [see specific reviews 1,4,5]. Interestingly, only a few imaging studies have explored nociception temporal dynamics, suggesting that pain-related activity may persist well beyond the specified stimulation periods [3,6-10].
Fibromyalgia is a syndrome expressed mainly as chronic complaints involving augmented subjective pain of mechanical origin [11]. Previous functional magnetic resonance imaging (fMRI) studies assessing the anatomy of brain activations have suggested that brain responses to mechanical stimuli are abnormally increased in fibromyalgia patients [12]. In this study, we aimed to further characterize brain response to pain in patients with severe fibromyalgia and healthy subjects using an fMRI data-driven approach [13,14]. We assessed the temporal dynamics of the actual brain response to local painful pressure in pain-related regions with Independent Component Analysis (ICA). The results were then used to generate fMRI maps adjusted for the duration of brain responses that showed more complete activation patterns in patients and in control subjects and stronger correlation with reported subjective pain.
Ethics statement
This study was conducted according to the principles expressed in the Declaration of Helsinki. The study was approved by the Ethics and Institutional Review Board of the Autonomous University of Barcelona (reference number SAF2007-62376). All patients and healthy subjects provided written informed consent for clinical and fMRI assessment and subsequent analyses.
Subjects
Twenty-seven subjects participated in the study, including nine patients with fibromyalgia and two groups of nine healthy subjects (control group 1 and 2) matched to patients for gender and age, and recruited from the same sociodemographic environment. Control group 1 served to compare brain response to a fixed mechanical stimulus pressure able to provoke severe pain in fibromyalgia patients. Control group 2 was matched to fibromyalgia patients for levels of perceived pain by increasing stimulus intensity.
The patients were consecutively selected during clinical follow-up to make up a homogeneous sample showing severe and durable symptoms. The series included nine right-handed females with a mean±SD age of 47.9±9.4 years and education level of 11.0±2.1 years. All patients met the American College of Rheumatology criteria for fibromyalgia [11]. Mean illness duration was 8.2±5.6 years. The number of tender points upon study assessment was 16.7±2.3. General Perception of Health according to the 36-Item Short-Form Health Survey [15] scored 11.1±13.2 (maximum score, 100). The Fibromyalgia Impact Questionnaire (FIQ) [16] total score was 73.2±13.8 (maximum score, 100). Hospital Anxiety and Depression Scale (HADS) ratings [17,18] were 13.4±4.0 and 10.3±4.7. One patient had a co-morbid clinical diagnosis of major depression, 2 patients a dysthymic disorder and 3 patients an adjustment disorder with mixed anxiety and depressed mood.
Patients were allowed to continue with their stable medical treatment, but were required to refrain from taking analgesic drugs 72 hours prior to fMRI. Six patients were on anti-inflammatory drugs in a stable regime (2 were also taking benzodiazepines, 1 antidepressants and 1 carbamazepine). The remaining 3 patients were taking: antidepressants, benzodiazepines and carbamazepine (1 patient), antidepressants and benzodiazepines (1 patient), and no medication (1 patient).
The control group 1 included nine right-handed females with a mean age of 47.2±8.9 years and education level of 12.4±4.3 years, and the control group 2 nine right-handed females with a mean age of 48.2±5.5 years and education level of 13.0±3.0 years. Subjects with a relevant medical or neurological disorder, substance abuse, or psychiatric disease were not considered for inclusion. None of the healthy subjects was undergoing medical treatment.
Stimuli
Pressure stimuli were delivered using a specially designed hydraulic device capable of transmitting controlled pressure to a 1 cm² surface placed on the subject's thumbnail. As in other studies [19,20], this system involved a hard rubber probe attached to a hydraulic piston that was displaced by mechanical pressure. In a preliminary session, each subject was acclimatized to the mechanical stimuli and trained to rate perceived pain intensity using a numerical rating scale (NRS) ranging from 0 (no pain) to 100 (the worst pain possible).
Pain thresholds were also assessed during the session, and the intensity of pressure producing severe pain in both patients and control subjects was estimated. To determine individual thresholds, different stimulus intensities were applied lasting 5 seconds each, with an inter-stimulus interval of 20 seconds. The selected pressure stimuli, ranging from 2 to 9 kg/cm², were administered pseudo-randomly. Conventional pain thresholds corresponded to the least pressure intensity at which subjects perceived pain in two trials. In this session, the pain threshold was 1.6±0.5 kg/cm² in the 9 patients and 4.0±1.0 kg/cm² in the 18 healthy subjects (P<0.0005). The minimum pressure intensity to provoke severe pain (NRS above 70) was 3.6±0.9 kg/cm² in patients and 6.8±1.4 kg/cm² in healthy subjects (P<0.0005).
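The threshold rule just described lends itself to a compact illustration. The following Python sketch (not the authors' code; the trial data are hypothetical) returns the least pressure intensity reported as painful in at least two trials:

from collections import defaultdict

def pain_threshold(trials):
    """trials: list of (pressure in kg/cm2, reported_painful) pairs."""
    painful_counts = defaultdict(int)
    for pressure, painful in trials:
        if painful:
            painful_counts[pressure] += 1
    # conventional threshold: least intensity perceived as painful in two trials
    candidates = [p for p, n in painful_counts.items() if n >= 2]
    return min(candidates) if candidates else None

# hypothetical trial record for one subject
trials = [(2, False), (3, True), (3, False), (4, True), (4, True), (5, True), (5, True)]
print(pain_threshold(trials))  # -> 4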
fMRI pain paradigm
During the primary study assessment, identical stimulation was applied to both patients and healthy subjects (control group 1). A block-design paradigm was used, consisting of 21-second resting-state periods interleaved with pressure stimulation blocks of nine seconds. During pressure blocks, sustained 4 kg/cm² pressure was delivered to the subjects' right thumbnail. Pressure was partially removed for 1 second in the middle of each pain block to reduce the probability of tissue damage in the thumb. The entire imaging sequence involved 12 rest-pressure cycles lasting 6 minutes in total. Immediately after image acquisition, each subject provided a single score to globally rate the pain intensity perceived during the 12 pressure blocks.
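As a quick consistency check of the paradigm timing, the sketch below (illustrative only) builds the rest-pressure boxcar sampled at the scanner TR of 3 seconds and confirms the 6-minute total duration:

import numpy as np

TR, REST_S, PRESS_S, CYCLES = 3.0, 21.0, 9.0, 12
cycle = np.concatenate([np.zeros(int(REST_S / TR)),   # 7 resting volumes
                        np.ones(int(PRESS_S / TR))])  # 3 pressure volumes
boxcar = np.tile(cycle, CYCLES)                       # one value per volume
print(len(boxcar) * TR / 60.0)                        # -> 6.0 minutes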
The control group 2 was assessed using identical procedures, but applying 6.8 kg/cm², which produced a pain severity level similar to that experienced by fibromyalgia patients using 4 kg/cm² (NRS above 70).
MRI acquisition
A 1.5 T Signa system (General Electric, Milwaukee, WI) equipped with an eight-channel phased-array head coil and single-shot echoplanar imaging (EPI) software was used. Functional sequences consisted of gradient recalled acquisition in the steady-state (time of repetition [TR], 3,000 ms; time of echo [TE], 50 ms; pulse angle, 90°) within a field of view of 24 cm, a 96×64-pixel matrix, and slice thickness of 5 mm (inter-slice gap, 1 mm). Seventeen slices parallel to the anterior-posterior commissure line covered the whole brain. The first two images in each run were discarded to allow the magnetization to reach equilibrium.
Image preprocessing
Imaging data were processed using MATLAB version 7 (The MathWorks Inc, Natick, Mass) and Statistical Parametric Mapping software (SPM5; The Wellcome Department of Imaging Neuroscience, London). Preprocessing involved motion correction, spatial normalization and smoothing using a Gaussian filter (full-width half-maximum, 6 mm). Data were normalized to the standard SPM-EPI template and resliced to 3 mm isotropic resolution in Montreal Neurological Institute (MNI) space. We excluded data from two subjects from an original sample of 29 subjects due to excessive head movement (z-axis translation > 2 mm).
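For readers who want to reproduce the smoothing step outside SPM, the following sketch applies an equivalent Gaussian kernel with scipy (a stand-in, not SPM's own routine), converting the 6 mm FWHM to a sigma in voxel units at the 3 mm resliced resolution:

import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM, VOXEL_MM = 6.0, 3.0
# FWHM = sigma * 2*sqrt(2*ln 2); divide by voxel size to get sigma in voxels
sigma_vox = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / VOXEL_MM

volume = np.random.rand(53, 63, 17)          # placeholder EPI volume
smoothed = gaussian_filter(volume, sigma=sigma_vox)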
Image analysis
fMRI data are commonly analyzed using "model-based" statistical methods that require a specific assumption about the time courses of activation. Typically, model-based analyses estimate the contrast between the signal intensity of images obtained during stimulus application and the signal intensity of images obtained without stimulation or during a control condition. In experiments where response durations cannot be completely anticipated, as in pain assessment and in the assessment of emotions in general, the standard model-based approach may underestimate the evoked brain response. In contrast, "data-driven" statistical methods are used to identify actual brain activation without an a priori hypothesis on the expected activation time course. These methods estimate the best fitting of the data, but do not directly test the statistical significance of the activations [13,14]. In the current study, we used a data-driven approach based on Independent Component Analysis (ICA) to generate a study-specific time course model, which was used as a regressor in conventional SPM analyses to statistically test between-group differences in the activation pattern.
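The essential difference can be shown in a few lines. In the sketch below (all data are simulated; SPM performs the analogous fit per voxel, with additional nuisance terms), the ICA-derived time course simply replaces the canonical boxcar-convolved regressor in an ordinary least-squares GLM:

import numpy as np

n_vols, n_vox = 120, 500
ica_regressor = np.random.rand(n_vols)       # data-driven response function (placeholder)
Y = np.random.rand(n_vols, n_vox)            # voxel time series

X = np.column_stack([ica_regressor, np.ones(n_vols)])  # design matrix with intercept
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
contrast_image = betas[0]                    # per-voxel effect of the regressor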
Independent Component Analysis
Spatial Independent Component Analysis is a data-driven statistical analysis method that is able to decompose whole brain fMRI data into independent networks of brain regions (spatial components) involving voxels following similar temporal dynamics. Results are presented as a set of spatial maps with their associated time courses.
Group ICA for fMRI Toolbox was used (GIFT v1.3c; http://icatb.sourceforge.net), with previously described algorithms [21,22]. After subject-wise data concatenation, a separate spatial ICA was performed for each study group in three stages. Stage 1: The dimensionality of the fMRI data and the optimal number of components for each group were estimated using the minimum description length (MDL) criterion in GIFT [23]. Principal component analysis (2 reduction steps) was then used to reduce the dimensionality of individual subject data (for computational feasibility) to the number of components estimated by the MDL criterion. Stage 2: Group estimation of spatially independent sources was then performed using the Infomax algorithm. Stage 3: During the final stage of back-reconstruction to the original dimensionality, individual subject image maps and time courses were estimated using the group solution [21,22]. This step was followed by the process of grouping components across subjects to produce group component maps and group-average time courses.
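A compressed stand-in for this pipeline is sketched below. GIFT's PCA reduction and Infomax unmixing are approximated here with scikit-learn's FastICA (a different ICA algorithm, used only for accessibility); the data are simulated and the component count is fixed rather than MDL-estimated:

import numpy as np
from sklearn.decomposition import FastICA

n_subj, n_vols, n_vox = 9, 120, 2000
group_data = np.random.rand(n_subj * n_vols, n_vox)  # time-concatenated group data

ica = FastICA(n_components=30, random_state=0, max_iter=500)
# spatial ICA: voxels are the samples, so the data matrix is transposed
spatial_maps = ica.fit_transform(group_data.T)       # (voxels, components)
time_courses = ica.mixing_                           # (subjects*volumes, components)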
Temporal analysis of brain response to pain
Group ICA results were used to identify the actual response functions (i.e., normalized time courses) of the brain regions activated by nociceptive stimulation. In selecting these time courses for further analysis, we considered those components involving regions known to mediate the brain response to pain [4] and showing a consistent signal increase (activation) coinciding with each pain stimulation block, irrespective of the duration of the activation.
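The "signal increase in every block" criterion can be operationalized as below (an illustrative reading of the criterion, not the authors' implementation): each 30-second rest-pressure cycle is folded out, and a component is kept only if its mean signal during pressure exceeds its mean resting signal in all 12 blocks.

import numpy as np

TR = 3.0
vols_per_cycle = int((21.0 + 9.0) / TR)      # 10 volumes per rest+pressure cycle
onset_vol = int(21.0 / TR)                   # pressure starts at volume 7

def increases_in_every_block(tc, n_blocks=12):
    """tc: component time course, one value per volume."""
    blocks = np.asarray(tc[: n_blocks * vols_per_cycle]).reshape(n_blocks, vols_per_cycle)
    rest = blocks[:, :onset_vol].mean(axis=1)
    pressure = blocks[:, onset_vol:].mean(axis=1)
    return bool(np.all(pressure > rest))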
Mapping brain response to pain: analyses of main task effects
1st-level (single-subject) SPM contrast images were estimated to characterize the functional anatomy of pain-related brain activations. For this analysis, the BOLD response at each voxel was modeled using (i) the data-driven response function generated from the Group ICA; and (ii) the conventional (SPM5) model-driven canonical hemodynamic response function. The resulting 1st-level contrast images for each subject were then carried forward to 2nd-level random-effects (group) analyses using one-sample t-tests. A two-sample t-test analysis was performed to compare activation maps between study groups. Spatial coordinates from the obtained maps were then converted to standard Talairach coordinates [24] using a non-linear transform of SPM standard space to Talairach space [25].
Mapping brain response to pain: correlation maps
We mapped voxel-wise correlations between subjective pain scores and brain activation. Separate correlation maps were obtained for both the data-driven and model-driven approaches, including 18 study subjects (patients and control group 1). Correlations were considered significant at a P value less than 0.05, False Discovery Rate (FDR) corrected for the volume of activated regions (the pain network).
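The sketch below illustrates this mapping on simulated data: a Pearson correlation between pain scores and activation is computed at each voxel inside the mask, and a simple Benjamini-Hochberg procedure (one common FDR implementation; the exact correction used by the analysis software may differ) thresholds the resulting p values:

import numpy as np
from scipy import stats

n_subj, n_vox = 18, 1000
activation = np.random.rand(n_subj, n_vox)   # per-subject contrast values in the mask
pain = np.random.rand(n_subj) * 100          # subjective pain scores (0-100 NRS)

pvals = np.array([stats.pearsonr(activation[:, v], pain)[1] for v in range(n_vox)])

def fdr_mask(pvals, q=0.05):
    """Benjamini-Hochberg: True where the voxel survives the FDR threshold."""
    order = np.argsort(pvals)
    bh_line = q * np.arange(1, pvals.size + 1) / pvals.size
    passed = pvals[order] <= bh_line
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(pvals.size, dtype=bool)
    mask[order[:k]] = True
    return mask

significant = fdr_mask(pvals)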
In addition, we assessed the extent to which brain activation in the region showing the highest correlation with subjective pain (the anterior cingulate cortex) was able to account for group differences in perceived pain. This was carried out by comparing group differences in subjectively reported pain both before and after controlling for (regressing out) the effect of cingulate activation using analysis of covariance (ANCOVA).
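A minimal rendering of this ANCOVA logic, on simulated data and with statsmodels assumed available (the original analysis was presumably run in standard statistics software):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": [0] * 9 + [1] * 9,            # control group 1 vs patients
    "acc": rng.normal(0.0, 1.0, 18),       # anterior cingulate activation
    "pain": rng.normal(50.0, 15.0, 18),    # subjective pain scores
})
before = smf.ols("pain ~ group", df).fit()         # uncorrected group effect
after = smf.ols("pain ~ group + acc", df).fit()    # group effect controlling for ACC
print(before.pvalues["group"], after.pvalues["group"])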
Pain rating during fMRI assessment
The range of subjectively reported pain varied from 20 to 100 points across the 18 subjects (9 patients and 9 healthy subjects from the control group 1) assessed using 4 kg/cm² of pressure. Healthy subjects reported mild-to-moderate pain and fibromyalgia patients the most severe scores during this stimulation (mean±SD for healthy subjects = 41.1±20.1 and for patients 88.8±11.6; t = 6.2 and P<0.0001). The group of healthy subjects (n = 9) receiving 6.8 kg/cm² (control group 2) reported severe pain at rating levels comparable to the fibromyalgia group (80.2±10.7; t = 1.6 and P = 0.123).
Temporal analysis of brain activation at 4 kg/cm² of pressure
ICA estimated 34 spatially independent components in patients and 31 in healthy subjects (control group 1). The time course of nine components in patients and three components in healthy subjects showed a signal increase (i.e., activation) coinciding with each pain stimulation block. Two such components involved pain-related brain regions in each study group. That is, in both patients and healthy subjects, a "somatosensory" and an "insular" component met the double criterion of showing a signal increase in each pain block and involving regions known to mediate the brain response to pain.
The somatosensory component included the bilateral parietal cortex in both groups and a small portion of the dorsal anterior cingulate cortex in fibromyalgia (Figure 1). The associated time course was very similar in patients and healthy subjects, showing evoked signal changes that persisted after stimulus removal in each stimulation block. Block-average time courses (Figure 1) revealed an early fMRI signal increase that returned to the baseline level only after 18 seconds in both groups (twice the duration of the applied stimulus). Time to peak activation since stimulus onset was 6.9±5.1 s in patients and 6.3±4.8 s in control subjects (control group 1), showing t = 0.28 and P = 0.782. Activation duration was 18.6±3.6 s in patients and 18.9±3.6 s in control subjects, showing t = −0.19 and P = 0.848.
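The two temporal measures just reported can be computed from a block-averaged time course as sketched below (illustrative only; the half-maximum criterion for "activation duration" is an assumption, since the text does not spell out the exact rule):

import numpy as np

def temporal_measures(avg_tc, tr=3.0):
    """avg_tc: block-averaged, baseline-corrected component time course."""
    peak = int(np.argmax(avg_tc))
    time_to_peak = peak * tr                     # seconds from block onset
    above = avg_tc > 0.5 * avg_tc[peak]          # assumed half-maximum criterion
    duration = int(above.sum()) * tr
    return time_to_peak, duration

print(temporal_measures(np.array([0.0, 0.4, 1.0, 0.9, 0.7, 0.3, 0.1])))  # -> (6.0, 9.0)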
The "insular" component involved the bilateral insulo-opercular cortex in both groups. In fibromyalgia patients, the time course of this component followed the dynamics of the somatosensory component, showing a fast initial signal increase and a duration of 18 seconds. By contrast, healthy subjects showed much less consistent signal changes in the insular region, as not all the stimulation blocks showed a definite signal increase (see Figure 1B).
Mapping brain response to 4 kg/cm² of pressure
The time course of the somatosensory component was averaged across groups (patients and control group 1) and was used as the reference function in a conventional fMRI analysis. Figure 2 and Table 1 report the brain activations obtained using this data-driven model. In fibromyalgia patients, activations involved all relevant regions of the pain network, including the contralateral somatosensory and motor cortices, bilateral inferior parietal areas, the opercula, the insula, the basal ganglia, the supplementary motor area, the anterior cingulate cortex and the cerebellum. In healthy controls, activation was mainly observed in the inferior parietal cortex involving the supramarginal gyrus, and in the insula. Statistical differences between both groups are reported in Table 1.
The assessment of brain activations from the conventional block-design analysis adjusted to stimulus duration (i.e., model-based) resulted in notably smaller pain-related activation in patients and control group 1 (Figure 2, Table 2).
Correlation maps
We mapped the correlation of subjective pain scores with brain activations during stimulation at 4 kg/cm² of pressure (i.e., voxel-wise regression of the activation patterns with subjects' pain scores). Pain scores were widely correlated with brain activation in the data-driven approach, involving the contralateral sensory-motor cortex, supplementary motor area, anterior cingulate cortex, anterior insula and basal ganglia (Figure 3, Table 3). By contrast, subjective pain showed no significant correlation with the activation pattern identified using the conventional model-driven approach.
The plot in Figure 3 shows a relatively graded correlation between subjective pain and anterior cingulate cortex activation when including all subjects stimulated at 4 kg/cm² of pressure. Nevertheless, it is evident that patients and healthy subjects are at opposite extremes of the pain score range. Using ANCOVA, cingulate cortex activation was found to account largely for the differences between both groups in perceived pain. In this analysis, group differences in subjective pain scores were highly significant before controlling for the effect of anterior cingulate cortex activation (F = 38.0, P<0.0001); a finding that was reversed (F = 1.8, P = 0.195) when removing (regressing out) this effect.

Comparing patients and control subjects matched for pain levels
An ICA was carried out for the control group stimulated at 6.8 kg/cm² of pressure and reporting severe pain (control group 2). This procedure estimated 37 spatially independent components. As in the above analysis, the time course of the obtained somatosensory component was averaged with the somatosensory time course of fibromyalgia patients and was used as the reference function in a new conventional fMRI analysis to compare patients with this control group. Table 4 shows the activation pattern obtained in both groups and the significant between-group differences. Brain response was comprehensive both in patients and control subjects, involving most of the pain-related regions. Response in regions involved in the sensory aspects of nociception was similar, showing a tendency for higher activation in the somatosensory cortex in control subjects. Patients, however, showed significantly greater activation in the anterior insula and basal ganglia bilaterally, and in the SMA (Table 4, Figure 4).
Discussion
This study aimed to characterize brain response to local pressure stimulation in fibromyalgia patients using an fMRI approach based on the temporal analysis of brain activation. Somatosensory areas showed consistent activation to each block of pressure stimulation that characteristically persisted beyond stimulus application. The fMRI maps adjusted for response duration showed robust activations in regions known to mediate brain responses to pain. Importantly, a strong correlation was observed between the rating of subjective pain during the fMRI assessment and the magnitude of the activation. Fibromyalgia patients showed significantly greater activation than comparative control subjects. Response enhancement was observed in fibromyalgia patients for most pain-related regions compared to the control subjects receiving identical stimulation, and for specific regions when the groups were matched for subjective pain levels.
This data-driven imaging analysis allowed us to compare specific temporal and anatomical features of nociceptive processing between fibromyalgia patients, who reported severe subjective pain from the relatively mild local pressure stimulus, and healthy subjects reporting only mild-to-moderate pain from this stimulation. We observed a similar activation time course in somatosensory cortices in both groups, which suggested relevant and durable responses to mechanical stimulation at the "sensory" stage of nociceptive processing, irrespective of subjective pain severity. For the insula component, consistent long-lasting responses were observed only in fibromyalgia patients.
The anatomy of the activations in response to 4 kg/cm² of pressure differed between patients and control subjects (control group 1). Healthy subjects showed mainly a sensory response, with relevant activation in contralateral somatosensory cortices and moderate activation in the insular cortex. By contrast, fibromyalgia patients showed a full response to pain with robust sensory, limbic and motor activations. Functional MRI changes in these regions showed a significant correlation with the severity of experienced pain and largely accounted for group differences in subjective pain scores at low pressure stimulation. That is, increased activation in pain-related regions explained the increased subjective pain ratings in fibromyalgia patients. It is noteworthy that all the "efferent" elements of the pain response (brain regions directly related to motor or visceral output) are represented in the voxel-wise map of the correlation between pain severity and brain activations, including the contralateral sensory-motor cortex, supplementary motor area, anterior cingulate cortex, anterior insula and basal ganglia. Several fMRI studies have reported a close relationship between anterior cingulate cortex activation and the subjective experience of pain or its "suffering" component [2,4]. This has been an especially robust finding in fMRI pain studies [2,4,26-28] and our results further support such an association. In addition, the reported map suggests that the other elements of the efferent pain response may also participate in the subjective experience of pain. Staud et al. [10] have reported a near-identical pattern by mapping the correlation of perceived pain and brain activation related to temporal summation of "second pain" (late C-fiber evoked responses) during painful heat stimulation. Nevertheless, we did not obtain specific measurements of affect or unpleasantness during fMRI (only pain intensity ratings were recorded), which is a major limitation of our study. It would be of interest in future studies to map the correlation of brain activation during painful stimulation with individual affect ratings in addition to the reported correlation with pain intensity. This closer correlation of subjective pain with the efferent brain response seems to further support proposed mechanisms for the enhancement of emotions, including pain. Such models suggest that efferent somatic and visceral bodily responses to emotive stimuli originate backward afferent stimulation of the body representation in the brain, in turn amplifying emotional states [29-31]. Interestingly, the map showing the correlation of perceived pain with brain activations in our study largely coincides with the neural network related to interoceptive awareness in recent fMRI studies, which is proposed to mediate subjective feeling states arising from brain representations of bodily responses [32,33]. Our data indeed suggest that fibromyalgia patients show enhanced responses in regions related to the individuals' emotion expression that may be part of the subjective pain experience. Nevertheless, these activations are not necessarily the result of augmented responses at the basic levels of nociceptive processing. A very recent study by Burgmer et al.
[34] showed that abnormal brain responses in emotion-related regions in patients with fibromyalgia may be delayed with respect to peripheral painful stimulation, suggesting that the enhancement of their painful experience is likely to originate from central factors related to the patients' affect and cognition. Our study is limited in that the influence of these factors (e.g., patients' anxiety and depression) was not controlled in the analysis.
Our results are consistent with most previous fMRI studies on fibromyalgia, but expand the reported data by assessing the temporal dynamics of brain activity, which led to a more comprehensive activation mapping. All the reports coincide in showing abnormal brain responses to painful stimuli in fibromyalgia patients [20,35,36] when comparing patients to control subjects receiving identical stimulus intensity. In general, the data are consistent with a model of enhanced normal pain response and argue against the occurrence of "aberrant" nociception [20,37]. However, when matching both groups for perceived pain, we observed larger activations in patients for specific regions. In this matching comparison, Gracely et al. [20] did not report significant differences between patients and control subjects with stimulation producing moderate pain. More recently, Staud et al. [9] specifically assessed the temporal summation of second pain using heat stimulation and also found no brain activation differences when stimulus strength was adjusted to induce moderate pain in both groups. In contrast with these two studies, more intense stimulation was used in our assessment, and both patients and this control group reported severe pain. Fibromyalgia patients showed greater activation in the insula, basal ganglia and the anterior cingulate cortex, which are part of the brain network mediating efferent aspects of the pain response, and not in somatosensory cortices, where control subjects even had a tendency to show larger activation. Overall, our findings may be consistent with the notion of an augmented brain response to pain in fibromyalgia, but the functional alterations may be particularly relevant in emotion-related (paralimbic) regions.
Functional MRI research is now focused on assessing the different dimensions of nociceptive processing. The presence of mood depression in fibromyalgia patients was associated with increased activation in regions processing affective components of pain [38]. Pain "catastrophizing," or characterizations of pain as awful, horrible and unbearable, was related to increased activation in the attentional, affective and motor domains, independently of the influence of depression [19]. Another study suggested that patients' beliefs about pain control (locus of control for pain) may influence nociceptive processing at the sensory-discriminative stage [39]. In this context, mapping brain activations adjusted to the temporal dynamics of each nociception dimension in different clinical and experimental situations may be of interest to further characterize the complex phenomenology of pain responses. Interestingly, Burgmer et al. [34] suggested that patients with fibromyalgia may show different temporal dynamics in different elements of the brain pain network.
Conventional block-design fMRI is based on detecting brain activations following a specified paradigm of stimulus duration. These methods provide reliable and accurate activation patterns when stimulus duration corresponds well to brain activation (typical in most sensory and motor tasks). Nevertheless, for painful or emotional stimuli that may evoke responses of variable duration, the temporal analysis of brain activity may provide more informative activation maps that correlate better with subjective pain scores. Data-driven methods, however, are inherently biased toward the actual response in a given population or experiment, which may hinder the generalization of conclusions [13,14]. For example, between-group comparisons may be difficult when the data-driven analyses identify different time courses for each group. In our study, it was feasible to compare groups using a common temporal model, as both patients and controls showed similar time courses for the somatosensory component.
Despite the small number of subjects included in this study, we observed robust activation maps reflecting the consistency of brain activation across all 12 pressure stimulation blocks. This may have particular relevance in the clinical fMRI setting, as discussed in recent studies [40], where obtaining consistent findings at the individual case level is most desirable. Nonetheless, further studies will be needed to extrapolate our findings to the general population of fibromyalgia patients. In this context, it is also of interest to better establish the possible confounding effects of relevant clinical variables, such as the medication history of patients. In our study, no analgesic drugs were permitted 72 hours prior to fMRI, but patients were allowed to continue with their stable medical treatment, involving drugs with a potential ability to modify central nociceptive processing. In our patients, however, it is unlikely that the observed response enhancement to painful stimuli was a consequence of ongoing medical treatments, as the available data suggest the opposite effect [41-44]. Indeed, psychotropic medication showed no significant changes or ameliorative effects on abnormal functional neuroimaging measurements [43], while antidepressants reduced limbic activation during emotional processing [41], benzodiazepines reduced brain activity associated with the anticipation of pain [44], and non-steroidal anti-inflammatory drugs suppressed pain-induced activation in most regions involved in pain processing [42].
Fibromyalgia has often been a controversial medical syndrome since patient identification is based largely on subjective symptoms [45]. In this and other studies [12], fMRI has demonstrated increased brain responses in patients labeled with this clinical diagnosis. Future research will establish the clinical usefulness of imaging tools for the objective assessment of subjective symptoms in both this and related disorders.
Influence of Time and Risk on Response Acceptability in a Simple Spoken Dialogue System
We describe a longitudinal user study conducted in the context of a Spoken Dialogue System for a household robot, where we examined the influence of time displacement and situational risk on users' preferred responses. To this effect, we employed a corpus of spoken requests that asked a robot to fetch or move objects in a room. In the first stage of our study, participants selected among four response types to these requests under two risk conditions: low and high. After some time, the same participants rated several responses to the previous requests; these responses were instantiated from the four response types. Our results show that participants did not rate their own response types highly; moreover, they rated their own response types similarly to different ones. This suggests that, at least in this context, people's preferences at a particular point in time may not reflect their general attitudes, and that various reasonable response types may be equally acceptable. Our study also reveals that situational risk influences the acceptability of some response types.
Introduction
Spoken Dialogue Systems (SDSs) must often engage in follow-up interactions to deal with Automatic Speech Recognizer (ASR) errors or elucidate ambiguous or inaccurate requests (which are exacerbated by ASR errors):

• ASR errors, although significantly reduced in recent times, may produce wrong entities or actions, or ungrammatical utterances that cannot be processed by a Spoken Language Understanding (SLU) system (e.g., "the plate inside the microwave" being misheard as "of plating sight the microwave").

• People often express themselves ambiguously or inaccurately (Trafton et al., 2005; Moratz and Tenbrink, 2006; Funakoshi et al., 2012; Zukerman et al., 2015). An ambiguous reference to an object matches several objects well, while an inaccurate reference matches one or more objects partially. For instance, a reference to a "big blue mug" is ambiguous if there is more than one big blue mug, and inaccurate if there are two mugs: one big and red, and one small and blue.
In the last two decades, research in response generation has focused on techniques that generate response policies that optimize dialogue completion, using Markov Decision Processes (MDPs), e.g., (Singh et al., 2002;Lemon, 2011), and Partially Observable MDPs (POMDPs), e.g., (Williams and Young, 2007;Gašić and Young, 2014). Recently, deep-learning algorithms have been used to generate dialogue responses on the basis of request-response pairs, e.g., (Li et al., 2016;Prakash et al., 2016;Serban et al., 2017). Human and simulation-based evaluations of MDP and POMDP systems focus on dialogue completion, while evaluations of deep-learning algorithms focus on individual responses.
In this paper, we draw inspiration from research in Recommender Systems, where Amatriain et al. (2009) and Said and Bellogín (2018) showed that over time, users gave inconsistent ratings to items, leading to the "magic barrier" to prediction accuracy in Recommender Systems (Said and Bellogín, 2018). This prompted us to posit that people may also be inconsistent when assessing responses in a dialogue at different times, which may affect the results of human evaluations.
To investigate this claim, we conducted a longitudinal study in the context of an SDS for a household robot. We first collected a corpus of spoken requests that asked a robot to fetch or move objects in a room. Our participants were shown the top ASR outputs for these requests (the intention was to replicate the information available to an SDS, without the extra information people can glean from what they hear). They were also told that these requests had to be executed under two risk conditions: low risk, where the consequences of performing the wrong action are trivial, and high risk, where performing the wrong action could significantly inconvenience the speaker. The participants had to choose among four response types: DO the request without further interaction, CONFIRM the intended object, ask the requester to CHOOSE between a few candidate objects, or ask the requester to REPHRASE all or part of the request. After 1.5-2 years, the same participants were shown the original requests and ASR outputs, and were asked to rate responses generated from their previously selected response types and from other sources, in particular response types selected by one of the authors and by a classifier trained on the author's chosen response types.
Our findings show that (1) participants downrated responses sourced from their previously chosen response types; and (2) these responses were liked as much as different responses sourced from the response types selected by one of the authors or by the above-mentioned classifier. The first result indicates that, at least in the context of one-shot dialogues with an SDS for a household robot, people's preferred response types at a particular point in time may not reflect their general attitudes. The second result suggests that, instead of one best response type, several reasonable response types may be acceptable, including those selected by a classifier trained on a non-target but relevant corpus.
We also investigated the influence of situational risk on the acceptability of response types. We found that (3) as expected, under the high-risk condition, the preferred response types were generally more conservative than under the low-risk condition; but (4) surprisingly, participants' attitudes toward certain response types, e.g., CONFIRM, were not affected by risk.
The rest of this paper is organized as follows. In the next section, we discuss related work. Our experimental setup is described in Section 3. In Section 4, we present our classifier and the features used to train it. The results of our experiment are described in Section 5, and concluding remarks appear in Section 6.
Related Work
Decision-theoretic approaches have been the accepted standard for response generation in dialogue systems for some time (Carlson, 1983). These approaches were initially implemented in SDSs as Bayesian reasoning processes that optimize a system's confidence when making myopic (one-shot) decisions regarding dialogue acts (Paek and Horvitz, 2000;Sugiura et al., 2009), and as Dynamic Decision Networks that make decisions about dialogue acts over time (Horvitz et al., 2003;Liao et al., 2006).
MDPs (Singh et al., 2002; Lemon, 2011), POMDPs (Williams and Young, 2007; Gašić and Young, 2014), and their extensions, the Hidden Information State Model (Young et al., 2010, 2013) and the Conversational Entity Dialogue Model, were used, often in combination with Reinforcement Learning (RL), to learn policies that optimize dialogue completion on the basis of feedback given by real or simulated users.
Recently, deep learning has been applied to various aspects of SDSs (Wen et al., 2015; Li et al., 2016; Mrkšic et al., 2017; Prakash et al., 2016; Serban et al., 2017; Yang et al., 2017). Wen et al. (2015) considered the generation of linguistically varied responses; Li et al. (2016) and Prakash et al. (2016) produced dialogue contributions of chatbots; and Serban et al. (2017) generated helpdesk responses and Twitter follow-up statements. Mrkšic et al. (2017) proposed a dialogue-state tracking framework, and Yang et al. (2017) a mechanism for slot tagging and user-intent and system-action prediction in slot-filling applications. A combination of deep learning and RL has been used in end-to-end dialogue systems that query a knowledge base, where user utterances are mapped to a clarification question or a knowledge-base query (Williams and Zweig, 2016; Zhao and Eskenazi, 2016; Dhingra et al., 2017). All these systems harness large corpora comprising request-response pairs to learn responses that are assumed to be better than alternative options.
Like evaluations based on simulated users, human evaluations of (PO)MDP/RL systems focus on successful dialogue completion (Singh et al., 2002; Thomson et al., 2008; Young et al., 2010), while human evaluations of deep-learning systems assess individual responses (Wen et al., 2015; Li et al., 2016; Prakash et al., 2016; Serban et al., 2017; Dhingra et al., 2017). The findings reported in this paper contribute to (PO)MDP/RL research by determining whether there are factors other than dialogue completion that affect the suitability of responses, and to deep-learning research by ascertaining whether indeed there is a single best response to each request.
The research described in (Jurčíček et al., 2011) and (Liu et al., 2016) shed light on ancillary aspects of human evaluations of system responses. The former compared evaluations by Amazon Mechanical Turk workers with evaluations by participants recruited for a lab experiment; and the latter conducted user studies to determine the validity of word-based evaluation metrics. This paper also addresses ancillary aspects of human response evaluations, viz the influence of temporal displacement and situational risk on users' attitudes toward response types, and users' opinions of response types obtained from different sources (including a classifier trained on a corpus that differs from the target corpus).
Experimental Setup
Our experiment comprises two main stages: (1) responding to requests, and (2) rating responses to the same requests.

Creating a corpus of requests
We created a corpus of requests by collecting a corpus of spoken descriptions, and converting them to requests.
To collect the spoken descriptions, we replicated the experiment described in (Zukerman et al., 2015), but we used the Google ASR, instead of the Microsoft Speech API. In our experiment, the top-ranked outputs produced by this ASR had a 13% word error rate, which resulted in 53% of the descriptions having imperfect top-ranked ASR outputs. In addition, 33% of the descriptions had errors in all top four ASR outputs.
Following the protocol in (Zukerman et al., 2015), 35 participants were asked to describe 12 designated objects (labeled A to L) in four scenes ( Figure 1); speakers were allowed to restate the description of an object up to two times. In total, we recorded 478 descriptions such as the following: "the flower on the table" (object A in Figure 1(a)), "the plate inside the microwave" (object D in Figure 1(b)), "the plate at the center of the table" (object G in Figure 1(c)), and "the large pink ball in the middle of the room" (object J in Figure 1(d)). 20% of the descriptions had an unintelligible object in all ASR outputs, e.g., "the Heartist under the table", 17.9% were ambiguous (several objects matched the description), and only 3.8% were inaccurate (no object matched the description perfectly).
We retained 292 descriptions (in particular, descriptions with more than one prepositional phrase were removed to simplify the classifier), and for each description, we used the top four ASR outputs. The corpus of requests, denoted RequestCorpus, was created by prefixing the verb "get" (for small objects) or "move" (for large objects) to each ASR output (which remained unchanged), e.g., "get the flower on the table". This corpus was divided into sets of at most 12 requests (one request per object, mostly from one speaker).
Demographic and risk-propensity information
We gathered information about the participants' gender, English nativeness, age, education and risk propensity. For the last item, we showed the participants twelve statements obtained from (Rohrmann, 2005): six risk-proneness statements, e.g., "I follow the motto 'nothing ventured, nothing gained' ", and six risk-aversion statements, e.g., "My decisions are always made carefully and accurately"; (dis)agreement was indicated on a 1-5 Likert scale. The hope was that these information items would assist in predicting participants' responses.
Stage 1 - Responding to requests
This corpus was collected through an online survey where participants had to indicate how they would respond to potentially misheard requests. Each participant was shown at most 12 requests from RequestCorpus (spoken by other people). Each request consisted of four verb-prefixed ASR outputs, and was accompanied by a version of the appropriate image in Figure 1 where the objects were numbered (to enable participants to identify any object as the referent). Each participant was then asked to select one of four response types for each request: DO, CONFIRM, CHOOSE or REPHRASE. Figure 3 in Appendix A displays a screenshot containing a numbered version of Figure 1(a), four ASR outputs for a request for object #5 (labeled B in Figure 1(a)), and the four response types.
Prior to presenting the survey questions, participants were given a training example illustrating the four response types (DO, CONFIRM, CHOOSE and REPHRASE). The participants' choices were made under two risk conditions: low risk, where participants were told that the requested object must be delivered to someone in the same room; and high risk, where they were told that the object must be delivered to a remote location (Figure 3). These settings were designed to discriminate between situations where mistakes are fairly inconsequential and situations where mistakes are costly.
40 people took part in this stage of the experiment, but six dropped out after this stage. Half of the remaining participants were male, and 18 were native English speakers. 4 participants were between 18-24 years of age, 16 between 25-34 years of age, 7 between 35-44, and 7 over 45. In terms of education, 5 participants had a secondary education, 16 had a Bachelor, 8 a Masters, and 5 a PhD. To assess the participants' risk propensity, we subtracted their total risk-aversion score from their total risk-proneness score (the total risk-aversion/proneness score was calculated by adding up the Likert score of the six riskaversion/proneness statements): 16 participants were risk prone, 8 were risk averse, and 10 were fairly neutral (the difference between the scores was less than 3).
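The risk-propensity measure described above reduces to a simple score. The sketch below (illustrative; the Likert ratings are hypothetical) computes it with the neutrality band of less than 3 points used here:

def risk_propensity(proneness_ratings, aversion_ratings):
    """Each argument: six 1-5 Likert ratings for the corresponding statements."""
    diff = sum(proneness_ratings) - sum(aversion_ratings)
    if diff >= 3:
        return "risk prone"
    if diff <= -3:
        return "risk averse"
    return "fairly neutral"

print(risk_propensity([4, 5, 3, 4, 4, 5], [2, 3, 2, 1, 2, 2]))  # -> risk prone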
In total, this corpus, denoted ResponseCorpus, contains 584 response types (= 292 requests × 2 conditions), which are distributed as shown in Columns 2 and 3 of Table 1.
To determine the influence of speaker diversity on classifier performance (Section 4), we created a second corpus, denoted AuthorCorpus, where one of the authors selected response types for all the requests.

Stage 2 - Rating responses to the same requests
After 1.5-2 years, we were able to reach 34 participants from Stage 1, and we built RatingsCorpus as follows. Each participant was shown the requests they had seen before (without alerting them to this fact) together with several candidate responses. They were then asked to rate the suitability of each response on a 1-5 Likert scale under the low- and high-risk conditions. The candidate responses were sourced from the response types chosen by the participant (ResponseCorpus) and the author (AuthorCorpus) in Stage 1, and the response types returned by a classifier trained on AuthorCorpus (Section 4). In addition, for every DO response from Stage 1, we also presented a CONFIRM response in Stage 2, and vice versa. Clearly, if more than one source had the same response type for a request, this response type was presented only once in Stage 2. Figure 4 in Appendix A displays a screenshot of Stage 2 survey questions regarding the same request as that in Figure 3, presented to the same participant.
Two Stage 2 responses, viz DO and REPHRASE, are direct renditions of the corresponding Stage 1 response types. However, to enable participants to rate CONFIRM and CHOOSE response types, we needed to refer to specific objects. We decided to use images to mimic pointing in CONFIRM responses (e.g., "Do you want this [PICTURE]?") and in CHOOSE responses with two or three candidate objects (e.g., "There are two things on the table, do you want this [PICTURE 1] or that [PICTURE 2]?"). We restricted the number of CHOOSE responses with images because we deemed it unnatural to display images for a larger number of candidate objects. In addition, all CHOOSE responses were realized as text only, e.g., "There are two things on the table, which one do you want?". That is, there were two CHOOSE responses with two or three candidate objects, and one CHOOSE response with more candidate objects. Figure 4 illustrates two CHOOSE responses, a CONFIRM response and a DO response.
Using a Classifier to Select Responses
One of the aims of this project is to determine whether we can generate acceptable responses using a classifier trained on a small non-target but relevant corpus. As noted in Section 3, in order to simplify the classifier, we removed descriptions with more than one prepositional phrase. Hence, most descriptions have semantic segments corresponding to an OBJECT, a POSITION SPECIFIER and a LANDMARK (only 22 (7.5%) descriptions have no prepositional phrase, e.g., "the big pink ball").
Classification features
To extract features of interest, we assume an SLU system that returns several ranked interpretations, and can represent (a) the ASR's confidence in the correctness of its candidate outputs, and (b) how well an interpretation (in the context of the room) matches a given description.
We employed the output of the SLU system described in (Zukerman et al., 2015), and for each description, we automatically extracted features that represent the above two types of information.
We also included information about situational risk (high or low); and for ResponseCorpus, we added the participants' demographic characteristics (gender, English nativeness, age and education), and the difference between their risk-proneness and risk-aversion scores (Section 3).

Features that reflect the ASR's confidence. These features are shown in Table 2. They reflect the ASR's "opinion" of the correctness of its output, rather than the ground truth. The last feature is noteworthy because the ASR may have high confidence in a few ASR outputs, e.g., "the flower on the table".

[Table 2. Features that reflect the ASR's confidence: (1) is there an ASR output with all correct words?; (2) % of wrong words in the top ASR output; (3) % of wrong words in all ASR outputs; (4) % of ASR outputs with all correct words.]

Features that represent how well an interpretation matches a description. These features are summarized in Table 3. They are calculated for the top-N interpretations returned by the SLU system, where N = 10 (in this system, the correct interpretation is among the top ten in about 90% of the cases). The scores calculated by the SLU system for these features are combined into a total match score for each interpretation, which determines its ranking. For instance, given the description "the brown stool near the table", two stools in Figure 1(d) have a high total match score, as both are brown and near the table: the stool to the right of the table and stool L, which is to the left of the table. However, since the former stool is closer to the table, it has a slightly higher total score, and is ranked first, while stool L is ranked second.

[Table 3. Features that represent how well an interpretation matches a description (weights in parentheses): (1) # of interpretations with a total match score similar to that of the top-ranked interpretation (×1); (2) how well the relative position of OBJECT and LANDMARK in an interpretation matches the position specified in the description (×10); (3) lexical-match score of the OBJECT, LANDMARK and POSITION SPECIFIER in an interpretation with the corresponding semantic segment in the description (×30); (4)-(6) other match scores of each OBJECT and LANDMARK in an interpretation with the corresponding semantic segment in the description: (4) colour match score (×20); (5) size match score (×20); (6) # of Unknown modifiers (×20).]
The first feature in Table 3 represents the ambiguity of a description through the similarity between the total match score of the top-ranked interpretation and that of subsequent interpretations. We encode this similarity as the ratio between the total score of the i-th interpretation (i = 1, . . . , N ) and the total score of the top-ranked interpretation. All the interpretations whose ratio is above an empirically-derived threshold are deemed similar to the top-ranked interpretation.
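A literal reading of this feature is sketched below (illustrative; the scores and the 0.9 threshold are placeholders, since the text only states that the threshold was empirically derived):

def n_similar_interpretations(total_scores, ratio_threshold=0.9):
    """total_scores: total match scores of the top-N interpretations, best first."""
    top = total_scores[0]
    return sum(1 for score in total_scores if score / top >= ratio_threshold)

print(n_similar_interpretations([0.82, 0.80, 0.55, 0.30]))  # -> 2 (ambiguous)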
The second feature, computed for each of the top-N interpretations, represents the goodness of the match between the position of the OBJECT in the interpretation (i.e., in the room) and its requested position in the description. For example, both stools in Figure 1(d) are near the table, but the position match score of the stool to the right of the table is higher than that of stool L. The rest of Table 3 contains features that represent the quality of the match between individual elements in an interpretation and their corresponding semantic segments in the given description. Feature #3 represents how well the canonical name of each element in an interpretation matches the corresponding lexical item in the description. For instance, the terms "stool" and "table" respectively match perfectly the terms that designate stool L and the yellow table in Figure 1(d). However, if the speaker had said "ottoman", the lexical match with the canonical term for stool L would have been poorer.
Features #4-6 pertain to intrinsic attributes of things, which are normally stated as noun modifiers in a description. They are computed for the OBJECT and LANDMARK of each of the top-N interpretations. Following Zukerman et al. (2015), we have focused on colour and size modifiers, designating other modifiers, e.g., composition or shape, as Unknown. Features #4 and #5 respectively reflect the goodness of a match between the color and size of an OBJECT or LANDMARK in an interpretation and the colour and size specifications in the corresponding semantic segment in the given description. For example, a request for a "brown stool" in the context of Figure 1(d) returns a high colour match with stool L, while a request for a "blue stool" would return a low colour match. Finally, the match score for Feature #6, which pertains to Unknowns, e.g., "the plastic stool", reflects the badness of a match.
Classifying responses
We considered several classification algorithms to learn response types from the corpora collected in Stage 1 of our experiment (Section 3): Naïve Bayes, Support Vector Machines, Decision Trees, Random Forest (RF) and Recurrent Neural Nets. Table 4 displays the per-class and overall performance of the RF classifier with 10-fold cross validation for both corpora. As seen in Table 4, RF performed much better for AuthorCorpus than for ResponseCorpus. This is attributable to the consistency of the 584 ratings provided by one person in AuthorCorpus, compared to the variability among participants in ResponseCorpus (different participants selected different responses for requests that had the same features).
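A minimal sketch of this setup with scikit-learn (simulated feature vectors standing in for the Table 2-3 features plus risk and demographics; the hyperparameters are placeholders, as they are not reported here):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((584, 12))                      # 584 request-condition pairs
y = rng.choice(["DO", "CONFIRM", "CHOOSE", "REPHRASE"], size=584)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=10).mean())  # 10-fold CV accuracy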
The demographic features gender and English nativeness and the difference between risk-proneness and risk-aversion scores mitigated the impact of speaker diversity in ResponseCorpus (age and education had no effect). In addition, situational risk had some influence on classification results in ResponseCorpus. This is consistent with the observation that the vast majority of the differences between the low- and high-risk conditions were due to changes from DO to more conservative response types, in particular CONFIRM (represented in Columns 2 and 3 in Table 1). Despite this, most of the misclassifications were also between DO and CONFIRM.
Although the performance of the RF classifier on ResponseCorpus is disappointing, this result is tangential to the main thrust of this paper. In Section 5, we examine participants' attitudes toward responses obtained from the RF classifier trained on AuthorCorpus (which is significantly different from ResponseCorpus, Section 3).
Results
The main objective of our experiment is to determine whether participants' attitudes toward responses remain consistent over time. That is, how well do participants like their own previous responses? And do they prefer them to other responses? As mentioned in Section 3, these other responses were sourced from the response types in AuthorCorpus and the response types chosen by the RF classifier trained on AuthorCorpus.
In addition, we sought to gain insights about the feasibility of using a classifier trained on the responses of one person, and to determine the influence of situational risk on people's attitudes toward response types.
Hypotheses pertaining to fewer than 200 samples were tested using Wilcoxon matched-pairs signed-rank test, and for more than 200 samples, we used the Normal approximation of this test (Siegel and Castellan, 1988).
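For concreteness, the test can be run as below with scipy (hypothetical paired ratings; scipy switches between the exact distribution and the normal approximation on its own):

from scipy import stats

stage1_ascribed = [5, 5, 4, 5, 5, 3, 5, 4, 5, 5]   # ratings ascribed to Stage 1 choices
stage2_given    = [3, 5, 4, 2, 4, 3, 5, 3, 4, 5]   # Stage 2 ratings of the same responses
stat, p = stats.wilcoxon(stage1_ascribed, stage2_given)
print(stat, p)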
How well do people like their previously selected response types? In order to answer this question, we had to address the following issues (the resulting rating-ascription rules are sketched after this list):

1. In Stage 1, participants selected a response type for each request, while in Stage 2, they rated responses. To compare Stage 1 selections to Stage 2 ratings, we ascribed ratings to the response types selected in Stage 1. In order to account for participants' rating bias, we assigned to each response type selected by a participant in Stage 1 the highest rating this participant gave to any response in Stage 2 (87% of these highest ratings were 5, the maximum on the Likert scale, Section 3).

2. In Stage 2, we offered two options for CHOOSE response types with two or three candidate objects: CHOOSE+pictures and CHOOSE+text (Section 3). For each description, we assigned to a Stage 2 CHOOSE response type the maximum of the ratings of the two options.
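A minimal sketch of the two ascription rules (illustrative only; the ratings are hypothetical):

def ascribe_stage1_rating(participant_stage2_ratings):
    """Rule 1: a Stage 1 selection inherits the participant's highest Stage 2 rating."""
    return max(participant_stage2_ratings)

def choose_rating(pictures_rating, text_rating):
    """Rule 2: a CHOOSE response type gets the max of its picture and text variants."""
    return max(pictures_rating, text_rating)

print(ascribe_stage1_rating([3, 5, 4]), choose_rating(4, 2))  # -> 5 4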
We tested the hypothesis that participants' Stage 1 response types yield highly rated responses in Stage 2 under both risk conditions. The result of this test was that participants' Stage 2 ratings of responses sourced from their own Stage 1 response types were significantly lower than the ratings ascribed to these Stage 1 response types under the low- and high-risk conditions (p-value < 0.01). Figure 2 displays a histogram of the differences between the ratings ascribed to Stage 1 response types and the ratings given to the corresponding responses in Stage 2 under both risk conditions. For example, the leftmost bars indicate that the ratings of 159 response types under the low-risk condition and 123 response types under the high-risk condition did not change between Stage 1 and Stage 2 (the difference is 0). In other words, participants lowered their ratings of 133 response types under the low-risk condition and 169 response types under the high-risk condition. DO (the majority class) accounts for 71% of these downrated response types under the low-risk condition, and 60% under the high-risk condition.
Do users prefer their previously selected response types to other response types? To answer this question, for each risk condition, we collected the participants' Stage 1 response types that differ from those in AuthorCorpus for the same request, and their response types that differ from those chosen by the RF classifier trained on AuthorCorpus. Table 5 compares participants' ratings of responses (R_S1) sourced from their Stage 1 response types (S1) with their ratings of responses (R_d) sourced from different response types (d) selected by the RF classifier for the same requests under the low- and high-risk conditions. In total, 107 response types chosen by the classifier differ from the participants' selected response types under the low-risk condition, and 126 under the high-risk condition. In 47 of the low-risk cases and 46 of the high-risk cases, the responses sourced from the classifier's response types received a higher rating than the responses sourced from the participants' own response types (the results are similar for AuthorCorpus). Table 6 illustrates two of these low-risk cases, and two of these high-risk cases. For instance, in the high-risk example pertaining to Figure 1(a), the participant chose REPHRASE in Stage 1, but gave it a rating of 1 in Stage 2, while CONFIRM received a rating of 5.
As seen in Table 5, under the low-risk condition, participants generally preferred the responses sourced from the classifier response types, while the opposite effect was observed under the high-risk condition (Table 7).

[Table 5. Users' Stage 1 response type (S1) versus a different response type (d), under the low- and high-risk conditions: Rating(R_S1) > Rating(R_d): 32 / 55; Rating(R_S1) = Rating(R_d): 28 / 25; Rating(R_S1) < Rating(R_d): 47 / 46; # of requests where S1 ≠ d: 107 / 126.]

Nonetheless, when we tested the hypothesis that participants liked responses sourced from their own previous response types as much as responses sourced from different response types in AuthorCorpus and different response types chosen by the classifier, both tests returned the same result: there were no statistically significant differences between users' ratings of responses sourced from their own Stage 1 response types and their ratings of responses sourced from different response types under the low- and high-risk conditions (p-value > 0.15).
How does situational risk affect participants' attitudes toward different response types? As seen in Table 1, the proportion of DOs in ResponseCorpus decreased under the high-risk condition, while the proportion of the other response types increased (the difference between the low- and high-risk response types is statistically significant, χ² test with p-value < 0.01). This indicates that participants preferred more conservative (risk-averse) response types under the high-risk condition. Figure 2 suggests that participants were also more critical of their own previous response types under the high-risk condition than under the low-risk condition (they reduced the ratings of 169 response types under the high-risk condition compared to only 133 under the low-risk condition). This observation is confirmed by the mean ratings of the Stage 2 responses in our corpora under the low- and high-risk conditions, which are shown in Table 7 for the responses sourced from ResponseCorpus and the responses obtained from the RF classifier (the AuthorCorpus results are similar).
In addition, the ratings of DO and of both versions of CHOOSE were significantly lower under the high-risk condition than under the low-risk condition (p-value 0.01 for DO and CHOOSE+text, and p-value < 0.05 for CHOOSE+pictures). In contrast, no statistically significant differences were found with respect to CONFIRM and REPHRASE under the two risk conditions. Also, participants preferred CONFIRM to DO and CHOOSE+pictures to CHOOSE+text under both risk conditions (p-value 0.01). These findings suggest that situational risk influences the acceptability of certain response types, but further research is required to identify these response types in a broader context.

Figure 1: Top four ASR outputs. (a) get the paint on the wall / get the paint on the walls / get the paint on the world / get the painting on the wall. (b) get the green light next to the blue plate / get the green light next to the Blue Plate / get the green light next to the blue planet / get the green light next to the blue plates.
Conclusion
We have presented a longitudinal study in which participants initially selected response types for ASR outputs of spoken requests, and after some time, rated responses sourced from their own response types, as well as responses sourced from other response types. Our results show that the participants did not consider their original choices the best, and that overall, they had the same opinion of responses sourced from their own response types, the response types chosen by one of the authors, and those selected by a classifier trained on the response types of the author. These findings suggest that, at least in the context of one-shot dialogues with a household robot, people's response preferences at a particular point in time may not reflect their general attitudes, and that various reasonable responses may be equally acceptable. Our results also indicate that, at least in this context, a classifier trained on a small non-target but relevant corpus may yield adequate responses.
Our experiment also distinguished between two types of situational risk: low and high. We found that risk influences people's general attitudes toward responses: they were more risk-averse and critical under high-risk conditions than under low-risk conditions. However, this attitude was directed toward some response types (DO and CHOOSE) and not others (CONFIRM and REPHRASE). This finding, if it generalizes, may influence response-type selection.
The implications of our findings for deep-learning systems are that training on a single best response may be unjustified, as several responses are equally acceptable. Further studies are required to determine whether our findings generalize to longer dialogues in more complex domains. If this is the case, (PO)MDP/RL systems do not need to take into account people's preferences when generating a response. However, if extra-linguistic factors such as risk come into play, they should be incorporated into policy-learning algorithms to bias response selection in favour of risk-sensitive responses preferred by people. Finally, our findings regarding rating inconsistency over time may affect the results of comparative studies, such as that of Liu et al. (2016).
Association between dietary live microbe intake and Life's Essential 8 in US adults: a cross-sectional study of NHANES 2005–2018
Background Assessing the impact of dietary live microbe intake on health outcomes has attracted increasing interest. This study aimed to elucidate the relationship between dietary live microbe intake and Life's Essential 8 (LE8) scores, a metric of cardiovascular health (CVH), in the U.S. adult population. Methods We analyzed data from 10,531 adult participants of the National Health and Nutrition Examination Survey (NHANES) spanning 2005–2018. Participants were stratified into low, medium, and high dietary live microbe intake groups based on Marco's classification system. We employed weighted logistic and linear regression analyses, along with subgroup, interaction effect, and sensitivity analyses. Additionally, restricted cubic splines (RCS) were used to explore the dose-response relationship between food intake and CVH in the different groups. Results Compared to the low live microbe intake group, the medium and high live microbe intake groups had significantly higher LE8 scores, with β coefficients of 2.75 (95% CI: 3.89–5.65) and 3.89 (95% CI: 6.05–8.11), respectively. Additionally, the medium and high groups had significantly lower odds of high cardiovascular health risk, defined as an LE8 score below 50, with odds ratios (OR) of 0.73 and 0.65, respectively. Subgroup and sensitivity analyses confirmed the stability of the results. In the low intake group, food intake showed a linear negative correlation with LE8, whereas in the high intake group it showed a linear positive correlation. In contrast, in the medium live microbe intake group, the relationship between food intake and LE8 presented a distinct inverted "U" shape. Conclusion This study highlights the potential benefits of medium to high dietary intake of live microbes in improving LE8 scores and CVH in adults. These findings advocate for the inclusion of live microbes in dietary recommendations, suggesting their key role in CVH enhancement.
Introduction
While advancements in food hygiene and environmental sanitation have markedly elevated public health standards, the concomitant reduction in microbial exposure may have unforeseen detrimental effects (1). There is growing scientific recognition of the health potential of microbial ingestion (2). The "Old Friends Hypothesis" provides a compelling narrative on the integral role of microbes, positing that exposure to symbiotic or innocuous microbes present in our diet serves as a crucial conduit for beneficial microbial stimulation of the immune system (3). The ingestion of live, benign microbes as part of our daily diet facilitates their transit to the gut, where they assimilate with the resident microbial community, thereby augmenting gut functionality, modulating the immune system, and ultimately reducing susceptibility to chronic ailments (1).
Cardiovascular diseases (CVD) remain a leading cause of death in developed countries and a major global health issue, despite advanced lipid-lowering drugs. Their high incidence continues to weigh heavily on society and economic development (4, 5). A well-known factor contributing to cardiovascular health (CVH) is dietary pattern (6). The gut microbiota can convert commonly consumed nutrients in food into metabolites, some of which are closely related to cardiovascular diseases (7). The gut microbiota has been implicated in cardiovascular diseases in past epidemiological studies and animal experiments (8,9). To enhance the prevention of CVD and consequently reduce its incidence, the American Heart Association (AHA) introduced a novel concept of cardiovascular health in 2010, termed Life's Simple 7 (LS7). This paradigm shift transformed the approach to disease management from mere treatment to fostering and safeguarding the health of individuals and communities throughout their lifespan (10). In 2022, following optimization of the LS7 scoring scale, the AHA unveiled a new CVH score, Life's Essential 8 (LE8). The LE8 comprises two primary components, covering four health behaviors (diet, physical activity (PA), nicotine exposure, and sleep health) and four health factors (body mass index (BMI), blood pressure (BP), blood lipids, and blood glucose) (11).
Although several studies have explored the connection between dietary live microbes and health, their impact on LE8 remains unclear (12-14). In this study, a nationally representative sample from the National Health and Nutrition Examination Survey (NHANES) was used to define and estimate the intake of dietary live microbes. We then assessed the relationship between dietary live microbe intake and the LE8 score (15). Given the intricate interplay between dietary live microbe intake and CVD, our study pioneers the exploration of the potential linkage between dietary live microbe intake and the LE8 score.
Method

Study design and population
The NHANES database employs a complex, multistage probability sampling design to reflect the nutritional and health status of the U.S. population. This study was approved by the National Center for Health Statistics (NCHS) Institutional Review Board, and all participants provided written informed consent. Notably, the NHANES database is publicly accessible and does not require additional ethical or administrative approval for use.
Our study adheres to the STROBE guidelines (16). Sleep health is a key factor in assessing the LE8, and applicable data in NHANES are only available from 2005 onward. Therefore, our study incorporated data from seven survey cycles between 2005 and 2018. A total of 70,190 individuals participated in the survey. Among them, 30,441 were under the age of 20, 4,422 lacked information on dietary live microbe intake, 8,322 lacked information on the LE8, 14,514 lacked valid sample weights, and 3,983 lacked information on other covariates. As shown in Figure 1, after screening, a total of 10,531 participants were finally included in the study.
Dietary intakes and live microbial category
In the NHANES dataset, the dietary intake of participants during two distinct 24-h periods is thoroughly recorded through face-to-face interviews and telephone follow-ups. The NCHS uses dietary nutritional data from the United States Department of Agriculture (USDA) to accurately assess the energy and nutritional content of each food and beverage. The estimated quantity of live microbes (per gram) in 9,388 foods across 48 subgroups in the NHANES database was determined by a team of four experts, including Marco. During these assessments, the experts relied on reported values in the professional literature and authoritative reviews, or made inferences based on the known effects of food processing on microbial viability. Foods were categorized into low (Lo; <10^4 CFU/g), medium (Med; 10^4-10^7 CFU/g), and high (Hi; >10^7 CFU/g) classes based on their live microbe content. Any discordance encountered during the assessment of live microbe content was resolved through external consultations and discussions.
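As a small illustration of this three-way classification, the following sketch encodes the CFU/g thresholds from the text; the example values are illustrative only:

```python
# Encodes the Lo/Med/Hi thresholds from the text (CFU per gram);
# the example values in the calls below are illustrative only.
def microbe_class(cfu_per_gram: float) -> str:
    """Classify a food by its estimated live-microbe content."""
    if cfu_per_gram < 1e4:
        return "Lo"   # < 10^4 CFU/g
    if cfu_per_gram <= 1e7:
        return "Med"  # 10^4 - 10^7 CFU/g
    return "Hi"       # > 10^7 CFU/g

print(microbe_class(5e2))  # "Lo"
print(microbe_class(1e6))  # "Med"
print(microbe_class(3e8))  # "Hi"
```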
Assessment of LE8
Each of the eight indicators is scored according to publicly available official calculation methodologies, with the specific calculation methods detailed in Supplementary Table 1. Each CVH indicator is scored on a scale ranging from 0 to 100, and the overall LE8 score is derived from the unweighted average of these indicators. Based on the final score, CVH is categorized into three groups: low (0-49 points), medium (50-79 points), and high (80-100 points) (17).
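A toy version of this scoring rule, with hypothetical indicator values (the eight indicator names are paraphrased from the text), might look as follows:

```python
# Toy LE8 computation: unweighted average of eight 0-100 indicator
# scores, then the three CVH categories from the text. Indicator
# values are hypothetical.
def le8_score(indicators: dict) -> float:
    assert len(indicators) == 8, "LE8 requires exactly eight indicators"
    return sum(indicators.values()) / 8.0

def cvh_category(score: float) -> str:
    if score < 50:
        return "low CVH"     # 0-49 points
    if score < 80:
        return "medium CVH"  # 50-79 points
    return "high CVH"        # 80-100 points

person = {"diet": 60, "physical_activity": 80, "nicotine_exposure": 100,
          "sleep": 70, "bmi": 55, "blood_lipids": 65,
          "blood_glucose": 90, "blood_pressure": 75}
score = le8_score(person)
print(score, cvh_category(score))  # 74.375 medium CVH
```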
The dietary indicator was assessed using the 2015 Healthy Eating Index (HEI) (18), and our study included participants with 2 days of dietary data (those with only 1 day of dietary data were excluded). Subsequently, dietary information was combined with data from the USDA to calculate the HEI (19); details are provided in Supplementary Table 2. Self-reported questionnaires were used to collect information on participants' PA, smoking, sleep, history of diabetes mellitus (DM), and medication use. Height, weight, and BP were measured in mobile examination centers, and BMI was calculated. Blood samples were collected and sent to a central laboratory for testing of blood lipids, blood glucose, and glycated hemoglobin. Non-high-density lipoprotein cholesterol was calculated as total cholesterol minus high-density lipoprotein cholesterol.
Covariates and other variables
In light of prior studies and clinical experience (12, 20), confounding factors that could influence the relationship between dietary microbes and CVH were taken into consideration. These factors included age, gender and race/ethnicity (non-Hispanic White, non-Hispanic Black, Mexican American, other Hispanic, and other race, including multi-racial). Educational level was also considered, categorized as less than high school, high school graduate/GED or equivalent, and college graduate or above. Marital status was classified into married/living with partner, never married, and widowed/divorced/separated. Economic status was assessed using the poverty income ratio (PIR), with categories of <1.30, 1.30-3.49, and 3.50 or higher. Health insurance status was noted as either Yes or No. Alcohol consumption was categorized as never, former, mild, moderate, or heavy drinking. Weight status was classified as normal weight, overweight, or obese. Medical history encompassed a history of CVD (Yes, No), DM (Yes, No), hypertension (HTN; Yes, No), and hyperlipidemia (HLD; Yes, No). The details of these variable assessments can be found in Supplementary Table 3. Additionally, we assessed the participants' average daily intake of energy, protein, carbohydrates, dietary fiber, and fat.
Statistical methods
To ensure national representation, this study incorporated sample weights in all analyses. Participants were grouped into Low, Medium, and High dietary live microbe categories, where each group corresponds to the consumption level of foods within the microbial content ranges: the Low group consumed mostly low microbe-content foods, the Medium group consumed a balance excluding high microbe-content foods, and the High group predominantly ate foods rich in live microbes. We conducted one-way ANOVA (for continuous variables) and chi-square tests (for categorical variables) to assess the baseline characteristics of participants among the different groups. Continuous variables are represented by mean ± standard deviation, while categorical variables are expressed as the number of cases (n) and weighted percentages (%).
Weighted univariate and multivariate linear regression models were used to explore the relationship between dietary live microbes and LE8. Additionally, health behaviors and health factors were explored separately. To delve deeper into this connection, we considered an LE8 score below 50 as an indicator of high cardiovascular health risk (HCVHR). Weighted univariate and multivariate logistic regression models were used to explore the association between dietary live microbe intake and HCVHR. We also conducted subgroup analyses and interaction analyses based on age, gender, race/ethnicity, education level, PIR, health insurance, and marital status. Moreover, in each of the low, medium, and high live microbe intake groups, we employed restricted cubic spline (RCS) analysis to investigate the relationship between the quantity of food consumed (grams) and LE8. Finally, for the purpose of sensitivity analysis, we excluded individuals with a history of CVD, DM, HLD, and HTN. Statistical analyses were conducted using R software, version 4.3.1. A two-tailed P-value of <0.05 was considered to indicate statistical significance.
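The study ran its analyses in R with the full NHANES survey design. Purely for illustration, a simplified Python sketch of weighted linear and logistic models of this kind is shown below; it uses synthetic data, applies only the sampling weights (ignoring NHANES strata and PSUs), and all variable names are placeholders:

```python
# Illustrative sketch only: the study's analyses were run in R with the
# full NHANES survey design. This simplified version uses synthetic
# data, applies only the sampling weights, and all variable names are
# hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "microbe_group": rng.choice(["Low", "Medium", "High"], size=n),
    "age": rng.uniform(20, 80, n),
    "weight": rng.uniform(0.5, 2.0, n),  # sampling weight (simplified)
})
# Synthetic LE8 with a small group effect so the models have signal.
df["LE8"] = (rng.normal(62, 12, n)
             + df["microbe_group"].map({"Low": 0, "Medium": 3, "High": 5}))
df["HCVHR"] = (df["LE8"] < 50).astype(int)  # LE8 below 50 = high risk

# Weighted linear regression: LE8 on intake group, adjusted for age.
lin = smf.wls("LE8 ~ C(microbe_group, Treatment('Low')) + age",
              data=df, weights=df["weight"]).fit()
print(lin.params)

# Weighted logistic regression for HCVHR; exponentiate for odds ratios.
logit = smf.glm("HCVHR ~ C(microbe_group, Treatment('Low')) + age",
                data=df, family=sm.families.Binomial(),
                var_weights=df["weight"]).fit()
print(np.exp(logit.params))
```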
Results

Characteristics of participants across different dietary live microbe intake groups
Participants were categorized into three groups based on their intake levels of dietary live microbes: low, medium, and high. As shown in Table 1, the average age of the study population is 47.59 years, with a slightly higher proportion of females than males. The predominant ethnicity is non-Hispanic White, and the majority of participants have a college education or higher. Most participants have health insurance, a habit of drinking alcohol, and are either married or cohabiting with a partner. In terms of weight status, the majority are obese; 8.81% of participants suffer from CVD, 13.87% have DM, 37.44% have HTN, and over 70% have HLD. The assessment of CVH shows that 66.34% of the participants are at a moderate level. Among the three groups, there were significant differences in all characteristics except daily carbohydrate intake and the prevalence of HTN and HLD. We also compiled the baseline characteristics of participants categorized by low, medium, and high levels of CVH, as shown in Supplementary Table 4.
As previously mentioned, LE8 primarily comprises two components: health factors and health behaviors. We further explored the relationship between these components and the intake of live microbes. As shown in Supplementary Table 4, across all analysis models, groups with higher intake of live microbes were generally associated with higher scores in health factors and health behaviors, and this association remained significant in the multivariate-adjusted models. It is noteworthy that in the crude model, there was no significant difference between the medium and low dietary live microbe groups.
Association between different dietary live microbe groups and HCVHR
To further explore the relationship between different levels of live microbe intake and cardiovascular health risk, we employed logistic regression analysis. Building on the preceding linear regression analyses, the study defines individuals with an LE8 score below 50 as having HCVHR. As shown in Table 3, compared with the low intake group, the moderate and high intake groups were associated with significantly lower odds of HCVHR. In the crude model, without adjusting for any confounding factors, moderate live microbe intake was associated with reduced odds of HCVHR (OR = 0.58, 95% CI: 0.49-0.69), and the reduction was more pronounced for high live microbe intake (OR = 0.43, 95% CI: 0.34-0.55), with a P-value for trend <0.0001. This trend remained consistent after progressively adjusting for confounding factors. In the final model, which accounted for a variety of potential confounders, the moderate and high groups were still significantly associated with lower cardiovascular health risk. Specifically, the odds ratio for moderate live microbe intake was 0.73 (95% CI: 0.61-0.89), and for high live microbe intake it was 0.65 (95% CI: 0.50-0.84).
FIGURE 2. Subgroup analysis for the associations of different dietary live microbe intake and LE8. The model was adjusted for age, gender, race/ethnicity, education level, PIR, health insurance, marital status, alcohol consumption, energy intake, protein intake, carbohydrate intake, fat intake, and fiber intake when they were not the strata variables. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.
Subgroup analysis and interaction analysis
To investigate the consistency of the link between dietary live microbe intake and CVH across various populations, subgroup analyses were conducted. The analysis covered subgroups defined by age, gender, race, education level, household income ratio, health insurance status, and marital status. As shown in Figure 2, the results indicate a significant positive correlation between the intake of live microbes and LE8, which was validated in the vast majority of subgroups. However, there were some non-significant results in specific subgroups, such as non-Hispanic Blacks, suggesting caution when generalizing these findings to a broader population. Trend tests showed that as the intake of live microbes increased (from moderate to high amounts), the positive impact on CVH tended to increase across all subgroups. Interaction analysis revealed that the relationship between dietary live microbe levels and LE8 was significantly influenced by gender, race and education level. The impact of gender was particularly significant (P = 0.001), suggesting that men and women might respond differently to the intake of live microbes. Additionally, subgroup analyses of the relationship between dietary live microbe levels and HCVHR were also conducted, with largely consistent results (Supplementary Figure 1).
Dose-response relationship between food intake and LE8 across different live microbe intake groups
We employed RCS analysis to explore the dose-response relationship between food intake and CVH at the three levels of live microbe content. In the low live microbe group (Figure 3A), there is a significant negative correlation between food intake and LE8 (P < 0.001), with no significant non-linear trend (non-linear P = 0.36). In the medium live microbe group (Figure 3B), the analysis revealed a significant inverted "U" shaped relationship (P < 0.0001, non-linear P < 0.0001): food intake is positively correlated with LE8 up to 326.04 g, and the relationship turns negative beyond this point. In the high live microbe intake group (Figure 3C), intake is positively correlated with LE8 (P < 0.001), but this correlation is not curvilinear (non-linear P = 0.12).
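As an illustration of this kind of spline fit, the sketch below uses patsy's natural cubic spline cr() as a stand-in for a restricted cubic spline, with synthetic data constructed to have an interior turning point; it is not the study's actual analysis:

```python
# Illustrative spline fit: patsy's natural cubic spline cr() stands in
# for a restricted cubic spline. Data are synthetic, built to have an
# interior turning point (compare the 326.04 g turning point above).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
intake = rng.uniform(0, 800, 400)  # grams of food consumed
le8 = 60 + 0.08 * intake - 0.00012 * intake**2 + rng.normal(0, 5, 400)
data = pd.DataFrame({"intake": intake, "LE8": le8})

fit = smf.ols("LE8 ~ cr(intake, df=4)", data=data).fit()

grid = pd.DataFrame({"intake": np.linspace(0, 800, 201)})
pred = np.asarray(fit.predict(grid))
print("estimated turning point:", grid["intake"].iloc[pred.argmax()], "g")
```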
Discussion
In this nationally representative cross-sectional study, our results demonstrate that the consumption of foods high in live microbes is associated with better CVH. The study finds a significant positive correlation between moderate and high live microbe intake and LE8, indicating that appropriate consumption of these foods may benefit CVH. Sensitivity analyses further confirm the robustness of this finding, as the positive correlation remains significant even after excluding populations with a history of CVD and other potential confounding factors. Subgroup analyses reveal the universality of this relationship across different populations, although caution should be exercised in generalizing these findings, especially in specific subgroups such as non-Hispanic Blacks, where the correlation was not significant. Additionally, interaction analyses show that gender differences might influence the relationship between live microbe intake and LE8, suggesting that men and women may respond differently to live microbe consumption. Furthermore, RCS analysis exploring the dose-response relationship between food intake and LE8 at different levels of live microbes reveals a non-linear relationship. Specifically, in the moderate live microbe intake group, there is an inverted "U" shaped relationship between food intake and LE8, implying that a moderate intake of foods with medium levels of live microbes might be more beneficial for CVH. In summary, moderate intake of live microbes is important for maintaining CVH, but its effects may vary among individuals, and personal characteristics should be taken into consideration.
Prior to our study, research had explored the relationship between dietary live microbes and CVD, but the diagnosis of CVD was based solely on inquiries about the presence or absence of a relevant medical history, and previous studies focused more on whether CVD occurred (12). Additionally, Marco and colleagues studied dietary live microbes in relation to physiological indicators and found a positive correlation with health, including reductions in triglycerides, systolic blood pressure, and fasting blood sugar levels, as well as an increase in high-density lipoprotein cholesterol levels (14). However, Marco's study focused only on specific indicators and lacked a direct link for inferring cardiovascular health. Building on these studies, our focus was the relationship between dietary live microbes and LE8. Since the AHA updated its method for assessing CVH outcomes in 2022, multiple studies have validated the effectiveness of LE8 in predicting CVH and outcomes. Higher levels of LE8 are associated with reduced incidences of coronary heart disease, stroke, and CVD, and are also independently related to lower risks of all-cause and cardiovascular mortality (21,22). LE8 is a comprehensive indicator, incorporating not only health factors such as blood pressure and lipids but also health behaviors like sleep, nicotine exposure, and exercise. In fact, our study also demonstrated significant positive correlations between dietary live microbes and both these aspects. This may help to reveal a deeper connection between live microbes and overall CVH. During the study, we aimed to minimize the confounding effect of dietary factors on the results. Therefore, we utilized the HEI as the scoring standard for diet-related scores in the LE8 evaluation. Compared to the DASH diet score, which emphasizes vegetables, fruits, and low-fat dietary guidelines, the HEI is more lenient. Moreover, to further eliminate the confounding effect of diet, we adjusted for calorie intake, protein intake, fat intake, carbohydrate intake, and dietary fiber intake. After these adjustments, medium and high dietary live microbe intake remained associated with higher health behavior and health factor scores. The impact on health behavior scores was still significant, indicating a stable relationship between dietary live microbe intake and higher LE8. Furthermore, we used a variety of analytical methods. To prove the stability of our results, we conducted subgroup analyses, interaction analyses, and sensitivity analyses. Additionally, we used RCS analysis to assess the dose-response relationship between food intake and cardiovascular health. RCS analysis can accurately capture complex non-linear relationships between variables and identify key turning points in the dose-response relationship, thereby providing a more precise and scientific basis for risk assessment and dietary guidance.
While the mechanism underlying the relationship between dietary live microbe intake and LE8 scores remains unclear, previous studies have shown that the intake of probiotics or fermented foods can significantly reduce cardiometabolic risk factors. Meta-analyses indicate that probiotic supplements can lower blood pressure, blood glucose, total cholesterol, low-density lipoprotein cholesterol, and BMI (19-21). In addition, probiotic supplements can alleviate oxidative stress, maintain gut microbiota homeostasis, and regulate immunity to maintain CVH (22). The metabolic products of the gut microbiota, short-chain fatty acids (such as butyrate, propionate, and acetate), can improve gut barrier function, regulate immune and inflammatory responses, and affect the recruitment of immune cells to atherosclerotic plaques, thereby mitigating plaque formation (23-25). In recent years, the gut-brain axis has attracted increasing attention, with gut microbiota and their metabolites playing a crucial role. Current research suggests that gut microbiota and their metabolites can regulate the autonomic nervous system, endocrine system, and immune system, influence the release of neurotransmitters, and affect the activity of the central nervous system, thereby influencing sleep (26,27). Furthermore, probiotic supplements can enhance physical performance.
To the best of our knowledge, this study is the first to explore the relationship between dietary live microbes and LE8 in a large US population. The results indicate that consumption of foods providing more dietary live microbes is positively correlated with CVH, potentially increasing scores for health behaviors and health factors. Our study has several strengths. Firstly, the data come from NHANES and underwent rigorous quality control, ultimately including 10,531 participants, which ensures a sizable sample and lends credibility to the results. Secondly, during the analysis, we considered the differences in daily energy, protein, carbohydrate, and fiber intake among participants in the different dietary live microbe groups and adjusted for these factors in the fully adjusted model. Notably, the influence of dietary live microbe intake on LE8 remained robust after adjustment. Finally, we used a variety of statistical analysis methods, which helped in identifying special populations and understanding more complex relationships. However, the current study has certain limitations. Firstly, this is a cross-sectional study, and therefore we cannot infer causal relationships. Secondly, we used 24-h dietary recalls to estimate participants' daily dietary intake, which may not accurately reflect their true dietary habits. Thirdly, the content of live microbes in foods was determined through expert literature review and discussion and was not precisely measured, which may affect the results. Fourthly, our conclusions are limited to the US population and may not apply to other regions. Lastly, there may be residual confounding in the study results: even though we considered as many confounding variables as possible, such as diet and certain diseases, some confounding factors may still affect the results.
Conclusion
In conclusion, our study indicates that consumption of foods providing more dietary live microbes is positively correlated with scores for healthy behaviors, health factors, and the LE8. This result remains consistent across different populations and is independent of any previous history of cardiovascular-related diseases. More research is needed, particularly using experimental methods to determine the specific content of live microbes in various foods. Additionally, randomized controlled trials are required to further investigate causality.
FIGURE 1. Flow chart of the study.
FIGURE 3. Dose-response relationship curves between total food consumption and adjusted β values for LE8. (A) Low live microbe group; (B) moderate live microbe group; (C) high live microbe group. The model was adjusted for age, gender, race/ethnicity, education level, PIR, health insurance, marital status, alcohol consumption, energy intake, protein intake, carbohydrate intake, fat intake, and fiber intake.
TABLE 1. Clinical characteristics of the study population according to the different dietary live microbe groups.
TABLE 2. Association between different dietary live microbe groups and LE8. Model 1: Adjusted for age (as a continuous variable), gender, race/ethnicity, and education level. Model 2: Further adjusted for PIR, health insurance, marital status, and alcohol consumption. Model 3: Further adjusted for energy intake, protein intake, carbohydrate intake, fat intake, and fiber intake. ****P value < 0.0001.
TABLE 3. Association between different dietary live microbe groups and HCVHR.
TABLE 4. Sensitivity analysis of the association of different dietary live microbe intake and LE8. CVD, cardiovascular disease; DM, diabetes mellitus; HLD, hyperlipidemia; HTN, hypertension. Adjusted for age, gender, race/ethnicity, education level, PIR, health insurance, marital status, alcohol consumption, energy intake, protein intake, carbohydrate intake, fat intake, and fiber intake.
Hypersensitivity pneumonitis in children
Introduction. Hypersensitivity pneumonitis (HP) is one of the most common forms of interstitial lung disease in children. Due to its common association with the occupational environment, it used to be considered an exclusively adult disease; however, hypersensitivity pneumonitis also affects the paediatric population, and is often associated with exposure to antigens in the home environment and with the pastime activities of children. Objective. The aim of the study is to present the current state of knowledge on hypersensitivity pneumonitis in children, with a focus on the peculiarities of diagnostic investigation and management of the disease in this age group. The study includes a case report of the disease in a child. State of knowledge. In children, the most common factors causing HP are avian and fungal antigens present in the home environment. Diagnosis is based on the co-occurrence of a characteristic clinical presentation, radiographic and pulmonary function test findings, and a history of exposure to a potential triggering antigen. The main strategy in the management of HP is to eliminate the trigger factor, with systemic corticosteroid therapy in severe or advanced cases. Conclusions. Due to the risk of irreversible changes in the respiratory tract, an early diagnosis is very important. Quick identification of the trigger factor and its elimination from the patient's environment makes it possible to apply a less aggressive treatment and to improve the patient's prognosis. Unfortunately, due to its infrequent occurrence, hypersensitivity pneumonitis is often not taken into account in the differential diagnosis of respiratory diseases in children, which leads to a delayed diagnosis despite the characteristic clinical presentation of the disease.
INTRODUCTION
Hypersensitivity pneumonitis (HP), formerly called extrinsic allergic alveolitis, is relatively rarely diagnosed in children, but it accounts for approximately 50% of all forms of interstitial lung disease in this age group [1]. According to a Danish report, the prevalence of HP is approximately 4 cases per 1 million children, with an incidence of 2 cases/year [2]. HP is most commonly diagnosed in children aged approximately 10 years [2,3], and 25% of patients have a family history of the disease [4].
OBJECTIVE
The aim of the study is to present the current state of knowledge on hypersensitivity pneumonitis in children with a focus on the specific aspects of diagnostic investigation in this age group. The study includes a case report with a classic course of the disease.
STATE OF KNOWLEDGE
Hypersensitivity pneumonitis is defined as a disease with a variable clinical course which involves an IgE-independent hypersensitivity reaction to various environmental factors leading to lymphocytic and granulomatous inflammation of the peripheral airways, alveoli and the surrounding interstitial tissue.
Aetiology and pathomechanism. The number of known factors causing HP is substantial and constantly growing. They have been divided into 6 groups: bacteria, fungi, animal-derived proteins, plant proteins, low molecular weight chemicals, and metals [5,6].
The earliest reports of HP cases in children date back to the 1960s and include the terms "pigeon-breeder lung disease" and "farmer's lung disease" [7]. Currently, in the paediatric population, antigens causing HP are usually found in the domestic environment and are associated with pastime activities. The causative agents mainly include avian, fungal and mould antigens, or various inorganic agents, such as inhaled paints, plastics, wax and talcum [2,4,8,9].
Specific antigens eliciting HP include thermophilic actinomycetes, the causative antigen of classic farmer's lung disease. Typically, this form of HP may occur in children who live on a farm [10], but a case associated with periodic exposure in an equestrian centre has also been reported [11]. Thermophilic actinomycetes are present in various organic materials, such as rotting hay, household composts, and sugar cane residue called bagasse; other sources are air-conditioning, air humidifiers and automated water systems. They may be contaminated not only with actinomycetes, but also with other microorganisms, such as Aspergillus sp., Candida sp., Cephalosporium, Aureobasidium pullulans, Naegleria gruberi, Acanthamoeba polyphaga, Acanthamoeba castellani, and Bacillus spp. [6,12,13,14]. In such cases, the disease takes the form of HP known as humidifier lung disease.
An example of the disease associated with exposure to fungi and moulds is a summer-type HP. This is the most common form of the disease in Japan, caused by inhaling Trichosporon sp. or Cryptococcus albidus, which contaminate a warm and humid home environment [15].
The most common form of HP associated with pastime activities is pigeon-breeder lung or bird fancier's disease [16,17,18,19]. Disease-inducing factors include such proteins as immunoglobulins, intestinal mucin (present in avian faeces) and bloom (a waxy material covering birds' feathers).
Another important etiological factor for HP is the Mycobacterium avium complex, which induces hot-tub lung disease [20]. Mycobacteria are common in the environment: in the soil and water of natural reservoirs, waterworks and artificial reservoirs. They are resistant to temperature changes and disinfectants, and the water environment of swimming pools and hot baths is a suitable habitat for their survival and colonization.
It is intriguing that cases of the disease associated with e-cigarette use have also been reported [25,26]. The pathophysiology of HP is complex and not well understood. The disease involves lymphocytic and granulomatous inflammation of the peripheral airways, alveoli and surrounding interstitial tissue. Inflammation is a result of type III (immune complex) and type IV (cell-mediated) hypersensitivity reactions caused by repetitive exposure to environmental inhalant allergens [5]. The allergens are small particles with a diameter of less than 5 μm, which makes it possible for them to penetrate the alveoli [27]. They may be both organic proteins (plant and animal) and low molecular weight agents.
In acute HP, an inflammatory reaction appears to be mediated by immune complexes, as suggested by the presence of high titres of antigen-specific precipitating IgG in the serum, and an increase in neutrophils in the lungs primed for an enhanced respiratory burst [27].
Subacute and chronic HP are characterized by an exaggerated T cell-mediated immune response with increased T-cell migration, local proliferation, and decreased apoptosis contributing to the characteristic T-lymphocytic alveolitis [14,28,29]. The immune processes that lead to progression to fibrosis are less clear. However, features associated with chronic HP include an increase in CD4+ T cells and in the CD4+/CD8+ ratio, a skewing toward Th2 T-cell differentiation and cytokine profile, as well as exhaustion of CD8+ T cells [14]. Also, an increase in Th17 cells may promote collagen deposition in the lung in response to chronic exposure to HP antigens.
The role of precipitins in the pathogenesis of the disease is not fully known. They are found in the serum of approximately half of the individuals exposed to a given antigen who do not present any clinical symptoms of the disease [8]. It is suggested that the disease process is based on an individual propensity of the immune system to develop an inflammatory reaction, probably associated with certain major histocompatibility complex genes [30,31].
Viral infections (e.g. RSV, influenza A) seem to be among the factors initiating the disease [5,14]. Tobacco smoke seems to have a protective effect against the development of HP, which is probably associated with its immunosuppressive activity, particularly its influence on the production of proinflammatory cytokines by macrophages and the inhibition of lymphocyte proliferation [5,32]. However, the precise pathogenesis of the disease is not known and the aetiology seems to be multi-factorial, i.e. the coincidence of a number of trigger factors in individuals with a genetic predisposition may lead to inflammatory lesions in the lungs [5].
Clinical manifestations. Obtaining a thorough history of symptoms plays a key role in diagnosing hypersensitivity pneumonitis and identifying the trigger factor. In the majority of cases, the clinical presentation is characteristic: there are respiratory symptoms, such as dyspnoea on exercise (94%) or at rest (52%), cough (52%) and, less commonly, wheezing (5%); nearly half of the patients lose weight [2,3,4]. On physical examination, apart from the presence of dyspnoea, there are crackles on auscultation (approximately 50% of children); in some cases, signs of bronchial obstruction are noted. In advanced cases of the disease, digital clubbing may be present (in 10-30% of children). The clinical presentation may suggest an infection-induced asthma exacerbation or Mycoplasma pneumoniae infection. As a result, inhaled corticosteroids, bronchodilators and macrolide antibiotics are often administered [2,4]. The presence of HP is supported by the lack of clinical improvement during such treatment and by the recurrence of symptoms after contact with the specific environmental antigen.
Clinical course. The duration and intensity of exposure to the trigger factor determine the clinical course of the disease [2,5]. Historically, three forms of hypersensitivity pneumonitis were distinguished: acute, subacute and chronic; however, due to the difficulties in differentiating between the particular forms in clinical practice, experts currently propose a division into two phenotypes [5,33]. Acute/subacute HP is characterised by recurrent episodes of influenza-like symptoms: fever, muscle pain, cough and dyspnoea. The symptoms develop within 2-9 hours of contact with a trigger factor and persist for a few hours or days. On physical examination, bilateral crackles at the base of the lungs are found, and signs of bronchial obstruction may be present. A chest radiograph of patients with acute/subacute HP is often normal or shows diffuse air-space consolidation and a nodular or reticulonodular pattern. Symptoms are
generally caused by an intensive but short exposure to an allergen; they usually subside completely after the end of contact with the trigger factor. Chronic HP manifests with chronic and gradually deteriorating respiratory symptoms and weight loss. Patients with this disease phenotype have an abnormal radiographic image of the lungs and a restrictive, or combined obstructive and restrictive, ventilatory defect in pulmonary function tests. The clinical course is determined by a constant but less intensive exposure to a trigger factor, which leads to progressive pulmonary fibrosis, emphysema and secondary pulmonary hypertension. Patients have symptoms of gradually developing respiratory failure with an accompanying chronic dry cough, or a cough with little expectoration, and weight loss. Periods of disease exacerbation may occur, manifesting with worsening dyspnoea and deterioration of the lung radiographic image. Apart from signs of dyspnoea, physical examination reveals crackles at the base of the lungs or over all the pulmonary fields; in 10-30% of patients, digital clubbing is observed [2,4,34]. A chest radiograph usually shows abnormalities characteristic of a pulmonary fibrosis process.
Additional tests. Laboratory tests may show moderate leukocytosis, elevated inflammatory markers (increased C-reactive protein level, accelerated erythrocyte sedimentation rate), and in some cases, increased immunoglobulin IgG and IgA levels [2].
Precipitin assay - is helpful in identifying the trigger factor. However, it needs to be emphasised that the presence of specific IgG antibodies only indicates exposure to a given antigen and is not a marker of the disease [8]. A positive result is also found in individuals without clinical symptoms of the disease. On the other hand, the lack of specific precipitins excludes neither exposure to the antigen nor the diagnosis of HP. However, according to some authors, the rate of positive results in children is high - approximately 90% [2,34].
Diagnostic imaging. In acute/subacute disease, a chest radiograph mainly shows parenchymal opacities and diffuse, poorly defined nodules [5]. The radiographic image may also be normal, particularly in the case of first exposure to an antigen (18-37% of cases) [2,34]. In chronic HP, the chest radiograph is abnormal in 98% of cases, and persistent reticular and linear opacities predominate [5,30].
In the acute/subacute HP, a high resolution computed tomography (HRCT) shows diffuse ground-glass opacities, mosaic attenuation, poorly defined small centrilobular nodules and air trapping on expiratory CT images [4,5,35,36,37]. Chronic HP is characterized by the presence of a reticulation and parenchymal distortion due to the fibrosis process; bronchiectasis can also be found [4]. In some cases, mediastinal lymphadenopathy has been reported.
Bronchoalveolar lavage (BAL) - has a lower diagnostic value in the paediatric population than in adults [8]. An increase in the total cell count in BAL with marked lymphocytosis, a predominance of CD8+ T cells and a low CD4+/CD8+ ratio is a characteristic feature of HP in adults [5]. In paediatric patients, BAL lymphocytosis also occurs in all cases, as does a decreased CD4+/CD8+ ratio (<1) [3]. However, an increased CD4+/CD8+ ratio has also been observed [4]. Moreover, a predominance of CD8+ cells and a reduced CD4+/CD8+ ratio in the absence of BAL lymphocytosis may be present in children without interstitial lung disease [38]. All these findings compromise the specificity of this method. Nevertheless, BAL is an important diagnostic tool which may allow the avoidance of a lung biopsy. According to some authors, normal BAL cellularity excludes the diagnosis of HP with very high certainty [37].
Lung function tests - usually show a restrictive ventilatory defect and associated impaired gas exchange (diffusing capacity for carbon monoxide [DLCO] decreased to 50-62% of the predicted value) [2,3]. In reported paediatric cases, spirometric results, such as forced vital capacity (FVC) and forced expiratory volume in one second (FEV1), were reduced to approximately 40-53% of the predicted value [2,3,4,34]. In whole-body plethysmography, total lung capacity (TLC) was decreased to approximately 60% of the predicted value, with associated air trapping (increased RV/TLC ratio). Infrequently, there may be a component of airflow obstruction (decreased FEV1/FVC ratio) related to bronchiolitis [4,34].
Inhalation Provocation Test (IPT) - a relatively rare diagnostic method is the inhalation provocation test [5]. Owing to difficulties with conducting the procedure and interpreting the results correctly, it is not recommended for routine use. Due to the lack of standardization and the risk of severe reactions, the test should be performed exclusively at specialised centres with appropriate experience, and only if other diagnostic methods are not sufficient to determine a diagnosis [27]. According to scientific reports, IPTs are positive in 55% of cases in the paediatric population [34].
Lung biopsy - is indicated in dubious cases. The histopathological findings of HP depend on the stage of the disease and the intensity and duration of antigen exposure. Histologic studies in acute HP are scarce, as lung biopsy is generally not necessary for the diagnosis. The main abnormalities include infiltration of the alveolar spaces and interstitium, mainly with neutrophils, variably associated with the findings of subacute HP [29,39]. The pathologic patterns in subacute HP are predominantly lymphocytic inflammation of the small airways and pulmonary parenchyma, with poorly formed, small non-necrotizing granulomas [40]. The lesions have a peribronchiolar and interlobular location. The typical granulomas of HP consist of loose collections of histiocytes or scattered giant cells, frequently with cholesterol clefts or other nonspecific cytoplasmic inclusions, such as Schaumann bodies and oxalate crystals [29]. In chronic HP, the image of interstitial pulmonary fibrosis dominates; in addition, abnormalities characteristic for subacute HP are also usually found. This form of HP should be differentiated from other chronic interstitial lung diseases. The main ancillary features for differentiation are the bronchiolocentric localization of fibrotic lesions, the presence of Schaumann bodies, giant multinucleated cells, or small granulomas, and a significant lymphoid/plasmacytic infiltrate [29].

Diagnosis. A detailed history of the disease (particularly the relationship between symptoms and exposure to a trigger factor), in correlation with a characteristic HRCT image of the lungs, is of crucial importance for the diagnosis. A precipitin assay is helpful in the identification of the trigger factor, while bronchoalveolar lavage makes it possible to determine the cellular composition of lower respiratory tract secretions and to perform bacteriological tests in order to exclude an infectious origin of the abnormalities. In dubious cases, histopathological assessment of a lung biopsy specimen is helpful.
Current occupational hypersensitivity pneumonitis diagnostic criteria, based on the European Academy of Allergy and Clinical Immunology (EAACI) guidelines of 2016, are presented below [5]. However, it needs to be emphasised that their use in children is limited due to difficulties with the performance of pulmonary function tests and the poor availability of provocation tests.
Acute/subacute HP - diagnosis can be established if all the following criteria are met:
• exposure to a potential environmental trigger factor;
• recurrent episodes of symptoms occurring 4-8 hours after exposure to the trigger factor;
• presence of specific precipitins to an occupational antigen;
• presence of inspiratory crackles;
• HRCT image pattern compatible with acute/subacute HP.
If only some of these criteria are fulfilled, one of the following additional criteria can be used instead:
• bronchoalveolar lavage lymphocytosis;
• pathology of a lung specimen characteristic for acute/subacute HP;
• positive provocation tests, or clinical improvement after the end of exposure to the trigger factor and symptom recurrence after renewed exposure.
Chronic HP - diagnosis can be confirmed if four or more of the following criteria are present:
• exposure to a potential trigger factor in the surrounding environment;
• presence of specific precipitins or bronchoalveolar lavage lymphocytosis;
• decreased DLCO or hypoxaemia at rest or on exercise;
• HRCT image pattern consistent with chronic HP;
• pathology of a lung specimen characteristic for chronic HP;
• positive provocation tests, or clinical improvement after the end of exposure to the trigger factor and symptom recurrence after renewed exposure.
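As a toy illustration (not a clinical decision tool), the "four or more of six" rule for chronic HP can be encoded as follows:

```python
# Toy encoding of the EAACI-based rule above: chronic HP can be
# confirmed when four or more of the six criteria are present.
# For illustration only; not a clinical decision tool.
CHRONIC_HP_CRITERIA = {
    "exposure_to_potential_trigger",
    "precipitins_or_BAL_lymphocytosis",
    "decreased_DLCO_or_hypoxaemia",
    "HRCT_consistent_with_chronic_HP",
    "lung_pathology_consistent_with_chronic_HP",
    "positive_provocation_or_reexposure_pattern",
}

def chronic_hp_criteria_met(findings: set) -> bool:
    """True when at least 4 of the 6 chronic-HP criteria are present."""
    return len(findings & CHRONIC_HP_CRITERIA) >= 4

patient = {"exposure_to_potential_trigger",
           "precipitins_or_BAL_lymphocytosis",
           "decreased_DLCO_or_hypoxaemia",
           "HRCT_consistent_with_chronic_HP"}
print(chronic_hp_criteria_met(patient))  # True
```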
Differential diagnosis. The diagnosis of acute/subacute HP usually requires differentiation from an acute viral or atypical bacterial respiratory infection and asthma exacerbation [2,4]. HP diagnosis is supported by the persistence of symptoms despite various treatment regimens (antibiotic therapy, symptomatic inhalation treatment), by their spontaneous resolution after a change of environment, and by recurrence after renewed exposure to the antigen. Chronic HP should be differentiated from other chronic interstitial lung diseases and, in some cases, from severe, steroid-resistant asthma [4]. HP should be suspected particularly in the case of constant exposure to a given environmental factor. An HRCT image is helpful in the differential diagnosis. The HRCT findings characteristic of chronic HP include reticulation and parenchymal distortion due to the fibrosis process, and some findings of the subacute form, such as diffuse ground-glass opacities and poorly defined small centrilobular nodules [5]. According to certain authors, features strongly indicative of chronic HP in the HRCT scan include the predominance of upper lung zone abnormalities, air trapping, and the presence of more distinct ground-glass opacities [39].
Treatment. The main strategy of hypersensitivity pneumonitis management is to eliminate exposure to the offending factor. This is the only causal treatment of the disease. In acute HP with a mild course, ending exposure to the antigen usually leads to complete resolution of symptoms. In cases with a moderate or severe course, systemic corticosteroids should be considered. Oral prednisone therapy is usually used [5]. There are also scarce reports of methylprednisolone pulse therapy [2] and budesonide inhalation therapy, although the efficacy of these methods is a matter of dispute due to very limited evidence [41]. In the case of disease progression despite the use of systemic corticosteroids, immunosuppressive therapy with hydroxychloroquine, cyclosporine, azathioprine or mycophenolate mofetil should be considered [2,5,42].
In advanced cases with extensive pulmonary fibrosis, respiratory failure and secondary pulmonary hypertension, the only effective therapy is lung transplantation. The treatment duration depends on the form of the disease and on clinical, lung function and radiographic improvement. In the case of a good response to treatment, lung capacity parameters gradually improve over the first six months and subsequently reach a plateau [43]. In advanced cases with pulmonary fibrosis, the patient's lung function parameters and radiographic image do not usually return to normal.
Prognosis. The prognosis in children with HP is generally perceived as favourable if antigen avoidance is possible [2,4,34]. Elimination of the trigger factor combined with systemic corticosteroid therapy leads to symptom resolution and improvement of pulmonary function [34]. Nevertheless, in cases of significantly delayed diagnosis, progressive pulmonary fibrosis and the development of severe respiratory insufficiency have been reported [4]. A longitudinal assessment of children with HP was performed by Sisman et al. [43], in which pulmonary function tests were performed in 22 children during treatment, at the end of treatment, and a few years later. No significant difference was found between the results of tests performed at the end of the treatment and those conducted a few years later. More than 90% of the patients had normal spirometry results and 86% had a normal total lung capacity value. However, despite normal physical fitness, impaired gas exchange (decreased DLCO) was observed in 41% of the cases, and abnormalities in peripheral airway function were found in nearly half of the patients in a multiple-breath nitrogen washout test.
CASE REPORT
A 14-year-old boy with a positive family history of allergy (asthma in the mother) was admitted to hospital due to suspected hypersensitivity pneumonitis. The medical history revealed that the boy had constant daily contact with birds: he lived in the countryside in a detached house near a pigeon loft. There had previously been chickens on the farm, and a canary in the family house. At the time of admission, the patient had a 3-month history of productive cough, signs of dyspnoea, periodic wheezing and exercise intolerance. Symptoms suggesting an acute respiratory infection (fever, muscle pain, exacerbated cough and dyspnoea) were observed twice following the boy's contact with pigeon faeces during loft cleaning. The symptoms subsided within 24 hours of the end of exposure. The treatment regimen included antihistamines, antileukotrienes, bronchodilators and inhaled anti-inflammatory drugs, however without any clinical improvement. Respiratory symptoms persisted and weight loss was additionally observed. The boy was hospitalised at another hospital for diagnosis of the above-mentioned symptoms. Physical examination revealed crackles at the base of the right lung; basic laboratory tests did not reveal any significant abnormalities, while allergy tests showed signs of atopy (high total IgE level, positive skin prick tests for mugwort allergens). A chest X-ray was normal, while spirometry revealed abnormal lung capacity values suggesting mild restriction. Since a relationship was established between the symptoms and exposure to pigeon faeces antigens, hypersensitivity pneumonitis was suspected. The boy was referred for further specialist diagnostic procedures and advised to completely avoid any contact with birds. After exposure to the suspected trigger factor was eliminated, clinical improvement was observed in the form of a partial resolution of respiratory symptoms.
Upon admission, the patient was in good general condition, although he complained of mild dyspnoea and productive cough. Physical examination and basic laboratory tests did not reveal any significant abnormalities, apart from elevated levels of all immunoglobulin classes. Pulmonary function tests demonstrated a restrictive pattern combined with bronchial obstruction and impaired gas exchange expressed as decreased DLCO. HRCT revealed characteristic radiological patterns consistent with HP: generalised interlobular and peribronchiolar thickening producing discrete reticular opacities, and poorly defined centrilobular and subpleural nodules.
As part of the differential diagnosis, bronchofiberoscopy with bronchoalveolar lavage was performed, which revealed a substantial predominance of lymphocytes (> 80% of all cells) with a decreased CD4+/CD8+ lymphocyte ratio. Positive precipitin reactions in the bird breeder's disease test (using the Ouchterlony agar gel double diffusion technique and antigens from pigeon, duck, turkey and parrot faeces) supported a relationship between exposure to the suspected trigger factor and the disease symptoms. Subacute HP was diagnosed. The family was advised to completely eliminate bird antigens from the child's environment, and oral corticosteroid therapy was started. Systematic improvement in the patient's clinical condition and pulmonary function parameters was observed.
In the case reported above, the diagnosis of hypersensitivity pneumonitis was made early (within three months of the onset of symptoms) owing to a characteristic clinical presentation and the correlation of symptoms with exposure to bird faeces antigens. Respiratory symptoms predominated. There were periodic exacerbations of the disease in the form of short episodes of influenza-like symptoms and more intense coughing, caused by massive exposure to the trigger factor. It is worth noting that the correct diagnosis had already been suspected before specialist diagnostic investigations were performed. The advice to stop exposure to the suspected trigger factor, i.e. pigeon faeces antigens, had a crucial influence on the further course of the disease and the patient's prognosis.
CONCLUSIONS
Hypersensitivity pneumonitis is a rare but not exceptional condition in children, and it is often associated with exposure to antigens at home and during the child's pastime activities. The diagnosis of paediatric HP is difficult and presents even more of a challenge than in adults. It relies on a combination of exposure history, the characteristic clinical presentation of the disease, and abnormalities in additional tests, none of which, including histopathological examination, is actually specific.
Since HP symptoms may mimic recurrent acute respiratory infection and asthma exacerbation, HP should be considered in a patient presenting with prolonged or recurrent cough, or dyspnoea without obvious trigger factors.
The prognosis of HP in children is generally perceived as good if antigen avoidance is possible. Nevertheless, in cases of significantly delayed diagnosis, progressive pulmonary fibrosis may develop.
SOLAR SYSTEM CONSTRAINTS ON ANALYTIC PALATINI f(R) GRAVITY
Introduction
A very practical way to confront an alternative theory of gravity with solar system tests is through the Will and Nordtvedt parametrized post-Newtonian (PPN) formalism [1]. This framework considers a generic post-Newtonian (PN) metric as an expansion in terms of gravitational potentials and relates each coefficient of this expansion (the so-called PPN parameters) to specific phenomena by means of the equations of motion. Once the PN metric of a theory is derived, its PPN parameters and solar-system constraints are promptly obtained (see, for instance, [2]). It turns out that the number of gravitational potentials considered by the formalism is limited. This implies that, if a given theory contains distinct potentials in its PN metric, one should deduce the PN equations of motion in order to obtain the influence of the new potentials on the trajectories of planets and light rays. That is the case for Palatini f(R) gravity, the alternative theory considered in this work [3].
The PN approximation of Palatini gravity has been considered previously in [4]. However, that work confronts Palatini gravity with solar system tests through an analysis of the PN metric alone, without considering the equations of motion. We investigate this issue here in detail, considering an arbitrary analytic Palatini f(R) gravity.
Palatini f(R) gravity review
Palatini f(R) gravity considers the spacetime metric $g_{\mu\nu}$ and an affine connection $\Gamma^\lambda_{\mu\nu}$ as independent objects of the manifold. Matter fields are minimally coupled to gravity and their Lagrangian does not depend on the connection. The action variations with respect to $g$ and $\Gamma$ lead, respectively, to the following field equations:
$$ f'(R)\, R_{\mu\nu} - \frac{1}{2} f(R)\, g_{\mu\nu} = \kappa\, T_{\mu\nu}\,, \qquad \nabla_\lambda\!\left[\sqrt{-g}\, f'(R)\, g^{\mu\nu}\right] = 0\,, $$
where $\kappa$ is some coupling constant, the prime indicates a derivative with respect to $R$, and $\nabla$ represents a covariant derivative constructed with the affine connection. It is important to note that, similarly to GR, diffeomorphism invariance implies that $\nabla^{\rm C}_\mu T^{\mu\nu} = 0$, where $\nabla^{\rm C}_\mu$ is the covariant derivative associated with the Christoffel symbol of the metric $g$.
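As a consistency check, it is useful to record the trace of the metric field equation; the step below is a standard manipulation under the conventions above (four spacetime dimensions, $g^{\mu\nu} g_{\mu\nu} = 4$):
$$ f'(R)\, R - 2 f(R) = \kappa\, T\,. $$
Since no derivatives of $R$ appear, this relation fixes $R$ algebraically as a function of the trace $T$ of the stress-energy tensor, which is the structural feature that allows the post-Newtonian expansion below to be solved order by order.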
Palatini post-Newtonian metric
The PN framework is an approximation method for gravitational theories which considers a weak-field and slow-motion regime, a suitable approach to describe Solar System dynamics. To implement this framework in Palatini f(R), we follow Ref. [5]. Once the metric field is described as a small perturbation of a flat spacetime, we consider an analytic f(R) and expand it around $R = 0$, $f(R) = \sum_{n=0}^{\infty} a_n R^n$, where the $a_n$ are constants. In order to proceed with the PPN formalism, we impose that space should be asymptotically flat, $a_0 = 0$. Since $\kappa$ is arbitrary, we can also set $a_1 = 1$. Solving the field equations order by order, we obtain the PN metric in Palatini gravity, where $\tilde{a}_2 = \kappa a_2$, $\tilde{a}_3 = \kappa^2 a_3$ and Latin indices run from 1 to 3. When $\tilde{a}_2 = \tilde{a}_3 = 0$, the GR solution is recovered. The quantities $\rho^*$, $\Pi$ and $p$ are the usual perfect-fluid conserved mass density, internal energy density and pressure, respectively. $U$, $\psi$, $\chi$ and $V_i$ are standard PPN potentials, and $\Phi_P$ is a new potential particular to Palatini gravity. We first note that our result is not in conflict with the previous analysis of the PN limit of Palatini gravity using the scalar-tensor equivalence [4]. At the Newtonian order, there is a Palatini correction term given by $\tilde{a}_2 \rho^*$. This term is in general relevant for the internal stability of a given body, but it does not change the body's center-of-mass trajectory, as will become clear later.
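To sketch how the order-by-order solution proceeds, one may insert the series into the trace of the field equations; the steps below assume only the conventions already fixed ($a_0 = 0$, $a_1 = 1$):
$$ f'(R)\, R - 2 f(R) = \sum_{n \ge 1} (n-2)\, a_n R^n = -R + a_3 R^3 + \dots = \kappa T\,, $$
so that $R = -\kappa T + \mathcal{O}(\kappa^3 T^3)$. Note that $a_2$ cancels out of the trace relation and enters the metric only through the full field equations, consistently with the corrections organizing themselves as $\tilde{a}_2 = \kappa a_2$ and $\tilde{a}_3 = \kappa^2 a_3$.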
For the light propagation, PN corrections in the photon trajectories are given by the vacuum second order metric, which is equivalent to GR. Therefore, the PPN parameter γ within Palatini gravity is precisely 1, just like GR.
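For context, recall the standard PPN expression for the deflection of a light ray grazing a body of mass $M$ at impact parameter $b$ (quoted here for orientation, not derived in this text):
$$ \delta\theta = \frac{1+\gamma}{2}\,\frac{4 G M}{c^2 b}\,, $$
so $\gamma = 1$ implies light bending identical to GR, and vacuum light-propagation tests in the solar system cannot distinguish Palatini f(R) gravity from GR.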
To determine the equations of motion of massive, finite-volume bodies, further details are necessary. The first step is to examine the PN hydrodynamics in order to find the conserved quantities.
Conserved quantities
By manipulating the conservation of the energy-momentum tensor it is possible to obtain conserved integrals. The time component leads to total energy conservation. The total mass-energy of the fluid is then defined as $M = m + E$ and it satisfies $dM/dt = 0$, where $m$ is the material mass, $m = \int \rho^*\, d^3x$, which is constant in time too. The vector equation of the conservation of the energy-momentum tensor can also be integrated to define the total conserved momentum $P^i$, with $dP^i/dt = 0$. The previous results show that Palatini f(R) gravity does not violate total conservation of energy and momentum in the PN regime, although it redefines the conserved quantities. This is an expected outcome, since any Palatini gravity model is a Lagrangian-based metric theory with a matter action independent of the affine connection and, as shown in [6], such theories should not violate PN conservation laws. In the context of the PPN formalism, the results obtained here directly show that the PPN parameters $\zeta_1$, $\zeta_2$, $\zeta_3$, $\zeta_4$ and $\alpha_3$ are all zero in Palatini f(R) gravity.
Equation of motion for massive bodies
In this section, we split the fluid description of the source into $N$ separated bodies in order to obtain the PN equations of motion for the bodies' center-of-mass positions. Each body, indexed by $A$, has a material mass and a center-of-mass acceleration; the volume over which the corresponding integration is calculated is bounded in the interbody vacuum region. The center-of-mass acceleration of each body is a sum of three parts: the Newtonian acceleration $a^{\rm Newt}_A$, a PN correction $a^{\rm PN}_A$ and a structural contribution $a^{\rm str}_A$. The integrand of the acceleration integral is found from the PN Euler equation, and all mathematical techniques are detailed in Ref. [5]. In the resulting expressions, we use the definitions $\mathbf{r}_{AB} = \mathbf{r}_A - \mathbf{r}_B$, $r_{AB} = |\mathbf{r}_{AB}|$ and $\mathbf{n}_{AB} = \mathbf{r}_{AB}/r_{AB}$. These equations have no explicit dependence on either $a_2$ or $a_3$ and they are identical, in form, to the corresponding GR expressions. There is a single implicit difference inside the constant $E_B$, which is the energy associated with the planet indexed by $B$. However, the PPN parameters are not sensitive to this energy redefinition. Moreover, this result implies that the remaining PPN parameters are once again equal to their GR values, $\beta = 1$ and $\alpha_1 = \alpha_2 = \xi = 0$. Hence, in spite of the appearance of non-standard PPN potentials in the metric expansion, we conclude that the values of all the PPN parameters are the same as in GR.
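For concreteness, the leading (Newtonian) piece takes the familiar form below, written in the $\mathbf{n}_{AB}$ notation just defined; this is the standard expression the equations of motion reduce to:
$$ \mathbf{a}^{\rm Newt}_A = -\sum_{B \neq A} \frac{G\, m_B}{r_{AB}^2}\, \mathbf{n}_{AB}\,, $$
while the PN correction $\mathbf{a}^{\rm PN}_A$ retains the same Einstein-Infeld-Hoffmann form as in GR, up to the redefinition of the energy $E_B$ noted above.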
Conclusion
In this work, we presented a post-Newtonian (PN) analysis of a class of analytic Palatini f(R) gravities without making use of the equivalence with scalar-tensor theories and, more importantly, using the equations that describe light and planet trajectories to link theory and experiment. This is necessary because the PN metric in Palatini theories does not fit the PPN formalism. The result is that the equations of motion are precisely the same as in GR. In PPN language, we show that GR and Palatini gravity share the same PPN parameters. The only difference lies in the conserved mass-energy function, where Palatini theories contain a correction term. Nonetheless, solar-system tests are insensitive to this kind of distinction, leading to the conclusion that Palatini f(R) gravity cannot be constrained by current solar-system observations. Further details on the present work can be found in Ref. [7].
The author thanks FAPES and CNPq (Brazil) for their support through the Profix program.
Nsambya Community Home-Based Care Complements National HIV and TB Management Efforts and Contributes to Health Systems Strengthening in Uganda: An Observational Study
1 Department of Paediatrics, University of Padua, Via Giustiniani 3, 35128 Padua, Italy 2 Home Care Department, Saint Raphael of Saint Francis Hospital (Nsambya Hospital), Kampala, Uganda 3 Infectious Diseases Research Collaboration, Mulago Hospital Complex, Kampala, Uganda 4 Saint Raphael of Saint Francis Hospital (Nsambya Hospital), Kampala, Uganda 5 Santa Chiara Hospital, Largo Medaglie d'Oro 9, 38122 Trento, Italy 6 Department of Public Health, Institute of Tropical Medicine, Nationalestraat 155, 2000 Antwerp, Belgium 7 Department of Paediatrics and Child Health, College of Health Sciences, Makerere University, Kampala, Uganda 8 MRC/UVRI Uganda Research Unit on AIDS, Plot 51-59, Nakiwogo Road, Entebbe, Uganda
Background
In the wake of the human immunodeficiency virus (HIV) epidemic in Sub-Saharan Africa (SSA), alternative service delivery models like Community Home-Based Care (CHBC) [1-6] have evolved to fill the gap left by overstretched and under-resourced health systems. CHBC includes any form of care (physical, psychosocial, palliative, and spiritual) given to the sick and the affected in their own homes, as well as care extended from the hospital or health facility to their homes through family participation and community involvement [7,8]. CHBC provides for the unmet needs of the large and growing population of PLHA in many resource-limited settings [7,9,10]. However, the effects of CHBC on national HIV and TB outcomes have not been examined in detail.
In Uganda, the first CHBC programmes were established in 1987 in response to increasing numbers of acutely ill HIV/AIDS patients leading to congestion of hospital wards, increased staff workload, and excessive pressure on infrastructure. Three different organisations pioneered this approach: the Kitovu Mobile HIV Programme, The AIDS Support Organization (TASO), and the Nsambya Hospital Home Care Department, popularly known as Nsambya Home Care (NHC). TASO was started by local people, whereas Kitovu Mobile and Nsambya Home Care were pioneered by Catholic missionary sisters from Ireland.
HIV and TB present important public health problems and health systems challenges for the country. According to the 2011/12 Uganda AIDS Commission Progress Report, Uganda has a generalized HIV epidemic, and prevalence has increased from 6.4% in 2004 to 7.3% in 2011 [11]. Uganda is also one of the 22 high-burden countries with respect to TB [12], and the 2012 WHO Global TB Control Report shows that Uganda had a TB prevalence of 183 (95-298) and an incidence rate of 193 (156-234) per 100,000 population [13]. In the same report, 53% of TB patients tested positive for HIV [13]. To deal with the dual epidemics, the ministry of health (MoH) developed national policy guidelines for the integrated management of TB/HIV coinfection. The guidelines aim, among other things, to reduce the burden of both diseases by improving detection and quality of care and promoting access through decentralization of services to the lower levels of the health system, where the majority of the population lives [14].
The Ugandan guidelines for scaling up ART [15] outline a primary care approach and make provisions for a CHBC model, in line with the WHO framework for action on CHBC in resource-limited settings [7]. This encourages synergies in the implementation of interventions for both HIV and TB. By mandate, CHBC is expected, among other functions, to provide palliative care and pain management, implement HIV-related interventions, and serve as a bridge to extend HIV care, treatment, and psychosocial support services beyond the traditional health facilities to the large and growing population of PLHA in their homes and communities [16]. Clearly, CHBC is consistent with the designed national response, and as a service delivery model it could promote integrated management of the dual epidemics in Uganda and other settings with similar problems [17-20].
In this paper, we describe the Nsambya CHBC and examine the results and their effects on national HIV and TB outcomes and their contribution to health systems strengthening in Uganda. Additionally, we highlight some challenges and recommend practical steps to strengthen implementation of CHBC in a resource-limited setting. The study population consisted of adults and children receiving HIV and TB care, treatment, and psychosocial support services over the study period.
Description of CHBC.
NHC was established to extend basic health services into patients' homes, reduce pressure on hospital workers and infrastructure, and encourage family members to participate in the care of their relatives. This service was also intended to promote early hospital discharge, follow-up after discharge, and community involvement. What started as a team of three health workers providing palliative care to patients in their homes has evolved into a specialized HIV and TB centre. NHC has a catchment area stretching across four districts in and around Kampala and covers approximately 21 km in radius. The estimated population of the catchment area was about 4 million in 2012 [21]. Over the years, NHC has evolved into a CHBC with the development of community components, which include community engagements, a community-based volunteer programme, community outreach programmes, and outreach clinics.
The Nsambya CHBC is a blend of facility-based care and home-based care, with the community serving as an important intermediary. It employs task shifting to overcome some of the shortages in the workforce and uses home visits and outreach clinics to bring services closer to patients. In addition, psychosocial support services help patients to deal with some of the challenges posed by HIV-positive status and poverty in accessing healthcare in poor-resource settings. The pillars of the CHBC and how they function, patient enrolment practice, tracking of defaulters, and other interventions have been described in detail in a previous study [22].
Programmes Implemented with CHBC.
Prior to implementing programmes with the CHBC, donors and partners made concerted efforts to operate within existing national policy guidelines as much as possible. That understanding paved the way for establishing a framework of administrative and operational integration among donors and partners aimed at coordinating resources, promoting efficiency, and avoiding measures that could potentially damage the health system.
Within that framework, several closely related programmes were implemented with the CHBC: HIV prevention education, counselling and testing, ART, HIV chronic and palliative care, TB treatment, Intensified TB Case Finding (ICF), and Isoniazid Preventive Therapy (IPT). The programmes were vertical owing to the weak state of the general health system, and the approach can be considered a contextualized solution [23-26]. Nonetheless, the Nsambya CHBC has extensive and important functional linkages to the general health system and all the relevant stakeholders. For instance, the MoH supports the TB clinic within the Nsambya CHBC to function as a national referral facility that provides TB treatment for the public and participates in surveillance activities and national TB-HIV studies. The various interrelationships of the Nsambya CHBC are illustrated in Figure 1, and the key players, processes, and linkages to the observed outcomes are summarized in Figure 2.
The Nsambya CHBC was funded mainly by non-governmental initiatives through a long-standing faith-based solidarity, with minimal support from the MoH. The faith-based solidarity also provided vital technology and technical assistance to achieve a common goal: to provide comprehensive HIV care, treatment, and psychosocial support services for HIV-infected patients, their families, and affected communities. Services were generally free of charge; however, adults paid a user fee of 1,000 Uganda shillings, the equivalent of 38 US cents at the time of this study, per visit.
Data Collection.
Data from routine programme activities, programme reports, patients' records, and HIV and TB registers at NHC were collected for the study. Country-level data were obtained from the NACP and NTLP reports as well as from global HIV and TB reports. Some of the data were incomplete from the three institutions in the study. Consequently, the analyses were limited to periods with complete data, as detailed under Section 3.
Statistical Methods and Data Analysis.
Primary study outcomes included the proportions of ART patients retained in care, lost to follow-up (LTFU), and dead at 12 months from ART initiation, the proportion of TB patients tested for HIV, and cure and defaulter rates for new smear-positive cases. Secondary outcomes included HIV-TB coinfection and ART status among defaulters and the bed occupancy rate for HIV-related hospital admissions within 12 months of starting the CHBC. The bed occupancy rate was determined from a hospital report (unpublished).
The data were analyzed with Microsoft Excel version 2010 and STATA version 12. Chi-square tests were used to determine the differences and trends between the mean outcomes from the Nsambya CHBC and national outcomes. In addition, the chi-square test and Fisher's exact test were used to determine differences in the proportions of TB defaulters coinfected with HIV, not coinfected, receiving ART, and not on ART.
The Uganda National Council for Science and Technology granted ethical approval for the study (UNCST Ref: HS 1383). The relevant authorities waived informed consent.
Overall, there were 110 TB defaulters; 54.5% (60/110) were enrolled in care in the Nsambya CHBC and the rest were referrals from other facilities. The majority of the TB defaulters were HIV-TB coinfected (72%, P < 0.001), and these included all the TB defaulters enrolled in care in the Nsambya CHBC, who accounted for 76% (P < 0.001) of all the HIV-TB coinfected defaulters. Overall, a minority of the TB defaulters were receiving ART (39%), and when stratified by source of patients, the proportions were similar: 38% versus 42% (P = 0.769) for Nsambya CHBC patients and referrals, respectively (Table 3).
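As an illustration of the comparison above, the sketch below reruns a Fisher's exact test on counts reconstructed from the reported percentages (38% of the 60 Nsambya CHBC defaulters and 42% of the 50 referrals on ART); the counts are approximate, and the exact figures in Table 3 may differ.

```python
from scipy.stats import fisher_exact

# Rows: Nsambya CHBC defaulters, referred defaulters;
# columns: on ART, not on ART (counts reconstructed from reported percentages).
table = [[23, 60 - 23],   # ~38% of 60
         [21, 50 - 21]]   # ~42% of 50

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
# p is well above 0.05, consistent with the reported P = 0.769 (chi-square).
```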
Effect on Bed Occupancy.
The immediate impact of the Nsambya CHBC was a remarkable reduction in bed occupancy for HIV/AIDS-related hospital admissions, from an average of three months to two weeks in 1987, within 12 months of starting the CHBC and long before ART became publicly accessible in the country (data not presented).
Discussion
Overall, the core findings from this study demonstrate that the Nsambya CHBC complements national HIV and TB management and resulted in a higher proportion of ART patients retained in care and a lower LTFU rate 12 months after initiation. We believe the higher retention in care and lower LTFU rates seen among the ART patients could reflect the results of 25 years of evolution of the Nsambya CHBC, from preparing PLHA for death in the pre-ART era to keeping them alive through ART and long-term follow-up measures. The process entailed regular review of the CHBC design to make it sensitive to some key challenges faced by patients while seeking health care. That translated into additional psychosocial support services such as the OVC support programme, food supplements to help with food insecurity, and economic empowerment, particularly of adolescents through sponsorships for vocational training and of some caregivers, to enable them to deal with poverty and other negative impacts of HIV/AIDS [22]. Other factors include strategies such as tracking of defaulting patients, community involvement, task shifting to community volunteers, nurses, counselors, and social workers [27], and using outreaches to promote geographical access to some services. Furthermore, the evolutionary process involved the adoption of measures that have contributed to health system strengthening, as a stronger health system [5] is crucial for effective service delivery. We think that the slightly higher mortality rate seen among Nsambya CHBC patients on ART might be due to improved tracking of patients considered "lost to follow-up." Indeed, a variable proportion of ART patients labelled as "lost to care" were actually dead upon tracking, an observation consistent with the literature [28-30]. Our mortality rate may also reflect improved documentation and reporting of deaths from the communities by community volunteers, who are residents of the communities. Overall, the mortality trend was decreasing much more in the Nsambya CHBC, as depicted in Figure 5.
We also found that a higher percentage of TB patients were tested for HIV in the Nsambya CHBC and that the average cure rate for new smear-positive TB patients was higher than the national average. However, the defaulter rates were similar. Various factors may explain the higher proportion of TB patients tested for HIV and the improved TB cure rates in the Nsambya CHBC. The Nsambya CHBC has a TB clinic and a laboratory for various tests, including sputum microscopy, on the same premises as the main HIV clinic. That structural arrangement, coupled with training of health workers on policy guidelines for the integrated management of HIV-TB coinfection, may have strengthened management of the two diseases. The arrangement may also have raised awareness among health workers and patients, as well as facilitated screening of TB patients for HIV and vice versa. Moreover, patients see the arrangement as convenient and cost-saving, having HIV and TB screening and treatment at the same facility [31]. The high TB defaulter rate was unexpected, particularly when compared to LTFU among ART patients in the Nsambya CHBC. Nevertheless, that finding might be linked to the overall small proportion of TB defaulters started on ART (Table 3), which is in keeping with the literature [32,33]. In addition, TB patients referred to receive treatment but not enrolled in the Nsambya CHBC could not be tracked because of disjointed health information systems and logistic challenges. This observation could reflect a wider problem and calls for early initiation of ART in all HIV-TB coinfected patients, in line with the recent revisions of the treatment guidelines [34,35], and for concerted efforts to track all TB patients receiving treatment in the Nsambya CHBC.
The remarkable reduction in bed occupancy was feasible because of early discharge from hospital, home-based care provided by outreach staff, support from family members and friends of patients, and community involvement. Studies from Uganda [36,37] and elsewhere in SSA [38,39] reported high HIV-related hospital admissions, sometimes to the exclusion of non-HIV patients, in the 1990s and early 2000s. Undoubtedly, the reduction in bed occupancy has multiple impacts, such as freeing up beds for non-HIV patients, decreased workload on health workers and reduced pressure on health infrastructure, all of which are gains for the health system [26]. There may also be gains for the patients and their caregivers from the shorter stays, such as lower overall costs. With the advent of ART, the reduction in bed occupancy has been sustained, as the health conditions of many HIV patients improved and relatively fewer patients were admitted to hospital, and for shorter durations, in line with the literature [40-42]. Over the years, community involvement seems to have raised awareness about HIV and reduced stigma to some extent, as reported by some patients and their caregivers as well as community volunteers, some of whom are expert patients. These positive effects could contribute to the building blocks for chronic disease management in resource-limited settings, particularly where the default healthcare delivery models were not designed for chronic disease management [16].
We have observed some of the positive impacts of home visits and the various forms of psychosocial support, including lessons on self-management, on patient outcomes, and believe they could go beyond HIV and TB management to benefit patients with other chronic conditions such as diabetes and hypertension, to mention a few [43]. Yet that scenario may be dictated by additional funding and other resources to scale up, policy guidelines for regulation, and the political commitment to support and sustain the approach, as well as a change in the mind-sets of programme coordinators and managers. Currently, CHBC is largely a "donor-partner" funded initiative operating on a miniature scale [8,44], compared to the existing HIV and TB disease burdens and unmet needs.
Impact of Long-Standing Faith-Based Solidarity.
To a large extent, the findings from the Nsambya CHBC illustrate what can be achieved when a common goal is backed by some form of "solidarity", in this case a complex and powerful long-standing faith-based solidarity involving international donor-partnerships and local partners. To accomplish the common goal of the solidarity, a wide range of resources and several programmes were envisaged. The donors and partners directly or indirectly supported most of the essential pillars of health system strengthening [45,46] through funding, drugs, equipment, materials, infrastructure development, and vital technologies like electronic databases to support health information systems. The donor-partnership synergies also provided technical assistance to build and maintain systems: to train, supervise, and monitor and evaluate performance against set standards periodically, thus gradually developing some of the needed systemic capacities [47] as well as a viable service delivery mechanism over time. It is estimated that, at the time of this study, the existing key donors and partners had each supported the Nsambya CHBC for at least eight years, covering different periods or with some overlap, and the oldest partner for over 25 years and ongoing. This "longevity of faith-based solidarity" has been pivotal to the survival, evolution, and expansion of the Nsambya CHBC.
Administrative and Operational Integration.
One of the important achievements of the Nsambya CHBC was the establishment of administrative and operational integration among donors and partners. The measures allow some resources to be pooled together, utilized after collective decisions, and accounted for in a transparent manner. Administrative and operational integration has facilitated and somewhat harmonized the implementation of some common policy guidelines. For instance, there are common policy guidelines for the management of human resources for health in place. In practice, they translate into common procedures for advertising vacancies, standardized selection criteria, and the use of Ugandan national salary scales. This avoids disparities in salaries and working conditions for health workers with similar qualifications and experience but working on different programmes. The measures have reduced duplication, minimized wastage, administrative costs and poaching of health workers, and seem to have promoted efficiency and synergies with better programme outcomes.
4.3. Positive "Spill-Over" Effects.
Even though the original goal of the Nsambya CHBC was to provide care, treatment, and psychosocial support for PLHA and their families, with time the additional resources from the HIV programmes appear to have benefited other programmes and the general health system. Notably, TB control, nutritional support for children, and OVC support, including sponsorships for vocational training and support for caregivers, benefited [22]. These positive "spill-over" effects are consistent with findings from studies in Ethiopia and Malawi [48] and elsewhere [25,26,49] globally.
4.4. Challenges to Be Addressed.
Despite the achievements of the Nsambya CHBC, some important challenges remain, and they can be viewed from the level of the CHBC, from that of the implementing organization, from donor-partner demands and preferences, and from the general health system. Documentation and data capture from community activities need to improve in order to contribute to operational research in the future. The referral networks linking the communities to the outreach clinics and to the department and hospital require strengthening in order to be effective. Community volunteers play vital roles in the referral networks, but they may not be adequately resourced to function effectively. Budgeting must also be clear and stable in order for programmes to be designed in a feasible manner and implemented consistently. This can be difficult to manage with multiple donors and partners providing varying portions of funds over varying periods. Supplies such as drugs and medical products are also received on a variable basis from a range of sources. In order to prevent waste and actively identify areas of both overage and shortage, efficient and timely recording systems for supplies are essential.
Although the Nsambya CHBC has organizational know-how in place that could be extended to benefit health problems other than HIV and TB, some donors and partners prefer to fund only specific aspects of the programmes. That state of affairs creates some difficulties in the donor-partner relationships and does not contribute to the realization of the full potential of CHBC. With respect to the general health system, referral networks are generally fragile, operational guidelines lack visibility, and the disjointed nature of health information systems makes it a daunting task to track patients lost to care, especially when they relocate to different cities, towns, or villages.
We recommend the following steps for the relevant authorities to consider in strengthening and expanding implementation of CHBC programmes: (i) government cofunding and political commitment to scale up CHBC and ensure continuity of support in the face of changes in the donor-partner relationships, (ii) streamlining the existing patient tracking system to make it sensitive for tracking all patients in care in the Nsambya CHBC, (iii) strengthening of the referral networks through national guidelines and resource allocation as well as research, and (iv) a critical assessment of how the CHBC models impact on the general health system.
Limitations of Study.
Potential limitations of this study include issues with documentation (incomplete data), availability, and data quality. Consequently, we believe the reported number of patients ever enrolled in the Nsambya CHBC could be an underestimate, possibly due to missing data from worn-out paper-based registers before electronic databases became available. To deal with missing data, we relied on reported data from the NACP and NTLP, presented in global HIV and TB reports, for the comparisons whenever possible. That meant that only periods with available reported data could be compared. These limitations were accommodated by providing the periods for the various analyses in the text under Results. Data on the average bed occupancy rate came from a secondary source (an unpublished hospital report), which did not provide the standard deviation for the reported mean. In addition, portions of the relevant paper-based registers for 1987-88 hospital admissions have worn out over the years, resulting in missing data. Therefore, the primary data could not be accessed for analyses.
We also recognize that the national averages level off diversity in the data and their sources, and we therefore believe that the observed differences in outcomes may not be solely due to the CHBC approach but possibly also to other factors, which we were unable to explore.
Conclusions
We conclude that the Nsambya CHBC complements national HIV and TB management efforts and resulted in more positive results for several HIV and TB outcomes when compared to the national averages. The findings could reflect the results of 25 years of evolution of the Nsambya CHBC, from preparing PLHA for death in the pre-ART era to keeping them alive through life-prolonging ART and long-term follow-up measures. This is a process that entailed regular review of the approach, community involvement and additional interventions to mitigate some of the negative impacts of HIV/AIDS, while adopting measures and strategies that have contributed to health system strengthening in the country. This approach may hold potential for chronic disease management in resource-limited settings. Scaling up CHBC could have wider positive impacts on the management of not only HIV and TB but also other chronic diseases, as well as the general health system. A complex and powerful long-standing "faith-based solidarity" among international donors and partners has been pivotal to the survival and evolution of the Nsambya CHBC.
This work would not have materialized without the contributions of many people, and the authors are grateful to Agnes Alowo, Allen Victor Nagawa, Dan Kimbowa, Isaac Musoke, Francis Sozzi, Maria Kanyesigye, Jamilla Namaala, Christine Namutebi, RoseMary Alwenyi, Grace Anzoyo, and Brian Kawere. The authors appreciate the support of the secretary of the department (Susan Nakayiga), the hospital, and indeed all the staff. Finally, the authors thank the Provincia Autonoma di Trento and Regione Trentino Alto Adige for funding NHC since 2006 through Casa Accoglienza alla Vita Padre Angelo, and the PENTA Foundation for supporting the interventions.
Figure 2: Illustration of the key players in the Nsambya CHBC model, the vital components, and linkages to outcomes. Conceptual framework of the Nsambya CHBC, the context within which it operates, and functional connections to all stakeholders including beneficiaries.
2.1. Study Design, Setting and Population. This retrospective observational study compared HIV and TB outcomes from the Nsambya CHBC to national averages reported by the National TB and Leprosy Programme (NTLP) and the National AIDS Control Programme (NACP) over five years (2007-2011). The study was conducted at St. Raphael of St. Francis Hospital (Nsambya Hospital), Home Care Department, in Kampala, Uganda. Nsambya Hospital is a faith-based private-not-for-profit facility owned by the Catholic Archdiocese of Kampala and accredited by both the ministry of health (MoH) and the Uganda Catholic Medical Bureau (UCMB). It is a general tertiary referral hospital with a bed capacity of 361, involved in research and training of postgraduate doctors, nurses, midwives, and laboratory technicians.
Approximately 4,000 new TB cases were detected and managed from 2007 to 2011. Adults constituted 92.3%, females 51.0%, and children 7.7% of the cases. On average, 95% of TB patients from the Nsambya CHBC were tested for HIV, as against 72% nationally. From 2007 to 2010, the Nsambya CHBC recorded an average cure rate of 54.6% for new smear-positive TB patients, while the national average was 30.8%.
Table 3: Comparison of TB defaulters (adults and children) receiving treatment in the Nsambya CHBC by HIV-TB coinfection and ART status, Kampala (2007-2010).
TERT Genotype Polymorphism: A Glance of Change Egyptian MDS Outcomes
Background: Myelodysplastic syndromes (MDS) are clonal hematologic disorders characterized by genetic instability and ineffective hematopoiesis associated with telomere dysfunction. We aimed to investigate the association between the rs2242652 single nucleotide variant of the TERT gene and susceptibility for MDS, as well as its prognostic impact and relation to disease phenotype. Methods: Genotyping analysis was carried out on 100 MDS patients recruited at the Mansoura Oncology Center, in addition to 100 healthy subjects, for detection of the rs2242652 variant of the TERT gene on chromosome 5 by real-time PCR following the Custom TaqMan® SNP Genotyping protocol. Results: The rs2242652 TERT genetic polymorphism was associated with an increased risk of MDS (odds ratios 2.6 for genotype GA, 6.4 for genotype AA). The majority of AA homozygous mutant variants were associated with pancytopenia (88%), poor-risk cytogenetics (92%) and a high/very high IPSS-R score (88%). At the end of follow-up (median 30 months), 14% of the cases transformed to secondary AML. The rate of leukemic transformation was significantly associated with the mutant AA genotype (93% of transformed cases, 52% of AA genotype cases; P < 0.0001). Survival was inferior in the AA mutant genotype (median 14 months, 95% CI: 12-16 months) compared with the GA genotype (median 30 months, 95% CI: 26-33 months) and the GG genotype (median not reached), P < 0.001. Conclusion: Our study shows an intriguing and previously unrecognized association between the rs2242652 TERT mutation and MDS risk. The presence of the rs2242652 mutation defines a subgroup of patients with an aggressive disease phenotype and dismal outcome. Further research is recommended to elucidate the underlying pathologic mechanisms and to define an efficient therapeutic target.
Introduction
MDS represent clonal hematological disorders that typically affect the elderly. They arise as a consequence of genetic mutations (most frequently chromosomal alterations) in a pluripotent hematopoietic stem cell. Clinical manifestations of MDS are influenced by cytopenia-related complications and by progression to AML (Sperling et al., 2017).
Telomeres are repetitive hexanucleotide (TTAGGG) sequences that bind many specialized proteins at the ends of chromosomes (Yang et al., 2012). Telomeres prevent coding-sequence erosion and protect chromosomal stability by aiding complete chromosome replication and regulating gene expression (Stewart et al., 2012).
Telomerase is an RNA-dependent DNA polymerase that synthesizes telomeres. The telomerase complex consists of an RNA template (TERC) and an enzymatic subunit (telomerase reverse transcriptase, TERT) (Jafri et al., 2016). A reduced level of TERC is sufficient to cause telomere diseases, such as dyskeratosis congenita, aplastic anemia, and idiopathic pulmonary fibrosis, while upregulation of TERT expression has a critical role in tumor formation and chemotherapy outcome (Townsley et al., 2014). Enhanced telomerase activity has been reported in the majority of cancers and is associated with immortalization and resistance to apoptosis through elongation of telomeres (Jafri et al., 2016).
Accumulating evidence has shown that altered telomeres play a crucial role in bone marrow failure, leukemias and MDS (Lansdorp, 2017; Menshawy et al., 2020). Telomere length is affected by the presence of single nucleotide variants (SNVs) in the TERT gene influencing its activity and/or expression (Mosrati et al., 2015a; Ozturk et al., 2017).
It has been suggested that telomerase activity and TERT expression may have a role in the pathogenesis of MDS and impact the prognosis of MDS patients (Vasko et al., 2017). However, the SNV rs2242652 in the TERT gene has not been explored in MDS. Our study adopted a genotype-based approach, with the objective of determining the contribution of the rs2242652 allele of the TERT gene to MDS risk in the Egyptian population. We also aimed to address its impact on the prognosis of MDS as well as its association with disease characteristics and leucocyte telomere lengths.
Patients and Methods
Patients
This study was carried out on 100 MDS patients (54 males, 46 females) recruited at the Mansoura University Oncology Center from April 2015 until March 2018, in addition to 100 age- and sex-matched healthy subjects as a reference control. Diagnosis of MDS was established according to the 2008 WHO diagnostic criteria (Vardiman et al., 2009). Informed consent was obtained from all participants or their guardians, and the study was approved by the Mansoura Medical Ethics Committee (MMEC) of the Faculty of Medicine. Follow-up was at least 2 years to assess prognosis and outcome.
Sampling
DNA was extracted from peripheral leucocytes using the Thermo Scientific GeneJET Whole Blood Genomic DNA Purification kit according to the manufacturer's instructions. The extracted DNA was preserved frozen at −20 °C. DNA samples were quantified on a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific Inc., Wilmington, NC, USA).
Genotyping analysis by real-time PCR
Extracted DNA with a specific SYBR® Green master mix was used for detection of the rs2242652 variant of the TERT gene on chromosome 5, following the protocol of the Custom TaqMan® SNP Genotyping Assays under the guidance of the manufacturer's kit. For quality-control purposes, the order of the amplified DNA samples was randomized on the plate, with duplication of 5 samples across all runs to verify the findings. PCR plates were acquired on a DNA-Technology DTprime 4 real-time instrument, software v7.6.
Leucocyte telomere length assay
Relative telomere length was measured by real-time quantitative PCR following the technique of Cawthon et al. (2002).
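The relative telomere length in this type of assay is derived from the telomere (T) and single-copy gene (S) amplification signals; a minimal sketch of the underlying relative-quantification calculation is shown below, with purely hypothetical Ct values.

```python
def relative_ts(ct_tel, ct_scg, ct_tel_ref, ct_scg_ref):
    """Relative telomere length (T/S ratio) via the 2**-ddCt approach:
    dCt = Ct(telomere) - Ct(single-copy gene), referenced to a calibrator sample."""
    d_sample = ct_tel - ct_scg
    d_reference = ct_tel_ref - ct_scg_ref
    return 2 ** -(d_sample - d_reference)

# Hypothetical Ct values for one patient sample and the reference DNA:
print(relative_ts(ct_tel=18.2, ct_scg=20.1, ct_tel_ref=19.0, ct_scg_ref=20.0))
# ~1.87, i.e. telomeres ~1.9x longer than the calibrator in this made-up example.
```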
Cytogenetic analysis
Interphase fluorescence in situ hybridization (FISH) for del(5q)/−5, del(7q)/−7, trisomy 8, del(20q) and trisomy 1/1q+ was performed on mononuclear cells of BM aspirates according to the manufacturer's instructions. Conventional cytogenetics was carried out by G-banding, based on culture and harvest methodology with trypsin and Giemsa stain. The probes were purchased from Vysis (London, UK), and at least 100 metaphases per case were analyzed by expert, highly specialized staff at an internationally (Canadian) accredited laboratory. Cell images were captured with a CCD camera (Photometrics SenSys) using the CytoVision image analysis system (Applied Imaging).
Statistical analysis
Data were analyzed using IBM SPSS for Windows, version 19.0. A two-sided p value of <0.05 was required for statistical significance. The chi-square test was used for testing relations between categorical variables. The Mann-Whitney U test or Kruskal-Wallis H test was used for comparisons between two or more groups. Survival was estimated by the Kaplan-Meier method, and the log-rank test was used for comparisons. Independent hazards of different prognostic factors were tested with the Cox regression model.
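To make the odds-ratio computation used in the results concrete, the sketch below derives an odds ratio with a Woolf (log-scale) confidence interval from a 2x2 table; the genotype counts are hypothetical, chosen only to roughly reproduce the reported OR of 6.4 (95% CI: 2.6-16) for the AA genotype.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI for the 2x2 table
    [[cases exposed, controls exposed], [cases unexposed, controls unexposed]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: AA genotype in cases/controls vs GG in cases/controls.
or_, (lo, hi) = odds_ratio_ci(a=32, b=9, c=30, d=55)
print(f"OR = {or_:.1f} (95% CI: {lo:.1f}-{hi:.1f})")  # ~6.5 (2.8-15.4)
```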
Results
The study was conducted prospectively on 100 cases with MDS in addition to 100 control subjects. The mean age of the studied cases was 56±11 years, including 54% males and 46% females. Control subjects were matched with the cases for age (mean 58±9 years; p=0.2) and sex (males 55%, females 45%; p=0.9). The baseline characteristics of the studied cases are shown in Table 1. The genotype distribution of the TERT polymorphism in the control group did not deviate from Hardy-Weinberg equilibrium (P=0.1). Analysis of the differences in the frequency distributions of genotypes and alleles between cases and controls showed that TERT mutations were associated with a significantly increased risk of MDS; odds ratios were significant for both the genotype distribution (2.6 for genotype GA, 6.4 for genotype AA) and the allele distribution (1.7 for A), as shown in Figure 1.
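The Hardy-Weinberg check reported for the control group can be reproduced with a one-degree-of-freedom chi-square goodness-of-fit test; the sketch below uses hypothetical control genotype counts, since the exact counts appear only in the paper's tables.

```python
from scipy.stats import chi2

def hwe_chi2(n_gg, n_ga, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium (df = 1)."""
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)       # frequency of allele G
    q = 1 - p                             # frequency of allele A
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_gg, n_ga, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

print(hwe_chi2(n_gg=55, n_ga=36, n_aa=9))  # hypothetical counts; p > 0.05 means no deviation
```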
The mutant homozygous genotype (AA) and the heterozygous genotype (GA) were significantly associated with older age (mean 64±10 and 60±8 years, respectively) versus the GG genotype (mean 47±6 years, P=0.001), as shown in Figure 2A. A significantly shorter telomere length was found in the genotypes carrying the mutant allele (Figure 2B).
At the end of follow-up (median 30 months), 14 cases (14%) had transformed to secondary AML. The rate of leukemic transformation was significantly associated with the mutant AA genotype (93% of transformed cases; 52% of AA genotype cases; P < 0.0001). The median overall survival of the studied cases was 38 months (95% CI: 32-44 months). The survival outcome was inferior in the AA mutant genotype (median 14 months, 95% CI: 12-16 months) compared with the GA genotype (median 30 months, 95% CI: 26-33 months) and the GG genotype (median not reached), log rank = 60, P < 0.001 (Figure 5). Univariate Cox regression analysis identified six significant prognostic variables (HR 23, 95% CI: 11.7-36.4 for the mutant genotype AA). However, in multivariate analysis the genotype distribution was not independently associated with an impact on overall survival (Table 2).
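A minimal sketch of this kind of survival comparison, using the lifelines package with simulated follow-up data (the event times below are hypothetical placeholders, not the study data):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
# Hypothetical follow-up times (months) and death indicators for two genotypes.
t_aa = rng.exponential(scale=14, size=25)   # AA: short median survival
e_aa = np.ones(25, dtype=bool)              # assume all events observed
t_ga = rng.exponential(scale=30, size=40)   # GA: longer median survival
e_ga = rng.random(40) < 0.7                 # ~30% censored

kmf = KaplanMeierFitter()
kmf.fit(t_aa, event_observed=e_aa, label="AA")
print("AA median survival:", kmf.median_survival_time_)

result = logrank_test(t_aa, t_ga, event_observed_A=e_aa, event_observed_B=e_ga)
print("log-rank p-value:", result.p_value)
```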
Discussion
The activation of telomerase is a vital step during cellular immortalization and malignant transformation in human cells, and many human malignancies are characterized by elevated TERT expression (Young, 2010;Ye et al., 2017). The TERT gene sequence in general is thought to be indicative of an individual's susceptibility to cancer, and epidemiological studies have identified associations between specific TERT polymorphisms and cancer development (Yin et al., 2012). It has been reported that the rs2242652 allele of TERT influences telomere length, which has in turn been linked to a number of diseases including cancers (Heidenreich and Kumar, 2017;Wu et al., 2017a;Yang et al., 2019a;Roggisch et al., 2020).
In this study, we investigated the association between the rs2242652 SNP variant of the TERT gene and susceptibility for MDS and its relation to the clinicopathologic features of the disease. We found that the rs2242652 TERT genetic polymorphism was associated with an increased risk of MDS in an Egyptian patient population (odds ratios 2.6 for genotype GA, 6.4 for genotype AA and 1.7 for the allele distribution A). The association of the rs2242652 SNP with MDS risk may in part be explained by its influence on telomere function, resulting in chromosomal instability. When genomic instability ensues, the vast majority of cells undergo apoptosis, although on occasion a cell may survive and become tumorigenic (Stewart et al., 2012). In addition, beyond immortalization, TERT also possesses telomere-independent functions in tumor formation, regulating Wnt-dependent transcription, mitochondrial function, apoptosis, and the DNA damage response. It has also been shown that TERT interacts with NF-κB and co-activates the expression of several genes that are critical for cancer progression (Mosrati et al., 2015b; Ozturk et al., 2017).
Figure 1. (A) The distribution of genotypes in control subjects in relation to Hardy-Weinberg equilibrium (p=0.1); (B) the distribution of genotypes in MDS cases versus control subjects (odds ratio for GA genotype 2.6, 95% CI: 1.4-5; odds ratio for AA genotype 6.4, 95% CI: 2.6-16); (C) the frequency of the mutant allele A in MDS cases and control subjects (odds ratio for allele A 1.7, 95% CI: 1.4-2).
A significant interaction was found between genotype and age in MDS cases: the mean age was significantly higher in the AA homozygous mutant genotype (64±10 years), followed by the GA heterozygous type (60±8 years), with the youngest age group found in the GG homozygous type (47±6 years), p=0.001. A concordantly and significantly shorter telomere length was found in the genotypes associated with older age (mutant allele).
When the data were analyzed with respect to disease-related parameters, the mutant genotype was associated with a significant reduction in WBC and platelet counts and a significant elevation in bone marrow blast cells. The majority of AA homozygous mutant variants were associated with pancytopenia (88%), poor-risk cytogenetics (92%) and a high/very high IPSS-R score (88%). Collectively, these data indicate that this genotype represents a high-risk abnormality linked to aggressive clinicopathologic features of MDS. This association has not been previously explored in MDS; meanwhile, in solid tumors TERT mutations have been associated with significantly poor clinical parameters that predict poor prognosis and may represent a novel therapeutic target (Shimoi et al., 2018).
At the end of follow-up (median 30 months), 14 cases (14%) transformed to secondary AML. The rate of leukemic transformation was significantly associated with the mutant AA genotype (93% of transformed cases, 52% of AA genotype cases; P < 0.0001). The biological mechanism that could account for this strikingly high rate of transformation and clinical aggressiveness is the development of rapidly acquired genetic changes that may promote progression to AML as a consequence of genomic instability.
The overall survival was inferior in the AA mutant genotype (median 14 months, 95% CI: 12-16 months) compared with the GA genotype (median 30 months, 95% CI: 26-33 months) and the GG genotype (median not reached), P < 0.001. However, in multivariate analysis the genotype distribution did not add prognostic information to the IPSS-R score after adjusting for age, cytopenia, cytogenetics, and bone marrow blast cells. This suppression of the prognostic value of the mutant genotype could be explained by its strong association with a poor IPSS-R score.
Some limitations of the current study have to be taken into consideration when interpreting the results. First, the sample size was relatively small. Second, the relation of this molecular marker to other genetic and epigenetic prognostic markers that have been defined in MDS was not examined. Lastly, the predictive value of this molecular marker was not evaluated in relation to different treatment modalities.
In conclusion, our study shows an intriguing and previously unrecognized association between the rs2242652 TERT mutation and MDS risk. The presence of this mutation defines a subgroup of patients with an aggressive phenotype, a strikingly high frequency of transformation to AML and short survival. Finally, the MDS-associated molecular marker identified here might be useful as a prognostic biomarker and potential therapeutic target, warranting future studies.
Author Contribution Statement
Nadia El Menshawy: design the study, organize team
Nutrient content, fat yield and fatty acid profile of winter rapeseed (Brassica napus L.) grown under different agricultural production systems
Quality features of rapeseed (Brassica napus L.) and the potential for high yields may to a major extent be defined by improvements in agricultural engineering methods that encompass biological progress. However, this is associated with fertilization and the application of pesticides, which may negatively impact the environment and quality. It is thus essential to develop and improve edible oil production systems that are both satisfying to the farmer and non-threatening to the consumer. The aim of this study was to evaluate the content of nutrients, fat yield and fatty acid profile of rapeseed grown in two cropping systems with three levels of agricultural inputs. Three technology levels were used: economical (low-input), moderately intensive (medium-input) and intensive (high-input), which varied in the amounts of N and S fertilization as well as in protection against pests. The medium- and high-input technologies applied in the monoculture contributed to an increased proportion of oleic acid in rapeseeds (by 5.7% and 5.5%), whereas the low-input and high-input technologies resulted in an increased proportion of linoleic (by 11.6% and 2.1%) and linolenic acid (by 6.6% and 5.0%) in the monoculture rapeseeds. The medium-input level generated an increased proportion of arachidic (from 6.9% to 15.0%), octadecanoic (by 4.9%), linoleic (by 7.0%), linolenic (by 5.1%) and eicosadienoic fatty acids (by 17.7%) in rapeseeds cultivated in the crop rotation system. The increase in technological input level changed the proportions of the polyunsaturated linoleic and linolenic acids by 5.1% and 7.4% (p < 0.05) in the crop rotation system and by 4.2% and 7.9% in the monoculture system. In general, the crop sequence system was found to have an insignificant impact on the content of macronutrients and trace elements in rapeseeds. The highest fat yield was obtained with the crop rotation system at the highest input level, whereas the lowest yield was recorded in the low-input monoculture technology.
INTRODUCTION
Oil and protein are the basic raw materials derived from rapeseed (Brassica napus L.). Rapeseed oil is an important source of energy in human nutrition (Omidi et al., 2010), and defatted rapeseed meal is used as feedstuff (Baltrukoniene et al., 2015). Rapeseed oil is a distinguished edible oil, owing in part to a relatively high proportion of unsaturated fatty acids such as linoleic acid (C18:2) and α-linolenic acid (C18:3), which are classified as essential unsaturated fatty acids (EFAs) and have been associated with blood lipid profiles linked to a lower risk of coronary heart disease (Narits, 2010; Ntawubizi et al., 2010). According to Zatonski et al. (2008), rapeseed oil has a lower content of saturated fatty acids than other oil plants and a relatively high content of the basic fatty acids C18:2 and C18:3 at an optimal 2:1 ratio. The value of rapeseed as a source of vegetable oils and proteins may be improved by increasing the oil content, modifying the composition of fatty acids in the oil, and reducing the anti-nutritional compounds, mainly fiber and glucosinolates, in rapeseed meal (Liersch et al., 2013).
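As a small illustration of the 2:1 ratio mentioned above, the sketch below computes the C18:2 to C18:3 ratio from a hypothetical fatty-acid profile (the percentages are illustrative, not measured values from this study):

```python
# Hypothetical fatty-acid profile of rapeseed oil (% of total fatty acids).
profile = {
    "C16:0 (palmitic)": 4.5,
    "C18:0 (stearic)": 1.8,
    "C18:1 (oleic)": 62.0,
    "C18:2 (linoleic)": 19.5,
    "C18:3 (alpha-linolenic)": 9.8,
}

ratio = profile["C18:2 (linoleic)"] / profile["C18:3 (alpha-linolenic)"]
print(f"C18:2 : C18:3 ratio = {ratio:.1f} : 1")  # close to the optimal ~2:1 cited above
```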
The quality features of rapeseeds and the potential for high yielding to a major extent may be defined by improvements in agricultural engineering methods that encompass biological progress.However, this is associated with intense fertilization and the application of large amounts of pesticides, which may negatively impact the consumer.It is thus essential to develop and improve edible oil production systems to make them both satisfying to the farmer and non-threatening to the consumer (Velicka et al., 2016).Fertilizer applications, especially on nutrient deficient soils, can therefore increase crop yields and quality (Albert et al., 2012;Malhi, 2012).Both macro and micronutrients are essential to proper crop growth, but N and S are the most limiting nutrients (Ngezimana and Agenbag, 2014).Hegewald et al. (2016) noted the importance of crop rotation to maintain seed yield and oil yield of oilseed rape, and to maximize the response to applied N. A reduced N-rate increased N-use efficiency and reduced the risk of high-N surpluses without a significant/equivalent decrease in the seed yield when the rotation was optimized.
New rapeseed cultivars characterized by high and reliable yields, together with improvements in agronomic practices, increase profits, contribute to faster crop rotation and enable growing crops in monocultures (Cwalina-Ambroziak et al., 2016). Despite the above, repeated cultivation of the same crop in the same field could have negative effects, such as frequent pest infestations, including plant pathogens (Mohammadi and Rokhzadi, 2012). This problem can be addressed by reversing soil fatigue through the introduction of new cultivars and technologies suited to their requirements (Sieling and Christen, 2015). An increased level of fertilization, especially with N, is always associated with a need to improve the efficacy of plant protection (Cwalina-Ambroziak et al., 2016). Crop rotation and optimal rates of N (Rathke et al., 2006) and S fertilization (Sienkiewicz-Cholewa and Kieloch, 2015) are of key importance in reducing pathogenic infections in rapeseed.
The aim of this study was to evaluate the nutrient content, fat yield and fatty acid profile of rapeseed grown in a 5-yr monoculture and, after a 4-yr break, in a crop rotation system, each with three levels of agricultural inputs.
Site and experimental set-up
The research facility is located in the Central European Lowlands, in the sub-area of the South Baltic Lagoon, in the Ilawa Lake District. The study area is characterized by a young glacial landscape within the range of the ice sheet of the Pomeranian phase of the Vistula glaciation. Winter rapeseed (Brassica napus L.) was grown in monoculture and in crop rotation in Balcyny (53°36' N, 19°51' E), Poland, in 2009-2013. The field experiment was set up on loess soil of class IIIa (arable soil of good quality). Topsoil (Ap) was made up of heavy loamy sand, and the E-horizon consisted of clay underlain by light loam in the illuvial horizon (Bt). According to the World Reference Base for Soil Resources (WRB, 2014), this corresponds to a Luvisol. The soil was slightly acidic (pH 6.6 in KCl solution), with a total N content of 0.95 g kg⁻¹ and a total organic C content of 10.05 g kg⁻¹. Soil concentrations of plant-available macronutrients were 93.3 mg P kg⁻¹, 185.4 mg K kg⁻¹, 58.5 mg Mg kg⁻¹, and 550 mg Ca kg⁻¹. Soil nutrients were determined according to the valid standards and standard methods applied in Poland: total N by the Kjeldahl method; P and available K by the Egner-Riehm method in calcium-lactate extract ((CH3CHOHCOO)2Ca) acidified with hydrochloric acid to pH 3.6; available Mg by atomic absorption spectrometry (AAS) after extraction of soil with 0.01 mol CaCl2 × 10⁻³ m³; and Ca by the universal method of extraction with 0.003 N acetic acid. Soil pH was determined electrometrically in a solution of 1 M KCl, and humus content by the Tiurin method.
Before the experiment, a mixture of spring cereals (oats, barley, wheat) was sown in all plots for green fodder, without fertilization. The results presented in this study were recorded in the fifth year of the experiment, in the rapeseed monoculture (2013) and in the crop rotation: 2009 winter rapeseed, 2010 winter wheat (Triticum aestivum L.), 2011 field bean (Vicia faba L.), 2012 spring wheat (T. aestivum), and 2013 winter rapeseed. The open-pollinated rapeseed 'Californium' was grown; seeds (4.5 kg ha⁻¹) were sown on 20 August and dressed with the insecticides imidacloprid 200 g and cypermethrin 50 g (Brasikol C 250 FS, Z.P.U.H. "Best-Pest", Jaworzno, Poland) and the fungicides thiram 332 g and carbendazim 148 g (Funaben T 480 FS, Organika-Azot S.A., Jaworzno, Poland). Plants were harvested in the first half of July. Three levels of technology were used: economical (low-input), moderately intensive (medium-input) and intensive (high-input), which varied in the amounts of N and S fertilization as well as in protection against pests. The applied fertilizer and pesticide treatments are given in Table 1. The experiment had a randomized block design with three replicates. The plot size was 12.0 m², and the harvested plot area was 9.0 m².
Yield and content of macro and microelements
At the end of the experiment, seeds were collected, dried and purified. Rapeseed seeds were harvested from the experimental plot area (9.0 m²) and the yield was calculated in tons per hectare at 15% moisture content, as sketched below.
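A minimal sketch of this standard conversion, assuming the plot harvest weight and measured seed moisture are known; the function name and the example values are illustrative, not figures from this experiment:

```python
def yield_t_per_ha(plot_weight_kg: float, plot_area_m2: float,
                   moisture_pct: float, standard_moisture_pct: float = 15.0) -> float:
    """Convert a plot harvest weight to t/ha at a standard moisture content."""
    # Rescale the fresh weight to the standard (15%) moisture basis.
    standardized_kg = plot_weight_kg * (100.0 - moisture_pct) / (100.0 - standard_moisture_pct)
    # Extrapolate from the harvested plot area to 1 ha (10,000 m^2), kg -> t.
    return standardized_kg * (10_000.0 / plot_area_m2) / 1000.0


# Hypothetical example: 4.8 kg of seed at 11% moisture from a 9.0 m^2 plot.
print(round(yield_t_per_ha(4.8, 9.0, 11.0), 2))  # ~5.58 t/ha at 15% moisture
```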
Seed samples (1 kg) were taken from each plot and subjected to chemical analysis for the content of macro- and microelements.

Table 1. Treatments carried out in the winter rapeseed (Brassica napus) plot experiment.
Oil extraction and analysis
Fat content was determined by near-infrared spectroscopy (NIR) (Infratec 1241 Grain Analyzer, Foss, Hillerod, Denmark), which measures transmittance in the near-infrared region (570-1050 nm). Analysis of the fatty acids was performed following cold extraction of rape oil with chloroform/methanol (2:1 v/v). Fatty acid methyl esters (FAME) were prepared according to Zadernowski and Sosulski (1978) using a mixture of chloroform:methanol:sulphuric acid (100:100:1, v/v/v). Chromatographic separation was performed on a gas chromatograph (Agilent 7890A, Agilent Technologies, Wilmington, Delaware, USA) with a flame-ionization detector (FID) and a 30 m × 0.32 mm internal diameter capillary column. The stationary phase was Supelcowax 10, with a film thickness of 0.25 µm.
The conditions of separation were as follows: helium as the carrier gas at a flow rate of 1 mL min⁻¹; detector temperature 250 °C; injector temperature 230 °C; column temperature 195 °C. The individual acids were identified by comparing retention times with standards from Supelco (Bellefonte, Pennsylvania, USA). The fatty acid content is presented as the relative percentage (% of total fatty acids) in rape oil; the normalization is illustrated below.
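The normalization itself is simple area-percent arithmetic. A minimal sketch, with hypothetical FID peak areas rather than values measured in this study:

```python
# Hypothetical FID peak areas (arbitrary units) keyed by fatty acid.
peak_areas = {"C16:0": 4.6, "C18:0": 1.7, "C18:1": 62.0, "C18:2": 19.5, "C18:3": 9.1}

# Area-percent normalization: each peak expressed as a share of the total area.
total_area = sum(peak_areas.values())
for fatty_acid, area in peak_areas.items():
    print(f"{fatty_acid}: {100.0 * area / total_area:.1f}% of total fatty acids")
```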
Weather conditions
Poland's climate can be described as a temperate climate, which is greatly influenced by oceanic air currents from the west, cold polar air from Scandinavia and Russia, as well as warmer, sub-tropical air from the south.
The mean monthly air temperatures (from winter rapeseed sowing until the end of November) were at a similar level to those of the analogous long-term periods (Table 2). The drought recorded in August (precipitation lower by 44.9 mm than in the analogous periods) might have hindered seed germination, but the precipitation levels in the following months secured good plant growth before wintering. During the wintering period (December-March), when water resources should be accumulated for spring growth, precipitation was lower by 41.5 mm in comparison with the analogous periods in 1981-2010.
Following thaws, there were ground frosts in March, which presented a risk of plant damage due to the thin snow cover. Weather conditions also did not favor plant development and growth at the stages from budding to silique formation, BBCH 53-79 (BBCH Monograph, 2001). The recorded precipitation between April and June was lower by 50.9 mm (30.8% lower than in the analogous long-term periods) and remained below the requirements of winter rapeseed.
Statistical analyses
The results were statistically processed in Statistica 10.0 (StatSoft, Tulsa, Oklahoma, USA) using one-way ANOVA. Basic parameters and homogeneous groups were determined by Tukey's test at p = 0.05. The relationships between seed yield, fat content, the contents of N, P, K, Mg, Ca, Cu, Fe, Zn and Mn, and fat yield on the one hand, and saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acids (PUFA) on the other, were described by linear regression analysis. A sketch of an equivalent pipeline follows.
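A minimal sketch of an equivalent analysis pipeline in Python (the study itself used Statistica); the three replicate values per technology level and the noise term are hypothetical:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical seed yields (t/ha), three replicates per technology level.
low, medium, high = [4.1, 4.3, 4.0], [5.0, 5.2, 4.9], [5.9, 6.2, 6.0]

# One-way ANOVA across the three input levels.
f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's test at alpha = 0.05 to identify homogeneous groups.
values = np.concatenate([low, medium, high])
groups = ["low"] * 3 + ["medium"] * 3 + ["high"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Linear regression, e.g. fat yield on seed yield (fat ~ 47% of seed here).
rng = np.random.default_rng(0)
fat_yield = 0.47 * values + rng.normal(0.0, 0.05, values.size)
slope, intercept, r, p, se = stats.linregress(values, fat_yield)
print(f"fat yield vs seed yield: r = {r:.3f}")
```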
Content of macro and microelements
The chemical analysis of winter rapeseeds demonstrated that, regardless of production technology, the average content of minerals in the fifth year of monoculture was as follows: 29.9 g N kg⁻¹, 0.595 g P kg⁻¹, 1.12 g K kg⁻¹, 0.298 g Mg kg⁻¹, 0.55 g Ca kg⁻¹, 3.18 mg Cu kg⁻¹, 115.6 mg Fe kg⁻¹, 42.8 mg Zn kg⁻¹, and 38.2 mg Mn kg⁻¹. In the fifth year of the crop rotation, after the break in rapeseed cultivation, it was: 29.2 g N kg⁻¹, 0.562 g P kg⁻¹, 1.12 g K kg⁻¹, 0.302 g Mg kg⁻¹, 0.392 g Ca kg⁻¹, 3.47 mg Cu kg⁻¹, 113.3 mg Fe kg⁻¹, 44.6 mg Zn kg⁻¹, and 42.4 mg Mn kg⁻¹ (Table 3). These results are comparable in P and Cu content, higher in N, Mg, Fe, Mn and Zn, and lower in the other elements compared to the data reported by Fordonski et al. (2015).
The content of N, P, K, and Mg in rapeseed did not differ significantly depending on its place in the crop sequence.
The level of Ca was higher (by 17.0%) in rapeseed produced in the medium-input monoculture system than with the crop rotation technology. However, the content of this element was lower (by 34.9%) in the high-input crop rotation system than in the monoculture. Considering the intensity of the agricultural engineering procedures, only the highest level (high-input) generated a significant increase (10.0%) in N accumulation in the rapeseed compared to the medium-input technology.
Similarly, Ca content in the crop rotation was significantly higher with the high-input than with both the low-input and medium-input technologies. In the fifth monoculture year, a higher Ca content was recorded with the medium-input technology than at the other levels (26.5% higher on average). Depending on the cropping method, a generally higher content of microelements was measured in rapeseeds grown in the crop rotation system; however, a significantly higher content of Mn was found only for the low-input crop rotation technology, and of Zn and Mn in the medium-input system. The low-input crop rotation system was an exception, with a significantly lower Fe content (by 7.8%).
The highest fertilization level (high-input) generated a significantly higher content of Zn and Mn in rapeseeds in both cropping systems. A significantly higher Fe level was recorded in rapeseeds grown in the medium-input crop rotation technology and in the low-input monoculture system.
Yield and fat content
Winter rapeseed cultivated for a number of years in the same field responds with a substantial reduction in seed yield, but when sown occasionally 2 yr in a row or in a short monoculture it may generate yields at a similar level as after cereal crops (Rozylo and Palys, 2011; Jaskulska et al., 2014). When cultivated in the monoculture and crop rotation systems here, winter rapeseed produced high seed yields, from 4.04 to 6.25 t ha⁻¹ (Table 4).
Regardless of the technology level, seed yield was higher by 18.6% with the crop rotation method than in the monoculture system. Significantly higher yields (by 47.3%) were obtained with the low-input crop rotation technology than with the low-input monoculture approach. Increasing the intensity of the agricultural technology diminished the yield differences between the crop rotation and monoculture systems, and resulted in significantly higher seed yields only in the crop rotation system. Jarecki et al. (2013) found that a higher level of agricultural engineering procedures, as compared with a lower input, generated a significant increase in seed yield of approximately 12%, resulting from a substantially higher number of siliques per plant and a higher thousand-seed weight. The oil content of mature winter rapeseeds ranges between 45% and 50% on average (Liersch et al., 2013). In the present study, winter rapeseeds of 'Californium' contained 47.2% fat on average (Table 4). The cropping method did not substantially modify the fat content of the rapeseeds. Moreover, the intensity level of the agricultural procedures did not affect the fat content, in line with the studies of Jarecki et al. (2013).
Table 4. Seed yield (t ha⁻¹), fat content (%) and fat yield (t ha⁻¹) of winter rapeseed by level of technology (low-input, medium-input, high-input).
According to nutritional studies, a proper ratio of n-6 to n-3 polyunsaturated fatty acids in the daily ration should range from 6:1 to 4:1, although according to the experts of the International Society for the Study of Fatty Acids and Lipids (ISSFAL), the n-6 to n-3 PUFA ratio in the diet should not exceed 4 (Ntawubizi et al., 2010). The percentage changes in the proportions of the polyunsaturated fatty acids linoleic acid (C18:2) and linolenic acid (C18:3) did not exert any significant impact on the C18:2/C18:3 ratio in rapeseed from either cropping system (Table 6). Greater differences in the C18:2/C18:3 ratio were observed across levels of agricultural engineering technology: an increased intensity of the technology significantly reduced the ratio of these acids in both the crop rotation and the monoculture system. The recorded proportions of linoleic to linolenic acid approximated the levels reported by Tanska et al. (2009).
The average content of SFA in the rapeseed oil was 7.41%, of PUFA approximately 28.2%, and of MUFA approximately 64.3%. Neither the level of technology nor the crop sequence affected the content of SFA with 14, 15, 16, 17, 18, 20 and 22 carbon atoms. The highest content of MUFA (66.1%) was recorded at the highest level of technology in the crop rotation system, and of PUFA (29.9%) in the low-input monoculture system. Rapeseed oil from the monoculture system contained a significantly higher amount of MUFA (medium-input) and PUFA (low- and high-input). Depending on the crop rotation system and technology level, the MUFA:PUFA ratio ranged from 2.1:1 to 2.5:1 and was similar to that of typical rapeseed oil (a sketch of the ratio calculations follows). According to Liersch et al. (2013), oil with a monounsaturated-to-polyunsaturated fatty acid ratio of 2:1 fits perfectly into the nutritional recommendations.
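A minimal sketch of these ratio calculations, using the average lipid fractions reported above and hypothetical C18:2 and C18:3 percentages:

```python
c18_2, c18_3 = 19.5, 9.1             # hypothetical % of total fatty acids
sfa, mufa, pufa = 7.41, 64.3, 28.2   # average fractions reported in the text

print(f"C18:2/C18:3 ratio: {c18_2 / c18_3:.1f}:1")  # ~2.1:1
print(f"MUFA:PUFA ratio: {mufa / pufa:.1f}:1")      # ~2.3:1, within the 2.1:1-2.5:1 range
```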
As the only lipid fraction, PUFA was correlated with seed yield (r = -0.472). The amount of PUFA increased with increasing Mg and Fe content (r = 0.514 and r = 0.553, respectively), whereas the PUFA fraction decreased with increasing Ca and Mn levels (r = -0.835 and r = -0.578, respectively).
CONCLUSIONS
In general, the place of winter rapeseed in the crop sequence was found to have an insignificant impact on the content of macronutrients and trace elements in seeds, except for the higher levels of Ca (high-input), Mn (low-input and medium-input) and Zn (medium-input) in rapeseeds from the crop rotation system, and the higher contents of Ca (medium-input) and Fe (low-input) in the monoculture system. The highest level of agricultural technology (high-input) in both systems resulted in a significant increase in the Zn and Mn content of seeds, and in the N and Ca levels in the crop rotation system.
The medium-input and high-input technologies applied in the monoculture contributed to an increased proportion of oleic acid (C18:1 c9) in rapeseeds, whereas the low-input and high-input technologies resulted in an increased proportion of C18:2 and C18:3 acids in the monoculture rapeseeds. The medium-input level generated an increased proportion of C20:0, C18:1 c11, C18:2, C18:3 and C20:2 fatty acids in rapeseeds cultivated in the crop rotation system.
The increase in the level of technological input significantly changed the ratio of the polyunsaturated fatty acids linoleic (C18:2) and linolenic (C18:3) in both the crop rotation and monoculture systems. The proportion of saturated fatty acids was positively correlated with the content of K and Mg. The level of monounsaturated fatty acids was positively correlated with P and Ca content and with the levels of Cu, Zn and Mn. The proportion of polyunsaturated fatty acids was positively correlated with the levels of Mg and Fe, but negatively correlated with seed yield and the content of Ca and Mn.
The oil content in winter rapeseeds ranged from 46.0% to 59.1%. The fat yield was strongly correlated with the seed yield (r = 0.929) but was independent of the fat content. The highest fat yield was generated with the crop rotation system at the highest input level, whereas the lowest yield was recorded in the low-input monoculture technology.
Continuous rapeseed cultivation does not have negative effects on seed yield and quality. Because the technological quality of the seed is determined by the amount of polyunsaturated fatty acids, it is advisable to use low-input technology.
The Relationship between Asthma and Depression in Primary Care Patients: A Historical Cohort and Nested Case Control Study
Background and Objectives Asthma and depression are common health problems in primary care. Evidence of a relationship between asthma and depression is conflicting. Objectives: to determine 1. the incidence rate and incidence rate ratio of depression in primary care patients with asthma compared to those without asthma, and 2. the standardized mortality ratio of depressed compared to non-depressed patients with asthma. Methods A historical cohort and nested case control study using data derived from the United Kingdom General Practice Research Database. Participants: 11,275 incident cases of asthma recorded between 1/1/95 and 31/12/96, age, sex and practice matched with non-cases from the database (ratio 1:1) and followed up through the database for 10 years. 1,660 cases were matched by date of asthma diagnosis with 1,660 controls. Main outcome measures: the number of cases diagnosed with depression and the number of deaths over the study period. Results The rate of depression in patients with asthma was 22.4/1,000 person years and without asthma 13.8/1,000 person years. The incidence rate ratio (adjusted for age, sex, practice, diabetes, cardiovascular disease, cerebrovascular disease, smoking) was 1.59 (95% CI 1.48–1.71). The increased rate of depression was not associated with asthma severity or oral corticosteroid use. It was associated with the number of consultations (odds ratio per visit 1.09; 95% CI 1.07–1.11). The age and sex adjusted standardized mortality ratio for depressed patients with asthma was 1.87 (95% CI: 1.54–2.27). Conclusions Asthma is associated with depression. This was not related to asthma severity or oral corticosteroid use but was related to service use. This suggests that a diagnosis of depression is related to health-seeking behavior in patients with asthma. There is an increased mortality rate in depressed patients with asthma. The cause of this needs further exploration. Consideration should be given to case-finding for depression in this population.
Introduction
After hypertension, asthma is the most common chronic illness in primary care in the United Kingdom, with a prevalence of 6% [1]. Depression also has a high prevalence in primary care, of between 5 and 10% [2]. Chronic physical health problems are reported to be associated with increased rates of depression [3].
Whether there is an association between asthma and depression is unclear. In secondary care populations up to 50% of patients with asthma have been reported to have clinically significant depressive symptoms and over a third of asthmatic outpatients have been found to have a major depressive episode [4,5,6,7,8,9]. The World Mental Health Survey found an age and sex adjusted odds ratio of 1.6 for depression in people with asthma compared to people without asthma [10,11]. However other researchers have failed to find an association [12,13,14], and there has been little research in primary care populations where the majority of people with these conditions are treated. Most studies to date have been cross-sectional in design and so the ability to explore potential associations has been limited. Longitudinal studies are needed to explore potential associations further.
The aims of this study were to determine the incidence of depression in primary care patients with asthma and the incidence rate ratio (IRR) of depression in this population compared to the general primary care population without asthma. We also wanted to explore potential mediators of this relationship and examine all cause mortality in patients with asthma and depression compared with non depressed patients with asthma.
We hypothesised that: 1. Primary care patients with asthma would have an increased incidence of depression compared with primary care patients without asthma, after adjusting for age, sex, social deprivation, other common chronic medical conditions (coronary heart disease, diabetes and cerebrovascular accidents), oral corticosteroid medication use and smoking status. 2. Primary care patients with asthma and depression would have a higher age and sex standardized mortality rate compared with primary care patients with asthma but no depression.
Study design
This was a historical cohort study with a nested case-control study using data derived from the General Practice Research Database.
General Practice Research Database
The UK General Practice Research Database (GPRD) is the world's largest database of anonymous longitudinal medical data from primary care (www.gprd.com). The database consists of the medical records of approximately 13 million primary care patients with 46 million years of validated data. In 1996, 480 practices across the UK contributed to the GPRD. Recorded data include diagnoses, clinical events, specialist referrals, prescription details, hospital admissions and outcomes. Each patient has a unique identifier which allows data held in 4 separate data-sets to be linked. The GPRD uses Oxford Medical Information System (OXMIS) codes and Read codes to store diagnostic information [15]. These are cross-referenced to the International Classification of Diseases (ICD9 and ICD10) by the UK National Health Service Information Authority.
The database is owned and managed by the Medicines and Healthcare Products Regulatory Agency (MHRA) in the UK. The quality of the data are regularly audited by the Office for National Statistics and only practices that are 'up-to-research standard' (UTS) are eligible to participate. The GPRD has been validated for use in respiratory epidemiology [16], and the results from the database for asthma are consistent with published studies on the epidemiology of asthma [17]. Comparisons of age and sex distributions are similar to those found in the National Population Census and the geographical distribution of practices participating in the GPRD is representative of the UK population [18].
Historical cohort study
All patients aged 16 years or over with an incident diagnosis of asthma recorded in their primary care record between 1st January 1995 and 31st December 1996, and who had at least 24 months of UTS data before the start of the study window, were identified from the GPRD and followed up until 31 December 2006. A medical diagnosis of asthma was defined by a Read/OXMIS code for asthma (codes available from the authors). Read/OXMIS codes for asthma can be cross-referenced to ICD-10 asthma codes.
All cases with a recorded medical diagnosis of depression (as defined by Read/OXMIS codes) or depressive symptoms before 1st January 1995 were excluded from the cohort. Additionally, cases with a recorded diagnosis of schizophrenia or bipolar affective disorder (as defined by Read/OXMIS codes) over the study period were also excluded (Read/OXMIS codes used available from the authors).
Patients were age (±2 years), sex and practice matched (ratio 1:1) with patients who had not received a diagnosis of asthma over the same study period, taken from the same GPRD base population of registered patients. GP practice was used as a proxy to control for socio-economic status, as this was not directly available from data in the GPRD. GP practice has been found to correlate with socioeconomic status in the United Kingdom, with practice postcode based Index of Multiple Deprivation (IMD) scores being correlated with population weighted IMD scores [19]. All patients had at least 24 months of UTS data prior to the index date of the case.
Patients were censored if they received a diagnosis of depression during the study period to prevent over-estimating any association due to recurrent depressive disorders in those that had already been diagnosed with depression during the study period.
Nested case control study
Cases were defined as patients from the cohort study with a diagnosis of asthma and depression (as defined by Read/OXMIS codes) during the study period. Controls were defined as patients from the cohort study with a medical diagnosis of asthma but without a diagnosis of depression (as defined by Read/OXMIS codes) during the study period. Cases were matched to controls (ratio 1:1) according to the date of asthma diagnosis (±1 month). The same inclusion and exclusion criteria were applied to both cases and controls and were the same as those for the cohort study.
Measures
The exposure of interest was a GP recorded diagnosis of asthma. The outcome of interest was a GP recorded diagnosis of depression. Data on smoking status (defined as ever having smoked), comorbidity with other common chronic illnesses (diabetes mellitus, cerebrovascular disease, coronary heart disease and congestive heart failure), anxiety disorders, asthma medication use (β-agonist use, inhaled corticosteroids and oral corticosteroids), and the number of GP consultations were obtained from the electronic medical records. The co-morbid illnesses were defined by a Read/OXMIS code for any of these diagnoses prior to the onset of depression in the cases, and prior to the same date in the matched control.
As has been validated in another study, the asthma medications patients had received in the year before being given a diagnosis of depression were used as a proxy for asthma severity, as follows [20]: 1. Un-medicated asthma (no prescriptions). 2. Asthma medicated with at least one prescription of a short-acting β-agonist. 3. Asthma medicated with at least one prescription for an inhaled corticosteroid, with or without a long-acting β-agonist. 4. Asthma medicated with an oral corticosteroid. The number of GP consultations in the year before the onset of depression for the cases, and the number in the year before the same date for each matched control, were recorded. These were further categorised into low use (<5 consultations), medium use (5-10 consultations), high use (10-19 consultations), and very high use (≥20 consultations); both proxies are sketched below.
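A minimal sketch of the two proxies as decision rules; the function names are illustrative, and the band boundaries follow the text, which overlaps at 10 consultations (resolved here as medium ≤ 10):

```python
def severity_class(beta_agonist: bool, inhaled_steroid: bool, oral_steroid: bool) -> int:
    """Medication-based asthma severity proxy (classes 1-4) from the prior year."""
    if oral_steroid:
        return 4  # 4. medicated with an oral corticosteroid
    if inhaled_steroid:
        return 3  # 3. inhaled corticosteroid, +/- long-acting beta-agonist
    if beta_agonist:
        return 2  # 2. at least one short-acting beta-agonist prescription
    return 1      # 1. un-medicated asthma (no prescriptions)


def consultation_band(visits_last_year: int) -> str:
    """Bin the number of GP consultations in the year before the index date."""
    if visits_last_year < 5:
        return "low"
    if visits_last_year <= 10:
        return "medium"
    if visits_last_year <= 19:
        return "high"
    return "very high"


print(severity_class(beta_agonist=True, inhaled_steroid=True, oral_steroid=False))  # 3
print(consultation_band(8))  # medium
```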
Statistical analysis
All data were analysed using STATA version 9 (STATA Corp, Texas). The incidence rate and summary rates of depression stratified by age and sex were calculated. Rate ratios and 95% confidence intervals were calculated for the exposed and unexposed groups. Multivariate survival analyses were used to control for potential confounders using the 'streg' function in STATA. The 'cluster' option in STATA was used to allow for the effect of correlations within matched groups on estimates of standard errors and significance levels. Conditional logistic regression analysis was used to explore associations between cases (asthma and depression) and controls (asthma but no depression) and to take into account the effect of potential confounders. Standardised mortality ratios (SMR) controlling for age and sex were calculated indirectly; a sketch of the indirect method follows.
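A minimal sketch of indirect standardization: expected deaths are the comparison group's stratum-specific rates applied to the study group's person-years, and the SMR is observed over expected. All stratum figures below are hypothetical:

```python
# Stratum: (person-years in the study group, reference death rate per
# 1000 person-years from the comparison group). All values hypothetical.
strata = {
    ("16-44", "F"): (12_000, 0.8),
    ("16-44", "M"): (9_000, 1.1),
    ("45+", "F"): (8_000, 9.5),
    ("45+", "M"): (6_500, 12.0),
}
observed_deaths = 310  # hypothetical count in the study group

# Expected deaths: reference rates applied to the study group's person-years.
expected = sum(py * rate / 1000.0 for py, rate in strata.values())
smr = observed_deaths / expected
print(f"expected deaths = {expected:.1f}, SMR = {smr:.2f}")
```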
Sensitivity analyses were conducted to explore the possibility of misclassification of depression and asthma. For depression, the results were re-analysed excluding patients who had also received a Read/OXMIS diagnosis of anxiety or an anxiety disorder, and also by defining depression as a Read/OXMIS code for depression and being prescribed an antidepressant medication. The analyses were also repeated excluding patients who had also received a diagnosis of chronic obstructive pulmonary disease (COPD), and also restricting the inclusion criteria to those aged less than 40 years, as a diagnosis of COPD is unlikely before this age.
Ethical Approval. This is an analysis of an anonymised data set. Ethical approval was covered under the terms and conditions of use of the General Practice Research Database via the Medical Research Council.
Results
Incidence of depression in patients with asthma

11,275 incident cases of asthma were identified between 1st January 1995 and 31st December 1996 from 219 practices. It was possible to age-, sex- and practice-match all cases (ratio 1:1), giving a total case-control cohort population of 22,550. Fifty-seven percent were female. The average age of men was 50 years (s.d. 18.9) and of women 48 years (standard deviation (s.d.) 19) (p<0.001). Patients with asthma were followed up in the GPRD for 78,096 person years and patients without asthma for 98,229 person years.
In the population with asthma 1752 were diagnosed with depression over the study period and in the population without asthma, 1353 were diagnosed with depression over the same period.
The incidence rate for depression in patients with asthma was 22.4 per 1000 person years. The incidence rate for depression in age-, sex- and practice-matched patients without asthma was 13.8 per 1000 person years, giving an incidence rate ratio (IRR) of 1.63 (95% CI: 1.52-1.75); a worked check of these figures follows. There was no statistically significant difference between the incidence rate ratios for depression in the asthmatic versus non-asthmatic population by sex (IRR for men = 1.64).
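A worked check of the crude rates and IRR from the counts and person-years reported above, with a standard Wald-type 95% CI on the log scale (this reproduces the crude interval, not the adjusted model):

```python
import math

# Counts and person-years (py) reported above.
cases_asthma, py_asthma = 1752, 78_096
cases_no_asthma, py_no_asthma = 1353, 98_229

rate_asthma = 1000 * cases_asthma / py_asthma            # ~22.4 per 1000 py
rate_no_asthma = 1000 * cases_no_asthma / py_no_asthma   # ~13.8 per 1000 py
irr = rate_asthma / rate_no_asthma                       # ~1.63

# Wald CI on the log scale: se(log IRR) = sqrt(1/a + 1/b) for event counts a, b.
se_log_irr = math.sqrt(1 / cases_asthma + 1 / cases_no_asthma)
lo, hi = (math.exp(math.log(irr) + z * se_log_irr) for z in (-1.96, 1.96))
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~1.63 (1.52-1.75)
```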
Nested case-control study
Of the 1752 patients with asthma who became depressed, it was possible to match 1660 to controls by date of asthma diagnosis (±1 month), giving a total population for the case-control study of 3320 patients. Data on service use were available for 1648 (99%) controls and 1355 (82%) cases. Fifty-three per cent of the controls were women, as were 70% of the cases. The average age of controls was 49.6 years (s.d. 18.5) and the average age of the cases was 46.1 years (s.d. 18.7; p<0.001). The crude odds ratio for depression in women with asthma compared to men with asthma was 2.09 (95% CI: 1.93-2.24). There was a small decreasing trend in the effect of age on depression (crude OR for each additional year 0.99; 95% CI: 0.987-0.994).
The mean number of GP consultations in the year prior to diagnosis of depression in the cases was 8.3 (s.d. 7.1) and in the same year in the controls was 5.3 (s.d. 5.67) (Mann-Whitney test p<0.001). The age and sex adjusted OR for the association between each GP visit and a diagnosis of depression was 1.1 (95% CI 1.07-1.10); see the illustration below. The average number of consultations across cases and controls was 6.7 (s.d. 6.5). The association between the number of GP visits, severity of asthma and diagnosis of depression is shown in Table 1. The Spearman's correlation coefficient (r) for asthma severity and number of GP visits was 0.3.
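A small illustration of how a per-visit odds ratio compounds under a logistic model with consultations entered as a linear term: the OR for k additional visits is the per-visit OR raised to the power k. The abstract's adjusted per-visit OR of 1.09 is used here:

```python
# Per-visit adjusted OR from the abstract; OR for k extra visits = 1.09 ** k.
or_per_visit = 1.09
for k in (1, 3, 5, 10):
    print(f"{k} additional visit(s): OR = {or_per_visit ** k:.2f}")
```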
There were statistically significantly more cases than controls being treated with any anti-asthmatic medication (controls 48.2%, cases 51.8%; p = 0.01) and with oral corticosteroids in the year before inclusion in the study (controls 42.6%, cases 57.4%; p<0.001). Cases and controls were equally likely to have β2-agonists as the highest level of treatment (controls 47.6%, cases 52.4%; p = 0.268), and inhaled corticosteroids (controls 51.2%, cases 48.8%; p = 0.345). However, being treated with an oral corticosteroid was no longer statistically significant after adjusting for the number of GP consultations (see Table 2).
The age and sex standardised SMR for primary care patients with asthma compared to the general primary care population was 2.86 (95% CI: 2.64-3.10). The age and sex standardised mortality ratio for primary care patients with asthma and depression compared with non-depressed patients with asthma was 1.87 (95% CI: 1.54-2.27). This did not change statistically significantly with asthma severity.
Discussion
We have found a statistically significant association between a diagnosis of asthma and the incidence of depression in primary care patients, and a higher mortality rate in patients with asthma and depression compared to patients with asthma only. This finding is particularly important as asthma is one of the commonest chronic conditions treated in primary care, and an association between asthma and depression has potentially important implications for public health. In the UK, case-finding for depression has been instituted for primary care patients with diabetes and coronary heart disease [21], but not for asthma. Asthma is more common than coronary heart disease [22,23] and diabetes [24,23]. Our results suggest the association between depression and asthma may be as strong as that between depression and coronary heart disease or diabetes. Case-finding for depression in this population should therefore be considered. An increased mortality rate in patients with co-morbid asthma and depression has also been reported for asthma patients treated in tertiary care [4]. It has been hypothesised that concurrent depression leads to poor adherence with asthma treatments and hence to poorer outcomes [25]. Eisner et al reported that depressive symptoms were also associated with an increased risk of being hospitalised for asthma [26]. Due to limitations of the dataset, we were unable to determine the cause of death in our study, and further research is needed to explore the reasons for the increased mortality in this population.
Our results suggest that asthma predisposes to developing depression. However, this did not appear to be contingent on asthma severity. Evidence for an association between asthma severity and depression is contradictory. Eisner et al found that in a prospective cohort of patients hospitalised for asthma, depressive symptoms did increase with asthma severity [26], though Chapman et al reported that patients with mild asthma reported depression as frequently as those with severe asthma [27]. Our results are in line with those of Chapman et al. The reason for this counter-intuitive result is unclear. One explanation is that objective measures of asthma severity (such as peak expiratory flow rate) do not necessarily reflect subjective experience [14].
Oral corticosteroid use is unlikely to be the cause of the increased rate of depression in patients with asthma. We failed to find an association between inhaled or oral corticosteroid use and depression after adjusting for number of primary care consultations. The correlation between asthma severity and number of primary care consultations was low. Frequent attendance in primary care has been associated with receiving a diagnosis of depression and so patients with asthma who are frequent attendees may have a higher likelihood of being diagnosed with depression. An increased frequency of attendance may increase the opportunity for primary care doctors to detect depression or may reflect an underlying depression leading to an increased number of consultations. Primary care physicians should therefore consider depression in frequent attendees who have asthma.
Limitations
The data in the GPRD are essentially computerised general practice records, completed as part of routine clinical care. They should therefore reflect clinical practice. However, there are limitations. Diagnoses were those given by GPs, and no standardised instruments were used to confirm them. We are not aware of any studies that have examined the validity of diagnoses of depression in the GPRD, though a study of psychosis diagnoses using this database found high predictive values [28]. We conducted sensitivity analyses to explore the possible effects of misclassification of depression, and these effects were minimal. Loss to follow-up from the cohort should also be minimal, as registration with primary care and exits from the database are carefully recorded. We were unable to assess the severity of asthma directly and had to use a medication-based proxy, though this has been used successfully in other studies [20]. However, we were unable to measure adherence to medications. Poor adherence may result in fewer visits to a GP, because patients would not have to attend for further prescriptions, or it could result in poorer asthma control, which may in turn result in more visits. Poor compliance may therefore be a residual confounder.
The case-control study may be subject to residual confounding, for example from socioeconomic status. The relationship between socioeconomic status and asthma is unclear, with conflicting evidence of an association [29]. Likewise, though there appears to be a relationship between the prognosis of depression and socioeconomic status, the association between the incidence of depression and socioeconomic status is less clear [30]. Nevertheless, we considered socioeconomic status to be a potential confounder. We were unable to assess socioeconomic status directly and had to use GP practice as a proxy. GP practice has been found to correlate with socioeconomic status at the individual level, though it is only an approximation of the area-based deprivation of the practice population as a whole [19]. This may lead the strength of the association between asthma and depression to be overestimated. It is unlikely that frequency of attendance at GP practices is confounded by socioeconomic status. Higher rates of consultation have not been found to be related to socioeconomic status, but rather to the number of underlying chronic medical conditions (which is associated with socioeconomic status) [31]. We controlled for most of the common chronic medical conditions in general practice and, though there may still be residual confounding from medical conditions we did not control for, this is likely to be small. In conclusion, there is a higher incidence of depression recorded by GPs in patients with asthma, and co-morbid asthma and depression increases mortality. A diagnosis of depression does not seem to be related to asthma severity, but is related to the frequency of GP consultation. Depression screening should be considered in primary care patients with asthma.
A survey of university students' views on the nature and significance of nicknames to the Shona people of Zimbabwe
Nicknames are an integral part of human experience in many cultures the world over, and some scholars believe that they have a cultural significance to the relevant society. This study reports a survey of a purposively sampled group of fifty Great Zimbabwe University students' views, gathered through a questionnaire, on nickname usage among the Shona-speaking people of Zimbabwe. The group of respondents comprised students, in their first semester at university, drawn from across the Zimbabwean social and dialectal divide. In this study, only nicknames used by the Shona people were considered, because the researcher's first language is Shona and it was therefore felt that interpretation would be easier. Some people may think that nicknames are a trivial phenomenon of human existence, but this survey revealed that they are significant to both bearers and users and are an indispensable aspect of human existence. Some may be used for convenience of usage, while others may reflect the bearer's behaviour, physical appearance, social status in life or simply an important incident in the person's life. Yet others have personality traits of their carriers embedded in them. It could also be argued that some of these names are used arbitrarily, while others are an important reflection of, and offer important insights into, the relevant people's norms, values and history, and the cultural intrusion of the West, particularly with short forms of actual names which bearers were given at birth.
INTRODUCTION
Nicknames, like the first names that human beings acquire at birth, are an inherent characteristic of human existence. Kuranchie (2012) asserts that it is an incontrovertible fact that nicknaming has been a common practice in various arenas of human endeavour in many societies for ages. The scholar further argues that researchers have consequently long studied the practice in various arenas of human experience, and have observed that people use varieties of nicknames, depending on their norms and values.
A significant number of other studies have been carried out on naming in various communities (Neethling, 2005; De Klerk and Bosch, 1997, 1998; Mehrabian and Piercy, 1993; Phillips, 1990), and this study seeks to augment such studies by focusing on university students' views on the nature and significance of nickname usage among the Shona-speaking people of Zimbabwe. The open-ended questionnaire used to solicit data from fifty student informants yielded interesting insights into how the Shona people of Zimbabwe use nicknames. About 70% of the Zimbabwean population speaks Shona (which also has many dialects) (Hachipola, 1998), hence the researcher's interest in an aspect relevant to a population of this magnitude.
LITERATURE REVIEW
Naming is an important phenomenon of human experience, and has mainly acted as an integral means by which people identify others and make distinctions between individuals in a society. In Zimbabwe, and many other societies, every human has a name, a birthright, given to them at birth, and these are normally registered with the Registrar of Births and Deaths, a department under the Ministry of Home Affairs. Kuranchie (2012) observes that "In all cultural settings, every individual is accorded a name after birth, perhaps, to give a unique identity to the child." At birth, parents or senior members of the family give personal names to the newborn baby, which he/she may retain throughout his/her life unless, for some reason, they opt to officially change them through marriage or some other social factor. Kuranchie (2012) concurs, saying true names are acquired at birth through culturally-accepted arrangements.
Many studies have been carried out on naming in different societies, including Morgan et al. (1979), Afful (2007), Alleton (1981), de Klerk and Bosch (1997, 1998) and Liao (2006). Neethling (2005) posits that "…one could ostensibly generalize and say that some sort of motivation always exists whenever any entity, human or non-human, is named . . . where the motivation is somewhat obscured, the meaningfulness of the name makes it easy to trace: the name carrier (or his family or friends) has an explanation at hand." Neethling (2005) further observes that a nickname is considered to be derived from Old English eacan, meaning "also", relating to its role as an additional name evolving subsequent to the assigning of the first name. In the English-speaking world, a "nickname" has certain connotations, often dealing with a characteristic feature, physical or otherwise, of the name bearer. The nickname may also be pejorative. Neethling (2005) further contends that these are additional first names that could replace or function in place of the first name bestowed at birth. Phillips (1990) defines a nickname as a subset of informal or unfixed names for someone, usually used by acquaintances, and asserts that since such names are unofficial, only familiar people call the nicknamed by those names. Lin (2007) also observes that, unlike personal names (first name and surname), nicknames may vary from time to time and even from group to group, depending on familiarity and relations between interlocutors or amongst group members. Liao (2006) states that nicknames are informal names that are not registered at the Civil Registration Office in Taiwan.
Other scholars argue that nicknames are also viewed as "little names" and "milk names" which are not the official name (Alleton, 1981;Blum, 1997).Fang and Heng (1983) share a similar view of nicknames and consider them as milk names which are only used within a family or among intimate friends.Lin (2007) concurs that a nickname is an informal term for an individual, often used by members in a particular community of practice.In a study carried out in Ghana, Afful (2007) states that address forms which include nicknames are used in various social domains such as politics, workplace and academia.According to Neethling (2005), a nickname is a name added to those names the name carrier already has.He postulates that they are often developed among acquaintances and that nicknames represent familiarity, intimacy and solidarity.
Nicknames serve a range of functions over and above the typically referential function of the first names; they are frequently semantically transparent and their usage reveals insights into the characteristics (personal and physical) of their bearers, as well as into their role in society (Leslie, 1990;McDowell, 1981;van Langendonck, 1983;de Klerk and Bosch, 1998).It is clear, from the earlier mentioned observations, that a significant defining aspect of nicknames is that they are unofficial or additional and, unlike first names, they are bestowed upon their bearers not only by parents but also by peers and other members of the respective community of the bearer.
In this study, the term "nickname", referred to in Shona as zita remadunhurirwa, is used broadly as an umbrella term referring to any unofficial or additional names, given subsequent to the official one, which individuals acquire (for various reasons) as they progress through life. The names given here should be treated as a mere sample, never claiming to be exhaustive or subject to generalisation.
Theoretical framework
The research is rooted in the field of semiotics, which is considered to best encapsulate this culturally significant phenomenon of nicknaming. According to Eco (1977), every cultural entity becomes a semiotic sign. It is therefore argued here that nicknames are a cultural convention, and interpretation of such is key to unravelling the values of people of a given culture. Semiotics (or semiology, according to one of the founding fathers of semiotics, Ferdinand de Saussure) is a field of study concerned with signs and/or signification (the process of creating meaning). It is the dominant term used for the science of signs (http://www.visual-memory.co.uk). This study is guided by de Saussure's semiology, namely the relationship between the "signifier" (the nickname) and the "signified" (the nickname bearer), as opposed to the approaches of Charles Sanders Peirce and other prominent semioticians. This study focuses on the meanings generated by the use of nicknames and their significance to the relevant users. It is, however, beyond the scope of this study to pursue the semiotic debates of various scholars.
Statement of the problem
Naming is an important human phenomenon, particularly meant to give bearers an identity. Likewise, each society finds it significant to give nicknames to certain members, which also act as a form of identifying them within the relevant social groups and thereby serve different purposes. From a sociolinguistic point of view, nicknaming represents a process of constructing individual identities within a group (Lin, 2007). It is the aim of this study to find out the nature and significance of nicknaming among the Shona-speaking people of Zimbabwe, and to draw insights into cultural trends in a world where globalization is a stark reality. The literature is replete with studies of nicknaming among various communities across the globe; the literature review in this study bears testimony to this. However, as far as could be ascertained, there is very little literature, if any, on the nature and significance of nicknaming among the Shona-speaking people of Zimbabwe. Hence this study is an attempt to add to the onomasticon and the existing literature on nicknames, with specific reference to the Shona-speaking people of Zimbabwe.
Objectives
The study sought to: 1. unpack the nature of nicknaming among the Shona-speaking people of Zimbabwe; and 2. explore students' views on the meanings and significance of these nicknames and their social consequences.
Research questions
The research sought to answer questions which include: 1. How prevalent is the tradition of nicknaming among the Shona-speaking people of Zimbabwe? 2. What significance do Shona nicknames have in the social lives of the people concerned?

METHODOLOGY

Fifty undergraduate Great Zimbabwe University (GZU) students' views were gathered through an open-ended questionnaire, because it was considered the fastest and most convenient method in view of the number of respondents involved. The students came from across Zimbabwe's social and dialectal divide. In design, the study is a survey, which sought to solicit views on the nature and significance of nicknaming among the Shona-speaking people of Zimbabwe. According to Marshall and Rossman (2006), "Survey research is an appropriate mode of inquiry for making inferences about a large group of people based on data drawn from a relatively small number of individuals in that group." The researcher also engaged in an interpretive analysis of the nicknames given, in an attempt to find their semiotic significance to the relevant people.
Population
The population comprised undergraduate students at GZU from the Faculties of Arts, Agriculture and Natural Sciences, Commerce, Education and Social Sciences.
Sample
Fifty students from the Faculty of Education, studying for Bachelor of Education degrees and aspiring to become teachers, were sampled. Their ages ranged between eighteen and forty years, and they were in their first semester. They were asked to give the nicknames they used, or heard others use, and their meanings. They were further requested to comment on the possible significance of these nicknames to the Shona-speaking people of Zimbabwe. Only Shona-speaking students were purposively sampled, because they were able to understand the meanings of the nicknames given by the relevant people, and the researcher had easy access to them because he taught them the module Communication Skills at the time. The researcher found these students conveniently placed to cooperate and contribute significantly to the success of the study. Thus, Punch (2006) asserts that purposive sampling is the term often used; it means sampling in a deliberate way, with some purpose or focus in mind. Punch (2006) argues that all research, including qualitative research, involves sampling; no study, whether quantitative, qualitative or both, can include everything: "you cannot study everyone everywhere doing everything." As such, this research focused on the views of students on the dynamics of nicknaming among the Shona-speaking people of Zimbabwe.
RESULTS AND DISCUSSION
The findings of this study generally indicate that Zimbabwean society at large, just like many others, is in flux due to globalization and, as such, its citizens' attitudes and naming/nicknaming trends have been infiltrated mostly by Western cultures, particularly the English (bearing in mind that Zimbabwe is a former colony of Britain and, even after independence in 1980, one of the remnants of colonial domination is that the country continues to use the English language both as the official language and as the language of instruction in the classroom). The data in this study comprise a sample of nicknames used among the Shona-speaking people of Zimbabwe. Whereas most of the nicknames are Shona, others are English. Only the nicknames reported by fifty Shona-speaking informants from Great Zimbabwe University, drawn from across the country's dialectal and social divide, were included in this analysis.
This study makes an interpretive analysis of the nicknames given, in an attempt to find their semiotic significance. The researcher distinguished the reported nicknames into two broad categories: the negatively and the positively perceived. This is supported by Kuranchie (2012), who says that certain names are generally considered desirable and have positive feelings associated with them, while others are humiliating and are looked down upon. Wilson (1998) concurs that, while some students cherish their nicknames, others hate them and cannot stand them. Kuranchie (2012) contends that some nicknames have strongly negative meanings and are often disliked by their bearers, while users insist on their usage to mock or tease their peers. According to Neethling (2005), "Certain names are generally considered desirable and have positive feelings associated with them while others are humiliating and are looked down upon as being undesirable and carry negative connotations." The focus is on the nature and meaning/significance of such nicknames, some of which are derived from base forms (as indicated in this study) while others are a miscellany of the unique creativities of their users. Table 1 presents the nicknames obtained from respondents and either gives their origins or offers their meanings in English where Shona names are involved. After each table, the researcher interprets and discusses the results, an approach that was considered more economical and focused.
The respondents gave various reasons for the use of such short forms (as given in Table 1), including that they are used as short forms of original names for endearment when children are young. Another reason given was that they are used for convenience of calling and that, at a particular time, the use of suffixes such as /s/ and /y/ (as in Welas, Chinos and Tigs on the one hand, and Tady, Benjy and Toby on the other) was considered trendy and people would use them, perhaps, to maintain group cohesion; they were generally positively perceived by their bearers and users. These findings confirm the observations by other researchers, as highlighted earlier in this report, that some nicknames were found desirable (Wilson, 1998; Neethling, 2005; Kuranchie, 2012). De Klerk and Bosch (1998) argue that some nicknames might be regarded as fairly reliable indicators of trends and attitudes. It was observed in some studies that such names were meant for convenience of texting in this day of constant typing and texting (http://b.scorecardresearch.com).
It could also be argued that such short forms, while they may be used for convenience of calling, may also have a historical antecedent in the indigenous people's contact with their colonial "master" from the West, and the consequent cultural infiltration. Hence, it is argued, we give our newborn children names that used to be nicknames; Jack (a nickname for John) is a very popular boys' name, and Emma (the variation of Emily) was number two on the list (in one study) of popular girls' names (http://b.scorecardresearch.com). De Klerk and Bosch (1998) argue that, in defining nicknames, many writers choose to exclude from their analysis those names which are obvious short forms or derivatives of first names, but it is these forms which offer important insights into the social relations within a cultural group. As such, this study has attempted to address this gap in research on naming and nicknaming in society. In the earlier mentioned examples, some nicknames evolve linguistically from first names while others display users' ability to coin a referent, for example, Tinto for Tendai, Tini for Tinotenda, Toby for Tobias, Tadi for Tadiwa, Tamas for Tamanikwa, Kudzy for Kudzanai and Chinos for Chinongo. Thus Kuranchie (2012) observes that some names develop affectionate forms with an endearment effect. Such nicknames confirm the observation that their use depends on familiarity and relationships between interlocutors (Fang and Heng, 1983; Lin, 2007). Neethling (2005) concurs that they represent familiarity, intimacy and solidarity. The respondents reported that these nicknames were positively perceived and that their bearers liked them, thus confirming findings obtained in other studies. The researcher attributes this liking to the fact that these were short forms of the actual names used by the bearers and found on their identification particulars.
Some scholars further argue that some names are related to the job one does or to a physical condition, or even serve to shorten a real first name; they can be situational (http://b.scorecardre.com). For example, when young children learn to speak, their speech is awkward and they cannot pronounce certain words correctly, and whatever becomes apparently interesting may stick as a referent. Table 2 gives examples of such names and nicknames, which help to confirm this observation made elsewhere by other researchers in this field.
In Table 2, nicknames like Ajoli, Ati, Titi and Umbo evolved from bearers' defective speech abilities when they were young, while Mazhambe, Mukoko, Makwindi, Bope and Zobha were found to be situational, attributed to some character traits of the bearers as explained in Table 2. Thus, they became additional names that evolved subsequent to the bearers' first names (Neethling, 2005). In the latter case, bearers were reported to dislike the referents because they were considered derogatory, established by users to mock certain character traits of the bearers. In support of this outcome, Neethling (2005) observes that some nicknames have connotations. Table 3 gives some names given to allude to some attributes of bearers.
The nicknames in Table 3 attempt to illustrate the observation by some scholars that nicknames cannot only shorten a name but can also identify a characteristic about a person (http://scorecardresearch.com). According to Neethling (2005), "A nickname might have been bestowed because of a particular event, the physical appearance of the name carrier or other social and personal traits." Nicknames like Masvina, Pfuko, KaDora, Vakurida, Mukoko, Chikwepa and Tepi were found to have a derogatory effect, hence their bearers tended to dislike and detest them. Table 3 also shows that males have a stronger inclination than females towards giving each other nicknames. These nicknames offer important insights into social relationships within a cultural group (Kuranchie, 2012). Kuranchie (2012) observes that some of these nicknames have positive, neutral or negative connotations. Nicknames like Monya, Muchinda (meaning "a guy", given to someone who had the mannerism of referring to anyone as muchinda as a sign of solidarity), Big and Brown were found to have neutral effects and their bearers were found to like them. The meanings of these nicknames are an important indicator of users' perceptions of the bearers. Thus Holland (1990) and Alford (1987) argue that another important aspect of nicknames is their role in influencing the perceptions of their users. The translations given in Table 2 were an attempt to get as near as possible to the English meanings of the names, for the benefit of non-Shona-speaking readers. Neethling (2005) observes that nicknames have personality traits embedded in them and have a uniqueness peculiar to a particular family or society. Shona nicknames like Tepi (one who is thin, not slim; a derogatory term for a girl who defies the African conception of the positive attributes of a girl) and Masvina (one who is always dirty and would rarely take a bath) have the potential to linger longer in users' minds and influence attitudes of dislike, with the effect of alienating the bearers from the groups they would be expected to associate with under normal circumstances. They could also serve as constant reminders for the bearers to reform. Thus, nicknames were found to play a significant role in the socialization process of an individual throughout his/her life. The last group of nicknames is that of those given after television personalities/actors and other well-known individuals from Zimbabwean society and beyond, like Sabhuku Vharazipi (a popular Zimbabwean comedian who took Zimbabwe by storm in 2013 and 2014, famous for his corrupt tendencies as a kraal head), vaMayaya (a popular police officer in the Sabhuku Vharazipi comedy mentioned earlier, who solicited bribes in order to release offenders facing prosecution), and Parafini and Mr. Bean, also famous clowns who were very popular with children in the 1990s. In the majority of cases, the respondents said that bearers did not have anything in common with the possessors of the names but that they were used for fun. Normally, they were found not to last very long, largely depending on how long the personality (their namesake) remained popular in social circles. They were found to be significant in so far as they constantly reminded users of these important social events and personalities.
Conclusion
From the foregoing discussion, it can be argued that nicknaming is prevalent in the tradition of the Shona-speaking people of Zimbabwe and offers important insights into their cultural beliefs and values. The findings seem to confirm Neethling's (2005) observation that "Nicknames, because they act as an avenue for creativity and the expression of some of the pure enjoyment that the sounds and meanings of words can give, provide name-users and name-bearers with considerable freedom in manipulating and bending linguistic resources." While some names are shortened forms of official names given at birth, used for convenience, some are used in such a way that they give insights into certain characteristics or behaviour traits of bearers, yet others evolve from the way children pronounce their names when they are young, due to limitations in their speech abilities.
The study also confirms De Klerk and Bosch's (1997) findings that all sorts of nicknames are used by people in different environments and that, while some people may cherish these "fab" names and would like to be identified with them, others may abhor and shy away from theirs. These scholars further argue that, in defining nicknames, many writers choose to exclude from their analysis those names which are obvious short forms or derivatives of first names, yet it is these forms which offer important insights into social relationships within cultural groups. Such nicknames, sometimes used for endearment and group solidarity, have been discussed under the findings of this study, showing that nicknaming is not peculiar to Zimbabwe; it is prevalent in many other cultures the world over, as discussed under the literature review.
In this study, nicknames were found to be more prevalent among males than among females, perhaps demonstrating the intimacy among men who, in the Zimbabwean context, seem to converge more frequently than women for various reasons, for example, dare (a traditional gathering, in the evening, around a fire for men only) and beer-drinking parties. While the findings reveal such a discrepancy in the use of nicknames between males and females, it is beyond the scope of this study to explore gender relations. This leaves a gap worth pursuing in future research.
This research revealed that nicknames were significant in a number of ways: they indicate users' attitudes towards bearers; they may originate from bearers' personal attributes or habits; or they may be shortened forms of bearers' official names that normally become expressions of affection and endearment. Since these are additional names, it could be argued that they appear as individuals progress through life and become unofficial referents used by those who are socially close to the bearers, thus confirming findings from other studies cited in this discourse.
All in all, nicknames should not be trivialized, as they are significant: they are used to express affection, to describe someone's appearance (with positive and negative connotations), to disparage their bearers' behaviour, or simply because it is trendy to do so, and also for mere fun, as when bearers are given the names of popular television personalities.
Table 1. Nicknames that evolve linguistically from or are derivatives of first names (trendy) or have a cohesive significance.
Table 3. Some nicknames given to allude to the personal attributes/habits of the bearer. (One entry from the table reads: derived from the name of a bus company plying the Bulawayo-Mutare Highway (in Zimbabwe) in the late 1990s, renowned for speeding; the bearer would traverse the local villages, in search of local alcoholic brew, as if he were "on wheels".)
|
2018-12-12T17:00:23.165Z
|
2016-08-31T00:00:00.000
|
{
"year": 2016,
"sha1": "2e711b8ff26a1bf9fec240925c8afaf220c64e57",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JASD/article-full-text-pdf/46D485D59667.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2e711b8ff26a1bf9fec240925c8afaf220c64e57",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
269215654
|
pes2o/s2orc
|
v3-fos-license
|
Integrating training in evidence-based medicine and shared decision-making: a qualitative study of junior doctors and consultants
Background In the past, evidence-based medicine (EBM) and shared decision-making (SDM) have been taught separately in health sciences and medical education. However, recognition is increasing of the importance of EBM training that includes SDM, whereby practitioners incorporate all steps of EBM, including person-centered decision-making using SDM. Yet there are few empirical investigations into the benefits of training that integrates EBM and SDM (EBM-SDM) for junior doctors, and their influencing factors. This study aimed to explore how integrated EBM-SDM training can influence junior doctors' attitudes to and practice of EBM and SDM; to identify the barriers and facilitators associated with junior doctors' EBM-SDM learning and practice; and to examine how supervising consultants' attitudes and authority impact on junior doctors' opportunities for EBM-SDM learning and practice. Methods We developed and ran a series of EBM-SDM courses for junior doctors within a private healthcare setting with protected time for educational activities. Using an emergent qualitative design, we first conducted pre- and post-course semi-structured interviews with 12 junior doctors and thematically analysed the influence of an EBM-SDM course on their attitudes and practice of both EBM and SDM, and the barriers and facilitators to the integrated learning and practice of EBM and SDM. Based on the responses of junior doctors, we then conducted interviews with ten of their supervising consultants and used a second thematic analysis to understand the influence of consultants on junior doctors' EBM-SDM learning and practice. Results Junior doctors appreciated EBM-SDM training that involved patient participation. After the training course, they intended to improve their skills in person-centered decision-making including SDM. However, junior doctors identified medical hierarchy, time factors, and lack of prior training as barriers to the learning and practice of EBM-SDM, whilst the private healthcare setting with protected learning time and supportive consultants were considered facilitators. Consultants had mixed attitudes towards EBM and SDM and varied perceptions of the role of junior doctors in either practice, both of which influenced the practice of junior doctors. Conclusions These findings suggested that future medical education and research should include training that integrates EBM and SDM, that acknowledges the complex environment in which this training must be put into practice, and that considers strategies to overcome barriers to the implementation of EBM-SDM learning in practice. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-024-05409-y.
Background
The practice of evidence-based medicine (EBM) requires clinicians to incorporate their own expertise, the best research evidence, and patient preferences when making decisions about patient care [1]. Since its introduction, approaches to teaching EBM skills have focused on the use of critical appraisal to determine the highest level of evidence, largely overlooking clinician expertise and patient preferences [2,3] and disregarding the established central role of person-centered care and shared decision-making (SDM), where clinician and patient make care decisions together [4]. This disparate approach may be connected to the way that EBM has been traditionally taught during medical training, where education about person-centered care and SDM has occurred in a separate educational silo to EBM education [2,5,6]. In recent years, a potential solution has been proposed: teaching EBM and SDM together, where evidence is applied using SDM skills [7,8].
Some educators and practitioners have identified the potential benefit of incorporating the principles of SDM into EBM training, so that education centers on the patient as well as the evidence [9,10]. However, very few published studies provide empirical data on how this can be successfully done [8,11]. In an Australian study, researchers ran a single EBM-SDM workshop for medical and allied health student-clinicians [12], where SDM was introduced as part of the students' compulsory EBM course. In this study, participants who underwent SDM training in addition to reading SDM material scored significantly higher on measures of ability, attitudes, and confidence in incorporating SDM into EBM when compared to participants who read SDM material alone. In a more recent study, researchers from the same institution conducted a half-day EBM-SDM workshop to train primary care practitioners in using SDM with EBM to improve decision-making for patient care [13]. In this study, pre- and post-workshop observations of doctors' skills in SDM were assessed via recorded consultations, along with pre- and post-workshop attitude questionnaires. The results from this pilot found that participants had increased positive attitudes towards SDM and improved SDM skills immediately after the half-day workshop [13], though the focus of this training was limited to general practice-focused clinical scenarios, did not incorporate a study follow-up, and omitted qualitative participant feedback. More recently, a scoping review of 23 studies found that while there has been increasing recognition by educators of the interdependence between EBM and SDM, only a minority of included studies explicitly incorporated EBM and SDM into training content [8].
We previously conducted a series of EBM training courses for junior doctors during which they were taught to apply evidence using SDM skills; in other words, an EBM-SDM course. We ran a pilot mixed-methods evaluation, which indicated that while there was a significant increase in positive attitudes towards EBM after the course, there were also several barriers and facilitators that influenced the potential uptake and practice of EBM and SDM [14]. This is unsurprising, given that EBM training for junior doctors is beset by reports of failure to translate new skills and attitudes into clinical practice [9] and SDM is slow to be taken up among doctors in general [15,16]. The EBM literature has identified that the main reasons given by junior doctors for not practising EBM include lack of time to learn [17,18] or practice EBM [19], workplace culture [20], and lack of prior training [20]. The separate SDM literature has identified barriers to the practice of SDM perceived by doctors, including junior doctors, such as time constraints [21], low levels of patient health literacy [22], workplace culture [23], and no opportunities to learn and practice SDM during clinical practice [24]. However, there are few investigations of barriers to the joint practice of EBM and SDM following their integrated training. As such, there is a need for more comprehensive qualitative evaluations of the outcomes of integrated EBM and SDM training, as well as a more in-depth understanding of the barriers and facilitators to their implementation in clinical practice.
Despite positive attitudinal changes towards EBM-SDM after training [13,14], it is likely that specific barriers prevent the provision of EBM-SDM training and the translation of new skills into clinical practice.It is important to further understand the nature of these barriers so that the impact of EBM and SDM practice can be fully realised.We were interested in examining the private hospital setting, and specific benefits or barriers this setting could introduce.Also of interest was the composition of junior doctor and consultant participant cohorts where most participants were undertaking surgical specialties or training, and its impact on influencing their responses and outcomes following training.In this study, we conducted interviews with junior doctors both before and after EBM-SDM training, and with their supervising consultants to further understand their perceptions and practice of EBM and SDM, and the associated barriers and facilitators.
Study aims
This study aimed to answer the following research questions:

1. How does an integrated EBM-SDM course influence junior doctors' attitudes toward, and practice of, EBM and SDM?
2. What are the barriers to junior doctors' EBM-SDM learning and practice? What are the facilitators?
3. How do supervising consultants' attitudes and influence impact on junior doctors' opportunities for EBM-SDM learning and practice?
Design
This study used an emergent qualitative design where data were collected via semi-structured interviews [25].
Social constructivist theory underpinned our study design to enable the exploration of how junior doctors and consultants created their own meanings, attitudes, and understanding about EBM and SDM, and a deeper understanding of their relationships with each other within this context [26].The study centered around an EBM-SDM course that we conducted at an academic health sciences center.Phase 1 of this study involved conducting and analysing pre-and post-course interviews with junior doctors to understand their perceived barriers and facilitators to learning and practising EBM-SDM [27].Thematic analysis of the initial interviews with junior doctors raised questions about the role of supervising consultant doctors in EBM-SDM learning and practice, specifically in terms of their support for training and practice opportunities for junior doctors.Thus, Phase 2 of the study used semi-structured interviews with consultants to further understand how their attitudes and influence might impact junior doctors' opportunities for EBM-SDM learning and practice.
Study setting
The EBM-SDM training course took place at an integrated academic health sciences center (MQ Health) on an urban university campus, comprising a university-owned private hospital and specialty outpatient clinics [28]. The course was attended by junior doctors who worked at the center. In the Australian setting, junior doctors include new graduates or interns, residents undertaking prevocational training, registrars who are either accredited with a specialty training program or unaccredited, and fellows who have completed specialty training and are seeking sub-specialty training [29]. The EBM-SDM training course consisted of four 90-minute meetings and covered all steps of the EBM process and the principles of SDM that are incorporated into the fourth EBM step. The course was conducted over an eight-week period to provide trainees with sufficient time in between meetings for reading, reviewing, and preparing material. The course was conducted five times during this study. Adult learning theory was used as a framework for the problem-based, collaborative learning environment, where the teachers facilitated rather than directed learners [30]. During the course, junior doctors used their own patient cases to increase the course's relevance to their practice and patient care [31]. Additional File 1 contains details of the structure and content of the EBM-SDM training course. The junior doctors were on a single-term rotation, where they spent one year at the private hospital before returning to rotations in the public hospital system. They worked alongside a variety of other healthcare professionals, including consultants, allied health professionals, researchers, and educators, and were supervised by consultants, specialists from a range of medical and surgical disciplines, who provided individualised mentoring, opportunities for learning and research, and support to enter specialist training programs in Australia. Junior doctors could also take part in educational activities outside of their supervision with consultants, including the EBM-SDM course, to acquire and practice new skills.
Participant recruitment
Participants were recruited via purposive sampling [32] where doctors from a range of age groups and training backgrounds were approached to obtain a comprehensive sample.In Phase 1, participants were recruited from the university hospital's training program for junior doctors.Using examples from the literature [33], an estimated number of 12 to 15 interviewees from the available pool of 30 junior doctors was considered appropriate to provide in-depth data, and to cover all the issues that could arise from interviews pre-and post-EBM-SDM training [32].In a similar process, for Phase 2 we sought a sample of 10 consultants from the available pool of 20 who had current supervisory roles in the training of junior doctors at MQ Health.The junior doctors were approached as they enrolled in the EBM course, while the consultants were identified from a list of junior doctors' supervisors provided by the faculty learning and teaching administration team and were sent individual emails inviting them to take part in the study.
Demographic data
A demographic survey was developed by four authors (MSi, FR, YZ, AD) and emailed to all consenting participants to record their age group, gender, position, country of medical training, period in which training occurred, and prior education in EBM and SDM.
Interview schedules
Interview questions were developed by the first author (MSi), then reviewed and amended with members of the author team (AD, FR, YZ).In Phase 1, two interview schedules were developed: pre-course and post-course.The pre-course interviews were designed to establish a pre-intervention baseline and explore how junior doctors understood and used both EBM and SDM, and their prior training experiences in each.The post-course interview questions examined changes in knowledge, attitudes, and practice of EBM and SDM and explored junior doctors' perceptions of combined EBM-SDM training for learning and practice, their intentions to use knowledge gained, the influence of their supervising consultants on EBM and SDM practice, and possible barriers and facilitators to learning and using EBM.In Phase 2, interviews with consultants were designed to understand how they viewed EBM and SDM in their own practice, and their views on whether junior doctors should practice EBM and SDM.Interview questions also explored consultants' views and experiences of combined EBM and SDM training, in influencing both clinical practice and medical education.See Additional File 2 for all interview schedules.
Interview pilot and sessions
In Phase 1, interview questions were designed and piloted with three junior doctors and were subsequently refined into the final interview schedules.In Phase 2, interviews were piloted with one consultant, after which the questions were modified for use with this cohort.Interviews took place in quiet locations with each junior doctor from 2019 until 2022, and with each consultant during 2021; they were conducted face-to-face in 2019, and via Zoom from 2020 due to the COVID-19 pandemic [34].Author MSi conducted the interviews as 40-minute sessions.All interviewees were given the option to comment on their interview transcripts and study results.One interviewee returned for a second interview to capture additional data.Observational notes were taken by MSi to capture additional contextual factors (such as tone of voice) to assist with thematic analysis.
Data analysis
In Phase 1 of the study, junior doctors' transcripts and field notes were thematically analysed [35,36] to identify, evaluate and report patterns or themes within the data in relation to the three research questions.The first author (MSi) transcribed and familiarised herself with the data.Iterative generation of codes and themes took place with other members of the authorship team (FR, YZ, LAE, GF, SS).Themes were inductively defined as new codes were generated and all themes and sub-themes were named.Transcripts were re-read, and themes reinterpreted until the team decided that data findings had been accurately described.These themes were then used in Phase 2 of the study as a framework to deductively analyse consultants' interviews.We also included an 'Other' category to code any content that did not fit within the framework, and then inductively analysed this content to capture additional sub-themes from the consultant data.
Research team and reflexivity
MSi, a higher degree research student, developed and delivered the EBM-SDM training course with two other authors (MSt, AD).MSi also developed the interview schedules (with FR, AD, YZ) and conducted the interviews.All participants were informed of MSi's involvement in the study.MSi has training qualifications in adult education and qualitative research methods, including group and individual interviewing techniques.She analysed the interview data with other authors (FR, YZ).MSi knew all study participants (except two consultants) through her work as a clinical librarian at Macquarie University and discussed with the other authors how her involvement in the study and familiarity with the participants may influence her perceptions and analysis of the interview data.FR, YZ, and SS are health service researchers, with extensive experience in qualitative research.As non-clinicians, they reflected on their experiences and expectations as patients, and as researchers, and how that may influence their interpretation of the interview data.GF and LAE are allied healthcare professionals by background and researchers who drew on their clinical and research skills and perspectives to interpret the interview data.AD and MSt are neurosurgeons with experience in training junior doctors and an interest in medical education and teaching EBM.They knew several study participants through their clinical and research work.
Ethical approval and study reporting
Ethics approval was obtained in 2019 to interview junior doctors from Macquarie University Human Research Ethics Committee (# 5201927419929), and in 2021 to add interviews with consultants (Ethics no: 52021274125020).The study was reported using COREQ guidelines (See Related Files).
Demographic information of participants
Demographic details of the junior doctors and consultants who participated in interviews are displayed in Table 1.Of the 30 junior doctors who completed the EBM-SDM training, 12 participated in interviews.Of the 12 participating junior doctors, five were fellows, five were registrars, one was a resident, and one was an intern, and thus the junior doctor cohort represented a range of training levels and experience.Half of the junior doctors undertook their medical training in Australia and around two-thirds had some prior EBM instruction, although none had received training in SDM.Five junior doctors completed both pre and post interviews; those who only completed one interview cited time factors and clinical schedules as reasons for non-completion.Most junior doctors who completed the EBM-SDM training course but not the interviews cited time factors as reasons for their non-participation.
Ten consultants participated in interviews.Of these 10 consultants, three were Associate Professors and four were Professors.Five consultants had some prior EBM training, and none had any prior SDM training.
Themes and sub-themes
The study had three key research questions, and four major themes were identified around those questions.The themes, sub-themes, and links to the research questions are summarised in Table 2.In the following results section, junior doctors' quotes are indicated with "J" and a number; consultants' quotes are indicated with "C" and a number.
Theme 1: EBM training, understanding, and practice
Four sub-themes were identified that related to perceptions and understanding of EBM training and practice: pre-course understanding and learning EBM, application to practice, training needs of junior doctors, and impact of medical speciality.
Understanding and training in EBM
Prior to the EBM-SDM course, most junior doctors equated EBM to research skills and knowledge-gain, e.g., "[EBM] …means medicine that has a foundation in scientific studies that have been rigorously peer reviewed and developed through a scientific method…" (J3). Some junior doctors linked EBM to a statistical outcome or risk measure, using it to give "the risks of certain procedures … [and] the risks of conservative management versus operative management" (J4). Of the six junior doctors that trained in Australia, none recalled EBM training. Five consultants indicated a lack of understanding of EBM practice when asked to prioritise its components: "Literature-based EBM is the most important, anecdotal or doctors' experiences is the least important, and what was the third one?" (C7), whilst others were more aware of EBM theory and practice, particularly as it applied to patient care: "evidence-based medicine in its foundations is meant to tailor it to the particular patient and it is actually quite flexible" (C1).
Actual and intended practice of EBM
Junior doctors' understanding of the practice of EBM broadened after the EBM-SDM course and was accompanied by increased acknowledgement of patient involvement in their care. One junior doctor described their increased awareness for future practice: "[the course made me wonder] how can I convey the message to patients and get them to be involved in deciding the management plan?" (J5). The greatest barrier to practising EBM was lack of time for learning and practice, with all junior doctors mentioning this during their interviews.
Impact of the medical speciality of consultants
Consultants' specialisations impacted their practice of EBM. Those practising as physicians, including a neurologist and a cardiologist, reported greater access to high-level evidence and guidelines, with one consultant claiming that "cardiology is very algorithmic in a lot of ways, and that makes that easier…there's only so many things you can do….that kind of distils things" (C6). Consultants from surgical disciplines reported that lower levels of evidence were often drawn upon for decision-making, because "[in surgery] the evidence, sometimes is not like hard science…many times we base our decisions on grey literature, or on evidence that we acquire over time…or from the experience of our other senior colleagues" (C9).
Theme 2: attitudes towards EBM
Three sub-themes were interpreted within the data relating to attitudes towards EBM: attitudes towards the role of evidence in decision-making, attitudes towards patient involvement in care decisions, and attitudes towards junior doctors' practice of EBM.
Attitudes towards the role of evidence in decision-making
Prior to the EBM-SDM training course, most junior doctors' attitudes toward EBM were focused on the knowledge they could acquire for decision-making, research, and benchmarking their performance, such as "recommendations that are based on that evidence to inform medical decision-making" (J3). After the course, junior doctors were keen to practice their new EBM skills, which had expanded to include finding and using evidence to explain care issues to patients: "It [explaining evidence] really makes them [patients] feel as though they're being actively involved in the actual details of their specific case" (J3). Consultant participants frequently discussed the pitfalls of using evidence to inform decisions, with one claiming that "[EBM has] got enormous weaknesses if people think that there's evidence for everything; that is too simplistic and left brain" (C2). Furthermore, decisions were reportedly often informed by "what you've been taught by your people training you and your mentors" (C5). Two consultants explained how they perceived EBM was negatively changing medical practice: "[EBM] takes away some of the enjoyment out of practicing medicine individually, in the sense that some of the art has been lost" (C7). Other consultants pointed out advantages of EBM, including provision of high-quality evidence for decision-making that "gives me the ability to then converse with patients as to why we do things and why it would be most appropriate" (C1). Two consultants with prior EBM training discussed the conflict with senior colleagues that can often arise when EBM is practised, one stating that "sometimes this evidence is not strong enough to change the opinion of some [senior] doctors or surgeons" (C9).
Attitudes towards patient involvement in care decisions
Junior doctors expressed mixed attitudes about patient involvement in decisions. Despite post-training beliefs that patient involvement "will help to establish…better rapport with patients…because they're more informed and there's more trust" (J3), junior doctors also reported the "need to simplify things for the patient who makes the decision about their life… other than just giving information" (J8). Six junior doctors did, however, plan for greater patient involvement after they completed the EBM-SDM course: "I am now more inclined to include evidence-based discussions…in how I approach decisions that we present to patients…. I wouldn't have really brought it up as a topic [previously]" (J3).
Consultants also reported mixed attitudes to patient involvement in their care, with one participant stating that "it's good that they're enthusiastic about it but it's bad that it's this sort of modern attitude of 'my opinion's as good as your opinion' , even if my opinion is based on social media and newspaper reports" (C4).Six consultants expressed doubts about patients' ability to grasp complex medical concepts for decision-making, to "understand something as much as a clinician who's been doing it for 10, 20, 30 years" (C8).Three consultants strongly endorsed patient involvement, mostly believing that "at the end of the day…it's the patient's body, that they have to be comfortable with the treatment plan" (C1).
Consultant attitudes towards junior doctors' practice of EBM
Consultants differed in their opinions on whether junior doctors should practice EBM. Five consultants believed there were few roles for junior doctors in evidence-based decision-making, one stating: "they practice a very protocol driven medicine. And that's just historical and that's probably not a bad thing" (C2). The other five consultants, in contrast, stated that limited decision-making roles should exist for junior doctors: "doctors at any stage should be able to assess the patient and so they can influence decision-making, based on that" (C3).

Theme 3: organisational culture and EBM

Two subthemes were identified pertaining to the influence of organisational culture on practicing EBM: public versus private healthcare, and medical hierarchy.
Public vs. private healthcare
Junior doctors and consultants spoke of differences in EBM learning and practice between public and private healthcare settings. Six junior doctors reported that private healthcare settings, such as the academic health sciences center they were based in, facilitated the practice of EBM, because they had protected time for individual study and educational activities. This did not happen during their public hospital rotations, where junior doctors cited high patient numbers and associated workloads that were prioritised. One such junior doctor stated: "Today I've just been allocated a study day… I don't actually think that happens in public" (J4).
Four consultants' views aligned with those of junior doctors about the greater protected time available for learning in private settings. Three consultants stated junior doctors had greater opportunities for patient decision-making in the public system, for example, in the emergency department of public hospitals where "you see people who are coming in [to the emergency department] and often they'll see the junior doctors before they even see the senior doctor" (C6).
Medical hierarchy
Junior doctors and some consultants discussed the emphasis placed on following the instructions of the most senior consultants. Six junior doctors reported that they were rarely involved in decision-making, but rather followed the consultant's lead, regardless of whether the consultant's decisions were evidence driven. Prior to the EBM-SDM course, one junior doctor stated: "I think in some of my other terms, if I had asked, they [consultants] would just say 'this is just part of my experience'" (J2). She maintained this view after the course, recalling one instance when querying a guideline put in place by a consultant: "I know as a junior sometimes you get a bit of pushback if what you're recommending is not guideline driven" (J2).
Two consultants reported that their decision-making capacity was also restricted by their senior colleagues, one consultant claiming that this was "the consequence of the traditional school and all the experience, based on the decades of 'we always did it like that'" (C9). Another consultant spoke of the difficulties faced by those consultants who completed their medical training before EBM was introduced: "If you look at some of the older clinicians you can be forgiven for thinking that they're kind of stuck in, frozen in time, right? And that might be a generational thing, but because of this new focus on evidence-based learning and medicine in the nineties, these clinicians didn't have the benefit of that." (C3.) Three junior doctors reported that hierarchies were evident even among themselves, and not just between junior doctors and consultants, such that accredited registrars or fellows often held greater credibility than less experienced residents, interns, and unaccredited registrars. Two consultants stated that they only worked with fellows, not the more junior ranked doctors, whereas other consultants reported greater inclusivity of all junior doctors during decision-making.
Theme 4: understanding and practice of SDM and its role in EBM
Three sub-themes were identified relating to the understanding and practice of SDM and its role in EBM: Understanding and practicing SDM, the effect of hierarchy on the practice of person-centered care and SDM, and the role of junior doctors in the learning and practice of SDM.
Understanding and practicing SDM
Prior to the EBM-SDM course, four junior doctors could not correctly define what SDM meant, and six described SDM as one-way communication of evidence to patients.After the course, they claimed a greater understanding of SDM as part of person-centered care, and that "you need to have a good basis in EBM, to actually make sure the patient can be even involved in the discussion.So, the patient understands" (J4).Seven junior doctors believed that SDM and EBM should be taught together, whereas one did not agree: "I think we don't need to explicitly incorporate it, that it's a given" (J1).Given that the training level of junior doctors was highly varied (i.e., from intern to fellow), there was variability in how they understood and approached SDM.For example, fellows, the most experienced of the junior doctors, described using evidence to provide recommendations to patients rather than eliciting patient preferences whilst referring to evidence.One fellow stated: "I think most patients are really welcoming if you tell them that people have done it before, the percentage of people who do good, for example, and those that don't and they're willing to accept that" (J10).Consultants conveyed mixed definitions of SDM; some saw it as informed consent, and others saw it as the transfer of information from doctor to patient.All consultants pointed out the difficulties of SDM, with one highlighting that "it's really hard to get somebody to the level where they can make some sort of an educated decision" (C8).One consultant commented on the differences in attitudes towards SDM between older and younger colleagues: "younger clinicians are less likely to be as paternalistic [than older consultants], they're more willing to accept that patients have their own thoughts, even if they're unconventional and unrealistic" (C3).Surgeons and surgical trainees, comprising 72% of the study cohort, tended to view EBM and SDM as doctor-driven rather than patient-centered.For example, one neurosurgeon emphasised the important sources of evidence used for patient decisions: "So I always bring to the patient my experience, I bring the MDT [Multidisciplinary Team] meeting decision … and the literature" (C9).This contrasted with the perspective of non-surgical consultants.For example, a cardiologist highlighted the central role of the patient in the decision-making process: "I always think of evidence as the hard science and then for the decision-making process, about the application of that hard science to a particular context and … it's in that paradigm, that the patient's point of view is used to temper the evidence that you're presenting" (C6).
Effect of medical hierarchy on junior doctors' practice of person-centered care and SDM
Six junior doctors reported that, due to their place in the medical hierarchy, they tended not to practice SDM.One participant stated:
I actually try to hold off on doing that [practising SDM], personally, just because it's more of a consultant discussion at that stage. When a consultant leaves the room, the patient does actually have more questions, and sometimes I just reiterate what the consultant has already said. (J4.)
Ten junior doctors planned to increase their communication and person-centered care skills after the EBM-SDM course, for example, using EBM to find evidence that reassures a patient; skills that could be implemented now and expanded later to incorporate SDM.
Consultant perceptions of the role of junior doctors in SDM
Four consultants were of the view that junior doctors should not practice SDM due to their junior level.One consultant reported that junior doctors sometimes played a patient advocate role because they "often have an insight into some of those other levels [of patient care]" (C2).Another consultant considered providing junior doctors "the opportunity to be more involved in that [SDM] discussion" (C7) but cited time constraints as a barrier.
Discussion
This study explored how integrated EBM and SDM training can impact attitudes, understanding, and practice among junior doctors, and whether the attitudes and practice of their supervising consultants can influence those outcomes. Junior doctors demonstrated significant positive attitude changes towards EBM and SDM after the EBM-SDM course. Prior EBM training (during medical training or afterwards) was mostly didactic and focused on knowledge and skill acquisition, a common finding in other studies, and has not equipped junior doctors to practice EBM confidently in clinical settings [37,38]. Following our EBM-SDM course, not only did junior doctors' knowledge and skills improve, but they frequently referred to the benefits of including patients in their discussions about care, which indicated that they had expanded their understanding of EBM to incorporate aspects of person-centered care. Their intentions to be more person-centered were frequently based on using evidence to effectively communicate risks and benefits to patients, rather than having SDM conversations with patients where all options were described and decisions made together. However, there appeared to be a disconnect between the practice of SDM and the recognition of its practice. On several occasions, junior doctors facilitated SDM by answering patient questions after the consultant left the room, or by reiterating what the consultant said, but failed to recognise this as part of an SDM conversation with the patient.
Junior doctors also varied in their attitudes towards and practices of SDM. The more experienced junior doctors, the five fellows, tended to demonstrate a more doctor-centered rather than patient-centered approach to patient care than the less experienced junior doctors (i.e., residents). Junior doctors were at varying levels of their medical training, some of them closer to consultant-level practitioners than others, and may perceive and think about SDM differently depending on their training cohort. Furthermore, several fellows had worked as consultants in their home countries, which may have influenced the doctor-centered patterns of decision-making commonly found among consultants. Thus, our study identified that junior doctors' attitudes towards and practices of SDM are likely due to a lack of specific knowledge and understanding of SDM and limited prior training, as well as cultural conventions that may be associated with time and country of training.
Consultants varied greatly in their understanding of EBM and SDM, and in their views on whether either should be practised by junior doctors. Senior consultants who completed medical training before the formal introduction of EBM in the 1990s [39] appeared to be unfamiliar with and less accepting of EBM and SDM and expressed a reluctance for junior doctors to engage in either. In contrast, younger consultants who had prior exposure to EBM training and practice tended to appreciate the benefits of EBM for junior doctors and patients. In another study of junior doctors and senior anaesthetists, interviews indicated there was a link between career stage, workplace settings, and EBM attitudes [40]. In that study, senior anaesthetists (consultants) were reluctant to make decisions or change practice based on evidence in preference to their own experience and opinion [40]. Junior doctors regarded this reluctance to change as due to older age, but the consultants saw it as surrendering their professional autonomy [40]. Thus, there may be a tendency among more senior doctors to resist practising EBM in favour of using their own decision-making preferences, which carry a risk of cognitive bias and potentially suboptimal or obsolete decisions [40][41][42]. In addition, some studies have shown senior medical staff (consultants) have very little expertise in SDM with patients, thereby failing to become the role models in EBM-SDM that junior doctors need [43]. Senior doctors have also reported difficulty in using technology, thus preferring to ask colleagues for advice [44].
In our study, more senior consultants appeared to dominate the medical workforce hierarchy and exclude junior doctors and patients from decision-making. These consultants believed that decision-making should be underpinned by their experience, knowledge, and communities of practice. Thus, they did not prioritise decision-making linked to EBM and SDM, and consequently educational opportunities for junior doctors under their supervision were reduced. These findings support those of other studies concerning the impact of medical hierarchies on junior medical staff, where power is recognised to sit with senior medical staff positioned at the top of the hierarchy, thereby reducing the autonomy of those positioned lower in the hierarchy, such as junior doctors [40,45]. This has been reported to be particularly evident in surgical specialties, where decision-making is dominated by senior surgeons' experience rather than evidence [46]. Junior doctors learn to respect hierarchy from medical school, where they do not challenge authority to avoid unwanted impacts on their training and career progression [47][48][49]. The well-established medical hierarchy emerged as a barrier preventing junior doctors in our study from using the evidence-based decision-making skills learned in the EBM-SDM course, particularly if the evidence contradicted strongly held views and practices of senior consultants.
Of note was that the present study was conducted during the COVID-19 pandemic, a difficult and uncertain time for all medical professionals.In the Australian context, junior doctors have reported restrictive workplace cultures and behaviours, including being overlooked and undervalued by senior doctors, which contributed negatively to their psychological well-being during COVID-19 [49].This had important implications for doctors' welfare, workforce retention, and safe patient care that needed to be addressed through "positive workplace cultural interventions to engage, validate and empower junior doctors" [50].In contrast, junior doctors in our study, and in others, have reported that many consultants and senior medical staff were always supportive and approachable role models, not just during the pandemic, and helped to facilitate their trainees' well-being and progress [47,51].The potential contribution of such role models to facilitate and support EBM and SDM learning and practice may help to overcome some of the associated barriers [52].
Combining EBM and SDM training enabled junior doctors to realise there is more to EBM than the level of evidence, which was what most believed before the training. The combined course enabled them to consider how they would communicate the relevant evidence in a two-way conversation with the patient, and thus situated the principles of EBM within the broader context of patient needs and preferences. Several junior doctors commented that their awareness and practice of improved communication skills with patients had increased after the course, lending support to the effectiveness of the combined course and the likelihood that the learnings would be utilised in future. These outcomes also imply that EBM-SDM training has the potential to shift power dynamics within the medical hierarchy by expanding the skillset and abilities of junior doctors.
Another facilitator of combined EBM-SDM learning and practice reported in our study was the capacity of private healthcare facilities in Australia to provide protected time for educational activities.This contrasted with public healthcare facilities, where such opportunities are limited [53].Our study took place within a neurosurgery department where a half-day is set aside each week for learning and teaching meetings, including the EBM-SDM course.The meetings were co-ordinated by consultants, thereby enabling junior doctors to learn and practice new skills with consultants' support.In a similar way, consultants who recognise the benefits of EBM and SDM could act as unofficial champions, who provide further learning and teaching opportunities for junior doctors, whilst demonstrating and communicating those benefits to their senior colleagues.The idea of champions comes from literature demonstrating that colleagues or supervisors of junior clinicians can be a great source of assistance and support when it comes to learning and practicing skills associated with EBM [8].Such champions or role models have been recommended as an integral part of EBM teaching because they demonstrate to learners the 'how-to' of the application of EBM principles to clinical practice and individual patients [54].Within our study, this supportive culture, led by a champion or role model, was very beneficial.One of the neurosurgeon consultants took a keen interest in teaching EBM to junior doctors and he led by example, showing them how to use it in daily practice through patient care consultations, and ward rounds and by leading the EBM-SDM teaching during protected education time.The junior doctors responded with increased motivation to practice their EBM-SDM skills during educational meetings.This opportunity provided by a private healthcare facility could be an exemplar of EBM-SDM education in the Australian context that may be adapted by other institutions.
Future directions
A lack of prior learning and practice of EBM and SDM concepts among this sample of junior doctors echoes previous calls for improved basic and ongoing training in EBM and SDM skills [8,55].The recently updated Australian specialist training program [56] has cited the inclusion of EBM and SDM as separate skill sets, with an emphasis on skills and knowledge acquisition.However, there is now a framework providing core competencies that can underpin an EBM curriculum incorporating SDM [57].This is a promising initiative that could be adapted and used to meet the needs of institutions whilst identifying and managing barriers and facilitators to the learning and practice of EBM and SDM.Additionally, the capacity of consultants with prior EBM training and experience to act as champions of EBM-SDM could be further explored.
Future research opportunities include evaluation of the impacts of integrated EBM-SDM training content and strategies to determine optimal approaches for educators to adopt in both private and public settings.Future research should also focus on the efficacy of strategies to empower junior doctors to become more independent in using their EBM and SDM skills, such as training champions and consultants who want to help their junior doctor trainees develop skills and experience in EBM and SDM [52,58].Finally, further investigation is warranted into the significance of undertaking medical training either before or after the introduction of EBM in the 1990s, and how this impacts the medical hierarchy, EBM-SDM training and practice opportunities for junior doctors, and patient care.These investigations could incorporate other qualitative methods such as ethnography to fully capture perceived dynamics and cultural conventions within medical disciplines.
Strengths & limitations
This study has contributed to our knowledge of combined EBM-SDM training in the Australian context.A strength of the study was its emergent design, where consultant interviews in Phase 2 were added after data were analysed from junior doctor interviews in Phase 1.This approach enabled consultant interview schedules to further elucidate the barriers and facilitators associated with EBM and SDM learning and practice that emerged during Phase 1.The study was also strengthened by including two diverse, but linked participant groups, the junior doctors, and their supervising consultants, thus facilitating the collection and analysis of more than one source of relevant data that addressed the study aims.However, the study is not without its limitations.First, the modest sample size of the study, exacerbated by COVID-19 restrictions and the impact of the pandemic on the medical workforce, reduces the study's transferability to other cohorts and contexts.Second, junior doctors' limited understanding of SDM after the course may reflect a limitation of the course.Although SDM was introduced and discussed in the course, little time was provided for deliberate SDM practice and feedback; an issue that can be rectified in future training and research.Third, more males than females participated in the study which may have influenced the pattern of results and is an area for further research.
Conclusions
Most junior doctors reported positive attitude changes following EBM-SDM training that encompassed plans to increase patient involvement in their care through better communication and evidence-based shared decision-making. However, time constraints and the influence of the medical hierarchy were significant barriers for most junior doctors when learning and practising EBM and SDM. Despite these barriers, supportive consultants and protected educational time facilitated the learning and practice of EBM and SDM within the context of our study. To counter the reported barriers at our institution, there are opportunities for some consultants to become champions who make protected time available for EBM-SDM learning and practice opportunities. These findings may inform future research and training where integrated EBM and SDM learning and practice could be adapted to the unique contextual and cultural influences of each institution.
Table 1
Demographic details of participants
Table 2
Summary of key themes, sub-themes, and links to research questions

Prior to the EBM-SDM training course, most junior doctors were looking forward to developing skills in searching and critically appraising evidence: "I'd like a better understanding of what a good quality study is…if something is a RCT or cohort study that I want to be able to say, this is a good RCT or, this is a good cohort study" (J4). After the EBM-SDM training course, several junior doctors recommended further training to help them maintain and extend their skills. Some suggested EBM training should be provided for longer and include refresher training, and one suggested giving more emphasis to the SDM component "because this is the practical part of putting it into our daily life, applying it to patients" (J5).
EBM Evidence-based medicine, SDM Shared decision-making, J Junior doctor, C Consultant a Research question 1 b Research question 2 c Research question 3
Tails of Triangular Flows
Triangular maps are a construct in probability theory that allows the transformation of any source density to any target density. We consider flow-based models that learn these triangular transformations, which we call triangular flows, and study the properties of these triangular flows with the goal of capturing heavy-tailed target distributions. In one dimension, we prove that the density quantile functions of the source and target density can characterize properties of the increasing push-forward transformation and show that no Lipschitz continuous increasing map can transform a light-tailed source to a heavy-tailed target density. We further precisely relate the asymptotic behavior of these density quantile functions with the existence of certain function moments of distributions. These results allow us to give a precise asymptotic rate at which an increasing transformation must grow to capture the tail properties of a target given the source distribution. In the multivariate case, we show that any increasing triangular map transforming a light-tailed source density to a heavy-tailed target density must have all eigenvalues of the Jacobian unbounded. Our analysis suggests the importance of the source distribution in capturing heavy-tailed distributions and we discuss the implications for flow-based models.
Introduction
Increasing triangular maps are a recent construct in probability theory that can transform any source density to any target probability density [3]. The Knothe-Rosenblatt transformation [30; 18], [36, Ch. 1] gives a heuristic construction of an increasing triangular map for transporting densities that is unique (up to null sets) [3]. These transformations provide a unified framework for studying popular neural density estimation methods like normalizing flows [33; 32; 29] and autoregressive models [26; 14; 17; 35; 19], which offer a tractable method for evaluating a probability density [15]. Indeed, these methods are becoming increasingly attractive for the task of multivariate density estimation in unsupervised machine learning.
This work is devoted to studying the properties of triangular flows that learn increasing triangular transformations when the target density is a heavy-tailed distribution. Heavy-tail analysis studies phenomena governed by large movements and encompasses both statistical inference and probabilistic modelling [28]. Indeed, heavy-tail analysis is used extensively in diverse applications: in financial risk-modelling, where financial returns and risk-management calculations require heavy-tailed analysis [5; 10; 21]; in data networks, where heavy-tailed distributions are observed for file sizes, transmission rates, transmission durations and network traffic [24; 8; 20]; and in modelling insurance claim sizes and frequencies in order to set premiums efficiently and quantify the risk to the company [5; 10].
Specifically, we study triangular flows to represent multivariate heavy-tailed elliptical distributions, often used for modeling financial data and in the theory of portfolio optimization. Indeed, the basis of modern portfolio optimization relies on the Gaussian distribution hypothesis [23; 34; 31]. However, as demonstrated by multiple studies [9; 12; 16], the Gaussian distribution hypothesis cannot be justified for financial modelling, and elliptical distributions are the suggested alternative, particularly because they retain certain desirable practical properties of the normal distribution.
We begin our exposition in §3, where we show that in one dimension the density quantile functions of the source and the target probability density precisely characterize the slope of the (unique) increasing transformation. Subsequently, we give an exact characterisation of the degree of heavy-tailedness of a distribution based on the asymptotic properties of the density quantile function. This allows us to clearly characterize the properties of an increasing transformation required to push a source density to a target density with varying tail behaviour. Finally, we make precise the connection between the asymptotics of the density quantile function and the existence of higher-order moments of a distribution. We use this to give a precise rate (which accounts for the relative heaviness of the source and target densities) at which an increasing transformation must grow to capture the tail behaviour of the target density.

In §4, we extend these results to higher dimensions. We define multivariate heavy-tailed distributions as distributions whose marginals are heavy-tailed in all directions and show that any increasing triangular map from a light-tailed distribution to a heavy-tailed distribution must have all diagonal entries of the Jacobian matrix (and hence all eigenvalues and the determinant) unbounded. We discuss the implications of our findings for neural density estimation in §5. We highlight the trade-off between choosing an appropriate source density and the "complexity" of the transformation required to learn a target density. We provide all the proofs in §A.

We summarize our main contributions as follows: 1) we show that density quantiles precisely capture the properties of a push-forward transformation; 2) we relate the properties of density quantiles to the existence of functional moments and tail properties, allowing us to provide asymptotic rates for transformations required to capture heavy-tailed behaviour; 3) we reveal properties of density quantiles for certain classes of distributions, both in one dimension and in higher dimensions, that might be of independent interest; 4) we precisely study the properties of increasing maps required to capture heavy-tailed behaviour; 5) we reveal the trade-off between choosing a "complex" source density and an "expressive" transformation for representing target densities, and its implications for flow-based models.
Preliminaries
Consider two probability density functions p and q (with respect to the Lebesgue measure) over the source domain Z ⊆ R^d and the target domain X ⊆ R^d, respectively. There always exists a deterministic transformation T : Z → X such that T(z) ∼ q whenever z ∼ p. Specifically, by using the change of variables formula, i.e. x = T(z), a diffeomorphic function T can push forward a base random variable z ∼ p to a target random variable x ∼ q such that q is the push-forward of p, i.e.
$$q(x) = p(T^{-1}x)\,\lvert\nabla T(T^{-1}x)\rvert^{-1} =: T_\# p,$$
where |∇T(z)| is the absolute value of the determinant of the Jacobian of T.
Fortuitously, it is always possible to construct such a transformation T: we call a mapping T : R^d → R^d triangular if its j-th component T_j only depends on the first j variables z_1, …, z_j. The name "triangular" comes from the fact that the Jacobian of T is a triangular matrix function. We call T increasing if for all j ∈ [d], T_j is an increasing function of z_j.
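To make the triangular structure and the change-of-variables formula concrete, the following minimal sketch (ours, not the paper's; the two-dimensional affine form and all names are illustrative assumptions) evaluates the push-forward density of an increasing triangular map, exploiting the fact that the Jacobian determinant of a triangular map is the product of its diagonal entries.

```python
import numpy as np
from scipy.stats import norm

# A minimal increasing triangular map T : R^2 -> R^2 (illustrative choice):
#   T1(z1)     = a1 * z1 + b1                       (increasing in z1, a1 > 0)
#   T2(z1, z2) = a2(z1) * z2 + b2(z1), a2(z1) > 0   (increasing in z2)
a1, b1 = 2.0, 0.5

def a2(z1):           # positive scale depending on the first coordinate
    return np.exp(0.3 * z1)

def b2(z1):
    return 0.1 * z1

def T(z):
    z1, z2 = z
    return np.array([a1 * z1 + b1, a2(z1) * z2 + b2(z1)])

def T_inv(x):
    x1, x2 = x
    z1 = (x1 - b1) / a1
    z2 = (x2 - b2(z1)) / a2(z1)
    return np.array([z1, z2])

def log_pushforward_density(x):
    """log q(x) = log p(T^{-1} x) - log |det grad T(T^{-1} x)|.
    The Jacobian of a triangular map is triangular, so its determinant
    is the product of the diagonal entries a1 and a2(z1)."""
    z = T_inv(x)
    log_p = norm.logpdf(z).sum()          # standard normal source density p
    log_det = np.log(a1) + np.log(a2(z[0]))
    return log_p - log_det

x = T(np.array([0.3, -1.2]))
print(log_pushforward_density(x))
```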
Theorem 1 ([3]). For any two densities p and q over Z = X = R^d, there exists a unique (up to null sets of p) increasing triangular map T : Z → X so that q = T_# p.
Before proceeding further, let us first give an example of a construction of an increasing triangular transformation T to help better understand Theorem 1. This example will subsequently form the basis of our theoretical exposition in the paper.

Example 1 (Increasing Rearrangement). Let p and q be univariate probability densities with distribution functions F and G, respectively. One can define the increasing map $T := G^{-1} \circ F$. Indeed, if Z ∼ p, one has that F(Z) ∼ uniform. Also, if U ∼ uniform, then G⁻¹(U) ∼ q.

Theorem 1 is a rigorous iteration of this univariate argument by repeatedly conditioning (a construction popularly known as the Knothe-Rosenblatt rearrangement [30; 18]). Note that the increasing property is essential for claiming the uniqueness of T. Thus, triangular mappings constitute an appealing function class for learning a target density. Indeed, many recent generative models in unsupervised machine learning are precisely special cases of this approach [15]. In this paper, we characterize the properties of such increasing triangular mappings T required to learn a heavy-tailed target density q from a source density p.
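A minimal numerical illustration of Example 1, using SciPy's cdf and ppf as F and G⁻¹; the source/target pair (normal pushed to Cauchy) is our choice for illustration, not the paper's.

```python
import numpy as np
from scipy.stats import norm, cauchy

# Increasing rearrangement T = G^{-1} o F (Example 1), here pushing a
# standard normal source to a standard Cauchy target.
def T(z):
    return cauchy.ppf(norm.cdf(z))   # G^{-1}(F(z))

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
x = T(z)

# Sanity check: empirical quantiles of T(Z) should match Cauchy quantiles.
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(x, q), cauchy.ppf(q))
```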
Properties of Univariate Transformations
Increasing rearrangement gives the unique increasing transformation between two densities (cf. Example 1). Conveniently, we can analyze the slope of this transformation analytically. For a probability density p over a domain Z ⊆ R, let F_p : Z → [0, 1] denote the cumulative distribution function of p, Q_p : [0, 1] → Z be the quantile function given by Q_p = F_p⁻¹, and fQ_p : [0, 1] → R₊ be the density quantile function, i.e. the reciprocal of the derivative of the quantile function, fQ_p = 1/Q′_p. The slope of T such that q := T_# p, where p, q are two densities, is given by the ratio of the density quantile functions of the source and the target distribution respectively, i.e.
$$T'(z) = \frac{fQ_p(F_p(z))}{fQ_q(F_p(z))}.$$
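As a sanity check of this slope formula, one can compare a finite-difference derivative of T with the ratio of density quantile functions; the source/target pair below (normal to exponential) is an illustrative choice.

```python
import numpy as np
from scipy.stats import norm, expon

# Numerical check of T'(z) = fQ_p(u) / fQ_q(u) with u = F_p(z),
# for p standard normal and q standard exponential (illustrative).
def T(z):
    return expon.ppf(norm.cdf(z))

z = 1.3
u = norm.cdf(z)
h = 1e-6
numeric_slope = (T(z + h) - T(z - h)) / (2 * h)
# Density quantile functions: fQ_p(u) = p(Q_p(u)), fQ_q(u) = q(Q_q(u)).
analytic_slope = norm.pdf(norm.ppf(u)) / expon.pdf(expon.ppf(u))
print(numeric_slope, analytic_slope)   # should agree to several digits
```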
Theorem 2. Let p and q be two densities and T be an increasing map such that q := T_# p. If the density quantile fQ_p of p shrinks to 0 at a rate slower than the density quantile fQ_q of q, then T′(z) is asymptotically unbounded.
Clearly, the density quantile functions precisely characterize the slope of an increasing transformation. Moreover, we can further characterise the asymptotic properties of an increasing transformation using the asymptotics of the density quantiles of distributions, following [27; 1], who proved that the limiting behaviour of any density quantile function as u → 1⁻ (corresponding to right tails) is, up to slowly varying factors,
$$fQ(u) \sim c\,(1-u)^{\alpha}, \qquad u \to 1^-,$$
for some constant c > 0. Then, T such that q := T_# p is given by T = Q_q ∘ F_p with slope $T'(z) = fQ_p(F_p(z))/fQ_q(F_p(z))$. Similarly, for p ∼ uniform[0, 1], F_p(z) = z and fQ_p(u) = 1. Therefore, T = Q_q and $T'(z) = 1/fQ_q(z)$. Additionally, we can also define the limiting behaviour of the quantile function Q(u) as u → 1⁻ through $Q'(u) = 1/fQ(u) \sim c^{-1}(1-u)^{-\alpha}$. The parameter α is called the tail-exponent and defines the (right) tail-area of a distribution. Indeed, for two distributions with tail exponents α₁ and α₂, if α₁ > α₂, the corresponding distribution has heavier tails relative to the other. The tail exponent α allows us to classify distributions by their degree of heaviness as follows, writing H_α for the class of distributions with tail-exponent α. Following [27], if 0 < α < 1 the distributions are short-tailed, e.g. the uniform distribution. Here, we further show that a distribution has support bounded from above if and only if the right density quantile function has tail-exponent 0 < α < 1.
Proposition 1. A distribution p has right tail-exponent 0 < α < 1 if and only if p has a support bounded from above.
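The tail-exponent classification can be probed numerically by regressing log fQ(u) on log(1 − u) as u → 1⁻; a rough sketch (our distribution choices and grid, for illustration only):

```python
import numpy as np
from scipy.stats import uniform, norm, expon, cauchy

# Estimate the tail exponent alpha from fQ(u) ~ c (1-u)^alpha as u -> 1-,
# via the slope of log fQ(u) against log(1-u) (illustrative check).
def density_quantile(dist, u):
    return dist.pdf(dist.ppf(u))

u = 1 - np.logspace(-6, -10, 5)   # u values approaching 1
for name, dist in [("uniform", uniform), ("normal", norm),
                   ("exponential", expon), ("cauchy", cauchy)]:
    fq = density_quantile(dist, u)
    slope = np.polyfit(np.log(1 - u), np.log(fq), 1)[0]
    print(f"{name:12s} estimated alpha = {slope:.2f}")
# Expected: uniform near 0 (short-tailed), normal and exponential near 1
# (medium-tailed), cauchy near 2 (heavy-tailed, alpha > 1).
```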
H_1 corresponds to a family of distributions for which all higher-order moments exist. However, these distributions are relatively heavier tailed than short-tailed distributions and were termed medium-tailed distributions in [27], e.g. the normal and exponential distributions. Additionally, for α = 1, a more refined description of the asymptotic behaviour of the quantile function can be given in terms of a shape parameter β: β determines the degree of heaviness within medium-tailed distributions; the larger the value of β, the heavier the tails of the distribution, e.g. the exponential distribution has β = 1 and the normal distribution has β = 0.5. Based on this, we can define the sub-classes H_{1,β} of medium-tailed distributions with shape parameter β. Therefore, we have H_1 = ∪_{0≤β≤1} H_{1,β} and L = ∪_{0<α≤1} H_α for the class of light-tailed distributions. Finally, heavy-tailed distributions have α > 1, e.g. the Cauchy and t_ν distributions. We next give a precise characterization of the asymptotic properties of a diffeomorphic transformation from one distribution to the other with varying tail behaviour in the following corollary of Theorem 2:

Corollary 1. Let p ∈ H_{α_p} be a source distribution, q ∈ H_{α_q} be a target distribution and T be an increasing transformation such that q := T_# p. Then,
• if α_p > α_q, the slope of T converges asymptotically to 0;
• if α_p = α_q = 1 and β_p = β_q, the slope of T converges asymptotically to a finite constant;
• if α_p = α_q = 1 and β_p > β_q, the slope of T converges to zero asymptotically;
• if α_p < α_q, the slope of T asymptotically diverges to infinity.
Let us give another example to underscore the importance of using density quantiles to define tail-behaviour and the increasing push-forward transformations.

Example 3 (Pushing uniform to normal). Let p be uniform over [0, 1] and q ∼ N(µ, σ²) be normally distributed. The unique increasing transformation is
$$T(z) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2z - 1) = \mu + \sigma\sqrt{2}\sum_{k=0}^{\infty} \frac{c_k}{2k+1}\left(\frac{\sqrt{\pi}}{2}(2z-1)\right)^{2k+1},$$
where $\operatorname{erf}(t) = \frac{2}{\sqrt{\pi}}\int_0^t e^{-s^2}\,ds$ is the error function, whose inverse was Taylor expanded in the last equality. The coefficients are c₀ = 1 and $c_k = \sum_{m=0}^{k-1} \frac{c_m c_{k-1-m}}{(m+1)(2m+1)}$. We observe that the derivative of T is an infinite sum of squares of polynomials. Both the uniform and normal distributions are considered "light-tailed" (all their higher moments exist and are finite). However, an increasing transformation from the uniform to the normal distribution has unbounded slope. Density quantile functions reveal this precisely: the normal distribution (α = 1) is "relatively" heavier tailed than the uniform distribution (α < 1), explaining the asymptotic divergence of this transformation. Indeed, the density quantiles help to provide a more granular definition of heavy-tailedness based on the tail-exponent α and shape exponent β.
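Numerically, the divergence in Example 3 is easy to see: the slope of the uniform-to-normal map is the reciprocal of the normal density quantile function, which vanishes at the endpoints (a small illustrative check):

```python
import numpy as np
from scipy.stats import norm

# Example 3 numerically: T(z) = Q_q(z) pushes uniform[0,1] to N(mu, sigma^2);
# its slope T'(z) = 1/fQ_q(z) blows up as z -> 1.
mu, sigma = 0.0, 1.0
for z in (0.9, 0.99, 0.999, 0.999999):
    slope = 1.0 / norm.pdf(norm.ppf(z, mu, sigma), mu, sigma)
    print(f"z = {z:<9} T'(z) = {slope:.3g}")
```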
Given a random variable X ∼ p, the expected value of a function g(X) can be written in terms of the quantile function as
$$\mathbb{E}_p[g(X)] = \int_0^1 g(Q_p(u))\,du.$$
This allows us to draw a precise connection between the degree of heavy-tailedness of a distribution, as given by the density quantile function (and tail exponent α), and the existence of its higher-order moments.
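A quick quadrature check of this identity, with g(x) = x² and p the standard normal (so the integral should return the variance, 1); the use of scipy quadrature is our illustrative choice.

```python
from scipy.stats import norm
from scipy.integrate import quad

# Check E_p[g(X)] = integral_0^1 g(Q_p(u)) du for p = N(0,1), g(x) = x^2.
value, err = quad(lambda u: norm.ppf(u) ** 2, 0.0, 1.0, limit=200)
print(value)   # close to 1.0, the variance of N(0,1)
```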
Based on these observations, we can equivalently define heavy-tailed distributions as follows.

Definition 3 (ω⁻¹-heavy distribution). A distribution p is called ω⁻¹-heavy if, for all 0 < µ < ω, $\mathbb{E}_p[e^{|z|^{\mu}}]$ exists and is finite, but for µ ≥ ω, $\mathbb{E}_p[e^{|z|^{\mu}}]$ is infinite or does not exist.
These definitions allow us to finally give the rate at which an increasing transformation must grow to exactly represent the tail properties of a target density given some source density. Concretely,

Theorem 4. Let p be an ω_p⁻¹-heavy distribution, q be an ω_q⁻¹-heavy distribution and T be a diffeomorphism such that q := T_# p. Then for small ε > 0, $T(z) = o\big(|z|^{\omega_p/(\omega_q - \epsilon)}\big)$.
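Theorem 4 can be illustrated with p standard normal (ω_p = 2, since E[e^{|z|^µ}] is finite exactly for µ < 2) and q standard exponential (ω_q = 1): the exact increasing map grows like z²/2, consistent with the predicted rate. A small check (using the numerically stable survival-function form, our choice):

```python
import numpy as np
from scipy.stats import norm

# p = N(0,1) has omega_p = 2 and q = Exp(1) has omega_q = 1, so by
# Theorem 4 T(z) = o(|z|^{2/(1-eps)}). The exact map
# T(z) = Q_q(F_p(z)) = -log(1 - Phi(z)) indeed grows like z^2 / 2.
def T(z):
    return -np.log(norm.sf(z))   # stable version of expon.ppf(norm.cdf(z))

for z in (2.0, 4.0, 6.0, 8.0):
    print(z, T(z), z**2 / 2)
```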
Properties of Multivariate Transformations
We recall that there exists a unique bijective increasing triangular map T : R^d → R^d that transforms a high-dimensional joint source density p to a target density q. The j-th component T_j of T is given by $x_j = T_j(z_1, \ldots, z_{j-1}, z_j) = F^{-1}_{q,j|<j} \circ F_{p,j|<j}(z_j)$, where F_{q,j|<j} is the cdf of the conditional distribution of X_j given X_{<j} := (X_1, …, X_{j−1}). Analogous to our results in §3, we shall characterise the properties of T by studying the properties of |∇T| required to push p to q with varying tail properties. Evidently, for a triangular transformation T, the determinant of the Jacobian, i.e. |∇T|, is just the product of the diagonal entries, where each diagonal entry is given by the ratio of the conditional density quantile functions,
$$\frac{\partial T_j}{\partial z_j} = \frac{fQ_{p,j|<j}\big(F_{p,j|<j}(z_j)\big)}{fQ_{q,j|<j}\big(F_{p,j|<j}(z_j)\big)}.$$
Hence, by being able to characterize the properties of the conditional density quantiles, we shall be able to characterize the properties of T. However, we first define the notion of tail behaviour for multivariate distributions. A multivariate distribution is heavy-tailed if the marginal distributions in every direction on the (high-dimensional) sphere are heavy-tailed, i.e. a distribution F(x), x ∈ R^d, is said to be heavy-tailed if for all vectors v ∈ B_1, where B_r = {v ∈ R^d : ‖v‖ = r}, and for all λ > 0, $\int e^{\lambda v^{\top} x}\, F(dx) = \infty$. This definition automatically implies that the univariate random variable v^⊤X is heavy-tailed. In particular, we will consider the class of elliptical distributions since they admit the same tail behaviour in every direction.

Definition 4 (Elliptical distribution, [4]). A random vector X in R^d is said to be elliptically distributed, denoted by X ∼ ε_d(µ, Σ, F_R) with rank(Σ) = r, if and only if there exist µ ∈ R^d, a matrix A ∈ R^{d×r} with maximal rank r and a non-negative random variable R, such that X =_d µ + R A U^{(r)}, where the random r-vector U^{(r)} is independent of R and is uniformly distributed over the unit sphere in R^r, Σ = A Aᵀ, and F_R is the cumulative distribution function of the variate R.
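For a concrete instance of the conditional-cdf construction above, consider pushing N(0, I) to a correlated Gaussian in two dimensions, where F_{q,2|1} is available in closed form; the correlation value is our illustrative choice.

```python
import numpy as np

# Knothe-Rosenblatt map in 2-D pushing N(0, I) to the correlated Gaussian
# N(0, [[1, rho], [rho, 1]]); the Gaussian conditionals are closed-form.
rho = 0.8

def T(z1, z2):
    x1 = z1                                   # marginal of x1 is N(0,1)
    # x2 | x1 ~ N(rho*x1, 1 - rho^2); composing its inverse cdf with the
    # standard normal cdf of z2 cancels the Phi / Phi^{-1} pair:
    x2 = rho * x1 + np.sqrt(1 - rho**2) * z2  # = F^{-1}_{q,2|1}(Phi(z2))
    return x1, x2

rng = np.random.default_rng(2)
z = rng.standard_normal((100_000, 2))
x1, x2 = T(z[:, 0], z[:, 1])
print(np.corrcoef(x1, x2)[0, 1])              # close to rho
```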
For ease in developing our results, we consider only full-rank elliptical distributions, i.e. rank(Σ) = d. The spherical random vector U^{(r)} produces elliptically contoured density surfaces due to the transformation A. The density function of an elliptical distribution as defined above is given by
$$q(x) = \lvert\Sigma\rvert^{-1/2}\, g_R\big((x-\mu)^{\top} \Sigma^{-1} (x-\mu)\big),$$
where the function g_R : [0, ∞) → [0, ∞) is related to f_R, the density of R, by the equation
$$f_R(r) = \frac{2\pi^{d/2}}{\Gamma(d/2)}\, r^{d-1} g_R(r^2),$$
where 2π^{d/2}/Γ(d/2) is the area of a unit sphere in R^d. Thus, the tail properties of a random variable with an elliptical distribution, i.e. X ∼ ε_d(µ, Σ, F_R), are determined by the generating random variable R. Indeed, X is heavy-tailed in all directions if the univariate generating random variable R is heavy-tailed. Define
$$\mu_l := \int_0^{\infty} r^{l+d-1} g_R(r^2)\,dr.$$
Intuitively, µ_l is (proportional to) the l-th order moment of f_R when l is integer-valued. We can now generalize Definition 3 to the multivariate case: an elliptical random vector X ∼ ε_d(µ, Σ, F_R) is α-heavy if µ_l < ∞ for all 0 < l < α, but not for l ≥ α. Elliptical distributions have certain convenient properties: an affinely transformed elliptical random vector is elliptical. Let a ∈ R^k and B ∈ R^{k×d}, and consider the transformed vector Y = a + BX where X =_d µ + R A U^{(r)}. Then Y =_d (a + Bµ) + R B A U^{(r)}. In particular, if P ∈ {0, 1}^{d×d} is a permutation matrix, then Y := PX is also elliptically distributed and belongs to the same location-scale family as X. Additionally, the marginal and conditional distributions of an elliptical distribution are also elliptical.
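The stochastic representation in Definition 4 translates directly into a sampler; the sketch below (dimension, scale matrix and the heavy-tailed choice R = |t₂| are our assumptions) makes the role of the generating variate R explicit.

```python
import numpy as np

# Sample from an elliptical distribution X = mu + R * A * U (Definition 4):
# U uniform on the unit sphere, R a non-negative generating variate.
# With R = |t_2| the tails are heavy in every direction (illustrative).
rng = np.random.default_rng(1)
d, n = 3, 10_000
mu = np.zeros(d)
A = np.linalg.cholesky(np.array([[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]]))

U = rng.standard_normal((n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)    # uniform on the sphere
R = np.abs(rng.standard_t(df=2.0, size=(n, 1)))  # heavy-tailed variate
X = mu + R * (U @ A.T)

# The marginal in any direction v is heavy-tailed:
v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
proj = X @ v
print(np.quantile(np.abs(proj), [0.5, 0.99, 0.9999]))
```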
Proposition 2. Under the same assumptions as above, the conditional distributions of an α-heavy elliptically distributed random vector are themselves α-heavy elliptical distributions.

We now state the main result of this section: an increasing triangular map T that transforms a light-tailed elliptical distribution to a heavy-tailed elliptical distribution has all diagonal entries of |∇T| unbounded.
Theorem 5. Let Z ∼ ε_d(0, I, F_S) and X ∼ ε_d(0, I, F_R) be two random variables with elliptical distributions with densities p and q respectively, where F_R is heavier tailed than F_S. If T : Z → X is an increasing triangular map such that q := T_# p, then all diagonal entries of |∇T| are unbounded. Moreover, the determinant of the Jacobian of T is also unbounded.
We next give a general result for any transformation.

Theorem 6. Let Z ⊆ R^d be a random variable with density function p(z) that is light-tailed and X ⊆ R^d be a target random variable with density function q(x) that is heavy-tailed. If T(z) = (T_1(z), T_2(z), …, T_d(z)) pushes forward p(z) to q(x), i.e. T : Z → X such that q = T_# p, then there exists an index i ∈ [d] such that ∇_z T_i is unbounded.

Proof. We provide the proof in Appendix A.
Triangular Flows and Approximation
Neural density estimation methods like autoregressive models [25; 2; 19; 35] and normalizing flows [29; 33; 32] provide a tractable way to evaluate the exact density and are increasingly being used for the purpose of multivariate density estimation in machine learning [17; 6; 7; 26; 35; 14]. Invariably, these methods aim to learn a bijective, invertible and increasing transformation T from a simple, known source density to a desired target density such that the inverse T⁻¹ and the Jacobian |∇T| are easy to compute.
As discussed in [15], most autoregressive models and normalizing flows at their core implement exactly a triangular map, i.e. they learn an increasing triangular transformation T such that x = T(z). [17] considered the affine map T_j(z_1, …, z_{j−1}, z_j) = µ_j(z_{<j}) + σ_j(z_{<j}) · z_j. [14] alternatively replaced the affine form of [17] with a univariate neural network, and [15] proposed to use the primitive of a univariate sum-of-squares of polynomials as the approximation of an increasing function. [13] and [26] proposed efficient implementations of these methods based on affine maps using binary masks that compute all the parameters of the transformation in a single pass of the network. Interestingly, all these methods compose several triangular maps in the hope that this composition of functions is "complex" enough to approximate any generic triangular map.
Here, we argue that there are two ways to learn a target density q. First, as we discussed in Sections 3–4, we can choose an appropriate base density p such that the resulting triangular transformation from p to q can be represented using simpler triangular transformations that are Lipschitz continuous; or, we can choose a base density p from a simple class of distributions (say, Gaussian with identity covariance) and learn a "complex" triangular transformation via composition of several triangular transformations. However, we note here that composing several triangular maps is essentially tantamount to converting the source density to a more complex base density such that the final composition of the triangular map transforms this to the target density. We propose an alternative that can allow for simpler transformations to the target density q by considering a more flexible class of source densities than the Gaussian distribution. One way would be to parametrize the source density as an elliptical distribution where the generating variate R is from a Student-t distribution t_ν with ν degrees of freedom, where ν is a parameter to be learned along with the parameters of the transformation. It is evident from our exposition in Section 4 that such a model would require only a Lipschitz continuous triangular map when learning heavy-tailed distributions.
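A minimal sketch of this proposal in PyTorch, fitting a Student-t source with learnable degrees of freedom ν together with a one-dimensional affine (hence Lipschitz) map by maximum likelihood; the training data, parametrization and optimizer settings are all illustrative assumptions, not the paper's.

```python
import torch

# Learn nu jointly with an affine map T(z) = shift + exp(log_scale) * z.
log_nu = torch.tensor(1.0, requires_grad=True)     # nu = exp(log_nu) > 0
log_scale = torch.tensor(0.0, requires_grad=True)
shift = torch.tensor(0.0, requires_grad=True)

def log_prob(x):
    base = torch.distributions.StudentT(df=log_nu.exp())
    z = (x - shift) * torch.exp(-log_scale)        # T^{-1}(x)
    return base.log_prob(z) - log_scale            # change of variables

# Fit by maximum likelihood to heavy-tailed data (here: Cauchy = t_1 samples).
data = torch.distributions.Cauchy(0.0, 1.0).sample((5000,))
opt = torch.optim.Adam([log_nu, log_scale, shift], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = -log_prob(data).mean()
    loss.backward()
    opt.step()
print("learned nu:", log_nu.exp().item())          # should drift toward ~1
```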
A related question is how well one can approximate a distribution q with another distribution q̃ := T_# p, where q is heavy-tailed, p is light-tailed, but T is not flexible enough to push the tails, so that q is heavier than q̃. One can consider several similarity metrics for this task. Let us start with the Wasserstein distance, most natural for the flow theory. We wish to find a lower bound on the approximation error
$$W_l(q, \tilde q) = \inf_{\gamma \in \Pi(q, \tilde q)} \int |x - y|^l \, d\gamma(x, y),$$
where Π(q, q̃) is the set of all measures on R^d × R^d with marginals q and q̃ on the first and the second factors. Here we have two situations. First, assume that q does not have the l-th moment. Then, because q̃ is lighter than q, q̃ ≠ q, and because q does not have l-th moments, W_l(q, q̃) = ∞. The only possibility to have a finite distance in this case is exactly if q = q̃, and then the distance is zero. Alternatively, assume that q has the l-th moment. Then W_l(q, q̃) is finite. The measure q is Radon (as a finite measure on a second-countable space). Because the set of finitely-supported Radon measures is dense in the metric space of Radon measures with the W_l-distance [22], one can approximate q arbitrarily well with a finitely-supported Radon measure q₀. Hence, varying T, one can find q̃ arbitrarily close to q₀.
One can do a similar analysis with f-divergences. The existence of the integral
$$D_f(q \,\|\, \tilde q) = \int_{\mathbb{R}^d} f\Big(\frac{q}{\tilde q}\Big)\, \tilde q \, dx$$
depends on the tail behaviour of both distributions, among other properties. However, if the integral exists and is finite, one can write it as an integral over a compact set C plus an integral over the tails, and make the latter as small as desired by simply increasing C. Hence, heaviness determines the possibility of approximation. When the target distribution q has very heavy tails, the approximation reduces to a representation problem, and one needs a flexible enough transformation T in order to make T_# p as heavy as q.
Conclusion
We studied the properties of triangular flows for capturing heavy-tailed distributions. We showed that density quantile functions play a central role in characterising the properties of increasing push-forward maps. Subsequently, we proved that for a triangular flow, all eigenvalues of the Jacobian are unbounded when pushing a light-tailed distribution to a heavy-tailed distribution. We revealed properties of quantile and density quantile functions and related them to both the existence of functional moments and the heavy-tailedness of a distribution, which can be of independent interest. As a by-product of our analysis, we demonstrated the trade-off between the complexity of the source distribution and the expressiveness of transformations in capturing target densities in generative models. This work opens the possibility for multiple future directions: an interesting line of research will be to conduct holistic experiments to systematically analyze our results, for example by considering flexible source distributions with parameters that can be trained along with the model. Another direction will be to analyze general flows that are non-triangular. Further, application of these insights to real-world problems in finance, insurance and networks might also be interesting.
A Proofs

Proposition 1. A distribution p has right tail-exponent 0 < α < 1 if and only if p has a support bounded from above.

Proof. If 0 < α < 1, then Q′(u) = 1/fQ(u) ∼ c⁻¹(1 − u)^{−α} is integrable near u = 1, so Q(u) converges to a finite limit as u → 1⁻ and the support of p is bounded from above. A similar argument proves the reverse direction.
For the connection between the shape parameter and the existence of exponential moments, write $\mathbb{E}_p[e^{|z|^{\omega}}] = \int_0^1 e^{|Q_p(u)|^{\omega}}\,du$ and split the integral at a point ε close to 1. The first integral is finite because the integrand is non-singular. For the second integral, we can use the asymptotic behaviour of the quantile function, $Q(u) \sim (\log \tfrac{1}{1-u})^{\gamma}$, by choosing ε very close to 1. Subsequently, the integral exists and converges if and only if 1 − ωγ > 0 ⟺ ω < 1/γ.
Proof. The integral
$$\int e^{|x|^{\omega_q - \epsilon}}\, q(x)\, dx = \int e^{|T(z)|^{\omega_q - \epsilon}}\, p(z)\, dz$$
converges for 0 < ε < ω_q, because q is ω_q⁻¹-heavy. Because T is a univariate diffeomorphism, it is a strictly monotone function. Without loss of generality, let us consider T to be a positive increasing function and investigate the right asymptotic. Consider the function T(z)^{ω_q−ε}/z^{ω_p} for big positive z. Assume there is a sequence {z_i}_{i=1}^∞ such that lim_i z_i = +∞ and the sequence T(z_i)^{ω_q−ε}/z_i^{ω_p} does not converge to zero. In other words, there exists a > 0 such that for any N > 0 there exists z_j > N such that T(z_j)^{ω_q−ε}/z_j^{ω_p} > a. Let us work with this infinite sub-sequence {z_j}. Because T(z) is an increasing function, we can estimate the integral above from below by its left Riemann sum with respect to the sequence of points {z_j}. Since p is ω_p⁻¹-heavy, the series on the right-hand side diverges as a left Riemann sum of a divergent integral. But this contradicts the convergence of the integral on the left-hand side. Hence, our assumption was wrong and for all sequences {z_i} we have lim_i T(z_i)^{ω_q−ε}/z_i^{ω_p} = 0. Hence, |T(z)|^{ω_q−ε} = o(|z|^{ω_p}), which leads to the desired result that |T(z)| = o(|z|^{ω_p/(ω_q−ε)}).
Proof. The density function of the conditional p(x | X_1 = x_1) is proportional to g_R((x − µ*)ᵀ Σ*⁻¹ (x − µ*)), where x ∈ R^{d₂} and g_R is the same function as for the distribution of X (see [4]). Then, because it is a d₂-dimensional elliptical distribution, it is α-heavy iff $\mu_l = \int_0^{\infty} r^{l+d_2-1} g_R(r^2)\,dr < \infty$ for all 0 < l < α. It is given that X is α-heavy, which is equivalent to $\int_0^{\infty} r^{l+d-1} g_R(r^2)\,dr < \infty$ for all 0 < l < α. Because d = d₁ + d₂ ≥ d₂, one gets that $\int_1^{\infty} r^{l+d_2-1} g_R(r^2)\,dr \le \int_1^{\infty} r^{l+d-1} g_R(r^2)\,dr < \infty$, so the conditional distribution is α-heavy as well.

Theorem 5. Let Z ∼ ε_d(0, I, F_S) and X ∼ ε_d(0, I, F_R) be two random variables with elliptical distributions with densities p and q respectively, where F_R is heavier tailed than F_S. If T : Z → X is an increasing triangular map such that q := T_# p, then all diagonal entries of |∇T| are unbounded. Moreover, the determinant of the Jacobian of T is also unbounded.
Proof. We need to show that each diagonal entry ∂T_j/∂z_j of ∇T is unbounded. By the ratio formula for the conditional density quantile functions, all we need to show is that the generating variate R* of the conditional distribution for the target is heavier than the generating variate S* of the conditional distribution of the source. From §3, we know that the tail exponent in the asymptotics of the density quantile function characterizes the degree of heaviness. Furthermore, we also know that the asymptotic behaviour of the density quantile function is directly related to the asymptotic behaviour of the density function, since if f is a density function, the cdf is given by $F(x) = \int_{-\infty}^{x} f(t)\,dt$, the quantile function therefore is Q = F⁻¹, and the density quantile function is the reciprocal of the derivative of the quantile function, i.e. fQ = 1/Q′. Hence, we need to ensure that, asymptotically, the density of R* is heavier than the density of S*. Using the result for the cdf of a conditional distribution given by Eq. (15) in [4], the tail of the conditional generating variate matches that of the joint generating variate up to a factor determined by d₁, the dimension of the partition that is being conditioned upon. Since R is heavier tailed than S, we have that R* is heavier tailed than S* for all the conditional distributions.
Theorem 6. Let Z ⊆ R^d be a random variable with density function p(z) that is light-tailed and X ⊆ R^d be a target random variable with density function q(x) that is heavy-tailed. If T(z) = (T_1(z), T_2(z), …, T_d(z)) pushes forward p(z) to q(x), i.e. T : Z → X such that q = T_# p, then there exists an index i ∈ [d] such that ∇_z T_i is unbounded.
Proof. Suppose, for contradiction, that every ∇_z T_i is bounded. Partition R^d into 2^d sets U_k, k ∈ [2^d], i.e. R^d = ∪_{k=1}^{2^d} U_k, such that if a = (a₁, a₂, …, a_d) ∈ U_i and b = (b₁, b₂, …, b_d) ∈ U_j with i ≠ j, then there exists at least one index m ∈ [d] such that sign(a_m) ≠ sign(b_m). Subsequently, we can rewrite the integral
$$\int_{\mathbb{R}^d} e^{\lambda u^{\top} T(z)}\, p(z)\, dz = \sum_{k=1}^{2^d} \int_{U_k} e^{\lambda u^{\top} T(z)}\, p(z)\, dz.$$
We will prove that each integral over the set U_k is finite.
Since p(z) is light-tailed, we know that for any u ∈ B₁ there exists a λ > 0 such that $\int_{\mathbb{R}^d} e^{\lambda u^{\top} z}\, p(z)\, dz < \infty$. Choose any u ∈ B₁; then, for a λ > 0 chosen in terms of the bound M on the gradients ∇_z T_i, the above integral is finite. This directly implies that each integral over U_k, and hence $\int e^{\lambda u^{\top} T(z)}\, p(z)\, dz$, is finite, so the push-forward q cannot be heavy-tailed in every direction. Hence, we have our contradiction.
Environmental Sustainability in Viticulture as a Balanced Scorecard Perspective of the Wine Industry: Evidence for the Portuguese Region of Alentejo
The traditional four-perspective Balanced Scorecard (BSC) model is suitable for a wide variety of organizations. Other dimensions of analysis can be carried out and other perspectives can be considered in each BSC, depending on the specific characteristics of each organization or industry. This paper presents evidence that justifies and validates the inclusion of a new perspective, 'environmental sustainability in viticulture', in a BSC that has been developed for the Wine Industry of the Alentejo Region (Portugal) for 2021-2030. The research was performed according to the exploratory sequential design method, which combines in vivo (interviews and questionnaires) and in vitro (literature review and secondary data) research. The content analysis technique, supported by the NVivo software, was used to treat and analyze the data obtained from the interviews, to discover the explicit meanings of the interviewees' speeches. A principal component analysis and a set of statistical analyses were performed to support the identification of perspectives to be considered in this industrial BSC. The results suggest that environmental sustainability (in viticulture) should be considered as a new strategic perspective to be included in the BSC, with a focus on future certification of environmentally sustainable production (grapes, wine, and wineries). The new perspective represents the competitive challenge of environmental sustainability and enhancement of endogenous resources for the Alentejo Wine Industry, as well as for other wine regions that share the same challenges and concerns. The results also offer an opportunity for competitive benchmarking for companies, industries and governments that operate in similar situations.
Introduction
Many organizations fail to implement their strategies [1][2][3][4][5][6][7], mainly due to the difficulty in translating the strategy into operational terms [1,2,[8][9][10][11]. It is therefore necessary to create and improve the instruments and mechanisms that allow the strategy to be implemented and communicated correctly. The international literature widely recognizes that the management accounting and control system (MACS) is the main mechanism responsible for strategic implementation [5,11]. In this context, models and tools such as the Balanced Scorecard (BSC) have been adopted by most organizations to strategically manage their performance [2]. The BSC was initially presented as a performance measurement system with a preeminent role in strategy implementation [12], later evolving into a strategic management system [13,14]. Nowadays, it is also recognized as part of MACS that supports strategy implementation and facilitates its translation into goals and targets [3].
The BSC is a model that helps translate strategy into operational objectives that guide behavior and performance, allowing for the identification of good management practices and guiding the management of organizational change in a continuous improvement process. The four original perspectives of the model are: financial (mainly the interests of shareholders, creditors and the State, which are chiefly financial); customer (identifies the customer segments and markets in which the organization will compete and the attributes valued to achieve the desired financial performance); internal processes (identifies the processes at which the organization must excel to create value); and learning and growth (building the fundamental competences for the organization to compete and create value in the future) [9]. Businesses, industry, government institutions and non-profit organizations, among others, use the BSC as a cohesive strategic planning system for measuring performance and aligning organizational actions to translate vision and mission into goals and targets. Moreover, it is a helpful tool to improve internal and external communications and to look after sustainable development [15]. Despite the potential and real contributions of the BSC to strategy implementation, the issue of sustainability needs to be better explored within modern concepts of performance assessment systems, in line with [16][17][18].
Sustainability is increasingly recognized as a strategic theme for organizations and industries [7,[19][20][21][22]. Sustainable development was defined in the United Nations Gro Harlem Brundtland Report [23] as development that meets the needs of the present without compromising the ability of future generations to meet their own needs. Therefore, joint efforts should be made as soon as possible to build a sustainable and safe future for all people and the planet as a whole. It is important to promote and support sustainable development by managing natural resources, ecosystems and the entire environment, including people [24]. In Portugal, the 2014-2020 Rural Development Program states that the rural development strategy is based on three operational objectives, one of which is sustainability, to promote good practices and sustainable use of resources and to value rural territories [25]. The Strategic Plan of the Organisation Internationale de la Vigne et du Vin (OIV) for 2020-2024 also considers the promotion of environment-friendly viticulture as a strategic vector [26], highlighting the concern with the challenges of climate change, production methods and the use of natural resources for the sustainability of wine-growing territories. In Portugal, this concern is also recognized by the Ministry of Agriculture.
Sustainable development and ecological security are key factors that play a particular role in implementing the concept of sustainable development in agriculture [24]. Viticulture is one of the most intensive agricultural systems. As intensive agriculture threatens the environment, there is growing interest in the concept of sustainability in the wine industry, as well as in new business opportunities, as customers begin to pay more attention to environmental and sustainability issues [27]. Sustainable viticulture corresponds to a global strategy at the scale of production and processing systems for grapes, which combines the economic sustainability of structures and territories with the achievement of quality products, considering the demands of precision viticulture, the risks related to the environment, the safety of the product and the health of consumers, as well as the enhancement of heritage, historical, cultural, ecological and landscape aspects [28]. The term sustainability has already been accepted by a large number of winegrowers and will continue to become even more widely accepted, given the recognition that vineyards can both benefit from, and contribute to, biodiversity conservation and ecosystem service provision, as consumers increasingly appreciate wines produced under environmentally friendly farming practices [27,29,30].
The international literature points out that the integration of strategic information of a social and environmental nature in the BSC can be carried out in several ways: (i) through strategic measures of results or performance drivers focused on sustainable aspects, above all from the perspective of internal processes; (ii) integrated into the traditional perspectives, by including environmental indicators focused on topics related to sustainability; (iii) by including an additional perspective focused on sustainability and environmental management; or (iv) on a specific scorecard of a department with environmental attributions and competences (e.g., Brignall [31]; Butler et al. [32]; Pravdic [33]; Quesado et al. [34]; Fulop et al. [35]; Hansen & Schaltegger [36]; Monteiro & Ribeiro [37]; Rafiq et al. [38]). However, the empirical results obtained on the subject are inconsistent and/or inconclusive, leading to the formulation of criticism or the absence of theorization. Moreover, there is a research gap on the use of the BSC integrated with sustainability, especially because little is known about this perspective from a sectoral (industrial) point of view.
Accordingly, the main objective of this paper is to present evidence that justifies and validates the inclusion of a new perspective, 'environmental sustainability in viticulture', in a BSC that has been developed for the Wine Industry of the Alentejo Region (Portugal) for 2021-2030.
Besides contributing to reducing the gap mentioned above, this study highlights the effective integration of sustainability as a sector performance indicator, opening new avenues for future investigation. The inclusion of this perspective stems from the need to increase the added value of the Alentejo Wine Industry (AWI), preserving natural resources for future generations and focusing on future certification of environmental sustainability in viticulture, while simultaneously disclosing the effective integration of sustainability as an industry performance indicator and thus offering an organizational and competitive alternative for evaluating business and industrial development on a sustainable basis.
The structure of the paper is as follows: the next section presents a summary of the state of the art of the BSC, with a focus on its application to other realities (such as an industry) and the inclusion of additional perspectives. Section 3 presents the research methodology, and Section 4 presents and discusses the results. Finally, Section 5 presents the conclusions, the main limitations of the study and the avenues for future research.
Previous Research and New Directions on Balanced Scorecard
The analysis of previous research reveals that the BSC evolved from a performance assessment system to a strategy assessment system and, later, to a strategy management system, a strategy communication and alignment system and a change management system. New concepts emerged ('strategy-focused organization', 'strategic map' and 'strategic management department'), so that the BSC is no longer seen as just a performance and strategy assessment system, but rather as a strategy and change management communication system, focused on communicating strategy and aligning individual and team goals with corporate strategy. The last update of the model appeared in 2008 with its diffusion as an integrated management system, emphasizing the integration of the operational and strategic plans, thus linking operations to strategy [2,9,[39][40][41].
The scope of application of the BSC has been expanded in recent years and the model has been adapted to specific contexts: the Brazilian industry [42], the economic strategy of a country [43], the Protocol Training Centers of the Employment and Professional Training Institute in Portugal [44], or cities such as Charlotte (USA) [39,45,46] and Newcastle City Council (UK) [47]. Perhaps the greatest advantage of using the BSC is that it places strategy, structure and vision at the center of the concerns of management teams, compelling them to think beyond the short-term financial perspective and grouping into a single document a set of financial and non-financial measures that provide a comprehensive, fast and accurate view of the organization's performance from different perspectives. Several authors point to other merits, such as the explanation of the strategy; the improvement of communication, strategic alignment, planning and allocation of resources; the development of feedback and strategic learning; and the flexibility of the model, which allows it to be adapted to the specific requirements of each organization [3,8,9,38,39,[48][49][50][51]. This opens up possibilities for adaptation to the new realities and competitive demands of companies or even industries.
The four traditional perspectives of the BSC should constitute a reference for its construction and are suitable for a wide variety of companies and organizations from various sectors of activity (including public organizations). Other dimensions of analysis (perspectives) may be included in the development of the BSC, depending on the specific characteristics of each organization (or sector of activity) and the strategy that was outlined [9,[52][53][54], such as perspectives related to the community and workers and, more recently, to the environment and sustainability [33][34][35][36][37]55,56]. It should be noted that most environmental and social issues are not supported in a financial dimension, but they influence the long-term performance of organizations (and sectors of activity), which justifies several authors pointing to the BSC as an adequate tool to explain and relate issues in the field of sustainability (e.g., Hansen & Schaltegger [36]; Rafiq et al. [38]; Epstein & Wisner [57,58]; Rohm & Montgomery [59]).
In this context, several models have emerged that enable the integration of environmental information in a BSC: the Environmental Performance Management and Assessment System developed by Campos and Selig [60], in which this methodology is used to promote the integration between environmental and critical strategic issues, highlighting the environmental issue as critical to organizational success; the Sustainability Balanced Scorecard, which is an approach focused on improving the integration of environmental, social and economic aspects of measuring and managing the sustainability of organizations (cf. Möller & Schaltegger [61]; Schaltegger & Wagner [62]; Hristov et al. [63]; Ferber Pineyrua et al. [64]); the theoretical model of Gimeno et al. [65], focused on the pursuit of financial goals without neglecting sustainable development goals, with the purpose of creating global value and improving economic, social and environmental performance; and the theoretical model of Claver-Cortés et al. [66], in which the BSC appears as an instrument that provides environmental information for the development of internal activities and for knowledge of the requirements of society, allowing the definition of the organization's environmental vision and mission.
Both academics and practitioners consider the BSC an appropriate tool to account for sustainability issues, since the use of sustainability indicators could contribute to the survival and growth of a company in the long term, improving its performance [63]. More than that, the international literature (e.g., Jordão et al. [5]; Jordão et al. [11]; Ferreira & Otley [67]; Lueg & Radlach [68]) widely recognizes not only the central role of MACS in strategic implementation and the relevance of management, performance assessment and control strategies for organizations' sustainability and health (e.g., Jordão et al. [11]; Gupta et al. [69]), but also the contribution of the BSC as an important MACS tool capable of supporting the integration of sustainability in the implementation of strategies for different types of organizations and sectors (e.g., Curado & Manica [70]; Butler et al. [32]; Bohm et al. [54]; Epstein & Wisner [57,58]; Hristov et al. [63]; Bieker [71]). Therefore, this issue not only opens up the possibility of adapting the BSC to new realities, in line with what was originally proposed by Kaplan and Norton [12][13][14], but also offers the basis for the proposal outlined in this paper.
Research Methodology
The research followed the exploratory sequential design method [72][73][74][75], which is a multi-method approach that articulates qualitative and quantitative analysis of an applied, descriptive and exploratory nature regarding the objectives and procedures used. The use of mixed methods is common in the social sciences, as the selection of just one method is often insufficient to guide all the procedures to be developed during the research [76]. According to Borowski [77], mixed methods have been adopted, accepted and used in management sciences and across the broad spectrum of social sciences for several years. It is assumed that the use of a combination of qualitative and quantitative methods provides greater flexibility in undertaking research, better supported arguments based on research data and greater relevance for a wider range of stakeholders. In this study, the collection of qualitative data (interviews) preceded the collection of quantitative data (questionnaires). Methodologically, two sequential approaches were considered:

• Qualitative research. In the present study, an interview was carried out with nine stakeholders who were identified through prior stakeholder analysis. AWI stakeholders have a lot of influence (direct and/or indirect) on, and interest in, the functioning and pursuit of the AWI's global vision, acting as partners that politically frame the action, complement the execution of policies and interventions and/or intervene in the wine value chain (producers, processors, distributors, traders or final customers/consumers). In this sense, the interviewees were selected according to their main attributions, competences and influence in the Wine Industry of Portugal and their interest in the pursuit of the AWI's global vision. The sample included high-level representatives from the Institute of Vine and Wine, the Alentejo Wine Commission (AWC), the Alentejo Wine Growers Technical Association, the Ministry of Agriculture Planning, Policy and General Administration Office and ViniPortugal. The sample also includes representatives from two winemaking cooperatives and two private producers (whose identities are not revealed here to preserve anonymity). The sample is non-probabilistic, as it includes members of the population who were chosen according to specific research criteria. Thus, on the one hand, the interview had an exploratory nature, to obtain different but complementary data on the topics under analysis and to better understand the problem under study. On the other hand, it supported the development of a quantitative instrument (questionnaire) to be applied to the different EAs. The interview was individual, face-to-face, semi-structured and in-depth, with a pre-test of the interview script. Data were analyzed using content analysis, supported by the NVivo software (version 12), to discover the explicit meanings of the interviewees' speeches.

• Quantitative research. In the present study, data were collected through a questionnaire survey. The questionnaire was developed based on the literature review and the strategic diagnosis of the AWI, as well as on the information collected from the interviews that were carried out and the subsequent qualitative content analysis. The questionnaire was sent electronically after its structure had been validated through a pre-test with a group of experts. The questionnaire was answered by 102 EAs, representing 25.56% of the target population. To identify the perspectives to be included in the BSC, a principal component analysis (PCA) with Varimax orthogonal rotation was performed using SPSS (version 24.0). The quantitative analysis was complemented with other statistical analyses.
The application of the PCA to question 14 of the questionnaire (What strategic themes/areas should be assessed in the AWI?) sought to support the identification of the perspectives to consider in the development of the BSC for the AWI. That is, the items that constitute this question were included in the PCA to determine the components that would correspond to the perspectives to be considered in the BSC for the AWI. This is why the factors of the final PCA solution (five factors) correspond to the BSC perspectives (five perspectives) considered in the development of the Strategic Map for the AWI.
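The authors performed this analysis in SPSS; purely for illustration, the following Python sketch reproduces the pipeline described (standardize the question-14 items, extract five components, varimax-rotate the loadings). The simulated responses and all variable names are ours, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def varimax(loadings, max_iter=100, tol=1e-6):
    """Plain varimax rotation of a loading matrix (the standard SVD-based
    algorithm); rotates toward a simpler, more interpretable structure."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ R

# 'responses' stands in for the Likert-scale answers to question 14
# (102 respondents x items); simulated here for illustration only.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(102, 15)).astype(float)

Z = StandardScaler().fit_transform(responses)
pca = PCA(n_components=5).fit(Z)    # five components = five perspectives
rotated = varimax(pca.components_.T * np.sqrt(pca.explained_variance_))
print(np.round(rotated, 2))
```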
The option for the mixed research methodology, namely the use of the sequential exploratory method, also considered the possibility of data triangulation (the use of two or more independent data sources), allowing the validity and credibility of the research results to be increased. The use of this research methodology is original, since studies applying the BSC methodology usually use a single method, either qualitative (based on interviews) or quantitative (based on questionnaires). The research is thus reinforced, as different but complementary perspectives on the same topic are obtained, providing greater robustness to the results. Nevertheless, before the field study, the state of the art on the subject was mapped, seeking to identify the most relevant studies on the topic in recent decades.
It is important to mention that, in 2016, the International Federation of Wines and Spirits (FIVS), in conjunction with the OIV, published the 'Global Wine Producers Environmental Sustainability Principles' (GWPESP), a document that encourages the implementation of environmental sustainability plans that are financially viable and simultaneously aligned with the requirements of environmental and social sustainability [78]. In the world context, several viticulture sustainability plans have been developed. These plans are the result of national or regional initiatives from pioneering countries in the approach to sustainability in viticulture and have adopted the guiding principles of the GWPESP aligned with the strategic objectives of increasing wine exports, which are disclosed as good sustainability practices on the FIVS website.
The Alentejo Wine Sustainability Plan (AWSP) was developed in 2015, adopting many of the sustainability guidelines of the OIV and FIVS, as well as the initiatives of the sustainability plans mentioned above. The AWSP is a pioneering initiative in Portugal promoted by the AWI, whose content was developed by winegrowing specialists belonging to several entities in Alentejo (the University of Évora, the Technical Association of Alentejo Winegrowers and EAs), representing a collaborative effort for innovation in the Alentejo Region (and in Portugal). The program started with 91 members and by December 2020 had reached 428 members [79]. It is aimed at grape and wine producers in the Alentejo Region and works in a network with research and higher education institutions and with various regional and national bodies. The objective of the program is to increase the competitiveness and ensure the sustainability of Alentejo wines, providing its members with self-assessment tools as well as recommendations aimed at increasing the adoption of best practices in Alentejo winemaking.
Similarly to other regions of the world, the viticulture activity in Alentejo (Portugal) has high economic, social and cultural importance. With eight sub-regions entitled to Controlled Designation of Origin (CDO), Alentejo is one of the largest Portuguese wine regions, with 23.3 thousand hectares of vineyards, corresponding to about 12.30% of the total Portuguese wine-growing area. Alentejo is the second-largest wine production region by volume, taking the lead in the national market, both in terms of market share in volume and in value, in the category of bottled wines with CDO classification. In the 2020 campaign, wine production reached a volume of 113 million liters, although sales of 'Wine from Alentejo' decreased by 15.9% (−5 million liters), amounting to 53 million euros (−28.4%). Total sales in 2019 amounted to 74 million euros, and 25% of production was exported, mainly to non-EU countries [80,81]. As for its business structure, three quarters of the companies in this industry are micro-enterprises, and 70% of small and medium-sized companies generate 70% of the turnover [82,83]. These characteristics reinforce the importance of the studied context and highlight the results and conclusions obtained.
Results and Discussion
The importance of vines and wine in Alentejo (and in Portugal) is not limited to their economic dimension. In recent decades, the AWI has been modernized, creating stricter regulations to guarantee the typicality of wines, adopting more environmentally friendly cultural practices and more controlled wine-making technologies, which have significantly improved the quality of the wines. The AWI has defended and adopted environmental practices that contribute to the preservation of the environment and the conservation of resources (water, soil, grape varieties, energy) and to a more sustainable and competitive agricultural production. It has contributed to boosting the socio-economic development of an entire region. It has also focused on differentiation, implementing the AWSP in recent years and disseminating the guiding principles of its Environmental Management System [84].
The 2021-2030 Strategic Map for the AWI, whose structure and cause-effect logic are shown in Figure 1, was built based on a methodological procedure that included the following steps: (i) literature review, (ii) content analysis of interviews conducted with the main AWI stakeholders, (iii) PCA applied to question 14 of the questionnaire, and (iv) analysis of the responses to the questionnaire.
Two axes of orientation and strategic action were identified: (i) increasing the capacity to generate added value in the industry (valuing the product and internationalizing the wine, in close articulation with the territorial brand 'Alentejo'); and (ii) efficient management and protection of natural resources (improving the efficiency of resource use, protecting the region's specific natural resources, acting on climate change and valuing the wine-growing territories of Alentejo, resources that can underpin product differentiation and strengthen the brand 'Alentejo Wines'). There is a need to develop support actions, which include the qualification of human resources and the convergence of the scientific knowledge systems of economic units to generate more knowledge and innovation in the industry (in terms of grape production, wine production and wineries).
It is still necessary to improve individual communication (the effort that each EA has to make to promote their brands and the quality of their product nationally and internationally), as well as institutional communication (improving communication at the level of the Alentejo region, promoting Alentejo as a region that offers a diversity of quality products; improving communication and sales techniques aimed at international markets; improving internal communication between the various players that make up the AWI; and associating brands with the development of the territory).
The content analysis performed on the interview responses was microscopic, that is, line by line, in the search for meanings and interpretations of the data. After the emerging categorization of the data, it was possible to carry out a set of procedures (word frequency distribution, word cloud, most frequent word cluster and word similarity cluster), which allowed the identification of a strategic concern on the part of the AWI stakeholders with sustainability, whether directed towards aspects related to the adequacy of production methods, the efficient use and protection of natural resources, and the emerging issue of climate change (environmental sustainability), or related to the sector's development and positioning, such as the enhancement of wine and the 'Alentejo Wines' brand and internationalization (business sustainability). Environmental and business sustainability must be developed harmoniously, considering the aspects of quality, innovation, communication and exports in the industry. Specifically, the words 'sustainability' and 'work' appear among the most frequent words in the respondents' narrative (and stand out in the cloud model of the 50 most frequent words); the words 'sustainability', 'communication' and 'working' appear in the same subgroup, showing similarity in the interviewees' speeches (cluster of the 25 most frequent words); additionally, the cluster analysis by word similarity, using the Jaccard coefficient as a criterion, shows strategic lines to be adopted in the AWI in the short and medium term, with emphasis on the need to improve the exploitation of the region's natural resources by mobilizing the EAs towards the new paradigm of sustainability (especially environmental sustainability, in the face of climate change). The results of the qualitative content analysis therefore show a growing concern on the part of industry players with the need to integrate environmental issues into industry strategies. These results can contribute to the construction of an integrated territorial and systemic vision for the AWI for the period 2021-2030, oriented towards environmental sustainability in viticulture.
We believe that the main issue that should guide the AWI's performance in the short and medium term has been identified: 'how to sell more and better?'. To address this issue, four axes of guidance and strategic action emerge that must be worked on by 2030:
• Improve communication (between players, for national and international markets, with more and better promotion to recruit new consumers and markets, and a strong connection to the territory and the 'Alentejo' brand) and quality (especially of certified products, working on the dimension of quality perceived by consumers);
• Increase exports through internationalization (in volume and value) and improve the internationalization process, exploring new markets and products (diversifying the offer, differentiating the product and identifying global consumption trends);
• Develop sustainability in Alentejo viticulture (mobilizing producers throughout the industry to adhere to the AWSP and adopt strategies that ensure sustainability, highlighting a potential 'environmental and social contract' for future generations in this industry), with growing concern for climate change;
• Increase the average value of 'Alentejo Wines' (better valuation of the product and the territorial brand).
The result of the PCA includes an adaptation of the original structure by Kaplan and Norton. The perspectives of the BSC are grouped differently and with specific adaptations (content and names different from those commonly used), considering its application to an entire industry. Two new perspectives were considered, in line with several authors: one on aspects of environmental sustainability (in viticulture) [31-37,42,56,71,85,86] and the other concerning results for society [31,33,52,87].
The traditional financial and customer perspectives gave rise to the 'Results for the Sector' perspective, which consists of two strategic themes ('Sector Positioning' and 'Economic Growth') and is adjusted to the value proposition for customers and the objectives of the AWI, reflecting the objective of the EAs operating in the industry to consolidate their leadership in the segment of certified wines in the national market, and thus increase their competitiveness and profitability. The perspective of 'Infrastructures and Market Development' (consisting of two strategic themes, 'Infrastructures' and 'International Markets') focuses on improving infrastructure, boosting and exploiting the region's wine heritage, as well as on processes related to the increase in export capacity that leads to a generalized increase in sales. The 'Qualifications and Innovation' perspective, consisting of three strategic themes ('Innovation', 'Bases of Development' and 'Communication'), highlights the importance of innovation, education, training and communication for the sustainable development of the industry.
Finally, two new perspectives are added: 'Environmental Sustainability in Viticulture', consisting of a single strategic theme ('Environmental Sustainability'), as a result of the need to increase added value in the industry while preserving natural resources for future generations and focusing on the future certification of sustainability in viticulture (grapes, wine and wineries); and a perspective focused on 'Results for Society', consisting of a single strategic theme ('Territorial Economy'), which emphasizes the importance of efficient management and protection of natural resources to promote the socio-economic development of the AWI (and a systemic development of the entire Alentejo region).
As a result, the proposed model consists of five perspectives: 'Results for Society'; 'Results for the Sector'; 'Infrastructures and Market Development'; 'Environmental Sustainability in Viticulture'; and 'Qualifications and Innovation' (see Figure 1).
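For readers who prefer a compact view, the perspective-theme structure described above can be captured as a simple mapping. This is a convenience representation built only from the names given in the text, not part of the original model:

```python
# Perspectives and strategic themes of the proposed BSC for the AWI,
# exactly as described in the text (see Figure 1).
strategic_map = {
    "Results for Society": ["Territorial Economy"],
    "Results for the Sector": ["Sector Positioning", "Economic Growth"],
    "Infrastructures and Market Development": ["Infrastructures", "International Markets"],
    "Environmental Sustainability in Viticulture": ["Environmental Sustainability"],
    "Qualifications and Innovation": ["Innovation", "Bases of Development", "Communication"],
}

for perspective, themes in strategic_map.items():
    print(f"{perspective}: {', '.join(themes)}")
```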
The perspective of 'Environmental Sustainability in Viticulture' was defined in light of the results of the PCA. The 102 responses to question 14 of the questionnaire (Which strategic themes/areas should be evaluated in the AWI?) evidence the growing concern of the industry EAs with the promotion of the rational use of natural resources and their preservation, corresponding to the environmental sustainability strategy that is also outlined in the AWSP.
One of the five factors in the final PCA solution (factor 3) identifies three areas to be assessed in the AWI, all related to the environmental sustainability of Alentejo wines: (i) promoting the rational use of natural resources (variable Q14.6); (ii) aligning production methods with the preservation of natural resources and biodiversity (Q14.5); and (iii) working on the sustainability certification process (Q14.31). More than 51% of respondents to the questionnaire reported that they fully agreed with these three issues as strategic areas to be assessed in the AWI (see Table 1). When interviewed, stakeholders identified environmental sustainability as a fundamental dimension for the industry to achieve medium- and long-term goals. Eight stakeholders (89%) referred to objectives directly related to the promotion of the sustainable use of natural resources. Five respondents (55.5%) advocated a certification of sustainability in viticulture for Alentejo wines. The objective of ensuring the adhesion of producers to the AWSP (variable Q13.4) was indicated by 45.10% of respondents to the questionnaire, while 96.08% of respondents agreed that it is necessary to evaluate the implementation of production methods that contribute to the conservation of natural resources and the preservation of biodiversity (variable Q14.5). Thus, this variable (Q14.5) was converted into the objective 'Implement and develop the AWSP' (ii.a) to simplify the construction of the strategic map and to reflect the relevance that the AWSP has in the transformation of the AWI towards an environmentally sustainable viticulture [88] (see Figure 2).
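As a rough illustration of this methodological step, the sketch below shows how a PCA of Likert-scale questionnaire items can surface a factor grouping correlated environmental-sustainability variables. The data, the loading threshold and the selection of items are hypothetical; the study itself used the 102 real responses and a five-factor solution:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical Likert responses (1-5) from 102 respondents to six items of Q14;
# the first three items are constructed to be correlated, mimicking a factor.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(102, 1))
data = np.hstack([
    np.clip(base + rng.integers(-1, 2, size=(102, 3)), 1, 5),  # Q14.5, Q14.6, Q14.31
    rng.integers(1, 6, size=(102, 3)),                          # three unrelated items
])
items = ["Q14.5", "Q14.6", "Q14.31", "Q14.1", "Q14.2", "Q14.3"]

pca = PCA(n_components=5)
pca.fit(StandardScaler().fit_transform(data))

# Items loading strongly on the same component form a candidate factor
# (here, the environmental-sustainability factor; 0.4 is an arbitrary cutoff).
for k, component in enumerate(pca.components_):
    grouped = [item for item, loading in zip(items, component) if abs(loading) > 0.4]
    print(f"factor {k + 1}: {grouped}")
```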
The three objectives of this perspective were organized under a single strategic theme, 'Environmental Sustainability', and the corresponding cause-effect relationships were defined (see Figure 2). In general, sustainability comprises three major objectives: environmental protection, economic profitability and social equity [89]. For this author, the wine industry has been fundamental in the implementation of sustainable practices, assuming a leadership role in sustainable agriculture. Like other wine regions in the world, the AWC decided to develop a sustainability plan for Alentejo Wines [90], providing members with an instrument to assess how they currently develop their activities and making recommendations to increase the competitiveness and sustainability of 'Alentejo Wines' [91]. The issue of communicating sustainability, through different channels, is crucial to promote involvement and to evaluate sustainable strategies and practices [92]. Given the results obtained in the study, it is possible to conclude that the inclusion of a perspective of environmental sustainability in viticulture in the Strategic Map is fully aligned with the concerns of the EAs working in the AWI and with guidelines issued by the AWC, the Ministry of Agriculture, the OIV and the United Nations [23,25,26,93].
Figure 2 outlines what was discussed above, structured in this cause-effect relationship: the efficient management and protection of natural resources is one of the strategic objectives of the industry, with a view to a future certification of environmentally sustainable production of Alentejo wines. Sustainable environmental development in this industry requires long and hard work. The first step is to make all EAs aware of the rational use of natural resources in the Alentejo region. The AWSP's sustainability strategy [90] is aligned with the concept of the circular economy [91], a strategic concept based on the reduction, reuse, recovery and recycling of materials and energy. Replacing the end-of-life concept of the linear economy with new circular flows of reuse, restoration and renovation in an integrated process, the circular economy is seen as a key element to promote the decoupling of economic growth from increased consumption of resources. The focus continues to be on enhancing the product, incorporating more and more attributes of environmental sustainability in 'Alentejo Wines' and associated services, and looking for differentiated products to attract new market segments. Thus, environmental sustainability must be seen as a medium/long-term strategy for the industry, linking the environment, heritage, culture, economy and society. In short, wine companies must implement a development strategy focused on the company's coevolution with the environment and with the consumer [94]. The perspective of Environmental Sustainability in Viticulture should guide the EAs of the AWI in the implementation of strategies that meet the requirements of sustainable development.
The objectives of the strategic theme for the perspective of Environmental Sustainability in Viticulture, as well as the indicators proposed for each objective, are presented in Appendix A. It should be noted that the objectives and performance indicators of this perspective are not limited to the examples presented in Appendix A, and may not be suitable for all EAs in the industry. Industry players can adapt objectives and indicators to their individual strategies (ensuring alignment with the organization's mission, policies, objectives, goals and structure), selecting those they recognize as strategic to define the criteria for environmentally sustainable performance (in viticulture).
Considering the above, we may assert that the contribution of this research is threefold. From an economic and social point of view, the research presents evidence that justifies and validates the inclusion of the new perspective, 'Environmental Sustainability in Viticulture', in the BSC that was developed for the AWI. The suggestion to include a new perspective on environmental sustainability in viticulture results from the importance that the economic agents who participated in the study attribute to this topic as a strategic area to be evaluated in this industry. This evidence is based on the results of the PCA that was carried out to support the identification of the perspectives to consider in the development of the Strategic Map for the AWI for the period 2021-2030. Under the theoretical lens, besides contributing to reducing the gap mentioned above, this study highlights the effective integration of sustainability as a sector performance indicator, having as its conceptual basis a model widely recognized and used in the literature, the BSC, which, beyond the originality of the proposal, presents significant contributions to economic and managerial theory, opening new avenues for future investigation. The results obtained, which pointed to the need to include this new perspective, are in line with the implementation of the innovative AWSP under the responsibility of the AWC.
Conclusions
The paper discusses, from a theoretical and practical point of view, the importance of including a new perspective, Environmental Sustainability in Viticulture, in the BSC developed for the AWI. The results of the content analysis of the interviews carried out with the main AWI stakeholders (stakeholders of high influence and high interest in pursuing the AWI's global vision) reveal that all industry players should guide their actions in the short and medium term to face the challenge of how to sell more and better. The strategic line of action, among the four that were identified, that will support the achievement of this challenge points to the need to develop environmental sustainability in the viticulture of this region, so every effort must be made to mobilize producers and all industry EAs to adhere to the AWSP and to an environmentally sustainable production strategy. The PCA resulted in an adaptation of the original structure of Kaplan and Norton. Five BSC perspectives were identified, which were named and grouped differently from the traditional BSC model. Two of these perspectives are clearly unconventional: one concerning aspects of 'Environmental Sustainability' (in Viticulture) and the other concerning 'Results for Society'.
The perspective of 'Environmental Sustainability in Viticulture', consisting of a single strategic theme (Environmental Sustainability) and three strategic objectives, reflects the concerns of the EAs regarding this dimension and represents the strategic challenge of sustainability and the valorization of endogenous resources that should be considered in the short and medium term in this industry. This perspective is associated with the need to increase the added value of the industry while preserving natural resources for future generations, with a focus on future certification of environmental sustainability in viticulture, which can induce different stakeholders to add value to the product. For agents (companies/winegrowers/distributors) who process and sell products from grapes, certification is a way to demonstrate their commitment to the responsible use of resources (water, soil, climate, energy, etc.). For retail and large distribution, certification is a guarantee that the products sold come from environmentally responsible productions that support the conservation of the wine heritage.
No research paper is without limitations. In this study, the main one refers to the context in which the investigation was carried out, making indiscriminate generalizations impossible. However, what is expected to generalize are the research contributions to the theoretical and practical understanding of the theme, and not the results themselves. In this sense, it is expected that further studies will be carried out on the subject, in the form of cases or on a large scale, in different environments and contexts, to broaden and solidify the understanding of the proposal originally presented and its application in different companies, industries, regions and even countries.
From a theoretical point of view, several authors propose changes to the traditional BSC model through the inclusion of new perspectives that allow the integration of strategic environmental information. New BSC models emerge with the objective of managing, measuring and monitoring environmental aspects in organizations (and in an entire industry), integrating new complementary perspectives, objectives and environmental indicators. The theoretical field of this methodology is thus open to new contributions, namely concerning the structure of the BSC, integrating a new perspective in the field of sustainability (of companies/organizations and/or of an entire industry). The focus on an entire industry also extends the theoretical relevance of this research. Although the BSC is an extensively debated topic in the literature, there are very few contributions regarding the applicability of the BSC methodology to an entire economic sector in a region. On the other hand, there is also a lack of studies aimed at evaluating sustainable development in the wine industry.
From a practical point of view, the results of the study corroborate the importance of environmental sustainability and highlight the need to include a new perspective, 'Environmental Sustainability in Viticulture', in the BSC for the AWI for the period 2021-2030. Its construction was supported by a PCA, considering the responses (102) to a questionnaire by several EAs working in the Alentejo wine industry, and by a content analysis of interviews (9) with opinion makers from the Alentejo wine industry, in addition to a literature review. The perspective 'Environmental Sustainability in Viticulture' highlights the need to increase the added value of the sector through the preservation of natural resources for future generations, with a focus on future certification of environmental sustainability in viticulture. It is expected that the research results can be used by managers, analysts, legislators, policymakers and other decision-makers in the creation, analysis and monitoring of strategies and policies that encourage the integration of sustainability as a performance indicator, thus offering an organizational and competitive alternative for evaluating business and industrial development on a sustainable basis.
In summary, the BSC model that was developed for the AWI for the period 2021-2030 differs from the traditional architecture of the BSC in that it explicitly recognizes the objectives related to environmental sustainability and the corresponding performance indicators, which are part of the perspective of 'Environmental Sustainability in Viticulture'. Considering this proposal, EAs can plan and implement strategies adapted to the external environment in which they operate and to their resources and capabilities. Environmental sustainability in viticulture can be understood as a visionary dimension of this strategy.
Appendix A (excerpt). Indicators for the perspective of Environmental Sustainability in Viticulture:

Objective (ii.a) Implement and develop the AWSP:
• Mobilization of EAs to the AWSP (%). Identifies the mobilization of EAs to the AWSP. Formula: (number of adherents / total EAs working in the AWI) × 100.
• Area of vineyard registered with the AWC that is covered by the AWSP (%). Identifies the percentage of vineyard area in the Alentejo region registered in the AWSP. Formula: ∑ vineyard area included in the AWSP.
• Wine production volume included in the AWSP (%). Informs the percentage of Alentejo wine production that is covered by the AWSP (volume). Formula: (production volume entered in the AWSP / total production volume) × 100.
• Number of EAs visited by the AWC to support the self-assessment defined in the AWSP. Informs the number of visits made to the EAs who joined the AWSP and who are in the self-assessment phase, in a given period. Formula: ∑ number of EAs visited to support self-assessment.
• Number of EAs visited for validation of the self-assessment defined in the AWSP. Informs the number of visits made to the EAs to validate the self-assessment, in a given period. Formula: ∑ number of EAs visited for self-assessment validation.
• Number of EAs participating in workshops on AWSP topics. Informs the number of EAs participating in AWSP-related workshops, in a given period (includes other partners). Formula: ∑ number of EAs participating in working sessions.
• Number of AWSP adherents by category. Identifies the number of AWSP members by category (pre-initial, initial, intermediate, developed). Formula: ∑ number of members per category.

Objective (i) Promote the rational use of natural resources:
• Number of initiatives to promote the preservation of natural resources and biodiversity. Informs the number of EAs participating in initiatives aimed at promoting and preserving natural resources and biodiversity. These initiatives can be broken down into topics: sustainability and environment; AWSP; effects of climate change; use of phytosanitary products; energy efficiency in wineries, etc. This indicator can evolve into 'number of sustainable processes'.
• Record of alternative energy consumption by EAs. Informs the consumption of alternative energy used by EAs in the production process. In association with the other indicators, it allows assessing the industry's contribution to the promotion of environmental management in wine-growing activities (greater efficiency in the use of resources). Formula: ∑ alternative energy consumption.
• Water recycling (%). Presents the relationship between the water recovered and the water consumed in the production process by the EAs. In association with the other indicators, it allows assessing the industry's contribution to promoting the environmental management of wine-growing activities (greater efficiency in the use of resources).
• Quantity of recovered waste. Presents the quantity of solid waste that is recovered through reuse, recycling or incineration in waste incineration facilities with energy recovery. In association with the other indicators, it allows assessing the sector's contribution to the promotion of environmental management of wine-growing activities (greater efficiency in the use of resources). Formula: ∑ quantity of recovered waste.
• Environmental costs. Presents the total environmental costs concerning the management of energy, water, waste and gaseous emissions, allowing EAs to identify the most sustainable processes for their activity while maintaining their competitiveness: energy (costs associated with consumption); water (treatment costs, fees, etc.); waste (costs of transportation, disposal, treatment, etc.); gaseous emissions (treatment costs, etc.). Formula: ∑ environmental costs related to treatment, transport, taxes, disposal, etc. (energy, water, waste and gaseous emissions).
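A minimal sketch of how a few of these indicators could be computed from EA-level records; the field names and figures below are invented for illustration and are not part of the AWSP:

```python
# Hypothetical records for two EAs (economic agents); field names are invented.
eas = [
    {"water_consumed_m3": 1200.0, "water_recovered_m3": 300.0, "waste_recovered_t": 4.2,
     "env_costs_eur": {"energy": 5000, "water": 800, "waste": 650, "emissions": 300}},
    {"water_consumed_m3": 900.0, "water_recovered_m3": 270.0, "waste_recovered_t": 2.9,
     "env_costs_eur": {"energy": 3600, "water": 500, "waste": 400, "emissions": 210}},
]

# Water recycling (%): water recovered relative to water consumed, across EAs.
water_recycling_pct = 100 * sum(e["water_recovered_m3"] for e in eas) / sum(e["water_consumed_m3"] for e in eas)
# Quantity of recovered waste: simple sum over EAs.
recovered_waste_t = sum(e["waste_recovered_t"] for e in eas)
# Environmental costs: sum over energy, water, waste and gaseous emissions.
total_env_costs = sum(sum(e["env_costs_eur"].values()) for e in eas)

print(f"Water recycling: {water_recycling_pct:.1f}%")
print(f"Recovered waste: {recovered_waste_t:.1f} t")
print(f"Environmental costs: {total_env_costs} EUR")
```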
Figure 1. Structure of the 2021-2030 Strategic Map for the AWI.
Table 1. Areas to be assessed in the AWI according to the EAs.
Conceptual displacement: Web search as a learning experience
The research aims to identify the relationship between the information search behavior on the Internet to solve a research task and the answers given by a group of university students. For this purpose, a quantitative quasi-experimental study was designed, in which both the words used in the web search process and the answers elaborated from it were analyzed. The data were processed thanks to the use of the GoNSA2 platform, which allows tracking the search process, and the Iramuteq software, oriented towards the analysis of lexical information. Among the main results, we highlight a shift between the topics used in the search and those observed in the response stage and an increase in the categories present in the latter stage, which allows us to consider the search process as a learning instance.
Introduction
Search engines enjoy great popularity among Internet users, due to the immediacy of access to hundreds of pages of results. It has been estimated that the indexing of documents reaches at least 5 trillion web pages. However, this growing increase in indexing brings with it various difficulties in accessing information, such as saturation, algorithmic filters, and misinformation. Thus, the main attribute of search systems that elevates them as one of the favorite applications of Internet users complicates the selection of information, due to the saturation and personalization of search results and the phenomenon of misinformation.
The saturation of results occurs because of the increased indexing of search engines, and directly affects people's working memory, due to the difficulty of processing so much data in a short time (Rivas, 2008), decreasing the effectiveness of decision making. To reduce information saturation, search systems have developed filter algorithms that personalize searches, facilitating users' access to information through search results adjusted to our historical behavior. Among the most important filtering algorithms are needs detection, query detection, query suggestion, search personalization and result ranking. In this way, the result pages are ranked based on these filters (Yogananarasimhan, 2020). However, these algorithmic filters confine us in comfortable bubbles that fit the profile that search engines have created from our clicks. Thus, web browsing is algorithmically mediated by profile information, geographic location, search engine usage history, and language, among other elements.
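As a rough sketch of how such personalization filters can narrow the result space, consider re-ranking candidate results by similarity to a user profile. The scoring weights, topic sets and URLs below are illustrative assumptions, not the actual algorithm of any search engine:

```python
# Toy personalized re-ranking: blend a base relevance score with the Jaccard
# similarity between a result's topics and the user's historical interests.
results = [
    {"url": "news-site/crime-statistics", "relevance": 0.80, "topics": {"news", "crime"}},
    {"url": "gov-site/prevention-plan",   "relevance": 0.75, "topics": {"policy", "crime"}},
    {"url": "blog/opinion-piece",         "relevance": 0.60, "topics": {"opinion", "crime"}},
]
user_history = {"news", "opinion"}  # topics inferred from past clicks (hypothetical)

def personalized_score(result, alpha=0.5):
    overlap = len(result["topics"] & user_history) / len(result["topics"] | user_history)
    return (1 - alpha) * result["relevance"] + alpha * overlap

for r in sorted(results, key=personalized_score, reverse=True):
    print(f"{personalized_score(r):.2f}  {r['url']}")
```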
Consequently, the efficiency of search systems depends on the personalization of results for each user, establishing boundaries to our search space in the cloud and limiting access to diverse information.
This makes it urgent to develop digital literacy plans that allow a critical approach to information and to build tools that encourage students' reflection, to promote the development of creative ideas in the different areas of study and in everyday life.
This research contributes a characterization of the queries issued by university students and questions the belief regarding the mastery of technologies by the new generations of students; for example, it has been established that, although they show ease and familiarity in the use of computers, they are dependent on the results of search engines. Answering this question makes it possible to highlight the need for digital literacy as a key competence for 21st-century students.
To analyze the queries issued in the search engine, a quasi-experimental research design was adopted, since it allows queries to be issued naturally, in the same period, by students with similar demographic characteristics (age, gender, language, degree program and geographic location) solving factual and research/exploration tasks. The study was supported by the GoNSA2 technology platform (Olivares-Rodriguez, Guenag, & Garaizar, 2018), which allows implicit recording of all user actions.
Based on a quasi-experimental study, a group of 58 first-year university students searched for information on the web to solve four factual and research/exploratory search tasks. This article presents the results obtained from the analysis of the research/exploratory task: "How to fight crime in Chile". The content of the task corresponds to a recurrent topic in the national and international media agenda. The objective of this research is to identify the relationship between the information search behavior on the Internet to solve a research task and the answers provided by a group of university students through a study of contrast between the keywords present in the queries issued by students in the search system.
In broad terms, we propose to categorize the keywords of the queries issued by the students in the search engine and to contrast them with the answers elaborated for the research task. In the first stage of the study, the level of web exploration is determined by the number of queries issued. Then, the keywords of the queries issued by the students in the search engine are categorized. Third, the categories found in the search stage are contrasted with the words used and the categories present in the response stage.
There is evidence in the literature regarding the difficulties of users to initiate and complete a search, in terms of the number of queries and terms used, as well as the low effort they invest in the search for information, particularly among the youngest, due to the cognitive barriers inherent to their development (Duarte Torres and Weber, 2011; Usta, Altingovde, and Vidinli, 2014); even a low tendency to use advanced functions in search engines has been observed (Yamamoto, Yamamoto, Oshima, and Kawakami, 2018). In addition, poor performance in the search process impacts people's mood (Rosman, Mayer, & Krampen, 2015). Added to this, the great diversity of pages and the inherent ambiguity of the queries, as well as low levels of reading comprehension, make the task difficult, since the queries turn out to be vague and redundant; therefore, information retrieval becomes frustrating and with low levels of exploration of the search space, especially in minors and young people (Foss and Druin, 2014).
The level of effort spent on search depends mainly on external factors and user experience (Zach, 2005). Also, it has been established that the decision to stop a search is highly influenced by the relevance of the results presented on the first page of search engine results, the quality of the terms they are able to use, the ability to generate new terms for queries, and the personal assessment of the effort required to solve the search task, based on the complexity of the search (Wu and Kelly, 2014 p. 34).
Toms and Freund triangulated the surveys with quantitative information from logs of information-seeking sessions and identified the actions and transitions that were preferentially used at the end of the information-seeking process (Toms and Freund, 2009 p. 90).
Consequently, the formulation of queries and the criteria for closing the information search depend mainly on the subject's personal judgment, based on the perceived usefulness and relevance of the results. Thus, transitions and criteria are an essential part in the characterization of web exploration/exploitation strategies.
Currently, most access to information is mediated by technology. Search engines allow retrieving information indexed on the web by linking it to queries issued by students, according to their relevance, but the closeness of the results to the query made depends on technological and human variables, which combine to make the information search process more complex.
The search process is made difficult by three factors: the diversity of pages, the ambiguity of the queries made by the subjects and the low levels of reading comprehension, aspects that determine the elaboration of vague and redundant queries that increase frustration and, therefore, reduce the exploration of SERPs, especially in the case of children and young people (Foss and Druin, 2014). Different studies have evidenced that most students lack the necessary skills to efficiently access information (Druin, Foss, Hatley, & Golub, 2009; Qureshi, Bokhari, Pirvani, & Dawani, 2015; Şendururur & Yildirim, 2015). In addition, it is necessary to consider that the cognitive, physical and affective variables of the learners (Kuhlthau, 1991), as well as the capabilities of the technology itself to respond to the users' needs, are also involved during the search process.
This growing information saturation would be affecting the attention of the subjects, hindering the ability to process the informative pieces that overload their working memory (Rivas, 2008, p. 185). In this line, and to improve the efficiency of the search process, search engines have incorporated query recommendation models (Duarte Torres, Hiemstra and Weber, 2012) and user purpose detection (Sadikov, Madhavan, Wang and Halevy, 2010; Santos, Nguyen and Zhao, 2003), which emerge as an alternative to reduce the overwhelm of hundreds of results with diverse information. However, algorithmic mediation personalizes our search by reducing the search space and the diversity of the results, with filters that determine the links that are close to our interests, leaving out those that are far away and that present contradictory information. In short, they shape an informational bubble (Parisier, 2017) that limits the opportunity to access information. Thus, when exploring the web for the purpose of solving a learning task, we do not have access to the full diversity of content; rather, it is reduced to the sites that we consult on a recurring basis or other similar ones. For Jiang (2014b) the ranking of results also depends on user clicks, in addition to the language used, the popularity of the site and geolocation. This last factor appears as a determinant in the results received by the user, since significant differences have been established in the information received by users depending on their geographic location (Jiang, 2014a; Jiang, 2014b; Cano-Orón, 2019). Likewise, search results are also influenced by advertising as part of the business of search engines (Rieder and Sire, 2014). In this sense, the delivery of results through a content ranking implements a biased model, according to which the algorithm determines the priority of some content over others (Lewandowski, 2017; Rieder and Sire, 2014; Jiang, 2014a, 2014b), directly influencing users' access to information. Another factor to consider is the media influence of each country, as it would also influence the decision making of the search algorithm (Cano-Orón, 2019, p. 98).
However, there are researchers who deny the existence of a bubble that isolates internet users, since personalization would not bring about a limitation of access to information (Haim, Graefe, & Brosius, 2018). Nevertheless, these bubble-skeptical studies have limitations in that they are not conducted in real contexts: the information sources are news from a single newspaper and the object of study considers only a particular type of diversity (Möller, Trilling, Helberger, & van Es, 2018), or the participants are simulated with profile-generation algorithms (Haim, Arendt, & Scherr, 2017).
Materials and Methods
This quasi-experimental research aims to identify the relationship between the information search behavior on the Internet to solve a research task and the answers given by a group of university students. The task consisted of answering the question "How to fight crime in Chile". First, the level of exploration of the web is determined by the number of queries issued. Secondly, the key words of the queries issued by the students in the search engine are categorized. Finally, the categories found in the search stage are contrasted with two aspects of the response stage: a) words used and b) categories present.
The sample is composed of first-year engineering students between 18 and 19 years of age, with 53 male students and 5 female students. Participants are volunteers and agree to the use of their data in a confidential, anonymous and aggregated manner, by signing an informed consent form.
The research task was designed based on the proposal of Wildemuth and Freund (2012), which consists of a search challenge oriented towards the resolution of a complex problem, allowing the collection of information on both the search process and the results reached by the participants. This type of task was chosen because it demands a greater intellectual effort, since it requires a greater exploration and exploitation of the web to elaborate an answer. Therefore, the greater exploration of the web ensures a search for information that provides sufficient data for the research.
The research task "How to fight crime in Chile" is proposed, whose selection criteria are as follows: The context of the task is a topic on the media agenda and, therefore, in the public domain. Existence of diverse sources of information. The task instructions are understandable to students from different disciplines. However, to control for domain knowledge, the task topic is outside of the participants' formal learning. Also, tasks like the proposed one have been previously used in the literature (Arguello, J., Wu, W. C., Kelly, D., & Edwards, A., 2012; Kules, B., & Shneiderman, B., 2008), providing support for their use.
The objective of the proposed task was to determine the components that underlie a plan to combat crime in Chile, for which they had 15 minutes to submit their response.
The searches are developed on the GoNSA2 technology platform (Olivares-Rodríguez et al., 2018), which interacts with Microsoft's Bing search engine. GoNSA2 is a technological platform that supports the design of information tasks, the realization of the search process, the integration of student behavior information, and the evaluation of the elaborated solutions. It presents students with an interface for solving information tasks, comprising a) the task, b) the snippet, c) the answer box and d) the personal library. The main contribution of the platform is that it provides detailed information about the search strategies, the queries issued, the documents reached with such queries and the solutions delivered by the participants.
In this way, the queries elaborated by each user are recorded, while for each query issued, the documents and sources provided by the search system are stored, as well as the actions performed on the documents, i.e., whether they were viewed, stored in the task's personal library or deleted. In addition, the intermediate solutions elaborated by the users are recorded. Finally, the timestamps when the user performs a particular action, from the beginning to the end of the task, are recorded.
For the analysis of lexical data, the Iramuteq program was used, an open access software based on the R program that offers textual and lexicometric statistics that can even be used as learning metrics (Valdés-León, 2021 p. 434).
The quasi-experiment is conducted in one session. At the beginning, the technical functionalities of GoNSA2 are presented. Then, the students have 15 minutes to perform the task described above. The second stage corresponds to the elaboration of analysis categories, based on the keywords of the queries issued by the students in the search engine. For the elaboration of these categories, the judgment of three specialists was used, who reviewed the data and grouped the words by applying semantic criteria. Subsequently, in the response stage, the same expert judgment methodology was used. Likewise, to enrich the comparison between the search and response generation instances, not only the categories present were contrasted, but also the lexicon used in each of the stages. For this purpose, a corpus containing the lexemes present in the searches carried out by the students was elaborated and, subsequently, the same was done with the texts generated to respond to the research task. This made it possible to carry out lexical comparisons of frequency (word cloud) and similarity. The purpose of the latter was "the study of the proximity and relationship between the elements of a set" (Ruiz, 2017).
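The sketch below reproduces the spirit of these lexical procedures outside Iramuteq: frequency counts over the notional lexicon of each corpus and a Jaccard coefficient between the two vocabularies. The stop-word list and the sample texts are placeholders, not the study data:

```python
from collections import Counter

# Placeholder corpora; the study used the students' real queries and answers.
queries = ["how to fight crime in chile", "reduce crime chile"]
answers = ["improve education and rehabilitation to reduce crime",
           "increase penalties and police presence"]

stopwords = {"to", "in", "and", "the", "how"}  # minimal illustrative list

def notional_lexicon(texts):
    """Frequency count keeping only notional words (stop words removed)."""
    tokens = [w for t in texts for w in t.split() if w not in stopwords]
    return Counter(tokens)

q_freq, a_freq = notional_lexicon(queries), notional_lexicon(answers)

# Jaccard coefficient between the two vocabularies: |A ∩ B| / |A ∪ B|
q_set, a_set = set(q_freq), set(a_freq)
jaccard = len(q_set & a_set) / len(q_set | a_set)

print("query lexicon:", q_freq.most_common(5))
print("answer lexicon:", a_freq.most_common(5))
print(f"Jaccard(queries, answers) = {jaccard:.2f}")
```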
Results
The following section is organized as follows: first, we provide general information related to the global statistics that emerge after the activity; then, we present the distribution of the results in the search and response stage, considering the coding that has been established; and finally, we offer a lexical analysis that provides information regarding the frequency of occurrence and the network of lexical associations constructed in each stage.
Global statistics: characterization of student queries

As we can see, the GoNSA2 platform offers detailed information regarding the information search process of users, ranging from the number of participants to the average number of documents saved by them, to give an example. However, considering that our objective is oriented towards identifying the relationship between the information search behavior on the Internet to solve a research task and the answers provided by a group of university students, we find particularly interesting the information related to the average number of unique queries per user and the average number of documents saved in the library (Table 1).
The average number of queries per user allows us to understand that, to reach an answer to a complex research task, students perform between two and three searches. This is consistent with the results of Fuentes and Monereo (2008) regarding the processes developed by adolescents to find information, since their study indicates that this age group considers "the search process and the selection of information as not very relevant" (p.51). This is confirmed when considering the low index of documents saved (close to two), as it seems a rather low amount if we consider the level of complexity of the proposed task.
In the consultation stage, we can observe that the codes with the highest presence correspond to "description of crimes" (DD) and "proposals to reduce crime" (PRC), which is closely related to the words that make up the problem (Figure 1). In other words, we mean that most students simply copied the proposed statement and pasted it into the search box, which corresponds to the description of the "copy-paste" student.
Figure 1: Comparison of consultations-solutions.

As for the categories that have a greater presence in the solution stage, it is interesting to note that they correspond to topics that emerged precisely at this stage. In other words, we refer to the fact that the contrast between queries and the formulation of solutions shows a shift in terms of the topics present in the corpora, which allows us to understand that the process of solving a complex search task involves a knowledge construction process that is mediated by cognitive factors, experience, age and gender, among others (Ford, Miller and Moss, 2001). On this basis, and considering the search characteristics of new university students, it is not surprising that the process starts from similar categories, but that a greater dispersion in the topics present in the elaborated answers is evidenced.
First of all, we would like to point out that we are aware that the corpora of questions and solutions have different characteristics: the former corresponds to a list of structures that do not go beyond the sentence level, while the latter is based on the set of short texts written by the students. However, if we take into consideration only the notional lexicon (i.e., leaving out words such as connectors, pronouns, articles, etc.), it is possible to identify which lexemes predominate in each of them and, thanks to this, corroborate the findings of the classification into categories.
Thanks to the above, it is possible to identify that the first scheme has an evident predominance of three lemmas (Chile, crime and reduce), which, as we pointed out in §3.2, coincides with the three main words used in the statement of the task. However, the dispersion in terms of the topics addressed in the responses is much wider, as emerging categories appear around concepts such as "person", "police" and "victim", to give an example.
The analysis of similarities "is based on graph theory, which allows identifying cooccurrences between words and its result brings indications of the connection between words, helping in the identification of the structure of a textual corpus" (Camargo and Justo, 2013, p.516). Thanks to this, it is possible to visualize not only a greater dispersion of words, but also a greater diversity in the solutions: at least four major areas in which the students' answers were concentrated are observed, answers that are influenced both by the results found and by aspects of a cognitive and even moral nature. By way of example, we can point out that there is a branch in which terms related to education (rehabilitation, improve, quality...) are agglutinated, while, in another, a much more punitive semantic field is constructed (crime, penal, increase, etc.).
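A minimal sketch of the kind of co-occurrence structure behind such a similarity graph; counting co-occurrences within each short answer fragment and weighting edges by frequency is a simplifying assumption, and the fragments are invented:

```python
from itertools import combinations
from collections import Counter

# Placeholder answer fragments; co-occurrence is counted within each fragment.
fragments = [
    "improve education quality rehabilitation",
    "increase penal sanctions crime",
    "improve police presence victim support",
]

edges = Counter()
for fragment in fragments:
    words = sorted(set(fragment.split()))
    for a, b in combinations(words, 2):
        edges[(a, b)] += 1

# Edges with the highest weights indicate the branches of the similarity tree
# (e.g. an 'education' cluster vs. a more punitive cluster).
for (a, b), w in edges.most_common(5):
    print(f"{a} -- {b}: {w}")
```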
Based on the above, it seems pertinent to support those who advocate empowering students to use the unlimited knowledge that the web provides to acquire knowledge and, in turn, develop higher-level skills, such as autonomous and critical thinking.
Conclusions
The objective of this research was to identify the relationship between the information search behavior on the Internet to solve a research task and the answers provided by a group of university students. In this regard, the findings indicate that this link is given by a) a shift between the topics used in the search and those observed in the response stage and b) an increase in the categories present in the latter stage.
Both aspects mentioned above are interrelated: the shift from initial to emergent topics is due to the fact that, in the first instance, students orient their search by using the key words found in the task statement itself; however, both the results found and external factors (social, emotional, psychological, etc.) influence the presence of varied solutions to the same problem, with, in our study, a predominance of two topics: social and criminal.
Based on the above, we consider that, in the case of solving research tasks such as the one presented here, the information search process is not only a means of reaching content but represents a learning instance, as students mobilize skills such as critical and analytical thinking during the process of developing solutions to complex problems.
Asymptotic analysis of the EPRL model with timelike tetrahedra
We perform the stationary phase analysis of the vertex amplitude for the EPRL spin foam model extended to include timelike tetrahedra. We analyse both tetrahedra of signature $---$ (standard EPRL) and tetrahedra of signature $+--$ (Hnybida-Conrady extension) in a unified fashion. However, we assume all faces to be of signature $--$. The stationary points of the extended model are described again by $4$-simplices and the phase of the amplitude is equal to the Regge action. Interestingly, in addition to the Lorentzian and Euclidean sectors there appear also split signature $4$-simplices.
Introduction
The vertex amplitude in the spin foam models [1,2,3] is the evaluation of a certain spin network. This spin network consists of links labelled by irreducible unitary representations of the $SL(2,\mathbb{C})$ group and nodes labelled by invariants in the tensor product of the representations from the links meeting at the node. These invariants are elements of a certain distribution space. The naïve evaluation would be the contraction of invariants according to the prescription given by the spin network. This prescription, while valid for a compact group, gives infinity for $SL(2,\mathbb{C})$, and some gauge fixing [4,5,6] is necessary (see 2.7).
The main part of the definition is in describing which invariants we should take. The EPRL-FK model [7,8,9] and its extended version [10,11] parametrize the invariants by invariants of a smaller group, which in turn is given as the stabilizer in $SL(2,\mathbb{C})$ of some normal vector $N^{\mathrm{can}}$. These smaller groups are

• $\mathrm{St}(N^{\mathrm{can}}) = SU(2)$ (stabilizing the timelike $N^{\mathrm{can}} = e_0 = (1,0,0,0)$) for standard EPRL,
• $\mathrm{St}(N^{\mathrm{can}}) = SU(1,1)$ (stabilizing the spacelike $N^{\mathrm{can}} = e_3 = (0,0,0,1)$) for extended EPRL.

The basic ingredient is a choice of an embedded irreducible unitary representation $H_{\mathrm{EPRL}(j,\rho)}$ of the stabilizing group in the irreducible unitary representation $D^{(j,\rho)}$ of $SL(2,\mathbb{C})$ ($j$, $\rho$ are standard labels of such representations, see section 2.2). The labels are determined from quantum versions of the simplicity constraints twisted by the Barbero-Immirzi parameter $\gamma$ [7,12,10] to be related by $\rho = 2\gamma j$.
The embedded representations are irreducible $SU(2)$ representations in the case of the timelike normal $e_0$, and irreducible unitary representations of $SU(1,1)$ from the discrete series in the case of the spacelike normal $e_3$. There is another subcase, $2j = \gamma\rho$, that corresponds to faces of signature $+-$ (see [10] for an explanation of the semiclassical origin of the notion), which we will not consider in this paper.
The EPRL Y-map is now the map induced by these embeddings on the tensor product over the links $l$ meeting in the node $n$. Invariants of the smaller group, together with the labels and the type of the smaller group (determined by $N^{\mathrm{can}}_n$), are thus the boundary data for the vertex amplitude. We are interested in the asymptotic analysis for large labels. In such a case one needs to specify a family of boundary states with a certain semiclassical limit. Examples of such states are given by coherent states (they scale nicely with the scaling of labels) integrated over the smaller group. Let us notice that inside the definition of a coherent state some arbitrary phase is hidden, so the total result will be influenced by an arbitrary phase that scales uniformly with the scaling of the labels. In the analysis of [12,13] this phase was fixed. However, the phase choice was done in a global way, by considering the whole graph, not locally by choosing the phases separately for each node. This convention is useful only when considering vertices separately, as the phases would be chosen differently for the same nodes glued to two different vertices (see however [14,15,16,17]). The choice of the phase also leads to additional phase terms in the action of [12] ($\pi$ from thin wedges, see Section 6.2 of [12]) that can be removed by a more judicious choice of the overall coherent state phase. For these reasons we prefer to keep the phase unspecified. This leads to an additional arbitrary overall phase in the asymptotic result.
Our goal is to extend the semiclassical analysis of such a vertex amplitude [12] for a spin network of five nodes $0,\ldots,4$, with every node connected to every other. We parametrize links by the two nodes $(ij)$ that they connect.
We assume that $j_{ij} \neq 0$ and we scale the spins uniformly. In the standard EPRL situation, the asymptotic limit of the evaluation (see [12,18] and, for an overview, [19,20]) is governed by a certain 4-simplex $\Delta$ built out of the boundary data and the Regge action for discrete gravity without cosmological constant [21]

$$S_\Delta = \sum_{i<j} A_{ij}\,\theta_{ij} \qquad (5)$$

where $A_{ij}$ are the areas of the faces and $\theta_{ij}$ are the corresponding dihedral angles (see section 9.6). This is similar to the Ponzano-Regge asymptotic formula [22,23] valid in three-dimensional gravity. We show that the same is true in the extended set-up (with the definition of dihedral angles given by [24], see Section 9.1 for details). Using coherent states we will associate vectors $v_{ij} \in M^4$ in Minkowski spacetime (see section 5.2.2) perpendicular to $N^{\mathrm{can}}_i$ ($e_0$ or $e_3$ depending on the type of embedding) such that the bivectors $*(v_{ij} \wedge N^{\mathrm{can}}_i)$ are spacelike of norm $\rho_{ij}$. Following [12] we will say that (see 5.2.2)

• they satisfy the closure condition if $\sum_j v_{ij} = 0$,
• they are non-degenerate if, for every node, every 3 out of the 4 vectors are linearly independent.

With such vectors we can build, by the Minkowski theorem [25,26,27] (for simplices in arbitrary signature, see section 6.3), a tetrahedron in $N^{\mathrm{can}\perp}_i$. With condition (1) all faces will have signature $--$. We can determine the lengths of the edges of such tetrahedra for every node.
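As a small numerical illustration (not part of the paper), the sketch below checks the closure and non-degeneracy conditions for four vectors orthogonal to $N^{\mathrm{can}} = e_3$ in signature $+---$; the sample vectors are arbitrary:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature +---

# Four sample vectors orthogonal to N = e3 (last component zero); the fourth
# is chosen so that the closure condition holds. Values are arbitrary.
v = np.array([
    [0.0,  1.0,  0.0, 0.0],
    [0.0,  0.0,  1.0, 0.0],
    [1.0,  1.0,  1.0, 0.0],
    [-1.0, -2.0, -2.0, 0.0],
])

closure = np.allclose(v.sum(axis=0), 0.0)

# Non-degeneracy: every 3 of the 4 vectors are linearly independent.
non_degenerate = all(
    np.linalg.matrix_rank(np.delete(v, i, axis=0)) == 3 for i in range(4)
)

# Minkowski norms eta(v, v); spacelike vectors have negative norm here.
norms = np.einsum("ia,ab,ib->i", v, eta, v)

print("closure:", closure, "| non-degenerate:", non_degenerate, "| norms:", norms)
```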
Let us consider a topological 4-simplex, dual to the graph. The nodes of the graph label the tetrahedra of the simplex, links (sets of two nodes) label faces, and edges are labeled by sets of three nodes; these nodes correspond to the three tetrahedra sharing the edge. With every edge we can associate a length coming from the application of the Minkowski theorem to the boundary data at any node in the set. We will say that the lengths matching condition (definition 10) is satisfied if the lengths determined from all three tetrahedra are the same.
If the lengths matching condition is satisfied then we can construct the Gram matrix from these lengths (see section 6.7). We can reconstruct a unique (up to reflections, rotations and shifts) 4-simplex from the Gram matrix. There are 5 cases for the signature of these simplices. By the Minkowski theorem (for simplices in arbitrary signature, see section 6.3) we can also determine the orientations of the reconstructed tetrahedra. They need to match the orientations of the tetrahedra of one of the reconstructed 4-simplices. In this case we will say that the orientations matching condition is satisfied (see Section 6.7, definition 11, for the precise statement).
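A sketch of this classification step under the stated setup: from the squared edge lengths $\ell^2_{ij}$ of a 4-simplex (which may be negative in Minkowski signature), the polarization identity gives the Gram matrix $G_{ab} = (\ell^2_{0a} + \ell^2_{0b} - \ell^2_{ab})/2$ of the edge vectors at vertex $0$, and the signature is read off its eigenvalues. The sample data and the labels are illustrative; only the non-degenerate signatures are distinguished here:

```python
import numpy as np

def gram_from_squared_lengths(l2):
    """Gram matrix G[a,b] = <P_a - P_0, P_b - P_0> from squared edge lengths
    l2[i][j] (symmetric, l2[i][i] = 0); valid for any flat metric signature."""
    n = len(l2) - 1
    G = np.empty((n, n))
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            G[a - 1, b - 1] = 0.5 * (l2[0][a] + l2[0][b] - l2[a][b])
    return G

def classify(G, tol=1e-9):
    eig = np.linalg.eigvalsh(G)
    pos, neg = int(np.sum(eig > tol)), int(np.sum(eig < -tol))
    if pos + neg < len(eig):
        return "degenerate"
    if neg == 0 or pos == 0:
        return "Euclidean"
    if 1 in (pos, neg):
        return "Lorentzian"
    return "split signature"

# Squared edge lengths of the regular 4-simplex with unit edges (Euclidean case).
l2 = [[0.0 if i == j else 1.0 for j in range(5)] for i in range(5)]
print(classify(gram_from_squared_lengths(l2)))  # -> Euclidean
```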
Before stating our result we need to say a bit more about technical assumptions. The vertex amplitude is given by an integral over an infinite domain. Our method is a version of stationary point analysis [28]. In order to give a proper asymptotic result the integral needs to be finite (well-definedness of the amplitude) and there must also be no "contribution from the boundary", both from the finite boundary and from $\infty$. Proving that there are no such contributions is beyond the scope of this work. We will assume this as an additional condition of lack of boundary contributions. We conjecture, however, that this holds in the case of non-degenerate boundary data.
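To make the method concrete, here is a toy one-dimensional stationary-phase check (unrelated to the actual vertex integral): for $\int g(x)\,e^{i\Lambda x^2/2}\,dx$ with Gaussian $g$, the stationary point $x=0$ contributes $g(0)\sqrt{2\pi/\Lambda}\,e^{i\pi/4}$ at leading order, and the numerics approach this as $\Lambda$ grows:

```python
import numpy as np

def oscillatory_integral(lam, half_width=8.0, n=400_001):
    """Numerically evaluate int g(x) e^{i lam x^2/2} dx for g(x) = exp(-x^2)."""
    x = np.linspace(-half_width, half_width, n)
    f = np.exp(-x**2) * np.exp(1j * lam * x**2 / 2)
    return np.sum(f) * (x[1] - x[0])  # simple Riemann sum on a fine grid

for lam in [10.0, 100.0, 1000.0]:
    numeric = oscillatory_integral(lam)
    # Leading stationary-phase term: g(0) * sqrt(2 pi / lam) * e^{i pi/4}
    leading = np.sqrt(2 * np.pi / lam) * np.exp(1j * np.pi / 4)
    print(f"lam={lam:6.0f}: numeric={numeric:.5f}  stationary-phase={leading:.5f}")
```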
An additional assumption is that, after suitably taking into account the symmetry of the action, the remaining matrix of second derivatives (the Hessian) at the stationary point is non-degenerate. This condition was never addressed in generality in the analysis of the vertex amplitude. It is known that for the Barrett-Crane [6] spin foam models there are special configurations for which the Hessian is degenerate [29]. However, this is related to the fact that the proper variables used in the semiclassical description of this model are areas and conjugated angles [30,31,32], and there is some singularity in going from these variables to shapes. We conjecture that the Hessian is non-degenerate for the EPRL spin foam model for non-degenerate boundary data if the reconstructed 4-simplex is non-degenerate. This condition will be called the stationary point non-degeneracy condition.
We are aware that these two conditions should be proven so that our analysis becomes a clean result, but they definitely deserve a separate treatment and we leave this issue for future research.
Let us notice at the end that in our convention the faces of the tetrahedra have areas $A_{ij} = \frac{1}{2}\rho_{ij}$ instead of $\ell_{ij}$ as in the convention of [12]. This is just a total rescaling of the action by the Barbero-Immirzi parameter γ. Our convention is compatible with the LQG area operator spectrum [33,34] 1.
After these preliminaries our main theorem can be formulated as follows: Theorem 1. Let us consider the amplitude $A_\Lambda$ of the uniformly scaled labels $(\ell, \rho) \to (\Lambda\ell, \Lambda\rho)$ for the amplitude with boundary data given by coherent states (scaling with Λ). Let us assume that the boundary data is non-degenerate and satisfies the lengths matching condition. We assume also that the lack of boundary contributions condition and the stationary point non-degeneracy condition hold. If the orientations matching condition is not satisfied, then the amplitude is suppressed. If it is satisfied, then let us consider the reconstructed 4-simplex ∆ for the non-rescaled labels and boundary data: • If the reconstructed 4-simplex ∆ is Lorentzian, then there exist φ (depending on the choice of phases of the coherent states) and geometric factors $N^\pm_\Delta$ (given by the lengths and orientations of ∆) such that the asymptotic formula holds, where $S_\Delta$ is the Regge (discrete Einstein) action without cosmological constant for the flat 4-simplex ∆.
• If the reconstructed 4-simplex ∆ is Euclidean or of split signature, then there exist φ (depending on the choice of phases of the coherent states) and geometric factors $N^\pm_\Delta$ (given by the lengths and orientations of ∆) such that the asymptotic formula holds, where $S_\Delta$ is the Regge (discrete Einstein) action without cosmological constant for the flat 4-simplex ∆. The case when all $N^{can}_i$ are equal to $e_0$ was proven before (standard EPRL [12,35]), but the other cases are our new result. Moreover, as in the standard EPRL asymptotics, we prove that if the lengths matching condition is not satisfied, then either the amplitude is asymptotically suppressed, or there exists a single stationary point with the interpretation of a vector geometry [12].
Vertex amplitude in extended EPRL model
In this section we will describe the extended EPRL embeddings [7,10] and the construction of the vertex amplitude. Our analysis is restricted to the case when all faces are of signature (−−), that is, when the diagonal simplicity conditions of EPRL read $\rho = 2\gamma\ell$. The case of signature (−+), i.e. of timelike faces,
is more complicated. No EPRL-like construction is known in this case (see [36] for recent developments). The coherent state proposal [10] in the line of the Freidel-Krasnov model differs from the representation-theoretic construction in the style of EPRL in this case. We expect that the asymptotic analysis of the latter may lead to some unphysical sectors due to the non-extremality of the necessary embeddings 2.
Notation
Let us summarize the notation for spinors and the $SL(2,\mathbb{C}) \to SO^+(1,3)$ double cover. We use signature + − − −. We introduce the matrices $\sigma_\mu$, and we have the isomorphism $x \mapsto x^\mu \sigma_\mu$ from Minkowski space $M^4$ onto hermitian 2 × 2 matrices, with $\det(x^\mu\sigma_\mu) = x \cdot x$. The symplectic form ω is defined on spinors, and we also use a bracket notation for two spinors z and w. Let us also introduce $\hat\sigma_\mu$. Together, $\sigma_\mu$ and $\hat\sigma_\mu$ form the $\gamma_\mu$ matrices, but as we always work in either the self-dual or the anti-self-dual representation we prefer this notation. From the standard commutation relations one obtains the isomorphism π from $SL(2,\mathbb{C})$ to $SO^+(1,3)$. The Lie algebra of $SO^+(1,3)$ can be identified with bivectors (2-forms). The action of bivectors on vectors is defined by contraction with the use of the metric, and the identification of $so(1,3)$ with $sl(2,\mathbb{C})$ is then given on the basic simple bivectors. With the right choice of orientation, the Hodge * operation corresponds to multiplication by i of the traceless matrix.
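As a concrete check of these conventions, the following sketch verifies numerically that the determinant of the hermitian matrix associated with x reproduces the Minkowski norm. The choice $\sigma_\mu = (\mathbb{1}, \vec\sigma)$ is an assumption on our part, consistent with the signature + − − −.

```python
import numpy as np

# standard Pauli matrices; sigma[0] is the identity (assumed convention)
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def to_hermitian(x):
    """Map a Minkowski vector x^mu to the hermitian 2x2 matrix x^mu sigma_mu."""
    return sum(x[mu] * sigma[mu] for mu in range(4))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
X = to_hermitian(x)
minkowski_norm = x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2
assert np.allclose(np.linalg.det(X).real, minkowski_norm)  # det X = x . x
assert np.allclose(X, X.conj().T)                           # X is hermitian
```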
Representations of SL(2, C)
Unitary irreducible representations $D^{\ell,\rho}$ from the principal series $(\ell, \rho)$ are realized on functions satisfying a homogeneity condition, with the action of $SL(2,\mathbb{C})$ defined accordingly. We are using the convention of [12], as opposed to that of [37]; the latter is equivalent to the action $(g\Psi)(z) = \Psi(g^{-1}z)$, and the two actions can be related. The scalar product of two such functions $\Psi_1$ and $\Psi_2$ is defined via a form (26). The form (26) is invariant under the scaling transformation and is annihilated by the generator of this transformation, thus it descends to a form on $\mathbb{CP}^1$.
Subgroups preserving the normal N
The subgroup of $SO^+(1,3)$ that preserves the normal N is the image of the subgroup St(N) of $SL(2,\mathbb{C})$ that preserves $\eta_N$; equivalently, $g^T$ preserves the hermitian form defined by $\eta_N$. This follows from the defining property of $g \in St(N)$.
The Y map
Let us review the generalized EPRL construction. We will work only with spacelike surfaces (the reason for this nomenclature will be explained later, in section 5.2, equation (200)). Let us consider a normal vector N with $N \cdot N = t$, where t = 1 for a timelike and t = −1 for a spacelike vector, and its stabilizing group St(N). In the case t = 1 the stabilizer is conjugate to SU(2), and it is exactly SU(2) for $N = e_0 = (1, 0, 0, 0)$. For t = −1 it is conjugate to SU(1, 1), and it is exactly SU(1, 1) for $N = e_3 = (0, 0, 0, 1)$. We will consider only the standard normals $e_0 = (1, 0, 0, 0)$ and $e_3 = (0, 0, 0, 1)$. We will also denote either of them by $N^{can}$, as we would like to work in a unified setup. We denote $t^{can} = N^{can} \cdot N^{can}$.
The EPRL Y map is an embedding of a unitary representation of the stabilizing group St(N) into the unitary representation $(\ell, \rho = 2\gamma\ell)$ of $SL(2,\mathbb{C})$ that satisfies a certain extremality condition. Let us recall the choice of $H_{EPRL(\ell,\rho)}$ made by [7,10]:
1. the spin $\ell$ representation $D^\ell$ of SU(2) embedded into $D^{\ell,\rho}$ for $N^{can} = e_0$;
2. the discrete series $D^\pm_\ell$ of spin $\ell$ representations of SU(1, 1) embedded into $D^{\ell,\rho}$ for $N^{can} = e_3$.
Coherent states
All three families of representations have certain common features. Let us consider the generator of rotations around the z axis. We can introduce bases of its eigenfunctions, and in all three cases there exists an extreme eigenvalue. The corresponding eigenfunctions are (Perelomov) coherent states [38,39,37], where $n_0 = (1, 0)$, $n_1 = (0, 1)$ and θ is the Heaviside step function.3 We do not need to consider the − case for SU(2), since it is obtained from the + eigenstate by a rotation.
All other coherent states are obtained by transforming these basic coherent states by the group action of $St(N^{can})$. In fact, these states can be parametrized by spinors (39), and for $N^{can} = e_3$ by spinors $n_\pm$ with $\langle n_\pm, n_\pm\rangle_{N^{can}} = \pm 1$.
The vertex amplitude
We can now consider the vertex amplitude in the spin foam model. Let us consider the pentagonal graph [12] with five nodes and links connecting every node with every other.
• for every node i we choose a canonical normal $N^{can}_i$ that determines an embedded subgroup (either SU(2) or SU(1, 1));
• for every directed link ij starting at the node i we choose a type of embedded representation (in the case of SU(1, 1) it can be $D^+$ or $D^-$);
• for every directed link ij starting at the node i we choose a spinor $n_{ij}$ that determines a coherent state (in the representation $(\ell_{ij}, \rho_{ij})$ of $SL(2,\mathbb{C})$) that we will denote by $\Psi_{ij}(z)$.

This data is the boundary data for the vertex amplitude of the extended EPRL spin foam model [12,10]. The vertex amplitude is given by an integral whose factors $c_{ij}(\Lambda)$ are independent of z and g; for $N^{can}_i = e_0$ and the $D^\ell$ representation we can similarly decompose the integral kernel of β. The differential form descends to $\mathbb{CP}^1 \times \mathbb{CP}^1$ as a measure (smooth in the interior of its support). Similarly4, as the integrand form is invariant under rescaling of z, the part scaled uniformly with Λ also needs to be invariant. The amplitude is invariant under the rescalings $z_{ij} \to \lambda_{ij} z_{ij}$, $z_{ji} \to \lambda_{ji} z_{ji}$ and thus projects down to a function on $\mathbb{CP}^1 \times \mathbb{CP}^1$. We will write the vertex amplitude as an integral of the form (51), where $c(\Lambda) \sim \Lambda^{20}$ for large Λ.
Action
We observe that $p_\beta$ is of the stated form. The scaling invariance of $P_{ij}$ and the descent property of $\mu_{ij}$ are general facts.
so we conclude $|p_\beta| = 1$. Similarly we will prove that $|p_{ij}| \leq 1$ (lemma 8). Now we define

$$S = \sum_{i<j} \ln P_{ij} \qquad (52)$$

and the analogous pieces. We note that, as written above, the various S are multi-valued functions defined up to multiples of 2πi, but as long as the product of the $P_{ij}$ is nonzero we can always work in a local branch. The action $S_{ij}$ of the amplitude, depending on $g_i$ and $z_{ij}$, is given accordingly; similarly we can also define an auxiliary $e^{S^{aux}_{ij}} = p^{aux}_{ij}$. The integration is restricted to the domain where the defining conditions hold for all $i \neq j$.
Applicability of the stationary phase method
We will now study the behaviour of the vertex amplitude when the representation labels are rescaled, $(\ell_{ij}, \rho_{ij}) \to (\Lambda\ell_{ij}, \Lambda\rho_{ij})$, and $\Lambda \to \infty$. According to the stationary phase method [28], in this regime the amplitude is dominated by contributions from the points (manifolds) where the following conditions are satisfied: the reality condition $\Re(S) = 0$ and the stationary point condition $\delta_R S = 0$. Here $\delta_R$ denotes the standard variation, to be distinguished from the holomorphic variation δ that we will introduce later.
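The mechanism can be illustrated on a one-dimensional toy integral, unrelated to the actual vertex integrand: a complex action with $\Re S \leq 0$, with equality (together with $S' = 0$) only at a single critical point. A minimal sketch comparing the integral against the leading stationary-phase estimate:

```python
import numpy as np

def toy_amplitude(lam, n=400001):
    """Integrate exp(lam * S(x)) for S(x) = i(x-1)^2 - (x-1)^4 on a fine grid."""
    x = np.linspace(-2.0, 4.0, n)
    s = 1j * (x - 1)**2 - (x - 1)**4   # Re S <= 0, equality only at x = 1
    return np.sum(np.exp(lam * s)) * (x[1] - x[0])

for lam in (50, 200, 800):
    # leading stationary-phase estimate from the critical point x = 1
    estimate = np.sqrt(np.pi / lam) * np.exp(1j * np.pi / 4)
    print(lam, abs(toy_amplitude(lam) / estimate - 1))  # relative error -> 0
```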
A few comments are necessary. First of all, we deal here with functions on a non-compact domain, which in addition has some boundaries. In such a situation it is not clear that the stationary phase analysis describes the asymptotic expansion correctly. It is possible that so-called boundary contributions to the asymptotic expansion appear. By that we mean both contributions from the finite boundaries and contributions from the fact that we integrate over a non-compact region.
This issue was never addressed even in the work on the standard EPRL asymptotic limit. In fact, we expect that such contributions will appear, but only for certain special degenerate boundary data (usually not considered in EPRL asymptotics); moreover, they might have physical meaning. This issue, however, is beyond the scope of our current paper.
In addition, there is a related question of integrability. In the case of the standard EPRL construction it was shown (see [5,41]) that the integral is absolutely convergent. However, such a proof has not yet been provided for the modified construction. We also postpone this question to future research.
Gauge symmetries of the action
The following transformations, labeled by $g \in SL(2,\mathbb{C})$ and $\lambda_{ij} \in \mathbb{C}^*$, preserve the non-gauge-fixed action. The g-part of this transformation is gauge fixed by the $\delta(g_5)$ term. We can still consider variations of the action with respect to $g_5$; they are just not independent of the others. The subgroup of gauge transformations with g = 1 will be called $\mathbb{CP}^1$-gauge transformations.
Stationary point conditions
We will use variational calculus (form) notation for derivatives. The variations are of the form given below, with $\delta_R g_i$, $\delta_R g_{ij}$ taking values in the Lie algebra of $SL(2,\mathbb{C})$. We will also use variations with respect to single variables, which are variations with a single $\delta_R g_i$ (respectively, a single $\delta_R z_{ij}$) nonzero. A stationary point for the given boundary data (spinors $n_{ij}$, vectors $N^{can}_i$ and the types of embedded representations) consists of a collection of group elements and spinors satisfying the reality condition $\Re(S) = 0$ and the stationary point conditions. There are gauge transformations acting on the stationary points.
Reality condition and holomorphic derivatives
We will consider the action S as a function of holomorphic g and antiholomorphic $\bar g$ variables5. We will prove later (see lemma 9) that $\Re S \leq 0$. However, this holds only when $\bar g$ and g are complex conjugates of each other (we will call this set the real manifold). We will now consider the complexified manifold where these group elements are independent. We denote holomorphic and antiholomorphic variations with respect to these group elements by $\delta_g S$ and $\delta_{\bar g} S$.
is saturated only if $\forall_\alpha\, S_\alpha = 0$.

Lemma 2. When the reality conditions are satisfied, the real variation can be expressed through holomorphic variations, where in the second equality we take $\delta\bar g = \overline{\delta g}$.

Proof. The real variation of S is zero when the reality conditions are satisfied (from extremality), so it can be rewritten accordingly; the equality then follows from the arbitrariness of δg. We can also compute the variation for $\delta\bar g = \overline{\delta g}$.

Proof. It follows from lemma 2, where we took $\delta\bar g = \overline{\delta g}$. As holomorphic variables can be multiplied by i, we see that a vanishing holomorphic derivative is equivalent to a vanishing real variation.
Holomorphic stationary point conditions
Let us now rephrase the stationary point conditions in holomorphic language. A stationary point for a given set of boundary data (spinors $n_{ij}$, vectors $N^{can}_i$ and the types of embedded representations) consists of a collection of group elements and spinors (on the real manifold) satisfying the reality condition and the stationary point conditions (δS = 0). There are gauge transformations acting on the stationary points.
Traceless matrices, spinors and bivectors
In this section we will describe the connection between spinors and Lie algebra elements of $SL(2,\mathbb{C})$ (traceless matrices). Using this connection, we will show that stationary points can be described in terms of traceless matrices satisfying certain conditions (we will call such data an $SL(2,\mathbb{C})$ solution). This will allow us later to translate it into the geometric language of the Lorentz group and bivectors. We will also compute the (difference of the) phase between two stationary points.
Traceless matrices
We will now recall some properties of spinors that we will use to translate the stationary point conditions into the language of traceless matrices. Proofs are provided for the convenience of the reader in Appendix D.
Let us assume that δg is traceless; then, for the symplectic form ω of (12) and for spinors u and v, the stated identities hold. Moreover:
• if the first identity holds for all traceless matrices δg, then there exists $\lambda \in \mathbb{C}^*$ realizing the stated proportionality;
• if the second identity holds for all traceless matrices δg, then there exists $\lambda \in \mathbb{C}^*$ realizing the corresponding proportionality.
Variations δ g i S ij and reality conditions
In order to avoid overburdening the notation, we will in this subsection suppress the indices ij and write $S_{ij}(g_i, z_{ij})$ as S(g, z). Let us consider a normal $N^{can}$, a normalized spinor n, and $\rho \in \mathbb{R}$, $2\ell \in \mathbb{Z}$, with

$$N^{can} \cdot N^{can} = \det \eta_{N^{can}} = t, \qquad \langle n, n\rangle_{N^{can}} = n^\dagger \eta^T_{N^{can}}\, n = s, \qquad s, t \in \{-1, 1\} \qquad (92)$$

and the action part of the amplitude depending on g and z. The integration is restricted to the domain specified in (97).
Spinors u and v
We introduce the spinors u and v.

Proof. The first equality follows from the definition; the second from $\eta_{N^{can}}\, \omega\, \eta^T_{N^{can}} = t\omega$, where we used $\eta^\dagger_{N^{can}} = \eta_{N^{can}}$ and $\omega^\dagger = -\omega$.
Lemma 7.
We have the equality stated above.

Proof. We will prove the matrix identity by checking that the action of both sides coincides on the spinors v = n and u. By lemma 6,

$$(s\,nn^\dagger + st\,uu^\dagger)\,\eta^T_{N^{can}}\, n = s\,n\,\langle n, n\rangle_{N^{can}} + st\,u\,\langle u, n\rangle_{N^{can}},$$

and similarly for u.
Lemma 8. On the real manifold
$\Re S \leq 0$, and the equality $\Re S = 0$ holds if and only if the condition below is satisfied.

Proof. By construction $\Re S_\rho = 0$, so we consider $\Re S_\ell$. The reality condition is equivalent to the reality condition for $S^{aux}$. By lemma 7 we have the equality

$$t\,(s\,\langle g^T z, g^T z\rangle_{N^{can}}) = t\,|\langle g^T z, n\rangle_{N^{can}}|^2 + |\langle g^T z, u\rangle_{N^{can}}|^2 \geq t\,|\langle g^T z, n\rangle_{N^{can}}|^2 \qquad (110)$$

and because $s\,\langle g^T z, g^T z\rangle_{N^{can}} > 0$ and $|\langle g^T z, n\rangle_{N^{can}}| > 0$, this is equivalent to $\Re S^{aux} \leq 0$.
The equality holds only if $\langle g^T z, u\rangle_{N^{can}} = 0$, that is, $g^T z = \xi n$.
Lemma 9.
On the real manifold $\Re S \leq 0$, and under the condition $\Re S = 0$, with $s = \langle n, n\rangle_{N^{can}} \in \{-1, 1\}$, the variation takes the stated form.

Proof. From lemma 2, when the reality conditions are satisfied, the variation can be computed directly; thus the total variation follows, and by (86) it can be written in the given form.
When the reality condition is satisfied, the conditions take the stated form, with the spinors $u_{ij}$, $v_{ij}$ and $s_{ij} = \langle n_{ij}, n_{ij}\rangle_{N^{can}_i}$.
Variations with respect to z ij
The edge action can be divided into pieces. We parametrize $\delta z_{ij}$ by a traceless $\delta g_{ij}$ as follows; all variations can be written this way, but $\delta g_{ij}$ is not unique.
In this case $\delta_{z_{ij}} S_{ij} + \delta_{g_i} S_{ij} = 0$, so taking the variation as in the statement we get the result.
We will from now on abuse notation and regard $S^\beta_{ji} = S^\beta_{ij}$ for i < j.

Lemma 11. Let us introduce a traceless matrix $B^\beta_{ij}$ by the stated condition. The spinors $z_{ij}$ and $z_{ji}$ are then determined up to complex scaling.

Proof. The variation can be written in the stated form; together with (86) it can be transformed into the form from the lemma. The characterization of $z_{ij}$ and $z_{ji}$ follows from lemma 4.
Stationary point conditions and boundary data
The boundary data (spinors $n_{ij}$, normal vectors $N^{can}_i$ and types of embedded representations) can be summarized as follows. A stationary point for given boundary data consists of a collection of group elements and spinors (on the real manifold) satisfying the stationarity and reality conditions. There are gauge transformations acting on the stationary points, parametrized by $g \in SL(2,\mathbb{C})$ and $\lambda_{ij} \in \mathbb{C}^*$. The gauge-fixing condition $g_5 = 1$ fixes the $SL(2,\mathbb{C})$ gauge transformations.
SL(2, C) solutions
We can now translate the stationary point conditions into the language of traceless matrices.

Definition 2. An $SL(2,\mathbb{C})$ solution for the boundary data consists of
Lemma 12. $SL(2,\mathbb{C})$ solutions are in bijective correspondence with stationary points up to $\mathbb{CP}^1$ gauge transformations. The $SL(2,\mathbb{C})$ solution is determined by the group elements of the stationary point, and then
The $SL(2,\mathbb{C})$ gauge transformations act in the same way on both sides of the correspondence.
Proof. A stationary point determines an $SL(2,\mathbb{C})$ solution, with the traceless matrices obtained from $\delta_{z_{ij}} S^\beta_{ij}$ and $\delta_{g_i} S_{ij}$. They satisfy all the assumptions of an $SL(2,\mathbb{C})$ solution.
In the opposite direction, we need to find $z_{ij}$ realizing the given traceless matrices. This can be done using lemmas 4 and 11. In fact, $z_{ij}$ and $z_{ji}$ need to be eigenvectors of $B^\beta_{ij}$ (or, equivalently, $B^\beta_{ji}$). The matrix $B^\beta_{ij}$ has exactly the required eigenvalues because $B^\beta_{ij} = -(g_i^T)^{-1} B_{ij}\, g_i^T$ and $B_{ij}$ has such eigenvalues. This determines $z_{ij}$ up to $\mathbb{CP}^1$ gauge transformations. By lemma 5 the remaining constraint is just the reality condition. We see now that, with our choice of $z_{ij}$ and from the reality condition (by lemma 9), the remaining conditions of a stationary point can be written in terms of the matrices $B_{ij}$ and $B^\beta_{ij}$ and are the same as the conditions for an $SL(2,\mathbb{C})$ solution. The $z_{ij}$ are determined up to $\mathbb{CP}^1$ gauge transformations.
Determination of the phase
Let us compute the value of the edge part of the action at the stationary point. We can choose a gauge for $z_{ij}$ in which the edge contributions vanish; in such a case the only contribution to the action comes from the β amplitude. From the equality of traceless matrices from lemma 5 and (146) (taking into account the gauge fixing), and by a similar computation for the other factor, the phase amplitude follows.
Difference of the phase amplitude between two stationary points
Changing the phases of the coherent states also changes the total phase. However, the difference of the phases of two different stationary points is invariant under such transformations. Our goal is to determine this phase difference for two stationary points. As $u_{ij}$ and $v_{ij}$ are determined by the boundary data, they are the same in both stationary points; this leads to the quantities $\Delta_{ij}$ introduced below. Using their properties we obtain:

Lemma 13. The phase difference between the stationary points defined by two solutions is determined by the real numbers $r_{ij}$ and $\varphi_{ij}$ given by (167) and (172).

Proof. It is enough to transpose (172) and (167).
The geometric solutions
We will translate our $SL(2,\mathbb{C})$ description into $SO(1,3)$ language. We will also extract the necessary geometric boundary data from the boundary data given by the spinors $n_{ij}$. This section is devoted to the relation between the two descriptions.
Spin structures
Let us notice that the EPRL construction makes sense only if a certain integrability condition holds, because only then is the intertwiner space (defined in the distributional sense) nonempty. We assume this condition for every boundary tetrahedron. The vertex integral and the action (defined up to 2πi) are then invariant under the following transformations, which will be called spin structure transformations.
Bivectors and traceless matrices
Generators of $SO^+(1,3)$ are matrices in $M^4$ that are antisymmetric after lowering an index. They can be identified with bivectors. Generators of $SL(2,\mathbb{C})$ are traceless matrices. The isomorphism between these Lie algebras is fixed on simple bivectors. With the standard choice of orientation, the Hodge * operation corresponds to multiplication by i of the traceless matrix. We also have a pairing for two bivectors. The image of a traceless matrix with purely imaginary eigenvalues is spacelike. We assume that the $SL(2,\mathbb{C})$ representations satisfy the spacelike-faces conditions; then the traceless matrices $B_{ij}$ have the property that
Characterization of the bivectors
We will now characterize $\pi^{-1}\!\left(-\frac{2\gamma}{\gamma - i}\, B^T_{ij}\right)$. Let us introduce a vector $l^\mu_{ij}$ by the stated identity. It is null because the matrix has rank one. It is future directed if $s_{ij} = 1$ and past directed if $s_{ij} = -1$.
Contracting with $\sigma_\nu$ and taking the trace, we obtain the components of $l_{ij}$. Let us consider the decomposition (introducing the vector $v_{ij}$). For any traceless matrix K we have the identity (189). This equality shows that the traceless matrices $B_{ij}$ and $-\frac{2\gamma}{\gamma - i}\, B^T_{ij}$ are equal.
3d characterization of the bivectors
We can now characterize the bivectors in terms of 3-dimensional geometry in the space perpendicular to $N^{can}_i$.
Definition 3. The geometric boundary data are the sets of vectors $v_{ij}$ that are obtained from the spinors $n_{ij}$ as the projections, onto the space orthogonal to $N^{can}_i$, of the null vectors $l_{ij}$ defined above. Let us notice what follows from the previous subsection:
Geometric solutions
The inversion $I \in SO(1,3)$ is defined by $Ix = -x$. It does not belong to $SO^+(1,3)$.
Let us introduce the notion of a geometric solution: a collection of group elements such that the bivectors, with $v_{ij}$ defined by the boundary data (definition 3), satisfy the stated conditions. The transformations obtained by composing with the inversion are called inversion gauge transformations.
Let us notice that the necessary condition (196) for a stationary point is in fact a condition on the boundary data. It is called the closure condition [12]: $\sum_j v_{ij} = 0$. We will always assume that the boundary data satisfies the closure condition. Let us also notice that from 5.2.2 we know that $*(v_{ij} \wedge N^{can}_i)$ is a spacelike bivector.
and then also the analogous relation holds. Comment: Transposition is the price one needs to pay for following the notation of [12].
Geometric reconstruction
In the previous section we determined that stationary points modulo gauge transformations are in one-to-one correspondence with $SO(1,3)$ geometric solutions. In this section we will classify the latter7. Geometric $SO(1,3)$ solutions divide into non-degenerate and degenerate ones. With the first class we can associate non-degenerate Lorentzian simplices. However, with a pair of degenerate geometric solutions we will (in section 7) associate a simplex of other than Lorentzian signature. For this reason we want to provide a classification in arbitrary signature.
Notation
We will denote all operations in this arbitrary metric by an underline, i.e. the Hodge star, the scalar product, and contractions with the use of the metric.
We can introduce reflections $R_N$ with respect to a vector N normalized to ±1, where we lower the index with the use of the metric. Notice that $R_N^2 = \mathbb{1}$. We can also introduce the inversion. Depending on the signature, the inversion belongs (O(2,2) and O(4)) or does not belong (O(1,3)) to the connected component of the identity. It is, however, always in the special orthogonal subgroup in dimension 4. The reflections do not belong to the special orthogonal subgroup, thus also not to the connected component of the identity. The connected component of the identity we denote by $SO^+$, and by SO we denote the special orthogonal subgroup.
Geometric solution
We will first define the geometric version of the boundary data.
Definition 5. The (SO geometric) boundary data is a collection
We will say that the boundary data is non-degenerate if for every i, every 3 out of 4 vectors v ij are linearly independent.
The set of canonical normals is such that every normalized vector N, $N \cdot N = \pm 1$, can be rotated by an element of SO to exactly one of the canonical normals8.
Definition 6. The geometric SO solution for the geometric boundary data is a collection
such that the bivectors satisfy the stated conditions. We can associate with this data the normals $N^{\{G\}}_i = G_i N^{can}_i$.
Geometric bivectors
We will perform the following construction of the k-form corresponding to the simplex spanned by the points $x_0, \dots, x_k$ in $\mathbb{R}^n$. Let us introduce the auxiliary space $\mathbb{R}^{n+1}$ and, in this space, vectors $y_\alpha$ and a covector A. Let us introduce k + 1 vectors. k-vectors in $\mathbb{R}^n$ can be identified with k-vectors Ω in $\mathbb{R}^{n+1}$ that satisfy

$$A \,\lrcorner\, \Omega = 0 \qquad (214)$$

We can check this, as the only nonzero last component is in $y_{\alpha_0}$. This gives (after restriction) the volume k-vector of the k-simplex multiplied by k!. Let us notice that this k-vector depends on the order of $\alpha_0, \dots, \alpha_k$. As $V^{\alpha_0\cdots\alpha_k}$ changes by $(-1)^{\operatorname{sgn}\sigma}$ under permutations of the points (even permutations preserve it), the same is true for its underlined version.
Suppose that we have a simplex determined by points with indices 0, . . . , n. We can define codimension-1 and codimension-2 simplices by indicating which points we are skipping.
Let us introduce $V_{0\cdots\hat\imath\cdots n}$, where $\hat\cdot$ means omission, and similarly $\tilde V_i$ and $B_{ij}$. With this definition $B_{ij} = -B_{ji}$.
Theorem 2. The following holds
Proof. Let us consider the contraction; it can be written as a sum, where $s_j$ is the number of the site on which we contract.
The sum can be rewritten; however, this (n−1)-vector contracted with A is zero, which finishes the proof. In a similar way one obtains the other identity. Let us restrict to the case n = 4.
Definition 8. The geometric bivectors of the 4-simplex determined by vertices
Let us consider a scaling transformation. Under this transformation the bivectors change accordingly. Let us notice that, in particular, the inversion transformation λ = −1 preserves $B^\Delta_{ij}$.
Nondegenerate case
Let us now assume that the $x_i$ do not lie in a hyperplane, that is, the $y_i$ are linearly independent. We introduce a dual basis $\hat y^i$ and also covectors $\tilde y^i$. These covectors can be regarded as belonging to $\mathbb{R}^n$, and the volume form can be written with the use of the $\tilde y^i$ in terms of $\mathbb{R}^n$.

Proof. It is enough to check that both sides of the equality give the same value when contracted with the elements of the basis $y_j$.

The covectors $\tilde y^i$ are conormal to the subsimplices $V_{0\cdots\hat\imath\cdots n}$. Let us now add a metric tensor on $\mathbb{R}^n$. It defines a scalar product on k-vectors as well. We can also introduce a normalization of $V_{0\cdots n}$. By the definition of the Hodge star (see B), if the codimension-1 subsimplices are not null, we can introduce normal vectors $N_i$, positive numbers $W_i$ and $t_i \in \{-1, 1\}$ realizing the stated relations. Let us consider an altitude of our simplex from the point $x_i$; its base we denote by $h_i$. Since it lies inside the hyperplane of the remaining points, $\exists_{\alpha_k, k\neq i}$ such that the decomposition holds. We then have (identifying vectors and covectors using the scalar product) that the vectors are outer directed. Let us notice that for spacelike faces the area is equal to half the norm of the bivector.
Reconstruction of normals from the bivectors
Suppose that we have a collection of bivectors $B^{\{G\}}_{ij}$ coming from some non-degenerate geometric solution. We can reconstruct from them the $N_i$ up to a sign. We assume that the $v_{ij}$ for fixed i span the whole space perpendicular to $N^{can}_i$ (for example, when the boundary data is non-degenerate).
Lemma 17.
Let us assume that we have a geometric solution $\{G_i\}$ for non-degenerate boundary data. Then the following are equivalent for the chosen i and a vector N. Moreover, if there is an independent vector N′ satisfying the same equation, then
Reconstruction of bivectors from knowledge of ±N i
We will now reconstruct the bivectors from the normals $N_i$. In fact, our theorem works in any dimension and in arbitrary non-degenerate signature.
Theorem 3. Let us assume that in $\mathbb{R}^n$ we have vectors $N_i$, i = 0, . . . , n, normalized to ±1, such that (non-degeneracy) any n out of the n + 1 vectors $N_i$ span the whole $\mathbb{R}^n$. Then there exists a solution to the following:
• there exist simple (n − 2)-vectors $B_{ij}$;
• for every i the closure-type condition holds.
The solution is given explicitly, where the constants $W_i \in \mathbb{R}$ are the nonzero solutions of

$$\sum_i W_i N_i = 0.$$

For any other solution $B'_{ij}$ there exists a constant $\lambda \in \mathbb{R}$ such that $B'_{ij} = \lambda B_{ij}$. The solution is independent of changing any of the $N_i$ by a sign.
Proof. Let us first prove the uniqueness of such a solution up to scaling. Assume such $B_{ij}$ are given. There is exactly one solution (up to scaling by a real constant) of the equation $\sum_i W_i N_i = 0$, as there are n + 1 vectors in an n-dimensional space and every n out of the n + 1 are independent. The constants $W_i$ are all nonzero (in the nontrivial solution), and in fact they will turn out to be proportional to the signed volumes of the tetrahedra of the 4-simplex with normals $N_i$9. Let us notice that the $B_{ij}$ are simple (n − 2)-bivectors, because they are annihilated by two independent normals, so there needs to exist a constant $\lambda_{ij}$ relating them. For every i we then have, for some $\lambda_i \in \mathbb{R}$, an equation with a unique (up to a constant) solution, so the $\lambda_{ij}$ with fixed i agree. From the symmetry we have $\lambda_{ij} = \lambda_{ji}$, hence all of them are equal to some constant λ, and finally this shows uniqueness up to scaling. To show existence, it is enough to check that the forms constructed above satisfy the requirements (that is, just reversing all the arguments). As changing $N_i$ by a sign also changes $W_i$ by the same sign, the $B_{ij}$ are independent of the choice of signs of the normal vectors.
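Numerically, the weights $W_i$ can be obtained as the (one-dimensional, by non-degeneracy) null space of the matrix whose columns are the normals. A minimal sketch with hypothetical sample normals (only the null-space property is relevant here):

```python
import numpy as np

def weights_from_normals(normals):
    """Solve sum_i W_i N_i = 0 via SVD; normals is a list of five 4-vectors."""
    A = np.array(normals, float).T            # 4 x 5 matrix, columns N_i
    _, s, vh = np.linalg.svd(A)
    # non-degeneracy => rank 4, so the null space is spanned by the last row of vh
    assert s[3] > 1e-10
    return vh[-1]

# hypothetical normals of a 4-simplex (not normalized; illustration only)
N = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [-1, -1, -1, -1]]
W = weights_from_normals(N)
print(W / W[0])      # all entries equal: W_i proportional to (1, 1, 1, 1, 1)
```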
Nondegenerate bivectors and 4-simplex
We can now prove that a non-degenerate geometric solution determines a 4-simplex uniquely up to shifts and inversion.
where r = ±1. They are related by the inversion transformation I, and for both of them the sign r is the same.
Proof. We will first prove that such a 4-simplex exists. We take any 5 planes orthogonal to the $N^{\{G\}}_i$. They cut out a 4-simplex ∆′. This simplex is unique up to shifts and up to scaling by a real number (changing its size and applying the inversion).
The bivectors $B^{\Delta'}_{ij}$ of any of these 4-simplices satisfy, by the reconstruction from normals, proportionality to the given ones. Under a scaling transformation (by a real number λ, so also under the inversion) the bivectors change by $\lambda^2$. There exist exactly two scalings $\pm\sqrt{|c|}$ that bring the bivectors to $B^{\{G\}}_{ij}$. The sign cannot be changed, but it depends on the choice of orientation. Uniqueness: from the bivectors we can reconstruct $\pm N^{\{G\}}_i$ (sign ambiguity). For any choice of the signs we re-obtain the same 4-simplices.
The sign r appears as an additional datum in the reconstruction.
Definition 9.
We call the constant r = ±1 from the reconstruction for a geometric solution the geometric Plebański orientation.
The constant r relates the chosen orientation of $\mathbb{R}^4$ to the orientation defined by the order of the tetrahedra.
We have the stated relation for $B^{\{G\}}_{ij}$, where $\mathrm{Vol}_\Delta$ is 4! times the volume of the 4-simplex (239).
Uniqueness of Gram matrix and reconstruction
For the non-degenerate geometric solution $\{G_i\}$, the edge lengths of the tetrahedron i in the 4-simplex ∆ can be reconstructed from the bivectors $B^\Delta_{ij} = r B^{\{G\}}_{ij}$, $j \neq i$. Let us now consider a single, i-th tetrahedron. As $G_i$ is a rotation, the shape of the tetrahedron with the bivectors $B^{\{G\}}_{ij}$ is the same as the shape of the tetrahedron with the rotated-back bivectors. The latter is, however, determined by the geometric boundary data.
Lemma 18. If the boundary data is non-degenerate, then for every i there exists a unique (up to inversion and translations) tetrahedron with face bivectors
in the subspace $N^{can\perp}_i$.

Proof. Let us fix i. We cut out a tetrahedron with planes perpendicular to the $v_{ij}$ in $N^{can\perp}_i$, in generic position. Its bivectors $B_{ij}$ are proportional to the required ones. From the closure condition, and since the $v_{ij}$ are perpendicular to $N^{can}_i$, non-degeneracy gives the proportionality with a single constant λ. By rescaling the tetrahedron we can get λ = ±1.
This determines the edge lengths uniquely as functions of the $v_{ij}$. Let us denote by $l^{i\,2}_{jk}$ the signed square length of the edge between the faces (ij) and (ik) of the tetrahedron i.
These numbers are defined for i, j, k pairwise different and are symmetric in j, k.
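In a Euclidean toy version this construction can be carried out explicitly: cut a tetrahedron by the planes $v_j \cdot x = 1$ and read off the edge shared by two faces as the segment joining the two vertices lying on both of them. A minimal sketch (Euclidean scalar product assumed for simplicity; the sample data is hypothetical):

```python
import numpy as np
from itertools import combinations

def edge_lengths(v):
    """Squared edge lengths of the tetrahedron cut out by the planes v_j . x = 1.

    v: (4, 3) array of face covectors (Euclidean toy version of the v_ij).
    Returns {(j, k): squared length of the edge shared by faces j and k}.
    """
    v = np.asarray(v, float)
    # vertex P_m lies on the three planes other than plane m
    P = {m: np.linalg.solve(np.delete(v, m, axis=0), np.ones(3)) for m in range(4)}
    out = {}
    for j, k in combinations(range(4), 2):
        l, m = [a for a in range(4) if a not in (j, k)]
        e = P[l] - P[m]                  # edge joining the two vertices on faces j, k
        out[(j, k)] = float(e @ e)
    return out

v = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
print(edge_lengths(v))   # regular tetrahedron: all six values equal
```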
Definition 10.
The geometric boundary data satisfies the lengths matching condition if $l^{k\,2}_{ij}$ is symmetric in all its indices. The lengths matching condition is necessary for the existence of a non-degenerate geometric solution.
If the lengths matching condition is satisfied, we define the signed square lengths

$$l^2_{ml} = l^{k\,2}_{ij}, \quad \text{for } m, l \text{ the remaining indices different from } i, j, k \qquad (274)$$

These lengths determine a 4-simplex uniquely up to orthogonal transformations and shifts (see [43,44] for a description of the matching conditions).

Proof. It is more convenient to work with the matrix $G_l$ whose elements should correspond to the matrix of suitable scalar products. By a change of basis we can transform $G_l$ to the block diagonal form where one block is $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and the second block is M. The signature of M is then (p, q, n).
There exists a matrix R, with 4 columns and p + q rows, of full rank, such that M factors through R, because M is symmetric with the given signature. The 4-simplex can then be constructed as follows: choose an arbitrary $x_0$ and define $x_i$ for $i \neq 0$ accordingly. This 4-simplex has the prescribed lengths. Let us now compare two different 4-simplices with vertices $x_i$ and $x'_i$. Both R and R′ need to satisfy (277).
Since R and R′ have the same kernel (the kernel of M) and full rank, there exists a transition matrix G, and it satisfies the required relation. As v and $G \in O(p, q)$ are arbitrary, we obtain uniqueness of the solution up to the desired transformations.
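The factorization step can be mimicked numerically with an eigendecomposition: find R and a sign matrix η with $M = R^T \eta R$ and place the vertices accordingly. A minimal sketch, assuming M is the 4×4 matrix of scalar products of the edge vectors $x_i - x_0$; the helper names are our own:

```python
import numpy as np

def factor_gram(M, tol=1e-9):
    """Find R and a diagonal sign matrix eta with M = R^T eta R (hypothetical helper)."""
    M = np.asarray(M, float)
    w, U = np.linalg.eigh(M)
    keep = np.abs(w) > tol                   # drop the n null directions
    R = (np.sqrt(np.abs(w[keep])) * U[:, keep]).T
    eta = np.diag(np.sign(w[keep]))
    return R, eta

def vertices_from_gram(M, x0=None):
    """Reconstruct vertices x_0, ..., x_4 with (x_i - x_0) . eta . (x_j - x_0) = M_ij."""
    R, eta = factor_gram(M)
    x0 = np.zeros(R.shape[0]) if x0 is None else np.asarray(x0, float)
    return [x0] + [x0 + R[:, i] for i in range(M.shape[0])], eta

# hypothetical example: unit orthogonal edges (in the paper's all-minus
# convention for spacelike edges one would feed in -M instead)
M = np.eye(4)
xs, eta = vertices_from_gram(M)
print(eta)    # identity here, i.e. signature (4, 0, 0)
```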
Suppose that $p' \geq p$ and $q' \geq q$; then we can affinely embed a spacetime of signature (p, q) into a spacetime of signature (p′, q′). Even if n > 0, we can reconstruct a degenerate 4-simplex in 4 dimensions up to $O(p', q') \ltimes \mathbb{R}^4$ transformations. However, the signature (p′, q′) is not unique.
If we introduce the normals $\{N_i\}$ and $\{N'_i\}$ to the tetrahedra of two such 4-simplices, then there exist $G \in O$ and $s_i \in \{0, 1\}$ relating them; these are exactly the gauge transformations of the SO geometric solution.
Geometric rotations G
which are normals to the faces of the i-th tetrahedron recovered from the geometric bivectors. We have
Lemma 19. If the lengths matching condition is satisfied, then
Proof. This is equivalent to the equality of scalar products. Both bivectors are bivectors of either the reconstructed i-th tetrahedron in the 4-simplex or the reconstructed boundary tetrahedron in the space perpendicular to $N^{can}_i$. Similarly as for the 4-simplex, we can prove that the two tetrahedra differ by a rotation $G \in O$. The scalar products are thus preserved10.
We can introduce group elements $G^\Delta_i$ for any i by the stated conditions. There are 5 conditions, but only 4 are independent (the closure conditions are the same for the $v^\Delta$ and the v vectors).
Lemma 20. Elements
Proof. Both $N^\Delta_i$ and $N^{can}_i$ are normalized, of the same type, and perpendicular to the rest of the vectors. The shapes of the tetrahedra are also the same, so the scalar products between the vectors are preserved. This is, however, the definition of being orthogonal.
Relation of $G_i$ to $G^\Delta_i$
We would like to compare the $G_i$ from the definition of a geometric solution with the $G^\Delta_i$ (where $r = (-1)^s$ for $s \in \{0, 1\}$). We know that there exist $s_i \in \{0, 1\}$ relating them.

Proof. The condition for $\{G_i\}$ to be a geometric solution is $\det G_i = 1$. This is equivalent, by (290), to the stated parity condition. Let us notice that, as there is only one reconstructed 4-simplex up to rotations from O, two geometric rotations are always related in this way, and we can introduce a definition.
Definition 11. Suppose that the boundary data satisfies the lengths matching condition. We say that it satisfies the orientations matching condition if, for any (and thus for all) reconstructed 4-simplices,
Let us notice that, after we choose a reconstructed 4-simplex (an O transformation), the choice of the $s_i$ is arbitrary and corresponds to inversion gauge transformations. The value of s is fixed by the stated condition, and it is the Plebański orientation.
where $e_\alpha$ is any vector normalized to ±1. The second class corresponds to the reflected 4-simplex. Any non-degenerate solution is gauge equivalent to one of these two.
Proof. The identification comes from lemma 21 and the description of the gauge transformations. There are two reconstructed 4-simplices up to SO rotations. The choice of the $s_i$ corresponds to inversion gauge transformations. Given such a simplex from one class, the representative of the second class can be obtained by applying a reflection; thus, for geometric SO solutions, the stated relation holds, where $e_\alpha$ is normalized.
Classification of geometric solutions
We will consider only the case of non-degenerate boundary data, that is, for every i, every 3 out of the 4 vectors $v_{ij}$ are independent. The general classification was done in [42].
but this contradicts the non-degeneracy of the boundary data. Thus exactly one of the two conditions must be satisfied.
Let us consider the second case. We will prove that there exists only one (up to scaling) solution $W_i$, and that for the nontrivial solution all $W_i \neq 0$. As there are 5 vectors in a 4-dimensional space, at least one solution exists. From the non-degeneracy of the $v_{ij}$, and since for any i, $\sum_{j\neq i} G_i v_{ij} = 0$, the ratio of $W_j$ to $W_k$ is fixed and nonzero for $j, k \neq i$. However, the choice of i is arbitrary, so the solution is unique up to a constant and has all $W_i \neq 0$. This is equivalent to the solution being non-degenerate.
Other signature solutions
Let us notice that in the case when the $N^{can}_i$ are of different types and the boundary data is non-degenerate, degenerate geometric solutions cannot occur. The case of all $N^{can}_i$ timelike was described in [12,13]. In this case, if the lengths and orientations matching conditions are satisfied, then either the Gram matrix is degenerate and the geometric solution corresponds to a degenerate 4-simplex, or such points occur in pairs (there are exactly two geometric solutions) and one can associate with them a Euclidean 4-simplex. The difference of the action is proportional to the Regge action again, but the proportionality constant is different.
Our goal is to provide a uniform treatment of both cases $N^{can} = e_0$ and $N^{can} = e_3$, where

$$\forall i \quad N^{can}_i = N^{can} \qquad (307)$$
Vector geometries
We can consider the subgroup of $SO(1,3)$ that preserves $N^{can}$.
In this situation
One can check that all conditions for geometric solutions are equivalent to conditions for vector geometries.
Other signature solutions
In this chapter we will relate pairs of degenerate solutions to a non-degenerate simplex of other than Lorentzian signature. We describe it in a unified language applicable to both $N^{can} = e_0$ and $N^{can} = e_3$.
Let us introduce an auxiliary space $M'^4$ that differs from Minkowski $M^4$ by flipping the norm of $N^{can}$, where we use $g_{\mu\nu}$ for lowering indices. We will use a prime to distinguish operations related to this metric (the Hodge star *′, the scalar product ·′, contractions with the use of the metric). We introduce the subspace V. Let us notice that, restricted to V, both scalar products coincide, so V can be regarded as a subspace of both $M^4$ and $M'^4$. For vectors in V we can use both scalar products interchangeably. The Hodge * operation satisfies the stated relation, and the inversion $I \in SO(M'^4)$. Let us introduce the map, where we regard V as a subspace of $M'^4$, and note the stated identity. We can use this isomorphism to transform the action of $SO(M'^4)$ from bivectors onto $V \oplus V$. Let us notice that, as $SO(M'^4)$ preserves the decomposition into self-dual and anti-self-dual forms, the action is diagonal and we can define $\Phi^\pm$.

Proof. The action on $\Lambda^2 M'^4$ preserves the scalar product. It can be translated onto $V \oplus V$ as follows: we need to compute the transformed products, but by (319), after cancellations, this is the original one. So $\Phi^\pm(O)$ preserves the scalar product on V.
Let us notice that the condition for simplicity holds by similar reasoning as before.
Lemma 27. Let us suppose that we have two vector geometries $\{G^+_i\}$ and $\{G^-_i\}$; then $\operatorname{sgn} G^+_i \operatorname{sgn} G^-_i$ is equal for every i.
Proof. It is a trivial statement for $N^{can} = e_0$. Let us consider $N^{can} = e_3$. It is enough to show the stated identity, and then consider $GI^s$ regarded as an element of SO(V).

Proof. In one direction: it follows from the definition of $\Phi^\pm$ and (318) that the stated relation holds, where $s \in \{0, 1\}$.
Correspondence
Let us notice that $SO(1,3)$ geometric boundary data with all $N^{can}_i = N^{can}$ can be regarded as $SO(M'^4)$ geometric boundary data. We will call this the flipped geometric boundary data.

Proof. Let us consider a geometric $SO(M'^4)$ solution. Let us now suppose that we have two vector geometries $G^+_i$ and $G^-_i$. From lemma 27, $s = \operatorname{sgn} G^+_i \operatorname{sgn} G^-_i$ is the same for all i. By gauge transforming $G^-_i$ by G such that $\operatorname{sgn} G = s$, we can obtain the situation in which the signs agree, so the vector geometries were gauge equivalent. The other way around, if the two vector geometries are equivalent, then by a gauge transformation we can assume
Lemma 29. Suppose that we have non-degenerate boundary data satisfying the lengths and orientations matching conditions. There is a 1–1 correspondence between gauge equivalence classes of geometric non-degenerate
Let us notice the following identity for $G \in SO(M'^4)$; two non-equivalent geometric solutions thus satisfy
Orientations
We will now describe the orientations matching condition in terms of self-dual and anti-self-dual forms. From the simplicity of $B^\Delta_{ij}$ and (323), and from the stated identities, we see that
Classification of solutions
In this section we will classify solutions and determine under which circumstances which geometric SO(1, 3) solution can occur. We assume that the boundary data is non-degenerate, that is, that for every i, every 3 out of the 4 vectors v ij are independent. Using the correspondence of the geometric SO(1, 3) solutions to stationary points, this gives, in fact, a classification of the latter.
If we have one $SO(1,3)$ geometric solution with an additional choice of the gauge $G_i \in SO^+(1,3)$, then the representative of the second gauge equivalence class is given as follows, where we added factors such that the $\tilde G_i$ are also in the connected component of the identity.
Proof. The equivalences 1. ⇔ 2. ⇔ 3. follow from lemma 29. 3. ⇒ 5.: by theorem 7, from an ordered pair of non-equivalent vector geometries one obtains one non-degenerate geometric solution. If there existed a third class of vector geometries, then taking all 6 possible ordered pairs one would obtain 6 different geometric solutions. That contradicts 3. 5. ⇒ 4. is trivial. 4. ⇒ 2. is just an application of theorem 7.
Degenerate geometric solutions
In this section we will prove several results about vector geometries under the assumption that the lengths matching condition is satisfied.
Possible signatures
Let us now suppose that the lengths matching condition is satisfied and we have a vector geometry. As every two such faces share a common tetrahedron, we can compute the scalar product between the bivectors without actual knowledge of the reconstruction; for example (as all faces are spacelike) it can be computed in Lorentzian signature. Let us now notice that the vectors $e^\Delta_{3i}$ can be expressed as linear combinations of the $n^\Delta_{kl}$ for $k, l \notin \{3, 4\}$ and $e^\Delta_{34}$. Let us introduce, for any pairwise different $i, j, k, l \in \{0, 1, \dots, 4\}$, the matrix (365); it is degenerate precisely when the matrix $G_{012}$ is. As the choice of indices is arbitrary, we obtain the equivalence.
Lemma 33.
Let us assume that we have non-degenerate geometric boundary data for $N^{can}_i = N^{can}$. There exist at most two solutions (up to an overall rotation from O(V)). If for any pairwise different $i, j, k, l \in \{0, 1, \dots, 4\}$ the matrix from (365) satisfies the degeneracy condition, then there exists at most one solution.
Proof. We will first prove that there are at most two matrices $G_{ij,kl}$ (the indices are pairs $i \neq j$ and $k \neq l$) satisfying the constraints. The matrix $G_{ij,kl}$ should be the matrix of scalar products $w_{ij} \cdot w_{kl}$. If there are at most two such matrices, then there will be at most two sets of vectors $\{w_{ij}\}$ up to rotations from O(V).
The only undetermined entries are those with four different indices. Let us denote $G_{01,23} = \alpha$; then, for all pairwise different ijkl, $G_{ij,kl}$ is a linear combination of α and the $v_{ab} \cdot v_{ac}$ (370).

Proof. Let us assume that i, j, k, l are different and m is the missing index. We know the stated relation, so $G_{ij,kl}$ can be expressed through $G_{im,kl}$. Similarly, we can replace any other index by m. As all permutations are generated by the transpositions (mi), (mk), (ml), (mj), we see that we can express any $G_{ij,kl}$ with different indices by any other one and the known entries.
As the $w_{ij}$ are vectors in V, the determinant of every 4-by-4 minor of $G_{ij,kl}$ needs to vanish. Let us take the determinant of the matrix with entries $G_{0i,0j}$, $G_{0i,23}$, $G_{23,0j}$, $G_{23,23}$, for $i, j \in \{1, 2, 3\}$. The only unknown entry is $\alpha = G_{01,23} = G_{23,01}$, and it appears twice in the matrix. The coefficient of the $\alpha^2$ contribution to the determinant is $-(G^2_{02,03} - G_{02,02}G_{03,03}) \neq 0$, as $v_{02}$ and $v_{03}$ are not parallel. The determinant is thus a quadratic polynomial in α, and there are at most two solutions of det = 0.
If $G_{ij,kl}$ is a matrix of scalar products, then its range is 3-dimensional (non-degeneracy of the boundary data) and its signature is the same as the signature of V. There exist vectors $M^\mu_{ij} \in V$ realizing it.

Proof. If the lengths matching condition is satisfied, the reconstructed 4-simplex is non-degenerate and we have one vector geometry, then we can consider 3 sets of vectors satisfying the assumptions of lemma 33, and $v^{\Delta+}_{ij}$ and $v^{\Delta-}_{ij}$ from equation (348) are not related by a rotation11. From lemma 33 we know that there exists $s \in \{+, -\}$ with the stated property; introducing $r = \det G$, by lemma 30 the orientations matching condition is satisfied, and by theorem 9 there exist two gauge-inequivalent vector geometries. By lemma 34 there exists no other gauge class of vector geometry. If the reconstructed 4-simplex is degenerate, then by lemma 32 $\det G_{012} = 0$, and by lemma 33 there exists at most one solution (up to rotation from O(V)) for the $w_{ij}$ from lemma 33. We know one solution, $v^{\{G\}}_{ij}$, and a second one from the geometric construction, $v^{\Delta+}_{ij}$ from (348); it follows that they differ by a rotation. Thus the orientations matching condition is satisfied, and by lemma 34 there exists no other vector geometry for non-degenerate boundary data. In such a situation all bivectors of the reconstructed 4-simplex would be annihilated by $\Phi^-(G)N^{can}$, which contradicts the non-degeneracy of the 4-simplex.
Classification of solutions
Degenerate and non-degenerate solutions cannot occur at the same time. If the lengths matching condition is satisfied but the orientations matching condition is not, then we cannot have any solution.
When the lengths matching condition is not satisfied, we can have neither a non-degenerate $SO(1,3)$ geometric solution (a reconstructed + − −− simplex satisfies the lengths matching condition) nor two degenerate solutions (a reconstructed + + −− or − − −− simplex needs to satisfy the lengths matching condition). There still might exist a single vector geometry in this case.
By lemma 15 this classification applies to real stationary points of the action.
Phase difference
In this section we will give an interpretation, in terms of Regge geometries, of the difference of the phases between two stationary points. The overall phase can be changed by adjusting the phases of the coherent states. Therefore, in the study of the asymptotic behaviour of the vertex amplitude, the phase difference ∆S is of main interest. It is defined up to 2πi because it appears in the exponent. In Section 9.4 we will show that it agrees with the expected Regge term $\Delta S_\Delta$ up to πi. We will improve this result in Section 9.5 by using a certain deformation argument, and we will show that this agreement holds exactly, i.e. up to 2πi: $\Delta S = \Delta S_\Delta \mod 2\pi i$.
The Regge term ∆S ∆
Regge calculus [21] was devised as an extension of the gravitational action to certain distributional metrics that are flat everywhere except on the faces of the simplicial decomposition. On these faces the geometry is not smooth anymore, and deficit angles appear. Faces are assumed to have flat inner simplicial geometry. The action is a sum of contributions from single simplices, given by

$$S_\Delta = \sum_{i<j} A_{ij}\,\theta_{ij}$$

where $A_{ij}$ is the area of the triangle (ij) and $\theta_{ij}$ is the dihedral angle between the tetrahedra i and j at the triangle (ij). However, apart from the Euclidean case, the definition of the dihedral angle is nontrivial. For example, in the Lorentzian theory one needs to consider separately the cases when the tetrahedra form a thin or a thick wedge at the triangle, respectively [50,12] (see figure 1).
In this paper we limit ourselves to the case where the faces are spacelike, but the normals to the tetrahedra can be timelike or spacelike. We basically follow the definition of the dihedral angles in the relevant cases from [24]12. We argue that this is the right definition because the Schläfli identity holds [24,51],

$$\sum_{i<j} A_{ij}\,\delta\theta_{ij} = 0,$$

where the $\delta\theta_{ij}$ are the variations of the dihedral angles under changes of the shape of the 4-simplex. This identity is crucial for Regge calculus [21].
The dihedral angles are defined as follows:
• Euclidean (− − −−) or split signature (+ + −−), and $N^{can}_i = N^{can}_j$. The dihedral angle is the unique angle $\theta_{ij}$ satisfying the defining relation. This case contains the thick wedge for both normals timelike and the thin wedge for both normals spacelike.
• Lorentzian signature (+ − −−), $N^\Delta_i \cdot N^\Delta_j < 0$ and $N^{can}_i = N^{can}_j$. The dihedral angle $\theta_{ij} < 0$ is the unique angle satisfying the defining relation. This case contains the thick wedge for both normals spacelike and the thin wedge for both normals timelike.
• Lorentzian signature (+ − −−) and $N^{can}_i \neq N^{can}_j$. The dihedral angle $\theta_{ij}$ is the unique angle satisfying the defining relation.

The area of the triangle (ij) is equal to $\frac{1}{2}\rho_{ij}$. By the reconstruction theorem (Theorem 4) we know the relation between the reconstructed bivectors and those of the geometric solution.

Figure 1: Thick and thin wedges.
Since $|B^{\{G\}}_{ij}|^2 = \rho^2_{ij}$, it immediately follows that the Regge action for the geometries reconstructed from the geometric solution $\{G_i\}$ takes the stated form. We define the geometric difference of the phase $\Delta S_\Delta$ accordingly, where r is the Plebański orientation (definition 9).
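The case distinctions can be bundled into a small routine. The sketch below is a simplified reading of the definitions above with hypothetical inputs; in particular, the assignment of cos/cosh/sinh to the three cases is our assumption, not a statement from [24].

```python
import numpy as np

def dihedral(dot, case):
    """Generalized dihedral angle from N_i . N_j (sketch, simplified reading).

    'euclidean'  : compact angle, cos(theta) = dot
    'same_type'  : boost angle, |dot| = cosh(theta)   (Lorentzian, same-type normals)
    'mixed_type' : |dot| = sinh(theta)                (normals of different types)
    """
    if case == 'euclidean':
        return float(np.arccos(np.clip(dot, -1.0, 1.0)))
    if case == 'same_type':
        return float(np.arccosh(abs(dot)))
    if case == 'mixed_type':
        return float(np.arcsinh(abs(dot)))
    raise ValueError(case)

def regge_action(areas, angles):
    """S = sum_{i<j} A_ij * theta_ij over matching flat lists of pairs (ij)."""
    return sum(a * t for a, t in zip(areas, angles))

# hypothetical data for three triangles
angles = [dihedral(0.5, 'euclidean'), dihedral(1.3, 'same_type'),
          dihedral(0.7, 'mixed_type')]
print(regge_action([1.0, 2.0, 0.5], angles))
```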
Revisiting the phase difference
Let us recall that the difference of the phases between two stationary points is given, in terms of their $SL(2,\mathbb{C})$ solutions $\{g_i\}$ and $\{g'_i\}$, by lemma 13, where the generator involved generates a rotation with period 2π. The difference of the contributions to the phase from the link action is $i(\rho_{ij} r_{ij} + 2\ell_{ij}\varphi_{ij})$.
Determination of r ij and φ ij
In this section we will determine r ij and φ ij in a geometric way. The price we pay is an ambiguity of π in the phase that will be fixed by a deformation argument in section 9.5.
Lemma 36.
The contributions $\varphi_{ij}$ and $r_{ij}$ to the difference in phase for two stationary points corresponding to two non-degenerate SO(1,3) solutions satisfy the stated relations, where s = 0 if both canonical normals are of the same type, and s = 1 if they are of different types.
Proof. We can compute the relevant group element, where $r_i \in \{0, 1\}$ are defined by the lifts. We see that $s = r_i + r_j = 0$ modulo 2 if and only if both canonical normals are of the same type.
We will now obtain a similar result for two degenerate SO(1, 3) geometric solutions (vector geometries).
Proof. From the equality (346) we have the stated relation, which proves the equality.
Lemma 38. Suppose that we have a bivector B in
where B ± ∈ so(V ) ≈ Λ 2 V are determined by the equality and we embed bivectors from V into M 4 .
Proof. We can decompose B into its self-dual and anti-self-dual parts, $B = B^+ + B^-$. The group elements generated by them commute. As $B^\pm$ are independent and $B^+$ is determined by B, let us assume that B preserves $N^{can}$; then, by lemma 28, we obtain the stated relation, where we identify the subgroups SO(V) from Minkowski space and from $M'^4$. In this special case the relation simplifies, as $B^\pm$ preserve $N^{can}$. Since $B^+$ and $B^-$ are arbitrary in this case, we have the result for arbitrary B.
We have

Lemma 39. The contributions $\varphi_{ij}$ and $r_{ij}$ to the difference in phase between two stationary points corresponding to two vector geometries $\{G^+_i\}$ and $\{G^-_i\}$ satisfy an identity written in terms of the $SO(M'^4)$ geometric solutions $G_i$ and $G'_i$ for the flipped boundary data. Namely, the quantity appearing there is the bivector from the $SO(M'^4)$ geometric solution.
Proof. From lemmata 37 and 13 we know that (regarding SO(V) as a subgroup of SO(1,3)) the relation holds, as + counts the difference from $G^+$ to $G^-$ and − in the opposite direction.
If $r_{ij} \neq 0$, then the right-hand side would not be in $SO(V) \subset SO(1,3)$. However, every element $G^\pm_i$ belongs to this subgroup; thus $r_{ij} = 0$. Regarding the $B^{\{G^\pm\}}_{ij}$ as bivectors in $M'^4$, we have, with the use of theorem 7 and lemma 38, the stated relation. Comparing the images of the two group elements under $\tilde\Phi$, we obtain their equality up to $I^s$. As $*B^{\{G\}}_{ij}$ is simple, we see that s = 0.
Geometric difference of the phase modulo π
We know that this is the group element that appears in lemmas 36 and 39. Let us recall that $*(N^\Delta_i \wedge N^\Delta_j)$ is spacelike. We have (see appendix F): for $N^{can}_i = N^{can}_j$ (in Lorentzian signature), the rotation generated by either of them has period 2π.
From the geometric reconstruction theorem and the sine law we obtain the stated relations. Let us notice (see (245)) the corresponding identity. By comparing (407), (408) with (388) and (401) we get the following:
• for Lorentzian signature (non-degenerate solutions) and normals of the same type: $2\varphi_{ij} = 0 \mod 2\pi$, $2r_{ij} = 2r\theta_{ij}$ (415);
• for Lorentzian signature (non-degenerate solutions) and normals of different types: $2\varphi_{ij} = \pi \mod 2\pi$, $2r_{ij} = 2r\theta_{ij}$ (416);
• for other signature solutions (degenerate solutions) the analogous relations hold.

We will now consider the contributions of π/2 appearing in the action from (416).
Proof. Let us consider the subgraph of the spin network consisting of those links that have half-integer spins. As this is a graph with all vertices of even valence, there exist Euler cycles, i.e., cycles such that every link of half-integer spin belongs to the cycle exactly once. Let us count the number of changes of the type of normals between consecutive vertices on one of those cycles. As it is a cycle, this number is even. However, as we go through all links exactly once, this is the number of links ij with $N^{can}_i \neq N^{can}_j$, so the sum is over an even number of π/2 terms. Let us denote by [j] the largest integer not exceeding j; then the two stated equalities hold. Summing the two equalities, we obtain the desired result.
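The parity argument can be checked mechanically. The sketch below uses `networkx` to walk an Euler circuit of an Eulerian graph (the complete graph $K_5$, which has all degrees even, standing in for the half-integer-spin subgraph) and verifies that the number of type changes between consecutive nodes is even; the node-type assignment is hypothetical.

```python
import networkx as nx

# complete graph K5: every vertex has even degree 4, hence Eulerian
G = nx.complete_graph(5)
assert nx.is_eulerian(G)

# hypothetical assignment of normal types (0 ~ e_0, 1 ~ e_3) to the nodes
node_type = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1}

changes = sum(node_type[u] != node_type[v]
              for u, v in nx.eulerian_circuit(G))
print(changes, changes % 2 == 0)   # the number of type changes is always even
```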
As we have determined ∆S modulo π, we can skip the π/2 terms coming from (416) by lemma 40 and write $\Delta S = \Delta S_\Delta \mod \pi i$, where
Deformation argument to fix the remaining ambiguity
We only need to determine the remaining ambiguity of π in the action. This, however, depends on the choice of $SL(2,\mathbb{C})$ lifts, and it needs to be done consistently for the whole 4-simplex. Taking into account that the only source of ambiguity are the contributions $\varphi_{ij}$ for half-integer spins, we consider a family such that:
• for all $t \in [0, 1]$, $\{G_i(t)\}$ is an $SO(1,3)$ geometric solution for the $v_{ij}(t)$ boundary data;
• for all $t \neq 1$ the boundary data $v_{ij}(t)$ is non-degenerate;
• for all $t \neq 1$ the solution $\{G_i(t)\}$ is non-degenerate.
Then
Let us introduce lifts g i andg i and g of these elements. There exists s i ∈ {0, 1} such thatg and then (−1) s i +s j = g ig Thus from M ij = 0 it follows that φ ij (1) = (s i + s j )π mod 2π. We have We can consider an Euler cycle in the subgraph consisting of edges with odd spin. For such a cycle i<j : (ij)∈cycle (s i + s j )π = i∈cycle 2s i π = 0 mod 2π.
As the subgraph of half-integer spins has even valent nodes, we can decompose it into Euler cycles thus ∆S 0 = ∆S ∆0 mod 2πi.
Let us deform our boundary data by deforming solution as follows: We choose a spacelike plane described by a simple, normalized bivector V in generic position i.e.
We now contract the directions in *V (perpendicular to the directions in V). At all times $B^{\{G(t)\}}_{ij} \neq 0$, and the solution $\{G_i(t)\}$ obtained by reconstruction from the bivectors is non-degenerate. Let us now consider the limit where we shrink *V to zero; we denote this time as t = 1. Due to (432), the limits $\lim_{t\to1} B^{\{G(t)\}}_{ij}$ exist and are nonzero. The shrinking has a dual action on the geometric normals (their directions in *V expand, but we need to apply normalization). As the geometric normal vectors $N^\Delta_i(t)$ do not lie in the V plane (see (432)), they also have a limit, and it is equal to their normalized components lying in the plane *V. By a suitable definition of $v_{ij}(t)$ we can assume that the limits $G_i(1) = \lim_{t\to1} G^\Delta_i(t)$ exist. The 4-simplex is now highly degenerate, contained in a 2d plane. All bivectors are proportional to V.
So, by lemma 41, we have for any non-degenerate boundary data
The case of other signatures
The difference ∆S is well-defined once we fix between which two degenerate stationary points we compute the difference of the phase.
such that
• for all $t \in [0, 1]$ the boundary data $v_{ij}(t)$ is non-degenerate.
Then

Proof. The function takes values in {0, π} and changes continuously if we compute differences between two stationary points determined by the vector geometries $\{G^\pm_i(t)\}$ obtained from an $SO(M'^4)$ geometric solution $\{G_i(t)\}$; thus it is constant. For simplicity, let us work in a suitable gauge. We have, for the lifts, $g^+_i(1) = (-1)^{s_i} g^-_i(1)$. Similar considerations using Euler cycles as for the Lorentzian signature show the analogous statement. Also, an argument with an Euler cycle applied to the $\theta_{ij}(1)$ shows the remaining equality. Thus f(1) = 0. Let us deform the boundary data as follows: we choose N of the same type as $N^{can}$, in generic position, i.e., such that the genericity conditions hold. We contract in the direction of N in $M'^4$. During the contraction we have a continuous path of non-degenerate $SO(M'^4)$ geometric solutions (with non-degenerate boundary data). At the end we obtain a degenerate 4-simplex for non-degenerate boundary data. By lemma 42 we have
Summary
Let us now choose all outer-pointing normals $N^\Delta_i$. We obtained the following formula for the difference of the phase between two stationary points, where r is the Plebański orientation (definition 9) and the $\theta_{ij}$ are generalized dihedral angles (see 9.4), to be reconstructed as follows:
Computation of the Hessian
In this chapter we will finish our analysis by computing the scaling property of the measure factor in the formal application of the stationary phase approximation. We assume that, after taking into account the gauge transformations, the remaining Hessian is non-degenerate. The contribution to the stationary phase approximation of the amplitude A from the Hessian is the usual determinant factor. The integrand depends on the following variables:
• $z_{ij}$ for $i \neq j$, giving 80 real variables;
• $g_i \in SL(2,\mathbb{C})$, giving 30 real variables.
These variables are, however, subject to gauge transformations, which reduces the effective number of nontrivial variables:
• $z_{ij} \to \lambda_{ij} z_{ij}$, giving 40 real gauge parameters,
• the $SL(2,\mathbb{C})$ gauge, giving 6 real gauge parameters.
This gives 64 nontrivial variables and hence the scaling $\Lambda^{-32}$. Together with the scaling $\Lambda^{20}$ (see (51)) of $c(\Lambda)$, this gives the scaling $\Lambda^{-12}$ of the measure factor.
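The power counting behind this can be spelled out as follows; the display restates the standard stationary-phase estimate, with the determinant factor kept schematic:
\[
\int \mathrm{d}^{n}x \; a(x)\, e^{\Lambda S(x)} \;\sim\; \left(\frac{2\pi}{\Lambda}\right)^{n/2} \frac{a(x_0)\, e^{\Lambda S(x_0)}}{\sqrt{\det(-H)}} ,
\qquad n = 80 + 30 - 40 - 6 = 64 ,
\]
so the Hessian contributes $\Lambda^{-64/2} = \Lambda^{-32}$, and
\[
\Lambda^{-32} \cdot \Lambda^{20} = \Lambda^{-12} .
\]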
Conclusions and conjectures
We performed in this paper the complete stationary phase analysis in the extended EPRL setting. Our result rests on several preconditions, which should eventually be addressed: • Finiteness of the amplitude: It is not known whether the evaluation of a spin network in the extended EPRL model is, in fact, finite. This was proven only for the standard EPRL setting [5,41]. The condition is necessary for any statement about the amplitude to make sense.
• Boundary contributions: It is an open question whether the phase oscillates sufficiently fast at the boundary of integration so as not to give contributions to the asymptotic behaviour. This has not been addressed in any asymptotic analysis of the Lorentzian spin foam models so far. It is trivially satisfied for the Euclidean models, as there the integration has no boundary.
• Non-degeneracy of the Hessian: This was checked for specific configurations in the standard EPRL model [12]. It is known that it fails in the Barrett-Crane model for certain non-generic but geometric data [29]. We expect that the Hessian in the EPRL model is always nondegenerate if the reconstructed 4-simplex is non-degenerate. If nondegeneracy fails, the scaling behaviour is different.
Also, there are several points which merit further analysis: • Extension of our results to the case of surfaces of mixed signature. A spinfoam model for surfaces of this kind was introduced in [11] based on coherent state techniques. There is no known EPRL-like construction. We suspect that there might be some nongeometric contributions to the asymptotic formula in this case.
• $N^{\pm}$ are complex numbers and thus contribute to the phase. They are an important part of the asymptotics and can be described in geometric terms if the boundary state is geometric [22,23,29]. The spread of the coherent states has no obvious geometric meaning, but it influences the measure factors. In the Euclidean EPRL model these two contributions are equal; thus their phase can be cancelled by a proper choice of the phase of the coherent states. It is tempting to conjecture that a similar statement is true for the Lorentzian models. However, nothing of this kind has been proven even in the standard EPRL setup.
• A determination of the absolute value of $N^{\pm}$ is interesting for the following reason. In order to evaluate the spin foam amplitudes of more complicated triangulations, one needs to take a product over many vertex amplitudes. In the semiclassical limit one can hope to replace the vertex amplitudes in the product by their asymptotic forms.
In such a situation, the phase is exactly the Regge action [21] of discrete gravity (with some problems regarding orientations [52]), and the measure of the path integral is obtained from $N^{\pm}$. However, it seems that in the case of current spinfoam models the amplitude of the whole foam cannot be obtained by this approximation [14,15] (but see also [16] for possible resolutions).
• Another open problem is the extension of our result to the case of non-vanishing cosmological constant. The asymptotic analysis of a corresponding version of the EPRL model was given in [53,54]. However, moving away from Euclidean signature even the formulation of the model becomes very formal. There is no satisfactory proposal for the intertwiners representing timelike tetrahedra for the situation with cosmological constant (see however [55,56] for possible, alternative rigorous deformations).
A. Notation
• $\omega$: symplectic form for spinors,
• $N$: normal vectors; $N^{\Delta}_i$ the outer-pointing normal vector to the $i$-th tetrahedron; $N^{\text{can}}$ the canonical normal vector, either $e_0 = (1,0,0,0)$ or $e_3 = (0,0,0,1)$; $N^{\{G\}}_i$ the normal vector to the $i$-th tetrahedron obtained from a geometric solution,
• $v$, $l$: vectors ($l$ a null vector); $v_{ij}$ boundary data vectors,
• $\cdot$: scalar product,
• $R_N$: reflection with respect to the normal $N$,
• $I$: inversion,
• $g \in SL(2,\mathbb{C})$: group elements,
• $B_{ij}$: bivectors; $B^{\Delta}_{ij}$ and $B^{\{G\}}_{ij}$ their geometric counterparts,
• $V$: the space perpendicular to $N^{\text{can}}$ in Minkowski space (also embeddable in $M^4$),
• $\Phi^{\pm}$: maps from bivectors in $M^4$ into $V$ (self-dual and anti-self-dual forms), and the corresponding maps from $SO(M^4)$ into $SO(V)$,
• Hodge star $*$,
• $*$, $\cdot$, and $\lrcorner$ denote the Hodge star, the scalar product, and the metric contraction; one set of (decorated) symbols is used when working explicitly in the flipped spacetime, and another when working in a spacetime of arbitrary signature,
• representation labels $(j, \rho)$ for the $SL(2,\mathbb{C})$ group; $j_{ij}$ and $\rho_{ij}$ the representation labels for the edge connecting tetrahedron $i$ with $j$,
• $\lambda$: a number,
• $\mathbb{C}^*$: invertible complex numbers; $\mathbb{R}^*$: invertible real numbers,
• $\Lambda$: integer scaling of spins,
• $\theta_{ij}$: dihedral angle between tetrahedra $i$ and $j$,
• $A_{ij}$: area of the face between tetrahedra $i$ and $j$,
• $S$: action; $S_{ij}$, $S^{\beta}_{ij}$ parts of the action.
B. Conventions
We use an abstract definition of the Grassmann algebra $\bigwedge X$ via its universal properties. For $b \in X^*$ we define the left ($\lrcorner$) and right ($\llcorner$) antiderivations of order $-1$,
\[
b \lrcorner w , \qquad w \llcorner b , \qquad w \in \textstyle\bigwedge X \qquad (451)
\]
by their action on $X \subset \bigwedge X$. Suppose that we have a metric $g_{\mu\nu}$. The norm of $k$-vectors is given by the Gram determinant of pairwise scalar products (see the display below). The Hodge dual is defined by
\[
*(a_1 \wedge \dots \wedge a_k) = a_k \lrcorner \big( a_{k-1} \lrcorner \cdots (a_1 \lrcorner\, \Omega) \big) ,
\]
where $\Omega$ is the normalized volume $n$-vector (a choice of orientation) and $\lrcorner$ is the contraction with the vector dualized by the scalar product.
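For concreteness, the Gram-determinant formula referred to above reads (a standard identity, stated here in our own display):
\[
|a_1 \wedge \dots \wedge a_k|^2 \;=\; \det\big( a_i \cdot a_j \big)_{i,j=1,\dots,k} ,
\qquad \text{e.g.} \qquad
|u \wedge v|^2 = (u \cdot u)(v \cdot v) - (u \cdot v)^2 .
\]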
C. Restriction of representations of SL(2, C)
In this appendix we collect the results of [37] and [10]. In this case, following [7], we consider the embedding of the spin-$j$ representation. We can realize this representation as functions on $SU(2)$ given by matrix elements of the representation in the $L_z$ eigenbasis. The embedding of the functions $\Psi_m(u) = \sqrt{2j+1}\, D^{j}_m(u)$, $u \in SU(2)$, into the representation space of the unitary irreducible representation $(j, \rho = 2\gamma j)$ is given in terms of $N^{\text{can}} = e_0$ and
\[
u(z) = \frac{1}{\sqrt{\langle z|z\rangle_{N^{\text{can}}}}} \begin{pmatrix} z_0 & -\bar{z}_1 \\ z_1 & \bar{z}_0 \end{pmatrix} , \qquad z = \begin{pmatrix} z_0 \\ z_1 \end{pmatrix} ,
\]
with $\langle u|v\rangle_{N^{\text{can}}} = \bar{u}_0 v_0 + \bar{u}_1 v_1$.
Using the explicit expression for the representation matrices of the $SU(2)$ group we can write $F$ explicitly, and we can consider extremal eigenfunctions. These are the basic coherent states [38]. Their usefulness for asymptotic analysis comes from their simple form (see [37,39]): for $N^{\text{can}} = e_0$, the basic coherent state $F(z)$ is expressed through the prefactor $\sqrt{(2j+1)/2\pi}$, powers of $\langle z|z\rangle$, and the reference spinors $n_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $n_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.
All other coherent states are obtained from these basic coherent states by the group action of $St(N^{\text{can}})$. Let us notice that the group action preserves $\langle\cdot|\cdot\rangle_{N^{\text{can}}}$, and we can move group elements from $z$ into the $n_i$, obtaining
\[
\Psi_n(z) = F(u^T z), \quad n = (u^T)^{-1} n_0, \qquad
\Psi_{+,n_+}(z) = F_+(u^T z), \quad n_+ = (u^T)^{-1} n_0, \qquad
\Psi_{-,n_-}(z) = F_-(u^T z), \quad n_- = (u^T)^{-1} n_1 .
\]
We have the following classification:
• every spinor $n_+$ such that $\langle n_+, n_+\rangle_{N^{\text{can}}} = 1$ with $N^{\text{can}} = e_3$ is obtained as $n_+ = (u^T)^{-1} n_0$ for $u \in SU(1,1)$;
• every spinor $n_-$ such that $\langle n_-, n_-\rangle_{N^{\text{can}}} = -1$ with $N^{\text{can}} = e_3$ is obtained as $n_- = (u^T)^{-1} n_1$ for $u \in SU(1,1)$;
• every spinor $n$ such that $\langle n, n\rangle_{N^{\text{can}}} = 1$ with $N^{\text{can}} = e_0$ is obtained as $n = (u^T)^{-1} n_0$ for $u \in SU(2)$.
This allows us to write the coherent states with $N^{\text{can}} = e_3$ and arbitrary $n_+$, $\langle n_+, n_+\rangle_{N^{\text{can}}} = 1$, and, for the $SU(2)$ embedding, $\Psi_n(z)$ with the same prefactor $\sqrt{(2j+1)/2\pi}$, with $N^{\text{can}} = e_0$ and arbitrary $n$, $\langle n, n\rangle_{N^{\text{can}}} = 1$.
D. Traceless matrices
In this section we describe the relation between traceless matrices in two complex dimensions and spinors. Let us assume that $\delta g$ is traceless; then it can be decomposed through its eigenvectors, so the corresponding bilinear expression in spinors $u$ and $v$ holds, where we transposed the whole formula in the middle equality.
Proof. We apply the left-hand side to $v$ and $u$, with $v$ being an $i\lambda$-eigenvector and $u$ a $(-i\lambda)$-eigenvector, $\lambda \in \mathbb{C}$.
The condition on $M$ means that $M^{\dagger}$ has the same eigenvalues as $-M$ (namely $\pm i\lambda$), so either $\lambda \in \mathbb{R}$ or $\lambda \in i\mathbb{R}$.
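To spell out this step: the spectrum of $M^{\dagger}$ is the complex conjugate of the spectrum of $M$, so
\[
\{\, -i\bar{\lambda},\; i\bar{\lambda} \,\} = \operatorname{spec}(M^{\dagger}) = \operatorname{spec}(-M) = \{\, -i\lambda,\; i\lambda \,\}
\;\Longrightarrow\; \bar{\lambda} = \pm\lambda ,
\]
i.e., $\lambda \in \mathbb{R}$ or $\lambda \in i\mathbb{R}$.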
We will call M spacelike if its eigenvalues are purely imaginary (see section 5.2).
Lemma 48.
Suppose that $M$ is a traceless matrix such that $M^T$ belongs to the Lie algebra of the group preserving the normal $N$ and is spacelike. Then there exist $\lambda > 0$ and a spinor $n$ with $\langle n, n\rangle_N = s \in \{-1, 1\}$ such that the stated decomposition holds.
Proof. We can always write $M$ through its eigenvectors, with $v$ being an $i\lambda$-eigenvector and $u$ a $(-i\lambda)$-eigenvector, $\lambda \geq 0$.
The condition on $M$ involves $t = \det \eta_N = N \cdot N$, via the identity $t (\eta_N^{-1})^T = \omega \eta_N (-\omega)$. It can be rewritten in a form in which we introduce
\[
\tilde{u} = \omega \eta_N \bar{u} , \qquad \tilde{v} = t\, \omega \eta_N \bar{v} \qquad (500).
\]
Normalizing $v$ we can then introduce a spinor $n$; the spinor $n$ is unique up to a phase.
E. Generalized sine law
Taking the scalar product of (242) with itself we obtain (for an extension see [57])
\[
(\mathrm{Vol}_{\Delta})^2\, |B^{\Delta}_{ij}|^2 = W_{\Delta} .
\]
Proof. We can restrict ourselves to the two-dimensional plane spanned by $N^{\Delta}_i$ and $N^{\Delta}_j$. The connected component of the group of Lorentz transformations of this plane is generated by $N^{\Delta}_i \wedge N^{\Delta}_j$. Moreover, for two elements $G$ and $G'$ from the connected component the corresponding composition relation holds. Let us notice that the relevant element is in the connected component and acts in the stated way on any vector $v$ in the plane.
Use of the adult attachment projective picture system in psychodynamic psychotherapy with a severely traumatized patient
The following case study is presented to facilitate an understanding of how the attachment information evident from Adult Attachment Projective Picture System (AAP) assessment can be integrated into a psychodynamic perspective when making attachment-informed therapeutic recommendations. The AAP is a valid representational measure of internal representations of attachment based on the analysis of a set of free-response picture stimuli designed to systematically activate the attachment system (George and West, 2012). The AAP provides a fruitful diagnostic tool for psychodynamically oriented clinicians to identify attachment-based deficits and resources for an individual patient in therapy. This paper considers the use of the AAP with a traumatized patient in an inpatient setting and uses a case study to illustrate the components of the AAP that are particularly relevant to a psychodynamic conceptualization. The paper also discusses attachment-based recommendations for intervention.
INTRODUCTION
PRESENTING SYMPTOMS
Gloria, a middle-aged patient, appeared restless and somewhat distrustful initially during her diagnostic clinical interview; she gained trust during the interview because the therapist's open and direct approach seemed to defuse her fear of discussing trauma therapy. She reported during this interview that 5 years earlier she had had a gruesome experience with a therapist who had suggested, immediately at their first meeting, that she begin trauma therapy for her rape experience. Gloria was terrified and quit therapy.
When the second author first saw Gloria for the attachment assessment, her symptoms indicated a strong dissociative avoidance disorder, which also included headaches, fainting in stressful situations, and memory loss. She had been in a car accident about 1 year earlier and, as a result, felt for the first time in her life that treatment would be appropriate; she contacted a psychosomatic hospital. Gloria was diagnosed with post-traumatic stress disorder (PTSD; DSM-IV) with high dissociative states (amnesia) and a pain disorder.
HISTORY OF RELATIONSHIP TRAUMA AND COURSE OF ILLNESS
Gloria lived in an intact family with her parents and three younger siblings until her parents divorced when she was 5, but she provided no details about her childhood before this time and would not speak at all about her biological father. Gloria and her siblings lived predominantly with their mother after the divorce. Her mother remarried 5 years later, when Gloria was 10 years old, and Gloria viewed her stepfather as her "actual" father. She described him as humorous and as someone she loved and trusted, but she also described him as impulsive, irascible, and argumentative. Gloria seemed insecure about her stepfather's acceptance, wondering how far she could push him before he would break. Would he accept her even if she acted like a wild child? Gloria stated that "once a week I pushed him until he burst," and she told how she tested him with "mischievousness" so as to push her stepfather into beating her. Gloria's deliberate misbehavior and her stepfather's beatings were central to their relationship.
Gloria's first major traumatic experience was a sadistic rape in late adolescence. The only details that she provided about her rape were that it occurred during the daytime and that she did not know the rapist. About 3 weeks after the rape, she began to have sudden headache and fainting attacks, fainting as many as three times a day. She also developed chronic dissociation experiences. Gloria's symptoms appeared to be associated with feelings of being exposed and with school- or performance-related pressure. Although these problems persisted, she did not seek psychological treatment. Her symptoms, especially the fainting, diminished when she studied abroad. Her symptoms reoccurred after she returned home 2 years later, however, and she decided to go back abroad.
Gloria's second major traumatic experience came when she was 30 years old and her boyfriend of 2 years died in an accident. Gloria had separated from him shortly before his death because she was no longer able to tolerate physical closeness. She felt severely guilty about his death, and her guilt had masochistic qualities. As a result, she did not have another intimate relationship for many years.
Gloria had recently experienced a third trauma prior to her decision to seek treatment. She had been in a serious accident in which she had been thrown out of her car and into the air, rendering her unconscious. She was at first thought to be dead. Her physical injuries included damage to three spinal discs and a strained cervical spine, and her fainting episodes increased to many a day. In her initial interview Gloria reported, almost proudly, "I survived this," but she had been unable to work since.
Gloria felt that her symptoms had become debilitating, and she noticed that her fainting spells seemed related to stress. Her headaches had become so severe that she risked becoming unconscious. She was not able to recall what preceded the headaches and she could not remember any indicators associated with their onset, such as less debilitating headaches or other physical warning signs. Gloria described herself as being on autopilot. This "defensive mechanism" had saved her life more than 20 years ago, but now this automatic mechanism was out of her control.
Gloria had not allowed herself to think about this until she entered treatment, and her treatment goal was "to get rid of it." She had a rigid commitment to being strong and carrying on: "I want to function. I will get through this. I want to be able to work. I have worked for many years to wipe out the traumatic event, to get rid of it, to repress it." This perspective had dominated her life and kept her moving forward. She was frightened of not being able to control her symptoms and of the prospect of becoming dependent on the pain medication prescribed to combat her severe headaches.
BACKGROUND
ATTACHMENT AND PSYCHOANALYSIS
Bowlby (1969) was a prominent psychoanalyst who used ethological concepts to describe the infant's biologically predisposed attachment to a primary caregiver. He viewed relatedness in early childhood as a primary and independent developmental goal that is not subservient to physiological needs (e.g., hunger) or psychoanalytically defined primary processes. The infant is perceived from an interactional perspective, with a focus on the relationships with primary attachment figures. Attachment theory maintained some foundations of psychoanalytic theory (e.g., the developmental point of view) and developed some aspects further, particularly the delineation of the internal world (Diamond and Blatt, 1999); there is a strong literature that discusses the divergences and convergences of psychoanalysis and attachment theory (e.g., Diamond and Blatt, 1999; Fonagy, 1999, 2001; Slade, 2000; Gullestad, 2001; Steele and Steele, 2008; Eagle, 2013). Fonagy's (1999, 2001) overview of the intersection of these two approaches demonstrated that the relationship between attachment theory and psychoanalysis is more complex than adherents of either community have generally recognized. This paper addresses some of these complexities by integrating attachment assessment using the Adult Attachment Projective Picture System (AAP) into psychodynamic psychotherapy with an adult traumatized patient. George and Solomon (1999) proposed that one major difference between psychoanalysis and attachment theory lies in the description of forms of defensive processes. Traditional psychoanalytic models provide a complex constellation of defenses to interpret a broad range of intrapsychic phenomena, including phantasy, dream, wish, and impulse (e.g., Horowitz, 1988; Kernberg, 1994). Attachment theory delineates two basic processes that manifest in three forms. According to George and Solomon (1999, 2008), Bowlby defined defense as forms of exclusion directed at modulating difficult and anxious experiences with attachment figures and the child's experiences with incomplete or failed bids for parental protection, care, and comfort. He defined defenses in terms of two qualitatively distinct processes: deactivation (retaining elements of intellectualization and denial) and cognitive disconnection (retaining elements of splitting). George and Solomon (1999, 2008) pointed out that under normal circumstances these two exclusion processes are associated with goals to maintain physical and psychological proximity in the attachment-caregiving relationship under conditions when the child's experiences with the attachment figure are less than satisfying. George and Solomon refined Bowlby's (1980) model, suggesting that deactivation and cognitive disconnection organize and support at least minimal forms of representational, behavioral, and emotional regulation.
Bowlby (1980) proposed that these forms of defensive exclusion function to segregate (akin to repression) memory, affect, and experience when the attachment figure is not available, conceiving of an extreme process he termed "segregated systems." Segregated systems were thought to be associated with painful and chronic distress experiences, such as those that accompany loss. Bowlby posited that segregated systems were the intrapsychic root of symptoms related to pathological mourning and severe psychopathology. Attachment theorists have since demonstrated that segregated systems are associated with experiences of failed protection, attachment trauma, and disorganized/dysregulated attachment behavior and representation (Solomon and George, 2000, 2011; George and West, 2012).
Consistent with a psychoanalytic approach, some attachment theorists have suggested that utilization of defensive process models is needed to provide a complete picture of the emotional and behavioral regulation processes individuals develop from their childhood relationships with attachment figures (Cassidy and Kobak, 1988; George and Solomon, 1999; George and West, 2012). Further, these authors concluded, "In order to understand the relationship between adult attachment and mental health risk we need to examine the attachment concepts of defense and segregated systems, the mental processes that define disorganization" (p. 295). These theorists operationally defined Bowlby's (1980) basic defense scheme as a central element for evaluating representational patterns of attachment using the Adult Attachment Projective Picture System (George et al., 1997; George and West, 2012). Because these representational structures develop under conditions of attachment trauma (abuse, loss, failed protection), the concept of segregated systems is fruitful for explaining some forms of relationship-based psychopathology in adults (George and West, 2012).
The discussion that follows provides some ideas about using attachment concepts in clinical work by showing how the perspectives of a psychoanalyst and attachment assessment may improve the understanding of an individual case of a traumatized patient with a diagnosis of PTSD with dissociative states (e.g., fainting in response to stressful situations).
POSTTRAUMATIC STRESS DISORDER
The lifetime prevalence of PTSD in Germany has been found to be 1.3%, with a female-to-male ratio of 3.25:1. Traumatized patients are frequently misdiagnosed and mistreated in the mental health system. The number and complexity of the symptoms lead to fragmented and incomplete treatment. PTSD patients are vulnerable to becoming re-victimized by caregivers because of their difficulties with close relationships. Severely traumatized PTSD patients (complex trauma) develop difficulties in modulating arousal and show signs of severe affect dysregulation (e.g., aggression against self and others, problems with social attachment, and dissociative states).
Dissociation, defined as a deficit of the integrative functions of memory, consciousness, and identity, is often related to traumatic experiences and traumatic memories (Liotti, 2004; Kihlstrom, 2005). During clinical interviews, dissociation is suggested either by a degree of unwitting absorption in mental states such that ordinary attention to the outside environment is seriously hampered, or by a sudden lack of continuity in discourse, thought, or behavior of which the person is unaware (supposedly due to the intrusion of dissociated mental contents into the flow of consciousness). Thus, for instance, a dissociative patient may suddenly interrupt her speech during a therapeutic session, stare into the void for minutes, and become unresponsive to the therapist's queries as to what is happening to her. Or a patient suffering from PTSD may suddenly utter fragmented and incoherent comments on intrusive mental images (usually related to traumatic memories) that surface in consciousness and hamper the continuity of the preceding dialog with the therapist. In the most extreme variety of dissociation (Dissociative Identity Disorder), an alternate ego state may appear during the clinical dialog, reporting (sometimes with an unusual tone of voice, e.g., like a child) memories of childhood abuse of which the patient has previously been totally unaware, or expressing attitudes and beliefs quite extraneous to the patient's personality (Liotti, 2004).
Furthermore, shattered meaning propositions predominate: loss of trust, hope, and a sense of agency is accompanied by social avoidance, with loss of meaningful attachment and therefore a lack of participation in preparing for the future (van der Kolk et al., 1996).
DISORGANIZED-DYSREGULATED ATTACHMENT AND DISSOCIATIVE SYMPTOMS
The founding premise of attachment theory is that stress, especially traumatic stress, produces a strong desire for proximity to and comfort by attachment figures; this desire is built into human biology as a survival safety mechanism, and the mechanism functions unchanged throughout the life span (Bowlby, 1969). Attachment experience shapes the ways in which individuals manage stress and is especially important when individuals experience a traumatic event (Bowlby, 1973, 1980; Schore, 2001). When attachment is secure, individuals know how and when to seek attachment figures and develop internal representations of self as deserving of care. Attachment security fosters confidence and trust that attachment figures are available, empathic, and sensitive to one's needs; security is a buffer or resilience factor that supports recovery from trauma (Bowlby, 1980; Schore, 2001).
When attachment is insecure, emotional and behavioral reactions when distressed may be made even more painful by unconscious evaluations that wishes for comfort are illegitimate. Insecurity may result in additional painful interactions with the attachment figures rather than the functional comfort and protection for which attachment was intended (Bowlby, 1980). Insecurity fosters anxiety, anger, and fear, and increases the risk of developing trauma-related emotional disorders (Bowlby, 1980; Adam et al., 1995; Dozier et al., 2008; George and West, 2012). Extreme forms of insecurity, associated with the breakdown of attachment and caregiving regulatory mechanisms, risk emotional and homeostatic dysregulation, often termed disorganized attachment (Solomon and George, 2011). The risk of dysregulation is heightened when attachment relationships are threatened or threatening, as in parental loss, psychiatric debilitation, or maltreatment (Lyons-Ruth and Jacobvitz, 2008). George and West (2012) defined events such as these as attachment traumas: events that involve terrifying threats to the integrity of self or attachment relationships. Attachment disorganization, conceived in terms of mechanisms of dysregulation and attachment trauma, has been shown to predict vulnerability to severe psychiatric symptomology, including dissociative symptoms (Lyons-Ruth and Jacobvitz, 2008; Weinfield et al., 2008; Solomon and George, 2011). Liotti (2004) found the metaphor of a "drama triangle" useful in thinking about the intersection between dissociation and disorganized attachment. The dissociation triangle addresses how disorganized attachment fosters dissociative mechanisms that create incompatible and separate representations of self as victim, rescuer, and persecutor. The child's attachment figure is represented in a conflicting, manifold way. On the one side, the attachment figure is represented as the source of the child's fear, with the self as the victim of the attachment figure as persecutor. On the other side, the attachment figure, by virtue of being the child's biological protector, is viewed as the child's source of safety and protection (rescuer). In the child's mind, representations of self and attachment figure shift among these three incompatible models, which are too complex to be synthesized into an integrated model of self. Liotti's model provides us with an integrated psychodynamic and attachment approach to our first questions concerning Gloria's illness, questions regarding the childhood origins of her episodes of near unconsciousness and her inability to ask for help following traumatic assault. Fearon and Mansell (2001) examined cognitive perspectives on unresolved attachment in patients diagnosed with PTSD. They proposed that unresolved loss, as defined in attachment assessment during interview, involves intrusion and avoidance phenomena similar to those of PTSD. Specifically, they developed a model based on unresolved loss that involves the failure to integrate representations of self and the world following a loss. The features of unresolved loss can be understood as emerging as a result of the activation of unintegrated representations of the loss experience and cognitive and behavioral avoidance processes. In this model, the sudden intrusion of memories, cognitions, and emotions associated with the loss automatically captures attention and initiates behavioral dispositions that are incompatible.
With regard to attachment, the authors suggested that this was the mechanism that interfered with caregiving behavior. Lack of attentional resources and incompatible response tendencies can also result from safety behaviors directed at avoiding the perceived negative consequences of activating trauma memory. The authors proposed that these processes offer a novel way of understanding the disturbances in behavior and speech that are evident in mothers who are designated as unresolved with respect to loss.
This suggests that representational attachment measures, like the AAP, can provide a good understanding of the movement that the client might be making toward empowerment, integration, or understanding. Thus, even if a patient's overall attachment is unresolved (i.e., dysregulated), there may be indications in their responses to the AAP stimuli that suggest they are moving toward mental organization. Given the negative outcomes that are associated with abuse, focusing on resources and defensive strategies is arguably important for therapeutic recommendations.
DISCUSSION
Looking at this case from both a psychodynamic and attachment point of view, we ask a complex set of questions seeking to understand how childhood attachment experience and trauma were related to the patient's symptoms and her refusal to seek treatment. Were Gloria's experiences with childhood attachment figures traumatic and how might early experience block her from seeking help? Was Gloria's chronic denial of emotional pain related to her rape? Why were somatic symptoms -severe headaches and fainting -her only way to express pain? And finally, why had Gloria retreated with her ailments and refused to address her trauma?
The discrepancy in Gloria's descriptions is noteworthy. She idealized her stepfather by saying how much she loved him, while simultaneously describing his repetitive harsh spankings. From a psychodynamic viewpoint, these two object-representations are split and unintegrated. We speculated, therefore, that her stepfather's beatings were the answer she was looking for to confirm her experience as being recognized and loved.
The attachment perspective contributes added depth to this observation. The juxtaposition of love with dysregulating, painful parental rage is the foundation of segregated systems (i.e., repressed and dissociated experience), defined as unconscious representational processes that become obstacles to grieving trauma and foster psychosomatic symptoms (Bowlby, 1980; George and West, 2012). According to attachment theory, attachment figure proximity, even proximity involving pain, is a better attachment solution than feeling isolated or abandoned (Bowlby, 1973; George and West, 2012). This thinking then permits us to better understand Gloria's representation of self in terms of the dissociation triangle described earlier. The quality of this daughter-stepfather relationship would be described as punitive-controlling attachment, a dysregulated form of attachment in which children seek to conquer feelings of abandonment and helplessness through parent-child combat (George and Solomon, 2008). The child in a punitive attachment relationship reciprocally plays out the roles of persecutor and victim.
Gloria's description of her mother is limited. She described her mother as distant and busy with her family and job. Her mother would "let things go" but also punished her. Unlike her stepfather with his beatings, her mother used the "silent treatment," interpreted as withdrawal of maternal love and engagement. Gloria recalled, "She simply did not talk with me for many days." Once again, attachment theory provides additional depth to Gloria's experience with her mother. This form of parental withdrawal has been shown to be a reaction to parental feelings of being out of control and helpless; the child's experience is of a parent who inexplicably becomes psychologically invisible and vulnerable (George and Solomon, 2008). Faced with psychological abandonment, the silent treatment fosters a relationship in which the child must be very careful not to instigate the parent's withdrawal. Gloria would have known quickly that the misbehavior-rage script she played with her stepfather was a dangerous script with her mother. Children in these situations develop precocious sensitivity and caregiving skills toward the parent; feelings of helpless abandonment are regulated by role reversal (George and Solomon, 2008). The child becomes a skilled caregiver, hypervigilant and seeking to rescue the parent from her pain. Gloria likely assumed the role of rescuer with her mother, completing the disorganized attachment-dissociation triangle.
In describing her life story, Gloria remembered how she enjoyed her parents' busy work schedule. Because they were busy with their own lives, she said, it provided her with a sense of freedom and independence: "If they are not caring, then at least I can do what I want." Her independence often fostered dangerous or senseless recklessness. Rebelling against and refusing to accept her parents' rules, she described herself as wild and out of control. She did what she wanted, including dangerous things that she now thought were stupid (e.g., climbing up the chimney, jumping onto the train tracks). When her parents sent her to her room and forbade her to go out, she climbed out of the window. She endured her punishment bravely, and punishment did not deter her from doing these things again. According to attachment theory, Gloria's dangerous and defiant behavior in adolescence is viewed as attachment behavior (Allen, 2008); and indeed, even though her parents' response was punishment, Gloria achieved the connection to attachment figures she craved.
In her quest to be strong and rid herself of her symptoms, Gloria did not understand that her fainting was a defense mechanism. Fainting meant survival, and it was very likely that her trauma and fainting attacks were intertwined. The intersection of psychoanalysis and attachment theory here is survival. From an attachment perspective, confronting her pain would threaten her fundamental ability to survive. From a psychoanalytic perspective, confronting these symptoms could mean her inner death, and this threat to survival may explain her resistance to the former therapist who had suggested trauma therapy.
ADULT ATTACHMENT PROJECTIVE PICTURE SYSTEM: GLORIA'S ADULT REPRESENTATION OF ATTACHMENT
The AAP was administered after Gloria had been in inpatient treatment for 2 weeks. Her adult attachment classification was judged to be pathological mourning for unresolved trauma. Pathological mourning is a state of chronic mourning that endures for years because trauma-produced segregated systems block completion of the mourning. Mourning in attachment theory is defined by conscious awareness and re-organization of memories and feelings related to trauma, which leads to a representation of self that integrates current reality with the past (Bowlby, 1980; George and West, 2012). Gloria's AAP responses demonstrate the prominent role of attachment trauma in her representation of self and attachment relationships. The unresolved designation associated with her classification group signifies that she is not able to maintain regulatory processes to manage pain and fear. In Gloria's case, we see her attempt to keep attachment trauma walled off, or segregated, fortified by defensive deactivation. When working well, this form of defensive exclusion neutralizes distress and pain and shifts attention away from them (George and West, 2012). The wall breaks through, though, and Gloria succumbs to dysregulation.
The AAP assesses attachment by asking individuals to respond to two kinds of attachment situations. One situation portrays individuals alone. The alone stimuli provide evidence of the capacity to develop strategies to cope with stress in the absence of any visual cues for potential solutions, including no visible portrayal of potential attachment figures (George and West, 2012). Alone responses are evaluated for potential agency of self (capacity for internal integration or constructive action) and connectedness to others, especially attachment figures. The other situation portrays individuals in potential attachment-caregiving dyads. These scenes are evaluated for synchrony, that is, evidence of reciprocity, sensitivity, and mutuality in the relationship. We first discuss Gloria's representation of self in response to the alone AAP stimuli and then describe representations associated with the AAP dyadic stimuli.
It was in Gloria's representation of the alone self in Bench, the fourth AAP picture stimulus, that she became dysregulated and her attachment representation was judged "unresolved." The stimulus figure is typically interpreted as an adolescent, drawn with legs and bare feet pulled up off the ground and arms loosely wrapped around the legs. The isolated vulnerability of this figure dysregulated her beyond repair. She told the following story, trauma designated in italics: "Uh, a young woman, she is sitting on a bench and is very sad and unhappy. She is desperate and doesn't know how to help herself. She sees no way out and feels that no one helps her. Feels left alone. Helpless, stranded and lonely. I don't know. I have no idea. But I don't know if it ends well, she is very sad. She may have the strength to pull herself out somehow. But if she has bad luck, then not. And then she will, she will suffer further." The young woman (the projected self) is desperate, helpless, and suffering. She is confused and cannot envision life without suffering. The references to desperation, helplessness, and suffering are AAP indicators of severe traumatic content and emotional dysregulation. Gloria demonstrates no clear agency of self, unable to describe change or moving forward. Her representational self is devoid of a sense of connection to others. She succumbs to despair.
One common deactivation strategy in the AAP is evidenced by descriptions of sleep. Representational (and real) sleep is effective in that events can neither be detected nor processed. Sleep represents a deactivating defensive posture that filters the details of distressing experiences out of consciousness. Deactivation is central to maintaining segregated systems (Bowlby, 1980). We see how Gloria segregates images of stress-related dissociation (italics) with deactivation (underlined) in her Window response. Window, the first stimulus in the AAP set, portrays a rear view of a girl looking out a picture window. "It is night time and the child is awakened, that is why she doesn't have shoes on and is standing at the window and looking out. She is sad. Yes, she wishes I think that there actually was someone there with her but there is no one there now and that's why she looks outside, so as if to say, yes, there is certainly someone outside that cares for me. She feels lonely and perhaps looks toward the stars because they are comforting. But there are no stars above, or at least one can simply not recognize them. At some point the child goes away and goes back to sleep, but I think when she grows up then she will go somewhere out in the world in order to find something." The most common responses to this scene are non-threatening descriptions of a child awakening in the morning to go outside to play or go to school (George and West, 2012). By contrast, Gloria's response is a representation of self as desperately alone and without comfort. The child wishes for somebody, anybody, to care for her. Attachment theory views the girl's "yearning and searching" as a natural response to separation and being alone; it is also associated with the initial phase of mourning the loss of an attachment figure (Bowlby, 1980). In the absence of agency and connectedness, Gloria describes the girl's solution with the haunting image of looking for comfort in the galaxy beyond. This image is surreal and is evaluated in the AAP as a form of derealization. George (2013) found such images in the AAP responses of individuals with severe childhood trauma. Gloria copes by returning to a deactivated mental state. By going back to sleep, the girl re-orients her attention away just enough to create the image of being able to move forward in life. Although it contains Gloria's dysregulated mind state, this image is fragile because Gloria fails to describe any concrete coping actions.
The last stimulus in the AAP set is an alone scene that portrays a child in the corner, drawn with solid lines that designate perceptual boundaries that potentially confine the child. The child is turned askew into the corner with one arm reaching outward away from the corner. Gloria told a story of trauma and self-protection, with trauma (in italics), personal agency (underlined), and her interpretation (in brackets) designated in her response: "A small child stands in a corner and cannot get away. He defends, fends off with his hands. And he looks to the side and down because he tries in this way to protect his face or also not see what is coming next. . .. he only thinks, hopefully it will not be so bad. Afterward? He will be beaten. [It doesn't help him that he fends against it. . . he should not have tried to avoid it because then he will be even more severely punished.] Yes, the scene repeats itself again and again. Until the child is someday simply big enough and can run away. And he will surely never do that because he knows what it feels like to stands in the corner and not be able to return and get away. So, he will have to get over it." Gloria's final alone response describes the helplessness of abuse, the defining quality of her relationship with her stepfather. She describes a child who is helplessly trapped in the cycle of abuse. Important in her response, however, is the description of the self as having the personal agency of protection. In those moments of abuse, this child was able to protect himself, a capacity to act. This was likely the source of Gloria's capacity to go forward in life. What we also see, though, is that she currently sees the uselessness of trying to protect oneself. This may have been the impetus for seeking therapy. Unable to escape, she has not been able to "get over it." The child's agency in Corner did not change anything, and Gloria's overall representation of self in the alone stories demonstrates that she is caught in a cycle of pathological chronic mourning for attachment trauma.
Gloria's dyadic attachment representations demonstrated some sense of self as finding functional care from attachment figures. Her response to Bed, however, describes how the absence of functional care leads to relationship failure. The Bed scene portrays a child sitting up in bed reaching toward the opposite end of the bed, where the mother is seated. Gloria's Bed story describes the sequelae of consistent maternal rejection. She told the following story in response to Bed; rejection is indicated by underline: "That is clearly a night situation. The child has been sent to bed. The mother sits with him and the child wants very much like to be embraced, but the mother can probably not do that. She doesn't make any sort of preparation to take the child into her arms. She is probably not in the position to do so. She loves her child but she can simply not be so, so loving or express her love so. With physical contact or something like that. And the child does not yet understand that and would very much like to be embraced. And over time he will feel cast off and will react similarly himself. The child simply thinks that he would like to be held by the mother. The mother thinks, I don't want to do that. Or I can't do that, it isn't necessary. You have everything that you need. The child, so the relationship between the mother and the child will be in time one where the child doesn't ask for it anymore. But I remain an optimist. The child will at some time find someone that can offer this loving and can also express it. And then he will be fine. But that all still takes time." This response evidences how rejecting a child's attachment bids creates distance in the relationship and extinguishes the child's capacity to express its attachment needs. Bedtime signals separation for a child and naturally activates the need for comfort, or at least a functional connection with the attachment figure (Bowlby, 1973). Gloria describes her need very clearly. She also "sees" how the mother's withdrawal fosters rejection over time and extinguishes the child's ability to ask. The need remains, as indicated by the suggestion that the boy will find a loving person who can provide what he needs. Feelings of intimacy and real connection, however, are sacrificed, as it is only through mutual enjoyment and sensitivity that a child comes to know real intimacy (Bowlby, 1969). The relationship distance that results from rejection helps deactivate the distress of failed intimacy. It is likely this experience that contributed to Gloria's decision to leave her boyfriend.
Gloria's overall AAP response patterns demonstrated a representation of self as helpless, desperate, and abandoned, managed by deactivating defenses that created representational and relationship distance to shift her attention away from her pain. It is notable that none of these stories mirrored the independent or rebellious self she described during her clinical interview. Her AAP responses get beyond her desperate attempts to "get rid of" her feelings and demonstrate how frightened and helpless she really is. Gloria is caught in what seemed to be an endless cycle of trauma. Without protective and caring attachment figures, and with representational distance (deactivation) blurred by a positive smoke screen (the hope of growing out of her circumstances), her only regulating mechanism is self-protection, "fending off" distress. We can see how dysregulation and chronic mourning have played out in her life. Adolescent and adult trauma, combined with the threat of feared addiction to medication, seemed to have rendered her helpless, likely because they were out of her control and she was unable to draw on the control strategies she had developed during childhood with her parents. On a positive note, Gloria's hope that she will someday find a caring relationship may finally support a commitment to developing a therapeutic relationship.
THE PSYCHODYNAMIC VIEW
Attachment and psychodynamic perspectives are mutually informing. From a psychodynamic perspective, Gloria had learned early on to repress negative feelings, not to be noticed, to be "independent," and to be outwardly "in control" when she was frightened. This coincided with her primary response mode -to endure and to pass over painful physical or emotional signs.
After her rape, a friend convinced Gloria to go to the police and to make a statement against the perpetrator. Ashamed and feeling that the police officer was insensitive to her situation, Gloria said that she had been attacked but had been able to get away, and she later retracted that statement. This retraction coincided with the onset of her severe headaches and the dissociative fainting episodes. She denied the severity of the trauma by not talking about it to anybody, and she suffered the consequences. Freud (1920) defined "trauma" (Jenseits des Lustprinzips) as an overwhelming stimulus experience against which a healthy psyche cannot defend itself. The feeling of total helplessness in the face of the traumatic experience sets the point of crystallization for further trauma. Helplessness is a difficult feeling to endure. This is presumably the reason why the psyche is not able to easily become peaceful after an overwhelming occurrence. Instead, it tries on the one hand to hinder the return of the memory of the trauma in order to protect itself from further traumatization. On the other hand, there is a kind of pressure to deal with the incident again and again in order to locate the fault of this threatening experience at least in thought, and perhaps thereby to reach a process whereby the helplessness of the experience could be conquered through a new security or certitude. The traumatized person rocks back and forth between two opposite conditions - the "renunciation" (avoidance, dissociation) of the occurrence and the pressure of having to constantly remember it (intrusion). From the psychoanalytic viewpoint, these two opposite motivations constitute, in the classical sense, an intrapsychic conflict. Modern psychodynamic theory is derived from this trauma model.
The model emphasizes the effects of early development, especially physical and sexual trauma with a simultaneous lack of protection of the child, on psychic development. Research to date has shown that early traumatization is associated with deficits in the ability to steer intense affects, leaving individuals highly unsettled in their attachment abilities (analogous to the dysregulation risk associated with attachment disorganization). The effects of early traumatization often include attention and concentration deficits, antisocial behavior, the presence of physical symptoms as substitute expressions of emotional problems (somatization), as well as deep-seated hopelessness and a lack of basic trust. Early trauma has the most debilitating consequences because traumatic experience threatens the stability and the differentiation of the personality structure. The development of affect regulation, self-confidence, and trust requires a constant environment that supports stable and positive relationships with the parents.
This perspective suggests that it is reasonable to believe that the origin of Gloria's vulnerability was the repeated beatings by her stepfather combined with maternal psychological absence. She had learned at an early age to internalize this experience and to interpret beating as the deserved and expected consequence of mischief. She could neither acknowledge nor speak about trauma under these conditions. Instead, she blamed herself and rendered these experiences taboo for a long time.
TREATMENT SUGGESTIONS
Gloria's response to starting trauma therapy was traumatic in itself. She did not want to look inside and remember the rape that she had tried to repress for more than 20 years. Since the fainting attacks are probably strongly connected to the trauma, avoiding treatment of the trauma would probably not improve her condition. Premature treatment of the trauma without an established working alliance would, however, be contraindicated because of Gloria's "gruesome" previous experience.
We would suggest the following key tasks in a psychodynamic treatment plan for this patient based on the AAP: 1. Establish a secure base for this patient. This would mean not being intrusive and postponing delving into the rape experience until an alliance is established. Gloria's unresolved pathological chronic mourning classification on the AAP also supports delaying the exploration of trauma and parenting failures, because they are disorganizing and she is not able to contain feeling desperate, stranded, and helpless. She could not trust others to help her, and this would extend to an inability to trust her therapist. Rather, building on the hope Gloria alluded to in the Bed response bodes well for beginning the therapeutic relationship and building an alliance.
Part of the secure base therapist position would include helping Gloria develop a more integrated self-object representation. Gloria's chronic representation of self embodied the classic contradictory themes, as evidenced on the AAP. Her representation of self juxtaposes self-protective capacity with helplessness. One view was the self as frightened and helpless: I am frightened and helpless and cannot tolerate or verbalize it. The corollary theme was the self as having the capacity for protection: "I can protect myself and survive." What becomes clear is that deactivation defenses neutralize her pain and turn her attention away from it, thus maintaining trauma segregation and preventing her from understanding why she becomes dysregulated: "I dissociate and don't know why." Another feature of the secure base position would be to help Gloria change her representation of the object; that is, to make the idealization of the stepfather more realistic. She would also need to accept the fact that, in attachment theory terms, her mother rejected her and did not protect her enough, which is why she had to learn to protect herself. Her avoidance, facilitated by deactivating defenses, is a reasonable response to her experience but is maladaptive.
2. Diminish deactivating defenses. Strengthen Gloria's ability to tolerate negative feelings. This can help her face and accept the physical signs of distress in order to reduce the autopilot autonomic reactions. It can also help mitigate feelings of blame and foster an increased capacity for affect tolerance.
3. Re-organize attachment dysregulation that is strengthened by deactivation. Help Gloria not to reject and be frightened of intimacy and closeness, and to accept relationships with an authentic self who has weaknesses.
The therapist should be careful about being silent and appearing anonymous since, for Gloria, who is already neutralized and deactivated, this behavior would mean rejection. With this patient, it would be expected that the silence of the analyst could activate the previous mode with the mother, as shown in the Bed story and described in her childhood as her mother's withdrawal of love: "She did not speak to me for days." One can also readily expect that this relationship pattern will appear in the transference. Therapeutic abstinence in the sense of being silent and non-responsive may instill more tension and anxiety. Technical neutrality in a modified way (e.g., Kernberg et al., 2008) and a warm, accepting, and in general empathetic position at the right moment seem to be the most useful technical recommendations until the danger of retraumatization is attenuated. According to Kernberg et al. (2008), technical neutrality might be an ideal point of departure within the treatment of traumatized patients, such as borderline patients, at large and within each session, because it counters patients' tendency to externalize their intrapsychic conflicts. However, at times it needs to be disrupted because of the urgent requirement for limit-setting, and even in connection with the introduction of a major life problem of the patient that, at such a point, would seem a non-neutral intervention of the therapist. Such deviation from technical neutrality may be indispensable in order to protect the boundaries of the treatment situation, and to protect the patient from severe suicidal and other self-destructive behavior; it requires a particular approach in order to restore technical neutrality once it has been abandoned.
CONCLUDING REMARKS
Our goal in this paper was to demonstrate the complexity of trauma-related disorders as informed by attachment theory and psychodynamic perspectives for the purpose of psychotherapy. The response to trauma is embedded in patients' interpersonal difficulties and representations of self and attachment figures. Attachment representations should receive at least as much attention as patients' traumatic memories and symptoms (i.e., dissociative experiences and dissociative defenses). The knowledge of the mental processes linked to traumatic dysregulation and disorganization of attachment should guide the therapist's understanding of these difficulties. Some patients are frozen in a state of chronic pathological mourning, as in the case described here. Responses in other patients may take other forms, including other forms of mourning, such as preoccupation with personal suffering or failed mourning (Bowlby, 1980; George and West, 2012). The case analysis of Gloria, integrating the AAP with psychodynamic interpretation, demonstrates the phenomenological overlap and the developmental continuity between childhood attachment behavior and the behavior of adult dissociative patients within the therapeutic relationship (Liotti, 1993, 1995; Fonagy, 1999; Muscetta et al., 1999; Liotti and Intreccialagli, 2003). Correction of patients' representations of attachment should become an important aim of the treatment.
One of the advantages of attachment assessment using the AAP is that the picture scenes serve as stimuli for individual narratives. This means frightening memories can show up in a story without necessarily being articulated as one's own experiences. Gloria's predominant fear at the beginning of treatment was having to describe her trauma. She was so terrified that she had abandoned therapy for quite a long time. Using the AAP in the context of an initial clinical assessment, the clinician gains specific knowledge about what kinds of words might be eerie and traumatic for the patient given his or her individual story. The clinician can then be more aware and careful in pursuing the details associated with a patient's fear, which certainly will be reactivated in the therapeutic dyad. With the help of an understanding of individual traumatic dysregulation and defensive structure as provided by the AAP, therapeutic interventions can focus step by step on helping patients to understand their intense emotional reactions of helplessness in the context of the treatment setting. According to Fonagy and Bateman (2006), the patient must be helped to consider who engendered the feeling and how, and to explore whether the feelings have occurred before or are connected to events in either the immediate or the longer-term past.
Gloria presented herself as a very strong and autonomous person. There was no evidence of this person in the AAP, which demonstrated that she did not truly view herself as autonomous. Gloria's outward "strength" was controlling others, a strategy developed to manage frightening feelings of helplessness, abandonment, and isolation in the context of maltreatment and rejection (Solomon and George, 2011). Her strength was constructed from a disorganizing dissociation triangle and she knew and played out the script for each of its roles.
In thinking about the benefits of using the AAP in clinical settings, we must also discuss cautions and limitations. Developmental attachment assessments are designed to be stressful in order to capture patterns of attachment representation and defensive processes. Using a picture set for assessment is a benefit of the AAP methodology, but the picture stimuli may become triggers, especially for individuals with PTSD. Because PTSD patients may be in a state of severe traumatization, it is possible that they may not be able to encode and respond to the task, which is called "constriction" (e.g., the person says that he or she is unable to create a story, or hands the picture back to the interviewer saying they cannot or do not want to tell a story). Taking that into account, we stress that it is very important to complete the AAP assessment in a supportive and caring clinical environment. Although more research is needed on the use of the AAP with disturbed patients, it is a promising instrument that has the potential to formulate psychodynamic hypotheses and treatment goals (see also Finn, 2011).
Reflecting on Self-Aware Systems-on-Chip
In this chapter, we explore adaptive resource management techniques for cyber-physical systems-on-chip that employ principles of computational self-awareness to varying degrees, specifically reflection. By supporting various self-X properties, systems gain the ability to reason about runtime configuration decisions by considering the significance of competing objectives, user requirements, and operating conditions, while executing unpredictable workloads.
Computational self-awareness is not a new discipline in itself, but rather a unification of subjects studied disjointly in various fields, including control systems, artificial intelligence, autonomic computing, and software engineering, among others; research in these areas can be applied toward building computer systems with varying degrees of self-awareness in order to accomplish a task [8].
Cyber-Physical Systems-on-Chip
Battery-powered devices are the most ubiquitous computers in the world. Users of battery-powered devices expect support for various high-performance applications running on the same device, potentially at the same time. Applications range from interactive maps and navigation to web browsers and email clients. In order to meet the performance demands of users running complex workloads, increasingly powerful hardware platforms are being deployed in battery-powered devices. Systems-on-chip (SoCs) can integrate hundreds of heterogeneous cores and uncore components on a single chip. Such systems are constrained by a limited amount of shared system resources (e.g., power, interconnects). Simultaneously, the systems are expected to support workloads with diverse characteristics and demands that may conflict with system constraints. These platforms include a number of configurable knobs throughout the system stack, with different scopes, that allow for a trade-off between power and performance, e.g., dynamic voltage and frequency scaling (DVFS), power gating, and idle cycle injection. These knobs can be set and modified at runtime based on workload demands and system constraints. Heterogeneous many-core processors (HMPs) have extended this principle of dynamic power-performance trade-offs by incorporating single-ISA, architecturally differentiated cores on a single processor, with each of the cores containing a number of independent trade-off knobs. All of these configurable knobs allow for a huge range of potential trade-offs. With such a large number of possible configurations, SoCs require intelligent runtime management in order to achieve system goals for complex workloads. Additionally, the knobs may be interdependent, so the decisions must be coordinated.
Cyber-physical systems-on-chip (CPSoC) [21] provide an infrastructure for system introspection and reflective behavior, which is the foundation for computational self-awareness. Figure 6.1 shows the infrastructure of a sensor-actuator rich platform, integrated with decision-making entities that observe system state through virtual and physical sensors at various layers in order to set the system configuration through actuators. The actuations are determined by policies that enforce the overall application goals while considering system constraints. Such an infrastructure can deploy reactive policies through the traditional Observe, Decide, and Act (ODA) feedback loop, as well as proactive policies through the augmented self-aware feedback loop. Figure 6.2 shows how the traditional ODA loop is augmented with reflection to provide self-aware adaptation. In this chapter we explore the reflective models, middleware, and control-theoretic policies that build on this infrastructure.
Reflective System Models
Traditionally, resource managers deploy an ODA feedback loop (lower half (in black) of Fig. 6.3) to manage systems at runtime. However, recent works [1,27] have shown that a runtime model of the system can better manage the unpredictable nature of workloads.
Reflection can be defined as the capability of a system to reason about itself and act upon this information [26]. A reflective system can achieve this by maintaining a representation of itself (i.e., a self-model) within the underlying system, which is used for reasoning. Reflection is a key property of self-awareness. Reflection enables decisions to be made based on both past observations, as well as predictions made from past observations. Reflection and prediction involve two types of models: (1) a self-model of the subsystem(s) under control, and (2) models of other policies that may impact the decision-making process. Predictions consider future actions, or events that may occur before the next decision, enabling "what-if" exploration of alternatives. Such actions may be triggered by other policies invoked more frequently than the decision loop. The top half of Fig. 6.3 (in blue) shows prediction enabled through reflection that can be utilized in the decision-making process of a feedback loop. The main goal of the predictive model is to estimate system behavior based on potential actuation decisions as well as system dynamics.
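To make the reflective loop concrete, the following is a minimal C++ sketch of one decision step: the policy queries a self-model with candidate actuations ("what-if" queries) and picks the one with the best predicted outcome. All names (SelfModel, SystemState, Actuation, decide) and the toy linear model are hypothetical illustrations, not the API of any particular framework.

#include <vector>
#include <limits>

struct SystemState { double power; double performance; };
struct Actuation  { int cpuFreqMHz; int gpuFreqMHz; };

// Hypothetical self-model: predicts the state resulting from an actuation.
struct SelfModel {
    SystemState predict(const SystemState& current, const Actuation& a) const {
        // Toy linear model: higher frequency -> more performance, more power.
        double scale = a.cpuFreqMHz / 1000.0;
        return { current.power * scale, current.performance * scale };
    }
};

// Higher is better: penalize missing the performance target, then power.
double score(const SystemState& s, double perfTarget) {
    double miss = (s.performance < perfTarget) ? (perfTarget - s.performance) * 100.0 : 0.0;
    return -(miss + s.power);
}

// One iteration of the reflective Observe-Decide-Act loop.
// Assumes a non-empty candidate set.
Actuation decide(const SystemState& observed, const SelfModel& model,
                 const std::vector<Actuation>& candidates, double perfTarget) {
    Actuation best = candidates.front();
    double bestScore = -std::numeric_limits<double>::infinity();
    for (const Actuation& a : candidates) {          // "what-if" exploration
        SystemState predicted = model.predict(observed, a);
        double sc = score(predicted, perfTarget);
        if (sc > bestScore) { bestScore = sc; best = a; }
    }
    return best;                                     // to be applied via actuators
}

In a real resource manager, the self-model would also be updated online from observed sensor data, closing the reflective loop.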
Middleware for Reflective Decision-Making
The increasing heterogeneity in a platform's resource types and the interactions between resources pose challenges for coordinated model-based decision-making in the face of dynamic workloads. Self-awareness properties address these challenges for emerging SoC platforms through reflective resource managers. Reflective resource managers build a model of the system which represents the software organization or the architecture of the target platform. Resource managers can use reflective models to anticipate the effects of changing the system configuration [10]. Different layers of the system stack coordinate through policies to orchestrate the management of resources: sensors inform policies of the system state; policies coordinate with models to perform reflective queries, and make resource management decisions; policies set actuators to enact changes on the system at runtime. However, with SoC computing platform architectures evolving rapidly, porting the self-aware decision logic across different hardware platforms is challenging, requiring resource managers to update their models and platform-specific interfaces. To address this problem, we propose MARS (Middleware for Adaptive and Reflective Systems), a cross-layer and multi-platform framework that allows users to easily create resource managers by composing system models and resource management policies in a flexible and coordinated manner. Figure 6.4 shows an overview of the MARS framework (shaded), with Sensors and Actuators interfacing across multiple layers of the system stack: Applications, Linux kernel, and HW Platform. The components of MARS are explained next.
1. Sensors and actuators: The sensed data consists of performance counters (e.g., instructions executed, cache misses) and other sensory information (e.g., power, temperature). The collected data is used to assess the current system state and to characterize workloads. Any updates to the system configuration (e.g., CPU core frequency, GPU frequency, memory controller frequency, task-to-core mapping) happen through system knobs. Actuators allow system configuration changes to optimize the operating point or control trade-offs.
2. Resource management policies: platform-agnostic user-level daemons implemented in MARS using the supported sensors, actuators, and reflective system models.
3. Reflective system model: used by the policies to make informed decisions. The reflective model has the following subcomponents: (a) models of policies implemented by the underlying OS kernel, used for coordinating decisions made within MARS with decisions made by the OS; (b) models of user policies that are automatically instantiated from any policy defined within MARS; (c) the baseline performance/power model, which takes as input the predicted actuations generated from the policy models and produces predicted sensed data.
4. Policy manager: responsible for reconfiguring the system by adding, removing, or swapping policies to better achieve the current system goal.
MARS is implemented in C++ following an object-oriented paradigm and works on hardware (e.g., Odroid-XU3, Nvidia Jetson TX2), simulated (e.g., gem5), and trace-based offline [11] platforms. The framework is open source and available online. While the current version of MARS targets energy-efficient heterogeneous SoCs, we believe the MARS framework can be ported to a wider range of systems (e.g., webservers, high-performance clusters) to support self-aware resource management.
Managing Energy-Efficient Chip Multiprocessors
Dynamic resource management for HMPs is a well-known challenge: integration of hundreds of cores running various workloads with conflicting constraints increases the pressure on limited shared system resources. A promising and well-established approach is the use of control-theoretic solutions based on rigorous mathematical formalisms that can provide bounds and guarantees for system resource management. In this context, we discuss efforts that deploy control-theoretic-centric runtime resource management of HMPs, from simple Single Input Single Output (SISO) controllers to more complex Supervisory Control Theory (SCT) methods.
Single Input Single Output Controllers
Conventional control theory methods proposed for resource management use Single Input Single Output (SISO) controllers for their ease of deployment and the guarantees they provide in tracking the target output. These SISO controllers use Proportional Integral (PI), Proportional Integral Derivative (PID), or lead-lag implementations [22]. Figure 6.5 depicts a first-order feedback SISO controller which can be deployed either as a PI or a PID controller. The error e is the input to the controller. Note that to compute the current control input u, the controller needs the current value of the error e along with the past value of the error and the past value of the control input. It is this memory inherent in the controller that makes it dynamic.
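As a concrete illustration, here is a minimal discrete-time PI controller in C++ in the incremental (velocity) form, which matches the description above: the new control input is computed from the current error, the past error, and the past control input. The gains, sampling period, and the FPS-tracking use case are hypothetical.

// Incremental (velocity-form) discrete PI controller:
//   u[k] = u[k-1] + Kp*(e[k] - e[k-1]) + Ki*T*e[k]
class PIController {
public:
    PIController(double kp, double ki, double period)
        : kp_(kp), ki_(ki), period_(period) {}

    // One control step: target is the reference output (e.g., target FPS),
    // measured is the sensed output (e.g., observed FPS).
    double update(double target, double measured) {
        double e = target - measured;                 // current error e[k]
        double u = prevU_ + kp_ * (e - prevE_) + ki_ * period_ * e;
        prevE_ = e;                                   // remember e[k-1]
        prevU_ = u;                                   // remember u[k-1]
        return u;                                     // e.g., a frequency setting
    }

private:
    double kp_, ki_, period_;
    double prevE_ = 0.0;   // past error
    double prevU_ = 0.0;   // past control input
};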
Multiple Input Multiple Output Controllers
Modern HMPs execute diverse set of workloads with varying resource demands, which sometimes exhibit conflicting constraints. In this context, the use of SISO controllers might not be effective as multiple system goals varying over time need to be managed in a coordinated and holistic manner. Multiple Input Multiple Output (MIMO) control theory is able to coordinate and prioritize multiple design goals and actions. MIMO controllers have proven effective for coordinating management of multiple goals in unicore processors [17] and HMPs [12].
Adaptive Control Methods
Ideally, control-theoretic solutions should provide formal guarantees, be simple enough for runtime implementation, and handle nonlinear system behavior. Static linear feedback controllers such as SISO and MIMO can provide robustness and stability guarantees with simple implementations, while adaptive controllers modify the control law at runtime to adapt to the discrepancies between the expected and the actual system behavior. However, modifying the controller at runtime is a costly operation that also invalidates the formal guarantees provided at design time. In order to be able to take predicted responsive actions against nonlinear behavior of the computer systems, a well-established and lightweight adaptive control-theoretic technique called Gain Scheduling can be used. This method is used for dynamic power management in chip multiprocessors in [2].
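A minimal sketch of how gain scheduling might look in C++, assuming utilization is the scheduling variable: a table of pre-verified gains is selected by operating region, and the chosen gains then parameterize a low-level controller (such as the PI sketch above) for the next control interval. The regions and gain values are hypothetical.

#include <array>

struct Gains { double kp, ki; };

// Hypothetical schedule: pick gains by utilization region, since the
// plant behaves differently at low, medium, and high load.
Gains scheduleGains(double utilization) {
    static const std::array<Gains, 3> table = {{
        {0.2, 0.05},   // low load: gentle response
        {0.5, 0.10},   // medium load
        {0.9, 0.20}    // high load: aggressive response
    }};
    if (utilization < 0.33) return table[0];
    if (utilization < 0.66) return table[1];
    return table[2];
}

Because the candidate gain sets are chosen and verified at design time, switching between them at runtime avoids the cost of redesigning the control law online.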
Hierarchical Controllers
Supervisory Control Theory (SCT) [19,30] provides formal and systematic supervision of classical MIMO/SISO controllers. SCT uses modular decomposition of control problems to manage their complexity. Specifically, supervisory control has two key properties: (1) rapid adaptation in response to abrupt changes in management policy and (2) low computational complexity by computing control parameters for different policies offline. New policies and their corresponding parameters can be added to the supervisor on demand. Therefore, SCT is suitable for resource management problems (such as managing power, energy, and qualityof-service metrics) that can be modeled using logic and discrete system dynamics. Figure 6.6 depicts a high-level view of supervisory control for HMP resource management. Either the user or the system software may specify Variable Goals and Policies. The Supervisory Controller aims to meet system goals by managing the low-level controllers. High-level decisions are made based on the feedback given by the High-level Plant Model, which provides an abstraction of the entire system. Various types of Classic Controllers, such as PID or state-space controllers, can be used to implement each low-level controller based on the target of each subsystem. The flexibility to incorporate any pre-verified off-the-shelf controllers without the need for system-wide verification is essential for the modularity of this approach. The supervisor provides parameters such as output references or gain values to each low-level controller during runtime according to the system policy. Low-level controller subsystems update the high-level model to maintain global system state,
and potentially trigger the supervisory controller to take action. The high-level model can be designed in various fashions (e.g., rule-based or estimator-based) to track the system state and provide the supervisor with guidelines. Supervisory control provides the opportunity to benefit from both classical control-theoretic methods and heuristics in a robust fashion. The SCT hierarchy in Fig. 6.6 is successfully used to manage quality-of-service (QoS) goals within a power budget on an HMP in [18].
Heterogeneous Mobile Governors: Energy-Efficient Mobile System-on-a-Chip
Mobile games stress modern SoCs by utilizing heterogeneous processing elements, CPUs and GPUs, concurrently. However, the utilization of each processing element may vary between games. The performance of these games, usually measured in frames per second (FPS), can depend heavily on the operating frequency of the compute units. However, conventional DVFS governors conservatively choose high frequencies without considering the utilization pattern of the games [16]. In order to meet a performance goal while conserving energy, the frequency of each processing element should be as low as possible without an observable effect on the FPS.
Sensors to Capture Dynamism
To coordinate frequency configuration decisions, a cooperative CPU-GPU DVFS strategy, Co-Cap [14], limits the maximum frequency of CPUs and GPUs on a game-specific basis. Based on the utilization of each processing element, games are classified as one of the following classes: (1) No CPU-GPU Dominant; (2) CPU Dominant; (3) GPU Dominant; and (4) CPU-GPU Dominant. Figure 6.7 shows the classes and gives an example of each class. To determine a maximum frequency for each game class, Co-Cap implements a frame rate sensor, which is affected by both CPU and GPU frequencies. By limiting maximum frequencies for each game class, Co-Cap reduces energy consumption without observable performance degradation.
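The following C++ sketch illustrates the kind of utilization-based classification and per-class frequency capping described above; the thresholds and cap values are hypothetical placeholders, not the published Co-Cap tuning.

enum class GameClass { None, CpuDominant, GpuDominant, CpuGpuDominant };

struct FreqCaps { int cpuMaxMHz; int gpuMaxMHz; };

// Classify a game by its average CPU/GPU utilization (0.0 - 1.0).
GameClass classify(double cpuUtil, double gpuUtil) {
    const double kHigh = 0.6;                    // hypothetical threshold
    bool cpu = cpuUtil >= kHigh, gpu = gpuUtil >= kHigh;
    if (cpu && gpu) return GameClass::CpuGpuDominant;
    if (cpu)        return GameClass::CpuDominant;
    if (gpu)        return GameClass::GpuDominant;
    return GameClass::None;
}

// Per-class maximum frequency caps (hypothetical values): dominant
// units keep headroom, non-dominant units are capped low to save energy.
FreqCaps capsFor(GameClass c) {
    switch (c) {
        case GameClass::CpuDominant:    return {2000, 400};
        case GameClass::GpuDominant:    return {1200, 600};
        case GameClass::CpuGpuDominant: return {2000, 600};
        default:                        return {1200, 400};
    }
}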
The assumption in Co-Cap is that games can only belong to one of the classes. However, some games might change their dynamic behavior throughout their life cycle. To proactively respond to the dynamic CPU and GPU frequency requirements of games, a DVFS governor policy requires more information about a game's workload dynamism. A Hierarchical Finite State Machine (HFSM) based CPU-GPU governor, HiCAP [13], models the dynamic behavior of mobile gaming workloads and applies a cooperative, dynamic CPU-GPU frequency-capping policy to conserve energy by adapting to a game's inherent dynamism. Using the HFSM, a DVFS governor can predict the next workload feature for a certain window [14] at a game's runtime. Through this added self-awareness, HiCAP reduces energy consumption even further than Co-Cap.
Further dynamism exists in a game's memory access patterns. Some scenes in mobile games read more graphics data than others, resulting in increased memory utilization. This may slow down the CPU portion of the game; on the other hand, when memory utilization is low, the game may run faster than predicted by a conventional DVFS governor. A conventional DVFS governor cannot detect these memory utilization changes by sensing utilization alone, causing prediction errors to increase. MEMCOP, a memory-aware cooperative power management governor for mobile games [5], senses the number of last-level cache misses to monitor the memory pressure of the system, in addition to CPU and GPU utilization. This prevents the CPU DVFS governor from increasing frequency due to inaccurate predictions caused by variation in memory access time.
Toward Self-Aware Governors
The Co-Cap, HiCAP, and MEMCOP DVFS policies are each steps toward a self-aware DVFS governor policy for heterogeneous SoCs. Each policy monitors the system's state using novel sensors, and defines runtime prediction rules to reflect and adapt to changes in mobile game behavior. However, the predictive models are generated statistically at design time and remain the same during execution. Moreover, as the predictive model becomes more complex, prediction errors increase due to the assumption of a linear relationship between the model's inputs and outputs. ML-Gov, a machine-learning-enhanced integrated CPU-GPU governor [15], addresses these issues by applying machine learning algorithms. This method does not require rule tuning at design time. ML-Gov's machine learning algorithm helps to exploit the nonlinear characteristics between frequency and performance. ML-Gov currently builds the model offline, but through enhanced self-awareness via online updates of the reflective model, it could adapt to previously unknown games and classes.
Adaptive Memory: Managing Runtime Variability
Heterogeneous processing elements on mobile SoCs share limited memory resources, leading to memory contention and stalled processes waiting for data. This performance degradation is exacerbated by the Von Neumann bottleneck, a prevalent problem in modern-day computer systems. Data transfer speeds in memory have not been able to keep up with the performance gains of processors exemplified by Moore's law. However, with the end of Moore's law on the horizon, there is an ever-increasing need to alleviate the Von Neumann bottleneck to increase the performance of computer systems. There have been various approaches over the years to address the Von Neumann bottleneck, such as placing critical memory in an easily accessible cache [25] and, more recently, in an easily accessible Software-Programmable Memory (SPM), also known as a scratchpad, as well as using multi-threading [9] and exploiting cache coherency [7]. We address the bottleneck by providing self-awareness with respect to memory resource utilization.
Sharing Distributed Memory Space
Software-Programmable Memories are a promising alternative to hardware-managed caches in embedded systems. However, traditional approaches for managing SPMs do not support sharing of distributed memory resources, missing the opportunity to utilize those memory resources. Employing operating-system-level awareness of SPM utilization, memory resources can be shared by allowing threads to opportunistically exploit the entire memory space for unpredictable application workloads. Best-effort policies can be used to maximize the usage of on-chip SPMs. The policies can be supported by hardware via distributed memory management units (MMUs), an on-chip component that can be used to exchange information between the NoC and an MMU's local SPM. Sharing distributed SPM space reduces memory contention, resulting in reduced memory latency by reducing off-chip memory accesses by about 14%. The off-chip access reduction decreases average execution time by about 19.5%, which in turn reduces energy consumption [23,29]. More intelligent policies that explore a mixed SPM/cache hierarchy for many-core embedded systems can yield further improvements.
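As an illustration of the best-effort sharing idea, here is a minimal C++ sketch of an allocator that prefers a core's local SPM bank and opportunistically borrows remote bank space before falling back to off-chip memory; the data structures and policy are hypothetical simplifications, not the mechanism of [23,29].

#include <vector>
#include <cstddef>

// Best-effort allocation over distributed scratchpad (SPM) banks.
struct SpmBank { std::size_t capacity; std::size_t used; };

// Returns the bank index the request was placed in, or -1 for off-chip DRAM.
int allocate(std::vector<SpmBank>& banks, int localBank, std::size_t bytes) {
    auto fits = [&](int b) { return banks[b].capacity - banks[b].used >= bytes; };
    if (fits(localBank)) { banks[localBank].used += bytes; return localBank; }
    for (int b = 0; b < static_cast<int>(banks.size()); ++b) {
        if (b != localBank && fits(b)) {       // opportunistically borrow
            banks[b].used += bytes;
            return b;
        }
    }
    return -1;                                 // fall back to off-chip memory
}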
Memory Phase Awareness
Modern mobile devices use multi-core platforms that allow for concurrent execution of multimedia and non-multimedia applications that enter and exit at unpredictable times. Each application also has variable memory demands during these unpredictable times. By being aware of the periodic patterns, or phasic behavior, of an application's memory usage (memory phases), a system's on-chip memory can be more efficiently utilized. Memory phases can be identified from memory usage information extracted on a per-application basis, and can be used to prioritize different memory pages in a multi-core platform without any prior knowledge about the running applications. The identification process can be integrated into the runtime system and performed online. For example, memory phases can be used for effective sharing of distributed SPMs in multi-core platforms to reduce memory access latency and contention. Experiments on workloads with varying intra- and inter-application memory intensity show that using phase detection schemes can reduce memory access latency by up to 45% for configurations of up to 16 cores [28]. Ongoing work investigates more aggressive use of memory phasic behavior in many-core architectures with hundreds of cores.
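A minimal C++ sketch of online memory-phase detection, assuming a simple windowed-average heuristic; the window size, threshold, and metric are hypothetical choices rather than the scheme of [28].

#include <deque>
#include <cmath>
#include <cstddef>

// Detects a memory-phase change when the latest access-rate sample
// deviates from the windowed average by more than a relative threshold.
class PhaseDetector {
public:
    PhaseDetector(std::size_t window, double threshold)
        : window_(window), threshold_(threshold) {}

    // Feed one sample (e.g., memory accesses in the last interval).
    // Returns true if a phase boundary is detected.
    bool addSample(double accesses) {
        samples_.push_back(accesses);
        if (samples_.size() > window_) samples_.pop_front();
        if (samples_.size() < window_) return false;   // still warming up

        double avg = 0.0;
        for (double s : samples_) avg += s;
        avg /= samples_.size();

        bool changed = std::fabs(accesses - avg) > threshold_ * avg;
        if (changed) samples_.clear();   // start characterizing the new phase
        return changed;
    }

private:
    std::size_t window_;
    double threshold_;
    std::deque<double> samples_;
};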
Quality-Configurable Memory
We have established how self-awareness can be achieved through formal control theory. Figure 6.8 shows a closed feedback control loop with a quality monitor that can measure memory utilization and processor usage with respect to a QoS goal to fit the runtime requirements of applications. The quality monitor gives a quality score and sends the collected data to a high-level controller. The controller reflects on the data, then tunes knobs to adapt the memory utilization and processor usage to minimize the error between the current quality and the quality-of-service goal. The self-aware approach enables dynamic convergence toward dynamic memory utilization and quality targets for unpredictable workloads. While current results indicate that a self-aware memory controller optimizing memory knobs outperforms a manual quality-configuration scheme, there is much work to be done to analyze energy trade-offs when using a self-aware memory controller, and to determine whether a MIMO controller could be more effective for resource management in many-core systems with the self-aware approach [24].
What's Ahead?
Self-awareness enables a system to observe its context and make changes to optimize its execution at runtime. For instance, it is possible to allow a system to tune its execution to optimize power consumption. By observing how it has reacted to past changes in certain conditions, the system can learn what the impact on the overall execution and power consumption was, and whether a different adaptation would be more appropriate in the future. To further explore such opportunities in computing systems, we shift our focus to a new project: the Information Processing Factory (IPF). IPF is a step toward autonomous many-core platforms in cyber-physical systems (CPS) and the Internet of Things (IoT). It represents a paradigm shift in platform design, placing robust and independent platform operation at the focus of platform-centric design, rather than existing semiconductor device or software technology, as mostly seen today [4]. We use the metaphor of an Information Processing Factory to draw similarities between microelectronic systems and factories as follows: in a factory, all components must adapt to the current workload [20]. Additionally, this adaptation cannot be done offline and must instead be done in real time without interrupting the baseline operations. Future microelectronic systems (e.g., MPSoCs) should operate in a similar manner.
Clusters of component-specific, uncorrelated control instances cannot handle the operation of large-scale systems with multi-criteria objective functions. Similarly, a centralized controller model is also inadequate in this case because it cannot scale. The goal of the IPF project is to demonstrate that a hybrid hierarchical approach, with as much modularity as possible and as much centralization as necessary, is a much more effective means of achieving the desired goal while maintaining cost efficiency, low overhead, and scalability. Figure 6.9 depicts how we envision the platform to be structured. Information provided by sensors is gathered and merged into self-organizing, self-aware (SO/SA) control processing instances across the different hardware/software abstraction layers comprising an MPSoC-based CPS system. The SO/SA instances generate actuation directives affecting the MPSoC system components at the same or lower levels of abstraction. The SO/SA paradigm is not limited in scope to optimization of CPS operational parameters and metrics. In fact, self- and group-awareness can also enable higher-level tasks such as self-protection of both the MPSoC and the overall CPS system.
Example Use Case: Autonomous Driving
The key innovation in automated driving, as compared to driver assistance systems, is the transition of decision-making from the driver to the vehicle. The application processing and communication requirements call for platform performance, memory capacity, and communication bandwidth and latency far beyond the capabilities of current architectures. At the same time, these platforms must be highly reliable and guarantee sufficient functionality under platform errors, aging, and degradation to meet safety standards. That is, platforms and their components must be fail-operational, i.e., able to continue driving, instead of fail-safe, as today. Thus, the automated driving requirements can be mapped to corresponding requirements of an Information Processing Factory. The system must be capable of in-field integration, i.e., able to adapt to changes in the workload of both critical and non-critical (best-effort) functions. The system must find a new suitable mapping and must prevent the changes from violating the guarantees of other software components. The software must be able to detect and adapt to transient errors in order to provide a reliable service. This requires self-diagnosis and self-healing.
The system must be predictable and provide for minimum performance guarantees for all scenarios.
Allowing and exploiting dynamic system behavior through IPF can significantly improve platform performance and resource utilization. Thus, the system must be able to optimize the execution and mapping online: self-optimization. The optimization may target, e.g., aging (temperature), power consumption, response time, and resource utilization.
Summary
Future cyber-physical systems will host a large number of coexisting distributed applications on hardware platforms with thousands to millions of networked components communicating over open networks. These distributed applications will include both critical and best-effort tasks, and may be subject to permanent change, environmental dynamics, and application interference. Using the wisdom gathered from our initial exploration into self-aware SoCs, we introduce a new Information Processing Factory paradigm to manage current and future cyber-physical systems.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Measuring Singularities with Frobenius: The Basics
Consider a polynomial $f$ defined over a field $k$, the multiplicity is perhaps the most naive measurement of the singularities of $f$. This paper describes the first steps toward understanding a much more subtle measure of singularities which arises naturally in three different contexts-- analytic, algebro-geometric, and finally, algebraic. Miraculously, all three approaches lead to essentially the same measurement of singularities: the log canonical threshold (in characteristic zero) and the closely related $F$-pure threshold (in characteristic $p$). In this paper we present only the first steps in understanding these invariants, with an emphasis on the prime characteristic setting.
Introduction
Consider a polynomial $f$ over some field $k$, vanishing at some point $x \in k^n$. By definition, $f$ is smooth at $x$ (or the hypersurface defined by $f$ is smooth at $x$) if and only if some partial derivative $\frac{\partial f}{\partial x_i}$ is non-zero there. Otherwise, $f$ is singular at $x$. But how singular? Can we quantify the singularity of $f$ at $x$?
The multiplicity is perhaps the most naive measurement of singularities. Because f is singular at x if all the first order partial derivatives of f vanish there, it is natural to say that f is even more singular if also all the second order partials vanish, and so forth. The order, or multiplicity, of the singularity at x is the largest d such that for all differential operators ∂ of order less than d, ∂f vanishes at x. Choosing coordinates so that x is the origin, it is easy to see that the multiplicity is simply the degree of the lowest degree term of f .
The multiplicity is an important first step in measuring singularities, but it is too crude to give a good measurement of singularities. For example, the polynomials $xy$ and $y^2 - x^3$ both define singularities of multiplicity two, though the former is clearly less singular than the latter. Indeed, $xy$ defines a simple normal crossing divisor, whereas the singularity of the cuspidal curve defined by $y^2 - x^3$ is quite complicated, and that of, for example, $y^2 - x^{17}$ is even more so. This paper describes the first steps toward understanding a much more subtle measure of singularities which arises naturally in three different contexts: analytic, algebro-geometric, and finally, algebraic. Miraculously, all three approaches lead to essentially the same measurement of singularities: the log canonical threshold (in characteristic zero) and the closely related $F$-pure threshold (in characteristic $p$). The log canonical threshold, or complex singularity exponent, can be defined analytically (via integration) or algebro-geometrically (via resolution of singularities). As such, it is defined only for polynomials over $\mathbb{C}$ or other characteristic zero fields. The $F$-pure threshold, whose name we shorten to $F$-threshold here, by contrast, is defined only in prime characteristic. Its definition makes use of the Frobenius, or $p$-th power, map. Remarkably, these two completely different ways of quantifying singularities turn out to be intimately related. As we will describe, if we fix a polynomial with integer coefficients, the $F$-threshold of its "reduction mod $p$" approaches its log canonical threshold as $p$ goes to infinity.
Both the log canonical threshold and the F -threshold can be interpreted as critical numbers for the behavior of certain associated ideals, called the multiplier ideals in the characteristic zero setting, and the test ideals in the characteristic p world. Both naturally give rise to higher order analogs, called "jumping numbers." We will also introduce these refinements.
We present only the first steps in understanding these invariants, with an emphasis on the prime characteristic setting. Attempting only to demystify the concepts in the simplest cases, we make no effort to discuss the most general case, or to describe the many interesting connections with deep ideas in analysis, topology, algebraic geometry, number theory, and commutative algebra. The reader who is inspired to dig deeper will find plenty of more sophisticated survey articles and a plethora of connections to many ideas throughout mathematics, including the Bernstein-Sato polynomial [35], Varchenko's work on mixed Hodge structures on the vanishing cycle [62], the Hodge spectrum [58], the Igusa Zeta Function [34], motivic integration and jet schemes [41], Lelong numbers [13], Tian's invariant for studying Kähler-Einstein metrics [12], various vanishing theorems for cohomology [38,Chapter 9], birational rigidity [10], Shokurov's approach to termination of flips [52], Hasse invariants [2], the monodromy action on the Milnor fiber, Frobenius splitting and tight closure.
There are several surveys which are both more sophisticated and pay more attention to history. In particular, the classic survey by Kollár [35] contains a deeper discussion of the characteristic zero theory, as do the more recent lectures of Budur [9], mainly from the algebro-geometric perspective. For a more analytic discussion, papers of Demailly are worth looking at, such as the article [11]. Likewise, for the full characteristic $p$ story, Schwede and Tucker's survey of test ideals [48] is very nice. The survey [42] contains a modern account of both the characteristic $p$ and characteristic zero theory.
2.1. Analytic approach. Approaching singularities from an analytic point of view, we consider how fast the (almost everywhere defined) function $\frac{1}{|f|}$ blows up at a point $x$ in the zero set of $f$. We attempt to measure this singularity via integration. For example, is this function square integrable in a neighborhood of $x$? The integral $\int_{B_\varepsilon(x)} \frac{1}{|f|^2}$ never converges in any small ball around $x$, but we can dampen the rate at which $\frac{1}{|f|}$ blows up by raising it to a small positive power $\lambda$. Indeed, for sufficiently small positive real numbers $\lambda$, depending on $f$, the integral
$$\int_{B_\varepsilon(x)} \frac{1}{|f|^{2\lambda}}$$
is finite, where $B_\varepsilon(x)$ denotes a ball of sufficiently small radius around $x$. As we vary the parameter $\lambda$ from very small positive values to larger ones, there is a critical value at which the function $\frac{1}{|f|^\lambda}$ suddenly fails to be $L^2$ locally at $x$. This is the log canonical threshold or complex singularity exponent of $f$ at $x$. That is,

Definition 2.1. The complex singularity exponent of $f$ (at $x$) is defined as
$$\mathrm{lct}_x(f) = \sup\left\{\lambda > 0 \;\Big|\; \tfrac{1}{|f|^{\lambda}} \text{ is } L^2 \text{ locally at } x\right\}.$$
When the point $x$ is understood, we denote the complex singularity exponent simply by $\mathrm{lct}(f)$.
The following figure depicts the $\lambda$-axis: for small values of $\lambda$ the function $z \mapsto \frac{1}{|f(z)|^\lambda}$ belongs to $L^2$ locally in a neighborhood of a point $x$; for larger $\lambda$ it does not. It is not clear whether or not the function is integrable at the complex singularity exponent itself; we will see shortly that it is not.
This numerical invariant is more commonly known as the log canonical threshold, but it is natural to use the original analytic name when approaching it from the analytic point of view. See Remark 2.11.

Example 2.2. If $f$ is smooth at $x$, then its complex singularity exponent at $x$ is 1. Indeed, in this case the polynomial $f$ can be taken to be part of a system of local coordinates for $\mathbb{C}^n$ at $x$. It is then easy to compute that the integral $\int_{B(x)} \frac{1}{|f|^{2\lambda}}$ always converges on any bounded ball $B(x)$ for any positive $\lambda < 1$. Indeed, this computation is a special case of Example 2.3, below.
Example 2.3. Let $f = z_1^{a_1} \cdots z_N^{a_N}$ be a monomial in $\mathbb{C}[z_1, \ldots, z_N]$, which defines a singularity at the origin in $\mathbb{C}^N$. Let us compute its complex singularity exponent. By definition, we need to integrate $\frac{1}{|z_1^{a_1} \cdots z_N^{a_N}|^{2\lambda}}$ over a ball around the origin. To do so, we use polar coordinates. We have each $|z_i| = r_i$ and each $dz_i \wedge d\bar{z}_i = r_i \, dr_i \wedge d\vartheta_i$. Hence we see that $\int_B \frac{1}{|z_1|^{2a_1\lambda} \cdots |z_N|^{2a_N\lambda}}$ converges in a neighborhood $B$ of the origin if and only if each one-variable integral $\int_0^\varepsilon r_i^{\,1 - 2a_i\lambda} \, dr_i$ converges, that is, if and only if $\lambda < \frac{1}{a_i}$ for each $i$. Thus $\mathrm{lct}_0(f) = \min_i \frac{1}{a_i}$.

If $f$ has "worse" singularities, the function $\frac{1}{|f|}$ will blow up faster and the complex singularity exponent will typically be smaller. In particular, the complex singularity exponent is always less than or equal to the complex singularity exponent of a smooth point, or one.¹ Although it is not obvious, the complex singularity exponent is always a positive rational number. We prove this in the next subsection using Hironaka's theorem on resolution of singularities.
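Before moving on, as a quick sanity check, here is Example 2.3 specialized to concrete exponents (chosen just for illustration):

% lct of f = z_1^2 z_2^3 at the origin, via Example 2.3:
% integrability of 1/|f|^{2λ} separates into one-variable integrals,
\[
\int_B \frac{1}{|z_1|^{4\lambda}\,|z_2|^{6\lambda}}
  \;\sim\; \int_0^\varepsilon r_1^{\,1-4\lambda}\,dr_1
           \int_0^\varepsilon r_2^{\,1-6\lambda}\,dr_2 ,
\]
% which converges iff λ < 1/2 and λ < 1/3, so
\[
\operatorname{lct}_0\!\left(z_1^2 z_2^3\right)
  = \min\left\{\tfrac12,\ \tfrac13\right\} = \tfrac13 .
\]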
2.2. Computing the complex singularity exponent by monomializing. Hironaka's beautiful theorem on resolution of singularities allows us to reduce the computation of the integral in the definition of the complex singularity exponent for any polynomial (or analytic) function to the monomial case. Let us recall Hironaka's theorem (cf. [28]).
Theorem 2.4. Every polynomial (or analytic) function on $\mathbb{C}^N$ has a monomialization. That is, there exists a proper birational morphism $\pi: X \to \mathbb{C}^N$ from a smooth variety $X$ such that both $f \circ \pi$ and $\mathrm{Jac}_{\mathbb{C}}(\pi)$ are monomials (up to unit) in local coordinates locally at each point of $X$.
Since $X$ is a smooth complex variety, it has a natural structure of a complex manifold. Saying that $\pi$ is a morphism of algebraic varieties means simply that it is defined locally in coordinates by polynomial (hence analytic) functions; therefore $\pi$ is also a holomorphic mapping of complex manifolds. The word "proper" in this context can be understood in the usual analytic sense: the preimage of a compact set is compact.² The fact that $\pi$ is birational (meaning it has an inverse on some dense open set) is not relevant at the moment, beyond the fact that the dimension of $X$ is necessarily $N$.
The condition that both $f \circ \pi$ and $\mathrm{Jac}_{\mathbb{C}}(\pi)$ are monomials (up to unit) locally at a point $y \in X$ means that we can find local coordinates $z_1, \ldots, z_N$ at $y$ such that
$$(2.4.1)\qquad f \circ \pi = u\, z_1^{a_1} \cdots z_N^{a_N},$$
and the holomorphic Jacobian³
$$(2.4.2)\qquad \mathrm{Jac}_{\mathbb{C}}(\pi) = v\, z_1^{k_1} \cdots z_N^{k_N},$$
where $u$ and $v$ are some regular (or analytic) functions defined in a neighborhood of $y$ but not vanishing at $y$.
¹ One caveat: the singularity exponent behaves somewhat differently over $\mathbb{R}$, due to the possibility that a polynomial's zeros are hidden over $\mathbb{R}$, so that $\frac{1}{|f|}$ may fail to blow up as expected, or even at all!
² Alternatively, it can be taken in the usual algebraic sense as defined in Hartshorne [23, Ch. 2, Section 4].
³ If we write $\pi$ in local holomorphic coordinates as $(z_1, \ldots, z_N) \mapsto (f_1, \ldots, f_N)$, then $\mathrm{Jac}_{\mathbb{C}}(\pi)$ is the holomorphic function obtained as the determinant of the $N \times N$ matrix $\left(\frac{\partial f_i}{\partial z_j}\right)$.

The properness of the map $\pi$ guarantees that the integral $\int \frac{1}{|f|^{2\lambda}}$ converges in a neighborhood of the point $x$ if and only if the integral $\int \frac{\mathrm{Jac}_{\mathbb{R}}(\pi)}{|f \circ \pi|^{2\lambda}}$ converges in a neighborhood of $\pi^{-1}(x)$, where $\mathrm{Jac}_{\mathbb{R}}(\pi)$ is the (real) Jacobian of the map $\pi$ considered as a smooth map of real $2N$-dimensional manifolds. Recalling that $\mathrm{Jac}_{\mathbb{R}}(\pi) = |\mathrm{Jac}_{\mathbb{C}}(\pi)|^2$ (see [18, pp. 17-18]), and using that $\pi^{-1}(x)$ is compact, Hironaka's theorem reduces the convergence of this integral to a computation with monomials in each of finitely many charts $U$ covering $X$: by (2.4.1) and (2.4.2), the integral over each chart converges if and only if
$$(2.4.4)\qquad k_i - \lambda a_i > -1 \quad \text{for all } i.$$
In particular, remembering that the map was proper, so that only finitely many charts are at issue here, we have:

Corollary 2.5. The complex singularity exponent of a complex polynomial at any point is a rational number.
2.3. Algebro-Geometric Approach. In the world of algebraic geometry, we might attempt to measure the singularities of $f$ by trying to measure the complexity of a resolution of its singularities. Hironaka's theorem can be stated as follows:

Theorem 2.6. Fix a polynomial (or analytic) function $f$ on $\mathbb{C}^N$. There exists a proper birational morphism $\pi: X \to \mathbb{C}^N$ from a smooth variety $X$ such that the pull-back of $f$ defines a divisor $F_\pi$ whose support has simple normal crossings, and which is also in normal crossings with the exceptional divisor (the locus of points on $X$ at which $\pi$ fails to be an isomorphism). Furthermore, the morphism $\pi$ can be assumed to be an isomorphism outside the singular set of $f$.
The proper birational morphism $\pi$ is usually called a log resolution of $f$ in this context. The support of the divisor defined by the pull-back of $f$ is simply the zero set of $f \circ \pi$. The condition that it has normal crossings means that it is a union of smooth hypersurfaces meeting transversely. In more algebraic language, a divisor with normal crossing support is one whose equation can be written as a monomial in local coordinates at each point of $X$. Thus Theorem 2.6 is really just a restatement of Theorem 2.4.⁴

⁴ As stated here, Theorem 2.6 is actually a tiny bit stronger, since the condition that we have simple normal crossings rules out self-crossings. The difference is immaterial to our discussion.
Hironaka actually proved more: such a log resolution can be constructed by a sequence of blowings-up at smooth centers. We might consider the polynomial $f$ to be "more singular" if the number of blowings-up required to resolve $f$, and their relative complexity, is great. However, because there is no canonical way to resolve singularities, we need a way to compare across different resolutions. This is done with the canonical divisor.
2.4. The canonical divisor of a map. Fix a proper birational morphism $\pi: X \to Y$ between smooth varieties. The holomorphic Jacobian (determinant) $\mathrm{Jac}_{\mathbb{C}}(\pi)$ can be viewed as a regular function locally on charts of $X$. Its zero set (counting multiplicity) is the canonical divisor of $\pi$ (or relative canonical divisor of $X$ over $Y$), denoted by $K_\pi$. Because the Jacobian matrix is invertible at $x \in X$ if and only if $\pi$ is an isomorphism there, the canonical divisor of $\pi$ is supported precisely on the exceptional set $E$, which by definition consists of the points in $X$ at which $\pi$ is not an isomorphism. In particular, since it is locally the zero set of this Jacobian determinant, the exceptional set $E$ is always a codimension one subvariety of $X$. Moreover, this exceptional set is more naturally considered as a divisor: we label each of the components of $E$ by the order of vanishing of the Jacobian along it. This is the canonical divisor $K_\pi$. That is,
$$K_\pi = \sum_i k_i E_i,$$
where the sum ranges through all of the components $E_i$ of the exceptional set $E$ and where $k_i$ is the order of vanishing⁵ of $\mathrm{Jac}_{\mathbb{C}}(\pi)$ along $E_i$. Thus we can view the canonical divisor $K_\pi$ as a precise "difference" between the birationally equivalent varieties $X$ and $Y$.
To measure the singularities of a polynomial $f$, consider a log resolution $\pi: X \to \mathbb{C}^N$. The polynomial $f$ defines a simple normal crossing divisor $F_\pi$ on $X$, namely the zero set (with multiplicities) of the regular function $f \circ \pi$:
$$F_\pi = \sum_i a_i D_i,$$
where the $D_i$ range through all irreducible divisors on $X$ and the $a_i$ are the orders of vanishing⁶ of $f \circ \pi$ along each. If we denote the divisor of $f$ in $\mathbb{C}^N$ by $F$, then $F_\pi$ is simply $\pi^* F$. There are two types of divisors in the support of $F_\pi$: the birational transforms $F_i$ of the components of $F$, and exceptional divisors $E_i$. All are smooth. Note that locally in charts, both types of divisors, the $E_i$ and the $F_i$, are defined by some local coordinates $z_i$ on $X$.
Using this language, we examine our computation for the convergence of the integral $\int \frac{1}{|f|^{2\lambda}}$. The condition (2.4.4) that $k_i - \lambda a_i > -1$ is equivalent to the condition that all coefficients of the $\mathbb{R}$-divisor $K_\pi - \lambda F_\pi$ are greater than $-1$. Put differently, the integral $\int \frac{1}{|f|^{2\lambda}}$ converges in a neighborhood of $x$ if and only if the "round up" divisor⁷
$$\lceil K_\pi - \lambda F_\pi \rceil$$
is effective. [Strictly speaking, since we are computing the complex singularity exponent at a particular point $x$, we should throw away any components of $K_\pi - \lambda F_\pi$ whose image on $\mathbb{C}^N$ does not contain $x$; that is, we should consider a log resolution of singularities only in a sufficiently small neighborhood of $x$.]

⁵ These are the same $k_i$ appearing in expression (2.4.2) as we range over all charts of $X$. Note that there are typically many more than $N$ components $E_i$, despite the fact that in expression (2.4.2) we were only seeing at most $N$ of them at a time in each chart.
⁶ Of course, the order of vanishing is zero along any irreducible divisor not in the support of $F$, so the sum is finite. Again, these are the same $a_i$ as in formula (2.4.1); there are typically many divisors in the support of $\pi^* F$ although in formula (2.4.1) we see at most $N$ in each chart.
⁷ Given a divisor $D$ with real coefficients, we define the round up $\lceil D \rceil$ as the integral divisor obtained by rounding up all coefficients of prime divisors to the nearest integer. In the same way, $\lfloor D \rfloor$ is obtained by rounding down.
Again we arrive at the following formula for the complex singularity exponent of $f$ at $x$:

Corollary 2.7. Let $\pi: X \to \mathbb{C}^N$ be a log resolution of the polynomial $f$. If we write
$$K_\pi = \sum_i k_i D_i \quad \text{and} \quad F_\pi = \sum_i a_i D_i,$$
where the $D_i$ range through all irreducible divisors on $X$, then the complex singularity exponent, or the log canonical threshold, of $f$ at a point $x$ is the minimum, taken over all indices $i$ such that $x \in \pi(D_i)$, of the rational numbers
$$\frac{k_i + 1}{a_i}.$$

The complex singularity exponent is better known in algebraic geometry as the log canonical threshold.
Remark 2.8. The condition that $\lceil K_\pi - \lambda F_\pi \rceil$ is effective is independent of the choice of log resolution. This follows from our characterization of the convergence of the integral, but can also be shown directly using the tools of algebraic geometry (see [36]). Although we did not motivate the study of $K_\pi - \lambda F_\pi$ in purely algebro-geometric terms, the $\mathbb{R}$-divisors $K_\pi - \lambda F_\pi$ turn out to be quite natural in birational algebraic geometry, without reference to the integrals. See, for example, [35]. In any case, our discussion shows that the definition of log canonical threshold can be restated as follows:

Definition 2.9. The log canonical threshold of a polynomial $f$ is defined as
$$\mathrm{lct}_x(f) = \sup\{\lambda \in \mathbb{R}_{>0} \mid \lceil K_\pi - \lambda F_\pi \rceil \text{ is effective}\},$$
where $\pi: X \to \mathbb{C}^N$ is any log resolution of $f$ (in a neighborhood of $x$), $K_\pi$ is its relative canonical divisor, and $F_\pi$ is the divisor on $X$ defined by $f \circ \pi$. We can also define the global log canonical threshold by taking $\pi$ to be a resolution at all points, not just in a neighborhood of $x$.
Remark 2.10. Note that, loosely speaking, the more complicated the resolution, the more likely $\lambda$ will have to be small in order to make $K_\pi - \lambda F_\pi$ close to effective. This essentially measures the complexity of the pullback of $f$ to the log resolution. The presence of the $K_\pi$ term accounts for the added multiplicity that would have been present in any resolution, because of the nature of blowing up $\mathbb{C}^N$ to get $X$, thus "standardizing" across different resolutions.
It is also clear from this point of view that $\lceil K_\pi - \lambda F_\pi \rceil$ is always effective for very small (positive) $\lambda$, and that as we enlarge $\lambda$ it stays effective until we suddenly hit the log canonical threshold of $f$, at which point at least one coefficient is exactly negative one.
Remark 2.11. The name log canonical comes from birational geometry. A pair $(Y, D)$, consisting of a smooth variety $Y$ and a $\mathbb{Q}$-divisor $D$ on it, is said to be log canonical if, for any proper birational morphism $\pi: X \to Y$ with $X$ smooth (or equivalently, any fixed log resolution), the divisor $K_\pi - \pi^* D$ has all coefficients $\geq -1$. This condition is independent of the choice of $\pi$ (see [36]). Thus the log canonical threshold of $f$ at $x$ is the supremum over positive $\lambda \in \mathbb{R}$ such that $(\mathbb{C}^n, \lambda\, \mathrm{div}(f))$ is log canonical in a neighborhood of $x$.
Example 2.12. The log canonical threshold of any complex polynomial $f$ is bounded above by one. Indeed, suppose for simplicity that $f$ is irreducible, defining a hypersurface $D$ with isolated singularity at $x$. Let $\pi: X \to \mathbb{C}^N$ be a log resolution of $f$. We have
$$K_\pi = \sum_i k_i E_i,$$
where all the $E_i$ are exceptional, and
$$F_\pi = \pi^* D = \widetilde{D} + \sum_i a_i E_i,$$
where the $E_i$ are exceptional and $\widetilde{D}$ is the birational transform of $D$ on $X$. Then the log canonical threshold is the minimum value of
$$(2.12.1)\qquad \min\left\{1,\ \frac{k_i + 1}{a_i}\right\}$$
as we range through the exceptional divisors of $\pi$. More generally, the argument adapts immediately to show that if $f$ factors into irreducibles as $f = f_1^{b_1} f_2^{b_2} \cdots f_t^{b_t}$, then the log canonical threshold is bounded above by the minimal value of $\frac{1}{b_i}$.
2.5. Computations of Log Canonical Thresholds. The canonical divisor of a morphism plays a starring role in birational geometry, and in particular, as we have seen, in the computation of the log canonical threshold. Before computing some more examples, we isolate two helpful properties of $K_\pi$.
Fact 2.13. Let $\pi: X \to Y$ be the blow-up along a smooth subvariety of codimension $c$ in the smooth variety $Y$. Then the relative canonical divisor is
$$K_\pi = (c - 1)E,$$
where $E$ denotes the exceptional divisor of the blow-up.
Fact 2.14. Consider a sequence of proper birational morphisms $X_3 \xrightarrow{\rho} X_2 \xrightarrow{\pi} X_1$ between smooth varieties. Then $K_{\pi \circ \rho} = K_\rho + \rho^* K_\pi$.

The proofs of both these facts are easy exercises in local coordinates, and are left to the reader.
Example 2.15. A cuspidal singularity. We compute the log canonical threshold of the cuspidal curve $D$ given by $f = x^2 - y^3$ in $\mathbb{C}^2$ (at the origin, its unique singular point). The curve is easily resolved (that is, the polynomial $f$ is easily monomialized) by a sequence of three blowups at points,
$$X_3 \xrightarrow{\psi} X_2 \xrightarrow{\nu} X_1 \xrightarrow{\varphi} \mathbb{C}^2,$$
whose composition we denote by $\pi$, and which create exceptional divisors $E_1$, $E_2$ and $E_3$ respectively.⁸ [Here $\varphi$ is the blowup at the origin, $\nu$ is the blowup of the unique intersection point of the birational transform of $D$ with $E_1$, and $\psi$ is the blowup of the unique intersection point of the birational transform of $D$ on $X_2$ with $E_2$.] There are four relevant divisors on $X_3$ to consider: the three exceptional divisors $E_1$, $E_2$ and $E_3$, and the birational transform $\widetilde{D}$ of $D$ on $X_3$. Using the two facts above, it is easy to compute that
$$K_\pi = E_1 + 2E_2 + 4E_3 \quad \text{and} \quad F_\pi = \widetilde{D} + 2E_1 + 3E_2 + 6E_3.$$
Hence $\mathrm{lct}(f) = \min\left\{1, \frac{1+1}{2}, \frac{2+1}{3}, \frac{4+1}{6}\right\} = \frac{5}{6}$.
Example 2.16. As an exercise, the reader can compute that for $f = x^m - y^n$ with $\gcd(m, n) = 1$, $\mathrm{lct}(f) = \frac{1}{m} + \frac{1}{n}$. The resolution is constructed as in Example 2.15, but may require a few more blowups.
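For instance, instantiating Example 2.16 for the two curves compared in the introduction makes the comparison quantitative:

% Comparing the cusp and the more degenerate curve from the introduction:
\[
\operatorname{lct}(y^2 - x^3) = \tfrac12 + \tfrac13 = \tfrac56,
\qquad
\operatorname{lct}(y^2 - x^{17}) = \tfrac12 + \tfrac1{17} = \tfrac{19}{34},
\]
% so the smaller threshold of y^2 - x^{17} reflects that it is "more singular."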
Example 2.17. Let $f$ be a homogeneous polynomial of degree $d$ in $N$ variables, with an isolated singularity at the origin. Then $\mathrm{lct}(f) = \frac{N}{d}$ if $d \geq N$, and 1 otherwise. Indeed, one readily checks that blowing up the origin, we obtain a log resolution $\pi: X \to \mathbb{C}^N$ with one exceptional component $E$. Using Fact 2.13 above, we compute that
$$K_\pi = (N - 1)E.$$
Also, the divisor $D$ defined by $f$ pulls back to
$$\pi^* D = \widetilde{D} + dE,$$
where again $\widetilde{D}$ denotes the birational transform of $D$ on $X$. Thus $\mathrm{lct}(f) = \min\left\{1, \frac{N}{d}\right\}$.

Remark 2.18. The log canonical threshold describes the singularity but it does not characterize it. For example, the previous example gives numerous non-isomorphic non-smooth points whose log canonical threshold is one, the same as a smooth point.
Remark 2.19. In general it is hard to compute the log canonical threshold, but there are algorithms to compute it in special cases, such as the monomial case ([33]), the toric case ([3]), or the case of two variables ([61]). In all these cases, the reason the log canonical threshold can be computed is that a resolution of singularities can be explicitly understood.
2.6. Multiplier ideals and Jumping Numbers. Our definition of log canonical threshold leads naturally to a family of richer invariants called the multiplier ideals of f , which are ideals in the polynomial ring indexed by the positive real numbers.
Again, multiplier ideals can be defined analytically or algebro-geometrically.
Definition 2.20 (Analytic definition, cf. [11]). Fix $f \in \mathbb{C}[x_1, \ldots, x_n]$. For each $\lambda \in \mathbb{R}^+$, define the multiplier ideal of $f$ as
$$\mathcal{J}(f^\lambda) = \left\{ g \in \mathbb{C}[x_1, \ldots, x_n] \;\Big|\; \frac{|g|^2}{|f|^{2\lambda}} \text{ is locally integrable} \right\}.$$
Thus the multiplier ideals consist of functions that can be used as "multipliers" to make the integral converge. It is easy to check that this set $\mathcal{J}(f^\lambda)$ is in fact an ideal of the ring $\mathbb{C}[x_1, \ldots, x_n]$. Equivalently, we define the multiplier ideal in an algebro-geometric context using a log resolution.
Definition 2.21. The multiplier ideal of $f$ with exponent $\lambda$ is
$$\mathcal{J}(f^\lambda) = \pi_* \mathcal{O}_X(\lceil K_\pi - \lambda F_\pi \rceil),$$
where $\pi: X \to \mathbb{C}^n$ is a log resolution of $f$, $K_\pi$ is its relative canonical divisor, and $F_\pi$ is the divisor on $X$ determined by $f \circ \pi$. These are the polynomials whose pull-backs to $X$ have vanishing no worse than that of the divisor $K_\pi - \lambda F_\pi$.
Recall that if $D$ is a divisor on a smooth variety $X$, the notation $\mathcal{O}_X(D)$ denotes the sheaf of rational functions $g$ on $X$ such that $\mathrm{div}(g) + D$ is effective. Thus in concrete terms, the multiplier ideal is
$$\mathcal{J}(f^\lambda) = \left\{ g \in \mathbb{C}[x_1, \ldots, x_n] \;\Big|\; \mathrm{div}(g \circ \pi) + \lceil K_\pi - \lambda F_\pi \rceil \text{ is effective} \right\}.$$
It is straightforward to check that the argument we gave for translating the analytic definition of the log canonical threshold into algebraic geometry can be used to see that these two definitions of multiplier ideals are equivalent. Moreover, this also shows that Definition 2.21 is independent of the choice of resolution. For a direct algebro-geometric proof, see [38, Chapter 9].
We caution the reader delving deeper into the subject, however, that when the notions of multiplier ideal and log canonical threshold are generalized to divisors on singular ambient spaces, there are situations in which $K_\pi$ is a non-integral $\mathbb{Q}$-divisor. In this case, we cannot assume that $\lceil K_\pi - \lambda F_\pi \rceil = K_\pi - \lfloor \lambda F_\pi \rfloor$.
Proposition 2.23. Fix a polynomial $f$ and view its multiplier ideals $\mathcal{J}(f^\lambda)$ as a family of ideals varying with $\lambda$. Then the following properties hold:
(1) If $\lambda \leq \mu$, then $\mathcal{J}(f^\mu) \subseteq \mathcal{J}(f^\lambda)$.
(2) For sufficiently small $\lambda > 0$, the ideal $\mathcal{J}(f^\lambda)$ is trivial (the whole ring).
(3) The log canonical threshold of $f$ is $\sup\{\lambda \mid \mathcal{J}(f^\lambda) \text{ is trivial}\}$.
(4) For each $\lambda$, there exists $\varepsilon > 0$ such that $\mathcal{J}(f^\lambda) = \mathcal{J}(f^{\lambda'})$ for all $\lambda' \in [\lambda, \lambda + \varepsilon)$. (How small is small enough depends on $\lambda$.)
(5) There is a discrete set of critical values of $\lambda$ at which the multiplier ideal "jumps," that is, $\mathcal{J}(f^{\lambda - \varepsilon}) \supsetneq \mathcal{J}(f^\lambda)$ for every $\varepsilon > 0$.
All of these properties are easy to verify by thinking of what happens with the rounding up of the divisor $K_\pi - \lambda F_\pi$ as $\lambda$ changes. As we imagine starting with a very small $\lambda$ and increasing it, the properties above can be summarized by the following diagram: there are certain critical exponents $c_i$ for which the multiplier ideal "jumps." The critical numbers $\lambda$ described in (5) and denoted by $c_i$ in the diagram give a sequence of numerical invariants refining the log canonical threshold, which is the smallest of these. These jumping numbers are discrete and rational:

Proof. Let $\pi: X \to \mathbb{C}^N$ be a log resolution of $f$, with $K_\pi = \sum k_i D_i$ and $F_\pi = \sum a_i D_i$, as before. It is easy to see that the critical values of $\lceil K_\pi - \lambda F_\pi \rceil$ occur only when some $k_i - \lambda a_i \in \mathbb{Z}$. That is, the jumping numbers are a subset of the numbers $\left\{ \frac{k_i + m}{a_i} \right\}_{m \in \mathbb{N}}$. In particular, they are discrete and rational.
Although an infinite sequence, the jumping numbers are actually determined by finitely many:

Theorem 2.26. Fix a polynomial f as above. Then
\[ \mathcal J(f^{\lambda + 1}) \;=\; f \cdot \mathcal J(f^{\lambda}) \]
for λ ≥ 0. In particular, a positive real number c is a jumping number if and only if c + 1 is a jumping number.
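For instance (a worked illustration of our own, granting the standard fact that the jumping numbers of $f = x^2 - y^3$ in (0, 1] are 5/6 and 1, with $\mathcal J(f^\lambda) = (x, y)$ on [5/6, 1)), Theorem 2.26 propagates this finite data to all of $\mathbb R_+$:
\[
\mathcal J(f^\lambda) \;=\;
\begin{cases}
\mathbb C[x, y] & 0 \leq \lambda < \tfrac56, \\[2pt]
(x, y) & \tfrac56 \leq \lambda < 1, \\[2pt]
f^{\lfloor \lambda \rfloor} \cdot \mathcal J\bigl(f^{\lambda - \lfloor \lambda \rfloor}\bigr) & \lambda \geq 1,
\end{cases}
\]
so the full set of jumping numbers is $\{\tfrac56 + m,\ 1 + m : m \in \mathbb N\}$.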
Thus the jumping numbers are periodic and completely determined by the finite set of jumping numbers less than or equal to 1.

Remark 2.27. It is quite subtle to determine which "candidate jumping numbers" of the form $\frac{k_i + 1}{a_i}$ are actual jumping numbers. When f is a polynomial in 2 variables, there is some very pretty geometry behind understanding this question. See [56], [61].

Example. Let $f = x^2 - y^3$. Then:
(1) $\mathcal J(f^\lambda)$ is trivial for values of λ less than $\tfrac56$.
(2) Using Theorem 2.26, describe the multiplier ideal of $x^2 - y^3$ for any value of λ.

Example. Let $f = x^2 - y^5$. Then:
(1) $\mathcal J(f^\lambda) = R$ for values of λ less than $\tfrac{7}{10}$.

Remark 2.32. The jumping numbers turn out to be related to many other well-studied invariants. For example, it is shown in [15] that the jumping numbers of f in the interval (0, 1] are always (negatives of) roots of the so-called Bernstein-Sato polynomial $b_f$ of f. The jumping numbers can also be viewed in terms of the Hodge spectrum arising from the monodromy action on the cohomology of the Milnor fiber of f [8].
Multiplier ideals have many additional properties and many deep applications which we do not even begin to describe here. Lazarsfeld's book [38] gives an idea of some of these.
3. Positive characteristic: The Frobenius map and F-thresholds
A natural question arises: What about positive characteristic?
Fix a polynomial f over a perfect field k. We wish to measure the singularity of f at some point where it vanishes. For concreteness, and with no essential loss of generality, say the field is $\mathbb F_p$ and the point is the origin, so that
\[ f \in \mathfrak m = (x_1, \dots, x_n) \subseteq \mathbb F_p[x_1, \dots, x_n]. \]
How can we try to define an analog of the log canonical threshold? In characteristic zero, we used real analysis to control the growth of the function $\frac{1}{|f|^\lambda}$ as we approached the singular points of f. But in characteristic p, can we even talk about taking fractional powers of f? Remarkably, the Frobenius map gives us a tool for raising polynomials to non-integer powers, and for considering their behavior near $\mathfrak m$.
3.1. The Frobenius map. Let R be any ring of characteristic p, with no non-zero nilpotent elements.
Definition 3.1. The Frobenius map F is the ring homomorphism
\[ R \xrightarrow{\ F\ } R, \qquad r \mapsto r^p. \]
The image is the subring $R^p$ of p-th powers of R, which is of course isomorphic to R via F (provided R has no non-trivial nilpotents, so that F is injective). Nothing like this is true in characteristic zero. The point is that in characteristic p, the Frobenius map respects addition [$(r+s)^p = r^p + s^p$ for all $r, s \in R$], because the binomial coefficients $\binom{p}{j}$ are congruent to 0 modulo p for every $1 \leq j \leq p-1$. By iterating the Frobenius map we get an infinite chain of subrings of R,
\[ R \supseteq R^p \supseteq R^{p^2} \supseteq R^{p^3} \supseteq \cdots, \]
each isomorphic to R. Alternatively, we can imagine adjoining p-th roots: inside a fixed algebraic closure of the fraction field of R, for example, each element of R has a unique p-th root. Now the ring inclusion $R^p \subseteq R$ is equivalent to the ring inclusion $R \subseteq R^{1/p}$; the Frobenius map gives an isomorphism between these two chains of rings. Iterating, we have an increasing but essentially equivalent chain of rings
\[ R \subseteq R^{1/p} \subseteq R^{1/p^2} \subseteq R^{1/p^3} \subseteq \cdots. \]
Viewing these $R^{1/p^e}$ as R-modules, it turns out that a remarkable wealth of information about singularities is revealed by their R-module structure as $e \to \infty$. Let us consider an example.
Example. Let $R = \mathbb F_p[x_1, \dots, x_n]$. Then R is a free module over its subring $R^p = \mathbb F_p[x_1^p, \dots, x_n^p]$, with basis the monomials $x^A = x_1^{a_1} \cdots x_n^{a_n}$ where $0 \leq a_i \leq p - 1$: given any $g \in R$, there is a unique way to write
\[ g \;=\; \sum_A g_A^{\,p}\, x^A, \qquad g_A \in R. \]
Iterating, we see that
\[ g \;=\; \sum_A g_A^{\,p^e}\, x^A, \]
where now A ranges over n-tuples with $0 \leq a_i \leq p^e - 1$. The freeness of the polynomial ring over its subring of p-th powers is no accident, but rather reflects the fact that the corresponding affine variety is smooth. The Frobenius map can be used to detect singularities quite generally:

Theorem (Kunz). A Noetherian ring R of characteristic p is regular if and only if the Frobenius map $R \xrightarrow{F} R$ is flat.

Let us put Kunz's theorem more concretely in the case we care about, the case where the Frobenius map $R \xrightarrow{F} R$ is finite, that is, when R is finitely generated as an $R^p$-module. (All rings in which we are interested in this survey satisfy this condition: it is easy to check, for example, that if R is finitely generated over a perfect field, or a localization of such, then the Frobenius map is finite; similarly for rings of power series over perfect fields.) In this case, Kunz's theorem says that R is regular if and only if R is a locally free $R^p$-module, or equivalently, if and only if $R^{1/p}$ is locally free as an R-module.
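The decomposition above is easy to compute in examples. Here is a minimal computational sketch (our own illustration, assuming SymPy is available; the helper name is ours), in one variable:

# A minimal sketch (ours, assuming SymPy) of the unique decomposition
# g = sum_A g_A^p * x^A over F_p[x], with 0 <= A <= p - 1, in one variable.
from sympy import GF, Poly, symbols

x = symbols('x')

def frobenius_decompose(g, p):
    """Return {A: g_A} with g = sum_A g_A(x)**p * x**A and 0 <= A < p."""
    parts = {}
    for (e,), c in Poly(g, x, domain=GF(p)).terms():
        A, q = e % p, e // p               # x**e = (x**q)**p * x**A
        parts.setdefault(A, {})[(q,)] = c  # c**(1/p) = c, since c**p = c in F_p
    return {A: Poly.from_dict(d, x, domain=GF(p)) for A, d in parts.items()}

# Example over F_3: x**7 + 2*x**3 + 1 = (2*x + 1)**3 * x**0 + (x**2)**3 * x**1.
print(frobenius_decompose(x**7 + 2*x**3 + 1, 3))

One can check by hand that $(2x+1)^3 = 2x^3 + 1$ over $\mathbb F_3$ (the "freshman's dream"), so the printed decomposition really does reassemble to $x^7 + 2x^3 + 1$.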
This leads to the natural question: if R is not regular, can we use Frobenius to measure its singularities? The answer is a resounding YES. This is the topic of a large and active body of research in "F-singularities," which classifies singularities according to the structure of the chain of R-modules $R \subseteq R^{1/p} \subseteq R^{1/p^2} \subseteq \cdots$. The F-threshold, which we now discuss, is only the beginning of a long and beautiful story.

3.2. The F-threshold. Fix $f \in R = \mathbb F_p[x_1, \dots, x_n]$. For any rational number of the form $c = \frac{a}{p^e}$, we can make sense of $f^c$ as the element $(f^a)^{1/p^e}$ of $R^{1/p^e}$; in other words, $f^c$ is defined for every $c \in \mathbb Z[\frac1p]$. This allows us to "take fractional powers" of polynomials, analogously to what the analysis allowed us to do in Section 1, at least if we restrict ourselves to fractional powers whose denominators are powers of p.
In the analytic setting, we tried to measure how badly the function $\frac{1}{|f|^\lambda}$ "blows up" at the singular point using integrability; this led to the complex singularity index or log canonical threshold. In this characteristic p world, we cannot integrate, nor does even absolute value make sense. Amazingly, however, the most naive possible way to talk about the function $\frac{1}{f^c}$ "blowing up" does lead to a sensible invariant, which turns out to be very closely related to the complex singularity index. Indeed, we can agree that $\frac{1}{f^c}$ certainly does not blow up at any point where the denominator does not vanish.
Recall that each R-module M can be interpreted as a coherent sheaf on the affine scheme Spec R, in which case each element $s \in M$ is interpreted as a section of this coherent sheaf. Grothendieck defined the "value" of a section s at the point $P \in \operatorname{Spec} R$ to be the image of s under the natural map from M to $M \otimes L$, where L is the residue field at P [23, Chap 2.5]. In particular, the "function" $f^c$ (when c is a rational number of the form $\frac{a}{p^e}$) is an element of the R-module $R^{1/p^e}$ (for some e), and as such its "value" at the point $\mathfrak m$ is zero if and only if $f^c \in \mathfrak m R^{1/p^e}$. So, given that integration does not make sense, we can at least look at values of c for which $\frac{1}{f^c}$ "does not blow up at all," and take the supremum over all such c. This extremely naive attempt to mimic the analytic definition then leads to the following definition.
Definition 3.5. The F-threshold of f at the maximal ideal $\mathfrak m = (x_1, \dots, x_n)$ is
\[ FT_{\mathfrak m}(f) \;=\; \sup\Bigl\{ c = \tfrac{a}{p^e} \in \mathbb Z[\tfrac1p] \;\Bigm|\; f^c \notin \mathfrak m R^{1/p^e} \Bigr\}. \]

Amazingly, this appears to be the "right" thing to do! Although we have stated the definition for polynomials over $\mathbb F_p$, any perfect field k, or indeed any field k of characteristic p such that $[k : k^p]$ is finite, works just as well.
Let us check that this definition is independent of how we write c. First note that viewing $f^c$ as an element of the free R-module $R^{1/p^e}$, we can write it uniquely as an R-linear combination of the basis elements $x^{A/p^e}$:
\[ f^c = f^{a/p^e} \;=\; \sum_A r_A\, x^{A/p^e} \tag{3.5.1} \]
for some uniquely determined $r_A \in R$. So an equivalent formulation of the F-threshold of $f \in \mathbb F_p[x_1, \dots, x_n]$ at the maximal ideal $\mathfrak m = (x_1, \dots, x_n)$ is
\[ FT_{\mathfrak m}(f) \;=\; \sup\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; r_A \notin \mathfrak m \ \text{for some } A \Bigr\}, \]
where the coefficients $r_A$ are as in Equation (3.5.1) above.
It is now easy to see that this supremum is independent of the way we write c. That is, if we instead had written $c = \frac{ap}{p^{e+1}}$ and viewed $f^c$ as an element of the larger ring $R^{1/p^{e+1}}$, then when we express $f^c$ uniquely as an R-linear combination of the basis elements $x^{A'/p^{e+1}}$ for $R^{1/p^{e+1}}$, the coefficients $r_{A'}$ that appear are the same elements of R as those appearing in expression (3.5.1).
By Nakayama's Lemma, $f^c \notin \mathfrak m R^{1/p^e}$ if and only if $f^c$ is part of a minimal generating set for the R-module $R^{1/p^e}$ (after localizing at $\mathfrak m$). So equivalently:
\[ FT_{\mathfrak m}(f) \;=\; \sup\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; f^{a/p^e} \ \text{is part of a free basis for } R^{1/p^e} \text{ over } R, \text{ locally at } \mathfrak m \Bigr\}. \]

Example 3.7. The F-threshold of any polynomial in $\mathfrak m$ is always bounded above by one. Indeed, let f be any polynomial in $\mathfrak m$. Since $f^1 = f \cdot 1 \in \mathfrak m R^{1/p^e}$ for all e, we see that $f^1$ is never part of a basis for $R^{1/p^e}$ over R. We must raise f to powers less than one to get a basis element. Thus $FT_{\mathfrak m}(f) \leq 1$ always.
Example 3.8. Assume that f is non-singular at $\mathfrak m$. Then f is part of a regular system of parameters at $\mathfrak m$, that is, one of the minimal generators of the ideal $\mathfrak m$. Changing coordinates, we can assume $f = x_1$. For any $p^e$, note that
\[ f^{\frac{p^e - 1}{p^e}} \;=\; x_1^{(p^e - 1)/p^e} \]
is part of a free basis for $R^{1/p^e}$ over R, for all e ≥ 1. In other words, $FT_{\mathfrak m}(f) \geq \frac{p^e - 1}{p^e}$; taking the limit, we see that the F-threshold at a smooth point is bounded below by one. Combining with the previous example, we conclude that the F-threshold of a smooth point is exactly one.

Example 3.9. Let f = xy. Then $f^{\frac{p^e - 1}{p^e}} = x^{(p^e - 1)/p^e}\, y^{(p^e - 1)/p^e}$ is also part of a basis of the free R-module $R^{1/p^e}$. The same argument applies here to show that $FT_{\mathfrak m}(xy) = 1$.

Example 3.10. In general, $FT_{\mathfrak m}(x_1 \cdots x_\ell) = 1$, which is to say, the F-threshold of a simple normal crossings divisor is always one. In particular, this shows that $FT_{\mathfrak m}(f) = 1$ does not imply that f is non-singular.
Examples 3.7 through 3.10 indicate that the F-threshold has many of the same features as the log canonical threshold. Is the F-threshold capturing exactly "the same" measurement of singularities as the log canonical threshold? If a polynomial has integer coefficients, do we get the same value of the F-threshold modulo p for all p? Of course, the "same" polynomial can be more singular in some characteristics, so we expect not. But does the F-threshold for "large p" perhaps agree with the log canonical threshold? The following example is typical:

Example 3.11. Consider the polynomial $f = x^2 + y^3$, which we can view as a polynomial over any of the fields $\mathbb F_p$ (or $\mathbb C$). Its F-threshold depends on the characteristic:
\[ FT(f) \;=\; \begin{cases} 1/2 & \text{if } p = 2, \\ 2/3 & \text{if } p = 3, \\ 5/6 & \text{if } p \equiv 1 \bmod 6, \\ 5/6 - \frac{1}{6p} & \text{if } p \equiv 5 \bmod 6. \end{cases} \]
Viewing f as a polynomial over C, we computed in Example 2.15 that its log canonical threshold is 5/6. Interestingly, we see that as $p \to \infty$, the F-thresholds approach the log canonical threshold. Also, there are some (in fact, infinitely many) characteristics where the F-threshold agrees with the log canonical threshold. On the other hand, there are other characteristics where the polynomial is "more singular" than expected, as reflected by a smaller F-threshold. For example, in characteristics 2 and 3, of course, we expect "worse" singularities, and indeed, we see the F-threshold is smaller in these cases. But the F-threshold also detects a worse singularity for this curve in characteristics congruent to 5 mod 6, reflecting subtle number theoretic issues in that case (see [44, Question 3.9]). The F-threshold has been computed for many classes of polynomials (see [24]), including any "diagonal" hypersurface $x_1^{a_1} + \cdots + x_n^{a_n}$ (see [26]). There is also an algorithm to compute the F-threshold of any binomial; see [27], [51]. See also [44].
Example 3.13. Let $f \in \mathbb Z[x, y, z]$ be homogeneous of degree 3 with an isolated singularity. In particular, f defines an elliptic curve E in $\mathbb P^2$ over Z. The F-threshold has been computed in this case by B. Bhatt [1]:
\[ FT(f \bmod p) \;=\; \begin{cases} 1 & \text{if } E \text{ is ordinary at } p, \\ 1 - \frac1p & \text{if } E \text{ is supersingular at } p. \end{cases} \]
Again we see that "more singular" polynomials have smaller F-thresholds. As in Example 3.11, there are infinitely many p for which the log canonical threshold and the F-threshold agree, and infinitely many p for which they do not, by a result of Elkies [14].
3.3. Comparison of F-threshold and multiplicity.
To compare the F-threshold with the multiplicity, we rephrase the definition one more time. Let us first recall a well-known notation: for an ideal I in a ring R of characteristic p, let $I^{[p^e]}$ denote the ideal of R generated by the $p^e$-th powers of the elements of I. That is, $I^{[p^e]}$ is the expansion of I under the e-th iterate of Frobenius $R \to R$, $r \mapsto r^{p^e}$.
Definition 3.14. The F-threshold of $f \in k[x_1, \dots, x_n]$ at $\mathfrak m$ is
\[ FT_{\mathfrak m}(f) \;=\; \sup\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; f^a \notin \mathfrak m^{[p^e]} \Bigr\}. \]
This is patently the same as Definition 3.5, simply by raising to the $p^e$-th power.
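Definition 3.14 is directly computable for small p and e. The following brute-force sketch (our own illustration, assuming SymPy; the helper names are ours) computes the truncation $\nu(p^e) = \max\{a : f^a \notin \mathfrak m^{[p^e]}\}$ in two variables, so that $\nu(p^e)/p^e$ approximates the F-threshold from below:

# Brute-force truncated F-threshold: nu(p^e) = max{a : f^a not in (x^q, y^q)},
# where q = p^e; then nu(p^e)/p^e -> FT(f) as e -> infinity.
from sympy import GF, Poly, symbols

x, y = symbols('x y')

def in_frobenius_power(g, q):
    """True iff every monomial of g lies in (x^q, y^q)."""
    return all(ex >= q or ey >= q for ex, ey in g.monoms())

def nu(f, p, e):
    q = p**e
    fp = Poly(f, x, y, domain=GF(p))
    g = Poly(1, x, y, domain=GF(p))
    for a in range(1, q + 1):
        g = g * fp                       # g = f^a over F_p
        if in_frobenius_power(g, q):
            return a - 1                 # f^a is in m^[q], but f^(a-1) is not
    return q  # unreachable for f in m, since FT(f) <= 1

# For f = x^2 + y^3 and p = 7 (p = 1 mod 6), Example 3.11 predicts
# nu(7)/7 = 5/6 - 5/(6*7) = 5/7, that is, nu(7) = 5.
print(nu(x**2 + y**3, 7, 1))  # -> 5

Under the same assumptions, running this with e = 2 should return $(5 \cdot 49 - 5)/6 = 40$, matching the formula from Example 3.11.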
On the other hand, the multiplicity of f at $\mathfrak m$ is defined as the largest n such that $f \in \mathfrak m^n$. It is trivial to check that this is equivalent to
\[ \frac{1}{\operatorname{mult}_{\mathfrak m}(f)} \;=\; \inf\Bigl\{ \tfrac{a}{N} \;\Bigm|\; f^a \in \mathfrak m^{N} \Bigr\}. \]
That is, the formula that computes the F-threshold is similar to the formula that computes the reciprocal of the multiplicity, but with "Frobenius powers" replacing ordinary powers:
\[ FT_{\mathfrak m}(f) \;=\; \inf\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; f^a \in \mathfrak m^{[p^e]} \Bigr\}. \]
It is also not hard to check in all these cases that the infimum (supremum) is in fact a limit.
This similarity allows us to easily prove the following comparison between multiplicity and F-threshold: for any polynomial f in $\mathfrak m \subseteq \mathbb F_p[x_1, \dots, x_N]$,
\[ \frac{1}{\operatorname{mult}_{\mathfrak m}(f)} \;\leq\; FT_{\mathfrak m}(f) \;\leq\; \frac{N}{\operatorname{mult}_{\mathfrak m}(f)}. \]

Proof. Since $\mathfrak m$ is generated by N elements, we have the inclusions
\[ \mathfrak m^{Np^e} \;\subseteq\; \mathfrak m^{[p^e]} \;\subseteq\; \mathfrak m^{p^e} \]
for all e. So we also obviously have inclusions of sets
\[ \bigl\{ \tfrac{a}{p^e} \bigm| f^a \in \mathfrak m^{Np^e} \bigr\} \;\subseteq\; \bigl\{ \tfrac{a}{p^e} \bigm| f^a \in \mathfrak m^{[p^e]} \bigr\} \;\subseteq\; \bigl\{ \tfrac{a}{p^e} \bigm| f^a \in \mathfrak m^{p^e} \bigr\}. \]
Taking infima, we have
\[ \frac{1}{\operatorname{mult}_{\mathfrak m}(f)} \;\leq\; FT_{\mathfrak m}(f) \;\leq\; \inf\bigl\{ \tfrac{a}{p^e} \bigm| f^a \in \mathfrak m^{Np^e} \bigr\}. \]
Since the infimum on the right can be interpreted as N times $\inf\{\frac{a}{Np^e} \in \mathbb Z[\frac1p] \mid f^a \in \mathfrak m^{Np^e}\} = \frac{N}{\operatorname{mult}_{\mathfrak m}(f)} \cdot \frac{1}{N} \cdot N$, that is, as $\frac{N}{\operatorname{mult}_{\mathfrak m}(f)}$, the result is proved.
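As a quick sanity check (ours): for $f = x^2 + y^3$ in two variables, $\operatorname{mult}_{\mathfrak m}(f) = 2$ and N = 2, so the bounds read
\[ \tfrac12 \;\leq\; FT_{\mathfrak m}(f) \;\leq\; 1, \]
consistent with the values listed in Example 3.11, all of which lie in $[\tfrac12, \tfrac56]$.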
3.4. Computing F-thresholds. To get a feeling for how to compute F-thresholds, we begin the computation of Example 3.11, relegating the details to [24]. First recall that $\mathbb F_p[x, y]$ is a free module over $\mathbb F_p[x^{p^e}, y^{p^e}]$ with basis $\{x^{a_1} y^{a_2}\}_{0 \leq a_1, a_2 \leq p^e - 1}$. By definition,
\[ FT(f) \;=\; \sup\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; f^a \notin (x, y)^{[p^e]} = (x^{p^e}, y^{p^e}) \Bigr\}. \]
For each a, we expand using the binomial theorem to get
\[ (x^2 + y^3)^a \;=\; \sum_{i=0}^{a} \binom{a}{i}\, x^{2i}\, y^{3(a-i)}. \]
Note that none of the terms in this expression can cancel, since each has a unique bi-degree in x and y, unless its corresponding binomial coefficient is zero. So, thinking over $\mathbb F_p[x^{p^e}, y^{p^e}]$, we see that $FT(f) \geq \frac{a}{p^e}$ if and only if there is an index i such that
\[ 2i < p^e, \qquad 3(a - i) < p^e, \qquad \text{and} \qquad \binom{a}{i} \not\equiv 0 \pmod p. \tag{$*$} \]
Note that if $2a < p^e$, then the index i = a fulfills the three conditions in ($*$). This, in particular, implies that $FT(f) \geq \frac12$, independent of p. If the characteristic is 2, it is easy to see immediately that $FT(f) = \frac12$. On the other hand, if $\frac{a}{p^e} \geq \frac56$, then condition ($*$) is never satisfied, so that $FT(f) \leq \frac56$, independent of p. Indeed, in this case, for every index i, either $2i \geq p^e$ or $3(a - i) \geq p^e$: for otherwise, we would have both
\[ 2i \leq p^e - 1 \qquad \text{and} \qquad 3(a - i) \leq p^e - 1, \]
in which case, adding the resulting bounds $i \leq \frac{p^e - 1}{2}$ and $a - i \leq \frac{p^e - 1}{3}$, we have that $a \leq \frac{p^e - 1}{2} + \frac{p^e - 1}{3}$. This implies that $\frac{a}{p^e} \leq \frac56 - \frac{5}{6p^e}$, a contradiction.
For the exact computation of the F-threshold, it remains to analyze the binomial coefficients in the critical terms, in which both $2i < p^e$ and $3(a - i) < p^e$. These are the terms indexed by i satisfying $a - \frac{p^e}{3} < i < \frac{p^e}{2}$. For this, it is crucial to understand the behavior of binomial coefficients modulo p. One of the main tools is the following theorem:

Theorem 3.17 (Lucas, [39]). Fix non-negative integers $m \geq n$ and a prime number p. Write m and n in their base p expansions: $m = \sum_{j=0}^{r} m_j p^j$ and $n = \sum_{j=0}^{r} n_j p^j$. Then, modulo p,
\[ \binom{m}{n} \;\equiv\; \binom{m_0}{n_0}\binom{m_1}{n_1}\cdots\binom{m_r}{n_r}, \]
where we interpret $\binom{a}{b}$ as zero if a < b. In particular, $\binom{m}{n}$ is non-zero mod p if and only if $m_j \geq n_j$ for all $j = 0, 1, \dots, r$.
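Lucas' theorem is a one-liner to implement; the following sketch (ours, using only the Python standard library) computes $\binom{m}{n}$ mod p digit by digit and spot-checks it against direct computation:

# Lucas' theorem: binom(m, n) mod p is the product of binom(m_j, n_j) mod p
# over the base-p digits m_j, n_j of m and n.
from math import comb

def binom_mod_p(m, n, p):
    result = 1
    while m or n:
        m, mj = divmod(m, p)
        n, nj = divmod(n, p)
        result = result * comb(mj, nj) % p   # comb(a, b) = 0 when b > a
    return result

# Spot checks against direct computation.
for m, n, p in [(10, 4, 3), (100, 37, 7), (5, 3, 7)]:
    assert binom_mod_p(m, n, p) == comb(m, n) % p
print(binom_mod_p(5, 3, 7))  # binom(5, 3) = 10, which is 3 mod 7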
Thus, to compute the F-threshold of $x^2 + y^3$, it is helpful to write a in its base p expansion, and then try to understand, using Lucas's theorem, whether there exist values of i in the critical range for which $\binom{a}{i}$ is not zero. For example, if $p \equiv 1 \bmod 6$, then $5p^e \equiv 5 \bmod 6$, so the number $a = \frac{5p^e - 5}{6}$ is an integer. With this choice of a, it is not hard to check that the conditions ($*$) are satisfied for the index $i = \frac{p^e - 1}{2}$. This shows that when $p \equiv 1 \bmod 6$, the F-threshold of $x^2 + y^3$ is at least $\frac{a}{p^e} = \frac56 - \frac{5}{6p^e}$ for all e. It follows that for $p \equiv 1 \bmod 6$, the F-threshold of $x^2 + y^3$ is exactly $\frac56$. The details, as well as the computation for other p, are carried out in [44, Example 4.3] or [24, Example 8.2].
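To make this concrete, here is our own verification of the smallest case, p = 7 and e = 1: then $a = \frac{5 \cdot 7 - 5}{6} = 5$ and $i = \frac{7-1}{2} = 3$, and indeed
\[ 2i = 6 < 7, \qquad 3(a - i) = 6 < 7, \qquad \binom{5}{3} = 10 \equiv 3 \not\equiv 0 \pmod 7, \]
so ($*$) holds and $FT(x^2 + y^3) \geq \frac57 = \frac56 - \frac{5}{42}$ in characteristic 7, matching Example 3.11.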
3.5. Comparison of F-thresholds and log canonical thresholds. Once F-thresholds and log canonical thresholds are defined, it is natural to compare them when possible. This is the case when $f \in \mathbb Z[x_1, \dots, x_n]$: we can view f as a polynomial over C and compute its log canonical threshold, and after reduction modulo p, we can calculate the F-threshold of f mod p in $\mathbb F_p[x_1, \dots, x_n]$.
Question: For which values of p is lct(f ) = F T (f mod p)? What happens when p ≫ 0?
The following theorem provides a partial answer:

Theorem 3.18. Let $f \in \mathbb Z[x_1, \dots, x_n]$. Then for all $p \gg 0$,
\[ FT(f \bmod p) \;\leq\; \operatorname{lct}(f), \qquad \text{and} \qquad \lim_{p \to \infty} FT(f \bmod p) \;=\; \operatorname{lct}(f). \]

The proof of this theorem is the culmination mainly of the work of the Japanese school of tight closure, who generalized the theory of tight closure to the case of pairs. The first important step was [21, Theorem 3.3] of Hara and Watanabe, with Theorem 3.18 essentially following from Theorem 6.8 of the paper [22] of Hara and Yoshida; the proof there in turn generalizes the proofs given in [20] and [55] in the non-relative case to pairs.
Open Problem: Are there infinitely many primes p for which FT(f mod p) = lct(f)?
A positive answer to this question would settle a long-standing conjecture in F-singularities: every log canonical pair (X, D), where X is smooth (over C), is of "F-pure type." Daniel Hernandez shows that this is the case for a "very general" polynomial f in $\mathbb C[x_1, \dots, x_n]$ [25]. Versions of this question have been around since the early eighties, for example as early as Rich Fedder's thesis in 1983 [17], when similarities between Hochster and Roberts's notion of "F-purity" and rational singularities began to emerge. Watanabe pointed out that log canonicity ought to correspond to F-purity, circulating a preprint in the late eighties proving that "F-pure implies log canonical" for rings. He did not publish this result for several years; eventually, together with Hara, he introduced a notion of F-purity for pairs and proved the corresponding result for a pair (X, D). The field of "F-singularities of pairs," including the F-pure threshold, developed rapidly in Japan, with numerous papers of Watanabe, Hara, Takagi, Yoshida, and others filling in the theory. The question in the form stated here may have first appeared in print in [44].
3.6. Test ideals and F-thresholds. We wish to construct a family of ideals associated to a polynomial $f \in \mathbb F_p[x_1, \dots, x_n] = R$, say $\tau(f^c)$, called test ideals, which are analogous to the multiplier ideals.
We first restate the definition of F -threshold yet one more time, in a way that will make the definition of the test ideals very natural.
Definition 3.19. The F-threshold of f at $\mathfrak m$ is
\[ FT_{\mathfrak m}(f) \;=\; \sup\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; \phi\bigl(f^{a/p^e}\bigr) \notin \mathfrak m \ \text{for some } \phi \in \operatorname{Hom}_R(R^{1/p^e}, R) \Bigr\}. \]

To see that this is equivalent to the previous definitions, we apply the following simple lemma in the case where $J = \mathfrak m$:

Lemma 3.20. Fix an ideal $J \subseteq R$ and an element $f^c \in R^{1/p^e}$. Then $f^c \in J R^{1/p^e}$ if and only if $\phi(f^c) \in J$ for every R-linear map $\phi: R^{1/p^e} \to R$.

Proof. Suppose that $f^c \notin J R^{1/p^e}$. Then in writing $f^c$ uniquely in some basis for $R^{1/p^e}$ as in expression (3.5.1), there is some coefficient $r_A \notin J$. If we let φ be the projection onto this direct summand, we have a map φ satisfying the required conditions. Conversely, if $f^c \in J R^{1/p^e}$, then the R-linearity of φ forces $\phi(f^c) \in J$ for any $\phi \in \operatorname{Hom}_R(R^{1/p^e}, R)$. So every such φ must send $f^c$ to an element of J.
With this definition of F-threshold in mind, it is quite natural to define test ideals, at least for certain c. First note that for each $f \in \mathbb F_p[x_1, \dots, x_n]$ and each $c \in \mathbb Z[\frac1p]$, we have a natural R-module map
\[ \operatorname{Hom}_R(R^{1/p^e}, R) \;\longrightarrow\; R, \qquad \phi \mapsto \phi(f^c), \]
where e is chosen so that $c = \frac{a}{p^e}$ for some natural number a.

Definition 3.21. The test ideal is the image of this map:
\[ \tau(f^c) \;=\; \bigl\{ \phi\bigl(f^{a/p^e}\bigr) \;\bigm|\; \phi \in \operatorname{Hom}_R(R^{1/p^e}, R) \bigr\}. \]
In practical terms, if we write $f^c = f^{a/p^e}$ uniquely as in Expression (3.5.1), then $\tau(f^c)$ is generated by the coefficients $r_A$ which appear in this expression. Note that this is independent of the way we write c. Indeed, if we instead think of c as $\frac{ap}{p^{e+1}}$, then the expression for $f^c$ becomes
\[ f^c \;=\; \sum_A r_A\, x^{pA/p^{e+1}}, \]
which is a valid expression for $f^c$ as an element of the free R-module $R^{1/p^{e+1}}$, since the monomials $x^{pA/p^{e+1}}$ are still part of a free basis for $R^{1/p^{e+1}}$.
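In this "coefficients of $f^{a/p^e}$" form, the test ideal is easy to compute by machine. Here is a naive sketch (our own illustration, assuming SymPy; the function name is ours), in two variables:

# Generators of tau(f^{a/p^e}): write f^a = sum_A r_A^q x^A over the basis
# monomials x^A with exponents 0 <= A < q (componentwise), q = p^e;
# the coefficients r_A generate the test ideal.
from collections import defaultdict
from sympy import GF, Poly, symbols

x, y = symbols('x y')

def test_ideal_generators(f, a, p, e):
    q = p**e
    fa = Poly(f, x, y, domain=GF(p))**a
    parts = defaultdict(dict)
    for (i, j), c in fa.terms():
        # x^i y^j = (x^(i//q) y^(j//q))^q * x^(i%q) y^(j%q); c^(1/q) = c in F_p
        parts[(i % q, j % q)][(i // q, j // q)] = c
    return [Poly.from_dict(d, x, y, domain=GF(p)) for d in parts.values()]

# tau((x^2 + y^3)^(5/7)) over F_7: print a generating set of coefficients r_A.
for r in test_ideal_generators(x**2 + y**3, 5, 7, 1):
    print(r.as_expr())

For this example the printed generators include a nonzero constant, so the test ideal is the unit ideal; this is consistent with 5/7 lying below the F-threshold 5/6 in characteristic 7.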
Remark 3.22. The test ideal $\tau(f^{a/p^e})$ is the smallest ideal J such that $f^{a/p^e} \in J R^{1/p^e}$. Indeed, Lemma 3.20 can be reinterpreted as saying that if there is some ideal J of R such that $f^{a/p^e} \in J R^{1/p^e}$, then $\tau(f^{a/p^e}) \subseteq J$.
Although we have not yet defined test ideals with arbitrary real exponents c, let us pause and see whether, at least for $c \in \mathbb Z[1/p]$, we have a collection of ideals $\{\tau(f^c)\}$ satisfying the desired basic properties analogous to Proposition 2.23. That is, we want:
(1) τ(f^c) is the unit ideal for sufficiently small positive c;
(2) larger exponents give smaller (deeper) test ideals.
The second of these is made precise by the following two lemmas.

Lemma 3.23. If $c \leq c'$ are elements of $\mathbb Z[1/p]$, then $\tau(f^{c'}) \subseteq \tau(f^c)$.

Lemma 3.24. If $c \in \mathbb Z[1/p]$, then there exists ε > 0 such that $\tau(f^{c'}) = \tau(f^c)$ for every $c' \in \mathbb Z[1/p]$ with $c \leq c' < c + \varepsilon$.

Lemma 3.23 encourages us to define τ(f^c) for any positive real c by approximating c by a sequence of numbers in $\mathbb Z[\frac1p]$ converging to c from above, and taking advantage of the Noetherian property of the ring. That is, we take any monotone decreasing sequence of numbers in $\mathbb Z[\frac1p]$,
\[ c_1 > c_2 > c_3 > \cdots, \]
converging to c. There is a corresponding increasing sequence of test ideals:
\[ \tau(f^{c_1}) \;\subseteq\; \tau(f^{c_2}) \;\subseteq\; \tau(f^{c_3}) \;\subseteq\; \cdots. \]
Because R is Noetherian, this chain of ideals must stabilize. Since any other strictly decreasing sequence converging to c is cofinal with this one (meaning that, if $\{c'_i\}$ is some other sequence, then for all i there exists j such that $c_i > c'_j$, and vice versa), it is easy to check that the stable ideal is independent of the choice of approximating sequence. So we have the following definition:

Definition 3.25. For a positive real number c, define
\[ \tau(f^c) \;:=\; \bigcup_n \tau(f^{c_n}) \;=\; \tau(f^{c_n}) \ \text{ for } n \gg 0, \]
where $\{c_n\}_{n \in \mathbb N}$ is any decreasing sequence of rational numbers in $\mathbb Z[1/p]$ approaching c. In particular, $\tau(f^c)$ can be computed as $\tau(f^{\lceil c p^n \rceil / p^n})$ for $n \gg 0$.
There is one slight ambiguity to address: if the real number c happens to be a rational number whose denominator is a power of p, then we have already defined τ(f^c) in Definition 3.21. Do the two definitions produce the same ideal in this case? That is, we need to check that, if we had instead approximated $c = a/p^e$ by a sequence $\{c_n\}_{n \in \mathbb N} \subset \mathbb Z[1/p]$ converging to $a/p^e$ from above, then the ideals $\tau(f^{c_n})$ stabilize to $\tau(f^{a/p^e})$. But this is essentially the content of Lemma 3.24. So test ideals are well-defined for any positive real number.
As before with multiplier ideals, the following follows easily from the definition:

Proposition 3.26. Fix a polynomial $f \in \mathbb F_p[x_1, \dots, x_n]$ and view its test ideals τ(f^c) as a family of ideals varying with $c \in \mathbb R_+$. Then the following properties hold:
(1) For $c \in \mathbb R_+$ sufficiently small, τ(f^c) is the unit ideal.
(2) If $c \leq c'$, then $\tau(f^{c'}) \subseteq \tau(f^c)$.
(3) $f \in \tau(f^c)$ whenever $c \leq 1$.
(4) For each fixed c, we have τ(f^c) = τ(f^{c+ε}) for sufficiently small positive ε (how small is small enough depends on c).
The proposition is summarized by a picture of the c-axis divided into half-open intervals on which the test ideal remains constant. This leads naturally to the F-jumping numbers:

Definition 3.27. The F-jumping numbers of f are the real numbers $c_i \in \mathbb R_+$ for which $\tau(f^{c_i}) \neq \tau(f^{c_i - \varepsilon})$ for every ε > 0.
It is not hard to see that test ideals and F-jumping numbers enjoy many of the same properties as do the multiplier ideals and jumping numbers defined in characteristic zero. First we have an analog of the Briançon-Skoda theorem. (Starting in [38], the name of this theorem, which belongs to a collection of inter-related theorems comparing powers of an ideal to its integral closure, has sometimes been shortened to "Skoda's theorem." We follow here the tradition in commutative algebra of including Briançon's name.)

Theorem 3.28. For every real number $c \geq 0$,
\[ \tau(f^{c+1}) \;=\; f \cdot \tau(f^c). \]

Proof. Without loss of generality, we may replace both c and c + 1 by rational numbers in $\mathbb Z[\frac1p]$ approximating each from above. Thus we may assume $c = a/p^e$ for some $a, e \in \mathbb N$.
For any R-linear map $\phi: R^{1/p^e} \to R$, it is clear that
\[ \phi\bigl(f^{(a + p^e)/p^e}\bigr) \;=\; \phi\bigl(f \cdot f^{a/p^e}\bigr) \;=\; f\, \phi\bigl(f^{a/p^e}\bigr), \]
so that $\tau(f^{c+1}) = f \cdot \tau(f^c)$.

Like the jumping numbers in characteristic zero, the F-jumping numbers are discrete and rational:

Theorem 3.29. The F-jumping numbers of a polynomial $f \in \mathbb F_p[x_1, \dots, x_n]$ form a discrete set of rational numbers.

Interestingly, the proof of the corresponding characteristic zero statement follows trivially from the (algebro-geometric) definition of multiplier ideals, while the characteristic p proof took some time to find.
The key step is the following lemma:

Lemma 3.30. If c is an F-jumping number of f, then pc is also an F-jumping number of f.

Proof. Suppose c is an F-jumping number. For e ≫ 0, choose $\frac{a}{p^e} < c \leq \frac{b}{p^e}$, both approaching c, so that $\tau(f^{b/p^e}) \subsetneq \tau(f^{a/p^e})$. Write
\[ f^{b/p^e} \;=\; \sum_B r_B\, x^{B/p^e} \]
uniquely as in (3.5.1), so that $\tau(f^{b/p^e})$ is generated by the coefficients $r_B$. Raising to the power p gives that
\[ f^{b/p^{e-1}} \;=\; \sum_B r_B^{\,p}\, x^{B/p^{e-1}}, \]
which means, by Lemma 3.20, that every R-linear map $\phi: R^{1/p^{e-1}} \to R$ sends $f^{b/p^{e-1}}$ to an element of the ideal generated by the $r_B^p$. In other words,
\[ \tau\bigl(f^{b/p^{e-1}}\bigr) \;\subseteq\; \bigl(r_B^p \mid B\bigr). \tag{3.30.1} \]
In contrast, since $\tau(f^{b/p^e}) \subsetneq \tau(f^{a/p^e})$, we know that $f^{a/p^e} \notin (r_B \mid B)\, R^{1/p^e}$. Raising to p-th powers again, we have that $f^{a/p^{e-1}} \notin (r_B^p \mid B)\, R^{1/p^{e-1}}$. But then Equation (3.30.1) forces $f^{a/p^{e-1}} \notin \tau\bigl(f^{b/p^{e-1}}\bigr)\, R^{1/p^{e-1}}$. By Lemma 3.20, it follows that there is an R-module homomorphism $\phi: R^{1/p^{e-1}} \to R$ such that $\phi(f^{a/p^{e-1}})$ is not in $\tau(f^{b/p^{e-1}})$. But this exactly means that there is an element of $\tau(f^{a/p^{e-1}})$ that is not in $\tau(f^{b/p^{e-1}})$. Letting $a/p^e$ and $b/p^e$ go to c, we get that pc is a jumping number as well as c.
Proof of Theorem 3.29. To prove discreteness, we fix an f of degree d and $c = \frac{a}{p^e}$. We claim that τ(f^c) is generated by elements of degree at most ⌊cd⌋. Indeed, τ(f^c) is generated by the coefficients $r_A$ appearing in $f^c = \sum_A r_A x^{A/p^e} \in R^{1/p^e}$; since $f^c$ has degree dc, each $r_A \in R$ has degree at most ⌊cd⌋. This proves the claim. Now assume that the F-jumping numbers of f, say $\alpha_1 < \alpha_2 < \cdots$, were clustering to some α. Without loss of generality, each of the $\alpha_i$ can be assumed to lie in $\mathbb Z[\frac1p]$. By the definition of F-jumping number,
\[ \tau(f^{\alpha_1}) \;\supsetneq\; \tau(f^{\alpha_2}) \;\supsetneq\; \tau(f^{\alpha_3}) \;\supsetneq\; \cdots. \tag{3.30.2} \]
The previous claim ensures that each $\tau(f^{\alpha_i})$ is generated in degree at most $\lfloor d\alpha_i \rfloor \leq \lfloor d\alpha \rfloor = D$. Now intersect each of these test ideals with the finite-dimensional vector space $V \subseteq \mathbb F_p[x_1, \dots, x_n]$ consisting of polynomials of degree at most D. The sequence
\[ \tau(f^{\alpha_1}) \cap V \;\supseteq\; \tau(f^{\alpha_2}) \cap V \;\supseteq\; \cdots \]
stabilizes, since V is finite dimensional. Hence (3.30.2) also stabilizes, which contradicts the clustering of the F-jumping numbers.
To prove the rationality of the F-jumping numbers, let $c \in \mathbb R$ be an F-jumping number. Then for all $e \in \mathbb N$, the real numbers $p^e c$ are also F-jumping numbers, by Lemma 3.30. One can write $p^e c = \lfloor p^e c \rfloor + \{p^e c\}$, where the fractional part $\{p^e c\}$ is also an F-jumping number (using the periodicity coming from Theorem 3.28). By the discreteness of the F-jumping numbers, it follows that $\{p^e c\} = \{p^{e'} c\}$ for some $e \neq e'$ in N, and hence $p^e c - p^{e'} c = m \in \mathbb Z$. Thus $c = \frac{m}{p^e - p^{e'}}$ is rational.

Remark 3.31. Test ideals and multiplier ideals can be defined not just for one polynomial, but for any ideal $\mathfrak a$ in any polynomial ring, and even for sheaves of ideals on (certain) singular ambient schemes. While not much more complicated than what we have introduced here, we refer the interested reader to the literature for this generalization. Many properties of multiplier ideals (see [38, Chapter 9]) can be directly proven, or adapted, for test ideals. For example, the fact that the F-jumping numbers are discrete and rational holds more generally, essentially for all ideals in normal Q-Gorenstein ambient schemes [6]. In addition to the few properties discussed here, other properties of multiplier ideals that have analogs for test ideals include the "restriction theorem," the "subadditivity theorem," the "summation theorem" [22], [59], and the behavior of test ideals under finite morphisms [49]. Interestingly, some of the more difficult properties to prove in characteristic zero turn out to be extraordinarily simple in characteristic p. For example, in characteristic zero, the proof of the Briançon-Skoda theorem (for ideals that are not necessarily principal) uses the "local vanishing theorem" (see [38]). Although this vanishing theorem fails in characteristic p, the characteristic p analog of the Briançon-Skoda theorem (that is, Theorem 3.28 for non-principal ideals) is nonetheless true, and in fact quite simple to prove directly from the definition. On the other hand, some properties of multiplier ideals that follow immediately from the definition in terms of resolution of singularities turn out to be false for test ideals. For example, while multiplier ideals are easily seen to be integrally closed, test ideals are not. In fact, every ideal in a polynomial ring is the test ideal $\tau(\mathfrak a^\lambda)$ for some ideal $\mathfrak a$ and some positive $\lambda \in \mathbb R$, as shown in [43].
3.7. An interpretation of F-thresholds and test ideals using differential operators.
Our definition of F-threshold can be viewed as a measure of singularities using differential operators. The point is that a differential operator on a ring R of characteristic p is precisely the same thing as an $R^{p^e}$-linear map $R \to R$, for some e.
Differential operators can be defined quite generally. Let A be any base ring, and R a commutative A-algebra. Grothendieck defined the ring of A-linear differential operators of R using a purely algebraic approach (see [19]), which in the case where A = k is a field and R is a polynomial ring over k results in the "usual" differential operators.
Definition 3.32. Let R be a commutative A-algebra. The ring $D_A(R)$ of A-linear differential operators is the subring of the (non-commutative) ring $\operatorname{End}_A(R)$ obtained as the union of the A-submodules $D^n_A(R)$ of differential operators of order at most n, where $D^n_A(R)$ is defined inductively as follows. First, the zeroth-order operators $D^0_A(R)$ are the elements $r \in R$, interpreted as the A-linear endomorphisms "multiplication by r"; that is, $r: R \to R$ sends $x \mapsto rx$. Then, for n > 0,
\[ D^n_A(R) \;=\; \bigl\{ \delta \in \operatorname{End}_A(R) \;\bigm|\; [\delta, r] \in D^{n-1}_A(R) \ \text{for all } r \in R \bigr\}, \]
where $[\delta, r] = \delta \circ r - r \circ \delta$ denotes the commutator.

Example 3.34. If k has characteristic zero and R is the polynomial ring $k[x_1, \dots, x_n]$, then $D_k(R)$ is the Weyl algebra $k\langle x_1, \dots, x_n, \partial_1, \dots, \partial_n \rangle$, where each $\partial_i$ denotes the derivation $\frac{\partial}{\partial x_i}$. This is the non-commutative subalgebra of $\operatorname{End}_k(R)$ generated by the $\frac{\partial}{\partial x_i}$ and multiplication by the $x_j$.

Example 3.35. In characteristic p, the differential operators on $k[x_1, \dots, x_n]$ are essentially "the same" as in Example 3.34 as k-vector spaces, but not as rings. For example, if k has characteristic p, the operator obtained by composing the first-order operator $\frac{\partial}{\partial x_i}$ with itself p times is the zero operator. Nonetheless, there is a differential operator $\frac{1}{p!}\frac{\partial^p}{\partial x_i^p}$ sending $x_i^p$ to 1, which is not a composition of lower-order operators but which essentially has the same effect as the corresponding composition in characteristic zero. In particular, a k-basis for $D_k(k[x_1, \dots, x_n])$, where k has characteristic p, is given by the operators
\[ x_1^{i_1} \cdots x_n^{i_n}\; \frac{1}{j_1!}\frac{\partial^{j_1}}{\partial x_1^{j_1}} \cdots \frac{1}{j_n!}\frac{\partial^{j_n}}{\partial x_n^{j_n}} \]
as the $j_\ell$ and $i_\ell$ range over all non-negative integers. In characteristic p, $D_k(k[x_1, \dots, x_n])$ is not finitely generated.
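The divided-power operators act on monomials by $\frac{1}{j!}\frac{\partial^j}{\partial x^j}(x^k) = \binom{k}{j} x^{k-j}$, which makes sense over $\mathbb F_p$ even when $j \geq p$. Here is a tiny sketch of this action (ours, in one variable, using only the Python standard library):

# Action of the divided-power operator D^(j) = (1/j!) d^j/dx^j on F_p[x]:
# D^(j)(x^k) = binom(k, j) x^(k - j), read modulo p.
from math import comb

def divided_power(poly, j, p):
    """poly: dict {k: c} representing sum c*x^k over F_p; returns D^(j)(poly)."""
    out = {}
    for k, c in poly.items():
        b = comb(k, j) % p if k >= j else 0
        if b:
            out[k - j] = (out.get(k - j, 0) + c * b) % p
    return out

p = 5
print(divided_power({p: 1}, p, p))   # D^(p)(x^p) = {0: 1}, i.e., the constant 1
print(divided_power({p: 1}, 1, p))   # d/dx (x^p) = p x^(p-1) = 0 over F_p, so {}

The two printed lines illustrate both claims of Example 3.35: the p-th divided power sends $x^p$ to 1, while the p-fold composition of $\frac{\partial}{\partial x}$ kills it.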
The following alternate interpretation of differential operators in characteristic p ties into the definition of F-threshold.
Proposition 3.36. Let R be any ring of prime characteristic p such that the Frobenius map is finite (the main case for us: $R = \mathbb F_p[x_1, \dots, x_n]$). Then an $\mathbb F_p$-linear map $R \xrightarrow{\delta} R$ is a differential operator if and only if it is linear over some subring of $p^e$-th powers. That is,
\[ D_{\mathbb F_p}(R) \;=\; \bigcup_{e \geq 0} \operatorname{End}_{R^{p^e}}(R). \]

Proof. This is not a difficult fact to prove. It first appears in [63]; see [57] for a detailed proof in this generality.
This gives us an alternate filtration of $D_{\mathbb F_p}(R)$ by "Frobenius order": $D_{\mathbb F_p}(R)$ is the union of the chain
\[ \operatorname{End}_R(R) \;\subseteq\; \operatorname{End}_{R^p}(R) \;\subseteq\; \operatorname{End}_{R^{p^2}}(R) \;\subseteq\; \cdots. \]
Alternatively, by taking $p^e$-th roots, we can interpret the ring of differential operators as the union of the modules $\operatorname{End}_R(R^{1/p^e})$. Using this filtration, we can give an alternative definition of the test ideals and the F-threshold in terms of differential operators: for instance,
\[ FT_{\mathfrak m}(f) \;=\; \sup\Bigl\{ \tfrac{a}{p^e} \;\Bigm|\; \delta(f^a) \notin \mathfrak m \ \text{for some } \delta \in \operatorname{End}_{R^{p^e}}(R) \Bigr\}. \]
Note that this interprets the F-threshold as very much like the multiplicity: it is the maximal (Frobenius) order of a differential operator which, applied to (a power of) f, produces a non-vanishing function. However, here the operators are filtered using Frobenius.
Similarly, the test ideal is the image of $f^{a/p^e}$ under all differential operators on $R^{1/p^e}$ whose image lies in R.
Remark 3.38. Historical Remarks and Further Work. The F-threshold was first defined by Shunsuke Takagi and Kei-ichi Watanabe in [60], who called it the F-pure threshold. Their definition looked quite different, since they defined it using ideas from Hochster and Huneke's tight closure theory [29]. Expanding on this idea, Hara and Yoshida [22] introduced the test ideals (under the name "generalized test ideals," in reference to the original test ideal of Hochster and Huneke, which was not defined for pairs), and soon after, using this train of thought, F-jumping numbers were introduced in [44], where they are called F-thresholds. The definition of F-pure threshold, test ideals, and the higher F-jumping numbers we presented here is essentially from [4], where it is also proven that this point of view is equivalent (in regular rings) to the previously defined concepts. This point of view removes explicit mention of tight closure, focusing instead on $R^{p^e}$-linear maps (or differential operators).
4. Unifying the prime characteristic and zero characteristic approaches
We defined the log canonical threshold for complex polynomials using integration, and the F -threshold for characteristic p polynomials using differential operators. However, as we have seen, in characteristic zero, our approach was equivalent to a natural approach to measuring singularities in birational geometry. Since birational geometry makes sense over any field, might we also be able to define the F -threshold directly in this world as well?
This approach does not work as well as we would hope in characteristic p. Two immediate problems come to mind. First, resolution of singularities is not known in characteristic p. It turns out that this is not a very serious problem. Second, and more fatally, some of the vanishing theorems for cohomology that make multiplier ideals such a useful tool in characteristic zero actually fail in prime characteristic.
The lack of Hironaka's Theorem in characteristic p can be circumvented as follows. We look at all proper birational maps $X \xrightarrow{\pi} \mathbb A^N_k$ with X normal. If X is a normal variety, the needed machinery of divisors goes through as in the smooth case, because the singular locus of a normal variety has codimension two or higher. To define the order of vanishing of a function along an irreducible divisor D, we restrict to any (sufficiently small) smooth open set meeting the divisor. Thus the relative canonical divisor $K_\pi$ can be defined for a map $X \xrightarrow{\pi} \mathbb A^N_k$ for any normal X, as can the divisor $F = \operatorname{div}(f \circ \pi)$. That is, if $X \xrightarrow{\pi} \mathbb A^N_k$ with X normal, we can define $K_\pi$ and F as the divisor of the Jacobian determinant of $\pi|_U$ and the divisor of $f \circ \pi|_U$, respectively, where $U \subseteq X$ is the smooth locus of X (or any smooth open subset of X whose complement has codimension two or more).
So we can attempt to define the log canonical threshold in arbitrary characteristic as follows:
\[ \operatorname{lct}(f) \;=\; \inf_{\pi} \; \min_{E \subseteq X} \; \frac{1 + \operatorname{ord}_E(K_\pi)}{\operatorname{ord}_E(f \circ \pi)}, \]
where π ranges over all proper birational maps $X \to \mathbb A^n_k$ with X normal, and E over the irreducible divisors on X. Similarly, for any divisor D on a normal variety X, the sheaves $\mathcal O_X(D)$ are defined. So we can also attempt to define the multiplier ideal in characteristic p similarly, by considering all proper birational models:

Definition 4.3. Let f be a polynomial in n variables, and λ a positive real number. The multiplier ideal is
\[ \mathcal J(f^\lambda) \;=\; \bigcap_{\pi} \pi_* \mathcal O_X\bigl(\lceil K_\pi - \lambda F \rceil\bigr), \]
as we range over normal varieties X mapping properly and birationally to $\mathbb A^n_k$ via π, where $K_\pi$ is the relative canonical divisor and F is the divisor $\operatorname{div}(f \circ \pi)$ on X. Equivalently, this amounts to
\[ \mathcal J(f^\lambda) \;=\; \bigl\{ g \;\bigm|\; \operatorname{ord}_E(g \circ \pi) \geq \lfloor \lambda \operatorname{ord}_E(f \circ \pi) \rfloor - \operatorname{ord}_E(K_\pi) \ \text{for all } E \bigr\}, \]
where we range over all irreducible divisors E lying on a normal X mapping properly and birationally to $\mathbb A^n_k$, say $X \xrightarrow{\pi} \mathbb A^n_k$.
Again, in characteristic 0, this produces the same definition as before. Does the multiplier ideal in characteristic p (produced by Definition 4.3) have the same good properties as in characteristic zero? The answer is NO. The problem occurs with the behavior of multiplier ideals in prime characteristic under wildly ramified maps: they simply do not have the properties we expect of multiplier ideals based on their behavior in characteristic zero (see [48,Example 6.33] or [48,Example 7.12]). The test ideals have better properties in char p than the multiplier ideals. They accomplish much of what multiplier ideals do in characteristic zero. The survey [48] gives an excellent introduction to this topic.
One reason the multiplier ideals fail to be useful in prime characteristic is that certain vanishing theorems fail that contribute to the magical properties of multiplier ideals over C. For example, a very useful statement is "local vanishing": if $\pi: X \to \mathbb A^N_k$ is a log resolution of a complex polynomial f, then $R^i \pi_* \mathcal O_X(K_\pi - \lfloor cF \rfloor) = 0$ for all i > 0 (cf. [38, Theorem 9.4.1]). For example, local vanishing is needed to prove the Briançon-Skoda theorem for non-principal ideals in characteristic zero. Unfortunately, this vanishing theorem is false in characteristic p. Fortunately, the Briançon-Skoda theorem for test ideals can be proven quite simply in characteristic p, using Frobenius instead of vanishing theorems.
On the other hand, for "large p," it is true that the multiplier ideals "reduce mod p" to the test ideals.
4.1. Idea of reduction modulo p. Fix a polynomial $f \in \mathbb Q[x_1, \dots, x_n]$, or (by clearing denominators) in $\mathbb Z[x_1, \dots, x_n]$. Fix a log resolution of f over Q, say given by $X_{\mathbb Q} \xrightarrow{\pi} \mathbb A^n_{\mathbb Q}$. We can "thicken" $X_{\mathbb Q}$ to a scheme $X_{\mathbb Z}$ over Z, and so get a family of maps over Spec Z (a commutative diagram whose generic fiber is $X_{\mathbb Q} \to \mathbb A^n_{\mathbb Q}$ and whose closed fiber over a prime p in Spec Z is $X_{\mathbb F_p} \to \mathbb A^n_{\mathbb F_p}$). Because the generic fiber is a log resolution of f, it follows that an open set of closed fibers also gives log resolutions of f. That is, we can assume $X_{\mathbb F_p} \to \mathbb A^n_{\mathbb F_p}$ is a log resolution of f for p ≫ 0. The multiplier ideal $\mathcal J(\mathbb A^n_{\mathbb Q}, f^c) \subseteq \mathbb Q[x_1, \dots, x_n]$ can be viewed as an ideal in $\mathbb Z[x_1, \dots, x_n]$ by clearing denominators if necessary; abusing notation, we denote the ideals in $\mathbb Z[x_1, \dots, x_n]$ and $\mathbb Q[x_1, \dots, x_n]$ the same way. So we can reduce modulo p and obtain an analog of multiplier ideals in positive characteristic, given by
\[ \mathcal J(\mathbb A^n_{\mathbb F_p}, f^c) \;=\; \mathcal J(\mathbb A^n_{\mathbb Q}, f^c) \otimes \mathbb F_p. \]
These turn out to be the test ideals for p ≫ 0! (How large is large enough for p depends on c.) As we have explained, however, the test ideals are probably the "right" objects to use in each particular characteristic p.
Recently, Blickle, Schwede and Tucker have found an interesting way to unify test ideals and multiplier ideals (see [7]). The idea is to look at a broader class of proper maps, not just birational ones. Recall that a surjective morphism of varieties $X \xrightarrow{\pi} Y$ is an alteration if it is proper and generically finite. We say an alteration is separable if the corresponding extension of function fields $k(Y) \subseteq k(X)$ is separable. Note that such a π always factors as $X \xrightarrow{\phi} \widetilde Y \xrightarrow{\nu} Y$, where φ is proper birational and ν is finite.
Consider a separable alteration $X \xrightarrow{\pi} \mathbb A^n$ with X normal. Denote by $F_\pi$ the divisor on X defined by $f \circ \pi$ and by $K_\pi$ the divisor on X defined by the Jacobian (by which we mean, in each case, the unique divisor on X which agrees with these divisors on the smooth locus of X; this is possible since X is normal, as in the beginning paragraphs of Section 4). As before in our computation of the multiplier ideal, the idea is to push down the sheaf of ideals $\mathcal O_X(\lceil K_\pi - \lambda F_\pi \rceil)$ along π. However, this will only produce a subsheaf of $\pi_* \mathcal O_X$, which is not $\mathcal O_{\mathbb A^n}$ but rather some normal finite extension. Let us denote its global sections by S, which is a normal finite extension of the polynomial ring R. To produce an ideal in R, we can use the trace map.
4.2. Trace. Let $R \subseteq S$ be a finite extension of normal domains, with corresponding fraction field extension $K \subseteq L$. The field trace is a K-linear map $L \to K$ sending each $\ell \in L$ to the trace of the K-linear map $L \to L$ given by multiplication by ℓ. Because S is integral over the normal ring R, it is easy to check that this restricts to an R-linear map $S \xrightarrow{\operatorname{tr}} R$. In particular, every ideal of S is sent, under the trace map, to an ideal in R. Using this, we can give a uniform definition of the multiplier ideal and the test ideal. For any separable alteration $X \xrightarrow{\pi} \mathbb A^n$ with X normal, denote by $\operatorname{tr}_\pi$ the trace of the ring extension $R \hookrightarrow S = \mathcal O_X(X)$. Then we have:

Theorem 4.5 ([7]). Fix a polynomial $f \in k[x_1, \dots, x_n]$, where k is an arbitrary field, and let c be any positive real number. Define
\[ J \;:=\; \bigcap_{\pi} \operatorname{tr}_\pi\bigl(\pi_* \mathcal O_X(\lceil K_\pi - c F_\pi \rceil)\bigr), \]
where π varies over all normal varieties X mapping properly and generically separably to $\mathbb A^n$. Then
\[ J \;=\; \mathcal J(f^c) \ \text{ if k has characteristic zero}, \qquad\text{and}\qquad J \;=\; \tau(f^c) \ \text{ if k has characteristic } p. \]
Note that each $\pi_* \mathcal O_X(\lceil K_\pi - c F_\pi \rceil)$ is an ideal in $\pi_* \mathcal O_X$, whose global sections form some finite extension S of the polynomial ring $R = k[x_1, \dots, x_n]$. So its image under the trace map is an ideal in R. The theorem says that if we intersect all such ideals of R, we get the test/multiplier ideal of $f^c$.
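As a toy illustration of the trace map itself (our own example, not from the original text): take $R = k[x] \subseteq S = k[x][y]/(y^2 - x) \cong k[y]$, with char k ≠ 2 so that the degree-2 extension of fraction fields is separable. In the basis 1, y of L over K = k(x), multiplication by $a + by$ (with $a, b \in K$) has matrix $\begin{pmatrix} a & bx \\ b & a \end{pmatrix}$, so
\[ \operatorname{tr}(a + by) \;=\; 2a. \]
In particular, the trace of any element of S lies in R, and every ideal of S is carried by tr to an ideal of R, exactly as used in Theorem 4.5.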
In fact, Blickle, Schwede and Tucker prove even more: the intersection stabilizes. So there is a single alteration $X \xrightarrow{\pi} \mathbb A^n$ for which $\tau(f^c) = \operatorname{tr}_\pi\bigl(\pi_* \mathcal O_X(\lceil K_\pi - c F_\pi \rceil)\bigr)$. In fact, it has been shown that, fixing f, there is one alteration which computes all the test ideals $\tau(f^\lambda)$, for every λ [50]. Of course, this is already known for multiplier ideals: it suffices to take one log resolution of X to compute the multiplier ideal. Indeed, in characteristic zero, one need not take any finite covers at all. Interestingly, in characteristic p, it is precisely the finite covers that matter most.
It is worth remarking that many of the features of multiplier ideals, including the important local vanishing theorem, can be shown in characteristic p "up to finite cover" [7, Theorem 5.5]. This can be viewed as a generalization of Hochster and Huneke's original theorem on the Cohen-Macaulayness of the absolute integral closure of a domain of characteristic p [31]. See also [54].
METCAM/MUC18 Plays a Novel Tumor and Metastasis Suppressor Role in the Progression of Human Ovarian Cancer Cells
METCAM/MUC18, an integral membrane cell adhesion molecule (CAM) in the Ig-like gene superfamily, is capable of performing typical functions of CAMs, such as mediating cell-cell and cell-extracellular matrix interactions, crosstalk with intracellular signaling pathways, and modulating the social behaviors of cells. The role of METCAM/MUC18 in the progression of ovarian cancer cells has not been well studied. Previous studies showed that METCAM/MUC18 is expressed in normal ovarian epithelial cells, but at higher levels in ovarian cancer tissues, suggesting that METCAM/MUC18 may serve as a biomarker for the malignant progression of clinical ovarian cancers, and it has been implicated in playing a positive role in the progression of the cancer. Recently we provided evidence from in vitro and in vivo studies suggesting that the above notion is a fortuitous correlation and that METCAM/MUC18 actually serves as a tumor and metastasis suppressor in the malignant progression of ovarian cancer cells, similar to its role in the malignant progression of one mouse melanoma cell line and nasopharyngeal carcinoma type I. Many possible mechanisms mediated by this CAM during early tumor development and metastasis are suggested. Furthermore, we suggest that METCAM/MUC18 may be used as a therapeutic reagent to arrest the malignant progression of clinical ovarian cancer.
Introduction: Current Status of Ovarian Cancer
Epithelial ovarian cancer (EOC) is the fifth leading cause of cancer death among women in the USA, with a high fatality rate of about 65% [1]. The cancer is highly lethal because the early stage of the disease is mostly asymptomatic and therefore remains undiagnosed until the cancer has already disseminated throughout the peritoneal cavity (at clinical stages III and IV) [2]. The early-stage disease can be treated successfully, with a five-year survival rate of more than 90%; however, effective therapy for the advanced-stage disease is lacking because of the strong chemo-resistance of recurrent ovarian cancer [2].
The major challenges in combating ovarian cancer are: a) ovarian cancer is histologically and molecularly heterogeneous, with at least four major subtypes: serous adenocarcinoma (75%), endometrioid adenocarcinoma (10%), mucinous adenocarcinoma (3%), and clear cell carcinoma (5-10%) [3,4]; b) there is a lack of reliable, specific diagnostic markers for effective early diagnosis of each subtype, though molecular signatures of the major subtypes are available [5]; and c) very little is known about how ovarian tumors emerge and how they progress to malignancy [see 6 for a review].
Cell Adhesion Molecules and the Progression of Ovarian Cancer
In general, tumorigenesis is a complex process involving changes in several biological characteristics [7], including the aberrant expression of cell adhesion molecules (CAMs) [7]. CAMs perform many important physiological functions, such as organ formation, tissue architecture, vascularization and angiogenesis, immune response, inflammation, wound healing, and cellular social behaviors [8]. Since CAMs govern the social behaviors of cells by affecting their adhesion status and by crosstalk with, and modulation of, intracellular signal transduction pathways, they also play an important role in cancer metastasis. This is because tumor progression is driven by a complex crosstalk between tumor cells and stromal cells in the surrounding tissues [9]. These interactions are, at least in part, mediated by CAMs [7-9]. Thus the altered expression of CAMs can change motility and invasiveness, affect survival and growth of tumor cells, and alter angiogenesis [7-9]. As such, CAMs may promote or suppress the metastatic potential of tumor cells [10]. As in other epithelial cancers, cell adhesion molecules must play a role in the progression of ovarian cancer, especially since aberrant expression of various CAMs, such as mucins [11], integrins [12], CD44 [13], L1CAM [14], cadherin [15], claudins [16], EpCAM [17], ALCAM [18], and METCAM/MUC18 [19,20], has been associated with the malignant progression of ovarian cancer. Some of these CAMs play a positive role, such as MUC4 [21], CD44 [22], L1CAM [23], ALCAM [18], and P-cadherin [24], whereas others, such as β3-integrin [25], E-cadherin [26], claudin-3, -4, and -7 [27], EpCAM [28], and KAI1 [29], play a negative role in the progression of ovarian cancer cells. We have been focusing our studies on the role of METCAM/MUC18 in the progression of several epithelial tumors, including epithelial ovarian tumors [30].
The Role of METCAM/MUC18 in the Progression of Epithelial Cancers
Human METCAM/MUC18 (also known as MCAM, Mel-CAM, S-endo1, CD146, or A32), an integral membrane cell adhesion molecule (CAM) in the Ig-like gene superfamily, has an N-terminal extracellular domain of 558 amino acids, a transmembrane domain, and a short intracellular cytoplasmic domain (64 amino acids) at the C-terminus, as shown in Figure 1 [30,31].
As shown in Figure 1, the extracellular domain of the protein comprises a signal peptide sequence, five immunoglobulin-like domains, and one X domain [30,31]. The cytoplasmic domain contains five consensus sequences potentially phosphorylated by PKA, PKC, and CK2 [30,31]. Thus human METCAM/MUC18 is capable of performing typical functions of CAMs, such as governing social behaviors by affecting the adhesion status of cells and modulating cell signaling. Therefore, altered expression of METCAM/MUC18 may affect the motility and invasiveness of many tumor cells in vitro, and tumorigenesis and metastasis in vivo [30].
In contrast, the possibility that over-expression of METCAM/MUC18 might play a tumor suppressor role was first suggested by Shih et al. [33], who found that METCAM/MUC18 expression suppressed tumorigenesis of the breast cancer cell line MCF-7 in SCID mice. However, this notion was refuted by recently published evidence supporting a positive role of METCAM/MUC18 in the progression of breast cancer cells [36,37,40], similar to its role in the progression of melanoma and prostate cancer cells. The role of METCAM/MUC18 in the progression of ovarian cancer has not been well studied. The following section describes recent findings in this regard.
The Role of METCAM/MUC18 in the Progression of Ovarian Cancer
Aldovini et al. [19] first showed that METCAM/MUC18 expression is significantly associated with advanced-stage tumors and with serous and undifferentiated subtypes, and that it is a stronger predictor of early tumor relapse than residual disease and an independent marker of poor prognosis for epithelial ovarian cancer [19]. Our recent report also supported the notion that METCAM/MUC18 expression is correlated with the progression of ovarian cancer [20]. Furthermore, Wu et al. [41] reported that METCAM/MUC18 expression is high in metastatic ovarian cancers in comparison with other pathological types of ovarian epithelial tissues. In in vitro studies using siRNAs to silence endogenous METCAM/MUC18 expression in the SK-OV-3 cell line, they showed that decreasing endogenous METCAM/MUC18 expression increases apoptosis and decreases cell spreading and invasion, suggesting that METCAM/MUC18 may play a positive role in the progression of ovarian cancer cells [41]. However, this notion has not been directly supported by evidence from animal studies.
METCAM/MUC18 is expressed at a lower level in the ovarian cancer cell lines established from malignant ascites than in the cell lines from adenocarcinomas
To test whether the above notion is correct or simply a fortuitous correlation, we directly tested the role of METCAM/MUC18 in the progression of epithelial ovarian cancer in vitro and in vivo. First, we re-evaluated the expression of METCAM/MUC18 in one immortalized normal ovarian epithelial cell line (IOSE) and five human ovarian cancer cell lines, BG-1, HEY, CAOV-3, SK-OV-3, and NIHOVCAR3 [42], as shown in Figure 2.

(Figure 1 legend, continued: X denotes one domain without any disulfide bond in the extracellular region, and TM the transmembrane domain. P stands for five potential phosphorylation sites (one for PKA, three for PKC, and one for CK2) in the cytoplasmic tail. The six conserved N-glycosylation sites are shown as wiggled lines in the extracellular domains of V1, between C2' and C2'', C2'', and X.)

As shown in Figure 2, the expression level of METCAM/MUC18 in the immortalized normal ovarian epithelial cell line (IOSE) was about 10%, and that in the five ovarian cancer cell lines, BG-1, HEY, CAOV-3, SK-OV-3, and NIHOVCAR3, ranged from zero to 50% (taking the expression in a positive control, the human melanoma cell line SK-Mel-28, as 100%). Since METCAM/MUC18 was expressed at a level of 31-50% in two of the three cell lines established from primary adenocarcinomas (HEY and CAOV3), but poorly expressed (1-11%) in the two cell lines established from malignant ascites (SKOV3 and NIHOVCAR3), it appeared that METCAM/MUC18 was expressed more poorly in malignant cell lines than in those from primary adenocarcinomas, suggesting that METCAM/MUC18 may play a negative role in the progression of ovarian cancer. The above result also provided important information for choosing two ovarian cancer cell lines, BG-1 (established from a poorly differentiated adenocarcinoma) and SK-OV-3 (established from an adenocarcinoma metastasis, as malignant ascites), which expressed very low levels of METCAM/MUC18 (zero and 1%, respectively), for in vitro and in vivo studies. We included only the results obtained with the SK-OV-3 cell line, rather than BG-1, because the BG-1 cell line in some laboratories has been reported to be contaminated with the breast cancer cell line MCF7 [43].
Two complementary methods are commonly used to biochemically alter the expression of METCAM/MUC18 in the cell lines
To determine the effect of a specific gene on cellular behaviors, two complementary methods are commonly used to alter the expression of a gene in cells: a) enforced expression of the gene in cell lines that do not express, or only weakly express, the protein [35], and b) siRNA (small interfering RNA)-mediated knockdown of the gene in cell lines that endogenously express the protein [44].
The clones that highly expressed the protein or the siRNA were isolated and used for the in vitro and in vivo studies.
METCAM/MUC18 expression in G418R-clones derived from the SK-OV-3 cell line
Since the SK-OV-3 cell line did not express, or only weakly expressed, METCAM/MUC18, only the enforced expression method was used to increase the expression of the gene in this cell line, in order to determine whether METCAM/MUC18 expression affects the in vitro and in vivo behaviors of the cells. All the G418-resistant (G418R) clones should express METCAM/MUC18, albeit at different levels in different clones. The control cells, which were transfected with the empty vector lacking the human METCAM/MUC18 cDNA, were also G418R and should not express METCAM/MUC18, similar to the parental SK-OV-3 cells. Figure 3 shows that the expression of METCAM/MUC18 in two typical G418R clones was higher than in the empty-vector control clone derived from the SK-OV-3 cell line [42]. These G418R clones from the SK-OV-3 cell line were then used to study the effects of over-expression of the METCAM/MUC18 gene on in vitro cellular behaviors, such as motility and invasiveness, and on in vivo tumorigenesis and metastasis in animal models.
Over-expression of METCAM/MUC18 decreased epithelial-to-mesenchymal transition (EMT) of SK-OV-3 cells
Epithelial-to-mesenchymal transition (EMT) is a biological process by which carcinoma cells (cancer cells derived from epithelial cells) detach from the surrounding tissue and acquire characteristics of mesenchymal cells, which are uniquely motile, spindle-shaped cells with end-to-end polarity [see 45 for a review]. Cells that have undergone EMT can migrate out of their epithelial layers to distant organs, where they may remain mesenchymal or re-differentiate into epithelial cells by a process known as mesenchymal-to-epithelial transition (MET). Thus EMT may be a prerequisite for tumor progression. In addition to increased motility, carcinoma cells that have undergone EMT may become stem cell-like, protected from senescence, apoptosis, and immune surveillance, and resistant to conventional and targeted therapies [45]. The degree of EMT in cells can usually be assessed by the extent of motility and invasiveness of the cells in vitro. The above G418R clones were used to determine the effects of enforced expression of METCAM/MUC18 on in vitro motility and invasiveness.
If METCAM/MUC18 plays a positive role in mediating EMT in this cell line, we should observe increased motility and invasiveness of the stable clones that over-express METCAM/MUC18. On the contrary, if it plays a negative role in EMT, we should observe decreased motility and invasiveness of these clones, which is indeed what was observed in Figure 4. From these results, we strongly suggest that human METCAM/MUC18 plays a negative role in the EMT of SK-OV-3 cells and directly causes their decreased EMT.

(Figure 4 legend: (Left panel) Motility test; means and standard deviations of triplicate values are indicated. The P value, determined by analyzing the two sets of data with Student's t test (one-tailed distribution, type 2 method), was 0.014, indicating a statistically significant difference. (Right panel) Invasiveness test using the METCAM clone 2D and the control (vector) clone 3D of SK-OV-3 cells; six hours after seeding cells in the top wells, cells migrating to the bottom wells were counted. Means and standard deviations of triplicate values are indicated; the P value (Student's t test, one-tailed distribution, type 2 method) was 0.0015, indicating a statistically significant difference.)
Over-expression of METCAM/MUC18 suppressed in vivo tumorigenesis and the malignant progression of the human ovarian cancer cell line SK-OV-3
To scrutinize the notion that METCAM/MUC18 may play a positive role in the development of ovarian cancer [41], we used the above clones to perform studies in model animals [42]. We determined the effects of METCAM/MUC18 over-expression on the in vivo tumorigenicity of SK-OV-3 cells in female nude mice after subcutaneous (SC) injection on either the dorsal (DSC) or the ventral (VSC) side. As shown in Figure 5 (left panel), tumor proliferation of the METCAM clone 2D was much lower than that of the control (vector) clone at both sites, indicating that over-expression of METCAM/MUC18 decreased the tumorigenicity of SK-OV-3 cells in nude mice. Consistent with the results in Figure 5 (left panel), Figure 5 (right panel) shows that the final tumor weights of the METCAM clone 2D were also lower than those of the control (vector) clone 3D at both sites, indicating that over-expression of METCAM/MUC18 decreased the final tumor weights of SK-OV-3 cells in nude mice [42].
Taken together, we conclude that over-expression of METCAM/MUC18 suppressed the in vivo tumorigenesis of SK-OV-3 cells at non-orthotopic (ventral and dorsal) subcutaneous sites in nude mice. Furthermore, the tumors induced by the METCAM clone 2D were confined to small regions, as shown by H&E and IHC staining [42], whereas the control (vector) clone 3D developed extensive tumors, suggesting that tumors from the 2D clone were dormant; thus METCAM/MUC18 may function similarly to tumor/metastasis suppressors in other tumor cells [46].
To further determine the effect of METCAM/MUC18 over-expression on the in vivo tumorigenicity of SK-OV-3 cells at the orthotopic site (the intraperitoneal (IP) cavity), SK-OV-3 cells from the pooled METCAM clone 2D and the control (vector) clone 3D were injected IP into female nude mice. The mice in the control group, which were injected with the control vector clone 3D, developed swollen abdominal cavities and tumors, whereas the mice in the test group, which were injected with the METCAM clone 2D, did not. Consistent with this observation, the final weights of abdominal tumors (Figure 6, left panel) and the volumes of ascites (Figure 6, right panel) were significantly larger in the group injected with the control vector clone 3D than in the group injected with the METCAM/MUC18-expressing clone 2D. We concluded that over-expression of METCAM/MUC18 suppressed tumorigenicity and ascites formation of SK-OV-3 cells in the IP cavity of nude mice [42].
Taken together, METCAM/MUC18 expression in SK-OV-3 cells decreased tumor proliferation and tumorigenesis at SC sites as well as at the orthotopic IP site, strongly suggesting that METCAM/MUC18 is a novel tumor and metastasis suppressor for the progression of human ovarian cancer cells.
Preliminary mechanisms of METCAM/MUC18-mediated suppression of the progression of SK-OV-3 cells
Mechanisms of the METCAM/MUC18-mediated suppression of the progression of human ovarian cancer cells have not been studied. Extrapolating from what has been learned about METCAM/MUC18-induced tumorigenesis in other tumor cell lines, such as melanoma, breast and prostate cancers, and nasopharyngeal carcinoma, METCAM/MUC18 may affect tumorigenesis through cross-talk with many downstream signaling pathways that regulate the proliferation, survival, apoptosis, metabolism, and angiogenesis of tumor cells [7,30]. To investigate whether METCAM/MUC18-mediated tumor suppression also affected the expression of its downstream effectors, such as indexes of apoptosis/anti-apoptosis, proliferation, survival, aerobic glycolysis, and angiogenesis, we determined the expression levels of Bcl2, Bax, PCNA, LDH-A, pan-AKT, and phospho-AKT (Ser473), and the ratio of phospho-AKT/AKT, in tumor lysates by Western blot analyses [42]. From the results, we suggest that over-expression of METCAM/MUC18 may suppress the tumorigenesis and malignant progression of ovarian cancer cells in nude mice by decreasing their capacities for proliferation, aerobic glycolysis, and angiogenesis via decreasing the absolute levels of pan-AKT and phospho-AKT, without altering the apoptosis/anti-apoptosis and survival pathways [42]. This is consistent with the results from clinical specimens [20].
Discussion
The above conclusion contradicts the reported positive correlation of clinical prognosis with increased expression of METCAM/MUC18 in malignant ovarian cancer specimens [19,20,41]. This suggests that the positive correlation in this case is fortuitous and that we should not assume a positive role of METCAM/MUC18 in the progression of ovarian cancer without the support of tests in an animal model. Our results also contradict the previously established notion that METCAM/MUC18 serves as a tumor promoter in both prostate cancer cells and breast cancer cells, and as a metastasis promoter in human melanoma cells, prostate cancer, and breast cancer [30,35-40]. The role of METCAM/MUC18 as a tumor suppressor was conclusively demonstrated not only in the human ovarian cancer cell line SK-OV-3 [42], but also in a mouse melanoma cell line, K1735-9 [47], and one NPC cell line, NPC-TW01 [34,48] (Wu, unpublished results). METCAM/MUC18 has also been demonstrated to be a metastasis suppressor in two human ovarian cancer cell lines, SK-OV-3 and BG-1 [42] (Wu, unpublished results). Thus sufficient evidence is provided to support the novel suppressor role of METCAM/MUC18 in the progression of human ovarian epithelial cancers.
The most intriguing and unique biological function of METCAM/MUC18 in tumorigenesis and metastasis is that it seems to play a dual role in the progression of some tumor cell lines [49]. It is not clear why METCAM/MUC18 plays a dual role in tumorigenicity and metastasis. One point is clear: METCAM/MUC18 plays opposite roles in different cancer types or in different clones/sublines of the same cancer type [49]. Thus it is logical to propose that the effect of METCAM/MUC18 on the progression of epithelial cancers is modulated by different intrinsic factors in different tumor cells/types. The dual role of METCAM/MUC18 is very likely due to the presence of different interacting partners intrinsic to each cancer cell type and clone, or perhaps to different heterophilic ligands, which unfortunately have not been identified [30,49]. Interactions of METCAM/MUC18 with different sets of intrinsic partners may result in the promotion or suppression of tumorigenicity and metastasis by increasing or decreasing aerobic glycolysis, proliferation, angiogenesis, and other growth-promoting pathways, as well as by altering tumor cell motility, invasiveness, and vascular metastasis.
The tumor/metastasis suppressor role of human METCAM/MUC18 in the progression of human ovarian cancer cells points to the possibility that METCAM/MUC18 induces tumor dormancy. How METCAM/MUC18 affects tumor dormancy should be an interesting aspect for future investigation, since tumor dormancy may be due to intrinsic growth inhibition, immunological suppression, and/or angiogenic suppression [50].
Perspectives and Clinical Applications
The tumor suppressor role of METCAM/MUC18 in the progression of human ovarian cancer cell lines may be useful for clinical application. Indeed, many tumor and metastasis suppressors, such as KISS1, KAI1, nm23, MAP2K4, and some microRNAs, have been used for clinical applications [51]. Three strategies have been developed: (i) reconstitution of suppressor genes by induction of the endogenous locus or by gene therapy; (ii) direct administration of the suppressor proteins; and (iii) targeting essential downstream pathways that are activated by loss of suppressor function.
In light of these strategies, the METCAM/MUC18 cDNA may be used for gene therapy delivered by an adeno-associated virus vector or a replication-defective adenovirus. The recombinant METCAM/MUC18 protein, METCAM-derived peptides, or small-molecule mimetics of METCAM may also be used directly, and the recombinant cognate ligand of METCAM/MUC18 may potentially be used as well. Targeting downstream pathways, however, may not be useful because METCAM/MUC18 appears to involve many of them. The above strategies may be used for clinical treatment by keeping ovarian cancer cells in a dormant state or by arresting the cancer cells at the stage of micro-metastases.
Conclusion
In summary, we provided evidence that METCAM/MUC18 is a novel suppressor of the tumorigenesis and malignant progression of the human ovarian cancer cell line SK-OV-3: A. METCAM/MUC18 was expressed at lower levels in malignant cell lines than in primary adenocarcinomas, suggesting that METCAM/MUC18 may play a negative role in the progression of ovarian cancer. B. A high expression level of METCAM/MUC18 inhibits the EMT of SK-OV-3 cancer cells.
C. METCAM/MUC18 expression inhibited the tumorigenicity at the subcutaneous sites as well as the tumorigenicity and ascites formation in the intra-peritoneal cavity of an athymic nude mouse model.
Since the METCAM/MUC18 expressed in the tumors and ascites cells was similar to that in the injected clones/cells, the protein was apparently not modified in manifesting these processes. Taken together, we conclude that METCAM/MUC18 serves as both a tumor suppressor and a metastasis suppressor for the human ovarian cancer cell line SK-OV-3. METCAM/MUC18 may suppress the tumorigenesis and malignant progression of ovarian cancer cells in nude mice by decreasing their capacities for proliferation, aerobic glycolysis (metabolism), and angiogenesis via down-regulation of the PI3K-AKT signaling pathway.
Combined Transradial and Transfemoral Approach With Ostial Vertebral Balloon Protection for the Treatment of Patients With Subclavian Steal Syndrome
Background: Patients with an obstructive subclavian artery (SA) may exhibit symptoms of vertebrobasilar insufficiency known as subclavian steal syndrome (SSS). Endovascular treatment with stent-assisted percutaneous transluminal angioplasty (SAPTA) demonstrates a significantly lower rate of intraoperative and postoperative complications than open surgery. There is a 1–5% risk of distal intracranial embolization through the ipsilateral vertebral artery (VA) during SAPTA. Objective: To assess the safety and feasibility of a novel technique for distal embolic protection using balloon catheters during SA revascularization with dual transfemoral and transradial access. Methods: We describe a case series of patients with SSS who underwent SAPTA for severe stenosis or occlusion of the SA using a combined anterograde/retrograde approach. Transfemoral access to the SA was obtained using large bore guide sheaths. Ipsilateral transradial access was obtained using intermediate bore catheters. A Scepter XC balloon catheter was introduced through the transradial intermediate catheter into the ostium of the ipsilateral VA during SAPTA for distal embolic protection. Results: A total of eight patients with SSS underwent subclavian SAPTA. Four patients had the combined anterograde/retrograde approach. Successful revascularization was achieved in three of them; in the fourth, unsuccessful case it was difficult to create a channel due to a heavily calcified plaque burden. No peri-operative ischemic events were identified. On follow-up, we demonstrated patency of the stents with resolution of symptoms and without any adverse events. Conclusion: Subclavian stenting using combined transradial and transfemoral access, with compliant balloon catheters at the vertebral ostium for prevention of distal emboli, may represent an alternative therapeutic approach for the treatment of SA stenosis and occlusions.
INTRODUCTION
Subclavian steal syndrome (SSS) is caused by a reversal of flow in the vertebral artery (VA) ipsilateral to a stenosis or occlusion of the prevertebral subclavian artery (SA) (1). Patients with an obstructive SA may exhibit upper extremity claudication, syncope, dizziness, or arm coolness owing to arterial insufficiency in the brain (vertebrobasilar insufficiency) or upper extremity, which are both supplied by the SA (1-3). The most common cause of SA stenosis is atherosclerosis (3).
The standard practice consists of conservative management for asymptomatic patients, while interventions are performed for patients suffering from SSS (3,4). Treatment of SA steno-occlusive disease includes either extra-thoracic surgical approaches or endovascular approaches through stent-assisted percutaneous transluminal angioplasty (SAPTA). Extra-thoracic operations (subclavian-carotid transposition and carotid-subclavian bypass) for the treatment of SA stenosis and occlusions have historically been performed (3-5). Recently, however, endovascular therapy has become the first-line treatment of SA stenosis (4-7). In comparison with open surgery, endovascular treatment demonstrates a significantly lower rate of intraoperative and postoperative complications, and it is carried out under local anesthesia (4,8,9). Nowadays, an endovascular approach is attempted first, before proceeding to open SA revascularization, as it is a less invasive procedure (5,9).
Most neuroendovascular interventions are performed via a transfemoral approach. Transradial access is an alternative approach for neuroendovascular procedures that has lately been employed more often (10,11). Advantages of radial access include decreased cost and patient preference for post-operative recovery (12). For SA steno-occlusive disease, it provides an added advantage with regard to catheterizing the true vessel lumen (6,10,13). There is a 1-5% risk of stroke during SAPTA of the SA (14-16). Distal embolization through the ipsilateral VA to the posterior circulation is one of the major concerns, as it can lead to periprocedural stroke. We report four consecutive cases of symptomatic SA occlusions/near occlusions with varying degrees of stenosis, which were treated at our center using a novel combined endovascular technique with both anterograde (transfemoral) and retrograde (transradial) access to the SA, while using a balloon catheter in the ipsilateral VA ostium for distal emboli protection.
Study Population
We performed a retrospective search of our prospectively acquired endovascular intervention database. All cases of SA stenosis treated with endovascular intervention were assessed for inclusion, and their medical charts were reviewed. Inclusion criteria were patients with a diagnosis of SSS who underwent endovascular treatment for occlusion of the SA at our center from 2014 to June 2019. Symptoms of SSS were defined as vertebrobasilar insufficiency symptoms such as dizziness, syncope, nausea, and dysmetria, as well as upper extremity claudication/weakness. The subclavian steal phenomenon on imaging was defined as critical stenosis or occlusion of the SA with retrograde flow into the ipsilateral VA from the contralateral VA as seen on digital subtraction angiography (DSA). Patients were included in the study only if they had symptoms of vertebrobasilar insufficiency that were explained by the subclavian steal phenomenon seen on imaging. Catheter and stent selection were decided by the institutional neuro-interventionalist. Ethics: Written informed consent was obtained from the individual/next of kin for the publication of any potentially identifiable images or data included in this article. Approval for the study was obtained from our institutional review board (IRB).
Interventional Procedure
Right common femoral access was obtained using a 9 French short sheath, which was connected to an arterial line monitoring system. The radial artery ipsilateral to the SA lesion was accessed under ultrasound guidance and a 6 French slender short sheath was placed. All patients were adequately heparinized. A 6 French large bore guide catheter (Cook Shuttle 087 [Cook Medical Inc, Bloomington, IN, USA] or Penumbra Neuron Max 088 [Penumbra Inc, Alameda, CA, USA]) was advanced through the femoral access and placed in the proximal stump of the SA (Figure 1A). A 6 French intermediate bore catheter (Cook Envoy 070 or Penumbra Neuron 070) was advanced through the radial access and placed distal to the SA lesion (Figure 1B). Dual catheter contrast injection was performed to identify the length of the lesion. An anterograde channel was created across the SA lesion using a 0.014 microwire or 0.035 guide wire (Figure 1C). The radial catheter was used to introduce a Scepter XC balloon (Microvention Inc, Aliso Viejo, CA, USA) over a microwire and place it in the VA for protection from emboli during angioplasty and stenting of the SA (Figure 1D). The radial artery catheter was also used to create a retrograde channel across the SA lesion in cases when antegrade access could not be obtained. Once access across the SA lesion was obtained with a 0.014 wire, angioplasty followed by stenting was performed using 0.014 monorail balloons and stents while the Scepter XC balloon was inflated in the origin of the ipsilateral VA. The balloon was positioned at the first straight segment of the VA to prevent embolic debris from penetrating the origin of the VA during balloon inflation and stent deployment (Figures 1E-G). After successful revascularization of the SA, the catheters and wires were removed (Figure 1H). Femoral hemostasis was achieved with deployment of a Terumo Angioseal (Terumo Medical Corp., Somerset, NJ, USA) closure device, while radial hemostasis was achieved with application of a Terumo TR band. The dual approach was preferred over single-access subclavian stenting based on the proceduralist's preference and when there were concerns about an increased risk of VA embolism.
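As a side note on the catheter sizing used above, French gauge converts to diameter at the standard rate of 1 Fr = 1/3 mm, which makes the coaxial fit of the sheaths and catheters easy to sanity-check numerically. The helper below is only an illustrative sketch using that standard conversion; exact inner and outer diameters vary by device specification, so the printed values are approximations.

```python
# Illustrative sketch: converting French gauge to millimetres (1 Fr = 1/3 mm)
# for the device sizes named above. Rough sizing only; the true inner/outer
# diameters depend on each manufacturer's specification.
def french_to_mm(french: float) -> float:
    """Approximate diameter in millimetres for a given French gauge."""
    return french / 3.0

devices = {
    "femoral short sheath (9 Fr)": 9,
    "radial slender short sheath (6 Fr)": 6,
    "large bore guide catheter (6 Fr)": 6,
    "intermediate bore catheter (6 Fr)": 6,
}

for name, fr in devices.items():
    print(f"{name}: ~{french_to_mm(fr):.2f} mm")
```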
Assessment of Outcomes
Procedural information, including type of arterial access, catheters, stents, and residual stenosis, was collected from review of the images and operative reports. Success of the procedure was defined as improvement in the caliber of the SA post intervention and improved antegrade flow on angiography, with a final residual stenosis of <20%. Periprocedural complications were defined as any stroke (hemorrhagic or ischemic); femoral, cervical, brachial, or radial vascular injury; hematoma or infection at the site of entry; allergic reaction to contrast; or kidney injury seen within 30 days of the intervention. At 3, 6, and 18 months of follow-up, the clinical outcomes of interest were resolution of presenting symptoms and the modified Rankin Scale (mRS). CTA of the neck was done at 3 months of follow-up to assess the patency of the SA. Follow-up complications were defined as any technical complication reported or seen at follow-up, such as in-stent restenosis or dissection, as well as secondary cerebrovascular accidents, TIA, and death.
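The <20% residual-stenosis success criterion above can be made concrete with the conventional diameter-based definition of percent stenosis, (1 − minimal lumen diameter / reference diameter) × 100. The sketch below assumes that conventional formula and uses hypothetical measurements; the study does not state which reference-diameter convention it applied.

```python
# Sketch of the procedural-success criterion described above, assuming the
# conventional diameter-based definition of percent stenosis. The example
# diameters are hypothetical.
def percent_stenosis(minimal_lumen_mm: float, reference_mm: float) -> float:
    """Diameter stenosis as a percentage of the reference vessel diameter."""
    return (1.0 - minimal_lumen_mm / reference_mm) * 100.0

def procedural_success(residual_stenosis_pct: float) -> bool:
    """Success per the study definition: final residual stenosis < 20%."""
    return residual_stenosis_pct < 20.0

residual = percent_stenosis(minimal_lumen_mm=5.4, reference_mm=6.0)  # 10.0%
print(f"residual stenosis = {residual:.1f}%, success = {procedural_success(residual)}")
```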
RESULTS
A total of eight patients with SA stenosis/occlusion and SSS underwent SA stenting. The average age was 67.9 years (range 53-84 years). There were four males and four females. All patients had a history of hypertension and dyslipidemia. Four patients underwent the standard transfemoral anterograde approach, and the other four patients had the combined anterograde/retrograde approach. Procedure details and clinical outcomes for all patients are summarized in Table 1.
In the standard approach group, successful revascularization was achieved in all patients. One patient had a total occlusion, two patients had severe stenosis, and one patient had moderate stenosis. No periprocedural complications were observed. All stents were patent on the 3-month CTA. One patient developed an asymptomatic SA stent occlusion at the 6-month follow-up; this patient was successfully retreated with SA SAPTA without any further complications.
In the dual approach group, successful revascularization was achieved in three of the four patients. Pre- and post-revascularization DSA images can be seen in Figures 2, 3. Three patients had a total occlusion of the SA and the fourth had a nearly occluded SA. It was difficult to create a channel in the unsuccessful case due to a heavily calcified plaque burden. No perioperative ischemic events were identified. One subintimal dissection without clinical manifestations was reported as a periprocedural technical complication. For follow-up, only the information of the three successfully revascularized patients was available, demonstrating resolution of SSS symptoms and no restenosis (Table 1).
The clinical history and outcomes of the combined approach patients are described in more detail below.
Patient 1
The patient presented with a 3-month history of intermittent dizziness and vertigo. DSA demonstrated left SSS with occlusion of the left SA. Left SA revascularization using the above-described combined approach was performed without complications. Dual antiplatelet therapy and a high-dose statin were recommended. Follow-up imaging after 3 months showed patency of the stent.
Patient 2
The patient presented to the hospital with a 2-day history of left arm weakness and dysmetria, as well as some memory problems. DSA demonstrated total occlusion of the proximal left subclavian artery with evidence of the subclavian steal phenomenon. Further history obtained was suggestive of clinical subclavian steal, with left arm weakness on activity. Successful revascularization of the left SA with the above-described technique was performed. There were no peri-procedural or follow-up complications, with resolution of symptoms at the 3-month follow-up. Dual antiplatelet therapy and a moderate-intensity statin were continued for 3 months, followed by life-long single antiplatelet therapy.
Patient 3
The patient initially presented with scattered posterior circulation strokes. CTA of the neck showed a left SA near-occlusion. The patient complained of intermittent dizziness and memory difficulties. DSA showed near occlusion of the SA with left SSS and retrograde flow in the ipsilateral VA. Left subclavian SAPTA as described above was performed with an improved degree of stenosis. The procedure was without complication. Dual antiplatelet therapy and a high-dose statin were continued. Three-month follow-up CTA of the neck showed a patent SA.
Patient 4
The patient presented with symptoms of left arm weakness. Workup demonstrated total occlusion of the left SA. Multiple attempts at recanalization using the above-described technique were made but were unsuccessful. During one of these attempts, contrast extravasation was observed at the level of the proximal subclavian stump due to a dissection without catheterization of the true vessel lumen. By then, the amounts of contrast and radiation exposure had reached their maximum for the patient's age, so the decision was made to stop the procedure. The patient was advised to follow up to discuss other treatment modalities but was lost to follow-up.
DISCUSSION
High-grade SA stenosis and occlusion can lead to a vascular phenomenon known as SSS, which can cause upper extremity claudication, posterior circulation transient ischemic attack, stroke, syncope, and vertigo. Available treatment options include percutaneous transluminal angioplasty (PTA), direct surgical recanalization, or surgical bypass. Current guidelines recommend an endovascular approach as the first line in patients with atherosclerotic lesions of the upper extremities (17). This is based on multiple studies that have shown the technical success rate of PTA to be high and similar to that of surgical treatment, while being less invasive and carrying a low rate of major complications (3-9,18). PTA for SA stenosis is performed widely as an alternative to surgery, and the technique was first described in the early 1980s in several small series of patients (19,20).
The success rate of PTA is lower for total occlusion than for stenosis, especially in patients with SSS (21,22). Liu et al. (7) reported that technical success of PTA for SA total occlusion was achieved in 77.6% of patients, with complications occurring in 6% of them. Babic et al. studied 56 patients with total occlusion of the SA (6) and showed that the primary patency rates after 1 and 3 years were 97.9 and 82.7%, respectively. Some researchers propose that endovascular treatment is preferable in patients with high-grade stenosis while open bypass is preferable for SA total occlusion (9); others, based on several case series, showed that endovascular therapy is efficient for both, with no difference between patients who had stenosis and those who had occlusions (6,8,21).
The most concerning complication of endovascular treatment of SSS is distal VA embolism. The literature shows a 1-5% risk of stroke during SAPTA of the SA (14-16). This incidence of posterior circulation vascular events coincides with the 5% stroke rate observed in the first year after endovascular therapy of a VA stenosis (23).
The presence of distal embolic debris has been reported in a series of 14 patients undergoing VA stenting using distal protective filters. Filters captured debris in all patients, occupying between 0.1 and 22% of the filter area (24). Furthermore, embolic signals have been observed in the VA in 8.3% of patients treated with SA PTA under continuous Doppler ultrasound insonation (25). Unfortunately, it is not described whether those patients suffered from SSS manifestations, even though they all had high-grade stenosis/occlusion. SSS is generally considered a protective factor against distal emboli due to the presence of retrograde flow in the ipsilateral VA. However, it is important to consider the intermittency with which SSS can present, as the majority of SSS cases are asymptomatic, with precipitation of retrograde flow upon exertion of the ipsilateral upper extremity (2). Thus, this flow reversal would not be protective if there is no steal at rest (26). Nevertheless, posterior circulation strokes have been observed during subclavian PTA in the presence of SSS in 1.0-1.7% of cases without the use of distal protective devices (8,27). This suggests that the supposed protection by retrograde flow could be overestimated. In our series, with the presence of intermittent VB symptoms and documented SSS, we opted to decrease the risk of VA embolism using distal protection.
Unlike carotid artery angioplasty and stenting, during which protection against distal embolization is obtained by either deployment of a distal filter device or proximal flow arrest, SA angioplasty and stenting traditionally does not allow for either of these techniques because the VA must be covered to obtain adequate distal coverage of the plaque.
One of the protective techniques against distal embolism, described by Yamamoto et al., uses the non-compliant Optimo balloon catheter (Tokai Medical Products, Aichi, Japan) introduced through the ipsilateral radial artery (18,28). In their case series, Yamamoto et al. (18) placed the 6-French Optimo balloon catheter in the SA proximal to the origin of the VA after introducing it through ipsilateral transradial access. They inflated the balloon around the catheter while performing antegrade angioplasty and stenting of the affected SA for distal emboli protection. Nakamura et al. (28) described a similar technique in two cases in which brachial artery access was established for the insertion of a balloon-tipped occlusion catheter (Optimo occlusion balloon catheter) into the SA proximal to the VA. Even though this technique appears to provide protection against distal emboli, there is concern that inflation of this non-compliant balloon at the tip of the catheter performs angioplasty of that segment of the proximal VA and could itself be a source of emboli. Secondly, and more importantly, in cases of long lesions of the SA (covering the origin of the left VA), there is not much room to perform the planned SAPTA and adequately cover the plaque distally with a stent due to the large size of the balloon. Lastly, the Optimo catheter is not available on the US market.
The use of distal filters in the VA during subclavian/vertebral stenting has been previously described with technical success and without complications (24,29-31). The advantage of distal filters over balloons lies in the preservation of antegrade cerebral flow throughout the procedure. However, their larger and more rigid delivery systems make navigation through tortuous vessels difficult, with the risk of vessel spasm and/or dissection (32). Additionally, there is the risk of embolization of particles smaller than the size of the pores (33).
Herein, we used the transradial catheter to introduce a Scepter XC balloon (Scepter XC, Microvention, Inc., Tustin, CA, USA) over a microwire and place it in the ipsilateral VA for protection from emboli during angioplasty and stenting through the femoral route. The placement of this compliant balloon in the ipsilateral VA provides adequate protection from distal emboli without itself performing any meaningful angioplasty and producing a new source of emboli. Since it is a 0.017-inch catheter, it leaves ample room to perform angioplasty and stenting of long lesions of the SA (covering the origin of the left VA), with easy withdrawal of the catheter once deflated after being jailed in the subclavian stent. A similar technique was previously described by Sadato et al. (34), who used a different balloon catheter for distal protection (Multilumen balloon catheter; Clinical Supply Co., Hajima, Gifu), which is not available on the US market.
Our case series, along with those of Yamamoto, Nakamura, and Sadato (18,28,34), is among the few to describe a combined antegrade and retrograde endovascular approach to SA SAPTA. Our paper will serve as a reminder to young endovascular surgeons of the effectiveness, safety, and feasibility of this technique for SA recanalization, now using new compliant small dual-lumen balloons. One advantage of this technique is the ability to provide protection from distal emboli even when the subclavian plaque extends beyond the vertebral ostium, as discussed above. A second advantage is that it allows double simultaneous injection through the transfemoral and transradial catheters to identify the length of the lesion and the direction of the proposed wire route in cases of occlusion of the SA. Most importantly, in cases of occluded subclavian arteries in which it is not possible to cross the occluded lesion in an antegrade fashion, it provides potential retrograde access to the lesion.
There are limitations to our study. Its retrospective nature can lead to biases. The sample size is not large enough to demonstrate any beneficial effect on embolic complications. Only four of our eight patients with SSS underwent the dual approach technique. The technique was used in patients with a higher degree of stenosis and occlusions, when the neurointerventionalist thought that there was an increased risk of vertebral emboli due to plaque proximity to the vertebral ostium. The use of the dual approach can increase the potential for complications, including radial artery injury, temporary and permanent occlusion, and compartment syndrome.
CONCLUSION
We conclude that a combined retrograde/antegrade endovascular approach, along with the use of balloon catheters as protective devices against distal embolism in the ipsilateral VA, should be considered during SA recanalization in the setting of SSS. This technique could be an option in complete occlusions prior to more aggressive alternatives such as bypass. Further study of this technique in larger prospective multicenter registries is warranted before widespread adoption, to better evaluate its potential protective effect on distal embolization during SA stenting.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by University of Iowa Institutional Review Board. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. Written informed consent was obtained from the individuals/next of kin for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
RF, SD, AAM, and SO-G designed and defined the intellectual content of the study. SD and SO-G performed the clinical studies. AM-R, MF, CZ, DQ, and AAM contributed with data acquisition. RF, SD, and SO-G wrote and prepared the manuscript. DH, JR, ES, CD, and SO-G provided critical review of the study. All authors reviewed and approved the final version.
Roles and purposes of graduate journalism education through the lens of global journalism: A two-nation Asian phenomenological study
This phenomenological study sought to describe the essence of the roles and purposes of graduate journalism education through the eyes of 16 Asian students from three graduate journalism schools in Japan and the Philippines. This article is anchored in the theory of reflective practice. Responses of students produced a Bridge of Traits of Graduate Journalism Education that illustrates these roles and purposes of graduate studies. This Bridge of Traits also entered into the theory-and-practice discussions, not to mention that this bridge represents respondents’ efforts to connect their personal, academic and professional milieus and aspirations as journalists. Making these connections is done within the realm of journalism’s theory-practice continuum, which, as respondents surprisingly articulated, is important, complementary and applicable. Respondents’ views offer hope that university-based journalism programmes can run viable graduate journalism programmes implementing several elements in pedagogy and substance that espouse a spirit of critical reflective practice in journalists. They aspire to new perspectives and approaches in the teaching, study and practice of journalism.
Debates on journalism education persist, especially if the subject matter is based on the 'theory-versus-practice' issue (Josephi, 2015). Journalism is said to be an industry skill that professional journalists can teach students, and many journalism schools are then being encouraged to adjust their curricula according to the required skills of the industry. However, journalism schools housed in universities are centres of research and knowledge. Not only should journalism teachers impart industry-required skills, they also must do research and wear an academic's hat in analysing issues and trends in the news media (Barkho, 2013). This theory-versus-practice debate has reached not only areas such as pedagogy but even the issue of what works are (un)recognised as 'journalistic' and as 'academic' by both parties (Murthy, 2015; Woolley, 2015; Tanner, 2015; Chua, 2015; Anuar, 2015; Duffy, 2015; Kemper, 2015; Mason, 2015). Even the recognition of journalism as a field of study (i.e. journalism studies), while said to be still 'young', is being questioned (Franklin & Mensing, 2011).
This debate takes into context how countries train students in journalism. Some countries, such as the United States, have accommodated journalism degree programmes in universities. In other countries, professional institutes and news organisations, or companies, train journalists and grant degrees (Josephi, 2015) in the absence of educational institutions offering journalism degree programmes. Another context here is how a country's news media operates, as well as the levels of freedom of expression in the country.
Many journalism programmes in universities worldwide have followed Western models. However, some of these models may not work (e.g. the role of journalism in a democracy) given the varying status of the news media and of freedom of expression and of the press in different countries. These internal and external developments impact on how journalism education is delivered (Josephi, 2015; Opiniano et al., 2015). So continue the never-ending calls to re-assess, re-invent and re-configure journalism education (e.g. Mensing, 2011; Webb, 2015). Now, media platforms have forced news organisations to connect stories from different parts of the world to a global audience. Journalism schools living in isolation will be disconnected from global trends in journalism education and even from other journalism educators and scholars who can help each other out. News media systems also differ across regions and countries, providing an interesting model of how the professional model of journalism may be taught given varied socio-political milieus (Josephi, 2007).
These developments have led to the rising globalisation of journalism education. Some countries now offer graduate programmes in journalism on top of the bachelor's programmes that have defined the 'explosion' of journalism education worldwide (Self, 2015; Josephi, 2015). The theory-practice debates surrounding journalism education have now reached graduate levels. The offering of master's or doctorate degrees in journalism is a young phenomenon, and this may be new terrain both for universities/professional institutes and for the field called Journalism Studies. Meanwhile, aspiring and current journalists may be looking at graduate education as a passageway to discover-and to know further-the world of journalism. But why do these journalists take up graduate studies in journalism? What role and purpose does this graduate programme experience have for the student? What do these graduate students think of the theory-versus-practice debates in journalism education? With journalism as a 'profession' and as a 'discipline' still finding its place in a university setting, studies on graduate journalism education remain scarce.
This phenomenological article seeks to find out from Japanese and Filipino graduate journalism students the roles and purposes of journalism graduate studies relating to them. The article will contribute to the few studies on graduate journalism education (Carpenter, 2008). But this study offers not just non-Western views about journalism education; it will also be a modest attempt to see how graduate students' views relate to similarly-situated conditions (be it the graduate students or the journalism schools) in developing and developed countries, in the spirit of global journalism education.
Theoretical background
Review of related literature and studies

a. Journalism education and the theory-versus-practice debates

Journalism education has come a long way since a first degree programme in the United States was offered in 1869 (Josephi, 2015). The offering of journalism programmes past and present is owed to the growth of media outfits and the resulting demand for trained professionals in individual countries; the dramatic increase in reader/viewer/listenership (Josephi, 2015); and, more recently, technology and the internet.
The approach to journalism education has also evolved. Journalism education has for a long time been training future journalists of a country's news media industry with curricula aligned to industry needs. However, since many of these journalism programmes are housed in universities, the journalism programme must now consider operating under the usual three-fold mission of a university: teaching, research and service. This is where the theory-versus-practice debates escalate, leading to 'disdain', 'misunderstanding' or 'miscommunication' between and among professional journalists, journalism educators and journalism scholars (see Zelizer, 2004). Meanwhile, Journalism Studies emerged as a field of study. Scholarly journals on journalism became the outlets for scholars to further the theory-versus-practice debates. Even if Barbie Zelizer's (2004) book Taking Journalism Seriously had attempted to bring journalists, journalism educators and journalism scholars to the table and let them see mutual benefits from each other's views, recent works provide evidence of continued debates (Anuar, 2015; Barkho, 2013; Clarke, 2010; Lugo-Ocando, 2015; Marsh, 2015; Ray, 2014).
One scholar who has put forward an all-encompassing vision for journalism education is G. Stuart Adam. Journalism's daily grind and the university's intellectualism must both be taught, Adam wrote. Students' reporting, writing and research skills may need to be complemented by disciplinal and specialisation subjects (e.g. history, law, ethics, economics, political science, sociology, language studies) so that written outputs are sprinkled with substance coming from the influences of these disciplines (Adam, 2006).1
Another established journalism scholar, Stephen Reese, affirms Adam's insights in the context that academia and the industry are partners-and journalism's 'intellectual ethos' can be nourished by this academe-industry collaboration. In Reese's own words (1999, p. 90):

Partnerships can be productive relationships and (can be) necessary in tackling complex problems, and joint ventures a common fixture in corporate life. It must be clear, however, what one is getting out of the relationship. Academia may properly be a partner, but it should not become a mere client of the corporate world or the professions. Educators must think through what they are about, especially in journalism with so many constituencies. For all of its faults, the university provides a valuable source of leadership for society and for journalism that cannot be replicated elsewhere…

Adam and Reese, as well as other journalism education scholars like Donica Mensing (Mensing, 2011; Franklin & Mensing, 2011), put forward these insights so as to ease the theory-and-practice 'tension'. This debate puts people 'between a rock and a hard place', as the late American journalism educator James Carey expressed at the 1972 conference of the Association for Education in Journalism and Mass Communication (AEJMC) in the US.
b. Graduate journalism education
Graduate education for journalists serves to 'retrain journalists' who entered the industry and to help undergraduates seek additional specialisation and skills.2 Folkerts (2014) observes that these programmes 'had for many years been given scant attention by industry leaders, academics or associations that formed around journalism education' (p. 287). Yet some of the famous journalism schools are graduate-level programmes, such as Columbia University's in the US.
Basically, a 'higher educational level' for the journalist can be a 'very formative type of education' for journalists (Schultz, 2002, p. 225) as they analyse the works and outputs of media content, the professional conduct of journalists, and other journalism-related phenomena. Journalism degree programmes at the master's level can be either professional degrees (their capstone being an investigative report, a narrative journalism story, or other related outputs) or academic degrees (with a master's thesis as the culminating requirement). This, even as Soloski (1994, p. 6) feared that 'professional education' leading to specialisation will see students being taught skills 'that limit, rather than expand, their opportunities'.
Then the theory-versus-practice debate ensues at the graduate level. And James Carey's 1972 plea was resurrected by three former deans of top American journalism schools (in Educating Journalists: A New Plea for the University Tradition). The Columbia report focused almost entirely on American journalism education, but there is recognition of the theory-practice tension in journalism programmes worldwide (there is 'no obvious solution to the tension between … university and newsroom cultures' [Folkerts, Hamilton & Lemann, 2013, p. 59]). But basically, Educating Journalists affirmed the treatises of Adam (1989) and Reese (1999): graduate journalism education in a university setting is important-and both theory and practice can benefit:

…(Graduate journalism education) is the place where the fundamental questions that have dogged our field from the very beginning are most likely to be resolved. That is not only because of the prestige of graduate education, but also because graduate programs have to offer a much more complete education: they do not operate under the traditional undergraduate division of skills education inside the journalism program and most of the rest of a student's education elsewhere in a university. Graduate programs claim to turn an educated person into a professional journalist. …Journalists perform a socially-essential and intellectually-challenging role that ought to merit inclusion in the pantheon of professions. Businesses need to establish themselves only in the marketplace. Professions must also establish themselves in universities, in professional schools of their own that are deeply involved in the larger academic enterprise. That is why… it is especially important that journalism schools take the fullest possible advantage of our university location. If they can do that, all of journalism will benefit. (Folkerts, Hamilton & Lemann, 2013, pp. 59, 64)

Under such a vision, there can be hope that the divides between theory and practice 'must be regarded at one and the same time as vast, but also as ultimately bridgeable'.

c. Studies on graduate journalism education and its students

Studies on graduate journalism education are a mix of scholarly commentaries and empirical research. Most of these studies are West-centric, pointing to the scarcity of related studies from other mediascapes. Students, their published research works and the journalism schools themselves have been these studies' units of analysis.
Graduate education is a welcome opportunity for students, akin to how undergraduates enjoy journalism education. Saalberg's (1970) survey of 43 master's degree-granting journalism schools showed that the degree programmes are welcome opportunities for these students to learn basic and some advanced skills in journalism that they lack.
Undergraduate and graduate education/degrees may or may not have influenced their views on the roles of journalists. O'Boyle and Knowlton (2015) compared 'postgraduate students' from Dublin, Ireland and Amman, Jordan on their entry into journalism through graduate education. Findings reveal differences in how students view themselves; their roles (e.g. activist, neutral) are impacted on by the culture of their countries' journalism. Hanna and Sanders (2007) compared undergraduate and graduate-level British journalism students' views and news media roles. Results of their study (sample of arriving students: N=291; sample of students who graduated/completed: N=208) showed little evidence of students' attitudinal change during the time that they were students. The results may be attributed to personal and family backgrounds, and to students' exposure to British news media culture. Schultz (2002) attempted to determine the characteristics of journalists who went to graduate school versus journalists who are college graduates only. Based on two US surveys (1992 and 1996), results showed few differences in those students' perceived influences in education, journalistic role concepts and audience perceptions. But journalists with a graduate degree, Schultz found, were more likely to work for larger news organisations and to support an interpretative role by journalists.
Graduate education in journalism, not surprisingly, had pushed students to do research apart from journalistic writing. Christ and Broyle (2007) did a benchmark study of graduate education at American universities' communication and journalism programs (N=40). The graduate programs surveyed appear to have prepared students for research and teaching but not for community service. Carpenter (2008), for her part, looked at how graduate students as authors or co-authors (N=723 students writing 543 articles from ten journals over a nine-year period) of scholarly journal articles got published in the top journals.
Graduate education can also influence students' ethical practices. Valdez (2013) studied the motivations of Asian students in pursuing a Philippine university's master's program (N=63). Using a professional competence model as the framework, the study found high knowledge gain in ethical decision-making by students. Ethics had the greatest improvement in students' knowledge acquisition from the graduate journalism programme, and the ethics course was the most useful course for students' journalistic work.
These previous studies had looked at the schools' and graduate students' profiles, their views and attitudes on the roles of journalists, and their motivations to pursue graduate education in journalism. Only one study employed a qualitative design (O'Boyle & Knowlton, 2015), although much remains to be researched. One area that remains to be studied is the role and purpose of students' pursuit of graduate education in journalism. Are these students realising how a theory-practice divide may be helpful to their education and, for some, their eventual pursuit of a career in journalism?
Theoretical framework

Susan Greenberg (2007) first wrote about using the Theory of Reflective Practice by Donald Schön (1983) as a solution to the theory-practice divide in journalism education. Schön's theory of reflective practice is being used as a theoretical anchorage for this article.
Reflective practice is said to be a process of learning through and from experience towards gaining new insights of self and/or practice (also in Finlay, 2008). Reflective practice is said to involve examining assumptions of everyday practice; it also makes individual practitioners self-aware and critically evaluative of their own responses to practice situations. The point of reflective practice is that the student/practitioner 'recapture(s) practice experiences and (mulls these) over carefully in order to gain new understandings and so improve future practice' (Finlay, 2008). A host of models have modified this theory of reflective practice, said to be an outcome of the Experiential Learning Theory (ELT) of David Kolb (1984).
A related model that stems from Schön's reflective practice and Kolb's ELT is Graham Gibbs' Reflective Cycle (in Finlay, 2008) (Figure 1). This model proposes the enrichment of theory and practice by each other 'in a never-ending cycle' (i.e. it is iterative). Learners here take six steps: description, feelings, evaluation, analysis, conclusion and action plan. Gibbs' model (Finlay, 2008) carried the following aims: a) challenge one's assumptions; b) explore different/new ideas and approaches towards doing or thinking about things; c) promote self-improvement by identifying strengths and weaknesses and taking action to address them; and d) link practice and theory by combining doing or observing with thinking or applying knowledge.
Journalism, being an industry-oriented field, immerses its practitioners in learning the ropes of the profession. Given the pace of journalism work, there may be little room for journalists to reflect on themselves, their duties, their personal aspirations and their dispositions about the roles of journalists before the public.
Figure 1: Graham Gibbs' Reflective Cycle (Source: Finlay, 2008)

Graduate education by journalists may provide that venue of reflective practice. In a sense, the vision of Folkerts, Hamilton and Lemann (2013) for graduate journalism education thus fits into the theory of reflective practice (Schön) and the reflective cycle model (Gibbs).
Methods
To describe the essence (lebenswelt) of the roles and purposes of graduate journalism education for students, descriptive phenomenology was used. Phenomenology (Husserl, 1970, cited in Wojnar & Swanson, 2007, p. 173) is the 'science of the essence of consciousness formed on defining the concept of intentionality and the meaning of lived experience from the first person point of view'. To be characterised here are the individual and collective experiences of graduate journalism students (N=16) from one Japanese and two Philippine graduate journalism schools.
Qualitative research enables a researcher to fully describe a phenomenon from the perspectives of both the researcher and the reader. For people to understand better, 'they should be provided with information in the form which they usually experience'. In so doing, the depth and breadth of the qualitative data this research gathered will continue to provide information and insights on the phenomenon being studied. This approach to qualitative research thus yields findings that harmonise with readers' experiences (Lincoln & Guba, 1985).
Study site and selection
This study was located at three graduate journalism schools in Japan (Waseda University) and the Philippines (the Asian Institute of Journalism and Communication [AIJC], and Ateneo de Manila University and its Asian Center for Journalism [ACFJ]) (Table 1). Having subjects from developed and developing countries finds similarity to what O'Boyle and Knowlton (2015) did.

a. Profile of the journalism schools

Waseda University offered Japan's first master's degree programme in journalism. This was to address the need for a 'professional journalism school in Japan' (WU01, interview). It is the newspaper companies that train the journalists, with special emphasis on on-the-job training. Waseda started off with certificate programmes in science and technology journalism, environmental journalism, medical journalism and political journalism (WU02).
Ateneo de Manila University's Asian Center for Journalism (ACFJ) was founded under the School of Social Sciences. With financial support from the Konrad Adenauer Foundation since the centre began in 2000, journalists across the continent-especially from South and Southeast Asia-are the ACFJ's target, and its curriculum is rooted in the Asian tradition (AdMU01 and AdMU02, interviews). The Asian Institute of Journalism and Communication (AIJC) is a professional institute that was formed as a graduate school of journalism in 1980, working then with the Journal Group of Publications in Manila to offer a master's degree in journalism in the 1980s. AIJC stressed development journalism, which helped in the transformation of the centre into an educational institution-cum-consultancy group that does commissioned research on communication and journalism issues in the Philippines and other parts of the world (AIJC website; AIJC01, interview).
b. Profile of the respondents
This study interviewed 11 graduate students from Japan and five from the Philippines, as well as the programme heads and some faculty colleagues of the master's programmes in journalism (Table 2). Most of the respondents are new entrants in journalism education, especially given that journalism in Japan is taught as a master's degree programme. While the Japanese students were taking their internship at the time of the interview, three of the five Filipino graduate students have ongoing news media experience. Having a limited number of respondents from the Philippines is admittedly a limitation of this research, since Filipino working journalists are cornered by their daily reporting duties and have little or no time to be interviewed.
Homogeneous sampling was employed here since the participants are all graduate students who may have broadly similar experiences. But maximum variation sampling was also considered for this paper, since participants have diverging forms of experiences as seen from their backgrounds prior to and during their entry into graduate school.
Procedure and research instrumentation
The Waseda and AIJC students were interviewed in batches of three. The Ateneo de Manila students were interviewed individually given the difficulty of getting an agreed schedule for students who are currently full-time journalists.3 Interviews conducted were free-flowing and their answers were transcribed with their consent. Statements in Filipino were carefully translated, interpreted and checked in order to remain faithful to the original meaning of their answers.
Respondents' sharings and musings revolved around the key questions for this research: a) personal background prior to entering graduate school; b) purposes for taking up graduate journalism education; c) views on the theory-and-practice debate that these students immerse themselves in while studying and while doing media work; and d) roles of graduate education for the said student.
Mode of analysis and ethical considerations
To capture the essence of the phenomenon, Colaizzi's (1978) seven-step method of phenomenological data analysis was used. The data were read and re-read as selected verbalisations from the 16 respondents helped collectively describe the commonalities of respondents' views and experiences. Condensed meanings of the significant statements led to the categorisation of codes, sub-themes and major themes.
Cool and warm analyses, facilitated by the use of a dendrogram, were done in order to capture the central meaning of respondents' experiences. Themes that emerged from their answers were labelled as truthfully and accurately as possible, with each major theme assigned a metaphor. In turn, an outcome space was developed as the result of this phenomenological research (in Larsson & Holmström, 2009).
Member checking and critical friend techniques were done to validate the data, especially when assessing their trustworthiness. The researcher also assured respondents that their identities would be kept confidential given the consent they gave.
Findings
Five themes emerged from the articulations of the 16 Asian journalism students' meanings of the roles and purposes of graduate journalism education. Looking at how the themes emerged, respondents took cognisance not just of their personal realisations while studying but also of their understanding of the theory-and-practice relationship in graduate journalism education.
An overall look at respondents' answers, and the contexts where they are situated, can be likened to a truss bridge. A truss 'is a triangulated framework of elements that act primarily in tension and compression' (Tata Steel Construction, n.d.). This type of bridge is a common design in many parts of the world. Like any type of bridge, the structure deals with tension (a force that pulls materials apart) and compression (a force that squashes materials together). Vehicles of various loads passing through truss bridges provide those tension and compression forces, which is why truss bridges are considered 'expensive to fabricate' (Tata Steel Construction, n.d.).
Having said that, graduate journalism education can be likened to a truss bridge wherein the compression forces are students' milieus as graduate students (personal, academic and professional), and the tension forces are the theory and practice perspectives of journalism. But the bridge, overall, tries to balance itself and, eventually, remain sturdy given the compression and tension forces. In the same vein, graduate journalism education bridges the personal, academic and professional milieus of graduate students and their dealings with the usual theory-and-practice tensions in journalism.
Thus, the articulations of the respondent-graduate students can be summed up into an outcomes space which the researcher calls a Bridge of Traits of Graduate Journalism Education (Figure 2). This Bridge of Traits carries five major themes: 1) Insights maker (IM); 2) Context provider (CP); 3) Role agent (RA); 4) Capacity builder (CB); and 5) Individual booster (IB).
Insights maker: Graduate journalism education's intuitive nature
Respondents viewed graduate study as a provider of additional knowledge. This not only covers beginning knowledge for first-timers but also advanced knowledge. Beginners' responses reveal their interests:

I am interested about the methods: how to research, how to conduct the interview. I am also taking social psychology classes, so I want to know the methods, how to see this society, how to see this world, how to see this problem. (SP)

Anybody can write. But if you have a background in journalism, you have a guide. You are guided because you have books with instructions to follow, as well as methods and rules that you should employ and follow. Since I have no background in journalism, this is very new to me. It's like, if you want this field, you'll follow it. (AC)
Graduate journalism education also informs students of advanced principles of journalism, as one respondent from Ateneo realised:
There are some things that I can't learn as a print reporter producing two stories a day (your bread and butter). How much can you really learn about a specific industry? But it is the same people who don't read other publications, like Financial Times. You will even learn from the way The New York Times is doing their reports. You will see your limitations: Why can't I write like them? Why can they do those things? So it's boastfulness, for me, that others think journalists do not need to pursue graduate studies. You will see that in your first few days of reporting, and in the way they approach their work. Especially in terms of ethics: They don't understand conflict of interest. If you don't have that classroom training, or nobody tells you, you will not mind those kinds of issues. (MT)
Figure 2: Bridge of Traits of graduate journalism education
Context provider: Graduate journalism education's explanatory nature

Journalism in general elucidates the contexts of the stories being reported. In the same boat, respondents think graduate journalism education carries an explanatory nature. Prior to understanding the situations journalists face, graduate journalism education reminded one respondent of the duty journalists have to the public: 'I think (journalists) explain the difficult things in simple ways. It's a very important role' (SP).
Having said the above, graduate journalism education explains the situations journalists face. Such explanations were what one respondent, a veteran journalist, sought:

I think graduate study is a big help so that you get re-oriented, especially when you have long been in the industry and you are always plunged into work. You can see changes in the media industry but these are not explained to you. I think when you go back to school and take a master's degree, you are being refreshed, re-oriented and it grounds you about our profession… There are many changes in the industry that, through some courses, you can understand better how to face these changes. (LL)

Respondents think graduate journalism education also contextualises the encounters between theory and practice. On this score, some respondents acknowledged the following surrounding these encounters: 1. Theory is a pre-requisite to practice ('…When I came to Waseda, I realised the important things in journalism are the theories').
Role agent: Graduate journalism education's designatory nature
Respondents do acknowledge the roles journalists play. The roles that graduate journalism education conveyed to students cover individual journalists (individual authority) and journalism as a whole (societal functions).
Beginners especially marvel at these roles ascribed unto journalists:

I think studying journalism is like, you can connect to the newest digital things and you can use social media to express your opinions and influence people. I want to connect to people and express my opinion very smoothly. (KT)

I have to tell the information, the most important things. We need to try to find things that many people don't know. Only a journalist can do that. (RA)

The students also acknowledged the societal influence journalists provide. Even if students come from countries with diverse media systems (from restrictive to free), the respondents' recognition of the societal roles of journalism leans towards being the Fourth Estate:

Journalism is a watchdog for the people… to change society. Journalists report to citizens and to the origins of the information-editing the information. (TK)

For me, my first impression of journalism is criticism. Journalists have deep introspection of this world, about this society. (TC)

…the MA program we are taking in Ateneo grounds you more theoretically: What is our role in society? What is journalism?... When you are doing your job, it reminds you why you were there. You are the touch point between the people and the people with power. (MT)

Capacity builder: Graduate journalism education's developmental nature

Two sub-themes under this role emerged from respondents' answers: graduate journalism education sharpens students' journalistic skills and develops a sense of critical analysis unto them.
As graduate-level journalism study still tackles reporting, writing and editing skills, the graduate programme helped students refine these same skills, especially geared, say two respondents, to senior and supervisory roles in news organisations (MT and LL). The new entrants learn new skills but with some value added, given that students are at the graduate level. There is even a 'mature' skills set that some respondents claimed to have learned from graduate school. As some of them articulated:

The Ateneo curriculum has that political aspect to journalism. So I think I can sharpen my skills there [I'm proficient in business]. If I work in a different job, with different bosses or beat… I want to be able to do it properly. In a way that makes my stories more informative and better for the readers. I want to add more substance in the story, to make people care about it. (MT)

I know everyone can write, everyone can just talk to people. But you… talk to people and write articles in a mature way, (telling) a mature citizen. It [no graduate schooling] did not help journalism, or improve the quality of journalism. (AC)

But much has been said on the critical analysis skills that graduate journalism education imparts. It is in this respect that theory and practice complement each other, which respondents acknowledged. This theory-practice complementation differs from the earlier sub-theme on theory-practice encounters, in the sense that critical analysis is applied in the synergy of theory and practice.
1. One thing some respondents said is that the graduate programme provides theoretical grounding for journalistic practice. This grounding was 'expected' in graduate study, say other respondents:

I realise that it (journalism) is not easy and you need to know the theory behind it, as well as the practical side. So yeah, you definitely need both theory and practice. (MG)

I expected theoretical and practical practice [sic] in journalism… we are now (taking an internship) for a newspaper company and so I think we needed theory. (RY)

2. Another insight from some respondents is the need to operationalise both theory and practice in conducting journalism:

I think balance is very important for me because… journalists are the experts in this field so they can have deep conversations with sources. So I think the theories are important. But the practical (things) are also important because you have to know how to ask questions, how to take photos and interviews. You will look so stupid if you don't know basic knowledge in this field and just interview the expert. (SP)

I think before learning to practice, we should learn something about the theories because we could damage (someone's reputation). Students need to know also the practical tips. But my internship mentor said I should know how to interview, take photos. So he said it is very important to learn something, to learn the theories. (TC)

3. The graduate programme provides a third theory-and-practice perspective in which critical analysis is applied: journalistic practice has its theoretical lenses. As verbalised by a Waseda student:

If you want to be a journalist, you need to understand the society that you live in. And in order to understand the society you live in, you need to understand the theory that goes with it. Like in a democratic system, how does voting work? How do people respond to journalism and the process that goes with that? So if you don't understand that, I don't think you can create news that is relevant. I think theory kind of helps journalists understand that… understand their own society better. (MG)
4. Finally, the critical analysis lessons learned from graduate studies have made some respondents realise that theoretical discussions are a necessity in real-world journalism:

I realised why we need theory; as a reader as well of news, like we not only report but we read the news as well, right? We need to know the theory in order to be critical to (an) article… Like did (the article) use this process? Is that, like, credible? You can't ask these questions if you don't know the theory of journalism. (MG)

If the only training you get is on-the-job training, you won't be able to look at yourself objectively through the lens of theory. This is because you are colored with the company's style and system, you know what I mean? So if you know your theory beforehand, you can have a more critical way of thinking when you look at yourself and other journalists. (IA)

Not surprisingly, theories do clash with daily journalistic practice, and mixing theory and practice 'is crazy' (LL). Nevertheless, theories have also made respondents reflect on and analyse things:

In the future, before we write something, (theory) also helps us in being responsible; there are theories we can get back from, especially the basic ones, to help us know what we should write before we share these to the public. Let's face it: we don't have much time while at work. You will be lucky if someone will explain to you why these things happen right now and why the industry is currently this way. If you take (an) MA, these things will be properly explained to you. (LL)

The theory about journalism taught me to show respect to the power of journalism. Many have taught me that journalism is a very powerful thing… and you have to use that power. (SP)

Individual booster: Graduate journalism education's goal-oriented nature

Notwithstanding the tensions between theory and practice in journalism that frequent graduate students' personal, professional and academic lives, the earlier responses show graduate education tries to balance things out. This is especially in consideration of respondents' individual goals, which graduate journalism education helps boost. Graduate studies in journalism adhere to respondents' self-interest in journalism, their aspirations for career growth, and their pursuit of a sense of accomplishment.
Varied reasons surround some respondents' self-interest in taking the graduate programme. Some found that interest through reading (MG); some linked their interest in journalism with technology (AW); others got the epiphany that journalism is a 'long-term career' (VL). For another respondent, journalism through graduate studies is an ambition:

My ambition is to write, and to have authority, and be credible to people. I think it's also an individual aspiration; if you have the inclination of writing, and you want to make your best on it, you should go to school specialising in it. (AC)

Graduate studies in journalism are also viewed as a jump to the next phase of students' professional lives. One wants to teach (LL) while another recognises that a master's degree may be a plus should he be assigned a higher role in the news organisation (MT).
Another respondent even realised that since journalism is not her undergraduate training, and actual jobs in the industry are limited, the graduate degree is a must: 'I tried to find a job in the media. But in Japan it is very difficult to find jobs in newspaper companies because it is a small industry. So I then decided to go for the master's degree.' (RA)

The training from graduate education can even be an upgrade of the journalism student's skills, which two graduate students recognised:

So I thought, if I'm going to advance in my career, I should be able to offer a potential employer something more than just being a good reporter or whatever. And also we are moving into digital (journalism). So if I have more skills before that transition takes place, or as that transition is taking place, as I think it is now, I'm able to equip myself with better reporting tools, mostly for career advancement. (MT)
Taking a master's degree is a self-upgrade. If you learn more about the media industry, it will put you in a higher plane. You stand taller, especially now that anyone can practically be a journalist. (LL)

And if self-interest and career growth are being buoyed, the graduate journalism programme helps students develop that sense of accomplishment. The degree programme helps in students' aspirations, as one respondent put it:

Studying a master's degree in journalism 'is a requirement for me: if you want to write with authority, credibility and inspiration. Writing has no age limit. Journalism is an art, isn't it? And it has no age limit. As long as you can write, as long as your brain is functioning, nobody can stop you. That's why I am here (in journalism master's programme)… This master's degree is important for me, because this is another chapter of my life, and maybe… the last chapter of my life. I want to finish this, that's my ambition for now.' (AC)

With regard to the relevance of graduate journalism education, one respondent averred that simply being trained in the basic skills of reporting and writing may not be enough. Graduate journalism education then comes into the picture as an ego booster:

Some people think there is no need for… journalism education. But it builds confidence. It is a really good time to build confidence; when you know that you have more knowledge, you have more courage to interview people or to write an article. So it's (journalism graduate programme) really a good place to nurture your confidence. (MG)
Discussion
This study enumerated Japanese and Filipino graduate journalism students' views on the roles and purposes of taking a journalism master's degree programme, as respondents have recognised the 'tensions' surrounding theory and practice in journalism. The respondents' answers brought forth a 'bridge of traits' on how graduate journalism education helps these students, be they beginning or mid-career journalists. The use of a bridge to illustrate respondents' answers phenomenologically (i.e., the outcomes space) respects students' overall disposition: theory and practice complement each other, while their graduate programme plays the roles of insights maker, context provider, role agent, capacity builder, and individual booster.
The contribution of the Bridge of Traits of Graduate Journalism Education is that students are providing perspectives on how graduate journalism education may look. From a programmatic standpoint, respondents' answers reinforced, in more ways than one, the elements of an ideal programme in journalism that Folkerts, Hamilton and Lemann (2013) outlined. There can be intersections between what the journalism deans wrote and what the respondents of this study answered (Table 3), as the fifth trait, individual booster, personalises the place of graduate journalism education for the student.
Observations on graduate journalism education as insights maker and capacity builder affirm Saalberg's (1970) findings that non-journalism bachelor's degree holders find value in the skills they learn in graduate journalism education. While many of this study's respondents do not hold journalism bachelor's degrees, these students do understand the demands of journalism practice and analysis at both beginner and advanced levels. And since the degree is a graduate degree, these respondents can become 'more solidly grounded' in journalism and media (Saalberg, 1970).
Respondents' answers on the role agent trait only confirm that the roles of journalism and of journalists are a commonplace discussion in journalism education. It is but inevitable for respondents to share their feelings about the roles journalists play. Journalists' roles were also questioned by Schultz (2002) and O'Boyle and Knowlton (2015). Interestingly, some respondents' answers reveal the influence of their home countries' news media systems. Some answers even affirm a desire by respondents to feel the Fourth Estate disposition of journalism, at least through graduate studies.
The context provider trait is interesting. This set of answers came about as a result of the usual training that journalism provides: that stories on events and issues be contextualised. It is under this trait that the theory-and-practice discussion continues. Respondents were candid in their young views surrounding this debate, given the context that journalism is intrinsically a practice-oriented field.
Respondents may have understood theory as things that do not only make sense of what is happening around them. 'Theory' or 'theories' are those that have been taught in school, like definitions, principles, concepts or the theories themselves. What adds up here is the setting, the journalism school in a university set-up: the learning of graduate-level journalism now goes beyond the skills being taught, traversing critical analysis of the writings and actions of journalists, and the interactions between journalism and other stakeholders in the society the journalists cover (also in Chapman & Papatheodorou, 2004). This can be gleaned from verbalisations such as undergraduate training in political science that can be used in reporting and writing (MY), or even the news media system and political milieu that influence journalists' work (MG).
Scholars have engaged in somewhat heated intellectual arguments about how theory and practice collide in journalism and journalism education (Barkho, 2013; Bacon, 2011; Greenberg, 2007; Robie, 2014, 2015). But the responses of these graduate students reveal their simplistic, but meaningful, understandings of how theory and practice complement each other. Considering theories as pre-requisites (AW, MY), guides (MY), subconscious mindsets (RY) and even necessities (MG, IA) is a modest leap of faith compared with how scholars abhor either theory or practice, or try to bridge these two worlds but have a hard time doing so. If such answers came about, the three graduate journalism schools and their courses may have been successful in balancing theory and practice, with journalists learning two categories of knowledge: academic and professional (de Burgh, 2003). Except for two respondents who have more than five years of journalism experience (reporting, editing), the rest of the respondents are newcomers to the field. This length of experience influences how Schön's Theory of Reflective Practice and Gibbs' Reflective Cycle are being applied. A future study, of seasoned journalists who are also graduate journalism students, could see how these theories make journalists exercise critical self-reflection (Chapman & Papatheodorou, 2004). But for now, some verbalisations by respondents show efforts by these graduate students to reflect on the work of journalists, on trends in the profession, on the implications of journalists' articles and roles, and on journalism's place in society. Graduate journalism school even answers queries on trends in the profession that a respondent's daily immersion in the news media industry cannot, or may not have the time to, answer (LL). This shows that graduate journalism education is a reflective place for him to understand the news media industry further.
Sure, some theories do not apply in real-world journalism. Given also that the majority of respondents are newcomers, their verbalisations may have lacked the insight that practice can also influence theory (Chapman & Papatheodorou, 2004). But graduate journalism school became a watering hole to ponder on their own skills and on the issues facing journalists and their practice (or even the journalism sector in their own countries).
In the formality of graduate journalism education under a university setting, the individual booster trait provides a personalised feel to the discussion. Not surprisingly, graduate journalism education for some of these respondents is a ticket to career advancement and improved self-confidence. But for some respondents to say that graduate journalism education provides a stamp of authority (AC) and legitimacy (BD) even to the newcomer will be an endearing statement on the part of the journalism programme head.
The researcher recognises the limitations of having only 16 respondents for this phenomenological study. Having more respondents, especially practising journalists, may have provided further insights. Nevertheless, the Bridge of Traits of Graduate Journalism Education is a humble contribution this paper brings to the study of journalism education worldwide.
Conclusion
There is a dearth of literature on the roles and purposes of taking up graduate education in a field, journalism, that is highly associated with its practice. This phenomenological study thus attempted to find out why these beginning and seasoned journalists from Japan and the Philippines take up graduate studies, and what the goal of this level of journalism education is for them. Answers from 16 Japanese and Filipino graduate journalism students yielded the Bridge of Traits of Graduate Journalism Education that illustrates these roles and purposes of graduate studies. This bridge of traits also entered into the theory-and-practice discussions that have frequently divided scholars, educators and professionals in the journalism field.
Reflective practice in education and learning was evident in some answers by respondents. Their answers may be based on a simple understanding of the things they have learned in graduate school, or on experiences acquired from doing daily journalism work that were brought inside the classroom. But students recognised the possibilities and tensions of blending theory and practice in the field of journalism.
That said, this Bridge of Traits of Graduate Journalism Education represents respondents' efforts to connect their personal, academic and professional milieus and aspirations as journalists. Making these connections is done within the realm of journalism's theory-practice continuum which, as respondents surprisingly articulated, is important, complementary and applicable. Such articulations provide hope to visions of graduate journalism education, within or outside a university: a viable graduate journalism programme implements several elements in pedagogy and substance (Folkerts, Hamilton & Lemann, 2013) that espouse a spirit of critical reflective practice in journalists (Greenberg, 2007; Chapman & Papatheodorou, 2004) and that aspire to new perspectives and approaches to the teaching, study and practice of journalism.
This research was done aiming at moderatum generalisation, and further refining the Bridge of Traits of Graduate Journalism Education will be helpful. Nevertheless, respondents' verbalisations are indicative and can lead to further studies on other aspects of graduate journalism education: programme delivery, outcomes achieved, the influence of graduate studies on daily journalism practice, how theories may improve journalism practice, and how trends from journalism practice may advance journalism knowledge and theories.
It is recommended that the three journalism schools determine how a reflective education (de Burgh, 2003) can enhance graduate programme delivery. For now, reflective practice is a workable approach to bridging the theory-practice divide (Greenberg, 2008). Reflective practice can even open up dialogues between professionals, scholars and teachers/educators (Zelizer, 2004), leading to a more integrated set of views on journalism as a profession and as a discipline. Reflective education can hopefully lead to producing master's degree-bearing journalists who can improve daily journalism practice, journalism knowledge-generation, or both. This can be a curricular agenda for journalism schools worldwide with graduate programmes.
But seeing the graduate student achieve personal fulfilment from completing this master's degree is, for the meantime, a source of triumph for the journalism school. What these journalism schools hope next is that their products make a difference in the field of journalism.
Notes

1. In an earlier piece, Adam (1989) affirmed an earlier statement of Joseph Pulitzer: journalism is a literary craft in itself, so literature must be taught in the courses.
2. Graduate education for journalism includes taking a PhD in journalism and related fields, with doctoral education preparing graduates to conduct research (Christ & Broyles, 2007). However, doctoral studies as a form of graduate or postgraduate education are not included in this article. Christ and Broyles' paper included master's and doctoral students as respondents.
3. The Filipino students of ACFJ were emailed several times requesting an interview at places and times of their convenience.They did not reply to the researcher's repeated email requests, even to backdoor requests made through classmates who were eventually interviewed.
Table 2: Profiles of interviewees
Polygamy, sexual behavior in a population under risk for prostate cancer diagnostic: an observational study from the Black Sea Region in Turkey
ABSTRACT Aim: Although prostate cancer (PCa) is the most common cancer type in men, a modifiable risk factor has not yet been established. In our study, we assessed the relationship between the number of sexual partners, age of first sexual experience and age of first masturbation and prostate cancer incidence. Materials and Methods: At Ordu University Department of Urology between January 2013 and September 2016, patients who underwent prostate biopsy due to PSA elevation or palpation of a nodule on rectal examination were evaluated. The frequency of sexual relations at younger ages and at present, age of first masturbation, age of first sexual debut, and total number of sexual partners were recorded. The correlation between the obtained data and PCa frequency was evaluated. Results: The study included 146 patients with PCa identified on biopsy and 171 patients with benign biopsy results who answered the questions. 66.7% of those whose biopsy results were benign and 40.6% of those with cancer had only one sexual partner. The median number of sexual partners was 1±4 (1-100) in the benign group and 2±6 (1-500) in the malignant group (p=0.039). There was a negative correlation between age of first sexual debut and number of partners (r: −0.479; p<0.001). Conclusion: In our study, it appears that there may be an association between the number of sexual partners and prostate cancer in the patient group with PSA level above 4ng/mL. Avoidance of sexual promiscuity or participation in protected sex may be beneficial to protect against prostate cancer.
INTRODUCTION
Prostate cancer (PCa) is the most common cancer type in males and ranks second in terms of cancer-linked deaths (1). Though this is a very common cancer, the etiology is only partly understood. Age, ethnicity and family history are known to be important for the etiology (2). Studies of migrants have reported that environmental and lifestyle factors may be important (3). However, the majority of these factors cannot be changed. A modifiable risk factor for PCa has still not been definitively determined. Studies have related PCa risk with factors linked to a more active sexual life (4). Sexual activity includes various dimensions like sex of the partners, number of sexual partners, frequency of ejaculation and age of first sexual debut. It is thought that all of these may affect PCa development in negative or positive ways to varying degrees. Some studies have reported that sexual behavior and sexually transmitted disease may play a role in the etiology of prostate cancer (5). In our study, we aimed to assess the correlation between number of sexual partners, monthly frequency of sexual relations, age of first sexual debut and age of first masturbation with the incidence of prostate cancer.
MATERIALS AND METHODS
Patients undergoing prostate biopsy at Ordu University Medical Faculty Urology Clinic from January 2013 to September 2016 were assessed. Patients with PSA > 4 ng/mL or a nodule felt during digital rectal examination were referred for biopsy. All biopsies were completed in 12 quadrants with TRUS guidance. All patients provided information to a doctor during an interview. Informed consent forms were obtained after the patients had been informed that their data would be kept confidential and used for research purposes. The answers to questions about frequency of sexual relations when young and currently, age of first masturbation, age of first sexual experience and total number of sexual partners were recorded. The correlation of the obtained data with the incidence of PCa was assessed. Patients who did not wish to answer the questions were excluded from the study. The study included 146 patients with PCa identified on biopsy and 171 patients with benign biopsy results who answered the questions. Patients lived in the same geographical region, and had similar lifestyle and nutritional habits. All males in our study group were circumcised. Our study is an observational study and was approved by the Ordu University Local Ethics Committee (2017/167).
Statistical analysis
Data were tested for normal distribution with the Kolmogorov-Smirnov test. Data with normal distribution were compared with the independent groups t test (Student's t), while data with non-normal distribution were compared with the Mann-Whitney U test. The Spearman correlation coefficient was used to assess the correlations among groups with normal distribution. Results with p<0.05 were accepted as statistically significant.
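As an illustration, a minimal Python sketch of this testing sequence using SciPy follows; the partner-count and age arrays below are illustrative placeholders, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient counts of sexual partners in the two biopsy groups
benign_partners = np.array([1, 1, 2, 1, 4, 1, 3, 1, 1, 2])
malignant_partners = np.array([2, 1, 5, 2, 6, 1, 3, 2, 8, 2])

# Kolmogorov-Smirnov test against N(0, 1) after standardizing, to check normality
for name, x in [("benign", benign_partners), ("malignant", malignant_partners)]:
    z = (x - x.mean()) / x.std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")
    print(f"{name}: KS p = {ks_p:.3f}")

# Partner counts are heavily skewed, so the non-parametric comparison is used
u_stat, u_p = stats.mannwhitneyu(benign_partners, malignant_partners,
                                 alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {u_p:.3f}")

# Spearman correlation, e.g. age of first sexual debut vs. number of partners
debut_age = np.array([18, 20, 16, 19, 15, 21, 17, 18, 14, 19])
rho, rho_p = stats.spearmanr(debut_age, malignant_partners)
print(f"Spearman rho = {rho:.3f}, p = {rho_p:.3f}")
```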
RESULTS
Of 1072 patients with PSA values on record, 317 who had completed prostate biopsy and had the necessary information available were assessed. Of these patients, 171 had benign biopsy results, while 146 had PCa identified. The mean age of patients in the benign group was 64.38±8.20 (41-92) years and in the malignant group was 67.39±9.34 (43-97) years. The median PSA values were 6.55±4.86 (2.28-42.00) in the benign group and 8.60±13.15 (3.26-934.30) in the malignant group. In terms of sexual partners, 66.7% of the benign group and 40.6% of the malignant group had only one sexual partner. The median number of sexual partners was 1±4 (1-100) in the benign group and 2±6 (1-500) in the malignant group. The difference in numbers of sexual partners was statistically significant between the groups (p=0.039). There was no statistically significant difference between the two groups in terms of number of relations per month when young and currently, first masturbation age and first sexual debut age. Details of the data and p values are shown in Table-1. There was a negative correlation between age of first sexual debut and number of partners (r: -0.479; p<0.001).
DISCUSSION
Our study found an association between the lifetime number of sexual partners and a positive biopsy for PCa among patients at risk of PCa diagnosis. It is thought that sexually transmitted infections (STI) are a significant factor in the link with PCa. A study by Rosenblatt et al. identified a positive correlation between the number of female sexual partners and PCa risk (6). Studies related to this topic have found that syphilis identified by serologic tests increases the risk of prostate cancer by 1.8 times and that this increase is correlated with the number of sexual partners (7). Two meta-analyses published in 2002 and 2005 identified that sexually transmitted infections increased the risk of PCa (4,8). It is thought that factors like bacterial and fungal infections and injury to the prostate stimulate inflammasome-mediated proinflammatory cytokines, beginning tumor progression (9). The role of inflammation in PCa development has been known for a long time. It is thought that inflammation encourages angiogenesis and DNA damage, causing carcinogenesis (10)(11)(12). A PCPT study showed that inflammation observed in prostate biopsy specimens is related to high grade cancer developing in later periods (13). Proinflammatory cytokines and chemokines released from immune cells aid the formation of neoplastic cells (9). In our study, we believe the relationship between number of partners and incidence of PCa may be due to sexually transmitted infections and prostatic inflammation linked to these infections.
There are studies showing that age of initial sexual activity and frequency of ejaculation play a role in the development of PCa. An increase in the frequency of ejaculation reduces the carcinogenic material concentration within prostatic fluids and intraluminal prostatic crystalloid accumulation, and thus it is reported that frequency of ejaculation may be protective against cancer (14)(15)(16). In our study, we identified that both groups were similar in terms of the markers of ejaculation frequency: first masturbation age, first sexual debut age and frequency of sexual relations when young and currently. Among the data assessing sexual life, a significant difference was only found for the number of sexual partners. The median number of sexual partners and the proportion of those with more than one sexual partner in the malignant group were statistically significantly higher compared to the benign group.
In our study, we identified that as the age of first sexual debut decreased, the number of partners increased. There are different interpretations in the literature related to the effect of age of first sexual debut and number of partners on PCa development. Rotkin et al. reported that an early age of first sexual debut increased the risk of prostate cancer (17). Rosenblatt et al. proposed that rather than an early start to sexual relations, the number of partners was important (6). In fact, the study by Rotkin did not assess the effect of number of sexual partners. Ahmadi et al. reported that the risk of prostate cancer was lower for males who married at a young age (18). The study by Ahmadi was based in Iran. Iranian Muslims differ markedly from Western societies as a result of tight governmental restrictions on open sexual relationships and prostitution as well as cultural and religious fixation on avoiding premarital and extramarital sexual activity. In our study, as the age of sexual debut decreased, the number of partners increased, and we found the number of partners was related to the incidence of prostate cancer. When all these data are assessed together with our results, it appears that the number of sexual partners is important for PCa development.
Our study group comprises a homogeneous group living in a small geographical area. The study region does not have inward migration and is relatively small, so it may not be possible to generalize our results to the whole population. However, it is considered that genetic factors, nutritional habits and environmental factors are effective in the etiology of PCa. An advantage of our study is that the study group comprises people living in the same geographic region, with similar lifestyles, nutritional habits and genetic characteristics. In both our patient groups, the effects of these factors may be accepted as being similar. Additionally, in our country, society avoids open conversation related to sexuality. It is very difficult to obtain information about previous sexual experience from males of adult age. There is a risk that information given on the question forms and in interviews is deficient or wrong. In our clinic, a single doctor works with patients to resolve the concerns of patients worried about the possibility of prostate cancer and to ensure compliance with testing and follow-up during this process. More time is allotted to patients, and they are given confidence that the information they provide will remain confidential. The diagnosis, treatment and post-treatment process for these patients is handled by the same doctor. Due to the trust built up in this environment, we believe the information given is accurate. Thus, it is very hard to record data for this number of patients from a small region. As a result, our study is the first study to investigate this data in Turkey.
There are some limitations to our study. The first is that serologic testing was not performed to assess our patients for STI. As a result, we do not have clear results in terms of previous STI history. However, as the number of partners increases, an increase in the incidence of STI is expected. Studies have shown that experiencing STI clearly increases the chances of PCa. In our study, as the number of sexual partners increased, there was an increase observed in the incidence of prostate cancer. Another limitation is that, though biopsy was benign in the group with high PSA, there is a chance of cancer being detected on second and third biopsies. Although the number of patients with a single sexual partner was higher in the group with benign biopsy results, there was a considerable number of patients with multiple sexual partners in both groups. As the size of the groups was limited in our study, further studies with larger sample sizes may give different results. Since both of the groups consisted of patients with high PSA values, some of the patients with benign biopsy results may develop prostate cancer later in their life and be diagnosed by a second or third biopsy. Therefore, it is better to say "there was a significant relationship between number of sexual partners and prostate cancer detection by first biopsy". We believe it will be beneficial to perform a similar comparison with a control group with low PSA levels and hence a low risk of PCa.
CONCLUSIONS
In our study, it appears that there may be an association between the number of sexual partners and prostate cancer in the patient group with PSA levels above 4ng/mL. No association was found between frequency of sexual relations in youth and at present, age of first masturbation or age of first sexual experience and PCa. Though not clearly revealed by our study, when the literature and our results are assessed together, we hypothesize that avoidance of sexual promiscuity or participation in protected sex may be beneficial to protect against prostate cancer. However, there is a need for more comprehensive studies to confirm this.
The Assessment of Recalled Parental Rearing Behavior and Its Relationship to Life Satisfaction and Interpersonal Problems: A General Population Study
Background: Parental rearing behavior is a significant etiological factor for vulnerability to psychopathology and has long been an issue of clinical research. For this purpose, instruments that economically assess recalled parental rearing behavior in clinical practice are important. Therefore, a short German instrument for the assessment of recalled parental rearing behavior, the Fragebogen zum erinnerten elterlichen Erziehungsverhalten (FEE) [Recalled Parental Rearing Behavior], was psychometrically evaluated.
Background
The impact of parental rearing behavior on child development has been an issue of clinical research for a long time. Perceived parental rearing practices were emphasized as a significant etiological factor in a vulnerability model of psychopathology [1] and connected to a child's general psycho-social development as well as to the social problems of children [2]. Subjects who reported having had supportive, non-rejecting and non-overinvolved parents showed higher psychological adjustment, less social alienation and more life satisfaction [3,2]. However, age and gender moderated these effects, as older individuals idealized their parents' child rearing behavior more than younger ones did [4]. On the whole, male subjects reported more rejecting parental rearing behavior than their female counterparts did [5].
In clinical research and in retrospective studies, most of the empirical results were obtained with the help of two questionnaires. The first one is the Parental Bonding Instrument (PBI [6]) and its clinical version, the Measure of Parenting Style (MOPS [7]). The PBI comprised the two dimensions "care" and "control", and the MOPS consisted of an additional third dimension of "parental abuse" (retest-reliability = .63 to .76 [6]). The second one was the Egna Minnen Beträffande Uppfostran (EMBU [8]) [Own Memories of Child Rearing Experiences], which yielded a three-factorial structure with the dimensions "rejection/punishment", "emotional warmth", and "control/overprotection" separately for the mother as well as the father [9]. The long version of the EMBU displayed an internal reliability of > .70 [9] and the short version an internal reliability of > .72 [10]. Both questionnaires (EMBU, PBI) exist as long versions in German and show good psychometric properties. Economical short versions for clinical application exist in English and German exclusively for the EMBU. The German short version of the EMBU, the Fragebogen zum erinnerten elterlichen Erziehungsverhalten (FEE [11]) [Recalled Parental Rearing Behavior], has already been implemented in different studies examining perceived parental rearing behavior in siblings and in clinical samples with respect to attachment and relationship characteristics [12][13][14][15][16]. However, the psychometric properties of this German FEE short version had not yet been specified in a representative sample. Therefore, the purpose of this study was to evaluate the psychometric properties of this short version based on a representative sample.
The first objective of this study was the specification of internal reliability of this German FEE short version based on a representative German sample. This questionnaire might show a similar range of internal reliability indices as published for the different English EMBU versions.
The second objective of this study was the evaluation of the construct validity of this German FEE short version. Starting from the assumption that psychometric properties and results similar to those based on the English version are replicable, male subjects were expected to report more rejection in parental rearing behavior than female ones, whereas older subjects might idealize the parental rearing behavior more than younger subjects. Furthermore, non-supportive and rejecting parental rearing behavior was expected to be associated with more interpersonal problems and less life satisfaction.
Subjects
The sample consisted of 2,948 subjects constituting a representative sample of the German population, interviewed by a demographic consulting company (USUMA, Berlin) in 1994. The selection of the households by the random-route procedure was based on the register of the political elections of 1994. The randomly selected household member had to be over 18 and a native speaker. The subjects were asked to fill out an extensive questionnaire. The response rate of this survey was 68%. This sample was representative of the main socio-demographic data of the German population (age, gender, city, county and education). The mean age of the sample was M = 47.35 (SD = 17.10, range = 18-92), with 44.2% male subjects and 55.8% female subjects.
The study followed the ethical guidelines of the German professional institutions of social researchers [Arbeitskreis Deutscher Markt- und Sozialforschungsinstitute e.V. (ADM), Arbeitsgemeinschaft Sozialwissenschaftlicher Institute e.V. (ASI), Berufsverband Deutscher Markt- und Sozialforscher e.V. (BVM)], which were implemented to improve upon the German federal law protecting the ethical rights of individuals. These guidelines specify, among other things, how a person has to be approached and treated in the interview and how the personal as well as the collected data have to be handled and stored. The application of the guidelines ensures that studies follow valid ethical standards. Therefore, additional ethical approval was not necessary. The study presented here was approved according to the guidelines of the German professional institutions of social researchers.
Instruments
The EMBU by Perris and colleagues [8] was chosen as the basis for an economical German instrument for recalled parental rearing practices since it has good internal reliability (> .70), is a well implemented questionnaire and appears to be generalizable across cultures [17]. The questionnaire Fragebogen zum erinnerten elterlichen Erziehungsverhalten (FEE [11]) [Recalled Parental Rearing Behavior] represents the only German short version of the EMBU for measuring recalled parental rearing behavior and is not identical to any existing English EMBU short version. Since the factor loadings of the English and German long versions of the EMBU differ considerably, the item selection for this German short version (FEE) was based on the eight highest factor loadings on the scales of the German long version [12]. For the scale "emotional warmth" the items 2, 7, 9, 12, 14, 15, 17, 24, for the scale "rejection/punishment" the items 1, 3, 6, 8, 16, 18, 20, 22, and for the scale "control/overprotection" the items 4, 5, 10, 11, 13, 21, 23 were selected. Even though the two English versions and the German short version of the EMBU [3,10] are based on different item selections, the same theoretical constructs (emotional warmth, rejection/punishment, control/overprotection) could be identified for the English and German versions.
The FEE includes 24 items to be answered separately for both mother and father on a Likert-type scale with the categories 1 (no, never), 2 (yes, sometimes), 3 (yes, often) and 4 (yes, always) (range = 8-32; for an example of the items see [8]). This separate answering style had been evaluated successfully for the original version of the EMBU by a principal component factor analysis [8]. The scales of the FEE were derived factor-analytically as in the English versions [9]: emotional warmth, rejection/punishment, control/overprotection. A high score on the scale "rejection" means highly rejecting and punishing recalled parental rearing behavior. Similarly, for emotional warmth and overprotection, high scores indicate great emotional warmth and overprotective recalled parental rearing behavior.
General and specific life satisfaction was assessed by the Fragebogen zur Lebenszufriedenheit (FLZ) [Life Satisfaction Questionnaire] by Fahrenberg, Myrtek, Schumacher and Brähler [18]. This instrument measures the following areas of life satisfaction: health, job and profession, finances, leisure, spouse/partner, relationship to own children, self, sexuality, friends and relatives, and home. All of the ten subscales of the FLZ (with seven items each) were rated from 1 (very dissatisfied) to 7 (very satisfied). High scores indicate high satisfaction with the areas of life. The reliability ranged from .82 to .95.
The Inventory of Interpersonal Problems (IIP) [19] assesses self-reported difficulties in social interaction. The German version of the instrument (IIP-D) [20] includes 64 items subdivided into eight subscales (overly domineering; overly vindictive; overly cold; overly socially avoidant; overly non-assertive; overly exploitable; overly nurturant; overly intrusive). The items were rated on a five-point Likert scale from 0 (not at all) to 4 (very much). High scores on the scales of the IIP stand for great difficulties in social interactions. The reliability of the scales ranges from .36 to .64.
Statistical analysis
The statistical data analysis was performed by means of SPSS for Windows version 10.0.
Concerning the psychometric properties of the FEE scales, the internal consistency of the scales was specified by Cronbach's alpha. In addition, a split-half reliability was calculated for each scale using the Spearman-Brown formula. Since the extraction of the items is based on the factor loadings of the German long version, the internal reliability of the original and the new German short version may be similar.
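As an illustration of the two reliability indices, a minimal Python sketch follows; the score matrix is simulated, not the FEE data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_subjects, n_items) score matrix."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half_spearman_brown(items: np.ndarray) -> float:
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)  # step the half-test correlation up to full length

# Simulated ratings: 200 subjects x 8 items of one scale, Likert 1-4
rng = np.random.default_rng(0)
scale = rng.integers(1, 5, size=(200, 8))
print(cronbach_alpha(scale), split_half_spearman_brown(scale))
```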
In order to test the three-dimensional structure and the factorial validity of the FEE, a principal component analysis (PCA) was calculated separately for the mother and the father items, with an orthogonal varimax rotation applied. The factorial structure of the English and German long versions had already been replicated in several studies [9]. Therefore, the original factorial structure could be expected to be replicable for the new German short version as well.
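A hedged sketch of such a principal component extraction with a hand-rolled varimax rotation in Python, assuming standardized item scores; the data are simulated and the rotation follows the standard Kaiser algorithm:

```python
import numpy as np

def varimax(loadings: np.ndarray, n_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Orthogonal varimax rotation of an (n_items, n_factors) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        # SVD of the gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L * (L**2).sum(axis=0) / p))
        R = u @ vt
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R

# Simulated data: 200 subjects x 24 items, standardized column-wise
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 24))
X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Principal components of the correlation matrix; keep three factors
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
order = np.argsort(eigvals)[::-1][:3]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])  # unrotated loadings
rotated = varimax(loadings)
print(rotated.shape)  # (24, 3) rotated loading matrix
```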
Differences between the mother- and father-scale ratings were calculated by t-tests for paired samples. Since parental rearing behavior proved to be specific to the parent's gender [8], significant differences between the mother- and father-scales were to be expected. However, corresponding constructs for mother and father ratings were expected to correlate more highly with each other than with opposing constructs. Furthermore, the scales were correlated with each other using the Pearson correlation coefficient (two-tailed).
To specify the influence of age (three categories: 18-30, 31-60 and 61-92 yrs.) and gender on parental rearing behavior, two-factorial univariate analyses of variance were calculated, as sketched below. T-tests were calculated to evaluate the effect of the child's gender on the rating of each parent. Furthermore, the interaction effects between the gender of the evaluated parent (father/mother) and the gender of the child were calculated by a two-factorial multivariate analysis of variance. It can be postulated that older subjects may recall more rejecting and stricter parental rearing behavior than younger ones [4], while male subjects may perceive more rejecting parental rearing behavior than female ones.
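A minimal sketch of such a two-factorial univariate ANOVA using statsmodels; the variable names and data below are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one FEE scale score per subject
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "rejection": rng.normal(14, 4, n),
    "age_group": rng.choice(["18-30", "31-60", "61-92"], n),
    "gender": rng.choice(["male", "female"], n),
})

# Two-factorial univariate ANOVA: main effects of age group and gender
# plus their interaction on recalled rejection/punishment
model = smf.ols("rejection ~ C(age_group) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))
```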
To specify the construct validity of the FEE, an attempt was made to replicate, using the FEE, the well-established associations between perceived parental rearing behavior and life satisfaction as well as interpersonal problems. Pearson correlation coefficients (two-tailed) were calculated to examine associations between the FEE and other constructs. Non-supportive, rejecting and overprotective parental rearing behavior was expected to be connected to more interpersonal problems and less life satisfaction than supportive, non-rejecting and non-overprotective parental rearing behavior.
Reliability of the FEE
The means and standard deviations of the FEE scales and some additional characteristics are depicted in Tables 1 and 2. Each of the three FEE scales yielded a good to satisfactory internal consistency and split-half reliability for both paternal and maternal rearing behavior. For all three scales, significant mean differences between recalled maternal vs. paternal rearing behavior emerged, in terms of mothers being less rejecting and punitive, emotionally warmer and more overprotective. The highest possible score on emotional warmth was reported by 0.1 percent of the sample, on rejection by 0.2 percent, and on overprotection by 0.02 percent of the participants. The lowest possible score on emotional warmth was reported by 0.6 percent, on rejection by 7.4 percent, and on overprotection by 1.4 percent of the participants.
Factorial validity of the FEE
Furthermore, the factorial structure as well as the independence of the three dimensions were tested. In Table 3 the factor loadings for the recalled parental rearing items are depicted. The original factor structure could be satisfactorily replicated (with the exception of item 13, "Did your parents use the expression: If you don't do this, then I'll be sad"). The largest proportion of variance was explained by the factor "rejection and punishment" (19.5% for the paternal items and 18.0% for the maternal items), followed by "emotional warmth" (17.6% and 17.1%, respectively) and "control and overprotection" (11.5% for both). The cumulative explained variance was 48.6% for recalled paternal rearing and 46.6% for recalled maternal rearing behavior. Table 4 shows that some of the scales are highly correlated. The highest intercorrelations were found between the identical (regarding content) scales for fathers and mothers (r = .70 to .77). Positive but moderate correlations were also found, within one parent and between both parents, for the scales "rejection and punishment" and "control and overprotection". Furthermore, the scale "emotional warmth" correlated negatively with the scale "rejection and punishment", whereas "emotional warmth" and "control and overprotection" were unrelated.
Influence of socio-demographic factors
In addition, the influence of age and gender on recalled parental rearing behavior was calculated. Age (three categories: 18-30, 31-60 and 61-92 yrs.) exerted a significant influence on the perception of recalled paternal rejection and punishment as well as on emotional warmth; it also influenced the father's recalled rejection and punishment as well as the mother's emotional warmth (see Tables 5 and 6). Based on their memories, the older the subjects, the stronger they experienced their parents' rejection and the less their parents' emotional warmth. In addition, older subjects reported much stricter and less emotionally warm parental rearing behavior in reference to their father, as well as a lack of emotional warmth from their mother. The data also revealed significant main gender effects on recalled parental rearing behavior, i.e. the male subjects recalled more rejecting and stricter parental rearing behavior than the female subjects. In addition, the parents' emotional warmth was remembered less by the male subjects than by the female subjects. Specifically, the female subjects reported their fathers as emotionally warmer and less punishing than the male subjects did (see Tables 5 and 6).
Correlations between the FEE and other scales
As a first step in evaluating the external validity of the FEE, its relationships to other scales were tested (see Tables 7 and 8; p < .001). Concerning life satisfaction (FLZ), low to satisfactory relationships (.07 to .39) were established with most of the examined areas: subjects who recalled more rejecting, punitive and controlling parental rearing behavior and a lack of emotional warmth reported a lower level of life satisfaction than subjects with a more positive rearing experience.
Low to satisfactory relationships to recalled parental rearing behavior, in particular on the scales "rejection and punishment" and "control and overprotection", also appeared for interpersonal problems (IIP factors), with a range of .09 to .39: subjects who recalled rejecting and strict rearing behavior spoke of difficulties in trusting other people, supporting others and caring for others' needs. Furthermore, they had problems cooperating, and they attempted to dominate and control others. They also described themselves as resentful and quarrelsome.
Discussion
Based on the data presented in this paper, the FEE represents a reliable and valid instrument for the assessment of adults' memories of their parents' rearing behavior in German-speaking areas. Its relatively low number of items allows for an economical application in research and clinical work. Both the items and the three scales showed satisfactory to good psychometric properties. The reliability figures corresponded closely to the values obtained for the original long version of the EMBU from 14 countries (N = 3,500) [21].
In the present data, the results from the principal component analysis successfully replicated the previously obtained factorial structure of the EMBU [21]. The cumulative variance explained by the three factors was higher than the corresponding figures for the EMBU [21], confirming the factorial validity of the FEE. The intercorrelational pattern of the FEE factors was also consistent with previous results using the original long version of the EMBU [21].
When comparing recalled paternal and maternal rearing behavior, a number of significant differences were found whose relevance deserves a closer look, given the small numerical mean differences. However, the findings that mothers, compared to fathers, tended to be emotionally warmer and less rejecting on the one hand and more controlling and overprotective on the other were in agreement with the results of the study by Gerlsma and Emmelkamp [22].
Whereas based on the EMBU only a few studies reported age and gender effects, the presented data indicated some influence of these factors on recalled parental rearing behavior. Based on their memories, older subjects recalled their parents as more rejecting and less emotionally warm than did their younger counterparts. This age effect might be explained by the historical changes in parenting attitudes and behavior in child rearing practices. In addition, the historical German background may also have to be considered. Furthermore, based on their memories, the female subjects reported having received more emotional warmth from their fathers than the male subjects did, whereby the latter recalled their fathers as stricter and more rejecting.
In the literature, a few studies have focused on the connection between life satisfaction and parental rearing behavior [2,3,23,24]. In line with this literature, the presented data showed that recalled parental rearing behavior was connected to life satisfaction and self-reported interpersonal problems. The retrospective assessment of recalled parental rearing behavior raises the specific problem of whether it captures the actual parental rearing experienced during childhood or only its subjective representation [25,26]. The subjective representation may reflect the present mood, errors in autobiographical memory (un-/conscious distortions), false memories or idiosyncratic reconstructions of the subjects' personal history. However, the existing literature does not provide consistent and conclusive data on the mood-congruent recall of relevant personal stimuli [25,27-30] or on the validity of retrospective data on parental rearing behavior [31]. Therefore, longitudinal studies with independent raters should be considered to validate reports of parental rearing practices (see [32]). Unfortunately, in clinical practice, the child rearing experienced by patients can only be assessed retrospectively, after the onset of the disorder. Nevertheless, the information obtained in this way can be of help in the therapeutic process.
Although the data stem from a representative sample, general conclusions should be drawn with caution, since the large sample size easily produces statistically significant results, e.g. a correlation coefficient of r = .07 (p < .001). In the context of validating the instrument, this study did not examine connections to psychological symptoms. Therefore, additional studies that include assessments of psychological symptoms, as well as a formal comparison of the German long and short versions of the EMBU and the original version, would still be essential. Furthermore, retest-reliability testing and comparisons with other parental rearing questionnaires or external ratings would help to further specify the validity and reliability of this short version.
Conclusion
In summary, the data discussed above show that the FEE is a reliable and valid instrument for the retrospective assessment of subjective representations of parental rearing behavior. Moreover, this instrument will be relevant not only to research on psychological disorders but also to non-clinical applications.
Poly(l-lactic acid)-based double-layer composite scaffold for bone tissue repair
Abstract Bone defects are a serious threat to human health. Osteopractic total flavone (OTF) extracted from Rhizoma Drynariae promotes bone formation, while Panax notoginseng saponin (PNS) activates blood circulation and removes blood stasis. Therefore, combining OTF and PNS with poly(L-lactic acid) (PLLA) to prepare scaffolds containing PNS in the outer layer and OTF in the inner layer is a feasible way to rapidly remove blood stasis and then continue to promote bone formation. In addition, the degradation rate of the scaffold affects the release timing of the two drugs, and adding Mg particles to the outer layer can control the degradation rate of the scaffold and the drug release. Therefore, a double-layer drug-loaded PLLA scaffold containing OTF in the inner layer and PNS and Mg particles in the outer layer was prepared and characterized to verify its feasibility. The experimental results showed that the scaffold can realize the rapid release of PNS and the continuous release of OTF, and that the drug release rate became faster with increasing Mg content. Animal experiments showed that the scaffold containing 5% Mg particles could effectively promote the formation of new bone in bone defects of male New Zealand white rabbits, and the area and density of new bone formed were much better than those in the control group. These results demonstrated that the double-layer drug-loaded scaffold had good ability to promote bone repair.
Introduction
Bone has always played an indispensable role for the human body, not only providing support for human activities but also protecting internal organs from impact [1]. However, large bone defects seriously affect the life activities of patients, and it is difficult to achieve self-healing [2-6]. Autogenous bone transplantation can effectively promote the repair of the defect, but it is limited by the problem of secondary injury to patients [7,8]. The strategy of allogeneic bone transplantation faces the problems of immune rejection and of inducing infectious diseases [7,9]. Tissue engineering provides a new idea to solve this problem, that is, through a series of engineering designs, using biomaterials with good biocompatibility and degradability to prepare scaffolds [10-15]. After being implanted into the human body, the scaffold regulates the physiological microenvironment of the defect site, thereby promoting bone regeneration. With the development of research, multifunctional composite scaffolds have shown great potential in the field of bone tissue repair in recent years [16,17].
Osteopractic total flavone (OTF) resists osteoporosis, improves bone density and induces osteogenesis by increasing the activity of alkaline phosphatase, the level of osteocalcin, and the expression of type I collagen, osteocalcin and osteopontin mRNA [18,19]. It is an ideal drug for bone repair. However, the rupture of blood vessels in the defect area leads to the formation of local hematoma and hinders bone regeneration; a large hematoma is not easily absorbed by the human body and can even cause infection, requiring another operation. Panax notoginseng saponin (PNS) is an effective pharmaceutical ingredient extracted from Panax notoginseng, which can improve blood rheology by reducing blood viscosity, accelerate the blood supply of the fracture site, improve microcirculation and thus accelerate the healing of the defect site. However, OTF and PNS cannot be used directly for the preparation of scaffolds, so a polymer material with good biocompatibility that is easy to process into scaffolds is a good choice. Poly(L-lactic acid) (PLLA) is a biocompatible biomaterial, and its degradation products have no toxic effect on the human body [20-22]. However, simple blending of OTF and PNS in PLLA scaffolds cannot effectively exert the combined effects of the two drugs: PNS should first be released rapidly to drive out the hematoma and promote blood circulation, and OTF should then be released continuously to promote osteogenesis. Double-layered composite scaffolds were therefore prepared with PLLA/OTF (PO) composite as the inner core and PLLA/PNS (PP) composite as the outer shell, which are expected to achieve the purpose of first removing the hematoma and then promoting osteogenesis.
Common scaffold preparation methods are diverse, including 3D printing, freeze-drying, phase separation, electrospinning, etc. [23-26]. In the field of bone repair, a scaffold with an interconnected pore structure is more conducive to the ingrowth of cells and tissues to promote bone regeneration [27]. Phase separation is a common method for preparing 3D scaffolds. Among them, the thermally induced phase separation technique offers good control over scaffold structure, including porosity, pore size and pore interconnectivity. At the same time, a PLLA scaffold prepared by phase separation not only meets the requirement for interconnected porosity but also retains certain mechanical properties. The diameter of electrospun fibers is usually in the range of nanometers to submicrons, with a large specific surface area and high porosity, which is conducive to cell attachment and proliferation. Moreover, the larger surface area of electrospun fibers can enhance the permeability of oxygen and accumulated fluid, thereby promoting wound healing. Nanofibers can also simulate the structure and morphology of the extracellular matrix, acting as mechanical support and regulator to improve cell activity before host cells refill and synthesize new matrix [28,29]. Therefore, using the thermally induced phase separation method to prepare the PO core, using electrospinning to prepare the PP membrane, and wrapping the PO core with the PP membrane yields a double-layer drug-loaded scaffold for bone defect repair.
The degradation rate of the outer layer affects the overall degradation rate of the scaffold and the timing of the release of the two drugs, so it needs to be controlled. The incorporation of Mg nanoparticles into PLLA can not only regulate the degradation rate of PLLA but also neutralize the acidic products of PLLA degradation, enhance the viability of mesenchymal stem cells, and promote macrophage-mediated regulation of the inflammatory response to degradation products, thus promoting bone integration and reducing the host inflammatory response [30,31]. In addition, Mg is an important metal element in the human body. Mg²⁺ is the fourth most abundant cation in the human body, accounting for 0.72 wt% of bone. Mg deficiency adversely affects all stages of bone metabolism, resulting in arrested bone growth, decreased osteoblast and osteoclast activity, bone loss and fragility. Therefore, incorporating Mg particles into the outer layer of the scaffold is expected not only to control the degradation rate, but also to balance pH, enhance the activity of osteoblasts and osteoclasts, and promote osteogenesis.
Therefore, a double-layer drug-loaded scaffold with a PP electrospun membrane as the shell and a PO thermally induced phase separation porous 3D structure as the core was prepared in this article and characterized by multiple tests. The application prospects of this scaffold in bone tissue engineering have been explored, providing new ideas for the preparation of bone tissue repair scaffolds.
Purification of PLLA
Dissolve PLLA (Evonik Industries AG, Germany) particles in chloroform (Beijing Chemical Reagent Company, China) and stir fully at room temperature until completely dissolved. Then slowly pour the dissolved PLLA into anhydrous methanol (Beijing Chemical Reagent Company) and stir continuously with a glass rod to collect the purified PLLA. After the anhydrous methanol becomes turbid, recover the waste liquid and add clean anhydrous methanol again until the purified PLLA is completely collected. Put the recovered PLLA into an electric blast drying oven (DGG-9240B, Shanghai Senxin Experimental Instrument Co., Ltd, China), dry it at 37 °C for 3 days and store it for future use.
Preparation of porous PLLA scaffold by phase separation method
Dissolve PLLA in 1,4-dioxane (Shanghai McLean Biochemical Technology Co., Ltd, China) to a concentration of 10 wt%, stirring in a 60 °C thermostatic water bath (Feb-85, Shanghai Sile Instrument Co., Ltd, China). Add deionized water (from CASCADA MK2, Pall Company, USA) to give 1,4-dioxane:deionized water = 87:13, and continue stirring at constant temperature until a homogeneous transparent solution forms. Then transfer the solution to a polytetrafluoroethylene mold (customized, China), return it to the water bath for thermally induced phase separation and gelation for a period of time, and then place it in a −80 °C refrigerator (BC/BD-379HB, Haier Group, China) for freezing. Finally, freeze-dry it in a freeze dryer (7960071, LABCONCO, USA) to obtain porous PLLA scaffolds.
Preparation of porous PO scaffold by phase separation method
Dissolve PLLA in 1,4-dioxane to a concentration of 10 wt%, stirring in a constant temperature water bath at 60 °C; add deionized water to give 1,4-dioxane:deionized water = 87:13, and continue stirring at constant temperature until a homogeneous transparent solution forms. Then add OTF (self-made, China) at a concentration of 10% and stir evenly; the other steps are the same as in 'Preparation of porous PLLA scaffold by phase separation method', yielding the drug-loaded porous PO scaffold.
Preparation of PLLA/PNS/Mg film by electrospinning
A mixed solution of tetrahydrofuran (THF, Shanghai McLean Biochemical Technology Co., Ltd, China) and N,N-dimethylformamide (DMF, Shanghai McLean Biochemical Technology Co., Ltd, China) at a ratio of 3:1 was used as the electrospinning solvent. Dissolve PLLA in the THF/DMF mixed solution to a concentration of 10 wt%, heating and stirring in a constant temperature water bath at 60 °C to obtain a transparent, uniform PLLA spinning solution after dissolution. Then add a certain amount of PNS (self-made, China) and stir fully to obtain a pale yellow PP electrospinning solution. Add Mg particles in different proportions (Tangshan Weihao Magnesium Powder Co., Ltd, China) into the electrospinning solution and mix evenly, turning it into a gray-black PLLA/PNS/Mg (PPM) electrospinning solution. Use a 10 ml syringe to draw up a certain amount of electrospinning solution, fix the syringe to the propulsion pump of the electrospinning machine (YFSP-T, Tianjin Yunfan Technology Co., Ltd, China), connect the positive pole of the power supply, and use a tin foil plate connected to the negative pole as the receiving end; set the high-voltage power supply to +13 kV and −2 kV, the spinning speed to 0.0025 mm/s, and the receiving distance to 20 cm.
Preparation method of double-layer drug-loaded scaffold
The preparation method of the double-layer drug-loaded scaffold is shown in Figure 1. The inner porous drug-loaded PO scaffold and the PPM electrospinning solution were prepared according to 'Preparation of porous PO scaffold by phase separation method' and 'Preparation of PLLA/PNS/Mg film by electrospinning'. Place the PO scaffold on a rotating motor, adjust the speed to 10 rpm, connect the negative pole of the high-voltage electrospinning machine power supply as the receiving device, and keep the other conditions unchanged to prepare the double-layer drug-loaded PLLA/OTF-PLLA/PNS/Mg scaffold (PO-PPMx, where x represents the Mg content). The mass of the electrospun membrane on the outer layer of the scaffold was kept consistent in each group, at 5 mg.
Scanning electron microscope observation method
After the prepared PLLA and PO scaffolds were quenched in liquid nitrogen, they were cut into 1 mm thick pieces and glued onto the sample stage with conductive glue before being sprayed with gold twice. The gold-sprayed samples were placed in a scanning electron microscope (SEM) (QUANTA 250 FEG, FEI Company, the Netherlands) and evacuated; after internal stabilization of the instrument, the height of the sample stage was adjusted to observe the microtopography of the samples.
The fabricated PPM electrospun membranes were cut into 5 × 5 mm² pieces, glued onto the sample stage with conductive gel and sprayed with gold twice. The gold-sprayed samples were placed in the SEM and evacuated; after internal stabilization of the instrument, the height of the sample stage was adjusted to observe the sample microtopography.
Put the PO-PPM0, PO-PPM5 and PO-PPM10 scaffolds into 5 ml test tubes, respectively, add 2 ml PBS buffer (Beijing Solebar Technology Co., Ltd, China), and shake at 37 °C on a shaker (THZ-100, Shanghai Yiheng Instrument Co., Ltd, China) at 100 rpm. The scaffolds were collected at 12 weeks, washed with distilled water and freeze-dried. SEM was used to observe the micro-morphology of the degraded scaffolds.
Aperture statistical method
The diameters of pores in cross-sectional SEM images of the scaffolds were measured using ImageJ software and statistically analyzed, counting at least 300 pores per sample.
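A minimal sketch of the pore-size statistics is shown below, assuming the ImageJ measurements have been exported to a one-column CSV; the file name is hypothetical.

```python
# Summarize pore diameters exported from ImageJ; 'pores.csv' is a
# hypothetical one-column export of diameters in micrometers.
import numpy as np

d = np.loadtxt("pores.csv")
assert d.size >= 300                     # at least 300 pores per sample
print(f"mean pore size = {d.mean():.2f} +/- {d.std(ddof=1):.2f} um")
band = np.mean((d >= 15) & (d <= 35)) * 100
print(f"{band:.0f}% of pores lie in the 15-35 um range")
```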
Porosity test and calculation method
Use the specific gravity method to measure the porosity of the scaffolds: at constant temperature, fill a pycnometer with absolute ethanol and measure its mass as m₁. Immerse the scaffold of mass mₛ into the ethanol, vacuum-degas so that the pores of the scaffold fill with ethanol, and then top up with ethanol; the total mass measured is m₂. After taking out the ethanol-filled scaffold, weigh the remaining ethanol and the pycnometer as m₃. The porosity ε of the scaffold is then calculated as:

ε = (m₂ − m₃ − mₛ) / (m₁ − m₃)
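A direct transcription of this formula into a small helper is shown below; the mass values in the call are hypothetical, chosen to illustrate a scaffold of roughly 90% porosity.

```python
# Porosity from the pycnometer masses defined above
# (all masses in grams; ethanol fills the pores).
def porosity(m1, m2, m3, ms):
    """epsilon = (m2 - m3 - ms) / (m1 - m3), returned as a fraction."""
    return (m2 - m3 - ms) / (m1 - m3)

# Hypothetical masses for illustration:
print(f"{porosity(45.000, 45.011, 44.606, 0.050):.1%}")   # ~90.1%
```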
Mechanical strength test and calculation method
Use a vernier caliper (DL90150, Deli Group Co., Ltd, China) to measure the diameter of the load-bearing surface of the scaffold, and then fix the scaffold on the sample platform. Open the software and enter the diameter of the scaffold. Use a universal testing machine (EZ-LX, SHIMADZU, Japan) to conduct a compression test of the mechanical properties of the scaffold, with at least three parallel samples in each group. At the end of the test, the software displays the maximum compressive strength of the scaffold.
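The underlying conversion from the recorded peak load and measured diameter to compressive strength is straightforward; a minimal sketch follows, with hypothetical load and diameter values.

```python
# Stress at failure = peak load / cross-sectional area of the cylinder;
# the load and diameter below are hypothetical illustrative values.
import math

def max_compressive_strength(peak_load_n, diameter_mm):
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return peak_load_n / area_mm2        # N/mm^2 is numerically MPa

print(f"{max_compressive_strength(96.0, 10.0):.2f} MPa")   # ~1.22 MPa
```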
Infrared spectrum test method
First cut the dried PP, PO and PLLA samples and put them in a clean, dry agate mortar, then add potassium bromide powder at a mass ratio of 1:100, mix evenly and grind fully. Prepare the tablet-pressing mould and spread the ground powder evenly on it. Press the sample and potassium bromide into circular slices under the tablet press. Then put the slices into an infrared spectrometer (FTIR-7600, Lambda Scientific, Germany) for scanning over the wavenumber range 400-4000 cm⁻¹.
Release curve of OTF and PNS
Three samples per group of PO-PPM0, PO-PPM5 and PO-PPM10 scaffolds were placed in 5 ml tubes with 2 ml of PBS buffer and shaken at 100 rpm on a shaker at 37 °C. Samples were taken on days 1, 2, 3, 4, 5, 6 and 7; at each time point, 1 ml of supernatant was removed and replaced with fresh PBS buffer. A 0.2 ml aliquot of the supernatant was added to a quartz microplate (Shanghai Jing'An Biotechnology Co., Ltd, China), and the release of OTF was detected by UV spectrophotometry (microplate reader, Multiskan FC, Thermo, USA). A 0.5 ml aliquot of the supernatant was dried overnight at 60 °C in an electrothermal blast drying oven (DGG-9240B, Shanghai Senxin Experimental Instrument Co., Ltd, China), followed by vanillin (Tianjin Guangfu Fine Chemical Research Institute, China)-perchloric acid (Shandong EISA Technology Co., Ltd, China) chromogenic detection of PNS release. The release curves of OTF and PNS were calculated and plotted.
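Because 1 ml of the 2 ml medium is withdrawn and replaced at each time point, the cumulative release must account for drug removed in earlier samples. A minimal sketch of this standard correction is given below; the concentrations and loaded amount in the example are hypothetical.

```python
# Cumulative release (%) corrected for the 1 ml withdrawn and replaced
# with fresh PBS at each time point (V = 2 ml total, v = 1 ml sampled).
# `conc` holds the measured concentrations per day; `loaded_ug` is the
# total drug loaded in the scaffold -- both hypothetical here.
def cumulative_release(conc, loaded_ug, V=2.0, v=1.0):
    released = []
    for n, c in enumerate(conc):
        mass = c * V + v * sum(conc[:n])  # drug in medium + drug removed earlier
        released.append(100.0 * mass / loaded_ug)
    return released

# Example with made-up day 1-7 concentrations (ug/ml):
# cumulative_release([300, 60, 40, 30, 20, 15, 10], loaded_ug=1000)
```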
Mg²⁺ release curve
Three samples per group of PO-PPM5 and PO-PPM10 scaffolds were placed in 5 ml tubes with 2 ml PBS buffer, shaken at 100 rpm in a shaker at 37 °C and sampled on days 1, 2, 3, 4, 5, 6 and 7, respectively. At each time point, 1 ml of the supernatant was removed and replaced with 1 ml PBS buffer. The supernatant was diluted 30-fold with 0.2 M HCl solution to ensure that all Mg was dissolved, and the Mg²⁺ concentration in the supernatant was measured by inductively coupled plasma optical emission spectrometry (ICP-OES; Optima 5300DV, PerkinElmer Instruments Ltd, USA).
pH change curve
Three scaffolds for each group (PO-PPM0, PO-PPM5 and PO-PPM10) were placed in 5 ml tubes with 2 ml PBS buffer and shaken at 100 rpm in a shaker at 37 °C; on days 1, 2, 3, 4, 5, 6 and 7, the entire solution was collected and replaced with 2 ml fresh PBS. An acid-base meter (UB-7, Denver Instrument Company, USA) was used to measure the pH value of the collected solution, and the pH change curves were recorded.
Grouping of animals
Six adult male New Zealand white rabbits were used; in each animal, one channel was implanted with a PO-PPM5 scaffold (experimental group, n = 6) and the contralateral channel was left empty as the blank control group (n = 6). The selected observation time point was 12 weeks postoperatively. This experiment was reviewed and approved by the biology and medicine ethics committee of Beihang University, approval number: BM20220026.
Establishment of rabbit steroid-induced femoral necrosis model
New Zealand white rabbits were raised in separate cages and fed normally. On the first day, lipopolysaccharide was injected through the ear vein at 10 µg/kg. On the second to fourth days, the glucocorticoid methylprednisolone was injected via the gluteus medius muscle at 20 mg/kg, while the rabbits had free access to drinking water prepared by shaking 0.025 ml of diquali into 500 ml of water. From the next day until the end of modeling, penicillin was administered intramuscularly at 40,000 units per day. Biofermin was administered at a dose of three pills per day, with a minimum interval of 6 h between feeding and the penicillin injection. Every other day, 15 mg of ranitidine was added to 500 ml of water for the rabbits to drink freely. The animals' status was observed, and modeling was deemed successful 14 days after the last injection of methylprednisolone.
Scaffold implantation procedures
All surgeries were performed by the same team of personnel under the same operating conditions. Animals were anesthetized preoperatively with ketamine and xylazine, administered at 40 and 4 mg/kg, respectively. Based on the intraoperative response, additional inhalation anesthesia with isoflurane was performed to maintain the depth of anesthesia.
When the animal entered deep anesthesia, a 20 mm incision was made through the lateral skin, exposing the greater trochanter of the femur. Core decompression surgery with a diameter of 3 mm was then performed along both femoral neck axes from the distal end of the greater trochanter. First, under fluoroscopic guidance, a drill with a diameter of 2 mm was used to create a bone channel from the lateral side to the superomedial aspect of the femoral head, directed along the mid-shaft of the femoral neck. A 3 mm drill bit was then used to widen the 2 mm channel. PO-PPM5 was randomly assigned and implanted into the channel of the right or left hip (as shown in Figure 1), while the other channel was not implanted with a scaffold and remained empty as a blank control. The scaffold was manually pressed into the bone channel.
The animals feel pain after the anesthesia subsides. Therefore, for pain relief, buprenorphine was injected twice a day for the first 2 days after the operation, at a dose of 0.5 mg/kg each time.
Micro-CT observation of bone repair
After 12 weeks of scaffold implantation, pentobarbital sodium was injected through the ear vein at a dose of 90-120 mg/kg to euthanize the animals. After euthanasia, the bilateral femurs were collected, the soft tissues removed, and the samples fixed in 10% formalin solution for 1 week. The distal femurs of the rabbits were then scanned by micro-CT at a spatial resolution of 18 µm, a voltage of 66 kV and a current of 110 µA. An appropriate region at the center of the 3 mm bone channel was selected and reconstructed with the CT analyzer. Two hundred axial images were reconstructed into 3D images, and the bone volume/total volume (BV/TV), bone mineral density (BMD), trabecular separation (Tb.Sp), trabecular number (Tb.N) and trabecular thickness (Tb.Th) were calculated by software.
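Of these indicators, BV/TV is the most direct to compute from a binarized reconstruction: it is simply the fraction of voxels classified as bone within the region of interest. A minimal sketch follows, assuming the ROI has been exported as a boolean NumPy array; the file name is hypothetical.

```python
# BV/TV from a binarized micro-CT stack; 'roi_stack.npy' is a
# hypothetical boolean 3D array (True = bone voxel) exported from the
# reconstructed images (18 um isotropic voxels).
import numpy as np

def bv_tv(stack):
    return stack.sum() / stack.size      # bone voxels / total voxels

# stack = np.load("roi_stack.npy")
# print(f"BV/TV = {bv_tv(stack):.1%}")
```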
Statistical analysis
The experimental data were statistically analyzed with the statistical software GraphPad Prism 8. All experiments were repeated at least three times, and the results are expressed as mean ± standard deviation. Paired t-test analysis was used to compare the two groups of data. Differences were considered significant at *P < 0.05, **P < 0.01 or ***P < 0.001.
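As an illustration, a paired comparison of this kind can be reproduced with SciPy's paired t-test; the values below are hypothetical stand-ins for paired scaffold/control measurements, not data from this study.

```python
# Paired t-test between implanted and empty channels of the same animals,
# mirroring the GraphPad analysis; the BV/TV values below are hypothetical.
import numpy as np
from scipy import stats

scaffold = np.array([0.31, 0.28, 0.35, 0.30, 0.33, 0.29])  # PO-PPM5 hips
control = np.array([0.12, 0.10, 0.15, 0.11, 0.14, 0.13])   # empty hips
t, p = stats.ttest_rel(scaffold, control)
print(f"t = {t:.2f}, p = {p:.4f}")       # significant if p < 0.05 (*)
```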
Pore structure and mechanical property of PO scaffolds
Figure 2 shows the SEM images of the PLLA scaffold and PO scaffold, where (A) and (C) are the PLLA scaffold and (B) and (D) are the PO scaffold. The PLLA and PO scaffolds prepared by the phase separation method in the 1,4-dioxane/water system are porous. Since water is well dispersed throughout the system, moisture removal by the freeze dryer in the later stage of preparation yields scaffolds with a more uniform pore distribution. Because water is fluid and not constrained by shape, the macropores created are linked by interconnected small pores. After loading OTF, the scaffold microtopography was not obviously affected and remained an interconnected porous structure. The sizes of 300 pores in the SEM images of the PLLA and PO scaffolds were counted, and their distributions are shown in Figure 2G and H. PLLA scaffolds had an average pore size of 24.24 ± 6.09 µm, with 93% of pores distributed at 15-35 µm; pores of 20-25 µm in particular accounted for 41% of all pores, as shown in Figure 2G. The pore size distribution of the PO scaffolds was similar, with an average pore size of 24.31 ± 6.94 µm, of which 87% of pore diameters were distributed in 15-35 µm and 20-25 µm pores accounted for 34% of all pores, as shown in Figure 2H. It can be seen that loading OTF did not affect the pore structure of the scaffolds. Porosity is an important indicator of scaffolds for bone repair; according to the literature, scaffolds with porosity >50% can achieve vascularization [27]. Figure 2E shows the average porosity of the PLLA and PO scaffolds. The porosity of the PLLA scaffolds was as high as 93.67%; after loading OTF, the porosity decreased noticeably but still remained above 89.56% for the PO scaffolds, with a statistical difference between the two (*P < 0.05). The reduction in porosity of the PO scaffold may be due to the OTF included in the PLLA, which occupies internal space of the scaffold. The high porosity of the PLLA and PO scaffolds meets the porosity requirements for bone implant materials, facilitating the exchange of oxygen and nutrients during bone repair.
Mechanical performance is a very important index of bone implants, and different implant sites impose different mechanical requirements on scaffolds. Figure 2F shows the maximum compressive strength of the PLLA and PO scaffolds. According to the measurement results, both PLLA and PO scaffolds meet the minimum strength requirements for cancellous bone repair. The mechanical properties of the scaffolds increased after adding OTF: the maximum compressive strength of the PLLA scaffolds was 1.22 MPa, and after loading OTF the strength reached 1.46 MPa, a statistically significant difference (*P < 0.05). The increase in maximum compressive strength of the PO scaffolds may be because their porosity is lower than that of the PLLA scaffolds for the same substrate material.
SEM analysis of PPM film
Figure 3 shows the SEM images of PPM electrospun films with different Mg contents: 2.5% in (A) and (D), 5% in (B) and (E), and 10% in (C) and (F). The prepared drug-loaded electrospun films had a uniform fiber distribution with continuous, smooth fibers of 300-900 nm diameter. The mixed Mg particles are large; they are not inside the fibers but are wrapped by interlaced fibers and embedded in the electrospun film, as shown by the red arrows in the figure. With increasing Mg content in the electrospinning solution, the Mg particles in the electrospun film also increase significantly, which indicates that both low and high concentrations of Mg particles can be spun smoothly. The addition of Mg particles had no significant effect on fiber diameter, as shown in Figure 3G.
Infrared spectrum analysis of PP and PO
FTIR can be used to analyze the chemical structure of the scaffold materials. Figure 4 contains the FTIR analysis of PO, PP and PLLA. The FTIR spectrum of PO includes not only the characteristic peaks of PLLA, but also the -OH stretching vibration absorption peak at a wavenumber of 3486 cm⁻¹ and the C=O stretching vibration absorption peak of aromatic ketone at 1655 cm⁻¹. These peaks are consistent with the characteristic peaks of OTF, indicating that OTF was successfully loaded into the PO scaffold without forming new chemical bonds and without destroying the chemical structure of OTF itself. The FTIR spectrum of PP includes not only the characteristic peaks of PLLA, but also the -OH stretching vibration absorption peak at 3447 cm⁻¹, the C=C stretching vibration absorption peak at 1656 cm⁻¹, and the strong C-O-C stretching vibration absorption peak at 1101 cm⁻¹. These peaks are consistent with the characteristic peaks of PNS, indicating that PNS was successfully loaded into the PP electrospun membrane, no new chemical bonds were generated, and the chemical structure of PNS itself was not destroyed. Therefore, this drug loading method is feasible.
Drug release and pH changes of PO-PPM scaffolds
In the double-layer drug-loaded scaffold, OTF was located in the inner porous scaffold and PNS in the fibers of the outer electrospun layer, while Mg, the component that regulates drug release, sat in the gaps of the electrospun network. In the hematoma inflammatory organization stage, the hematoma formed by damage to the periosteum and surrounding blood vessels coagulates into blood clots within several hours, leading to ischemic necrosis of some soft and bone tissues and causing an inflammatory reaction. PNS can relieve the hypercoagulable state of the body, inhibit the formation of local thrombus, improve microcirculation, increase vascular permeability, and accelerate the absorption and organization of the hematoma. As shown in Figure 5A, the release of PNS from the three groups of scaffolds on the first day was high, reaching ~60%. On the seventh day, the release of PNS from PO-PPM5 and PO-PPM0 reached ~92%, and that from the PO-PPM10 scaffold reached 98%. Therefore, the scaffold can effectively exert the efficacy of PNS through rapid release, improving the healing efficiency during the hematoma inflammation stage. The initial callus formation stage takes 4-8 weeks. OTF can promote the proliferation and differentiation of osteoblasts, promote the expression of osteogenesis-related genes and effectively improve the efficiency of the initial callus formation stage. The release of OTF is shown in Figure 5B. Mg had a significant effect on the release of OTF: the release rate of the Mg-free group was the lowest, and with increasing Mg content the release of OTF accelerated significantly. On the first day, the release of OTF from the PO-PPM0 scaffold was 29%, while that from the Mg-containing scaffolds was ~40%. Thereafter the release of OTF slowed markedly; on the seventh day, the OTF released by the PO-PPM10 scaffold was 52%, while that released by the PO-PPM5 and PO-PPM0 scaffolds was 44% and 37%, respectively. Therefore, OTF can be released slowly and continuously in PBS solution, meeting the demand for OTF during the initial callus formation period.
Mg can promote the release of PNS and OTF, and with increasing Mg content the promotion effect became more significant. This was because, as water penetrated from the electrospun mesh to the inner layer, the Mg particles in the scaffold began to dissolve, increasing the channels for water to enter the scaffold and thus promoting the release of PNS and OTF. A low concentration of Mg had no significant effect on the release of PNS, possibly because the pores formed by Mg dissolution at this concentration were small, and also because electrospun membranes have a large specific surface area and PNS is easily soluble in water. With more Mg particles, the pores formed by their dissolution increased, and the promotion of PNS and OTF release became more obvious.
Mg reacts with water to form Mg²⁺ and OH⁻; the amount of Mg²⁺ was determined by inductively coupled plasma emission spectrometry, and the pH value was measured directly with a pH meter. It can be seen from Figure 5C that Mg²⁺ was released continuously throughout the release cycle. Because of the high molecular weight of the PLLA used to make the scaffolds, hydrolysis within 7 days can be neglected. The PO-PPM0 group contained no Mg, so the main source of pH fluctuation was drug release. Both OTF and PNS are weakly acidic drugs; their release was highest on the first day and relatively slow over the next 6 days. Therefore, as shown in Figure 5D, the pH value of the PO-PPM0 group reached its lowest on the first day, at 7.28, then rose slightly and remained relatively stable between 7.30 and 7.35 over the next 6 days. Because of the large early release of Mg in the PO-PPM5 and PO-PPM10 groups, the influence of Mg on the pH value far outweighed that of OTF and PNS, so the pH increased significantly, with the PO-PPM10 group higher than the PO-PPM5 group. On the first day, the pH of the PO-PPM10 group reached 7.71 and that of the PO-PPM5 group 7.65. On the third day, the influence of Mg on pH was neutralized by that of OTF and PNS, so the pH remained at ~7.40, the same as the PBS solution. At the later stage of release, OTF and PNS had a greater impact on the pH value, so the pH of the solution was lower than that of PBS; however, owing to the presence of Mg, the pH of the PO-PPM5 and PO-PPM10 groups was still higher than that of the PO-PPM0 group.
Degradation of PO-PPM scaffolds
PLLA is first hydrolyzed in vitro, with the ester bonds on the main chain hydrolyzed to form oligomers. The hydrolysis of PLLA is affected by the environmental pH: degradation is fastest in an alkaline environment, followed by an acidic environment, and slowest in a neutral environment. Because the PLLA used in this paper has a high molecular weight and a slow degradation rate, the degradation of the scaffold at 37 °C was observed over 12 weeks.
Figure 6 shows the micro-morphology of the PO-PPM0, PO-PPM5 and PO-PPM10 scaffolds after 12 weeks of degradation. To better observe the scaffold condition, the scaffolds were cut along the central axis of the cylinder, and SEM images of the inner and outer layers were taken separately. The SEM results showed that after 12 weeks of degradation, the electrospun fiber boundaries of the outer layer of the PO-PPM0 scaffold became blurred, and large holes formed on some PLLA fibers. With increasing Mg content this phenomenon became more obvious, and some PLLA fibers in the PO-PPM5 and PO-PPM10 scaffolds also showed adhesion and fracture. In the inner layer of the scaffold, after 12 weeks of degradation the macropore structure remained, but the interconnected micropores between the macropores disappeared; with increasing Mg content, the adhesion of micropore boundaries became more obvious. The in vitro degradation results showed that the addition of Mg could promote the degradation of the scaffold.
Animal experiment
From the test results in 'Drug release and pH changes of PO-PPM scaffolds' and 'Degradation of PO-PPM scaffolds', both the PO-PPM5 and PO-PPM10 scaffolds were effective in promoting drug release and stabilizing pH, but the pH of the PO-PPM5 group was more stable. Therefore, to study the ability of the double-layer drug-loaded scaffold to promote bone repair, the PO-PPM5 scaffold was selected for the animal experiment, and the new bone formation effect of the scaffold was analyzed by micro-CT. To further study differences in new bone formation in the bone channel, the 3D images were analyzed by software to obtain quantitative results. The quantitative analysis includes five indicators: BMD, bone volume fraction, trabecular separation (Tb.Sp), trabecular number (Tb.N) and trabecular thickness (Tb.Th). BMD is an important indicator of bone strength; bone volume fraction reflects changes in bone mass; and trabecular separation, number and thickness are important indicators of trabecular microstructure.
Figure 7A-D shows the micro-CT 3D bone reconstruction images of the bone channel; (A) and (B), and (C) and (D), are observations from the same angles. It can be clearly seen that new bone formation in the control group was scarce and unevenly distributed, consisting of thin, flaky bone. The amount of new bone formed in the PO-PPM5 group was markedly different from that in the control group: it was widely and evenly distributed and had produced a 3D bone structure. The results of the quantitative analysis are shown in Figure 7E-I. The group implanted with PO-PPM5 scaffolds had higher bone density, and the bone volume fraction, trabecular number and trabecular thickness were also much higher than in the control group, while trabecular separation was smaller. These results showed that the group implanted with the PO-PPM5 scaffold had more bone mass, greater bone strength and better trabecular structure. Figure 7J-M shows H&E staining images of the bone channel: (J) shows the PO-PPM5 group, (L) shows the control group, and the black dashed line marks the location of the bone channel. Figure 7K and M show enlarged images of the white box areas in (J) and (L), respectively. The black area indicated by the yellow arrow in (K) represents residual scaffold degradation products. From Figure 7, significant new tissue growth can be seen around the scaffold in the PO-PPM5 group, while the control group shows numerous pores and little new tissue. Therefore, the double-layer drug-loaded scaffold can effectively promote bone repair, accelerating repair and improving its quality.
Conclusion
A double-layer drug-loaded PLLA scaffold was prepared by combining the phase separation method with the electrospinning method. The inner porous PLLA scaffold was loaded with OTF; the outer electrospun layer contained PNS within the fibers, with Mg particles embedded among them. The results showed that the double-layer PLLA scaffold can achieve the rapid release of PNS and the continuous release of OTF, matching the demand for drugs in the hematoma inflammation stage and the primary callus formation stage. The release rate of drugs from the scaffold can be regulated by controlling the amount of Mg particles added. Therefore, the double-layer PLLA drug-loaded scaffold has good bone repair ability.
Figure 1. Schematic diagram of the preparation method of the double-layer drug carrier and scaffold implantation.
Figure 2. Comparison of pore size and mechanical properties of PLLA and PO scaffolds: (A) and (C) SEM images of the PLLA scaffold; (B) and (D) SEM images of the PO scaffold; (E) the porosity of PLLA and PO scaffolds; (F) the maximum compressive strength of PLLA and PO scaffolds; (G) the pore size distribution of the PLLA scaffold; (H) the pore size distribution of the PO scaffold. Significant differences between groups are denoted as * (P < 0.05).
Figure 3. SEM of PPM scaffolds with different Mg contents: the Mg content in (A) and (D) is 2.5%, in (B) and (E) is 5%, and in (C) and (F) is 10%; (G) statistical analysis of fiber diameters in the SEM images.
Figure 4. Comparison of infrared spectra of PLLA, PP and PO.
Figure 5. Release curves of PNS, OTF and Mg²⁺ from scaffolds with different Mg contents and the change of solution pH: (A) release curve of PNS; (B) release curve of OTF; (C) release curve of Mg²⁺; (D) change of solution pH.
Figure 6. SEM images of scaffolds after 12 weeks of degradation: (A), (B) and (C) the outer layers of PO-PPM0, PO-PPM5 and PO-PPM10 scaffolds, respectively; (D), (E) and (F) the inner layers of PO-PPM0, PO-PPM5 and PO-PPM10 scaffolds, respectively; the image inside the white box is a partially enlarged view.
THE EMERGENCE OF LOCAL COFFEE SHOPS IN INDONESIA AS A COUNTER TO AMERICAN CULTURE HEGEMONY
After winning World War II, the United States (US) tried to spread its hegemony in almost all aspects, including culture. Starbucks has become the biggest US MNC spreading Western culture in Indonesia. Starbucks, with its 326 outlets in Indonesia, has brought new values to Indonesian society. In this paper, the writer analyzes the response of Indonesians in dealing with the cultural hegemony that Starbucks brings as the representation of American culture. This paper uses library research as the data collection method and a qualitative method in analyzing the data. The writer analyzes this case by applying the circuit of culture theory, which consists of five aspects: production, consumption, regulation, representation, and identity. The writer focuses on how local coffee shops adopt the management and production process from Starbucks in their own shops. The creativity of Indonesians means new cultures are quickly adopted. The advent of Starbucks in Indonesia stimulated the establishment of local coffee shops that are no less competitive than Starbucks, the giant coffee shop corporation. The local coffee shops can give a unique experience in enjoying coffee just like Starbucks with its "Starbucks Experience". The local coffee shops also provide not only coffee but other products that might interest customers. The local coffee shops are able to imitate and modify the Starbucks concept in local versions.
Keywords: Starbucks; circuit of culture; production; local coffee; coffee culture
INTRODUCTION
In the modern era, coffee culture has become a way of life in the community, including among young people, in both developed and developing countries. In Indonesia, coffee culture has become a trend among young people, where coffee is used as an instrument for young Indonesians to meet friends and family or even to do homework and assignments. In addition, the development of coffee culture in Indonesia has become one of the driving factors for developed countries such as the US to establish coffee shops in developing countries such as Indonesia. Coffee shops owned by developed countries such as the US usually serve coffee, tea, or other drinks, and also provide snacks and foods such as French fries, cakes, pastries, bread, donuts and pasta.
Not only that, US coffee shops also provide adequate facilities for coffee connoisseurs: a cozy, air-conditioned room and, most importantly, free WiFi access. The availability of free WiFi makes young people willing to linger at the coffee shop just to use it (Hashim, Mamat, & Halim, 2017). As for price, coffee shops from the US of course set prices that are quite high just to drink coffee. However, some young people are willing to pay dearly for higher quality (Hashim, Mamat, & Halim, 2017). In other words, the many facilities offered by US coffee shops certainly attract consumers, especially young people, to have coffee there. This indirectly advances the cultural hegemony of the US through coffee culture.
Based on this, it can be seen that the US is indeed a country with the power to build an empire through aspects that can influence the world, including coffee culture. The culture of coffee looks trivial but has a big impact. In this case, the US spreads and implements its influence through coffee culture to the world, especially to Indonesians, despite the increasingly multipolar distribution of power in the international world. One form of expansion through coffee culture from the US is the emergence of the international US-owned coffee shop chain Starbucks. Starbucks is used as a business opportunity and an instrument to spread the influence of US cultural hegemony. Starbucks is an international coffee corporation that focuses on products and the development of a variety of flavors.
In Indonesia, Starbucks first opened in 2002 at Plaza Indonesia, Jakarta. As of January 2018, Starbucks outlets in Indonesia had grown to 326 outlets in 22 cities (Starbucks Indonesia, 2018). With so many Starbucks outlets in Indonesia, it is easier for the US to spread and instill coffee culture and make it popular in Indonesia. Currently, drinking coffee at Starbucks is considered a lifestyle for people in urban areas such as Jakarta. Indonesians are indeed used to drinking coffee, but the style of enjoying coffee has changed a lot. Previously, not all ages liked coffee; the typical coffee lovers were men aged 20-60, because the coffee was pure, only sweetened with sugar, and men drank it to get rid of drowsiness. However, Starbucks has changed that mindset. After the advent of Starbucks, coffee can be enjoyed by everyone, both men and women of all ages, even kids, because the coffee variants are now diverse rather than pure as before. Coffee is mixed with other ingredients such as milk, fruit and chocolate, so its image has expanded the customer base and it can be enjoyed anytime.
Indonesians who visit Starbucks come not only for the coffee; they also want recognition as a cool or up-to-date person. This is real proof that coffee has become a lifestyle. According to du Gay in the circuit of culture theory, cultural production does not only refer to products traditionally related to literature, film, and music, but to anything intentionally made with a specific meaning or purpose when circulated. These products can be superficial, such as coffee (du Gay, Hall, Janes, Mackay, & Negus, 1997). Coffee is the software in influencing young people to create a new lifestyle, while Starbucks is the hardware in delivering that mission.
As time goes by, the presence of Starbucks has actually been a driving factor for the growth of the local coffee shop industry in Indonesia. Local coffee shops are increasingly established due to the increase in coffee consumption; in fact, local coffee shops are in high demand. Many local companies or individuals establish local coffee shops with various unique features and large-scale promotion. In addition, more shops provide modern coffee makers that can bring out the best taste of the coffee (Farokhah & Wardhana, 2017).
The rapid growth of local coffee shops in Indonesia also paves the way for the advancement of Indonesian local coffee production. Indonesian coffee is already well known for its distinctive tastes, such as Gayo coffee, Toraja coffee, Flores coffee, Luwak coffee and others. In other words, the rapid growth of local coffee shops in Indonesia also contributes to developing industry in Indonesia, which means that the presence of multinational coffee shops from the US is not always a boomerang for Indonesia. On the contrary, the presence of Starbucks in Indonesia has become a driving factor for the development of local coffee shops, and their rapid development has become the main instrument to counter the cultural hegemony of the US. This phenomenon also indicates Indonesians' subtle resistance to the cultural hegemony that the United States is trying to build through coffee culture in Indonesia. Based on the information above, this research is conducted to analyze how Indonesians respond to the American cultural hegemony brought by Starbucks. Through this research it can be understood how the presence of local coffee shops in Indonesia can also be an instrument to counter US cultural hegemony. The results of this research are expected to provide information and understanding about the impact of United States cultural hegemony through a coffee culture that does not always have a negative effect.
In order to understand the phenomenon of the emergence of local coffee shops after the advent of Starbucks in Indonesia, the writer uses a theory relevant to this issue: the circuit of culture. The circuit of culture is a series of five cultural processes used to interpret a text or cultural artifact (du Gay, Hall, Janes, Mackay, & Negus, 1997). If one part of the process is sufficient to reveal the meaning of a text or cultural artifact, then not all the processes need to be applied. This theory was developed by Paul du Gay and Stuart Hall in 1997 to study the Walkman. There are five aspects in the circuit of culture, but the most relevant one to this paper is the production aspect. Hall states in his book that the process of production focuses on entering the business and economic world (Hall, 1997). Du Gay also states that the "production of culture" does not only refer to products traditionally related to literature, film, and music, but also to any goods intentionally made with particular meanings and associations when they are produced and circulated. The product can be banal, such as coffee (du Gay, Hall, Janes, Mackay, & Negus, 1997). In their research, du Gay classifies three kinds of cultural products: the Walkman, cassettes, and music (du Gay, Hall, Janes, Mackay, & Negus, 1997). The Walkman is categorized as "cultural hardware" while the music is "cultural software". It is the same in the discussion in this paper, in which coffee shops are the "cultural hardware" and coffee the "cultural software".
Related to the production of culture, Starbucks can be described as the trendsetter in shaping the style of enjoying coffee. In Indonesia, coffee used to be seen as a commodity for export or an edible substance with high caffeine content, well known for its effectiveness in relieving drowsiness. Coffee was also only enjoyed by the elderly or men working in blue-collar sectors such as truck drivers and construction workers, and was typically consumed in the morning. But after the advent of Starbucks, coffee is promoted for consumption by people of all ages, from kids to the elderly, because the coffee variants offered are diverse and mixed with other ingredients like cream, fruit, and milk. This invention has expanded the consumer base of coffee.
The following discussion reveals the production aspects that have shifted the "coffee culture" in Indonesia, from the process of coffee making using particular tools to the setting of very convenient places that have pushed coffee to become more of a lifestyle. It is therefore very relevant to use the production aspect of the circuit of culture, since production consists of making the thing, inventing it, fabricating it, reproducing it, marketing it, distributing it, and paying all the workers.
DISCUSSION
Coffee culture is defined here as the shift of Indonesians in enjoying coffee from a habit to a lifestyle. Café culture is part of this shift: previously Indonesians had coffee in the warung and at home, but now they mostly have coffee in cafés (coffee shops). The culture of coffee is a transcultural meeting, because the culture spreads from one region to another (Farokhah & Wardhana, 2017). In this case, the coffee culture brought by foreigners meets local cultures. The coffee culture of Starbucks, brought and distributed by the US, meets a culture in Indonesia that really likes to consume coffee, along with the character of Indonesian people, who tend to like gathering or just meeting old friends. The combination of the Indonesian liking for coffee and for gathering with friends makes the Starbucks coffee culture grow and even become a lifestyle for Indonesians.
However, the cultural development of coffee at Starbucks does not always have a negative impact on Indonesia. The coffee culture of Starbucks has slowly encouraged coffee industry players in Indonesia to innovate more. Today, more and more local coffee shops are emerging in various cities in Indonesia. The local coffee shops developing in Indonesia have many variations, including shops made purely for drinking coffee and shops made not only for drinking coffee but also for supporting a lifestyle. These coffee shops innovate in the type of coffee served, the atmosphere of the place, internet facilities and adequate parking. Despite the many innovations introduced by local coffee shops in Indonesia, the prices offered are not expensive. On the contrary, the price of coffee in local coffee shops in Indonesia is quite cheap, especially coupled with facilities as adequate as those of the US multinational coffee shop, Starbucks. This is because some people, especially young people in Indonesia, have made coffee a part of life. In other words, local coffee shops in Indonesia cater to various groups: upper, middle and lower.
There are three factors that influence Indonesians to accept the coffee and café culture. First, Indonesians are basically fond of coffee, which has become an effective drink to stay awake thanks to its high caffeine content; this can be understood as local culture meeting foreign culture, resulting in a mixed culture. Second, Indonesians like to gather with friends and family in their spare time; the sense of family in Indonesia is very strong, and coffee shops are very comfortable venues for such events, so Indonesians choose them to gather. Last but not least, this phenomenon can also be seen from the perspective of globalization, the transition from a traditional to a modern society. The internet era also drives people to look for places that provide Wireless Fidelity (WiFi) to do their tasks or just connect with people on social media.
Local coffee shops have adopted several indicators from Starbucks:
Cozy place
Warung Kopi in Indonesia used to be simple places: a single room filled with men drinking coffee. After the advent of Starbucks, the Warung Kopi were replaced by coffee shops with cozy rooms that provide enough space for many people. The tables are arranged beautifully and artfully. Students can also stay longer because the shops provide electric plugs for those who want to keep their laptop and smartphone batteries charged.
Facility
Warung Kopi do not have any facilities, but coffee shops have sufficient budgets to make customers want to stay longer. The main facilities are WiFi and air conditioning: the air conditioner keeps the room cool and comfortable, while the WiFi is prized by people who do not have an internet connection at home.
Management
Local coffee shops adopt the Starbucks system both inside the shop and in the recruitment of staff. The staff are well trained, so that beyond serving coffee, the barista is also friendly to the customer. The customer orders at the counter, the customer's name is written down, and when the coffee is ready the barista calls that name. The system also mirrors Starbucks in allowing coffee to be ordered either for dine-in or to take away.
Cooperation
To attract more visitors, shops usually cooperate with start-up delivery companies such as Gojek and Grab, which provide a take-away system for customers.
Promotion
Local coffee shops, just like Starbucks, use social media and websites to promote their discounts and new menus or variants. Social media indeed plays a huge role in keeping people updated, since almost everyone in Indonesia uses a smartphone.
Smoking area
To make customers more comfortable, local coffee shops with large spaces create separate rooms for smokers and non-smokers, reflecting a high degree of tolerance toward customers who do not smoke.
Parking Area
Coffee shop owners realize that most Indonesians bring their own vehicles when visiting, so the place should provide a sufficient parking area for the customers' convenience. The parking area is also watched by a parking attendant who is ready to assist.
Coffee variants
Types of coffee variants inspired by Starbucks:
a. Espresso
Espresso is produced by extraction from coffee beans that have gone through the grinding process. It requires an espresso machine and is served fast. This style of coffee first became known in Italy.
b. Latte
Latte is coffee that combines espresso with milk. Most baristas say the ideal ratio of milk to coffee is about 3:1, so the amount of milk used exceeds the coffee. A latte has only a thin layer of foam on the surface.
c. Cappuccino
Compared with a latte, a cappuccino contains less steamed milk and much more foam, and is identified by the thick layer of foam on its surface; the taste is smooth and sweet. Cappuccino is favored by coffee lovers who want to enjoy a lighter coffee.
d. Frappe
Unlike most other coffees, a frappe is made with cold water to produce an iced coffee. It is made from instant coffee, water, sugar, and ice cubes.
e. Mochaccino
The word mocha comes from an original Yemeni coffee called Mocha. A mochaccino is a mixture of espresso with chocolate and milk, targeted at chocolate lovers.
Accordingly, with the presence of Starbucks in Indonesia, the local coffee industry has also improvised, using Starbucks as its benchmark. The following indicators show how local coffee shops in Indonesia have improvised:
1. Larger space (not in a mall, airport, or hotel)
The space of local coffee shops is generally larger than that of Starbucks. Some coffee shops are located not downtown but uptown, yet customers are willing to visit them because of the ambience and the uniqueness of the concept. Some shops adopt a garden concept in which customers enjoy coffee in an open space. The large space is thus an advantage that lets these shops develop the café concept better than Starbucks, so customers enjoy not merely a cozy place but a distinctively thematic one.
Unique concept
While Starbucks provides a comfortable place to sit, local coffee shops facilitate people's hangouts. Their unique concepts include making the room look exactly like an office, providing bookshelves filled with books, educating people by inviting them to a tea plantation, and providing a coffee lab where customers learn to make coffee themselves, among others. Indonesians are indeed very creative and can create artsy yet educational places. Some venues adopt a sociopreneur concept dedicated to society. Moreover, photography and blogging drive customers to visit these places in search of interesting content for social media and websites.
Lavatory
Not all Starbucks outlets provide a lavatory for customers, only those in city centers. Local coffee shops, however, are able to provide comfortable lavatories, so customers are willing to stay there all day.
Hospitality
Starbucks baristas only serve the coffee, but some local coffee shops are willing to teach customers how to make the coffee they want. This offers transparency about what the barista puts into the coffee to make it good, and it passes the barista's skills on to customers, who might be inspired to open their own coffee shops someday.
Price
Starbucks outlets located in airports, malls, and hotels certainly spend a lot of money on rent and taxes, whereas local coffee shops, located outside such public places, can sell coffee at a reasonable price. Starbucks is so expensive that some people go there for pride rather than for the coffee.
Live music
In addition to coffee, Indonesians also love music. Local coffee shops have a special stage for musicians so that customers can enjoy live performances, and they like to invite famous artists to attract customers.
Game tools
To make the shops feel homey, they provide game tools such as bridge cards, Uno cards, Scrabble, and others. This attracts young people who want to do activities with their friends or colleagues.
Looking at the development of local coffee in Indonesia, Indonesian coffee shops have indirectly transformed from traditional into modern coffee shops. Indonesians used to enjoy coffee in the Warkop (Warung Kopi), but globalization, as the driving factor, has spread a different way of enjoying coffee. The aspects that have shifted the coffee culture in Indonesia are summarized in the following table:
[Table: transformation of coffee shops in Indonesia from traditional (Warung Kopi) to modern coffee shops]
The table shows the transformation of coffee shops in Indonesia from traditional into modern ones. Local coffee shops have indeed adopted many aspects from Starbucks, but it turns out they can improvise better with the concepts they create, because their venues were deliberately built as cafés. In terms of facilities, local coffee shops provide more than the premium coffee shops because their venues are spacious rather than limited: they are even able to provide lavatories and game tools that groups find very fun, and they offer live music featuring popular songs.
The point is that local coffee shops have more freedom to create a café according to the concept they really want, because they are not franchises, have no obligation to report to a head office, and are not affiliated with any multinational corporation. The table also shows the variety of coffee shops now developing, each with its own facilities, and that Indonesians now tend to prefer non-premium coffee shops because these provide a unique and comfortable atmosphere. Starbucks is a premium coffee shop, but its atmosphere is somewhat monotonous because it sits mainly in public places. In other words, local coffee shops in Indonesia are developing beyond the Starbucks concept, which means they can resist the cultural hegemony spread by the United States. Moreover, in today's digital era, Indonesian local coffee shops can easily run large-scale promotions on social media to show that their shops are not inferior to Starbucks.
According to du Gay's Circuit of Culture theory, coffee culture is a cultural product of the "cultural software" type. The Starbucks coffee phenomenon is not a serious threat to Indonesia, because it amounts only to a cultural shift in a positive direction. On the contrary, coffee culture can serve as software for Indonesia, because it has given rise to local coffee shops with far better innovations. If the US produces Starbucks coffee shops to spread its cultural hegemony to Indonesia, then Indonesia can produce local coffee shops to maintain Indonesian coffee culture and build a great coffee industry. The more local coffee shops there are in Indonesia, the more local coffee production will increase, which in turn benefits the prosperity of coffee farmers. The following are four of the most popular coffee shops in Indonesia with unique concepts:
Ruang Seduh
Ruang Seduh means Brewing Room. It is located in Jakarta and Yogyakarta and has a very unique concept in which customers make their own coffee. The following is written on the Ruang Seduh website: "Think of Ruang Seduh like a guide for dummies on how to make your own cup of coffee. In this café, coffee enthusiasts will be presented with a non-intimidating guide by baristas without feeling like an actual dummy" (Manual, 2015).
It looks like a lab, with an interior stripped of color. While other cafés serve ready-made coffee, Ruang Seduh teaches customers how to make it and what goes into it. Other cafés do not involve the customer, who simply orders at the counter and sits down; this café offers a different, unique way for customers to experience making a tasteful cup of coffee. Even though there is no exact science to preparing a cup of coffee, everyone can make one with the help of the baristas.
Armor Kopi
This coffee shop has a very unique concept. It is located in the Great Forest Park Djuanda, Dago, Bandung (Kumparan, 2017). Its specialty is hot coffee, since the weather there is cold, so coffee lovers can enjoy hot coffee in cool conditions. The view is another main attraction, because the café sits in the middle of the forest; this coffee shop truly uses a "back to nature" concept.
One Eighty Coffee
This coffee shop is located at Geneca Street No. 3, Bandung (Kumparan, 2017). It offers a way to enjoy coffee built around a "relaxation" concept: uniquely, every visitor is required to take off their shoes so they can soak their feet in the pool. Customers thus experience real relaxation while enjoying their coffee, and after finishing it they feel fresh and ready to go back to work with new spirit.
7 Speed Coffee
This coffee shop is located at North Kemang Street No. 1, Jakarta (Manual, 2015). The name "7 Speed Coffee" is dedicated to the local cycling and skating communities, a commitment manifested in decorations full of skateboards and bicycles: skateboard decks are turned into menus, the door handles take the form of bicycle bars, and "a BMX bike sits neatly at the back of the room serving as a backdrop to the gently-illuminated space" (Manual, 2015). Here customers can enjoy their coffee in the fresh air, with the occasional skateboard stunt from a group of skaters as an attraction found only at this coffee shop.
These four examples of local coffee shops show that the presence of Starbucks in Indonesia has opened up great opportunities for the Indonesian coffee shop industry. The coffee shop is not only a place to drink coffee but also a coffee-drinking experience tied to social life (Yuliandri, 2015). In other words, the culture of drinking coffee has become part of the lifestyle, and someone who has not visited a well-equipped local coffee shop may seem left out socially, especially among young people.
CONCLUSION
Based on the discussion above, it can be concluded that the Indonesian response to US cultural hegemony is resistance through action. The action taken by Indonesians is to establish their own coffee shops, similar to Starbucks and even with better concepts. Indonesians adopt the management and systems of Starbucks, implement them in their own coffee shops, and add new innovations. This proves that the production of a multinational coffee shop like Starbucks does not always have a negative impact on Indonesia. On the contrary, the presence of Starbucks has produced a cultural shift: previously coffee was consumed only in the morning or evening, and mainly by elders or manual workers, but gradually the culture of drinking coffee has become identified with coffee shops that offer comfortable places and good coffee, and that supply a lifestyle for Indonesians. This shift in coffee culture has in turn developed a local coffee shop industry that innovates beyond what Starbucks does, and the growth of that industry also supports coffee farmers and promotes Indonesian coffee products. In other words, the aspects of cultural production described by du Gay have shifted the coffee culture in Indonesia, from the process of making coffee with sophisticated tools to the comfortable places that are the cultural driving factors behind coffee becoming a lifestyle.
Observed relationship between CO2-1 and dust emission during post-AGB phase
A CO 2-1 line survey is performed toward a sample of 58 high Galactic latitude post-AGB (pAGB) stars. To complement the observations, a compilation of literature CO 2-1 line data of known pAGB stars is done. After combining the datasets, CO 2-1 line data are available for 133 pAGB stars (about 34 per cent of known pAGB stars), among which 44 are detections. The CO line strengths are compared with infrared dust emission for these pAGB stars by defining a ratio between the integrated CO 2-1 line flux and IRAS 25 µm flux density (the CO-IR ratio). The relationship between the CO-IR ratio and the IRAS color C23 (defined with the 25 and 60 µm flux densities) is called here the CO-IR diagram. The pAGB objects are found to be located between AGB stars and planetary nebulae (PNe), and segregate into three distinctive groups (I, II and III) on the CO-IR diagram. By analyzing their various properties such as chemical types, spectral types, binarity, circumstellar envelope expansion velocities, and pAGB sub-types on the CO-IR diagram, it is argued that the group-I objects are mainly intermediate mass C-rich pAGB stars in the early pAGB stage (almost all of the considered carbon-rich '21 µm' stars belong to this group); the group-II objects are massive or intermediate mass pAGB stars which already follow the pronounced trend of PNe; and the group-III objects are mainly low mass binary pAGB stars with very weak CO 2-1 line emission (almost all of the considered RV Tau variables belong to this group). The CO-IR diagram is proven to be a powerful tool to investigate the co-evolution of circumstellar gas and dust during the short pAGB stage of stellar evolution.
INTRODUCTION
After the super-wind has ceased, the evolution of the remnant circumstellar envelope (CSE) around a single post-Asymptotic Giant Branch (post-AGB, or pAGB) star is dominated first by its expansion, and later by photochemical processes as the central star temperature quickly rises. The expansion of the remnant CSE produces a detached circumstellar shell, and the object is usually characterized by a double-peaked spectral energy distribution (SED) due to the lack of hot dust (Kwok 1993). CO molecules in the CSE are protected from photodissociation mainly by self-shielding and by line shielding by atomic and molecular hydrogen (see, e.g., Mamon et al. 1988).
There have been few dedicated works on the relationship between dust and CO in pAGB stars. Alcolea & Bujarrabal (1991) noticed that some pAGB stars of RV Tauri type are usually very deficient in CO line emission and proposed peculiar elemental abundances that prevent the formation of CO molecules as a possible interpretation of the phenomenon. Bujarrabal et al. (1992) observed ¹²CO and ¹³CO 1-0 and 2-1 lines toward several young pAGB stars. Combined with literature data, their results revealed a correlation between the CO 1-0 integrated intensity and the IRAS 60 µm flux density (F_60). However, some exceptional pAGB stars show either too strong or too weak CO 1-0 line intensities relative to their F_60 flux densities. They also noticed that the CO line is stronger in some AGB carbon stars than in the investigated pAGB stars. By comparing the CO lines of the young pAGB stars with other pAGB stars from the literature, they tentatively concluded that younger pAGB stars might have stronger CO lines. With the observational CO data accumulated to date, it is now possible to extend such studies in a more systematic way.
Here we present a new observational study of the relationship between dust and CO in the evolving circumstellar envelopes of pAGB objects, using the compilation of such objects from the Torun Catalog of pAGB stars (Szczerba et al. 2007, 2012). After the description of our observations and data reduction in Sect. 2, the results are presented in Sect. 3. To augment the data set for analysis, we have also performed an as-complete-as-possible compilation of literature CO 2-1 data for all known pAGB stars in Sect. 4 (with details presented in the Appendix). Then, the observed relation between the integrated CO 2-1 line fluxes and IR dust emission flux densities is investigated and compared to that of AGB stars and PNe in Sect. 5. To better understand the observed relationship between CO and dust emission, some other properties of the pAGB stars, such as CSE chemical types, spectral types, binarity, spectral energy distribution types, CSE expansion velocities and pAGB sub-types, are discussed in Sect. 6. Finally, the main points of this work are summarized in Sect. 7.
SAMPLE SELECTION, NEW OBSERVATION AND DATA REDUCTION
In order to explore the relationship between dust and CO in pAGB circumstellar envelopes with smaller contamination from interstellar CO emission, a sample of 58 high Galactic latitude pAGB stars (with |b| ≥ 15°) that are accessible from the Arizona Radio Observatory 10 m submillimetre telescope (AROSMT; objects with declination ≥ −38°) was selected from the Torun Catalog of pAGB stars (Szczerba et al. 2007). However, the discussions in this paper will be based on the second version of the catalogue (Szczerba et al. 2012). We note that, although great efforts have been made in building the catalog, the evolutionary status of some objects (such as RV Tau type and R CrB stars) is still uncertain. Almost all of these pAGB objects have optical and/or infrared photometry and/or spectroscopy data, which provides a reliable basis for the analysis in this work.
The ALMA band-6 sideband-separating receiver on the AROSMT telescope was used for our ¹²CO J=2-1 line survey from November 2007 to April 2008. Sky subtraction was done in beam-switch mode with a 2 arcmin throw, wobbling at 1 Hz in the azimuth direction. Two filter banks (FFBs, 1 GHz width, 1024 channels) were used for the two linear polarizations. The beam width was about 32″ in this line. A nominal factor of 44.4 Jy/K can be used to derive line peak fluxes.
The GILDAS/CLASS package was used to reduce and analyze the data. The two polarizations were combined to improve the S/N. An on-plus-off integration time of 20 min per object yielded a typical root mean square (RMS) noise of about 15 mK at a spectral resolution of 1 MHz in the polarization-averaged spectral baseline. A linear baseline was removed from each spectrum.
RESULTS
The CO 2-1 line was detected toward only six sources among the 58 observed in this survey. The baseline RMS noise of all observed data and the line parameters for the detected sources are given in Tables 1 and 2, respectively. A main beam efficiency of 0.75 is used to convert the antenna temperature T*_A into main beam temperature T_mb. Also given in Table 2 are the integrated line flux, F_int, and its ratio to the IRAS 25 µm flux density,

R_CO25 = F_int / F_25.    (1)

Only good quality IRAS flux densities (with Q = 2 or 3) are used in this work. The IRAS 25 µm flux density is considered because it is the representative wavelength of the cool dust emission from the circumstellar envelopes of all evolved stars considered here: AGB stars, pAGB stars, and PNe. In addition, 25 µm flux densities suffer smaller interstellar contamination than 60 µm flux densities for the low Galactic latitude objects that will be compiled from the literature in the next section. Among the six detected sources, there is only one new detection of the CO 2-1 line, in IRAS 07430+1115 (only its CO 1-0 line was detected before). The CO 2-1 line was observed but not detected toward this object in an earlier study, perhaps because of the wrong source position used (the position observed was about 30″ away from the IRAS position, comparable to the beam size). The very narrow CO 2-1 line toward IRAS 19437-1104 is also new, but it is possibly a contamination from a high Galactic latitude molecular cloud, because: 1) such a narrow CO line (V_exp = 0.7 km s⁻¹) is very rare among pAGB stars, and 2) the LSR velocity of the narrow CO line (V_LSR = 3.58 km s⁻¹) is the same as that of an interstellar cloud in the same direction measured by Dame et al. (2001). Therefore, this object will not enter our discussions hereafter.
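As a minimal sketch of the flux scale described above (the spectrum values below are hypothetical placeholders; the 0.75 efficiency and 44.4 Jy/K factor are those quoted in the text):

```python
import numpy as np

# Hypothetical polarization-averaged spectrum: antenna temperature per
# channel (K), on the 1 MHz channel grid used by the filter banks.
t_a_star = np.array([0.00, 0.04, 0.11, 0.16, 0.12, 0.05, 0.01])  # K
channel_width_mhz = 1.0

eta_mb = 0.75      # main beam efficiency quoted in the text
jy_per_k = 44.4    # nominal point-source conversion factor for the 10 m dish

t_mb = t_a_star / eta_mb          # main beam temperature (K)
s_nu = t_a_star * jy_per_k        # flux density per channel (Jy)

# Integrated CO 2-1 line flux in Jy MHz, the units in which the
# R_CO25 ratio of Eq. 1 comes out in MHz.
f_int = s_nu.sum() * channel_width_mhz
print(f"F_int = {f_int:.2f} Jy MHz")
```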
The resulting CO line profiles of the six detected sources are shown in Fig. 1. Here we note some interesting spectral line features in these profiles. 1. A common feature of these CO 2-1 lines is the presence of high velocity line wings: four of the five detected circumstellar CO lines show evidence of such wings (the only exception being IRAS 07430+1115, see Fig. 1). Gaussian line wings are usually produced by fast (bipolar) winds from the central pAGB star.
2. The CO 2-1 line of IRAS 19500-1709 is a composite profile with a strong narrow feature superimposed at the center of a broader flat pedestal. Similar composite line profiles have been found in some other AGB or pAGB stars, such as RS Cnc (Gérard & Le Bertre 2003; Libert et al. 2010), EP Aquarii (Winters et al. 2007), and more such sources in Knapp et al. (1998) and Winters et al. (2003). The interpretation of such profiles is, however, still controversial.
3. The sharp-peaked CO line profile of IRAS 17534+2603 has been nicely interpreted by Bujarrabal et al. (2007), on the basis of their CO line interferometry data, with a bipolar hourglass-shaped outflow model.
4. A small narrow emission-like feature can be recognized around V_LSR = 5 km s⁻¹ on top of the broad line profile of Frosty Leo. A plateau was seen near the same velocity in a better quality line profile obtained with IRAM (with a smaller beam) by Bujarrabal et al. (2001). The change of the profile shape of this feature with different beam sizes indicates that it should originate from the circumstellar envelope rather than from interstellar clouds. The same object was observed with the IRAM 30m by Bujarrabal et al. (2001), and clearer absorption-like features were seen around similar velocities on top of broad line wings in their better profile data. As argued by Bujarrabal et al. (1992), these could be the signpost of still deeply embedded small-size fast winds. Similar absorption-like features also appear in the blue line wing of IRAS 17436+5003, at Doppler shifts of −33, −23 and −14 km s⁻¹ with respect to the systemic velocity of −35.3 km s⁻¹, in Fig. 1. The bluest of the three absorption features even shows negative strength. Although the three absorption features in this object are weak, their appearance can be confirmed by comparing our data with a previous observation of the same line with the IRAM 30m telescope by Bujarrabal et al. (1992) more than 16 years earlier. These absorption features were also briefly mentioned by Castro-Carrizo et al. (2004) in their more recent observations. Part of these weak but stable absorption features could be interpreted by embedded small-size fast winds, as in the case of IRAS 19500-1709. The absorption with negative flux could also be interpreted similarly, provided that the continuum emission is strong enough in the compact outflow regions. However, it is hard for this scenario to explain all three absorption features in this line profile with simple CSE structures. Similar absorption line features have been found in the CO line profiles of other well-known AGB or pAGB stars, e.g., CRL 2688 (Kawabe et al. 1987; Cox et al. 2000; Bujarrabal et al. 2001), CRL 618 (Hajian et al. 1996; Sánchez Contreras et al. 2004), and some other sources in Castro-Carrizo et al. (2010).
LITERATURE DATA
Because the detection rate of the CO 2-1 line in our high Galactic latitude pAGB star sample turns out to be low, we complement our sample with a compilation of literature CO 2-1 line data for all 393 known pAGB stars (including the 'likely', RCrB-eHe-LTP and RV Tau types) from the second version of the Torun Catalog (Szczerba et al. 2012). Here eHe means extreme helium stars, while LTP stands for Late Thermal Pulse objects. The details of the data compilation are given in Appendix A; only a summary of the literature data is given here.
In total, the compilation (as of October 26, 2011) includes 175 literature CO 2-1 line data entries (see Tables 5 and 6 in Appendix A) for 87 pAGB stars. Repeated observations have been averaged together to yield the most representative CO 2-1 line flux for each of these 87 objects. They are presented in Table 7 in Appendix A, together with IRAS flux densities, IRAS colors and the ratio between the integrated CO 2-1 line flux and the IRAS 25 µm flux density.
After combining this with our high Galactic latitude sample and removing duplicated objects, the total number of observed pAGB stars is 133, among which 44 objects were detected in the CO 2-1 line. Thus, about 34 per cent of the known pAGB stars have been observed in the CO 2-1 line, and the detection rate is also about 34 per cent.
According to the analysis of the statistical properties of the observed sample (see Appendix B), the available CO 2-1 observations have the following biases: 1) pAGB stars in the Galactic Center direction are under-represented; 2) the available CO 2-1 line observations are biased toward strong IR emitters, and the detectability of the CO 2-1 line is sensitivity limited.
In addition, 1117 CO 2-1 line data entries are collected for 751 other objects (AGB stars and PNe) from the following papers for the purpose of comparison with pAGB stars: the compilation of Loup et al. (1993); some S stars from Ramstedt et al. (2009); some PNe from Huggins et al. (1996, 2005); and some O-rich semiregular and irregular variables from Kerschbaum & Olofsson (1999).
Among them are 569 AGB stars (28 OH/IR stars plus another 155 O-rich AGB stars, 56 S stars, and 330 C-rich AGB stars) and 182 PNe. Repeated CO observations of these objects were averaged together, as was done for the pAGB objects. We do not present these detailed data here, however, since they are not complete compilations for any of these types of objects.
The IRAS point source flux densities of these objects are also extracted from the IRAS Point Source Catalog and, in a few cases, from the IRAS Faint Source Catalog. Only good quality IRAS data (with quality factor Q > 1) are used in this work.
CO-IR relation of high and low Galactic latitude post-AGB stars
Although the relation between CO line strength and infrared dust emission strength has been studied before (e.g., Bujarrabal et al. 1992), it still lacks in-depth investigation. In this work, we do not try to compare the dust and gas mass loss rates. Instead, we directly compare the observed CO line fluxes with IRAS flux densities (the R_CO25 ratio; see Eq. 1). The great advantage of this direct comparison is that it excludes the uncertainties introduced by distances and by empirical formulas for the mass loss rate. The traditional IRAS color-color (C-C) diagram of the two colors, C12 = 2.5 log(F_25/F_12) and C23 = 2.5 log(F_60/F_25), is also used, where F_12, F_25 and F_60 are the IRAS flux densities at 12, 25 and 60 µm. The C23 color is a better tracer of the cool dust emission around pAGB stars than C12. Thus, we will concentrate on the R_CO25-C23 relationships of our sample stars.
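As a sketch of how the quantities plotted on these diagrams follow from the definitions above (the flux values in the example are placeholders, not measurements from the sample):

```python
import math

def iras_colors_and_ratio(f12, f25, f60, f_int_jy_mhz):
    """IRAS colors and CO-IR ratio as defined in the text.

    f12, f25, f60 : IRAS flux densities in Jy (good quality assumed)
    f_int_jy_mhz  : integrated CO 2-1 line flux in Jy MHz
    Returns (C12, C23, R_CO25), with R_CO25 in MHz.
    """
    c12 = 2.5 * math.log10(f25 / f12)
    c23 = 2.5 * math.log10(f60 / f25)
    r_co25 = f_int_jy_mhz / f25
    return c12, c23, r_co25

# Placeholder example values for a hypothetical source
print(iras_colors_and_ratio(f12=2.0, f25=10.0, f60=8.0, f_int_jy_mhz=10.4))
```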
Our new observational results for the high Galactic latitude (h.g.l.) pAGB stars are combined with the literature data. The h.g.l. objects (we adopt |b| > 15°) should be mainly local stars or low mass halo stars, statistically less massive than their low Galactic latitude (l.g.l.) counterparts. Thus, we plot their R_CO25-C23 and IRAS C-C diagrams separately in Fig. 2. Apart from our new observations, there are no literature CO 2-1 line detections toward other h.g.l. objects. However, the average literature CO line fluxes show clear discrepancies with our new measurements for Frosty Leo and 89 Her (see the two symbols linked to the same object names in the figure). The lower literature CO line flux of 89 Her could be due to the fact that it has been partially resolved by the small beam of the IRAM telescope, which was used to obtain two of the three available literature CO datasets. For Frosty Leo, the literature data were obtained by a 12m telescope (similar to this work) more than 18 years ago; the reason for the discrepancy is unclear. The 1-σ upper limits for the non-detected sources are estimated by assuming a typical CSE expansion velocity of 10 km s⁻¹ (≈ 7.7 MHz). The R_CO25-C23 diagram will be called the CO-IR diagram hereafter.
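A minimal sketch of that upper-limit estimate, assuming the 1-σ limit is simply the baseline RMS (converted to Jy) times the assumed line width in MHz; the RMS value below is the typical survey value quoted earlier, and the authors' exact recipe may differ:

```python
# Upper limit on the integrated CO 2-1 flux for a non-detection,
# assuming a typical CSE expansion velocity of 10 km/s.
CO21_FREQ_GHZ = 230.538        # rest frequency of CO 2-1
C_KM_S = 299_792.458           # speed of light in km/s

v_exp = 10.0                                    # km/s, assumed expansion velocity
dnu_mhz = CO21_FREQ_GHZ * 1e3 * v_exp / C_KM_S  # ~7.7 MHz equivalent line width

rms_jy = 0.015 * 44.4           # 15 mK baseline RMS converted to Jy (assumption)
f_int_upper = rms_jy * dnu_mhz  # 1-sigma upper limit in Jy MHz
print(f"Assumed width: {dnu_mhz:.1f} MHz; 1-sigma limit: {f_int_upper:.2f} Jy MHz")
```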
The upper panels of Fig. 2 show the h.g.l. pAGB stars: both the literature data (gray circles) and our new observations (black dots for the detected objects), as indicated by double arrows in the figure. In the upper left panel, almost all the detected pAGB stars, excluding the single exceptional case of Frosty Leo, show very similar CO-IR flux ratios of

R_CO25 ≈ 1.04 (±0.08) MHz,    (2)

as delineated by the horizontal line in the figure. This relation is valid in the color range −1.6 < C23 < 0.0 for the considered pAGB stars. The slanted dashed line will be explained below.
In the IRAS C-C diagram in the upper right panel, Frosty Leo is not shown because of the poor quality of its IRAS 12 µm flux density data. The remaining four objects are distributed along an elongated region below the black body line. In particular, the C12 colors vary more than the C23 colors. This trend of increasing colors could be the result of the fast weakening of the 12 µm flux density when the inner CSE of the pAGB stars is gradually evacuated. Thus, the increasing IR color sequence of the detected h.g.l. pAGB stars could be an evolutionary sequence. Two special objects deserve comment. Frosty Leo stands out with its very red C23 color and very high R_CO25 ≈ 60 in the upper left panel of Fig. 2. Although the very red C23 color can be partially attributed to its very strong water ice emission band around 46 µm (Forveille et al. 1987), the very large R_CO25 ratio needs another mechanism or explanation. This object also has other peculiar properties, such as a high initial stellar mass of 4.23 M_⊙ (Murakawa et al. 2008), a massive compact expanding ring seen in the CO 2-1 and 1-0 lines and in near-IR polaro-imaging data (Dougados et al. 1990) around a binary system (Roddier et al. 1995), and frenzied equatorial jets (Sahai et al. 2000). V887 Her is also peculiar in that the 1-σ upper limit of its R_CO25 ratio is more than 10 times smaller than that of the five CO detections in the upper left panel of Fig. 2. It is an RV Tauri variable (Şahin et al. 2011) showing depletion of refractory elements such as Al, Y and Zr.

In the two lower panels of Fig. 2, the l.g.l. pAGB stars are shown. The CO-IR diagram in the lower left panel shows that many stars with CO detections (gray circles) crowd around a region similar to that of the high latitude stars (with R_CO25 ≈ 1.04 and C23 < 0.0), while the remaining objects seem to be located in an elongated region extending from blue C23 color and small R_CO25 ratio to red C23 color and large R_CO25 ratio. The object names are marked in the figure for the latter group of stars. We find that a slanted straight line (the dashed line, Eq. 3) can represent this trend reasonably well. This log-linear trend holds in the color range −1.2 < C23 < 1. Eq. 3 is not a fit to the data, but a simple function recognized by eye. The lower right corner of this CO-IR diagram is devoid of objects, hinting that this log-linear trend could represent a border of the pAGB star distribution on the CO-IR diagram. The l.g.l. pAGB objects that are distributed along the slanted line and have C23 < 0 are located on or below the BB line in the IRAS C-C diagram, while the remaining objects with C23 > 0 (with the exception of IRAS 19475+3119) are located above the BB line (see the lower panels of Fig. 2). In addition, the l.g.l. pAGB stars that appear in a similar region as the h.g.l. ones in the CO-IR diagram (compare the two left panels) also occupy a similar region in the IRAS C-C diagrams (compare the two right panels). This indicates that these h.g.l. and l.g.l. objects share a similar combination of dust and CO gas characteristics.
CO-IR diagrams in the context of AGB-pAGB-PN sequence
Although some regularity has begun to emerge in the R_CO25-C23 diagrams of the pAGB stars discussed above, the trends are still murky due to the limited number of objects involved. In this section, we try to verify these trends in the broader context of the AGB-pAGB-PN evolutionary sequence. The CO 2-1 line data collected from the literature for AGB stars and PNe make this comparison possible. Thus, the CO-IR diagrams of these AGB stars and PNe are plotted and compared with the pAGB stars (represented by the two straight lines that delineate the major trends) in Fig. 3. The traditional IRAS C-C diagrams are also shown. For clarity, the C-rich and O-rich AGB stars are plotted in separate panels.
AGB stars and PNe on the CO-IR diagrams
We briefly discuss the distribution of AGB stars and PNe on the CO-IR and IRAS C-C diagrams, while more comparison among these objects is presented in Appendix C.
In the top left panel of Fig. 3, C-rich AGB stars (empty circles) concentrate in a compact region of the R_CO25-C23 diagram, and S stars (gray filled circles) scatter over a similar region. In the IRAS C-C diagram (top right panel), most of these stars are distributed slightly below the black body line (the dotted line) or in regions with red C23 but blue C12 colors. Their mean IR color is C23 = −1.54 ± 0.35 and their mean CO-IR flux ratio is log(R_CO25) = 0.41 ± 0.40.
In the middle left panel of Fig. 3, many O-rich AGB stars (empty circles) concentrate in another compact region, distinct from that of the C stars, in the CO-IR diagram, while an extreme subsample of them, the OH/IR stars (half-shaded black circles), scatter over regions with similar or smaller R_CO25 ratios but usually redder C23 colors than the other O-rich AGB stars. Their mean IR color is C23 = −1.70 ± 0.76 and their mean CO-IR flux ratio is log(R_CO25) = −0.13 ± 0.58, which means they are slightly bluer and have 3.5 times smaller mean R_CO25 ratios than C-rich AGB stars on average.
On the IRAS C-C diagram in the middle right panel of the figure, most of these O-rich AGB stars concentrate in a region below the black body line, while the OH/IR stars stretch out to much redder regions, as expected.
In the bottom left panel of Fig. 3, PNe show a pronounced increasing log-linear trend in the CO-IR diagram. The trend spans more than 4 orders of magnitude in the R_CO25 ratio and is in close agreement with the log-linear trend of the pAGB stars (the dashed line). In the IRAS C-C diagram in the bottom right panel, most of the PNe appear in a very red region above the black body line, as expected for cold dusty CSEs.
The pAGB trends on the CO-IR diagram in the context of AGB-pAGB-PN evolution
By comparing the pAGB stars (represented by the two straight lines) with the AGB stars and PNe in Fig. 3, it is now clear that the pAGB stars are distributed in a transitional region between the AGB stars and PNe on the CO-IR diagram. The subgroup of pAGB stars represented by the horizontal solid line in the figure has CO-IR ratios similar to those of AGB stars (with a mean log(R_CO25) = 0.02 ± 0.03), but redder IR colors (C23 = −1.6 to 0.0). The log-linear trend followed by the rest of the pAGB stars on the CO-IR diagram agrees quite well with the trend of PNe. Thus, those pAGB stars with very red C23 colors (> 0) could be the precursors of the shown PNe. The other end of the log-linear trend (with C23 < 0) is populated by pAGB stars that have exceptionally small R_CO25 (very weak CO line emission).
The distinctive characteristics of these pAGB stars allow us to divide them into three subgroups, which, as we show later, have different properties.
Grouping of post-AGB stars
Our sample of pAGB stars aggregates in different regions of the CO-IR diagram, a segregation that is not so well seen in the traditional IRAS C-C diagram. As we will see below, the aggregation in the CO-IR diagram reflects the different natures (e.g., mass, binarity, chemistry) and stages of pAGB evolution of these stars, through the contrast of dust and CO line emission. Here, we merge the h.g.l. and l.g.l. pAGB stars and divide them into three CO-IR groups, as shown in the upper panel of Fig. 4. Only those CO non-detections that are significant for the source classification are plotted in Fig. 4.
group-I: pAGB stars with R_CO25 > 1/3 and C23 < 0 (the red filled circles, or gray filled squares, in Fig. 4). They compose the largest group of pAGB stars, distributed in a horizontally elongated region of the CO-IR diagram with similar R_CO25 ratios of ∼1 MHz (actually in the range 0.42-5.2 MHz, with a median of ∼1 MHz). Their distribution can be roughly represented by the horizontal line determined for our h.g.l. pAGB stars in Sect. 5.1.
group-II: pAGB stars with C23 > 0 (the green filled circles and one gray filled square in Fig. 4). They are the reddest group, usually with enhanced CO 2-1 emission relative to their IR dust emission, and occupy the red part of the log-linear trend delineated by the dashed line in the figure.
group-III: pAGB stars with R_CO25 < 1/3 and C23 < 0 (the blue filled circles in Fig. 4). They have exceptionally weak CO 2-1 line emission, although their C23 colors are similar to those of group-I stars. They occupy the blue part of the log-linear trend (the dashed line) and the region immediately above it. This group also includes some CO 2-1 non-detections whose 1-σ upper limits on the R_CO25 ratio are smaller than 1/3, so they too are CO-deficient pAGB stars. We did not plot the other non-detections, since they could be members of different groups. These boundary conditions are summarized in the sketch below.
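A compact sketch of the three-way classification just defined (the 1/3 MHz and C23 = 0 boundaries are taken directly from the group definitions above; cases near both borders are tentative, as discussed next):

```python
def co_ir_group(r_co25_mhz, c23):
    """Assign a pAGB star to a CO-IR group from R_CO25 (MHz) and the C23 color."""
    if c23 > 0:
        return "II"                        # reddest objects, along the PNe trend
    return "I" if r_co25_mhz > 1.0 / 3.0 else "III"

# Placeholder examples (not real sample values)
print(co_ir_group(1.0, -1.0))   # -> I
print(co_ir_group(2.0, 0.5))    # -> II
print(co_ir_group(0.1, -1.2))   # -> III
```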
All three CO-IR groups of pAGB stars are listed in Table 3, with the groups in separate sub-sections of the table. The vicinity of the intersection between the solid and dashed lines in the top panel of Fig. 4 does not allow us to be sure to which group a given object belongs. Therefore, the classification of the objects near the group borders (here we have somewhat arbitrarily assumed a rectangular border region with R_CO25 > 1/3 and −0.5 < C23 < 0) is tentative, and we labeled them with a leading '*' symbol before their object names in Table 3. [Fig. 2 caption: object names are labelled for our five high Galactic latitude pAGB stars and for the CO-detected pAGB stars that belong to groups II and III.] The table is organized to enable easy comparison between the CO-IR and IRAS C-C diagrams in Fig. 4. Objects in groups I and II are sorted by increasing C23 color, while those in group III are ordered by decreasing C23 color. Also listed in the table are some properties of the pAGB stars that will be explained and discussed later in Sect. 6. In the lower panel of Fig. 4, the three groups are also displayed on the traditional dust emission diagnostic tool, the IRAS C-C diagram. Group-II pAGB stars are clearly separated from the other stars by their large C23 colors. However, group-I and group-III pAGB stars are well mixed in the IRAS C-C diagram, which demonstrates that the involvement of gas emission (the CO 2-1 line) in the grouping has brought new information about the pAGB stars.
Group-I pAGB stars are distributed in an elongated region of the IRAS C-C diagram, with their C23 colors varying only by a factor of about 1.5 while their C12 colors vary by a larger factor of about 4.0. The larger variation in the C12 colors may be a natural consequence of the fast weakening of the 12 µm dust emission in the expanding detached CSE. Thus, group-I pAGB stars are possibly still in the early pAGB stage, when the detached CSE is still actively developing; this is also supported by the fact that their R_CO25 ratios are similar to those of AGB stars. The similarity of their R_CO25 ratios hints that the thermal balance in the gas (represented by the CO 2-1 line) and dust (represented by the 25 µm emission) could still be tightly coupled during the early pAGB stage.
The red C23 colors and richness of CO 2-1 emission of the group-II stars indicate that they could have massive and cool CSEs. By contrast, the weakness of the CO 2-1 emission and the blue C23 colors of the group-III stars suggest that they could be lower mass pAGB stars with a more transparent spherical component in their CSEs, where CO molecules have been partially or totally destroyed by penetrating UV photons. The well known low mass binary disc system, the Red Rectangle (Men'shchikov et al. 2002; Witt et al. 2009), which shows a narrow CO line, is a member of group-III. However, it is not clear why the distinct groups II and III share the same log-linear relationship on the CO-IR diagram.
DISCUSSION
As shown in Sect. 5, the AGB, pAGB and PN CO emitters segregate into different regions of the CO-IR diagram, which sets up a new platform for discussing various aspects of pAGB star evolution. In this section, we consider the chemical and central star spectral types, spectral energy distribution (SED) types, binarity, CSE expansion velocities and pAGB sub-types of the sample stars to investigate the evolution and diversity of these objects using the CO-IR diagram.
To facilitate the discussion, we collect the important properties of the three groups of pAGB stars in Table 3. Altogether, excluding 32 non-detections with upper limits of R_CO25 > 1/3, we have gathered information for 55 objects. The table has the following columns: object name; C23, the IRAS color; R_CO25, the CO-to-IR flux ratio; V_exp, the CSE expansion velocity; SED, the spectral energy distribution type; binarity; Chem. type (*/CSE), the central star and CSE chemical types; spectral type of the central star; pAGB class; and Chem. ref. (*/CSE), the literature for the chemical type of the star and its CSE. The references for chemical and spectral types are given in footnotes to Table 3. Most of the data are already collected in the Torun Catalog, but not the chemistry of the central stars and their circumstellar envelopes; therefore, we have performed a dedicated literature study to determine chemical types (and also to fine-tune spectral types) of the pAGB objects, with credits to the original works or compilations given below Table 3. Note that the chemistry of the CSE is also inferred from dust features seen in Infrared Space Observatory (ISO) spectra (available in several cases), which can be found in the Torun catalog.
Chemical and spectral types of post-AGB objects
For the discussion in this paper, we assume that the chemical type of a pAGB object is C-rich if it has a C-rich central star and/or C-rich dust in its CSE. Stars with dual dust chemistry in their CSE (simultaneous presence of C-rich and O-rich dust features and/or molecules) are also treated as having a C-rich chemical type. Knowledge of the chemical type is important, since it allows a rough estimate of the progenitor mass. Single carbon stars are formed only in a limited progenitor mass range: for solar metallicity this happens for ∼1.5 M_⊙ < M_ZAMS < ∼5 M_⊙, and the mass range shrinks and shifts toward somewhat smaller values for lower metallicities (Piovan et al. 2003). For progenitor masses lower or higher than this range, a star will remain O-rich. Note, however, that in close binary systems, mass transfer to a companion star may reduce the star's AGB lifetime, and the star may remain O-rich even if its progenitor was of intermediate mass.
The distribution of the chemical types from Table 3 is visualized together with the central star spectral types on the CO-IR diagram in Fig. 5. C-rich and O-rich pAGB stars are plotted in separate rows, and those with late and early spectral types in different columns.
In the left two panels of Fig. 5, the C-rich and O-rich pAGB stars with late spectral types (F, G, K and M) show distinctive distributions. C-rich pAGB stars of late spectral types are mostly group-I sources, with only a few objects, such as the Red Rectangle, Roberts 22 and IRAS 19477+2401, belonging to group-III. On the other hand, O-rich pAGB stars with similar spectral types belong predominantly to group-III and to the transition region between the groups. Group-II objects with late spectral types are not numerous in general, and among them there is only one C-rich star (IRAS 11385−5517), which shows SiC emission at 11.3 µm in its ISO spectrum but also has OH emission detected from its shell (see the Torun catalog and references in Table 3).
Post-AGB stars of earlier spectral types (right panels of Fig. 5) are also not numerous in our sample, but they are located mostly along the log-linear track (the dashed line in the figure), with a clear exception being V886 Her (C23 = −2.07), a massive (Ryans et al. 2003), fast-evolving O-rich pAGB star (Arkhipova et al. 1999).

[Table 3 notes: * These objects appear near the joint region between the three CO-IR groups, and thus their group identities are tentative. Objects are sorted by C23 color within each group. (a) SED types, binarity and the sub-types of post-AGB stars are taken from the Torun Catalog. (b) These values are taken preferentially from this work, instead of from the literature.]

From the point of view of the progenitor mass, a feasible interpretation of the discussed trends is that group-I objects, being predominantly C-rich, are intermediate mass pAGB stars; the group-III sources, which are CO-deficient, are the lowest mass pAGB stars; and the group-II sources are intermediate or high mass stars, which already follow the PNe trend on the CO-IR diagram (see the bottom left panel in Fig. 3). This simplified interpretation is supported by the significantly large percentage of O-rich sources (about 70 per cent, see Table 3) among groups III and II. On the other hand, the paucity of O-rich pAGB stars in the group-I region (see the two bottom panels of Fig. 5) seems to suggest that O-rich pAGB stars do not evolve into this region.
From the point of view of single star evolution, we expect that during the pAGB stage the C23 color increases almost continuously with time, while the F_25 flux generally decreases after a short-lived increase at the transition between the AGB and pAGB phases (see, e.g., Szczerba & Marten 1993; Steffen et al. 1998). Such behavior is due to the cooling of the circumstellar shell, which moves away from the central star. However, the evolution of the CO 2-1 line flux, the other key factor determining the position of a source on the CO-IR diagram, is neither simple nor, to our knowledge, investigated theoretically. In this respect the CO-IR diagram serves as an observational tool that allows us to put constraints on the behavior of the CO 2-1 line flux during the pAGB phase of stellar evolution.
PNe are well concentrated along the log-linear region (see the bottom left panel in Fig. 3), toward which pAGB stars should ultimately evolve (with the exception of the low mass ones, which could disperse their circumstellar shells before the onset of photoionization). In the frame of single star evolution, the existence of such a trend can be understood if the CO 2-1 line flux does not change much during the late pAGB (pre-PNe) and PNe phases of evolution, while the F_25 flux density monotonically decreases. However, in the earlier stages of pAGB evolution, the F_25 continuum (see above) and probably also the CO 2-1 line flux could change non-monotonically, resulting in the relatively complex distribution in Fig. 5 (e.g., the lack of O-rich pAGB sources in group-I and the presence of C-rich ones among group-III).
SED and Binarity
The SEDs of pAGB stars have been classified according to the scheme introduced by van der Veen et al. (1989), with the addition of a type 0 (Szczerba et al. 2012). There are six SED types in total: types 0, I and II, which show significant near-infrared (NIR) excess, and types III, IVa and IVb, which show cold dust emission together with a second peak at shorter wavelengths from the central star emission. de Ruyter et al. (2006) proposed that the NIR excess seen in the first three SED types (0, I and II) is a signature of gravitationally bound circumbinary discs, perhaps formed during strong binary interaction (van Winckel et al. 2009). The other three SED types (III, IVa and IVb) are signatures of detached shells and/or expanding tori, which are formed by mass loss on the AGB or by the interaction of the AGB star with its companion (Zijlstra 2007), respectively.
Among our CO-IR group-I sources, three quarters (13 out of 17) have SED types of III, IVa or IVb, indicating that their dust emission is dominated by a detached shell or expanding torus. On the other hand, two thirds (12 out of 18) of the CO-IR group-III pAGB stars have SED types of 0, I or II, showing that they have excess emission from hot dust, a signature of a disc. It is natural to expect that the formation of a disc is due to interaction between stars when the primary was a giant. It is also interesting to note that the pAGB sources located in the transition region (those marked by '*' in Table 3) mostly (8 out of 11) show emission from cold (detached) CSEs (with SEDs of type III, IVa and IVb). The situation in CO-IR group-II is less clear-cut, as about equal fractions of sources show the presence of hot+cold or only cold dust.
Information about binarity is taken directly from the Torun Catalog of pAGB stars. Although such information is by no means complete in the catalog, some interesting features can still be recognized in the distribution of known binary pAGB stars on the CO-IR diagram, as shown in Fig. 6. First, known binaries appear in all three pAGB regions of the CO-IR diagram; however, most of them appear in the CO-deficient region of group-III. In the current sample, about 39 per cent (7 out of 18) of group-III pAGB stars are known binaries.
Among the six C-rich stars that belong to group-III, two are known binary systems (the Red Rectangle and RV Tau). Binaries are also common in the group-II region (about 40 per cent). What is also striking in Fig. 6 is the dichotomy in the C23 color distribution of binaries: one, more numerous group has blue C23 colors (no cold dust), while the second group has red C23 colors (a significant amount of cold dust, with Frosty Leo being the most extreme example).
CSE expansion velocity
The CSE expansion velocities of the 42 pAGB stars with CO 2-1 line detections show a pronounced trend: V_exp is smallest among group-III pAGB stars, intermediate among group-I stars, and largest among group-II objects. Excluding a few exceptional objects, which will be discussed below, we obtain the following average velocities from the CO line widths for each group: 12 ± 2 km s⁻¹ for group-I, 28 ± 12 km s⁻¹ for group-II, and 4 ± 2 km s⁻¹ for group-III. Note that the vast majority of objects located in the transition region (those marked with '*' in Table 3) have velocities in between those characteristic of groups I and II, with the clear exception of IRAS 17106−3046, which has a very low V_exp more typical of group-III objects. Because the AGB wind velocity is expected to be higher for more massive AGB stars (see, e.g., the discussion of Nyman et al. 1992), the trend we found suggests that group-II objects are statistically more massive than group-I pAGB stars. On the other hand, such low expansion velocities for group-III objects suggest that their CO is observed from circumstellar disks (rotating and/or expanding) rather than from outflows (Bujarrabal et al. 2013). It is very likely that such disks are formed in binary systems (e.g., van Winckel 2003).
There are two objects in group-I (89 Her and AI Sco) that have very low CSE expansion velocities derived from their CO lines. Both are known binaries with type-0 SEDs, O-rich chemistry, and blue C23 colors close to those typical of O-rich AGB stars. They share the characteristics of group-III pAGB objects, but at the same time have relatively strong CO 2-1 emission compared with their 25 µm continuum flux.
The other two group-I objects (IRAS 08005−2356 and IRAS 23541+7031), by contrast, have very high expansion velocities, much larger even than those typical of group-II pAGB objects. IRAS 08005−2356 has a broad CO line, which was only tentatively detected by Hu et al. (1994). However, similarly high velocities are also detected in the optical spectrum of this star by Slijkhuis et al. (1991) and Sánchez Contreras et al. (2008), which may be interpreted as a signature of an ongoing fast wind (or jet) from this source. Its type-I SED indicates significant near-infrared emission, which may be interpreted as emission from hot dust in a disc, but its R_CO25 = 5.20 is the largest among group-I pAGB stars. IRAS 23541+7031 has a disk or torus of molecular gas that seems to be expanding (Castro-Carrizo et al. 2002). It has very rich and complex wind activity and rapidly evolving shocked material (Sánchez Contreras et al. 2010); in this respect, the broad CO 2-1 line detected by Hajian et al. (1996) is not surprising.
Roberts 22 and IRAS 17245-3951 (the Walnut Nebula) are another two special objects, with normal or high CSE expansion velocities (31 and 15 km s⁻¹, respectively), but this time members of our group-III (R_CO25 = 0.16 and 0.25, respectively). Again, the broad CO lines may be explained by the contribution of bipolar outflows, because both show typical bipolar nebulae. We note that Roberts 22 was known to show a broadened Hα emission line and was thus suspected to be a Wolf-Rayet star (Roberts 1962).
Post-AGB sub-types
Finally, we draw attention to the fact that, among the sources in Table 3, two pAGB classes (Szczerba et al. 2007) are the most abundant: the '21 micron' sources (12 objects) and the RV Tau stars (8 sources). These two groups have quite well defined properties. The '21 micron' sources are a group of C-rich intermediate mass stars with the still unidentified 21 µm dust feature (Kwok et al. 1989); they also show s-process element enhancement (Van Winckel & Reyniers 2000) and have periodic variability well correlated with their effective temperature (Hrivnak et al. 2010). RV Tau stars are luminous variable stars that show alternating deep and shallow minima with periods between 30 and 150 days, and spectral types F, G and K (e.g., Preston et al. 1963). Most of them show an IR excess, which is interpreted as a remnant of AGB mass loss (Jura 1986). RV Tau stars with near-IR excess (very likely due to radiation from disks) are probably binaries (de Ruyter et al. 2005).
These two groups have well established positions on the CO-IR diagram. Almost all members of the '21 µm' type sources from our sample are in our group-I, with one belonging to the transition region and only one to group-III (IRAS 19477+2401). It is remarkable that 10 (out of 17, or 59 per cent) objects in group-I are '21 µm' sources. Their IR colors are in the range −1.3 < C23 < −0.55. All but one have a G or F spectral type, 9 (among 12) have an SED of IVa or IVb type, none of them is known to be binary, and their outflow velocities are around 10 km s −1 (with an average of 12 ± 3 km s −1 ).
On the other hand, 7 (out of 8) RV Tau stars from our sample belong to group-III objects, with only one source (AI Sco) being a member of group-I on the CO-IR diagram. This is not so surprising since RV Tau variables are well known to be CO-deficient stars (Alcolea & Bujarrabal 1991). RV Tau stars from our sample make up the majority of the blue part of the group-III objects, with IR colors in a narrow range of −1.5 < C23 < −1.1. They have O-rich chemistry, 7 (out of 8) have SEDs classified as 0 or I (with AR Pup being the only exception with an SED of type-III), 7 of them are also known binaries (with the only exception of AI CMi, which does not have a near-IR excess - see e.g. the SED in the Torun catalog), and the three RV Tau variables with a detected CO 2-1 line all show very narrow CO lines (with V exp = 1.5 ∼ 5.6 km s −1 ).
Overall properties of the CO-IR groups of post-AGB stars
We summarize below the properties of the different groups of pAGB objects, as inferred from the above discussion. The key features of each CO-IR group are collected in Table 4 and illustrated in a cartoon (Fig. 7).
Group-I
The group-I pAGB stars have a relatively narrow range of R CO25 ratios from 0.42 to 5.2 MHz, with a median of ∼ 1 MHz, and C23 colors varying between −1.5 and −0.5. They represent the early pAGB stage of intermediate mass pAGB stars, as indicated by their F to G spectral types and C-rich chemistry. This group contains almost all of the considered '21 µm' objects.
Group-II
The group-II pAGB stars are the reddest objects in our sample. They are distributed along a log-linear trend on the CO-IR diagram, which coincides with a similar trend of PNe. On average, group-II objects are more massive pAGB stars than group-I sources, as indicated by their higher V exp . However, this region should also contain evolved intermediate mass pAGB stars that are approaching the PN phase.
Group-III
The group-III pAGB objects are those with similar C23 colors to group-I objects but with a much weaker CO 2-1 line. They are probably the lowest mass pAGB stars, predominantly O-rich, with a very high percentage of known binaries, which is in agreement with their type-0, I and II SEDs (near-IR excess). Most of the RV Tau variables are in this group.
Transition region
The objects in this region have C23 colors intermediate between groups I and II. They are predominantly O-rich, like group-II objects, but have R CO25 ratios similar to those of group-I objects. Their CSE expansion velocities are also intermediate between the two groups.
SUMMARY
A survey of the CO 2-1 line has been performed toward 58 known high Galactic latitude pAGB stars. Circumstellar CO lines were detected toward only five of these objects, with one new detection. The detected line profiles show various features such as line wings, absorption features and triangular peaks.
To complete our survey, we compiled the literature reporting single dish CO 2-1 line observations for all 393 pAGB stars ('likely', R CrB/LTP/eHe and RV Tau types from the Torun Catalog of pAGB stars). We found observations for 133 pAGB stars (34 per cent of all objects in the Torun Catalog). The CO 2-1 line has been detected in 44 objects among them. CO 2-1 line data for AGB stars and PNe were also compiled for comparison.
The CO-IR flux ratio R CO25 is defined using the integrated CO 2-1 line flux and the IRAS 25µm flux density. This ratio is compared with the IR color C23, defined with the IRAS 60 to 25µm flux ratio. The CO-IR diagram constructed in this way is used to investigate the pAGB phase of stellar evolution.
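As a minimal illustration (ours, not from the paper), the two axes of the diagram could be computed as below; the magnitude-style convention C23 = 2.5 log10(S60/S25) is an assumption and should be replaced by the exact definition of Sect. 5.1 if it differs.

```python
import numpy as np

def r_co25(f_co_jy_mhz, s25_jy):
    # Velocity-integrated CO 2-1 line flux (Jy MHz) over the IRAS 25 micron
    # flux density (Jy); the ratio carries units of MHz.
    return f_co_jy_mhz / s25_jy

def c23(s60_jy, s25_jy):
    # IRAS 60-to-25 micron color in the assumed magnitude-style convention.
    return 2.5 * np.log10(s60_jy / s25_jy)

# Hypothetical group-I-like source: R_CO25 ~ 1.2 MHz, C23 ~ -1.0
print(r_co25(12.0, 10.0), c23(4.0, 10.0))
```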
Post-AGB stars segregate into three groups on the CO-IR diagram: group-I pAGB stars show a narrow range of R CO25 ratios that are independent of the C23 colors (Eq. 2); group-II pAGB stars have redder C23 colors and usually larger R CO25 ratios; group-III pAGB stars have significantly smaller R CO25 ratios (weak CO lines).
Comparison of the pAGB stars with AGB stars and PNe on the CO-IR diagram reveals that post-AGB objects are indeed located between the AGB stars and PNe. Planetary nebulae show a clear trend on the CO-IR diagram which agrees well with the trend of group-II pAGB stars.
Combining these features with various properties such as chemical types, central star spectral types, binarity, SED types, CSE expansion velocities and pAGB sub-types (defined in the Torun Catalog) of the pAGB stars, the three pAGB CO-IR groups are found to have distinctive characteristics related to mass, evolutionary stage and binarity.
The CO-IR diagram is proven to be a powerful tool to discriminate the different effects of stellar mass, evolution and binarity of pAGB stars and to investigate the co-evolution of circumstellar gas and dust during the fast post-AGB stage.
This research has made intensive use of NASA's Astrophysics Data System Bibliographic Services and the SIMBAD database, operated at CDS, Strasbourg, France. We thank the ARO telescope operators for their assistance in remote observations. The SMT is operated by the Arizona Radio Observatory (ARO).

(Notes to Table 5: cont = interstellar contamination; Jy/K = line peak temperature converted back from line peak flux in Jy; hand = line peak temperature measured by hand from published spectral plots; 2C = line peak temperature as the sum of a two-component fit.)

A. LITERATURE COMPILATION OF CO 2-1 LINE DATA

Refereed papers that contain original single dish observations of the CO 2-1 line have been searched for in the ADS database using object positions (with a search radius of 1 ′ ) for all 393 known pAGB stars in version 2 of the Torun Catalog of pAGB stars (Szczerba et al. 2012). The search was completed by Oct. 26, 2011. Both single pointing observations and single dish mapping are considered. Interferometer data are not considered due to the missing flux issue.
CO line parameters such as line peak temperature (T mb or T * A or T ′ A ), line area, line center velocity, and CSE expansion velocity are compiled, where available. The RMS noise level in the baseline is used as the 1 σ upper limit for undetected lines. Information about the telescopes, e.g., the conversion factor to obtain the main beam temperature from other antenna temperature scales, the telescope response (Jy/K) for conversion of the main beam temperature into line flux (Jy), and the beam size of the telescopes at the CO 2-1 line frequency, is also collected. The telescope responses are adopted as nominal values for all involved telescopes: 7.33 Jy/K for the 13 ′′ beam of the IRAM 30-m, 21 Jy/K for the 22 ′′ beam of the JCMT 15-m, 25 Jy/K for the 24 ′′ beam of the SEST 15-m, 44.4 Jy/K for the 32 ′′ beam of the ARO SMT 10-m, 39 Jy/K for the 30 ′′ beam of both the NRAO 12-m and the CSO 10.4-m telescopes, and 29.4 Jy/K for the 26 ′′ beam of the old 10-m Caltech OVRO telescope.
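A small helper (ours) applying these nominal responses to convert a main-beam temperature into a flux density might look like this; conversion from other temperature scales (T_A*, T_A') to T_mb requires the telescope-specific efficiencies and is assumed to have been done already.

```python
# Nominal point-source responses (Jy/K) quoted in the text.
JY_PER_K = {
    "IRAM30m": 7.33, "JCMT15m": 21.0, "SEST15m": 25.0, "ARO_SMT10m": 44.4,
    "NRAO12m": 39.0, "CSO10.4m": 39.0, "OVRO10m": 29.4,
}

def peak_flux_jy(t_mb_kelvin, telescope):
    # Multiply the main-beam temperature by the telescope response.
    return t_mb_kelvin * JY_PER_K[telescope]

print(peak_flux_jy(0.5, "ARO_SMT10m"))  # 0.5 K at the SMT -> 22.2 Jy
```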
Here we give the list of all 175 CO 2-1 data records for 87 pAGB stars in Table 5, in which data records are sorted in increasing alphabetic order of object names and increasing observation dates. All the involved literature is collected in Table 6. When available, we also collect the information about the chemistry of the observed circumstellar shells (see Table 5).
Duplicated CO 2-1 observations were averaged to give a mean line strength. Before averaging, we converted all antenna temperature scales into velocity-integrated line flux in Jy MHz, so that line strengths measured by different telescopes can be directly compared to check consistency. In cases where only the line peak temperature was given in the papers, the line width V exp is used to estimate the line area by assuming a Gaussian line profile. In the very few cases where V exp is also not given, a fixed value of 10 km s −1 is assumed. Repeated observations usually agree with each other within a factor of 2-3, which is significantly larger than the usually accepted flux calibration uncertainty of 20 per cent. The large variation could be attributed to pointing errors, bad weather, or technical problems during the observations. Weights are used in the averaging, set as follows: a weight of unity for most of the data entries, a weight of 0.5 for line area data estimated from the line peak temperature and V exp , and a weight of 2 for single dish mapping data, because these observations have fewer problems with pointing and are more reliable for nearby extended objects.
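The two numerical steps of this procedure could be sketched as follows (our code; taking the Gaussian FWHM as ~2 V_exp is our assumption, since the text does not state the adopted width):

```python
import numpy as np

def gaussian_line_area(t_peak, v_exp, fwhm_factor=2.0):
    # Integral of a Gaussian = 1.0645 * peak * FWHM; the FWHM is assumed
    # to be roughly the full line width, i.e. ~2 * V_exp.
    return 1.0645 * t_peak * (fwhm_factor * v_exp)

def weighted_mean_flux(fluxes_jy_mhz, kinds):
    # Weights from the text: 1 for ordinary pointings, 0.5 for areas
    # estimated from the peak temperature, 2 for single-dish mapping.
    weight = {"pointing": 1.0, "estimated": 0.5, "mapping": 2.0}
    w = np.array([weight[k] for k in kinds])
    f = np.asarray(fluxes_jy_mhz, dtype=float)
    return np.sum(w * f) / np.sum(w)

print(weighted_mean_flux([10.0, 14.0, 30.0], ["pointing", "estimated", "mapping"]))
```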
The average quantities for all the 87 pAGB objects (the velocity-integrated CO 2-1 line flux, and CSE expansion velocity V exp ), as well as IRAS flux densities, IRAS colors (defined in Sect. 5.1), and CO-IR flux ratio R CO25 (defined by Eq. 1 in Sect. 3) are collected in Table 7.
B. STATISTICAL PROPERTIES OF THE pAGB STAR SAMPLE
Statistical properties of the pAGB star sample are discussed here to assess whether any selection effects could have biased the conclusions drawn from the sample. We discuss in this section the completeness of the Torun Catalog of pAGB stars, our compilation of CO 2-1 observations, and the detection rates of CO 2-1 lines among the observed objects. First of all, the Torun Catalog of pAGB stars was created with various object selection criteria (Szczerba et al. 2007). It is still unknown whether the objects in this catalog are representative of all pAGB stars in the Galaxy. Version 2 of the catalog (Szczerba et al. 2012) is divided into several sub-catalogs: likely, possible, RV Tau, R CrB/LTP/eHe, and unlikely. Here we plot histograms of the Galactic position and IRAS 25µm flux densities of the three most important sub-catalogs (likely, RV Tau and R CrB/LTP/eHe) to check whether the catalogs have any obvious bias. As shown in Fig. 8, all three sub-catalogs show decreasing trends from the Galactic Center (GC) to the anti-GC, from the Galactic disc to high latitude, and from 25µm-weak to 25µm-strong objects. Although a detailed comparison of these distributions with pAGB star model predictions for the Milky Way Galaxy is beyond the scope of this work, the trends roughly agree with the distribution of Galactic zero age main sequence stars. Thus, we conclude that no severe bias can be seen in the three sub-catalogs from their Galactic position and IR flux density distributions.
Secondly, the pAGB stars with CO 2-1 observations in the literature and in our survey (this work) are compared with all pAGB stars in the Torun Catalog in Fig. 9. The left and middle columns of panels show that pAGB stars toward the GC and the Galactic disc have the lowest percentage of CO 2-1 observations, while most of the high Galactic latitude pAGB stars have been observed in the CO 2-1 line. In the right column of Fig. 9, the CO observations of all three sub-groups of pAGB stars are clearly biased toward IR-strong objects. Moreover, the CO observations of the R CrB/LTP/eHe types of pAGB stars in the third row of the figure are the most incomplete, because observers usually do not expect them to show CO line emission.
Thirdly, the detection rates of the CO 2-1 line among the 'likely' and RV Tau type pAGB stars are plotted in Fig. 10. The R CrB/LTP/eHe type pAGB stars are not shown, because none of them has been detected in CO 2-1. Although there is no obvious trend along the Galactic longitude in the left panel of the figure, one can see in the middle panel that the CO 2-1 line detection rate in the Galactic disc is more than two times higher than at higher Galactic latitudes. This could be due to the higher masses of the pAGB stars in the Galactic disc, because more massive pAGB stars have thicker relic circumstellar envelopes and thus their CO lines are more easily detected. In the right panel of the figure, there is a clear decreasing trend of CO 2-1 line detection rates toward IR-weaker objects, which indicates that the CO 2-1 line observations are sensitivity limited.
In summary, the major biases found in the CO observations of pAGB stars are: 1) pAGB stars in the GC direction are not adequately observed in the CO 2-1 line; 2) the CO line observations in the literature and in our work are biased toward IR-strong sources and are sensitivity limited; and 3) the CO line observations of R CrB/LTP/eHe type pAGB stars are very incomplete in any sense.
C. COMPARING AGB STARS ON THE CO-IR DIAGRAMS
Because this work is concentrated on pAGB stars, we only briefly mention some interesting points about the distribution of AGB stars and PNe on the new CO-IR diagram. Given below are more details not mentioned in Sect. 5.2.1 of the main text.
C.1. O-and C-rich AGB stars
Although the majority of the O-rich AGB stars (empty blue circles in the second row of Fig. 3) concentrate in a small region on both the CO-IR and IRAS color-color diagrams, some objects do show significantly smaller R CO25 ratios (CO-weak) or significantly redder C23 colors. The redder objects could be post-thermal pulse objects with detached CSEs (see, e.g., Steffen et al. 1998), OH/IR star candidates, or mis-identified pAGB stars. Many of the CO-weak objects are semi-regular variables, whose CSEs are probably optically thinner, so that CO molecules could have been partially destroyed by photodissociation.
The 28 OH/IR stars (half-shaded black circles) scatter over a larger region with similar or smaller CO-IR flux ratios but redder C12 and C23 colors than the other O-rich AGB stars. This can be explained by cooler CO gas and lower average dust temperatures in the very thick CSEs of the OH/IR stars (e.g., Kastner 1992).
The O-rich AGB stars have slightly bluer C23 colors and about 3 times smaller R CO25 ratios than the C stars on average. The redder C23 color of C stars was already known (e.g., in the study of the IRAS color-color diagram by van der Veen & Habing 1988) to be due to the shallower emissivity of carbon rich dust in the 12-100 µm region (Zuckerman & Dyck 1986b). The stronger CO line in C stars has long been recognized, since the CO line survey of Nyman et al. (1992). The most likely explanation is a higher CO abundance in carbon stars than in O-rich AGB stars, because only part of the oxygen atoms are used to make CO molecules in the latter.
C.2. S stars against O-and C-rich AGB stars
S stars on the R CO25 -C23 diagram (red filled circles in the top row of Fig. 3) scatter over similar regions as the C stars, indicating that the gas and dust properties of the CSEs of S stars are closer to those of C stars than of O-rich AGB stars. However, because S stars are expected to have much less dust in their CSEs, as most C and O atoms are locked into gaseous CO molecules, their CO-IR flux ratios would intuitively be expected to be even larger than those of typical C stars, which is not the case in our Fig. 3. The lower-than-expected CO-IR ratios of S stars could be explained by assuming that CO molecules are not efficiently formed through ideal equilibrium chemistry (as discussed for C stars by Papoular 2008) and that dust grains such as the recently proposed solid SiO dust (Wetzel et al. 2013) may still be efficiently formed around S stars.
Comparing the IRAS C-C diagrams in the upper right and middle right panels of Fig. 3, one can see that the S stars also have different IRAS colors than the C- and O-rich AGB stars. Their C12 colors are generally bluer than the latter, while their C23 colors are similar to those of C stars but redder than those of O-rich AGB stars. Again, the solid SiO dust proposed by Wetzel et al. (2013) has the potential to naturally explain these color differences. The solid SiO grains show the 10µm feature of normal silicates, but not the 18µm feature. Compared with the silicate dust in O-rich AGB stars, the lack of the 18µm feature of SiO dust in S stars results in a weaker IRAS 25µm flux density, and thus bluer C12 and redder C23 colors of S stars than of O-rich AGB stars, as shown above. Compared with the C-rich dust in C stars, the most salient feature of the SiO dust is the emission feature at 10µm, which naturally enhances the IRAS 12µm flux density, and thus produces bluer C12 and comparable C23 colors of the S stars relative to C stars.
Using Non-lexical Features to Identify Effective Indexing Terms for Biomedical Illustrations
Automatic image annotation is an attractive approach for enabling convenient access to images found in a variety of documents. Since image captions and relevant discussions found in the text can be useful for summarizing the content of images, it is also possible that this text can be used to generate salient indexing terms. Unfortunately, this problem is generally domain-specific because indexing terms that are useful in one domain can be ineffective in others. Thus, we present a supervised machine learning approach to image annotation utilizing non-lexical features extracted from image-related text to select useful terms. We apply this approach to several subdomains of the biomedical sciences and show that we are able to reduce the number of ineffective indexing terms.
Introduction
Authors of biomedical publications often utilize images and other illustrations to convey information essential to the article and to support and reinforce textual content. These images are useful in support of clinical decisions, in rich document summaries, and for instructional purposes. The task of delivering these images, and the publications in which they are contained, to biomedical clinicians and researchers in an accessible way is an information retrieval problem.
Current research in the biomedical domain (e.g., Florea et al., 2007) has investigated hybrid approaches to image retrieval, combining elements of content-based image retrieval (CBIR) and annotation-based image retrieval (ABIR). ABIR, compared to the image-only approach of CBIR, offers a practical advantage in that queries can be more naturally specified by a human user (Inoue, 2004). However, manually annotating biomedical images is a laborious and subjective task that often leads to noisy results.
Automatic image annotation is a more robust approach to ABIR than manual annotation. Unfortunately, automatically selecting the most appropriate indexing terms is an especially challenging problem for biomedical images because of the domain-specific nature of these images and the many vocabularies used in the biomedical sciences. For example, the term "sweat gland adenocarcinoma" could be a useful indexing term for an image found in a dermatology publication, but it is less likely to have much relevance in describing an image from a cardiology publication. On the other hand, the term "mitral annular calcification" may be of great relevance for cardiology images, but of little relevance for dermatology ones.
Our problem may be summarized as follows: Given an image, its caption, its discussion in the article text (henceforth the image mention), and a list of potential indexing terms, select the terms that are most effective at describing the content of the image. For example, assume the image shown in Figure 1, obtained from the article "Metastatic Hidradenocarcinoma: Efficacy of Capecitabine" by Thomas et al. (2006) in Archives of Dermatology, has the following potential indexing terms, which have been extracted from the image mention. While most of these do not uniquely identify the image, we would like to automatically select "sweat gland adenocarcinoma" and "eccrine" for indexing because they clearly describe the content and purpose of the image, supporting a diagnosis of hidradenocarcinoma, an invasive cancer of sweat glands. Note that effective indexing terms need not be exact lexical matches of the text. Even though "diagnosis" is an exact match, its meaning is too broad in this context to be a useful term.

Figure 1: Example Image. We index an image with concepts generated from its caption and discussion in the document text (mention). This image is from "Metastatic Hidradenocarcinoma: Efficacy of Capecitabine" by Thomas et al. (2006) and is reprinted with permission from the authors. Caption: "On recurrence, histologic features of porocarcinoma with an intraepidermal spread of neoplastic clusters (hematoxylin-eosin, original magnification x100)." Mention: "Histopathologic findings were reviewed and confirmed a diagnosis of eccrine hidradenocarcinoma for all lesions excised (Figure 1)."
In a machine learning approach to image annotation, training data based on lexical features alone is not sufficient for finding salient indexing terms. Indeed, we must classify terms that are not encountered while training. Therefore, we hypothesize that non-lexical features, which have been successfully used for speech and genre classification tasks, among others (see Section 5 for related work), may be useful in classifying text associated with images. While this approach is broad enough to apply to any retrieval task, given the goals of our ongoing research, we restrict ourselves to studying its feasibility in the biomedical domain.
In order to achieve this, we make use of the previously developed MetaMap (Aronson, 2001) tool, which maps text to concepts contained in the Unified Medical Language System (UMLS) Metathesaurus (Lindberg et al., 1993). The UMLS is a compendium of several controlled vocabularies in the biomedical sciences that provides a semantic mapping relating concepts from the various vocabularies (Section 2). We then use a supervised machine learning approach, described in Section 3, to classify the UMLS concepts as useful indexing terms based on their non-lexical features, gleaned from the article text and MetaMap output.
Experimental results, presented in Section 4, indicate that ineffective indexing terms can be reduced using this classification technique. We conclude that ABIR approaches to biomedical image retrieval as well as hybrid CBIR/ABIR approaches, which rely on both image content and annotations, can benefit from an automatic annotation process utilizing non-lexical features to aid in the selection of useful indexing terms.
Image Retrieval: Recent Work
Automatic image annotation is a broad topic, and the automatic annotation of biomedical images, specifically, has been a frequent component of the ImageCLEF 2 cross-language image retrieval workshop. In this section, we describe previous work in biomedical image retrieval that forms the basis of our approach. Refer to Section 5 for work related to our method in general. Demner-Fushman et al. (2007) developed a machine learning approach to identify images from biomedical publications that are relevant to clinical decision support. In this work, the authors utilized both image and textual features to classify images based on their usefulness in evidencebased medicine. In contrast, our work is focused on selecting useful biomedical image indexing terms; however, we utilize the methods developed in their work to extract images and their related captions and mentions.
Authors of biomedical publications often assemble multiple images into a single multi-panel figure. Previous work developed a unique two-phase approach for detecting and segmenting these figures, relying on cues from captions to inform an image analysis algorithm that determines panel edge information. We make use of this approach to uniquely associate caption and mention text with a single image.
Our current work most directly stems from the results of a term extraction and image annotation evaluation performed in a previous study. In that study, the authors utilized MetaMap to extract potential indexing terms (UMLS concepts) from image captions and mentions. They then asked a group of five physicians and one medical imaging specialist (four of whom are trained in medical informatics) to manually classify each concept as being "useful for indexing" its associated images or ineffective for this purpose. The reviewers also had the opportunity to identify additional indexing terms that were not automatically extracted by MetaMap.
In total, the reviewers evaluated 4,006 concepts (3,281 of which were unique), associated with 186 images from 109 different biomedical articles. Each reviewer was given 50 randomly chosen images from the 2006-2007 issues of Archives of Facial Plastic Surgery (http://archfaci.ama-assn.org/) and Cardiovascular Ultrasound (http://www.cardiovascularultrasound.com/). Since MetaMap did not automatically extract all of the useful indexing terms, this selection process exhibited high recall, averaging 0.64, but a low precision of 0.11. Indeed, assuming all the extracted terms were selected for indexing, this results in an average F_1-score of only 0.182 for the classification problem. Our work is aimed at improving this baseline classification by reducing the number of ineffective terms selected for indexing.
Term Selection Method
A pictorial representation of our term extraction and selection process is shown in Figure 2. We rely on the previously described methods to extract images and their corresponding captions and mentions, and the MetaMap tool to map this text to UMLS concepts. These concepts are potential indexing terms for the associated image.
We derive term features from various textual items, such as the preferred name of the UMLS concept, the MetaMap output for the concept, the text that generated the concept, the article containing the image, and the document collection containing the article. These are all described in more detail in Section 3.2. Once the feature vectors are built, we automatically classify the term as either being useful for indexing the image or not.
To select useful indexing terms, we trained a binary classifier, described in Section 3.3, in a supervised learning scenario with data obtained from the previous study. We obtained our evaluation data from the 2006 Archives of Dermatology journal. Note that our training and evaluation data represent distinct subdomains of the biomedical sciences.

Figure 2: Term Extraction and Selection. We gather features for the extracted terms and use them to train a classifier that selects the terms that are useful for indexing the associated images.
In order to reduce noise in the classification of our evaluation data, we asked two of the reviewers who participated in the initial study to manually classify our extracted terms as they did for our training data. In doing so, they each evaluated an identical set of 1539 potential indexing terms relating to 50 randomly chosen images from 31 different articles. We measured the performance of our classifier in terms of how well it performed against this manual evaluation. These results, as well as a discussion pertaining to the interannotator agreement of the two reviewers, are presented in Section 4.
Since our general approach is not specific to the biomedical domain, it could equally be applied in any domain with an existing ontology. For example, the UMLS and MetaMap can be replaced by the Art and Architecture Thesaurus and an equivalent mapping tool to annotate images related to art and art history (Klavans et al., 2008).
Terminology
To describe our features, we adopt the following terminology.
• A collection contains all the articles from a given publication for a specified number of years. For example, the 2006-2007 issues of Cardiovascular Ultrasound represent a single collection.
• A document is a specific biomedical article from a particular collection and contains images and their captions and mentions.
• A phrase is the portion of text that MetaMap maps to UMLS concepts. For example, from the caption in Figure 1, the noun phrase "histologic features" maps to four UMLS concepts: "Histologic," "Characteristics," "Protein Domain" and "Array Feature."

• A mapping is an assignment of a phrase to a particular set of UMLS concepts. Each phrase can have more than one mapping.
Features
Using this terminology, we define the following features used to classify potential indexing terms. We refer to these as non-lexical features because they generally characterize UMLS concepts, going beyond the surface representation of words and lexemes appearing in the article text.
F.1 CUI (nominal): The Concept Unique Identifier (CUI) assigned to the concept in the UMLS Metathesaurus. We choose the concept identifier as a feature because some frequently mapped concepts are consistently ineffective for indexing the images in our training and evaluation data. For example, the CUI for "Original," another term mapped from the caption shown in Figure 1, is "C0205313." Our results indicate that "C0205313," which occurs 19 times in our evaluation data, never identifies a useful indexing term.
F.4 MeSH Ratio (real): MeSH is a controlled vocabulary created by the US National Library of Medicine (NLM) to index biomedical articles. For example, "Adenoma, Sweat" is one MeSH term assigned to "Metastatic Hidradenocarcinoma: Efficacy of Capecitabine" (Thomas et al., 2006), the article containing the image in Figure 1.

F.7 Parts-of-Speech Ratio (real): The ratio of words p_i in the phrase p that have been tagged as having part of speech s to the total number of words in the phrase. This feature is computed for noun, verb, adjective and adverb part-of-speech tags. We obtain tagging information from the output of MetaMap.
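For concreteness, a minimal sketch (ours) of F.7 over MetaMap-style (word, tag) pairs:

```python
def pos_ratio(tagged_phrase, pos):
    # tagged_phrase: list of (word, tag) pairs, e.g. from MetaMap output.
    if not tagged_phrase:
        return 0.0
    return sum(1 for _, tag in tagged_phrase if tag == pos) / len(tagged_phrase)

# e.g. the phrase "histologic features" tagged (adj, noun):
print(pos_ratio([("histologic", "adj"), ("features", "noun")], "noun"))  # 0.5
```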
F.8 Concept Ambiguity (real): The ratio of the number of mappings m_i of phrase p that contain concept c to the total number of mappings for the phrase:

ambiguity(c, p) = |{m_i : c ∈ m_i}| / |{m_i}|

F.9 Tf-idf (real): The frequency of term t_i (i.e., the phrase that generated the concept) times its inverse document frequency:

tfidf_{i,j} = tf_{i,j} × idf_i

The term frequency tf_{i,j} of term t_i in document d_j is given by

tf_{i,j} = n_{i,j} / Σ_k n_{k,j}

where n_{i,j} is the number of occurrences of t_i in d_j, and the denominator is the number of occurrences of all terms in d_j. The inverse document frequency idf_i of t_i is given by

idf_i = log ( |D| / |{d_j : t_i ∈ d_j}| )

where |D| is the total number of documents in the collection, and the denominator is the total number of documents that contain t_i (see Salton and Buckley, 1988). For the purpose of computing F.9 and F.10, we indexed each collection with the Terrier information retrieval platform. Terrier was configured to use a block indexing scheme with a Tf-idf weighting model. Computation of all other features is straightforward.
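A plain transcription of F.9 (ours; the actual values in the paper come from Terrier's Tf-idf weighting model, which may differ in normalization):

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, all_docs_tokens):
    # Term frequency within the document...
    counts = Counter(doc_tokens)
    tf = counts[term] / sum(counts.values())
    # ...times log inverse document frequency over the collection.
    n_containing = sum(1 for d in all_docs_tokens if term in d)
    return tf * math.log(len(all_docs_tokens) / n_containing)

docs = [["sweat", "gland", "adenocarcinoma"], ["mitral", "annular"], ["sweat"]]
print(tf_idf("adenocarcinoma", docs[0], docs))
```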
Classifier
We explored these feature vectors using various classification approaches available in the RapidMiner tool. Unlike many similar text and image classification problems, we were unable to achieve results with a Support Vector Machine (SVM) learner (libSVMLearner) using the Radial Basis Function (RBF) kernel. Common cost and width parameters were used, yet the SVM classified all terms as ineffective. Identical results were observed using a Naïve Bayes (NB) learner. For these reasons, we chose to use the Averaged One-Dependence Estimator (AODE) learner (Webb et al., 2005) available in RapidMiner. AODE is capable of achieving highly accurate classification results with the quick training time usually associated with NB. Because this learner does not handle continuous attributes, we preprocessed our features with equal frequency discretization. The AODE learner was trained in a ten-fold cross validation of our training data.
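Equal frequency discretization itself is simple; a sketch of the idea (ours; RapidMiner's operator may place boundary ties differently):

```python
import numpy as np

def equal_frequency_bins(values, n_bins=10):
    # Bin edges at the quantiles, so each bin holds roughly the same
    # number of observations; returns bin indices in 0..n_bins-1.
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    return np.digitize(values, edges[1:-1], right=True)

x = np.random.default_rng(0).exponential(size=100)
print(np.bincount(equal_frequency_bins(x, 4)))  # ~25 observations per bin
```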
Results
Results relating to specific aspects of our work (annotation, features and classification) are presented below.
Inter-Annotator Agreement
Two independent reviewers manually classified the extracted terms from our evaluation data as useful for indexing their associated images or not. The inter-annotator agreement between reviewers A and B is shown in the first row of Table 1. Although both reviewers are physicians trained in medical informatics, their initial agreement is only moderate, with κ = 0.519. This illustrates the subjective nature of manual ABIR and, in general, the difficulty in reliably classifying potential indexing terms for biomedical images.
Table 1: Inter-annotator Agreement. The probability of agreement Pr(a), the expected probability of chance agreement Pr(e), and the associated Cohen's kappa coefficient κ are given for each reviewer combination.
After their initial classification, the two reviewers were instructed to collaboratively reevaluate the subset of extracted terms upon which they disagreed (roughly 15% of the terms) and create a gold standard evaluation. The second and third rows of Table 1 suggest the resulting evaluation strongly favors reviewer A's initial classification compared to that of reviewer B.

Table 2: Feature Comparison. The information gain and chi-square statistic are shown for each feature. A higher score indicates greater influence on term effectiveness.
Since the reviewers of the training data each classified terms from different sets of randomly selected images, it is impossible to calculate their inter-annotator agreement.
Effectiveness of Features
The effectiveness of individual features in describing the potential indexing terms is shown in Table 2. We used two measures, both of which indicate a similar trend, to calculate feature effectiveness: Information gain (Kullback-Leibler divergence) and the chi-square statistic.
Under both measures, the MeSH ratio (F.4) is one of the most effective features. This makes intuitive sense because MeSH terms are assigned to articles by specially trained NLM professionals. Given the large size of the MeSH vocabulary, it is not unreasonable to assume that an article's MeSH terms could be descriptive, at a coarse granularity, of the images it contains. Also, the subjectivity of the reviewers' initial data calls into question the usefulness of our training data. It may be that MeSH terms, consistently assigned to all documents in a particular collection, are a more reliable determiner of the usefulness of potential indexing terms. Furthermore, the previous study found that, on average, roughly 25% of the additional (useful) terms the reviewers added to the set of extracted terms were also found in the MeSH terms assigned to the document containing the particular image.
The abstract and title ratios (F.6 and F.5) also had a significant effect on the classification outcome. Similar to the argument for MeSH terms, as these constructs are a coarse summary of the contents of an article, it is not unreasonable to assume they summarize the images contained therein.
Finally, the noun ratio (F.7) was a particularly effective feature, and the length of the UMLS concept (F.11) was moderately effective. Interestingly, tf-idf and document location (F.9 and F.10), both features computed using standard information retrieval techniques, are among the least effective features.
Classification
While the AODE learner performed reasonably well for this task, the difficulty encountered when training the SVM learner may be explained as follows. The initial inter-annotator agreement of the evaluation data suggests that it is likely that our training data contained contradictory or mislabeled observations, preventing the construction of a maximal-margin hyperplane required by the SVM. An SVM implementation utilizing soft margins (Cortes and Vapnik, 1995) would likely achieve better results on our data, although at the expense of greater training time. The success of the AODE learner in this case is probably due to its resilience to mislabeled observations.
Classification results are shown in Table 3, which lists the precision, recall and F_1-score of the classification scheme. The manual classifications by reviewers A and B appear in the first and second rows. The third row contains the results obtained from combining the results of the two reviewers, and the fourth row shows the classification results compared to the gold standard obtained after discovering the initial inter-annotator agreement.
We hypothesized that the training data labels may have been highly sensitive to the subjectivity of the reviewers. Therefore, we retrained the learner with only those observations made by reviewers A and B (of the five total reviewers) and again compared the classification results with the gold standard. Not surprisingly, the F 1 -score of this classification (shown in the fifth row) is somewhat improved compared to that obtained when utilizing the full training set.
The last row in Table 3 shows the results of classifying the training data. That is, it shows the results of classifying one tenth of the data after a ten-fold cross validation and can be considered an upper bound for the performance of this classifier on our evaluation data. Notice that the associated F_1-score for this experiment is only marginally better than that of the unseen data. This implies that it is possible to use training data from particular subdomains of the biomedical sciences (cardiology and plastic surgery) to classify potential indexing terms in other subdomains (dermatology).
Overall, the classifier performed best when verified with reviewer A, with an F_1-score of 0.326. Although this is relatively low for a classification task, these results improve upon the baseline classification scheme (all extracted terms are useful for indexing), which has an F_1-score of 0.182. Thus, non-lexical features can be leveraged, albeit to a small degree with our current features and classifier, in automatically selecting useful image indexing terms. In future work, we intend to explore additional features and alternative tools for mapping text to the UMLS.
Related Work
Non-lexical features have been successful in many contexts, particularly in the areas of genre classification and text and speech summarization.
Genre classification, unlike text classification, discriminates between document styles instead of topics. Dewdney et al. (2001) show that non-lexical features, such as parts of speech and line-spacing, can be successfully used to classify genres, and Ferizis and Bailey (2006) demonstrate that accurate classification of Internet documents is possible even without the expensive part-of-speech tagging of similar methods. Recall that the noun ratio (F.7) was among the most effective of our features. Finn and Kushmerick (2006) describe a study in which they classified documents from various domains as "subjective" or "objective." They, too, found that part-of-speech statistics as well as general text statistics (e.g., average sentence length) are more effective than the traditional bag-of-words representation when classifying documents from multiple domains. This supports the notion that we can use non-lexical features to classify potential indexing terms in one biomedical subdomain using training data from another. Maskey and Hirschberg (2005) found that prosodic features (see Ward, 2004) combined with structural features are sufficient to summarize spoken news broadcasts. Prosodic features relate to intonational variation and are associated with particularly important items, whereas structural features are associated with the organization of a typical broadcast: headlines, followed by a description of the stories, etc.
Finally, Schilder and Kondadadi (2008) describe non-lexical word-frequency features, similar to our ratio features (F.4-F.7), which are used with a regression SVM to efficiently generate query-based multi-document summaries.
Conclusion
Images convey essential information in biomedical publications. However, automatically extracting and selecting useful indexing terms from the article text is a difficult task given the domain-specific nature of biomedical images and vocabularies. In this work, we use the manual classification results of a previous study to train a binary classifier to automatically decide whether a potential indexing term is useful for this purpose or not. We use non-lexical features generated for each term, with the most effective including whether the term appears in the MeSH terms assigned to the article and whether it is found in the article's title and caption. While our specific retrieval task relates to the biomedical domain, our results indicate that ABIR approaches to image retrieval in any domain can benefit from an automatic annotation process utilizing non-lexical features to aid in the selection of indexing terms or the reduction of ineffective terms from a set of potential ones.
Machine-Learning Assisted Prediction of Spectral Power Distribution for Full-Spectrum White Light-Emitting Diode
The full-spectrum white light-emitting diode (LED) emits light with a broad wavelength range by mixing all lights from multiple LED chips and phosphors. Thus, it has great potential to be used in healthy lighting, high resolution displays, and plant lighting, with a higher color rendering index close to that of sunlight and a higher color fidelity index. The spectral power distribution (SPD) of a light source, representing its light quality, is dynamically controlled by the complex electrical and thermal loadings the light source experiences under usage conditions. Therefore, dynamic prediction of the SPD of the full-spectrum white LED has become a hot but challenging research topic in high quality lighting design and application. This paper proposes a dynamic SPD prediction method for the full-spectrum white LED by integrating the SPD decomposition approach with artificial neural network (ANN) based machine learning. Firstly, the continuous SPDs of a full-spectrum white LED driven by an electrical-thermal loading matrix are discretized by multi-peak fitting with a Gaussian model into the relevant spectral characteristic parameters. Then, Back Propagation (BP) and Genetic Algorithm-Back Propagation (GA-BP) neural networks are proposed to predict the spectral characteristic parameters of LEDs operated under any usage conditions. Finally, the dynamically predicted spectral characteristic parameters are used to reconstruct the SPDs. The results show that: (1) the spectral characteristic parameters obtained by fitting with the Gaussian model can represent the emission lights from the multiple chips and phosphors in a full-spectrum white LED; and (2) the prediction errors of both the BP NN and the GA-BP NN can be controlled at a low level, that is, our proposed method achieves highly accurate dynamic SPD prediction for the full-spectrum white LED operating under different operation mission profiles.
Introduction
With the improvement of living standards, people's requirements on lighting have gradually shifted from environmental protection and energy saving to the pursuit of health and comfort. However, several traditional light-emitting diode (LED) products have a high proportion of short-wavelength blue light and a low proportion of long-wavelength red light, which can lead to visual fatigue and vision loss [1]. Thus, the design of next-generation LED light sources will be challenged not only by low cost and high light efficiency, but also by the demands of health, comfort, high color quality, low-frequency flicker, high reliability and so on [2]. As natural sunlight is the most suitable and comfortable light for human beings, the full-spectrum white LED has great potential applications in indoor lighting, medical and human centric lighting, special displays, plant lighting and other fields [3]. Currently, a common design of the full-spectrum white LED is to mix all emission lights from multiple LED chips and phosphors [4]. As a fundamental performance indicator, the spectral power distribution (SPD) of a full-spectrum white LED is complex and always influenced by electrical and thermal loadings [5]. Therefore, achieving dynamic prediction of the SPD to simulate the natural light spectrum becomes one of the essential but challenging research topics in future human centric lighting design and application.
The SPD represents a functional relationship between spectral density and wavelength, where the spectral density is the radiation energy per unit wavelength range. Because there are many internal and external factors that determine the SPD of an LED light source, it is difficult to achieve its dynamic prediction with high accuracy. At present, there is some research on SPD prediction. For instance, J. C. C. Lo et al. [6] proposed a mathematical model in which the excitation and emission spectra of various phosphors were used as input parameters to predict the emission spectra of LEDs with multiple phosphors, and compared it with experimental results. It was found that the proposed model can accurately estimate the emission spectrum of an LED with mixed multiple phosphors. P. Dupuis et al. [7] developed an SPD prediction method using a low-order polynomial function with current as the only input parameter. Moreover, the SPD decomposition approach with statistical models has been used to discretize the continuous spectrum of an LED, extracting statistical characteristic parameters that represent the whole information in an SPD. Commonly used statistical models include the Gaussian model, the asymmetric double sigmoidal model and the Lorentz model [8]-[11]. J. J. Fan et al. [12] used the Gaussian and Lorentz models to extract SPD features, achieved accurate dynamic prediction of the color coordinates, correlated color temperatures (CCTs) and CRIs, and estimated the residual life of phosphor converted white LEDs (PC-wLEDs). C. Qian et al. [13] modeled the SPD of a PC-wLED by superimposing two asymmetric double sigmoidal (Asym2sig) functions. H. T. Chen et al. [14] provided a method to predict the instantaneous changes of CCT and CRI when the power of an LED system was changed, in which the Gaussian function was also used to model the SPD of the LED system. M. H. Chang et al. [15] used a multiple peak fitting method to extract features of the SPD of PC-wLEDs and applied principal component analysis (PCA) to reduce the dimensionality of the features, finally shortening the LED qualification test period from 6000 hours to 1200 hours. Generally, the statistical characteristic parameters extracted from the SPD decomposition model have been proved to be an effective way to discretize the continuous spectrum; however, the multidimensional data mining and processing of the extracted statistical characteristic parameters become another challenge in SPD prediction.
In general, machine learning (ML) is a set of methods that can learn and detect patterns from input data samples and use the uncovered patterns for further decision making, prognostics or prediction of future data [16]. ML includes supervised learning, unsupervised learning and semi-supervised learning. In supervised learning, a labeled or classified set of input data is used to estimate and predict the resulting output pattern; the quality of the result therefore depends on the learning method's ability to discover the desired pattern in the input data. As one of the most popular supervised learning methods, the Artificial Neural Network (ANN) has been proven to be effective for data mining and processing. It abstracts the neuron network of the human brain from the perspective of information processing, establishes a simple model, and forms different networks according to different connection modes [17]. As it has the advantages of function approximation, self-learning, complex classification, associative memory, fast optimization, and the strong robustness and fault tolerance brought by highly parallel distributed information storage [18], [19], the ANN has been widely used in many fields, such as medicine [20], [21], biology [22], [23], physics [24], and even the LED field [25]. For example, K. Y. Lu et al. [26] proposed a lifetime prediction method based on a multi-dimensional back propagation neural network (BP-NN), in which an Adaboost-improved BP-NN lowered the life prediction error at the cost of increased computation time. Aiming at the high cost of reliability prediction and evaluation of high-power LEDs, an intelligent prediction method based on a dynamic neural network was proposed by Yan [27]. It was shown that this method has good extrapolation ability and robustness, and that it can successfully predict the life of high-power LEDs in a short time with a prediction error of less than 5%. Liu et al. [28] combined finite element modeling (FEM) analysis of a single heat transfer physics field with an ANN to present a more efficient method for heat dissipation analysis of multi-chip LED light sources. However, the limitation of the BP-NN is that its convergence speed is slow and it easily falls into local minima [29]. The Genetic Algorithm (GA) is a computational model that simulates the natural selection and genetic mechanisms of Darwin's theory of biological evolution. It is a method to search for the optimal solution by simulating the natural evolution process [30], [31]. The GA can be used to optimize the initial weights and thresholds of a neural network to prevent the BP-NN from falling into local minima during training. L. Liu et al. [32] used a GA-BP neural network for alphabet recognition. The results show that the network optimized by the genetic algorithm converges more accurately and faster than the BP-NN. J. W. Gao et al. [33] predicted short-term traffic flow, applying the GA to optimize the weights and thresholds of a BP-NN. The results also show that the new model is superior in convergence speed and prediction accuracy.
Taking into consideration the dynamic change and complexity of the SPDs of a full-spectrum white LED operated under different electrical and thermal loadings, this paper combines the SPD decomposition method with ANN based machine learning methods (i.e., BP-NN and GA-BP NN) to realize the dynamic prediction of SPDs for a full-spectrum white LED. The remainder of this paper is organized as follows: Section 2 proposes the SPD decomposition model and ANN theory. Section 3 introduces the test sample, experimental setups and collected data used in this study. Section 4 compares and discusses the prediction results obtained by using the BP-NN and GA-BP NN. Finally, the concluding remarks are presented in Section 5.
The SPD Decomposition Model
Generally, the SPD of a full-spectrum white LED contains many peaks, which can be modeled by a multi-peak function. In the SPD decomposition model proposed in this paper, a summation of symmetric Gaussian functions is chosen to fit and disassemble the SPD of the full-spectrum white LED, expressed as Eq. (1):

y(x) = y_0 + Σ_{i=1}^{n} [A_i / (w_i √(π/2))] exp[−2(x − x_{c,i})² / w_i²]    (1)

where y_0 represents the initial value (baseline), and A_i, x_{c,i} and w_i are the peak area, peak wavelength and half-wave width of each emitted spectrum, respectively.
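As a minimal sketch (ours, not the authors' MATLAB code), the Gaussian sum of Eq. (1) can be fitted with SciPy; the four initial peak positions below are illustrative guesses for the blue, cyan, phosphor and red emission, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(x, y0, *params):
    # Eq. (1): baseline plus a sum of area-parameterized Gaussians;
    # params is a flat sequence of (A_i, xc_i, w_i) triplets.
    y = np.full_like(x, y0, dtype=float)
    for A, xc, w in zip(params[0::3], params[1::3], params[2::3]):
        y += (A / (w * np.sqrt(np.pi / 2.0))) * np.exp(-2.0 * ((x - xc) / w) ** 2)
    return y

# Initial guesses [y0, A1, xc1, w1, ..., A4, xc4, w4] (placeholders):
p0 = [0.0, 1.0, 450.0, 20.0, 0.8, 505.0, 25.0, 1.5, 565.0, 60.0, 0.6, 630.0, 20.0]
# popt, pcov = curve_fit(multi_gauss, wavelength_nm, measured_spd, p0=p0)
```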
BP-NN and GA-BP NN
As mentioned before, an ANN is a complex network structure formed by the interconnection of a large number of processing units (neurons); it is an abstraction, simplification and simulation of the structure and operation mechanism of the human brain. Fig. 1 is a structure diagram of a single hidden layer neural network, which contains n inputs and m outputs. An ANN does not express the exact relationship between input and output explicitly; it only captures the non-constant factors that cause the output to change. In this paper, a BP-NN was first used to predict the SPD of a full-spectrum white LED. The BP-NN is a multi-layer feedforward neural network trained by the error back propagation algorithm, and is the most widely used network at present. Its basic idea is the gradient descent method, which uses a gradient search technique to minimize the mean square error between the actual output value and the expected output value of the network. However, the BP-NN also has some drawbacks, such as a tendency to fall into local minima, and the lack of theoretical guidance for choosing the number of network layers and the number of neurons. Currently, there are some improved BP-NN approaches to accelerate the convergence of the network and avoid falling into local minima. To optimize the BP-NN, the GA model was integrated with the BP-NN in this paper to reduce the blind searching process in the early stage of the BP-NN algorithm and to find the optimal weights and thresholds. Fig. 2 shows the flowchart of the GA optimization.
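A toy sketch of this idea (ours, not the paper's implementation): a small GA evolves flat weight vectors of a 2-7-1 network with fitness equal to the negative mean squared error, and the best individual would then seed BP training. The population size (10), generation count (20) and crossover probability (0.4) follow Section 4.3; the mutation rate and tanh activation are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2 * 7 + 7 + 7 * 1 + 1  # weights and biases of a 2-7-1 network

def forward(wvec, X):
    # Unpack the flat vector into layer weights and run one forward pass.
    W1 = wvec[:14].reshape(2, 7); b1 = wvec[14:21]
    W2 = wvec[21:28].reshape(7, 1); b2 = wvec[28:]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def ga_init_weights(X, y, pop=10, gens=20, p_cross=0.4, p_mut=0.1):
    # Evolve candidate weight vectors; fitness = negative mean squared error.
    P = rng.normal(0.0, 1.0, (pop, DIM))
    for _ in range(gens):
        mse = np.array([np.mean((forward(w, X) - y) ** 2) for w in P])
        P = P[np.argsort(mse)]            # rank the population, best first
        kids = P[: pop // 2].copy()       # clone the best half
        for k in kids:
            if rng.random() < p_cross:    # one-point crossover
                cut = rng.integers(1, DIM)
                k[cut:] = P[rng.integers(pop // 2)][cut:]
            k += (rng.random(DIM) < p_mut) * rng.normal(0.0, 0.1, DIM)  # mutation
        P = np.vstack([P[: pop - pop // 2], kids])
    return P[0]  # would seed the BP training as its initial weights

X = rng.uniform(size=(50, 2)); y = X.mean(axis=1, keepdims=True)
best = ga_init_weights(X, y)
print(np.mean((forward(best, X) - y) ** 2))  # residual MSE of the GA seed
```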
Test and Data Acquisition
This paper chooses a full-spectrum white LED package with high CRI (Ra = 90∼92, CCT = 4800∼5200 K). It is packaged with a cyan LED chip (I_F,C = 150 mA), a blue LED chip (I_F,B = 150 mA) and a red LED chip (I_F,R = 30 mA, V_F,R = 2.0∼3.0 V), coated with a yellow phosphor layer; the cyan chip is connected in series with the blue chip (I_F,CB = 150 mA, V_F,CB = 5.0∼6.0 V). The package size is 5.2 mm × 5.4 mm, as shown in Fig. 3.
In order to realize the dynamic prediction of the SPD of the full-spectrum white LED operated under different electrical and thermal conditions, we first used an integrating sphere to measure the SPD of the test sample driven by an electrical-thermal loading matrix. As shown in Fig. 4, the experimental setup consists of an integrating sphere (Model: EVERFINE HASS20) to collect the SPD data, a DC power supply (Model: KEYSIGHT N5751A) to provide the driven current, and a temperature control platform (Model: EVERFINE CL-200) to control the case temperature of the test sample. The ranges of case temperature and driven current selected for the test sample are shown in Table 1. The selected full-spectrum white LED package needs two driven currents to power the red LED chip and the series-connected cyan-blue LED chips separately, and a synchronous change of the driven currents is recommended for the test samples. Usually, the operating temperature limit of most LEDs is between 80 and 100 °C. In addition, excessively high temperature and large driven current may cause serious degradation and even bring catastrophic failure to LEDs. Considering the rated currents and the actual operating temperature limit, the maximum case temperature was selected as 80 °C and the maximum driven currents were set as 40 mA and 200 mA for the red LED chip and the series-connected cyan-blue LED chips, respectively. Fig. 5(a) shows the SPD trend of test samples operated under different driven currents when the case temperature is controlled at 25 °C, where the SPD increases with rising driven current. Meanwhile, Fig. 5(b) shows how the SPD changes under different case temperatures when the driven current is fixed at 150 mA, which indicates that the case temperature has a negative impact on the SPD.
SPD Decomposition Results
As described in Section 2, the number of fitted peaks is selected according to the spectral power distribution of the LED in question; four fitted peaks are used for the full-spectrum LED in this paper. As calculated from Table 1, 372 sets of test conditions are considered. Firstly, the SPD decomposition model described in Eq. (1) is validated using the data collected under the four sets of test conditions shown in Table 2. The SPD decomposition modeling with Gaussian function fitting is presented in Fig. 6, which includes the original SPD, the four extracted individual spectra, and the cumulative peak-fitting model, respectively. The goodness-of-fit is evaluated using the coefficient of determination (R²). The maximum value of R² is 1, and the closer the value of R² is to 1, the better the regression fits the observations. Here, we fit sixty sets of SPD data and obtain all R² values. As shown in Table 3, all R² values are larger than 0.98, which indicates that the Gaussian based SPD decomposition model used in this study is appropriate to represent the original SPD of the full-spectrum white LED.
SPD Prediction With BP-NN
According to above analysis, it can be seen that the SPD of the full-spectrum LED package is highly controlled by the case temperature and driven current. The neural network modeling in this paper is implemented by MATLAB. As the junction temperature (T j ) is not effectively monitored during operation, the case temperature (T) and driven current (I) are selected as the inputs of the BP-NN model used in this study. And the outputs are the characteristic parameters extracted from the Gaussian function fitting, such as y 0 , BP-NN is used to predict each output parameters and each network has two input neurons and one output neurons. Eq. (2) is used to calculate the number of hidden layers of the network.
where q is the number of hidden-layer neurons, n is the number of input neurons, m is the number of output neurons, and a is an adjustment constant between 1 and 10; q = 7 is chosen in this paper. The designed neural network structure is shown in Fig. 7. Next, we use the 372 sets of collected SPD data to validate the BP-NN in SPD prediction: 352 sets are randomly selected as the training set for the network shown in Fig. 7, and the remaining 20 sets are used for testing. The gradient descent method is used to train the network. The learning rate is usually 0.01 or 0.1; here, 0.01 is chosen, and the maximum number of training epochs is set to 1000. Table 4 shows the input data (case temperature and driven current) of the randomly selected test samples used in prediction. The table shows that these test samples are well dispersed, covering a variety of case temperatures and driven currents.
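As a rough illustration of this setup (the paper's own implementation is in MATLAB), the following Python sketch trains one 2-7-1 network per characteristic parameter with stochastic gradient descent, a 0.01 learning rate, up to 1000 epochs, and a 352/20 train/test split; the synthetic (T, I) data and the toy parameter targets are assumptions for demonstration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# X columns: case temperature T (degC) and driven current I (mA); toy values.
X = np.column_stack([rng.uniform(25, 80, 372), rng.uniform(10, 200, 372)])
# Toy stand-ins for two Gaussian characteristic parameters (e.g., y0 and A1).
Y = np.column_stack([0.5 + 0.010 * X[:, 1] - 0.002 * X[:, 0],
                     1.0 + 0.005 * X[:, 1] - 0.003 * X[:, 0]])

scaler = StandardScaler().fit(X)
idx = rng.permutation(len(X))
train, test = idx[:352], idx[352:]           # 352 training sets, 20 test sets

for j in range(Y.shape[1]):                  # one 2-7-1 network per parameter
    net = MLPRegressor(hidden_layer_sizes=(7,), solver="sgd",
                       learning_rate_init=0.01, max_iter=1000, random_state=0)
    net.fit(scaler.transform(X[train]), Y[train, j])
    P = net.predict(scaler.transform(X[test]))
    ep = 100.0 * np.abs((Y[test, j] - P) / Y[test, j])   # Eq. (3)
    print(f"parameter {j}: max error {ep.max():.2f}%, mean {ep.mean():.2f}%")
```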
We then run the network and obtain the prediction results. To evaluate the prediction accuracy, the percentage of prediction error described in Eq. (3) is used:

E_p = |R − P| / R × 100% (3)

where E_p is the percentage of error, R represents the actual characteristic parameters of the Gaussian function, and P the values predicted by the NN. Fig. 8 shows the absolute percentage of prediction error, which reveals that all maximum prediction errors can be kept under 5%, and the averaged values are lower than 1.2%. Except for A_4 and y_0, the average error percentage of the other parameters is less than 0.5%. This result demonstrates that the BP-NN can achieve a good prediction accuracy for the characteristic parameters of the SPD. To illustrate the SPD prediction more intuitively, we plot the measured SPD, the Gaussian function fitting model, and the SPD predicted by the BP-NN in Fig. 9 for four sets of conditions as examples. The SPDs predicted by the BP-NN coincide with those modeled by the Gaussian function; to distinguish the two curves more clearly, the SPD from the Gaussian function is drawn in bold. Meanwhile, together with the fitting coefficients R² > 0.98 obtained in Section 4.1, this proves that the BP-NN predicts the SPD of the full-spectrum LED package with high accuracy. To further quantify the accuracy of the predicted SPD, we calculate the root-mean-square error (RMSE) and the chromaticity difference (Δxy) between the predicted and actual SPDs, expressed as Eq. (4) and Eq. (5).
RMSE = √( Σ_{j=1}^{n} (I_m(λ_j) − I_e(λ_j))² / n ) (4)

where I_m(λ_j) is the measured SPD, I_e(λ_j) is the predicted SPD, and n is the number of measurement points.
Δxy = √( (x_m − x_e)² + (y_m − y_e)² ) (5)

where x_m, y_m and x_e, y_e are the chromaticity coordinates calculated from the measured and predicted SPDs, respectively, in the CIE 1931 color space. As shown in Table 5, the averaged RMSE and chromaticity difference are 6.33 × 10⁻⁵ and 0.0021, respectively, which further confirms the high prediction accuracy of the proposed BP-NN method.
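A minimal sketch of these two metrics, assuming the reconstructed forms of Eq. (4) and Eq. (5) above; the chromaticity coordinates in the usage line are hypothetical values chosen only to reproduce a Δxy near the reported 0.0021.

```python
import numpy as np

def rmse(spd_meas, spd_pred):
    """Eq. (4): root-mean-square error over the n sampled wavelengths."""
    return np.sqrt(np.mean((np.asarray(spd_meas) - np.asarray(spd_pred)) ** 2))

def delta_xy(x_m, y_m, x_e, y_e):
    """Eq. (5): distance between two chromaticity points in CIE 1931 (x, y)."""
    return np.hypot(x_m - x_e, y_m - y_e)

# Hypothetical chromaticity pair giving a difference near the reported 0.0021.
print(delta_xy(0.4450, 0.4074, 0.4436, 0.4059))
```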
SPD Prediction With GA-BP NN
To improve the SPD prediction accuracy, we integrate a GA with the BP-NN to optimize the weights and thresholds of the BP-NN. In the GA-BP NN, the number of GA iterations, the population size, and the crossover probability are set as 20, 10, and 0.4, respectively. The network structure is the same as in Fig. 7. The prediction results for the characteristic parameters of the Gaussian model obtained with the GA-BP NN are shown in Fig. 10: the maximum error percentage is 3.3% and the maximum average error percentage is 0.8%. Comparing the prediction results in Fig. 8 and Fig. 10, both the average and maximum prediction error percentages of the GA-BP NN are lower than those of the BP-NN. This is because the GA-BP NN reduces the blind searching at the early stage of BP-NN training by finding near-optimal initial weights and thresholds. However, while the prediction error is reduced, the program running time increases because of the additional 20 GA iterations: the running time of the BP-NN is 4 s, whereas that of the GA-BP NN is 23 s.
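The following is a minimal, self-contained sketch of the GA stage, assuming a plain real-coded GA (rank selection with elitism, uniform crossover at probability 0.4, population 10, 20 generations) searching over the flattened weights and thresholds of a 2-7-1 network; the fitness function, toy data, and mutation scheme are illustrative assumptions, and in GA-BP the best individual would then seed ordinary BP (gradient descent) training.

```python
import numpy as np

rng = np.random.default_rng(1)

def unpack(v):
    """Flat vector -> weights/thresholds of a 2-7-1 network (29 numbers)."""
    return v[:14].reshape(7, 2), v[14:21], v[21:28].reshape(1, 7), v[28:29]

def forward(v, X):
    W1, b1, W2, b2 = unpack(v)
    h = np.tanh(X @ W1.T + b1)               # hidden layer, q = 7 neurons
    return (h @ W2.T + b2).ravel()

def fitness(v, X, y):                        # lower training MSE = fitter
    return np.mean((forward(v, X) - y) ** 2)

# Toy training data standing in for (T, I) -> one characteristic parameter.
X = rng.uniform(-1, 1, (352, 2))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1

pop = rng.normal(0.0, 0.5, (10, 29))         # population size 10
for gen in range(20):                        # 20 GA iterations
    f = np.array([fitness(v, X, y) for v in pop])
    pop = pop[np.argsort(f)]                 # rank individuals, best first
    children = pop[:5].copy()                # elitism: keep the best half
    for c in children:
        if rng.random() < 0.4:               # crossover probability 0.4
            mate = pop[rng.integers(5)]
            mask = rng.random(29) < 0.5      # uniform crossover
            c[mask] = mate[mask]
        c += rng.normal(0.0, 0.05, 29) * (rng.random(29) < 0.1)  # mutation
    pop = np.vstack([pop[:5], children])

best = pop[0]    # would seed the subsequent BP training in GA-BP
print("seed MSE after GA:", fitness(best, X, y))
```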
To illustrate the SPD prediction more intuitively, we also compare the measured SPD, the Gaussian fitting model, and the SPD predicted by the GA-BP NN in Fig. 11 for four sets of conditions as examples.
Similarly, we obtain the RMSE and chromaticity difference for the GA-BP NN. As shown in Table 6, the averaged RMSE and Δxy are 6.26 × 10⁻⁵ and 0.0019, respectively. The percentage of error, the RMSE, and the chromaticity difference predicted by the GA-BP NN are therefore all smaller than those of the BP-NN, confirming that the GA optimization is effective to a certain extent.
Robustness Study of the Methods
To evaluate the universal applicability of the proposed methods, a robustness study with two additional cases is presented and discussed in this section. Both cases are based on BP-NN predictions.
Case 1: Predict the Data Outside the Experimental Measurement:
To verify that the method can still predict data outside the experimental dataset, we select a subset of the data for extrapolation. The prediction method is the same as in Section 4.2. The training set covers case temperatures from 25°C to 65°C, for a total of 279 sets of training data, and is used to predict the SPD at case temperatures of 70°C and 75°C, for a total of 20 sets of prediction data. The specific training-set and test-set inputs are shown in Tables 7 and 8. Fig. 12 shows the absolute percentage of prediction error, which reveals that all maximum prediction errors can be kept under 5%. Table 9 shows the RMSE and Δxy, from which we can see that the average RMSE and Δxy are 5.50 × 10⁻⁵ and 0.0018, respectively. This result proves that the method can still achieve good predictions for data outside the measured training range.
Case 2: Prediction With Small Amount of Training Data:
To demonstrate that the method can still make predictions when the training data are limited, we choose 48 sets of data. Table 10 shows their driven currents and case temperatures; 42 of these sets are used as the training set and the remaining 6 sets as the test set. The input data of the test set are shown in Table 11. The prediction method is the same as in Section 4.2. Fig. 13 shows the absolute percentage of prediction error: all maximum prediction errors can be kept under 6%, and the averaged values are lower than 2.2%. Table 12 shows the RMSE and Δxy, from which we can see that the average RMSE and Δxy are 5.50 × 10⁻⁵ and 0.0018, respectively. This result proves that the method can still achieve acceptable predictions with a small amount of training data, although the prediction accuracy depends on the nature of the LED and the number of test samples.
Conclusion
This paper proposes an SPD prediction method for a full-spectrum white LED package by combining an SPD decomposition model with ANN-based machine learning. The Gaussian-based SPD decomposition model is first proposed to discretize the continuous SPD of a full-spectrum white LED package into a set of spectral characteristic parameters. Then the case temperature and driven current are used as inputs of both BP-NN and GA-BP NN models to estimate the spectral characteristic parameters, and finally a highly accurate SPD prediction is achieved and validated through different case studies. The prediction results show that: (1) the averaged SPD prediction RMSE and chromaticity difference (Δxy) of both the BP-NN and the GA-BP NN can be kept at around 6 × 10⁻⁵ and 0.002, respectively; (2) the robustness study also proves the potential of the proposed method for dynamically accurate SPD prediction of full-spectrum white LEDs operated under different mission profiles.
Disclosures
The authors declare no conflicts of interest.
Mechanism of dimerization and structural features of human LI-cadherin
Liver intestine (LI)-cadherin is a member of the cadherin superfamily, which encompasses a group of Ca2+-dependent cell-adhesion proteins. The expression of LI-cadherin is observed on various types of cells in the human body, such as normal small intestine and colon cells, and gastric cancer cells. Because its expression is not observed on normal gastric cells, LI-cadherin is a promising target for gastric cancer imaging. However, because the cell adhesion mechanism of LI-cadherin has remained unknown, rational design of therapeutic molecules targeting this cadherin has been hampered. Here, we have studied the homodimerization mechanism of LI-cadherin. We report the crystal structure of the LI-cadherin homodimer containing its first four extracellular cadherin repeats (EC1-4). The EC1-4 homodimer exhibited a unique architecture different from that of other cadherins reported so far, driven by the interactions between EC2 of one protein chain and EC4 of the second protein chain. The crystal structure also revealed that LI-cadherin possesses a noncanonical calcium ion–free linker between the EC2 and EC3 domains. Various biochemical techniques and molecular dynamics simulations were employed to elucidate the mechanism of homodimerization. We also showed that the formation of the homodimer observed in the crystal structure is necessary for LI-cadherin–dependent cell adhesion by performing cell aggregation assays. Taken together, our data provide structural insights necessary to advance the use of LI-cadherin as a target for imaging gastric cancer.
Cadherins are a family of glycoproteins responsible for calcium ion-dependent cell adhesion (1). There are more than 100 types of cadherins in humans, and many of them are not only responsible for cell adhesion but also involved in tumorigenesis (2). Human liver intestine-cadherin (LI-cadherin) is a nonclassical cadherin composed of an ectodomain consisting of seven extracellular cadherin (EC) repeats, a single transmembrane domain, and a short cytoplasmic domain (3). Previous studies have reported the expression of LI-cadherin on various types of cells, such as normal intestine cells, intestinal metaplasia, colorectal cancer cells, and lymph node metastatic gastric cancer cells (4,5).
Because human LI-cadherin is expressed on gastric cancer cells but not on normal stomach tissues, LI-cadherin has been proposed as a target for imaging metastatic gastric cancer (6). Previous studies have reported that LI-cadherin works as a calcium ion-dependent cell adhesion molecule, as other cadherins do (7). It has also been shown that trans-dimerization of LI-cadherin is necessary for water transport in normal intestinal cells (8). Sequence alignment of mouse LI-, E-, N-, and P-cadherins (classical cadherins) has revealed significant sequence similarity between EC1-2 of LI-cadherin and EC1-2 of E-, N-, and P-cadherins, as well as between EC3-7 of LI-cadherin and EC1-5 of E-, N-, and P-cadherins (Fig. 1) (9). From this sequence similarity and the proposed absence of calcium ion-binding motifs (10,11) between the EC2 and EC3 repeats, it has been speculated that LI-cadherin evolved from the same five-repeat precursor as classical cadherins (9).
However, LI-cadherin is different from classical cadherins in several aspects, such as the number of EC repeats and the length and sequence of the cytoplasmic domain. Classical cadherins possess five EC repeats, whereas LI-cadherin displays seven (2). Classical cadherins possess a conserved cytoplasmic domain comprising more than 100 amino acids, whereas that of LI-cadherin is only 20 residues long with little or no sequence homology to that of classical cadherins (7,12).
The characteristics of LI-cadherin at the molecular level, including the homodimerization mechanism, remain unknown. Homodimerization is the fundamental event in cadherin-mediated cell adhesion as shown previously (13,14,15). For example, classical cadherins form a homodimer mediated by the interaction between their two N-terminal EC repeats (EC1-2) (10,14).
In this study, we aimed to characterize LI-cadherin at the molecular level because the molecular description of the target protein may play a significant role for the rational design of therapeutic approaches. We have extensively validated LI-cadherin to identify its homodimer architecture. Here, we report the crystal structure of the homodimer form of human LI-cadherin EC1-4. The crystal structure revealed a dimerization architecture different from that of any other cadherin reported so far. It also showed canonical calcium-binding motifs between EC1 and EC2, and between EC3 and EC4, but not between EC2 and EC3. By performing various biochemical and computational analyses based on this crystal structure, we interpreted the characteristics of the LI-cadherin molecule. In addition, we showed that the formation of the EC1-4 homodimer is necessary for LI-cadherin-dependent cell adhesion through cell aggregation assays. Our study revealed possible architectures of LI-cadherin homodimers at the cell surface and suggested the differential role of the two additional EC repeats at the N-terminus compared with classical cadherins.
Homodimerization propensity of human LI-cadherin
To investigate the homodimerization mechanism of LI-cadherin, we expressed the entire ectodomain comprising EC1-7 (Table S1) and analyzed its homodimerization propensity using sedimentation velocity analytical ultracentrifugation (SV-AUC). Although formation of a homodimer was observed, it was not possible to determine its dissociation constant (K_D) because no concentration dependence of the weight-average sedimentation coefficient was discerned, suggesting a very slow dissociation rate (Fig. 2). Therefore, to understand the homodimerization mechanism of LI-cadherin in more detail, we prepared truncated versions of LI-cadherin containing various numbers of EC repeats and evaluated their homodimerization potency. The constructs were designed based on the sequence homology between LI-cadherin and classical cadherins. We compared the sequences of human LI-cadherin and human classical cadherins (E-, N-, and P-cadherins) using EMBOSS Needle (16). As pointed out in a previous study (9), EC1-2 and EC3-7 of human LI-cadherin show approximately 30% sequence homology with EC1-2 and EC1-5 of human classical cadherins, respectively (Fig. 1, Tables S2 and S3).
Notably, Trp239 is located at the N-terminal end of LI-cadherin EC3, and it has therefore been suggested that this Trp residue might function as an adhesive element equivalent to the conserved residue Trp2 of EC1 of classical cadherins, playing a crucial role in the formation of the strand-swap dimer (9, 10, 17-19). Considering the degree of sequence homology and that EC1-2 of classical cadherins is the element responsible for homodimerization, we hypothesized that EC1-2 and EC3-4 of LI-cadherin would be responsible for its dimerization. Therefore, we determined the degree of homodimerization of EC1-2 and EC3-4, as well as those of EC1-4 and EC1-5, using SV-AUC (Table S1).
Homodimerization of EC1-2, EC1-4, and EC1-5 was observed, and unlike EC1-7, the weight-average of the sedimentation coefficient increased in a concentration-dependent manner. The K D values determined were 75.0 μM, 39.8 μM, and 22.8 μM, respectively. In contrast, we did not observe dimerization when using EC3-4 despite the sequence similarity with EC1-2 of classical cadherins and the presence of Trp239 in EC3, a residue located at the analogous position to that of the key Trp2 residue in EC1 of classical cadherins (Fig. 2). The solution structure of EC3-4 even at a higher concentration was monomeric as determined by small-angle X-ray scattering (SAXS), supporting the results of SV-AUC (Fig. S1, A-E and Table S4).
Crystal structure analysis of EC1-4 homodimers
To determine the EC repeats responsible for the homodimerization of LI-cadherin, we determined the crystal structure of EC1-4 expressed in mammalian cells at 2.7 Å resolution (Fig. 3 and Table 1). Each EC repeat was composed of the typical seven β-strands seen in classical cadherins, and three calcium ions were bound to each of the linkers connecting EC1 and EC2, and EC3 and EC4 (Fig. 3). We also observed four N-glycans and two N-glycans bound to chains A and B, respectively, as predicted from the amino acid sequence. We could not resolve these N-glycans over their entire length because of their intrinsic flexibility, but from the portions resolved, all N-glycans face the side opposite to the dimer interface.
Two unique characteristics were observed in the crystal structure of LI-cadherin EC1-4: (i) the existence of a calcium-free linker between EC2 and EC3 and (ii) an unusual homodimerization architecture not described before for cadherins. A previous study had suggested that LI-cadherin lacks a calcium-binding motif between EC2 and EC3 (9), and our crystal structure has confirmed that hypothesis experimentally. Crystal structures of cadherins displaying a calcium-free linker have been reported previously, and the biological significance of the calcium-free linker has been discussed (20,21).
The EC1-4 region of LI-cadherin was assembled as an antiparallel homodimer in which EC2 of one chain interacts with EC4 of the opposite chain. This architecture is different from that of other cadherins, such as classical cadherins, which exhibit a two-step binding mode (14), and from that of protocadherin γB3, which forms an antiparallel EC1-4 homodimer (22) stabilized by intermolecular interactions in which all the EC repeats participate.
The fact that the affinity of the EC1-5 homodimer is almost twice as high as that of the EC1-4 homodimer suggested the presence of contacts between EC1 and EC5, as can be predicted from the arrangement of EC1 of one chain and EC4 of the other chain in the crystal structure, although this interaction does not seem to be strong. In addition, there was no interaction between EC1-2 of one chain and EC1-2 of the other chain in the crystal structure, suggesting that the architecture of the EC1-2 homodimer detected by SV-AUC should be different from that of the EC1-4 homodimer.
Calcium-free linker
We next investigated the calcium-free linker between EC2 and EC3. Classical cadherins generally adopt a crescent-like shape (15,18). In LI-cadherin, however, the arch shape is disrupted at the calcium-free linker region, and as a result EC1-4 exhibits a unique positioning of EC1-2 with respect to EC3-4.
Generally, the three calcium ions bound to the linker between EC repeats confer rigidity to the structure (11). In fact, a previous study on a calcium-free linker of a cadherin showed that such a linker retains some flexibility (21). To compare the rigidity of the canonical linker with three calcium ions and the calcium-free linker in LI-cadherin, we performed molecular dynamics (MD) simulations. In addition to the monomeric states, we also used the structure of the EC1-4 homodimer as an initial structure for the simulations. After confirming the convergence of the simulations by calculating RMSD values of Cα atoms (Fig. S2, A and B; see Experimental procedures for details), we compared the rigidity of the linkers by calculating the RMSD values of the Cα atoms of EC1 and EC3, respectively, after superposing those of EC2 alone (Fig. 4, A-C). EC3 in the monomer conformation exhibited the largest RMSD, and the RMSD values of EC3 in the homodimer were significantly smaller than those in the monomer form. Dihedral angles defined by Cα atoms of residues at the edges of each EC repeat also indicated that the EC1-4 monomer bends largely at the Ca2+-free linker (Fig. S3, A-C). These results showed that the calcium-free linker between EC2 and EC3 is more flexible than the canonical linker (Movies S1 and S2).
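The superposition-based RMSD analysis can be sketched as follows, assuming Cα coordinates have already been extracted from the trajectory: the Kabsch algorithm aligns the EC2 atoms and the RMSD is then measured over EC1 or EC3. The index arrays and coordinate variables are hypothetical placeholders, not the study's actual analysis script.

```python
import numpy as np

def superposed_rmsd(mobile, ref, fit_idx, calc_idx):
    """Superpose `mobile` onto `ref` using the Calpha atoms in `fit_idx`
    (e.g., EC2) via the Kabsch algorithm, then report the RMSD over
    `calc_idx` (e.g., EC1 or EC3). Coordinates are (N, 3) arrays."""
    P, Q = mobile[fit_idx], ref[fit_idx]
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    V, _, Wt = np.linalg.svd(Pc.T @ Qc)      # SVD of the covariance matrix
    d = np.sign(np.linalg.det(V @ Wt))       # guard against reflections
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    moved = (mobile - P.mean(axis=0)) @ R + Q.mean(axis=0)
    diff = moved[calc_idx] - ref[calc_idx]
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Usage (hypothetical index arrays and coordinates from a trajectory frame):
# rmsd_ec3 = superposed_rmsd(frame_ca, reference_ca, ec2_idx, ec3_idx)
```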
Another unique characteristic of the region surrounding the calcium-free linker is the existence of an α-helix at the bottom of EC2. To the best of our knowledge, this element at the bottom of EC2 is not found in classical cadherins. Multiple sequence alignment of EC1-2 of human LI-, E-, N-, and P-cadherin by ClustalW indicated that the insertion of the α-helix-forming residues corresponds to the position immediately preceding the canonical calcium-binding motif DXE of classical cadherins (10) (Fig. S4). The Asp and Glu residues of the DXE motifs in EC1 and EC3 of the LI-cadherin dimer coordinate calcium ions (Fig. S5, A and B), and this coordination was maintained throughout the simulation (Fig. S5, C-J). The α-helix in EC2 might compensate for the absence of calcium by conferring some rigidity to the molecule.

Figure 3. Crystal structure of the EC1-4 homodimer. Calcium ions are depicted in magenta. No calcium ions were observed between EC2 and EC3 in either molecule. Four partial N-glycans were modeled in chain A (light green) and two in chain B (cyan) (the amino acid sequence of EC1-4 is given in Table S1). EC, extracellular cadherin.
Interaction analysis of EC1-4 homodimers
To validate whether LI-cadherin-dependent cell adhesion is mediated by formation of the homodimer observed in the crystal structure, it was necessary to find a mutant exhibiting a reduced dimerization tendency. First, we analyzed the interaction between the two EC1-4 molecules in the crystal structure using the PISA server (Table S5) (23). The interaction is largely mediated by EC2 of one chain of LI-cadherin and EC4 of the other chain, engaging in hydrogen bonds and nonpolar contacts (Fig. 5). The dimerization surface area is 1254 Å², and a total of seven hydrogen bonds (distance between heavy atoms < 3.5 Å) were observed. Based on the analysis of these interactions, we conducted site-directed mutagenesis to assess the contribution of each residue to the dimerization of LI-cadherin. Eleven residues showing a percentage of buried area greater than 50%, or one or more intermolecular hydrogen bonds (distance between heavy atoms < 3.5 Å), were individually mutated to Ala (Tables S1 and S5). To quickly identify mutants with weaker homodimerization propensity, size-exclusion chromatography with multiangle light scattering (SEC-MALS) was used. EC1-4WT (or mutants) at 100 μM was injected into the chromatographic column. Analysis of the molecular weight (MW) showed that the MW of F224A was the smallest among all the samples evaluated (Fig. 6A and Table 2). Analogous observations were made when the protein samples were injected at 50 μM (Fig. S6A and Table 2). Similarly, the sample with the greatest elution volume among the 12 samples analyzed corresponded to F224A (Fig. 6, B and C and Fig. S6, B and C).
We must note that the samples eluted as a single peak, corresponding to a fast equilibrium between monomers and dimers, as reported in a previous study using other cadherins (14). Although the samples were injected at 100 μM, they eluted at 4 μM because SEC dilutes the samples as they advance through the column. Considering that the K_D of dimerization of EC1-4WT determined by AUC was 39.8 μM, at a protein concentration of 4 μM the largest fraction of the eluted sample should be monomer. This explains why the MW of the WT sample was smaller than the MW of the homodimer (99.6 kDa), and why the differences in MW among the constructs were small. We reasonably assumed that a decrease in MW and an increase in elution volume indicate a lower fraction of homodimer in the eluted sample, and thus a smaller dimerization tendency caused by the mutations introduced into the protein.
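This monomer-dimer argument is easy to check numerically. The sketch below solves the mass-balance quadratic for a M + M ⇌ D equilibrium with K_D = [M]²/[D]; with K_D = 39.8 μM, only about 15% of chains are dimeric at the 4 μM elution concentration, versus roughly 64% at the injected 100 μM.

```python
import numpy as np

def dimer_fraction(c_total, kd):
    """Fraction of protein chains in the dimer for M + M <-> D with
    K_D = [M]^2 / [D]; c_total is the total chain concentration.
    Mass balance 2[M]^2/K_D + [M] - c_total = 0 gives the positive root."""
    m = kd * (np.sqrt(1.0 + 8.0 * c_total / kd) - 1.0) / 4.0
    return 1.0 - m / c_total

print(dimer_fraction(4.0, 39.8))    # ~0.15 at the post-column concentration
print(dimer_fraction(100.0, 39.8))  # ~0.64 at the injected concentration
```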
To confirm the disruption of the homodimer by mutation of Phe224, we performed an SV-AUC measurement of EC1-4F224A. In agreement with the SEC-MALS results, this construct did not form a homodimer even at the highest concentration examined (120 μM, Fig. 7). Collectively, the mutational study showed that mutation of Phe224 to Ala abolishes homodimerization of EC1-4.
Contribution of Phe224 to dimerization
Although Phe224 does not engage in specific interactions (such as H-bonds) with the partner molecule of LI-cadherin in the crystal structure (Fig. S7, A and B), it buries a significant surface (71.8 Å², i.e., 95% of its total accessible surface) upon dimerization, as determined by the PISA server. To understand the role of Phe224 in the dimerization of LI-cadherin, we conducted separate MD simulations of the monomeric forms of EC1-4WT and EC1-4F224A. We first calculated the intramolecular distance between the Cα atoms of residues 224 and 122. The simulations revealed that Ala224 in the mutant moves away from the strand that contains Asn122, whereas the original Phe224 remains within a closer distance to Asn122 (Fig. 8A, Fig. S8, Movies S3 and S4). The movement of the mutated residue suggests that the side chain of Phe224 engages in intramolecular interactions, being stabilized inside the pocket. Superposition of EC2 (chain A) in the crystal structure of EC1-4 and EC2 during the simulation of the EC1-4F224A monomer suggests that the large movement of the loop containing Ala224 would cause steric hindrance and inhibit dimerization (Fig. 8B). Analysis of thermal stability by differential scanning calorimetry (DSC) revealed that EC1-4F224A exhibits two unfolding peaks, whereas EC1-4WT displays a single peak (Fig. 8C). These results suggest that part of the EC1-4F224A molecule is destabilized by the mutation. In combination with the data from the MD simulations, we propose that Phe224 contributes to the dimerization of LI-cadherin by restricting the movement of the surrounding residues and thus preventing the steric hindrance triggered by the large movement observed in the simulations of the alanine mutant. DSC measurements showed that some other mutants exhibited lower thermal stability than the WT protein (Table 2 and Fig. S9). However, because the T_M1 value of F224A is the lowest among all the mutants examined, and because other mutants with a lower T_M1 than WT did not exhibit a drastic decrease in homodimer affinity like F224A, we conclude that among the residues evaluated by Ala scanning, Phe224 is the most critical for maintenance of the homodimer structure and thermal stability.
Functional analysis of LI-cadherin on cells
To investigate whether disrupting the formation of the EC1-4 homodimer influences cell adhesion, we established CHO cell lines expressing full-length LI-cadherin WT or the mutant F224A (including the transmembrane and cytoplasmic domains fused to GFP), termed EC1-7GFP and EC1-7F224AGFP, respectively (Table S1 and Fig. S10). We conducted cell aggregation assays and compared the cell adhesion ability of cells expressing each construct and mock cells (nontransfected Flp-In CHO) in the presence of calcium or in the presence of EDTA. The size distribution of cell aggregates was quantified using a micro-flow imaging (MFI) apparatus. EC1-7GFP showed cell aggregation ability in the presence of CaCl2. In contrast, EC1-7F224AGFP and mock cells did not show obvious cell aggregates in the presence of CaCl2 (Fig. 9, A-C). This result revealed that Phe224 is crucial for LI-cadherin-dependent cell adhesion and for formation of the EC1-4 homodimer in a cellular environment.
We next performed cell aggregation assays using CHO cells expressing various constructs of LI-cadherin in which EC repeats were deleted, to elucidate the mechanism of cell adhesion induced by LI-cadherin. Cells separately expressing LI-cadherin EC1-5 and EC3-7 were established (EC1-5GFP and EC3-7GFP) (Table S1 and Fig. S10). Importantly, neither EC1-5 nor EC3-7 expressing cells showed cell aggregation ability in the presence of CaCl2 (Fig. 10).
Because expression of EC1-5GFP did not support cell aggregation, effective dimerization at the cellular level appears to require the full-length protein. Combined with the SV-AUC observations for EC1-7 described above, EC6 and/or EC7 should contribute to the slow dissociation of the homodimer. In the absence of EC6-7, the dissociation rate of LI-cadherin likely increases, thus impairing cell adhesion.
Similarly, expression of EC3-7 on the surface of the cells did not result in cell aggregation. In this case, the observation agreed with the results of SV-AUC and SAXS, showing that EC3-4 does not dimerize. Truncation of EC1-2 from LI-cadherin generates a cadherin similar to classical cadherins in that it has five EC repeats and a Trp residue near the N-terminus. Together with the crystal structure of the EC1-4 homodimer, which showed that Trp239 is buried in its own hydrophobic pocket and does not participate in homodimerization (Fig. S11), the fact that LI-cadherin EC3-7 did not mediate aggregation points to a dimerization mechanism unique to LI-cadherin.
Discussion
This is the first report examining the architecture of the LI-cadherin EC1-4 homodimer and the flexibility of the Ca2+-free linker in LI-cadherin. The mutational study and the cell aggregation assay showed that LI-cadherin-dependent cell adhesion is mediated by the formation of the dimerization interface between EC2 in one chain and EC4 in the other chain, with contributions from other EC repeats. Our findings regarding the novel EC1-4 homodimer advance the understanding of LI-cadherin at the molecular level. The EC1-2 homodimer, which was observed by SV-AUC, and the contribution of EC6 and/or EC7 to slowing the dissociation of the homodimer are also important for understanding the mechanism of cell adhesion mediated by LI-cadherin.
The EC1-2 homodimer observed by SV-AUC appeared insufficient to maintain LI-cadherin-dependent cell adhesion. One possibility is that the weaker dimerization of EC1-2 cannot maintain cell adhesion because of the mobility of the Ca2+-free linker between EC2 and EC3. Contrary to a canonical Ca2+-bound linker, such as the linker between EC1 and EC2, the linker between EC2 and EC3 in LI-cadherin does not contain Ca2+. The lack of Ca2+ results in greater mobility when the EC1-4 homodimer observed in the crystal structure (Fig. 3) is not formed. The combination of low dimerization affinity and high mobility likely explains the absence of EC1-2-driven cell adhesion (Fig. S13, A and B). Considering that there are several families of cell adhesion proteins in the human body, we cannot rule out the possibility that the EC1-2 homodimer is formed on cells where cell adhesion is maintained by other adhesion proteins. In LI-cadherin-dependent cell adhesion, we assume that the unique architecture of the EC1-4 homodimer is necessary to restrict the movement of the Ca2+-free linker and to maintain cell adhesion (Fig. S14, A-C).
The noncanonical α-helix in EC2 may also contribute to the unique characteristics of LI-cadherin. A previous study on the Ca2+-free linker of protocadherin-15 indicated that a unique 3₁₀ helix in the middle of the Ca2+-free linker is one of the factors conferring mechanical strength to the linker (21). We assume that the α-helix in LI-cadherin EC2 is required to maintain structural rigidity in the absence of the coordination of negatively charged residues to Ca2+.
Contribution of EC6 and/or EC7 to homodimerization was also a notable factor. C-cadherin is known to form a strand-swap dimer mediated by interactions at the N-terminal strand of EC1 (18). However, bead aggregation and laminar flow assays suggested that C-cadherin-dependent binding activity is maintained through interactions of multiple EC repeats (24). Likewise, our findings that EC1-5GFP and EC3-7GFP do not show cell aggregation ability, whereas EC1-7GFP does, suggest that dimerization of LI-cadherin results from a collective effort of several repeats throughout the protein. According to the cell aggregation assays of the mutant EC1-7F224AGFP, LI-cadherin-dependent cell adhesion is clearly abolished by the mutation of Phe224, a residue located within the EC2 repeat. This result suggests that EC6 and/or EC7 contribute to cell adhesion after formation of the EC1-4 homodimer, an idea also supported by the observation that EC3-7GFP does not show cell aggregation ability. The mechanism underlying the contribution of the EC6 and EC7 repeats to the slow dissociation rate of the EC1-7 homodimer needs to be further investigated.
As for the reasons why EC1-5GFP did not aggregate, there are alternative explanations. For example, the EC1-4 homodimer observed by X-ray crystallography and detected by SV-AUC may not be replicated by the short EC1-5 construct in a cellular environment. We hypothesize that the overhanging EC1 repeat of a dimer formed by one cell would collide with the membrane of the opposing cell (steric hindrance) (Fig. S15A). It is also possible that an inappropriate orientation of the approaching LI-cadherin molecules contributes to the inability of EC1-5 to dimerize (Fig. S15B).
Several differences between LI-cadherin and E-cadherin might explain the unique biological characteristics of LI-cadherin. Both LI-cadherin and E-cadherin are expressed on normal intestine cells; however, their sites of expression are different. LI-cadherin is expressed at the intercellular cleft and is excluded from the adherens junctions (7), where E-cadherin is precisely expressed (25). Although LI-cadherin is not present at the adherens junctions, trans-interaction of LI-cadherin is necessary to maintain water transport through the intercellular cleft of intestine cells (8). Clustering on the cell membrane might also be different. Classical cadherins, including E-cadherin, are thought to form clusters on the cell membrane to facilitate cell adhesion, and the lateral interaction interfaces of these cadherins were suggested from packing contacts in protein crystal lattices (15,18). In contrast to classical cadherins, we did not observe crystal packing contacts suggestive of lateral (cis) interactions in our crystal structure. Indeed, our crystal structure indicates that the few N-glycans present in LI-cadherin are directed toward the side opposite to the homodimer interface, suggesting that the protein chains belonging to the homodimer do not participate in cis-interactions. We speculate that LI-cadherin forms homodimers with a broad interface to maintain trans-interactions without the need for cis-clusters on the cell membrane.
In summary, our study with LI-cadherin has unveiled novel molecular-level features for the dimerization of a cadherin molecule. We expect that our data will provide fundamental information for the development of diagnostic tools and/or therapeutic solutions targeting LI-cadherin.
Protein sequence
The amino acid sequences of the recombinant proteins and of the LI-cadherin constructs expressed in CHO cells are summarized in Table S1.
Expression and purification of recombinant LI-cadherin
All LI-cadherin constructs were expressed using the same method. All constructs were cloned into the pcDNA 3.4 vector (Thermo Fisher Scientific). Recombinant protein was expressed using Expi293F cells (Thermo Fisher Scientific) following the manufacturer's protocol. Cells were cultured for 3 days after transfection at 37 °C and 8% CO2.
The purification method was identical for all constructs except EC1-7 (see below). The supernatant was collected and filtered, followed by dialysis against a solution composed of 20 mM Tris-HCl, pH 8.0, 300 mM NaCl, and 3 mM CaCl2. Immobilized metal affinity chromatography (IMAC) was performed using Ni-NTA Agarose (Qiagen). Protein was eluted with 20 mM Tris-HCl, pH 8.0, 300 mM NaCl, 3 mM CaCl2, and 300 mM imidazole. Final purification was performed by SEC using a HiLoad 26/600 Superdex 200 pg column (Cytiva) at 4 °C equilibrated in buffer A (10 mM Hepes-NaOH at pH 7.5, 150 mM NaCl, and 3 mM CaCl2). Unless otherwise specified, samples were dialyzed against buffer A before analysis, and the filtered dialysis buffer was used for assays.
Figure 10. Cell adhesion mediated by short constructs. Cell aggregation assay using EC1-5GFP and EC3-7GFP. EC1-7GFP and Flp-In CHO (mock cells) were used as positive and negative controls, respectively. Particles that were 25 μm or larger were considered as cell aggregates. The number of cell aggregates of both EC1-5GFP and EC3-7GFP in the presence or absence of Ca2+ was determined. Data correspond to the mean ± SEM of four measurements. EC, extracellular cadherin.

For the purification of EC1-7, dialysis after collection of the supernatant and IMAC were performed by the same method as for the other constructs. After purification by IMAC, the fraction containing the protein was dialyzed against 20 mM Tris-HCl, pH 8.0, 5 mM NaCl, and 3 mM CaCl2, and anion-exchange chromatography was performed using a HiTrap Q HP column (1-ml size; Cytiva). The column was washed with anion A buffer (20 mM Tris-HCl, pH 8.0, 10 mM NaCl, and 3 mM CaCl2) before injection of the protein.
The percentage of anion B buffer (20 mM Tris-HCl, pH 8.0, 500 mM NaCl, and 3 mM CaCl2) was increased in a stepwise manner in increments of 12.5% to elute the protein. Fractions eluting at anion B buffer percentages of approximately 25% to 37.5% were collected for final purification, which was performed by injecting the collected fractions onto a HiLoad 26/600 Superdex 200 pg column (Cytiva) at 4 °C equilibrated in buffer A.
The collected data were analyzed using the continuous c(s) distribution model implemented in the program SEDFIT (version 16.2b) (26), fitting for the frictional ratio, meniscus, time-invariant noise, and radial-invariant noise using a regularization level of 0.68. Sedimentation coefficients in the range of 0 to 15 S were evaluated with a resolution of 150. The partial specific volumes of EC1-7, EC1-2, EC3-4, EC1-4, EC1-5, and EC1-4F224A were calculated from the amino acid composition of each sample using the program SEDNTERP 1.09 (27) and were 0.732, 0.730, 0.733, 0.732, 0.734, and 0.732 cm³/g, respectively. The buffer density and viscosity were calculated using SEDNTERP 1.09 as 1.0055 g/cm³ and 1.025 cP, respectively. Figures of the c(s20,w) distributions were generated using the program GUSSI (version 1.3.2) (28). The weight-average sedimentation coefficient of each sample was calculated by integrating the range of sedimentation coefficients where peaks with obvious concentration dependence were observed. For determination of the dissociation constant of the monomer-dimer equilibrium, K_D, the concentration dependence of the weight-average sedimentation coefficient was fitted to the monomer-dimer self-association model implemented in the program SEDPHAT (version 15.2b) (29).
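A minimal sketch of this monomer-dimer fit, assuming the standard weight-average model s_w(C) = (s_M[M] + 2 s_D[D])/C with [M] obtained from mass balance; this is a generic reimplementation of the idea rather than SEDPHAT's code, and the (concentration, s_w) pairs below are hypothetical, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sw_model(c_tot, kd, s_mono, s_dim):
    """Weight-average s for a monomer-dimer self-association:
    s_w = (s_M [M] + 2 s_D [D]) / C with [M] from mass balance."""
    m = kd * (np.sqrt(1.0 + 8.0 * c_tot / kd) - 1.0) / 4.0
    d = (c_tot - m) / 2.0
    return (s_mono * m + 2.0 * s_dim * d) / c_tot

# Hypothetical (total concentration uM, weight-average s20,w) pairs.
c = np.array([5.0, 15.0, 30.0, 60.0, 120.0])
sw = np.array([3.9, 4.2, 4.5, 4.9, 5.3])
(kd, s1, s2), _ = curve_fit(sw_model, c, sw, p0=[40.0, 3.8, 6.0],
                            bounds=([1e-3, 0.0, 0.0], [1e4, 20.0, 20.0]))
print(f"fitted K_D = {kd:.1f} uM")
```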
Solution structure analysis using SAXS
All measurements were performed at beamline BL-10C (30) of the Photon Factory, following the experimental procedure described previously (19). The concentration of EC3-4 was 157 μM. Data were collected using a PILATUS3 2M detector (Dectris), and image data were processed with the SAngler software (31). The wavelength was 1.488 Å and the camera distance was 101 cm.
The exposure time was 60 s, and raw data between s values of 0.010 and 0.84 Å⁻¹ were recorded. The background scattering intensity of the buffer was subtracted from each measurement, and the scattering intensities of four measurements were averaged to produce the scattering curve of EC3-4. Data were placed on an absolute intensity scale, with the conversion factor calculated based on the scattering intensity of water. The theoretical SAXS curves and χ² values were computed using the FoXS server (32,33).
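For reference, a FoXS-type comparison reduces to a scaled χ² between the experimental and theoretical curves; the sketch below, with the scale factor obtained by least squares, is a generic reimplementation under that assumption rather than the server's exact code.

```python
import numpy as np

def saxs_chi2(i_exp, i_model, sigma):
    """Reduced chi-square between experimental and theoretical SAXS curves,
    after least-squares fitting of a single scale factor c."""
    c = np.sum(i_exp * i_model / sigma ** 2) / np.sum(i_model ** 2 / sigma ** 2)
    return np.mean(((i_exp - c * i_model) / sigma) ** 2)
```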
MD simulations
MD simulations of LI-cadherin were performed using GROMACS 2016.3 (34) with the CHARMM36m force field (35). The crystal structure of the EC1-4 homodimer and the EC1-4, EC1-4F224A, and EC3-4 monomer forms were used as the initial structures of the respective simulations. The EC1-4 and EC3-4 monomer forms were generated by extracting chain A (EC1-4) or its EC3-4 portion from the EC1-4 homodimer crystal structure. Sugar chains were removed from the original crystal structure, and missing residues were modeled with MODELLER 9.18 (36). The structures were solvated with TIP3P water (37) in a rectangular box such that the minimum distance to the edge of the box was 15 Å under periodic boundary conditions, through the CHARMM-GUI (38). Addition of N-linked sugar chains (G0F) and the mutation of Phe224 to Ala in the EC1-4 monomer were also performed through the CHARMM-GUI (38,39). The protein charge was neutralized with added Na+ or Cl-, and additional ions were added to mimic a salt concentration of 0.15 M. Each system was energy-minimized for 5000 steps and equilibrated in the NVT ensemble (298 K) for 1 ns. Further simulations were performed in the NPT ensemble at 298 K. The time step was set to 2 fs throughout the simulations. A cutoff distance of 12 Å was used for Coulomb and van der Waals interactions, long-range electrostatic interactions were evaluated by the particle mesh Ewald method (40), and covalent bonds involving hydrogen atoms were constrained with the LINCS algorithm (41). A snapshot was saved every 10 ps.
All trajectories were analyzed using GROMACS tools. RMSD, dihedral angles, distances between two atoms, and clustering were computed by rms, gangle, distance, and cluster modules, respectively.
The convergence of the trajectories was confirmed by calculating RMSD values of Cα atoms (Figs. S2, A and B and S8). Because the molecule showed high flexibility at the Ca2+-free linker, the RMSD of each EC repeat was calculated individually for the EC1-4WT monomer, the EC1-4F224A monomer, and the EC1-4 dimer. Five Cα atoms at the N terminus were excluded from the RMSD calculation of EC1 as they were disordered. As the RMSD values were stable after 20 ns of simulation, the first 20 ns were not considered when analyzing the trajectories.
Generation of EC3-4_plus
An MD simulation of the EC3-4 monomer was performed for 220 ns. The trajectories from 20 to 220 ns were clustered using the 'cluster' tool of GROMACS. The structure exhibiting the smallest average RMSD from all other structures in the largest cluster was termed EC3-4_plus and used for comparison with the solution (SAXS) data.
Crystallization of LI-cadherin EC1-4
Purified LI-cadherin EC1-4 was dialyzed against 10 mM Hepes-NaOH at pH 7.5, 30 mM NaCl, and 3 mM CaCl2 before crystallization. After dialysis, the protein was concentrated to 314 μM. Optimal crystallization conditions were screened with an Oryx8 instrument (Douglas Instruments) using commercial screening kits (Hampton Research). The crystal used for data collection was obtained in a crystallization solution containing 200 mM sodium sulfate decahydrate and 20% w/v PEG 3350 at 20 °C. Suitable crystals were harvested, briefly incubated in the mother liquor supplemented with 20% glycerol, and transferred to liquid nitrogen for storage until data collection.
Data collection and refinement
Diffraction data from single crystals of EC1-4 were collected at beamline BL-5A of the Photon Factory under cryogenic conditions (100 K). Diffraction images were processed with the program MOSFLM and merged and scaled with the program SCALA (42) of the CCP4 suite (43). The structure of EC1-4 was determined by molecular replacement using the coordinates of P-cadherin (PDB entry code 4ZMY) (44) and LI-cadherin EC1-2 (PDB entry code 7EV1) with the program PHASER (45). The model thus obtained was further refined with the program REFMAC5 (46) and extensively built with COOT (47). Validation was carried out with PROCHECK (48). Data collection and structure refinement statistics are given in Table 1. UCSF Chimera was used to render all molecular graphics (49).
Site-directed mutagenesis
Site-directed mutagenesis was performed as described previously (50).
SEC-MALS
The MW of LI-cadherin was determined using a Superose 12 10/300 GL column (Cytiva) with an inline DAWN8+ MALS detector (Wyatt Technology), a UV detector (Shimadzu), and a refractive index detector (Shodex). Protein samples (45 μl) were injected at 100 μM or 50 μM. Analysis was performed using the ASTRA software (Wyatt Technology). The concentration at the end of the chromatographic column was measured based on the UV absorbance. The protein conjugate method was used for the analysis because sugar chains are bound to LI-cadherin. All detectors were calibrated using bovine serum albumin (Sigma-Aldrich).
Comparison of thermal stability by DSC
DSC measurements were performed using a MicroCal VP-Capillary DSC (Malvern) from 10 to 100 °C at a scan rate of 1 °C min⁻¹. Data were analyzed using the Origin7 software.
Establishment of CHO cells expressing LI-cadherin
The DNA sequence of monomeric GFP was fused to the C-terminus of each human LI-cadherin construct for which a stable cell line was established, and the fusion was cloned into the pcDNA5/FRT vector (Thermo Fisher Scientific). CHO cells stably expressing LI-cadherin-GFP were established using the Flp-In-CHO cell line following the manufacturer's protocol (Thermo Fisher Scientific). Cloning was performed by the limiting dilution-culture method. Cells expressing GFP were selected and cultivated, and observation of the cells was performed with an In Cell Analyzer 2000 (Cytiva). The cells were cultivated in Ham's F-12 nutrient mixture (Thermo Fisher Scientific) supplemented with 10% fetal bovine serum (FBS), 1% L-glutamine or 1% GlutaMAX-I (Thermo Fisher Scientific), 1% penicillin-streptomycin, and 0.5 mg ml⁻¹ Hygromycin B at 37 °C and 5.0% CO2.
Cell aggregation assay
The cell aggregation assay was performed by modifying methods described previously (51,52). Cells were detached from the culture plate by adding 1× HMF supplemented with 0.01% trypsin and shaking at 80 rpm for 15 min at 37 °C. FBS was added to a final concentration of 20% to stop trypsinization. Cells were washed once with 1× HMF supplemented with 20% FBS and twice with 1× HMF to remove trypsin, then suspended in 1× HMF at 1 × 10⁵ cells ml⁻¹. Five hundred microliters of the cell suspension was loaded into a 24-well plate coated with 1% w/v bovine serum albumin, and EDTA was added where required. After incubating the plate at room temperature for 5 min, it was placed on a shaker at 80 rpm for 60 min at 37 °C.
MFI
MFI (Brightwell Technologies) was used to count particle numbers and to visualize the cell aggregates after the cell aggregation assay. After the assay described above, the plate was incubated at room temperature for 10 min, and 500 μl of 4% paraformaldehyde phosphate buffer solution (Nacalai Tesque) was loaded into each well. The plate was then incubated on ice for more than 20 min. Where required, images of the cells were taken using an EVOS XL Core Imaging System (Thermo Fisher Scientific) before the cells were injected into the MFI instrument.
The MFI View System Software and MFI View Analysis Suite were used for the measurements and analyses. The instrument was flushed with detergent and ultrapure water before the experiments. Cleanliness of the flow channel was checked by performing a measurement with ultrapure water and confirming that fewer than 100 particles/ml were detected. The flow path was washed with 1× HMF before measurement of the samples. The purge volume and analyzed volume were 200 μl and 420 μl, respectively. Optimize Illumination was performed before each measurement, and particles ranging in size from 1 to 100 μm were counted.
Data availability
The coordinates and structure factors of LI-Cadherin EC1-4 have been deposited in the Protein Data Bank with entry code 7CYM. All remaining data are contained within the article.
LIBRARY SERVICE STRATEGIES IN INCREASING STUDENT READING INTEREST
Reading is important for students as it influences their skills and knowledge. This research aims to explain and describe library service strategies for enhancing students' reading interest at Budi Cendekia Islamic School Junior High School and Al-Hasra Senior High School. The research method employed a qualitative case study approach. Data collection techniques included documentation, observation, and interviews. Data processing involved reduction, data presentation, and drawing conclusions. Data validity was ensured through credibility and confirmability. Research findings: (1) Reading interest analysis: students show a high interest in reading during breaks and free periods, and in books that meet their needs. (2) Service strategies: provision of fiction books, selection of reading ambassadors, procurement of relevant learning materials, book provision surveys, creation of reading corners, electronic library services, and literacy competitions. (3) Impact: increased student enthusiasm for reading inside and outside the library. Conclusion: library services have been operating at their maximum capacity to enhance students' enthusiasm for reading.
A. INTRODUCTION
Reading plays an important role for students because through this activity they can find the main ideas contained in a text, hone comprehension skills, increase vocabulary, and broaden their horizons on various topics (Purba et al., 2023). Reading must be instilled as a culture in every student because it helps students understand, utilize, analyze, and transform information correctly, so that they can develop the critical and literacy skills needed for future academic and professional success (Romadhon, 2020). Internal factors behind low student interest in reading are limited reading ability and a lack of reading habits, while external factors include a less supportive school environment, a library whose role has not been maximized, and the limited books and reading materials available (Sari, 2018).
The school library is an important learning resource that arouses students' interest in reading; therefore, its management must be optimized to provide maximum benefits (Fhadillah, 2020). The school library plays an important role in increasing students' learning motivation, reflected in the high number of student visits every week; this can be seen from the initiative of students who actively borrow textbooks to broaden their horizons and references (Prasetya, 2013). The library is not only a source of learning but also a place to develop students' interest in reading and a place of entertainment, as seen from the collection of children's storybooks and other entertainment materials provided, which helps create a pleasant environment and stimulates the imagination of students (Gusti & Bakhtaruddin, 2014). Research shows that library services also significantly affect students' learning achievement at school (Oktariani et al., 2023). In addition, research results show that school library management has a significant influence on students' reading interest, with an effect reaching 29% (Sulistianti et al., 2022).
Previous results show that effective and attractive library management strategies implemented by librarians have a positive impact on increasing reading interest; a comfortable layout, informative reference services, and an efficient circulation system contribute to strengthening this influence (Haris et al., 2022). Another strategy carried out by the head of the library to increase students' reading interest is collaboration with teachers to provide motivation and reading-related assignments, as well as training on visitor services and library facilities to ensure an optimal learning experience (Maulidiyah & Roesminingssih, 2020). The similarity with these studies is that this research also examines library services in increasing students' interest in reading. The difference, and the novelty of this study, is that it focuses on analyzing students' reading interest in two schools, the library service strategies employed, and the impact of those services on students' reading interest.
Specifically, this study aims to describe and explain the analysis of students' interest in reading at Budi Cendekia Islamic Junior High School and Al-Hasra Senior High School. In addition, the research focus includes the library service strategies implemented to increase students' reading interest. Through surveys, observations, and data evaluation, the research evaluates the impact of library services on students' reading interest. It is hoped that the results of this study can provide valuable insights for the development of literacy education in both schools, as well as serve as guidelines for the development of more effective library service strategies in the future.
B. RESEARCH METHODS
The study was conducted in two locations: Budi Cendekia Islamic Junior High School on Jalan Boulevard Grand Depok City, Kalimulya, Cilodong District, Depok City, West Java 16413, and Al-Hasra Senior High School on Jalan Raya Parung - Ciputat Km 24, Bojongsari Baru, Bojongsari District, Depok City, West Java 16516. The method used is a qualitative approach through case studies. The purpose of this study is to investigate in depth the reading interest of students in the two schools, the types of library services provided to increase students' reading interest, and the impact on students' reading interest.
Library service strategies for increasing students' interest in reading at Budi Cendekia Islamic Junior High School and Al-Hasra Senior High School were studied through interviews, observations, and documentation. Interviews with librarians, principals, teachers, and students provided direct explanations of student reading interest in both schools, the types of library services provided to increase it, and the impact on students' reading interest. Observation was conducted to directly monitor the interaction between students and library facilities, as well as to see how students use the available resources. Documentation was used to collect written data such as library policies, programs that have been implemented, and reports related to efforts to increase students' reading interest.
Data on library service strategies to increase students' interest in reading at the two schools were processed in three main stages: data reduction, data presentation, and conclusion drawing. In the reduction stage, data obtained from interviews, observations, and documentation were grouped and organized by subject matter, namely reading interest analysis, types of library services, and the impact of library services. In the presentation stage, the reduced data were presented in an easily understood form and analyzed as narratives describing the findings and the patterns emerging from the data. In the conclusion-drawing stage, conclusions were drawn from the processed and presented data regarding the effectiveness of library service strategies in increasing students' interest in reading in both schools.
To ensure the validity of the data, two validity techniques were applied: credibility and confirmability. Credibility was established through source triangulation, with data collected from various sources: interviews with teachers, students, and library staff; direct observation in the library; and analysis of related documents. Technique triangulation, collecting data through interviews, observations, and documentation, allowed the researchers to verify and confirm the findings. Confirmability was established by having internal school parties (the librarians of both schools) check the findings and interpretations, to ensure that the conclusions accord with the evidence gathered by the researchers.
The research design on library service strategies in increasing students' interest in reading at Budi Cendekia Islamic Junior High School and Al-Hasra Senior High School is shown in Figure 1.
C. RESULTS AND DISCUSSION

Analysis of Student Reading Interest
Al-Hasra students' interest in reading has increased from year to year, though it sometimes dips as school exams approach because students are not allowed to borrow books during exams. This is a concern because enthusiasm for reading is very important for improving student literacy and learning progress. Meanwhile, the reading interest of students at SMP Budi Cendekia Islamic School is very high, owing to sufficient free time and the practice of dhuha prayer before entering class, which leaves students with enough time to read during breaks. The cool atmosphere of the library also makes students comfortable, so they enjoy reading. Students' reading interest peaks during breaks and when teachers are in meetings or unable to teach. Reading interest requires guidance and encouragement from students, teachers, and parents in order to grow and develop, with each individual's own interests as the key to satisfying students' curiosity (Elendiana, 2020).
It is important to analyze students' reading interests in the library of SMP Budi Cendekia Islamic School so that book procurement can be tailored to students' preferences. The Al-Hasra library pays attention to the school's learning schedule, so books are not procured while students are taking exams. In this way, the library can provide new books that match student interests and support the learning process. To increase students' interest in reading, the use of serialized picture media can be one effective strategy.
Steps such as observing serial images, listening to and reading related paragraphs, interacting through questions and answers, and directing students' attention directly to learning can significantly boost their interest in reading (Pangestu, 2019).
The level of reading interest of Al-Hasra students is analyzed by graphing daily visits and book borrowing. The graph helps in evaluating students' interest in reading. From it, the library can identify obstacles it may face and find solutions to attract students back. For example, if visits or borrowing decline on certain days, the library can evaluate whether particular factors are affecting students' interest, such as a lack of varied, interesting books or activity schedules that compete with library time. Internal factors include intelligence, interest, attention, motivation, perseverance, attitude, reading habits, and physical and health conditions, while external factors include library availability, inadequate reading materials, lack of encouragement from teachers, lack of support from parents, economic limitations that prevent parents from providing facilities, and lack of parental attention to students' reading interest (Hapsari et al., 2019).
Strategies to Increase Student Reading Interest by the Library
A strategy for increasing students' interest in reading is a series of services carried out by the library toward that end. The purpose of these services is to inspire and encourage students to read more actively, thus supporting the development of students' literacy and knowledge. The Al-Hasra library uses an evaluation method to determine which services will be used to increase students' interest in reading. The steps include measuring performance indicators of student reading interest and the availability of reading materials in the library. Data are collected through tests, questionnaire surveys, observations, and interviews with students. After data collection, the library compiles an evaluation report, which is then discussed with teachers and school administrators. The aim is to formulate an action plan from the evaluation, such as expanding the book collection, adjusting the service model, or developing a more effective literacy program.
Collaboration among librarians, teachers, and school administrators is crucial in ensuring that the measures taken have a positive impact on students' reading interest and the overall effectiveness of library services. Collaboration among librarians, teachers, and students ensures that library services run well. Given that the current generation has weak literacy, developing and implementing a new service model is expected to improve the quality of library services and generate student interest and enthusiasm in visiting the library and borrowing books. By involving all these parties, library service models can be designed and customized to meet the needs and interests of student readers effectively. The service strategies used by the school libraries to increase students' interest in reading include the following.

a. Provision of Fiction Books

The Al-Hasra library not only offers books that support learning but also presents a collection of the latest fiction books popular among teenagers. Students can thus enjoy a variety of reading options covering not only subject matter but also engaging stories that entertain and inspire them. This aims to stimulate students' reading interest and broaden their literacy scope by capitalizing on current trends and interests in adolescent literature. Research shows that fiction stories have a significant impact on the development of a literacy culture (Ihsania, 2020).
Genre fiction can be strong evidence of the influence of fiction on literacy culture. Through fiction, readers are not only entertained but also engaged in creative, imaginative, and critical thinking. Fiction has the power to arouse curiosity, broaden horizons, and inspire readers to explore new concepts. In addition, fiction often presents moral values and life lessons that can enrich readers' understanding and experience. Efendi lists the characteristics of fiction collections as having a story idea, a plot that displays the sequence of events, characterization that describes the characters, a setting that establishes space, time, and atmosphere, and a point of view that can come from a character or a narrator (Mestika & Marlini, 2013).
b. Reading Ambassador Selection
The Al-Hasra library holds a "reading ambassador" election every year, in which students are selected as ambassadors who encourage reading among their peers. In addition, the library organizes article-writing activities using books from the library collection. This is a collaboration between the library and Indonesian language teachers to inspire new ideas that can improve students' literacy. The program is effective because learners are engaged through socialization and peer and school support. Aiming to increase literacy participation in schools, the program promises a large impact if implemented well (Widayani, 2022).
At the end of each semester, during the distribution of report cards, the library makes a public announcement to educators, staff, parents, and members of the reading ambassador committee, in the hope of motivating other students and involving parents in providing support at home (Syahidin, 2020). The tasks of reading ambassadors as information exchangers are to actively organize reading activities, promote literacy through various events, and hold discussions with generation Z; as models, they visit the library diligently, becoming examples for others in the effort to increase reading interest and literacy (Nurfadillah et al., 2023).

c. Procurement of Books Relevant to Learning

The Al-Hasra library collaborates with teachers to enrich students' learning experience by lending books relevant to the subject matter or homework. This collaboration aims to make optimal use of library resources to support cross-curricular learning and expand students' literacy coverage across subject areas. Books suitable for learning are characterized by good material, an interesting presentation, and language that is easy to understand (Rahmawati, 2015). The library currently has a quality, adequate collection of illustrated textbooks, in terms of both physical condition and informational content. This collection has succeeded in encouraging users, especially students, to be more active in using these books as additional learning resources that support the learning process at school (Saputra et al., 2016).
d. Book Supply Survey
The Al-Hasra library procures new books by distributing questionnaires to students to collect a list of books they want or need. The library then tries to acquire the books students request. By involving students in this process, the library can better understand students' reading preferences and needs, so that the book collection can be tailored to their interests more effectively. It can also increase student engagement in library use and strengthen the relationship between the library and the student community. In the learning process, students should actively use the library as an additional resource; the library has provided a collection that accords with the school curriculum, so it can meet users' information needs and support effective learning at school (Syahdan et al., 2021).
The school library of SMP Budi Cendekia Islamic School analyzes students' reading preferences and literacy needs. The library analyzes the desired collection, which should consist of at least 20% textbooks, with the remainder general reading books. This is done to meet students' needs appropriately, in line with school targets such as accreditation and participation in library competitions, and to keep up with the times by stocking the types of books most in demand for the students' age group. Identification is carried out by surveying students' opinions on the types of reading in demand in the Al-Hasra library.

e. Creating a Reading Corner

The initiative at Al-Hasra to create a "reading corner" in each classroom is a very positive step. Reading books donated by students are placed there, and each student is expected to read for at least 15 minutes before entering class. In addition, the "literacy tree" in some corners of the schoolyard is an interesting idea: books donated by students are available there for reading. The aim of these initiatives is to create a reading atmosphere throughout the school environment, not just in the library, giving students easy access to interesting reading even outside formal learning time. Involving students in donating books for the "reading corner" and "literacy tree" also strengthens their sense of ownership of the school literacy program.
The reading corner is a form of school commitment, providing a mini library in the classroom to support the 15-minute Compulsory Reading Movement recommended by the government under Permendikbud Number 23 of 2015 (Aswat & Nurmaya G, 2019). The implementation of reading corners succeeded in increasing students' literacy culture by 90% and had a positive impact on students' cognitive and affective development (Putra et al., 2022). A reading corner in each classroom gives all students an equal opportunity to develop literacy skills; equipped with bookshelves and various types of reading such as knowledge books, stories, and comics, it is an easily accessible source of inspiration for increasing reading interest and developing students' literacy skills in the school environment (Indriani et al., 2022).

f. Library Electronic Services

The Al-Hasra library is developing electronic services through the SLiMS application, a digital library application that allows the creation of a list of books available in the school library. Besides students, media are also an important resource in developing library e-services; by making greater use of media, library services can develop more effectively. The communication strategy used by the Al-Hasra library is to utilize social media, particularly Instagram. Through this platform, students can easily get information about the latest news and the books available in the school library. By using social media as a communication channel, the library can reach its audience quickly and efficiently and give students easier access to the library's collections and related information.
IT-based library services have a positive impact on users and librarians, because the service process becomes faster and more accurate, which ultimately increases user satisfaction (Widodo, 2018). In implementing information technology in libraries, attention must be paid to several things: support from the management or parent institution; operational continuity; system care and maintenance; skilled human resources; other infrastructure such as electricity, space, furniture, interior design, and computer networks; and user factors such as needs, comfort, education level, and condition (Malik, 2019). Information technology-based libraries take various forms, such as selection of library materials through electronic versions of publisher catalogs, online procurement of materials, computer-assisted processing of materials, publication of electronic catalogs and bibliographies, and user services such as online catalogs, circulation services, reference services, and even full-text services in digital format (Saleh, 2015).

g. Literacy Competition

Literacy competitions at SMP Budi Cendekia Islamic School can include various activities, such as "reading corner" competitions and the selection of "reading ambassadors". Indicators of the success of these competitions include: the number of student visits to the library, with winners being students who consistently visit to read and borrow books; the number of books borrowed, as students actively borrow more books; students' activity in literacy events such as book discussions, article writing, or shared reading; and students' active participation in competitions held by the library. The School Literacy Movement (GLS) has a significant influence on students' reading interest and learning outcomes: it encourages students to read actively and engage in literacy activities, which ultimately increases their interest in reading and improves learning outcomes (Rusniasa et al., 2021).
In addition, the Al-Hasra High School library holds article-writing activities that utilize books from the library, collaborates with Indonesian language teachers to generate new ideas for improving student literacy, and collaborates with other teachers to make the library available for borrowing books for learning or for student assignments at home. Literacy programs are developed according to students' developmental stage, enabling the selection of appropriate strategies for literacy activities, from the habituation stage through development to learning stages that suit individual needs (Rohim & Rahmawati, 2020).
The Impact of Strategies to Increase Student Reading Interest
The right library service model at SMP Budi Cendekia Islamic School and SMA Al-Hasra plays a crucial role in increasing students' enthusiasm for and participation in reading activities. One indicator of the success of the service model is the rise in students' reading interest whenever new books are added to the collection. This confirms that enriching the collection with relevant, high-quality books can give a significant boost to students' reading interest. This positive impact shows the importance of regularly monitoring the effectiveness of the library service model. Through such monitoring, librarians can identify trends in student reading interest and evaluate the success of the service model implemented. Careful monitoring also makes it possible to determine the next steps needed to improve the quality of library services and meet readers' needs. There is a significant relationship between library services and interest in library visits, which can be categorized as a strong relationship (Habir, 2015).
Through continuous evaluation of the library service model, librarians can identify areas for improvement. This includes adjusting the book collection, providing additional services that suit students' needs, and developing promotion and encouragement strategies to increase students' participation in reading activities. In this way, librarians can ensure that library services remain relevant and effective in supporting students' literacy development. Library services have a significant influence on student satisfaction: by identifying students' information needs and helping them learn to search for and find appropriate information sources, the library leaves students feeling effectively served, increasing their satisfaction with the library experience (Amalia, 2023).
The better the quality of library services and of the activities carried out by librarians, the more likely it is that users' information needs are met as expected, which ultimately leads to optimal library user satisfaction (Risparyanto, 2022).
D. CONCLUSIONS
Overall, library services have played a crucial role in increasing students' reading interest. By analyzing students' reading interests during breaks and free time in depth, and by providing book collections that match their preferences, the libraries have been able to design effective service strategies. Measures such as providing interesting fiction books, selecting reading ambassadors, and procuring books aligned with the curriculum have helped create an environment that significantly boosts students' reading interest. Initiatives such as surveying students' reading preferences, creating friendly reading corners, offering e-library services that ease access, and holding literacy competitions have made a real impact. As a result, students' enthusiasm for reading, both inside and outside the library, has consistently increased. Together, the service strategies implemented have created a supportive environment for more active and in-depth literacy exploration. This not only strengthens the library's role as a knowledge center but also enriches students' overall learning experience.
Figure 1. Research design
Structures suggest a mechanism for energy coupling by a family of organic anion transporters
Members of the solute carrier 17 (SLC17) family use divergent mechanisms to concentrate organic anions. Membrane potential drives uptake of the principal excitatory neurotransmitter glutamate into synaptic vesicles, whereas closely related proteins use proton cotransport to drive efflux from the lysosome. To delineate the divergent features of ionic coupling by the SLC17 family, we determined the structure of the Escherichia coli D-galactonate/H+ symporter DgoT in 2 states: one open to the cytoplasmic side and the other open to the periplasmic side with substrate bound. The structures suggest a mechanism that couples H+ flux to substrate recognition. A transition in the role of H+ from flux coupling to allostery may confer regulation by trafficking to and from the plasma membrane.
Introduction
Secondary active transporters couple the movement of ions down their electrochemical gradient to the concentration of substrate. Differences in the mechanism of energy coupling enable them to meet diverse physiological requirements. In general, transporters related in sequence exhibit similar ionic coupling. However, a highly conserved transporter family contains members that use distinct mechanisms of energy coupling to transport substrate in opposite directions, suggesting divergence from a common underlying mechanism.
The concentration of organic anions has a crucial role in diverse biological processes, from nutrient uptake by cells to the packaging of neurotransmitter into secretory vesicles for subsequent release by exocytosis. Members of the solute carrier 17 (SLC17) family catalyze the coupled transport of anions driven by a proton electrochemical gradient Δμ_H+ (= Δψ − 2.3(RT/F)ΔpH, where R is the gas constant, T is temperature, F is Faraday's constant, and ΔpH = pH_in − pH_out), with contributions from both the electrical potential Δψ and the proton gradient ΔpH. To take advantage of environmental conditions, members of the SLC17 family have tuned their response to the available driving force, illustrating an unusual form of adaptive divergence. The SLC17 protein sialin uses H+ symport [1,2] to drive the anion sialic acid out of lysosomes (Fig 1A). The electroneutrality that derives from coupling to H+ enables the efflux of negatively charged sialic acid despite a lumen-positive Δψ. In contrast, vesicular glutamate transporters (VGLUTs), also of the SLC17 family, rely on membrane potential Δψ (inside positive) to accumulate the principal excitatory transmitter glutamate inside synaptic vesicles (Fig 1A) [3-6]. In this case, glutamate uptake occurs against the outwardly directed H+ gradient. The VGLUTs thus exhibit a dependence on components of Δμ_H+ different from that of other SLC17 family members such as sialin. Sialin has itself been reported to confer vesicular transport of aspartate and glutamate [7,8]. Although the physiological significance remains uncertain [9,10], this finding would suggest that a single protein can catalyze both activities. The VGLUTs also differ from other vesicular neurotransmitter transporters (not of the SLC17 family) that act as H+ exchangers [11]. They nonetheless retain a pH dependence that limits their activity to uptake by acidic compartments such as synaptic vesicles and prevents nonvesicular glutamate efflux across the plasma membrane, where the pH is neutral [12].
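To make the sign conventions concrete, the relation can be evaluated numerically. The sketch below is illustrative only: the E. coli values are those cited in the Discussion, the organelle values are assumed, and the convention ΔpH = pH_in − pH_out is ours.

```python
# Minimal sketch (illustration only, not the authors' code): evaluating
# delta_mu_H+ = delta_psi - 2.3*(R*T/F)*delta_pH, with the convention
# delta_pH = pH_in - pH_out and all potentials in mV.

Z_MV = 58.8  # 2.3*R*T/F at 20 C, the value quoted in the Discussion


def delta_mu_h(delta_psi_mv: float, ph_in: float, ph_out: float) -> float:
    """Proton electrochemical gradient in mV; negative favors inward H+ flux."""
    return delta_psi_mv - Z_MV * (ph_in - ph_out)


# E. coli plasma membrane, using the conditions cited in the Discussion:
# delta_psi = -60 mV, external pH 5.5, internal pH 7.5.
print(delta_mu_h(-60.0, ph_in=7.5, ph_out=5.5))  # approx. -178 mV: both terms inward

# An acidic organelle with a lumen-positive potential (assumed values):
# taking the lumen as "in", both terms now favor H+ efflux from the lumen.
print(delta_mu_h(+40.0, ph_in=5.0, ph_out=7.2))  # approx. +169 mV
```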
How do different components of Δμ_H+ drive transport in different directions through closely related proteins? Mammalian transport proteins of the SLC17 family show up to 27% sequence identity to bacterial relatives. Although of undefined function, many of the bacterial SLC17 genes occur within operons dedicated to the metabolism of acidic sugars such as galactonic, glucaric, and galacturonic acids [13]. To understand how the SLC17 family has adapted to divergent roles in metabolism and neurotransmitter transport, we determined the crystal structures of a D-galactonate transporter in 2 conformers.
DgoT is a proton symporter highly selective for D-galactonate
The D-galactonate transporter (DgoT) gene lies within an operon devoted to the metabolism of D-galactonate, suggesting a role in transport of this acidic sugar [14]. Because the ionic coupling was not known, we reconstituted purified, recombinant DgoT into liposomes containing the pH-sensitive fluorophore 8-hydroxypyrene-trisulfonate (pyranine; S1 Fig). With an inwardly directed H+ gradient, addition of D-galactonate outside quenches the fluorescence of lumenal pyranine, indicating galactonate/H+ cotransport (Fig 1B). Without DgoT, without D-galactonate, or with the galactonate epimer gluconate, there is no effect on the pyranine fluorescence of proteoliposomes, suggesting high selectivity for galactonate. Because entry of the anion galactonate might alone create the inwardly negative Δψ that drives H+ influx, we tested the coupling between galactonate and H+ by adding galactonate to proteoliposomes with an outwardly directed H+ gradient (Fig 1C). Under these conditions, galactonate still quenches the pyranine fluorescence of proteoliposomes but not of control liposomes, indicating that translocation of H+ and galactonate is coupled by DgoT, not by changes in Δψ. Further excluding a role for Δψ in the acidification by galactonate, dissipation of Δψ by the K+ ionophore valinomycin increases fluorescence quenching by galactonate (Figs 1C and S1). Indeed, the stimulation by valinomycin shows that Δψ opposes galactonate/H+ influx, suggesting that DgoT moves net charge, presumably positive. The transport activity demonstrates that purified DgoT is functional, enabling its use for structural studies. In addition, the H+ cotransport mechanism indicates similarity to the lysosomal sialic acid/H+ cotransporter sialin. Unlike electroneutral sialin, however, the effect of membrane potential on DgoT activity suggests that galactonate flux is coupled to the movement of more than one H+.
Open inward structure of wild-type DgoT
To determine the structure of DgoT, the wild-type (WT) protein was crystallized at pH 5.35 in an inward-open conformation, in crystal space group P21. The nonsubstrate gluconate improved the diffraction, and both it and β-nonylglucoside (β-NG) were identified, bound nonspecifically, within the aqueous cavity of the structure (S2 Fig). The structure was determined to a resolution of 2.9 × 3.8 × 3.8 Å by molecular replacement (MR), with trials based in turn on all available major facilitator superfamily (MFS) transporters, because MFS members typically each crystallize in a different part of the transport cycle. The correct MR solution derived from a polyalanine model of the (inward-facing) glycerol-3-phosphate transporter (GlpT; 1PW4) [16] and was subsequently built and refined to an R-factor/Rfree of 24.9%/29.7% (S1 Table). The asymmetric unit contains 2 almost identical, antiparallel monomers of DgoT, with 4 molecules of β-NG between the 2 molecules of DgoT and a root-mean-square deviation (rmsd) of 0.57 Å between their congruent 392 Cα atoms.
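The rmsd values quoted here and below reflect least-squares superposition of equivalent Cα atoms; a minimal sketch of that calculation (the standard Kabsch algorithm, run here on synthetic coordinates rather than the deposited models) follows.

```python
# Sketch: Calpha RMSD after optimal superposition (Kabsch algorithm),
# the kind of calculation behind the 0.57 A figure quoted above.
# Illustrative only; real input would be the two DgoT monomers' coordinates.
import numpy as np

def kabsch_rmsd(p: np.ndarray, q: np.ndarray) -> float:
    """RMSD between Nx3 coordinate arrays p and q after optimal rotation."""
    p = p - p.mean(axis=0)
    q = q - q.mean(axis=0)
    h = p.T @ q
    u, s, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # optimal rotation matrix
    diff = p @ r.T - q
    return float(np.sqrt((diff ** 2).sum() / len(p)))

# Toy check: a rotated copy of random "Calpha" coordinates gives RMSD ~ 0.
rng = np.random.default_rng(1)
coords = rng.standard_normal((392, 3))
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
print(kabsch_rmsd(coords @ rot.T, coords))  # ~0.0
```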
The structure contains residues 27 through 443, lacking a short unstructured cytoplasmic loop (residues 235-242) between transmembrane (TM) domains 6 and 7 (Fig 2A). DgoT is composed of 12 TM helices and belongs to the MFS fold [17]. Two 6-helix bundles (the N- and C-terminal domains) are related by a pseudo 2-fold symmetry axis perpendicular to the membrane plane. The DgoT structure has 2 intracellular helices (ICH1 and ICH2) between the N- and C-terminal domains, similar to sugar transporters of the MFS [18-20]. The periplasmic ends of TM1, 2, and 5 from the N terminus and TM7, 8, and 11 from the C terminus form contacts that seal the aqueous cavity from the periplasmic side.
Proton flux to an N-terminal polar pocket
Although the main cavity of the inwardly oriented structure is sealed to the outside, the N-terminal lobe of DgoT contains a tunnel that extends from the periplasmic surface near glutamate-180 (Glu180; TM5) to a membrane-embedded site at aspartate-46 (Asp46; TM1; Figs 2B, 2C and S4). The average diameter of the tunnel near its entrance (3.6 Å) and exit (5.0 Å) suggests accessibility to a line of water molecules. Polar residues lining the entrance (Glu59, glutamine-179 [Gln179], Glu180), interior (threonine-172 [Thr172], Thr297), and exit (Asp46) have the potential to interact with H3O+ and could therefore facilitate H+ transport. On the other hand, residues phenylalanine-278 (Phe278), proline-279 (Pro279), and Thr297 reduce the tunnel diameter to 2.4 Å, about the diameter of a water molecule (2.75 Å; Fig 2C), suggesting a dynamic mechanism to control access of H3O+ from the periplasm. A positively charged region near Pro279 will also raise the energy barrier for H+ movement toward the site of potential protonation at Asp46 (S4 Fig). With its side-chain carboxyl sequestered, Asp46 is predicted to have a pKa of approximately 6.0 (H++ server) [21], and this could be much higher due to screening by the lipid bilayer. Thus, Asp46 is most probably protonated (uncharged) in this structure (Fig 2).
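These pKa arguments can be made quantitative with the Henderson-Hasselbalch relation. The sketch below is illustrative only: the pKa values are the H++ predictions quoted in this section, pH 5.35 is the crystallization pH described in the Methods, and pH 7.4 is an assumed near-neutral reference.

```python
# Sketch: fraction of an acidic side chain in the neutral (protonated, -COOH)
# form at a given pH, from the Henderson-Hasselbalch relation.

def fraction_protonated(pka: float, ph: float) -> float:
    """Fraction of the acid that carries its proton at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

print(fraction_protonated(pka=6.0, ph=5.35))  # Asp46 at crystallization pH: ~0.82
print(fraction_protonated(pka=8.1, ph=7.4))   # Glu133 at an assumed near-neutral pH: ~0.83
```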
At the end of the tunnel, a pocket of conserved polar residues buried within the membrane-spanning region of the N-terminal domain lies adjacent to Asp46 (Fig 2C and 2D). In this pocket, residues conserved between sialin and the VGLUTs include arginine-47 (Arg47; TM1), Arg126, and Glu133 (TM4). However, Asp46 itself is not conserved in metazoan SLC17 proteins, suggesting it may contribute to the difference in function between family members. The buried Glu133 residue has a predicted pKa of approximately 8.1 (H++ server) [21], suggesting that it is protonated. Although Arg47 is directly adjacent, the structure thus indicates that the 2 residues do not form a charge pair in this inward-facing state. Multiple TM helices contribute less well conserved residues to this unusual charged pocket buried in the transmembrane region (Fig 2D).
To test the functional significance of Asp46 and Glu133 for galactonate transport by DgoT, we replaced each separately with an uncharged residue (asparagine [D46N] and glutamine [E133Q], respectively) to mimic the electroneutrality of their protonated states. When reconstituted into liposomes, purified D46N and E133Q each reduce acidification by D-galactonate to the level observed with nontransported gluconate (Fig 3A).
To characterize galactonate transport by DgoT directly rather than rely on H+ flux to infer its properties, we developed a quantitative assay based on radiotracer uptake. Expressed in the E. coli DgoT mutant, WT DgoT confers robust uptake of 14C-galactonate relative to empty vector (Fig 3B). Using this assay, we reexamined the substrate specificity of DgoT by competition. Nonradioactive galactonate (1 mM) eliminates uptake of 14C-galactonate, whereas the epimer gluconate or the cyclized neutral sugar galactose (10 mM) has no effect, supporting the remarkable specificity for galactonate (Fig 4A). However, high levels (100 mM) of gluconate do eliminate uptake, indicating some recognition of the epimer. The Michaelis-Menten constant (Km) for galactonate is 18 ± 4 μM (Fig 4B), approximately 100-fold lower than for glutamate transport by the VGLUTs (1-3 mM) [3,15] and approximately 20-fold lower than for sialic acid transport by sialin (approximately 0.5 mM) [1,2], tuning the Km values to the respective available substrate concentrations. To assess energy coupling by DgoT, we monitored 14C-galactonate uptake.

[Fig 2 legend (partial): ... the N to C terminus; the dotted line represents unstructured regions. (B) Overall view of inward-facing DgoT in cylinder representation, showing the putative water tunnel (tan surface) with the surrounding residues shown as sticks. (C) A close-up view of the tunnel highlights the entrance and exit points. (D) Putative sites of protonation (Asp46 and Glu133) lie in a membrane-embedded pocket of polar residues buried within the NTD (crossed-eye stereo). The polar and charged residues composing the pocket are shown as sticks. Electron density is from a composite, simulated annealing "omit map" to eliminate model bias (1σ). The outer surface and cavities formed from interdomain contacts are shown in blue (NTD) and green (CTD). CTD, C-terminal domain; NTD, N-terminal domain.]
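A Km of this kind is typically obtained by fitting initial uptake rates to the Michaelis-Menten equation, v = Vmax[S]/(Km + [S]). The sketch below illustrates such a fit on synthetic data (the concentrations, noise level, and Vmax are assumptions for illustration, not the study's measurements).

```python
# Sketch of a Michaelis-Menten fit, v = Vmax*S/(Km + S); synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    return vmax * s / (km + s)

# Hypothetical galactonate concentrations (uM) and uptake rates (arbitrary units)
s = np.array([2, 5, 10, 20, 50, 100, 200], dtype=float)
rng = np.random.default_rng(0)
v = mm(s, vmax=1.0, km=18.0) * (1 + 0.05 * rng.standard_normal(s.size))

(vmax_fit, km_fit), _ = curve_fit(mm, s, v, p0=[1.0, 10.0])
print(f"Vmax = {vmax_fit:.2f}, Km = {km_fit:.1f} uM")  # Km comes out near 18 uM
```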
Consistent with the absence of H+ flux, D46N and E133Q DgoT also accumulate very little 14C-galactonate in the radiotracer assay (Figs 3B and S8). Thus, active transport requires reversible protonation of both Asp46 and Glu133. The location of these residues at the end of the H+ tunnel from the periplasm further suggests specific roles in the H+ translocation pathway.
Substrate binding stabilizes the occluded state
The position of Glu133 at the end of the putative H + tunnel, its importance for active transport, and its proximity to Arg47 facing the substrate cavity suggested that Glu133 might occupy a central role coupling H + flux to transport of D-galactonate. We were able to crystallize E133Q DgoT with D-galactonate bound. At pH 9.0, this complex crystallized in an outward-facing conformation, in a different crystal form than for the WT protein alone, and refined to an R-factor/Rfree of 28.5%/30.0% at 3.5 Å resolution (Fig 5A and S1 Table). There are 2 independent copies of the protein in each asymmetric unit. This is the case in each of the 2 crystal forms; i.e., there are 2 independent structures of the molecule in each of the 2 molecular configurations. In each of the 2 major domains within each molecule, the congruence of intramolecular contacts and hydrogen bonds (i.e., 4 independent observations) provide strong cross-validation of intradomain interactions. As with the inward-open conformation, 2 essentially identical monomers (rmsd of 0.63 Å between their 345 Cα atoms) pack in pairs closely against each other sharing contacts from the lipophilic surface but in an antiparallel arrangement in the asymmetric unit (i.e., one molecule with its cytoplasmic side up packs against another with its cytoplasmic side down in a noncrystallographic 2-fold orientation). The structure includes residues 24 to 443 except for the 11-amino acid dynamic region in the cytoplasmic loop (residues 231-243) and 13-amino acid periplasmic end of TM7 (residues 276-291). In both conformations, the substrate cavity has a positive electrostatic surface potential to match the negatively charged substrate (Fig 5A and 5B). Computational modeling of glutamate at the position of Gln133 in the E133Q mutant structure predicts a pKa of approximately 10.1 [21]. Glu133 would thus be protonated in this conformation, supporting its possible role in proton transfer.
In the outward-facing structure, Arg47 (TM1) forms a direct electrostatic interaction with the carboxyl group of D-galactonate through a 3.6 Å salt bridge (Fig 6). Arg47 is conserved in almost all other members of the SLC17 family from bacteria to mammals, consistent with this role. The 3.6 Å distance between the ε-NH2 of Arg47 in DgoT and the galactonate carboxyl suggests that the salt bridge is primarily electrostatic and may not involve hydrogen bonds, perhaps maintaining specificity without an affinity too high for rapid flux. In addition, the conserved tyrosine-79 (Tyr79) coordinates the carboxyl group, suggesting a common role for these 2 residues in recognition of the carboxyl across the SLC17 family (Figs 6A and S3). Tyr44 in DgoT also coordinates the carboxyl group, although it is replaced by phenylalanine in the VGLUTs.
Four residues (Asn393, Gln264, serine-370 [Ser370], and Gln164) provide 8 hydrogen bonds with the 5 substrate −OH groups and confer high specificity (Fig 6A). The geometry of the stereogenic center C4-OH, recognized by Asn393 in TM11 and the less conserved Gln264 in TM7, may account for the selectivity for galactonate over its epimer gluconate, which competes only at high concentrations. The different geometry of gluconate in solution may also contribute. In contrast to sugar transporters, which generally segregate H+ and substrate translocation to different lobes, both the N-terminal (Tyr44 and Arg47 from TM1, Tyr79 from TM2, and Gln164 from TM5) and C-terminal domains (Gln264 from TM7, Ser370 from TM10, and Asn393 from TM11) of DgoT participate in substrate recognition. The periplasmic gate resembles that of other MFS transporters (among them lac permease [LacY]), suggesting a common mechanism for periplasmic gating [17,22,23]. Preventing access to the cytoplasmic side, TM4 and TM10 form major interhelical contacts in the outward-facing state. Aromatic residues Phe137 in TM4 and tryptophan-373 (Trp373) in TM10 both make hydrophobic interactions with the aliphatic surface of D-galactonate (Fig 6A and 6C). Removing the aromatic side chain of the equivalent phenylalanine residue in sialin increases the Km 15-fold (and the maximal rate [Vmax] 3-fold) for the physiological substrate glucuronic acid, with little effect on transport of sialic acid [24], supporting a role in substrate recognition and transport.
Comparing the 2 structures reveals hinges in TM4 and TM10 that accommodate changes associated with alternating access and substrate binding. In the outward-facing conformation, TM4 has a kink at Pro135, whereas TM10 is continuous (Fig 6C and 6E). In the inward-facing conformation, TM4 is continuous, whereas TM10 has a kink (Figs 6E and S5). In the outward-facing, substrate-bound state, Trp373 interacts with Ser377 (in TM10) and isoleucine-385 (Ile385) from TM11 (Fig 6C). Helical kinks are also seen in sugar transporters of the MFS [17,19,20], although in these proteins the kinks occur within a single domain, perhaps because only that domain participates in substrate recognition.
Arg47 interacts with the substrate carboxyl in the outwardly oriented E133Q structure, but the residue occupies the same position relative to the rest of the N-terminal lobe as observed in the inwardly oriented state (S6 Fig). Protonation of Glu133 thus enables Arg47 to remain associated with D-galactonate without a major change in position of the TM1 arginine. However, the positions of other residues that contact galactonate, especially F137 and W373, show apparent movement between the 2 states (S5 and S6 Figs). In addition to the rotation in TM7, this movement in the substrate site closes the main cytoplasmic gate, thereby occluding the substrate from both the periplasmic and cytoplasmic sides.
The outwardly oriented structure lacks the putative H+ tunnel observed in the inwardly oriented state. The N- and C-terminal lobes that line the periplasmic entry site separate in the outwardly oriented structure, and the remaining tunnel is blocked (S7 Fig). However, it is plausible that closure of the tunnel secures protonation of the buried carboxyls as a key part of the cycle. The critical Glu133 has no clear access to the main substrate cavity in either structure.

[Fig 6 legend (partial): ... Y44 and Y79 form hydrogen bonds, and Arg47 forms a salt bridge, with the carboxyl group of D-galactonate. The Fo-Fc density map of D-galactonate (green mesh) was contoured at 3σ, and the 2Fo-Fc density map (gray mesh) of DgoT residues at 1σ. (B) The overall structure of DgoT in cylinder representation defines the views shown in panel A (black rectangle), panel C (purple), and panel D (orange). N- and C-terminal domains are shown in blue and green, respectively; ICH1 in cyan and ICH2 in bright green; D-galactonate (yellow) in stick representation. (C) Hydrophobic gating residues F137 and W373 interact with the substrate while forming contacts between the N- and C-terminal domains. N141 forms a cytoplasmic gate by hydrogen bonding with the backbone carbonyl of W373 and the hydroxyl of S377.]
Discussion
A critical question is how DgoT and sialin cotransport H + with substrate, whereas the VGLUTs concentrate anion in a direction opposite the H + electrochemical gradient. The 2 conformations of DgoT suggest a pathway for H + translocation by the SLC17 family. The inward-facing WT structure has a tunnel capable of water-mediated H + movement from the periplasmic space to a polar pocket in the N-terminal lobe. This putative H + pathway terminates at Asp46, which may serve as a possible conduit for H + to nearby Glu133, a residue highly conserved among SLC17 family members from bacteria to mammals, including the VGLUTs and sialin. Protons may access these acidic residues from the main vestibule rather than through the putative tunnel, but the crystal structures do not reveal an obvious alternate pathway. Because coupled ions in symporters generally access from the same side as the substrate, the presence of a H + tunnel to the periplasm despite inward orientation to the cytoplasm makes DgoT unusual among solute carriers. Although we do not know the significance, this configuration could confer the potential for coupling to different steps of the transport cycle (e.g., reorientation of the empty carrier to the outside), allosteric activation, or even transport in different directions (as in the VGLUTs).
In both structures, Glu133 is sequestered from the solvent, so it is predicted to have a high pKa and be protonated. Mutation of either Asp46 (D46N) or Glu133 (E133Q) to mimic their neutral and/or protonated state eliminates active transport, as in other H + -driven transporters [25]. Because sensitivity to valinomycin indicates that transport is electrogenic and suggests coupling to more than a single H + , Asp46 and Glu133 are candidates for transport of these protons. It is important to note that coupled to H + , DgoT concentrates substrate driven by a negative Δψ, in contrast to the positive Δψ that drives glutamate uniport by the VGLUTs.
We have not determined the stoichiometry of H+:D-galactonate transport by DgoT, but thermodynamic considerations indicate how many protons could be driven uphill by a galactonate gradient (Fig 1B and 1C), with implications for the concentration gradient of D-galactonate that could be produced in E. coli. The electrochemical gradient of protons across the membrane is composed of interconvertible electrical and chemical components according to Δμ_H+ = Δψ − 2.3(RT/F)ΔpH, where 2.3RT/F is 58.8 mV at 20°C.
Moving a single charge across a potential difference E_m = 58.8 mV corresponds to a free energy of N_A·e·E_m = F·E_m, approximately 1.4 kcal/mol. Moving a proton uphill against a ΔpH of 0.3 (Fig 1C) when Δψ = 0 (in the presence of valinomycin) thus requires 0.3 × 1.4 = +0.42 kcal/mol. Without valinomycin, transport is reduced as an opposing Δψ develops due to excess positive charge, suggesting that the H+/D-galactonate stoichiometry is greater than 1 [26,27]. A 10:1 gradient (out:in) of D-galactonate would provide a chemical free energy of ΔG = RT ln(C_in/C_out) = 1.4 log10(C_in/C_out) kcal/mol = −1.4 kcal/mol. Therefore, this gradient could pump approximately 2 to 3 H+ against the initial ΔpH of 0.3, consistent with electrogenic transport and the observed stoichiometry of >1. In E. coli at an external pH of approximately 5.5 and internal pH of approximately 7.5 (hence ΔpH = 2), with Δψ = −60 mV [28,29], electroneutral cotransport of a single proton by DgoT would contribute a ΔG of approximately −2.8 kcal/mol and produce a galactonate gradient of approximately 100:1. If the stoichiometry were 2 H+ per substrate, the resulting electrogenic transport would contribute a ΔG of approximately −7 kcal/mol and produce a galactonate gradient of approximately 10^5:1. At pH 7.5 outside, when ΔpH = 0 and Δμ_H+ ≈ −100 mV, all due to Δψ [29], cotransport of a single H+ (with no net charge movement) would result in equilibration of galactonate (i.e., no gradient), whereas electrogenic transport driven by 2 H+ would contribute a ΔG of approximately −2.3 kcal/mol (due only to Δψ) and generate a gradient of approximately 50:1. Thermodynamic considerations thus help us to understand the physiological rationale for a H+ stoichiometry greater than 1.
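The arithmetic in this paragraph can be checked with a short script. The sketch below is illustrative only; it uses the free-energy constants and membrane values quoted above, and the sign conventions (ΔpH = pH_in − pH_out, Δψ = ψ_in − ψ_out) are ours.

```python
# Sketch checking the stoichiometry arithmetic in the text (illustration only).
# Symport of n H+ with one galactonate(1-) moves net charge (n - 1) inward.
# 1.4 kcal/mol corresponds to one elementary charge crossing 58.8 mV, and
# also to one 10-fold concentration ratio (2.3*R*T at 20 C).

KCAL_PER_58_8_MV = 1.4
KCAL_PER_LOG10 = 1.4

def galactonate_gradient(n_protons: int, delta_psi_mv: float, delta_ph: float) -> float:
    """Equilibrium C_in/C_out of galactonate sustained by n cotransported H+."""
    # Free energy released per transport cycle (kcal/mol), inward direction:
    dg_chemical = -n_protons * KCAL_PER_LOG10 * delta_ph            # H+ down its pH gradient
    dg_electrical = (n_protons - 1) * KCAL_PER_58_8_MV * delta_psi_mv / 58.8
    dg = dg_chemical + dg_electrical
    # At equilibrium, this energy is stored as a substrate gradient:
    return 10.0 ** (-dg / KCAL_PER_LOG10)

# E. coli at external pH 5.5 (dpH = 2, psi = -60 mV), values from the text:
print(galactonate_gradient(1, -60.0, 2.0))    # ~1e2 (electroneutral, ~100:1)
print(galactonate_gradient(2, -60.0, 2.0))    # ~1e5
# External pH 7.5 (dpH = 0, psi ~ -100 mV):
print(galactonate_gradient(1, -100.0, 0.0))   # 1.0 (no gradient)
print(galactonate_gradient(2, -100.0, 0.0))   # ~50:1
```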
In the outward-open substrate-bound structure, Arg47 in DgoT interacts with the carboxyl on the anionic substrate and is highly conserved among other SLC17 members from bacteria to mammals. Two exceptions are the mammalian transporters SLC17A3 and SLC17A4, in which the equivalent residue is, respectively, asparagine and glutamine. These proteins transport organic anions, some of which (e.g., uric acid) do not contain a carboxyl group [30,31], supporting a specific role for the TM1 arginine in DgoT and other SLC17 proteins in recognition of the substrate carboxyl [32,33]. In total, 9 hydrogen bonds contribute directly to substrate recognition, of which 3 are donors to the carboxyl, and the remaining 6 can be acceptors to each of the substrate −OH groups, conferring the high specificity observed. In addition, both N-and C-terminal domains of DgoT participate in substrate recognition, whereas sugar transporters generally segregate H + and substrate translocation to different lobes. Furthermore, comparison of the 2 structures shows that major conformational changes and local movement around the substrate site occur to accommodate substrate for transport.
The 2 structures and the functional effects of mutations suggest a mechanism for coupling H + to substrate translocation in the transport cycle. In the outward-facing conformation of DgoT, Glu133 is protonated. As a result, Arg47 can provide the positive charge required to complement the incoming substrate carboxyl (Fig 7) [34]. Sequestration of the substrate then stabilizes the neutral, protonated state of Glu133, consistent with the high pKa of the buried carboxyl inaccessible to the solvent. The transporter then reorients to the cytoplasmic side, where it releases substrate and H + . Similar to substrate, cytoplasmic release of H + may occur from the substrate cavity, in contrast to the apparently specialized pathway from the periplasm. Glu133 must therefore deprotonate to enable movement of the empty transporter, a reduction in its pKa (by occlusion with water) accompanying formation of a charge pair with Arg47. Thus, Glu133 may release H + via water to the side that closes when and only when Arg47 does not interact with substrate. After reorientation of the empty carrier from inside to out, another H + enters, displacing the H + bound at Asp46 to Glu133 so that external substrate can again bind.
The VGLUTs show conservation of Arg47, Arg126, and residues that contact galactonate in DgoT. The arginine in TM4 plays a critical role in glutamate transport by VGLUT [35] as well as in sialic acid transport by sialin [24]. Of the 2 key protonatable residues in DgoT (Asp46, Glu133), the VGLUTs (which are not driven by proton flux) contain only the highly conserved glutamate in TM4 (equivalent to Glu133 in DgoT) that is required for glutamate transport [36]. The VGLUTs thus retain the crucial H + binding site of bacterial relatives (Glu133 in DgoT) and sialin. The VGLUTs function in an acidic environment, and protonation of Glu133 could be the titratable site for allosteric activation by H + that we have previously observed [12]. Indeed, mammalian SLC17 proteins that operate at the plasma membrane (e.g., Na+-dependent phosphate transporter (NPT) 1-4) lack the conserved glutamate in TM4 and do not depend on low pH [30,37]. These observations suggest that, very similar to their role in coupled transport, H + may allosterically activate the VGLUTs by protonation of the TM4 glutamate for translocation of the loaded carrier and by deprotonation of the same site for reorientation of the empty carrier. The difference between the 2 mechanisms is that H + are not coupled to glutamate transport, and VGLUTs indeed lack the Asp46 required for coupled transport by DgoT.
The structures also help us to understand the effect of mutations in the lysosomal H + symporter sialin, which mediates efflux of sialic acid, a monosaccharide essential for oligosaccharide synthesis. Mutations in sialin cause a storage disease due to ineffective export and hence lysosomal accumulation of sialic acid [38]. The H183R mutation in human sialin prevents efflux of sialic acid, leading to infantile sialic acid storage disease (ISSD) [38,39]. This mutation eliminates transport activity with no effect on sialin expression or localization [1,2]. DgoT contains an asparagine at the equivalent position in TM4, and this residue (Asn141) forms hydrogen bonds with TM10 through the hydroxyl group of Ser377 and the backbone carbonyl of Trp373, thereby positioning Trp373 to interact with the substrate directly (Figs 6B, 6C and S5). Thus, the H183R mutation in sialin would impair cytoplasmic closure of the aqueous cavity and so inhibit outward opening of sialin, accounting for the loss of transport activity and the severe clinical phenotype.
Sialin and the VGLUTs (but not the sugar transporters) also contain sequences corresponding to DgoT ICH1 and, to a slightly lesser extent, ICH2. The R39C mutation in sialin causes a milder sialic acid storage disorder, Salla disease [38,39]. In contrast to H183R, R39C reduces but does not eliminate transport [1,2]. The corresponding residue Arg29 in DgoT lies at the cytoplasmic end of TM1, surrounded by conserved residues in TM5 and ICH1 (Fig 6D). It forms a salt bridge with Glu152 in TM5. Thus, the R29C mutation in DgoT would disrupt interactions between TM1, TM5, and ICH1 (Fig 6D). Conserved residues Glu219 and Tyr222 in the ICH1 domain form salt bridges with, respectively, Arg28 (TM1) and His151 (the loop between TM4 and TM5). ICH1 thus provides an alternative mechanism to connect TM1 and TM5. As a result, the R39C mutation in sialin might be expected to impair but not eliminate the tripartite interactions between TM1, TM5, and ICH1, accounting for the Salla disease phenotype.

[Fig 7 legend (partial): To account for electrogenic transport, we presume that Asp46 would also lose a H+ to the inside before this transition of the empty carrier. After reorientation to the outward-facing state, reprotonation of Asp46 and Glu133 (top middle) would allow Arg47 to interact with substrate (top left) and complete the cycle. We presume that the H+ tunnel to the periplasm occurs in the outward (as well as inward) orientation, before occlusion of the substrate, although we do not know whether proton(s) use this pathway or the main cavity to access the N-terminal polar pocket with Asp46 and Glu133. Arrows indicate the direction for inward uptake, although all the reactions are reversible. The VGLUTs lack an acidic residue in TM1 equivalent to Asp46 in DgoT and hence do not couple to H+. However, they contain a glutamate in TM4 equivalent to Glu133, and analogy with DgoT suggests that protonation of this site from the outside could allow the arginine in TM1 to bind neurotransmitter and facilitate transport. Protonation and deprotonation of a surrogate of Glu133 from the same side could account for allosteric activation of the VGLUTs by H+. This, in turn, could account for the efficient glutamate transport activity of synaptic vesicles, which is then inhibited by the higher pH in the synapse after vesicle fusion with the plasma membrane.]

In summary, the 2 structures of DgoT suggest a common mechanism for divergent ionic coupling by the SLC17 family: protonation of Glu133 effectively releases Arg47 to bind and translocate substrate. If there is no substrate bound, Glu133 must give up its H+ so that it can form a charge pair with Arg47 and reorient empty to complete the transport cycle. Thus, translocation of the conjoint substrate site (loaded with substrate or empty) mandates neutralization of Arg47 and Glu133, either by binding of substrate to Arg47 and protonation of Glu133 or by formation of a charge pair between deprotonated, charged Glu133 and Arg47.
Cloning and expression
The full-length E. coli DgoT gene (accession number AKK15832.2) was subcloned into pQE60 (Qiagen, Venlo, Netherlands) with a C-terminal modified decahistidine tag preceded by a thrombin cleavage site. Mutant constructs were generated from this plasmid by site-directed mutagenesis and confirmed by sequence analysis. The protein was produced in 2 different ways: first for crystallization in the lipidic cubic phase (LCP) and second for crystallization by vapor diffusion.
DgoT purification for vapor diffusion crystallization
For WT crystals, E. coli C41 cells transformed with pQE60 DgoT WT were grown at 37°C in TB medium supplemented with 2 mM MgSO4. At a cell density of OD600 0.6 to 0.8, the temperature was reduced to 18°C, and the culture was induced with 0.5 to 1 mM IPTG, incubated at 18°C, and harvested after 16 hours. A typical yield of 200 g from a 6 L culture was split into 50 g aliquots and stored at −80°C. Harvested cells (50 g) were resuspended in 20 mM Tris (pH 7.4), 300 mM NaCl (250 mL) containing 1× protease inhibitors (Sigma, Burlington, MA) and lysed using the Emulsiflex C3 homogenizer (ATA Scientific, Australia) for 5 to 6 cycles at 15,000 to 20,000 psi. The whole-cell lysate was centrifuged at 10,000 or 18,000g for 20 minutes to remove debris, and the supernatant was centrifuged at 185,500g for 1 hour to collect the membranes. The membranes were split into 3.5 g aliquots, flash frozen in liquid nitrogen, and stored at −80°C until further use.
A 3.5 g aliquot of membranes was resuspended in 50 ml of 20 mM Tris (pH 7.4), 150 mM NaCl, 10% glycerol using a glass Dounce homogenizer. The resuspended membranes were solubilized by adding 200 mg n-dodecyl-β-maltoside (DDM; Anatrace, Maumee, OH) per gram of membrane, for a final DDM concentration of approximately 1.4% (w/v). The membranes were solubilized by stirring at 4°C for 2 hours, the insoluble material was removed by sedimentation at 185,500g for 20 minutes, and the supernatant was incubated with 5 ml Talon cobalt affinity resin (Clontech) at 4°C for 2 hours under gentle nutation. After drainage, the resin was washed with 10 column volumes of 20 mM Tris (pH 7.4), 150 mM NaCl, 10% glycerol, 0.05% DDM, 10 mM imidazole (wash buffer) and 2 column volumes of wash buffer supplemented with 500 mM NaCl. The protein was eluted with 4 column volumes of wash buffer containing 150 mM imidazole (elution buffer). Imidazole was removed from the eluate with a 10-DG desalting column (Bio-Rad, Hercules, CA), and the polyhistidine tag was removed by digestion overnight at 4°C with α-thrombin. The digested protein was concentrated to 0.5 ml using a 100 kDa molecular weight cut-off (MWCO) centrifuge concentrator (Millipore) followed by filtration through a 0.2 μm PVDF filter. To exchange the detergent from DDM to β-NG (Anatrace), 0.5 ml of protein was injected onto a size exclusion column (TSK3000; Tosoh Bioscience) pre-equilibrated with 10 mM HEPES (pH 7), 150 mM NaCl, 10% glycerol, 0.5 mM TCEP, and 0.2% β-NG (SEC buffer). Peak fractions were pooled and concentrated for crystallization using a 50 kDa MWCO centrifuge concentrator (Millipore, Hayward, CA).
For DgoT E133Q, protein expression and purification were performed as described for WT DgoT, with several exceptions. During protein purification, the protein-bound affinity resin was washed with 10 column volumes of 20 mM Tris (pH 7.4), 150 mM NaCl, 10% glycerol, 0.05% β-DDM, and 10 mM imidazole (wash buffer), followed by 2 column volumes of wash buffer supplemented with 25 mM imidazole and 500 mM NaCl. A 50 kDa MWCO centrifuge concentrator (Millipore) was used for both protein concentration steps, rather than the 100 kDa MWCO concentrator at the first step.
Crystallization, lipidic environment, and data collection
For crystallization of WT DgoT, protein was relipidated into E. coli polar lipids (Anatrace) according to the High Lipid-Detergent (HiLiDe) method [40] using 2.33:1 protein:lipid and 5:1 detergent:lipid weight ratios. For a typical HiLiDe trial, 50 μL DgoT (5 mg/ml) was used. HiLiDe-treated DgoT was crystallized via hanging drop methods in a 96-well crystallization plate (Greiner) using the TTP mosquito (TTP Labtech). For hanging drops, 150 nl of 5 mg/ml protein was mixed with 150 nl of well solution containing 39% PEG 400 and 100 mM sodium acetate (pH 5.35) and grown at 4˚C. Initially, small, rod-shaped crystals grew and disappeared within 2 to 3 days, possibly reflecting sensitivity to the precipitant concentration changes that occur through vapor diffusion. To grow larger and more stable crystals, 50 μl of Al's oil (Hampton Research, Aliso Viejo, CA) was placed over the well solution to slow vapor diffusion. Also, DgoT was pre-incubated with 10 mM sodium gluconate (Sigma) on ice for 10 minutes prior to crystallization set-up. Crystals were harvested and flash frozen in the well solution using liquid nitrogen. X-ray diffraction data sets were collected at the Advanced Light Source Beamline 8.3.1 at a wavelength of 1.1159 Å.
For crystallization of DgoT E133Q, protein was also crystallized via hanging drop but remained unlipidated because HiLiDe was not performed. DgoT E133Q was pre-incubated with 10 mM Na galactonate for 10 minutes on ice prior to crystallization; 100 nl of 7 to 8 mg/ml protein was mixed with 100 nl of well solution containing 32% PEG 1000 and 100 mM glycine (pH 9) and grown at 4˚C. Rectangular crystals appeared after 3 to 4 weeks and continued to grow over 9 months to about 150 to 200 μm in length. Crystals were harvested and flash frozen in the well solution using liquid nitrogen. X-ray diffraction data sets were collected at the Advanced Light Source Beamline 8.3.1 at a wavelength of 1.1158 Å.
Data processing and structure determination
For the inward-facing (WT DgoT) crystals, data sets were recorded on a MAR CCD detector and processed, integrated, and scaled using the HKL2000 package [41]. The space group was P21 (n = 2). Because diffraction was anisotropic, the data were submitted to the UCLA anisotropy server [42] for ellipsoidal truncation, which removes reflections with F/σ < 3. The best data set was truncated to 2.9 × 3.8 × 3.8 Å resolution along the a, b, and c axes, respectively. After ellipsoidal truncation and anisotropic scaling, B-factor sharpening was applied using a negative isotropic B-factor of −68 Å². The resolution for reporting data and refinement statistics (S1 Table) is based on the statistically significant correlation coefficient value of CC1/2 [43]. If the resolution cut-off is instead based on the criterion <I/σ(I)> > 1, the resolution is reduced to 3.5 Å. However, the quality of the electron density maps shows details in places as in a 2.9 Å density map.
For E133Q crystals, data sets were recorded on a Pilatus3 detector and, because this involved 3,600 images, the XDS package was more appropriate for processing, integration, and scaling [44]. The space group was C2 (n = 2). Post-processing for anisotropic data was not necessary. These crystals diffracted isotropically but to a more limited highest resolution of 3.5 Å. This resolution limit was determined by the statistical significance of the CC1/2 criterion [43]. Despite the low <I/σ(I)> (0.2) and the low CC1/2 (0.16) in the outer shell, the data remain statistically significant. If the resolution cut-off is instead based on the criterion <I/σ(I)> > 1, the resolution would be reduced to 4.0 Å. The quality of the map shows details as in a 3.5 Å map.
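For readers less familiar with the CC1/2 criterion used in both paragraphs above, the following minimal Python sketch shows how it is computed for one resolution shell: the Pearson correlation between the mean intensities of two randomly split half-datasets (the statistic cited as [43]). The intensities below are simulated, purely for illustration.

```python
import numpy as np

def cc_half(I_half1, I_half2):
    # Pearson correlation between per-reflection mean intensities of
    # two random half-datasets, evaluated shell by shell in practice.
    return np.corrcoef(I_half1, I_half2)[0, 1]

rng = np.random.default_rng(0)
signal = rng.gamma(2.0, 50.0, size=500)        # "true" intensities
I1 = signal + rng.normal(0, 120, size=500)     # noisy half-set 1
I2 = signal + rng.normal(0, 120, size=500)     # noisy half-set 2
print(f"CC1/2 = {cc_half(I1, I2):.2f}")
```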
Both DgoT structures were determined by MR using Phaser-MR (PHENIX). For the inward-facing structure, phases were determined by Phaser-MR (PHENIX) using a poly-alanine model of the crystal structure of GlpT [16] (PDB ID: 1PW4), which is 24% sequence identical, as a search model. The unit cell volume indicated the presence of 2 molecules in the asymmetric unit, with a solvent content of 70%. Initial phases yielded an electron density map that identified 12 TM helices, suggesting the secondary structure of MFS transporters. Iterative model building in COOT [45] was followed by refinement runs in phenix.refine (PHENIX) [46]. For initial refinement strategies, rigid-body refinement [47], individual B-factor refinement, simulated-annealing refinement (torsion: 2,500 K start; 300 K final; 25 steps), and translation libration screw (TLS) rotation model refinement were used. PHENIX-autobuild was used to calculate iterative-build composite omit maps, followed by intermittent simulated-annealing composite omit maps to reduce model bias during building [46,48,49]. Once Rfree reached approximately 32%, hydrogens were added to the model and were refined using individual B-factor refinement with X-ray and/or stereochemistry target weight optimization. The final model of DgoT includes residues 10 to 219 and 229 to 427 and has an Rfree value of 29.7%.
For the outward-facing E133Q structure, the N-terminal and C-terminal domains of the WT DgoT structure were separately used as search models for MR. Two molecules in the asymmetric unit were identified, similar to WT. Initial phases yielded an electron density map that identified the 12 TM helices and the 2 ICHs, but in a different structural arrangement than WT, suggesting a different conformation. Initial refinement was performed using TLS and restrained refinement with 50 to 100 cycles of jelly-body refinement implemented in Refmac [50], along with ProSMART [51] to produce general fragment-based restraints for low-resolution refinement. Once Rfree reached approximately 35%, additional refinement under PHENIX was done using rigid-body refinement [47], individual B-factor refinement, and TLS refinement. D-galactonate assignment was guided using 2Fo-Fc and Fo-Fc maps. All structural figures were generated using PyMol (Schrödinger, New York, NY). The composite SA omit map (−100 Å²) in Fig 4A was B-factor sharpened (PHENIX). Electrostatic potential surfaces were calculated using APBS [52]. The proton tunnel was visualized using Mole [53].
Sodium D-galactonate preparation
To our knowledge, the only commercially available form of D-galactonate is the insoluble calcium salt. We therefore used previously reported methods to produce the Na+ salt from Ca2+ D-galactonate [54,55]. Briefly, 10 g Ca2+ D-galactonate (City Chemical, West Haven, CT) was resuspended in 100 ml water. An equimolar amount of oxalic acid dihydrate (2.93 g; Sigma, Burlington, MA) was added to the mixture and stirred for 10 minutes at 55˚C. The precipitated calcium oxalate was then separated from aqueous D-galactonic acid by filtration under vacuum through 0.22 μm nylon. Sodium hydroxide was titrated into the solution to pH 7. Absolute ethanol was added in a 3:1 (v/v) ratio, the mixture was stored at 4˚C overnight, and the resulting crystalline precipitate was removed by filtration and washed with absolute ethanol. The precipitate was dried for 24 hours in a vacuum desiccator. The resulting crystalline Na+ D-galactonate was then stored at room temperature for subsequent use. The crystalline precipitate was analyzed by 1H-NMR using a Bruker 400 MHz Avance III HD spectrometer, confirming the purity of the Na+ D-galactonate.
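As a quick sanity check on the stoichiometry above, a short back-of-the-envelope calculation confirms that 2.93 g of oxalic acid dihydrate is equimolar with 10 g of the calcium salt, under the assumption that the starting material is anhydrous Ca(C6H11O7)2.

```python
# Quick stoichiometry check: is 2.93 g oxalic acid dihydrate equimolar
# with 10 g calcium D-galactonate? Assumes the anhydrous salt
# Ca(C6H11O7)2 (galactonate anion, M ~ 195.15 g/mol).
M_Ca = 40.08
M_galactonate = 195.15
M_salt = M_Ca + 2 * M_galactonate          # ~430.4 g/mol
M_oxalic_dihydrate = 90.03 + 2 * 18.02     # H2C2O4 . 2H2O, ~126.07 g/mol

mol_salt = 10.0 / M_salt                   # ~0.0232 mol
mol_oxalic = 2.93 / M_oxalic_dihydrate     # ~0.0232 mol
print(f"salt: {mol_salt:.4f} mol, oxalic acid: {mol_oxalic:.4f} mol")
```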
DgoT reconstitution and transport assay
Purified DgoT was reconstituted using preformed vesicles permeabilized with Triton X-100 [56]. Briefly, 10 mg E. coli polar lipids (Avanti Polar Lipids, Alabaster, AL) were dissolved in 1 ml chloroform, dried first under nitrogen and then under vacuum overnight. The dried lipid film was rehydrated in 1 ml 0.5 mM HEPES (pH 7.5), 150 mM N-methyl-D-glucamine (NMDG)-methanesulfonate (a large organic cation that should not permeate), 2 mM MgSO 4 , and 0.5 mM pyranine (reconstitution buffer) by incubating at 37˚C for 30 minutes and resuspended through pipetting. To form unilamellar vesicles, the lipid suspension was subjected to 4 cycles of freeze-thaw, alternating between liquid nitrogen and 25˚C, with sonication in a water bath for 5 minutes between freeze-thaw cycles. The lipids were then extruded through a 200 or 400 nm filter, sonicated for 5 minutes, and finally extruded through a 100 nm filter. The extrusions both involved 20 passages through each filter.
To prepare proteoliposomes, the liposomes were destabilized by adding 0.6% (v/v) Triton X-100 and incubated at 4˚C overnight under gentle nutation. Purified DgoT was added to the destabilized liposomes at a 1:30 protein:lipid (w:w) ratio and incubated for 1 hour at 4˚C. To extract detergent, SM2 Bio-beads (Bio-rad, Hercules, CA) were added sequentially in 4 steps. First, 200 mg SM2 Bio-beads were added and incubated for 1 hour at 4˚C. This addition was repeated once (step 2), followed by the addition of 400 mg SM2 Bio-beads for an additional 2 hours (step 3). For the final step, 600 mg SM2 Bio-beads were added and incubated overnight. To remove unincorporated pyranine and the Bio-beads, the proteoliposome mixture was separated using a 10-DG desalting column (Bio-rad, Hercules, CA), eluting with 4 ml pyranine-free reconstitution buffer according to the manufacturer's protocol.
To measure transport by DgoT, an inwardly directed H+ gradient was produced by adding 200 μl proteoliposomes into a cuvette containing 1.35 ml 0.5 mM MES (pH 5.5), 150 mM NMDG methanesulfonate, and 5 mM MgSO4 (reaction buffer). Transport was initiated by adding 50 μl D-galactonate or D-gluconate in reaction buffer, and uptake was monitored at 30˚C by measuring the fluorescence emission at 510 nm upon dual excitation at 415 nm (F1) and 460 nm (F2). The fluorescence data were collected at 0.7-second intervals using an F-4500 fluorimeter (Hitachi). The ratio of fluorescence emission at the 2 excitation wavelengths (F2/F1) was normalized to the initial ratio (F2o/F1o), and the results were plotted as 1 − [(F2/F1)/(F2o/F1o)] (as a percentage), with galactonate added at t = 0. To assess coupling between galactonate and H+, we used the same conditions but with lumenal pH 7.2 and external pH 7.5 (both buffered with HEPES rather than MES) and 5 mM K+ in both lumenal and external solutions, with or without 0.2 μM valinomycin. The fluorescence measurements at the low pH were performed in triplicate on multiple occasions. The fluorescence measurements at the higher pH were performed using 2 reconstitutions, and the initial rates were determined by subtracting the drift in fluorescence before galactonate addition from the time points after. The subtracted data were fitted to the equation y = A + Bx + C·e^(−Dx), which contains a single exponential decay component [C·e^(−Dx)] plus a linear term [A + Bx] that better fits the rate of decay in liposome controls.
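A minimal sketch of this fitting step, assuming SciPy is available; the trace below is synthetic, and the parameter names follow the equation above. Because the derivative of the fit at t = 0 is B − CD, an initial rate can be read directly from the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_plus_drift(x, A, B, C, D):
    # Single exponential decay plus a linear drift term, as used for
    # the drift-subtracted pyranine fluorescence traces.
    return A + B * x + C * np.exp(-D * x)

# Hypothetical trace: time in seconds, normalized quench signal (%)
t = np.arange(0, 120, 0.7)
y = 2 + 0.01 * t + 30 * (1 - np.exp(-0.05 * t))   # illustrative shape
y += np.random.default_rng(1).normal(0, 0.5, t.size)

popt, _ = curve_fit(decay_plus_drift, t, y, p0=(30, 0.01, -30, 0.05))
A, B, C, D = popt
initial_rate = B - C * D   # derivative of the fit at t = 0
print(f"initial rate ~ {initial_rate:.3f} %/s")
```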
Cell-based radiotracer uptake
DgoT-deficient E. coli strain dgoT727(del)::kan (Keio collection; E. coli Genetic Stock Center, Yale, New Haven, CT) was lysogenized with IPTG-inducible T7 RNA polymerase using the λDE3 kit (Novagen, Madison, WI). DgoT variants were subcloned into the pQE60 vector containing a C-terminal deca-histidine tag and transformed into the lysogenic strain. One colony was picked to inoculate a 150 ml LB culture (+100 μg/mL carbenicillin and 35 μg/ml kanamycin) and grown at 37˚C with shaking until the OD600 reached 1.0. Basal protein expression from the T7 promoter was sufficient for uptake assays, and no IPTG was added during growth. After harvesting the cells at 3,800g using a table-top centrifuge, the pellet was washed twice with 5 mM MES (pH 6.5), 150 mM KCl (MK buffer), and resuspended to an OD600 of 2.0. To measure substrate transport, 150 μl cell suspension was pre-incubated at 25˚C for 5 minutes; 50 μl MK solution containing 10 μM 14C-galactonate (American Radiolabeled Chemicals) was added to initiate the reaction; the mixture was incubated for 30 seconds or 2 minutes at 25˚C; the reaction was terminated by filtration through 0.45 μm nitrocellulose (Millipore); and the filters were washed 3 times with 2 ml cold MK buffer before measurement of the bound radioactivity by scintillation counting. Incubation for 30 seconds was used for competition and saturation experiments, and 2 minutes was used for the comparison of different DgoT constructs. All transport assays were performed in triplicate on at least 3 independent cell preparations.
The expression of DgoT protein was monitored by immunoblotting with a polyhistidine antibody conjugated to horseradish peroxidase (HRP; Qiagen, Venlo, Netherlands), and chemiluminescent detection of HRP was monitored with a ChemiDoc MP imaging system (Bio-rad, Hercules, CA). The absolute amounts of WT and mutant DgoT expression were estimated by comparing the band intensities of each to a serial dilution of purified DgoT (100 to 3 ng) on the same western blot.
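The estimation from a serial-dilution standard might look like the sketch below, which assumes that band intensity is linear in loaded protein over this range; all intensity values are hypothetical.

```python
import numpy as np

# Hypothetical densitometry values for a serial dilution of purified
# DgoT (100 to 3 ng) and for sample bands; a linear standard curve is
# an assumption that holds only within the detector's linear range.
std_ng  = np.array([100, 50, 25, 12.5, 6.25, 3.0])
std_int = np.array([9800, 5100, 2500, 1300, 640, 310])  # band intensities

slope, intercept = np.polyfit(std_ng, std_int, 1)

sample_int = np.array([4200, 1900])           # WT and mutant bands
sample_ng = (sample_int - intercept) / slope
print("estimated protein per band (ng):", np.round(sample_ng, 1))
```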
Spheroplast uptake
Spheroplast generation was adapted from previously established methods [57]. DgoT constructs in the pQE60 vector were transformed into the E. coli DgoT knock-out strain (see previous section). One colony was picked to inoculate a 150 ml LB culture (+100 μg/mL carbenicillin and 35 μg/mL kanamycin) and grown at 37˚C with shaking overnight. The overnight culture (3 ml) was used to inoculate a 150 ml LB culture (+carbenicillin/kanamycin) and grown at 37˚C with shaking until the OD600 reached 1.0. At that point, 25 ml culture was sedimented at 3,800 rpm for 10 minutes using a table-top centrifuge, the pellet was washed twice in 30 mM Tris (pH 8; 16 ml per 0.2 g cell pellet) and resuspended in 30 mM Tris (pH 8) with 20% sucrose (approximately 80 ml buffer per g cell pellet), lysozyme was added (to 20 μg/ml), and the mixture was incubated for 5 to 10 minutes. After 1:1 dilution with 30 mM Tris (pH 8) buffer, EDTA was added (to 1 mM), and the resulting spheroplasts were incubated for 10 to 15 minutes with gentle nutation. The spheroplasts were then sedimented at 16,000g for 20 minutes and resuspended in 5 mM MES (pH 5.5), 150 mM KCl, and 20% (w/v) sucrose (MKS-5.5 buffer) supplemented with 20 mM glycerol and 5 mM MgCl2 to an OD600 of 2.0.
To measure net flux, 150 μl spheroplast solution was pre-incubated at 25˚C for 5 minutes, 50 μl MKS-5.5 with 10 μM 14C-galactonate was added, and the mixture was incubated for 2 minutes. After incubation, the spheroplasts were subjected to filtration through 0.45 μm nitrocellulose and were washed 3 times with 2 ml cold MK buffer before measurement of the bound radioactivity by scintillation counting. For the analysis of ionophores, 150 mM KCl was replaced by K acetate in both internal and external solutions.
To measure exchange, the spheroplasts were preloaded with 1 mM nonradioactive galactonate by incubation overnight at 4˚C, sedimented at 16,000g, washed 3 times with 10 ml MKS-5.5, and resuspended in MKS-5.5 containing 20 mM glycerol to an OD600 of approximately 1.1. Transport was measured as previously described for net uptake by spheroplasts, with or without nigericin or valinomycin (both at 2 μM). As described above, both net flux and exchange were measured in triplicate using at least 3 independent preparations. We subtracted the value for the empty vector from those for WT and mutant DgoT and normalized the exchange rates of the mutants for their reduced expression relative to WT (S8 Fig).
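The background subtraction and expression normalization described here reduce to two arithmetic steps; the sketch below uses hypothetical rates and a hypothetical mutant label, not values from the paper.

```python
# Hypothetical raw exchange rates (cpm/min) and relative expression
# levels (band intensity relative to WT, from the immunoblot).
raw_rate = {"empty": 120.0, "WT": 2400.0, "mut": 560.0}   # assumed values
rel_expression = {"WT": 1.00, "mut": 0.45}                # assumed values

def normalized_rate(construct):
    # Subtract the empty-vector background, then correct for the
    # construct's expression level relative to WT.
    background_subtracted = raw_rate[construct] - raw_rate["empty"]
    return background_subtracted / rel_expression[construct]

for c in ("WT", "mut"):
    print(c, round(normalized_rate(c), 1))
```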
Data analysis and statistics
All graphs and statistics from the functional data were calculated using GraphPad Prism software. The length of each bar represents the mean, and error bars represent the standard error of the mean. Statistical significance was determined by one-way ANOVA with Bonferroni post hoc comparison.
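The same test is easy to reproduce outside Prism; here is a sketch with SciPy and simulated triplicates, where the Bonferroni step simply divides alpha by the number of pairwise comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical triplicate uptake measurements for three constructs.
groups = {
    "WT":     rng.normal(100, 8, 3),
    "mut1":   rng.normal(60, 8, 3),
    "vector": rng.normal(10, 8, 3),
}

F, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# Bonferroni post hoc: pairwise t tests with alpha divided by the
# number of comparisons.
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
alpha_corr = 0.05 / len(pairs)
for a, b in pairs:
    t, p_pair = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p_pair:.4f} "
          f"({'sig' if p_pair < alpha_corr else 'ns'} at {alpha_corr:.4f})")
```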
Development and validation of microsatellite markers for an endangered dragonfly, Libellula angelina (Odonata: Libellulidae), with notes on population structures and genetic diversity
The Bekko Tombo, Libellula angelina Selys, 1883 (Odonata: Libellulidae), is listed as an endangered species in South Korea, and is classified as a critically endangered species by the International Union for Conservation of Nature (IUCN). An assessment of the genetic diversity and population relationships of the species by molecular markers can provide the information necessary to establish effective conservation strategies. In this study, we developed 10 microsatellite markers specific to L. angelina using the Illumina NextSeq 500 platform. Forty-three samples of L. angelina collected from three localities in South Korea were genotyped to validate these markers and to preliminarily assess the population genetic characteristics. The 10 markers revealed 4–11 alleles, observed heterozygosity (H_O) of 0.211–0.950, and expected heterozygosity (H_E) of 0.659–0.871 in the population with the largest sample size (n = 20), thereby validating the suitability of these markers for population analyses. Our preliminary assessment of the population genetic characteristics appears to indicate the following: the presence of inbreeding in all populations, the isolation of the most geographically distant population (Seocheon), and a lower H_O than H_E. The microsatellite markers developed in this study will be useful for studying the population genetics of L. angelina collected from additional sites in South Korea and from other regions.
Introduction
The Bekko Tombo (Libellula angelina Selys, 1883; Odonata: Libellulidae), distributed throughout northern China, Japan, and Korea (Inoue, 2004; Jung, 2012), is listed as an endangered species in South Korea. It occurs in several areas in the western region of South Korea (Figure 1; e.g. Incheon, Gwangmyeong, Ansan, Yongin, Anseong, Seocheon, Gimje, and Jeonju); however, because of the reduction in population and extinction, there are only a limited number of stable populations of L. angelina at present (Shin et al., 2012). Natural ponds and swamps rich in organic matter are the main habitats for Bekko Tombo species in South Korea (National Institute of Biological Resources, 2013), but such habitats are declining because of urban development and expansion, which are accompanied by reclamation and contamination.

Figure 1. The distribution and sampling localities of Libellula angelina in South Korea, with the pairwise results of genetic distance (F_ST) in a geographical context. The distribution localities are marked in dark grey, and sampling localities are marked in green. General locality names are as follows: 1, Incheon Metropolitan City; 2, Gwangmyoung City, Gyeonggi-do Province; 3, Ansan City, Gyeonggi-do Province; 4, Yongin City, Gyeonggi-do Province; 5, Anseong City, Gyeonggi-do Province; 6, Seocheon City, Chungcheongnam-do Province; 7, Gimje City, Jeollabuk-do Province; and 8, Jeonju City, Jeollabuk-do Province. This map was acquired from the Korea National Spatial Data Infrastructure Portal. The asterisk indicates statistical significance (p < 0.05).
At the international level, L. angelina has been classified as critically endangered since 1986 by the International Union for Conservation of Nature (IUCN). According to Inoue (2004), who designated the species as endangered in Japan, the prime habitats necessary for the maintenance of sustainable L. angelina populations in the country are old and stable ponds, with moderate growth of emergent vegetation and an area of open water, in lowland hill areas.
Populations decline, however, as a consequence of anthropic developments that destroy and degrade such habitats, besides introducing predators that threaten native species (Inoue, 2006). Furthermore, several dragonfly species including L. angelina, which were once common in lentic habitats, are now reported as endangered, because of rapid and extensive changes and degradation of agricultural habitats in Japan (Kadoya, Suda, & Washitani, 2009).
Estimation of population genetic characteristics such as population structures, genetic diversity, and genetic isolation is important for establishing a conservation strategy for endangered species (Moritz, 2002; Palsbøll, Berube, & Allendorf, 2007). The genetic diversity and differentiation of L. angelina in the Okegayanuma area of Japan was previously determined using random amplified polymorphic DNA analyses; the species was found to present low genetic diversity among and within populations and exhibited no significant genetic difference among populations (Takahashi, Fukui, & Tubaki, 2009). However, the South Korean populations of L. angelina have never been subjected to population genetic analyses.
In this study, we developed 19 new microsatellite markers from L. angelina. To our knowledge, the present study is the first of its type for this species and other congeneric species. Given the limited access to this endangered species and its rarity, population genetic analyses were performed based on the examination of a limited number of samples from three South Korean localities, and the validity of the markers was tested using the largest population.
Sampling and DNA extraction
Adult L. angelina were sampled from three localities in the western region of South Korea in June 2016 (Figure 1). Seocheon (36.027328°N, 126.717994°E; n = 13) is the southernmost collection locality, situated approximately 135 km south of Ansan (37.272664°N, 126.581689°E; n = 20) and approximately 160 km south of Gwangmyoung (37.458503°N, 126.869561°E; n = 10). Gwangmyoung and Ansan are located approximately 35 km from each other. The latter locality is a small offshore islet (34.39 km² in area and less than 1 km from the nearest mainland of South Korea). For each location, we obtained the necessary collection permits from the respective authorized environmental offices.
Total DNA was extracted from the hind legs of the collected L. angelina using a Wizard Genomic DNA Purification Kit (Promega, Madison, WI, USA) as per the manufacturer's instructions and was stored at -20°C until use. Voucher specimens were deposited at the National Institute of Biological Resources, Incheon, South Korea (accession numbers NIBRGR0000412973-NIBRGR0000413014, NIBRGR0000413016).
Development of microsatellite markers and genotyping
For the construction of a DNA library, one specimen collected from Ansan was used. DNA quality and concentration were assessed using a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). Genomic DNA (200 ng) was sheared into fragments of approximately 550 bp using a Covaris S220 ultrasonicator (Covaris, Woburn, MA, USA) and then processed to produce an Illumina paired-end library using a TruSeq Nano DNA Library Kit (Illumina, San Diego, CA, USA). Size and concentration of the prepared library were confirmed using an Agilent 2100 Bioanalyzer system (Agilent Technologies, Santa Clara, CA, USA) and a quantitative PCR-based KAPA library quantification kit (KAPA Biosystems, Wilmington, MA, USA), respectively. The library was sequenced on an Illumina NextSeq 500 system (San Diego, CA, USA) using 150 base-length read chemistry in a paired-end mode.
For assembly, sequencing errors were discarded using the error correction module of ALLPATHS-LG (Gnerre et al., 2011). Then, genome assembly was performed using IDBA_UD (Peng, Leung, Yiu, & Chin, 2012) with the pre-correction option. To identify reliable assemblies, short reads were remapped to assembled sequences using Bowtie2 (Langmead & Salzberg, 2012); only assembled scaffolds with a depth of ≥10× and coverage of ≥95% were retained for microsatellite marker identification.
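The scaffold retention rule stated above reduces to a simple filter; here is a Python sketch with hypothetical records (in practice, the depth and coverage fields would be computed from the Bowtie2 remapping).

```python
# Sketch of the scaffold-filtering rule described above: keep only
# assemblies with read depth >= 10x and coverage >= 95%. Field names
# and values are assumptions, purely for illustration.
scaffolds = [
    {"id": "scf1", "depth": 34.2, "coverage": 0.99},
    {"id": "scf2", "depth": 6.8,  "coverage": 0.97},
    {"id": "scf3", "depth": 15.1, "coverage": 0.91},
]

retained = [s for s in scaffolds
            if s["depth"] >= 10 and s["coverage"] >= 0.95]
print([s["id"] for s in retained])   # -> ['scf1']
```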
Data analyses
To validate the markers, selected genetic diversity measures, such as the number of alleles, observed heterozygosity (H_O; Weir, 1990), and expected heterozygosity (H_E; Weir, 1990), were calculated per population for each locus using GenAlEx ver. 6.5 (Peakall & Smouse, 2012). F_IS (Hartl & Clark, 1997), which is a measure of the deficiency in heterozygosity resulting from non-random mating, was estimated per population for each locus and also for each population across all loci using GenAlEx ver. 6.5 (Peakall & Smouse, 2012). Allelic richness (AR), standardized for variation in sample size, was calculated per population for each locus using FSTAT 2.9.3.2 (Goudet, 2001). Genotypic linkage disequilibrium (LD) between all pairs of loci, as well as deviation of genotypic frequencies from the Hardy-Weinberg equilibrium (HWE) per population per locus, were tested using GENEPOP Web ver. 4.2 (Raymond & Rousset, 1995; Rousset, 2008) with the Markov-chain approach modified from Guo and Thompson (1992) using 10,000 steps of dememorization and iteration. The 95% significance levels for the HWE and LD tests were adjusted using a Bonferroni correction (Rice, 1989). The fixation index (F_ST) (Weir & Cockerham, 1984) between all pairs of populations was estimated based on the infinite allele model of mutation using Arlequin v. 3.5 (Excoffier & Lischer, 2010). The significance of the F_ST between all pairs of populations was calculated using Fisher's exact test based on 10,000 permutations. Principal coordinates analysis (PCoA) via covariance with standardization of the population genetic distances was performed to detect and plot the relationships between populations using GenAlEx ver. 6.5 with default parameters (Peakall & Smouse, 2012). STRUCTURE ver. 2.3.3 (Pritchard, Stephens, & Donnelly, 2000) was employed to identify the true number of subgroups (K) using the method described by Evanno, Regnaut, & Goudet (2005). An admixture model with correlated allele frequencies was used, with the K-value ranging from 1 to 10. Ten independent runs were performed for each K-value, with a burn-in period of 10,000 iterations, followed by 50,000 iterations for data collection. The STRUCTURE result was visualized using the web-based tool STRUCTURE HARVESTER ver. 0.6.8 (Earl & von Holdt, 2012).
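To make the per-locus measures concrete, the sketch below computes H_O, H_E, and F_IS from toy diploid genotypes; note that GenAlEx and FSTAT apply sample-size corrections that this bare-bones version omits.

```python
from collections import Counter

# Minimal sketch of per-locus diversity measures from diploid
# genotypes (pairs of allele labels); hypothetical toy data.
genotypes = [(1, 1), (1, 2), (2, 2), (1, 1), (3, 3), (2, 3)]

n = len(genotypes)
H_O = sum(a != b for a, b in genotypes) / n   # observed heterozygosity

alleles = Counter(a for g in genotypes for a in g)
total = sum(alleles.values())
# Expected heterozygosity (gene diversity), uncorrected for sample size.
H_E = 1 - sum((c / total) ** 2 for c in alleles.values())

F_IS = 1 - H_O / H_E   # positive values suggest a heterozygote deficit
print(f"H_O={H_O:.3f}  H_E={H_E:.3f}  F_IS={F_IS:.3f}")
```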
Sequencing and microsatellite selection
The sequencing of 150 bp paired-end reads from the Illumina library resulted in a total of 160,044,678 reads (Table 2). A total of 251,215 assembled scaffolds with an average length of 3,192.18 bp were retained for microsatellite marker identification (Table 2). Trinucleotide repeats were the most abundant class of microsatellites (45,160 regions) in the L. angelina genome, followed by dinucleotide (11,536 regions), tetranucleotide (5,089 regions), pentanucleotide (201 regions), and hexanucleotide (94 regions) repeats (Table 2). Sequencing data were deposited in the Sequence Read Archive of GenBank (accession number SRR8432568). After testing the candidate microsatellites for the availability of primer sites, amplification efficiency, degree of polymorphism, and specificity for target loci, 10 markers were eventually selected for use in subsequent genotyping (Table 1).
The analysis of 43 genotyped L. angelina samples from three South Korean populations (20, 10, and 13 L. angelina from Ansan, Gwangmyoung, and Seocheon, respectively) revealed that the observed number of alleles at each locus ranged from 6 to 18, and availability ranged from 0.93 to 1 (Table 1). Tests of genotypic LD showed no significant association of alleles among the 10 loci after applying Bonferroni correction, indicating that all loci can be considered as independent markers. The GenBank accession numbers of the 10 loci are listed in Table 1. At the locus level, the observed number of alleles, H_O, and H_E were 4–13, 0.211–0.950, and 0.659–0.871, respectively, in the L. angelina samples from Ansan, which had the largest sample size (20 samples; Table 3), thereby validating the suitability of the markers for population analyses. In the samples from Ansan, six of the 10 loci showed significant deviation from the Hardy-Weinberg equilibrium.
Population genetic analysis
The allelic patterns across populations indicated an absence of obvious differences among populations in terms of the mean total number of alleles per population and the number of effective alleles (Figure 2). Within-population gene diversity, which corresponds to H_E in diploid data, ranged from 0.784 to 0.815 (Figure 2). A lower H_O than H_E and a positive estimate of F_IS, which is evidence for the existence of inbreeding, were detected in all populations (Figure 2). The PCoA based on the first principal coordinate showed that the L. angelina population at Seocheon (the most distantly located region, at least 135 km from the other two populations) showed a substantial divergence that accounted for 94.79% of the variation (Figure 3). Furthermore, the remaining two populations at Ansan and Gwangmyoung (located only 35 km from each other) did not form an immediately close group based on the second principal component, which accounted for 5.21% of the variation (Figure 3). Pairwise F_ST estimates also supported the result of the PCoA analysis, highlighting the significant genetic differentiation of the L. angelina population at Seocheon from those at Ansan and Gwangmyoung, whereas no significant genetic difference was found between the L. angelina populations at Ansan and Gwangmyoung (Figure 1). An examination of the likelihood scores from 10 replicate runs across K-values from 1 to 10 indicated that the optimal K-value was 2, suggesting the presence of two genetic groups (Figure 4). The assignment results for K = 2 showed that both gene pools were found in all populations, although the frequency of each gene pool in each population differed. This result indicates that the three populations of L. angelina shared the same gene pools, although the PCoA and F_ST supported the isolation of the L. angelina population at Seocheon from the two remaining populations.

In conclusion, a suite of polymorphic microsatellite markers specific to L. angelina was developed using a next-generation sequencing technique. Considering the results presented in this study, these markers will be useful for studies on the genetic structure of undiscovered South Korean and Asian populations of L. angelina. This is particularly relevant considering the limited number of previous population genetic studies for this endangered species (e.g. Takahashi et al., 2009). Although the results of this study are based on a limited number of samples, the lower H_O than H_E, positive estimates of F_IS, and the substantial isolation of the L. angelina population at Seocheon from the other two populations (besides the lesser distance between the two remaining populations) collectively suggest that the South Korean populations of L. angelina consist of small, more or less isolated, and inbreeding populations. This result is consistent with the field observations based on which the species was classified as endangered. Nevertheless, the shared gene pools in all populations indicate that the isolation of the L. angelina population at Seocheon from the other populations may not be sufficient for creating an independent gene pool. As more samples are collected from different regions of Asian countries, including South Korea, more thorough population genetic analyses will be possible.
Nonsmoking and Nondrinking Oral Squamous Cell Carcinoma Patients: A Different Entity
Objective Our goal was to analyze the demographic and pathologic characteristics as well as prognosis in nonsmoking and nondrinking (NSND) oral squamous cell carcinoma (SCC) patients compared with typical oral SCC patients. Patients and Methods A total of 353 patients were retrospectively enrolled and divided into two groups: the NSND group and the current smoking/current drinking (CSCD) group. Demographic, pathologic, and molecular data were compared between the two groups. The main research endpoints were locoregional control (LRC) and disease-specific survival (DSS). Results In the NSND group, 16.3%, 41.9%, and 53.5% of patients were aged no more than 40 years, were female, and had an educational background of high school or above compared to 3.7%, 6.0%, and 38.2% of patients in the CSCD group, respectively. A total of 15.1% of the NSND patients had SCC of the lower gingiva and floor of the mouth, which was lower than the 35.6% of patients in the CSCD group. CSCD patients were likely to have an advanced disease stage (48.7% vs 32.5%, p=0.042) and poorly differentiated cancer (26.6% vs 16.3%, p=0.042). The NSND patients had a mean Ki-67 index of 24.5%, which was lower than the mean of 35.7% in the CSCD patients. The two groups had no HPV infection and similar p16 expression (4.7% vs 10.1%, p=0.132), but there was higher expression of p53 (38.6% vs 17.4%, p<0.001) and p63 (59.9% vs 29.1%, p<0.001) in the CSCD group. The 5-year LRC rates for NSND patients and CSCD patients were 48% and 38%, respectively, and the difference was significant (p=0.048). The 5-year DSS rates for NSND patients and CSCD patients were 56% and 39%, respectively, and the difference was significant (p=0.047). Further, a Cox model confirmed smoking and drinking status as an independent predictor of LRC and DSS. Conclusion NSND oral SCC patients are a different entity. HPV infection has a limited role in carcinogenesis in NSND patients, and p16 expression is associated with worse locoregional control.
INTRODUCTION
Oral squamous cell carcinoma (SCC) is the most common malignancy among cancers of the head and neck (1), and it significantly threatens people's lives and quality of life. The latest epidemiologic data, from 2011, showed that in China, the age-standardized incidence and mortality rates of oral SCC were 2.22 per 100,000 and 0.9 per 100,000, respectively (2). Tobacco smoking and alcohol consumption are considered to be the main risk factors and are responsible for at least 80% of oral SCC cases (3)(4)(5). There are 50 potential carcinogens in tobacco, including polycyclic aromatic hydrocarbons and nitrosamines, and they can result in mutations of important genes, such as the tumor suppressor gene p53, that disturb modulation of the immune system and cell cycle regulation (6). The carcinogenic mechanism of alcohol is complex and might involve the genotoxic effects of acetaldehyde, genetic polymorphisms, cytochrome P450 2E1-mediated generation of reactive oxygen species, aberrant metabolism of folate and retinoids, and increased estrogen (7).
Although there has been increased awareness of the benefits of giving up smoking and drinking, the incidence of oral SCC has not decreased significantly (8,9), and nonsmoking and nondrinking (NSND) oral SCC patients are increasingly common. A number of previous researchers have tried to determine the differences in etiology, pathologic characteristics, and molecular expression as well as prognosis between nonsmoking patients and typical patients (10)(11)(12)(13)(14), but unfortunately, there is great controversy. Some authors have reported that there is no significant survival difference between these two groups (10)(11)(12), some have reported that nonsmoking patients have a better prognosis (13), and some have described worse survival in young nonsmoking patients (14). The majority of these studies did not limit their patients to NSND patients, and this minor designation flaw may not completely eliminate potential confounding effects (1). On the other hand, literature on the molecular expression of NSND patients remains scarce, and the reported rates of HPV16 infection, p16 expression, and p53 expression vary greatly (15)(16)(17)(18)(19). Therefore, in the current study, we aimed to analyze the demographic and pathologic characteristics as well as prognosis in NSND oral SCC patients compared with typical oral SCC patients.
Ethical Considerations
Our Hospital institutional research committee approved our study, and all participants signed an informed consent agreement. All methods were performed in accordance with the relevant guidelines and regulations. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/ or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Patient Selection
From January 2014 to December 2018, the medical records of 654 patients with surgically treated oral SCC were retrospectively reviewed. Oral SCC referred to SCC arising from the tongue, buccal mucosa, lower and upper gingiva, and floor of the mouth. The included patients met the following criteria: the disease was primary; there was no history of other cancers; there was no habit of betel-nut chewing; the patient was classified as a NSND or a current smoker/current drinker (CSCD); and there was enough paraffin-embedded tissue available for HPV detection. Patients without sufficient demographic, pathologic, or follow-up data were excluded from the analysis. Information regarding age, sex, smoking, alcohol consumption, educational background, family cancer history, pathologic TNM stage (8th AJCC system), pathologic reports, treatment, and follow-up was extracted and analyzed.
Important Variable Definition
A NSND patient was defined as a patient who had smoked no more than 100 cigarettes and had drunk wine no more than once every two weeks in their lifetime (20)(21)(22). A CSCD patient was defined as a patient who had smoked at least 20 cigarettes per day for at least 10 years or had drunk wine at least once per day for at least 10 years (14,15,19). All pathological sections were re-reviewed by at least two pathologists in a double-blind manner. Perineural invasion (PNI) was considered to be present if tumor cells were identified within the perineural space and/or nerve bundle; lymphovascular infiltration (LVI) was positive if tumor cells were noted within the lymphovascular channels (3,23). Similar to our previous research (23), data on the family cancer history were obtained at initial treatment. During the preparation of this article, a questionnaire was sent to the patients or their family by email, postal letter, or WeChat if the information was not recorded clearly. The family members in the current study only consisted of first-degree relatives, and the patients were categorized as having a family cancer history if any of those relatives had any cancer other than nonmelanoma skin cancer. Otherwise, the patient was recorded as not having a family cancer history (23). The pathologic depth of invasion (DOI) was measured from the level of the adjacent normal mucosa to the deepest point of tumor infiltration, regardless of the presence or absence of ulceration (24).
Immunohistochemical (IHC) Analysis
From July 2013, routine immunohistochemical analysis of Ki-67, p16, p53, and p63 was performed for every head and neck SCC patient. The grading of p16 overexpression was consistent with previous studies (17,19): 0-+, defined as less than 25% tumor staining; ++, defined as 25-50% tumor staining; +++, defined as 50-75% tumor staining; and ++++, defined as more than 75% tumor staining. Tumors with levels of +++ and ++++ were classified as having p16 positivity. Similar standards were used for p53 and p63. The Ki-67 score (0-100%) was calculated as the ratio of the number of immunostained nuclei to the total number of nuclei in tumor cells. The counting was performed in three randomly selected fields at ×400 magnification. The cut-off value of the Ki-67 score in the current study was defined as the median value (25,26).
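The Ki-67 scoring rule reduces to a ratio averaged over three fields followed by a median split; here is a sketch with hypothetical nucleus counts.

```python
import numpy as np

# Sketch of the Ki-67 scoring rule: ratio of immunostained nuclei to
# total tumor nuclei, averaged over three x400 fields, then
# dichotomized at the cohort median. All counts are hypothetical.
fields = [(132, 410), (98, 385), (120, 402)]   # (positive, total) per field
ki67 = np.mean([pos / tot for pos, tot in fields]) * 100
print(f"Ki-67 index: {ki67:.1f}%")

cohort_scores = np.array([12.5, 24.5, 31.0, 35.7, 48.2, ki67])
cutoff = np.median(cohort_scores)
print("high" if ki67 > cutoff else "low", f"(cutoff = {cutoff:.1f}%)")
```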
Surgical Principle
In our cancer center, systemic ultrasound, CT, MRI and/or PET-CT examinations were routinely performed for every patient. All oral SCC operations were performed under general anesthesia. The primary tumor was completely excised with at least a 1 cm margin; if necessary, a pedicled flap or free flap was used to close the defect. Neck dissection was usually performed except for tumors of very small size in the upper gingiva; levels I to III were dissected for a cN0 neck, and levels I to IV or V were dissected for a cN+ neck. Adjuvant treatment was suggested if T3/4 disease, cervical nodal metastasis, PNI, LVI, or positive margins were present.
Statistical Analysis
Student's t test was used to compare continuous variables between the two groups, and the Chi-square test was used to compare categorical variables between the two groups. The main study endpoints were locoregional control (LRC) and disease-specific survival (DSS). The survival time for LRC was calculated from the date of surgery to the date of local, regional, or locoregional recurrence or to the last follow-up, and the survival time for DSS was calculated from the date of surgery to the date of cancer-related death or the last follow-up. The Kaplan-Meier method (log-rank test) was used to calculate the LRC and DSS rates. The factors that were significant in univariate analysis were then analyzed in a Cox proportional hazards regression model to determine the independent prognostic factors. All reported p values were two-sided, and a value of p<0.05 was considered significant. All statistical analyses were performed with SPSS 20.0.
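For readers who prefer open-source tooling, an equivalent analysis can be sketched with the Python lifelines package; the eight records below are hypothetical and only illustrate the Kaplan-Meier and Cox steps, not output from this study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical records: months to locoregional recurrence (or last
# follow-up), event indicator, and group (1 = CSCD, 0 = NSND).
df = pd.DataFrame({
    "months": [12, 34, 55, 20, 60, 8, 41, 29],
    "event":  [1, 0, 1, 1, 0, 1, 0, 0],
    "cscd":   [1, 0, 0, 1, 0, 1, 1, 0],
})

# Kaplan-Meier curves per group.
km = KaplanMeierFitter()
for grp, sub in df.groupby("cscd"):
    km.fit(sub["months"], sub["event"], label=f"cscd={grp}")
    print(km.survival_function_.tail(1))

# Cox proportional hazards model with group as the covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["coef", "p"]])
```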
Demographic Characteristics
A total of 353 patients (301 males and 52 females) were enrolled for analysis. The NSND group consisted of 86 patients with a mean age of 50.6 (range: 30-68) years; 14 (16.3%) patients were aged ≤40 years, and there were 50 (58.1%) males and 36 (41.9%) females. Forty-six (53.5%) patients had an educational background of high school or above. Six (7.0%) patients had a family cancer history: esophageal cancer was noted in 4 (66.7%) families, and lung cancer was noted in the remaining two families (33.3%). The CSCD group consisted of 267 patients with a mean age of 62.5 (range: 38-76) years; 10 (3.7%) patients were aged ≤40 years, and there were 251 (94.0%) males and 16 (6.0%) females. A total of 102 (38.2%) patients had an educational background of high school or above. Twenty-nine (10.9%) patients had a family cancer history: esophageal cancer was noted in 13 (44.8%) families, lung cancer was noted in 7 (24.1%) families, breast cancer was noted in 4 (13.8%) families, liver cancer was noted in 3 (10.3%) families, and colorectal cancer was noted in 2 (6.9%) families. Patients in the NSND group were more likely to be female (p<0.001), have a younger age (p<0.001), and have a higher educational background (p=0.012) than those in the CSCD group. There were no apparent differences regarding family cancer history between the two groups (p=0.294) (Table 1).
Operation and Pathologic Characteristics
In the NSND group, 15 (17.4%) patients underwent free flap reconstruction: 10 with radial forearm flaps, 3 with anterolateral flaps, and 2 with fibular flaps. Tongue SCC was present in 37 (43.0%) patients, buccal SCC was present in 20 (23.3%) patients, and SCC of the upper and lower gingiva was present in 16 (18.6%) and 7 (8.1%) patients, respectively. SCC in the floor of the mouth was present in 6 (7.0%) patients. The median DOI was 8.2 mm, with a range from 2.0 mm to 23.5 mm. The pathologic tumor stages were distributed as T1 in 19 Compared to the CSCD patients, the NSND patients had a significantly lower Ki-67 index (p<0.001). However, the CSCD patients had higher expression of p53 (p<0.001) and p63 (p<0.001). The two groups had similar distributions of p16 expression (p=0.132).
Survival Analysis
During our follow-up with a median time of 34 months, in the NSND group, 45 patients received adjuvant radiotherapy, and 19 patients underwent adjuvant chemotherapy. A total of 37 patients suffered from disease recurrence: 34 cases locoregionally and 3 cases distantly. Only 10 patients were successfully salvaged by radical surgery. Nineteen patients died of the disease.
In the CSCD group, 162 patients received adjuvant radiotherapy, and 81 patients underwent adjuvant chemotherapy. A total of 150 patients suffered from disease recurrence: 141 cases locoregionally and 9 cases distantly. Only 40 patients were successfully salvaged by radical surgery. A total of 100 patients died of the disease.
The 5-year LRC rates for NSND patients and CSCD patients were 48% and 38%, respectively, and the difference was significant (Figure 1, p=0.048). Further, the Cox model confirmed smoking and drinking status as an independent factor affecting LRC (p=0.022, Table 2).
The median DSS time for NSND patients and CSCD patients was 59.3 months and 54.0 months, respectively. The 5-year DSS rates for NSND patients and CSCD patients were 56% and 39%, respectively, and the difference was significant (Figure 2, p=0.047). Further, the Cox model confirmed smoking and drinking status as an independent factor affecting DSS (p=0.015, Table 3).
DISCUSSION
The most significant finding in the current study was that compared to typical oral SCC patients, NSND patients had significantly different epidemiological, pathologic, and molecular features and a better prognosis, suggesting that NSND patients might be a different entity. This finding prompts more personalized cancer treatment for traditional and NSND oral SCC patients and more high-quality studies to clarify the etiology of disease in NSND patients.
At the beginning of preparing this research, one of the most important tasks was to identify a clear definition of NSND and CSCD patients, which would improve the reliability of this study. Different definitions of never/current smokers and never/current drinkers have been described by previous authors (1,(11)(12)(13)(14)(15)(17)(18)(19)(20)(21)(22), and it was noted that in most of those studies, a patient classified as a never drinker could still have had one drink a week. Current evidence distinctly proves that alcohol consumption apparently increases the risk of oral SCC (27). More importantly, the association of alcohol consumption with the relative risk for developing cancer tends to be dose-dependent (14); therefore, a stricter standard for NSND patients, such as the definition used in this research, is warranted. On the other hand, a typical oral SCC patient is usually associated with heavy tobacco and alcohol use for 10 years or more (28), and a similar viewpoint has also been reported by Brennan et al. (11) and Harris et al. (12). Therefore, to clearly determine the difference between the NSND and CSCD groups and eliminate the influence of confounding factors, we identified a stricter standard for CSCD patients. It was noted that patients in the NSND group were younger, and a similar finding was also described by previous authors (9)(10)(11). However, literature regarding age distribution is scarce. There were significantly more patients aged less than 40 years in the NSND group. On the other hand, there was a male predominance in both groups but a significantly higher proportion of women in the NSND group in the current study; a similar finding was also noted by Bachar et al. (14) and Durr et al. (20). These two demographic findings might vaguely suggest that there are unknown factors explaining the occurrence of SCC in NSND patients; however, the influence of environmental tobacco smoke cannot be ignored. Tan et al. (29) found that exposure to environmental tobacco in the home was always reported by elderly women with head and neck SCC, and men usually had a higher possibility of second-hand smoke exposure owing to the nature of their occupations (19).
Tumor site specificity has been demonstrated by a number of researchers (21,30). Compared to CSCD patients, NSND patients had a lower possibility of developing SCC of the floor of the mouth and the lower gingiva but a higher possibility of developing SCC of the upper gingiva. It has been proposed that, because of gravity dependence, pooled saliva containing alcohol/tobacco-derived carcinogens leads to an increased prevalence of cancer in the lower locations of the oral cavity. A greater presence of adverse pathologic characteristics, including PNI, LVI, poor tumor differentiation, and advanced disease stage, has also been reported by previous authors (13,14,22), and similar findings were noted by us. However, it is difficult to attribute this phenomenon to intrinsic differences between the two groups because long-term alcohol and tobacco use can accelerate the development of cancer and change the biological behavior of the disease (12).
The clarification of molecular expression variation was one of our main goals, as it would provide the strongest evidence for answering whether NSND patients are a different entity. Very few authors have performed similar analyses (17)(18)(19). Considerable attention has been given to HPV owing to its possible etiological role in head and neck SCC occurrence (28). Western researchers have even described HPV as being responsible for at least 70% of newly diagnosed cases of oropharynx SCC (31), but the role of HPV in inducing oral SCC remains unclear. Dediol et al. (17) reported that 27% of their NSND patients were HPV positive, but HPV detected by PCR does not distinguish whether HPV had been activated, and this finding did not support a causal relationship of HPV infection with tumorigenesis. Recent evidence by de Abreu et al. (32) showed that the frequency of high-risk HPV types in oral cavity SCC was very low, less than 4%, and the authors concluded that HPV was not involved in the genesis of oral cavity SCC. Our study would also support this viewpoint, as no HPV infection occurred in either group.
Furthermore, p16 is usually evaluated together with HPV. For oropharynx SCC, there is a reliable association between HPV infection and p16 overexpression, and p16-IHC is usually regarded as a surrogate marker of HPV infection. However, in the current study, we noted that approximately 5% of the NSND patients showed p16 positivity, although no HPV infection was detected by PCR. In a previous report by Harris et al. (12), 40% of young oral tongue SCC patients had p16 positivity, but no HPV was found in any of the tumor samples. Similar findings were also noted by Poling et al. (33): 9 of the 78 patients had p16 positivity, but only 1 patient had HPV E6/E7 mRNA transcripts. Moreover, our two groups had similar distributions of p16 expression. These findings suggest that p16 is not suitable for assessing the etiology associated with HPV infection in oral SCC. In addition, p53 and p63 have been widely analyzed in head and neck SCC, but only a few authors have analyzed their expression in NSND patients. Heaton et al. (18) reported that a total of 16 tumors had strong p53 expression, with a prevalence of 31.4%, and a previous review depicted that the overall rate of p53 positivity in head and neck SCC varied from 20% to 90% (33), which was slightly higher than that (17.4%) in our NSND patients but was consistent with that in our CSCD patients. The variation was attributed to the fact that both tobacco and alcohol can lead to mutations in the TP53 gene. p63 has rarely been assessed in NSND patients, and we might be the first to report that 29.1% of NSND patients show strong expression of p63. Previous studies have shown that the expression of p63 in SCC tissue is significantly higher than that in epithelial dysplasia and normal tissues (34). Together with our findings, these results suggest a role for p63 expression in carcinogenesis, and the effect might be enhanced by tobacco and alcohol. Ki-67 is an indicator of cancer cell proliferation, and a greater Ki-67 index might indicate more aggressive disease and poorer survival (26). We might be the first to report that the mean Ki-67 proliferation index was 24.5% for NSND patients, which was significantly lower than that in typical patients. This finding again provides evidence that NSND patients might be a different entity.
Survival differences between NSND patients and CSCD patients have been frequently compared, and conflicting results have been reported. Bachar et al. (14) divided 291 patients into two groups based on the status of tobacco smoking and alcohol abuse, and the two groups had similar local and regional control rates as well as overall survival rates. However, Durr et al. (20) described that compared to former or current smoking patients, never smoking patients tended to have decreased overall survival. In our opinion, long-term exposure to tobacco and alcohol is linked to a higher risk of peripheral vascular disease, chronic obstructive pulmonary disease, and coronary artery disease. Therefore, the index of overall survival might not be reliable enough for detecting the survival difference between the two groups. Pytynia et al. (13) found that after being matched to 50 ever smokers according to important variables, never smokers had a greater DSS and recurrence-free survival, and a further Cox model confirmed its independence. Our previous study also suggested that smoking was associated with an approximately 2-fold increase in the risk for recurrence and a 5-fold increase in the risk for disease-related death (22). In the current study, we noted that compared to CSCD patients, NSND patients had significantly better LRC and DSS in both univariate and multivariate analyses. A similar finding was also reported by Farshadpour et al. (11). Thus, NSND oral SCC patients might be a different entity.
It was interesting to find the negative prognostic significance of p16 expression in oral SCC. Typically, p16 expression is related to better survival in oropharynx SCC, but the exact opposite result was found in oral SCC. In a recent publication by Dediol et al. (17), the authors also reported that p16 expression carried a negative prognosis in oral SCC patients. However, in a recent meta-analysis, Almangush et al. (35) noted that there was not sufficient evidence to support p53, Ki-67, and p16 as prognostic biomarkers for oral SCC. The prognostic significance of p63 in oral SCC remains unknown, and our study failed to find a significant relationship between p63 expression and survival. However, Xu-Monette et al. (36) described a protective effect of p63 expression in high-risk diffuse large B-cell lymphoma. Therefore, more high-quality studies are needed to clarify these questions.
The limitations of the current study must be stated: there was inherent bias within this retrospective study, which may have decreased our statistical power; some other potential risk factors including chronic periodontitis, oral hygiene and economic status were not taken into consideration; and our strict standard may have artificially widened the difference between the two groups.
CONCLUSIONS
In summary, NSND oral SCC patients are a different entity compared with typical patients. HPV infection has a limited role in carcinogenesis in NSND, and p16 expression is associated with worse locoregional control.
DATA AVAILABILITY STATEMENT
All data generated or analyzed during this study are included in this published article; the primary data can be obtained from the corresponding authors.
ETHICS STATEMENT
The Zhengzhou University institutional research committee approved our study, and all participants signed an informed consent agreement.
ADJUSTMENT OF A REGIONAL ALTIMETRIC NETWORK, IN BRAZIL, TO ESTIMATE NORMAL HEIGHTS AND GEOPOTENTIAL NUMBERS
In 2018, the Brazilian Institute of Geography and Statistics (IBGE) defined the normal height values and geopotential numbers for the High Precision Altimetric Network (HPAN), creating the need for procedures able to adapt works correlated with the Brazilian Geodetic System (BGS). In that context, and considering the state and municipal altimetric networks, it is necessary to estimate normal heights. To this end, this paper proposes to estimate normal height values and geopotential numbers for the Altimetric Network of the Federal District (AN-DF). Computational procedures involving the Least Squares Method were established and applied to a geometric levelling survey that includes 200 stations distributed throughout the study area. The results allowed estimating the normal heights of the network stations with uncertainties of up to 0.032 m. However, because significant differences were found between the adjusted and known values of some of the HPAN stations used, we recommend a rigorous analysis of these stations before using them. In any case, considering that the geometric levelling and the applied procedures were carried out correctly, the values estimated in this work for all stations of the AN-DF may be used.
Introduction
The High Precision Altimetric Network (HPAN) belongs to the Brazilian Geodetic System (BGS) and is maintained by the Brazilian Institute of Geography and Statistics (IBGE). This network covers the entire Brazilian territory, currently has two vertical datums, Imbituba and Santana, and continuously collects data via the geometric levelling technique as new stations are established. Therefore, to incorporate new stations into the HPAN, all stations are used in the network adjustment to estimate and/or update their heights using the Least Squares Method (LSM).
Like the HPAN, a Gravimetric Network (GN), which also belongs to the BGS, has been continually updated with the acquisition of new gravimetric data. At first, the GN was developed to enable and improve the computation of geoid heights. However, to give a physical meaning to the heights, new gravimetric data are being acquired, mainly at the HPAN stations. As a result, the IBGE published new heights, now called normal heights, for the HPAN stations in 2018 (IBGE 2018), in which gravimetric observations were included in the functional model of the network adjustment, following the recommendations of the International Association of Geodesy (IAG).
The HPAN update is in agreement with work performed in many parts of the world. In this context, Freitas and Blitzkow (1999) promoted a reflection on the physical meaning of height and on the problems and practical solutions that can be implemented to define a vertical datum for the Geocentric Reference System for the Americas (SIRGAS). Drewes et al. (2002) showed that height type, reference surface, and realization and maintenance of the reference system are the main topics for defining the vertical reference system for SIRGAS, and recommended introducing two height types, the geometric (ellipsoidal) and the normal (physical). Luz (2004) investigated the possibility of using gravimetric data to define the physical heights in Brazil and to connect them with the South American height systems associated with SIRGAS. To contribute to the integration of the Brazilian vertical network into the global height system, Palmeiro and Freitas (2010) developed a methodology to determine the physical heights by computing the geopotential numbers, using a Geographical Information System to integrate the BGS information. Severo et al. (2013) determined and evaluated the corrections of different physical heights (dynamic, normal and orthometric) in a levelling line belonging to the HPAN, located in northeastern Rio Grande do Sul. Also, Luz (2016) reviewed the main concepts related to vertical geodetic reference systems and physical heights, in preparation for the first HPAN adjustment considering geopotential differences.
Given the above, and considering that states and cities in Brazil have their geodetic networks referred to the BGS, these networks need to be updated after each new adjustment and/or change in the national geodetic network. Therefore, it is necessary to establish procedures for estimating normal heights for these networks. Consequently, this research aims at developing a procedure to adjust the Altimetric Network of the Federal District (AN-DF), Brazil, and to estimate the normal heights and geopotential numbers to be associated with the BGS.
Normal Height and Geopotential Number
Normal heights ($H^N$) can be characterized as a type of physical height ($H^F$), since they are associated with the terrestrial gravity field ($g$). Such heights can be expressed through the geopotential number ($C_P$), defined as the difference between the gravity potential on the geoid ($W_0$) and the gravity potential at the point of interest ($W_P$).

This relationship can be expressed as (Torge 1991; Freitas and Blitzkow 1999; Gemael 1999; Luz 2016; IBGE 2018):

$$C_P = W_0 - W_P \quad (1)$$

In practice, height is determined by surveys involving the geometric levelling technique and, therefore, Equation 1 must be expressed by a discrete model. Considering a finite sum of sections with height differences ($\Delta H$) and the average gravity values ($\bar{g}$) observed for each section, we have (Gemael 1999):

$$C_P = \sum_{i} \bar{g}_i \, \Delta H_i \quad (2)$$

Due to the propagation of errors in the geometric levelling technique when computing heights, Luz (2016) and IBGE (2018) suggested that it is preferable to compute geopotential differences ($\Delta C_{AB}$) for a small distance between points $A$ and $B$. This allows geopotential numbers ($C$) to be computed by adjusting the observations of these differences ($\Delta C$), because:

$$\Delta C_{AB} = C_B - C_A = \bar{g}_{AB} \, \Delta H_{AB} \quad (3)$$

According to Torge (1991), $C$ better describes the behavior of masses in the gravity field and could be used in several applications, such as hydraulic engineering and oceanography. However, $C$ contradicts the demand for a height system expressed in metric units. In this case, the physical height ($H^F$) can be expressed as (Freitas and Blitzkow 1999; Torge 1991; Gemael 1999; Severo et al. 2013; Luz 2016; IBGE 2018):

$$H^F = \frac{C}{G} \quad (4)$$

Considering a height that uses the geoid as reference, we have, by definition, an orthometric height ($H$), and $G$ in Equation 4 must represent the average gravity measured along the plumb line (Figure 1), making it necessary to know the gravity values inside the Earth. However, $H$ cannot be precisely estimated, since the density distribution in the Earth's interior is not known.

As an alternative to the problem of determining $H$, IBGE (2018), in consonance with Drewes et al. (2002), defined the use of normal heights ($H^N$), according to the definitions established by SIRGAS. In this case, Equation 4 can be expressed as:

$$H^N = \frac{C}{\bar{\gamma}_v} \quad (5)$$

where $\bar{\gamma}_v$ represents the average normal gravity along the normal plumb line (Figure 1) and can be expressed as (Heiskanen and Moritz 1967; Luz 2016; IBGE 2018):

$$\bar{\gamma}_v = \gamma_0 \left[ 1 - \left( 1 + f + m - 2 f \sin^2\varphi \right) \frac{H^N}{a} + \left( \frac{H^N}{a} \right)^2 \right], \qquad m = \frac{\omega^2 a^2 b}{GM} \quad (6)$$

with the normal gravity on the ellipsoid given by the Somigliana formula:

$$\gamma_0 = \frac{a\,\gamma_a \cos^2\varphi + b\,\gamma_b \sin^2\varphi}{\sqrt{a^2 \cos^2\varphi + b^2 \sin^2\varphi}}$$

Here, $\varphi$ and $H^N$ are the geodetic latitude and the normal height of the point of interest; $a$, $b$, $e$ and $f$ are the major and minor semi-axes, the first eccentricity and the flattening, respectively; $\gamma_a$, $\gamma_b$ and $\gamma_0$ are the normal gravity at the equator, at the pole and at the latitude of the point considered, respectively; and $\omega$ and $GM$ are the angular velocity and the geocentric gravitational constant. All parameters are associated with the adopted reference ellipsoid.

Given the above, and according to Figure 1 and IBGE (2018), $\bar{\gamma}_v$ is the average normal gravity along the normal plumb line, $H^N$ is measured along the so-called normal direction and, because it does not consider the real Earth, this height does not refer to the geoid, but to the quasi-geoid.
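To make Equations 2, 5 and 6 concrete, the sketch below computes a geopotential number from levelled sections and converts it to a normal height using GRS80 constants. This is a minimal illustration, assuming SI units throughout (gravity in m/s², heights in m, so $C$ comes out in m²/s²); the function names are ours, not from the paper.

```python
import math

# GRS80 constants (Moritz 1984)
A = 6378137.0            # semi-major axis [m]
F = 1.0 / 298.257222101  # flattening
B = A * (1.0 - F)        # semi-minor axis [m]
GAMMA_A = 9.7803267715   # normal gravity at the equator [m/s^2]
GAMMA_B = 9.8321863685   # normal gravity at the pole [m/s^2]
OMEGA = 7.292115e-5      # angular velocity [rad/s]
GM = 3.986005e14         # geocentric gravitational constant [m^3/s^2]
M = OMEGA**2 * A**2 * B / GM

def gamma_0(lat_rad):
    """Somigliana formula: normal gravity on the GRS80 ellipsoid."""
    s2, c2 = math.sin(lat_rad)**2, math.cos(lat_rad)**2
    return (A * GAMMA_A * c2 + B * GAMMA_B * s2) / math.sqrt(A**2 * c2 + B**2 * s2)

def mean_normal_gravity(lat_rad, h_n):
    """Mean normal gravity along the normal plumb line (Heiskanen & Moritz 1967)."""
    g0 = gamma_0(lat_rad)
    return g0 * (1.0 - (1.0 + F + M - 2.0 * F * math.sin(lat_rad)**2) * h_n / A
                 + (h_n / A)**2)

def geopotential_number(sections):
    """Eq. 2: C = sum of (mean section gravity x levelled height difference).
    `sections` is a list of (g_mean [m/s^2], dH [m]) tuples along the line."""
    return sum(g * dh for g, dh in sections)

def normal_height(c_p, lat_rad, tol=1e-6):
    """Eq. 5: H_N = C / mean normal gravity; iterated because the mean
    normal gravity itself depends on H_N."""
    h = c_p / gamma_0(lat_rad)  # first guess
    while True:
        h_new = c_p / mean_normal_gravity(lat_rad, h)
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
```

Because $\bar{\gamma}_v$ depends on $H^N$, `normal_height` iterates Equation 5 until the height change falls below the tolerance, mirroring the iterative adjustment described in the next section.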
Altimetric Network Adjustment and Evaluation
According to Equations 2 and 3, $C$, as a parameter to be estimated, can be used to compute the physical height, since $\Delta H$ and $\bar{g}$ are considered observed values. Subsequently, $H^N$ can be estimated by Equation 5.
In this research, the $H^N$ values were estimated by a functional model defined by developing Equations 3 and 5, and the $C$ values were then computed by Equation 3. The functional model, considering geometric levelling and gravimetric observations for two distinct points $A$ and $B$, can be expressed as:

$$\Delta H_{AB} = \frac{\bar{\gamma}_{v_B}\, H^N_B - \bar{\gamma}_{v_A}\, H^N_A}{\bar{g}_{AB}} \quad (10)$$

The estimation of the parameters in Equation 10, using the Constrained LSM (CLSM), considers the following:

$$L_b + V = A\, X_a, \qquad X_a = X_0 + X$$

where $L_b$ is the vector of observed values ($\Delta H_{AB}$); $V$, the vector of residuals; $X_0$, the vector of initial parameters; $X$, the vector of corrections; and $X_a$, the vector of adjusted parameters.

For the CLSM, the condition equations refer to the normal heights of the stations used as reference, to which an uncertainty close to zero was assigned; in this case, these stations were treated as local vertical datums defined in the study area. Therefore, $X$ can be expressed as (Gemael 1999):

$$X = -\left( A^{T} P A + C^{T} P_C\, C \right)^{-1} \left( A^{T} P L + C^{T} P_C\, \varepsilon_C \right)$$

where $A$ and $C$ are the matrices of partial derivatives that consider the functional model and the condition equations (functional model of injunction) related to the parameters, respectively; $L$, the vector of differences between the computed ($L_0$) and observed ($L_b$) values; $P$ and $P_C$, the weight matrices of the observed and reference (injunction) values, respectively; and $\varepsilon_C$, the closure error vector of the model.

As demonstrated in Equations 5, 6 and 10, $H^N$ and $\bar{\gamma}_v$ are interdependent. Therefore, it is necessary to develop the adjustment by LSM through an iterative process, until the a priori and a posteriori $H^N$ values are no longer significantly different. The heights ($H$) used a priori, both for defining the initial parameter vector and for determining $\bar{\gamma}_v$, $H^N$ and $\Sigma_{H^N}$ (uncertainties), are derived from the direct computation of the levelling circuits, involving the $H^N$ values of the reference stations.

To evaluate the results of this research, we analyzed: the uncertainties $\Sigma_{H^N}$, estimated after applying the a posteriori variance factor to the weight matrix; the differences between the estimated and known $H^N$ values at the stations adopted for verification; the a posteriori variance factor ($\hat{\sigma}_0^2$), using the Chi-square ($\chi^2$) test; and the a posteriori standardized residuals ($V_i / \sigma_{V_i}$), using the Tau ($\tau$) test, both according to Cross (1983).
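As a companion to the constrained solution above, the following is a minimal numpy sketch of one iteration of the constrained least-squares step, under the sign conventions just described ($L = L_0 - L_b$, $V = AX + L$). The function name and the simplified degrees-of-freedom count are our own assumptions, not the paper's implementation.

```python
import numpy as np

def clsm_step(A, L, P, C, eps_c, P_c):
    """One iteration of the constrained least-squares adjustment (CLSM).

    A, C   -- design matrices of the functional model and of the conditions
    L      -- L0 - Lb, computed-minus-observed height differences
    P, P_c -- weight matrices of the observations and of the constraints
    eps_c  -- closure-error vector of the condition equations
    """
    N = A.T @ P @ A + C.T @ P_c @ C            # normal matrix
    U = A.T @ P @ L + C.T @ P_c @ eps_c
    X = -np.linalg.solve(N, U)                 # corrections to the parameters
    V = A @ X + L                              # residuals of the observations
    gl = A.shape[0] - A.shape[1] + C.shape[0]  # redundancy: n - u + constraints
    s0_sq = float(V @ P @ V) / gl              # a posteriori variance factor
                                               # (constraint term omitted for brevity)
    Sigma_X = s0_sq * np.linalg.inv(N)         # covariance of adjusted parameters
    return X, V, s0_sq, Sigma_X
```

Because $\bar{\gamma}_v$ and $H^N$ are interdependent, this step would be repeated, rebuilding $A$ and $L$ from the updated heights, until the corrections become negligible.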
Computing the Normal Heights and the Geopotential Numbers of the Altimetric Network in the Federal District, Brazil
The Federal District region, in Brazil, between 48.25°W and 47.33°W and 15.45°S and 16.06°S (Figure 2), has a slightly wavy relief, ranging from 600 to 1340 meters above sea level. Several territorial planning works have been carried out over time due to the continuous development of the region, among them the implementation of a precision altimetric network to elaborate and update the cartographic and cadastral basis, and the execution of infrastructure projects. Despite having 136 stations distributed throughout the studied region, this network, known as RN-DF (Figure 2), is concentrated in urban and urban-expansion areas. In addition to the mentioned stations, the RN-DF incorporates another 30 existing HPAN/IBGE stations, here called RN-IBGE, and 34 stations of the Planimetric Network (SAT-IBGE) (Figure 2), both members of the BGS. The main objective of using the BGS stations was to improve the spatial distribution of altimetric data in the region.
In 2016, the Federal District Development Company (TERRACAP) conducted an altimetric survey of the 200 mentioned stations, from the RN-DF, RN-IBGE, and SAT-IBGE, using the geometric levelling technique. This survey required: a relative datum defined at specific RN-IBGE stations; implementation of altimetric safety points (RNPI-DF) at distances of up to 2 km (Figure 2) within each levelling circuit; measurements of height differences at horizontal distances of less than 100 m ± 20 m; use of ground supports for positioning the levelling rod; readings on the levelling rod made 50 cm above the ground; double levelling of each section, in direct and inverted position, changing the equipment position for each section; surveying in closed circuits; and a closure error smaller than 6 mm·km^0.5. The survey used calibrated and certified Leica DNA3 digital levels.
Figure 2 shows the data cited above that were used for estimating the normal heights and geopotential numbers. The 728 RNPI-DF points (Figure 2) show the paths covered, comprising 1393 km of geometric levelling performed in 58 circuits for the 200 points considered in this study.
In addition to the altimetric data, the region (Figure 3) has 1377 Ground Gravity Stations (GGS) provided/managed by the BGS/IBGE, the National Petroleum Agency (ANP) and the University of Brasília (UnB). The GGS provided by the IBGE and ANP were acquired using LaCoste & Romberg and Scintrex Autograv CG-5 gravity meters between 1990 and 2014, whereas the GGS provided by the UnB were acquired using only the Scintrex Autograv CG-5 gravity meter between 2013 and 2017.
The GGS data were used to generate a map of Bouguer anomalies ($A_{Bg}$), shown in Figure 3. The anomalies, which represent the varying physical properties of the rocks in the region, range from -103.290 mGal to -130.702 mGal and were used to estimate gravity values ($g$) for stations or points that did not have that information. The flowchart in Figure 4 indicates that the information used in this research was separated into reference data, geometric levelling data, and gravity and position data.
The reference data comprise information from the stations used as relative datums for computing the heights of the stations of interest. This information includes the identifier, or name, of the station, the geodetic coordinates, the normal heights, the geopotential numbers, and the gravity values. At first, a single reference station (relative datum) was defined, both for computing a priori heights and for the final adjustment of the normal heights and geopotential numbers of the stations of interest. Subsequently, after analyzing the uncertainties of the final results, other reference stations were defined to improve the accuracy. The uncertainties used for each reference station were extracted from the HPAN reports.
The geometric levelling data consist of the information that allows computing the heights of the points of interest, starting from the reference stations. This information includes the identifiers of the backsight ($BS$) and foresight ($FS$) points, the horizontal distance ($D$) between $BS$ and $FS$, and the height difference ($\Delta H$). First, the heights and closure errors were computed, and the errors were compared with the established tolerance (6 mm·km^0.5). After verifying that the closure errors were within the tolerance, the normal heights, with their uncertainties, and the geopotential numbers were adjusted.
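As an aside, the tolerance check just described is straightforward to express in code. The sketch below (our own helper, with an invented name) accepts a circuit when its misclosure is within 6 mm times the square root of the circuit length in km.

```python
import math

def circuit_closure_ok(misclosure_m, length_km, k_mm=6.0):
    """True if a closed levelling circuit meets the k*sqrt(L) tolerance,
    here 6 mm * sqrt(length in km), as adopted for this survey."""
    tolerance_m = (k_mm / 1000.0) * math.sqrt(length_km)
    return abs(misclosure_m) <= tolerance_m

# For example, a 24 km circuit closing with -7 mm passes (tolerance ~ 29.4 mm).
assert circuit_closure_ok(-0.007, 24.0)
```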
The gravity and position data include information on all points used (RN-DF, RN-IBGE, SAT-IBGE, and RNPI-DF): their respective identifiers, geodetic coordinates, and gravity values. For the stations or points lacking gravity values ($g$), these were computed by interpolating the computed $A_{Bg}$ values from the ground gravity stations (Figure 3), adopting the following relationship:

$$A_{Bg} = g + C_{Fa} - C_{Bg} - \gamma, \qquad C_{Bg} = 2 \pi G \rho H \quad (15)$$

where $\rho$ is the average density of the Earth's crust (2.67 g·cm⁻³); $G$, the Newtonian gravitational constant (6.67428 × 10⁻¹¹ m³ kg⁻¹ s⁻²); $C_{Fa}$, the free-air correction; $C_{Bg}$, the Bouguer correction; and $\gamma$, the normal gravity value.
The interpolation was performed using the inverse distance weighting (IDW) method applied to the computed $A_{Bg}$ values. This interpolation procedure is commonly adopted to compute geoid models and was used by Marotta and Vidotti (2017).
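A minimal IDW sketch is shown below, assuming numpy and planar station coordinates; the power parameter and function name are our own choices for illustration. Once $A_{Bg}$ is interpolated at a point, $g$ follows by inverting Equation 15.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting.

    xy_known -- (n, 2) coordinates of the gravity stations
    values   -- (n,) Bouguer anomalies at those stations [mGal]
    xy_query -- (m, 2) points lacking gravity information
    """
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                # query coincides with a station
            out[i] = values[d.argmin()]
            continue
        w = 1.0 / d**power               # closer stations weigh more
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```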
The heights computed using the reference stations and the geometric levelling data, together with the computed $g$ values, were also used a priori to compute $\bar{\gamma}_v$, according to the flowchart in Figure 4.
With the required data (Figures 2 and 3), as described in the previous sections, it was possible to estimate the values of $H^N$, $\Sigma_{H^N}$ and $C$ for the stations and points of interest (Figure 4).
In this research, the GRS80 ellipsoid was used as a reference, and all the constants used, inherent to the presented equations, were given by Moritz (1984) and Petit and Luzum (2010).
Results
Figure 5 shows the adjustment results of the altimetric network of the Federal District, using the uncertainties of the $H^N$ values estimated for all stations and points used (RN-DF, RN-IBGE, SAT-IBGE, and RNPI-DF) and taking the RN-IBGE station 2223S as reference. This station was chosen as reference because it was the origin of the observations in the geometric levelling survey.
Considering the uncertainties of the $H^N$ values estimated by the CLSM, which vary from 0.000 m to 0.037 m (Figure 5), the stations and points farthest from the reference station (RN-IBGE 2223S) have less accurate results. For this reason, other reference stations were defined to improve the accuracy of the results over the whole study area. This choice kept the station at the origin of the observations in the geometric levelling survey (RN-IBGE 2223S) and added stations located in the central region of the study area (RN-IBGE 9305U) and/or used in more extensive and distant circuits (RN-IBGE 2263Z). Additionally, the differences between the computed and known $H^N$ values were smaller than the estimated uncertainties for all selected reference stations.
Regarding the performance of the applied CLSM, the estimated $\hat{\sigma}_0^2$ (0.158) was outside the $\chi^2/gl$ range (0.888 to 1.119) defined for 578 degrees of freedom ($gl$) at the 95% confidence level. As $\hat{\sigma}_0^2$ was less than the lower $\chi^2/gl$ limit and considering the absence of errors in the adopted procedures, it was suggested that the weights of the observations were underestimated. Therefore, $\hat{\sigma}_0^2$ was used to compute the standard deviations of the estimated parameters.
After applying the CLSM, the maximum residual value was 0.011 m. Also, the maximum a posteriori standardized residual was 3.502, smaller than the critical $\tau$ value (4.118) defined for 578 $gl$ at 95%. Therefore, it was concluded that the observations used in this work did not present substantial errors.
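The acceptance interval quoted above can be reproduced with one line per bound; the sketch below assumes SciPy and recovers the paper's limits for 578 degrees of freedom.

```python
from scipy.stats import chi2

gl = 578
low = chi2.ppf(0.025, gl) / gl    # ~0.888, lower 95% limit for chi2/gl
high = chi2.ppf(0.975, gl) / gl   # ~1.119, upper 95% limit
accepted = low <= 0.158 <= high   # False: 0.158 falls below the lower limit
```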
Figure 6 shows the $H^N$ values estimated for the altimetric network, considering the stations 2223S, 9305U and 2263Z of the RN-IBGE as reference. Figure 7 shows the uncertainties of the $H^N$ values, ranging from 0.000 m to 0.032 m, whereas Figure 8 shows the $C$ values computed directly, according to Equation 5, using the computed $\bar{\gamma}_v$ (Equation 6) and the estimated $H^N$ shown in Figure 6.
The results in Figures 5 and 7 show that the position and spatial distribution of the reference stations significantly affected the estimation of the uncertainties of the estimated $H^N$ values.
The $H^N$ values estimated using the three reference stations (2223S, 9305U and 2263Z) were compared with the known values (Tables 1 and 2 and Figure 9), according to IBGE (2018). The results for the 27 stations analyzed show that the differences (Table 1) range from 0.119 m to -0.307 m, with a -0.008 m average and a 0.102 m root-mean-square deviation (Table 2). The largest differences, between -0.124 m (2226R) and -0.308 m (1361T), are concentrated in the northeastern region of the studied area (Figure 9), close to station 9305U (used as reference), in an approximately 15 km stretch of a geometric levelling circuit. In addition, the differences varied from 0.003 m to 0.118 m (Table 1) at stations (Figure 9) located near 2223S (used as reference).
Despite the large differences in the $H^N$ values (Tables 1 and 2 and Figure 9), the geometric levelling performed and used in this research was consistent. This is explained, firstly, by the fact that all 58 circuits adjusted together, contemplating 30 RN-IBGE stations, 136 RN-DF stations, 34 SAT-IBGE stations, and 1393 km of total length, had closure differences smaller than 6 mm·km^0.5 for each circuit and, secondly, by the residual values and the applied statistical test, which showed no problems in the observations. Furthermore, it is noteworthy that, between 1953 and 2005, 256 RN-IBGE stations were established within the Federal District limits, close to roads that have been maintained or expanded over the years. According to IBGE information, only 70 of the 256 stations were considered in good condition between 1995 and 2011, while the others were either destroyed or not found. In 2016, at the time of this research, only thirty of the 70 stations were found and used, because they had reliable planimetric coordinate values estimated by GNSS positioning. Also, as described, it is possible that some of the RN-IBGE stations had their original positions changed; it is therefore advisable to conduct a rigorous analysis of these stations by investigating the original levelling data measured by the IBGE or by repeating some levelling circuits, including the stations with large differences between the $H^N$ values determined in this work and the known ones.
Conclusion
This work presented the procedures adopted for adjusting, by geometric levelling, and estimating the normal heights and geopotential numbers of the stations and points that are part of the Altimetric Network of the Federal District, Brazil.
The results show good accuracy of the estimated normal height values. Spatially, the accuracy improved (ranging from 0.000 m to 0.032 m) as the number of stations used as reference to compute the normal heights increased. Large differences were observed when comparing the normal height values estimated in this research with the known values established by IBGE (2018). The greatest differences were concentrated in specific regions of the study site, in an approximately 15 km stretch of a geometric levelling circuit located near one of the stations used as reference.
Despite the large differences in the normal height values, the geometric levelling performed and used in this research was consistent. This is explained, firstly, by the fact that all 58 circuits adjusted together, contemplating 30 RN-IBGE stations, 136 RN-DF stations, 34 SAT-IBGE stations, and 1393 km of total length, had closure differences smaller than 6 mm·km^0.5 for each circuit and, secondly, by the residual values and the applied statistical test, which indicated no problems in the observations. Therefore, in the Federal District, it is suggested to use the normal heights and geopotential numbers computed in this research for the RN-DF stations, to meet internal demands. However, due to the large differences detected at the RN-IBGE stations in this study, it is also recommended to conduct a rigorous analysis of the original levelling data measured by the IBGE or to repeat some levelling circuits, including those stations with large differences between normal heights.
ACKNOWLEDGMENT
TERRACAP, for providing the raw data of geometric levelling; IBGE, for providing data of stations and information.
Figure 3: Bouguer anomaly and distribution of gravimetric (GGS) stations provided by IBGE, ANP, and UnB, in the study area.
Figure 4: Flowchart of the sequence used to estimate $H^N$, $\Sigma_{H^N}$ and $C$.
Figure 5: Standard deviations (S.D.) of the $H^N$ values estimated for the stations and points used (RN-DF, RN-IBGE, SAT-IBGE, and RNPI-DF), using the 2223S RN-IBGE station as reference. The black polygon corresponds to the geographic boundary of the Federal District.
Figure 7: Standard deviations (S.D.) of the $H^N$ values estimated for the stations and points (RN-DF, RN-IBGE, SAT-IBGE, and RNPI-DF), taking stations 2223S, 2263Z and 9305U of the RN-IBGE as reference.
Figure 9: Spatial distribution of the differences between the estimated $H^N$ values, according to the adjustment considering the reference stations 2223S, 2263Z and 9305U, and the known values for the RN-IBGE stations used.
Table 1: Differences between the estimated $H^N$ values, according to the adjustment considering the reference stations 2223S, 2263Z and 9305U, and the known values ($H^N_{IBGE}$) for the RN-IBGE stations used.
Sociodemographic and health characteristics of formal and informal caregivers of elderly people with Alzheimer’s Disease
Objective: to evaluate and compare the sociodemographic characteristics, depressive symptoms, anxiety and perceived stress of formal and informal caregivers of elderly people with Alzheimer's disease. Method: a quantitative, cross-sectional and comparative study with 44 caregivers, divided into two groups of 26 informal caregivers (IC) and 18 formal caregivers (FC). The Instrument for Characterization of the Caregiver, the Beck Depression Inventory (BDI), the Beck Anxiety Inventory (BAI) and the Perceived Stress Scale (PSS) were applied. Results: Among the IC, the majority were women (96.2%), with a mean age of 52.9 years, mostly sons or daughters of the elderly (65.4%). They presented, on average, a depressive symptom score of 10.1, an anxiety symptom score of 11.5 and a PSS score of 32.1. In the FC group, the majority were women (94.4%), with a mean age of 45.2 years, not related to the elderly (66.7%). They presented, on average, a depressive symptom score of 7.1, an anxiety symptom score of 6.4 and a PSS score of 31.7. Conclusion: The groups showed similarities in sociodemographic aspects but differed in health profile, signaling the need to plan interventions aimed at health promotion and disease prevention. Implications for practice: The study contributes to the improvement of the caregivers' quality of life.
INTRODUCTION
The ageing of the population is a worldwide process, seen both in developing and in developed countries.(1) In this sense, it is possible to notice transitions in the demographic structure, through a series of changes that affect the different spheres of economic, political and social organization.(2) On the other hand, the demographic transition has brought successive changes in the epidemiological profile, given that the immediate consequence of this process was the decline in the incidence of infectious diseases, together with the rise of non-transmissible chronic diseases (NTCD), the most prevalent of which are cardiovascular and neurodegenerative diseases.(3) Among the neurodegenerative diseases are the dementia syndromes, with Alzheimer's Disease (AD) as the most frequent etiology,(4) characterized by a neurological alteration with a progressive, degenerative, slow and irreversible course, defined by the loss of the harmonious functioning of cognitive and behavioral functions.(4) As a consequence, AD restricts autonomy, independence in activities of daily living (feeding, bathing, dressing, moving, maintaining urinary and fecal continence, taking medicines, among others) and the quality of life of elderly people, leading to an increased demand for care and constant supervision.(5) In this context emerges the value of the care provided by caregivers, who can be either formal or informal.(6) Formal caregivers are paid professionals who work in homes or Long-Term Care Institutions (ILPIs), offering services with a previously established workload.(6) Informal caregivers, on the other hand, are family members, friends or neighbors whose role is to provide care at home voluntarily, without financial benefits, carrying out the activity full time.(6) Due to the demands of caring for dependent elderly people, these caregivers report damage to their health, owing to stressor events related to the act of caring, which negatively influence their quality of life.(7) Borges (2017) noted that about 87.9% of caregivers presented health complications, evidencing at least one disease after assuming the occupation.(7) As a result of the progressive loss of memory and autonomy, together with the behavioral and psychological changes of the elderly, caregivers are subject to psychological disturbances, such as anxiety, depression and stress, when compared with individuals who do not perform this function.(7) In view of this, the formal caregiver tends to suffer negative consequences from the care provided; however, the informal caregiver may present higher levels of these psychological disturbances, due to the overload caused by the following factors: degree of dependence in basic and intermediate activities of daily living; time spent on care; level of cognitive impairment of the elderly person assisted; and perception of changes in the family structure.(8)
Therefore, these caregivers need professional support and an environment in which to share insecurities and expectations, regardless of whether the care provided is formal or informal. Attention to the caregiver's health should be based on the capacity to determine this population's health needs and to plan and evaluate the related interventions, based on their wishes and particularities.
In light of the above, the objective of the present study was to assess and compare the sociodemographic and health profiles of formal and informal caregivers of elderly people with Alzheimer's disease. For that purpose, we assessed, compared and identified the most important factors for depression, anxiety and perceived stress in these caregivers, and their relationship to sex, age, time spent on care, education and degree of dependence of the elderly care receiver.
METHOD
This is a quantitative, cross-sectional and comparative study to verify the levels of stress, depressive symptoms and anxiety in formal and informal caregivers of elderly people who are users of a private health care plan in the municipality of São Carlos-SP.
The participants came from the Department of Home Healthcare (DHHC) of a private health care operator. In total, there were 200 patients registered in the Home Healthcare program and approximately 80% of this population were elderly people with an Alzheimer's disease diagnosis who had informal and formal caregivers for their daily care. The list of elderly people with their respective caregivers was provided by the coordinator of the Department of Home Healthcare of the health operator (n=160).
The inclusion criteria were: informal caregivers of elderly people diagnosed with Alzheimer's disease who had provided care for more than one year, did not receive payment from other family members to provide the care and were linked to the health operator; and formal caregivers who were remunerated and had cared for the elderly person with AD for more than one year. The exclusion criteria were caregivers under 18 years of age or with any physical or mental disability that would make participation in the research impossible.
The data collection was carried out in the first semester of 2017. From the list of contacts provided by the coordinator of the DHHC, the researcher called all caregivers to invite them to participate in the research, respecting the inclusion criteria, and a meeting was scheduled. On the scheduled date, the caregivers signed the Free and Informed Consent Form to participate in the research and were evaluated privately, at the health operator's facilities or in their own homes. They were then divided into two groups of 18 formal and 26 informal caregivers.
The project was submitted to and approved by the Human Research Ethics Committee of the Federal University of São Carlos, CAAE 65119517.1.0000.5504 (Opinion No. 2.069.671/2017). For the evaluation, the following instruments were used: the Instrument for Characterization of the Caregiver, created by the researchers, covering sex, age, marital status, schooling, kinship, hours devoted to care, types of diseases (dyslipidemia, diabetes, high blood pressure, heart disease, osteoporosis or others), type of care and support received.
The Beck Anxiety Inventory (BAI) was applied to assess anxiety symptoms. The Beck Depression Inventory (BDI) is a symptomatic measure of depression.(10) It consists of a questionnaire with 21 multiple-choice items, with four alternatives each, scored from zero to three points.(10) The sum of the points provides a total score that indicates the intensity of depression, classified as minimal, light, moderate or severe.(10) In Brazil, extensive work was carried out to develop a Portuguese version of the BDI and to study its psychometric properties, with the authorization of The Psychological Corporation and the support of the Home of the Psychologist.(11) The classification of the intensity of depression, based on BDI scores in Brazilian norms, is: minimal (0-11), light (12-19), moderate (20-35) and severe (36-63).(11) The Perceived Stress Scale (PSS) has 14 questions with response options ranging from zero (never) to four (always).(12,13) For the data analysis, descriptive statistics were used: categorical variables were described by percentages; continuous variables were represented by a measure of central tendency (mean); and variability, by the standard deviation. The outcome variables (anxiety, depression and stress) were submitted to normality assessment by the Kolmogorov-Smirnov test and, after confirmation of normality, parametric tests were applied. Pearson's chi-square test (χ²) was used to compare dichotomous categorical variables between the groups. The means of the outcome variables were compared (formal vs. informal caregivers) using Student's t test for independent samples. The Statistical Package for the Social Sciences (SPSS) software, version 21.0, was used for the analysis, with p ≤ 0.05 considered significant.
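For instance, the Brazilian BDI cut-offs just cited map to a simple classification function (a sketch; the thresholds are those given above).

```python
def classify_bdi(score):
    """Intensity of depression under Brazilian BDI norms (total score 0-63)."""
    if score <= 11:
        return "minimal"
    if score <= 19:
        return "light"
    if score <= 35:
        return "moderate"
    return "severe"

assert classify_bdi(10) == "minimal"  # e.g. the IC mean of 10.1 -> minimal
```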
In addition, further analyses were carried out to identify factors associated with the psychological variables, treated as the indication of depressive symptoms and high levels of stress and anxiety. The sociodemographic variables and information about the care context (independent variables) were tested for association with the psychological variables (dependent variables). Through the chi-square test with odds ratio (OR) statistics and 95% confidence intervals (95%CI), we sought to analyze the associations between the dependent categorical variables and the independent ones. Associations and correlations were considered statistically significant when p ≤ 0.05.
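For illustration only, the tests described above map directly onto SciPy; the sketch below uses invented counts and scores, not the study's data.

```python
import numpy as np
from scipy import stats

# 2x2 table (illustrative counts only): rows = cared for >= 4 years (no/yes),
# columns = depressive symptoms (no/yes).
table = np.array([[5, 9],
                  [10, 2]])

chi2_stat, p_value, dof, expected = stats.chi2_contingency(table)

res = stats.contingency.odds_ratio(table)           # conditional MLE odds ratio
ci = res.confidence_interval(confidence_level=0.95)
print(res.statistic, ci.low, ci.high)

# Student's t test comparing mean anxiety (BAI) scores between the groups.
bai_informal = [11, 15, 9, 13, 10]                  # illustrative scores
bai_formal = [6, 7, 5, 8, 6]
t_stat, p_t = stats.ttest_ind(bai_informal, bai_formal)
```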
RESULTS
In Table 1, the sociodemographic characteristics of the 44 caregivers are shown: 26 (59.1%) informal (IC) and 18 (40.9%) formal (FC). The sample had a female predominance both in the IC group (96.2%) and in the FC group (94.4%), and the mean age of the IC (52.9 years) was higher than that of the FC (45.2 years). The level of education was expressed in years of study: the FC (10.1 years) had a higher level than the IC (9.6 years). As to the degree of kinship with the elderly, most of the IC were sons or daughters (65.4%) or husbands or wives (19.2%). The IC also devoted more daily hours to care than the FC, 16.6 versus 13.0 hours, indicating that the IC spend more time caring for the elderly.
In Table 2, the health profile and the support that the caregivers receive from religious institutions, NGOs, support groups, clubs or social assistance services are shown. The average number of diseases was higher among the IC (1.31) than among the FC (0.7). Regarding the health profile, the mean scores of the IC for depressive symptoms, anxiety and perceived stress were higher than those of the FC: 10.1 versus 7.1 for depressive symptoms, 11.5 versus 6.4 for anxiety, and 32.1 versus 31.7 for perceived stress.
In Table 3, the type of care provided to the elderly is shown. The FC carried out most of the care activities, the exceptions being sleep/rest (72.2%, versus 73.1% among the IC) and monitoring related to consultations (61.1%, versus 80.8% among the IC).
In Table 4, the associations between the independent variables (sociodemographic profile and information about the care context) and the dependent variables (depressive, anxiety and stress symptoms) of the informal caregivers are identified. "Caring for an elderly person for 4 years or more" and "having depressive symptoms" was the only association that proved significant; that is, caring for an elderly person for 4 years or more proved to be a protective factor against depressive symptoms.
DISCUSSION
The results showed a female predominance among the caregivers of elderly people with AD (96.2%). According to the literature, the responsibility attributed to women in the care context relates to the tendency of women to have a longer life expectancy than men, or to being younger than their partners.(14) In the current study, most caregivers were in the age group from 40 to 59 years, both informal and formal (53.8% and 50.0%, respectively). These data are similar to those described in the study of Diniz et al. (2018), in which the predominant age of formal and informal caregivers ranged between 36 and 56 years.(15) This result may be related to the fact that young adults have more vitality to carry out the care.(16) As to marital status, most caregivers of the elderly had a partner: 57.7% of the informal and 55.6% of the formal. This matches the study of Orlandi et al. (2017), which evidenced that most informal caregivers were married, since factors such as women's participation in the job market and the restructuring of family arrangements reduce the availability of people to care for the elderly.(17) In this way, partners become the only option to perform this role.(17) In addition, the study of Almeida et al. (2018) reports that married women are more prone to take on care activities than divorced, widowed or single women.(18) On the schooling question, the majority of the caregivers of elderly people with AD had 9 years of schooling or more: 53.8% of the informal and 66.7% of the formal. This result is in accordance with the findings of Leite et al. (2017), in which a great part of the informal caregivers had more than 10 years of schooling.(19) Furthermore, a similar level of schooling was also presented in the study of Diniz et al. (2018).(15) This is a fundamental factor in the care of the elderly, since low schooling may directly influence the quality of the service provided.(20) Regarding individual income, most caregivers received about 1 to 3 minimum wages: 57.7% of the informal and 94.4% of the formal. These data corroborate the study of Queiroz et al. (2018), which points out that most caregivers have an income in the same category.(21) On the other hand, in the study of Almeida et al. (2018), the caregivers of elderly people presented absence of income (30.4%), receipt of one minimum wage (34.8%) or income originating from other sources (63%).(18) The income of family caregivers may produce stressful situations, since these caregivers often combine their income with that of the elderly person in order to manage expenditures.(16)
Regarding the degree of kinship, 65.4% of the informal caregivers were sons or daughters, 19.2% were husbands or wives, 7.7% were grandchildren and 7.7% were siblings. According to Queiroz et al. (2018), the fact that most of these caregivers were sons or spouses strengthens the idea of a feeling of obligation to care for these relatives, or even a lack of resources.(21) Moreover, the authors list factors that influence the choice of the caregiver, such as proximity, coexistence, living with the elderly, free time and financial situation.(21) However, all the formal caregivers reported having no kinship with the care receiver (100.0%).
A great part of the informal caregivers reported residing with the elderly person receiving care (73.1%). According to Queiroz et al. (2018), cohabitation with the elderly results in a longer working day, making the tasks performed more frequent and, consequently, potentially increasing the caregiver's overload.(21) Support for the main caregivers is therefore essential, to enable the division of the care tasks.(17) In addition, this population needs support through public services.(22) Among the formal caregivers, 66.7% reported not residing with the elderly.
In the study population, the time spent on care, in years, was expressed by means of 6.1 and 6.4 years, respectively, for the informal and formal caregivers. In addition, the daily hours of care amounted to 16.6 hours for the informal group and 13.0 for the formal group. According to Leite et al. (2017), caregivers may not receive the necessary support, such as a reduction of the workload, division of tasks and early access to public health services, in proportion to the time required for the care; besides affecting the care of the elderly, this can give rise to adverse health outcomes for this population.(19) About the health profile, a significant difference in the number of diseases between the groups was observed, with the informal caregivers presenting a higher average. Only the informal caregivers reported having a disease (73.1%), which diverges from the study of Diniz et al. (2018), which evidenced a predominance of health complaints among formal caregivers, such as back pain; this could be explained by differences in the care activities performed or in the workload.(15) A practice suggested to reduce these impacts is the adjustment of the caregivers' workday to their individual context.(15) In addition, the current study enabled us to know the psychiatric aspects of these caregivers, through depressive, anxiety and stress symptoms. The average of depressive symptoms of the informal caregivers was higher than that found among the formal caregivers. According to Rossi et al. (2015), depression is one of the most recurrent symptoms among informal caregivers and can persist for up to four years after the elderly person's death.(16) According to the literature, caregivers of people with AD tend to present higher indexes of depression, contributing both to adverse outcomes for the caregiver and to the early institutionalization of the care receiver.(23) Regarding anxiety symptoms, a significant distinction between the groups was noticed, given that the average among the informal caregivers was also higher than among the formal caregivers. These results agree with the study of Rossi et al. (2015), in which anxiety was one of the most frequent consequences after assuming the role of family caregiver.(16) According to Rossi et al. (2015), taking responsibility for the care of a dependent elderly person, mainly with AD, is an impactful experience in the caregiver's life.(16) Even more so when the care is provided by a family member, since the need for care often emerges unpredictably and the task must be performed constantly, which can cause changes in the caregiver's biopsychosocial health.(16) As to perceived stress, neither the informal nor the formal caregivers demonstrated significant levels of stress; however, the informal caregivers showed a higher average for this symptom. Considering the national scenario, similar data were presented in a study conducted in the interior of São Paulo State, Brazil, in which the caregivers scored an average of 19.9 on this scale.(24) Beyond this, in the study of Silva et al. (2018), stress was among the most prevalent symptoms reported by the informal caregivers.
According to the type and frequency of the activities performed by the caregiver, stressful events may become established, bringing adverse consequences to this individual's life, such as a worsening of quality of life.(20) It should therefore be emphasized that, in the comparative analysis of depressive, anxiety and perceived stress symptoms, the informal caregivers presented higher levels of these psychological aspects than the formal caregivers. Factors such as the feeling of overload resulting from the workload spent on the care of the elderly, the complexity of the care and the common disregard of their own needs increase the probability that these individuals will manifest depressive and anxiety symptoms.(15) With regard to the type of care provided to the elderly, the informal caregivers reported performing, more regularly, help with feeding, medication control and accompaniment to consultations. On the other hand, the formal caregivers reported carrying out feeding, medication control, body hygiene and oral hygiene activities. The repercussions of the disease imply consequences both for the affected elderly and for the responsible caregiver, since the impairment of cognitive and behavioral functions influences the activities of daily living, such as maintaining body hygiene, getting dressed, moving, maintaining urinary and fecal continence, feeding, taking medication and keeping the house in order, among others.(6) In the current study, performing the function of caregiver of an elderly person with AD for four years or more was shown to be a protective factor against depressive symptoms. According to the study of Scalco et al. (2013), after the impact generated by the responsibility of caring for a dependent elderly person, and beyond the implications of exerting this function, informal caregivers begin to seek acceptance of the elderly person's condition, as well as adaptation to the current situation.(25) In this direction, some studies relate this behavior to greater resilience; that is, the development of positive attitudes that help withstand the array of negative, health-damaging factors arising from the care process over the years is strongly associated with lower rates of depression, better physical health and social support.(26)
Other authors argue that, because the caregivers exert this function for a considerable period of time, positive feelings are awakened regarding the caregiver-elderly relationship.(27) Therefore, this research reinforces the importance of studying the differences and similarities between formal and informal caregivers, and of referring them to a social support network and companionship groups, which enable the exchange of information about the elderly person's disease, maximizing the care already performed, in addition to the exchange of experiences about the feelings and expectations of this population in the face of the task of caring.(28) Moreover, there is a need for the creation and implementation of public health policies addressed to the assistance of these individuals, and for the creation of a care plan, in order to point out ways to reduce the impact generated, as well as to expand the support network for caregivers of dependent elderly people.(29) In view of this, it is necessary to direct our attention to the caregiver, who is frequently affected by psychiatric disturbances such as depression, anxiety and stress arising from the task of caring. To that end, it is necessary to guide caregivers by addressing their expectations and goals, adapting them to the situation and, in addition, directing them to the available means of support.
This study has some limitations. As it involved caregivers of elderly people with Alzheimer's disease, data collection was difficult for both the formal and the informal caregivers, since the type of care they provide often requires full-time dedication, limiting their availability and resulting in a small sample size. In addition, the cross-sectional design did not allow causality to be established among the variables, limiting the analysis of the results.
CONCLUSION
The study evaluated and compared the sociodemographic and health profiles of formal and informal caregivers of elderly people with Alzheimer's disease. The significant differences were those related to kinship with the elderly, cohabitation with the elderly, anxiety symptoms and the care provided to the elderly (body hygiene, oral hygiene and feeding activities), with values more pronounced for the informal, related caregivers when compared with the formal, non-related caregivers.
In addition, caring for the elderly person for 4 years or more proved to be a protective factor against depressive symptoms, for both the formal and the informal caregiver. The study also contributes to the discussion about the planning of interventions such as support groups, the implementation of public policies and the development of care plans, in order to organize the assistance provided to caregivers, aiming at health promotion and disease prevention for these individuals.
Therefore, support is considered essential not only for the elderly who receive the care, but also for the caregiver, through actions that address their individual needs. Such practice should serve as a support mechanism for this population, ensuring not only the reduction of depressive, anxiety and perceived stress symptoms in the caregivers, but also the improvement of the quality of care provided to the elderly with Alzheimer's disease.
Given the findings of this study and the limiting factors cited, future research is suggested, in view of the relevance of understanding the sociodemographic profile and the depressive, anxiety and perceived stress symptoms in caregivers of the elderly with AD, and given the scarcity of studies on these themes.
Table 2: Distribution of the caregivers according to the health profile. São Carlos-SP, 2018. *SD = standard deviation; **Student's t test.
Table 3: Distribution of the caregivers according to the type of care provided to the elderly. São Carlos-SP, 2018.
Nursing Activities Score and workload in the intensive care unit of a university hospital
Objective: The nursing workload consists of the time spent by the nursing staff to perform the activities for which they are responsible, whether directly or indirectly related to patient care. The aim of this study was to evaluate the nursing workload in an adult intensive care unit at a university hospital using the Nursing Activities Score (NAS) instrument. Methods: A longitudinal, prospective study involving the patients admitted to the intensive care unit of a university hospital between March and December 2008. Data were collected daily to calculate the NAS, the Acute Physiology and Chronic Health Evaluation (APACHE II), the Sequential Organ Failure Assessment (SOFA) and the Therapeutic Intervention Scoring System (TISS-28) scores of the patients until they left the adult intensive care unit or completed 90 days of hospitalization. The level of significance was set at 5%. Results: In total, 437 patients were evaluated, which resulted in a mean NAS of 74.4%. The type of admission, the length of stay in the intensive care unit and the patients' condition when leaving the intensive care unit and hospital were variables associated with differences in the nursing workload. There was a moderate correlation between the mean NAS and the APACHE II severity score (r=0.329), the mean SOFA organ dysfunction score (r=0.506) and the mean TISS-28 score (r=0.600). Conclusion: We observed a high nursing workload in this study. These results can assist in planning the size of the staff required. The workload was influenced by clinical characteristics, including an increased workload for emergency surgical patients and patients who died.
INTRODUCTION
The nursing workload in hospitals has been discussed globally because of its implications on the quality of patient care. (1) In intensive care units (ICU), concern is increasing because of the effect of new technologies on care, the changing profile of critically ill patients and the need for skilled labor. (1) In an ICU, nurses note daily whether critical patients will require prolonged assistance regarding the performance of routine procedures both upon admission and during their stay because organ instability can occur at any time over the course of the patients' stays in these units. (2) This variability challenges the balance between an adequate delivery of care and the rational use of resources. (3) evaluated, which resulted in an NAS of 74.4%. The type of admission, length of stay in the intensive care unit and the patients' condition when leaving the intensive care unit and hospital were variables associated with differences in the nursing workload. There was a moderate correlation between the mean NAS and APACHE II severity score (r=0.329), the mean organic dysfunction SOFA score (r=0.506) and the mean TISS-28 score (r=0.600).
Conclusion:
We observed a high nursing workload in this study. These results can assist in planning the size of the staff required. The workload was influenced by clinical characteristics, including an increased workload required for emergency surgical patients and patients who died.
Nursing workload consists of the time spent by the nursing staff to perform the activities for which they are responsible, whether directly or indirectly related to patient care. These activities can change depending on the patient's degree of dependency, the complexity of the disease, the characteristics of the institution, the work processes, the physical layout and the nature of the professional team.(4) The nursing workload also includes activities unrelated to the patient or his/her family that are nonetheless part of the nurses' responsibilities during their work shifts, such as nursing education (monitoring students, training staff) and organizational and administrative work.(5) Thus, the nursing workload is the total set of needs that must be fulfilled relative to the nursing staff available to fulfill them, which ultimately translates into time of care.
The various studies that have described nursing workloads have shown that the demographic and clinical characteristics of severely ill patients were not associated with differences in the measured nursing work.(1,6-8) When evaluating nursing work in terms of patient severity, some authors reported that the Nursing Activities Score (NAS) at admission was associated with longer stays in the ICU.(6,7) In addition, there was an association between mortality and the NAS, showing that patients who did not survive generated an increased nursing workload.(8) To optimize financial resources and properly allocate human resources in an ICU, prioritizing quality and safety of care, ICU performance must be evaluated using prognostic indices and by measuring the nursing workload. The latter is important because, among the healthcare teams working in the ICU, it is the nursing staff that spends the most time at the patient's bedside, performing procedures and therapeutic interventions.(9) Thus, the aim of this study was to evaluate the nursing workload in an adult ICU of a university hospital using the NAS instrument and to analyze the effects of demographic and clinical characteristics on this workload.

This study was conducted in an adult ICU with 17 beds: ten designated for surgical inpatients and clinical patients not infected with multi-drug-resistant bacteria, and seven intended for patients who may be infected with and/or colonized by multi-drug-resistant bacteria and therefore require isolation. The adult ICU nursing staff consists of one nurse for every ten beds and one nursing technician for every two beds.
The sample included all patients admitted consecutively to the ICU during the study period. Patients under 18 years old or whose stay in the ICU was less than 24 hours were excluded. For patients who had more than one ICU admission (readmission), only the first admission was considered for the analysis. Patients transferred from the ICU to another hospital/service were considered lost to follow-up.
To characterize the patients, identification and ICU stay data were collected. The identification data included initials, gender, date of birth, clinic, medical record number and service number. The data collected on adult ICU stay included the date and time of admission, origin (ward, emergency room or surgical center), type of admission (medical, elective surgery or emergency surgery), admission diagnosis, date and time of discharge from the ICU, condition at discharge from the ICU (alive or deceased) and condition at discharge from the hospital (alive or deceased).
The institution where the study was performed routinely applies the Therapeutic Intervention Scoring System (TISS-28) to measure and characterize the nursing workload in the ICU, in addition to the Acute Physiology and Chronic Health Evaluation (APACHE II) severity score and the Sequential Organ Failure Assessment (SOFA) organ dysfunction score to characterize patient severity. These routine data were evaluated for the study patients and added to the data collected for the NAS. These data were obtained daily from the patients' medical records. The data were collected until the patient was discharged from the adult ICU or until the patient reached 90 days of hospitalization.
The NAS instrument is divided into 7 major categories and 23 activities. These categories include basic activities (monitoring and controls, laboratory tests, medication, hygiene procedures, drain care, mobilization and positioning, support and care for families and patients and administrative and managerial tasks), ventilatory support, cardiovascular support, renal support, neurological support, metabolic support and specific interventions. (6) To evaluate the nursing workload in the adult ICU, the TISS-28 (10) and NAS instruments were applied. (6) In applying these instruments, some observations were considered: a 24-hr period was considered from 7 am until 7 am the following day; on the first day of hospitalization, the activities performed were computed from the time of ICU admission until 7 am the following day; on the day of discharge, interventions were considered from their last application until the time of discharge; and items that did not occur during the application of the instrument received a score of zero.
In this study, the SOFA, TISS-28 and NAS scores were initially measured on the day of ICU admission, designated as SOFA-admission, TISS-28-admission and NAS-admission, respectively.
The APACHE II score (11) was collected to characterize the study population based on patient severity and mortality risk. The calculation for this score was based on data obtained over the initial 24 hours following admission to the ICU. The definition of chronic disease followed the criteria described by this score.
To detect changes in organ function, all patients received a SOFA evaluation score, (12) which included an assessment of the six major organ systems: respiratory, renal, hepatic, coagulation, cardiovascular and central nervous. Organ dysfunction was quantified using scores ranging from zero to four, and the worst values over that 24-hr period for each organ were used.
Statistical analysis
Continuous quantitative variables were described after assessing whether they were normally distributed. For this evaluation, the Shapiro-Wilk test was used. For variables with a normal distribution, the means and standard deviations were calculated. The nominal categorical variables were described using the absolute and relative frequencies (%) of each variable.
The continuous variables were compared and correlated after assessing whether they were normally distributed. For normally distributed data, Student's t-test was used for comparisons between two groups, and analysis of variance was used for comparisons between more than two groups. For data with a non-normal distribution, a Mann-Whitney test was used for comparisons between two groups, and a Kruskal-Wallis test was used for comparisons between more than two groups.
Pearson's correlation coefficients were used to assess the correlations between the continuous variables. To analyze the magnitude of the correlations, the following reference values were adopted: weak, <0.30; moderate, 0.30 to 0.60; strong, >0.60 to 0.99; and perfect, 1.00. The level of significance was set at 5%.
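To make the decision logic concrete, the hypothetical Python sketch below (using SciPy; the function and variable names are ours, not the study's) chains the Shapiro-Wilk check to the choice of comparison test and classifies a Pearson coefficient with the reference values adopted here.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Student's t-test if both samples look normal (Shapiro-Wilk),
    otherwise the Mann-Whitney U test, as described above."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b)
    return "Mann-Whitney", stats.mannwhitneyu(a, b)

def correlation_strength(r):
    """Classify |r| using the reference values adopted in this study."""
    r = abs(r)
    if r == 1.0:
        return "perfect"
    if r > 0.60:
        return "strong"
    if r >= 0.30:
        return "moderate"
    return "weak"

# e.g., the NAS-vs-SOFA correlation reported here (r = 0.506):
print(correlation_strength(0.506))  # -> "moderate"
```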
RESULTS
During the study period, 622 patients were admitted to the ICU, and 19 patients were excluded from the study because they were under 18 years of age; 35 readmissions and 131 patients who remained in the ICU for less than 24 hours were also excluded. None of the patients were transferred to another hospital/service. Thus, a total of 437 patients were evaluated.
The demographic and clinical characteristics of these 437 patients evaluated using the NAS during the study period are shown in table 1.
The results of the comparisons between the mean NAS according to the demographic and clinical characteristics of the patients are described in table 2, which shows that the type of admission, length of stay in the ICU and condition at discharge from the ICU and hospital were the variables associated with differences in nursing workload.
The analysis in table 3 shows that the correlations between the mean NAS-admission and the APACHE II, mean SOFA-admission and mean TISS-28-admission scores were significant (p<0.001). These correlations were of moderate magnitude.
DISCUSSION
This study evaluated the nursing workload described by the NAS for patients admitted to a medical-surgical ICU. The high mean NAS observed in the study indicates that each patient required more than half of one nursing professional's working time, thus suggesting an ideal proportion of one nursing professional per ICU bed.
This issue is of fundamental interest because an oversized staff results in higher costs. (5) Conversely, smaller teams tend to impair the quality of care, interfering with patient safety, (9) prolonging hospitalization and generating higher costs. (13) The indices that measure the nursing workload provide an adequate assessment of the complexity of the patient, the nursing time required to provide care, the number of nurses necessary per shift and the material resources required. (1) The index most described in the literature is the TISS-28, a simplified version of the original TISS. (12) Despite the importance of this instrument, its practical application showed structural failures in fully measuring nursing workloads because activities related to indirect patient care, such as organizational tasks, were not included in its composition. (14) The NAS was developed by Miranda et al. (6) and has been increasingly used in the ICU. This score includes a large number of activities performed by the nursing staff, (15) primarily in the category "basic activities", with greater detail for the items "monitoring and control", "hygiene procedures" and "mobilization and positioning the patient" and the inclusion of items for "support and care for family members/patients" and "administrative and managerial tasks".
The score generated using the NAS scoring system directly expresses the percentage of time spent by the nursing staff caring for critically ill patients and can range from zero to 176.8%, (6) i.e., it represents how much of a staff member's working time was required by a patient over the last 24 hours. Thus, a score of 100 points indicates that a patient required 100% of one nurse's time in the past 24 hours. (13) Each point in the NAS is equivalent to providing 14.4 minutes of nursing care. (1)

With respect to the study population, the demographic and clinical characteristics of our patients were comparable to those in recent studies conducted on critically ill patients, although our average age was slightly higher than that of a study performed in a teaching hospital (16) and similar to that of a study conducted in an ICU of a large hospital. (17) The mean APACHE II scores were higher in our study than those obtained in a study conducted in a general ICU of a university hospital. (18) The balanced ratio between surgical and clinical patients observed in our study was similar to that in a study conducted in a gastroenterology unit (19) but differed from the data reported in other studies, including a reported predominance of surgical patients in a study conducted in an ICU of a large hospital (17) and a predominance of clinical patients in a study that included elderly patients. (20) In a recent study, Nogueira et al. described differences in NAS measurements between patients treated in public and private institutions, in which patients in public institutions had a higher mean NAS at admission (68.1) than patients in private institutions (56.0). (21) Our patients were admitted to a public hospital and had a high mean NAS score at admission (87.5), which was consistent with the data in the literature.
Few studies (22,23) have reported mean NAS scores similar to ours, which indicates an increased demand for nursing care. Resolution 26 of the National Health Surveillance Agency (Agência Nacional de Vigilância Sanitária - ANVISA) (24) of May 11, 2012, which established the minimum operation requirements for an ICU and made other provisions, recommended a ratio of one nursing technician for every two beds and a ratio of one clinical nurse for every ten beds or fraction thereof for each work shift. This proportion of staff per bed may be considered inadequate for the care of the patients in this study because a high nursing workload can affect the safety and quality of the care provided to patients.
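To make the staffing arithmetic concrete, a minimal sketch is shown below; it converts NAS points into nursing time using the 14.4-minute equivalence cited above and sums NAS percentages into the number of professionals required. The numbers are illustrative only, not patient-level data from this study.

```python
MINUTES_PER_NAS_POINT = 14.4  # 100 NAS points = 1440 min = 24 h of one professional

def care_minutes(nas_score: float) -> float:
    """Nursing time (minutes per 24 h) implied by a patient's NAS score."""
    return nas_score * MINUTES_PER_NAS_POINT

def staff_needed(nas_scores) -> float:
    """Nursing professionals needed per 24 h for a set of patients:
    the sum of NAS percentages divided by 100."""
    return sum(nas_scores) / 100.0

# Illustrative: ten beds, each patient at the study's mean NAS of 74.4%
scores = [74.4] * 10
print(care_minutes(74.4))    # ~1071 min, i.e., >70% of one professional's day
print(staff_needed(scores))  # ~7.4 professionals for 10 patients
```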
The workload assessed by the overall mean of the NAS over the course of hospitalization was not influenced by differences in demographic characteristics, such as gender, age range or patient origin. These results were similar to those reported in other studies (25,26) that evaluated the performance of the NAS in patients admitted to an ICU.
Patients admitted for post-operative care following emergency surgery required a higher workload in our study, a result that differed slightly from other reports in the literature. (25,26) Regarding the length of stay in the ICU, various studies have shown that workload is directly associated with the ICU stay, i.e., patients who required a higher workload at the beginning of their stay had a longer overall stay in the ICU. (26,27) Notably, in our case series, patients with shorter stays (up to two days) required the highest nursing workload. From analyzing this group of patients, we observed that the majority did not survive; therefore, this result was likely influenced by the outcome of the patient.
Regarding the discharge condition of patients from the ICU and hospital, higher workloads were associated with deceased patients, i.e., patients who died required a higher workload than those who survived, a result consistent with the literature. (28) This is likely because the dysfunction of multiple organs and systems is a frequent cause of death for ICU patients, (29) and this clinical condition requires establishing several alternative therapies, which thus increases the nursing workload.
The moderate correlation observed between the NAS and the APACHE II disease severity and SOFA organ dysfunction scores reinforced the concept that the nursing workload is not only associated with patient severity and the intensity of the interventions and procedures performed but also covers a broader array of activities involving the clinical, administrative, educational and organizational dimensions of an ICU. Higher degrees of correlation have been described by other authors, both for patient severity (30,31) and for the presence of organ dysfunction. (25) For the moderate correlation between the NAS and TISS-28, the results in the literature are largely conflicting, with one study (32) reporting a strong and significant correlation and another study (6) reporting a significant yet moderate correlation between the NAS and TISS-28. These results reflect that although both instruments measure the nursing workload, the TISS-28 measures the nursing work that involves direct patient contact, whereas the NAS more fully evaluates the nursing activities and functions in an ICU. Because the research institution exclusively applied the TISS-28 score to evaluate the nursing workload, the results of this study suggest that the routine incorporation and collection of NAS data in this setting would result in a more accurate assessment of staff size.
Some limitations should be considered. The primary limitation of this study was that the data originated from a single ICU; thus, caution should be used in extrapolating the results to other institutions with similar characteristics. In addition, the study population was a case-mix population; therefore, these results should be interpreted with caution for specific patient groups.
CONCLUSION
This study, conducted in a general intensive care unit, found a high mean Nursing Activities Score, which indicates a high nursing workload at this hospital. The characteristics associated with an increased nursing workload included the type of admission (emergency surgery) and patient outcome (deceased). Patient severity and organ dysfunction were moderately correlated with the nursing workload.
Male Drosophila melanogaster learn to prefer an arbitrary trait associated with female mating status.
Although males are generally less discriminating than females when it comes to choosing a mate, they still benefit from distinguishing between mates that are receptive to courtship and those that are not, in order to avoid wasting time and energy. It is known that males of Drosophila melanogaster are able to learn to associate olfactory and gustatory cues with female receptivity, but the role of more arbitrary, visual cues in mate choice learning has been overlooked to date in this species. We therefore carried out a series of experiments to determine: 1) whether males had a baseline preference for female eye color (red versus brown), 2) if males could learn to associate an eye color cue with female receptivity, and 3) whether this association disappeared when the males were unable to use this visual cue in the dark. We found that naïve males had no baseline preference for females of either eye color, but that males which were trained with sexually receptive females of a given eye color showed a preference for that color during a standard binary choice experiment. The learned cue was indeed likely to be truly visual, since the preference disappeared when the binary choice phase of the experiment was carried out in darkness. This is, to our knowledge, 1) the first evidence that male D. melanogaster can use more arbitrary cues and 2) the first evidence that males use visual cues during mate choice learning. Our findings suggest that D. melanogaster has untapped potential as a model system for mate choice learning.
Being able to distinguish between mates that are receptive to courtship and those that are not can help an individual to avoid wasting time and energy. Thus, cues that indicate receptivity are especially valuable to males in deciding which females to court and eventually mate with. Understanding how individuals make decisions in mate choice is important in order to understand the dynamics of sexual selection and reproductive isolation (Verzijden et al., 2012). Mate choice decision-making can be aided by learning from experience, and many species have been shown to learn some aspects of their mate choice (Dukas, 2006; Gailey et al., 1982; Kozak and Boughman, 2009; Magurran and Ramnarine, 2004; Svensson et al., 2010; ten Cate and Vos, 1999; Verzijden et al., 2012). In particular, when learning which females are receptive to their courtship, males of several species show associative learning between particular cues and female mating status. For instance, D. melanogaster males learn to associate a pheromone, cis-vaccenyl acetate (cVA), with mated females, and will subsequently reduce courtship to females that carry this pheromone (Ejima et al., 2007; Keleman et al., 2012). Similarly, male rove beetles discriminate unreceptive females by cuticular hydrocarbons (Schlechter-Helas et al., 2012). Such learning is not restricted to within-species variation: in several species, males can also learn to stop courting heterospecific females after rejection from such females (Dukas, 2004, 2008; Magurran and Ramnarine, 2004).
This learning behavior is a valuable field of study for a number of reasons. First, it is a useful model for how the interaction between experience and genetic predispositions can explain variation in animal behavior (Bateson and Laland, 2013). Second, because mate choice directly influences the genetic composition of the next generation, it is important for evolutionary processes such as sexual selection and speciation (Verzijden et al., 2012). Third, learned mate choice allows us to study the underlying cellular mechanisms of this behavior, by mapping the interplay between brain regions, the function of individual neurons (e.g., Datta et al., 2008; Keleman et al., 2012) and even epigenetic cellular processes (e.g., Kramer et al., 2011). Drosophila species have been used as a model organism to great advantage within all three of these research aims.
Drosophila melanogaster males are known to learn aspects of their mating behavior. They learn to suppress courtship towards females that are unreceptive, either because they are unable to mate (immature) (Ejima et al., 2005), unwilling to mate because they were recently mated (Ejima et al., 2007), or unwilling to mate because they are of another species (Dukas, 2004, 2008). Males will suppress their courtship behavior in general for several hours after rejection. This is usually mediated through a short-term memory of the experience (Siegel and Hall, 1979). When rejection occurs repeatedly over the course of a training period of several hours, males form a long-term memory, in which they learn to associate female rejection behavior with olfactory cues of the females (Ejima et al., 2005; Ejima et al., 2007; Griffith and Ejima, 2009a). Males will then continue to suppress courtship to females with a similar pheromone profile. For instance, immature virgin females have a different pheromone profile than mature virgin females, and mated females emit pheromones that are either present in the male sperm or are transferred by males upon close contact during mating (Ejima et al., 2007). Species also differ in pheromone profiles, which probably facilitates species discrimination by males after experience (e.g., Blows and Allan, 1998; Dyer et al., 2014; Jallon and David, 1987).
Studies on the proximate factors involved in male courtship learning have focused on olfactory/gustatory memories. However, male courtship is not only guided by olfactory cues; visual and tactile cues are also used, and male courtship only ceases if all three modalities are impaired (Krstic et al., 2009). In fact, female D. melanogaster have shown preferences for visual traits (Katayama et al., 2014), and vision is an important sensory modality in courtship (Griffith and Ejima, 2009b), indicating that visual traits are commonly evaluated in mate choice. Drosophila species show intra- and interspecific variation in several visual cues, such as abdominal pigmentation (Matute and Harris, 2013), eye color, and wing interference patterns (Shevtsova et al., 2011), yet it has to date not been shown that males can learn to associate female receptivity to mating with a visual trait. This may be because visual traits are typically unrelated to mating status (although immature individuals are paler) and are less likely to change over short time spans, whereas pheromone profiles are a direct and honest cue (Ejima et al., 2007). However, from an ecological point of view, i.e., outside of the laboratory mating settings of most studies on male mate choice learning in D. melanogaster, the offspring of one female are likely to emerge within a short time span, and within a relatively short distance of each other. They are likely to share distinguishing visual traits, such as the degree of abdominal pigmentation (e.g., Gibert et al., 2004b; Gibert et al., 1998) and eye color (Nitasaka et al., 1995), and are likely to have the same mating status, at least shortly after emergence. Integrating cues from several sensory modalities, such as vision and smell, would potentially aid males in rapidly deciding which females to court and which not (Griffith and Ejima, 2009a). Furthermore, male Drosophila learn to avoid heterospecific females (Dukas, 2004, 2008, 2009), and this learning too could be based on visual differences between species (e.g., Gibert et al., 2004a; Llopart et al., 2002).
Here we ask if a male is able to learn to associate a trait that is initially arbitrary in relation to the female's receptivity with her willingness to mate. The trait in question is eye color: brown or red. The wildtype eye color of Drosophila is red, but brown eye color is a single-gene mutation, found in the wild in several species (Aparisi and Najera, 1990; Ashadevi and Ramesh, 2000; Nitasaka et al., 1995). By giving males experience with both red-eyed and brown-eyed females that were either immature virgins (and thus unable and unwilling to mate) or mature virgins (willing to mate), we challenged males to associate the mating status of females with their eye color.
Materials and Methods
We used a large (Ne > 1800) outbred D. melanogaster population, LHm (Rice et al., 2005), and a replicate population, LHm-bw, in which all individuals are brown-eyed owing to the recessive bw marker but which otherwise has the same genetic background as the main LHm population. To ensure that the LHm and LHm-bw populations were equally genetically heterogeneous, we crossed the two populations for 10 generations prior to this experiment. The LHm populations are maintained on a 14-day cycle at 25°C with a 12:12 hour light/dark cycle. Males were collected and separated from females under light CO2 anesthesia, within 3 hours of emergence from their pupae, and kept at 20 individuals per vial with 7 ml of food.
The gene responsible for brown eyes (bw1) is part of a pathway that is also involved in the production of serotonin and dopamine, neuromodulators that are important in reward experience and memory formation (Krstic et al., 2013). For this reason, we only used red-eyed males. At the same time as males were collected, we also collected virgin females and separated them by eye color, with 40 individuals per vial with 7 ml of food. 48 hours after the initial collection, we again collected virgin females, within 2 hours of emergence, and kept them by eye color for another 2 hours after collection.
The training phase
We then set up vials containing either 40 mature virgin females (> 48 hours after emergence) with red eyes and 40 immature females with brown eyes (up to 4 hours after emergence), or 40 mature virgin females with brown eyes and 40 immature females with red eyes. We then introduced 20 virgin males to each vial, giving a male-to-female ratio of 1:4 in each vial. We chose a highly female-biased sex ratio in order to ensure that all males would have ample opportunity to gain experience with females and to reduce possible effects of male-male competition. Males were then allowed to interact with the females for up to 1.5 hours, until mating ceased. Vials were inspected for mating pairs every 15 minutes. Only matings with mature females were observed. All individuals were then anesthetized, and males were separated from the females and kept in a fresh vial. The females were discarded.
The testing phase
After 24 hours, males were individually placed in a vial with 2 mature virgin females, one red-eyed and one brown-eyed, which had been collected in the initial collection round (i.e., > 40 hours after emergence), and were observed for 30 minutes. Each minute, we noted whether males were courting (orientation, wing extension, and following) the red- or brown-eyed female, until they mated with one of the females. The duration of the mating was then also scored (one-minute precision). After that, all individuals were discarded.
We tested 53 naïve males for a baseline preference for eye color. We then tested 83 males that had experienced brown-eyed mature virgin females in the training phase and 78 males that had experienced red-eyed mature virgin females in the training phase.
After obtaining positive results from the above training (see results below), we proceeded to test whether the males were truly choosing the females based on their eye color, and repeated the experiment with the final testing phase in a dark room under dim red light from a 15 W darkroom safelight (model 4018, Kaiser Fototechnik GmbH & Co. KG, Buchen, Germany). The light emission of these lights in the spectrum below 600 nm is negligible, whereas D. melanogaster light sensitivity is negligible above 600 nm (Schnaitmann et al., 2013), meaning that the flies are effectively blind under these conditions. Such light conditions have previously been used successfully to observe D. melanogaster courtship without visual stimuli (Joiner and Griffith, 2000). These light conditions also made it impossible for human observers to distinguish red-eyed and brown-eyed females; thus, we were not able to note which of the two females the males were courting. When a male mated with a female, the vial was taken out of the dark room and the eye color of the female was scored.
Statistical analysis
Chi-square tests were used to test whether males had a baseline preference for eye color, whether they had a preference for eye color after training, and finally whether they had a preference for eye-color genotype in darkness. To test whether males courted females of one eye color more than the other, paired t-tests were performed. All statistical analyses were done in R (version 3.1.3).
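The analyses were run in R (version 3.1.3); for readers who prefer Python, the sketch below reproduces the same logic with SciPy on made-up counts (the values are illustrative and only loosely resemble the reported percentages, not the raw experimental data).

```python
import numpy as np
from scipy import stats

# Hypothetical counts: how many trained males mated with the eye color
# matching vs. not matching their training (not the paper's raw data).
matched, unmatched = 92, 63
chi2, p = stats.chisquare([matched, unmatched])  # H0: 50/50 choice
print(f"chi2 = {chi2:.3f}, df = 1, p = {p:.4f}")

# Paired t-test: per-male courtship counts toward each eye color during
# the 30-minute observation (again, simulated illustrative numbers).
rng = np.random.default_rng(1)
court_trained_color = rng.poisson(8, size=78)
court_other_color = rng.poisson(4, size=78)
t, p = stats.ttest_rel(court_trained_color, court_other_color)
print(f"t = {t:.3f}, p = {p:.4f}")
```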
Preference tests under normal light conditions.
Males mated more often with the female with the eye color that was associated with the mature females in the training phase, and this effect was similar in the two treatments: 60% of the red-eye-trained males mated with red-eyed females, and 59% of the brown-eye-trained males mated with brown-eyed females (Fig. 1): χ² = 4.7196, df = 1, P = 0.0298. All but six of the 161 males we trained mated.
Prior to mating, males also courted the female with the eye color associated with the mature females in the training phase twice as often: t(77) = 3.412, P = 0.001 for males trained with mature red-eyed females; t(82) = -3.026, P = 0.003 for males trained with mature brown-eyed females (Table 1).
Males that mated with the 'preferred' eye color did not mate faster than males that mated with the other eye color (t(59.22) = 1.7788, P = 0.080 for males trained with mature brown-eyed females; t(72.33) = -0.043, P = 0.966 for males trained with mature red-eyed females). The duration of mating was longer for males trained with brown-eyed mature females when mating with brown-eyed females than for those mating with red-eyed females: t(77.87) = -3.652, P = 0.0005. However, there was no similar trend for males trained with red-eyed mature females (t(70.89) = -0.32, P = 0.750) (Table 1).
Preference test under dark conditions
We tested 112 males under dark conditions that were collected and trained identically (under light conditions) as described above: 55 trained with mature brown-eyed females and immature red-eyed females, and 59 males trained with mature red-eyed females and immature brown-eyed females. Males mated equally often with red-eyed females as with brown-eyed females: χ² = 0.0321, df = 1, P = 0.8578, and this effect was similar in the two treatments: 48% of the males trained with mature red-eyed females mated with red-eyed females, and 49% of the males trained with mature brown-eyed females mated with brown-eyed females (Fig. 1).
Discussion
After experience with mature and immature virgins, males preferred to mate with females with the eye color that was associated with the mature females. Naïve males showed no preference for female eye color, and males preferred females with brown and red eye colors similarly according to their experience. We thus conclude that males learned to associate the female eye color trait with their mating status and associated receptivity to mate. The eye color trait of the females is generally unrelated to their mating status, and thus in that sense arbitrary. Only through training with mature and immature females with different eye colors did males learn to prefer this trait, generalizing from their previous experience of mating with mature females and rejection by the immature females. Immature D. melanogaster females show different behavior towards courting males than mature virgin females (Dukas and Scott, 2015, this issue). These behavioral cues are presumably associated with the success of the males' courtship attempts and, in this case, with the eye color of the females.
The exact mechanism behind this learned preference for eye color is not entirely clear from these experiments; however, it has previously been shown that female pheromone profiles related to age or mating status act as a conditioned stimulus, and that this learning is associative learning (Griffith and Ejima, 2009a). The eye color trait may similarly be a conditioned stimulus. The neuronal and circuit mechanisms are unknown, as are the anatomical structures involved. However, it is likely that the mushroom body is involved, since visual learning has recently been located in this brain center (Vogt et al., 2014). We show that males can learn to associate (and prefer) a female trait initially arbitrary with respect to her mating status. It is very likely that the males used the visual (eye color) trait, since males did not distinguish between females under dark conditions. It has been shown that olfaction-impaired male D. melanogaster can learn to suppress courtship after rejection by females, but are later unable to distinguish between females of various mating statuses (Ejima et al., 2005). However, this does show that they are able to use non-olfactory sensory information for courtship. Here we show that males with intact olfaction rely on visual information to distinguish between equally mature females whose eye colors have previously marked female receptivity.
The bw gene that is responsible for the brown eye color has pleiotropic effects on the production of neurotransmitters (Krstic et al., 2013), but it is not known whether it might also affect the pheromone profile of the individuals, or the behavior of the females. Thus, it is conceivable that males may not have associated the visual eye-color trait with female mating status, but rather a second, unknown trait. However, by testing the trained males in the dark, we confirmed that males could not distinguish between the females without visual cues, strengthening our preliminary conclusion that males indeed used the visual cue under normal light conditions. It is possible that under dark conditions the males did not identify that there were two females, and thus mated at random. If this were the case, it would imply that males could not distinguish between the females based on non-visual cues, which supports our suggestion that the males used eye color to distinguish between the females. Specifically, if red-eyed and brown-eyed females differed enough in pheromone profile that males could detect and learn to make associations with these differences, then the males tested in darkness would probably have sensed the two different smells and responded accordingly. Similarly, if eye color had affected mating-relevant behavior, this would have resulted in a mating bias. Instead, the probability of mating with one eye color over the other was equal in both test groups in the dark, as well as under light conditions with the naïve males. Alternatively, it is possible that if there were a pheromone difference between red- and brown-eyed females, males used it to distinguish between them under light conditions, but that the dark conditions, and thus the lack of visual sensory input, caused males to switch to different molecular or neurological circuits, bypassing the memory formation previously established by training under light conditions (Griffith and Ejima, 2009a; Joiner and Griffith, 2000).
How can the ability to learn to associate an arbitrary trait with mating status within a species influence the outcome of sexual selection? We expect that it causes the pattern of sexual selection to become more random, both with respect to the traits under selection and the direction of the preferences for those traits. Males that learn to prefer the distinguishing trait of females that are receptive to mating will locally cause a positive frequency-dependent effect for females bearing this trait. Such patterns will then be highly transient, since within days the availability of receptive females can change. Furthermore, we speculate that such memories might last only a number of days (McBride et al., 1999). However, such temporal fluctuations in mate preferences are a possible driver of the maintenance of genetic variation for multiple traits (e.g., Chaine and Lyon, 2008; Lehtonen et al., 2010). Male Drosophila have also been shown to learn to reduce courtship towards heterospecific females (Dukas, 2004, 2008; Dukas and Dukas, 2012), and in this case such associative learning could potentially contribute to increased phenotypic species differentiation, especially if females also employ such learning behavior (Servedio and Dukas, 2013). Interestingly, Morier-Genoud and Kawecki (2015, this issue) showed in a simulation model that when males learn to adjust their courtship effort towards females that are more receptive to them, this can increase their reproductive success, and this learning can strengthen sexual selection in the population by increasing the variation in male fitness.
In conclusion, this is the first evidence that male Drosophila melanogaster can make mate choice decisions based on learning of an arbitrary visual cue. Given the increasing interest in how learning can influence speciation (Verzijden et al., 2012), and the wealth of knowledge on Drosophila neurobiology, this suggests that D. melanogaster has untapped potential as a model system for increasing our understanding of the mechanics and evolutionary outcomes of mate choice learning.
Fig. 1 After either of the two training treatments (immature brown-eyed females + mature virgin red-eyed females vs. immature red-eyed females + mature virgin brown-eyed females), males were given the choice between two equally mature virgin females and preferred the female with the eye color associated with the mature virgin females in the training phase, as indicated by the white bars deviating from the zero line. This preference was not present under dark conditions.
Joint Optimization of Pico-Base-Station Density and Transmit Power for an Energy-Efficient Heterogeneous Cellular Network
Heterogeneous cellular networks (HCNs) have emerged as the primary solution for explosive data traffic. However, an increase in the number of base stations (BSs) inevitably leads to an increase in energy consumption. Energy efficiency (EE) has become a focal point in HCNs. In this paper, we apply tools from stochastic geometry to investigate and optimize the EE of a two-tier HCN. The average achievable transmission rate and the total power consumption of all the BSs in a two-tier HCN are derived, and then the EE is formulated. In order to maximize the EE, a one-dimensional optimization algorithm is used to optimize the picocell BS density and transmit power. Based on this, an alternating optimization method aimed at maximizing the EE is proposed to jointly optimize the transmit power and density of the picocell BSs. Simulation results validate the accuracy of the theoretical analysis and demonstrate that the proposed joint optimization method can obviously improve the EE.
Introduction
Mobile wireless communications have experienced explosive growth over the past decade, which has resulted in higher data rate and coverage requirements [1,2]. Heterogeneous cellular networks (HCNs) were introduced to address the exponential growth of mobile data traffic [3,4]. A typical multi-tier HCN consists of a macrocell for long-range coverage and several tiers of low-power small cells for short-range coverage, such as picocells [5]. The deployment of multi-tier base stations (BSs) can enhance network throughput and mobile quality-of-service (QoS) [6,7]. Energy efficiency (EE) is one of the major parameters in the design of HCNs [8-12]. It has been revealed that 70%-80% of the energy consumption in a cellular network is attributed to the operations of the BSs [13]. Therefore, reducing the energy consumption of BSs is critical to improving the EE of HCNs.
Stochastic geometry theory, the Poisson point process (PPP) in particular, provides an effective and tractable method to analyze the performance of HCNs [14-17]. Previous authors [18-21] have analyzed total power consumption minimization frameworks under different performance constraints. In one study [22], the authors suggested that the power consumption of cellular networks be minimized by dynamically switching BSs on or off. In [23], the authors suggested power consumption be reduced via a joint sleeping strategy and power control. The works in [18-23] consider network power consumption rather than EE. In [24], the authors analyzed the EE of HCNs; however, no optimal EE scheme was given. In [25-27], the authors focused on the tradeoff between the spectral efficiency (SE) and the EE of HCNs; however, a closed-form expression of the EE was not obtained. In [28], the authors analyzed the impact of BS transmit power on the EE of HCNs and proposed an algorithm to find the optimal picocell BS transmit power in order to maximize the EE. In [29], the EE was given in a tractable, closed-form formulation. Moreover, it has been mathematically proven that the EE is a unimodal and strictly pseudo-concave function of the transmit power given the density of the base stations, and likewise of the density of the base stations given the transmit power. In [30], the influence of parameters such as BSs' transmit powers and inter-site distances on the EE was analyzed. In [31], the optimization of BS density to enhance EE through traffic-aware sleeping strategies in both one- and two-tier cellular networks was researched. In [32-36], the area spectral efficiency (ASE) and EE of ultra-dense cellular networks were analyzed or optimized.
In this paper, we analyze and maximize the EE of a two-tier HCN. In previous works, the EE has usually been defined as the ratio of the sum rate to the total power consumption [24,37-40]. In [24], the rate was computed based on the signal-to-interference ratio (SIR) threshold; the rate thus obtained is a lower bound on the actual rate. The rate in [37-40] was based on either the real-time signal-to-interference-plus-noise ratio (SINR) or the SIR, which is equivalent to channel capacity. Unlike previous works, in this paper the EE is defined as the ratio of the average achievable transmission rate to BS power consumption. Considering that service providers are interested in knowing the average rate they can provide to users within coverage, the rate in this paper is computed on the condition that the mobile user equipment (MUE) is within coverage. In HCNs, the deployment of BSs has a notable impact on the EE [12,35]. In this paper, we intend to maximize the EE by jointly optimizing the deployment density and transmit power of picocell BSs.
The main contributions of this paper are as follows. First, we analyze and obtain the average achievable transmission rate of a two-tier HCN. Next, we investigate the total power consumption of all BSs per unit area of a two-tier HCN. Based on this, we derive the formulation of the EE for a two-tier HCN with respect to the MUE density, the BS densities, the target signal-to-interference ratio (SIR), and the power consumption of the BSs. Then, we use a one-dimensional search algorithm to find the optimal density and transmit power, respectively. Finally, we propose a joint optimization strategy for the picocell BS density and transmit power that can maximize the EE.
The remainder of this paper is organized as follows. Section 2 introduces the system model. Section 3 analyzes the EE. The proposed EE optimization scheme is presented in Section 4. Simulation results are discussed in Section 5. Section 6 concludes the paper.
System Model
A downlink two-tier HCN comprising macrocells and picocells is considered. Macro BSs (MBSs) and pico BSs (PBSs) are characterized by their corresponding density, transmit power, and target SIR. The BSs in the k-th tier are modeled as a PPP φ_k (k = 1, 2) in the two-dimensional Euclidean plane, whose density, transmit power, and target SIR are denoted as λ_k, P_Tk, and γ_k, respectively. Mobile user equipment (MUE) is modeled by another independent PPP with density λ_u. As shown in Figure 1, different cells in different tiers can be separate, overlapping, or nested.
Given the stationary nature of the network model, network performance can be characterized by considering the throughput of a typical MUE located at the origin [11]. The typical MUE is called the tagged MUE, and its serving BS is called the tagged BS. The distance between the tagged MUE and the tagged BS located at x ∈ φ_k (k = 1, 2) is denoted as d_x. The propagation channel is characterized by both path loss and small-scale fading. The received power at the tagged MUE from its serving BS is P_Tk h_x d_x^{-α}, where α > 2 is the path-loss exponent and h_x ~ exp(1) is a random variable modeling Rayleigh fading. All channel coefficients are assumed to be independent and identically distributed (i.i.d.). Thus, the SINR of the tagged MUE is

$$\mathrm{SINR}_x = \frac{P_{Tk}\, h_x\, d_x^{-\alpha}}{I_x + \sigma^2}, \tag{1}$$

where I_x is the cumulative interference from all of the tiers when the tagged MUE is served by the tagged BS located at x, and σ² is the noise power. Typical HCNs are interference-limited; the noise power can be neglected in interference-limited cellular networks [14]. Therefore, in the following analysis, we use the SIR instead of the SINR. The SIR of the tagged MUE can be written as

$$\mathrm{SIR}_x = \frac{P_{Tk}\, h_x\, d_x^{-\alpha}}{I_x}. \tag{2}$$
Energy Efficiency Analysis
Definition 1. In this paper, the energy efficiency is defined as the ratio of the average achievable rate to BS power consumption in the HCN. The EE indicates the total downlink rate that an HCN can achieve with a certain power consumption:

$$\eta_{EE} = \frac{R_{total}}{P_{total}}, \tag{3}$$

where R_total is the total average achievable transmission rate and P_total is the total power consumption of the BSs. Firstly, we characterize the average achievable transmission rate. Then, we obtain the total power consumption of the BSs and formulate the network EE.
The Average Achievable Transmission Rate of a Two-Tier HCN
We assume an open access strategy, where a typical MUE can connect to a BS in any tier without any restriction (i.e., each MUE is served by the BS providing the highest received SIR).

Definition 2. The coverage probability of the target MUE served by the k-th tier BS is defined as

$$P_{cov}(\gamma_k) = \mathbb{P}\left[\mathrm{SIR}_k > \gamma_k\right]. \tag{4}$$

This definition represents the probability that the tagged MUE can achieve a target SIR γ_k when it is served by a BS in the k-th tier. Assuming γ_k > 1 (0 dB), at most one BS in the entire network can provide an SIR greater than the required threshold γ_k [14]. This assumption is easy to satisfy except for users at the edge of the cell, and the simulated and analytical results match reasonably well even for this distinct minority (cell-edge users). Under this assumption, Equation (4) can be written as [14]

$$P_{cov}(\gamma_k) = T(\alpha)\,\frac{\lambda_k \left(P_{Tk}/\gamma_k\right)^{2/\alpha}}{\sum_{j=1}^{2}\lambda_j P_{Tj}^{2/\alpha}}, \qquad T(\alpha) = \frac{\sin(2\pi/\alpha)}{2\pi/\alpha}. \tag{5}$$

Under unit bandwidth, the average achievable rate R attained by a randomly located MUE when it is within coverage can be expressed as a single integral over the tail of the SIR distribution [14], with γ_min = min_k γ_k; this integral can be easily evaluated numerically. For the sake of tractability, we assume that each BS equally allocates the frequency resource among the MUEs it serves and that the bandwidth of the single frequency band is B. The average achievable rate of a randomly chosen MUE when it is within coverage of a BS in the k-th tier can then be expressed as

$$R_k = \frac{B}{U_k}\,R, \tag{7}$$

where U_k is the average number of MUEs served by the tagged BS in the k-th tier and ρ_k is the average fraction of MUEs served by the k-th tier BSs; in other words, ρ_k represents the proportion of the MUE coverage contributed by the BSs in the k-th tier. As MUEs obtain different bandwidths when they access BSs in different tiers, we need to distinguish which tier of BSs covers each MUE, which is why ρ_k enters the analysis through U_k below. Based on Corollary 2 in [14], in open access mode, the average fraction of MUEs served by the k-th tier BSs is

$$\rho_k = \frac{\lambda_k \left(P_{Tk}/\gamma_k\right)^{2/\alpha}}{\sum_{j=1}^{2}\lambda_j \left(P_{Tj}/\gamma_j\right)^{2/\alpha}}. \tag{8}$$

According to Lemma 1 in [39] and Equation (9) in [24], U_k can be expressed as

$$U_k = \frac{\lambda_u\,\rho_k}{\lambda_k}. \tag{9}$$

According to the above analyses, the total average achievable rate can be readily defined.
Definition 3. The total average achievable rate of a two-tier HCN is defined as

$$R_{total} = A\,\lambda_u\left(\rho_1 R_1 + \rho_2 R_2\right), \tag{10}$$

where A is the area of the HCN and λ_u is the density of MUEs. R_1 and R_2 represent the average achievable rate of a randomly chosen MUE when it is under the coverage of BSs in the first tier or the second tier, respectively. Thus, we obtain the total average achievable rate of a two-tier HCN.
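A short numerical sketch of these quantities may be helpful; the Python code below implements T(α), the per-tier coverage probability of Equation (5), and the association fraction of Equation (8). All parameter values are illustrative placeholders, not the values from the paper's Table 1.

```python
import math

def T(alpha: float) -> float:
    """T(alpha) = sin(2*pi/alpha) / (2*pi/alpha); requires alpha > 2."""
    x = 2 * math.pi / alpha
    return math.sin(x) / x

def p_cov(k, lam, P, gamma, alpha):
    """Coverage probability by tier k (Equation (5)); gamma_k > 1 assumed."""
    num = lam[k] * (P[k] / gamma[k]) ** (2 / alpha)
    den = sum(l * p ** (2 / alpha) for l, p in zip(lam, P))
    return T(alpha) * num / den

def rho(k, lam, P, gamma, alpha):
    """Fraction of covered MUEs served by tier k (Equation (8))."""
    w = [l * (p / g) ** (2 / alpha) for l, p, g in zip(lam, P, gamma)]
    return w[k] / sum(w)

# Illustrative two-tier setup: tier 0 = macro, tier 1 = pico
lam = [1e-5, 4e-3]               # BS densities (m^-2)
P = [20.0, 0.03]                 # transmit powers (W)
gamma = [10 ** 1.0, 10 ** 0.5]   # target SIRs: 10 dB and 5 dB
alpha = 4.0
print(p_cov(1, lam, P, gamma, alpha), rho(1, lam, P, gamma, alpha))
```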
Total Power Consumption of Two-Tier HCN
When referring to BS power consumption, it is generally agreed that BSs have two types of power consumption: circuit power consumption and transmit power consumption [38,40]. The circuit power consumption is caused by signal processing, site cooling, and battery backup. In this paper, we use the linear approximation model given by [41]:

$$P_k = N_{TRk}\left(\theta_k P_{Tk} + P_{Ck}\right), \tag{11}$$

where N_TRk is the number of transceivers of a BS in the k-th tier, with each transceiver serving one transmit antenna element; P_Ck is the static power expenditure; and θ_k is the slope of the load-dependent power consumption. Thus, the total power consumption of all BSs in a two-tier HCN can be written as

$$P_{total} = A \sum_{k=1}^{2} \lambda_k N_{TRk}\left(\theta_k P_{Tk} + P_{Ck}\right). \tag{12}$$
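Under this linear model, the area power density is straightforward to evaluate; a minimal sketch follows, with placeholder parameters chosen only to be loosely in the range of typical macro and pico BSs (they are not the values from [41]).

```python
def bs_power(P_T, N_TR, P_C, theta):
    """Per-BS power under the linear model: N_TR * (theta * P_T + P_C)."""
    return N_TR * (theta * P_T + P_C)

def total_area_power(lam, P_T, N_TR, P_C, theta):
    """Power consumed per unit area by all tiers (Equation (12) divided by A)."""
    return sum(l * bs_power(p, n, c, th)
               for l, p, n, c, th in zip(lam, P_T, N_TR, P_C, theta))

# Placeholder parameters: tier 0 = macro, tier 1 = pico
lam = [1e-5, 4e-3]                  # BS densities (m^-2)
P_T = [20.0, 0.03]                  # transmit powers (W)
N_TR, P_C, theta = [1, 1], [130.0, 6.8], [4.7, 4.0]
print(total_area_power(lam, P_T, N_TR, P_C, theta))  # W per m^2
```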
Energy Efficiency of Two-Tier HCN
Substituting Equations (10) and (12) into Equation (3), the energy efficiency of a two-tier HCN can be expressed as

$$\eta_{EE} = \frac{R_{total}}{P_{total}} = \frac{\lambda_u\left(\rho_1 R_1 + \rho_2 R_2\right)}{\sum_{k=1}^{2} \lambda_k N_{TRk}\left(\theta_k P_{Tk} + P_{Ck}\right)}, \tag{13}$$

where η_EE represents the network EE. In practice, γ_k, N_TRk, θ_k, and P_Ck can be regarded as constants. From Equation (13), we can see that the network EE is determined only by the density of MUEs, the densities of the BSs, and the transmit powers of the BSs.
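Numerically, Equation (13) reduces to a one-line ratio once the per-tier rates are known; the self-contained sketch below takes ρ_k and R_k as inputs (all numbers are illustrative only).

```python
def eta_EE(lam_u, rho_, R, lam, P_T, N_TR, P_C, theta):
    """Network EE (Equation (13)): area rate density / area power density.
    rho_ and R are per-tier lists; the area A cancels out."""
    rate_density = lam_u * sum(r * Rk for r, Rk in zip(rho_, R))
    power_density = sum(l * n * (th * p + c)
                        for l, n, th, p, c in zip(lam, N_TR, theta, P_T, P_C))
    return rate_density / power_density  # (bits/s)/W = bits per Joule

# Illustrative numbers only (per-user rates R in bits/s)
print(eta_EE(lam_u=1e-2, rho_=[0.4, 0.6], R=[2e5, 5e5],
             lam=[1e-5, 4e-3], P_T=[20.0, 0.03],
             N_TR=[1, 1], P_C=[130.0, 6.8], theta=[4.7, 4.0]))
```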
Energy Efficiency Optimization
Generally speaking, macrocell BSs are used to provide basic coverage, so their transmit power and density settings must ensure that basic coverage is maintained. At the same time, due to the mobility and randomness of MUEs, their density cannot be controlled. Hence, we intend to improve the energy efficiency by jointly optimizing the transmit power and density of the picocell BSs. Based on the above considerations, in this section the density and transmit power of the macrocell BSs and the density of MUEs (i.e., λ_1, P_T1, and λ_u) are assumed to be given. It is difficult to directly optimize the density and transmit power of the picocell BSs jointly, so we first discuss the optimization of density and of power separately.
Density Optimization Given the Transmit Power of Picocell BSs
In this section, we analyze the optimal density of picocell BSs in order to maximize the EE formulated in Equation (13). The optimization problem can be expressed as

$$\max_{\lambda_2}\ \eta_{EE} \quad \text{s.t.}\quad \lambda_{min} \le \lambda_2 \le \lambda_{max}. \tag{14}$$

In Equation (14), η_EE is the objective function of the optimization problem, and the density is the optimization variable. Here, λ_min > 0 and λ_max > 0 are the minimum and maximum densities of the BSs, respectively. We assume, without loss of generality, λ_min → λ_1 and λ_max → λ_u. Through symbolic computation and simulation, we find that the objective function is unimodal and has a global optimum. By the properties of such (pseudo-concave) functions, any stationary point within the feasible interval is also the global optimum. Therefore, we use a one-dimensional optimization algorithm to find the optimal point of the objective function in the effective interval. In this paper, the golden-section method is adopted to solve the optimization problem in Equation (14), and the solution is given by

$$\lambda_2^{opt} = \max\left\{\lambda_{min},\ \min\left\{\lambda_2^{*},\ \lambda_{max}\right\}\right\}, \tag{15}$$

where λ_2^{*} is the only stationary point of the objective function.
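A compact, self-contained sketch of the golden-section step is shown below; the toy objective f merely stands in for η_EE as a function of λ_2, which we do not reproduce here.

```python
import math

def golden_section_max(f, lo, hi, tol=1e-9):
    """Maximize a unimodal f on [lo, hi] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            # Maximum lies in [a, d]: old c becomes the new right probe
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # Maximum lies in [c, b]: old d becomes the new left probe
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy unimodal objective standing in for eta_EE(lambda_2):
f = lambda x: -(x - 4.4e-3) ** 2
lam2_star = golden_section_max(f, 1e-5, 1e-2)
print(lam2_star)  # ~4.4e-3; clipped afterwards to [lam_min, lam_max] as in Eq. (15)
```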
Transmit Power Optimization Given the Density of Picocell BSs
In this section, we analyze the optimal transmit power of picocell BSs. The optimization problem can be expressed as

$$\max_{P_{T2}}\ \eta_{EE} \quad \text{s.t.}\quad P_{min} \le P_{T2} \le P_{max}. \tag{16}$$

In Equation (16), η_EE is the objective function of the optimization problem, and the transmit power is the optimization variable. Here, P_min > 0 and P_max > 0 are the minimum and maximum transmit powers of the BSs, respectively. We assume, without loss of generality, that P_min → 0 and P_max → P_T1. Through symbolic computation and simulation, we find that this objective function is also unimodal and has a global optimum. Similar to the solution of the optimization problem in Equation (14), the golden-section method is adopted, and the solution is given by

$$P_{T2}^{opt} = \max\left\{P_{min},\ \min\left\{P_{T2}^{*},\ P_{max}\right\}\right\}, \tag{17}$$

where P_T2^{*} is the only stationary point of the objective function.
Joint Optimization of Density and Transmit Power
In Sections 4.1 and 4.2, we solved the optimization problem formulated in Equation (14) with respect to λ_2 for a given P_T2 and the optimization problem formulated in Equation (16) with respect to P_T2 for a given λ_2, respectively. In this section, we aim to find the optimal pair (P_T2^{opt}, λ_2^{opt}) that jointly maximizes the EE in Equation (13). The joint optimization problem can be formulated as

$$\max_{\lambda_2,\, P_{T2}}\ \eta_{EE} \quad \text{s.t.}\quad P_{min} \le P_{T2} \le P_{max},\ \lambda_{min} \le \lambda_2 \le \lambda_{max}. \tag{18}$$

By using the results obtained in Sections 4.1 and 4.2, we propose an alternating optimization method to solve the joint optimization problem given in Equation (18). This method iteratively optimizes λ_2 for a given P_T2 and P_T2 for a given λ_2 until the EE converges within a desired level of accuracy. The algorithm that solves Equation (18) based on the alternating optimization method is given in Algorithm 1.
Algorithm 1. Alternating Optimization of Density and Transmit Power

Step 1: Set the search ranges P_T2 ∈ [P_min, P_max] and λ_2 ∈ [λ_min, λ_max];
Step 2: Initialize λ_2^{opt} ∈ [λ_min, λ_max], E = 0, and the accuracy ε = 10^{-5};
Step 3: Repeat Steps 4 and 5 until convergence;
Step 4: Solve Equation (16) with λ_2 = λ_2^{opt}, denote the optimal solution as P_T2^{*}, and set P_T2^{opt} = max{P_min, min{P_T2^{*}, P_max}}; then solve Equation (14) with P_T2 = P_T2^{opt}, denote the optimal solution as λ_2^{*}, and set λ_2^{opt} = max{λ_min, min{λ_2^{*}, λ_max}};
Step 5: If |η_EE(λ_2^{opt}, P_T2^{opt}) - E| < ε, stop and output the pair (P_T2^{opt}, λ_2^{opt}); otherwise, set E = η_EE(λ_2^{opt}, P_T2^{opt}) and return to Step 4.

First, the optimization ranges of the variables are given as P_T2 ∈ [P_min, P_max] and λ_2 ∈ [λ_min, λ_max]. Second, the initial iteration value of the density is set as λ_2^{opt} ∈ [λ_min, λ_max], the initial value of the EE is set as E = 0, and the desired level of accuracy is set as ε = 10^{-5}. Third, we iteratively optimize λ_2 for a given P_T2 and P_T2 for a given λ_2 until the EE converges within the desired level of accuracy. Last, the optimal pair (P_T2^{opt}, λ_2^{opt}) meeting the desired accuracy requirement is obtained.
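A minimal sketch of Algorithm 1 follows, assuming a joint objective eta(λ_2, P_T2) is available; we use a toy concave surrogate in place of η_EE, and the golden-section routine is repeated so the sketch stays self-contained.

```python
import math

def gss_max(f, a, b, tol=1e-9):
    """Golden-section search for the maximum of a unimodal f on [a, b]."""
    r = (math.sqrt(5) - 1) / 2
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return (a + b) / 2

def alternate_optimize(eta, P_bounds, lam_bounds, eps=1e-5, max_iter=100):
    """Algorithm 1: alternately maximize eta over P_T2 and lambda_2."""
    lam2 = sum(lam_bounds) / 2   # Step 2: initial density in range, E = 0
    E = 0.0
    for _ in range(max_iter):
        PT2 = gss_max(lambda p: eta(lam2, p), *P_bounds)    # Step 4a
        lam2 = gss_max(lambda l: eta(l, PT2), *lam_bounds)  # Step 4b
        E_new = eta(lam2, PT2)
        if abs(E_new - E) < eps:  # Step 5: convergence check
            break
        E = E_new
    return PT2, lam2, E_new

# Toy concave surrogate for eta_EE with optimum near (3.8e-3 m^-2, 0.029 W):
eta = lambda l, p: -((l - 3.8e-3) * 1e3) ** 2 - ((p - 0.029) * 1e2) ** 2
print(alternate_optimize(eta, (1e-3, 1.0), (1e-5, 1e-2)))
```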
Simulation Results
In this section, a series of numerical simulations is carried out to verify the accuracy of our derived EE equation and the effectiveness of our proposed optimization algorithm. Simulation parameters are listed in Table 1. For the power consumption model, we refer to the statistics in [41]. Figure 2 shows the impact of picocell BS density on EE. It can be seen that the simulation results are in line with the theoretical results, which validates the correctness of the EE expression in Equation (13). We can see that if the MUE density, the macrocell BS density, and the power consumption of the BSs are given, there must exist an optimal picocell BS density that maximizes the EE. The EE first increases and then decreases as the density of the picocell BSs grows. The reason for the initial increase is that, as the picocell BS density grows, the distance from a picocell BS to an MUE becomes clearly smaller than that from a macrocell BS to an MUE, so more users access the picocell BSs; because the power consumption of a picocell BS is small, the EE improves. However, once the picocell BS density exceeds a given value, the number of access users saturates, while a further increase in picocell BS density only increases the power consumption, so the EE gradually decreases.

In addition, both the biased mode and the unbiased mode are compared in this simulation. Unbiased mode means that the SIR threshold of a macrocell BS is the same as that of a picocell BS, while biased mode means that the SIR threshold of a macrocell BS is higher than that of a picocell BS. The target SIR for the macrocell BSs is set to 10 dB, while the target SIRs for the picocell BSs are set to 10 dB in unbiased mode and 5 dB in biased mode, respectively. It can be seen that when λ_2 is not very low, the bias technique helps to improve the EE. This is because, in both modes, more MUEs choose to access the macrocell BSs when the distribution of picocell BSs is sparse, since the macrocell BSs can provide a higher SIR than the picocell BSs; in this case, the EEs of the two modes are close to each other. As the picocell BS density increases, in biased mode, increasingly more MUEs choose to access the picocell BSs. Because the power consumption of a picocell BS is much lower than that of a macrocell BS, the EE of the biased mode is much higher.
Figure 3 shows the impact of the picocell BS transmit power on the average EE. The target SIRs for the macrocell BSs and picocell BSs are set to 10 dB and 5 dB, respectively. It can be seen that the EE first increases and then decreases as the picocell BS transmit power grows. The reason for the initial increase is that, as the transmit power of the picocell BS grows and the picocell SIR threshold is low, increasingly more users access the picocell BSs; since the power consumption of a picocell BS is small, the EE improves. However, once the transmit power exceeds a certain value, the number of access users saturates, and a continued increase of the transmit power no longer increases the network throughput, so the EE gradually decreases. The simulation results are again in line with the theoretical results. In addition, we can conclude that if λ_u, λ_1, λ_2, and P_T1 are given, we can find an optimal P_T2 that maximizes EE.
In Figure 4, we evaluate the network EE with various target SIR thresholds to analyze the impact of different optimization schemes on EE. We use MATLAB (R2015b) to implement the golden-section optimization algorithm. Our computer's CPU runs at 3.8 GHz, and the execution times to solve optimization Equations (14) and (16) are 0.0161 s and 0.0159 s, respectively. The final optimal pair is obtained after 16 iterations of alternate optimization, and the total execution time of our proposed optimization method is about 0.57 s.
From Figure 4 we can see that, with the increase of the target SIR, the network EE curve first rises and then drops. This is because the achievable data rate of the tagged MUE increases as γ_1 grows, while the coverage probability decreases as the target SIR increases. According to the optimization method given in Section 4.1, we calculate that λ_2^opt = 4.4 × 10^(−3) m^(−2); this optimal solution is in agreement with the simulation results shown in Figure 2. According to the optimization method given in Section 4.2, which is also the optimization method proposed in [28], it can be calculated that P_T2^opt = 1.9 × 10^(−2) W. According to the proposed joint optimization algorithm, the optimal pair is P_T2^opt = 2.9 × 10^(−2) W and λ_2^opt = 3.8 × 10^(−3) m^(−2). We also compare the proposed joint optimization scheme with the optimization algorithm proposed in [31], as well as with schemes using fixed power and fixed density. The proposed joint optimization scheme obviously improves the network EE. Therefore, in HCNs, the BS density and transmit power should be carefully designed; otherwise, an arbitrarily chosen density or transmit power will decrease the network EE.
Conclusions
In this paper, we use a stochastic geometry approach to study the energy efficiency of a two-tier HCN with respect to the picocell BS density and transmit power. We formulate and derive the expression of the network EE in terms of the MUE density, BS density, target SIR, and the transmit powers of the BSs. We propose an alternating optimization scheme to achieve joint power and density optimization, which yields the optimal pair of transmit power and density. Simulation results validated the analysis and proved the effectiveness of the EE joint optimization scheme. We found that, compared with unilateral optimization of density or transmit power, joint optimization can improve the system EE more effectively. This work can offer theoretical references for the design and deployment of dense HCNs.
Figure 4. Network EE with fixed picocell BS density and optimal picocell BS density. Note: SIR = signal-to-interference ratio.
Coffee intake and decreased amyloid pathology in human brain
Several epidemiological and preclinical studies have supported the protective effect of coffee on Alzheimer's disease (AD). However, it is still unknown whether coffee is specifically related to reduced brain AD pathologies in humans. Hence, this study aims to investigate relationships between coffee intake and in vivo AD pathologies, including cerebral beta-amyloid (Aβ) deposition, the neurodegeneration of AD-signature regions, and cerebral white matter hyperintensities (WMH). A total of 411 non-demented older adults were included. Participants underwent comprehensive clinical assessment and multimodal neuroimaging including [11C] Pittsburgh compound B-positron emission tomography (PET), [18F] fluorodeoxyglucose PET, and magnetic resonance imaging scans. Lifetime and current coffee intake were categorized as follows: no coffee or <2 cups/day (reference category) and ≥2 cups/day (higher coffee intake). Lifetime coffee intake of ≥2 cups/day was significantly associated with a lower Aβ positivity compared to coffee intake of <2 cups/day, even after controlling for potential confounders. In contrast, neither lifetime nor current coffee intake was related to hypometabolism, atrophy of AD-signature regions, or WMH volume. The findings suggest that higher lifetime coffee intake may contribute to lowering the risk of AD or related cognitive decline by reducing pathological cerebral amyloid deposition.
Introduction
Coffee is one of the most popularly consumed beverages in the world and a high proportion of adults drink coffee daily 1 . Coffee contains hundreds of bioactive compounds, including caffeine, chlorogenic acid, polyphenols, and small amounts of minerals and vitamins, some of which are known to have positive effects on health 2 . Many epidemiological studies suggest that coffee has beneficial effects on various medical conditions, including stroke 3 , heart failure 4 , cancers 5 , diabetes 6 , suicide 7 , Parkinson's disease 8 , and mortality 9 .
Several epidemiological studies have also supported the protective effect of coffee on Alzheimer's disease (AD) [10][11][12] and cognitive decline [13][14][15]. Nevertheless, there is limited neuropathological evidence available to support the protective effects of coffee on AD and related cognitive decline in humans. Although a preclinical study of aged transgenic AD mice reported that caffeine, a major component of coffee, decreases brain beta-amyloid (Aβ) levels [16][17][18], it is still unknown whether coffee is specifically related to reduced brain AD pathologies, including Aβ deposition and regional neurodegeneration, in humans.
Therefore, we investigated relationships between coffee intake and in vivo AD biomarkers on multimodal brain imaging, including cerebral Aβ deposition, AD-signature region cerebral glucose metabolism (AD-CM), AD-signature region cortical thickness (AD-CT), and cerebral white matter hyperintensities (WMH), in non-demented older adults.
Participants
This study was part of the Korean Brain Aging Study for Early Diagnosis and Prediction of Alzheimer's Disease (KBASE), an ongoing prospective cohort study that began in 2014 19. As of February 2017, 411 individuals [282 cognitively normal (CN) adults and 129 adults with mild cognitive impairment (MCI)], between 55 and 90 years of age, were enrolled in the study.
The CN group consisted of participants with a Clinical Dementia Rating (CDR) 20 score of 0 and no diagnosis of MCI or dementia. All participants with MCI met the current consensus criteria for amnestic MCI, including: (1) memory complaints confirmed by an informant; (2) objective memory impairment; (3) preservation of global cognitive function; (4) independence in functional activities; and (5) no dementia. Regarding Criterion 2, the age-, education-, and gender-adjusted z-score was <−1.0 for at least one of four episodic memory tests: the Word List Memory, Word List Recall, Word List Recognition, and Constructional Recall tests, which are included in the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD-K) neuropsychological battery 21. All MCI individuals had a CDR score of 0.5. The exclusion criteria were as follows: (1) presence of a major psychiatric illness; (2) significant neurological or medical condition or comorbidity that could affect mental functioning; (3) contraindications for a magnetic resonance imaging (MRI) scan (e.g., pacemaker or claustrophobia); (4) illiteracy; (5) presence of significant visual/hearing difficulties and/or severe communication or behavioral problems that would make clinical examinations or brain scans difficult; (6) pregnancy or lactation; (7) use of an investigational drug; and (8) regular drinking of tea extract. The Institutional Review Board of Seoul National University Hospital and the SMG-SNU Boramae Medical Center in South Korea approved the present study, and all subjects provided written informed consent prior to participation. More detailed information on recruitment of the KBASE cohort is described in our previous report 19.
Clinical and neuropsychological assessments
All participants were administered standardized clinical assessments by trained board-certified psychiatrists based on the KBASE clinical assessment protocol, which incorporates the CERAD-K clinical assessment 22. All subjects were also given a comprehensive neuropsychological assessment battery, administered by a clinical neuropsychologist or trained psychometrists according to a standardized protocol incorporating the CERAD-K neuropsychological battery 21. Details on the full assessment battery were described previously 19.
Assessment of coffee intake
All participants were systematically assessed by trained nurses to determine coffee intake. Specifically, the amount of coffee intake (cups/day) of each participant was assessed for the past one year (i.e., current intake) and over the whole lifetime. Previous epidemiologic studies on the effect of coffee intake 10,12,23 showed a clear difference in the risk of overall or AD dementia between the "<2 cups/day (no or lower drinker)" and "≥2 cups/day (higher drinker)" groups. Based on these findings, we categorized the participants into these two groups and tested the hypothesis that there is a difference in AD pathology between them.
Assessment of potential confounders
Coffee intake may be influenced by various other conditions. Therefore, all participants were systematically evaluated for potential confounders, such as lifetime cognitive activity (LCA), occupational complexity, annual income, vascular risk, depression, smoking, and alcohol intake.
Cognitive activity participation frequency was measured by a 39-item structured questionnaire 24,25. The details of the measurement of cognitive activity are described in our previous report 26. Item scores were averaged to yield separate values for each age period. We then calculated the composite LCA score, an average of all four epoch means, for use in the subsequent analysis. With regard to occupational complexity, we considered only the longest-held occupation and classified it into four levels based on the skill levels described in the International Standard Classification of Occupations (http://www.ilo.org/public/english/bureau/stat/isco/). Occupations typically involve simple and routine physical or manual tasks at skill level 1; the performance of tasks such as operating machinery and electronic equipment, driving vehicles, maintenance and repair of electrical and mechanical equipment, and manipulation, ordering, and storage of information at skill level 2; the performance of complex technical and practical tasks that require complex problem solving, reasoning, and decision making in a specialized field at skill level 3; and the performance of tasks that require complex problem-solving, decision-making, and creativity based on an extensive body of theoretical and factual knowledge in a specialized field at skill level 4. Information about occupation was obtained by self-report from the participants and confirmed by reliable informants. Annual income was evaluated and categorized into three groups (below the minimum cost of living (MCL); more than the MCL but below twice the MCL; twice the MCL or more; http://www.law.go.kr). The MCL was determined according to the administrative rule published by the Ministry of Health and Welfare, Republic of Korea in November 2012; it was 572,168 Korean Won (KRW) for a single-person household, plus 286,840 KRW for each additional housemate. The comorbidity rates of vascular risk factors were assessed by interviews of participants and their reliable informants; a vascular risk score (VRS) was calculated based on the number of vascular risk factors present and reported as a percentage 27. To acquire accurate information, reliable informants were interviewed and medical records were reviewed. The Geriatric Depression Scale (GDS) 28 was used to measure the severity of depressive symptoms. Smoking status (never/former/smoker) and alcohol intake status (never/former/drinker) were evaluated through nurse interviews. Blood samples were also obtained via venipuncture, genomic DNA was extracted from whole blood, and apolipoprotein E (APOE) genotyping was performed as described previously 29. APOE ε4 (APOE4) positivity was defined as the presence of at least one ε4 allele.
Measurement of cerebral Aβ deposition
All participants underwent simultaneous three-dimensional [11C] Pittsburgh compound B (PiB)-positron emission tomography (PET) and T1-weighted MRI scans using a 3.0 T Biograph mMR (PET-MR) scanner (Siemens, Washington, DC, USA) according to the manufacturer's guidelines. The details of PiB-PET acquisition and preprocessing were described in our previous report 30. An AAL algorithm and a region-combining method 31 were applied to determine the regions of interest (ROIs) for characterization of PiB retention levels in the frontal, lateral parietal, posterior cingulate-precuneus, and lateral temporal regions. The standardized uptake value ratio (SUVR) value for each ROI was calculated by dividing the mean value of all voxels within the ROI by the mean cerebellar uptake value in the same image. Each participant was classified as Aβ positive (Aβ+) if the SUVR value was >1.4 in at least one of the four ROIs 31,32. Considering the bimodal distribution of our PiB data, only Aβ positivity was used as an outcome variable 33,34.
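To make the classification rule concrete, the short sketch below computes ROI-wise SUVR values and applies the >1.4 positivity cutoff described above; the PET volume and mask arrays are hypothetical stand-ins for the preprocessed imaging data.

```python
import numpy as np

def suvr_by_roi(pet, roi_masks, cerebellum_mask):
    """SUVR per ROI: mean ROI uptake divided by mean cerebellar uptake."""
    cerebellum_mean = pet[cerebellum_mask].mean()
    return {name: pet[mask].mean() / cerebellum_mean
            for name, mask in roi_masks.items()}

def is_abeta_positive(suvrs, threshold=1.4):
    """Abeta+ if the SUVR exceeds the threshold in at least one ROI."""
    return any(v > threshold for v in suvrs.values())

# Hypothetical example with random data and four ROI masks.
rng = np.random.default_rng(0)
pet = rng.uniform(0.5, 2.0, size=(64, 64, 64))
masks = {roi: rng.random((64, 64, 64)) > 0.99
         for roi in ('frontal', 'lateral_parietal',
                     'pc_precuneus', 'lateral_temporal')}
cerebellum = rng.random((64, 64, 64)) > 0.99
print(is_abeta_positive(suvr_by_roi(pet, masks, cerebellum)))
```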
Measurement of AD-CM
All subjects underwent [18F] fluorodeoxyglucose (FDG)-PET imaging using the above-described PET-MR machine. The details of FDG-PET acquisition and preprocessing were described in our previous report 30. AD-signature FDG ROIs that are sensitive to AD-associated changes, such as the angular gyri, posterior cingulate cortex, and inferior temporal gyri 32, were determined. AD-CM was defined as the voxel-weighted mean SUVR extracted from the AD-signature FDG ROIs.
Measurement of AD-CT
All T1-weighted images were acquired in the sagittal orientation using the above-described 3.0 T PET-MR machine. MR image acquisition and preprocessing were described in our previous report 30. AD-CT was defined as the mean cortical thickness value obtained from AD-signature regions, including the entorhinal, inferior temporal, middle temporal, and fusiform gyri, as described previously 32.
Measurement of WMH
All participants underwent MRI scans with fluid-attenuated inversion recovery using the above-mentioned 3.0 T PET-MR scanner, and cerebral WMH were measured by a validated automatic procedure that has previously been reported 35. The details of the volume measurement of cerebral WMH were previously described 36.
Statistical analysis
We first compared demographic variables, other potential confounders of the relationship between coffee intake and AD biomarkers [APOE4, clinical diagnosis (CN vs. MCI), LCA score, occupational complexity, annual income status, VRS, GDS score, smoking status, and alcohol intake status], and AD imaging biomarkers between the lifetime coffee intake categories (<2 cups/day and ≥2 cups/day) by t test or χ2 test, as appropriate. To explore the relationship between lifetime coffee intake amount and potential confounders, we performed Spearman correlation analyses. To examine the relationships between the lifetime (or current) coffee intake category and neuroimaging parameters, multivariate logistic or linear regression analyses were performed as appropriate. In these analyses, the "<2 cups/day" category was used as the reference. Three models were tested to control the covariates stepwise. The first model included age, gender, education, APOE4, and clinical diagnosis as covariates; the second model included the covariates in the first model plus LCA score, occupational complexity, annual income status, VRS, GDS score, smoking status, and alcohol intake status; and the third model included the covariates in the second model plus the duration of coffee intake and the age of first coffee intake. To reduce false positive error due to multiple testing, we applied a Bonferroni correction: p < 0.00625 (=0.05/8) was used as the threshold of statistical significance for each analysis, considering 4 biomarkers and 2 time periods.
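A minimal sketch of this stepwise modeling, assuming a pandas DataFrame with hypothetical column names for the outcome, exposure, and covariates (the actual variable coding is described above), could look as follows:

```python
import statsmodels.formula.api as smf

# Hypothetical column names; coffee_cat is the <2 vs >=2 cups/day indicator.
MODEL1 = ['age', 'gender', 'education', 'apoe4', 'diagnosis']
MODEL2 = MODEL1 + ['lca', 'occupation', 'income', 'vrs', 'gds',
                   'smoking', 'alcohol']
MODEL3 = MODEL2 + ['coffee_duration', 'coffee_onset_age']

ALPHA = 0.05 / 8  # Bonferroni threshold: 4 biomarkers x 2 intake periods

def fit_logistic(df, covariates, outcome='abeta_pos'):
    """Multivariate logistic regression of a binary biomarker on coffee intake."""
    formula = f"{outcome} ~ coffee_cat + " + " + ".join(covariates)
    return smf.logit(formula, data=df).fit(disp=0)

# Usage (df is the study dataset): result = fit_logistic(df, MODEL2);
# the p-value of the coffee_cat term is then compared against ALPHA.
```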
For the AD neuroimaging biomarker significantly associated with coffee intake in the above analyses, additional exploratory analyses were performed. First, to explore whether there is any brain regional specificity in the relationship between lifetime coffee intake and the biomarker, the same analysis was done for each of the four ROIs (i.e., the frontal, lateral parietal, posterior cingulate-precuneus, and lateral temporal regions). Second, in order to investigate the moderating effects of the potential confounders (i.e., age, gender, education, APOE4, clinical diagnosis, LCA score, occupational complexity, annual income status, VRS, GDS score, smoking status, and alcohol intake status) on the relationship between coffee intake and the biomarker, we performed the same analysis including the two-way interaction term between coffee intake and each one of the confounders, as well as coffee intake itself, as independent variables. We additionally examined the three-way interactions between lifetime coffee intake and any two of age, education, gender, and APOE4 on the relationship between coffee intake and the biomarker. Third, to explore the dose-effect relationship between the overall amount of coffee intake and the biomarker, the same analysis was performed including the total amount of lifetime coffee intake (=duration of coffee intake × cups of coffee intake/day) as an independent variable instead of the coffee intake category (lower vs. higher). For a similar purpose, we also compared the AD biomarker among four coffee intake categories (i.e., 0 or <1 cups/day, 1≤ and <2 cups/day, 2≤ and <3 cups/day, and ≥3 cups/day) instead of the dichotomous categories by using the χ2 test. For these exploratory analyses, p < 0.05 served as the statistical threshold. All statistical analyses were performed using IBM SPSS Statistics 24 software (IBM Corp., Armonk, NY, USA).
Participant characteristics
The demographic and clinical characteristics of the participants are presented by category of lifetime coffee intake in Table 1. Of the 411 participants, 269 were no or lower coffee drinkers (<2 cups/day) and 142 were higher coffee drinkers (≥2 cups/day). There were significant differences in sex, education, duration of coffee intake, age of first coffee intake, LCA score, occupational complexity, smoking status, alcohol drinking status, and Aβ positivity between the two lifetime coffee intake groups. Correlations of lifetime coffee intake amount with potential confounders of the relationship between coffee intake and AD biomarkers are also presented in Supplementary Table 1.
Difference of Aβ positivity between high and low coffee intakes
The association between coffee intake and Aβ positivity is presented in Table 2 and Fig. 1. Lifetime coffee intake of ≥2 cups/day showed significantly lower Aβ positivity compared to coffee intake of <2 cups/day, regardless of the model. To explore whether there is any brain regional specificity in the relationship between lifetime coffee intake and Aβ positivity, the difference of Aβ positivity between high and low lifetime coffee intakes was tested for each of the four ROIs (i.e., the frontal, lateral parietal, posterior cingulate-precuneus, and lateral temporal regions). Lifetime coffee intake of ≥2 cups/day showed lower Aβ positivity in all four regions (Table 3). In contrast to lifetime coffee intake, current coffee intake was not related to Aβ positivity regardless of the covariates.
Moderating effect of potential confounders on the relationship between lifetime coffee intake and Aβ positivity
No two-way interaction between lifetime coffee intake and any of age, gender, education, APOE4, clinical diagnosis, LCA score, occupational complexity, annual income status, VRS, GDS score, smoking status, or alcohol intake status was significant, indicating that the potential confounders do not moderate the relationship between lifetime coffee intake and Aβ positivity (Supplementary Table 2). We additionally examined the three-way interactions between lifetime coffee intake and any two of age, gender, education, and APOE4 on the relationship between coffee intake and Aβ positivity, but did not find any significant result.
Dose-effect relationship between lifetime coffee intake and Aβ positivity
To further explore the dose-effect relationship between lifetime coffee intake amount and Aβ positivity, we compared Aβ positivity rates across four lifetime coffee intake strata, i.e., 0 or <1 cups/day, 1≤ and <2 cups/day, 2≤ and <3 cups/day, and ≥3 cups/day, by using the χ2 test. As shown in Supplementary Fig. 1, there was a significant trend of association between the lifetime coffee intake strata and Aβ positivity (p = 0.048). Multiple logistic regression analysis also demonstrated a trend toward significance for the dose-effect association between the total amount of lifetime coffee intake (=duration of coffee intake × cups of coffee intake/day) and Aβ positivity [OR (95% CI) = 0.991 (0.982-1.001), p = 0.067]: as the amount increased, the Aβ positivity rate decreased (Supplementary Table 3).
Association of coffee intake with cerebral tau deposition, AD-CM, AD-CT, and WMH
In contrast to the results for Aβ positivity, neither lifetime nor current coffee intake was related to any of AD-CM, AD-CT, or WMH (Table 4).
Discussion
The present study found that a lifetime coffee intake of ≥2 cups/day (higher coffee intake) was associated with a lower cerebral Aβ positivity rate in non-demented older adults, when compared to a coffee intake of <2 cups/day. We did not find any association of coffee intake with regional neurodegeneration or WMH. This is the first study to investigate the association between higher coffee intake and in vivo AD pathologies in humans.
The present finding of a relationship between higher coffee intake and a decreased rate of pathological Aβ deposition is in line with results from previous studies using animal models, which indicated that a higher intake of caffeine, one of the major ingredients of coffee, exerts a protective effect via molecular Aβ-related mechanisms [16][17][18]37,38. For example, Arendash et al. 18 suggested that caffeine protects AD mice against cognitive impairment and reduces brain Aβ production by deactivating the positive-feedback loop from the γ- to β-secretase cleavages of the Aβ protein precursor. The same group also reported that high caffeine intake improves the cognitive performance of aged AD mice, but not of aged wild-type mice, with reduced brain Aβ levels, suggesting that the cognitive-enhancing effect of caffeine in AD mice is mediated by a decrease in Aβ concentration 16. Furthermore, Cao et al. reported that caffeine suppresses Aβ levels in the plasma and brain of AD mice 17 and also suggested that caffeine and other components in coffee may synergize to protect against cognitive decline in AD mice 38. Moreover, Li et al. 37 indicated that caffeine suppresses Aβ protein precursor internalization and Aβ generation via adenosine A3 receptor-mediated actions. The present finding also provides a neuropathological explanation for the relationship between higher coffee intake and a reduced risk of AD dementia observed in several clinical and epidemiological studies [10][11][12]. Those studies reported that higher coffee drinkers had a 31-65% decrease in the risk of AD dementia, which is quite comparable to the decrease of the Aβ positivity rate observed here in higher coffee drinkers (17.61%) compared to lower coffee drinkers (27.14%). Furthermore, the relationship between higher coffee intake and lower Aβ positivity was more prominent for lifetime coffee intake than for current coffee intake. This suggests that the protective effects of higher coffee intake against Aβ pathology involve the chronic effects associated with prolonged exposure rather than an acute or short-term effect.
In the present study, we did not find any association of coffee intake with regional neurodegeneration and WMH. Although no previous study investigated the relationship between coffee intake and brain metabolism, the Honolulu-Asia Aging Study showed that coffee intake was not associated with generalized brain atrophy and microvascular ischemic lesions 39 , similarly to our findings.
In addition, the Health Professional Follow-up Study also showed that chronic coffee or caffeine intake is not associated with a risk of cerebrovascular or cardiovascular disease 40 . Although some previous reports indicated an association between coffee intake and cerebrovascular risk, they examined the acute effect of coffee intake, but not the chronic effect of long-term coffee intake 41,42 . Such a null association between coffee intake and AD-related neurodegeneration or vascular changes indicates that chronic coffee intake has no direct effects on neurodegenerative or cerebrovascular changes through Aβ-independent mechanisms. Given the significant association between higher coffee intake and lower Aβ positivity, the negative finding for AD-related regional neurodegeneration appears related to the long-time delay between pathological Aβ accumulation and Aβ-dependent neurodegeneration 43,44 .
The present study had several limitations that should be considered. First, because this was a cross-sectional study, it is difficult to infer causal relationships from the findings. However, the significant relationship between lifetime coffee intake and amyloid pathology supports the possible causal nature of the relationship. Second, underestimation of coffee intake or retrospective recall bias may have affected the reported lifetime coffee intake in older individuals. However, coffee intake is less prone to misreporting because it is a long-term habitual behavior, and the assessment of coffee intake is known to have high validity and reproducibility 45.
In addition, the finding relating coffee intake and amyloid remained significant even after controlling for the effect of clinical diagnosis on cognitive status, and the reported frequency of coffee intake was not related to the proportion of MCI (Table 1). Finally, it is unclear which ingredient(s) in coffee act on Aβ pathology. Although caffeine is only one among hundreds of bioactive compounds in coffee 46, it is the most widely studied ingredient against Aβ pathology [16][17][18]. Other bioactive compounds, including chlorogenic acid, polyphenols, small amounts of minerals, and vitamin B3, have also been investigated [47][48][49]. However, it remains controversial whether a single ingredient in coffee is effective against Aβ pathology or whether a combination of ingredients is effective. Therefore, further investigations are needed to clarify which ingredient(s) in coffee are important for reducing Aβ pathology. A comparison between coffee with and without caffeine may give a clue on the specific effect of caffeine.

Table 4. Results of multiple linear model analyses for assessing the relationship between stratified coffee intake and AD-CM, AD-CT, or WMH volume in non-demented individuals. Abbreviations: Aβ, beta-amyloid; AD-CM, Alzheimer's disease signature cerebral glucose metabolism; AD-CT, Alzheimer's disease signature cortical thickness; WMH, white matter hyperintensities; CI, confidence interval; LCA, lifetime cognitive activity; GDS, geriatric depression scale; APOE4, apolipoprotein ε4. (a) Adjusted for age, gender, education, APOE4, and clinical diagnosis. (b) Adjusted for covariates in Model 1 plus LCA score, occupational complexity, annual income status, vascular risk score, GDS score, smoking status, and alcohol status. (c) Adjusted for covariates in Model 2 plus duration of coffee intake and age of first coffee intake.
In conclusion, the findings of the present study suggest that higher lifetime coffee intake is likely to contribute to lowering the risk of AD or related cognitive decline by reducing pathological cerebral amyloid deposition.
A Novel Polarimetric Channel Imbalance Phase Estimation Method Based on the Rotated Double-Bounce Backscatters in Urban Areas
Polarization calibration without artificial calibrators has been one of the focuses of research and discussion in the PolSAR community. However, there is limited research on the treatment of dual-polarization systems and on calibration methods that dispense with distributed targets. In this paper, we propose a new and convenient method for estimating the polarimetric channel imbalance phase at the transmitter and receiver, which can be used for both quad-pol and dual-pol SAR systems. We found a brand-new reference object in urban-area scenes, namely the effective dihedrals. A statistical calculation method is proposed correspondingly, which obtains an effective estimation of the channel imbalance phases. The theoretical explanation of the proposed method is consistent with the statistical phenomena presented in the experiments. The technique is illustrated and verified through C-band SAR images, including GaoFen-3 (GF-3) data and Sentinel-1 data. The technique is also validated and successfully applied to airborne SAR data of the P, L, S, C, and X bands. The estimation error can be within 7° when the crosstalk terms are less than −30 dB. The method realizes a fast and low-cost dual-polarization phase imbalance estimation and provides a new technical approach to supplement the traditional tropical-rainforest-based quad-pol system calibration. The method can be conveniently applied to the monitoring of polarization distortion parameters, ensuring good polarimetric SAR data quality.
Introduction
Polarimetric synthetic aperture radar (PolSAR) plays an important role in a variety of quantitative applications, owing to the additional information on target properties obtained from the backscattering behavior under differently polarized radar waves. However, polarization distortions always exist in the raw single-look complex (SLC) data [1], such as energy leakage and amplitude and phase inconsistency when transmitting and receiving differently polarized waves. Polarization calibration is a key step to correct these distortions so that the data can truly reflect the polarimetric backscattering behavior of ground objects, which is the basis of various subsequent quantitative applications of PolSAR data.
The polarization calibration technique by using artificial calibrators is mature, such as corner reflectors (CRs) and polarimetric active radar calibrators (PARCs). Because of its controllable high precision, it has been widely used in the initial calibration of various space-borne SAR systems [1][2][3]. However, the calibrators' arrangement and maintenance in the calibration field are costly, and the measurement range and calibration frequency provided by these calibrators are limited. With the development of polarimetric SAR technology, the calibration of the multi-beam, multi-band, and multi-working-mode polarimetric SAR system has become a huge burden to the traditional calibration methods. Therefore, it is of great scientific significance and application value to study ground objects with special properties in usual SAR scenes for polarization calibration.
For the quad-pol calibration methods, there has been a lot of outstanding research published. Van Zyl et al. first used only image parameters and one trihedral corner reflector to achieve calibration for Jet Propulsion Laboratory (JPL) airborne SAR (AIRSAR) data [4]. Furthermore, Quegan et al. [5] and Ainsworth et al. [6] utilized natural targets to solve crosstalk and cross-polarization channel imbalance elements. To further reduce the cost of assembling and deploying calibrators, researchers have focused on methods that do not require any external calibrators to be deployed before imaging. Shimada et al. [7] utilized the three-component Freeman-Durden decomposition method to estimate the distortion elements. Lei Shi proposed the co-polarization channel imbalance determination methods by the use of bare soil and corner-reflector-like targets [8,9]. By combining the information of the distributed target and a corner reflector, using a numerical optimizer for polarimetric calibration was researched [10]. Jiang Sha et al. proposed a fast polarization distortion estimation method for GF-3 data only based on volume scattering targets [11]. Based on this method [11], S. Shangguan et al. realized the normalized monitoring of the polarimetric distortions of GF-3 quad-pol data [12].
A quad-polarized system is limited by some systematic constraints. As a compromise, while still providing cross-polarization information, a dual-polarized system can be applied to a higher imaging resolution mode or a larger swath width mode. Dual-polarization SAR data have a wide range of applications, such as marine monitoring [13,14] and biomass retrieval [15,16]. These precise quantitative applications require good polarization data quality for dual-polarization data. Chen Lin et al. discussed a general calibration method for the dual-pol SAR system based on three ideal artificial calibrators, namely, a trihedron, a 0° dihedral reflector, and a 45° dihedral reflector [17]. M. Lavalle et al. proposed a calibration approach for dual-polarimetric C-band data using two gridded trihedrons and an oriented dihedron for Sentinel-1 [18]. As far as we know, the known dual-polarization calibration methods are all mainly based on artificial calibrators. Differently from quad-pol SAR data, some assumptions about volume-scattering targets are no longer available for dual-pol SAR data, such as reciprocity, which limits the feasibility of using distributed volume-scattering targets for polarization calibration of dual-polarization systems.
Achieving an effective estimation of the phase imbalance of dual-pol or full-pol data without relying on a calibrator has been a difficult problem. In this paper, we propose an effective estimation of the channel imbalance phase at both the transmitter and receiver of the polarimetric SAR system, based on the newly discovered ground reference objects and the corresponding statistical estimation method we propose.
For this technique, we innovatively utilize the statistical information of the cross-pol phase difference (XPD) of double-bounce backscattering targets in urban areas. Atwood and Thirion-Lefevre introduced the concept of the effective dihedral [19], with one plate coincident with the building wall and one plate associated with some ground facet, oriented to support double bounces of radar waves. In this paper, we develop a technique based on this concept to achieve the estimation of the phase imbalance. First, the effective dihedrals with specific rotation angles, namely rotated double-bounce backscatters (RDBs), are extracted from the complex reflections of urban areas through a series of extraction operations. Then, we use the statistical distribution of the XPD values of the RDBs to achieve the estimation of the phase imbalance. Finally, for the 180-degree phase ambiguity problem that arises in the method, we adopt the approach of inverting the orientation of a specific ground building to make the whole estimation technique complete.
We used a variety of SAR data and multiple approaches to verify the validity and accuracy of the estimation method. The feasibility of the method for routinely monitoring the dual-pol phase imbalance was verified with calibrated Sentinel-1 dual-pol data. The validity and accuracy of the method were verified by comparing it with other polarimetric calibration methods based on GF-3 full-pol data. The universality of the method for P-, L-, S-, C-, and X-band SAR data was verified by running it on airborne multi-band full-pol SAR data.
This paper is organized as follows: Section 2 expounds on the principle, derivation, implementation details, properties, and theoretical estimation errors of the method. Section 3 presents the experiments, the operation results, and verification comparison results of the method on GF-3 and other data. The entire estimation method is discussed in Section 4. Section 5 gives the conclusion.
The Polarimetric Distortion Model of the PolSAR System
The observed polarimetric SAR data should reflect the true polarimetric backscattering of the observed targets, but they are distorted by multiple error factors. The distortion model for the whole link is given in the literature [20] and can be expressed as follows:

[M] = A e^(jφ) [R][S][T] + [N], (1)

where [S] is the true scattering matrix, and [R] and [T] are the receiving and transmitting distortion matrices of the system. A is the absolute amplitude factor, and φ is the absolute phase. [N] is the additive noise, which can be roughly estimated from the coherence relationship between the HV and VH data [21]. In general, for polarization calibration, attention focuses on the distortion matrices [R] and [T] of the system, while the other factors are not the concern of this article. Then, Equation (1) can be rewritten as [11]:

[M] = [R][S][T] = [1, δ2; δ1, f_r] [S] [1, δ4; δ3, f_t], (2)

where f_r and f_t are the channel imbalances of the receiver and transmitter, respectively, and the crosstalk terms δ1, δ2, δ3, and δ4 represent the leaked signal of the other polarization when the system receives or transmits one polarized signal.
The dual-pol SAR system transmits one polarized wave and receives both the co- and cross-polarization signals. For the case of H-transmission, the measured scattering vector can be written as follows:

[M_hh; M_hv] = [1, δ2; δ1, f_r] [S_hh, S_hv; S_vh, S_vv] [1; δ3], (3)

The V-transmission case is similar; only the transmit crosstalk term δ3 is replaced accordingly.

As a dual-pol system is used, it is not possible to correct for the transmit crosstalk distortion δ3. However, it will affect the phase imbalance estimation.

The proposed method can solve the phase of f_r, namely the phase imbalance of the receiver, based only on the HH and HV observations of urban areas. For full-pol data, the phase imbalance of the transmitter can be obtained analogously by using only the HH and VH observations. The following mainly introduces the whole method in the HH and HV case, i.e., solving the phase imbalance of the receiver.
The Phase Imbalance Estimation Method Based on the Rotated Dihedral Corner Reflector
For the dihedral corner reflector, in the case of a perfect electrical conductor and incidence along the dihedral boresight, the scattering matrix of the rotated dihedral is [22]:

[S] = [cos 2θ, sin 2θ; sin 2θ, −cos 2θ], (4)

where the angle θ is the rotation of the perfect dihedral about the SAR look direction. Positive values correspond to clockwise (CW) rotation, and negative values correspond to counterclockwise (CCW) rotation [19]. From Equation (4), we can see that, since both signals are real numbers, from the perspective of complex numbers their phases can only be 0 degrees or 180 degrees. One sees that there are only two cases of the phase relationship between the HH and HV channels:

arg(S_hv) − arg(S_hh) = 0°, (5a)
arg(S_hv) − arg(S_hh) = 180°. (5b)

For the H-transmission dual-pol SAR system, Equation (3) can be rewritten as:

M_hh = S_hh + δ3 S_hv + δ2 S_vh + δ2 δ3 S_vv,
M_hv = f_r S_vh + δ1 S_hh + δ1 δ3 S_hv + f_r δ3 S_vv. (6)

Considering that the values of the crosstalk terms are small, the measured phase difference between the HV and HH channels is approximately

arg(M_hv) − arg(M_hh) ≈ arg(f_r) + arg(S_vh) − arg(S_hh). (7)

For a dihedral with a specific rotation angle, which conforms to Equation (5a), the phase imbalance can then be estimated from Equation (7) as

φ̂_f = arg(M_hv) − arg(M_hh), (8)

where φ̂_f is the estimated phase of the channel imbalance. We can see from Equation (6) that the crosstalk terms are the major source of error in estimating the phase of f_r. For this phase estimation error, we consider the phase error caused by the small crosstalk perturbations that are neglected in the above calculation. Owing to the amplitude characteristics of the point targets extracted by the subsequent method, this analysis is simplified by setting the signal strength of each polarization channel to 1. Thereby, the estimation error can be approximately expressed as

Δφ ≈ arcsin(|Σ_i δ_i|), (9)

where the δ_i are the equivalent complex interference terms. When the vector sum of the interference terms is orthogonal to the reference signal and the directions of the individual interference terms are aligned, the extreme worst-case estimation error can be approximately expressed as

Δφ_max ≈ arcsin(4δ), (10)

where δ denotes the amplitude level of the equivalent crosstalk. For example, a −30 dB crosstalk level may lead to an estimation error of up to about 7 degrees in this extreme case. It can be seen that the accuracy of the method is strongly affected by the channel isolation of the system. Note that the magnitude of the crosstalk is small and can easily reach −30 dB for today's spaceborne SAR sensors. As the above-mentioned extreme case should be very rare, the error of the actual phase estimation should be much smaller than this value in most cases.
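As a numerical sanity check, the sketch below simulates a distorted H-transmission observation of a 22.5° dihedral (Equations (4) and (6)), applies the estimator of Equation (8), and evaluates the worst-case bound of Equation (10); the distortion values are illustrative assumptions, not measured system parameters.

```python
import numpy as np

def measure_h_transmit(theta, f_r, d1, d2, d3):
    """Distorted H-transmission observation of a rotated dihedral, per Eq. (6)."""
    S = np.array([[np.cos(2 * theta), np.sin(2 * theta)],
                  [np.sin(2 * theta), -np.cos(2 * theta)]])
    R = np.array([[1.0, d2], [d1, f_r]])  # receiver distortion matrix
    t_h = np.array([1.0, d3])             # H-transmit column with crosstalk d3
    m_hh, m_hv = R @ S @ t_h
    return m_hh, m_hv

f_r = np.exp(1j * np.deg2rad(50.0))       # assumed receiver phase imbalance
m_hh, m_hv = measure_h_transmit(np.deg2rad(22.5), f_r, 0.01, 0.01, 0.01)

est = np.rad2deg(np.angle(m_hv) - np.angle(m_hh))    # Eq. (8) estimator
worst = np.rad2deg(np.arcsin(4 * 10 ** (-30 / 20)))  # Eq. (10) at -30 dB
print(f"estimated phase: {est:.2f} deg, worst-case bound: {worst:.2f} deg")
```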
The Phase Imbalance Estimation Method Based on the Urban Rotated Double-Bounce Backscatters (RDBs)
The phase imbalance estimation method expressed in Section 2.2 is based on the polarimetric scattering properties of the dihedral corner reflector. Instead of studying the traditional calibration method that relies on the precisely deployed calibrators, this paper utilizes the widespread double-bounce backscatters in urban areas to achieve the phase imbalance estimation. The technique uses a statistical method on a wide range of extracted targets rather than precisely extracting some specific objects and treating them as dihedral-like targets.
Atwood and Thirion-Lefevre studied the phase behavior of the double-bounce backscatters of urban areas [19]. They introduced the concept of the effective dihedral, with one plate coincident with the building wall and one plate associated with some ground facet. They also derived the relationship (Equation (11)) between the rotation angle θ of the effective dihedral and the rotation angle φ of the building wall around the vertical axis, which also involves the SAR local incidence angle [19]. That work describes the urban scenario as the result of a forest of dihedrals; the XPD information of all pixels in the scene is counted for the classification of urban areas [19].
In this method, we first perform a series of filtering operations on urban targets to obtain those valid double-bounce backscattering structures with specific rotation angles.
Specifically, the first filtering operation can be expressed as:

|M_hh| > K · mean(|M_hh|) ∧ |M_hv| > K · mean(|M_hv|), (12)

where mean(|M_ij|) denotes the average amplitude of the corresponding polarized channel (ij stands for hh or hv) over the scene, and the symbol ∧ is the logical AND.
The symbol K here is a multiplying coefficient. The appropriate K value relates to the content of the scene's features; generally, it lies between 1 and 10. When the K value is chosen too small, too many point targets are obtained, and some points even lose their usefulness for the phase imbalance calculation, but a small K is a way to barely cope with scenarios that contain insufficient building targets. Too large a K value may result in an insufficient number of extracted points, which can bias the peak position in the subsequent statistical histogram. Empirically, K can be set to about 3 for general scenarios. In practice, the principle is to increase the K value appropriately under the premise of ensuring a sufficient number of extracted point targets.
The function of this filtering process (Equation (12)) can be seen visually in Figure 1b: the parts where both polarization channels exceed a threshold of 0.5 are reserved, while the other parts are discarded.
The filtering of polarimetric strong scattering points removes interference from other scattering mechanisms in urban areas, such as the trihedral reflection structures that also exist. Meanwhile, it ensures the reliability of the target phase: if the scattering intensity of one polarization channel of a target is small, its phase is susceptible to noise and other clutter signals, which is not conducive to the statistics of the phase information.
In addition, for the extracted strong scattering points, we perform coherence filtering on the local regions of HH and HV around these points, reserving the target points whose coherence coefficient is greater than a threshold. It can be expressed as:

|⟨M_hh^i (M_hv^i)*⟩| / sqrt(⟨|M_hh^i|²⟩ ⟨|M_hv^i|²⟩) > 0.8, (13)

where M_ij^i represents the data of a square area of a certain size centered on the i-th target point, ⟨·⟩ denotes spatial averaging over that area, and * denotes the complex conjugate. In the experiment, the side length of the local area was set to 7 pixels, and the coherence coefficient threshold was set to 0.8. The filtering operation of Equation (13) ensures the validity of the polarization information of the target points and avoids the invalid signals mixed by multiple scattering in urban areas as well as some strong volume-scattering target signals in the scene.
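A minimal NumPy sketch of the two filters (Equations (12) and (13)) on full-scene HH/HV complex images could look as follows; the window size, K, and coherence threshold follow the values stated above, and implementation details such as border handling are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def extract_rdb_mask(m_hh, m_hv, K=3.0, win=7, coh_thr=0.8):
    """Candidate RDB mask from amplitude (Eq. (12)) and coherence (Eq. (13)) filters."""
    strong = ((np.abs(m_hh) > K * np.abs(m_hh).mean()) &
              (np.abs(m_hv) > K * np.abs(m_hv).mean()))
    kern = np.ones((win, win))
    # Local HH-HV coherence estimated over win x win windows.
    num = np.abs(convolve2d(m_hh * np.conj(m_hv), kern, mode='same'))
    den = np.sqrt(convolve2d(np.abs(m_hh) ** 2, kern, mode='same') *
                  convolve2d(np.abs(m_hv) ** 2, kern, mode='same'))
    coherence = num / np.maximum(den, 1e-12)
    return strong & (coherence > coh_thr)
```

The retained pixels are the RDB candidates whose XPD values feed the histogram statistics described below.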
Next, we use Equation (8) to compute the phase imbalance estimate of each extracted RDB target. For the numerous estimates obtained from the set of point targets, we take the cluster center of these values as the final estimate for the whole scene.
There are various possible processing methods; this paper uses the statistical distribution histogram. The peak position of the statistical histogram is the phase imbalance estimate of the whole scene. To accurately obtain the peak position from the distribution histogram, we use a normal-distribution fit and take the fitted peak position as the estimated phase imbalance:

φ̂_f = PeakFit({XPD_RDB_i}), (14)

where PeakFit(·) denotes the peak position of the normal curve fitted to the histogram of the XPD values of all extracted RDBs. For a scene with enough buildings, the number of extracted RDB XPD values should preferably exceed 30,000, and the normal-curve fitting approach fits the midpoint frequencies of the intervals within plus or minus 40 degrees around the peak as the fitting points. Technical details are given in conjunction with the specific experiments in Section 3.
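A sketch of this histogram-and-fit step (Equation (14)), assuming the RDB XPD values have already been collected in degrees, might be:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def estimate_phase_imbalance(xpd_deg, bin_width=1.0, fit_halfwidth=40.0):
    """Histogram peak of RDB XPD values refined by a normal-curve fit (Eq. (14))."""
    bins = np.arange(-180.0, 180.0 + bin_width, bin_width)
    hist, edges = np.histogram(xpd_deg, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    coarse_peak = centers[np.argmax(hist)]
    sel = np.abs(centers - coarse_peak) <= fit_halfwidth  # +/- 40 deg window
    popt, _ = curve_fit(gauss, centers[sel], hist[sel],
                        p0=[hist.max(), coarse_peak, 10.0])
    return popt[1]  # fitted peak position = phase imbalance estimate

# Synthetic check: XPD values clustered around 50 deg plus uniform outliers.
rng = np.random.default_rng(1)
xpd = np.concatenate([rng.normal(50.0, 8.0, 30000),
                      rng.uniform(-180.0, 180.0, 3000)])
print(estimate_phase_imbalance(xpd))  # approximately 50
```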
The 180-Degree Phase Ambiguity Problem
From Figure 1b, one can see that the phase relationship of HH and HV has the two cases listed as Equations (5a) and (5b). The above estimation method is based on Equation (5a). However, for a completely random urban area, the situation of Equation (5b) will occur with equal probability, which results in another solution with a phase difference of 180 degrees. We name this problem the 180-degree phase ambiguity problem. Its specific manifestation is that the RDBs' XPD distribution histogram shows two statistical peaks, which was also observed by Atwood and Thirion-Lefevre, with one peak centered at zero and one peak centered at 180° [19].
The estimated phase imbalance corresponds to one of the two peaks, namely the target peak. For the calibrated PolSAR product, the phase imbalance should be zero, so the peak centered at zero or close to zero is the target peak, which means that the method does not need to consider the 180-degree phase ambiguity problem in the PolSAR data quality monitoring task.
However, for the uncalibrated SAR data, the phase imbalance is unknown, so the determination of the target peak in the twin peaks is a problem to be solved. A wrong peak selection will result in a phase difference of 180 degrees in estimating phase imbalance.
There is currently no convenient and simple way to solve this problem. We adopt the most straightforward approach in the implementation, which is to invert the rotation angle of the effective dihedral from the orientation of a specific building wall relative to the line of sight of the satellite (according to Equation (11)), and then judge whether the statistical peak is the target peak according to the position of this rotation angle in Figure 1b. For example, when the rotation angle is about 22.5 degrees, the corresponding peak is determined as the target peak.
There are two key points. One is to identify such a building target. We combined optical images from Google Maps and the RDBs from the SAR images to find such a building target. The target building structure should be as simple as possible, with a flat wall structure and a simple top structure. Such architectural targets should be dominated by only one dihedral scattering pattern, i.e., their RDBs should all correspond to the same peak.
After identifying the specific building wall, the next key point is to calculate the rotation angle of the wall about the vertical axis. For an intuitive expression, a schematic diagram is shown in Figure 2, where K is the line of sight of the SAR sensor, V is the direction of the building wall (V0 when it is directly facing the sensor), and φ is the rotation angle of the building wall, where a positive value corresponds to CCW rotation around Z (the vertical axis). We calculate the value of φ using the following approach. If we calculate the rotation angle by relative geometric relations using the L1A image, the direction of V0 is taken as the range direction, which is from left to right in the L1A image. When calculating the rotation angle of the building, it is necessary to account for the different pixel spacings in the azimuth and range directions. Meanwhile, it is necessary to flip the image vertically when the scene is from an ascending pass.
If we calculate the rotation angle by absolute geometric relations, first, we use the imaging trajectory of the SAR image or the geometrically corrected image to determine the azimuth direction. The line of sight of the radar in strip mode is perpendicular to the azimuth direction, so we can get the direction of V0 for this scene. Then, we need to determine the orientation V of the corresponding building wall from Google optical maps or the geometrically corrected SAR images. Finally, we can obtain the rotation angle as shown in Figure 2.
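A small sketch of this absolute-geometry variant, assuming the V0 direction (the horizontal broadside direction toward the wall) and the wall-facing direction V have been read off a geocoded image or map as unit vectors in an east-north plane, is given below; the vector names and conventions are illustrative.

```python
import numpy as np

def wall_rotation_angle(v0_en, v_en):
    """Signed rotation angle (deg) of the wall-facing direction V relative to V0.

    v0_en : unit vector of V0 (wall directly facing the sensor) in east-north
            coordinates -- an assumed convention
    v_en  : unit vector of the wall-facing direction read from the map
    Positive output corresponds to CCW rotation about the vertical axis.
    """
    cross = v0_en[0] * v_en[1] - v0_en[1] * v_en[0]  # z-component of V0 x V
    dot = v0_en[0] * v_en[0] + v0_en[1] * v_en[1]
    return np.degrees(np.arctan2(cross, dot))

# Example: a wall whose facing direction is rotated 22.5 deg CCW from V0.
v0 = np.array([1.0, 0.0])
v = np.array([np.cos(np.radians(22.5)), np.sin(np.radians(22.5))])
print(wall_rotation_angle(v0, v))  # ~22.5
```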
Naturally, the rotation angle calculated by the method above has a certain deviation, which comes from the error of the human judgment of the rotation angle φ, non-horizontal ground, or even non-vertical reflective walls. However, the filtering of Equation (12) has the effect that only the RDBs within a specific rotation-angle interval are selected. From Figure 1b, it can be seen that there is a rotation-angle difference of more than 20 degrees between different peaks, which provides redundancy for the inversion calculation of the rotation angle in this section. This assures the validity and feasibility of the 180-degree phase ambiguity determination approach.
The Framework of the Imbalance Estimation Method
Sections 2.2-2.4 give the whole implementation approach and principle of the method, which is based only on HH and HV data. It can be used on the corresponding dual-pol and full-pol SAR data to estimate the phase imbalance of the receiver. As seen in Equation (4), S_hv and S_vh have the same expression for the dihedral reflector. Therefore, using the HH and VH polarization data of full-polarization data, we can estimate the phase imbalance of the transmitter without changing the implementation details of the method.
In addition, the proposed method does not need to consider screening for urban areas; it only requires the existence of urban areas or a certain number of man-made buildings of villages and towns in the scene, because the two filtering operations in the method allow the RDBs to be efficiently extracted. A key factor is that the more RDBs are extracted from the scene, the more reliable the obtained statistical distribution; therefore, there is still a certain demand on the scene.
The proposed phase imbalance estimation process chain is shown in Figure 3.
Results
In this paper, the effectiveness and feasibility of the proposed method were verified through the following series of experiments. The robustness of the proposed method was verified by applying it to various data sources, especially the P/L/S/C/X multi-band airborne data. For the 180-degree phase ambiguity problem, two implementation examples based on specific ground buildings are given in Sections 3.2 and 3.3. Through these two complete solution examples, the correctness of the method's principle was verified, and a deeper understanding of the double-bounce reflections of the ground surface was obtained. Since the method can be applied to estimate the phase imbalance of full-polarization data, we used other polarization calibration algorithms to verify the accuracy of the estimated phase imbalances.
In Section 3.1, the feasibility of the proposed method for phase imbalance monitoring on calibrated polarimetric SAR data is verified using Sentinel-1 dual-polarization data. In Sections 3.2 and 3.3, the phase imbalance estimation performance is studied using the uncalibrated GF-3 full-polarization data and GF-3 dual-polarization data. Finally, the performance of the proposed method in different bands is given for the multi-band full-polarization airborne data in Section 3.4.
Phase Imbalance Monitoring using Sentinel-1 Dual-Pol Data
As seen in the flow chart of Figure 3, the 180-degree phase ambiguity determination operation can be omitted to obtain fast phase imbalance monitoring for calibrated SAR images. Because the residual phase imbalance is unlikely to reach or exceed 90°, the peak close to 0° should be the target peak of the two estimated phase-imbalance peaks. The experiment used Sentinel-1 calibrated dual-polarization single look complex (SLC) data of the Stripmap mode. The Stripmap mode acquires data with an 80 km swath and 5 m by 5 m spatial resolution. The characteristics of the Stripmap products are a phase error within 5°, a radiometric accuracy of 1 dB (3σ), and a maximum NESZ of −22 dB [23]. The operation on the experiment data is the same as shown in Figure 3: first, extract the RDBs through polarimetric strong-scattering-point filtering and coherence filtering; then, calculate each RDB's phase imbalance; finally, perform the histogram statistics of the results. As specific parameters in the experiment, K was 2, and the coherence threshold was set to 0.8 for local 7-by-7 pixel areas. The results based on the Sentinel-1 data are shown in Figure 4. This scene is located in the city of Houston and contains the entire urban area. One local area is shown in Figure 4a, where the yellow crosses mark the extracted RDB targets. There are a total of 186,474 RDBs extracted in this scenario. The histogram statistical results are shown in Figure 4b. As can be seen, the peak of the distribution histogram occurs at the position of about 0°, which is in line with expectation, and the other peak, located around 180°, can be ignored. Meanwhile, Figure 4b shows the normal fitting curve of the data within the range of plus or minus 40° centered at 0°, and the corresponding peak position of 2.55° is obtained from the normal fitting results. Therefore, the phase imbalance of this scene is estimated to be 2.55°.
As seen from the experimental results, enough RDB targets were extracted in such an urban scene, and their statistical distribution exhibited a clear, approximately normal shape, which ensures that the final results are accurate in the statistical sense. The results demonstrate the excellent polarization phase performance of the Sentinel-1 data, consistent with its technical specification of a phase error within 5°. At the same time, the experimental results prove the effectiveness of the proposed method for phase imbalance monitoring applications. Although the number of RDBs is large, the processing of an individual RDB is not complicated, so the method can accomplish the phase imbalance monitoring task quickly.
Phase Imbalance Estimation using Uncalibrated GF-3 Quad-Pol Data
In this part, we use GAOFEN-3 (GF-3) quad-pol data for the estimation of the unknown polarimetric phase imbalances at the transmitter and receiver. GF-3 is the first full-polarization SAR satellite of China, launched in August 2016. It is designed so that the channel isolation is better than −35 dB and the channel imbalance is within 0.5 dB/10°. It provides full-polarization data with swaths of at least 20 km at a resolution of about 8 m (QPSI mode), and a 35 km swath at a resolution of about 25 m (QPSII mode).
GF-3 has been polarimetrically calibrated using polarimetric active radar calibrators. The calibration of GF-3 quad-pol data was implemented in July 2017 [2]. However, to validate the method for the estimation of unknown and possibly large phase imbalances, in this experiment we specifically chose SAR data imaged earlier than July 2017, so the quad-pol SAR data used was not calibrated.
In the case of full-polarization data, the comparison baseline is a full-polarization distortion-parameter estimation method based on common distributed targets [11], which uses forest areas dominated by volume scattering to obtain the channel imbalances with an accuracy of about 0.3 dB/4°. To facilitate the comparison of the two methods, the experimental scene was selected in Jiangmen, Guangdong Province, imaged on 30 December 2016. This scene contains both urban and forest areas. Meanwhile, the quantization between polarization channels should be considered when processing the SLC data of GF-3; the details can be found in the literature [12]. The Pauli polarization pseudo-color image of this scene and the phase imbalance estimation results of the two methods are shown in Figure 5. As shown in Figure 5a,b, there were many factories and residential areas in this complex scene. In addition, the area marked by the white dashed box in the upper right corner of Figure 5a was a densely forested mountainous area, which satisfied the requirements of the quad-pol estimation method for distributed targets. The extraction details of the RDBs were the same as in the previous experiment, and the RDB targets obtained are marked as yellow crosses in Figure 5a. It can be seen that they were densely distributed in built-up areas, while a small number of sporadic targets in areas such as farmland and mountain forest could be considered outliers, with little effect on the final statistical results. Figure 5c,d shows the results obtained by the distributed-target-based method, for which the transmitting phase imbalance was −91.1° and the receiving phase imbalance was 52.3°. Figure 5e,f shows the estimation results from the RDBs. HH and VH were used to obtain the transmitting phase imbalance; from Figure 5f, the peak position of the normal fit of the left peak was −94.5°. Similarly, it can be seen in Figure 5e that the receiving phase imbalance at the right peak was 49.3°.
As seen in Figure 5e,f, there were two peaks in each distribution histogram. This is the manifestation of the 180-degree phase ambiguity problem, caused by the fact that the phase relations of Equations (5a) and (5b) are not distinguished during RDB extraction. The difference of about 180° between the two peaks is consistent with the analysis in Section 2. For the case of estimating the receiving phase imbalance, we marked the RDB targets of the two peaks separately in the SAR scene, as shown in Figure 5b. The yellow rings correspond to the right peak, and the blue triangles correspond to the left peak in Figure 5e. As the orientations of the buildings in the same residential area tend to be consistent, the rotation angles of their effective dihedrals are likely to be consistent as well, which causes the RDBs of the same peak to cluster by geographic location.
Next, the 180-degree phase ambiguity determination needed to be executed to determine the final phase imbalance estimate. The experiment was based on the case of receiving phase imbalance estimation, shown in Figure 6. Some RDBs corresponding to the left peak in Figure 5e appeared on the factory circled in Figure 6a. It is an L1-level image, in which the radar illuminates from left to right. Figure 6b is a Google optical image of the corresponding area, where the circled building is the research target. It can be seen that the effective dihedral of the factory building should be composed of the protruding bars on the roof and the roof plane. Further, the walls of the buildings in this area all have the same orientation, facing the direction perpendicular to the road. From the SAR image, we measured about 96 m along the wall direction (the direction is shown in Figure 2) and about 42 m in the azimuth direction; the wall orientation angle was thus estimated to be about 23.6°. The incidence angle of this scene was 26.17°; assuming that the scene has no obvious slope, the local incidence angle was also taken to be 26.17°.
Using Equation (11), we obtained the rotation angle of these effective dihedrals, about −25.9 degrees. From Figure 1b, we can see that the phase relationship between HH and HV of these effective dihedrals is opposite to the one assumed in the method (Equation (5a)). Therefore, the corresponding peak (the left peak in Figure 5e) was not the target peak. The receiving phase imbalance of the scene finally obtained with the proposed method was 49.3°, which is consistent with the result of the distributed-target-based method. It can likewise be shown that the left peak was the target peak in the estimation of the transmitting phase imbalance; the similar derivation is not repeated here. From the experimental results, the phase imbalance estimation differences between the two methods were 3.4 degrees and 3.0 degrees. Considering the estimation errors of both methods, the experimental results are reasonable and in line with expectations. This experiment fully verified the correctness, accuracy, and completeness of the proposed method.
The Receiving Phase Imbalance Estimation Using Uncalibrated GF-3 Dual-Pol Data
In this part, we apply the proposed phase imbalance estimation method to uncalibrated GF-3 dual-polarization data. The scene was imaged over the urban area of Beijing in FSI mode, ascending, with an incidence angle of 30.77°. The processing was the same as in the previous two experiments. The result is shown in Figure 7. Figure 7a shows the polarization pseudo-color image of a local area in the Beijing scene, where the RDB targets marked with blue triangles correspond to the left peak in Figure 7b, and the RDBs marked with yellow circles correspond to the right peak. Figure 7b shows the statistical results of the phase imbalance estimation over the 528,819 RDB targets extracted from the entire scene. It can be seen that the peak on the left is dominant; its phase imbalance was estimated to be −108.1°. The peak on the right is relatively weak and corresponds to an estimated phase imbalance of about 70°.
The phenomenon that one peak is much stronger than the other was observed in this experiment. When a large number of buildings in an urban area have the same orientation (most buildings in Beijing are oriented north-south), the orientations of the walls corresponding to the effective dihedrals are the same. The extracted RDBs then mostly correspond to the same type, resulting in one strong peak.
However, since the data was not calibrated, we could not judge whether the dominant peak was the target peak; the 180-degree phase ambiguity determination still needed to be performed.
The experiment is shown in Figure 8. It covers a local area of this scene, in which there are a large number of RDBs marked in blue and a small number marked in yellow, corresponding to the two peaks in Figure 7b, respectively. Two typical architectural targets were selected as research objects. One was the building cluster corresponding to the circled blue RDBs, namely the Beijing West Railway Station and nearby buildings, facing almost due south, as shown in Figure 8c. The other was the building corresponding to the yellow RDBs, the Beijing Electric Power Hospital, shown in Figure 8d. The imaging extent of the entire scene is displayed in Figure 8b; the latitude and longitude of the upper-left corner were (40.150°N, 115.912°E) and of the lower-left corner (39.669°N, 116.031°E). After conversion, the difference between the two points was 53 km in the north-south direction and 10 km in the east-west direction, so the imaging direction deviated from north-south by about 10.7°.
For the walls of buildings facing south, this corresponds to a wall orientation angle of about 79.3°. Meanwhile, the incidence angle of this scene was 30.77°; assuming that the scene has no obvious slope, the local incidence angle should also be about 30.77°. Using Equation (11), the rotation angle of these effective dihedrals came out to about −80.78 degrees. However, note that the only meaningful range for the rotation angle is between −45° and 45° [24]. Since urban blocks are usually rectangular, rotations exceeding ±45° begin to present the orthogonal walls as the stronger backscatter targets [19]. Therefore, the wall orientation angle should be taken as about −10.7°, for which the western wall is the main backscatter target.
Then, the rotation angle of these effective dihedrals should be about 12.4°.
From Figure 1b, it can be seen that these RDBs belong to the type with the phase relationship of Equation (5a), which is exactly the condition used in the method. Therefore, the left peak in Figure 7b should be the target peak. The phase imbalance of this scene was eventually estimated to be −108.1°.
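Equation (11) itself is not reproduced in this extract, but all three worked numbers above (23.6° at 26.17° incidence giving −25.9°; 79.3° at 30.77° giving −80.78°; 10.7° at 30.77° giving 12.4°) are matched by the relation tan(psi) = tan(phi) / cos(theta_loc). The following sketch therefore uses that reconstructed relation; the sign convention and the function name are our assumptions.

import math

def dihedral_rotation_angle(wall_orientation_deg, local_incidence_deg):
    """Rotation angle of an effective dihedral from the wall orientation
    angle phi and the local incidence angle theta_loc, using the
    reconstructed relation tan(psi) = tan(phi) / cos(theta_loc)."""
    phi = math.radians(wall_orientation_deg)
    theta = math.radians(local_incidence_deg)
    return math.degrees(math.atan(math.tan(phi) / math.cos(theta)))

print(dihedral_rotation_angle(23.6, 26.17))  # ~25.9 (GF-3 quad-pol scene)
print(dihedral_rotation_angle(79.3, 30.77))  # ~80.8 (Beijing, before +/-45 deg folding)
print(dihedral_rotation_angle(10.7, 30.77))  # ~12.4 (Beijing, folded orientation)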
In contrast, the building shown in Figure 8d is located along the road, and the wall of the main building facing the road is the structure responsible for the strong scattering; its orientation is not north-south. It can be seen that it is rotated about 45 degrees from the north-south direction, so the phase characteristic of HH and HV changes. The corresponding RDBs, marked in yellow in the figure, belong to the right peak in Figure 7b, which is not the target peak.
In general, the estimated rotation angle of an effective dihedral may not be perfectly accurate, because the estimation of the wall orientation angle is affected by image parameters, geometric accuracy, etc., and the local incidence angle is based on the assumption of flat ground. However, the rotation-angle separation between adjacent peaks in Figure 1b is 45 degrees, which provides enough redundancy to absorb the estimation errors of the rotation angle. Therefore, the 180-degree phase ambiguity determinations of the two experiments should be correct and valid.
Since the GF-3 dual-polarization data were not calibrated, the result could not be verified for the time being, but a meaningful reference value was obtained for the first time.
The Phase Imbalance Estimation Experiments on Multiple Band Airborne SAR Data
The experiments in this subsection apply the method to airborne SAR data of different bands to verify its universality and robustness.
The Aerospace Information Research Institute, Chinese Academy of Sciences (AIR-CAS) led the development of an airborne multidimensional space joint-observation SAR (MSJosSAR) system and carried out data acquisition experiments [25]. The airborne multidimensional SAR system includes six bands in total; the Ka-band has some problems, so a total of five bands of P, L, S, C, and X data were used in this experiment. The system parameters of this airborne multi-band data are shown in Table 1, where "Br" is the bandwidth, "Fsr" is the sampling rate, "Xbin" is the pixel interval in the azimuth direction, and "Rbin" is the pixel interval in the range direction. The data set used in the experiment was imaged in the Houhai area, east of Wanning, Hainan Province, on 25 December 2020. This data set has not undergone a calibration operation, and the distortion parameters of each band are unknown. We chose a scene that contains a village and a forest area, as seen in Figure 9. The airborne SAR image extent is small, encompassing only a few towns and villages. As can be seen from the figure, the color of some ground surfaces changes significantly with the radar wavelength.
The experiment was conducted in the same way as in Section 3.2. The reference values were obtained by the distributed-target-based polarimetric distortion estimation method; for this, we chose the mountain forest region in the upper right of the scene as the object. The method proposed in the paper estimated the phase imbalance parameters using the limited number of village buildings in the scene. The points marked in yellow in Figure 9 are the extracted RDB targets, for which we appropriately relaxed the K value to 2 or 1.5 to ensure that enough point targets were extracted. With the results of the forest-based quad-pol calibration method as a reference, we skipped the 180-degree phase ambiguity determination. The results are shown in Table 2, from which it can be seen that the estimation differences between the two methods were in most cases within about 5 degrees. The S-band transmitting result, which reached a level of 10 degrees, was regarded as an outlier, likely a statistical error caused by insufficient target points, and not representative. Interestingly, the phase imbalance results estimated from the RDBs were generally larger than those estimated from the distributed targets in this experiment; this systematic difference may come from the unknown crosstalk of the airborne polarimetric system.
As for the reasons why the results were not ideal: first, the quality of the airborne SAR data was not very good, including poor radiometric quality and unsatisfactory polarization isolation. In addition, because the airborne image scenes are small, the number of RDBs obtained in this experiment was small (about 20,000, and even fewer for some bands), which reduced the accuracy of the estimation results.
Nevertheless, the estimation differences were still within an acceptable range for such complex multi-band scenes with poor data quality. The estimated errors showed no clear correlation with the band, and the method worked effectively in all of these bands. Therefore, the method can be used for a rough estimation under complex airborne SAR data conditions, and it makes a significant contribution to the quality assurance of full-polarization and dual-polarization data.
Discussion
This method achieves an efficient and fast estimation of the phase imbalance of a full-polarization system and of the receiver-side phase imbalance of a dual-polarization system, without relying on any calibrators.
It is necessary to discuss the considerations and limitations of this method. First, the proposed method is suitable for SAR systems with high polarization isolation: the theoretical estimation error range is directly affected by the system crosstalk. Although there is an amplitude imbalance term in Equation (10), it is generally close to 1; as for the crosstalk, its magnitude directly affects the uncertainty of the estimation results. Fortunately, with the improvement of antenna technology, SAR systems can nowadays generally achieve a high level of polarization isolation. Therefore, the method applies to most cases, as demonstrated in the airborne multi-band experiment.
In addition, the method requires the presence of a certain number of building targets in the scene, and data imaged over urban areas are without doubt the ideal case. Otherwise, a small number of RDBs will cause an unknown bias in the statistical estimates; the impact of this error on the estimation results may be serious and cannot be quantified. Empirically, it is desirable to have more than 50,000 statistical samples in one scene.
Processing parameters such as the K-value and the coherence threshold are empirical, and changes in their values affect the number of extracted RDB targets. However, with a sufficient number of targets, the results obtained from the histogram statistics should be stable. If the scene is mostly urban, these parameters can be made stricter; if the scene has few buildings, they can be relaxed. When even relaxed parameters do not yield enough RDBs, the scene may simply not be suitable.
The extracted RDB targets may also appear in small numbers in ridge regions, exposed rock areas, and so on. These targets most likely do not correspond to dihedral reflection structures and may be invalid. However, because a statistical approach is used, the method has a strong tolerance to invalid values: as long as the point set is mostly extracted from an urban area, they will not affect the statistical results. For simplicity, we extracted the RDBs from the whole scene in the experiments. Future refinements toward automation could add an urban-area identification step, but this was not the focus of this article.
Another limitation is the 180-degree phase ambiguity determination, which at present can only be executed based on the double-bounce backscatter of a specific building structure. This operation requires human intervention and is somewhat cumbersome, as seen in the experiments. Fortunately, as a system is generally stable, without jumps in the distortion parameters, only a one-time 180-degree phase ambiguity determination is needed per system. In addition, the inversion calculation has enough error redundancy to ensure the accuracy of the determination result. Of course, an automated and convenient 180-degree phase ambiguity determination approach would be an important advance for this RDB-based phase imbalance estimation method.
Conclusions
In this paper, the extraction of numerous dihedral reflector clusters at specific rotation angles was achieved by exploiting the double-bounce reflection structures widely present in urban areas. Innovatively, the XPD values calculated for these RDBs were used, together with the dihedral reflector characteristics, to estimate the polarimetric phase imbalance with a big-data statistical approach. The proposed method thus achieves an effective estimation of the phase imbalance at the receiver and transmitter sides without relying on any artificial calibrators, with an accuracy within 7 degrees. The method can be conveniently applied to dual-polarization and full-polarization data in common wavebands, and it is convenient, universal, and robust.
The article gives a detailed theoretical derivation, a principle explanation, and detailed operational steps, in particular an estimation error analysis of the method. Although the method needs to process a large number of point targets, it is itself simple and intuitive, which lowers its application threshold; the computational burden is modest and can be parallelized. The universality, robustness, and high estimation accuracy of the proposed method were fully verified by a series of experiments; in practical application, the estimation error was generally within 5 degrees.
The method can be applied to assist the calibration of dual-polarization systems or the long-term monitoring of this key parameter of dual-polarization systems. In particular, this novel method can estimate the transmitting and receiving phase imbalances of quad-polarization SAR data from urban areas, which makes it an important supplement to traditional distributed-target-based quad-pol polarization calibration methods such as the Quegan method [5] and the Ainsworth method [6]. Undoubtedly, the method presented in this paper is a useful calibrator-independent reference for, and supplement to, traditional polarization distortion calibration algorithms.
Unlike the discrete-point requirements emphasized in Lei Shi's research on corner-reflector-like targets [9], the statistics over massive numbers of point targets mean that no fine processing of individual points is needed, so the proposed method has a strong tolerance to complex SAR data conditions. The method therefore has the advantages of strong robustness and low complexity.
As mentioned in the discussion, the proposed approach also has some limitations. It cannot replace calibration with artificial calibrators. Regarding the 180-degree phase ambiguity problem, an automatic and convenient solution is the focus of our future work. In addition, methods for the adaptive and automated extraction of RDBs in unknown scenes can be further researched and developed. Finally, two key distortion parameters of the polarimetric system remain, namely the amplitude imbalance and the crosstalk, which deserve further innovative work; in particular, the estimation of the amplitude imbalance at the receiver of a dual-polarization system is of great importance for realizing an initial correction of such systems.
Patents
The work reported in this article has applied for a national invention patent of China (Application Number: 202210527642.1).
Data on ion-exchange membrane fouling by humic acid during electrodialysis
This data paper aims to provide data on the effect of the process settings on the fouling of an electrodialysis pilot installation treating a sodium chloride solution (0.1 M and 0.2 M) in the presence of humic acid (1 g/L). This data was used by “Colloidal fouling in electrodialysis: a neural differential equations model” [1] to construct a predictive model and provides interpretive insights into this dataset. 22 electrodialysis fouling experiments were performed where the electrical resistance over the electrodialysis stack was monitored while varying the crossflow velocity (2.0 cm/s - 3.5 cm/s) in the compartments, the current applied (1.41 A - 1.91 A) to the stack and the salt concentration in the incoming stream. The active cycle was maintained for a maximum of 1.5 h after which the polarity was reversed to remove the fouling layer. Additional data is gathered such as the temperature, pH, flow rate, conductivity, pressure in the different compartments of the electrodialysis stack. The data is processed to remove the effect of temperature fluctuations and some filtering is performed. To maximise the reuse potential of this dataset, both raw and processed data are provided along with a detailed description of the pilot installation and sensor locations. The data generated can be useful for researchers and industry working on electrodialysis fouling and the modelling thereof. The availability of conductivity and pH in all compartments is useful to investigate secondary effects of humic acid fouling such as the eventual decrease in membrane permselectivity or water splitting effects introduced by the fouling layer.
Value of the data
• Ion-exchange membrane fouling is still an important hurdle to overcome when processing bio-based process streams through electrodialysis. Previous research showed that the fouling rate strongly depends on the process settings [2]. This dataset enables the quantification of these effects.
• Researchers and industry aiming to improve the fouling resistance of electrodialysis can use this dataset to explore the effect of the process settings on the fouling rate during electrodialysis. Furthermore, researchers aiming to develop mathematical models for fouling prediction can make use of this data.
• The availability of conductivity and pH measurements before and after all compartments can be used to investigate the secondary effects of humic acid fouling, such as the decrease in membrane permselectivity or water-splitting effects introduced by the fouling layer.
• This data was collected with use for model training, calibration and validation in mind, and provides more consistency, reproducibility and more measured quantities than previous work [2].
Data Description
The data consists of time series from different sensors throughout an electrodialysis pilot installation that processes a salt solution in the presence of humic acid. Humic acid fouling is described in a set of 22 experiments, performed under varying process settings as indicated by Fig. 4. Fig. 1 shows the evolution of the stack resistance for all experimental conditions: the y-axis depicts the stack resistance and the x-axis the time; the rows represent the different crossflow velocities tested and the columns the salt concentrations. A large amount of additional sensor data is also collected. A subset of the data is illustrated in Fig. 2, where three reversal cycles are displayed at increasing current. The alternation of the active and cleaning cycles is shown in Fig. 2a, where the applied current is visualised over time. Next to the stack resistance (derived from the stack potential and the applied current, Fig. 2a), there is data on the pH of the feed streams and the electrolyte rinsing solution (Fig. 2c). The conductivity is monitored at 5 locations, two of which are visualised in Fig. 2d. Lastly, the temperature of the different circuits is monitored at 3 locations (Fig. 2e). The data consists of both raw data and processed data after temperature compensation (Eqs. 1-3), filtering (Eq. 4) and resampling. All data is available in a data repository on Zenodo, and the code to reproduce the figures in this paper from the dataset is published on GitHub.
Electrodialysis pilot installation
An automated ED pilot installation (VITO, Belgium) is used to generate the data in a series of electrodialysis fouling experiments. The ED installation consists of three circulation loops separating the concentrate, diluate and electrode rinsing solutions. The diluate and concentrate compartments are connected to the same storage tank to ensure a constant salt concentration. The electrolyte rinsing circuit is fed by a 0.1 M solution of Na2SO4 in a separate tank, pumped to the anode and cathode compartments in series. The concentrate, diluate and electrolyte rinsing circuits are driven by three separate pumps (GM-V, IWAKI, USA; PID-controlled). In the pilot, the temperature, pH, flow rate, conductivity, pressure and electric potential are measured at several locations; a detailed overview of the sensor locations is shown in Fig. 3, and an overview of the different sensors along with the manufacturers' information is provided in Table 1. The data acquisition and control is handled by Mefias®, a LabVIEW-based control software (VITO, Belgium). An FT-ED(R)-100-4 (ED100, FuMA-Tech, Germany) ED stack with three cell pairs is built of alternating cation- and anion-exchange membranes, starting and ending with a cation-exchange membrane.
Table 1. An overview of the sensors in the pilot installation along with information on the identifier in the P&ID, the manufacturer and the units of the measured signal.
Experimental Design
For every experiment, the following procedure was followed: a NaCl solution was prepared at a certain concentration (Fig. 4, colour) and humic acid was added to a concentration of 1 g/L. Next, the electrodialysis pilot installation was run at a certain crossflow velocity and current density (Fig. 4, x/y axes). Following this procedure, 22 experiments were performed at different crossflow velocities, applied currents and salt concentrations of the fouling solution, according to the experimental design depicted in Fig. 4. The time series consist of the electrical resistance of the stack, as a measure of the fouling severity [2], complemented by the data from the above-mentioned inline sensors. Humic acid sodium salt (Sigma-Aldrich) was used as the fouling component, in a 1 g/L solution.
Data-processing and filtering
The data sampling period for data acquisition is 15 seconds for all sensors, for each of the experiments presented in Fig. 4. As time progresses, the resistance of the stack increases and, as the power supply is operated in a galvanostatic manner, the potential applied to the system increases. For some of the experimental conditions presented in Fig. 4 the fouling intensity was severe, the maximum potential of 15 V was reached before 1.5 h and the experiment was terminated early. Consequently, the length of the experiments varies from a few minutes to a maximum of 1.5 h (360 data points). After termination, or after 1.5 h, the polarity of the electrodes is inverted for 0.5 h to clean the membranes and bring the system back to its original state. To remove the internal and external temperature effects on the stack resistance, a non-linear temperature compensation is performed which takes into account the effect of temperature on water viscosity and ion diffusivity [3], with R20 the compensated resistance and R the measured resistance; A and B are temperature-dependent parameters computed as in [3], with T the temperature (°C). By applying this compensation, the effect of temperature fluctuations is eliminated. Filtering of the data was performed with a simple moving-average filter with a window size of 10 data points. Finally, the data is resampled to yield 20 time points spanning each experiment.
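The documented parts of this processing chain (the moving-average filter of Eq. 4 and the resampling) can be sketched as follows; the temperature compensation of Eqs. 1-3 is not reproduced in this extract and is left to reference [3], and all function and variable names below are ours, not from the paper's code.

import numpy as np

def moving_average(x, window=10):
    """Simple moving-average filter (Eq. 4), window of 10 samples,
    i.e. 150 s at the 15 s sampling period."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def resample(x, n_points=20):
    """Resample a filtered time series to n_points equally spaced values
    spanning the experiment, by linear interpolation."""
    t_old = np.linspace(0.0, 1.0, len(x))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_new, t_old, x)

# Example on a synthetic resistance trace (temperature compensation per
# Eqs. 1-3 / ref. [3] would be applied before this step):
raw = 10 + 0.01 * np.arange(360) + np.random.default_rng(1).normal(0, 0.05, 360)
processed = resample(moving_average(raw, window=10), n_points=20)
print(processed.shape)  # (20,)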
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
On invariant measures of finite affine type tilings
In this paper, we consider tilings of the hyperbolic 2-space, built with a finite number of polygonal tiles, up to affine transformation. To such a tiling T, we associate a space of tilings: the continuous hull Ω(T) on which the affine group acts. This space Ω(T) inherits a solenoid structure whose leaves correspond to the orbits of the affine group. First we prove that the finite harmonic measures of this laminated space correspond to finite invariant measures for the affine group action. Then we give a complete combinatorial description of these finite invariant measures. Finally we give examples with an arbitrary number of ergodic invariant probability measures.
Introduction
Let N be either the hyperbolic 2-space H^2, identified with the upper half complex plane {z ∈ C | Im(z) > 0} with the metric $ds^2 = (dx^2 + dy^2)/y^2$, or the Euclidean plane R^2.
A tiling T = {t_1, ..., t_n, ...} of N is a collection of convex compact polygons t_i with geodesic borders, called tiles, such that their union is the whole space N, their interiors are pairwise disjoint, and they meet full edge to full edge. Let G denote a Lie group of isometries of N preserving the orientation. A tiling is said to be of G-finite type if there exists a finite number of polygons {p_1, ..., p_n}, called prototiles, such that each t_i is the image of one of these polygons by an element of G. For instance, when F is a fundamental domain of a discrete cocompact group G of isometries of N, then {γ(F), γ ∈ G} is a tiling of N. However, the set of finite type tilings is much richer than the one given by discrete cocompact groups. When N = R^2, R. Penrose [15] gave an example whose set of prototiles is made of ten rhombi: the Penrose tiling. When N = H^2, Penrose also constructed a finite type tiling made with a single prototile which is not stable under any Fuchsian group. This example is the typical example of the tilings studied in this paper. The construction goes as follows. Let P be the convex polygon with vertices A_p of affix (p − 1)/2 + i for 1 ≤ p ≤ 3, A_4 = 1 + 2i and A_5 = 2i (see Figure 1): P is a polygon with 5 geodesic edges. Consider the two maps R : z → 2z and S : z → z + 1.
Figure 1: The prototile P
The hyperbolic Penrose tiling is defined by T = {R^k ∘ S^n P | n, k ∈ Z} (see Figure 2). This tiling is an example of a P-finite type tiling, where P denotes the group of affine maps, i.e. isometries of H^2 of the kind z → az + b with a, b real and a > 0. The argument of Penrose is a homological one: he associates with the edge A_4A_5 a positive charge, and negative charges with the edges A_1A_2 and A_2A_3. If T were stable under a Fuchsian group, then P would tile a compact surface. Since the edge A_4A_5 can meet only the edges A_1A_2 or A_2A_3, the surface would have a neutral charge; this contradicts the fact that P is negatively charged. G. Margulis and S. Mozes [12] have generalized this construction to build a family of prototiles which cannot be used to tile a compact surface. Notice that the group of isometries which preserves T is generated by the transformation R. In order to break this symmetry, it is possible to decorate the prototiles to get a new finite type tiling which is not stable under any non-trivial isometry (we say in this case that the tiling is aperiodic). Using the same procedure, C. Goodman-Strauss [10] constructed a set of polygons which can tile H^2 only in an aperiodic way.
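As a concrete illustration, the tiles of this tiling can be enumerated directly; a minimal sketch (the vertex list is read off the definition of P above, everything else is elementary):

# Tiles of the hyperbolic Penrose tiling are R^k S^n P with
# R: z -> 2z and S: z -> z + 1, i.e. z -> 2**k * (z + n).
def tile_vertices(k, n):
    """Vertices of the tile R^k S^n P in upper half-plane coordinates."""
    P = [0 + 1j, 0.5 + 1j, 1 + 1j, 1 + 2j, 0 + 2j]  # A1, ..., A5
    return [(2 ** k) * (z + n) for z in P]

print(tile_vertices(0, 0))  # the prototile P itself
print(tile_vertices(1, 3))  # a copy scaled by R and shifted by S^3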
To understand the combinatorial properties of a tiling, it is useful to associate with it a set of tilings that can be studied both from a geometric and a dynamical point of view. The image of a G-finite type tiling T by an element of G is again a G-finite type tiling. We consider a compact metric space Ω(T), the completion of the set of tilings that are images of T by elements of G, for a natural metrizable topology defined in Section 2. The space Ω(T) is called the continuous hull of T. The group G acts continuously on this space. In this paper we are mainly interested in the situation where the G-action on the hull is free (without fixed point). This is the case for the P-action on the hulls of the examples in [10], as well as for the translation group action on the hull of the Euclidean Penrose tiling. In this case, the G-action induces a specific laminated structure on the hull: a G-solenoid structure, where the leaves are the orbits of the G-action (see Section 2). The combinatorial properties of the tiling T are related to geometrical properties of Ω(T) and dynamical properties of (Ω(T), G). In particular, the distribution of tiles of the tiling, which is our main interest in this paper, can be described by the statistical properties of the leaves of the solenoid. On the one hand, these properties can be grasped from a dynamical point of view. When the group G is amenable, the G-action possesses finite invariant measures. R. Benedetti and J.-M. Gambaudo [2] and L. Sadun [17] show that a G-solenoid can be seen as a projective limit lim←(B_n, π_n) of branched manifolds B_n. Furthermore, when the group G is unimodular (for example when N = R^2 and G is the translation group), the authors of [2] prove that the notions of transverse invariant measure, foliated cycle and finite G-invariant measure are equivalent. Thanks to this, they characterize finite G-invariant measures as the elements of a projective limit of cones in the (dim G)-homology groups of the branched manifolds B_n. When the group G is amenable but not unimodular (which is the case when G is the affine group P), their results do not apply. Actually, we prove that on a P-solenoid there is no transverse invariant measure (Proposition 3.1). On the other hand, statistical properties of the leaves can be studied from a geometric point of view. Following the work of L. Garnett [7] on foliations, we can consider harmonic currents on the hull (such currents always exist on laminations). A Riemannian metric on the leaves yields a correspondence between harmonic currents and finite harmonic measures, and these measures describe the statistical properties of random paths of Brownian motions in a leaf. More particularly, harmonic measures enable one to define the average time a generic path spends crossing an open subset of the hull. We prove that, for a P-solenoid, the geometrical and dynamical approaches are related:
Theorem 1.1. A finite measure on a P-solenoid is harmonic if and only if it is invariant for the affine group action.
By using the structure of projective limit lim←(B_n, π_n) of a P-solenoid, we give a characterization of the harmonic measures of a P-solenoid:
Theorem 1.2. There exists a sequence of linear morphisms A_n such that the set of harmonic measures is isomorphic to the projective limit of cones in the 2-chain spaces of the branched manifolds B_n, lim←(C_2(B_n, R)^+, A_n).
The linear morphisms A_n will be defined in Section 4. We deduce from Theorem 1.2 that the number of ergodic invariant probability measures on the solenoid is bounded from above by the maximal number of faces of the branched manifolds. Finally we prove, by constructing explicit examples, that a P-solenoid may carry an arbitrary number of ergodic invariant probability measures.
Background on tiling spaces
We recall here different useful notions defined in [11] and [2].
2.1 Action on the hull
Let G be the subgroup of isometries acting transitively and freely and preserving the orientation of the surface N; thus G is a Lie group homeomorphic to N. The metric on N gives a left multiplicative invariant metric on G. We fix a point O in N that we call the origin. For a tiling T of G-finite type and an isometry p in G, the image of T by p^{-1} is again a tiling of N of G-finite type. We denote by T.G the set of tilings which are images of T by isometries in G; the group G acts on this set by the right action. We equip T.G with a metrizable topology, finer than the one induced by the standard hyperbolic metric. A base of neighborhoods is defined as follows: two tilings are close to one another if they agree, on a big ball of N centered at the origin, up to an isometry in G close to the identity. This topology can be generated by a metric δ on T.G (a standard form of which is recalled below). The continuous hull of the tiling T is the metric completion of T.G for the metric δ; we denote it by Ω(T). This space is again a set of tilings of N of G-finite type. A patch of a tiling T is a finite set of tiles of T. It is straightforward to check that patches of tilings in Ω(T) are copies of patches of T. The set Ω(T) is then a compact metric space, and the action of G extends to a continuous right action on this space. The dynamical system (Ω(T), G) has a dense orbit (the orbit of T). We fix in each prototile prot of T a marked point x_prot in its interior. Consequently, each tile t of a tiling T' ∈ Ω(T) admits a distinguished point x_t. Let Ω_0(T) denote the set of tilings of Ω(T) such that one of the x_t coincides with the origin O. With the induced topology, Ω_0(T) is compact and totally disconnected.
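The elided formula for δ can be restored in its usual form; the following is a hedged reconstruction in the standard convention of [2, 11], and only the shape of the formula, not its exact constants, matters for what follows:

\[
\delta(T, T') \;=\; \min\Big(1,\; \inf\big\{\varepsilon > 0 \;:\; \exists\, g, g' \in B_\varepsilon(\mathrm{Id}) \subset G
\ \text{with}\ T.g \cap B_{1/\varepsilon}(O) \,=\, T'.g' \cap B_{1/\varepsilon}(O)\big\}\Big),
\]
% where B_eps(Id) is the eps-ball around the identity in G and B_{1/eps}(O)
% the ball of radius 1/eps around the origin O in N: two tilings are close
% when, after small isometries, they agree on a large ball around the origin.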
Definition 2.1. A tiling T satisfies the repetitivity condition if for every patch P there exists a real number R(P) such that every ball of N with radius R(P), intersected with the tiling T, contains a copy of the patch P.
This definition can be interpreted from a dynamical point of view (see for instance [11]). We call a tiling non-periodic if the action of G on Ω(T) is free: for all p ≠ Id in G and all tilings T' of Ω(T) we have T'.p ≠ T'. It is straightforward to show that when the stabilizer of T is reduced to the identity (T is aperiodic) and T is repetitive, then T is non-periodic. In this case the space Ω_0(T) is a Cantor set. For example, when N = R^2 and G is the translation group, the Euclidean Penrose tiling is a non-periodic repetitive tiling of R^2 of finite type. When N = H^2 and G is the affine group P, we saw that the hyperbolic Penrose tiling is not aperiodic; however, we shall construct in the last section examples of repetitive and aperiodic affine finite type tilings (with specific ergodic properties).
Structure of G-solenoid
The transition maps have the form h_i ∘ h_j^{-1}(x, c) = (f_{i,j}.x, g_{i,j}(c)), where f_{i,j}.x means the multiplication of x ∈ V_j by an element f_{i,j} of G, independent of x and of c ∈ C_j, and g_{i,j} is a continuous map from C_j to C_i, independent of x. Two atlases are equivalent if their union is again an atlas. We call a compact metric space M with an equivalence class of atlases a G-solenoid.
The transition maps structure provides the following important notions:
1. Slices and leaves: a slice is a set of the kind h_i^{-1}(V_i × {c}). The leaves are the unions of the slices which intersect. The global space M is laminated by these leaves. Leaves are differentiable manifolds of dimension 2. A G-solenoid M is called minimal if every leaf of M is dense in M.
2. Vertical germs: a vertical germ is a set of the kind h_i^{-1}({x} × C_i). Transition maps map vertical germs onto vertical germs, and thus this notion is well defined (independently of the charts).
These transition maps make it possible to define the right multiplication by an element of G close to the identity. We suppose furthermore that each leaf is diffeomorphic to N and that this local right G-action on a leaf extends to a free right G-action on M. Leaves correspond to orbits of the action of G by right multiplication. This action is minimal if and only if the G-solenoid is minimal.
Furthermore this action has locally constant return times: if an orbit (or a leaf) intersects two verticals V and V ′ at points v and v.g where g ∈ G, then for any point w of V close enough to v, w.g belongs to V ′ .
It turns out that the hull of a tiling has a laminated structure (see for instance É. Ghys [8]). More precisely, in [2] the authors prove that the hull Ω(T) of a non-periodic G-finite type tiling T has a G-solenoid structure. The boxes of Ω(T) are homeomorphic to spaces V_i × C_i, where V_i is an open subset of G ≃ N and C_i is a closed and open subset of Ω_0(T); the charts are the inverses of maps f_i. The action of the group G on the solenoid coincides with the action of this group on the hull. This G-action is expansive: there exists a positive real ε such that, for any points T_1 and T_2 in the same vertical in Ω(T), if δ(T_1.g, T_2.g) < ε for every g ∈ G, then T_1 = T_2.
If furthermore T satisfies the repetitivity condition, the hull Ω(T) is minimal, and the transversal at any point in any box is homeomorphic to a Cantor set.
Branched manifolds and projective limits
A box decomposition of a solenoid M is a finite collection of charts B_1, ..., B_n such that any two boxes are disjoint and the closure of the union of all boxes is the whole space M. The hull of a finite affine type tiling has a natural box decomposition, where boxes are homeomorphic to the product of a prototile of the tiling times a disconnected set: boxes are sets of tilings having the same tile at the origin. We say that this box decomposition is associated with the tiles of the tiling.
Let us consider a box decomposition of M. We consider the equivalence relation ∼ generated by the relation ≈: x ≈ y ⇔ x and y belong to the closure of the same box and are in the same vertical.
Let B be the quotient space M/∼ and let p be the projection of M onto B. The authors of [2] prove that the set B, with the quotient topology, has a differentiable structure and is a branched manifold, a structure introduced by R. Williams (see [22]). Actually, in the proof of Theorem 1.2 we will only use the simplicial structure of B.
Example: consider a non-periodic tiling T which is a decorated hyperbolic Penrose tiling (see Section 5). The set of prototiles is a finite union of different copies of P. Let us consider the box decomposition of Ω(T) associated with its prototiles. The quotient space Ω(T)/∼ is then homeomorphic to the collapsing of the prototiles along edges: points on prototiles are identified if, somewhere on T, their copies meet (see [1]). For the Penrose tiling T, this identification leads to a branched manifold N homeomorphic to P with edges identified as follows: the edges A_1A_2, A_2A_3 and A_4A_5 are identified with one another, and the edge A_4A_1 is identified with A_5A_3. This space is homeomorphic to the mapping torus of the map x → 2x mod 1 on the circle S^1 ≃ R/Z. There is a natural projection of Ω(T)/∼ onto N.
We say that the box decomposition B_2 is zoomed out of the box decomposition B_1 if, among other conditions:
4. if a vertical in the vertical boundary of a box in B_1 contains a point in a vertical boundary of a box in B_2, then it contains the whole vertical.
A tower system of a solenoid M is a sequence of box decompositions (B_n)_{n≥1} such that, for any n ≥ 1, B_{n+1} is zoomed out of B_n. In [2] it is proved that any P-solenoid admits a tower system (B_n)_n. From the above, for every n there exists a branched manifold B_n associated with the box decomposition B_n and a projection p_n : M → B_n. By definition, the set of verticals of boxes of B_{n+1} is included in the set of verticals of B_n; this induces a natural map π_n : B_{n+1} → B_n such that p_n = π_n ∘ p_{n+1}. We recall that lim←(B_n, π_n) is the subspace of ΠB_n defined by {(x_n) ∈ ΠB_n | x_n = π_n(x_{n+1})}, equipped with the topology induced by the product topology.
Foliated cycles and harmonic currents
3.1 Foliated cycles
The leaves of a G-solenoid M carry a 2-manifold structure. Following [8], we call a k-differential form the data, in any box, of a family of real C^∞ k-differential forms on the slices V_i × {c} which depends continuously on the parameter c (in the C^∞ topology), and such that the families are mapped onto each other by the transition maps. We denote by A^k(M) the set of k-differential forms on M. The differentiation along leaves gives an operator d : A^k(M) → A^{k+1}(M). Foliated cycles, introduced by D. Sullivan [20], are continuous linear forms A^2(M) → R which are positive on positive forms and vanish on exact forms.
Proposition 3.1. A P-solenoid does not admit a foliated cycle.
In order to prove this result, let us introduce the following definition.
Definition 3.2. A finite transverse invariant measure on M is the data of a finite positive measure µ_i on each set C_i such that, for any Borel set B contained in the definition set of a transition map g_{ij}, one has µ_i(g_{ij}(B)) = µ_j(B). The data of a transverse invariant measure for a given atlas provides a transverse invariant measure for any equivalent atlas, and thus gives an invariant measure on the verticals. It therefore makes sense to consider a transverse invariant measure µ_t of a P-solenoid. It turns out that finite transverse invariant measures are in one-to-one correspondence with foliated cycles (also called Ruelle-Sullivan currents): any transverse invariant measure defines a foliated cycle and, conversely, any foliated cycle comes from a transverse invariant measure.
Proof of Proposition 3.1: let µ_t be a finite invariant transverse measure of a P-solenoid Ω and let λ be a left-invariant Haar measure on the Borel sets of P (for example the measure induced by the standard metric on H^2). We can define a global finite measure µ on Ω as follows: on a box U_i × C_i we consider the product measure λ ⊗ µ_t, which is well defined thanks to the invariance properties of the measures considered. Up to multiplication by a scalar, we can suppose the measure µ is a probability measure on Ω. As P acts on Ω, any element g of P defines a homeomorphism of Ω denoted τ_g. Let f be a continuous function on Ω with values in R whose support is included in a box B ≃ U × C. Thanks to the locally constant return times property, we can decompose B into a finite disjoint union of boxes b_i ≃ U × C_i, where C_i is a closed and open subset of C, such that b_i and τ_g(b_i) are included in the same box D_i. We consider now the probability measure τ_g*µ obtained by transporting µ by τ_g; computing its integral against f box by box yields the equality (♯): ∫ f d(τ_g*µ) = a ∫ f dµ, where a is the dilation coefficient of g : z → az + b. By taking a partition of unity associated with the open sets of an atlas, it is possible to prove that the equality (♯) holds true for any continuous function f : Ω → R. Thus the measure τ_g*µ is the measure aµ, in contradiction with the fact that both are probability measures.
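The change-of-variables computation behind (♯) can be made explicit; the following is a hedged reconstruction, where the identification of P with H^2 and the formula for the right action are taken from the surrounding text and the rest is standard:

% On a box U_i x C_i, mu = lambda (x) mu_t with lambda the left Haar
% measure d\lambda = dx dy / y^2. For g : z -> az + b, the right
% translation z.g = (x + by, ay) has Jacobian a, so
% d\lambda(z.g) = (a dx dy)/(a^2 y^2) = a^{-1} d\lambda(z), hence
\[
\int_\Omega f \, d(\tau_{g*}\mu)
  = \sum_i \int_{C_i}\int_{U_i} f(z.g,\,c)\, d\lambda(z)\, d\mu_t(c)
  = a \sum_i \int_{C_i}\int_{U_i} f(w,\,c)\, d\lambda(w)\, d\mu_t(c)
  = a \int_\Omega f \, d\mu. \tag{$\sharp$}
\]
% Thus tau_{g*}mu = a mu: the affine group is not unimodular, and no
% probability measure of this product form can be P-invariant.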
Remark 1. When the Lie group G is unimodular, a G-solenoid admits foliated cycles, which are characterized in [2].
Remark 2. The existence of a foliated cycle is a very strong hypothesis, and the non-existence of foliated cycles gives information on the geometric behavior of the leaves: following J. Plante [16], it implies exponential growth for every leaf of a P-solenoid.
Harmonic currents
L. Garnett [7] gives the local structure of harmonic measures. In a box U_i ≃ V_i × C_i, a harmonic measure µ disintegrates into a probability measure ν_i on C_i times a measure f_i(z, c)dz, where dz denotes the Riemannian leaf measure and f_i : V_i × C_i → R_+ is defined for almost all c of C_i and harmonic on all the slices V_i × {c}. Thus, for any Borel set B included in U_i, µ(B) = ∫_{C_i} ∫_{V_i} 1_B(z, c) f_i(z, c) dz dν_i(c). This local decomposition is not unique: if two decompositions (ν_i, f_i) and (ν'_i, f'_i) define the same measure, then there exists a measurable map δ_i : C_i → R_+^* such that ν_i = (1/δ_i) ν'_i and f_i(·, c) = δ_i(c) f'_i(·, c).
Remark 3. For an R^2-solenoid, the leaves are homeomorphic to the plane. The harmonic function obtained is positive and defined on the whole plane, hence constant. The associated harmonic measure is then locally disintegrated into a measure µ_i on C_i times the Riemannian measure, and µ_i is then a transverse invariant measure.
Harmonic measures and ergodic theorem
Let x ∈ M be a point of the hull and let Γ_x be the set {γ : R_+ → L_x continuous | γ(0) = x, γ(R_+) ⊂ L_x}, where L_x is the leaf passing through x. The set Γ_x is the set of continuous paths beginning at x and staying in L_x. We equip this set with the topology of uniform convergence on compact sets. On its Borel sets there exists a natural finite measure w_x, called the Wiener measure; this measure is defined so that the motion Γ_x × R_+ ∋ (γ, t) → γ(t) ∈ L_x is a Brownian motion. Let Γ = ∪_{x∈Ω(T)} Γ_x be the set of continuous paths of M contained in leaves, again equipped with the topology of uniform convergence on compact sets. If µ is a finite measure on M, then µ̄ = ∫ w_x dµ(x) is a finite measure on Γ. The semi-group R_+ acts on the space Γ by time translations: for τ > 0 and γ ∈ Γ we define the semi-group of transformations S_τ by S_τ(γ)(s) = γ(s + τ). It is straightforward to check that the transformations S_τ preserve µ̄ if and only if µ is a harmonic measure; this comes from the fact that the Wiener measure is built from the heat kernel. For a harmonic measure µ, we can apply the Birkhoff ergodic theorem. Since the group P is an extension of two Abelian groups, P is amenable, and the set of invariant measures is a closed non-empty set for the weak topology. Actually, for a P-solenoid, invariant measures and harmonic measures coincide (Theorem 1.1).
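The equivalence used here can be spelled out; the following is a hedged reconstruction of the standard definitions of [7], which the extract does not restate:

% A finite measure mu on the laminated space M is harmonic when
\[
  \int_M \Delta_{\mathcal L} f \, d\mu \;=\; 0
\]
% for every bounded function f that is C^2 along the leaves, where
% Delta_L is the leafwise Laplace-Beltrami operator. By [7] this is
% equivalent to mu being stationary for the leafwise heat semigroup
% (D_t)_{t >= 0}:
\[
  \int_M D_t f \, d\mu \;=\; \int_M f \, d\mu \qquad (t \ge 0),
\]
% and since the Wiener measures w_x are built from the same heat kernel,
% this is in turn equivalent to the time shifts S_tau preserving the
% measure \bar\mu = \int w_x \, d\mu(x) on the path space Gamma.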
First let us prove that a harmonic measure of M is a finite invariant measure for the P-action. We will use the following lemma: any box B of M can be decomposed, for τ ∈ P close enough to the identity, into a finite disjoint union of boxes b_i such that, for each i, b_i and b_i.τ^{-1} are included in a same box D_i. By taking ε small enough, for every element g in a neighborhood of τ we also have that b_i and b_i.g^{-1} are included in D_i. One then computes the integral of a test function against g*µ box by box, where g is the map z → az + b; we recall that for z = (x, y) in H^2, z.g = (x + by, ay).
As shown in Section 3.2, the map f_i(·, t), for fixed t, can be extended to a harmonic map on the whole half plane H^2. The map g → f_i(z.g, t) is defined on P, and it is straightforward to check that it is a harmonic map; thus the bounded map g → ∫_{b_i} φ d(g*µ) ∈ R is harmonic.
Proof (of the converse): fix a box V × C; we decompose m in this box into a transversal measure ν on C and a system of leaf measures σ_c on V × {c} for each c of C, so that any measurable function f with support included in the box integrates accordingly. We fix a point x of the box and a closed neighborhood K included in the box. Let A be the set of bounded measurable functions with support in K. If m is P-invariant, then for any f ∈ A and any g ∈ P small enough, it follows that for ν-almost all c in C, ∫ (f(x) − f(x.g)) dσ_c(x) = 0. Therefore, by identifying the leaf with the Lie group P, for ν-almost all c, σ_c is a right-invariant Haar measure.
When identifying the Lie group P with H^2, a right-invariant measure reads λ dxdy/y for some constant λ > 0. Therefore an invariant measure m on M can be written in a box as λ_c (dxdy/y) dν(c), where c ∈ C → λ_c ∈ R_+ is a measurable map. Then the measure m is harmonic. This ends the proof of Theorem 1.1.
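The two facts used here can be checked directly; a short verification, using the coordinates and the action z.g = (x + by, ay) of the text:

% Right invariance of dx dy / y: for z.g = (x + by, ay) the Jacobian is a, so
\[
\frac{d(x+by)\,d(ay)}{ay} \;=\; \frac{a\,dx\,dy}{a\,y} \;=\; \frac{dx\,dy}{y}.
\]
% Harmonicity of the resulting leafwise density: writing
% lambda_c (dx dy / y) = (lambda_c y)(dx dy / y^2), the density
% f(x, y) = lambda_c y with respect to the hyperbolic area is harmonic since
\[
\Delta_{\mathbb{H}^2}(\lambda_c\, y) \;=\; y^2\big(\partial_x^2 + \partial_y^2\big)(\lambda_c\, y) \;=\; 0 .
\]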
As we know, the local decomposition of an invariant measure m is not unique. If λ_c (dxdy/y) dν(c) and λ'_c (dxdy/y) dν'(c) are two decompositions of the same measure m, the measures ν and ν' are in the same class, and thus there exists a positive measurable map δ : C → R_+^*, defined almost everywhere, such that ν = (1/δ) ν' and λ_c = δ(c) λ'_c. An important consequence is that the value ∫_C λ_c dν(c) is well defined. Consider the positive function f : H^2 → R defined by f(x, y) = (∫_C λ_c dν(c)) · y; then the measure of a cylinder A × C (where A is a measurable set of V) of the box is m(A × C) = ∫_A f(x, y) dxdy/y^2. We will use this function to characterize invariant measures. We consider first a box decomposition of the P-solenoid M. With each box B and each invariant measure m we can thus associate a non-negative number b = ∫_C λ_c dν(c). The identification of elements belonging to the same vertical of the box decomposition leads to a fibration p of M over a branched manifold B. We associate with the interior F_i of each 2-face of B the box B_i = p^{-1}(F_i) given by the fibration, and we consider the 2-chain Σ_i b_i F_i ∈ C_2(B, R)^+. Therefore the fibration p : M → B induces a linear map p_* : M(M) → C_2(B, R)^+.
Combinatorics of the invariant measures
If we consider now a tower decomposition $(B_n)_n$, we obtain a sequence of fibrations $p_n$ over branched manifolds $B_n$ and a sequence of maps $\pi_n : B_{n+1} \to B_n$ such that $p_n = \pi_n \circ p_{n+1}$ and $M \simeq \lim_{\leftarrow}(B_n, \pi_n)$. These maps induce linear maps $(p_n)_* : \mathcal{M}(M) \to C_2(B_n, \mathbb{R})^+$.
The relation between $(p_n)_*(m)$ and $(p_{n+1})_*(m)$ can be described as follows. We denote by $B_i^n \simeq F_i^n \times C_i^n$ the boxes of $B_n$, where the index $i$ is an enumeration of these boxes. Let $f_i(x, y)$ be the function $(x, y) \mapsto \left(\int_{C_i^n} \lambda_{ic}^n \, d\nu_i^n(c)\right) y = b_i^n y$ for a local decomposition of the measure $m$. The intersection of $B_i^n$ and $B_j^{n+1}$ is either empty or a disjoint union of boxes $\bigsqcup_l D_{ij}^l$. In the non-trivial case, there exist transition maps $h_{ij}^l : (z, c) \mapsto (z.g_{ij}^l, \gamma_{ij}(c))$ with $g_{ij}^l \in P$. Thus for any cylinder $A \times C_i^n$ of $B_i^n$, the measures computed in the two decompositions agree, and since this is true for any $A \subset V_i^n$, we obtain a linear relation between the coefficients $b_i^n$ and $b_j^{n+1}$. Let us denote by $p(n)$ the dimension of $C_2(B_n, \mathbb{R})$ and by $A_n$ the $p(n) \times p(n+1)$ matrix with positive coefficients $a_{i,j}^n = \sum_l \alpha(g_{ij}^l)$ when $B_i^n$ and $B_j^{n+1}$ intersect, and $0$ otherwise. We have the relation $(p_n)_*(m) = A_n((p_{n+1})_*(m))$, and thus the sequence $((p_n)_*(m))_n$ is an element of $\lim_{\leftarrow}(C_2(B_n, \mathbb{R})^+, A_n)$. This enables us to extend the maps $(p_n)_*$ to a map $p_* : \mathcal{M}(M) \to \lim_{\leftarrow}(C_2(B_n, \mathbb{R})^+, A_n)$. It is obvious that $p_*$ maps the set of invariant probability measures to the set $\lim_{\leftarrow}(\mathcal{P}(B_n, \mathbb{R}), A_n)$.
Actually this linear map is an isomorphism whose inverse can be constructed as follows. Let $(v_n)_n$ be an element of $\lim_{\leftarrow}(C_2(B_n, \mathbb{R})^+, A_n)$. We consider the family of cylinders $A$ such that there exists a box $B_i^n \simeq V_i^n \times C_i^n$ with $A \subset B_i^n$ and $A \simeq A_i^n \times C_i^n$ for some measurable subset $A_i^n$ of $V_i^n$. Let $m(A)$ be the value $\int_{A_i^n} b_i^n \frac{dx\,dy}{y}$, where $v_n = (b_1^n, \ldots, b_i^n, \ldots, b_{p(n)}^n)$. Thanks to the relations between $v_n$ and $v_{n+1}$, the value $m(A)$ is well defined and can be extended by additivity to the $\sigma$-algebra generated by the cylinders $A$. This family is big enough so that the generated $\sigma$-algebra is actually the Borel $\sigma$-algebra. It is then straightforward to check that $p_*(m) = (v_n)_n$. Furthermore, since $m$ disintegrates locally into a transverse measure times a measure of the kind $by\,\frac{dx\,dy}{y^2}$ on the slices, $m$ is a harmonic measure, and then from Theorem 1.1 $m$ is also an invariant measure.
The above results can be summarized in the following theorem, which is an explicit reformulation of Theorem 1.2.

Theorem 4.3 If $M$ is a $P$-solenoid homeomorphic to a projective limit of branched manifolds $\lim_{\leftarrow}(B_n, \pi_n)$, then the set of invariant measures $\mathcal{M}(M)$ is isomorphic to $\lim_{\leftarrow}(C_2(B_n, \mathbb{R})^+, A_n)$. The restriction to the set of invariant probability measures is then a homeomorphism onto $\lim_{\leftarrow}(\mathcal{P}(B_n, \mathbb{R}), A_n)$.
This last theorem allows us to exhibit some criteria to bound the number of invariant probability measures.

Proposition 4.4
1. If the dimensions of the spaces $C_2(B_n, \mathbb{R})$ are uniformly bounded by an integer $N$, then there are at most $N$ ergodic invariant probability measures.
2. If furthermore $M$ is minimal and the linear maps $A_n$ are uniformly bounded, then there is a unique invariant probability measure.
Proof: Without loss of generality, we may assume that for all $n \geq 1$, $\dim_{\mathbb{R}} C_2(B_n, \mathbb{R}) = N$. Let us consider the $N$ sequences $(w_j^n)_n \in \prod_n C_2(B_n, \mathbb{R})^+$ for $j \in \{1, \ldots, N\}$, where $w_j^n = (w_{j,1}^n, \ldots, w_{j,i}^n, \ldots, w_{j,N}^n)$ and $w_{j,i}^n = 0$ if $j \neq i$ and $1$ otherwise. Fix an integer $n$; for any $j$ in $\{1, \ldots, N\}$ and $m > n$, let $w_j^{nm}$ be the normalization in $\mathcal{P}(B_n, \mathbb{R})$ of $A_n \circ \cdots \circ A_{m-1}(w_j^m)$. Up to a choice of a subsequence, we can suppose that the sequences $(w_j^{nm})_{m>n}$ converge to $w_j \in \mathcal{P}(B_n, \mathbb{R})$. Let us denote by $\mathrm{proj}_n$ the projection of the product $\prod_n C_2(B_n, \mathbb{R})$ onto $C_2(B_n, \mathbb{R})$, and $\mathrm{Prob}_n = \mathrm{proj}_n(\lim_{\leftarrow}(\mathcal{P}(B_n, \mathbb{R}), A_n))$. The set $\mathrm{Prob}_n$ is a convex set, and if $H_m$ is the convex hull of $\{w_j^m \mid j = 1, \ldots, N\}$, we have, after normalization, $\mathrm{Prob}_n = \bigcap_{m>n} A_n \circ \cdots \circ A_{m-1}(H_m)$. Therefore $\mathrm{Prob}_n$ is the convex hull of $\{w_j \mid j = 1, \ldots, N\}$. Suppose now that there are more than $N$ ergodic invariant probability measures; then for $n$ big enough, there would be more than $N$ extremal points in $\mathrm{Prob}_n$, a contradiction.
In order to prove the second statement, we show that for any $n$, $\mathrm{Prob}_n$ is reduced to a point. For this we define the hyperbolic distance between two points $x, y$ in $\mathcal{P}(B_n, \mathbb{R})$:
$$d(x, y) = \log\left(\frac{(l+m)(m+r)}{l\,r}\right),$$

where $m$ is the Euclidean length of the segment $[x, y]$ and $l, r$ are the lengths of the connected components of $S \setminus [x, y]$, $S$ being the largest line segment containing $[x, y]$ in $\mathcal{P}(B_n, \mathbb{R})$. It is straightforward to check that matrices with positive entries contract this distance, and the minimality of the action implies the positivity of the matrices. Since the linear maps $A_n$ are uniformly bounded and defined on spaces of bounded dimension, the contraction is uniform. Therefore $\mathrm{Prob}_n = \bigcap_{m>n} A_n \circ \cdots \circ A_{m-1}(\mathcal{P}(B_m, \mathbb{R}))$ is reduced to a point.
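To see the contraction concretely, here is a minimal numerical sketch in Python; it uses the Birkhoff form of this metric, $\log(\max_i(x_i/y_i)/\min_i(x_i/y_i))$, which agrees with the chord cross-ratio formula above on the simplex. The matrix is an arbitrary positive example, not one of the $A_n$.

```python
import numpy as np

def hilbert_distance(x, y):
    """Hilbert projective distance between two positive vectors;
    it is invariant under rescaling of either argument."""
    r = np.asarray(x, dtype=float) / np.asarray(y, dtype=float)
    return float(np.log(r.max() / r.min()))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # a strictly positive matrix
x = np.array([0.9, 0.1])
y = np.array([0.2, 0.8])
print(hilbert_distance(x, y))          # distance before applying A
print(hilbert_distance(A @ x, A @ y))  # strictly smaller: A contracts
```

Iterating positive matrices of bounded size therefore squeezes the images of the simplex towards a single point, which is exactly the mechanism used in the proof above.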
Examples and proof of Proposition 1.3
We give an example of a non-periodic repetitive $P$ finite type tiling with exactly $r$ ergodic invariant probability measures, for any integer $r > 0$. The idea is to decorate the Penrose tiling with a non-periodic bi-infinite sequence. We choose a sequence such that the action of the shift on the closure $X$ of its orbit is minimal and has $r$ ergodic invariant probability measures.
First, consider the case $r \geq 2$. Let $\Sigma$ be the set $\{1, \ldots, r\}$. We associate to each symbol in $\Sigma$ a different color. Let $P$ be the polygon defined in the introduction to build the Penrose tiling, and let $R$ and $S$ be the affine maps defined in the introduction. For an element $i$ of $\Sigma$, let $P_i$ be the prototile $P$ painted in the color $i$. To a sequence $w = (w_k)_{k \in \mathbb{Z}} \in \Sigma^{\mathbb{Z}}$, we associate the decorated tiling $T(w)$ of finite affine type, with prototiles $P_i$ for $i$ in $\Sigma$, defined by $T(w) = \{R^q \circ S^n(P_{w_q}) \mid n, q \in \mathbb{Z}\}$.
Its tiles are isometric to $P$ and its stabilizer is included in $\langle R \rangle$. To a sequence $(w_n)_{n \in \mathbb{Z}}$ the shift $\sigma$ associates the sequence $(w'_n)_{n \in \mathbb{Z}}$ where $w'_n = w_{n+1}$. Thus we have $T(w).R = T(\sigma(w))$. Therefore if the sequence $w$ is not periodic for the action of the shift, then $T(w)$ is not stable for any element of $P$. The product space $\Sigma^{\mathbb{Z}}$ is equipped with the product topology and is a Cantor set. Let $X$ denote the closure of the orbit of $w$ under the action of the shift $\sigma$: $X = \overline{\{\sigma^n(w), n \in \mathbb{Z}\}}$. The set $X$ is a compact metric space stable under the action of $\sigma$. When the dynamical system $(X, \sigma)$ is minimal, then $\Omega(T(w))$ is minimal.
In [23], S. Williams generalizes an example of J. C. Oxtoby ([14]) and defines a Toeplitz sequence $w \in \Sigma^{\mathbb{Z}}$ for which the action of the shift is minimal and has $r$ ergodic probability measures. We recall here the definition of this sequence. Consider the sequence of natural numbers $(p_i)_{i \in \mathbb{N}}$ with $p_0 = 3$ and $p_{i+1} = 3^i p_i$, and the sequence $s_i \equiv i \bmod r \in \Sigma$ for $i \in \mathbb{N}$. Define then the sequence $w = (w_q)_{q \in \mathbb{Z}} \in \Sigma^{\mathbb{Z}}$ by inductive steps. The first step (step 1) is to set $w_q = s_1$ for all $q \equiv 0$ or $-1 \bmod p_1$. In general, for $i \in \mathbb{N}$ and $k$ in $\mathbb{Z}$, let $J(i, k)$ denote the set of integers $q \in [k p_i, (k+1) p_i)$ for which $w_q$ has not yet been defined at the end of step $i$. Step $(i+1)$ is to set $w_q = s_{i+1}$ for $q \in J(i, k)$ with $k \equiv -1$ or $0 \bmod 3^i$. The dynamical system $(X, \sigma)$ is minimal and $X$ is a Cantor set.
Let us now define a sequence of atlases of words for the sequence $w$. Let $A_0$ be the set of words $\{s_i, i = 1, \ldots, r\}$. Let $A_1$ be the set of words $\{s_1 s_i^{p_1 - 2} s_1, i = 1, \ldots, r\}$, where for two words $a$ and $b$, $ab$ denotes the concatenation of the two words and $a^q$ denotes the concatenation of $q$ copies of the word $a$. In the general case, for any integer $q \geq 1$, we denote by $p_{q,i}$, $i \in \{1, \ldots, r\}$, the word of $A_q$ indexed by $i$; for $q > 1$, $A_q$ is a set of words defined inductively in the same way from $A_{q-1}$. The suspension of the action of $\sigma$ on $X$ is the quotient space $\mathcal{X} = \mathbb{R} \times X / \sigma$, where points $(t, x)$ and $(s, x')$ are identified if $s - t \in \mathbb{Z}$ and $x = \sigma^{s-t}(x')$. The natural $\mathbb{R}$-action by time translation on the space $\mathbb{R} \times X$ induces an $\mathbb{R}$-action on the suspension. It turns out that the suspension $\mathbb{R} \times X / \sigma$ is an $\mathbb{R}$-solenoid ([2]) which has exactly $r$ invariant ergodic probability measures ([23]). For any $q \geq 0$, $A_q$ defines a box decomposition of the suspension $\mathcal{X}$. Each box is identified with a unique word of $A_q$.
We will construct a tower system for $\Omega(T(w))$ associated with the former box decompositions of the suspension, thanks to a collection of patches for the tiling $T(w)$. For a word $b = w_{i_0} \ldots w_{i_0+l}$ of $w$, let $\mathrm{Pa}(b)$ be the patch $\bigcup_{j=0}^{l} \{R^{-j} \circ S^k(P_{w_{i_0+j}}) \text{ for } k = 0, \ldots, j\}$ of $T(w)$. Now let us consider, for $q \geq 0$, the collection of patches $\mathrm{Pa}_q = \{\mathrm{Pa}(p_{q,i}), i = 1, \ldots, r\}$. For any $q$, the tiling $T(w)$ is a union of elements of $\mathrm{Pa}_q$, copies of patches meeting only on their borders. Remark that all the patches of $\mathrm{Pa}_q$ have the same size, and actually the box decompositions of $\Omega(T(w))$ associated with $\mathrm{Pa}_q$ define a tower system of the hull. If we denote by $\sim_q$ the relation generated by the identification of borders of patches of $\mathrm{Pa}_q$ which meet somewhere in the tiling $T(w)$, and $B_q = \bigsqcup_{i=1}^r \mathrm{Pa}(p_{q,i}) / \sim_q$, we have maps $\pi_q : B_{q+1} \to B_q$.

Now we construct a natural continuous map $h$ from $\Omega(T(w))$ onto $\mathcal{X}$. For an element $g : z \mapsto az + b$ of the group $P$, we define $h(T(w).g) = [(\log_2(a), w)] \in \mathcal{X}$, where $[(t, x)]$ denotes the class of the element $(t, x)$ in $\mathbb{R} \times X$ for the relation defined by $\sigma$. The map $h$ is then a continuous map from $T(w).P$ to $\mathcal{X}$. Remark that if the origin $O$ lies in a copy of a patch $\mathrm{Pa}(p_{q,i})$ for some $q \geq 1$ and $i \in \Sigma$ in the tiling $T(w).g$, then $O$ lies also in a copy of the patch $\mathrm{Pa}(p_{q,i})$ in the tiling $T(\sigma^n(w))$, where $n$ denotes the integer part of $\log_2(a)$. Thus the origin of the sequence $\sigma^n(w)$ lies in the word $p_{q,i}$. As $h(T(w).g) = [(\log_2(a) - n, \sigma^n(w))]$, we get that $h(T(w).g)$ is in the box of the suspension defined by the word $p_{q,i}$. It follows that for any $q \geq 1$, the map $h$ sends the restriction to the orbit of $T(w)$ of the box associated with the patch $\mathrm{Pa}(p_{q,i})$ to the box of the suspension associated with the word $p_{q,i}$. Thus the map $h$ is uniformly continuous, and it can therefore be extended to a map from $\Omega(T(w))$ onto $\mathcal{X}$, also denoted $h$. It is straightforward to check that each fiber of the map $h$ is stable under the action of the group $N = \{z \mapsto z + t, t \in \mathbb{R}\}$. Furthermore, as $P$ is an extension of $N$ by the group $\{z \mapsto az, a > 0\}$, the action of the group $P$ preserves the set of fibers. Then the $P$-action on the hull $\Omega(T(w))$ defines, through the map $h$, a $P$-action on the suspension $\mathcal{X}$, and $h$ is a semi-conjugacy from the hull $\Omega(T(w))$ to $\mathcal{X}$. The group $N$ acts trivially on $\mathcal{X}$. The invariant measures for the $P$-action on $\mathcal{X}$ are the invariant measures for the $\mathbb{R}$-action. We claim that the map $h$ sends the invariant measures of the hull onto the invariant measures of the suspension. To prove this, we use a Følner base of $P$ that we denote $(F_n)_n$ and a right invariant Haar measure on $P$ that we denote $\lambda$. Let $\mu$ be an ergodic invariant probability measure for the $P$-action on $\mathcal{X}$. By the ergodic theorem, there exists a point $x$ in the suspension such that the sequence of probability measures $\mu_n = \frac{1}{\lambda(F_n)} \int_{F_n} \delta_{g.x} \, d\lambda(g)$ converges, as $n$ grows to infinity, to the measure $\mu$. Let $y$ be a point in $\Omega(T(w))$ such that $h(y) = x$. Then, up to the choice of a subsequence, the sequence of probability measures on $\Omega(T(w))$ given by $\nu_n = \frac{1}{\lambda(F_n)} \int_{F_n} \delta_{g.y} \, d\lambda(g)$ converges to a probability measure $\nu$ invariant for the $P$-action. As $h_* \nu_n = \mu_n$, we get $h_* \nu = \mu$. It follows that the map $h$ sends the set of invariant measures of $\Omega(T(w))$ onto the set of invariant measures of $\mathcal{X}$, and that $h$ sends ergodic measures to ergodic measures. Then $\Omega(T(w))$ has at least $r$ independent ergodic probability measures.
From Proposition 4.4, we also know that the hull $\Omega(T(w))$ admits at most $r$ ergodic invariant probability measures. Thus there are exactly $r$ ergodic invariant probability measures.
To obtain an example of a minimal $P$-solenoid with a single $P$-invariant probability measure, we use the same strategy as before. We keep the same notations as in the case $r \geq 2$, but we define another Toeplitz sequence $w$ on which the shift action is free, minimal and uniquely ergodic ([9]). We consider the substitution $S$ over the alphabet $\Sigma = \{1, 2\}$ defined by $S(1) = 112$, $S(2) = 122$. Using the extension of the substitution over the words by concatenation, we can iterate the substitution. The sequence $w$ is then the bi-infinite sequence defined by $w = \lim_{n \to \infty} S^n(2).S^n(1)$, where the dot is placed between the $-1$ and $0$ coordinates.
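A short Python sketch of the substitution may help: since $S(1)$ begins with the letter 1 and $S(2)$ ends with the letter 2, the words $S^n(1)$ stabilize to the right of the dot and the words $S^n(2)$ to its left, so the bi-infinite limit is well defined.

```python
RULES = {"1": "112", "2": "122"}

def S(word):
    """Apply the substitution S(1)=112, S(2)=122 letter by letter."""
    return "".join(RULES[c] for c in word)

def iterate(letter, n):
    """Compute S^n(letter)."""
    w = letter
    for _ in range(n):
        w = S(w)
    return w

# S^n(1) is a prefix of S^{n+1}(1) and S^n(2) is a suffix of S^{n+1}(2),
# so the words ...S^n(2).S^n(1)... converge coordinate-wise.
for n in range(3):
    print(n, iterate("2", n), ".", iterate("1", n))
```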
Let $A_0$ be the set $\{1, 2\}$, and for any integer $q \geq 1$, let $A_q$ be the atlas of words $\{S^{q-1}(1) S^{q-1}(i) S^{q-1}(2), i = 1, 2\}$ for the sequence $w$. The sequence $w$ is a bi-infinite concatenation of words of $A_q$. Now let us consider the collection of patches $\mathrm{Pa}_q = \{\mathrm{Pa}(wo), wo \in A_q\}$. For any $q \geq 0$, the tiling $T(w)$ is a union of elements of $\mathrm{Pa}_q$, and the box decompositions of $\Omega(T(w))$ associated with $\mathrm{Pa}_q$ define a tower system of the hull. The hull $\Omega(T(w))$ is then homeomorphic to $\lim_{\leftarrow}(B_q, \pi_q)$, where $B_q = \bigsqcup_{wo \in A_q} \mathrm{Pa}(wo) / \sim_q$. By Theorem 4.3, the space of invariant measures $\mathcal{M}(\Omega(T(w)))$ is isomorphic to $\lim_{\leftarrow}(C_2(B_n, \mathbb{R})^+, A_n)$. A simple calculation shows that the linear maps $A_n$ are given by the matrices:
$$A_n = \begin{pmatrix} 1 + 2^{-3^n+1} & 1 \\ 2^{-2\cdot 3^n + 2} & 2^{-3^n+1} + 2^{-2\cdot 3^n + 2} \end{pmatrix}.$$
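Assuming the matrix entries just given, a small numerical check shows how quickly the images of the simplex collapse: the normalized columns of the products $A_1 \cdots A_m$ become indistinguishable after a couple of steps, so each $\mathrm{Prob}_n$ is a single point.

```python
import numpy as np

def A(n):
    """Transition matrix between levels n and n+1, entries as in the text."""
    x = 2.0 ** (-3**n + 1)        # 2^{-3^n + 1}
    y = 2.0 ** (-2 * 3**n + 2)    # 2^{-2*3^n + 2}
    return np.array([[1.0 + x, 1.0],
                     [y, x + y]])

P = np.eye(2)
for n in range(1, 5):
    P = P @ A(n)
    cols = P / P.sum(axis=0)      # normalized images of the simplex vertices
    print(n, cols[:, 0], cols[:, 1])
```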
Proposition 4.4 enables us to conclude that the hull Ω(T (w)) admits only one P-invariant probability measure.
Influence of supplemental protein versus energy level on intake, fill, passage, digestibility, and fermentation characteristics of beef steers consuming dormant bluestem range forage
Introduction
Previous research at Kansas State University suggests that winter supplementation with moderate to high crude protein (CP) supplements is preferable because of their ability to stimulate forage intake and utilization. Supplements low in CP (e.g., cereal grains) tended to promote lower levels of forage intake and significantly depressed fiber digestibility.
However, low CP supplements are frequently much cheaper. The question exists whether feeding increased quantities of low CP supplements (i.e., increasing the level of energy offered) would sufficiently offset some of their negative impacts on forage utilization. Therefore, our study was designed to evaluate how varying the levels of protein and energy in winter supplements would affect the intake and utilization of dormant bluestem range.
Experimental Procedures
In two trials, 16 ruminally cannulated steers were randomly assigned within weight group (avg. = 732 and 884 lb for trials 1 and 2, respectively) to each of four treatments.
Treatments consisted of supplementing steers with soybean meal (SBM)/milo mixtures that were combinations of various protein and energy levels (Figure 9.1). Crude protein (CP) concentrations in supplements and the levels at which they were fed were: 1) 22% CP fed at .3% of body weight (BW); 2) 11% CP fed at .6% BW; 3) 44% CP fed at .3% BW; and 4) 22% CP fed at .6% BW. Protein concentration was altered by varying the quantities of SBM and milo. Because SBM and milo are nearly equivalent in energy value, the level of supplemental energy provided was varied by feeding different quantities of supplement. Dormant prairie hay was provided at 130% of the previous 5-day average intake.
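As a quick arithmetic check on this design, the sketch below computes the daily supplement allotment and the crude protein it supplies for each treatment; the body weights are the trial averages quoted above, and everything else follows from the stated percentages.

```python
# (CP fraction of supplement, feeding rate as a fraction of body weight)
treatments = [("22% CP", 0.22, 0.003), ("11% CP", 0.11, 0.006),
              ("44% CP", 0.44, 0.003), ("22% CP", 0.22, 0.006)]

for bw in (732, 884):  # average steer body weight (lb), trials 1 and 2
    for label, cp, rate in treatments:
        fed = bw * rate  # lb of supplement offered per day
        print(f"{bw} lb steer, {label} at {rate:.1%} BW: "
              f"{fed:.2f} lb supplement/day, {fed * cp:.2f} lb CP/day")
```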
Trial 1 was a 28-day digestion study with 14-day adaptation, 7-day intake, and 7-day fecal collection periods. Rumen fill values were obtained by complete ruminal evacuations, and subsamples of solid digesta were collected. The alkaline peroxide lignin component of the subsamples was used to describe fill and passage of an indigestible component of the diet. On day 28, CoEDTA was given intraruminally, and rumen samples were collected at 0, 3, 6, 9, 12, and 24 hours after feeding to measure liquid volume and passage.
Trial 2 was a 26-day study consisting of 18-day adaptation, 5-day intake, and 2-day ruminal sampling periods. Procedures were similar to those of trial 1, except fecal collections were not made. On day 26, CoEDTA was given intraruminally, and rumen samples were taken at 0, 3, 6, 9, 12, and 24 hours after feeding to measure liquid volume and passage.
Results and Discussion
In trial 1, the influence of protein level on forage dry matter intake (DMI) depended on the corresponding energy level (Table 9.1). Increased supplemental energy at the low protein level depressed forage DMI. The influence of protein level on total diet dry matter digestibility (DMD) was also dependent on the corresponding energy level. Increased supplemental energy at the low protein level had a positive influence on total diet DMD. Increased DMD in this case may be explained by the reduction in forage DMI and the increased consumption of the highly digestible supplement.
However, forage fiber digestibility (e.g., acid detergent fiber) was increased only by increasing supplemental protein levels. Increased supplemental energy at the low level of protein depressed forage fiber digestibility. In trial 2, forage DMI increased in response to high supplemental protein levels but tended to decrease with increased energy levels (Table 9.2). Liquid volume and flow increased with higher protein levels.
Results from both trials indicated that providing supplemental protein to cattle grazing dormant winter rangelands increases forage intake. Increasing the level of supplemental energy at low levels of crude protein appears to decrease intake and forage digestibility. At higher levels of supplemental protein, this effect is not as dramatic.
Table 9.1. Influence of Supplemental Protein versus Energy Level on the Intake, Digestibility, Fill, and Passage for Cattle Consuming Dormant Bluestem Range Forage (Trial 1). a Response due to protein level (P<.10).
Determinants of tobacco use transitions in smoker nursing students in Catalonia: A prospective longitudinal study
INTRODUCTION The use of emerging tobacco and nicotine products affects tobacco use behaviors among college students. Thus, we aimed to examine transitions in tobacco use patterns and identify their predictors among smokers in a cohort of nursing students in Catalonia (Spain). METHODS We conducted a prospective longitudinal study of Catalan nursing students between 2015–2016 and 2018–2019. We examined transitions in tobacco use patterns between baseline and follow-up among smokers from: 1) daily to non-daily smoking, 2) non-daily to daily smoking, 3) cigarette-only use to poly-tobacco use, 4) poly-tobacco use to cigarette-only use, 5) between products, 6) reducing consumption by ≥5 cigarettes per day (CPD); and 7) quitting smoking. We applied a Generalized Linear Model with a log link (Poisson regression) and robust variance to identify predictors of reducing cigarette consumption by ≥5 CPD and quitting smoking, obtaining both crude and adjusted (APR) prevalence ratios and their 95% confidence intervals (CIs). RESULTS Among daily smokers at baseline, 12.1% transitioned to non-daily smoking at follow-up, while 36.2% of non-daily smokers shifted to daily smoking. Among cigarette-only users, 14.2% transitioned to poly-tobacco use, while 48.4% of poly-tobacco users switched to exclusive cigarette use. Among all smokers (daily and non-daily smokers), 60.8% reduced their cigarette consumption by ≥5 CPD and 28.3% quit smoking. Being a non-daily smoker (APR=0.33; 95% CI 0.19–0.55) and having lower nicotine dependence (APR=0.78; 95% CI 0.64–0.96) were inversely associated with reducing cigarette consumption, while being a non-daily smoker (APR=1.19; 95% CI: 1.08–1.31) was directly associated with quitting smoking. CONCLUSIONS Nursing students who smoked experienced diverse transitions in tobacco use patterns over time. Evidence-based tobacco use preventive and cessation interventions are needed to tackle tobacco use among future nurses.
INTRODUCTION
Tobacco use continues to be a primary public health concern, being the cause of 8.7 million deaths globally every year 1 . Despite efforts to control tobacco use, the emergence of novel tobacco and nicotine products, new forms of targeted tobacco advertising, and renewed tobacco industry activity have led to changes in smoking behaviors among specific subgroups such as young adults, including college students [1][2][3] .
The American College Health Association (ACHA) estimated that by 2023, almost 40% of college students in the United States were regularly using tobacco and nicotine products in the past three months 2 . In particular, 24.2% were daily users, 8.9% were weekly users, and 7.2% were monthly users 2 . The most commonly used product was electronic cigarettes (e-cigarettes), whereas in Europe the most common product used is cigarettes 4 . During the last years, there has been a marked increase in the prevalence of experimentation with tobacco and nicotine products, alternative tobacco product use, and poly-tobacco use among college students 2,5,6 . Current data reveal that nearly 59% of college students have ever used at least one tobacco or nicotine product, with 41% having ever used two or more tobacco or nicotine products 5 . Furthermore, while the prevalence of regular use of alternative or novel tobacco products varies significantly by country, it is becoming increasingly popular among both college smokers and non-smokers 6 . In addition, the use of multiple tobacco products among this group and their combination with other substances, such as cannabis, is worrying, with prevalence rates of concern (up to 9%, according to Odani et al. 7 ).
Both experimenting with and regularly using alternative tobacco products may lead to cigarette initiation among non-smokers, increasing the likelihood of their becoming regular cigarette users 5,[8][9][10] . Moreover, they increase the probability of poly-tobacco use among cigarette users, which can lead to higher levels of nicotine intake and greater nicotine addiction and hamper quitting smoking 8,9,11 .
The university period is a crucial time for most college students to establish smoking behaviors 3 , emphasizing the need to identify predictors and correlates of changes in tobacco use. This issue is particularly important among nursing students, who will be expected to perform tobacco prevention and cessation interventions in their future professional roles. Despite their committed role in tobacco control, nursing students in Spain have a high prevalence of tobacco use (35.1%) 12 , which in most cases is similar to or even higher than that reported in the general population 13 . It is also of concern that the prevalence of tobacco use among nursing students is generally higher than that of other health science students, such as medical students, who have a smoking prevalence of 17.5% 12 . Therefore, this study aimed to examine changes in tobacco use patterns and identify their predictors among smokers of a cohort of nursing students.
Design and participants
We conducted a prospective longitudinal study of a sample of nursing students from all nursing schools in Catalonia, Spain, from the academic year 2015-2016 to the academic year 2018-2019. At baseline, 4381 nursing students completed a questionnaire after signing an informed consent form. They agreed to participate in the study, including their willingness to participate in future follow-ups. The description, participation, and data collection of the baseline and follow-up studies have already been reported [14][15][16] . For this study, the inclusion criteria were completing the baseline and follow-up surveys and being current smokers (daily or non-daily) at baseline.
Instrument and variables
At baseline, participants completed a self-administered paper-and-pencil questionnaire that assessed the use of different tobacco products, e-cigarettes, heated tobacco products (HTPs), and cannabis. The Global Health Professional Survey (GHPS) was used to create the questionnaire. For follow-up, the baseline questionnaire was the model for launching an online version through the LimeSurvey platform. Before the survey launch, we conducted a pilot test of the follow-up questionnaire with 20 collaborating researchers and 50 study participants (further details available elsewhere) 15 .
In both the baseline and follow-up questionnaires, we asked participants about their current and past use of various tobacco products, including manufactured (MF) and roll-your-own (RYO) cigarettes, cigars/cigarillos/little cigars and waterpipes, e-cigarettes, HTPs, and cannabis. We used the Centers for Disease Control and Prevention and 'Diagnostic and Statistical Manual of Mental Disorders Fourth Edition' definitions of smoking behavior to classify participants according to their tobacco use 17 . Participants who were using combustible tobacco products at the time of the survey (either MF or RYO cigarettes) were considered current smokers. Participants who were using any of these products daily were classified as daily smokers, while those who were using them not every day but at least once in the last 30 days were classified as non-daily smokers.
Current smokers were asked about their tobacco use patterns, including age at smoking initiation (<17 and ≥17 years); reasons why they started smoking (because my friends/classmates smoked, because one of my family members smoked, because my teachers smoked, to experiment with new experiences, because it is trendy, to feel older, to meet people or to flirt, and other); reasons why they currently smoke (for weight control, to reduce stress/relax, for socializing, because my friend/family smokes, because it is trendy, for pleasure, because I could not quit, and other); number of cigarettes per day (classified as <10, 10-19, and ≥20); time (in minutes) to first cigarette after waking up (≤5, 6-30, 31-60, or >60); if they have seriously tried to quit smoking in the last year (yes or no); number of attempts to quit of at least 24 hours in the last year (1 or ≥2); and if they have the intention to quit or cut back in the following year (yes or no).
In addition, at baseline and follow-up, sociodemographic characteristics of all participants were collected. Baseline sociodemographic characteristics included sex (male, female); age (≤19, 20-24, or ≥25 years); year of degree (first, second, third, or fourth year); place of birth (Catalonia or outside of Catalonia); location of the nursing school (Barcelona or outside of Barcelona); and type of university (public, private with public funding, or private). At follow-up, we ascertained whether they had finished the nursing degree (yes or no); occupation (nursing student, nurse, or other); year of degree for continuing students (second, third, or fourth); work area for recently graduated employed nurses (hospital or other) and type of institution where they worked (public, private, or private with public funding); whether they were living with family or were independent; household monthly income (€) (≤1500, 1501-3000, or >3000); and marital status (single, married or cohabiting, divorced, or widowed).
The main dependent variable was the tobacco use transition between baseline and follow-up. Seven transitions were established: 1) from daily to non-daily smoking; 2) from non-daily to daily smoking; 3) from cigarette-only use (only MF and/or RYO cigarettes) to poly-tobacco use (MF and/or RYO cigarettes with other product/s); 4) from poly-tobacco use to cigarette-only use; 5) between products; 6) reducing cigarette consumption by ≥5 CPD; and 7) quitting smoking. Participants who did not change their tobacco use patterns between the two surveys were defined as: continued as a daily smoker, continued as a non-daily smoker, continued as a cigarette-only user, continued as a poly-tobacco user, or continued as a current smoker. Those who reduced cigarette consumption by ≥5 CPD were compared with those who reduced their consumption by <5 CPD, increased the number of consumed cigarettes, or did not change the number of CPD.
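For illustration, the sketch below shows one way such non-exclusive transition flags could be derived from paired baseline/follow-up records. The column names (base_*/follow_*) and the three example rows are placeholders, not the study's actual variables or data.

```python
import pandas as pd

df = pd.DataFrame({
    "base_status":   ["daily", "non-daily", "daily"],
    "follow_status": ["non-daily", "daily", "quit"],
    "base_poly":     [False, True, False],   # poly-tobacco use at baseline
    "follow_poly":   [True, False, False],
    "base_cpd":      [20, 4, 12],            # cigarettes per day
    "follow_cpd":    [10, 6, 0],
})

smoker_at_fu = df["follow_status"] != "quit"
df["daily_to_nondaily"] = (df["base_status"] == "daily") & (df["follow_status"] == "non-daily")
df["nondaily_to_daily"] = (df["base_status"] == "non-daily") & (df["follow_status"] == "daily")
df["cig_to_poly"] = ~df["base_poly"] & df["follow_poly"] & smoker_at_fu
df["poly_to_cig"] = df["base_poly"] & ~df["follow_poly"] & smoker_at_fu
df["reduced_5cpd"] = (df["base_cpd"] - df["follow_cpd"]) >= 5
df["quit"] = ~smoker_at_fu

# Proportion of the cohort experiencing each (non-exclusive) transition
print(df[["daily_to_nondaily", "nondaily_to_daily", "cig_to_poly",
          "poly_to_cig", "reduced_5cpd", "quit"]].mean())
```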
The independent variables included those related to the tobacco use pattern and sociodemographic characteristics at baseline.
Ethical considerations
The study protocol was approved by the Ethics Committee of the Hospital Universitari de Bellvitge (PR239/18). Informed consent was obtained from all individual participants at baseline and follow-up.
Statistical analysis
For the descriptive analysis, we calculated the prevalence (%) and its 95% confidence intervals (CIs), and for the bivariate analysis we used the chi-squared test. To analyze the predictors of reducing cigarette consumption and quitting smoking, we performed a multivariable Generalized Linear Model with a log link (Poisson regression) and robust variance to obtain both crude (PR) and adjusted prevalence ratios (APRs) and their 95% CIs. For both transitions, the adjusted models included sex, baseline age, and the significant variables identified in the bivariate analysis, except the number of CPD, since it was collinear with the baseline smoking status. Predictors of the transitions from daily to non-daily use, from non-daily to daily smoking, from cigarette-only use to poly-tobacco use, from poly-tobacco use to cigarette-only use, and between products were not assessed due to the small number of participants who experienced these transitions. In addition, we conducted sex- and age-specific analyses to examine potential interactions by: 1) stratifying participants' sociodemographic characteristics and tobacco use pattern variables by age and sex; 2) calculating the cumulative rates of transition from daily to non-daily smoking, from non-daily to daily smoking, from cigarette-only use to poly-tobacco use, from poly-tobacco use to cigarette-only use, cigarette consumption reduction ≥5 CPD, and quitting smoking, stratified by sex and age; and 3) adding an interaction term with sex and age to the main independent variables in the regression models. All tests were two-tailed, and statistical significance was set at p<0.05. All analyses were performed using the statistical package IBM SPSS Statistics version 25.
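The models were fitted in SPSS; as a rough open-source equivalent, a log-link Poisson GLM with a robust (sandwich) variance estimator can be specified as below. The data frame here is randomly generated and the variable names are placeholders, so only the modeling pattern carries over; exponentiated coefficients are the prevalence ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 276
df = pd.DataFrame({
    "quit":           rng.integers(0, 2, n),  # binary outcome
    "nondaily":       rng.integers(0, 2, n),
    "low_dependence": rng.integers(0, 2, n),
})

# Poisson regression with robust variance on a binary outcome yields
# prevalence ratios rather than odds ratios.
model = smf.glm("quit ~ nondaily + low_dependence", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC1")
pr = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([pr.rename("PR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```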
Description of the sample
At baseline, 4381 nursing students completed the survey. Of these, 1288 (29.7% of the sample) reported being current smokers, of whom 61.9% were daily smokers and 38.1% were non-daily smokers. Of all current smokers at baseline, 276 (21.4%) filled in the follow-up survey, with 198 (71.7%) continuing as current smokers while 78 (28.3%) had quit smoking. The percentages of daily and non-daily smokers at follow-up were 70.7% and 29.3%, respectively. Table 1 shows the baseline and follow-up characteristics of the cohort by sex and baseline smoking status. Of the participants followed, 241 (87.3%) were women, and 225 (82.4%) were aged ≤24 years at baseline. At follow-up, 103 (37.3%) were nursing students and 161 (58.3%) were nurses. There were no significant differences by sex among the participants. At baseline, participants aged ≤19 years were more likely to be non-daily smokers (47.1%), while those aged ≥20 years were more likely to be daily smokers (76.9%, p<0.001).
Tobacco use patterns at follow-up
The majority of current smokers at follow-up, whether daily or non-daily, exclusively used MF and/or RYO cigarettes (76.2% and 67.9%, respectively), consumed <10 CPD (87.5% and 100%, respectively), and had low nicotine dependence (76.5% and 100%, respectively). Poly-tobacco use was more frequent among daily and non-daily smokers using cannabis (14.0% and 13.2%, respectively) and waterpipes (9.8% and 24.5%, respectively), with the latter being more frequent among non-daily smokers than among daily smokers (p=0.008). Non-daily smokers consumed, on average, fewer CPD than daily smokers (100% vs 55.6%, p<0.01). A higher proportion of daily smokers than non-daily ones reported intending to cut back their cigarette consumption (75.0% vs 47.6%, p<0.01) (Supplementary file Table S1). Additionally, participants who had completed their nursing degree (either those who were nurses or had other situations) had a greater proportion of cigarette-only use (66.2%); in contrast, those who were still nursing students had a higher prevalence of poly-tobacco use (68.6%) (p<0.001) (Supplementary file Table S2). Sex and age were not associated with any of the variables related to the tobacco use patterns.
Tobacco use transitions between baseline and follow-up
As presented in Table 2 and Figure 1, of all daily smokers at baseline, 12.1% transitioned to being non-daily smokers at follow-up. A high proportion of daily smokers with low nicotine dependence transitioned to non-daily smoking (19.5% vs 4.1%, p=0.012). Although there were no differences by type of product used, a product-by-product analysis showed a higher proportion of daily smokers who transitioned to being non-daily smokers among those who used cigarettes and cannabis concurrently (25% vs 10.8%, 2.5%, 6.7%, and 11.1%, p=0.011). Moreover, the lower the number of CPD, the greater the proportion of daily smokers who transitioned to being non-daily smokers (25.0%, 16.1%, and 3.4%, p<0.05). From the total of non-daily smokers at baseline, 36.2% (n=21) transitioned to being daily smokers at follow-up. Participants who had no intention to quit at baseline had a high proportion of non-daily smokers who transitioned to being daily smokers (46.5% vs 9.1%, p<0.05).

Table 3 and Figure 2 present the transitions in the type of tobacco use between baseline and follow-up. Of all cigarette-only users at baseline, 14.2% had transitioned to poly-tobacco use at follow-up. In comparison to other age groups, a higher proportion (20.0%) of cigarette-only users aged 20-24 years at baseline shifted to poly-tobacco use at follow-up (p=0.027). Overall, 48.4% of poly-tobacco users at baseline transitioned to cigarette-only use. A higher proportion of poly-tobacco users switched to cigarette-only use among participants in the second and third years of their degree studies compared to those in the first and fourth years (76.9% and 75.0% vs 31.4% and 50.0%, p=0.012). Furthermore, compared with those who continued as poly-tobacco users, a lower proportion of participants who reported initiating smoking for reasons other than having a peer/family smoker transitioned to cigarette-only use (p=0.013).
Figure 3 displays the product use between baseline and follow-up. At baseline, MF cigarettes (34.1%), MF and/or RYO cigarettes and cannabis (23.6%), and RYO cigarettes (22.1%) were the most commonly used products. At follow-up, the exclusive use of MF cigarettes continued to be the most common product used (22.4%); however, the prevalence of concurrent use of MF and/or RYO cigarettes with cannabis decreased to 9.8%, while MF and RYO cigarette use increased (17.8%). The exclusive use of RYO cigarettes continued as the third most common product used (12.7%). The prevalence of concurrent use of MF and/or RYO cigarettes and waterpipes slightly increased from 8.7% to 9.8% (13.6% considering only those who were current smokers at follow-up). Most poly-tobacco users of MF and/or RYO cigarettes and e-cigarettes or cigars/cigarillos/little cigars (70%) switched to using only MF cigarettes; however, the prevalence of HTP use increased at follow-up. Finally, users of MF cigarettes at baseline had the highest percentage of quitters at follow-up (26%), whereas users of waterpipes had the lowest percentage of quitters (4.2%).

As shown in Table 4, 60.8% of current smokers (both daily and non-daily) at baseline reduced their cigarette consumption by ≥5 CPD at follow-up. The proportion of smokers who reduced their cigarette consumption was higher among participants aged ≥20 years compared with those aged ≤19 years (63.8% and 68.8% vs 43.9%, p=0.022).

Among all current smokers at baseline, 28.3% had quit smoking at follow-up (Table 5). The percentage of recent quitters was higher among participants who initially reported smoking for reasons other than stress reduction or relaxation (78.5% vs 21.5%, p=0.006); among those who were non-daily smokers in comparison to daily smokers (45.3% vs 17.6%, p<0.001); and among those who had low cigarette consumption (<10 CPD) in comparison to those with higher consumption.
Predictors of tobacco use transition
We found that being a non-daily smoker and having lower nicotine dependence were inversely associated with reducing cigarette consumption (by ≥5 CPD) compared with being a daily smoker (APR=0.33; 95% CI: 0.19-0.55) and having medium and high dependence (APR=0.78; 95% CI: 0.64-0.96) (Table 4). Otherwise, non-daily smoking was the only predictor of quitting smoking (APR=1.19; 95% CI: 1.08-1.31) (Table 5). No effect modification of these predictors was observed in sex- and age-specific analyses.
DISCUSSION
This study among smoker nursing students provides a longitudinal overview of changes in smokers' tobacco use patterns over three years and identifies predictors of transitions in tobacco use patterns. The study found that 12.1% of daily smokers at baseline transitioned to non-daily use at follow-up, and more than one-third of non-daily smokers shifted to being daily smokers. Of all the cigarette-only users, 14.2% transitioned to poly-tobacco use, and of all the poly-tobacco users, almost half transitioned to cigarette-only use. Furthermore, among all current smokers (including both daily and non-daily smokers), two-thirds reduced their cigarette consumption by at least 5 CPD, and almost one-third had quit smoking. Finally, whereas being a non-daily smoker and having lower nicotine dependence were inversely associated with reducing cigarette consumption, being a non-daily smoker was directly associated with being a quitter at follow-up. This range of tobacco use transitions is consistent with existing evidence among college students, suggesting that nursing students, similarly to students in other disciplines, also experience several changes in tobacco use patterns during their training 5,19,20 . Furthermore, a notable proportion of this cohort of nursing students were poly-tobacco and other tobacco product users, with the former being more prevalent among those who were still enrolled in nursing school than among those who had graduated. This finding is consistent with the results of Butler et al. 21 , who found higher odds of poly-tobacco use among lower level undergraduates. Additionally, other research points to young college students being more prone to using alternative tobacco products than older students 22 .
The diverse tobacco use transitions experienced during the college years and the greater prevalence of poly-tobacco use and alternative tobacco product use among university students can be explained by multiple psychosocial factors. First, college students generally seek to experience new sensations, are influenced by their peers, and are vulnerable to situations that cause anxiety, which may lead them to consume several emerging tobacco products 23 . Secondly, college students are more exposed to tobacco industry messages than other individuals, which increases their probability of using tobacco products 24 . Finally, social smoking is highly prevalent among college students, which may increase alternative tobacco product use and decrease their perceived addiction 25 . Poly-tobacco and other tobacco product users are as likely as cigarette-only users to intend to quit smoking, which highlights the need to implement tobacco prevention and cessation strategies during the college years, especially during the first years, before the consolidation of smoking behaviors 21,22 .
Regarding predictors of tobacco use transitions, being a non-daily smoker and having lower nicotine dependence were determinant factors for reducing cigarette consumption and smoking cessation in this cohort of smoker nursing students. Non-daily smokers had a lower probability of reducing their cigarette consumption but a higher probability of quitting smoking. Likewise, the percentage of non-daily smokers who transitioned to being daily smokers (36.2%) was triple that of daily smokers who switched to being non-daily smokers (12.1%). Based on these findings, we consider that non-daily smokers in this cohort showed less smoking pattern stability than daily smokers, which is consistent with the results of previous longitudinal studies carried out among college students and other populations 20,26,27 . This lower stability of smoking patterns may be a consequence of the heterogeneity among non-daily smokers, who present different behavioral and psychosocial smoking characteristics (frequency and amount of use, social smoking, perceived dependence, etc.) 28 . Although most non-daily smokers have low nicotine dependence, which is a strong predictor of smoking cessation, their lower perceived addiction and other psychosocial factors may inhibit them from reducing cigarette consumption and probably lead to an increase in consumption until they become daily smokers 29,30 .
Although more longitudinal studies are required, these findings highlight the need for a better understanding of potential predictors that may disrupt the pattern of escalating smoking and addiction during the college years, as both non-daily and daily smokers are at a determinant stage for consolidating their tobacco use behaviors. Additionally, current tobacco use behaviors among college students indicate the need to implement tobacco control strategies in universities early and urgently. The implementation of tobacco-free campuses has proven to be effective in reducing the overall prevalence of smoking and secondhand exposure among college students; however, their effectiveness may vary by tobacco product 31 . Therefore, a comprehensive enforcement strategy including tobacco-free campus policies, tobacco use prevention and tailored cessation programs, and restrictions on the marketing, advertising, and promotion of tobacco products could be effective in reducing tobacco use among nursing students 3,32 .
Strengths and limitations
While this study had a large sample at baseline, three-quarters of the participants were lost to follow-up. Those who were male, aged ≥25 years, and current smokers were less likely to participate in the follow-up. In addition, in the definition of daily and non-daily smokers we only included users of MF and RYO cigarettes, and this may have resulted in low sample sizes in the tobacco use transition groups due to the reduced number of smokers. The small sample size prevented us from analyzing the predictors of all the transitions. While the cohort was restricted to nursing schools in Catalonia, the participants' characteristics do not appear to differ from those of nursing students from other regions of Spain and Europe 33 .
As far as we know, this is the first longitudinal study in Europe to investigate the predictors of tobacco use transitions in nursing students. The study distinguished between levels of smoking intensity, analyzing non-daily and daily smokers separately. In addition, the survey also explored the use of conventional tobacco products, such as MF and RYO cigarettes and cigars/cigarillos/little cigars, novel ones, such as e-cigarettes and waterpipes, and cannabis. Finally, although we included several individual and contextual sociodemographic characteristics and variables related to tobacco use patterns as potential predictors of changes in smoking habits, residual confounding cannot be ruled out.
CONCLUSIONS
Nursing students who smoked, especially those who were non-daily smokers and poly-tobacco users at baseline, underwent several transitions in their tobacco product use during the follow-up period, either by increasing their consumption, reducing it, or quitting smoking. Being a non-daily smoker and having lower nicotine dependence were inversely associated with reducing cigarette consumption by ≥5 CPD, and only being a non-daily smoker predicted tobacco cessation at follow-up. These findings suggest that tobacco use behavior in this cohort is unstable and emphasize the urgent need for the implementation of a comprehensive strategy to reduce both conventional and novel tobacco product use on university campuses.
Figure 1. Transitions in tobacco use patterns among smokers of a cohort of Catalan nursing students from baseline (2015-2016) to follow-up (2018-2019) (N=276).
Table 1. Sociodemographic characteristics of the followed Catalan smoker nursing students at baseline (2015-2016) and follow-up (2018-2019) according to sex and baseline smoking status (N=276). a Chi-squared test (male vs female). b Chi-squared test (daily smoker vs non-daily smoker).
Table 2. Baseline sociodemographic characteristics and tobacco use patterns of Catalan nursing students who transitioned from non-daily to daily smoking and from daily to non-daily smoking from baseline (2015-2016) to follow-up (2018-2019).
Table 3. Baseline sociodemographic characteristics and tobacco use patterns of participants who transitioned from cigarette-only use to poly-tobacco use and from poly-tobacco use to cigarette-only use from baseline (2015-2016) to follow-up (2018-2019), in Catalan nursing students (N=276). a Transitioned to poly-tobacco use. b Transitioned to cigarette-only use.
Table 4. Predictors of reducing cigarette consumption by ≥5 cigarettes/day in a cohort of Catalan nursing students from baseline (2015-2016) to follow-up (2018-2019) according to baseline sociodemographic characteristics and tobacco use patterns (N=276). a Compared with those who reduced their consumption by <5 cigarettes/day, increased their consumption, or did not change their consumption (n=81). APR: adjusted prevalence ratio. b PR adjusted for sex, age group, age at smoking initiation, having started smoking because they have a family/peer smoker, current smoking to reduce stress/relax, smoking status, and heaviness of smoking index. c Continuous variable. d Multiple responses were accepted. e Manufactured and/or roll-your-own cigarettes.
Table 5. Predictors of smoking cessation in a cohort of Catalan nursing students from baseline (2015-2016) to follow-up (2018-2019) according to baseline sociodemographic characteristics and tobacco use patterns (N=276).
Table 5. Continued. a Compared with 'continued as smokers' (n=198). APR: adjusted prevalence ratio. b PR adjusted for sex, age group, current smoking to reduce stress/relax, smoking status, heaviness of smoking index, and thinking about cutting back consumption. c Continuous variable. d Multiple responses were accepted. e Manufactured and/or roll-your-own cigarettes.
Reservoir Condition Pore-scale Imaging of Multiple Fluid Phases Using X-ray Microtomography
X-ray microtomography was used to image, at a resolution of 6.6 µm, the pore-scale arrangement of residual carbon dioxide ganglia in the pore-space of a carbonate rock at pressures and temperatures representative of typical formations used for CO2 storage. Chemical equilibrium between the CO2, brine and rock phases was maintained using a high pressure high temperature reactor, replicating conditions far away from the injection site. Fluid flow was controlled using high pressure high temperature syringe pumps. To maintain representative in-situ conditions within the micro-CT scanner a carbon fiber high pressure micro-CT coreholder was used. Diffusive CO2 exchange across the confining sleeve from the pore-space of the rock to the confining fluid was prevented by surrounding the core with a triple wrap of aluminum foil. Reconstructed brine contrast was modeled using a polychromatic x-ray source, and brine composition was chosen to maximize the three phase contrast between the two fluids and the rock. Flexible flow lines were used to reduce forces on the sample during image acquisition, potentially causing unwanted sample motion, a major shortcoming in previous techniques. An internal thermocouple, placed directly adjacent to the rock core, coupled with an external flexible heating wrap and a PID controller was used to maintain a constant temperature within the flow cell. Substantial amounts of CO2 were trapped, with a residual saturation of 0.203 ± 0.013, and the sizes of larger volume ganglia obey power law distributions, consistent with percolation theory.
Introduction
Carbon Capture and Storage is the process where CO 2 is captured from large point sources and stored in porous rock, displacing resident brines so that it remains in the subsurface for hundreds to thousands of years 1 . The CO 2 resides in the subsurface as a dense super-critical phase (scCO 2 ), with properties radically different to CO 2 at ambient conditions. There are four principal mechanisms by which scCO 2 might be immobilized in the subsurface: stratigraphic, solubility, mineral and residual trapping. Stratigraphic trapping is where CO 2 is held underneath impermeable seal rocks; solubility trapping is where CO 2 dissolves into the resident brine surrounding the injected CO 2 [2][3][4] ; mineral trapping is where carbonate mineral phases are precipitated into the rock 5 ; and residual or capillary trapping is where CO 2 is held by surface forces as tiny droplets (ganglia) in the pore-space of the rock 6 . This can occur either naturally, by the migration of the CO 2 plume [7][8][9] , or can be induced by the injection of chase brines 10 . In order to understand the processes governing the flow and trapping of this CO 2 in the subsurface a new suite of experiments must be conducted, harnessing new advances in technology to better understand the fundamental physics associated with multiphase flow.
X-ray microtomography has developed as a technique over the past 25 years, from early attempts to visualize both dry geological samples 11 and multiple fluid phases 12 to the primary method for the non-invasive imaging of rock cores, both for modeling purposes and for experimental implementation [13][14][15] . Because microtomography is non-invasive, it has the ability to study systems at representative conditions, which is particularly attractive for the CO2-brine-rock system, as the multiphase flow behavior of scCO2 is highly dependent on thermo-physical properties, such as interfacial tension and contact angle, which are in turn a strong function of system conditions such as temperature, pressure and salinity [16][17][18] . In such a complex system, with such an extensive and poorly understood set of inter-dependent variables, experiments using idealized pore structures 19 or analogue fluids 20,21 may not be applicable to flow processes in the subsurface. Imaging multiple fluids at conditions representative of a prospective CO2 injection formation has, however, remained a challenge 22 . In this study we outline a methodology for the examination of multi-fluid behavior at reservoir conditions, focusing on the examination of capillary trapping.

Experiments with soluble fluids provide an additional challenge when using lengthy acquisition times, as CO2 will diffuse through the polymeric portions of the experimental assembly, reducing the in-situ fluid saturation. All these issues meant that scan times longer than around 2 hr were impractical. In order to keep scan times below this requirement, particularly stringent for lab based sources, the core-holder must be around 1 cm in diameter. A larger coreholder size would have required the detector to be much further from the source to achieve the same geometric magnification, reducing the x-ray flux incident on the detector and therefore increasing required projection exposure times. The flow cell used in these experiments was based on a traditional Hassler cell design, built around a carbon fiber sleeve, with a sleeve design similar to that used by Iglauer et al. 27 , but with two significant alterations: 1) The carbon fiber composite used in the sleeve manufacture was changed from T700 fibers, with a stiffness of 230 GPa, to M55 fibers, with a stiffness of 550 GPa. This not only reduced the amount of sample movement during tomography acquisition, but also increased the maximum working pressure of the cell from 20 MPa to 50 MPa. 2) The sleeve was elongated from 212 mm to 262 mm to allow the source and detector to be as close to the sample as possible.
A major experimental shortcoming in the first study to use micro-CT to examine CO 2 at reservoir conditions was the use of metal lines to control the flow to and from the core-holder 27 . As the sample is rotated relative to the pumps, the flow lines also need to be rotated. Stiff flow lines can cause the sample to move, reducing effective image resolution or making some or all of the dataset unusable. To prevent this we replaced all the flow lines close to the rotation stage with flexible polyether ether ketone (PEEK) tubing. These flow lines were flexible, providing very small lateral forces (load) to the core-holder during acquisition. We also attached the flow lines to valves attached to the sample stage, rather than attaching the flow lines to the coreholder. This meant that any existing flow-line load was transmitted directly to the stage, rather than to the sample, reducing the probability of sample motion. A major disadvantage of using the PEEK tubing was that CO 2 was able to slowly diffuse through it, over a timescale of around 24 hr. This meant that CO 2 saturated brine left in the flow lines would gradually desaturate.
Another major experimental shortcoming of previous studies was inaccurate control of temperature. This can impact results in a number of ways. Firstly, temperature is a strong control on both interfacial tension and contact angle [16][17][18] . Moreover, the solubility of both scCO 2 and carbonate rock in brine is also highly temperature dependent 28 . Solubility control is critical, as when scCO 2 is injected into a saline carbonate aquifer it will dissolve into the resident brine, forming a highly reactive carbonic acid, which will in turn start to dissolve any calcite present. Any inaccuracy in solubility control can therefore lead to scCO 2 dissolution/exsolution or solid dissolution/precipitation.
Previous studies 27 used a heated confining fluid to heat the coreholder; however this was problematic. It has the disadvantages associated with the difficulty of accurately maintaining a constant confining pressure using a recirculating water supply, requiring extra heating baths for that supply. Furthermore, this system only maintains an accurate control of temperature at the point of the heating bath (not at the point of the core holder, and the confining fluid would cool between the water bath and the core holder). It also requires both an inlet and an outlet port for the confining fluid, increasing the number of fluid lines attached to the coreholder and so increasing flow line load.
Instead of using a heated confining fluid, a flexible heating jacket was used to surround the core holder. This very simple heating method resulted in very little coreholder load and allowed for precise and accurate heating. An extremely thin polyimide heating film was used in order to minimize sample size. The construction of this film consists of an etched copper foil element 0.0127 mm thick, encapsulated between two layers of 0.0508 mm polyimide film. The copper elements present in the jacket did not noticeably affect image quality. Temperature was measured using a thermocouple sitting in the confining annulus of the cell. It was positioned on the outside of the confining sleeve, as close as possible to the core, ensuring an accurate, reliable and stable reading of the pore-fluid temperature. The thermocouple and heating film were connected to a custom built Proportional Integral Derivative (PID) controller, and temperatures were controlled within ± 1 °C.
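For readers unfamiliar with PID control, here is a minimal sketch of the loop such a controller implements; the gains, the 50 °C setpoint, and the read_thermocouple() call are hypothetical placeholders, not values from the actual rig.

```python
def pid_step(setpoint, reading, state, kp, ki, kd, dt):
    """One update of a textbook PID loop; returns a heater duty cycle in [0, 1]."""
    error = setpoint - reading
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    output = kp * error + ki * state["integral"] + kd * derivative
    return max(0.0, min(1.0, output))  # clamp to the heater's duty range

state = {"integral": 0.0, "prev_error": 0.0}
# Hypothetical usage, polling the annulus thermocouple once per second:
# duty = pid_step(50.0, read_thermocouple(), state, kp=0.5, ki=0.02, kd=1.0, dt=1.0)
```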
To maintain complete control over inter-phase solubility, and to represent conditions present in the aquifer far away from the injection site, prior to injection the brine was equilibrated with scCO2 by vigorously mixing the two fluids together with small particles (1-2 mm) of the host rock in a stirred and heated reactor. All wetted components within this reactor are made of Hastelloy to minimize corrosion. The reactor contains a filtered dip tube to allow denser fluid to be extracted from the base of the reactor (brine) and less dense fluid to be extracted from the top of the reactor (scCO2). High pressure syringe pumps were used to maintain pressure and control flow in the pore-space of the rock and in the reactor, with a displacement accuracy of 25.4 nl. The experimental apparatus used in this study is shown in Figure 1. The ionic salt used for the experiment from which the representative results were drawn was potassium iodide (KI), as it has a high atomic weight and so a high x-ray attenuation coefficient, making it an effective contrast agent. Less attenuating salts (such as NaCl) or mixtures could be used; however, larger salinities would be required to achieve the same x-ray attenuation.
Imaging Strategy Design
1. In order to predict the imaging performance of different solute choices for the brine, calculate the x-ray spectrum of the incident x-rays [29][30][31]. Include the impact of the core-holder, core assembly and confining fluids on the x-ray spectrum. An example incident x-ray spectrum using an acceleration voltage of 80 kV and electron current of 87 µA is shown in Figure 2.
2. Compare this spectrum to the transmission factors of the sample containing different pore-fluids. Simulate changes in the transmission factor due to changes in the pore-fluid using the Beer-Lambert law, assuming an effective optical length for each species within the sample and calculated x-ray attenuation coefficients (Figure 3); a code sketch of this comparison follows the protocol steps.
5. Open valve 4. Flush more than 1,000 pore volumes of equilibrated brine through the core by refilling pump 2 at a constant flow rate. Pore volume is found by multiplying the core volume by the porosity found using helium porosimetry.
NOTE: This will miscibly displace the un-equilibrated brine, ensuring 100% initial brine saturation and creating conditions in the core akin to the subsurface conditions in an aquifer at a point slightly ahead of the front of a scCO2 plume.
1. Inject scCO2 into the brine-saturated core at a low flow rate. Continually take 2D projections in order to accurately measure the total injected volume by observing the point when scCO2 displaces brine in the pore space.
2. Pass 10 pore volumes (around 1 ml) of equilibrated brine through the core at the same low flow rate, causing scCO2 to become trapped as a residual phase in the pore-space.
3. After steps 4.1 or 4.2, take scans of the sample to image drainage or imbibition respectively. Use a voxel size such that the entire diameter of the core fits within the field of view.
4. Reconstruct the scans using a tomographic reconstruction program. To scan the entire length of the core while retaining a small voxel size, reconstruct composite volumes by stitching together multiple overlapping sections, acquired sequentially.
NOTE: Each section required around 400 projections, taking 15-20 min to acquire, so the scanning of an entire composite volume took around 90 min.
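A minimal sketch of the Beer-Lambert comparison in step 2 is given below. The attenuation coefficients and optical path lengths are illustrative placeholders rather than values from this study; in practice they would be derived from tabulated attenuation data (e.g. NIST) weighted over the simulated incident spectrum.

```python
import numpy as np

# Compare transmission factors for different pore fluids via the Beer-Lambert
# law, T = exp(-sum_i mu_i * L_i). All numbers below are placeholders.

mu = {                 # linear attenuation coefficients in 1/cm (illustrative)
    "calcite": 1.20,
    "scCO2": 0.05,
    "KI_brine": 2.50,
}
path = {               # effective optical path lengths in cm (illustrative)
    "rock": 0.7,       # solid fraction of a 1 cm core
    "pore": 0.3,       # pore fraction of a 1 cm core
}

def transmission(pore_fluid):
    optical_depth = mu["calcite"] * path["rock"] + mu[pore_fluid] * path["pore"]
    return np.exp(-optical_depth)

for fluid in ("scCO2", "KI_brine"):
    print(fluid, f"T = {transmission(fluid):.3f}")
```

A large difference in T between the candidate pore fluids is what makes a salt such as KI an effective contrast agent.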
Image Processing and Segmentation
1. Apply a non-local means edge-preserving filter 33,34 to the dataset and correct the images for any beam hardening or softening artifacts created during image reconstruction by modeling these artifacts as radially symmetric Gaussian functions 35.
2. Segment the data (turn the greyscale information into a binary representation of the CO2 within the image) by the use of a watershed algorithm with a seed generated using a 2D histogram 36, treating the CO2 as one phase and the brine and the rock together as the other phase (see the code sketch after this list).
3. Analyze this segmented image to find both the total number of CO2 voxels and also the sizes of each connected cluster of residual CO2.
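A minimal sketch of the filtering and segmentation steps is given below, assuming the scikit-image library. The published workflow seeds the watershed from a 2D grey-value/gradient histogram; for brevity this sketch seeds from conservative grey-value thresholds instead, and all numeric values are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_nl_means
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_co2(slice_grey):
    # Edge-preserving non-local means filter (parameters illustrative)
    img = denoise_nl_means(slice_grey, h=0.05, patch_size=5, patch_distance=6)
    # Conservative seeds: voxels almost certainly CO2 (dark) or brine/rock
    markers = np.zeros(img.shape, dtype=np.int32)
    markers[img < 0.25] = 1      # CO2 phase
    markers[img > 0.45] = 2      # brine + rock treated as a single phase
    # Flood from the seeds over the gradient-magnitude image
    labels = watershed(sobel(img), markers)
    return labels == 1           # binary CO2 map

rng = np.random.default_rng(0)
demo = rng.random((64, 64))      # stand-in for a normalized tomogram slice
co2_mask = segment_co2(demo)
print("CO2 voxels:", int(co2_mask.sum()))
```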
Representative Results
The results for a single carbonate, Ketton limestone, an oolite from the upper Lincolnshire Limestone Member, were analyzed in 3D in order to identify and measure the volume of each unique disconnected ganglion, which was then labeled (Figure 6). All processing was conducted within the Avizo Fire 8.0 and ImageJ programs 37.
The segmented, partially saturated images were analyzed by counting the number of voxels of residually trapped scCO2 to find the proportion of the rock volume occupied by trapped scCO2 - the capillary trapping capacity. This can then be converted to a residual saturation (S_r) by dividing this value by the porosity obtained using helium porosimetry. A significant proportion of scCO2 was trapped, with a residual saturation of 0.203 ± 0.013. This agrees with the results found in previous studies using micro-CT 23. Larger core-scale studies of residual trapping in this rock type showed a lower residual saturation of 0.137 ± 0.012 38.
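The voxel-count-to-saturation conversion amounts to the following short calculation; the counts and porosity here are illustrative, not the measured values.

```python
# S_r = (trapped CO2 volume fraction of the whole image) / porosity.
n_co2_voxels = 3.2e6        # voxels labeled CO2 in the segmented image
n_total_voxels = 6.9e7      # all voxels inside the analyzed core region
porosity = 0.23             # from helium porosimetry

trapping_capacity = n_co2_voxels / n_total_voxels   # fraction of bulk volume
s_r = trapping_capacity / porosity
print(f"S_r = {s_r:.3f}")
```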
The ingress of brine into a scCO2-saturated core is an imbibition process in which a wetting fluid (brine) invades each pore, displacing the non-wetting fluid (scCO2). In a strongly water-wet rock we expect the water to fill areas of the pore space in order of size 39,40, trapping disconnected ganglia in the process called snap-off. This process should be percolation-like 41, in which case percolation theory predicts that the distribution of trapped ganglion sizes s follows a power law governed by the Fisher exponent τ. Network modeling has shown that in three-dimensional cubic regular lattices the value of this exponent is around τ = 2.189 43. One natural way of extracting this exponent from real data is to plot the binned ganglion-size distribution N(s), as defined by Dias and Wilkinson 41, which should scale as N(s) ∝ s^(−τ). This is then plotted on a log-log plot as a function of s (Figure 7), showing power-law behavior for large ganglia, but an under-representation of smaller ganglia compared to the power-law model. The exponent was calculated by excluding ganglia smaller than 10^5 voxels (approximately the start of the power-law behavior) and performing Levenberg-Marquardt regression 44,45 using a least absolute residual robust fitting algorithm 46,47. This was performed using a commercial software package. The Fisher exponent for this system was 2.287 ± 0.009, close to the theoretical value of 2.189, indicating that imbibition in this system is indeed percolation-like. More generally these results confirm conclusions in larger core-flood experiments 38,48,49 that scCO2 acts as the non-wetting phase in carbonates.

Table 1. Summary of results (transmission factor and change in transmission factor relative to the vacuum-filled case) from simulation of the x-ray optical properties of the rock and pore-space filling material imaged during this study. Each column represents a different material filling the pore-space of the rock within the core-holder.
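A minimal sketch of the exponent extraction is shown below on synthetic data. The study used a commercial Levenberg-Marquardt fit with least-absolute-residual robust weighting; scipy's Levenberg-Marquardt mode does not support robust losses, so this sketch uses the trust-region solver with a soft-L1 loss as a close analogue.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit N(s) ~ C * s**(-tau) in log space for large ganglia. Synthetic data.
rng = np.random.default_rng(1)
s = np.logspace(5, 7, 20)                                 # sizes in voxels
n_s = 1e14 * s**-2.2 * rng.lognormal(0, 0.1, s.size)      # synthetic counts

def residuals(params, s, n_s):
    log_c, tau = params
    return np.log(n_s) - (log_c - tau * np.log(s))

fit = least_squares(residuals, x0=[30.0, 2.0], args=(s, n_s), loss="soft_l1")
print(f"tau = {fit.x[1]:.3f}")   # should recover roughly 2.2 here
```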
Discussion
The most critical steps for successful imaging of multiphase fluids at elevated pressures and temperatures are: 1) The successful isolation of the pore fluid from the surrounding confining fluid; 2) the effective equilibration of the fluids and rock prior to injection; 3) effective temperature control throughout the experiment; and 4) the effective segmentation of the resulting images.
The use of the aluminum wraps is critical for the successful isolation of the pore-fluid from the surrounding confining fluid, as in their absence diffusive exchange across the sleeve is rapid and the saturation within the core does not remain constant for the duration of the scan. This problem can also arise when fluid remains in the PEEK flow lines for extended periods of time (> 2 hr) prior to injection into the core in steps 4.1 and 4.2. Once again, CO2 diffusively exchanges across the plastic, causing the brine to desaturate. If this desaturated brine is injected into the core, the saturation in the core will decrease as residual clusters are dissolved by the injected brine.
Other methods for the equilibration of fluids and rocks, including fluid recirculation 50, have been proposed in the literature. These methods increase the complexity of the experimental setup, which would have lengthened each experiment and, in turn, increased the likelihood that the brine in the flow lines would have diffusively desaturated.
Effective temperature control is essential, and the presence of a thermocouple within the confining annulus of the flow cell is critical for this. Temperature is only measured at a single point, meaning there may be some gradient across the sample, leading to solubility imbalance and dissolution or exsolution. This can be minimized by locating the hot junction of the thermocouple as close as possible to the inlet face of the rock core.
The effective segmentation of the resulting images can be a real challenge with these systems, as the segmentation of images containing a partial saturation of multiple fluids is significantly more challenging than the segmentation of dry images, so the use of simple grey-scale universal thresholding is insufficient 51. The use of watershed segmentation not only gives the most reliable results compared to other algorithms in the literature, but is also the most effective at dealing with ring and partial volume artifacts.
Statics and Dynamics of Yukawa Cluster Crystals on Ordered Substrates
We examine the statics and dynamics of particles with repulsive Yukawa interactions in the presence of a two-dimensional triangular substrate for fillings of up to twelve particles per potential minimum. We term the ordered states Yukawa cluster crystals and show that they are distinct from the colloidal molecular crystal states found at low fillings. As a function of substrate and interaction strength at fixed particle density we find a series of novel crystalline states that we characterize using the structure factor. For fillings greater than four, shell and ring structures form at each potential minimum and can exhibit sample-wide orientational order. A disordered state can appear between ordered states as the substrate strength varies. Under an external drive, the onsets of different orderings produce clear changes in the critical depinning force, including a peak effect phenomenon that has generally only previously been observed in systems with random substrates. We also find a rich variety of dynamic ordering transitions that can be observed via changes in the structure factor and features in the velocity-force curves. The dynamical states encompass a variety of moving structures including one-dimensional stripes, smectic ordering, polycrystalline states, triangular lattices, and symmetry locking states. Despite the complexity of the system, we identify several generic features of the dynamical phase transitions which we map out in a series of phase diagrams. Our results have implications for the structure and depinning of colloids on periodic substrates, vortices in superconductors and Bose-Einstein condensates, Wigner crystals, and dusty plasmas.
I. INTRODUCTION
The creation of new types of crystalline or partially ordered states and the dynamics of assemblies of interacting particles have attracted much attention both in terms of the basic science of self-assembly and dynamic pattern formation as well as for applications utilizing self-assembly processes. One of the most extensively studied systems exhibiting this behavior is assemblies of colloidal particles, where the equilibrium structures can be tuned by changing the directionality of the colloid-colloid interactions [1,2]. Since it can be difficult to control and tune the exact form of the interaction, another approach is to use colloids with well-defined interactions that are placed on some type of ordered substrate. Optical trapping techniques are one of the most common methods of creating periodic substrates for colloids [3]. Studies of colloidal ordering and melting for one-dimensional (1D) periodic substrate arrays have revealed ordered colloidal crystalline structures as well as smectic-type structures where the colloids are crystalline in one direction and liquidlike in the other [4][5][6][7][8][9][10][11][12]. These experiments show that the substrate strength strongly influences the type of colloidal structure that forms and that as the substrate strength increases, the resulting enhancement of fluctuations can induce a transition from crystalline to smectic order [4,6,8]. Numerous different colloidal crystalline structures can also appear on 1D substrate arrays of fixed periodicity when the colloid density is varied [7,9].
More recent studies addressed colloidal ordering on two-dimensional (2D) periodic substrates [13][14][15][16][17]. In these systems, the filling factor f is defined as the number of colloids per substrate minimum. For integer fillings f = n, the colloids in each minimum can form an effective rigid n-mer, such as a dimer or trimer [16][17][18][19][20][21][22]. The n-mers have an orientational degree of freedom, and depending on the effective interaction between neighboring n-mers, all the n-mers may align into a ferromagnetically ordered state, sit perpendicularly to their neighbors in an antiferromagnetically ordered state, or form other orientationally ordered states. For square and triangular substrate arrays, n-mer states have been studied up to n = 4 [16,19-21]; however, it is not known what structures would form at higher fillings when the simple picture of rigid n-mers no longer applies. Studies of the ordering of bidisperse colloidal assemblies with two different charges on 2D periodic substrates produced novel ordered phases, while pattern switching could be induced by application of an external field [22]. Similar pattern switching also occurs for colloids with monodisperse charges under external driving [23]. The colloidal n-mer states have been termed "colloidal molecular crystals," and for conditions under which they lose their orientational ordering, they are referred to as "colloidal plastic crystals." Colloidal molecular crystals appear for integer fillings f = n. At fractional fillings such as f = 3/2, 5/2, or 7/2, it is possible for ordered composite states to form containing two coexisting species of n-mers; however, for other fractional fillings, the system is frustrated and the n-mer states are disordered [24]. Other studies have shown that novel orderings appear when the 2D substrate array has quasicrystalline order [25].
Once colloidal crystals have formed on a substrate, the driven dynamics can be explored by applying an external driving field to the sample. A variety of dynamical locking phases can occur in which the colloids preferentially flow along symmetry directions of the substrate [26]. As the filling fraction f is varied, a series of peaks in the critical force needed to depin the colloids occurs at integer values of f, indicating the existence of commensurability effects [27]. Recent experiments with strongly interacting colloids on 2D periodic arrays show that kink-type dynamics can occur near f = 1 [28]. It would be interesting to explore higher fillings where new types of dynamics could emerge [29].
Many of the same types of phenomena found in colloidal molecular crystals can also be realized for other systems that can be modeled as interacting particles in the presence of a 2D periodic substrate. For example, the antiferromagnetic ordering of dimer colloidal molecular crystals on a square substrate was reproduced using vortices in Bose-Einstein condensates confined by optical traps with two vortices per trap [30] as well as with vortices trapped by large pinning sites in type-II superconductors [31][32][33]. Experimental and numerical studies of molecular ordering on periodic substrates show similar orderings [34]. Other systems where similar states could be realized include classical electrons or dusty plasmas with some form of substrate as well as crystalline cold atoms on optical lattices.
To our knowledge, previous studies of Yukawa-interacting colloidal molecular crystals have focused only on systems with up to four colloids per trap. For the case of three colloids per trap, only a limited number of studies have considered the dynamics, and even in this limit there are several new features that we describe for the first time in this work. We show that at high fillings, the rigid n-mer picture breaks down and new cluster and ring states form. Several general features of the statics and dynamics emerge at these larger fillings which are independent of the specific filling.
One of the key findings in our work is the development of orientationally ordered shell structures at higher fillings for the 2D arrays. The development of particle shell structures was studied previously for repulsive particles in isolated individual traps, in systems that include classical charges [35], Wigner islands [36], dusty plasmas [37], colloids [38][39][40][41][42], charged balls in traps [43], and vortices in confined geometries [44,45]. The shell ordering in these systems can be altered by the shape or type of trap as well as by competing attractive and repulsive interactions between the particles [46,47]. In our system, the particles within each shell can exhibit an additional ordering due to interactions with particles in the neighboring traps. We show that as a function of substrate strength, a rich variety of colloidal cluster crystals can be created depending on the filling, and that certain shell structures are more stable than others. The number of particles in the shells and the number of shells depend on the substrate strength, and in certain cases the particles form ring structures instead of shells. We also find reentrant ordered phases as a function of substrate strength. For weak and strong substrates, the system is ordered; however, for substrates of intermediate strength, the system is generally disordered. We find that the depinning threshold can show distinct changes at the boundaries between these different phases as a function of substrate strength, and remarkably, we find that in some cases it is possible for the depinning force to decrease with increasing substrate strength. The velocity-force curves contain clear signatures of the different phases corresponding to different modes of depinning.
II. SIMULATION
We consider a 2D system with periodic boundary conditions in the x and y directions. The sample contains N_c particles interacting with a triangular sinusoidal substrate containing N_s potential minima. We focus on the case where the number of colloids N_c is an integer multiple of N_s such that f = N_c/N_s = n, where f is the filling factor and n is an integer. A single particle i responds to forces according to the following overdamped equation of motion:

$$\eta \frac{d\mathbf{R}_i}{dt} = \mathbf{F}^{cc}_i + \mathbf{F}^{s}_i + \mathbf{F}^{ext}.$$

The particle-particle interaction force is $\mathbf{F}^{cc}_i = -\sum^{N_c}_{j \neq i} \nabla V(R_{ij})$, and the particle interactions are of a Yukawa form,

$$V(R_{ij}) = \frac{Z^{*2}}{R_{ij}} \exp(-\kappa R_{ij}),$$

where Z* is the effective charge, 1/κ is the screening length, a_0 is the unit of length, which is of order a micron, and R_{ij} = |\mathbf{R}_i - \mathbf{R}_j|. We model the substrate as a triangular lattice with force $\mathbf{F}^{s}_i = -\nabla U_s(x,y)$, where

$$U_s(x,y) = A \sum^{3}_{k=1} \cos(2\pi b_k/a_0), \qquad b_k = x\cos(\theta_k) - y\sin(\theta_k) + a_0/2,$$

θ_1 = π/6, θ_2 = π/2, and θ_3 = 5π/6, and A is the amplitude of the substrate potential. The initial colloid positions are obtained by simulated annealing, where the colloids start in a high temperature molten state and are gradually cooled to a finite temperature or to T = 0. We have also considered other initialization procedures, including placing a commensurate number of colloids in each substrate minimum and letting the system relax. This procedure gives very similar initial states as the simulated annealing, especially for strong substrates; however, for weaker substrates the configuration obtained through simulated annealing is generally more disordered, as we discuss in more detail later. After annealing, we investigate the transport properties by applying an external drive $\mathbf{F}^{ext} = F_D\hat{\mathbf{x}}$. We increment F_D from F_D = 0 to some final value in steps of δF_D, and we wait 10^6 simulation time steps between increments to avoid transient effects. The depinning threshold is obtained by measuring the average colloid velocity $V = N_c^{-1}\sum^{N_c}_{i}\langle (d\mathbf{R}_i/dt)\cdot\hat{\mathbf{x}}\rangle$. The value of δF_D is modified according to the strength of the substrate. For weaker substrates we use smaller values of δF_D in order to obtain an accurate depinning threshold.
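A minimal overdamped Brownian-dynamics sketch of this model is given below, using the interaction and substrate forms above with Z* = 1. It is an illustration rather than the simulation code used in this work: periodic boundaries, simulated annealing, thermal noise, and interaction cutoffs are omitted, and all parameter values are arbitrary. With the values shown the drive is below threshold, so the measured V is essentially zero; ramping f_d traces out a velocity-force curve.

```python
import numpy as np

a0, A, kappa, eta, f_d, dt = 1.0, 2.0, 1.0, 1.0, 0.2, 0.005
thetas = np.array([np.pi / 6, np.pi / 2, 5 * np.pi / 6])

# Start from a small triangular patch of particles (no periodic images here)
ii, jj = np.meshgrid(np.arange(6), np.arange(5))
pos = np.stack([ii.ravel() + 0.5 * (jj.ravel() % 2),
                jj.ravel() * np.sqrt(3) / 2], axis=1).astype(float)

def yukawa_forces(pos):
    # F_i = sum_j exp(-kappa r) * (kappa/r + 1/r^2) * r_hat  (Z* = 1)
    d = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                 # exclude self-interaction
    mag = np.exp(-kappa * r) * (kappa / r + 1.0 / r**2)
    return ((mag / r)[..., None] * d).sum(axis=1)

def substrate_forces(pos):
    # F = -grad U for U = A * sum_k cos(2*pi*b_k/a0)
    f = np.zeros_like(pos)
    for th in thetas:
        b = pos[:, 0] * np.cos(th) - pos[:, 1] * np.sin(th) + a0 / 2
        s = A * (2 * np.pi / a0) * np.sin(2 * np.pi * b / a0)
        f[:, 0] += s * np.cos(th)
        f[:, 1] -= s * np.sin(th)
    return f

start = pos.copy()
steps = 4000
for _ in range(steps):
    f = yukawa_forces(pos) + substrate_forces(pos)
    f[:, 0] += f_d                              # uniform drive along x
    pos += (f / eta) * dt                       # overdamped Euler update

print("V =", (pos[:, 0] - start[:, 0]).mean() / (steps * dt))
```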
III. CONFIGURATIONS AND DYNAMICS FOR f = 3, 4, AND 5
We first consider the cases f = 3, 4, and 5. Earlier numerical studies addressed the statics and dynamics for f = 2 and 3 and showed that for strong triangular substrates, f = 2 produced a herringbone state that transitioned under an applied drive into an aligned ferromagnetic state where the particles move in 1D channels [27]. For weaker substrates a triangular lattice formed and the depinning was elastic, while for intermediate substrate strengths the depinning was plastic and the herringbone state depinned into a fluctuating state that reordered at higher drives. There was a strong increase in the critical depinning force between the elastic and plastic depinning regimes similar to the peak effect phenomenon found at the transition between elastic and plastic depinning for vortices in type-II superconductors. For f = 3 the pinned state was always triangular and for weak substrates the particles depinned into either a moving crystal or a moving smectic state. At these low fillings it was also shown that for weak substrates the critical depinning force F_c ∝ A^2, as expected for elastic depinning, while for strong substrates, F_c ∝ A, as expected for single particle depinning.
We now consider several features at f = 3 that were not reported in the earlier studies, including the velocity-force curves and an additional dynamical phase. For weak substrates, shown in Fig. 1(a,c), a triangular lattice forms with one dominant length scale that is the distance between the particles. For A = 3.0 in Fig. 1(b,d), each substrate minimum clearly captures three particles and the structure factor has the signature of two length scales. There is a well-defined hexagonal pattern of maxima in S(k) surrounding the origin in Fig. 1(d) that was not present in Fig. 1(c), which is associated with the longer length scale of the substrate minima locations. The hexagonal void arrangement at larger k values in Fig. 1(d) is associated with the smaller length scale of the trimer colloid arrangement within each substrate minimum. In general, as A increases further, the structure of the lattice remains the same as shown in Fig. 1(b,d) except that the trimers gradually reduce in size.
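Since S(k) is the main structural diagnostic used throughout, a minimal sketch of computing it from particle positions is given below; S(k) = |Σ_j exp(ik·R_j)|²/N_c is evaluated on a k-grid, and the perfect triangular lattice used as input is purely illustrative.

```python
import numpy as np

def structure_factor(pos, kmax=15.0, nk=128):
    # S(k) = |sum_j exp(i k . r_j)|^2 / N on an nk-by-nk grid of k vectors
    k = np.linspace(-kmax, kmax, nk)
    kx, ky = np.meshgrid(k, k)
    phase = np.exp(1j * (kx[..., None] * pos[:, 0] + ky[..., None] * pos[:, 1]))
    return np.abs(phase.sum(axis=-1)) ** 2 / len(pos)

# Illustrative test input: a 10x10 triangular lattice with spacing 1
i, j = np.meshgrid(np.arange(10), np.arange(10))
pos = np.stack([(i + 0.5 * (j % 2)).ravel(), (j * np.sqrt(3) / 2).ravel()], 1)
s_k = structure_factor(pos)
print("peak S(k):", s_k.max())   # sharp Bragg peaks signal crystalline order
```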
In Fig. 2 we plot the average particle velocity V versus the external drive F_D for A = 1, 2, 3, 4, 5, and 6. For the A = 4.0 curve we have labeled the different dynamical phases. The pinned phase (P) transitions into a rapidly fluctuating phase we term the random phase (R). The random phase is characterized by rapid mixing of the particles, as illustrated in Fig. 3(a) at F_D = 0.9, where the particle trajectories show significant excursions in the direction transverse to the drive and are continuously changing with time. In Fig. 3(b), the corresponding S(k) taken from a snapshot of the particle positions in the random phase shows features of an anisotropic liquid. There is triangular ordering in S(k) at smaller k values due to the underlying triangular substrate; however, the higher order peaks of Fig. 1(d) that indicated long range order in the pinned state are replaced in Fig. 3(b) by a ring structure characteristic of a liquid. We also find a larger number of peaks along k_y compared to k_x due to the anisotropy induced by the x-direction drive. For higher drives the system transitions into a state with no transverse diffusion, as illustrated in Fig. 3(c,d) for a sample with A = 4.0 at F_D = 1.5. In this state, which we term the moving smectic (MS) phase, the motion is confined to winding quasi-1D flows as shown in Fig. 3(c). The plot of S(k) in Fig. 3(d) indicates that the MS state has stronger partial ordering than the random phase but that there is still a tendency for the system to be more ordered along the k_y direction. The R to MS transition is correlated with a decrease in V with increasing F_D, producing a negative dV/dF_D signature known as negative differential conductivity (NDC). Simulations and experiments on superconducting vortices moving over periodic pinning sites have revealed NDC at transitions from random or turbulent flows to ordered 1D flows [48]. The NDC in the vortex system is much more pronounced than the NDC we observe in Fig. 2, and this is likely due to the difference in the type of substrate used in the two systems. For the vortex system, artificially fabricated pinning sites produce a short range muffin-tin potential that permits some vortices to sit completely outside of the pinning sites in the flat interstitial regions. In the random phase, a large number of vortices move through the interstitial areas and do not interact directly with the pins, but at the transition to 1D flow, these vortices suddenly fall into flowing channels that pass through the pinning sites, abruptly increasing the effective amount of drag exerted by the pinning sites on the vortices. In the sinusoidal triangular substrate we consider here, there are no interstitial regions and the moving particles always experience some drag from the pinning. Additionally, in the vortex system NDC was only associated with fillings just above f = 1, the first matching field [48], while in Fig. 2 we find no NDC until f ≥ 3. We note that the NDC for the triangular substrate is much more robust than NDC in the vortex system since it appears for a much wider range of fillings, including incommensurate fillings. In a recent vortex experiment [49] NDC did not occur until f > 2.5; however, this effect could not be reproduced in vortex simulations using only muffin-tin type pinning potentials. This could mean that for the particular system considered in Ref. [49], despite the fact that the pinning sites are localized, longer range interactions between the pins or distortions of the substrate may have caused the substrate to behave more like the sinusoidal type of substrate we consider here rather than like a muffin-tin potential.
For A > 4.0 we find the same three phases described for A = 4.0, with the transitions between phases shifting to higher F_D with increasing A. In Fig. 2 we identify two distinct phases observed for a weaker substrate with A = 2.0. Here the random phase is absent and the system depins elastically, with no sharp jump in V at depinning as found for A > 2.0 when the flow above depinning is plastic. Even though the depinning transition for A = 2.0 is elastic, we still find transitions between distinct dynamical phases that can be identified through signatures in the velocity-force curve. In Fig. 2 we mark the two elastic flow phases EF1 and EF2 that appear on either side of a discontinuity in dV/dF_D centered at F_D = 0.32. In both phases the particle lattice is triangular and has an S(k) similar to that shown in Fig. 1(c). The EF1 and EF2 phases also appear for A = 3.0 with the kink in V shifted up to higher F_D. In Fig. 4(a) we plot the particle trajectories in phase EF1 for the A = 2.0 system. The triangular particle lattice moves in a zig-zag pattern in order to avoid passing over the potential maxima of the triangular substrate. At higher drives the particles no longer avoid the potential maxima and enter phase EF2 where they move in the direction of drive, as illustrated in Fig. 4(b).
For f = 4, when the substrate is weak a triangular lattice of particles forms, as illustrated in Fig. 5(a,b) for a sample with A = 1.0. Weak substrates still affect the lattice by breaking rotational symmetry and causing the particle lattice to preferentially orient in a direction determined by the filling factor. At f = 4, one of the symmetry axes of the particle lattice is aligned with the x axis as in Fig. 5(a), while in Fig. 1(a) at f = 3 it was aligned with the y axis. As the substrate strength increases for f = 4, quadrimer arrangements of the particles form in each substrate minimum, and the orientation of neighboring quadrimers varies with A. For A = 8.0, Fig. 5(c) shows that all the quadrimers are aligned in the same direction. The corresponding S(k) in Fig. 5(d) shows sixfold peaks at small k from the triangular substrate, while at larger k a fourfold void structure appears due to the square ordering of each quadrimer. At A = 11 in Fig. 5(e), the individual quadrimers remain intact but their long range ferromagnetic orientational ordering is lost, producing a ring structure in S(k) as shown in Fig. 5(f). We have tried several different methods of preparing the initial configuration for f = 4 and A = 11.0 but have not found a ferromagnetically ordered state.
The orientational ordering of dimer and trimer states was previously shown to arise due to quadrupole or higher pole moment interactions between neighboring n-mers. For quadrimers, it is likely that the higher pole moments play a more important role. If the pole moment favors ferromagnetic alignment, then as A increases the size of the quadrimer decreases, reducing the pole moment responsible for the orientational ordering. Previous studies found that as the substrate strength increases, the temperature at which dimers and trimers lose their orientational ordering drops due to the decreasing effective pole moment. We note that the configurations in Fig. 5 are all obtained at T = 0.0. When we anneal the system from a high temperature and slowly decrease the temperature to zero, we still obtain the same states illustrated in Fig. 5, suggesting that the energy of the ferromagnetic state must be only slightly smaller than that of the rotationally disordered state. As A increases, the picture of rigid quadrimers begins to break down, as shown in Fig. 5(e) where the quadrimers become slightly distorted. This could further decrease the contributions of different pole moments to the orientational ordering or may even lead to competing interactions between neighboring quadrimers, resulting in a ground state with ordering produced by very long range interactions, as found in certain spin ice systems. For A > 11.0 we find states similar to those shown in Fig. 5(e,f) with increasing distortion in the square ordering, until for high enough A the distortion becomes strong enough to permit some particles to sit at the center of a substrate minimum. The dynamic phases for f = 4 are similar to those illustrated in Fig. 2 for the f = 3 system, with elastic depinning for low A crossing over to plastic depinning at higher A, followed by a dynamical reordering transition at higher F_D. For f = 5 at F_D = 0, as a function of A we find disordered phases interspersed among ordered phases. This is also the first filling at which the n-mer assumption breaks down and a shell structure of the particles in each substrate minimum begins to form, in contrast to the case of isolated trapped clusters of repulsively interacting particles, which first develop equilibrium shell structure at f = 6 [35,37,43,50-53] but have a metastable state with one particle in the center for f = 5 [54]. In Fig. 6(a) for A = 0.5 at f = 5, a triangular lattice containing a small tilt distortion forms. The smearing of the corresponding S(k) in Fig. 6(b) indicates that for weak substrates at this filling, the particle lattice is not commensurate with the substrate. At A = 1.75, shown in Fig. 6(c,d), the lattice is disordered as indicated by the ring pattern in S(k). When we vary the initialization protocol, we always observe a disordered lattice for 0.9 ≤ A ≤ 2.0. For 2.0 < A < 7.0 the system orders again as shown in Fig. 6(e,f) for A = 5.0. Each substrate minimum contains what we term a jack pattern consisting of one particle located at the center of the minimum surrounded by four particles in a square arrangement. As indicated by the dashed lines in Fig. 6(e), the jacks are tilted at +20° or −20° to the x axis in every other row, producing a herringbone structure. The ordering is not complete as there are local regions where the jack structure breaks down, producing some smearing in S(k) as plotted in Fig. 6(f).
The structure can be viewed as the beginning of a shell ordering where, in each substrate minimum, the first shell contains a single particle and the next shell contains four particles. For A > 7.0 the interactions between particles in neighboring substrate minima are reduced and the pentagon ground state structure expected for particles in an isolated trap emerges. The pentagons exhibit ferromagnetic order and are all aligned in the same direction, as illustrated in Fig. 6(g,h) for A = 11.0. For increasing A the pentagons become smaller; however, we observe no further change in the structure.
We can relate the different pinned structures at f = 5 to features in the velocity-force curves and to dynamic phases. For A < 1.5 there is an elastic depinning into a moving triangular lattice. For 1.5 ≤ A < 2.4 the disordered phase depins into a fluctuating random phase similar to that observed for f = 3, and at higher drives the system can reorder into a moving crystal (MC) phase. In Fig. 7(a) we plot V versus F_D for a sample with A = 2.15. The random flow phase is characterized by large fluctuations, and is followed by a sharp transition near F_D = 0.034 into an MC phase with small fluctuations. The transition between these two phases can also be detected by analyzing the noise fluctuations of the velocity response. In the random phase the fluctuations are characterized by a broad band noise signal with 1/f^α characteristics where α = 1.5 to 2.0, while in the MC phase there is a narrow band signal with a characteristic frequency that increases with increasing F_D, similar to what has been observed for dynamically reordered phases of superconducting vortices moving over periodic substrates [48]. For 2.5 < A < 3.3 we find no dynamical reordering and the particle arrangement remains disordered up to the highest drives we considered; however, it is possible that a transition into a moving ordered state could occur at very high drives. For 3.1 < A < 6.1, where the pinned system is in the ordered jack state, the depinning is elastic and the particles depin into the ordered winding channel flow phase (C) illustrated in Fig. 8(a). As F_D increases, there is a crossover to a random phase followed by another transition at high drives into a moving smectic phase (MS) of the type shown in Fig. 8(b). The velocity signatures associated with these transitions appear in Fig. 7(b) for a sample with A = 4.1. The C phase is associated with small velocity fluctuations and is followed first by the strongly fluctuating R phase and then by the MS phase at higher F_D. For A ≥ 6.0 the C phase is lost and the sample depins from the ordered pentagon state into a random flow state, as illustrated in Fig. 7(c) for a sample with A = 7.2. This is followed by the formation of a moving smectic state at higher F_D. As A increases, the extent of both the pinned region and the random fluctuating state increases. By conducting a series of simulations, we map the dynamic phase diagram shown in Fig. 9(a). The disordered R phase separates the moving crystal and moving smectic phases. There is also a domelike region where the winding channel C phase occurs. In Fig. 9(b) we plot the depinning line on a log-log scale with dashed lines marking the regions where the pinned phases illustrated in Fig. 6 occur. For A < 0.9, the depinning force F_c monotonically increases with increasing A. There is a small decrease in F_c at the onset of the pinned disordered or pinned glass (PG) phase, followed by a plateau in F_c. Near A = 3.0, F_c begins to increase rapidly with increasing A in the pinned jack state, while in the pinned pentagon state F_c still increases with increasing A but with a reduced slope. The behavior of F_c near the onset of the pinned disordered phase is somewhat unusual since it indicates that even though the substrate strength is increasing, the depinning force does not increase.
When the system is disordered, there are numerous dislocations present in the particle structure that produce weak spots which flow first at the depinning transition. In contrast, in the depinning of a crystalline state, there are no weak spots since the lattice structure is the same everywhere. This behavior is the opposite of the so-called peak effect phenomenon found for vortices in type-II superconductors, where a sudden increase in F_c occurs when the vortex lattice becomes disordered. The peak effect is generally believed to arise due to the softening of the vortex lattice when dislocations are present. In the softer lattice, the vortices can more freely move to adjust to a random pinning potential and maximize the pinning force. In our system, the pinning potential is not random, so the crystalline states are more strongly pinned than the disordered states. This has been demonstrated clearly in studies of commensurate-incommensurate transitions for both vortices and colloids in periodic substrates, where crystalline states form at integer values of f. Near but not at commensuration, numerous defects appear in the commensurate lattice, and these defects cause a reduction in F_c. As a result, F_c passes through a series of peaks at integer values of f as the filling fraction is varied. In the system we are considering here, the f = 5 state is unusual since even though the system is at an integer filling, we find a regime where the pinned state is noncrystalline. This is in contrast to the lower integer fillings f = 1, 2, 3, and 4, where all the pinned states are ordered. This disordering effect and the plateau or drop in F_c at intermediate substrate strengths in the disordered pinned regions is a general feature of f ≥ 5 systems at integer fillings, and both features become more prominent for higher fillings. It may appear from Fig. 9(b) that the plateau in F_c occurs in the disordered region and not in the pinned jack state; however, the plateau persists into a window of the pinned jack state since the PJ state contains some defects.
In Fig. 9(c) we plot F_c vs A for samples with f = 1, 3, 4, and 5. The dynamics and configurations of the f = 2 system were described in detail in previous work [27]. For f = 1 we find that F_c ∝ A, as indicated by the upper dashed line. This is expected in the limit of single particle depinning and indicates that particle-particle interactions are unimportant at depinning for this filling.
For the higher fillings f > 1, we instead find F_c ∝ A^2, as indicated by the lower dashed line. This is characteristic of collective depinning transitions, where the interactions among the particles within each minimum play an important role in the depinning process. In the limit of very strong substrates, the depinning generally occurs when one particle is pushed near the saddle point of the substrate by the other particles in the potential minimum. This single particle suddenly escapes and triggers a cascade of escape events in the rest of the system. At intermediate substrate strengths, the higher order fillings all generally show a change in the slope of F_c versus A associated with a change in the n-mer structure.
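The single-particle limit F_c ∝ A can be illustrated with a toy 1D washboard calculation: an overdamped particle on a sinusoidal substrate of amplitude A depins once the drive exceeds the maximum substrate force, 2πA/a_0. This sketch is only a stand-in for the full 2D many-particle measurement (which is what produces the collective F_c ∝ A^2 regime), and all parameters are illustrative.

```python
import numpy as np

a0, eta, dt = 1.0, 1.0, 0.01

def mean_velocities(drives, A, steps=20000):
    # One overdamped particle per drive value, starting in a substrate minimum
    x = np.zeros_like(drives)
    for _ in range(steps):
        f_sub = -A * (2 * np.pi / a0) * np.sin(2 * np.pi * x / a0)
        x += (f_sub + drives) * dt / eta
    return x / (steps * dt)

for A in (0.25, 0.5, 1.0):
    drives = np.arange(0.0, 8.0 * A, 0.05 * A)
    v = mean_velocities(drives, A)
    f_c = drives[np.argmax(v > 0.01)]          # first drive with net motion
    print(f"A = {A}: F_c ~ {f_c:.2f} (max substrate force {2*np.pi*A/a0:.2f})")
```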
IV. CONFIGURATIONS AND DYNAMICS FOR 6 ≤ f ≤ 10
In Fig. 10(a) we plot F_c versus A for a sample with f = 6. A large jump in F_c occurs at A = 2.5 at the transition from a pinned triangular lattice, illustrated in Fig. 11(a), to a disordered pinned state, illustrated in Fig. 11(b). For 2.5 < A < 5.5, the system becomes disordered, as shown in Fig. 11(b) for A = 2.75, and has a ringlike S(k) signature, shown in Fig. 11(g). In the disordered region the system depins into a plastic random flow state, while for A < 2.5 the system depins elastically into an MC state. At this filling, F_c is depressed in the triangular state since it is not possible for all the particles to simultaneously be in a triangular lattice and sit at substrate minima locations, so that some particles are instead located on substrate maxima. When the substrate strength increases enough, the particles are forced off of the substrate maxima and the system disorders, allowing more particles to be located closer to substrate minima. For 5.5 < A < 15, a shell structure emerges. The system forms a pentagonal/triangular lattice composite in which one particle sits at each substrate minimum and is surrounded by five particles in a pentagon arrangement, as illustrated in Fig. 11(c) for A = 6.0. In this case the ordering is not ferromagnetic and the tilt of the pentagons varies from site to site; however, for A > 8.0 the pentagon-monomer states become aligned, as shown in Fig. 11(d) for A = 10. An applied drive can induce a polarization of the pentagons in the driving direction, as illustrated in Fig. 11(e) for A = 12.0 under an x-direction drive just before depinning. For A > 15 a ring state forms in which the center of the substrate minimum is empty and the particles form a hexagon. The rings are not all aligned; instead, two orientations coexist in the sample as shown in Fig. 11(f). This produces twelvefold modulations in the structure factor as illustrated in Fig. 11(h). As A increases further, the ring structure persists but decreases in diameter. In general we find ring structures rather than shell structures for f ≥ 6 at large A. The different phases are highlighted in Fig. 10(a). Figure 10(b) shows that a similar set of pinned phases forms in a sample with f = 7. There is still a triangular lattice for small A; however, at A = 3.75 when the system enters the disordered phase, F_c shows a small decrease followed by a plateau up to the end of the disordered phase at A = 7.0. For f = 7 the triangular lattice is more commensurate with the substrate and thus better pinned than the triangular lattice at low A in the f = 6 system. In the disordered phase, illustrated in Fig. 12(a) for A = 4.0, S(k) contains a ringlike structure similar to that in Fig. 11(g). For 7.0 < A < 16 the system forms a shell structure with an outer pentagon and two dimerized inner particles, as shown in Fig. 12(b) for A = 12.0. The pentagon/dimer structures are only locally aligned, leading to smearing in the corresponding S(k). Figure 12(c) illustrates the configuration at A = 20.0 where an array of hexagons appears with a monomer at the center of each hexagon. The hexagons have one of two possible orientations, similar to the f = 6 hexagon state shown in Fig. 11(f). For A = 25, shown in Fig. 12(d), the shell ordering is replaced by a ring structure where each ring has seven particles and the ring radius decreases with increasing A. As also found for the f = 6 case, it is possible for an external drive to orient the structures close to depinning.
The dynamical phases for f = 6 and f = 7 are very similar to those observed at f = 5. The system depins elastically at small A in the triangular ordering regime and depins plastically in the disordered pinned regime. A disordered flow regime extends to divergingly large drives for regions of A where a disordered pinned phase is present. For values of A above the disordered pinned phase, the strongly driven system dynamically orders into a moving smectic phase, and the driving force at which this transition occurs increases with increasing A. For the higher fillings of f = 6 and f = 7, there are some additional features in the V − F_D curves at high A that did not appear at f = 5. For example, in the fluctuating phase, the system shows a coexistence regime with some particles that remain pinned while other particles move along 1D channels or troughs without rearranging the pinned particles. This feature becomes more noticeable at higher f. For f = 8 at small A, shown in Fig. 13(a), the particles form a triangular lattice that is slightly anisotropic and rotated by 19.1° with respect to the underlying substrate. This orientation permits as many particles as possible to sit close to substrate minima, and as indicated by the dashed line in Fig. 13(a), along one particle lattice symmetry direction, there are eight particles filling the space between consecutive substrate minima. This arrangement differs from the disordered lattice found at the eighth matching field in a sample with a muffin-tin potential [55]. As A increases, each substrate minimum traps two particles that dimerize and are surrounded by six other particles, as illustrated in Fig. 13(b). The dimers are rotationally ordered, and the overall particle lattice is rotated by 40.89° with respect to the substrate, as indicated by the dashed line in Fig. 13(b). Orientationally ordered dimers did not form for f = 5, 6, or 7. For A > 17.0 the dimer state is lost and replaced by a monomer at each substrate minimum surrounded by an outer shell of seven particles, as shown in Fig. 13(c) for A = 20.0, and for A > 25.0 a ring state with eight particles per ring occurs, as illustrated in Fig. 13(d) for A = 26.0. The lack of disorder for f = 8 is clearly shown in the dynamical phase diagram in Fig. 14. There are no disordered pinned phases, and the random flow phase (R) does not extend out to large F_D but is instead bounded on the high F_D side by the MS flow state. For A ≤ 2.0, the triangular lattice depins elastically in the direction of drive into an MC state. There is a sharp increase in F_c at the onset of the dimerized state, and the MC flow is replaced by MS flow at higher drives. For 4 ≤ A < 6.0, the system depins transverse to the driving direction along one of the symmetry directions of the particle lattice at −40.89° from the x axis, as illustrated in the inset of Fig. 15. As F_D increases, there is a sharp transition out of the transverse flow regime into a random flow phase with an average velocity oriented in the driving direction, as shown in the main panel of Fig. 15 where we plot V_x and V_y versus F_D. In the transverse flow regime for 0.012 < F_D < 0.0175, V_y is negative and V_x is positive, and both |V_x| and |V_y| increase linearly with increasing drive until, at F_D = 0.0175, V_y drops to zero and V_x jumps up at the transition to the random flow regime.
Note that the orientation of the pinned lattice is degenerate, so for samples prepared with different random seeds, the particles would flow along −40.89° in half the cases and along +40.89° in the other half of the cases. The transverse flow regime forms an intermediate state between the pinned and random flow regimes. Flows where the particles do not move in the direction of drive but at an angle to the drive have previously been reported for colloids on triangular substrates at a filling of f = 2.0 [56]. In that study, the drive was rotated 90° relative to the substrate symmetry direction from the case considered here. Dimerization of all the particles in the system created an orientational degree of freedom that determined the flow direction of the colloids just above depinning. Numerical studies of vortices in type-II superconductors with kagome and honeycomb pinning arrays also produced similar flows at angles to the driving direction in cases where two vortices were located in the center of the kagome or honeycomb plaquettes, forming dimer states [57].
In samples with f = 9, triangular configurations appear at low A, as shown in Fig. 16(a) for A = 2.0. This is the same configuration found at the ninth matching field for vortices in triangular pinning arrays in Ref. [55]. As A increases, the particles align with the x axis in an unusual new structure illustrated in Fig. 16(b) for A = 7.0. The system breaks into dimer and linear trimer states. Each substrate minimum contains a linear trimer flanked by two dimers, all oriented in the x direction, while between adjacent minima there are elongated dimers oriented in the y direction. For A = 15.0, Fig. 16(c) shows that a superlattice of aligned triangular trimers appears at each substrate minimum, surrounded by an outer shell of six particles forming a hexagon. As A increases further, we find a transition to a state with a monomer at each substrate minimum surrounded by an eight-particle shell, followed by a ring state at high A (not shown). For the parameters we considered, we never observed a state with two dimerized particles at the centers of the substrate minima surrounded by seven outer shell particles. Either this state appears only for an extremely narrow range of A, or it is simply too high in energy to form at all. In general we find that at the higher fillings, certain shell combinations are not observed. These are often, but not always, associated with incommensurate structures such as odd-even shells like the dimer-heptamer structure for f = 9. For f = 10 at A = 1.0, Fig. 16(d) indicates that a partially ordered triangular lattice appears. This is a floating disordered solid that is only very weakly pinned by the substrate. At A = 1.5, the system transitions to a pinned disordered solid or pinned glass state (not shown), where the last vestiges of triangular ordering are lost. For A ≈ 7, the particles begin to develop an incipient quadrimer structure around each substrate minimum. The quadrimers are still very extended and the lattice is only partially localized by the substrate minima. At A = 9.0, the quadrimers become well localized within the minima and the state illustrated in Fig. 16(e) for A = 12.0 emerges, with aligned quadrimers surrounded by hexagons. At A = 15.0, aligned triangular trimers sit at each substrate minimum as illustrated in Fig. 16(f). This is similar to the state shown in Fig. 16(c) except each trimer is surrounded by seven particles in the outer ring instead of six. A similar set of orderings as those found for f = 9 occurs for f = 10 as A increases until a ring state forms at high A. In Fig. 16(g) we plot F_c versus A for f = 9 and f = 10. For f = 9, the triangular lattice in Fig. 16(a) transitions into the aligned linear trimer state in Fig. 16(b) at A = 7.0, and there is a kink in F_c associated with this transition. For f = 10, the low-A, partially ordered lattice illustrated in Fig. 16(d) has a very low F_c since it is still floating above the substrate. A rapid increase in F_c at A = 1.5 is correlated with the transition to a pinned disordered solid. F_c gradually increases as the quadrimer structure gradually organizes, until at A = 9, when the quadrimers become well localized within the substrate maxima into the state shown in Fig. 16(e), F_c kinks upward again. Near A = 20 there is another upward kink in F_c at the formation of the ring state; the rings depin as if they have no orientational degree of freedom.
V. HIGHER ORDER FILLINGS AND DISCUSSION
For higher order fillings f > 10, we generally observe the same features described for the f > 5 fillings, so we believe that we have identified the generic features that arise in this type of system. For higher fillings it is likely that instead of only two shells of particles, three or more particle shell structures will form. The orientational ordering of particles trapped in adjacent substrate minima is also likely to become more fragile as the number of shells increases and the outer shell becomes more and more circular. Thermal effects on multiple shell structures are beyond the scope of this work; however, we expect that the shell structures would have multiple disordering transitions as a function of temperature that are distinct from the transitions observed at lower fillings in colloidal molecular crystals [16,24]. Another factor that could influence the ordering is the shape of the substrate minima. Here we considered a sinusoidal substrate; however, with optical trapping, other forms of substrates could be created which may favor or suppress the formation of certain shell structures.
In Fig. 17 we show some representative features of higher filling states. At f = 11 and A = 14.0 in Fig. 17(a), an aligned pentagonal inner ring forms at each substrate minimum with an outer hexagonal ring. For f = 12, a triangular lattice appears for A = 1.0 as illustrated in Fig. 17(b), an oriented structure with four particles in the inner shell and nine in the outer shell forms at A = 10.0 as shown in Fig. 17(c), and at A = 20.0 there are five particles in the inner shell and seven in the outer shell as seen in Fig. 17(d).
VI. SUMMARY
We have examined the static configurations and driven dynamics of particles interacting with a repulsive Yukawa potential, such as colloids, in the presence of a triangular substrate for varied integer fillings of up to twelve particles per substrate minimum. Under static conditions, we observe a rich variety of crystalline structures for fillings that have not been previously reported for this model. We show that for fillings f > 4, shell structures can form in the substrate minima with well-defined numbers of particles in the inner and outer shells. These shell clusters can exhibit orientational ordering across the sample. As we vary the substrate strength for a fixed filling level, we find that a series of different particle lattices occur with different numbers of particles in the shells and different amounts of orientational ordering. In the limit of strong substrates we observe a ring lattice. Several fillings exhibit disordered particle configurations; however, these fillings can show a reentrant ordering behavior as the substrate strength is increased. Under an applied drive, the different particle orderings alter the critical depinning force, producing either a strong increase in the depinning force or a plateau and even decrease in the depinning force at structural transitions induced by increasing the substrate strength. We also find a remarkably rich variety of dynamical phases that can be correlated with the structure of the static pinned configuration. As the drive increases, we find moving crystalline states and moving smectic states as well as disordered flow states. Our results are relevant to colloids on periodic substrates, vortices in nanostructured superconductors, and other commensurate-incommensurate systems, including aspects of friction and nonequilibrium transport.
VII. ACKNOWLEDGMENTS
This work was carried out under the auspices of the NNSA of the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396.
Moderating effect of safety culture on the association between work schedule and driving performance using the theory of situation awareness
The adverse effects of work schedule on driving performance are relatively common. Therefore, it is necessary to fully understand an organisation's safety culture to improve driver performance in order to avoid road crashes. This study aims to investigate the moderating role of safety culture in the relationship between driver work schedules and driving performance. The study developed a conceptual framework based on a review of existing studies, supported by situation awareness theory, which explains the model's relationships and underpins the study's hypotheses. Three hundred and four questionnaires were collected from oil and gas truck drivers, and structural equation modelling (SEM) was applied to test the study hypotheses. Based on the findings, the outer loading for all items was above the threshold of 0.70, except for two items, which were deleted. The latent exogenous variables of safety culture and work schedule explained 59.1% of the variance in driving performance. Both work schedule and safety culture significantly impact driving performance. In addition, the results show that safety culture moderates the unfavourable impact of work schedule on driving performance, with an effect size of 23%. Therefore, this study provides strong evidence that safety culture acts as a critical moderator in reducing the negative impact of work schedule on driving performance in the energy transportation sector. Drivers with a high safety culture can manage and reduce the effect of work schedule disorder on driving performance through their safety attitudes and patterns, compared with drivers with a low safety culture. Consequently, improvement in driving performance will be noticed among drivers with a high awareness of safety culture.
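As a simplified illustration of the moderation test, the sketch below regresses driving performance on work schedule, safety culture, and their product (interaction) term using synthetic data; a significant interaction coefficient indicates moderation. The study itself used PLS-SEM on survey constructs, so this ordinary-least-squares version is only a conceptual stand-in, and all variable names and values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 304
schedule = rng.normal(size=n)        # work-schedule disorder (standardized)
culture = rng.normal(size=n)         # safety culture (standardized)
# Synthetic outcome: schedule hurts performance less when culture is high
performance = (-0.5 * schedule + 0.4 * culture
               + 0.23 * schedule * culture
               + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([schedule, culture, schedule * culture]))
model = sm.OLS(performance, X).fit()
print(model.params)   # a significant interaction coefficient implies moderation
```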
Introduction
Road transport firms are made up of multiple drivers who work outside the workplace's physical boundaries, which restricts the scope of supervisory control [1,2]. Drivers usually take responsibility for their duties and must decide how to face any road danger [3]. Consequently, drivers gain skills and abilities that have the potential to influence the organisation's overall safety. Therefore, all workers must be linked with the rest of the organisation in order to build a safety culture across all positions in the firm [4]. According to Reason and Hobbs [5], enhancing a safety culture requires all of an organisation's people to be involved in the procedure and committed to improving safety. Thus, the management should prioritise safety to build a high safety culture. According to Mooren, Grzebieta [6], commitment of the management toward safety can reduce the impact of risks at all levels. Moreover, an inadequate level of management commitment to safety will decrease employees' commitment to safety measures at work, which in turn will increase the possibility of work accidents [7]. At the same time, research by Naevestad, Hesjevoll [8] highlights that management commitment toward safety could serve as a leading indicator for road safety. Naevestad, Blom [9] believed that a high safety culture within an organization could be considered one of the critical factors that reduce such accidents and improve overall road safety.
Therefore, to build a solid safety culture, an organisation must consider certain characteristics such as management commitment, good communication, and improved employee safety patterns [10]. In particular, Reason and Hobbs [5] underline that employees must be aware of the dangers in their work and anticipate equipment and human mistakes in order to take the proper action to avoid these dangers. However, such awareness may be difficult to achieve when most of an organisation's workers are outside the workplace [5]. Workplace safety can be put at risk by several factors, including human, technological, organizational, and environmental ones. Based on situation awareness theory, managers and personnel in road transport firms must be aware of risk issues within the workplace, on the road, at loading/unloading locations, and outside of work time [11]. A willingness to disclose errors and near-misses is central to safety culture. Indeed, because most road transportation tasks are performed outside the company, people in organizations must trust one another to reveal errors and near-misses so that potential risks can be controlled. If the organization understands when and where risk occurs, it may adjust work schedules and perhaps avoid the areas and times at which hazards arise [5].
Typically, work in road transportation is organised in shifts. In many companies, an irregular work schedule poses a safety risk because it leads to poor driving performance. Long-haul truck drivers (oil and gas) are one example of a transportation profession in which shift work is common [12,13]. Previous studies suggest that individuals with non-standard work schedules may be unable to satisfy their sleep needs, resulting in impaired driving performance [14]. A survey of a random sample of European truck drivers [15] noted that drivers suffer from low performance because their work schedule prevents them from meeting their sleep needs. Irregular shifts are therefore rarely avoidable among those who work in road transportation. However, the negative impact of an irregular work schedule can be mitigated by effective strategies such as scheduling work and rest times to allow adequate sleep recovery, napping, and ingesting alertness-enhancing compounds such as caffeine. None of these strategies can succeed, however, if drivers' safety culture is low [16,17]. Heavy truck drivers are among the workers most exposed to occupational injury from road accidents [6,18]. In many countries, heavy trucks are involved in severe and fatal road crashes [6]. In addition, truck drivers work in an environment that leads to poor performance: they may be away from home for days, work long hours, have irregular work schedules, and be under time pressure to deliver goods on time [19]; they are often forced to wait for admission to a loading terminal and are frequently stuck in road traffic [20,21]; and some companies pay by miles driven rather than hours worked [22]. Furthermore, some drivers are required to load and unload freight alone [19,23,24,25].
Because of work schedules, circadian rhythm is important in the domain of driver fatigue. Truck drivers frequently work when they should be sleeping and sleep when they should be awake [26]. According to Mohamed, Mohd-Yusoff [27], most fatal incidents in Malaysia occur in the early morning hours and result in many injuries. Fatigue and stress are blamed for some of these accidents [26,28]. Although several studies have examined the connection between a driver's work schedule and their performance behind the wheel in different sectors [29,30,31,32], there is a paucity of research on how safety culture can mitigate this unfavourable relationship between work schedule and driving performance. Previous studies in the oil and gas transportation sector have addressed many important factors, such as exhaustion-related psychological risk factors [33], psychological well-being and perceived stress [34,35], and fatigue assessment by the psychomotor vigilance test [36]. However, the role of safety culture in regulating the balance between work schedule and driving performance has been overlooked. Consequently, the poor driving performance of Malaysia's oil and gas truck drivers due to irregular work schedules, and the extent to which a high level of safety culture may minimise this negative impact, must be addressed. This study intends to compensate for the lack of empirical evidence on the role of safety culture in moderating the association between work schedule and driving performance. Understanding the significance of improving driver safety culture will enable drivers to balance their work schedule and their abilities in order to avoid underperformance. Driver performance evaluation enhances the quality of service and reduces the likelihood of accidents.
Research motivation and objectives
Heavy vehicle accidents are very dangerous, and their effects are significant, especially accidents involving oil and gas tankers. The consequences of these accidents affect many road users, society, the health sector, the country's economy, and the environment. Globally, traffic accidents have been estimated to cause about 1.2 million fatalities, 20-50 million non-fatal injuries, and about $518 billion in material damage each year. Road accidents are especially common in low- and middle-income nations [37]. The transportation of hazardous materials by heavy vehicles raises particular safety concerns because of the potential for fires, explosions, groundwater contamination, and toxic effects on human health if hazardous materials are spilt inadvertently or in road accidents [38].
Moreover, road traffic accidents have a psychological impact on both the direct participants and their families. Several nations have found that one benefit of minimizing road traffic accidents is a reduction in the cost of social support [39]. This was a strong motivation for us to study this problem and to provide recommendations that will help reduce these accidents in the future. This study therefore seeks to achieve several objectives: first, to examine the impact of work schedule on driving performance among oil and gas tanker drivers; second, to determine the effect of safety culture on driving performance; and third, to investigate the moderating role of safety culture in the negative relationship between work schedule and driving performance.
Literature review
The current study employed safety culture as a moderator to dampen the negative relationship between the study variables. A growing body of literature has treated safety culture as a moderator between different variables. For example, Wamid and Youssef [40] examined the moderating impact of safety culture on the relationship between safety climate, safety commitment, and safety behaviour. The study involved 250 employees in an oil products distribution company, and the results indicated that safety culture significantly moderates the relationships between the study variables.
Similarly, Ahmed and Saba Waqas [41] conducted a study in Pakistan investigating the effect of safety culture on the association between occupational injuries and turnover intention among 111 employees in safety-sensitive fields. The results confirmed the effect of occupational injuries on turnover intention; however, safety culture did not reduce turnover in Pakistan because of the lack of a prevailing safety environment, a finding the authors attributed to cultural differences. The study underlines how important it is to promote and preserve workers' physical, mental, and social well-being in a workplace health and safety system. Trinh and Feng [42] surveyed 78 construction projects in Vietnam to investigate whether a resilient safety culture mitigates the relationship between project complexity and safety performance. According to the findings, project complexity has a detrimental influence on safety performance, but this influence may be weakened when the level of safety culture is high.
Safety culture as a moderator
The current study treats safety culture as a moderating variable in the association between work schedule and driving performance for the following reasons. First, the studies reviewed above provide evidence that safety culture has been utilized as a moderator between several variables in different fields: (i) investigating the role of culture in the relationships between overall and life-facet satisfaction among college students [43]; (ii) investigating how safety culture influences the connection between safety climate, safety commitment and safe behaviour among oil product distributors' workers [40]; (iii) examining the effect of safety culture in mitigating the association between workplace accidents and turnover intention in safety-sensitive domains [41]; and (iv) examining, in construction projects, how a strong safety culture affects the relationship between project complexity and safety outcomes [42]. It can therefore be concluded that the literature on organizational safety strategies suggests that safety culture has the potential to moderate the relationships among an organization's safety strategies to avoid occupational injuries [40,41,42]. Nevertheless, these studies have not examined in detail how safety culture affects the association between work schedule and driving performance, which is especially relevant in energy transportation firms. In the present research, safety culture is expected to help reduce the unfavourable relationship between the study variables.
The second point is that, according to situation awareness theory [44], crashes can be avoided by passing through three phases of situation awareness: (i) identifying the aspects of the situation, (ii) understanding the present situation, and (iii) predicting future states in order to take appropriate action. For drivers with a strong safety culture, the adverse influences of the work schedule on driving performance are more likely to be eliminated or reduced.
Accordingly, Baron and Kenny [45] recommended that weak or inconsistent results can be revitalized by introducing a moderating variable into the relationship between two latent variables. In the connection between a driver's performance and their work schedule, drivers with a high safety culture can manage and reduce the effect of work schedule disorder on driving performance through their safety attitudes and patterns, compared with drivers with a low safety culture. Consequently, this study developed the conceptual framework shown in Figure 1 and proposed the following research hypotheses:
H1. The work schedule has a significant influence on driving performance.
H2. Driving performance is significantly affected by a safety culture.
H3. The negative correlation between the schedule of work and driving performance diminishes as the level of safety culture increases among drivers.
Background theory (situation awareness theory)
According to Gilson [46], Oswald Boelke established the notion of situational awareness during World War I; he recognized the necessity of understanding the adversary before the enemy gained similar knowledge, and created ways to attain this. The separation between the human operator's perception of the system state and the actual system status is central to the definition of situational awareness [47]. The first impetus for research and development came from the aviation sector, where pilots and air traffic controllers are under intense pressure to improve their situational awareness [48]. Control measures based on incorrect situational awareness may exacerbate an adverse occurrence; such circumstances led up to the Chornobyl disaster [48].
The conceptual framework of this study has been developed based on situation awareness theory. According to Endsley (1995), understanding the elements of a situation in time to execute the proper action is essential for risk and safety assessment. Employees should be able to recognize the environment in which they work so that they can anticipate what will happen next and produce the appropriate action. This theory suggests that accidents can be prevented through three stages of situation awareness: (i) identifying the elements of the situation, (ii) understanding the current circumstance, and (iii) predicting future states to produce actions [44]. Based on this theory, safety culture characterizes drivers' skills and behaviour patterns in responding to road safety risks during their duty or outside the workplace. Hence, for drivers with a high safety culture, the negative influences of the work schedule on performance are more likely to be eliminated or reduced. This theory thus reflects how driving performance is adversely affected by the driver's work schedule and by the driver's ability to deal with safety risks during the driving duty, before it, and after it. The improvement in driving performance will therefore be noticed among drivers with a high ability to manage safety risks, as measured by their safety culture level. If a driver has a high level of safety culture, he will try to avoid anything that could impair his performance. For example, suppose the driver realizes that he has a trip early the following day (identifying and understanding the situation). In that case, he will go to bed early, because going to bed late will impair his performance the next day and thereby raise the possibility of a road accident.
Research method
The present study is based on a quantitative research design, since quantitative data were collected on the study variables using a cross-sectional questionnaire design. As the study aims to verify the proposed relationships, a quantitative approach was deemed the most appropriate research method. Since this method is designed to test hypotheses, quantitative statistical tests and analyses are used to confirm or disconfirm the hypotheses developed. A quantitative approach enables statistical analysis to ensure that the data collected are reliable and valid [49]. Furthermore, the quantitative design allows the researcher to draw conclusions that are representative of the whole population at a lower cost than collecting data from the entire population [50].
Survey development
A survey questionnaire was developed based on the literature analysis to study the moderating influence of safety culture on the relationship between oil tanker drivers' work schedules and their driving performance in Malaysia. The current study used a five-point Likert scale ranging from 1 (Never) to 5 (Always) with 41 items on the questionnaire [51,52,53,54,55,56]. The Likert scale is widely used since it is one of the most trustworthy methods of measuring views, perceptions, and behaviours [57]. Table 1 depicts the organization of the research variables. The items of the research questionnaire are presented in the supplementary file.
Data collection and sampling
Three hundred and fifty-seven surveys were delivered to drivers who transport energy products across most of Malaysia's regions, using a random selection approach. The questionnaires were collected in person by visiting oil and gas company branches during 2019-2020. Invalid questionnaires were then eliminated, leaving 304 valid questionnaires and yielding an 85.9 per cent response rate. According to Maccallum and Bryant [59], this sample size meets the SEM sample size criteria; the minimal sample size was determined using GPower software, calculated with a power of 1 − β = 0.80. The result indicated a minimum sample of N = 279, which is less than this study's sample size (N = 304).
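To make the power analysis reproducible, the sketch below shows a G*Power-style minimum sample size computation for a multiple regression F test; the effect size and predictor count are illustrative assumptions, not settings reported by this study.

```python
from scipy import stats

def min_sample_size(f2=0.05, n_predictors=3, alpha=0.05, target_power=0.80):
    """Smallest N at which the overall regression F test reaches target power.

    f2 is Cohen's effect size; n_predictors counts the exogenous terms
    (here assumed to be work schedule, safety culture, and their
    interaction). These inputs are illustrative, not the study's settings.
    """
    for n in range(n_predictors + 3, 10000):
        df1 = n_predictors                # numerator degrees of freedom
        df2 = n - n_predictors - 1        # denominator degrees of freedom
        lam = f2 * n                      # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        power = 1 - stats.ncf.cdf(f_crit, df1, df2, lam)
        if power >= target_power:
            return n
    raise ValueError("no N below 10000 reaches the target power")

print(min_sample_size())
```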
In the descriptive characteristics of the study sample, most drivers (99.7%) were male, since females tend to avoid demanding work such as driving heavy vehicles. Regarding age, the largest group of drivers (48.2%) was between 30 and 39 years old. Most drivers (93.8%) were ethnically Malay. Finally, regarding education level, most drivers (83.7%) held secondary education.
Ethics
Ethical approval for this study was obtained through the Department of Management & Humanities, Universiti Teknologi PETRONAS. As instructed by the department, a brief introduction was added at the beginning of the survey to inform participants of the study's objective and to ask them to participate as volunteers. Correspondingly, we obtained informed consent from all subjects and guaranteed them confidentiality and anonymity.
Structural equation modelling (SEM)
SEM is a multivariate analysis technique used to assess the validity of hypotheses by gathering samples pertaining to a theory or idea and then testing it [60,61]. The two main approaches to SEM are partial least squares structural equation modelling (PLS-SEM) and covariance-based structural equation modelling (CB-SEM) [62,63]. In describing the relationships between the study's indicators and constructs, PLS-SEM is more adaptable than CB-SEM [64]. PLS-SEM performs well with either small or large samples, provided the minimum sample size condition is met, allowing variables with intricate effects on particular model components to be modelled [65]. The SEM method provides the following benefits: (i) SEM can be used to precisely evaluate complex model hypotheses based on many observations [66]; (ii) SEM performs well, particularly for complicated models with several indicators and latent variables [66]. Thus, SEM is now widely employed in various fields of social science investigation, such as hospitality management [67], the construction industry [68,69], the petroleum industry [70], commercial aviation [71], education [72], and safety and health [73,74].
Consequently, the PLS-SEM approach was utilized in this work to test the three proposed hypotheses. In addition, the Smart-PLS v3.2.1 software was used to examine the path analysis and the fit of the measurement model.
Measurement model
According to Hair Jr, Hult [64], evaluating the measurement model entails estimating indicator reliability measures such as composite reliability, average variance extracted (AVE), and discriminant validity. In general, indicators with outer loadings between 0.40 and 0.70 should be deleted only if removing the item significantly increases AVE and composite reliability [75]. The outer loading values for all items in the measurement model were above 0.70, as shown in Table 2 and Figure 2, and are therefore appropriate for further investigation [64]. Internal consistency was tested via composite reliability (CR) of the item outer loadings; CR should be greater than 0.70 [64]. As shown in Table 2, all items in the model met the CR > 0.70 benchmark and were thus acceptable. AVE is a typical metric for assessing convergent validity in model constructs, with values over 0.50 indicating, as Wong [76] noted, acceptable convergent validity. The results in Table 2 show that the measurement model of the study constructs passed these tests.
Discriminant validity assesses the extent to which a construct genuinely differs from the other constructs [77]. The Fornell-Larcker (1981) and cross-loading criteria were used to assess discriminant validity. According to Fornell and Larcker [78], the square root of the AVE should be larger than the correlations between the latent variables. Table 3 shows the findings on the discriminant validity of the measurement model [79]. The second approach used in this research, the cross-loading criterion, was also applied to verify discriminant validity. This criterion requires that the loadings of indicators on their own construct be larger than their loadings on all other constructs in the same row; in other words, the loadings of indicators on their parent construct must exceed their loadings on different constructs. The results in Table 4 reveal that the indicators load more highly on their own latent variables than on the other variables in each row. Furthermore, the results for each construct demonstrated a significant degree of unidimensionality.
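As a concrete illustration of these reliability and validity checks, the following sketch computes composite reliability and AVE from standardized outer loadings and applies the Fornell-Larcker comparison; all numeric values are made up for demonstration and are not this study's loadings.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    lam = np.asarray(loadings)
    err = 1.0 - lam ** 2                  # error variance per standardized item
    return lam.sum() ** 2 / (lam.sum() ** 2 + err.sum())

def ave(loadings):
    # AVE = mean of the squared standardized loadings
    return np.mean(np.asarray(loadings) ** 2)

ws = [0.78, 0.82, 0.75, 0.80]             # hypothetical work schedule loadings
sc = [0.85, 0.79, 0.88, 0.76]             # hypothetical safety culture loadings

print(f"CR(WS)={composite_reliability(ws):.3f}  AVE(WS)={ave(ws):.3f}")
print(f"CR(SC)={composite_reliability(sc):.3f}  AVE(SC)={ave(sc):.3f}")

# Fornell-Larcker: sqrt(AVE) of each construct should exceed its
# correlation with the other construct (correlation value assumed here).
r_ws_sc = 0.45
assert np.sqrt(ave(ws)) > r_ws_sc and np.sqrt(ave(sc)) > r_ws_sc
```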
Direct relationships
Path analysis is a type of linear regression statistical approach favoured in social science and analytical management methodologies. Likewise, path coefficient analysis is a dominant tool for examining complex relations simultaneously [80]. After the model has been fitted, structural equation modelling may be used to investigate the connections between study variables. The structural model describes the connections between the research variables [81]. The findings show the links between exogenous and endogenous variables. The fundamental focus of the structural model evaluation is the fit of the whole model, including the posited parameter values, dimensions, paths, and significance [81].
Following the study context, PLS-SEM was used in this model to evaluate the moderating influence of safety culture on the link between work schedule and driving performance. The bootstrapping method was used to determine the significance of the model hypotheses. Bootstrapping repeatedly resamples the original data at random to create samples equivalent to the original data; this approach evaluates the data's dependability and predictive power, as well as the error of the estimated path coefficients [82]. The dependent construct was examined for standardised path coefficients (β) and p-values, as shown in Figure 3 and Table 5. The results revealed that work schedule substantially influenced driving performance (β = 0.325, p < 0.001).
Similarly, safety culture substantially impacts driving performance (β = 0.404, p < 0.001). Furthermore, effect size was used to assess the magnitude of each independent variable's impact on the dependent variable [83]. Effect size values of f² = 0.02, 0.15, and 0.35 are interpreted as small, moderate, and strong, respectively [77]. Table 5 shows that the effect sizes between factors were small.
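The bootstrapping step can be illustrated with a minimal sketch: resample the cases with replacement, re-estimate the path coefficients each time, and derive t-values from the bootstrap distribution. This uses synthetic data and ordinary least squares rather than the Smart-PLS algorithm, so it conveys the idea only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic standardized scores; the coefficients are made up, chosen
# only to be in the neighbourhood of the reported betas.
n = 304
ws = rng.standard_normal(n)               # work schedule
sc = rng.standard_normal(n)               # safety culture
dp = 0.33 * ws + 0.40 * sc + 0.7 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), ws, sc])
beta = np.linalg.lstsq(X, dp, rcond=None)[0]

boots = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)      # resample cases with replacement
    boots.append(np.linalg.lstsq(X[idx], dp[idx], rcond=None)[0])

se = np.std(boots, axis=0)
t_values = beta / se                      # bootstrap t-values, as in PLS output
print(dict(zip(["const", "work_schedule", "safety_culture"], t_values.round(2))))
```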
Explanatory power
The data show that the measurement model has good item reliability, convergent validity, and discriminant validity. In addition, the PLS technique yielded squared multiple (R²) correlations for the model's endogenous variables. R² is treated in the PLS-SEM method in the same way as in traditional regression [82].
R² indicates how much of the variance of the dependent variable is jointly explained by the independent variables. Therefore, a greater R² value increases the structural model's ability to forecast. The R² values were obtained using the PLS technique in this investigation, as indicated in Table 6. In this model, the R² value for the dependent variable (driving performance) was 0.591, indicating that the latent exogenous variables of safety culture and work schedule could explain 59.1% of driving performance. Based on Chin's [82] guidelines, an R² value of 59.1% is substantial.
Moderating effect analysis
A moderator is a variable that influences the strength or weakness of the relationship between an independent variable (predictor) and a dependent variable (criterion). In particular, the moderator is a third variable within a relationship research framework that influences the zero-order correlation between two other variables [45]. PLS-SEM was used in the current study to define and estimate the moderating role of safety culture on the link between work schedule and poor driving performance [84,85]. The product indicator technique was used in this study since the postulated moderating variable (safety culture) is continuous in nature [86]. According to Henseler and Fassott [85], because the outcomes of the product indicator technique are generally equivalent or superior to those of the group comparison strategy, the product indicator approach should be preferred.
As stated in hypothesis 3, safety culture moderates the association between work schedule and drivers' performance. Table 7 and Figure 4 show that the interaction term of work schedule × safety culture on driving performance was significant (β = −0.125, t = 3.548, p < 0.001), as expected. As a result, hypothesis 3 was fully validated, following the principles of Aiken, West [87] and Marcus, Schuler [88]. The path coefficient information was used to illustrate the moderating influence of safety culture on the relationship between the study variables. Figure 4 shows that incorporating the moderating effect of safety culture may mitigate the unfavourable association between work schedule and driving performance.
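At the level of the structural model, the product indicator approach amounts to adding an interaction term built from the two predictors. The sketch below fits such a moderated regression on synthetic standardized scores; the negative interaction coefficient is seeded into the data to mimic the reported sign, not estimated from the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 304
ws = rng.standard_normal(n)               # work schedule (standardized)
sc = rng.standard_normal(n)               # safety culture (standardized)
# Seed a negative interaction: a high safety culture dampens the
# effect of the work schedule on driving performance.
dp = 0.33 * ws + 0.40 * sc - 0.13 * ws * sc + 0.7 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([ws, sc, ws * sc]))
fit = sm.OLS(dp, X).fit()
print(fit.params.round(3))                # const, WS, SC, WS x SC
print(fit.pvalues.round(4))
```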
Identifying the strength of moderating effects
The current study calculates effect sizes using Cohen's (1988) guidelines to evaluate the strength of safety culture as a moderator of the association between drivers' work schedule and their driving performance. The strength of the moderating impact can be assessed by comparing the R² value of the main model (without the moderator) to the R² value of the full model, which includes both the exogenous variables and the moderator effect [85]. Thus, this study determines the strength of the moderation effect via the following formula [85,89]: f² = (R² of model with moderator − R² of model without moderator) / (1 − R² of model with moderator). Values of 0.02, 0.15, and 0.35 are classified as small, moderate, and large moderating effect sizes [89]. It is worth noting that Chin, Marcolin [90] remarked that a small effect size does not necessarily imply that the moderating impact is unimportant; even a small interaction effect can be meaningful if the ensuing beta changes are considerable [90]. Based on the guidelines of [85,89], the strength of the moderating impact of safety culture was determined. Table 8 shows that the effect size on driving performance was 0.232, which reflects a moderate moderating effect [91].
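Given the R² values of the model with and without the moderator, the f² computation is a one-liner. In the sketch below, 0.591 is the reported full-model R², while 0.496 is back-solved here so that f² reproduces the reported 0.232; it is not a value taken from the paper.

```python
def f_squared(r2_with_moderator, r2_without_moderator):
    # Cohen's f^2 for a moderating effect (Henseler & Fassott convention)
    return (r2_with_moderator - r2_without_moderator) / (1 - r2_with_moderator)

print(round(f_squared(0.591, 0.496), 3))  # -> 0.232, a moderate effect
```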
Discussion
Based on the findings, the current study found that work schedule affects driving performance, addressing objective one (H1). This conclusion is consistent with earlier research [16,51,92,93,94,95] suggesting that work schedule has a major impact on employee performance. Based on the results, drivers who understand the influence of work schedules on their driving performance would be better able to prevent work schedule disorder. Given that drivers spend the majority of their time on the road, there is a chance that their mental or physical functioning will be impaired during their driving shift. This raises the probability of traffic crashes, either because of the shift period or because they work a non-standard shift [96]. Similarly, this study found that shifts affected most drivers because of the lengthy haulage involved in oil and gas driving tasks. This directly influences driver alertness, resulting in poor performance. Besides, maintaining attention necessitates regular self-regulation on the part of the driver; the driver must balance the subjective costs (effort exertion) and advantages (intrinsic and extrinsic incentives) of maintaining careful attention over time [97].
This study found a significant association between safety culture and driving performance, addressing objective two (H2). The findings are consistent with previous research [98,99,100]. The safety of a driving journey is thus largely dependent on the driver's performance. The driver is the primary human operating a vehicle, and their responsibilities are demanding since they must meet the numerous demands and requests linked with their driving job [101]. They must also maintain their driving abilities, particularly for trains and commercial vehicles, and be vigilant and aware of their surroundings when driving [102]. The major purpose of the present research is to evaluate the moderating role of safety culture on the negative impact of work schedules on driving performance. Addressing objective three (H3), safety culture moderates the association between work schedule and driving performance, and the PLS path modelling findings validated this hypothesis. This research supports situation awareness theory, arguing that there must be a 'fit' between knowing the current situation and forecasting future requirements in order to produce actions that maintain attention for a lengthy period of time when driving [44]. The moderating influence of safety culture on the association between the driver's schedule and driving performance can be described using this theory: Endsley [44] suggested that awareness of the situation's elements, tied to the correct action at the right moment, is critical for risk management and safety. Drivers must be able to perceive their surroundings in order to foresee what will happen next and, consequently, take the right action. This explains how, as a moderator, safety culture minimizes the extent to which the work schedule negatively impacts driving performance, which is closely tied to drivers' skills in controlling safety hazards while driving, before duty, and after work. Therefore, the improvement in driving performance will be noticed among drivers who are aware of the safety culture needed to manage hazards.
Finally, the magnitude of the moderating effect was assessed in this study by comparing the R² value of the main model (without the moderator) to the R² value of the complete model, which included the independent variable, dependent variable, and moderator [85,89]. Based on the results, the current study demonstrates that the safety culture intervention decreased the magnitude of the unfavourable relationship between work schedule and driving performance with an effect size of 0.232 (23.2%), which is considered moderate based on Cohen's [89] guidelines. The current study therefore found substantial evidence that safety culture operates as a crucial moderator, dampening the unfavourable effect of drivers' work schedules on driving performance among oil and gas truck drivers.
Based on the results of the current study, several recommendations to enhance the safety culture among Malaysian oil and gas tanker drivers are acknowledged. First and foremost, commitment and communication are essential components of a successful safety culture; a solid safety culture requires open and honest communication between drivers and supervisors [103]. Second, safety training assists oil and gas drivers in understanding safe behaviours and expectations [104,105]. Third, involve drivers: research indicates that firms with a strong safety culture keep employees engaged and emotionally committed to the company and its goals [106].
Conclusion
As per the results and discussion above, this study aimed to examine the moderating impact of safety culture on the association between work schedule and driving performance. The results proved that work schedule and safety culture significantly influence driving performance. Besides, the findings confirmed the moderating impact of safety culture as a preventative intervention that mitigates the unfavourable association between work schedule and driving performance.
Research contributions
This study's findings have both theoretical and practical ramifications. From the theoretical viewpoint, the current study extends situation awareness theory by establishing a comprehensive moderating model that combines diverse safety culture views. In addition, studying the moderating influence of safety culture on the association between the study variables expands the current body of knowledge.
In terms of practical consequences, this study is one of the first to give practical evidence to energy transportation firms to assist them in making decisions regarding the redesign of drivers' work schedules and the promotion of drivers' safety culture to prevent poor driving performance. Besides, this research found evidence of the importance of each variable studied, which should motivate the immediate administrators of drivers to establish evaluation criteria and to recognise the importance of regulating the work schedule and enhancing safety culture, since these factors can reduce or improve driving performance.
Study limitations and future research
Although the study provided significant contributions, its limitations must be acknowledged. Because this study relied on data collected from oil and gas tanker drivers to evaluate their perspectives on the study variables, it cannot be generalized to other heavy vehicle industries due to differences in work situations and cultures. In addition, even though the sample size met the requirement of Krejcie and Morgan [107], a larger sample is preferred in future studies. Besides, the study did not consider changes in the oil and gas market after the Covid-19 pandemic, such as increasing demand, which creates new conditions for drivers that need to be addressed. Finally, as observed during the data collection period, some intriguing topics for future research include psychological hazards, job load, and sleep disorders among oil and gas truck drivers. Furthermore, it would be valuable to investigate the impact of the work environment on driver performance.
Author contribution statement
Dr. Al-Baraa Abdulrahmam Al-Mekhlafi: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Dr Ahmad Shahrul Nizam Isha: Conceived and designed the experiments; Performed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Dr. Mohammed Abdulrab: Analyzed and interpreted the data; Wrote the paper.
Muhammad Ajmal; Noreen Kanwal: Contributed reagents, materials, analysis tools or data; Wrote the paper.
Data availability statement
Data will be made available on request.
Declaration of interest's statement
The authors declare no conflict of interest.
Additional information
Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2022.e11289.
ACCEPTED FOR PUBLICATION IN THE ASTRONOMICAL JOURNAL

GENETIC-ALGORITHM-BASED LIGHT CURVE OPTIMIZATION APPLIED TO OBSERVATIONS OF THE W UMa TYPE ECLIPSING BINARY BH CASSIOPEIAE
I have developed a procedure utilizing a Genetic-Algorithm-based optimization scheme to fit the observed light curves of an eclipsing binary star with a model produced by the Wilson-Devinney code. The principal advantages of this approach are the global search capability and the objectivity of the final result. Although this method can be more efficient than some other comparably global search techniques, the computational requirements of the code are still considerable. I have applied this fitting procedure to my observations of the W UMa type eclipsing binary BH Cassiopeiae. An analysis of V-band CCD data obtained in 1994/95 from Steward Observatory and U- and B-band photoelectric data obtained in 1996 from McDonald Observatory provided three complete light curves to constrain the fit. In addition, radial velocity curves obtained in 1997 from McDonald Observatory provided a direct measurement of the system mass ratio to restrict the search. The results of the GA-based fit are in excellent agreement with the final orbital solution obtained with the standard differential corrections procedure in the Wilson-Devinney code.
INTRODUCTION
The problem of extracting useful information from a set of observational data often reduces to finding the set of parameters for some theoretical model which results in the closest match to the observations. If the constitutive physics of the model are both accurate and complete, then the values of the parameters for the 'best-fit' model can yield important insights into the nature of the object under investigation.
When searching for the 'best-fit' set of parameters, the most fundamental consideration is: where to begin? Models of all but the simplest physical systems are typically non-linear, so finding the least-squares fit to the data requires an initial guess for each parameter. Generally, some iterative procedure is used to improve upon this first guess in order to find the model with the absolute minimum residuals in the multi-dimensional parameter space.
There are at least two potential problems with this standard approach to model fitting. The initial set of parameters is typically determined by drawing upon the past experience of the person who is fitting the model. This subjective method is particularly disturbing when combined with a local approach to iterative improvement. Many optimization schemes, such as differential corrections (Proctor & Linnell 1972) or the simplex method (Kallrath & Linnell 1987), yield final results which depend to some extent on the initial guesses. The consequences of this sort of behavior are not serious if the parameter space is well behaved, that is, if it contains a single, well defined minimum. If the parameter space contains many local minima, then it can be more difficult for the traditional approach to find the global minimum.
GENETIC ALGORITHMS
An optimization scheme based on a Genetic Algorithm (GA) offers an alternative to more traditional approaches. Restrictions on the range of the parameter space are imposed only by observations and by the physics of the model. Although the parameter space so defined is often quite large, the GA provides a relatively efficient means of sampling globally while searching for the model which results in the absolute minimum variance when compared to the observational data. While it is difficult for GAs to find precise values for the set of 'best-fit' parameters, they are well suited to search for the region of parameter space that contains the global minimum. In this sense, the GA is an objective means of obtaining a good first guess for a more traditional method which can narrow in on the precise values and uncertainties of the 'best-fit'.
The underlying ideas for Genetic Algorithms were inspired by Darwin's (1859) notion of biological evolution through natural selection. A comprehensive description of how to incorporate these ideas in a computational setting was written by Goldberg (1989). In the first chapter of his book, Goldberg describes the implementation of a simple GA, involving several steps which are analogous to the process of biological evolution.
The first step is to fill the parameter space uniformly with trial parameter-sets which consist of randomly chosen values for each parameter. The theoretical model is evaluated for each trial parameter-set, and the result is compared to the observational data and assigned a fitness which is inversely proportional to the root-mean-square residuals. The fitness of each trial parameter-set is mapped into a survival probability by normalizing to the highest fitness. A new generation of trial parameter-sets is then obtained by selecting from this population at random, weighted by the survival probabilities.
Before any manipulation of the new generation of trial parameter-sets is possible, their characteristics must be encoded in some manner. The most straightforward way of encoding the parameter-sets is to convert the numerical values of the parameters into a long string of numbers. This string is analogous to a chromosome, and each number represents a gene. For example, a two parameter trial with numerical values x1 = 1.234 and y1 = 5.678 would be encoded into a single string of numbers '12345678'.
The next step is to pair up the encoded parameter-sets and modify them in order to explore new regions of parameter space. Without this step, the final solution could ultimately be no better than the single best trial contained in the initial population. The two basic operations are crossover, which emulates reproduction, and mutation.
Suppose that the encoded trial parameter-set above is paired up with another trial having x2 = 2.468 and y2 = 3.579, which encodes to the string '24683579'. The single-point crossover procedure chooses a random position between two numbers along the string, and swaps the two strings from that position to the end. So if the third position is chosen, the strings become

123|45678 → 123|83579
246|83579 → 246|45678

Although there is a high probability of crossover, this operation is not applied to all of the pairs. This helps keep favorable characteristics from being eliminated or corrupted too hastily. To this same end, the rate of mutation is assigned a relatively low probability. This operation allows for the spontaneous transformation of any particular position on the string into a new randomly chosen value. So if the mutation operation were applied to the sixth position of the second trial, the result might be

24645|6|78 → 24645|0|78

After these operations have been applied, the strings are decoded back into sets of numerical values for the parameters. In this example, the new first string '12383579' becomes x1 = 1.238 and y1 = 3.579, and the new second string '24645078' becomes x2 = 2.464 and y2 = 5.078. Obviously, the new set of trial parameter-sets is related to the original set in a very non-linear way. This new generation replaces the old one, and the process begins again: the model is evaluated for each trial, fitnesses are assigned, and a new generation is constructed from the old and modified by the crossover and mutation operations. Eventually, after a modest number of generations, some region of parameter space remains populated with trial parameter-sets, while other regions are essentially empty. The robustness of the solution can be established by running the GA several times with different random number sequences.
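To make the string manipulation concrete, here is a minimal toy sketch of the encoding, single-point crossover, and mutation operators just described; it illustrates the scheme and is not the GAWD implementation.

```python
import random

def encode(params, digits=4):
    # (1.234, 5.678) -> '12345678'; assumes each parameter is in [0, 10).
    return "".join(f"{p:.{digits - 1}f}".replace(".", "") for p in params)

def decode(s, n_params, digits=4):
    # '12345678' -> (1.234, 5.678)
    return tuple(int(s[i * digits:(i + 1) * digits]) / 10 ** (digits - 1)
                 for i in range(n_params))

def crossover(a, b):
    # Swap the tails of the two strings at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(s, rate=0.003):
    # Replace each gene with a random digit with low probability.
    return "".join(random.choice("0123456789") if random.random() < rate else c
                   for c in s)

a, b = crossover(encode((1.234, 5.678)), encode((2.468, 3.579)))
print(decode(mutate(a), 2), decode(mutate(b), 2))
```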
Genetic Algorithms have been used a great deal for optimization problems in other fields, but until recently they have not attracted much attention in astronomy. The application of GAs to problems of astronomical interest was promoted by Charbonneau (1995), who demonstrated the technique by fitting the rotation curves of galaxies, a multiply periodic signal, and a magneto-hydrodynamic wind model. Many other applications of GAs to astronomical problems have appeared in the recent literature. Hakala (1995) optimized the accretion stream map of an eclipsing polar. Lang (1995) developed an optimum set of image selection criteria for detecting high-energy gamma rays. Kennelly et al. (1996) used radial velocity observations to identify the oscillation modes of a δ Scuti star. Lazio (1997) searched pulsar timing signals for the signatures of planetary companions. Charbonneau et al. (1998) performed a helioseismic inversion to constrain solar core rotation. Most recently, Wahde (1998) used a GA to determine the orbital parameters of interacting galaxies. The applicability of GAs to such a wide range of astronomical problems is a testament to their versatility.
OBSERVATIONS
I have applied a GA-based optimization scheme to my observations of the W UMa type eclipsing binary star BH Cassiopeiae. Due to an unfortunate historical accident, this relatively bright object (mV = 12.6) was neglected observationally for more than half a century prior to this investigation.
Background
The discovery observations of BH Cas were made by S. Beljawsky from the Simeïs Observatory between June and September of 1928, and were later reported in the paper "34 New Variable Stars (Fourth Series)" (Beljawsky 1931). The eleventh variable in his list was in the position of the star which later came to be known as BH Cas. Beljawsky gave it the temporary designation '353.1931' (Kukarkin 1938). Based on his 62 visual observations and an examination of the plate collection at the Moscow Observatory, Kukarkin concluded that BH Cas was possibly W UMa type with a period near 0.5 days and an amplitude of 0.4 magnitudes.
The most recent edition of the General Catalogue of Variable Stars (Kukarkin et al. 1969) contains a footnote on the entry for BH Cas which refers to a 1943 paper by P. Ahnert and C. Hoffmeister. After searching for BH Cas on photographic plates, and visually with a 350-mm reflector, they concluded that "no star in the surroundings of the indicated place shows light variation..." (Ahnert & Hoffmeister 1943). After the appearance of this article, no observations of BH Cas appeared in the literature for more than 50 years. Considering the strength of their conclusion, the absence of additional efforts to observe this object is not surprising.
On the night of 19 February 1994, I obtained a series of CCD images of the region centered on BH Cas over a period of ∼1.3 hours using the Spacewatch CCD at the Steward Observatory 0.91-m telescope on Kitt Peak. For each image, I measured the flux from BH Cas and from the comparison and check stars, GSC 01629 and GSC 01134 respectively. Plots of the relative flux over time revealed an increase in the brightness of BH Cas relative to the comparison star while the latter remained constant relative to the check star (Metcalfe 1994).
Photometry
On twelve nights between September 1994 and October 1995 I used the '2kBig' CCD at the Steward Observatory 1.5-m telescope to obtain images of BH Cas and the surrounding area for photometric study. These observations allowed me to reconstruct a complete light curve in the V-band from a total of 432 data points, as well as partial curves in the R- and I-bands.
For each night of observations, I reconstructed a flat-field image from a median combination of many data frames with non-overlapping star images. I used the IRAF ccdproc package to clean and calibrate each image, and the phot package to extract aperture photometry for BH Cas, the comparison star GSC 00784, and the anonymous check star (see Figure 1).
For ten nights in December 1996 I used the 'P3Mudgee' 3-channel photoelectric photometer (Kleinman, Nather & Phillips 1996) at McDonald Observatory to obtain the U- and B-band light curves of BH Cas. I followed the standard reduction procedure (Clemens 1993), but some additional corrections were necessary. Intermittent problems with the filter wheel on all but the final night of observations required that I make small (∼3%) normalization corrections derived from nightly phototube cross calibrations. The considerable color difference between BH Cas and the comparison star GSC 00594 (∆B − V ∼ 0.37) yielded significant second-order extinction for observations obtained at high airmass. I adopted reasonable values for the second-order extinction coefficients based on a study by Kim & Park (1993). In this paper, the authors demonstrated that the values of the extinction coefficients can change drastically from one night to the next, and that the 'second-order' coefficient could be anywhere from negligible to dominant. Based on their Table 2, I determined that the median value of the second-order coefficient in the B-band was 0.05. Adopting this value, the resulting corrections applied to the B-band light curve of BH Cas typically amounted to a few hundredths of a magnitude. Although the paper did not determine values of the second-order coefficient in the U-band, it demonstrated the large range of acceptable values relative to the first-order coefficient. This allowed me to adopt reasonable U-band coefficients based on these data (Davidge & Milone 1984). Table 1 lists the airmass range and extinction coefficients for all of the data sets used in this analysis.
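For reference, the second-order extinction correction applied here follows the standard differential photometry relation, in which the correction scales with the color difference of the two stars and the airmass. In the sketch below, the k'' = 0.05 coefficient and the ∆B − V ∼ 0.37 color difference come from the text, while the differential magnitude and airmass are made-up examples.

```python
def second_order_extinction(dm_obs, delta_color, airmass, k2=0.05):
    # Differential magnitude corrected for second-order (color-dependent)
    # extinction: dm0 = dm_obs - k2 * delta(B-V) * airmass
    return dm_obs - k2 * delta_color * airmass

# At airmass 1.8 the correction is ~0.03 mag, i.e. "a few hundredths".
print(second_order_extinction(dm_obs=0.850, delta_color=0.37, airmass=1.8))
```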
Spectroscopy
On seven nights in September/October 1997 I obtained a time series of spectra of BH Cas using the Sandiford cassegrain echelle spectrograph (McCarthy et al. 1993) on the 2.1-m telescope at McDonald Observatory. I adjusted the cross disperser and grating rotation to provide wavelength coverage from 5430 to 6670 Å. The velocity resolution of this setup was ∼2 km s^-1.
I followed the standard IRAF reduction procedure for echelle spectra (Churchill 1995). Although most spectral orders contained at least one weak metal line, there were only two strong features: the Na D lines and Hα. The Na D lines were contaminated by narrow atmospheric and interstellar features, so the only line which offered the possibility of measuring reliable radial velocities was Hα.
The velocities were derived from cross correlations of the observed spectra with a spectrum of the radial velocity standard star 31 Aql obtained on each night (v_r = 100.5 km s^-1; Astronomical Almanac 1997). I determined the systematic stability of the instrument and reduction procedure by cross correlating the spectra of 31 Aql obtained on different nights with each other. Systematic velocity shifts between the data sets ranged from 2-10 km s^-1 with uncertainties in the range 1-3 km s^-1. I corrected each of the derived velocities for these small nightly offsets.
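A cross-correlation velocity measurement of this kind can be sketched as follows: Doppler-shift the template over a grid of trial velocities and locate the peak correlation. This toy version is not the actual reduction pipeline; the wavelength grids and spectra are assumed inputs.

```python
import numpy as np

C = 299792.458  # speed of light in km/s

def radial_velocity(wave, flux, t_wave, t_flux, v_grid):
    # Correlate the observed spectrum against a Doppler-shifted template
    # and return the trial velocity that maximizes the correlation.
    cc = [np.corrcoef(flux, np.interp(wave, t_wave * (1 + v / C), t_flux))[0, 1]
          for v in v_grid]
    return v_grid[int(np.argmax(cc))]
```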
Wilson-Devinney Code
Before about 1970, the standard approach to modeling close binary stars was that of Russell & Merrill (1952). This geometrical model revolved a system of two similar ellipsoids to produce a light curve. While it was admittedly only a useful approximation of a true binary system, this model could be treated analytically, which was appropriate for the tools available at the time.
After a detailed treatment of close binary stars by Kopal (1959) which described the Roche equipotential surfaces, several authors attempted to calculate light curves by revolving this physical model. Progress was limited by the computational facilities that were available at the time. Early attempts by Lucy (1968) and Hill & Hutchings (1970) represented substantial improvements over the Russell model, but they were still incomplete. Wilson & Devinney (1971) introduced the model which has served as the foundation for many improvements over the past few decades (Wilson 1994). At present, the majority of published results for close binary stars are obtained using the Wilson-Devinney code (Milone 1993).
There are two modes of operation in the Wilson-Devinney (W-D) code for overcontact binaries like W UMa systems. One of these modes uses a single continuous gravity darkening law for the entire common envelope to fix the temperature of the secondary star. The other mode keeps the secondary temperature a free parameter, allowing the model to be in physical contact without being in thermal contact. Both overcontact modes require that the two stars have identical surface potentials, gravity darkening exponents, bolometric albedos, and limb darkening coefficients. Since the primary and secondary eclipses in the observed light curves of BH Cas have significantly different depths, I chose to use the overcontact mode with an adjustable secondary temperature. I used a blackbody radiation law and simple reflection.
Some parameters of the model may be fixed initially from theory. I interpolated the values for the monochromatic limb darkening coefficients from the tables in Al Naimiy (1978) to the proper temperature and wavelength. I used the effective wavelengths of the UBV bandpasses and the temperature derived from the colors of BH Cas. The bolometric albedo (also known as the reflection coefficient) is usually set to 1.0 for radiative stars, and 0.5 for convective stars which transport some of the incident energy to other regions of the star before re-radiating it (Rucinski 1969). The dependence of the bolometric flux on local effective gravity is described by the gravity darkening exponent; a value of 1.0 corresponds to a direct proportionality between flux and gravity, and is appropriate for radiative stars (von Zeipel 1924). The flux at any point on the surface of a convective star is less dependent on the local gravity, and the exponent is thought to be 0.32 (Lucy 1967). Since the temperature of BH Cas is well within the convective regime, the gravity darkening exponents and reflection coefficients should be near the theoretical values for convective stars. These are the values that I assumed for all of the modeling.
GAWD Code
In the summer of 1997, I began to develop a simple GA-based optimization routine for the 1993 version of the W-D code (GAWD). Inspired by a plot in a paper by Stagg & Milone (1993) which showed a slice of ill-behaved parameter space in the Ω-q plane, I decided to try using a GA to optimize in this two-dimensional space first. The results of this initial test were encouraging. I used the W-D code to generate a synthetic V-band light curve, and then I let GAWD try to find the original set of parameters from a uniform random initial sampling of 1000 points in the surrounding region of Ω-q space. After 10 generations, nearly 99% of the trial parameter-sets were statistically indistinguishable from the original set of parameters.
The GAWD code quite naturally divided into two basic functions: using the W-D code to calculate light curves, and manipulating the results from each generation of trial parameter-sets. The majority of the computing time is spent calculating the geometry of the binary system for each set of model parameters. The GA is concerned only with collecting and organizing the results of many of these models, so I incorporated the message passing routines of the public-domain PVM software (Geist et al. 1994) to allow the execution of the code in parallel on a network of 25 workstations.
With this pool of computational resources, the GAWD code evolved to do more than I originally thought would be feasible. Adding the capability to fit for more than two parameters was simply a matter of extending the GA to deal with longer strings of numbers. The light curve models take the same amount of time to run regardless of the number of parameters which are specified by the GA, so in some sense the extra parameters were added for free. This is not strictly true since increasing the dimensionality of the parameter space slows the convergence of the GA. After experimenting in the simple Ω-q space for awhile, I added the capability to fit for the inclination i, the temperature ratio of the two stars T1/T2, and finally the temperature of the primary star T1. Since the absolute temperature of a star is unconstrained by observations in only one bandpass, I altered GAWD to fit light curves in the UBV bandpasses simultaneously.
Global Search
The first thing I had to do before starting GAWD was specify the ranges of the 5 parameters: (1) The measurement of the mass ratio (q ≡ m2/m1) from the radial velocity data placed strong constraints on the allowed range. Initially I fit a spectroscopic orbit to the radial velocity data allowing the orbital period to be a free parameter, and with the eccentricity fixed at 0.0. The period of the orbit was consistent with the photometric period, so I fixed it for the final fit. I allowed GAWD to fit for a mass ratio between ±3σ of the final spectroscopic value. (2) The shape of the observed light curves indicates that BH Cas is overcontact, so I constrained the equipotential parameter Ω to be somewhere between the inner and outer critical equipotentials for a given mass ratio. (3) The depth of the eclipses implies that the inclination is fairly high, but I allowed all values above i = 50°. (4) The temperature ratio is strongly constrained by the relative depths of the two minima, so I included all values between 0.93 and 0.97. (5) The photometric colors indicate that the components of BH Cas are fairly cool, so I allowed temperatures for the primary star between T1 = 4200 and 5000 K.
After turning on the 25-host metacomputer with PVM, I started the GA master program on one of the faster computers. The process begins by reading the observational data into memory and assigning each light curve equal weight. A randomly distributed initial set of 1000 trial parameter-sets is generated, and each set of parameters is sent out to a slave host. Each slave calculates theoretical UBV light curves for the given set of parameters and returns the result to the master. Upon receiving a set of light curves, the master process sends a new job to the responding slave and computes the variance of the three calculated light curves compared to the observed data. The fitness of the trial parameter-set is determined from a simple average using the three individual variances.
When the results of all 1000 trial parameter-sets are in hand, the master process normalizes the fitnesses to the maximum in that generation. One copy of the fittest trial parameter-set is passed to the next generation unaltered, and the rest of the new population is drawn from the old one at random, weighted by the fitness. Each trial is encoded into a long string of numbers, which are then paired up for manipulation. The single point crossover operator is applied to 65% of the pairs, and 0.3% of the encoded numbers are altered by the mutation operator. The shuffled trial parameter-sets are decoded, and those which are still within the allowed ranges of parameters replace their predecessors. Computations of the new set of models are distributed among the slave hosts, and the entire process is repeated until the fractional difference between the average fitness and the best fitness in the population is smaller than 1%. As the evolution progresses, correlations between parameters are revealed in the spatial distribution of trial parameter-sets after the worst have been eliminated, but before the final solution has converged. The parameters of the best solution in the final generation provide the initial guess where the traditional approach begins.
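In code, the generation cycle described above reduces to a few steps. The sketch below is a minimal illustration rather than the GAWD code itself: the W-D light-curve comparison is replaced by a placeholder fitness function, each trial is encoded as a string of ten digits, and the check that decoded trials remain within the allowed parameter ranges is omitted:

```python
import random

N_POP, N_GENES, P_CROSS, P_MUT = 1000, 10, 0.65, 0.003
TARGET = [5] * N_GENES   # hypothetical "true" parameters

def fitness(genes):
    # Stand-in for the W-D step: in GAWD the genes decode to model
    # parameters, UBV light curves are computed on the slave hosts,
    # and the averaged variance against the data sets the fitness.
    sse = sum((g - t) ** 2 for g, t in zip(genes, TARGET))
    return 1.0 / (1.0 + 0.01 * sse)

def evolve():
    # Uniform random initial sampling of the encoded parameter space
    pop = [[random.randrange(10) for _ in range(N_GENES)]
           for _ in range(N_POP)]
    while True:
        fit = [fitness(g) for g in pop]
        best = max(fit)
        if (best - sum(fit) / N_POP) / best < 0.01:   # 1% convergence test
            return pop[fit.index(best)]
        elite = pop[fit.index(best)][:]
        # Fitness-weighted random draw of the rest of the new population
        new = [g[:] for g in random.choices(pop, weights=fit, k=N_POP - 1)]
        # Single-point crossover applied to 65% of the pairs
        for a, b in zip(new[::2], new[1::2]):
            if random.random() < P_CROSS:
                cut = random.randrange(1, N_GENES)
                a[cut:], b[cut:] = b[cut:], a[cut:]
        # 0.3% of the encoded numbers are altered by mutation
        for g in new:
            for i in range(N_GENES):
                if random.random() < P_MUT:
                    g[i] = random.randrange(10)
        pop = [elite] + new   # one copy of the fittest passes unaltered

print(evolve())
```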
At this point, it is instructive to ask: has the algorithm found the actual region of the global minimum? This is really an epistemological question. First, the algorithm performs a global random sampling of parameter space. After the initial convergence, the crossover and mutation operations continue to explore new regions of the parameter space in an attempt to find better solutions. Repeating this procedure many times with different random number seeds helps to ensure that the minimum found is truly global; but no solution can, with absolute certainty, be proven to be the global minimum unless every point in the parameter space is explicitly evaluated. At best, an algorithm can only take steps to sample parameter space in a sufficiently comprehensive way so that the probability of converging to the global minimum is very high.
The fitting routine supplied with the W-D code is a differential corrections procedure. This program calculates light and radial velocity curves based on the user-supplied first guesses for the model parameters, and then recommends small changes to each parameter based on the local shape of the parameter space. After the corrections have been applied, new curves are calculated and the shape of the local parameter space is again determined. New corrections are suggested, and the process continues until the suggested corrections for all parameters are smaller than the uncertainties.
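In outline, the differential corrections procedure is a Gauss-Newton iteration: linearize the model about the current guess, solve for the corrections from the local shape of parameter space, and repeat until the corrections drop below the uncertainties. A minimal numerical sketch, with a toy model standing in for the W-D curve generator, is:

```python
import numpy as np

def model(t, p):
    # Toy stand-in for the W-D light/velocity curve generator
    return p[0] * np.sin(t) + p[1]

def differential_corrections(t, y, p, h=1e-6, max_iter=50):
    p = np.asarray(p, dtype=float)
    for _ in range(max_iter):
        r = y - model(t, p)                    # residuals at current guess
        # Numerical Jacobian: the local shape of parameter space
        J = np.column_stack([(model(t, p + h * e) - model(t, p)) / h
                             for e in np.eye(len(p))])
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)   # suggested corrections
        # Formal parameter uncertainties from the covariance matrix
        s2 = (r @ r) / (len(t) - len(p))
        sigma = np.sqrt(s2 * np.diag(np.linalg.inv(J.T @ J)))
        p = p + dp
        if np.all(np.abs(dp) < sigma):         # stop: corrections < errors
            break
    return p, sigma

t = np.linspace(0, 10, 200)
y = model(t, [2.0, 0.5]) + 0.01 * np.random.default_rng(0).normal(size=200)
print(differential_corrections(t, y, [1.0, 0.0]))
```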
RESULTS
I evolved the 'first guess' set of parameters for BH Cas using the GAWD code. The GA randomly populated the parameter space defined in §4.3 and allowed the set of trial parameter-sets to evolve. After 100 generations, the difference between the average set of parameters and the best set of parameters was insignificant. The parameter values in the final generation of 1000 trial parameter-sets averaged to (mean ± 1σ): (1) = 0.953 ± 0.005. The GAWD code utilized only the light curve data to arrive at this result. All data points were included, and each light curve was assigned equal weight in the assessment of fitness. The radial velocity data were used only to restrict the range of possible mass ratios (see §4.3).
For the final solution, I used the Wilson-Devinney differential corrections code with all of the radial velocity data, and a subset of ∼100 data points from each light curve. Starting with the best solution from the GAWD code, I allowed Ω, i, T_2, and L_1 to be free parameters. I iteratively applied any significant corrections returned by the code to the parameters until all of the corrections were small relative to their uncertainties. Finally, I fixed T_2 and Ω and allowed T_1 and q to be free parameters instead. No significant corrections were returned by the code, and the final orbital solution for BH Cas yielded: The uncertainties given here are probable errors from the DC output. In Figure 2, the best-fit model from the W-D code is shown with the data for comparison. The deviations of the fit from the data may be the result of the blackbody assumption for the radiation law, or due to the assumptions made during the reduction procedure, or both. Copies of the data shown in Figure 2 are available in the electronic edition of this paper.

FIG. 2.- The best-fit model from the W-D code plotted with the observational data that were used for the fit. In the top panel, the observations are: U-band (upper), B-band (middle), and V-band (lower). In the bottom panel, the observations are: primary (solid points) and secondary (open points) components of BH Cas.
I have listed the properties of BH Cas (with 1σ errors) in Table 2. The position and proper motion were obtained from a pair of 51-cm Astrograph plates on the measuring machine at Lick Observatory (A. Klemola, personal communication). The results were noted to be somewhat worse than is normally expected from Astrograph plates due to the weakness of the exposures and the location of the image near the plate edge.
From 12 times of minimum light spanning nearly 2000 cycles (Metcalfe 1997), I derived the following ephemeris for BH Cas:

Min I = JD⊙ 2449998.6187(±3) + 0.40589004(±13) days × E

The residuals of this fit exhibit no systematic trend, and the single largest departure is ∼1 × 10⁻³ days. I derived the magnitudes of BH Cas at maximum light in the V-, R-, and I-bands from CCD observations using the Selected Areas for zero-point calibration (Landolt 1992). The colors of W UMa stars change only slightly with orbital phase because the common envelope forces the secondary star to be hotter than it would be if isolated. Observationally, this appears to be true regardless of the mass ratio of the system (Shu & Lubow 1981). The V − I and R − I colors imply an effective temperature for BH Cas near T_eff = 4600 ± 400 K, corresponding to a Main Sequence spectral type of roughly K4±2 (Bessell 1979).
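Predicted heliocentric times of primary minimum follow directly from this linear ephemeris; for example:

```python
T0 = 2449998.6187   # reference epoch of primary minimum (JD)
P = 0.405890        # orbital period in days

def t_min(E):
    """Predicted JD of primary minimum for cycle number E."""
    return T0 + P * E

print(t_min(0), t_min(2000))   # cycle 2000 falls ~812 days after T0
```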
An isolated Main Sequence star with this same effective temperature would have an absolute visual magnitude M_V = +7.0 ± 0.7 (Allen 1973). When this is combined with the observed magnitude, it provides an upper limit on the distance. The primary (more massive, cooler) component of BH Cas has a structure which is approximately the same as an isolated Main Sequence star of comparable mass. The same cannot be said of the secondary (less massive, hotter) component. As a consequence, it is best to use the observed magnitude at the orbital phase when essentially all of the light is coming from the primary component. For BH Cas, this occurs during the minimum of the deeper eclipse when m_V = 13.1. The resulting upper limit on the distance is d ≲ 160 pc.
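The limit follows from inverting the distance modulus, m − M = 5 log d − 5; checking the arithmetic with the quoted values:

```python
M_V = 7.0    # absolute magnitude of an isolated dwarf of this type
m_V = 13.1   # BH Cas at the minimum of the deeper eclipse

d = 10 ** ((m_V - M_V + 5) / 5)    # distance modulus inverted, in pc
print(f"d <~ {d:.0f} pc")          # ~166 pc, quoted above as d <~ 160 pc
```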
The mass ratio and center of mass radial velocity of the binary system listed in Table 2 come directly from the spectroscopic orbital solution. I obtained identical results whether or not the few observations during eclipses were included in the fit. The transverse component of the space velocity is determined by combining the distance with the measured proper motion. The total proper motion is µ_tot = (µ_α² + µ_δ²)^(1/2) = 2.34 ± 1.0 arcsec century⁻¹. At a distance of 160 pc, this motion corresponds to a transverse velocity of v_T = 17.7 ± 7.6 km s⁻¹. Combining this with the measured radial velocity of the system, the total space velocity is v_S ≲ 29.6 ± 4.9 km s⁻¹. Thus, BH Cas appears to be a member of the disk population.
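The transverse velocity follows from v_T = 4.74 µ d, with µ in arcsec yr⁻¹ and d in pc, and combines in quadrature with the systemic radial velocity to give the space velocity. The radial velocity below is a placeholder, since the actual value (from the spectroscopic orbit in Table 2) is not quoted in the text:

```python
import math

mu = 2.34 / 100        # proper motion: arcsec/century -> arcsec/yr
d = 160                # upper-limit distance in pc
v_T = 4.74 * mu * d    # transverse velocity in km/s (~17.7)

v_r = 23.7             # placeholder systemic radial velocity (km/s)
v_S = math.hypot(v_T, v_r)
print(f"v_T = {v_T:.1f} km/s, v_S = {v_S:.1f} km/s")   # v_S <~ 29.6
```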
In addition to the mass ratio, the spectroscopic orbit also yields the total mass of the binary system in terms of the semi-major axis, the orbital period, and some fundamental constants. If the semi-major axis is expressed in terms of the mass ratio, K amplitudes, and inclination, then the individual masses can be calculated. I list the masses obtained by using the best-fit value of the inclination.
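Explicitly, with the eccentricity fixed at zero the standard spectroscopic relation is (m_1 + m_2) sin³ i = 1.0361 × 10⁻⁷ (K_1 + K_2)³ P, with the K amplitudes in km s⁻¹, P in days, and masses in M⊙; the mass ratio q = K_1/K_2 then splits the total. The K amplitudes and inclination below are placeholders, since only the period is quoted in the text:

```python
import math

P = 0.405890            # orbital period in days (from the ephemeris)
K1, K2 = 90.0, 180.0    # placeholder K amplitudes in km/s (not quoted)
i = math.radians(70.0)  # placeholder best-fit inclination

q = K1 / K2             # mass ratio q = m2/m1
M_sum = 1.0361e-7 * (K1 + K2) ** 3 * P / math.sin(i) ** 3   # in Msun
m1 = M_sum / (1.0 + q)
m2 = M_sum - m1
print(f"m1 = {m1:.2f} Msun, m2 = {m2:.2f} Msun")
```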
Once the mass ratio is established and the scale of the system is determined from the K amplitudes, the best-fit value for the equipotential parameter leads directly to the absolute radii of the two stars. Since BH Cas is an overcontact system, the stars are tidally distorted, and no single number can adequately define the radii; I list the radii at the pole, side, and back.
DISCUSSION
The orbital period of BH Cas places it comfortably within the range of periods typical of many W-type W UMa systems. The mass ratio is near the average for this class, but the individual masses are near the low end of typical values. The mass of the secondary star seems particularly low, but this is not uncommon among W UMa systems. The secondary mass in V677 Cen, for example, is 0.15 M⊙ (Barone, Di Fiore & Russo 1993). In a recent paper containing a large self-consistent sample of absolute elements for W UMa systems (Maceroni & van't Veer 1996), fully 12% of the W-type, and 18% of all W UMa secondary stars had masses less than or equal to the secondary mass derived for BH Cas. The total system mass, however, is smaller than that of any W UMa star in this same sample, the smallest of which is CC Com (1.20 M⊙ total).
The absolute radii of the components of BH Cas are larger than expected for stars with such low masses. The average radii are similar to those of TY Boo, another W-type system which has roughly the same mass ratio as BH Cas, but a systematically higher total mass (Milone et al. 1991). There are a number of possibilities which could have led to biased estimates of the masses of BH Cas. I derived the velocities exclusively from the Hα line since it was the only strong feature available. I used an unbroadened radial velocity standard for the cross correlation template. Also, I assumed Gaussian profiles for the correlation peaks.
The fit to the radial velocity data could also be a problem. There is considerable scatter in the observations, perhaps enough to accommodate the increased K amplitudes that would be needed to make BH Cas as massive as TY Boo. A set of high dispersion spectra of BH Cas obtained at the Multiple Mirror Telescope in 1994 has recently been re-analyzed using two dimensional cross correlation techniques with synthetic templates (G. Torres, personal communication). The re-analysis yielded a mass ratio consistent with the value derived from the echelle spectra, but the best-fit K amplitudes were considerably larger. The correlations which led to these results, however, were very weak.
A significant x-ray signal at the position of BH Cas was serendipitously discovered by Brandt et al. (1997). They found an x-ray flux of 4.2 × 10⁻¹⁴ erg cm⁻² s⁻¹. The implied upper limit on the x-ray luminosity from the distance given above is L_x ≲ 1.0 × 10²⁸ erg s⁻¹. The detection of x-rays from BH Cas is not too surprising considering that most W UMa systems show clear signs of magnetic activity (Maceroni & van't Veer 1996). The mechanism for x-ray production is generally thought to be related to magnetic reconnections in the corona (Priest, Parnell & Martin 1994). The value of the x-ray luminosity of BH Cas is lower by 1-2 orders of magnitude compared to other nearby W UMa stars (McGale, Pye & Hodgkin 1996).
I would like to thank Robert Jedicke and R. E. White for their guidance during the early stages of this project, Staszek Zola for sharing his expertise in fitting light curves with the W-D code, and Ed Nather for many helpful discussions. Finally, I wish to thank the anonymous referee for a very helpful and thorough review which significantly improved the manuscript. This work was supported in part by a fellowship from the Barry M. Goldwater Scholarship and Excellence in Education Foundation, an undergraduate research grant from the Honors Center at the University of Arizona, and grant AST-9315461 from the National Science Foundation.
Cass' at the time of discovery. No maximum or minimum magnitudes were listed. Follow-up observations of BH Cas were made by B. Kukarkin with the 180-mm Zeiss-Heyde refractor from the Moscow Observatory and/or the 180-mm Grubb-Mart refractor from the Tashkent Observatory. Kukarkin began gathering observations in April 1936, and continued through the summer of 1937. The results of his observations were reported in the paper "Provisional Results of the Investigation of 80 Variable Stars in Fields 6, 13, 15 and 62." published in the journal Veränderliche Sterne Forschungs- und Informationsbulletin
Drain versus no drain in elective open incisional hernia operations: a registry-based analysis with 39,523 patients
Purpose Elective open incisional hernia operations are a frequently performed and complex procedure. Prophylactic drainage is widely practised to prevent local complications, but the benefit of surgical drain placement nevertheless remains a controversially discussed subject. The objective of this analysis was to evaluate the current status of patient care in clinical routine and the outcome in this regard. Methods The study was based on prospectively collected data of the Herniamed Register. Included were all patients with elective open incisional hernia repair between 1/2009 and 12/2020 and a completed 1-year follow-up. Multiple linear and logistic regression analysis was performed to assess the relation of individual factors to the outcome variables. Results Analysed were data from 39,523 patients (28,182 with drain, 11,341 without). Patients with drain placement were significantly older, had a higher BMI, more preoperative risk factors, and a larger defect size. Drained patients furthermore showed a significant disadvantage in the outcome parameters intraoperative complications, general complications, postoperative complications, complication-related reoperations, and pain at the 1-year follow-up. No significant difference was observed with respect to the recurrence rate. Conclusion With 71.3%, the use of surgical drainages has a high level of acceptance in elective open incisional hernia operations. The worse outcome of patients is associated with the use of drains, independent of other influencing factors in the model such as patient or surgical characteristics. The use of drains may be a surrogate parameter for other unobserved confounders.
Introduction
Surgical drains have a long history in medicine as an integral part of the therapeutic concept [1]. Already since the mid-1800s, the use of drains in gastrointestinal surgery has widely been practised. Lawson Tait, a nineteenth century surgeon, even coined the dictum "When in doubt, drain" [2], but in practice the situation turns out to be much more complex and leaves the decision of drain usage to the surgeon's perception of the overall situation. In open ventral hernia repair, drains are traditionally placed to avoid seroma and hematoma formation by facilitating fluid drainage [3]. The prophylactic placement of drains has, however, aroused much controversy, as studies have been published indicating that drains often fail to protect against seromas and may even contribute to infectious complications [4]. Traditional intra-abdominal and subcutaneous drains were also assessed within the context of optimizing perioperative management, which began with the fast-track concept of Kehlet in 1995 for colon surgery and the ERAS (enhanced recovery after surgery) management in 2005, and their avoidance was recommended in the case of questionable protective effects [5-7]. But what are the consequences for clinical practice? Already in the past, differences between the current status of research and patient care in clinical routine [8] have been observed, leading to a more differentiated view concerning the interpretation of the respective results. Accordingly, a thorough assessment of the quality of care in hernia surgery within the framework of clinical health services research is a prerequisite that contributes an essential element to the further development of optimized therapies in everyday clinical practice. Based on data from the Herniamed Hernia registry, we evaluated the reality of care in elective open incisional hernia operations in this study, with a particular focus on the utilization of drains.
Material and methods
We evaluated prospectively collected data from 836 centres in Germany, Austria and Switzerland from the internet-based Herniamed Hernia registry and included operated patients from January 5, 2009 to December 31, 2020 with a completed 1-year follow-up visit in this evaluation. The inclusion criteria were elective incisional hernia operations with open procedures (open direct suture, open onlay, open sublay, open intraperitoneal onlay mesh (IPOM), component separation). Exclusion criteria were incompletely documented cases, invalid age information, patients under the age of 16, and the use of non-approved meshes. Senior or high-risk patients were not excluded. All patients signed a consent form agreeing to the processing of their data [9]. Baseline demographic data included age, gender, BMI (body mass index), and ASA (American Society of Anesthesiologists) score. In addition to the surgical methods mentioned above, the use of drains, EHS (European Hernia Society) classification, mesh implantation, pre- and postoperative pain, and recurrences were recorded. Single outcome and influencing variables (risk factors, complications) were summarized as global variables. A general, intra- or postoperative complication or risk factor was considered present if at least one single item applied.
Plausibility assessment
A plausibility check was performed to confirm the presence of a correct data set with patient master and operation data. Furthermore, the plausibility of length-of-stay data, information on surgery time and mesh size, age, weight, height, BMI, and follow-up data was verified.
Statistical analysis
All analyses were performed using SAS 9.4 software (SAS Institute Inc., Cary, NC, USA). A p-value of ≤ 0.05 was considered statistically significant. Univariate descriptive statistics were performed for the comparison of drain use (yes vs. no). All categorical patient data are presented in absolute and relative counts, while mean and standard deviation (SD) are shown for continuous data. Unadjusted analyses were carried out to assess the effect of individual influencing variables on an outcome parameter. The Chi-square test was used for categorical target variables, and the robust t-test (Satterthwaite) was used for continuous variables. Multivariable analyses were performed using the binary logistic regression model. All pair-wise odds ratios are given with the corresponding 95% confidence intervals. To rule out a potential bias in the selection of the analysis population (patients with 1-year follow-up) compared to patients without follow-up, standardized differences were estimated for the two populations.
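To illustrate how odds ratios and their 95% confidence intervals fall out of a binary logistic regression model (the registry data themselves are not public, so the example below runs on synthetic data; variable names and coefficients are purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
drain = rng.integers(0, 2, n).astype(float)    # drain use yes/no
bmi = rng.normal(29, 5, n)                     # body mass index
# Synthetic outcome: complication risk rises with drain use and BMI
logit = -4.0 + 0.5 * drain + 0.05 * bmi
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([drain, bmi]))
fit = sm.Logit(y, X).fit(disp=0)

# Odds ratios with 95% confidence intervals: exp of the coefficients
or_ = np.exp(fit.params)
ci = np.exp(np.asarray(fit.conf_int()))
for name, o, (lo, hi) in zip(["const", "drain", "BMI"], or_, ci):
    print(f"{name}: OR = {o:.3f} [{lo:.3f}; {hi:.3f}]")
```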
Patient and operation characteristics
Between January 5, 2009 and December 31, 2020, data from 39,523 patients who underwent elective open incisional hernia surgery with a completed 1-year follow-up were entered into the Herniamed Registry (Fig. 1). Drains were used in 28,182 patients (71.31%) undergoing elective surgery, while 11,341 patients (28.69%) did not receive a drain. Drained patients had an average age of 63.6 ± 12.8 years (mean ± SD) and were thus significantly older than patients without drain use, who had an average age of 59.8 ± 15.1 years (p < 0.001). Additionally, the BMI was significantly higher in patients with compared to patients without drains (29.8 ± 5.9 vs. 27.9 ± 5.4, p < 0.001) (Table 1).
In the unadjusted analysis of the relationship between the two patient groups (drain vs. no drain) with respect to patient and operation characteristics, the expression of almost all variables differed significantly. Only with respect to gender, no statistically significant difference could be observed (p = 0.441) (Table 1). In the detailed evaluation of the unadjusted analyses concerning items relevant for general complications, significant differences between the two patient groups for fever (p < 0.001) and pulmonary embolism (p = 0.007) were detected. For thrombosis, p = 0.059. The unadjusted analysis results of the relationship between postoperative complications and drain use are presented in Table 2. Significant differences in the topic-specific items seroma, wound healing disorder, and infection were observed (p < 0.001 each).
Intraoperative complications in logistic regression analyses
The risk of intraoperative complications was significantly associated with defect size, surgical procedure, drain use, age (p < 0.001 each), and recurrence (p = 0.006). Specifically, intraoperative complications occurred more frequently in larger defects, the surgical procedures open direct suture and open IPOM, drained patients (OR, odds ratio = 1.902 [1.483; 2.438]), elderly patients, and patients with recurrences (Table 3).
General complications in logistic regression analyses
The general complications were significantly related to defect size, EHS classification (lateral), the need for drains (p < 0.001 each), and, as a trend, also BMI (p = 0.077). The risk of general complications was increased by larger defects (Table 4).
Postoperative complications in logistic regression analyses
The occurrence of postoperative complications was significantly associated with the defect size, BMI, the presence of risk factors, surgical method, EHS classification, the use of drains, ASA classification, and age (p < 0.001 each). A larger defect, a higher BMI, the presence of at least one risk factor, component separation and open IPOM, the use of drains (OR = 1.366 [1.230; 1.517]), a higher ASA score, and older age increased the risk for postoperative complications (Table 5).
Complication-related reoperations in logistic regression analyses
The risk of reoperation was significantly associated with defect size, the presence of risk factors, the use of drains, EHS classification, BMI, surgical method, and ASA classification (p < 0.001 each). The complication-related reoperation rate was significantly higher when drains were used (OR = 1.632 [1.385; 1.924]). In addition, a larger defect, the presence of a risk factor, a higher BMI, component separation, and a higher ASA score also increased the risk of reoperation (Table 6).
Results of the 1-year follow-up in logistic regression analyses
The risk of recurrences at the 1-year follow-up was strongly related to previous recurrences, the surgical method (e.g. open onlay), EHS classification, higher BMI, larger defect size (p < 0.001 each), the use of meshes (p = 0.001), the ASA score (p = 0.002), female gender (p = 0.004), higher age (p = 0.031), and preoperative pain (p = 0.050). No significant relation could be shown for the use of drains (p = 0.650) (Table 7). Pain at rest at the 1-year follow-up was significantly associated with higher age, preoperative pain, female gender, postoperative complications, EHS classification, higher BMI, prior surgeries, drain use, and larger defect size (Tables 8-10).
Standardized differences for patients with and without follow-up
The results of the standardized differences for patients with (n = 39,523) and without (n = 22,361) follow-up verified that there was no bias in the patient selection of the analysis population. Patients in the analysis population were on average 3.3 years older, received a mesh more often, and were less frequently operated on with direct sutures. For these variables, the standardized difference was above the reference value of 10%. For all other variables, including the complication rates, standardized differences of less than 0.1 were found, thus indicating no bias in patient selection.
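The standardized difference compares group means (or proportions) in units of the pooled standard deviation and, unlike a p-value, does not grow with sample size; values below 0.1 (10%) are conventionally read as negligible imbalance. A minimal sketch, with illustrative inputs:

```python
import math

def std_diff_continuous(m1, s1, m2, s2):
    """Standardized difference of means for a continuous variable."""
    return (m1 - m2) / math.sqrt((s1 ** 2 + s2 ** 2) / 2)

def std_diff_binary(p1, p2):
    """Standardized difference for a proportion (binary variable)."""
    return (p1 - p2) / math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)

# Example: the analysis population was on average 3.3 years older;
# the standard deviations below are placeholders for illustration.
print(std_diff_continuous(63.0, 13.5, 59.7, 14.5))   # > 0.1 -> flagged
```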
Discussion
Should surgeons, in case of doubt, use drains or not in elective open incisional hernia surgery? It is beyond dispute that surgical drains help to remove excess fluid, which is assumed to reduce wound-related complications and seroma formation, but these advantages may nevertheless be counterbalanced by certain downsides like an increased risk of infections and postoperative pain. To shed more light on this question, we performed a Herniamed registry-based evaluation of prospectively collected data of 39,523 patients, which is so far the most comprehensive quality assurance study in Germany. The influence of drains on the outcome of hernia operations has already been examined in several controlled randomized trials and meta-analyses in the past [10,11]. A registry analysis, however, enables an analysis of the clinical results as part of health services research and points out possible differences between the current status of research and patient care in clinical routine. This analysis of a large clinical data basis is thus an important contribution to understand the "real world" effect of a treatment outside the tightly controlled environment of randomized trials [8].
Our investigations covered the period from 2009 to 2020 with 39,523 elective open incisional hernia operations, during which 28,182 patients (71.31%) received a drain. The high frequency of drain use in more than 2/3 of the patient collective clearly mirrors the high acceptance of drainages in clinical routine. The unadjusted analysis of the relationship between drain use, patient variables, and operation characteristics shows that the expression of almost all features differed significantly. Only with respect to gender, no difference was observed. Drained patients had a significantly less favourable risk profile, being older, with a higher BMI, more risk factors, and larger defects. Most studies analysing the influence of drains investigated similar outcome criteria like local complications, particularly bleeding and seroma formation, surgical site infections (SSI), surgical site occurrences (SSO), and surgical site occurrences requiring procedural interventions (SSOPI) [3,10,12,13]. All of these studies point to the fact that single influencing factors are difficult to extract, since complications in elective open incisional hernia surgery are caused by numerous parameters, which was also the case in our study. We carried out eight multivariable analyses (intraoperative complications, general complications, postoperative complications, complication-related reoperations, recurrence at 1-year follow-up, pain on exertion at 1-year follow-up, pain at rest at 1-year follow-up, pain requiring treatment at 1-year follow-up). With the exception of recurrences in the follow-up, the use of drains was in each case associated with a significantly higher incidence of complications and higher pain rates. The multivariable analyses also showed a significant association of defect size, ASA and EHS classification with these outcome parameters. Placing the focus on subject-specific criteria for drain use such as the influence of local complications, the data situation remains quite heterogeneous in the literature and reveals no clear evidence of a protective effect of drains on seroma formation. Miller et al. compared the outcome of 580 patients each with or without drainage, similar hernia size, and robotic surgery with respect to seroma formation at 30 days and found a significantly decreased postoperative seroma occurrence of 3.8% in the group with drainage vs. 15.2% in the group without (p < 0.0001) [12]. No significant difference with respect to the use of drains was observed by Westphalen et al. [11], who assessed the seroma frequency in 21 patients per group with a non-significant hernia defect size difference and the exclusion of ASA III-IV patients at three different postoperative ultrasound (US) time points, with seroma frequencies between 19.0 and 52.4% with drain vs. 28.6-57.1% without drain (p = 0.469 for early postoperative US; p = 0.852 for late US). In an RCT by Willemin et al. [10], fluid collection at 30 days was reported in 60.3% of the drain group patients vs. 62.0% (p = 0.844) without drain after open mesh repair, indicating that drains failed to reduce the rate of postoperative fluid collections that might contribute to seroma formation. In our analysis of the clinical care situation with negative selection of the patient population and hernia characteristics as well as more complex hernia surgeries, we observed significantly more seromas when drains were used, even though the rate of seroma formation was generally low (4.8 vs. 2.2% without drain, p < 0.001).
In addition to SSOs like seroma formation, the effect of drain use on SSIs and SSOPI was also investigated as a decisive factor. Several studies suggest that the use of drains increases the risk of SSIs, while others found no significant difference in infection rates with or without drains. This became particularly evident in data of the Americas Hernia Society Quality Collaborative [12,13] and in a recent RCT reporting comparable site infection rates in both groups [10]. Westphalen et al. reported no significant difference with or without drain use concerning surgical wound infections [11].
Even in the most recent literature, the data situation shows a heterogeneous picture. In a meta-analysis of ventral hernia repair by Mohamedahmed et al. (2023), drained patient groups had higher SSI rates and longer total operation times in eight studies involving 2568 patients, but no significant advantage was seen in terms of wound-related complications [14]. Marcolin et al. (2023) published a meta-analysis for retromuscular ventral hernia repair with four studies involving 1,724 patients and found no differences in SSI, hematoma, SSO, or SSO requiring procedural intervention, but the group with drain placement had significantly fewer seromas [15]. Our evaluation of the care situation, however, revealed a significant difference in the patient group with drain vs. without concerning SSI (1.7% vs. 0.7%, p < 0.001) and SSO (3.2% vs. 1.3%, p < 0.001). In addition, the complication-related reoperation rate was significantly increased when drains were used (OR = 1.632 [1.385; 1.924]).
Relationships not evaluated in our analysis are the influence of the time point of drain removal or the prolonged prophylactic use of antibiotics on the SSI and SSO, which were investigated by Plymate et al. [16]. Only a BMI of > 35 represented a predictor of wound occurrence in their study.
Other authors found only little persuasive evidence for a prolonged antibiotic use to reduce SSI and SSO [17,18].

Drains were used in 71.3% of elective open incisional hernia operations between 1/2009 and 12/2020, which indicates a high level of acceptance in the clinical care situation. We assume that a less favourable risk profile of patient and hernia characteristics leads to a negative selection when drains are used. In the following, a significant association with a higher risk of complications and pain is observed for all target parameters with the exception of recurrences. Similar results were also reported in a registry-based multivariable analysis by Schaaf et al., who observed more intraoperative complications, general complications, and complication-related reoperations in patients with drains. In their study, also larger defect size and BMI were unfavourably associated with postoperative complications, recurrences, and pain [19]. From a clinical point of view, it is difficult to extract the separate effect of drainages on the complications, as the multivariable analyses showed that these were significantly influenced in all outcome measures by numerous other variables such as patient or surgical characteristics. It became apparent in our analysis, however, that the use of drains is significantly associated with a higher occurrence of SSI and SSO in the clinical routine, especially if patients with higher BMI and larger defects are concerned. Taken together, drains are currently used in over 70% of elective open incisional hernia surgeries, based on various criteria such as the complexity of the procedures, hernia characteristics, or patient constitution. Despite adjusting for other influencing variables in the model (independent of patient and surgical characteristics), we observed a significant association between outcomes and drain usage. The poorer patient outcomes are associated with the use of drains, regardless of other factors in the model such as patient or surgical characteristics. However, the use of drains may serve as a surrogate parameter for other unobserved confounding factors. These results should prompt a re-evaluation of the predominantly "traditional" use of drainage and encourage careful case-by-case assessment. Further investigations are required as the data situation still remains heterogeneous.
Limitations
Our study has a number of important strengths. The data used in this article are the largest quality-assured data pool in Germany, Austria and Switzerland, covering the period from 2009 to 2020; the statistical power to detect changes is thus high. In general, it should be noted that effects that have been proven to be significant do not necessarily have to be clinically relevant as well, since even very small differences can be statistically significant due to the relatively large number of cases. A limitation of this study is the rate of missing follow-up examinations. In accordance with the selection criteria of the Herniamed registry (see flowchart in Fig. 1), patients with non-incisional hernias, incomplete entry-state keys, operations performed using the laparoscopic technique, patients under 16 years of age, emergency operations, patients with Physiomesh, operation dates after December 31, 2020, and patients without 1-year follow-up were excluded. The lack of follow-ups (drop-out) for a relevant proportion is another limitation of the registry, but the subgroup analysis does not show any selection bias (Fig. 2). Our analysis was a project in clinical health services research.
Fig. 1 Flowchart of patient inclusion. All hernia operations after processing of data from export on January 31, 2022 (n = 973,469 by 836 centres)
Table 2
Unadjusted analysis of the relationship between postoperative complications and drain use
Table 4
Results of the multivariable analysis for general complications including odds ratios with corresponding 95% confidence interval
Table 5
Results of the multivariable analysis for postoperative complications including odds ratios with corresponding 95% confidence interval
Table 6
Results of the multivariable analysis for complication-related reoperations including odds ratios with corresponding 95% confidence interval
Table 7
Results of the multivariable analysis for recurrence in the follow-up including odds ratios with corresponding 95% confidence interval
Table 8
Results of the multivariable analysis for pain at rest in the follow-up including odds ratios with corresponding 95% confidence interval
Table 9
Results of the multivariable analysis for pain on exertion in the follow-up including odds ratios with corresponding 95% confidence interval
Table 10
Results of the multivariable analysis for pain requiring treatment in the follow-up including odds ratios with corresponding 95% confidence interval
Mixed reality temporal bone surgical dissector: mechanical design
Objective The development of a novel mixed reality (MR) simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces, and while 3D printed models convincingly represent vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model, where the effective elements of both simulations are combined; haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancellation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill and the virtual contact forces need to be repositioned to the drill tip from the mid-wand. Previous publications detail the generation of both the requisite printed and haptic simulations. Method Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting, to hold the otic drill, was developed and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free space. Within the free space, a linear virtual force model is applied to simulate drill contact with soft tissue. Results Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill's passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. Conclusion These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone simulator.
Introduction
Concern for patient safety and outcomes underscores the importance of the ever-increasing pace of development in medical simulation.
Simulators are now ubiquitous in training [Figure 1]. Interactive computer-driven simulations provide a safe training environment for learning anatomy and procedures. Several haptic temporal bone simulators are currently available [1-5]. A haptic device is a robotic system designed to apply forces through an end-effector. As a virtual tool comes into contact with virtual tissues, reaction forces are simulated. Unfortunately, owing to fundamental limitations in mechanical design, existing haptic simulations are unable to realistically reproduce the vibration and contact forces experienced during surgery.
Printed models provide another approach to temporal bone simulation and can be created from virtual models using a variety of techniques. A plaster-based printer (Z-Corporation, Rock Hill, SC) uses digitized data and places successive layers of material, building a physical model (additive manufacturing). The printed models provide a realistic sense of hardness, but are ill-equipped to represent soft tissues [6].
Each of the former simulations has been generated in our lab [7]; however, there is an opportunity to address the limitations of both with an entirely new mixed-reality paradigm. Mixed reality (MR) techniques fuse digital data with the human perception of the real environment. The MR simulation will combine a physical printed bone model with a virtual soft-tissue model. The virtual model will employ a haptic device (HD2, Quanser, Markham, Ontario) that provides real-time contact force representation. This device is a 6 degree of freedom (DOF), 10 newton per axis manipulandum. The haptic device will generate forces encountered during interaction with virtual soft tissues, such as decompression of a sinus or dural plate, as well as permit metric assessments. The resulting MR model should provide a platform that is capable of both the realistic force-feedback and visualization surgical trainees require.
Developmental steps in the generation of an MR simulator: The challenges in the creation of a novel MR temporal bone simulation are considerable. This paper addresses just one aspect, specifically the mechanical merger of a six DOF haptic device with an otic drill system (Medtronic, Minneapolis MN). This is accomplished with a custom gripper mechanism. In this structure, the drill becomes the haptic end-effector.
In the MR simulation, a user must be able to employ the drill as they would in surgery. The merged haptic drill device needs to successfully navigate three different contact conditions during device operation: movement in free space, contact with rapid-prototyped bone, and contact with virtual soft tissue. Throughout these conditions and in transitions between them, the user should feel only the weight of the drill. Further, real and virtual forces need to be displaced to the drill tip from the normative mid-shaft of the haptic end-effector.
Methods
We have developed a custom fitting for the haptic device which holds an otic drill. The fitting consists of several pieces [Figure 2] and secures the drill, which then acts as the haptic end-effector.
Contact forces with virtual objects are normally applied at the mid-point of the haptic end-effector. A user would feel that contacts are coming from this location, at the point where the end-effector is grasped. In order for the user to feel that the forces are the result of drill interactions with the virtual environment, the contact forces must be relocated to the tip of the drill. This is accomplished in software, adjusting the haptic device force and torque outputs to take drill length, position, and orientation into account [Figure 3]. The result is natural-feeling force-feedback during bone drilling.
Software was also developed to offset gravitational forces, permitting the user to feel only the weight of the drill and not the custom fitting. The software guides the haptic device motors to dynamically generate an upwards force, equal to that of the drill fittings, compensating for their additional weight. A series of tests were performed to ensure the operability and stability of the re-designed system under normative conditions. The first set of tests determined the ability of the system to generate sufficient power to offset the weight of the custom end-effector.

A second set of tests determined the effectiveness of the system in position control mode during disturbances from the user.

The third set of tests examined system stability during drilling with a simple MR model [Figure 4]. The model consists of two drillable rapid-prototyped layers, separated by a free space. Within the free space, a virtual spring force model is applied by the haptic system. Force feedback increases steadily with depth of penetration through the free space. The real and virtual models are registered/co-located to each other so that, as the drill penetrates the first real layer of the model, only gravity compensation is engaged. When the empty space below the first layer is encountered, the virtual spring model is additionally employed. Once the drill has passed through the empty gap, it again encounters the real model and only gravity cancellation is engaged.

Figure 3 Converting haptic interaction point to drill tip from mid-wand. Haptic contact forces should be felt at the drill tip rather than at the mid-point of the haptic wand. This necessitates changes to the forces created by the haptic device, where × is the cross product, t the distance between drill tip and haptic wand mid-point, and l is the handle length.

Figure 4 Stability assessment while dissecting a simple mixed reality model. a) represents a mixed reality model with HD2, otic drill, and gripper assembly. Note the simple 3-layer rectangular printed model. b) shows the vertical path of the drill through the model with labels at interfaces between real and virtual modes of operation. At position A, the drill is in free space, with only gravity compensation forces generated by the haptic device. This results in a steady 2.5 N z-axis force. The user moves the drill down to interact with the printed model surface at location B. From B to C, the user encounters real forces from drilling in addition to the constant gravity compensation forces. From C to D the drill is between real bone model layers and a virtual soft tissue force is generated by the haptic device, increasing with depth of penetration. From D to E, the drill is engaged with the second real bone model layer and the virtual force is off. At E, the drill enters the free space below the second bony layer where only gravity compensation forces are present. c) shows a recording of the drill's z-axis position in metres (upper axis) and force in Newtons (lower axis) generated by the haptic device. The system remains stable throughout, without high-frequency or underdamped oscillations at boundaries between models.

Figure 5 Operation with gravity compensation throughout the device workspace. These graphs show user manipulation of the mixed-reality system within its nominal workspace (x, y, z position in metres, top three axes). Gravity compensation for the gripper is provided by z-axis forces (F_z in Newtons, bottom axis) and is present between times A and B. No instability is seen, as manifested by the absence of high frequency or undamped oscillations in the device position recordings.
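The relocation of contact forces to the drill tip, the gravity offset, and the virtual spring combine into a single commanded wrench at the wand mid-point. The sketch below shows one way these computations fit together; the mass, stiffness, and offset values are illustrative placeholders, not the actual controller gains:

```python
import numpy as np

def commanded_wrench(f_tip, r_tip, m_fitting, g=9.81):
    """Force/torque to command at the haptic wand mid-point.

    f_tip     : virtual contact force to render at the drill tip (N);
                zero while drilling real bone, spring force in the gap
    r_tip     : vector from wand mid-point to drill tip (m)
    m_fitting : mass of the custom drill fitting to cancel (kg)
    """
    f_gravity = np.array([0.0, 0.0, m_fitting * g])  # constant upward force
    force = f_tip + f_gravity
    # Adding this torque makes the rendered wrench equivalent to
    # f_tip applied at the drill tip rather than at the mid-wand.
    torque = np.cross(r_tip, f_tip)
    return force, torque

def virtual_spring(depth, k=500.0):
    """Linear soft-tissue model: force grows with penetration depth.
    k is an illustrative stiffness, not the simulator's actual gain."""
    return np.array([0.0, 0.0, k * max(depth, 0.0)])

# Example: drill tip 2 mm into the virtual gap below the first bone layer
f, t = commanded_wrench(virtual_spring(0.002),
                        r_tip=np.array([0.0, 0.0, -0.08]),
                        m_fitting=0.26)
print(f, t)  # ~2.5 N gravity cancellation plus 1 N virtual spring force
```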
Results
Gravity cancellation of the gripper assembly was effective without visible drift throughout the nominal device workspace [Figure 5]. A force of 2.5 N in the z-axis, 0.1 Nm pitch-torques, and 0.02 Nm roll-torques were required, less than the maximum continuous output capabilities of the device (7.67 N z-axis and 0.948 Nm). The system remained stable during random input from experimenters during operation of the otic drill up to its maximum of 42,000 rpm. Stability of the system during the drill's passage through a MR model was monitored [Figure 4]. No issues with registration or instability at model boundaries were encountered.
Discussion
The integration of printed and virtual simulation provides a platform which can represent otic drill forces as well as contact of the drill (or potentially other instruments) with embedded soft tissue structures and facilitate metric assessments. A MR system can complement existing simulations in creating a realistic and reproducible surgical training platform, which can be used to teach/assess trainees. The system permits mistakes and explorations of technique and represents an immense opportunity to improve patient safety. Several significant challenges remain to be addressed. Highly accurate co-location of the virtual and printed simulations is requisite and will be complicated by the need to change the position of the simulation during dissection, as in real surgery. The virtual force algorithms need to be modified for the HD2 haptic as they were initially developed for a 3 DOF device. Further, investigations of construct and concurrent validity are necessary.
These tests provide evidence for the effective mechanical and software design of a novel system, integrating an otic drill with an existing haptic device. The new system provides gravity compensation and operational stability during interactions with a simple mixed-reality model.
High-Throughput Raman Spectroscopy Combined with an Innovative Data Analysis Workflow to Enhance Biopharmaceutical Process Development
Raman spectroscopy has the potential to revolutionise many aspects of biopharmaceutical process development. The widespread adoption of this promising technology has been hindered by the high cost associated with individual probes and the challenge of measuring low sample volumes. To address these issues, this paper investigates the potential of an emerging new high-throughput (HT) Raman spectroscopy microscope combined with a novel data analysis workflow to replace off-line analytics for upstream and downstream operations. On the upstream front, the case study involved the at-line monitoring of an HT micro-bioreactor system cultivating two mammalian cell cultures expressing two different therapeutic proteins. The spectra generated were analysed using a partial least squares (PLS) model. This enabled the successful prediction of the glucose, lactate, antibody, and viable cell density concentrations directly from the Raman spectra without reliance on multiple off-line analytical devices and using only a single low-volume sample (50–300 μL). However, upon the subsequent investigation of these models, only the glucose and lactate models appeared to be robust based upon their model coefficients containing the expected Raman vibrational signatures. On the downstream front, the HT Raman device was incorporated into the development of a cation exchange chromatography step for an Fc-fusion protein to compare different elution conditions. PLS models were derived from the spectra and were found to predict monomer purity and concentration accurately. The low molecular weight (LMW) and high molecular weight (HMW) species concentrations were found to be too low to be predicted accurately by the Raman device. However, the method enabled the classification of samples based on protein concentration and monomer purity, allowing a prioritisation and reduction in samples analysed using A280 UV absorbance and high-performance liquid chromatography (HPLC). The flexibility and highly configurable nature of this HT Raman spectroscopy microscope make it an ideal tool for bioprocess research and development, and it is a cost-effective solution based on its ability to support a large range of unit operations in both upstream and downstream process operations.
Introduction
Process analytical technology (PAT) has been a major talking point within the biopharmaceutical sector since the release of the FDA's guidance for industry on PAT in 2004 [1]. The guidance encouraged a shift away from a fixed process that could result in product quality deviations towards an adaptive process ensuring a consistent product quality through sensors supporting advanced control strategies. Although there has been significant progress made towards achieving this goal, commercial manufacturing still heavily relies upon laborious off-line analytics and rudimentary control strategies. A major issue is the disconnect between early-stage research and development (R&D) activities and late-stage commercial manufacturing operations within the therapeutic drug lifecycle. These operations differ widely in terms of scale, where R&D operations utilise small volumes in the range of µL to L and commercial manufacturing processes operate with volumes in the range of 500 to 20,000 L. This large-scale difference can limit the universal application of a proposed PAT technology in the early stages of the drug development pipeline, therefore reducing the adoption of these core technologies within late-stage process development and commercialisation. To help bridge this gap, this paper focuses on the application of multivariate data analysis (MVDA) to better leverage at-line spectral measurements generated by a novel Raman spectroscopy microscope and utilise this information to support R&D activities within the biopharmaceutical sector.
USP Monitoring and Analytics
Therapeutic proteins are highly complex and fragile molecules, and any upstream process deviations can lead to changes in the physiochemical, biological, and immunogenic properties of the molecule [2]. Therefore, controlling the bioreactor micro-environment is of paramount importance to ensure the product heterogeneity remains within a predefined specification defined by commercial good manufacturing practice (GMP) operations. To support this objective, the bioreactor is monitored and controlled through three classes of measurements, defined as on-line/in-line, at-line, or off-line [3]. The physical environment within the bioreactor, which includes the pH, temperature, and dissolved oxygen, utilises accurate and well-established on-line measurements with minimum time delays, enabling the real-time control of these variables. At-line and off-line monitoring involves removing a physical sample from the bioreactor for measurement in a separate analyser, enabling the monitoring of the chemical and biochemical environment. At-line measurements infer the close proximity of the analyser to the bioreactor, resulting in a shorter delay in the measurement availability, ranging from minutes to hours, whereas off-line measurements involve longer delay times of up to a day or even a week, depending on the analyser and delay before processing the sample. The traditional monitoring of upstream processing (USP) activities requires three different analysers: The first is an at-line biochemical analyser that measures the daily metabolite concentrations, including glucose and lactate, and typically takes between 5 and 10 min per sample. The second is an at-line cell counter that measures the cell densities and viabilities every 1 or 2 days with an analysis time of 5-10 min per sample. The third analyser is an off-line protein A HPLC column that measures the protein concentration and product quality; this is the slowest analytical device, as the sample needs to be purified before loading onto the column, and these measurements are typically only available after the experiment is complete. The manual sampling procedure and slow response time of these analysers limit the ability of these measurements to be used for control strategies. The additional drawback of these traditional methods is that they are destructive and therefore require separate samples for each analyser.
A recent report targeting the biopharmaceutical industry has identified and prioritised the top bioreactor variables requiring investment in at-line/in-line monitoring technologies to facilitate effective in-process control strategies [4]. This priority list includes the cell viability, viable cell density (VCD), glucose concentration, amino acids, and product concentration, defined as the bioreactor's critical process parameters (CPPs), in addition to the glycosylation profile and charge profile, defined as the critical quality attributes (CQAs). These variables are all currently measured by slow and laborious off-line analytical analysers. PAT aims to change the reliance of biopharmaceutical companies on these slow off-line analytical measurements.
PAT within USP Mammalian Cell Cultures
PAT ultimately aims to integrate aspects of chemical, physical, microbiological, mathematical, and risk analyses to ensure robust biopharmaceutical operation [5]. There are various on-line and in-line PAT technologies suitable for USP mammalian cell culture monitoring. Capacitance sensors have gained popularity recently and provide robust measurements of viable cell density. These probes measure the radio frequency impedance in the cell broth and can distinguish between live and dead cells based on the assumption of living cells having intact spherical cell walls. Konakovsky et al. demonstrated the ability of capacitance probes to accurately predict VCD for mammalian cell cultures using a robust partial least squares (PLS) model that could be transferred between clones and across scales [6]. The majority of other PATs are spectroscopic, and are advantageous due to their ability to measure multiple components within the bioreactor. Examples include infrared spectroscopy, which measures reflectance, transflectance, and transmission events during near-infrared (NIR) or mid-infrared (MIR) irradiation. NIR was demonstrated by Hakemeyer et al. to predict numerous mammalian cell culture variables, including the cell viability, product concentration, glucose, and lactate, with the predictions validated across multiple scales, ranging from 2.5 to 1000 L [7]. Additionally, Sandor et al. compared the ability of NIR and MIR for mammalian cell monitoring and concluded that MIR has a higher accuracy for individual analyte concentrations, such as glucose and lactate, but recommended NIR as a better tool for the on-line monitoring of mammalian cell systems based on its ability to measure cell densities through light scattering effects, which is not possible with MIR excitation [8]. Ultraviolet and visible spectroscopy is another tool and one of the oldest forms of spectroscopy based upon the Beer-Lambert law. However, this technology has limited demonstrations within USP which have primarily focused on predicting the total cell density, as shown by Ude et al. [9]. Fluorescence spectroscopy is another valuable tool that takes advantage of the fluorescence nature of many biological components excited by visible or UV light. Typically, within mammalian cell cultures, 2D fluorescence spectroscopy is employed. Ohadi et al. demonstrated the ability of this technique to predict numerous variables, including the product concentration and glucose, and also to distinguish between living and dead cells, therefore making it an attractive tool for the in situ monitoring of mammalian cell cultures [10]. Another major advantage of this technology is the ability to exploit the auto-fluorescence nature of various variables, such as amino acids, vitamins, and proteins, including selected molecules or targets tagged using fluorescence markers. Calvet et al. used a type of fluorescence spectroscopy called Excitation Emission Matrix spectroscopy to generate a three-dimensional contour plot of the excitation wavelength vs. emission wavelength vs. fluorescence. The method generates accurate fingerprints for multicomponent systems and was demonstrated by Calvet et al. to identify the composition changes of tryptophan and tyrosine in a complex media applicable to industrial mammalian cell cultures [11]. The primary benefit of these spectroscopic tools is that they are non-destructive, non-invasive, and highly informative, making them highly suitable for mammalian cell culture monitoring.
Applications of Raman Spectroscopy in USP
In comparison to other spectroscopic methods, Raman spectroscopy has gained attention in recent years due to its suitability for the analysis of aqueous samples, owing to its low water interference and high specificity. Raman spectroscopy excites the sample using a monochromatic light source; the inelastic scattering of this light produces small vibrational frequency shifts that generate Raman spectra containing quantitative and qualitative information, including the composition, chemical environment, and structural information related to the sample. The major challenge with Raman spectroscopy is the low signal-to-noise ratio, due to the weak Raman scattering signal relative to the incident light. These weak signals can be corrupted by strong fluorescence signals associated with the analysis of biological samples [12]. Alternative methods such as Surface-Enhanced Raman Spectroscopy (SERS) have been developed to overcome these issues. Different types of Raman spectroscopy, including resonance Raman spectroscopy, Raman optical activity, and SERS, as well as their applications, are extensively described in a review by Buckley and Ryder [13].
Within USP, the biopharmaceutical industry has focused on Raman spectroscopy as one of the leading PAT technologies to monitor fermentation systems cultivating different expression systems, such as bacterial [14], fungal [12], and mammalian cell culture systems, including NS0 and HEK293 cell lines [15,16]. However, the majority of Raman spectroscopy monitoring operations focus on Chinese hamster ovary (CHO) mammalian cell lines, and have demonstrated the ability of this technology to predict the previously prioritised CPPs of glucose, lactate, VCD, and product concentration [15,17,18]. The proven ability to monitor these variables has led to the development of adaptive control strategies, as demonstrated by Craven et al., who applied a nonlinear model predictive controller to maintain the glucose concentration of a mammalian cell culture at a fixed set-point [19]. Additional control demonstrations include the application of a closed-loop control strategy using in-line Raman spectroscopy to minimise lactate accumulation through glucose feed rate additions, which resulted in the additional benefit of increasing the product concentration by approximately 85% [15]. More recently, Raman spectroscopy has shown promise as a replacement tool for the pH control of mammalian cell cultures [20]. However, the feasibility of this is questionable, given the relatively long acquisition time of each Raman spectrum, which was between 16 and 20 min, in comparison to traditional pH probes with a response time of seconds. Raman spectroscopy has also been used to monitor the glycosylation patterns of a monoclonal antibody, which, as previously discussed, is a high-priority CQA. Li et al. utilised Raman spectroscopy for the real-time monitoring of a monoclonal antibody, and were able to distinguish between glycosylated and non-glycosylated molecules [21]. The ability to monitor product quality in real-time opens up significant opportunities for USP optimisation and advanced feedback control solutions.
DSP Monitoring and Analytics of Mammalian Therapeutic Products
Typical PAT for downstream processing predominantly includes various sensors for the single-variable monitoring of CPPs. These include pH and conductivity probes, pressure and flow rate sensors, and UV spectroscopic measurements, among other sensors, which are analysed by means of univariate analysis and/or operator knowledge. Additional information, especially regarding the CQAs of the product, is obtained through off-line analysis, which allows the determination of process- and product-related quality attributes. Whereas univariate monitoring is suitable for tracking process variables, it rarely carries enough information to measure CQAs such as product multimers; product charge variants; host cell-related impurities, such as host cell proteins (HCPs), DNA, and lipids; and impurities such as resin leachables. CQAs are therefore monitored using univariate off-line/at-line analytical techniques, with efforts being made to enable on-line implementation.
Multiple antibody CQAs, such as the high molecular weight (HMW) and low molecular weight (LMW) species content, charge, and glycosylation variants, are most commonly measured at-line/off-line by HPLC using various column chemistries, including size-exclusion (SE-HPLC) and ion-exchange (IEX-HPLC) liquid chromatography. In SE-HPLC, different-sized product species are separated based on their differential retention times, and content percentages are calculated as the area under the curve (AUC) for peaks corresponding to the UV absorption signal of the elution product [22]. As standard SE-HPLC has long run times (commonly 20 min per sample), efforts have been made to increase the speed of analysis and implement SE-HPLC-based methods on-line [23]. Decreasing the time needed for sample analysis to less than 5 min was made possible by utilising ultra-high-performance liquid chromatography (UHPLC) with sub-2 µm particles [24,25], and these approaches were further developed by coupling to mass spectrometry [26]. Alternative approaches to UHPLC have also been described, such as membrane chromatography, with an analysis time as low as 1.5 min [27]. The real-time analysis of aggregates and charge variants during cation exchange (CEX) chromatography using HPLC has been described by Tiwari et al. [28]. Patel et al. describe the use of on-line UHPLC for the detection of charge variants in continuous processes, which can be adapted for aggregate analysis [29]. Although the feasibility of these approaches has been demonstrated, on-line HPLC/UHPLC has not yet been widely adopted.
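To make the AUC calculation concrete, the sketch below integrates a simulated SE-HPLC UV trace over retention-time windows and reports each species as a percentage of the total area. The window boundaries and synthetic peaks are illustrative assumptions, not values from any method cited here; validated methods use qualified integration windows and baseline-corrected signals.

```python
# Sketch: estimating HMW / monomer / LMW content from an SE-HPLC UV trace.
import numpy as np

def species_percentages(time_min, a280, windows):
    """Integrate the trace over each retention-time window (hypothetical
    boundaries) and report each species as a percentage of the total area."""
    areas = {}
    for name, (t0, t1) in windows.items():
        mask = (time_min >= t0) & (time_min < t1)
        areas[name] = np.trapz(a280[mask], time_min[mask])  # area under curve
    total = sum(areas.values())
    return {name: 100.0 * a / total for name, a in areas.items()}

# Synthetic example: on SEC, HMW elutes first, then monomer, then LMW.
t = np.linspace(0, 20, 2001)
signal = (5 * np.exp(-0.5 * ((t - 7.0) / 0.15) ** 2)     # HMW peak
          + 90 * np.exp(-0.5 * ((t - 8.5) / 0.20) ** 2)  # monomer peak
          + 5 * np.exp(-0.5 * ((t - 11.0) / 0.20) ** 2)) # LMW peak
print(species_percentages(t, signal,
                          {"HMW": (6, 8), "monomer": (8, 10), "LMW": (10, 12)}))
```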
Another important set of antibody product CQAs comprises host-related impurities, such as host cell proteins (HCPs), DNA, and lipids, and process-related impurities, such as free protein A ligands. HCPs and protein A ligands are typically detected through various immunological assays, including traditional ELISA and high-throughput microfluidic assays. The processing time can be decreased by moving from a traditional ELISA to an automated ELISA using liquid-handling systems and automated microfluidic systems [30]; these still do not allow real-time analysis, as the time needed to run these assays is on the order of hours. Efforts have been made to develop immunological methods capable of on-line measurements utilising a flow cell, although an analysis time above 30 min per sample would not suit current downstream requirements [31]. As the system described by Kumar et al. was developed for upstream applications with lower requirements for the time of analysis, additional changes to the system, such as parallel flow cells, could potentially be made to enable Downstream Processing (DSP) application.
Although there has been progress in adapting traditional at-line methods to allow real-time control, most methods still suffer from long run times. In order to expand process control and enable the real-time monitoring of CQAs, multivariate approaches and PAT are necessary.
Applications of PAT in DSP
Various spectroscopic methods, including infrared (IR), mid-IR (MIR), Raman, Fourier-transform IR, fluorescence, and UV spectroscopy, have been applied to downstream processing. As described extensively in a review by Rudt et al. [32], and more recently in Rolinger et al. [33], each of these spectroscopic techniques has its own set of advantages and applications.
Examples of spectroscopic methods applied to DSP monitoring include the use of UV spectroscopy with PLS modelling to automatically control the loading phase of protein A chromatography by monitoring the concentration of the monoclonal antibody (mAb) product in the load [34]. UV spectroscopy has also been shown to monitor mAb aggregate and monomer concentrations [35], although during the investigated runs the monomer and aggregate peaks showed a good separation, which might not always be the case and could decrease the otherwise high sensitivity of the predictions. These studies benefited from the utilisation of variable pathlength spectroscopy, enabling accurate in-line measurements even at high concentrations. The same device was recently applied to the on-line monitoring of ultrafiltration/diafiltration (UF/DF) as part of a multi-sensor PAT capable of monitoring concentration in addition to the apparent molecular weight, using dynamic light scattering (DLS) to monitor changes in aggregation during the process [36].
Other spectroscopic methods have been described in relation to DSP, such as mid (MIR) and near (NIR) infrared spectroscopy. NIR was used to determine mAb concentration in real-time, enabling the dynamic loading of protein A chromatography [37]. Capito et al. used MIR to monitor product concentration, aggregate, and HCP content, although this was developed as an at-line method, as the samples were processed (dried) prior to measurements [38], limiting the use of the tool for on-line monitoring. Additionally, there have been attempts to overcome the limitations of individual spectral techniques by integrating multiple different inputs. In a study by Walch et al., standard detectors (UV, pH, conductivity) were implemented alongside additional techniques, including fluorescence spectroscopy, MIR, light scattering, and refractive index measurement, to monitor protein A chromatography. These inputs were then analysed through PLS regression, producing predictive models for mAb concentration, monomer purity, aggregate content, and host-related impurities (HCPs, DNA). This resulted in accurate predictions for titre and monomer purity, but less accurate predictions for HCPs, DNA, and aggregate content, especially when the sample matrix was changed [39].
Applications of Raman Spectroscopy in DSP
The applications of Raman spectroscopy in DSP include the measurement of product concentration, product aggregation, glycosylation, and membrane fouling. Predicting the product concentration of mAb was first shown by Andre et al. using an immersion probe [40], and further expanded by Feidl et al. through the development of a Raman flow cell [41,42]. Determining the titre is especially relevant for continuous perfusion processes, as it allows the dynamic loading of subsequent capture steps, as demonstrated in [43]. However, the titre determination from harvest can arguably be achieved by the use of delta UV spectroscopy comparing the A280 absorbance of the feed and eluate [44], which might be easier to implement as it does not require multivariate data analysis (MVDA) modelling. Therefore, if Raman spectroscopy is to be widely adopted in DSP, it must provide additional information.
There are studies demonstrating the ability of Raman spectroscopy to distinguish between samples with a high content of aggregated mAb species. Typically, these studies are performed at high mAb concentrations and/or high aggregate contents for the purpose of proof-of-concept studies, in addition to studies of aggregation in formulated drug products. Zhou et al. monitored the thermally induced aggregation of intravenous IgG (IVIG) at high concentrations (51 mg/mL), describing the various spectral features present upon aggregation, in particular a shift in the tyrosine peaks at 830 and 850 cm−1 and the tryptophan peak at 1550 cm−1 for the IVIG, using Raman spectroscopy coupled with DLS [45]. Thermally induced aggregation was described in another study, where five antibodies with various propensities to aggregate were analysed using the perturbation-correlation moving windows method, visualising changes in the spectra during heating. By studying multiple different mAbs, differences were observed in the aggregation mechanism and associated spectral features [46], which might potentially make it difficult to develop models that could be utilised across multiple antibody formats. The presence of subtle differences in the sequence and structure of mAbs resulting in significantly different spectra is further supported by a study using SERS for the label-free identification of different antibody products by PLS-DA [47]. Although not explicitly described in the study, the described spectral differences might not only stem from the varying structural features, but also from the composition of the formulation buffer, product concentration, etc., for which the study did not adjust. In order to utilise Raman spectroscopy for aggregate analysis, quantitative predictive models are needed. Initial steps towards such models are described by Zhan et al., where mixtures of HMW and monomer were used to generate a model, which was then validated with independent samples generated through incubation at 40 °C. The model was able to predict the validation samples with a root mean square error of prediction (RMSEP) of 1.8% [48]. To monitor HMW and LMW species during chromatography runs, the methods need to be sensitive enough to allow detection at relevant concentrations (<10 mg/mL) and low aggregate contents (<10%). In all the above-mentioned studies, the samples typically had either high concentrations and/or high aggregate contents, which may make these models unsuitable for the monitoring of standard preparative chromatography. Furthermore, the models need to be robust enough to deal with the changing background of co-eluting impurities and changes in buffer composition, which is common in gradient elution and is yet to be described.
Other CQAs might also have the potential to be monitored by Raman spectroscopy. Spectral differences in antibody glycosylation have been described in simple systems (glycosylated vs. non-glycosylated proteins) [49][50][51], suggesting potential for elucidating more subtle differences in glycosylation that are relevant to bioprocessing. Another interesting area that could be investigated using Raman spectroscopy is the detection of HCPs, for which there are currently no published studies.
Early efforts have been made using FT-IR by Capito et al. with limited sensitivity [52], which does not allow for widespread use. Overall, detecting HCPs might require more complex approaches, such as Raman labels, since HCPs are a structurally diverse group of proteins and therefore lack a distinct Raman signature.
Raman spectroscopy can also be utilised in process monitoring. Virtanen et al. describe using Raman spectroscopy in membrane fouling monitoring, where an immersion probe was placed into a cross-flow filtration unit, and fouling over time by vanillin, a model organic foulant, was monitored using PCA [53]. The applicability of this approach to DSP needs to be evaluated separately using relevant molecules, as potential issues might include the weaker signal of proteins compared to small organic molecules, as well as the more complex background matrix.
Overall, the application of Raman spectroscopy to DSP is a relatively young field and is expected to be further advanced in the future, as innovations in instrumentation as well as advanced techniques such as SERS become available and widely implemented.
High-Throughput Raman Spectroscopy and Its Advantages for Bioprocess Development
The acquisition set-up and type of system utilised in Raman spectroscopy typically depend on the application and on whether it requires at-line or in-line/on-line measurement. Common set-ups include immersion probes, flow cells, single cuvette systems, microscope slides, and high-throughput systems. Here, we present a high-throughput Raman spectroscopy microscope based on standard 96-well plates, which allows combined use for both upstream and downstream development.
Raman spectroscopy in USP is commonly based on a fibre-optic immersion probe for each bioreactor, although multiple probes can be connected to a single Raman spectrometer. Alternatively, HT scale-down models such as the ambr™ 15 (Sartorius Stedim, Göttingen, Germany) with at-line (Rowland-Jones and Jaques [60]) or integrated Raman measurement can be utilised for model building, although the latter has only been introduced recently. These systems work well for upstream applications, but may lack the versatility to allow their use in other areas of bioprocess development.
An alternative approach is the use of a high-throughput Raman device, which is suitable for model generation for both upstream and downstream applications. The HT Raman devices described in the literature include commercial devices such as the RamanRxn1™ High Throughput Screener (HTS) (Kaiser Optical Systems, Ann Arbor, MI, USA) [54], the InVia confocal microscope (Renishaw, Wotton-under-Edge, UK) presented in this work, and the LabRAM HR Evolution confocal microscope (Horiba Jobin Yvon, Kyoto, Japan) [55], as well as custom-built devices assembled using parts from various manufacturers [56,57]. Data acquisition is performed using well plates, typically 96-well plates; however, settings can be adjusted to allow acquisition for other high-throughput labware or bespoke applications. Automatic plate mapping allows autonomous operation, enabling the screening of a large number of samples.
Whereas using HT Raman is more laborious when coupled with standard bioreactors requiring manual sampling, it is highly suitable for experiments using scale-down micro-bioreactors, where sampling is typically automated. A major advantage of these systems is the small sample requirement of 50-100 µL per measurement, enabling the primary CPPs and CQAs to be predicted. This reduces the reliance on multiple analysers, thereby reducing the capital and operational costs while providing near real-time information on difficult-to-monitor variables such as product concentration. Similarly, HT Raman is also suitable for model building in DSP. Samples can be generated through elution fractionation during preparative chromatography, where fractionating into standardised labware minimises the number of manual steps.
The research outlined in this paper investigates the predictive capabilities of an HT Raman microscope combined with advanced data analytics to support both USP and DSP research and development operations. The paper demonstrates the ability of this technology to monitor the CPPs within an HT micro-bioreactor system, in addition to monitoring monomer and product concentration within the development of a CEX chromatography step. At the core of these activities is the application of MVDA models to convert these multi-dimensional spectral data sets into the quantitative information necessary for monitoring. This involved negating the influence of corrupting fluorescence through baseline and scattering correction algorithms, in addition to the evaluation of the models' coefficients, improving the robustness of the predictive models. In summary, this technology was found to be highly versatile and applicable across a wide range of unit operations in bioprocess development, provided the correct MVDA models are implemented.
Product Materials
Three different molecules were used in this study; in the USP experiments, cell line A produced an antibody-peptide fusion protein, and cell line B produced a modified IgG1 molecule. In the DSP study, an Fc-fusion protein was investigated. All the molecules utilised in this work were developed internally and provided by AstraZeneca, Cambridge, UK.
Cell Line and Cell Culture Propagation
Cell line A and cell line B in the USP work, and the cell line utilised in the DSP work, used a Chinese hamster ovary (CHO) host expressing high levels of therapeutic protein. These cell lines were provided by AstraZeneca and are proprietary and commercial products. No human cells were involved in this work. The cell lines were cultivated in chemically defined CHO media, maintained at 37 °C under 5% carbon dioxide, shaken at a constant rpm, and passaged 2-3 times per week for propagation and scale-up for inoculation.
Bioreactor Systems and Cell Culture Process
Two cell lines were cultivated in an advanced micro-bioreactor (ambr™ 15) system (Sartorius Stedim) with 24 single vessels split into two separate culture stations, where each vessel was operated with an 11-15 mL working volume. Cell line A was cultivated in vessels 1-12, and cell line B was cultivated in vessels 13-24. The experimental set-up investigated the impact of both different dissolved oxygen (DO2) set-points and DO2 fluctuations on cellular growth and protein production. The DO2 set-point was controlled to 40% and was fluctuated every 15 min to either 10% or 20% by purging with nitrogen. For cell line A, vessels 1 and 2 were maintained at a DO2 set-point of 40%, vessels 3 and 9 were controlled to 10%, and vessels 4 and 10 were controlled to 20%. Vessels 5 and 11 were fluctuated between 40% and 10%, and vessels 6, 7, 8, and 12 were fluctuated between 40% and 20%. Cell line B followed the same experimental set-up as cell line A for vessels 13-24. The temperature and pH of all the vessels were controlled to 35.5 °C and 7, respectively, and the agitation rate was adjusted to ensure that the DO2 concentration set-points were maintained. The feeding strategy involved five equally spaced additions of the feed after the initial feed day. The initial seeding density was <10 × 10⁵ cells mL−1. The culture pH was controlled to 7 through the addition of sodium carbonate and sparging with CO2 gas, with the control strategy implementing a pH dead-band equal to 0.1. Antifoam volumes of 20 µL were added as required. Daily at-line samples were analysed for the glucose and lactate concentrations using the 2950D Biochemistry Analyser (YSI, Yellow Springs, OH, USA), and every second day for the viable cell density (VCD) and viability using the Vi-Cell automated cell viability analyser (Beckman Coulter, Brea, CA, USA).
Titre Analysis
Volumetric antibody-peptide fusion titres in cell culture supernatants were quantified by protein A affinity chromatography using a protein A ImmunoDetection sensor cartridge (Applied Biosystems, Warrington, UK) coupled with an Agilent 1200 series HPLC (Agilent, Berkshire, UK). Peak areas relative to a reference standard calibration curve were used to calculate the titres. These samples were measured on days 2, 4, 6, 8, and 10 for the ambr™ 15 system.
CEX Sample Generation
The Fc-fusion protein used in the DSP part of this study was generated using a CHO host provided by AstraZeneca, Cambridge, UK. Chromatographic experiments were carried out using an ÄKTA Avant controlled with Unicorn Software version 7.1 (Cytiva, Marlborough, MA, USA). The protein feed was purified using MabSelect SuRe Protein A chromatography resin (Cytiva, Marlborough, MA, USA), subjected to a low-pH treatment, and purified further using Capto adhere resin (Cytiva, Marlborough, MA, USA) in flow-through mode. Screening experiments were conducted to determine the optimal conditions to purify the fusion protein on Poros 50 HS resin (Thermo Fisher Scientific, Bedford, MA, USA) using varying elution buffer conditions. The elution was performed either in gradient mode, from 0 to 0.5 M of NaCl in 20 mM of sodium citrate, or in step mode, using 20 mM of sodium citrate (for the calibration set (T1) and validation set 1 (P1)) and 50 mM of sodium citrate (for validation set 2 (P2)) with NaCl in the range of 133-210 mM, at a pH range of 5.8-6.2 and a loading concentration of 10-20 g/L. The elution was fractionated into 1 mL fractions, where each fraction constituted a separate sample for spectral measurements.
Protein Concentration and Analytical Size Exclusion Chromatography
Sample concentration was determined off-line by A280 UV spectrometry using Trinean (Unchained Labs, Pleasanton, CA, USA) with the corresponding extinction coefficient. Only samples with a product concentration above 0.5 mg/mL were used for the Raman measurements. The monomer purity was monitored with high-performance size exclusion chromatography (HP-SEC) using a TSK-GEL G3000SWXL column (7.8 mm × 30 cm) from Tosoh Bioscience (King of Prussia, PA, USA) with an Agilent 1200 HPLC system (Agilent Technologies, Santa Clara, CA, USA). The column was operated at a flow rate of 1 mL/min with a mobile phase consisting of 0.1 M of sodium phosphate and 0.1 M of sodium sulphate, pH 6.8. Protein was monitored by the absorbance at 280 nm, and the sample purity was estimated by integrating the chromatograms.
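For context, the A280-to-concentration conversion above follows the Beer-Lambert law. The sketch below is a minimal illustration; the extinction coefficient is a placeholder, since the molecule-specific value used in this study is not given here.

```python
# Sketch: converting an A280 reading into protein concentration via the
# Beer-Lambert law, c = A / (epsilon * l).
def concentration_mg_ml(a280, ext_coeff_ml_mg_cm=1.4, pathlength_cm=1.0):
    """epsilon in mL mg^-1 cm^-1 and pathlength in cm give c in mg/mL.
    The default epsilon of 1.4 is a generic placeholder, not this study's value."""
    return a280 / (ext_coeff_ml_mg_cm * pathlength_cm)

# With these placeholder values, A280 = 0.7 corresponds to ~0.5 mg/mL,
# the inclusion cut-off used for the Raman measurements in this study.
print(concentration_mg_ml(0.7))
```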
Spectral Data Acquisition
All the spectral measurements were performed using the InVia confocal Raman microscope (Renishaw, Wotton-under-Edge, UK) equipped with a 785 nm laser. Prior to the spectral measurements, the acquisition settings were optimised with respect to the focal point, sample volume, laser output, and duration of measurements. Measurements of the cell culture were performed in the range of 381 to 1534 cm−1 using 10% laser power (30 mW), a 30 s acquisition time, 5 accumulations, and line-focus with a 5X objective (Leica Microsystems, Wetzlar, Germany). For each sample, 350 µL of culture was spun down to remove cells, and 300 µL of supernatant was used for the acquisition. The data acquisition was performed using polypropylene (PP) 96-well plates (Greiner Bio-One, Stonehouse, UK) using the microplate mapping option of the WiRE 5.2 software (Renishaw, Wotton-under-Edge, UK). Data acquisition for the CEX samples was performed similarly to the cell culture samples, with a difference in the acquisition range (605-1741 cm−1). Additional experiments were performed comparing the signal from the PP 96-well plates with the signal from custom-made stainless steel plates. The acquisition was further optimised using a long-distance 50X objective (Leica Microsystems, Wetzlar, Germany), increasing the laser power to 100% and decreasing the acquisition time to 10 s and 3 accumulations.
Spectral Data Pre-Processing and Model Set-Up
The Raman data recorded by the HT InVia Raman microscope for both the USP and DSP applications were pre-processed and modelled in Python 3.7.1. PLS was used to develop the predictions of the variables in both the USP and DSP case studies. In the analysis of the USP variables and the product concentration in the DSP case study, the background fluorescence was removed by fitting and subtracting a 1st order polynomial to each individual spectrum using the open-source Rampy library [58]. The baseline-corrected spectra were subsequently normalised by applying a standard normal variate (SNV) algorithm, which corrects for scattering effects due to slight changes in the path length of the Raman device, in addition to correcting for changes in the cell culture composition such as viscosity. Prior to the SNV, the data were smoothed using a Savitzky-Golay smoothing filter. For the monomer concentration in the DSP case study, the PLS model was developed using the raw spectra. In the USP application, the calibration data set consisted of cell cultures 1-5 and 7-11 (cell line A) and 13-17 and 19-23 (cell line B), and the validation runs were cell cultures 6 and 12 (cell line A) and 18 and 24 (cell line B). In the DSP application, the individual CEX runs differed in the elution buffer conditions, where the calibration set consisted of elution fractions from runs using 20 mM of sodium citrate with varying levels of sodium chloride (0-0.5 M). There were two distinct validation sets: the first (P1) was in an identical buffer range to that of the calibration set and consisted of 40 samples, whereas the second (P2) consisted of elution fractions using an elution buffer containing 50 mM of sodium citrate as well as a wider salt range (133-210 mM), and consisted of 22 samples. The optimum number of latent variables for each of the PLS models was identified by minimising both the root mean square error (RMSE) based on the calibration data set and the root mean square error of prediction (RMSEP) based on the validation data set.
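As a concrete illustration of this pre-processing chain, the sketch below applies a 1st order polynomial baseline subtraction, Savitzky-Golay smoothing, and SNV normalisation to a matrix of raw spectra using plain NumPy/SciPy rather than the Rampy helper used in this work; the window length and polynomial order are illustrative assumptions, not the study's exact settings.

```python
# Sketch of the pre-processing chain: baseline removal, smoothing, SNV.
import numpy as np
from scipy.signal import savgol_filter

def preprocess(wavenumbers, spectra):
    """wavenumbers: (n_points,) array; spectra: (n_samples, n_points) raw intensities."""
    out = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        # Fit and subtract a 1st order polynomial to remove the fluorescence baseline.
        coeffs = np.polyfit(wavenumbers, s, deg=1)
        corrected = s - np.polyval(coeffs, wavenumbers)
        # Savitzky-Golay smoothing (illustrative window/order).
        smoothed = savgol_filter(corrected, window_length=11, polyorder=3)
        # SNV: centre and scale each spectrum individually.
        out[i] = (smoothed - smoothed.mean()) / smoothed.std()
    return out
```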
Partial Least Squares Model Generation
The PLS model implemented the nonlinear iterative partial least squares (NIPALS) algorithm, as outlined in detail by Wold et al. [59]. The preprocessed spectral data ($X_{Spectra}$) were first decomposed into $N$ latent variables, generating a matrix of scores, $T$, and loadings, $P$, with $E$ as the residuals:

$$X_{Spectra} = T P^{\top} + E$$

The off-line measurements of the variable of interest ($Y_{Variable}$), e.g. the glucose concentration, were decomposed in a similar fashion, generating a matrix of scores, $U$, and loadings, $Q$, with $F$ as the residuals:

$$Y_{Variable} = U Q^{\top} + F$$

The inner-relationship vector $B$ is generated by relating the scores of the $X_{Spectra}$ to the scores of the $Y_{Variable}$, calculated for each latent variable $n$ as:

$$b_n = \frac{u_n^{\top} t_n}{t_n^{\top} t_n}$$

The PLS model works iteratively through each latent variable and, upon convergence, generates a matrix of regression coefficients $\beta$ equal to:

$$\beta = W \left(P^{\top} W\right)^{-1} B Q^{\top}$$

where $W$ is the matrix of spectral weights. The predicted Y variable ($\hat{Y}_{Variable}$) is calculated with the cumulative sum of the regression coefficients, taking $N$ latent variables, defined by Goldrick et al. 2018 as [12]:

$$\hat{Y}_{Variable} = X_{Spectra} \sum_{n=1}^{N} \beta_n$$
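A minimal sketch of this model-building step is given below, using scikit-learn's PLSRegression (a NIPALS-based implementation) and selecting the number of latent variables by jointly minimising the RMSE and RMSEP. The combined-minimisation rule and the variable names are simplifying assumptions, not the exact selection procedure used in this work.

```python
# Sketch: fit PLS models with 1..max_lv latent variables and keep the one
# that best balances calibration error (RMSE) and validation error (RMSEP).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.ravel(y_true) - np.ravel(y_pred)) ** 2)))

def fit_pls(X_cal, y_cal, X_val, y_val, max_lv=10):
    best = None
    for n_lv in range(1, max_lv + 1):
        pls = PLSRegression(n_components=n_lv).fit(X_cal, y_cal)
        errors = (rmse(y_cal, pls.predict(X_cal)),   # RMSE (calibration)
                  rmse(y_val, pls.predict(X_val)))   # RMSEP (validation)
        # Simplified joint criterion: minimise the sum of both errors.
        if best is None or sum(errors) < sum(best[1]):
            best = (pls, errors, n_lv)
    return best  # (fitted model, (RMSE, RMSEP), number of latent variables)
```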
Results and Discussion
This paper evaluates the performance of a novel HT-Raman spectroscopy device applied to two critical USP and DSP operations within biopharmaceutical manufacturing. This evaluation includes the development of a Raman spectroscopy model generation workflow, shown in Figure 1, outlining the necessary steps to ensure that a robust MVDA model is generated. The novelty of the presented workflow is that, in addition to quantifying the performance of the MVDA model using traditional RMSE and RMSEP metrics, it suggests interpreting the model's coefficients and comparing them to the expected Raman vibrational signatures of the variable investigated to ensure that the accuracy of the predictions is independent of metabolism-induced concentration correlations. This novel Raman spectroscopy model generation workflow is demonstrated on two case studies: the first involves the at-line monitoring of an HT micro-bioreactor system cultivating two mammalian cell lines expressing two different therapeutic proteins; the second involves the development of a cation exchange chromatography step for an Fc-fusion protein to compare different elution conditions.
Demonstration of HT Raman Spectroscopy Microscope to USP
In this work, the HT InVia Raman microscope was applied to two essential R&D unit operations within biopharmaceutical manufacturing. The first investigated the application of MVDA to transform the Raman spectra from a USP operation to predict the primary metabolite concentrations, cellular growth characteristics, and therapeutic protein concentrations of a mammalian cell culture performed on a micro-bioreactor system. The off-line variables investigated in this work were recorded every 48 h and are shown in Figure 2. In total, there were 12 cell culture runs from cell line A and 12 cell culture runs from cell line B, with the majority of these harvested early on day 10 due to the high accumulation of lactate, as shown in Figure 2C. The high lactate production was due to controlled fluctuations in the DO2 set-point that involved manipulating the DO2 to between 10 and 40% and studying the influence of these fluctuations on the growth and productivity. The adjustments to the DO2 set-points resulted in the majority of the cells maintaining their lactate production state from day 4 to 10. The influence of these DO2 concentration fluctuations on lactate production is outside the scope of this paper. Apart from the lactate concentration, the glucose, VCD, and antibody concentrations shown in Figure 2 represent the typical ranges expected for mammalian cell cultures, providing an ideal data set to investigate the performance of this HT-Raman spectroscopy device. The analysis of the experimental Raman spectra followed the workflow shown in Figure 1, and the MVDA model chosen was a PLS model defined in Section 2.9. The data split used to build this PLS model is shown in Figure 2.

The spectral analysis was carried out using the remaining off-line analytic sample, equivalent to 300 µL. This material was split into three separate wells on a 96-well plate, providing triplicate 100 µL samples for the HT InVia Raman microscope device. The selected sample volume of 100 µL was found to produce the most consistent spectra, although smaller volumes can be accommodated. The raw spectral data are shown in Figure 3A and highlight the significant baseline increase as the culture progresses from day 0 (inoculation) to day 10. An approximate 15-fold increase is observed in the average baseline of the Raman spectra recorded on day 0 compared to day 10. Corrupting fluorescence remains a major problem for Raman spectroscopy, particularly during upstream processing operations, where broad fluorescence background signals have been shown to mask important Raman peaks and thus prevent the extraction of useful correlations [12]. To minimise the influence of fluorescence in this work, a baseline removal algorithm was implemented, followed by the application of a scattering correction algorithm referred to as standard normal variate (SNV). The application of SNV is highly recommended when using an HT Raman spectroscopy microscope, as it corrects for minor path length differences between the laser source and the sample due to small volume changes, which result in the baseline displacement of the spectrum along the vertical axis. These pre-processed spectral data are shown in Figure 3B and were used to develop the PLS models of glucose, VCD, lactate, and antibody concentrations. Alternative methods to remove strong fluorescence signals include taking the 1st derivative of the spectra, followed by SNV.
This was demonstrated by Berry et al., who observed a significant baseline shift during the on-line monitoring of a CHO cell culture system and generated highly accurate models of multiple process parameters including glucose, lactate, and VCD (Berry et al. 2015).

Figure 3. Raman spectra recorded by the high-throughput Raman spectroscopy microscope for each of the 24 micro-bioreactor cell culture runs on days 0-10, shown in the form of (A) the raw spectral data and (B) the baseline-corrected spectra, using a 1st order polynomial function followed by the application of a standard normal variate (SNV) scattering correction algorithm and a Savitzky-Golay smoothing filter. Each spectrum was generated using 5 accumulations, each with a 30 s acquisition time, recorded using 10% laser power (30 mW).
Four separate PLS models were developed to correlate the pre-processed spectra shown in Figure 3B to the concentrations of glucose and lactate, the VCD, and the antibody concentration. The model was calibrated using cell culture runs 1-5 and 7-11 (cell line A) and 13-17 and 19-23 (cell line B), and validated with runs 6 and 12 (cell line A) and 18 and 24 (cell line B). The prediction performances of the optimum PLS models for each variable are summarised in Table 1. The choice of latent variables was based on minimising the root mean square error (RMSE) for the calibration data sets and the root mean square error of prediction (RMSEP) values for the validation data sets, which ensured an accurate model with a good prediction performance. However, a low RMSE and RMSEP does not always equate to a robust model.

A comparison of the experimentally recorded off-line variables to those predicted by the PLS models is shown in Figure 4. The PLS model for glucose concentration is shown to accurately predict the off-line measurements between the range of 1 and 5 g L−1, as shown in Figure 4A. The RMSE and RMSEP of the glucose concentration are equal to 0.19 and 0.38 g L−1, respectively, which is equivalent to ±4.75% and ±9.5% of the glucose range investigated. Additionally, these values are below the typical measurement error of off-line analysers of ~0.5 g L−1. These predictions are comparable to the glucose predictions demonstrated by Rowland-Jones and Jaques [60], who observed an RMSE of 0.24 g L−1 and an RMSE of cross-validation equal to 0.32 g L−1 using a similar type of Raman microscope during the at-line monitoring of a mammalian cell culture system in a miniature bioreactor system. The lactate prediction errors were slightly higher than those reported by Rowland-Jones and Jaques [60], who reported an RMSE of approximately 0.25 g L−1 and an RMSE of cross-validation of 0.30 g L−1. However, the lactate predictions considered in this work were across a much wider concentration range, and demonstrate the ability of this tool to accurately predict lactate in excess of 12 g L−1. The prediction accuracy of this HT-Raman microscope for both glucose and lactate is highly comparable to the on-line Raman spectroscopy sensors for mammalian cell culture systems reported in the approximate RMSEP range of 0.2-0.9 g L−1 for glucose concentration and 0.1-0.4 g L−1 for lactate [17,18,61]. The accuracy of these at-line predictions is comparable with that of the off-line nutrient analyser, and this technology therefore has the potential to replace these analysers and help facilitate the development of an at-line glucose control strategy.

Figure 4. Four separate PLS models were built utilising 7 latent variables for each variable investigated, and calibrated using cell culture runs 1-5 and 7-11 from cell line A (indicated by the red squares) and runs 13-17 and 19-23 from cell line B (indicated by the green squares). The model was validated using cell culture runs 6 and 12 from cell line A (indicated by the blue diamonds) and runs 18 and 24 from cell line B (indicated by the yellow diamonds).
The spectral data utilised in each PLS model were the baseline-corrected spectra, using a 1st order polynomial function followed by the application of an SNV scattering correction algorithm and a Savitzky-Golay smoothing filter.
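For clarity, the two error metrics used to judge the models throughout this work can be written explicitly (a standard formulation, with $n_{\mathrm{cal}}$ and $n_{\mathrm{val}}$ the number of calibration and validation samples, and $\hat{y}$ the model prediction):

$$\mathrm{RMSE}=\sqrt{\frac{1}{n_{\mathrm{cal}}}\sum_{i=1}^{n_{\mathrm{cal}}}\left(y_i-\hat{y}_i\right)^{2}},\qquad \mathrm{RMSEP}=\sqrt{\frac{1}{n_{\mathrm{val}}}\sum_{j=1}^{n_{\mathrm{val}}}\left(y_j-\hat{y}_j\right)^{2}}$$

The percentage figures quoted in this section correspond to the error normalised by the measured range of the variable, i.e., $100\times\mathrm{RMSEP}/(y_{\max}-y_{\min})$.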
Conventionally, the RMSE and RMSEP are the gold standard for model comparison and, provided the validation data set is independent, these metrics typically provide a good measure of the model's robustness. However, to further validate these model predictions, the model coefficients should be scrutinised to ensure model robustness, as outlined by the Raman spectroscopy model generation workflow defined in Figure 1. Within this work, the generated model was a PLS model, and the regression coefficients of the PLS model are shown in Figure 5. To assess whether the dominant wavenumbers highlighted by the PLS regression weights shown in Figure 5 are related to the variable of interest, a table containing the majority of the Raman vibrational peak assignments associated with each variable is shown in Table 2. This table highlights the Raman vibrational shifts expected after excitation from each variable due to changes in the molecular bond length, such as symmetric or asymmetric stretching, or the bending of the molecular bond angles due to a wagging or a rocking motion. By comparing the expected Raman signature profile of these variables to the dominant peaks calculated by the PLS regression coefficients, the model's robustness can be evaluated. This ensures that the generated PLS model is specifically built to predict the variable investigated and is not driven by the metabolism-induced concentration correlations of other variables. For glucose, the dominant regression coefficients were characterised by peaks with a regression coefficient equal to or above 0.02 (i.e., β > 0.02). Wide peaks were characterised by the start, max, and end wavenumbers of these peaks, and narrow peaks by the max wavenumber value. The majority of these peaks are shown to correctly align with the peaks associated with glucose, based on previously published literature defining the wavenumbers associated with glucose [62][63][64][65][66][67][68]. This includes the three distinct peaks in the 871-988 cm−1 range, which are indicated in Figure 5A as a single peak at wavenumber 928 cm−1, which aligns with multiple literature references for the specific Raman scattering bands associated with glucose. The other dominant peaks at 1373, 1061, and 517 cm−1 all align with the expected peaks for glucose and further strengthen the predictions of the generated PLS model. Another interesting observation was outlined by Söderholm et al. [64], who demonstrated a small peak shift of approximately 2-5 wavenumbers depending on the glucose concentration in an aqueous solution with varying water contents; this demonstrates why, in Table 2, some of the regression peaks do not align precisely with those quoted in the literature.
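A simple way to automate this coefficient check is sketched below: the dominant peaks of the regression coefficient vector are located and compared against literature band positions, here illustrated with the glucose bands noted above (517, 928, 1061, and 1373 cm−1); the ±5 cm−1 tolerance is motivated by the concentration-dependent shifts noted by Söderholm et al. [64], and both the threshold and the tolerance are illustrative assumptions.

```python
# Sketch: flag whether dominant PLS coefficient peaks align with known
# literature Raman bands for the analyte of interest.
import numpy as np
from scipy.signal import find_peaks

GLUCOSE_BANDS = [517, 928, 1061, 1373]  # cm^-1, drawn from the discussion above

def aligned_peaks(wavenumbers, beta, threshold=0.02, tol=5.0):
    """wavenumbers: (n_points,) np.ndarray; beta: PLS regression coefficients.
    Returns (peak wavenumber, aligned?) for every dominant coefficient peak."""
    idx, _ = find_peaks(np.abs(beta), height=threshold)  # dominant coefficients
    peaks = wavenumbers[idx]
    return [(float(p), any(abs(p - b) <= tol for b in GLUCOSE_BANDS))
            for p in peaks]
```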
An additional evaluation of specific Raman bands for individual metabolites was defined by Singh et al. 2015 [65], who investigated the supernatants of a CHO cell culture grown in shake flasks using a Raman spectroscopy device. They used a classical least squares algorithm to determine, with a high degree of accuracy, the specific bands associated with both glucose and lactate. For glucose, they determined that there are seven characteristic peaks, located at wavenumbers 435, 516, 990, 1076, 1121, 1374, and 1460 cm−1, which all align with the dominant PLS regression coefficient features shown in Figure 5A. The alignment of these PLS regression coefficients with the expected Raman peaks of glucose provides additional confidence in this generated MVDA model. Similar alignment was observed for the main regression peaks of lactate shown in Figure 5C, which were highlighted for all the PLS regression coefficients above 0.05 (i.e., β > 0.05). The majority of these bands correspond to the Raman vibrational bands characterising lactate in the literature, as demonstrated in Table 2. Furthermore, Singh et al. characterised the main lactate-associated Raman peaks by wavenumbers 855, 922, 1045, 1085, and 1456 cm−1, which align almost perfectly with those shown in Figure 5C. Similar to the glucose model, the correct alignment of the PLS regression coefficients with the expected lactate vibration bands provides the necessary confidence in the PLS model to enable subsequent predictions and deploy the MVDA model.

Figure 4B demonstrates the ability of this HT Raman spectroscopy device to accurately predict the VCD across the expected range of mammalian cell cultures. The RMSEP for the VCD was 3.49 × 10⁶ cells mL−1, which is equivalent to ±9% of the measurement range investigated, and the model was found to accurately predict both high and low VCD concentrations for both cell lines, as shown in Figure 4B. The prediction accuracy is similar to that reported by Rowland-Jones et al., which was equal to 4.49 × 10⁶ cells mL−1. These VCD predictions are also comparable to on-line Raman spectroscopy systems monitoring mammalian cell culture systems, reporting predictions in the range of 1-5 × 10⁶ cells mL−1 [18,61]. The accuracy of these VCD measurements demonstrates the potential of this technology to replace the traditional off-line cell counter, based on the accuracy of these predictions between the two cell lines investigated. However, as outlined in the Raman spectroscopy model generation workflow, it is necessary to evaluate the regression coefficients of the model.
The PLS regression coefficients of VCD are shown in Figure 5B, with the main peaks identified as those above 0.02 (i.e., β > 0.02), which are also indicated in Table 2. It is interesting to note that the primary peaks identified are also closely associated with the expected Raman vibrational bands of lactate. These highly correlated metabolite concentrations are problematic, as, although the RMSE and RMSEP of this PLS model are low, it is difficult to deconvolute this strong association with lactate. The correlation coefficient between the off-line lactate and off-line VCD concentrations was equal to 0.88, which helps explain the location of these dominant lactate peaks corresponding to the calculated VCD regression coefficients. This is particularly evident when comparing the dominant lactate PLS regression peak shown at wavenumber 855 cm−1 in Figure 5C, which corresponds to the strong peak shown in the VCD coefficients at this precise wavenumber in Figure 5B. This high correlation is most likely due to the high accumulated lactate on days 6, 8, and 10, shown in Figure 2C, which resulted in a significant drop in the corresponding VCD values shown in Figure 2B. Based on this subsequent analysis of the PLS regression coefficients, it would be recommended not to deploy this PLS model for VCD, and to generate additional experimental runs that do not result in a strong correlation between the off-line analytics of VCD and lactate.
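One pragmatic safeguard, sketched below, is to screen the off-line reference data for such correlations before deploying a model: a high absolute correlation between two variables (e.g. the 0.88 reported here for VCD vs. lactate) flags models whose coefficients may track the wrong analyte. The 0.8 cut-off is an arbitrary illustrative threshold, not a value from this study.

```python
# Sketch: flag highly correlated off-line reference variables that could
# produce metabolism-induced correlations in spectroscopic models.
import numpy as np

def flag_confounders(offline, names, limit=0.8):
    """offline: (n_samples, n_variables) matrix of reference measurements;
    names: variable labels. Returns pairs whose |r| meets the threshold."""
    r = np.corrcoef(offline, rowvar=False)
    return [(names[i], names[j], round(float(r[i, j]), 2))
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if abs(r[i, j]) >= limit]

# Usage: flag_confounders(data, ["glucose", "lactate", "VCD", "antibody"])
```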
The most promising application of this HT-Raman spectroscopy device is the ability to accurately predict the at-line product concentration, as shown in Figure 4D. The PLS model generated had an RMSE equal to 0.09 g L−1, and the RMSEP was equal to 0.17 g L−1, which is equivalent to ±4.5% and ±8.5% of the antibody concentration range investigated. The accuracies of these predictions were similar to the RMSEP reported by Rowland-Jones et al., equal to 0.57 g L−1 [60], and slightly better than the RMSEP reported by on-line systems, equal to 0.75-1.21 g L−1 [18]. The majority of previous research on Raman spectroscopy focuses on nutrient and cell concentrations; however, the ability to measure product concentration opens up significant opportunities for the development of advanced control strategies. One demonstration was shown by Rowland-Jones and Jaques [60], who adapted nutrient and glucose feed additions through at-line predictions of glucose and VCD using a similar type of HT Raman spectroscopy microscope. This is one of the first demonstrations of PAT applied to control a miniature bioreactor system. The subsequent evaluation of the regression coefficients for the generated PLS model shown in Figure 5D highlights some issues with the robustness of this model. The dominant PLS regression coefficients for the antibody were taken as those above 0.015 (i.e., β > 0.015). These peaks are shown to be primarily associated with either glucose or lactate. The strong lactate peak at wavenumber 855 cm−1 is evident in Figure 5D, in addition to the strong glucose peak shown in Figure 5A at wavenumber 1373 cm−1, which can be observed at a similar location in the antibody model at wavenumber 1368 cm−1. Furthermore, the wavenumbers of 535, 1041, and 1456 cm−1 associated with lactate are also dominant in the PLS regression coefficients of the antibody. The strong association of the antibody with both glucose and lactate can be partially explained by the high correlation coefficient between the antibody and lactate, equal to 0.77, and between the antibody and glucose, equal to −0.47. The issues with these metabolite-induced concentration correlations have been previously outlined by Rhiel et al. [69], who demonstrated that, due to the complex nature of the majority of bioprocesses, some analyte predictions using spectroscopic methods are based upon correlations between other variables. This was observed by Rhiel et al. [69] during the analysis of the main metabolites of a CHO cell culture using a MIR probe. They demonstrated a similarly strong interdependence between the majority of metabolite concentrations, which negatively affected their predictions. The strong correlation between the variables investigated in this research poses a major risk to subsequent predictions, where deviations in either glucose or lactate concentrations outside of the concentrations investigated in this work would drastically influence the antibody and VCD predictions. Subsequent experimental work is therefore needed to build more robust models, which should include the spiking of these variables to build an independent data set to calibrate and validate these MVDA models.

Note (Table 2): w = wagging, δ = bending, ν = stretching, r = rocking. Cut-off points for the PLS regression coefficients are glucose: * β > 0.02; lactate: ** β > 0.05; VCD: *** β > 0.02; antibody: **** β > 0.015.
This ability to monitor these CPPs through at-line measurements opens up additional opportunities for the development of advanced control strategies for miniature bioreactor systems. The reduction in the necessary sample volume could enable more frequent at-line sampling, such as every 6 or 12 h, enhancing the monitoring and control of these miniature bioreactor systems. The application of advanced control strategies earlier in the process development cycle encourages the integration of PAT within future scale-up and commercial operations.
Application of Raman Spectroscopy to DSP
In this study, an HT InVia Raman microscope was used to determine the total concentration and monomer purity of an Fc-fusion protein during CEX chromatography purification, in order to guide sample prioritisation for further analytical methods, such as the HPLC-SEC-based determination of monomer purity and UV absorbance at 280 nm for the total protein concentration.
The data set was based on a total of 18 individual CEX runs, including both step and gradient elution, resulting in a total of 201 individual elution fraction samples, each of which was measured in duplicate, producing a total of 402 spectra. Only elution fractions above 0.5 mg/mL were included in the data set (as measured off-line by A280), as that was the previously estimated sensitivity of the device.
The PLS models were built as described in Section 2.9 and followed the Raman spectroscopy model generation workflow defined in Figure 1. The number of latent variables for the PLS models was based on minimising the RMSE and RMSEP. The validation and calibration sets covered a range of elution buffer conditions, where the PLS calibration set consisted of elution fractions from runs using 20 mM of sodium citrate with varying levels of sodium chloride (0-0.5 M). There were two distinct validation sets: the first (P1) was in the identical buffer range as the calibration set and consisted of 40 samples, whereas the second (P2) consisted of elution fractions using an elution buffer containing 50 mM of sodium citrate, as well as a wider salt range (133-210 mM), and consisted of 22 samples. The product concentrations in the data sets ranged from 0.5 to 33.1 mg/mL for T1, 0.7 to 19.41 mg/mL for P1, and 1.7 to 33.2 mg/mL for P2. The monomer purity ranged from 70% to 100% for all three data sets, with up to 20% HMW species and up to 20% LMW species. These two validation sets were chosen in order to test the robustness of the model to changes in salt concentrations and sample matrices typically seen in purification processes.
The PLS model describing product concentration is shown in Figure 6. Using seven LVs, the model resulted in an excellent degree of fit (Table 3), with an R² (T1) = 0.99 for the calibration sample set (Figure 6A), and R² (P1) = 0.99 and R² (P2) = 0.98 for the validation sets P1 (Figure 6B) and P2 (Figure 6C), respectively. The RMSEP of 1.09 mg/mL for P1 and 3.54 mg/mL for P2 corresponds to 6.1% and 11.2% of the range, respectively. A higher error for validation set P2 is expected, due to the wider range of buffer conditions relative to the calibration set.
Table 3. Summary of monomer purity and product concentration model fit (R²), RMSE of the calibration set, and RMSEP of the two validation sets.

Statistical measure of fit | Calibration (T1) | Validation (P1) | Validation (P2)
Product concentration, R² | 0.99 | 0.99 | 0.98
Product concentration, RMSEP (mg/mL) | - | 1.09 | 3.54
Monomer purity, R² | 0.98 | 0.86 | 0.34
Monomer purity, RMSEP (%) | - | 4.27 | 13.68

Furthermore, the model was used to classify samples based on whether or not the concentrations were higher than 1.5 mg/mL (Figure 6D). This value of 1.5 mg/mL was selected as the cut-off point to determine low-concentration samples; any samples below this concentration limit would not be further analysed by high-performance liquid chromatography (HPLC). Since only samples above 0.5 mg/mL were included in the spectral measurements, the classification was working within a narrow range between 0.5 and 1.5 mg/mL. Within this range, 95% of samples were classified correctly, of which 74% were true positives (concentration above 1.5 mg/mL) and 21% were true negatives (concentration below 1.5 mg/mL). The classification model classified both samples that were not part of the training set (P1 and P2), as well as samples that were used to build the model (T1), which leads to the accuracy of prediction being higher than what would be seen if only previously unknown samples were used. Nevertheless, this shows that the model can be used to determine which samples should be considered for further analysis and pooling, and which can be discarded at this stage due to no or low levels of protein.
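The cut-off classification described above can be summarised with a small helper; this is a hypothetical sketch (the function and variable names are not from the study), counting true positives and true negatives as percentages of all samples, as in the confusion matrices of Figures 6D and 7D:

```python
import numpy as np

def threshold_summary(y_true, y_pred, cutoff=1.5):
    """Fraction of samples where reference (y_true, e.g. A280) and
    Raman-predicted (y_pred) values agree on being above/below the cut-off."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.mean((y_true >= cutoff) & (y_pred >= cutoff))  # true positives
    tn = np.mean((y_true < cutoff) & (y_pred < cutoff))    # true negatives
    return {"correct_%": 100 * (tp + tn),
            "true_positive_%": 100 * tp,
            "true_negative_%": 100 * tn}
```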
The second variable of interest was monomer purity, as it is desirable to achieve maximal monomer purity whilst minimising the presence of aggregated (HMW) or fragmented (LMW) species, or both. In this experiment, a model was built that predicted monomer purity (Figure 7), although with a relatively large RMSE, which would not allow for quantitative detection, but could serve to distinguish between samples with high purity versus samples with low purity, and therefore save time for more laborious methods such as HPLC.
The model predicting monomer purity was based on the same data set as the total concentration model, with the exception that only samples with concentrations above 1.5 mg/mL were used. The model uses eight LVs. The summary of the model statistics is shown in Table 3. The PLS calibration model for monomer purity resulted in R² (T1) = 0.98 (Figure 7A), R² (P1) = 0.86 (Figure 7B), and R² (P2) = 0.34 (Figure 7C), as shown in Figure 7. The RMSEP for monomer purity for P1 and P2 corresponds to 4.27% and 13.68%, respectively. The model was further used to classify samples according to a monomer purity of 90% or higher, where a total of 92% of the samples were classified correctly as either true positives (80%) or true negatives (13%), as is shown in Figure 7D. In each of the matrix cells, the value indicates the number and percentage of samples in each category; in the total matrix cell columns and rows, the total number of samples is given, with the percentage indicating the number of true positives or true negatives.
When validated using the two prediction sets, a relatively low RMSEP was achieved for data set P1, and no useful predictions were achieved for set P2. This led us to believe that the model was based on features that are absent from the P2 data set, which might be caused by changes in the elution profile and the content of co-eluting impurities. Alternatively, the type of HMW/LMW species might differ between data sets P1 and P2 due to changes in the elution buffer. Despite that, the model is capable of classifying samples with an acceptable degree of accuracy.
The PLS model developed for product concentration was found to have a significantly lower RMSEP, and was less susceptible to changes in the sample matrix conditions, than the PLS model for monomer purity. Examining the PLS regression plot for the concentration model (described in Table 4 and Figure 8B), multiple IgG spectral features can be identified, such as the Amide I and Amide III bands at wavelengths 1673 cm −1 , and 1236 and 1337 cm −1 , respectively. Bands resulting from the vibrations of amino acid groups can further be found at 757 and 1553 cm −1 for tryptophan, 1003 and 1208 cm −1 for phenylalanine, and 1208 cm −1 for tyrosine [72-74]. Regression coefficients corresponding to the IgG peaks as annotated in the literature suggest that the model is valid and is based directly on the protein spectra, rather than on the spectral features of other correlated components [73].

Table 4 notes: Trp = tryptophan, Tyr = tyrosine, Phe = phenylalanine. Cut-off points for PLS regression coefficients: product β > 0.0025; monomer β > 0.05. Assignments are approximate, as the reference peaks correspond to human IgG, whereas the investigated protein is an Fc-fusion protein.
Figure 8. PLS regression coefficient (β) plots for each PLS model generated for (A) the monomer purity model and (B) the product concentration model, with the wavenumbers corresponding to the Raman molecular signature of each variable highlighted by the shaded areas. The cut-off points for these PLS regression coefficients are product: β > 0.005; monomer: β > 0.075. The PLS model for monomer purity was developed using the raw spectra. The PLS model for product concentration was developed using spectra that were baseline-corrected using a 1st-order polynomial function, followed by the application of an SNV scatter-correction algorithm and a Savitzky-Golay smoothing filter. The PLS model for product concentration utilised 7 latent variables and the model for monomer purity utilised 8 latent variables.
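The preprocessing chain named in the caption can be sketched in a few lines; this is an illustrative reconstruction only, and the Savitzky-Golay window length and polynomial order are assumptions, as the text does not state them:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(wavenumbers, spectrum, sg_window=15, sg_polyorder=2):
    """Baseline correction (1st-order polynomial), SNV scatter correction,
    then Savitzky-Golay smoothing, applied per spectrum."""
    # Fit and subtract a 1st-order polynomial baseline
    baseline = np.polyval(np.polyfit(wavenumbers, spectrum, deg=1), wavenumbers)
    corrected = spectrum - baseline
    # Standard normal variate (SNV): centre and scale the spectrum
    snv = (corrected - corrected.mean()) / corrected.std(ddof=1)
    # Savitzky-Golay smoothing (window/order are assumed values)
    return savgol_filter(snv, window_length=sg_window, polyorder=sg_polyorder)
```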
The PLS regression plot for monomer purity in Figure 8A relies on the above-mentioned general protein peaks, as can be seen from the similarities between the two plots, but also has additional features, especially in the areas between wavelengths 831-901 cm −1 and 1437-1446 cm −1 . Neither of the two regions corresponds to the spectrum of the citrate buffer (data not shown). Although difficult to determine without further experimental data, this might correspond to aggregation, as the wavelength region has previously been suggested to correspond to a shift in the tyrosine Fermi doublet at 830/850 cm −1 , which is seen upon aggregation [45].
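This inspection of regression coefficients against literature band positions can be made systematic; the sketch below is hypothetical (the band list and cut-offs are taken from this section, while the +/- 10 cm −1 window is an assumption):

```python
import numpy as np

# Band positions quoted in this section (cm-1); assignments are approximate,
# since the reference peaks correspond to human IgG rather than the Fc-fusion protein.
BANDS = {"Trp": (757, 1553), "Phe": (1003, 1208), "Tyr": (1208,),
         "Amide III": (1236, 1337), "Amide I": (1673,),
         "Tyr Fermi doublet (aggregation-sensitive)": (830, 850)}

def coefficients_on_bands(wavenumbers, beta, cutoff, window=10):
    """List known bands where |beta| exceeds the cut-off within +/- window cm-1,
    as a sanity check that the model weights sit on protein features."""
    hits = []
    for name, peaks in BANDS.items():
        for peak in peaks:
            sel = np.abs(wavenumbers - peak) <= window
            if sel.any() and np.max(np.abs(beta[sel])) > cutoff:
                hits.append((name, peak, float(np.max(np.abs(beta[sel])))))
    return hits
```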
The results show that HT-Raman spectroscopy has the potential to support downstream development through the rapid determination of product concentration and monomer purity, which enables the prioritisation of samples for further analysis and therefore saves operator and instrument time. We have shown that robust product concentration predictions can be achieved, in line with the previously published literature, but predictions of monomer purity had a relatively large RMSEP and lower robustness. In order to make Raman spectroscopy truly attractive for DSP, improvements in terms of sensitivity need to be made.
Studies have described various spectral changes upon protein aggregation [45,46], although the spectral features do not seem to be very consistent and might differ from product to product. As a consequence, the PLS model described here might be relying on a combination of background signal peaks and concentration peaks, rather than the specific spectral signal for aggregation and/or fragmentation. This would also explain why the model predictions are significantly worse for the P2 data set, where the elution buffer was changed, which likely results in changes to the background spectra due to the changed sample matrix.
A specific spectral signature for aggregation/fragmentation might potentially be present, but the sensitivity of the set-up could be insufficient to detect it. The majority of samples had a relatively low aggregate content (below 5%), which would result in aggregate concentrations of about 0.025 to 1.655 mg/mL. Based on previous experiments, we have estimated the LOD at around 1 mg/mL; hence, a large portion of the aggregate content in the samples might be below the limit. We have shown ways to increase the sensitivity, including a change in the 96-well plate material, laser output, and objective magnification, which would likely increase the sensitivity of the model.
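The quoted aggregate concentration range follows directly from the product concentration span and the 5% aggregate content:

```python
# 5% of the 0.5-33.1 mg/mL product concentration span
low, high, fraction = 0.5, 33.1, 0.05
print(fraction * low, fraction * high)  # 0.025 and 1.655 mg/mL
```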
Although a reduction in the number of samples for HPLC analysis might be welcomed by the operators of such instruments, it might not necessarily justify the acquisition of such equipment. The major simplification of large screening experiments could be enabled by the ability to robustly predict monomer purity, especially in situations where LMW/HMW species elute throughout the main monomer peak without clear separation. The ultimate application of the technology would be the real-time detection of multiple CQAs based on the Raman spectra, potentially integrated with currently monitored variables such as UV, pH, osmolarity, etc. This would allow real-time control, leading to a higher product quality and ultimately supporting initiatives such as continuous manufacturing. This work is the first step towards such applications, as it highlights the current limitations as well as the potential improvements that can be implemented.
Strategies for Improving Predictions
Considering the high RMSEP of the monomer purity model, efforts were made to increase the sensitivity of the instrument by the optimisation of the acquisition settings. Adjustments to the acquisition settings were made to improve the signal intensity, including a change in the sample holder, microscope objective, and laser power. The data presented in this study were acquired using polypropylene 96-well plates, which were switched to stainless steel in order to improve the signal. As shown in Figure 9, there is a significant reduction in background between the PP plates and the steel plates. Furthermore, when the acquisition was performed using a 50X long-working-distance objective, the background signal was further decreased, especially in the spectral range of the water peak (1300-1500 cm −1 ). Additional improvement in the spectral quality came from using the higher laser power (300 mW vs. 30 mW), which was enabled through the switch to steel plates, as using a high laser power in the polypropylene plates caused the scorching of the plate and heating of the sample.

Figure 9. Comparison of acquisition settings of the spectra recorded by the HT Raman spectroscopy microscope using polypropylene plates (in black) compared to the improved acquisition settings using stainless steel (SS) plates, a higher laser power, and a higher magnification objective (in red).
Raman Spectroscopy Future Perspective
Raman spectroscopy has positioned itself as the leading on-line and at-line process analyser for biopharmaceutical processes. This technology is non-invasive, non-destructive, and has little interference with water, making it an ideal tool for both USP and DSP operations. One of the primary challenges of Raman spectroscopy is the weak Raman signal that reduces the sensitivity of the instrument and can be further hindered by background fluorescence, which, as demonstrated in this work, is highly problematic during the analysis of biological samples. Pre-processing the spectra can alleviate the majority of this fluorescence and ensure that accurate MVDA models can be generated to predict the variable of interest. However, ensuring that the developed MVDA model predictions have a low RMSE and low RMSEP does not always guarantee a robust, accurate model. To ensure the developed MVDA model is sufficient, an in-depth evaluation of the regression weights is required. In order to build a robust MVDA model, the regression coefficients should correspond to the correct wavenumbers associated with the specific molecular vibration bonds associated with the Raman scattering of the variable.

This work provides two useful tables (Tables 2 and 4) outlining some of the primary vibrational modes related to Raman scattering associated with glucose, lactate, and protein, based upon the previously published literature. This work demonstrated that, although the PLS models generated for the antibody and VCD concentrations resulted in a low RMSE and RMSEP, the regression weights of these models did not correspond to the antibody and VCD molecular vibrations; instead, they corresponded only to those wavenumbers associated with lactate and glucose molecular vibrations. Therefore, these metabolism-induced concentration correlations could lead to poor VCD and antibody predictions when either the glucose or lactate concentrations differed from the current glucose and lactate concentrations in this experiment. The Raman spectroscopy model generation workflow outlined in Figure 1 therefore highlights the importance of analysing the MVDA regression coefficients to ensure they align with the expected vibrational bonds associated with the variable, in addition to presenting the traditional RMSE and RMSEP metrics.

This paper also demonstrates the value of an HT Raman spectroscopy device and its potential to revolutionise the monitoring and control of automated microbioreactor systems. Although improvements to the prediction accuracy of the models are needed, HT Raman spectroscopy has the potential to replace the need for additional analytic equipment, which can reduce operating costs. The additional benefit of the HT-Raman spectroscopy device is its ability to analyse samples across both USP and DSP operations.
Conclusions
This paper demonstrates the value of implementing MVDA on the complex spectral data sets generated by an HT-Raman spectroscopy microscope to support the at-line monitoring and subsequent optimisation of USP and DSP operations, particularly those activities in early-stage development with limited sample volume availability. The USP case study investigated the ability of this device to predict the key process parameters typically measured off-line during cell culture for two different cell lines grown in a micro-bioreactor system cultivating 24 cell culture runs. The Raman spectra recorded throughout this mammalian cell culture were analysed using the Raman spectroscopy model generation workflow outlined in this paper, enabling the development of optimised PLS models resulting in accurate predictions of the glucose, lactate, viable cell density, and product concentrations. The RMSE and RMSEP were comparable to those of previously reported in-line Raman spectroscopy probes. However, upon investigation of the regression coefficients of these variables, the VCD and antibody PLS models were shown to be primarily correlated with the Raman vibrational signatures of lactate and glucose. Therefore, subsequent experimentation is required to validate the robustness of these models. Nevertheless, these results demonstrate the potential of this technology to predict these off-line variables using a single analytic device in comparison to three separate off-line analysers, which could greatly simplify the monitoring of these micro-bioreactor systems. Additionally, these at-line measurements require less volume than traditional analytic methods. This opens up opportunities for advanced control strategies and helps promote the inclusion of PAT in early-stage process development, thus simplifying the adoption of PAT within commercial-scale manufacturing.
The second case study involved a commercial DSP unit operation and further demonstrates the versatility and flexibility of this instrument. This case study focused on streamlining the sample collection of the CEX chromatography step, investigating different elution conditions during the purification of a fusion protein. The HT Raman microscope collected spectral data on 18 CEX runs operated in both step and gradient elution modes; these runs were operated using different buffer conditions. The potential of the Raman spectra to predict the total protein concentration and monomer purity was investigated. Both variables were modelled by an optimised PLS model leveraging the data from the Raman spectra. To demonstrate the robustness of the developed PLS model, two distinct validation data sets were considered. The first validation data set (P1) included identical buffers that were in the same range as those used to calibrate the model. The second validation data set (P2) contained elution fractions that used a buffer containing 50 mM of sodium citrate and a wider salt range that was outside of the ranges of the calibration data set. It was shown that the use of HT-Raman enables relatively accurate predictions of the protein concentration and monomer purity; however, when the model was validated using a sample set with a different buffer background (P2), a significant decrease in prediction accuracy was observed, suggesting separate models might have to be built for specific conditions.
In summary, the HT-Raman spectroscopy microscope demonstrated significant potential as a novel cross-functional PAT applicable to both USP and DSP operations. The device was shown to accurately predict the primary CPPs in USP and the CQAs relevant to DSP through the application of MVDA. Furthermore, this technology has the potential to reduce the reliance on multiple separate analytic devices, thus reducing the capital and operating costs. Additionally, the near real-time information generated can be further exploited to develop and implement advanced control strategies and process optimisation earlier in the process development lifecycle.

Funding: This research is associated with the joint UCL-AstraZeneca Centre of Excellence for predictive multivariate decision support tools in the bioprocessing sector, and financial support for S.G. and R.Z. is gratefully acknowledged. Furthermore, support from EPSRC for S.G. is also greatly appreciated (EP/I033270/1). UCL Biochemical Engineering hosts the Future Targeted Healthcare Manufacturing Hub (Grant Reference: EP/P006485/1) in collaboration with UK universities and with funding from the UK Engineering and Physical Sciences Research Council (EPSRC) and a consortium of industrial users and sector organisations.
Conflicts of Interest:
The authors declare no conflict of interest.
Dialectical behavioral therapy-based group treatment versus treatment as usual for adults with attention-deficit hyperactivity disorder: a multicenter randomized controlled trial
Background Studies on structured skills training groups have indicated beneficial, although still inconclusive, effects on core symptoms of ADHD in adults. This trial examined effects of Dialectical Behavioral Therapy-based group treatment (DBT-bGT) on the broader and clinically relevant executive functioning and emotional regulation in adults with ADHD. Methods In a multicenter randomized controlled trial, adult patients with ADHD were randomly assigned to receive either weekly DBT-bGT or treatment as usual (TAU) during 14 weeks. Subsequently, participants receiving TAU were offered DBT-bGT. All were reassessed six months after ended DBT-bGT. Primary outcomes were the Behavior Rating Inventory of Executive Function (BRIEF-A) and the Difficulties in Emotion Regulation Scale (DERS). Secondary outcomes included self-reported ADHD-symptoms, depressive and anxiety symptoms, and quality of life. We used independent samples t-tests to compare the mean difference of change from pre- to post-treatment between the two treatment groups, and univariate linear models adjusting for differences between sites. Results In total, 121 participants (68 females), mean age 37 years, from seven outpatient clinics were included, of whom 104 (86%) completed the 14-week trial. Entering the study, 63% used medication for ADHD. Compared to TAU (n = 54), patients initially completing DBT-bGT (n = 50) had a significantly larger mean reduction on the BRIEF-A (-12.8 versus -0.37, P = 0.005, effect size 0.64), and all secondary outcomes, except for symptoms of anxiety. All significant improvements persisted at 6 months follow-up. Change on DERS did not differ significantly between the groups after 14 weeks, but scores continued to decrease between end of group-treatment and follow-up. Conclusions This DBT-bGT was superior to TAU in reducing executive dysfunction, core symptoms of ADHD and in improving quality of life in adults with ADHD. Improvements were sustained six months after ended treatment. The feasibility and results of this study provide evidence for this group treatment as a suitable non-pharmacological treatment option for adults with ADHD in ordinary clinical settings. Trial registrations The study was pre-registered in the ISRCTN registry (identification number ISRCTN30469893, date February 19th 2016) and at the ClinicalTrials.gov (ID: NCT02685254, date February 18th 2016).
Background Attention-Deficit/Hyperactivity Disorder (ADHD) is a common, life-spanning, neurodevelopmental disorder [1] with prevalence estimates around 3% in adults [2]. Individual, health care and societal costs due to consequences of ADHD in adults are significant [3,4]. Multimodal treatment is preferred for ADHD, both for children and adults [5]. Pharmacological treatment is shown to be effective in reducing core symptoms of ADHD [6] and is recommended as a first-line treatment [7]. However, adults with ADHD often have symptoms and challenges beyond the core symptoms of attention deficits, hyperactivity and impulsivity. Common adjuvant and secondary symptoms among adults with ADHD include lack of organizational skills and coping strategies, difficulties with time management, low self-esteem as a consequence of continuous failure and misunderstandings, problems with emotional regulation, and comorbidity or symptoms from other psychiatric disorders [4,8-10]. Such problems may be less responsive to medication, and benefits of pharmacological treatment on long-term outcomes may be lower when initiated at later ages [11,12]. Psychotherapeutic interventions for ADHD should target these adjuvant problems [13].
Cognitive behavioral therapy (CBT), both individually [14,15], in group settings [16], and combined [17], is so far the most documented non-pharmacological treatment for adults with ADHD [18]. However, other treatments like metacognitive therapy and mindfulness have shown promising results [18,19]. Psychoeducation often forms part of non-pharmacological treatment programs, and may alleviate symptoms in itself [20,21]. Dialectical behavioral therapy (DBT) includes these aspects, but focuses in addition on acceptance of problems; the term dialectical refers to a balance between acceptance and change of behavior. DBT was originally developed for the treatment of borderline personality disorder (BPD) [22], but adaptations to other disorders have been made, including ADHD [23]. Common traits and symptoms between BPD and ADHD (i.e., impulsivity, emotional instability, and disorganized behavior) make DBT an interesting approach for ADHD. In 2002-04, Hesslinger and colleagues developed a DBT-based group treatment program adapted to adults with ADHD in Germany [24,25]. This group program differs from the original DBT for BPD by its shorter duration (12-14 weeks instead of 1-2 years), lack of individual sessions, and more specific focus on ADHD in the psychoeducation and skills training. Their first pilot study (8 participants) [25], and subsequent open, multicentre study (n = 72 patients) [26], as well as a later open feasibility study from Sweden (n = 98) [27], all showed reductions of both ADHD-symptoms and comorbid symptoms of depression in adults with ADHD after this group treatment. A smaller randomized controlled trial from the Swedish group (n = 51) showed that this specific group treatment was more effective in reducing core symptoms of ADHD compared to a loosely structured discussion group, but found no significant difference on comorbid depressive symptoms [28]. The largest study so far of this DBT-treatment, including 433 patients, used a four-armed design to compare the group treatment to general clinical management, combined with medication (methylphenidate) or placebo, respectively, in adults with ADHD [29,30]. Medication was found more effective in reducing core symptoms of ADHD during the trial; however, follow-up studies indicated that the DBT-based group treatment had a more long-lasting effect on general clinical status and quality of life [28,31]. It can be argued that traditional checklists of core symptoms are more suitable for assessment in trials of medication than of psychotherapy, where the goal is coping strategies rather than symptom reduction in itself [32]. Furthermore, in DBT-based treatment, two of the main tools, e.g., mindfulness and behavioral analyses, specifically target emotional regulation (ER) and executive functioning (EF), which have been shown to be important and independent mediators of impairments in adults with ADHD [33-35]. A pilot study of another DBT-based group treatment of 8 weeks found a positive effect on self-reported EF in college students (n = 33) with ADHD [36]. However, none of the larger, clinical studies of DBT-based treatment for ADHD published so far has specifically examined ER or EF. A main motivation for conducting this study was to increase the availability of evidence-based non-pharmacological treatment options for adults with ADHD. Implementation of the group treatment in a general clinical setting was therefore an important aspect of the study design.
The specific objectives of this study were to examine the efficacy of a manualized DBT-based group treatment compared to treatment delivered as usual for adults with ADHD. Our primary hypotheses were that the group treatment would be superior to treatment as usual on self-reported executive functioning and emotional regulation, and secondly, that the group treatment, relative to treatment as usual, would have a larger effect on core symptoms of ADHD, symptoms of depression and anxiety, and quality of life.

Keywords: Attention-deficit/hyperactivity disorder (ADHD), Adults, Non-pharmacological treatment, Group therapy, Dialectical behavioral therapy (DBT)
Study design and participants
The present study is a multicenter parallel-group randomized controlled trial (RCT) comparing the effects of a DBT-based group therapy (DBT-bGT) with 'treatment as usual' (TAU) for adults with ADHD. Included participants were randomly allocated (ratio 1:1) to either the active DBT-bGT or the control condition TAU by a blinded lottery procedure performed and supervised at each site. After this initial controlled trial, participants in the control group, i.e. those who initially received TAU, subsequently underwent the DBT-based group treatment in an uncontrolled extension phase of the study. For the control group, the post-RCT assessment was thus used as the pre-assessment before starting the group treatment, as long as there were less than 2 months between the end of the RCT and the start of the group treatment. Due to summer holiday, some sites did not start the group treatment for the control group within the first 2 months, and in that case, the control group went through a new pre-assessment before starting the group treatment. All participants were then re-assessed 6 months after having received their DBT-bGT.
The study protocol was approved by the Regional Committees for Medical Research (REC South East Norway, ID 2015/01523), and conducted in accordance with ethical standards following the principles of the Declaration of Helsinki. All included participants gave their written informed consent before entering the study.
Estimating the sample size based on a literature review and a power calculation, assuming a difference of at least 10% between the means of the two independent groups (and SD 15%), gave a need for about 50 participants in each group (alpha = 0.05, power = 0.9). Seven psychiatric adult outpatient clinics in South-Eastern and Western Norway contributed. Clinicians at each site included 16-18 patients between February 20th and December 31st 2016, who were then randomly allocated to either the active DBT-bGT (one group at each site) or the control condition TAU. Inclusion criteria were a clinical diagnosis of ADHD (according to the Diagnostic and Statistical Manual of Mental Disorders-IV), and a minimum age of 18 years. Diagnostic assessment was part of standard diagnostic procedures at the participating clinics, which include confirmatory assessment by a specialist in psychiatry or psychology. Exclusion criteria were ongoing psychiatric disorders and/or psychosocial factors considered to clearly interfere with the patients' motivation or ability to participate in the group therapy, i.e., ongoing substance or alcohol abuse, psychotic disorder, major depressive or manic episode, and suicidal behavior; organic brain damage, neurological diseases causing mental handicap, intellectual disability (IQ ≤ 70), and pervasive developmental disorder. Information about both ADHD and comorbid conditions was based on a questionnaire to the referring clinician, designed for this study. Participants did not undergo specific diagnostic assessment for this study in particular. However, clinical guidelines and standard clinical practice for diagnostic evaluation in psychiatric outpatient clinics in Norway include the use of diagnostic instruments corresponding to DSM-/ICD-criteria, i.e. the MINI/MINIplus interview for axis-1 psychiatric disorders, SCID-II/5 for personality disorders and DIVA for ADHD.
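For illustration, the reported sample-size estimate can be reproduced with a standard two-sample power calculation (a sketch; the tool the authors used for their own calculation is not stated):

```python
from statsmodels.stats.power import TTestIndPower

# A 10-point difference between group means with SD 15 gives Cohen's d = 10/15
n_per_group = TTestIndPower().solve_power(effect_size=10 / 15,
                                          alpha=0.05, power=0.9)
print(round(n_per_group))  # ~48, i.e. about 50 participants per group
```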
Patients were allowed to receive pharmacological treatment but should be stabilized on an adequate type of medication and dosage at least 6 weeks before inclusion, and as far as possible avoid changes in medication during the study period. However, as we also aimed for a naturalistic setting, we did not exclude patients that underwent medication change during the trial, if this was judged as necessary or clinically important by the treating clinician. Instead, we included a question about this in the questionnaire to the referring/treating clinician.
Intervention
The DBT-bGT was based on a Swedish version of the manual [37] originally developed by Hesslinger et al. [25]. The treatment uses elements from DBT such as psychoeducation, acceptance, mindfulness, and functional behavioral analysis, targeting symptoms and functional problems common in ADHD. It consists of 14 weekly group sessions, each lasting two hours, separated by a 15-min break. Each group included 7-9 adult patients with ADHD and two therapists. Group sessions followed a fixed structure, with a manualized instruction for the therapists and a workbook for the patients. A typical session starts by introducing a new mindfulness exercise performed together in the group. The first part of the session then focuses on feedback on last week's homework of skill training, while the second part introduces a new topic and related homework for the next week. The topics for the different sessions include psychoeducation, mindfulness, functional behavioral analyses, and how to understand and manage different symptoms and aspects of ADHD, e.g. impulsivity, addiction, emotional regulation, self-esteem, and relation to others [28]. Interaction between the participants is important, and the therapists should encourage and balance their feedback and discussion during the session. After each group session, patients received 15-20 min of individual coaching with one of the therapists. This was an add-on, according to a Swedish adaptation of the program [27]. The coaching focuses on adherence to homework related to each participant's situation and pre-defined goals.
The therapists were health service professionals with various backgrounds: medical doctors/psychiatrists, psychologists, nurses, and some special educators. There were no requirements of former DBT-training, but all therapists had clinical experience and an interest in adults with ADHD and/or CBT and/or group treatment. All group therapists participated in a 2-day seminar for an introduction to the principles of DBT and the use of the manual, led by one of the main contributors to the Swedish manual and studies on this method. To assure a common understanding and quality of the treatment, the therapists also participated in a minimum of two digital meetings led by the project leader to discuss and get feedback on challenges and practical issues encountered during the trial period.
The control condition of the trial (TAU) also lasted for 14 weeks. TAU was not standardized but rather defined as the treatment that the patient would have received if not included in the project. It could thus vary between both individuals and clinics. The most common treatment for this patient group in outpatient clinics in Norway, consistent with national clinical guidelines, consists of individual consultations delivered by a psychiatrist or psychologist, focusing on psychoeducation and general clinical management, often in combination with medication. To obtain more information about the actual treatment received by the control group, referring clinicians were asked to respond to some questions about the frequency and focus of the delivered treatment in the time-period of the trial.
Primary outcomes
Participants were assessed one week before treatment (pre-treatment) as baseline, one week after the 14-week trial (post-treatment), and then again six months after ended DBT-bGT for all the participants (non-controlled follow-up). Primary outcomes were symptoms of executive functioning (EF) measured by the Behavior Rating Inventory of Executive Function - Adult Version (BRIEF-A) and emotion regulation (ER) according to the Difficulties in Emotion Regulation Scale (DERS). The BRIEF-A consists of 75 items about self-reported executive functioning operationalized in different domains of everyday life [38]. The presence of each item is rated on a 3-point Likert scale from 1 (never) to 3 (often). Several subscales may be calculated, but for the purpose of this study we used the sum score (global executive composite score). The DERS is a questionnaire consisting of 36 statements about thoughts, reactions and behavior related to one's own emotional state [39]. Participants rate how often the statements apply to them, from 'almost never' (0-10%) to 'almost always' (91-100%). Scores may be calculated for separate subscales and summed to a total score, the latter being used in this study.
Secondary outcomes
Secondary outcomes were core symptoms of ADHD on the Adult ADHD Self-Report Scale (ASRS, the original 18-item version), symptoms of depression and anxiety (as defined by the Beck Depression Inventory (BDI) and the Beck Anxiety Inventory (BAI), respectively), and quality of life measured by the Adult ADHD Quality of Life Scale (AAQoL). The ASRS [40] grades the presence of core symptoms of ADHD for the last 6 months on a Likert scale from 0 (never) to 4 (very often). (For this study, the time-period for reported symptoms at the post-trial and follow-up assessments was specified as 'since last evaluation' or 'last month'.) The AAQoL [41] is a 29-item questionnaire assessing health-related and disease-specific measures in different domains of quality of life in adults with ADHD. The BDI, version II [42], and the BAI [43] are self-report scales for last week's occurrence of symptoms of depression and anxiety, respectively.
Other parameters
As baseline characteristics, we recorded educational level, employment status, clinical subtype of ADHD, diagnosed comorbid mental disorders, and information about medication for ADHD as reported by the patients' clinicians. Patients also filled in two screening questionnaires for alcohol- and substance-related problems: the Alcohol Use Disorder Identification Test (AUDIT) [44] and the Drug Use Disorder Identification Test (DUDIT) [45], respectively.
Statistical analyses
Changes in mean scores for outcome measures from pre- to post-treatment within the DBT-bGT and TAU groups, respectively, were analyzed with paired sample t-tests. We used independent samples t-tests to compare the mean difference of change from pre- to post-treatment between the two treatment groups. To account for the non-independence and nested nature of data due to participants representing different sites, we used univariate linear models, with site/clinic as a fixed factor, and excluding the intercept from the model. This model yields an estimate of the group (= intervention) effect after having controlled for the different levels at each site. Since site and therapists represent the same level in this model (each site had only one group with one set of therapists), we performed the analyses only with site as a fixed factor in the model.
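A minimal sketch of these two analysis steps is shown below; it is illustrative only (the study used IBM SPSS Statistics, and the data-frame column names here are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# df is assumed to hold one row per participant with columns
# 'change' (post minus pre sum score), 'group' ('DBT' or 'TAU') and 'site'.
def compare_change(df: pd.DataFrame):
    # Independent samples t-test on pre-to-post change between groups
    t, p = stats.ttest_ind(df.loc[df.group == "DBT", "change"],
                           df.loc[df.group == "TAU", "change"])
    # Univariate linear model with site as a fixed factor; '- 1' drops the
    # intercept, mirroring the model description in the text
    fit = smf.ols("change ~ C(group) + C(site) - 1", data=df).fit()
    return t, p, fit.params
```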
For the non-controlled extension part of the study, we used paired sample t-tests to assess change from baseline to 6-months follow-up, and analyses of variance (ANOVA) for repeated measures to assess change in symptom scores from baseline to post-treatment from the RCT and at 6 months follow-up after group treatment for all participants.
All analyses were pre-specified and performed with the software package IBM SPSS Statistics (version 24). Standardized effect sizes (ES) of the treatment were calculated by dividing the mean difference in symptom scores from pre-to post treatment with the pooled standard deviation (SD) of the respective measure, and reported as Cohen's d. The significance threshold was set at 5% (two-tailed) and we used two-sided 95% confidence intervals (CI). Analyses included, and were restricted to, participants with actual responses on each of the respective questionnaires, i.e., excluding patients with missing values analysis-by-analysis.
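The effect-size definition above amounts to the following (a sketch; the variable names are hypothetical):

```python
import numpy as np

def cohens_d(change_a, change_b):
    """Mean difference in pre-to-post change divided by the pooled SD."""
    a, b = np.asarray(change_a, float), np.asarray(change_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd
```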
Sample characteristics
Of the 121 randomized patients, three withdrew before starting the treatment and 104 (86%) completed the 14-week trial (Fig. 1). Mean age was 37 years (range 21-59) and 56% were female. Fewer than one in five were employed full time or studying, and one in three were out of work (unemployed, on sick leave, receiving a disability pension or work assessment allowance). The most frequent subtype of ADHD was the combined (68%), followed by the inattentive (22%). The mean total ASRS score was 46.8 (range 0-72). Most patients (88%) had tried pharmacological treatment for ADHD, and 63% were still using ADHD-medication when entering the study. Two thirds had at least one comorbid psychiatric diagnosis. At baseline, patients allocated to DBTb-GT showed statistically significantly higher mean scores of depressive symptoms (BDI score 20.1 versus 15.1, p = 0.02), and of AUDIT and DUDIT, than the TAU group. Other clinical or sociodemographic variables did not differ significantly between the two treatment groups at baseline (Table 1).
Primary outcomes
Compared to individuals receiving TAU (n = 54), patients completing DBT-bGT (n = 50) reported a significantly larger mean improvement of EF (reduction on the BRIEF total score -12.8 versus -3.7, respectively). The difference in change between the groups was statistically significant (p < 0.001), with an ES of 0.64, which according to common interpretations of Cohen's effect sizes corresponds to a medium effect. The proportion of patients with an actual reduction on the BRIEF total score was 74.0% and 53.8% for the DBTb-GT and TAU groups, respectively (Pearson chi-square (χ 2 ) 4.48, p = 0.034). The proportion of patients with a BRIEF score in the clinical range (i.e. a BRIEF T-score of 65 or more) decreased significantly from 81.4% to 64.0% (χ 2 = 6.3, p = 0.019) in the DBT group, compared to a slight, borderline significant increase from 75.4% to 77.4% (χ 2 = 5.2, p = 0.051) in the TAU group, from before to after treatment.
Participants of the DBT group also showed a larger intra-group mean reduction on the DERS total score than the TAU group (-7.5, p = 0.03 vs. -3.9, p = 0.15, respectively), but the difference in change between the two groups was not statistically significant (p = 0.39) (Table 2).
Follow-up at 6 months
Overall, the observed symptom reductions from pre- to post-treatment for the DBT group persisted at 6 months follow-up. A continued improvement was found for the BRIEF and DERS scales, where 28% and 39% of the total symptom reduction, respectively, occurred after ended treatment (Table 3). For the BDI and AAQoL there was a slight decline of the observed improvements at post-treatment, but still with a significant improvement relative to baseline (Table 3).
Participants receiving TAU in the RCT showed significant and corresponding improvements after completing the post-trial additional 14-week DBT-bGT, and at 6 months follow-up thereafter (Fig. 2).
Adherence to treatment, feasibility and safety
Among the 121 patients, 10 of the 60 patients (16.7%) randomized to DBT-bGT were registered as 'drop-outs', compared to seven of 61 (11.5%) in the TAU group (χ 2 = 0.68, p = 0.41). Reasons for dropping out of the group treatment were mainly related to practical and psychosocial circumstances, e.g. time schedules at work, sickness, and relational break-ups (see Fig. 1). Only one patient reported the drop-out being related to the treatment ('too demanding'). The mean number of lost sessions for patients completing the group treatment was 1.38 (range 0-7, median 1), with 85% participating in 12 or more of the 14 sessions. No adverse events related to the DBT-bGT were reported. Five of the seven clinics in

According to information from the clinicians following the participants in TAU, the TAU consisted mostly of individual consultations of supportive character, including pharmacological controls and adherence for those using medication. The number of consultations varied from zero (n = 1) to weekly (n = 3), with a mean of 4.7 and a median of 4 consultations during the 14-week trial period. Approximately 1 of 3 patients underwent some kind of change in their ADHD-medication during the trial, but the proportions did not differ significantly between the DBT and TAU groups (n = 12/29.3% and n = 14/33.3%, respectively, chi-square test p = 0.845). Changes included both reductions and increases of dosage, and we could not observe any systematic difference in reported reasons for change in medication between the groups.
Discussion
This multicenter study is among the largest randomized trials on a psychotherapeutic intervention for adults with ADHD. The main finding was that patients receiving a manualized 14-week DBT-bGT reported significantly better improvements of self-reported executive functioning (EF), core symptoms of ADHD and quality of life compared to patients receiving treatment as usual. Effect sizes of the DBTb-GT were moderate to large. This should be of particular notice, since most of the patients were already stabilized on medication at inclusion. We also found a significant reduction of depressive symptoms. Improvements were maintained six months after ended group treatment in a non-controlled follow-up for all participants after having received DBT-bGT. The change in emotion regulation (ER) according to DERS did not differ between the two treatment groups immediately after treatment, but showed a continued and significant improvement six months after ended group treatment, indicating a possible effect at longer term. This study is the first to assess primary effects of this specific DBTb-GT on EF and ER among adults with ADHD in a controlled trial. The treatment effect on self-reported EF (according to BRIEF) is thus a novel finding. It is however in line with findings from some studies of related group interventions, like mindfulness-based cognitive therapy [19,46] and mindfulness meditation training [47], whereas a study on standard CBT did not find any effect on BRIEF [16]. Interestingly, the cited studies showing improvement of EF all included mindfulness as a treatment component, indicating its putative role in 'brain-training'.

Table 1. Sociodemographic and clinical characteristics of participants at baseline (DBTb-GT, n = 60; TAU, n = 61). Abbreviations: BRIEF-A, Behavior Rating Inventory of Executive Function - Adult Version (total sum score); DERS, Difficulties in Emotion Regulation Scale (total sum score); ASRS, Adult ADHD Self-Report Scale (total sum score); AAQoL, Adult ADHD Quality of Life Questionnaire (total sum score); BDI, Beck Depression Inventory (total sum score); BAI, Beck Anxiety Inventory (total sum score); AUDIT, Alcohol Use Disorder Identification Test (total sum score); DUDIT, Drug Use Disorder Identification Test (total sum score); SD, standard deviation; n, number of participants. ‡ The number of responders varies between questionnaires, due to missing data for some participants.
We did not find any significant effect of DBTb-GT on ER. Although in line with a more indirect measure from the COMPAS study (i.e. a subscale of impulsivity and emotional lability) [31], this was somewhat unexpected, since ER is one of the main targets of DBT. Some explanations may be suggested: first, the DBTb-GT for adult ADHD of 14 weeks is of considerably shorter duration than the original DBT for personality disorders, and may thus represent insufficient time or specificity to alleviate emotional problems. Our finding that ER improved at the six months' follow-up, although the non-controlled nature of this extension prevented us from drawing causal inferences, supports this. The Swedish group found no effect of the DBTb-GT on a Perceived Stress Scale in their controlled study [28], whereas a later uncontrolled study demonstrated significant impact of DBTb-GT on both symptoms of perceived stress, mindful attention and acceptance after 14 weeks [27].

Table 2. Outcome measures before and after receiving dialectical behavioral therapy-based group treatment, and treatment as usual. Columns: outcome measure (n); pre-treatment, mean (SD); post-treatment, mean (SD); change pre-post, mean (SD); statistics for analyses within group (a); statistics for analyses between groups (b). a Mean difference of sum score from pre- to post-treatment within each group, with standard deviation (SD) and t (df) from paired sample t-tests. b p-value from independent samples t-test of the mean difference of change between groups. c Effect size for the difference in change between groups, reported as Cohen's d. Abbreviations: n, number of included patients in the paired sample t-test for each outcome measure; TAU, treatment as usual; BRIEF-A, Behavior Rating Inventory of Executive Function - Adult Version (total sum score); DERS, Difficulties in Emotion Regulation Scale (total sum score); ASRS, Adult ADHD Self-Report Scale (total sum score); AAQoL, Adult ADHD Quality of Life Questionnaire (total sum score); BDI, Beck Depression Inventory (total sum score); BAI, Beck Anxiety Inventory (total sum score).
Interestingly, two recent uncontrolled studies of group therapies addressing emotional problems in adults with ADHD showed that 14 weeks may be enough time to improve ER [48, 49]. A second reason for the inconsistent effects on ER may be the operationalization of emotional dysregulation as a phenomenon. The DERS questionnaire was not originally developed for adults with ADHD and may not capture the emotional traits most typical of this patient group. We applied the DERS because it includes components that are important targets of DBT (e.g. awareness and acceptance of emotions). The two above-mentioned studies targeting ER in adults with ADHD also used the DERS: a recent pilot study of group treatment based on a combination of CBT and DBT found a positive effect on the DERS, which correlated with the amount of mindfulness practiced by the participants during treatment [48]. The other, a larger multicenter study, examined the effects of the authors' own group therapy ('Group Therapy for Improving Emotional Acceptance and Regulatory Skills in Adults with ADHD'), based on elements from CBT, DBT, Acceptance and Commitment Therapy, and Emotion Regulation Group Therapy [49]. They found a significant effect on ER as measured by the DERS. It should be noted that their study included only ADHD patients who had 'identified problems with emotion regulation difficulties', and the results may therefore not be directly comparable to ADHD patients in general.
Related to this is our finding of a positive effect of DBTb-GT on symptoms of depression. This is in line with the Morgenstern study [27], whereas controlled trials [28, 31, 36] did not find any effect of DBT-based treatment on depressive symptoms in adults with ADHD. One explanation may be the lower baseline BDI scores in the studies with negative findings. Indeed, a mean BDI score of 20 in our treatment groups indicates that some of these patients had scores above the conventional cut-off (i.e. > 20) for a depressive episode. Although these patients were not judged clinically as having a depressive episode that would interfere with treatment, this finding may motivate future studies to assess the predictive role of depressive symptoms on the effect of this group treatment.
[Table 3: Symptom change from baseline to follow-up for participants randomized to dialectical behavioral therapy-based group treatment. Values are mean (SD); change tested with paired-sample t-tests, t (df). Abbreviations: BRIEF-A = Behavior Rating Inventory of Executive Function - Adult Version, total sum score; DERS = Difficulties in Emotion Regulation Scale, total sum score; ASRS = Adult ADHD Self-Report Scale, total sum score; AAQoL = Adult ADHD Quality of Life Questionnaire, total sum score; BDI = Beck Depression Inventory, total sum score; BAI = Beck Anxiety Inventory, total sum score; SD = standard deviation.]
The effect on self-reported ADHD core symptoms was larger in our study than in other controlled trials based on the same treatment manual [28, 31, 50]. The larger effect size found in our study may be due to slight differences in the actual delivered treatment, i.e. 14 sessions instead of the 12 and 13 in the COMPAS and Swedish studies, respectively, and, perhaps more importantly, the addition of individual coaching in our study. Another explanation may be the differences in the control conditions, i.e. the non-standardized TAU in our study versus the more standardized general clinical management or discussion groups in the other studies. The COMPAS study did not find any difference in effect between DBTb-GT and general clinical management on clinician-rated core ADHD symptoms [31]. However, the DBTb-GT was more effective on a more general outcome measure, the clinical global impression scale [51]. Further, in one of their follow-up studies assessing the patient's perspective, the DBTb-GT was rated superior to general clinical management in reducing self-reported ADHD symptoms, with only small to moderate correlations with the clinician-based measures [52]. One may thus question whether potential benefits of the DBTb-GT are partly undetected by traditional clinician-based assessment. After all, the explicit goal of this treatment is to learn how to live with and manage symptoms rather than symptom reduction per se [28]. In line with this, a recent feasibility trial of this group treatment found no significant difference in self-reported core symptoms of ADHD, although 88% of the participants reported that they could control their symptoms better after the end of group treatment [50].
[Fig. 2: Change in symptom scores from the randomized controlled trial and the extended follow-up after the end of group treatment. The graphs show mean symptom scores for main and secondary outcomes at pre- and post-treatment for the controlled trial (dialectical behavioral therapy-based group treatment (DBTb-GT) and treatment as usual (TAU)), and at the extended uncontrolled follow-up, i.e. six months after all participants had received the DBTb-GT. Analyses and graphs are based on repeated-measures ANOVA in SPSS. Bars represent 95% CI. Abbreviations as in Table 3.]
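The within-group changes in Table 3 rest on paired-sample t-tests. As a minimal illustration of that analysis (the scores below are hypothetical, not study data), the following Python sketch computes the paired t statistic and a 95% CI for the mean change:

```python
# Minimal sketch of a paired-sample t-test for Table 3-style
# baseline-to-follow-up comparisons. Scores are hypothetical.
import numpy as np
from scipy import stats

baseline = np.array([52, 61, 48, 55, 67, 59, 50, 63])   # e.g. BRIEF-A totals
followup = np.array([45, 50, 44, 49, 60, 55, 47, 52])

diff = followup - baseline
t_stat, p_value = stats.ttest_rel(followup, baseline)

# 95% confidence interval for the mean change
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)
ci = stats.t.interval(0.95, df=n - 1, loc=diff.mean(), scale=se)

print(f"mean change = {diff.mean():.2f}, t({n - 1}) = {t_stat:.2f}, "
      f"p = {p_value:.4f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```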
The finding that participants in the group treatment reported a larger increase in quality of life relative to TAU in our study is in line with some of the other studies of DBTb-GT [27, 36], but not all. The COMPAS study found that the increase in quality of life, still significant 1.5 years after the end of treatment, occurred regardless of the initial treatment arm. They argue that the lack of difference between the group treatment and general clinical management probably reflects a more non-specific treatment effect [53]. Interestingly though, in the context of the earlier discussion on emotion regulation, scores on the quality-of-life domain specifically related to feelings increased more among participants who had received the DBT-based group treatment [53].
Learning and practicing skills to cope with ADHD symptoms cognitively and emotionally is typically part of several psychosocial treatments based on CBT [54] and, as discussed, 'third wave' behavioral interventions based on e.g. meta-cognition, mindfulness and acceptance are increasingly studied [18]. Comparing treatments directly head-to-head may however be challenging, due to slight differences in treatment elements and study designs, as well as in the labelling of the intervention. Hence, in this study, different components of the DBTb-GT, such as the group format, the principle of acceptance, the mindfulness exercises, and the individual coaching between group sessions, may have had beneficial effects on different problems of ADHD. Further, specifically designed studies should pursue the question of 'what works for whom'.
Limitations and strengths
Evaluating the effects of this non-pharmacological treatment raises methodological issues related to the complexity of the intervention, the influence of different care providers and the expertise of the centers, and the open-label design [55]. Even though we used a manual-based therapy procedure for the DBT group, and controlled for site in the analyses, sources of variation in the delivery of the treatment may exist. However, our randomized trial design implied a corresponding variation in the comparison group, TAU. Because the TAU condition was not a group-therapy setting, we cannot infer specific effects of the DBT treatment; only superiority of this group treatment as a whole compared to the individualized TAU.
Another potential limitation of this study is that the TAU condition was not standardized, and thus could vary from a few to weekly consultations during the 14-week period. Further, since patients in the TAU group knew they would be offered DBTb-GT after the first, randomized phase of the study, some may have perceived TAU more as a 'waiting list' condition. This could have lowered their expectations of the TAU they received and potentially biased their symptom reports after TAU in a negative direction, i.e. in favor of a larger effect size for the DBTb-GT. On the other hand, non-standardized TAU is more representative of clinical reality, making the results relevant for clinical practice.
The main outcome measures in this study were based on self-reported symptoms and functioning. We thus lacked a clinician-based measure, which is generally considered more objective. However, the last decade's increased focus on patient-centered health services has led to recommendations to use patient-reported outcome measures, particularly for psychological symptoms [56]. A review of studies on adults with ADHD found overall good concordance between clinician-based and self-report measures of the same (core) symptoms [57]. On the other hand, the significant reduction in self-reported ADHD symptoms at follow-up for patients receiving group treatment in the COMPAS study was no longer significant when using clinician-rated ADHD symptoms [51]. The two types of measures probably capture different aspects of the studied phenomenon and may not be directly comparable to each other.
This study has several strengths. It is one of the largest published randomized trials of a psychotherapeutic intervention for adults with ADHD. Further, the multicenter design limits potential therapist- or clinician-related bias, and the naturalistic setting, i.e. including patients both with and without pharmacological treatment, few exclusion criteria, and clinicians with various professional backgrounds and training, increases the generalizability of our results to ordinary clinical settings.
Conclusions
Overall, this manualized 14-week DBT-based group treatment was effective in improving self-reported executive functioning, core symptoms of ADHD, and quality of life in adults with ADHD, with improvements still lasting six months after the end of treatment. The lack of effect on emotion regulation immediately after treatment may reflect that emotional problems represent a more complex phenomenon requiring more specific skills training or a longer treatment duration. Limitations of the study include the lack of clinician-based outcome measures, the lack of standardization of the control condition (treatment as usual), and that the six-month follow-up did not include a control condition. Altogether, the design and results of this study indicate that this group treatment is an effective, feasible and well-tolerated non-pharmacological option for adult patients with ADHD.
|
2022-11-28T15:03:08.211Z
|
2022-11-28T00:00:00.000
|
{
"year": 2022,
"sha1": "c8c66100c0a31eb1ee2eecf703b0697ba2380b8d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "c8c66100c0a31eb1ee2eecf703b0697ba2380b8d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15592348
|
pes2o/s2orc
|
v3-fos-license
|
Prevalence, associated factors and heritabilities of metabolic syndrome and its individual components in African Americans: the Jackson Heart Study
Objective Both environmental and genetic factors play important roles in the development of metabolic syndrome (MetS). Studies about its associated factors and genetic contribution in African Americans (AA) are sparse. Our aim was to report the prevalence, associated factors and heritability estimates of MetS and its components in AA men and women. Participants and setting Data of this cross-sectional study come from the large community-based Jackson Heart Study (JHS). We analysed a total of 5227 participants, of whom 1636 from 281 families were part of a family study subset of JHS. Methods Participants were classified as having MetS according to the Adult Treatment Panel III criteria. Multiple logistic regression analysis was performed to isolate independently associated factors of MetS (n=5227). Heritability was estimated from the family study subset using variance component methods (n=1636). Results About 27% of men and 40% of women had MetS. For men, factors associated with having MetS were older age, lower physical activity, higher body mass index, higher homocysteine and lower adiponectin levels (p<0.05 for all). For women, in addition to all these, lower education, current smoking and higher stress were also significant (p<0.05 for all). After adjusting for covariates, the heritability of MetS was 32% (p<0.001). Heritability ranged from 14% to 45% among its individual components. Relatively high heritability was estimated for waist circumference (45%), high density lipoprotein-cholesterol (43%) and triglycerides (42%). Heritability of systolic blood pressure (BP), diastolic BP and fasting blood glucose was 16%, 15% and 14%, respectively. Conclusions Stress and low education were associated with having MetS in AA women, but not in men. Higher heritability estimates for lipids and waist circumference support the hypothesis of lipid metabolism playing a central role in the development of MetS and encourage additional efforts to identify the underlying susceptibility genes for this syndrome in AA.
BACKGROUND
Metabolic syndrome (MetS) is a clustering of different interrelated cardiometabolic risk factors including obesity, elevated blood pressure (BP), dyslipidemia and impaired fasting plasma glucose (IFG). These risk factors often occur together and increase cardiovascular disease (CVD) deaths almost by threefold to fourfold. 1 2 Since MetS is the combined effect of more than one risk factor, its aetiology is complex.

Strengths and limitations of this study
▪ The African American community disproportionately suffers from metabolic syndrome, but relatively little is known about the genetic contribution and the environmental influence of this syndrome among African Americans.
▪ Using the data from the large community-based Jackson Heart Study, this study showed a high prevalence of metabolic syndrome, and reported the associated factors and heritability estimates of metabolic syndrome and its components in African Americans.
▪ We are not aware of any published data that explored these issues among African Americans from such a big setting. The large sample size also provided a more reliable statistical ground to detect heritability estimates than nuclear families, twin pair data or sib-pair data.
▪ Potential limitations of this study included the cross-sectional observational design, which could only confirm the associations of the factors with metabolic syndrome, but not the causality, and the absence of information on shared environmental factors like childhood environment and neighbourhood factors, which might slightly overestimate the heritability results.
▪ This study encourages additional efforts to identify the underlying susceptibility genes for metabolic syndrome among African Americans.
Factors like lifestyle, gender, ethnicity, socioeconomic status, psychosocial factors and some inflammatory markers play key roles in the pathogenesis of MetS. [1][2][3] Findings also suggest that MetS clusters in families [4][5][6][7][8] and has reasonable heritability, which is defined as the proportion of phenotypic variance in a trait that is attributable to the additive effects of genes. [9][10][11][12][13][14][15][16][17] Thus, the interplay of environmental and genetic factors makes MetS a multifactorial disorder. Though the pathogenesis, diagnosis and treatment of MetS remain complex because of its multifactorial nature, the construct of MetS is an important risk-assessment method for early detection and early intervention of CVD. In spite of the steady decline in CVD mortality during recent decades, CVD is still the leading cause of death in all Americans, and is highly prevalent in persons of African ancestry. 18 It is important to note that the majority of studies that explored the associated factors and quantified the heritability of MetS almost exclusively involved Caucasians. 10-14 19 Relatively little is known about these issues among the adult African American (AA) population. [15][16][17] Using the Jackson Heart Study (JHS) data, the objective of this cross-sectional study was to report the prevalence, risk factors and heritability estimates of MetS and its components in AA men and women.
Data source
The data for this analysis come from the large community-based JHS, which comprises 5301 adult AA enrolled between September 2000 and March 2004 and residing in the Jackson, Mississippi metropolitan area. 20 About 24% of the 5301 adult AA participated in the JHS family study component. 21 The family study component of JHS contained first degree (parent-offspring and siblings), second degree (grandparent-grandchild, avuncular, half-siblings) and third degree or more distant (great grandparent-grandchild, grand avuncular, half avuncular, first cousins, half first cousins, second cousins) family members. The JHS was approved by the University of Mississippi Medical Center Institutional Review Board, and the participants gave written informed consent. Details of the study design and data collection methods are described elsewhere. 21 22 The current study data were obtained from the baseline clinic visit during 2000-2004. After excluding 74 participants who did not have information on their MetS status, the current analysis had a total of 5227 participants, of whom 1636 from 281 families contributed to the heritability analyses.
Two measures of the waist at the level of the umbilicus and in the upright position were averaged to calculate WC. Sitting BP was measured twice at 5 min intervals with a standardised Hawksley random-zero sphygmomanometer, and the average of the two measurements was used. Fasting blood samples were collected according to standardised protocols, and the assessments of FPG and lipids were processed at the Central Laboratory, University of Minnesota. 23 Respondents were asked about their medication usage for hypertension, diabetes mellitus and high lipid levels. Individuals were classified as having MetS if they had at least three of the following five components: (1) large WC or abdominal obesity (>102 cm for men and >88 cm for women); (2) hypertriglyceridaemia (fasting plasma triglyceride concentration ≥150 mg/dL or on drug treatment); (3) low HDL-C levels (<40 mg/dL for men and <50 mg/dL for women or on drug treatment); (4) elevated BP (≥130 mm Hg SBP or ≥85 mm Hg DBP or on drug treatment); or (5) IFG (≥110 mg/dL or on drug treatment). 24 25

Data about sociodemographic (age, sex and education), psychosocial (stress) and lifestyle (physical activity, smoking status and alcohol consumption) variables were also collected. Age was classified as: 20-39, 40-59, 60-79, and 80 years and above. Education status was self-reported and was divided into three categories (less than high school, high school/some college and college/associate degree or higher, with less than high school as the referent). Stress level was obtained from the Global Perceived Stress Scale, an 8-item questionnaire that measures the severity of chronic stress experienced over the prior 12 months. 26 The physical activity index composite score was calculated as the sum of four different domains of physical activity: active living, work, home and garden, and sport and exercise indices. 27 Smoking status was classified as never (referent), current and former. Alcohol consumption status was defined as 'yes' if the participants currently consumed alcoholic beverages and 'no' (referent) if they had stopped drinking for more than a year or had never consumed alcohol. Information on clinical factors such as body mass index (BMI; weight in kilograms divided by the square of height in metres), C reactive protein (CRP, mg/dL), serum adiponectin (mg/dL) and serum homocysteine (µmol/L) was also obtained. 23

Analysis
Data from the full cohort (n=5227) were used to explore the risk factors of MetS. Sociodemographic, psychosocial, lifestyle and clinical characteristics of participants were compared by gender and MetS status using the χ² test or independent t-test. The primary outcome measure for this analysis was the presence of MetS, evaluated as a dichotomous variable. Logistic regression analysis was used to examine the association between each independent variable (age, education level, stress level, physical activity score, smoking status, alcohol consumption status, BMI, CRP, fasting total cholesterol, serum concentration of adiponectin and serum homocysteine) and the outcome of MetS. A multiple logistic regression model was fitted including all variables to isolate the statistically significant predictors of MetS. The regression analysis was conducted using SAS software, V.9.3. 28
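To make the ATP III decision rule above concrete, here is a minimal Python sketch of the classification step. The thresholds follow the paper, while the field names and the function itself are illustrative assumptions, not the study's actual SAS code:

```python
# Hypothetical illustration of the ATP III rule described above
# (>= 3 of 5 components); not the study's SAS implementation.
def has_mets(sex, wc_cm, tg_mgdl, hdl_mgdl, sbp, dbp, fpg_mgdl,
             on_tg_drug=False, on_hdl_drug=False,
             on_bp_drug=False, on_glucose_drug=False):
    """Return True if at least three of the five ATP III components are met."""
    components = [
        wc_cm > (102 if sex == "M" else 88),                   # abdominal obesity
        tg_mgdl >= 150 or on_tg_drug,                          # hypertriglyceridaemia
        hdl_mgdl < (40 if sex == "M" else 50) or on_hdl_drug,  # low HDL-C
        sbp >= 130 or dbp >= 85 or on_bp_drug,                 # elevated BP
        fpg_mgdl >= 110 or on_glucose_drug,                    # impaired fasting glucose
    ]
    return sum(components) >= 3

# Example: a woman with large WC, low HDL-C and treated hypertension
print(has_mets("F", wc_cm=95, tg_mgdl=120, hdl_mgdl=45,
               sbp=124, dbp=78, fpg_mgdl=98, on_bp_drug=True))  # True
```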
Heritability analysis
After checking the pedigree data for inconsistencies, a total of 1636 individuals from 281 families were analysed to calculate the heritability estimates by variance component methods using the SOLAR (Sequential Oligogenic Linkage Analysis Routines) software package, to quantify the proportion of the variance in MetS and in its individual components that was attributable to the additive effects of genes. 29 We estimated the heritabilities of individual MetS components (treated as continuous variables) including WC, SBP, DBP, FPG, fasting triglyceride and plasma HDL-C, with adjustment for age, education level, physical activity index composite score, smoking status, alcohol consumption status and respective medication usage. Log-transformed values of FPG and triglycerides were used due to deviation from the normal distribution. Heritabilities were calculated using a standard quantitative genetic variance-components model implemented in SOLAR. 29 This approach applies maximum-likelihood estimation to a mixed-effects model that incorporates fixed covariate effects, additive genetic effects and residual error. The heritability of MetS (a discrete variable) was analysed with a threshold model in SOLAR. The method assumed that an individual belonged to a specific affected status if an underlying genetically determined risk exceeded a certain threshold. 30 For all the analyses, a significance level of p<0.05 was used.

RESULTS
Table 1 shows the descriptive characteristics of participants by gender. About 40% of men and women had college-level education or beyond. A clear gender difference, however, was found for alcohol use and smoking, with women being far less likely than men to consume alcohol and smoke cigarettes (p<0.001). Women reported greater levels of stress, but lower levels of physical activity, than men (p<0.001). Women also had higher BMI, CRP and adiponectin levels but lower homocysteine levels than men (p<0.001 for all). Table 1 also shows the prevalence of MetS and its individual components among the JHS participants. About 27.34% of the men and 38.94% of the women had MetS (p<0.001). In terms of individual components, women had higher abdominal obesity (75.70% vs 41.03%, p<0.001) and IFG (22.45% vs 19.64%, p<0.001), but lower hypertriglyceridaemia (13.23% vs 18.39%, p<0.001) than men.

Table 2 shows the descriptive characteristics of participants by MetS status. Those who had MetS were older, less educated, less likely to smoke, less likely to consume alcohol and less physically active (p<0.001 for all). They also had higher BMI, CRP and homocysteine levels but a lower adiponectin concentration (p<0.001 for all). The unadjusted and adjusted relationships of MetS with these features are displayed in Table 3. After adjustment, older age remained significant for both men and women. Notably, the trend of having MetS with increasing age was clearer for women than for men. Education was significant only for women, not for men. Women who went to high school had 24% decreased odds of having MetS (adjusted odds ratio, AOR: 0.76; 95% confidence interval, CI: 0.59 to 0.97) compared to those who had the lowest education level. Like education, higher stress level was also a significant factor for women only. […] After adjusting for covariates, the heritability of MetS was 32% (p<0.001). The heritabilities of SBP (16%) and DBP (15%) were at the lower end and similar to the estimate for FPG (14%). Conversely, the heritability of triglyceride (42%, p<0.001, s.e. 10%) and HDL-C (43%, p<0.001, s.e. 11%) was relatively high and similar to the heritability of WC.
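The heritability estimates above come from a polygenic variance-components model. A compact, textbook-style statement of that model follows (a standard formulation; SOLAR's exact parameterization may differ in detail):

```latex
% Standard polygenic variance-components model behind h^2 estimation;
% a textbook formulation, not SOLAR's exact internal parameterization.
\begin{align*}
  y &= X\beta + g + e, \qquad
    g \sim \mathcal{N}\!\left(0,\, 2\Phi\,\sigma_g^2\right), \quad
    e \sim \mathcal{N}\!\left(0,\, I\,\sigma_e^2\right), \\[4pt]
  \operatorname{Cov}(y) &= 2\Phi\,\sigma_g^2 + I\,\sigma_e^2, \qquad
  h^2 = \frac{\sigma_g^2}{\sigma_g^2 + \sigma_e^2},
\end{align*}
```

where $X\beta$ collects the fixed covariate effects, $\Phi$ is the kinship matrix derived from the pedigree, $g$ is the additive genetic effect and $e$ the residual error; heritability $h^2$ is the share of the residual phenotypic variance attributable to $g$.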
DISCUSSION
We provide here the epidemiological and heritability data about MetS and its related traits according to ATP III criteria among AA. Overall, in our study sample, the prevalence of MetS was higher among women than among men. Factors independently associated with having MetS for men were older age, lower physical activity level, higher BMI, higher level of homocysteine and lower level of adiponectin. For women, in addition to older age, lower physical activity level, higher BMI, higher level of homocysteine and lower level of adiponectin, low education, higher stress, current smoking and alcohol consumption were also significant. The heritability of MetS was 32% and among its individual components, heritability ranged from 14% for FPG to 45% for WC.
The prevalence of MetS that we found (38.94% of women and 27.34% of men) was almost identical to a recent estimate from a National Survey, which reported that 38.2% of AA women and 25.5% of AA men had MetS. 1 A higher prevalence of MetS in women than in men has been reported in several other Asian and Eastern European countries, as well as among Hispanics and Native Americans. 1 31-33 However, the opposite holds for US Caucasians, with a higher prevalence in men. 1 This, together with our finding, suggests the possibility of an increased risk of MetS for women belonging to an economically disadvantaged or minority population group. The unfavourable condition of women was also evident from our multivariate analysis, where we found lower education and stress to be significantly related to MetS for women, but not for men. While social class and education are typically inversely related to different cardiometabolic risk factors regardless of gender in industrialised societies, [34][35][36] in our study this was true only for women, indicating an adverse social environment for our women participants.
The literature has indicated that active smoking is associated with the development of MetS. 37 38 We, however, found active smoking to be associated with MetS in women only. The lack of association between current smoking and MetS among men in our study can be partly attributed to the much-discussed inverse association between active smoking and obesity, as smoking prevalence was higher and abdominal obesity relatively lower in men than in women in our analysis. 39 Further, researchers have also found smoking cessation to be frequently followed by weight gain, 40 which explains our observed association between past smoking and MetS in men.
Although lifestyle, physiological and sociodemographic factors play key roles in the pathogenesis of MetS, there is also strong evidence that the syndrome is inherited. [41][42][43][44] We evaluated the contributions of genetic factors to the phenotypic variability of MetS and its traits by heritability estimation. According to various studies from different ethnic groups, the heritability of MetS ranges from approximately 19% to 38%. [10][11][12][13] A Dutch study estimated a heritability of 19.2% for MetS in an isolated population. 11 A heritability of 24% in a Caribbean-Hispanic population has been reported by Lin et al. 12 The heritability for the Caucasian population was about 27% according to a large population-based study. 10 Bayoumi et al 13 reported a heritability of 38% for MetS in healthy Omani Arab families. Besides the genetic effect itself, which could differ among the studied populations, the discrepancy in heritability might be attributable to other factors such as different sample sizes and different structures of pedigrees or covariates included in the analysis. Compared to other ethnic groups, relatively little information is available on the heritability of MetS in the AA population. The heritability of MetS in our study was 32% after taking into account the contributions of covariates such as age, sex, alcohol consumption, smoking and physical activity level, suggesting that roughly one-third of the variance in MetS was attributable to the additive effects of genes in the JHS participants. This estimate is at the higher end of the heritability range reported so far, which suggests significant genetic influences on the clustering of risk factors among AA.
Reported heritability from different studies for the individual traits ranges from 10% for plasma glucose to 60% for HDL-C. 10-14 19 Our estimates correspond well to these findings. In the present study, more than 40% of the variance in HDL-C, triglyceride and WC was attributable to genetic effects. Conversely, a moderate but significant heritability was observed for BP and FPG. In other studies as well, HDL-C, obesity and lipid profiles showed the strongest heritability, and BP and FPG had the lowest heritability. [10][11][12][13][14] While the genetic influence remains dominant for lipid levels and WC, it seems that for FPG and BP the environmental contribution plays a more prominent role, as was apparent from the substantial covariate effect observed for FPG and BP (33% and 22%, respectively) both in our findings and in those of some other studies. 10 12 This hypothesis is further supported by some genetic association studies, in which investigators have tried to find a unifying pathogenic mechanism for the different MetS components and identify genetic variants contributing to MetS. No such work among AA was found, but a meta-analysis of 4000 Asian and Caucasian participants reviewing 25 genes reported an association between MetS and single nucleotide polymorphisms in the FTO, TCF7L2, IL6, APOA5, APOC3 and CETP genes. 45 Another, Swedish, study found that genetic variants in the PPARG and ADRB1 genes conferred an increased risk of future MetS. 46 All of these genes are mostly involved in lipid metabolism. 45 46 This evidence indicates that lipid metabolism plays a central role in MetS development and that, possibly, the genetic impact of FPG and BP within the MetS cluster is relatively minor. Our finding also indirectly supports this view, as we found triglyceride, HDL-C and WC to be strongly correlated with one another and a relatively weaker correlation of BP and FPG with the other traits. More importantly, we also found higher and similar heritability estimates for triglyceride, HDL-C and WC and relatively lower heritability estimates for BP and FPG, suggesting a possible similarity in the genetic mechanism of MetS development between the AA population and other ethnic groups.
Our findings reconfirm that MetS is a complex disease and that lifestyle, socioeconomic status and genetic background play important roles in its development. It was obvious from our study that the social and economic context has a disparate impact on women's cardiovascular health and that subsequent policies and health educational programmes should be particularly directed towards women for future CVD risk reduction. As the causes of MetS are reversible and the individual components are modifiable, lifestyle changes such as increasing physical activity may reduce the prevalence of MetS in AA people. We found a significant and independent inverse association between MetS and adiponectin, and a positive association between MetS and homocysteine. In line with our findings, a number of recent studies have also reported similar results. [47][48][49][50][51] These findings suggest that monitoring circulating adiponectin and homocysteine levels could provide useful clinical information on the risk of developing MetS and provide effective targets for interventions aimed at modifying lifestyle. However, further studies, including economic evaluations and prospective studies, should investigate whether these markers would prove useful and cost-effective in the early identification of MetS. In this study, we found considerable heritability of MetS among AA. This provides direct support for performing genome-wide association studies in this population. Our finding also supports the hypothesis of lipid metabolism playing the central role in the development of MetS and strongly encourages additional efforts to identify the underlying susceptibility genes for this syndrome in AA.
Our results should be interpreted within the context of a few limitations. We acknowledge that given our cross-sectional observational design, our study can only confirm the associations of the factors with MetS but cannot prove causality. We also recognise the considerable disagreement over the definition and diagnostic criteria related to MetS. Of the various available definitions, we used the ATP III criteria as this is the most widely used definition in the USA. 1 24 It can be argued, however, that some other available definition of MetS could be equally valid and produce a somewhat different result. Though we have accounted for important individual covariates, our heritability estimates were influenced by shared environmental factors like childhood environment and neighbourhood factors, and thus our results could be slightly overestimated. One of the major strengths of our study is that our data, although cross-sectional, come from a large community-based AA population, which is vastly understudied but has a high prevalence of metabolic diseases including obesity, diabetes, hypertension and others. We are not aware of any published data that reported the associated factors and quantified the heritability of MetS among AA from such a big setting. Further, assessment of sociodemographic variables in the JHS was performed uniformly, and precise techniques were used to measure all physiological and biochemical values, which makes our findings reliable. JHS also has a complex and extended pedigree structure with a large sample, which provided us with a more robust statistical ground to detect genetic effects than nuclear families, twin pair data or sib-pair data.
We report the associations of important factors and significant heritability estimates of MetS and its components among JHS AA families. Our data suggest the inclusion of biomarkers like adiponectin and homocysteine to improve early identification of MetS. We have demonstrated significant heritability estimates for the MetS itself, and also for its individual components. The results strongly encourage efforts to identify the underlying susceptibility genes for this syndrome in AA. Further exploration of the genetic and environmental factors of MetS among AAs will lead to a more comprehensive understanding and better therapeutic options for the syndrome, and ultimately lead to improved cardiovascular health.
|
2016-05-12T22:15:10.714Z
|
2015-10-01T00:00:00.000
|
{
"year": 2015,
"sha1": "3c93473e2858a93661c57555eb41103693de2adf",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/5/10/e008675.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c93473e2858a93661c57555eb41103693de2adf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
208625888
|
pes2o/s2orc
|
v3-fos-license
|
Shisa3 brakes resistance to EGFR-TKIs in lung adenocarcinoma by suppressing cancer stem cell properties
Background Although EGFR tyrosine kinase inhibitors (EGFR-TKIs) are beneficial to lung adenocarcinoma patients with sensitive EGFR mutations, resistance to these inhibitors induces a cancer stem cell (CSC) phenotype. Here, we clarify the function and molecular mechanism of shisa3 as a suppressor that can reverse EGFR-TKI resistance and inhibit CSC properties. Methods The suppressor genes involved in EGFR-TKI resistance were identified and validated by transcriptome sequencing, quantitative real-time PCR (qRT-PCR) and immunohistochemistry. Biological functions, including the cell half maximal inhibitory concentration (IC50) and self-renewal, migration and invasion capacities, were assessed by CCK8, sphere formation and Transwell assays. Tumorigenesis and therapeutic effects were investigated in nonobese diabetic/severe combined immunodeficiency (nod-scid) mice. The underlying mechanisms were explored by Western blot and immunoprecipitation analyses. Results We found that low expression of shisa3 was related to EGFR-TKI resistance in lung adenocarcinoma patients. Ectopic overexpression of shisa3 inhibited CSC properties and the cell cycle in the lung adenocarcinoma cells resistant to gefitinib/osimertinib. In contrast, suppression of shisa3 promoted CSC phenotypes and the cell cycle in the cells sensitive to EGFR-TKIs. For TKI-resistant PC9/ER tumors in nod-scid mice, overexpressed shisa3 had a significant inhibitory effect. In addition, we verified that shisa3 inhibited EGFR-TKI resistance by interacting with FGFR1/3 to regulate AKT/mTOR signaling. Furthermore, combined administration of inhibitors of FGFR/AKT/mTOR and cell cycle signaling could overcome EGFR-TKI resistance associated with shisa3-mediated CSC capacities in vivo. Conclusion Taken together, shisa3 was identified as a brake on EGFR-TKI resistance and CSC characteristics, probably through the FGFR/AKT/mTOR and cell cycle pathways, indicating that shisa3 and concomitant inhibition of its regulated signaling may be a promising therapeutic strategy for reversing EGFR-TKI resistance.
Introduction
EGFR tyrosine kinase inhibitors (EGFR-TKIs) have been an effective therapy for lung adenocarcinoma patients with activating mutations; however, therapeutic resistance to EGFR-TKIs inevitably develops [1]. Of note, cancer stem cells (CSCs), which can regrow after clinical management, play an important role in resistance to chemotherapy, targeted therapy and immunotherapy [2,3]. Lung cancer CSCs can be identified by the surface markers CD133, CD44, ALDH1 and ABCG2 and are regulated by Notch, Wnt and cell cycle signaling pathways, which are critical for maintaining drug resistance [3]. Anti-CSC therapeutics targeting surface markers and associated pathways in different cancer types have been developed in animal models and investigated in clinical trials [4,5]. Thus, a comprehensive understanding of the molecular mechanism of CSC regulation in EGFR-TKI resistance might provide a novel strategy for treatment intervention.
The role of tumor suppressor genes in modulating the EGFR-TKI response has attracted the attention of researchers. FOXO3a, as a suppressor, has been shown to increase EGFR-TKI sensitivity and reduce CSC properties [6]. Shisa, as a tumor suppressor, has been discovered to antagonize the CSC-associated Wnt pathway and FGF signaling by interacting with immature forms of their receptors [7]. Sox2, an essential transcriptional factor of CSCs, was upregulated after FGFR1 activation [8], and FGFR1 signaling has been shown to contribute to the maintenance of CSC properties by interacting with the Hippo/YAP1 pathway in lung cancer [9]. Shisa3 has been reported to accelerate the degradation of β-catenin, a key component of Wnt signaling, which inhibits tumorigenesis, invasion and metastasis in lung cancer [10].
In addition to the acquisition of EGFR mutations, such as T790M (resistance to gefitinib, erlotinib and ecotinib) [11] and C797S (resistance to osimertinib) [12], the activation of receptor tyrosine kinases (RTKs) combined with triggering of downstream signaling pathways, including RAS and mitogen-activated protein kinase (MAPK), phosphoinositide 3-kinase (PI3K) and Akt, or signal transducer and activator of transcription 3 (STAT3), has been recognized to drive EGFR-TKI resistance in lung cancer [13]. Recently, it has been reported that the use of an Akt inhibitor plus EGFR-TKIs led to suppressed growth in lung adenocarcinoma models of TKI resistance [14]. In the presence of mTOR activation, TKI-resistant lung adenocarcinoma cells showed a better response to combined treatment with mTOR inhibitors and TKIs [15].
In the current study, we screened shisa3 as a tumor suppressor with decreased expression in lung adenocarcinoma patients with EGFR-TKI resistance. Intriguingly, shisa3 combined with gefitinib/osimertinib dramatically inhibited tumor growth of TKI-resistant lung adenocarcinomas. Shisa3 blockade drove EGFR-TKI resistance, cell cycle arrest and CSC enrichment by regulating the FGFR-dependent Akt/mTOR signaling pathway. Further studies indicated that TKIs combined with cell cycle and mTOR inhibitors significantly restrained tumor growth in TKI-resistant lung adenocarcinoma xenografts with upregulated shisa3 and downregulated Ki67. Taken together, targeting shisa3-regulated networks may provide a novel treatment strategy for reversing TKI drug resistance and CSC activities.
Specimens
In this study, all the samples were obtained from the lung adenocarcinoma patients who received surgery and neoadjuvant treatment before resection at Peking University Cancer Hospital (Beijing, China). All participants provided written informed consent, and the studies were approved by the Ethics Committee of Peking University Cancer Hospital. Paraffin tumor tissues from 45 cases of EGFR mutant stage III lung adenocarcinoma patients who had received EGFR-TKI treatment (gefitinib or icotinib) after surgery were investigated; these patients were enrolled from December 2009 to January 2013. Response to treatment was evaluated according to the Response Evaluation Criteria in Solid Tumors.
In all, 102 paraffin tumor tissue samples with EGFR mutation were collected from these lung adenocarcinoma patients, who underwent surgery from January 2009 to November 2011. In addition, 38 pairs of frozen tumor tissues and normal tissues from lung adenocarcinoma patients with EGFR mutation were collected in 2012. All patients were regularly followed, and the clinical outcomes of all the patients were obtained.
Transcriptome sequencing (RNA-seq)
Total mRNA was extracted using TRIzol reagent (#15596018, Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. RNA-seq and subsequent data analyses were performed by the Beijing Institute of Genome Research, Chinese Academy of Sciences (Beijing, China). Briefly, 2 μg RNA per sample was used to construct a cDNA library and was sequenced on the Illumina HiSeq 2500 with 125-150 bp paired-end reads following the manufacturer's recommendations. Bowtie software was used to align the raw reads to the Homo sapiens genome sequences (NCBI). The false discovery rate (FDR, i.e., a probability of wrongly accepting a difference) of each gene was determined according to the Bonferroni correction method. Differential expression analysis was performed using the edgeR R package (2.6.2). An adjusted P-value < 0.05 and FDRs < 0.01 were set as the threshold for significantly differential expression.
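The differential-expression step above filters genes by adjusted p-value and FDR thresholds. As a generic illustration of how per-gene p-values are converted to adjusted values and filtered (the study itself used the edgeR R package; this Python sketch with Benjamini-Hochberg adjustment and made-up numbers is only an analogy, not the study's pipeline):

```python
# Generic multiple-testing adjustment and DE filtering, for illustration
# only; the study used edgeR in R, not this Python code.
import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (FDR)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank down
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0, 1)
    return out

pvals = [0.0001, 0.0004, 0.03, 0.2, 0.7]   # hypothetical per-gene p-values
log2fc = [2.2, -1.8, 0.4, 1.1, -0.1]       # hypothetical log2 fold changes
fdr = benjamini_hochberg(pvals)
# keep genes passing cut-offs in the spirit of the paper's thresholds
hits = [i for i in range(len(pvals)) if fdr[i] < 0.01 and abs(log2fc[i]) >= 1]
print(fdr, hits)
```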
Quantitative real-time PCR (qRT-PCR)
Total RNA was isolated from tissues and cells using TRIzol reagent (Invitrogen) following the manufacturer's instructions, and cDNA was synthesized from total RNA (2 μg) using a commercially available kit (EasyScript First-Strand cDNA Synthesis SuperMix, Transgen Biotech, Beijing, China). qRT-PCR was performed with the LightCycler 480 SYBR Green I Master on a LightCycler 480 Real-Time PCR System (Roche, Mannheim, Germany). The relative expression level of genes was normalized to GAPDH. The fold change was calculated as 2^(−ΔCt), in which ΔCt = Ct(target) − Ct(control). All PCR assays were carried out in triplicate, and the mean of triplicates is reported. Primers are listed in Additional file 1: Table S1.
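The 2^(−ΔCt) relative-expression calculation above is a one-line computation; a minimal sketch with hypothetical Ct values:

```python
# The 2^(-dCt) relative-expression calculation described above;
# Ct values here are hypothetical, not study measurements.
def relative_expression(ct_target, ct_reference):
    """2^-(Ct_target - Ct_reference), i.e. expression relative to GAPDH."""
    return 2 ** -(ct_target - ct_reference)

print(relative_expression(ct_target=26.4, ct_reference=18.9))  # ~0.0055
```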
Cell lines and cell culture
The human lung adenocarcinoma cell lines PC9, HCC827 and H1975 were maintained in our laboratory. Cell lines were cultured in RPMI-1640 medium (Gibco BRL, Gaithersburg, MD) supplemented with 10% fetal bovine serum (Gibco BRL), 100 U/mL penicillin and 100 μg/mL streptomycin (Invitrogen, Grand Island, NY, USA) and incubated in a humidified incubator (37°C) with 5% CO₂. All cell lines were authenticated by short-tandem repeat (STR) analysis.
Generation of EGFR-TKI-resistant cells (PC9/ER)
PC9/ER cells were developed from the parental PC9 cells by 6 months of exposure to gefitinib, starting at 10 nM and increasing stepwise to 10 μM. At 80-90% confluence, the cells were detached with trypsin/EDTA (Gibco BRL) and divided into two parts. One part of the cells was frozen, and the other was reseeded into a new dish at doses 30-50% higher than the original. Control cells were treated in parallel with vehicle (DMSO). The PC9/ER cells were validated to be resistant to gefitinib and osimertinib, as shown in the Results.
pENTER-shisa3-flag was constructed by cloning the coding sequence (CDS) of shisa3 into the pENTER plasmid using the restriction sites Asis I and Mlu I. For further experiments, PC9/ER cells were transiently transfected with this plasmid using Lipofectamine™ 3000 (Invitrogen) according to the manufacturer's protocols.
Western blot analysis
The proteins of cells were extracted using RIPA buffer containing a complete protease inhibitor cocktail (Roche, Mannheim, Germany). Protein concentrations were measured with a bicinchoninic acid (BCA) protein assay kit (Beyotime). Equal amounts of protein were separated with 8% or 10% SDS-PAGE and transferred to polyvinylidene fluoride (PVDF) membranes. After blocking with 5% BSA (Amresco) or fat-free milk, the membranes were probed with primary antibodies at 4°C overnight followed by secondary antibodies at room temperature for 1 h. The proteins were then detected by chemiluminescence using Immobilon Western Chemiluminescent HRP Substrate (#WBKLS0500, Millipore) and visualized using Amersham Imager 600 (GE Healthcare, Chicago, IL).
Microarray and computational analysis
PC9-shControl and PC9-shShisa3 cells were submitted to BoHao Bio-tech (Shanghai, China) for mRNA microarray analysis. Total RNA was purified with an RNeasy mini kit (Cat.# 74,106, QIAGEN, GmBH, Germany) and hybridized using the Gene Expression Hybridization Kit (Cat.# 5188-5242, Agilent Technologies, Santa Clara, CA, US). Data were extracted with Feature Extraction 10.7 software (Agilent Technologies). Genes that were up- or downregulated with > 2-fold change (FC) and a significant difference of P < 0.05 were further subjected to computational simulation by Ingenuity Pathway Analysis (IPA; QIAGEN, Valencia, CA, USA) online tools for enrichment analysis.
Cell viability
A total of 3000-5000 cells were seeded per well in 96-well plates and allowed to attach for 24 h. Following treatment, cell viability was measured using a CCK-8 commercial kit (#CK04, Dojindo, Japan) according to the manufacturer's protocol, and the absorbance at 450 nm was measured using a spectrophotometer. The half maximal inhibitory concentration (IC50) was calculated using GraphPad software.
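IC50 values of the kind computed above are typically obtained by fitting a sigmoidal dose-response curve to the viability data. A minimal sketch of such a fit (the study used GraphPad; the four-parameter logistic form, the helper names and the data below are illustrative assumptions):

```python
# Illustrative IC50 estimation by fitting a four-parameter logistic
# dose-response curve; the study used GraphPad, and the data below
# are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of dose."""
    return bottom + (top - bottom) / (1 + (dose / ic50) ** hill)

dose = np.array([0.001, 0.01, 0.1, 1, 10, 100])        # e.g. uM gefitinib
viability = np.array([0.99, 0.97, 0.85, 0.55, 0.20, 0.08])

params, _ = curve_fit(four_pl, dose, viability,
                      p0=[0.05, 1.0, 1.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.3f}")
```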
Sphere formation assay
The cells were trypsinized and washed in phosphate-buffered saline (PBS), and single cells were plated in ultralow-attachment 96-well plates (Corning Inc., Life Sciences). DMEM/F-12 (Invitrogen) serum-free medium including 20 ng/mL basic fibroblast growth factor (Peprotech, Rocky Hill, NJ, USA), 20 ng/mL epidermal growth factor (Peprotech), B27 (Invitrogen), 10 ng/mL hepatocyte growth factor (Peprotech) and 1% methylcellulose (Sigma, MO, USA) was used to cultivate the cells for 12 days. The spheres were counted under a light microscope. Images are shown as representative of three independent experiments. Sphere formation efficiency was calculated as: sphere formation efficiency = spheres/input cells × 100%.
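Since the efficiency defined above is a simple ratio, a one-line helper (with hypothetical counts) makes the calculation explicit:

```python
# Sphere formation efficiency as defined above; counts are hypothetical.
def sphere_formation_efficiency(spheres, input_cells):
    return spheres / input_cells * 100  # percent

print(sphere_formation_efficiency(spheres=18, input_cells=500))  # 3.6
```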
Migration and invasion assays
A total of 1.0 × 10⁵ cells per well in serum-free RPMI were placed in the upper chamber (Cat. No. 3422, Corning Costar, Cambridge, MA) with/without precoated Matrigel (Cat. No. 356234, BD Biosciences, San Jose, California, USA) following the manufacturer's instructions. The lower chambers were filled with culture medium supplemented with 10% FBS. The indicated cells were allowed to migrate or invade through the pores for 12 to 24 h at 37°C. The cells that migrated or invaded into the lower chambers were fixed in paraformaldehyde (4%), stained with 0.1% crystal violet for 5 min at room temperature and counted under a microscope.
Cell cycle assay
The effect of shisa3 on cell cycle distribution was analyzed by flow cytometry. After being starved overnight and stimulated with complete medium for 24 h, cells were collected, washed with PBS, and fixed overnight in precooled 75% ethanol at −20°C. After washing with PBS, cells were stained with propidium iodide (PI)/RNase (BD Biosciences) at room temperature for 15 min in the dark and analyzed by flow cytometry within 1 h. Quantitative cell cycle analysis was performed with ModFit version 3.0 software (Verity Software House, Topsham, ME).
Xenografts and treatments
Nonobese diabetic/severe combined immunodeficiency (nod-scid) mice were cared for in accordance with guidelines approved by the Ethics Committee of Animal Experiments of Peking University Cancer Hospital. A total of 3 × 10⁶ cells were subcutaneously injected into the right flank of 6-week-old female nod-scid mice (Beijing HFK Bioscience Co., Ltd., China). When tumors reached a size of approximately 100 mm³, the mice were randomized into several groups and given different treatments via oral gavage. Tumors were measured every 4 days using calipers, and tumor volume was calculated using the formula volume = (length × width²)/2. At the end of the treatments, the mice were sacrificed with CO₂, and the tumors were stripped for subsequent assays.
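The caliper-based volume formula above is easy to state as code; a minimal sketch (the measurements are hypothetical examples):

```python
# Tumor volume formula used above; caliper measurements (mm) are
# hypothetical examples, not study data.
def tumor_volume(length_mm, width_mm):
    """volume = (length x width^2) / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2

print(tumor_volume(length_mm=10.0, width_mm=8.0))  # 320.0 mm^3
```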
Immunoprecipitation
Cells were washed twice in ice-cold PBS, harvested and lysed with RIPA lysis buffer (#P0013D, Beyotime, Wuhan, China) for immunoprecipitation experiments. For each sample, 1.5 mg of protein was incubated with Protein G Sepharose 4 Fast Flow (#17061802, GE Healthcare, UK) and anti-Flag antibody (#66008-2-Ig, Proteintech) or anti-IgG antibody (#A7028, Beyotime). After an overnight incubation at 4°C, the beads were washed, and the final pellet was suspended in RIPA buffer. Bound proteins were eluted from the beads by heating and centrifugation and then analyzed by Western blot.
Statistical analysis
The relationship between the expression of shisa3 and patients' clinical variables was assessed using the χ² test and Fisher's exact test. The disease-free survival (DFS) and overall survival (OS) of patients were estimated using the Kaplan-Meier method and the log-rank test. The Cox proportional hazards model was applied for multivariate analysis. All statistical analyses were performed using SPSS version 20.0 software (SPSS Inc., Chicago, IL), and images were plotted using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). Differences were analyzed using unpaired two-tailed t-tests. All data are representative of three independent experiments and are illustrated as the means ± SDs. P < 0.05 was considered statistically significant.
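For orientation, the Kaplan-Meier, log-rank and Cox steps described above can be reproduced with the Python lifelines package; this sketch is an assumption for illustration only (the study used SPSS, and the data frame below is hypothetical):

```python
# Illustration of the survival analyses described above using the
# Python 'lifelines' package; the study itself used SPSS, and the
# data below are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":      [12, 30, 45, 8, 60, 22, 50, 15],
    "event":       [1, 1, 0, 1, 0, 1, 0, 1],        # 1 = death observed
    "shisa3_high": [0, 1, 1, 0, 1, 0, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], df["event"])                   # overall survival curve

high, low = df[df.shisa3_high == 1], df[df.shisa3_high == 0]
res = logrank_test(high["months"], low["months"],
                   event_observed_A=high["event"], event_observed_B=low["event"])
print(res.p_value)

cph = CoxPHFitter()                                  # multivariate Cox model
cph.fit(df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)
```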
Decreased shisa3 is associated with EGFR-TKI resistance in lung adenocarcinoma
We used RNA-seq to screen gene expression levels in locally advanced lung adenocarcinoma patients who received EGFR-TKI (gefitinib or ecotinib) therapy (Additional file 1: Table S2). There were 3 drug-sensitive patients with partial response (PR) and 3 drug-resistant patients with stable disease (SD). We found that 5 genes (Shisa3, PADI1, LRP2, ANGPTL4 and ALOX15B) were upregulated and 4 genes (CXCL9, ADAMDEC1, SCGB1A1/CC10, HLA-G) were downregulated in PR tumor tissues compared to SD tumor tissues (Fig. 1a). Among the genes that could inhibit EGFR-TKI resistance, we focused on shisa3, which was the most upregulated gene in PR samples, with a 4.68-fold increase. In addition, shisa3 was validated to be highly expressed in normal tissues compared to paired tumor tissues of lung adenocarcinoma with EGFR-activating mutations (Fig. 1b, n = 38).
We further determined by IHC detection whether shisa3 was associated with the therapeutic effect of EGFR-TKIs in lung adenocarcinoma patients with EGFR mutations (n = 45) who received gefitinib/ecotinib treatment. Based on the divided groups of low shisa3 expression (−/+) and high shisa3 expression (++/+++) (Fig. 1c), an increased rate of high expression was observed in EGFR-TKI-sensitive patients (PR: 75%) compared to EGFR-TKI-resistant patients (SD: 31%) (Fig. 1d). Thus, we demonstrated that patients with high expression of shisa3 have a better response to EGFR-TKIs, indicating that shisa3 may be used to predict the efficacy of TKI therapy in lung adenocarcinoma patients.
A previous study of 69 samples from non-small-cell lung cancer (NSCLC) patients revealed that shisa3 was positively correlated with better prognosis [10]. Subsequently, we investigated shisa3 status in 102 tissue samples from lung adenocarcinoma patients with EGFR mutations. There was no significant association between shisa3 expression level and sex, age or smoking (Table 1). We found that low expression of shisa3 was related to later TNM stage and lymph node metastasis. We further obtained evidence that low expression of shisa3 was associated with shorter DFS and OS (Fig. 1e-f). Shisa3 was identified as an independent OS factor for lung adenocarcinoma by univariate and multivariate Cox regression analyses (Table 2).
These data suggested that shisa3 may drive sensitivity to EGFR-TKIs in EGFR-mutant lung adenocarcinoma.
Considering that tumor cells with EGFR-TKI resistance may manifest stem-cell-like properties [19,20], we then analyzed whether PC9/ER cells exhibited a CSC phenotype. The primary and secondary sphere formation efficiencies were elevated in PC9/ER cells compared with PC9 cells (Additional file 1: Figure S1A-B). The expression levels of CSC-related factors, including Sox2, Oct4, Nanog and ABCG2, were increased in PC9/ER cells (Additional file 1: Figure S1C). Migration and invasion properties were enhanced in PC9/ER cells compared to PC9 cells (Additional file 1: Figure S1D-E). We then performed a limiting dilution assay by transplanting PC9 and PC9/ER cells into nod-scid mice. PC9/ER cells, which showed increased tumorigenic frequency, formed more and larger tumors than PC9 cells (Additional file 1: Figure S1F). Based on the reported mechanisms of EGFR-TKI resistance [21], we then investigated Met, HER-2, PTEN and EMT-related signaling in PC9/ER and the parental PC9 cells. Among those factors, the Met expression level and EMT activity (decrease of E-cadherin, and increase of N-cadherin and vimentin) were enhanced in the PC9/ER cells (Additional file 1: Figure S1G). In addition, the CSC-related markers including CD133, CD44 and ALDH1A1 were up-regulated in the PC9/ER cells compared to the parental PC9 cells (Additional file 1: Figure S1H).
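Tumorigenic frequency in a limiting dilution assay of the kind described above is commonly estimated under a single-hit Poisson model (e.g. as in ELDA-style analyses). The sketch and numbers below are illustrative assumptions, not the study's calculation:

```python
# Single-hit Poisson estimate of tumor-initiating cell frequency from a
# limiting dilution assay (illustrative numbers, not study data). Under
# this model P(no tumor at dose n) = exp(-f * n).
import numpy as np

doses    = np.array([1e4, 1e5, 1e6])      # cells injected per mouse
injected = np.array([6, 6, 6])            # mice per dose
tumors   = np.array([1, 3, 6])            # mice that formed tumors

neg_frac = (injected - tumors) / injected
# avoid log(0) at doses where every mouse formed a tumor
mask = neg_frac > 0
# least-squares fit through the origin of ln(negative fraction) = -f * dose
f = -np.sum(doses[mask] * np.log(neg_frac[mask])) / np.sum(doses[mask] ** 2)
print(f"estimated frequency ~ 1 per {1 / f:,.0f} cells")
```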
These results suggested that PC9/ER cells with decreased expression of shisa3 demonstrated dramatic resistance to EGFR-TKIs and showed an enhanced CSC phenotype.
Overexpressed shisa3 attenuates EGFR-TKI resistance and suppresses the CSC phenotype
To investigate the biological effect of shisa3, we overexpressed this protein and validated its increased expression in PC9/ER and H1975 cells (Additional file 1: Figure S2A). Shisa3 decreased proliferation in PC9/ER cells (Additional file 1: Figure S2B) and H1975 cells (Additional file 1: Figure S2C). Because shisa3 is a tumor suppressor gene, it was placed in a Tet-on inducible system to transfect the PC9/ER and H1975 cells. The expression of shisa3 was significantly upregulated after doxycycline induction for 48 h (Fig. 2d, and Additional file 1: Figure S2D). Shisa3 led to a decreased IC50 for gefitinib and osimertinib in PC9/ER cells (Fig. 2e). Moreover, the inhibitory effect of gefitinib was enhanced in H1975 cells overexpressing doxycycline-induced shisa3 (Additional file 1: Figure S2E). We then examined whether shisa3 might be a suppressor that depresses the CSC phenotype. We observed a significant decrease in primary and secondary sphere formation efficiencies in PC9/ER cells overexpressing shisa3 (Fig. 2f-g). Lower expression levels of Sox2, Oct4, Nanog and ABCG2 were observed in shisa3-overexpressing PC9/ER cells than in PC9/ER-control cells (Fig. 2h). CD133, CD44 and ALDH1A1 were downregulated in the shisa3-overexpressing PC9/ER cells (Additional file 1: Figure S2F). When shisa3 was overexpressed in PC9/ER cells, fewer cells exhibited migration and invasion capacities (Additional file 1: Figure S2G, Fig. 2i). We then performed a limiting dilution assay in nod-scid mice, which showed fewer and smaller tumors and lower tumorigenic frequencies for shisa3-overexpressing PC9/ER cells (Fig. 2j).
Knockdown of shisa3 triggers EGFR-TKI resistance and enhances the CSC phenotype
Next, we downregulated shisa3 in PC9 and HCC827 cells by transfecting these cells with lentiviral-based shRNAs. The shisa3 expression level was validated to be suppressed by Shshisa3#1 and Shshisa3#2 (Fig. 3a). Decreased shisa3 led to a 4.48-fold increase in the gefitinib IC50 and an 11.75-fold increase in the osimertinib IC50 in PC9 cells (Fig. 3b). Knockdown of shisa3 increased cell viability in HCC827 cells treated with gefitinib and osimertinib (Fig. 3c).
Then, we analyzed how CSC properties changed in PC9 and HCC827 cells after shisa3 knockdown. It was shown that decreased expression of shisa3 enhanced the primary and secondary sphere formation efficiencies (Fig. 3d-e). Sox2, Oct4, Nanog and ABCG2 expression levels were higher in the PC9 cells transfected with Shshisa3#1 than in the cells transfected with shControl (Fig. 3f). CD133, CD44 and ALDH1A1 were also induced in the PC9 cells with knock-down of shisa3 (Additional file 1: Figure S2H). The results showed more migratory and invasive cells in PC9 and HCC827 cells with suppressed shisa3 (Fig. 3g-i, Additional file 1: Figure S2I).
The above data indicated that shisa3 functions to reverse EGFR-TKI resistance and attenuate CSC potential.
Shisa3 interacts with FGFR to impact EGFR-TKI sensitivity
Shisa has been reported to physically interact with FGFR and inhibit its protein maturation [7]; therefore, we explored whether shisa3 interacted with FGFR to modulate the EGFR-TKI response. We verified the FGFR1 and FGFR3 bands in immunoprecipitates pulled down with shisa3-Flag in PC9/ER cells (Fig. 4a). We observed that overexpression of shisa3 dramatically reduced FGFR1, FGFR3, phosphorylated (p) FGFR1 (p-FGFR1) and p-FGFR3 expression levels in PC9/ER cells, and suppression of this gene induced these two receptors and their phosphorylation in PC9 cells (Fig. 4b). IHC was performed to identify FGFR1 expression in lung adenocarcinoma (Fig. 4c). A negative relationship between shisa3 and FGFR1 expression was shown in the lung adenocarcinoma tissues with EGFR mutations (n = 102, Fig. 4d). In those patients, higher expression of FGFR1 was associated with shorter DFS and OS (n = 102, Fig. 4e-f). The FGFR1/3 inhibitor BGJ398 was then used to treat the PC9/ER cells. In the presence of shisa3, BGJ398 increased the response to gefitinib and osimertinib by 21.63 and 25.87% in the PC9/ER cells, respectively (Fig. 4g).
These results indicated that shisa3 interacted with FGFR1/3 and inhibited their activation, increasing the EGFR-TKI response to a certain extent.
Based on the microarray data (Fig. 5b, Additional file 1: Table S5), we examined whether shisa3 regulates cell cycle distribution. Cell cycle arrest was observed in the shisa3-overexpressing PC9/ER cells, with an increased G0/G1 fraction and decreased S and G2/M fractions (Fig. 5e). In contrast, suppression of shisa3 promoted the cell cycle in PC9 cells (Fig. 5f). In addition, lower expression of cyclin D1, CDK4 and CDK6 was observed in the shisa3-overexpressing PC9/ER cells, and higher expression of these proteins was observed in PC9 cells with suppressed shisa3 (Fig. 5g). We then tested how the cell cycle inhibitor palbociclib (PD0332991) influenced EGFR-TKI sensitivity. PD0332991 increased the inhibitory rate of gefitinib/osimertinib in PC9/ER cells overexpressing shisa3 (Fig. 5h).
AKT activation has been reported to mediate FGFR inhibitor resistance [22]. These data suggested that the shisa3-mediated increased response to EGFR-TKIs might be associated with the inactivation of FGFR/AKT/mTOR and cell cycle signaling.
Targeting shisa3-regulated signaling attenuated EGFR-TKI resistance
First, we tested the inhibitory effect of gefitinib (15 mg/kg/day) and osimertinib (5 mg/kg/day) on the formed tumors of PC9 cells. As shown in Additional file 1: Figure S3A-C, gefitinib or osimertinib could significantly control tumor growth of PC9 cells, with 89 and 99% inhibitory effects, respectively. To assess the role of shisa3 in inhibiting tumor growth of lung adenocarcinoma cells with EGFR-TKI resistance, we injected PC9/ER-control and PC9/ER-shisa3 cells into nod-scid mice. When the tumor volume reached 100 mm³, the mice were treated with vehicle (1% Tween 80 in PBS) as a control group, gefitinib (60 mg/kg/d) or osimertinib (25 mg/kg/d) by oral gavage for 14 days. Compared to the control group, the groups treated with gefitinib, osimertinib or shisa3 overexpression alone showed inhibited tumor growth by 55.23, 67.20 and 57.91%, respectively, and the combination of gefitinib or osimertinib with shisa3 overexpression even dramatically suppressed tumor growth, by 87.70 and 87.16%, respectively (Fig. 6a-b). Lower tumor weights were detected in the shisa3-overexpressing PC9/ER tumors and in the PC9/ER tumors with EGFR-TKI (gefitinib/osimertinib) treatment, and the lowest tumor weights were observed in the shisa3-overexpressing PC9/ER tumors with gefitinib or osimertinib treatment (Fig. 6c).
Based on the above data, we further studied the therapeutic effect of targeting shisa3-regulated signaling; the control group is shown in Fig. 6a-c. Compared to gefitinib treatment alone, BEZ235 or PD0332991 inhibited tumor growth, and the combination of gefitinib, BEZ235 and PD0332991 dramatically suppressed tumor growth in the EGFR-TKI-resistant PC9/ER xenografts (Fig. 6d-f). These data indicated that shisa3-regulated signaling may act as a brake on lung adenocarcinoma with EGFR-TKI resistance.
Taken together, targeting shisa3-regulated signaling attenuated EGFR-TKI resistance, an effect associated with the suppression of CSC properties (Fig. 7).
Discussion
Although EGFR-TKIs benefit lung cancer patients with sensitizing mutations, most patients eventually develop drug resistance and relapse. Previously, the role of CSCs in EGFR-TKI resistance had not been well clarified. Deep study of the molecular mechanisms of the regulatory network in EGFR-TKI resistance is critical, as it may promote the development of novel therapeutic strategies to overcome treatment failure. In the current study, we screened and verified that shisa3 acts as a suppressor that prevents EGFR-TKI resistance and suppresses the CSC phenotype in lung adenocarcinoma, as follows: (1) Lung adenocarcinoma patients with high expression of shisa3 had a better response to EGFR-TKIs, indicating that shisa3 may be used to predict the efficacy of TKI therapy. (2) Shisa3 significantly suppressed self-renewal, the expression of CSC-related factors, and the migratory, invasive and tumorigenic capacities of CSC phenotypes, which can drive drug resistance in tumor cells. (3) Ectopic expression of shisa3 in vivo combined with gefitinib/osimertinib dramatically inhibited xenograft tumors derived from EGFR-TKI-resistant tumor cells. (4) Shisa3 altered the response of lung adenocarcinoma cells to EGFR-TKI treatment via FGFR/AKT/mTOR and cell cycle signaling. (5) TKIs combined with inhibitors of shisa3-regulated downstream signaling had enhanced ability to restrain tumor growth in gefitinib/osimertinib-resistant xenografts, suggesting a potential therapeutic strategy to reverse EGFR-TKI resistance.
Increasing evidence has shown that acquired resistance to EGFR-TKIs is associated with an enhanced CSC phenotype [4,23]. Herein, we found decreased shisa3 in gefitinib-resistant lung adenocarcinoma patients and in EGFR-TKI-resistant PC9/ER cells. Consistently, shisa3 overexpression significantly suppressed tumors derived from PC9/ER cells and their CSC phenotypes. Shisa3 has been reported to accelerate the degradation of β-catenin in the Wnt signaling pathway, which regulates CSC maintenance in diverse types of cancer [24,25]. Importantly, shisa3 combined with gefitinib or osimertinib inhibited tumor growth in PC9/ER xenografts, suggesting a potential role for this gene in reversing EGFR-TKI resistance.
Shisa3, located on chromosome 4p13, is a member of the shisa family, which mediates both WNT and FGF signaling by inhibiting the posttranslational maturation and cell surface trafficking of their receptors [7]. Based on the above evidence showing the interaction of shisa with immature forms of FGFRs, we confirmed that ectopic shisa3 co-immunoprecipitated with endogenous FGFR1 and FGFR3 in PC9/ER cells, indicating an interaction between shisa3 and FGFR1/3 that is involved in EGFR-TKI resistance. The FGF2-mediated FGFR/ERK pathway was previously shown to regulate CSCs, and inhibition of this signaling could delay tumor growth in esophageal squamous cell carcinoma [26]. Thus, it is reasonable to speculate that shisa3 decreased CSC characteristics by inhibiting the FGF-related axis. As expected, shisa3 restored sensitivity to gefitinib/osimertinib and decreased CSC characteristics linked to FGFR1/3 activity.
Multiple mechanisms of EGFR-dependent and EGFR-independent resistance have been described. It is known that RTKs such as EGFR and FGFR mostly act through activation of the downstream MAPK or PI3K/AKT/mTOR pathways. Drug resistance can be caused by the PI3K/AKT/mTOR signaling pathway, which is associated with CSC sustainability [27,28]. Aberrant activation of the AKT pathway drives EGFR-TKI resistance that can be triggered by a variety of signaling [14]. FGFR induced by N-cadherin resulted in the phosphorylation of ERK and AKT to promote epithelial-mesenchymal transition (EMT) and CSC properties [29]. Our current findings suggest that shisa3 overcame EGFR-TKI resistance and CSC phenotypes by inactivating FGFR/AKT/mTOR and cell cycle signaling. We also found that EMT (down-regulation of E-cadherin, up-regulation of N-cadherin and vimentin) existed in the PC9/ER cells resistant to gefitinib and osimertinib, compared to the parental PC9 cells. In addition, Met and HER2 amplification have been reported to drive EGFR-TKI resistance [21]. Consistent with previous studies, we also detected higher expression of Met in the PC9/ER cells. In summary, several mechanisms, including decreased shisa3, increased Met and EMT activation, lead to EGFR-TKI resistance. mTOR inhibition has been reported to suppress tumor growth in lung cancer cells and patient-derived xenograft (PDX) models [30]. In addition, an mTOR inhibitor that mediated cell cycle arrest displayed antitumor activity in preclinical studies [31]. Inhibition of cyclin-dependent kinase 4/6 in vivo improved the erlotinib response and suppressed TKI drug resistance in esophageal squamous cell carcinoma [32]. Here, we found that TKIs combined with inhibition of the cell cycle and PI3K/AKT/mTOR signaling dramatically suppressed tumor growth in PC9/ER xenografts in a manner related to shisa3 activation.
(Fig. 7: A schematic of shisa3-dependent FGFR/AKT/mTOR and cell cycle signaling that mediates the EGFR-TKI response and CSC phenotype in lung adenocarcinoma.)
Conclusions
In summary, we demonstrated the crucial role of shisa3 in attenuating EGFR-TKI resistance, which is associated with CSC characteristics and the cell cycle. Our molecular studies indicated that shisa3 interacts with FGFR1/FGFR3 to decrease the activation of these two receptors and their downstream AKT/mTOR pathway, resulting in restoration of EGFR-TKI sensitivity and suppression of CSC properties in lung adenocarcinoma. Based on these essential findings, we suggest that shisa3 or co-targeting FGFR/AKT/mTOR and cell cycle signaling may be an effective therapeutic strategy for overcoming resistance to gefitinib and osimertinib.
Additional file 1: Table S1. qRT-PCR primer sequences. Table S2. EGFR mutations in lung adenocarcinoma patients. Table S3. EGFR mutations in lung adenocarcinoma cells. Table S4. mTOR signaling pathway enrichment in PC9 cells with shisa3 knock-down. Table S5. Cell cycle signaling pathway enrichment in PC9 cells with shisa3 knock-down. Figure S1. PC9/ER cells resistant to EGFR-TKIs induce the CSC phenotype. Figure S2. Shisa3 decreases EGFR-TKI resistance and inhibits the CSC phenotype. Figure S3. EGFR-TKIs inhibit tumor growth derived from PC9 cells in vivo.
Deafness: from suspicion to referral for intervention
Institution: Centro de Estudos e Pesquisas em Reabilitação Prof. Dr. Gabriel Porto (CEPRE), School of Medical Sciences, Universidade Estadual de Campinas (Unicamp), Campinas, SP, Brazil. 1 Psychologist; Professor (PhD), Department of Human Development and Rehabilitation, School of Medical Sciences, Universidade Estadual de Campinas (Unicamp), Campinas, SP, Brazil. 2 Linguist; Full Professor, Department of Fundamentals of Speech-Language Pathology, Pontifícia Universidade Católica de São Paulo (PUCSP), São Paulo, SP, Brazil. 3 Pediatrician and public health specialist; Professor (PhD), Department of Pediatrics, School of Medical Sciences, Unicamp, Campinas, SP, Brazil.
Introduction
Of all communication disorders, hearing loss is special because its consequences for global human development are very serious if language is not acquired. According to Vygotsky (1), language provides the concepts and forms of organizing the world that constitute the mediation between an individual and the object of knowledge. It is language that constitutes the subject, and it has two basic functions: social interchange and generalizing thought. Considering that most deaf children are born into hearing families, it is fundamental that hearing loss be diagnosed early so that parents can receive guidance, as they generally communicate using oral language, which is inaccessible to deaf people. The difficulty in accessing oral language results in difficulty in acquiring a language, which leads to problems in the child's cognitive, social and emotional development (1,2).
After the diagnosis of hearing loss, parents are strongly affected by the information received. The way they perceive deafness and the functions of the hearing system, the attitude of the professional who deals with them, and the quality of the counseling they receive all interfere in parental decisions about the communicative resources (3) that will be used in interaction with their deaf children.
Parents often find it difficult to identify hearing loss because deaf people usually have some residual hearing capacity and may respond to vibrations, visual stimuli, or the air pressure produced by the movement of noisy objects; therefore, they may give pseudo-responses (4). This may also occur during pediatric consultations; the suspicion may be difficult to establish, and referral to the otolaryngologist may be delayed (5).
When parents realize that the child does not hear, they suffer the loss of the fantasy of a perfect child (4). According to Marchesi (6), a diagnosis of hearing loss is extremely painful for hearing parents. Receiving the diagnosis of deafness is a stressful experience (7) and a source, for parents, of feelings not only of sadness, but also of anxiety and insecurity in the face of the unknown and the future consequences of hearing loss (8). Studies (4,7-11) indicate that finding out that a child is deaf may trigger different reactions in the family. However, when feelings of denial, anger, grief, pain, guilt and depression, for example, are shared, it is more likely that parents will be able to face reality and seek strategies that facilitate and minimize the consequences of hearing loss.
Currently, neonatal hearing screening is routine in several maternity wards. Initial procedures can detect hearing impairment at an early stage, and newborns can be referred, in indicative cases, to other tests to confirm the diagnosis (12).
The moment the diagnosis is made is crucial for parents, because that is when they receive information that is, most times, unexpected. Parents are often in shock because they do not understand or are unfamiliar with the terms and procedures, which may trigger panic (4,7). The importance of this moment for families is clear. Therefore, this study evaluated the experiences of mothers from suspicion of deafness to diagnosis and referral to early intervention, as well as their perception of the way the diagnosis was made and communicated.
Method
This study is part of a larger qualitative research project called Psychosocial aspects of deafness: the social representations of hearing mothers (13). Qualitative approaches investigate processes, that is, how phenomena naturally occur and how the relations between these phenomena are established (14). Therefore, this study investigated the process of experiencing events and the meanings assigned by mothers, from the suspicion that something was "wrong" with their infant to their attitudes after diagnosis. This study was approved by the Committee on Ethics and Research of the School of Medical Sciences of Unicamp, Brazil. It included ten hearing mothers and their deaf children who had attended a specialized service in a Center for Studies and Research on Early Interventions in a city in the state of São Paulo, Brazil, for at least two years. As a qualitative study, the number of participants was not defined in advance, and sampling was limited by theme saturation. Chart 1 shows the characteristics of the mothers (age, education, occupation), their deaf children (age, degree of hearing loss, etiology, age at diagnosis) and family income.
Mothers received explanations about the study and signed an informed consent form to participate. After that, they answered a semi-structured interview, which was recorded using an audiocassette recorder and later transcribed. The guiding topics for this study were the diagnosis of hearing loss and referrals, and the following points were analyzed: when, how and by whom it was suspected that the child did not hear; when and how the diagnosis of hearing loss was made; and how the referral to early intervention occurred.
Results and discussion
Interview data showed that, before the confirmation of the diagnosis of hearing impairment, most mothers went through a period of suspicion that something was not right with their child, because the child either was not startled and did not react to sounds and noises, or did not speak. Some mothers tried to share their suspicions with the child's pediatrician, but, according to their reports, their questions were not always investigated.
The analysis of diagnoses revealed that six mothers in the study received confirmation of their child's deafness before the infant was one year old. For the Brazilian reality, these diagnoses were made at an early stage. Two of them, as there were other deaf people in their families, were already being followed up by a geneticist to investigate the probable cause of hearing loss (Waardenburg syndrome). Three of the other children received a diagnosis of hearing impairment between one and two years of age, and the other, who had a diagnosis of Usher syndrome, in which case deafness may be progressive, at four.
The early diagnosis of the children in the group was frequently a result of their mother's observation and their search for a professional who would be open to their suspicions, as reported by Mother #2: "I realized, because of the radio clock; I was changing her diapers when the alarm went off, a noise like that, and she was not startled […] she was six months old, I realized […] and started making a lot of noise, making noise behind her ear and nothing, she did not respond; then, I went to the doctor, we took her for a consultation and he said, no, it is just your feeling, and that I should wait a little longer; then I returned at eight months and he referred her to BERA (brainstem evoked response audiometry) and she took that test at 11 months." A similar case was experienced by Mother #10, who had rubella during pregnancy. She reported: "I had rubella during pregnancy, then, when she was born, I was paying attention, especially when I was alone. I made noises and I realized, like, she did not follow them. And when she was near me in the baby carriage, I started banging things. Nothing startled her, nothing. She slept a lot, I turned on the music and she did not wake up." When referring to her search for a diagnosis, she reported: "For me, it was, like, a little complicated, because I talked to the pediatricians, like, about what I thought was happening, but some of them said, 'Well, mother, she is still so small, let's wait a little longer'. And then I would go and see another pediatrician (laughter). I went to four pediatricians, just the same […]. Then I went to one, B. was about three months, and I said, 'Look, I am almost sure that she cannot hear', and then he said, 'Mother, do you want her to have the test?' And I said: Yes." Mother #9 also had rubella during pregnancy, but her journey to have the test done was different, as she reported: "[…] a boy in the street fired a firecracker and he did not react, and then I was scared and he continued sleeping, and then I said to my husband: this boy does not hear, he is deaf […]. We took him to the pediatrician, and the pediatrician made some movements there and he reacted, he turned around, and he said: 'no, it's because he is very hyperactive […] he hears, he reacts to the sounds'. He turned his head […] then he stopped, he said he had nothing, everything was fine, and this went on like this; when he was around 6 months, he could not keep his head up, his head was soft […] I mentioned that to the pediatrician and then he told us to go to the neuropediatrician's office." The mother said that the neuropediatrician referred the child to a physical therapist, who suspected hearing loss and referred the child to an otolaryngologist. The diagnosis was made when the child was eight months old, but intervention for hearing impairment started only at 1 year and 8 months of age.
A different path was followed by Child #7, who was premature and had anoxia and neonatal complications. At the time, the maternity ward where the child was born conducted neonatal hearing screening of infants at risk. The test of otoacoustic emissions was attempted more than once, and the infant was referred to the BERA test, after which hearing impairment was diagnosed at four months of age. The otolaryngologist referred the child to an ophthalmologist, justifying this by the fact that the drugs the infant had been administered might have affected his sight. As there were no ophthalmologic impairments, the infant was referred to a neurologist, who referred the child to the institution that provides early intervention programs for hearing impairment at one year and three months of age. Although suspicion was raised at an early age, the complications in this child's case delayed referral to early intervention, and, as in the other cases (#2, 9 and 10), the lack of concurrent or more accurate referrals contributed to the delay.
An early diagnosis is important because it may minimize the effects of the anxiety that parents experience when they do not know what is happening with their child, a situation that may affect the affective and emotional relationship between infants and their parents at a fundamental time for their development (15,16). It may also prevent difficulties that the child may face in linguistic, communicative, cognitive, social and emotional aspects (15).
According to Vieira, Macedo and Gonçalves (17), the diagnosis of hearing loss, including the degree and type of loss, is based on history and focused on the investigation of gestational, peri- and postnatal risks, history of infectious and respiratory diseases, otolaryngological assessment and hearing tests. Therefore, the pediatrician who follows up the infant may identify these risks and conduct the first hearing tests and, in case of any suspicion, make the referral to an otolaryngologist.
Studies (18-20) about the way pediatricians face deafness draw attention to the fact that, in general, they do not routinely examine hearing and have little information about the causes of hearing loss, its classification and assessment methods, which makes detection difficult and delays treatment.
Colozza and Anastasio (20) included physicians working in neonatology in their study, in addition to pediatricians, and found that most (83%) adopt specific procedures for hearing loss when treating high-risk infants, but do not investigate hearing in their routine examinations. The authors found that, despite this, all interviewees agreed that the doctor should be responsible for caring for the infant's communication capacity.
Studies conducted in Brazil show that the diagnosis of hearing impairment in our country is delayed and made at about three or four years of age, and the time from suspicion of hearing impairment to its confirmation is 11 to 48 months (21) .
Our study found that, in some cases, mothers (#1, 5 and 8) received a diagnosis for their children at one and a half to two years, although they had suspected hearing impairment earlier. Child #5, for example, had no severe impairment and responded to sounds, but did not speak at one year and nine months. The mother suspected that something might be wrong, but the diagnosis was delayed because there was a history of "speaking late" in the family, as she reported: "He was already one year and nine months and did not speak, but responded to any sound, and I had no idea. Then, at two years, I said: I'm going to see the doctor, because up to that time, the pediatrician, nobody had said anything, they saw it and said it was normal, and as there was a case in my family, my brother and my husband's sister spoke only when five years old, we thought: OK, let's wait, let's see, if he turns two and has not started developing any speech, I'll take him, yes I will, to someone that can check it." It should be noted that the family, when faced with a suspicion, tries to find an explanation for the fact that their child is taking too long to speak. This mechanism may make the family delay the decision to take the child to the doctor for an examination. In contrast, when families tell pediatricians about their suspicions, some reassure them, as reported by Mother #5: "When I told the pediatrician that my boy was taking too long to talk, that he only mumbled "mamma", "papa" and could not say mother, father, he could not say that, the doctor made those noises and he turned to see where the sound was coming from and she thought that… 'oh, he is just a little lazy, leave it alone, he is developing', but she didn't, she never had any suspicion." Something similar happened to Mother #8, who had a premature baby who received antibiotics intensively after a surgery to correct duodenal atresia on the third day of life. The mother reported: "I took her to the pediatrician all the time: and, well, she is still too young, you have to wait at least some six months, because sometimes her development is really slower because she was premature, was kind of malnourished at birth, and that went on like this." Deafness, as it is not apparent, is often overlooked in routine clinical examinations (22). Many healthcare professionals lack familiarity with hearing problems, and the baby's use of visual cues may confound the evaluation of responses to sounds (22). In the case of Child #1, the pediatrician saw that she was not startled when there was a noise in the room and referred her to an otolaryngologist. The mother said: "Coincidentally, a ruler fell on the floor, one of those rulers to measure babies, it fell and I was startled, the doctor was startled, but the baby did not even flinch." One of the frequent reasons for delays in suspecting that the child does not hear, both for the family and for healthcare professionals, is the fact that the child does not have a severe hearing impairment. The greater the residual hearing, the harder it is to realize that the child does not hear, as responses to stronger and deeper sounds end up masking the child's inability to understand the sounds of oral language.
When parents receive confirmation of the diagnosis, the shock and emotional reactions depend on how much they had suspected that something was wrong with their child's hearing. At this moment, the attention and willingness to listen of the professionals who break the news are fundamental, and it is their role to make sure that the parents understand despite the emotional shock. They should also take into consideration the sociocultural conditions of those parents.
The family builds a set of ideas about the "disease" that are strongly affected by subjective issues, such as personality and culture (23). Even when parents suspect that their children have hearing loss, the emotional shock may hinder their comprehension, and there might be discrepancies between what the healthcare professional tries to explain to parents, the form in which it is explained and the way the contents are approached, and what parents understand or are able to comprehend (24,25).
Mother #5 clearly explained her difficulty in understanding medical terms: "The first otolaryngologist, to be honest, I didn't understand a thing, I had already read the test result because it was sent to my home, and I can't control my curiosity, I opened it and saw that there was severe bilateral loss, but I had no idea what that meant; then, I got there, she sat down, read it and said: well, your son has severe bilateral loss. And I said: what does that mean in my language? Because I have no idea what you are talking about. She said: Your son is deaf. Then, there, I did not believe it, I was stunned [...] I made an appointment with a speech therapist covered by my medical insurance plan and she was the one who explained, she gave me the number of C (Institution), gave me the things, but the otolaryngologist herself only left me feeling stunned because she did not give me any explanations." Gilbey (25) evaluated parents' experiences when receiving the information about their children's hearing loss and concluded that 50% were unhappy about the diagnostic process; one of the frequent complaints was that the information was given directly and thoughtlessly. In turn, Fallowfield and Junkins (26) found that healthcare professionals report difficulties in giving bad or sad news to patients and their families, because the interaction at such moments is stressful, and physicians, when they lack effective training, may do it inappropriately, which may affect acceptance, understanding and adaptation to the problem. Mother #7 reported not being unhappy about the diagnosis, perhaps because her confusion about the doctor's explanation of the hearing test softened the limitations imposed by the hearing loss: when told about a hearing loss at 80 decibels (dB), the mother understood that her daughter had 80% hearing capacity and, therefore, good hearing. Expressing the loss and residual hearing as a percentage seems important for giving parents a clearer idea of how much their child can hear.
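As a side note on the misunderstanding described above: the decibel scale is logarithmic, so a hearing loss expressed in dB cannot be read as a percentage. A minimal sketch of the standard acoustics relation (added here only for illustration; it is not part of the original study):

```python
def intensity_ratio(db_shift: float) -> float:
    """Sound-intensity ratio implied by a hearing-threshold shift in decibels.

    Decibels are logarithmic: ratio = 10 ** (dB / 10).
    """
    return 10 ** (db_shift / 10)

# An 80 dB threshold shift means a sound must be 10^8 (100 million) times more
# intense to be heard -- very different from the "80% hearing capacity" reading.
print(intensity_ratio(80))  # 100000000.0
```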
In the case of Children #4, 6 and 10, who had an early diagnosis and whose doctor was able to make the necessary referral rapidly, parents had support at the time when they needed it the most, and this made a difference in their experience. For example, Mother #10 said: "I prayed to God to show me the best way to go, because I really knew nothing, nothing, nothing. I did not know what to do, then I looked forward to the day to come here and talk to somebody […] I got my feet back on the ground fast." The mother said that she started using sign language with her daughter at home, but at first saw no response, which generated considerable anxiety; eventually, after four months, she realized that she was on the right path: "Four months, and that was what I did, but, my God, is this right? It was always like that, doing it, but, was it right? Is the sign correct? […] Then B started responding and understanding, I saw that she understood, God, the best thing I ever did and I'd do it all over again." The mother, at first, had feelings of anxiety and insecurity about the guidelines on how to use the signs, but, as she felt assisted and supported, she trusted the professionals and rapidly became confident that she was treading the right path.
Even for children diagnosed before one year of age, and considering the multiple feelings of mothers facing their children's hearing loss, we found that, in some cases, diagnostic suspicion might have been raised earlier if the mothers' words had received more attention. That is, our findings showed that it was difficult for healthcare professionals to "hear" the mothers' questions, complaints and inquiries, even when there was a history of risk factors, such as congenital rubella. In some cases, even when neonatal hearing screening was performed and there was a timely diagnosis, care was delayed because referrals to services that provide early interventions in cases of hearing loss were not adequate or not concurrent. Moreover, at the time of diagnosis, the way the information is communicated to the family should take into consideration the mother's social, cultural and emotional conditions.
The qualification and attentiveness of healthcare professionals should ensure an early diagnosis and support for parents, as well as adequate referrals and follow-up of confirmed cases of hearing loss.
Chart 1 - Characteristics of mothers and their deaf children (MW: minimum wage).
A review of the mechanical properties of additively manufactured fiber reinforced composites
Recent developments in additive manufacturing technologies have made it possible to print fiber reinforced composite materials with reasonable mechanical performance. In this paper, a brief review of additive manufacturing technologies for composites is presented. The focus is on the mechanical properties of both discontinuous and continuous fiber reinforced composites fabricated by state-of-the-art additive manufacturing technologies. The deformation mechanisms are also briefly discussed. In addition, recommendations for future work are made.
Introduction
Additive manufacturing (AM) technology has advantages in rapid prototyping and the ability to produce customized and complex geometries [1] compared with traditional manufacturing technologies (such as injection molding and compression molding). Therefore, AM has applications in some industrial fields, such as aerospace, automotive, biological and medical [2]. However, AM has not been able to replace traditional manufacturing methods in past decades due to the poor mechanical properties of the pure materials it produces [3,4]. For example, the automotive industry requires metals with ultimate strengths between 200 MPa and 1600 MPa to bear load and improve crash resistance [5]. In contrast, the tested average tensile strengths of 3D printed pure polymers were relatively low, at 28.5 MPa for ABS and 56.6 MPa for PLA, as reported by Tymrak et al. [6]. The low strength of additively manufactured pure polymers does not meet the requirements of industrial load-bearing applications. This drawback has restricted the wide application of AM technologies in various industrial sectors [4].
The mechanical properties of printed polymer materials can be improved significantly by adding fibers to pure polymers. The most popular reinforcing fibers employed by AM technologies are carbon fibers (CF), glass fibers (GF) and Kevlar fibers (KF). Fibers can be either discontinuous or continuous. There are mainly five AM technologies used to produce fiber reinforced composites (FRCs), namely Fused Deposition Modeling (FDM) using thermoplastic filaments, Laminated Object Manufacturing (LOM) using laminated sheets, Stereolithography (SLA) using photopolymer resin, Selective Laser Sintering (SLS) using plastic powder, and Direct Ink Writing (DIW) using thermoset epoxy resin. Note that SLS can only be used to print discontinuous fiber reinforced composites; FDM, LOM, SLA and DIW can be used to print both discontinuous and continuous fiber reinforced composites. Figure 1 is a flowchart showing the relation between raw material, AM process and component property. For FDM, the most commonly used filament materials are polylactic acid (PLA), acrylonitrile butadiene styrene (ABS) and Nylon. With the recent development of 3D printers, some leading 3D printing companies can print using advanced polymers, such as polyether ether ketone (PEEK), which have much better mechanical performance than the previously mentioned polymers [7]. For LOM, prepreg carbon fiber reinforced polymer sheets are required instead of filaments. SLS printers can use a variety of powdery materials (the most common being polyamide, such as Nylon PA12 and PA11). DIW and SLA are able to print products with high precision and smooth visual appearance, but they are less frequently used to print fiber reinforced composites due to their limitation to thermoset materials (resins) and the difficulty of controlling fiber orientation. Among all these methods, FDM and LOM are the two preferred methods for printing FRCs because they can print composites with relatively high strength [8]. Comprehensive reviews of AM technologies for fiber reinforced composites can be found in [9,10]. The aim of the current study is to review the mechanical properties of fiber reinforced composites produced by AM technologies. Section 1 presents the background information. Sections 2 and 3 discuss discontinuous and continuous fiber reinforced composites, respectively. Conclusions and recommendations for future work are presented in Section 4.
Mechanical properties of 3D printed composites with discontinuous fibers
The mechanical performance of discontinuous fiber reinforced composites printed by different AM technologies (FDM, SLS, DIW and SLA), as reported in published papers, is summarized and compared in the following sub-sections.
Fused Deposition Modeling
Fused Deposition Modeling (FDM) is the most mature and developed AM technology. It is simple and cost effective [11]. Currently, the fiber length suitable for FDM is from 5 to 300 μm. Regarding filament preparation, carbon fiber and plastic resin pellets are compounded in a blender/mixer to produce a mixture of carbon fiber and plastic resin. Subsequently, the mixture is fed into an extruder to produce the filament. The fibers tend to orient automatically along the printing direction due to shear forces acting in the nozzle during extrusion. Therefore, the mechanical properties of the printed materials are better along the printing direction and weaker in other directions. Table 1 shows a summary of the mechanical properties of 3D printed composites with different fiber and matrix combinations. The mechanical properties of FRCs are affected by various factors. Firstly, Blok et al. [12] pointed out that increased fiber volume fractions (Vf) contributed to the improvement of the tensile properties of carbon FRCs. For example, a discontinuous carbon fiber reinforced Nylon material had tensile strengths of 33.5 MPa and 83.8 MPa when the fiber Vf was 6% and 18%, respectively. Secondly, Blok et al. [12] modified the standard FDM process by adding a consolidation step when preparing the reinforced filaments to further increase processability and enhance the performance of the FRCs. For instance, the tensile strength of a carbon fiber reinforced Nylon material was 250 MPa with 12% Vf fibers, two times larger than the tensile strength of standard-FDM-manufactured Nylon FRCs (15% Vf fibers). Thirdly, Blok et al. [12] showed that the processing temperature could also affect the performance of FRCs. In general, higher processing temperature leads to increased tensile properties for the PLA and ABS materials, while for Nylon materials a decrease in tensile strength is observed as temperature increases. The overall low performance of carbon fiber reinforced Nylon materials may be attributed to the low modulus of pure Nylon. ABS materials have the highest tensile strength of 320 MPa and modulus of 25 GPa.
In addition to the commonly used polymers, new polymers such as PEEK (whose tensile strength is higher than that of pure ABS and PLA) have also been employed in FDM. Wang et al. [13] examined the tensile and flexural properties of carbon and glass fiber reinforced PEEK materials and found that the printed composites were stronger than pure PEEK. The largest tensile and flexural strengths were 94 MPa for CF/PEEK and 165 MPa for GF/PEEK with 5 wt% fibers, increases of 19% and 17%, respectively, compared with printed pure PEEK. However, there were no significant discrepancies in either tensile or flexural properties between GF/PEEK and CF/PEEK. Moreover, in contrast to the previously mentioned trend of mechanical properties increasing with fiber volume fraction, both the tensile and flexural strengths of GF/PEEK and CF/PEEK decreased with increasing fiber weight percentage in the range of 5 wt% to 15 wt%. To understand the reasons, the tensile fracture surfaces of the 3D printed fiber reinforced PEEK were examined using SEM (Scanning Electron Microscopy) [13]. The observed microstructure showed more visible gaps and fiber pull-out in composites with high fiber percentage, which led to the decrease in strength. (Notes to Table 1: 1 standard FDM process without consolidation steps; 2 modified FDM process with consolidation steps and printing at different temperatures.)
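The rise of strength with fiber volume fraction observed for short-fiber FDM parts can be rationalized with a modified rule of mixtures. Below is a minimal sketch; the fiber and matrix strengths and the efficiency factors are illustrative textbook-style assumptions, not values taken from the cited studies, and the prediction is an upper bound that printed parts typically fall below because of voids and imperfect bonding:

```python
def short_fiber_strength(vf: float, sigma_f: float, sigma_m: float,
                         eta_l: float = 0.5, eta_o: float = 0.6) -> float:
    """Modified rule of mixtures for short-fiber composite tensile strength (MPa).

    sigma_c = eta_l * eta_o * sigma_f * Vf + sigma_m * (1 - Vf)
    eta_l: fiber-length efficiency factor (-> 1 for long fibers)
    eta_o: orientation factor (~1 for fully aligned, 3/8 for random in-plane)
    """
    return eta_l * eta_o * sigma_f * vf + sigma_m * (1.0 - vf)

# Assumed typical strengths: carbon fiber ~3500 MPa, Nylon matrix ~70 MPa.
for vf in (0.06, 0.18):
    print(f"Vf = {vf:.2f}: upper-bound estimate ~{short_fiber_strength(vf, 3500, 70):.0f} MPa")
```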
Selective Laser Sintering
Selective laser sintering (SLS) uses a CO2 laser as the heat source and composite powder (a combination of reinforcing fibers and polymers) as the raw material to fabricate FRCs. The ideal fiber size is between 20 and 80 μm. Carbon fibers and plastic resins are first dissolved in an organic solvent to make a homogeneous mixture. The solvent is then removed to precipitate out the powder, which is composed of carbon fiber and plastic. The powder is further crushed and milled. Since the fibers are in the form of a fine powder in the raw material, the printed materials have nearly isotropic mechanical properties. The mechanical properties of the composites fabricated by SLS are summarized in Table 2. Jansson & Pejryd [14] showed that the tensile strength of CF/PA12 composites was influenced by printing orientation. The materials printed in the x direction exhibited the highest tensile strength, approximately 66.7 MPa. On the other hand, the materials printed along the diagonal direction showed a lower tensile strength of around 30 MPa, approximately half that of those printed in the xy plane. Moreover, the mechanical performance of SLS printed FRCs increases with carbon fiber weight percentage. Yan et al. [15] found that the flexural strength of CF/PA12 composites increased from 76 MPa to 113 MPa as the carbon fiber percentage increased from 30 wt% to 50 wt%. Furthermore, the use of PEEK further improved the mechanical properties of FRCs. In order to fully melt the PEEK material, Yan et al. [16] increased the processing temperature from 240 to 380 degrees; the tensile strength of the CF/PEEK composites reached 110 MPa at 5 wt%, which was 37.5% higher than that of pure PEEK.
Direct Ink Writing
Direct ink writing (DIW) technology uses viscous inks as matrix materials. The reinforced ink preparation process is simple: reinforcing fibers are mixed with epoxy resin at 2000 rpm for three minutes [17]. Fibers cannot be aligned automatically during the DIW process unless additional shear forces are applied. Lewis et al. [18] modified a standard inkjet-based printer by adding a screw to the extruder to apply shear stress during extrusion, and an impressive alignment was achieved. The stiffness of the printed materials containing fibers aligned in the loading direction was nearly 10 times higher than that of many 3D printed pure polymers. The test results of composites manufactured by DIW are summarized in Table 3. Nashat et al. [17] found that both the flexural strength and modulus increased with fiber volume fraction: the flexural strength and flexural modulus increased from 78.3 MPa and 3.84 GPa at 3.5% Vf to 108 MPa and 4.23 GPa at 6.3% Vf, respectively. Invernizzi et al. [19] mixed photocurable resin with polyamide so that UV light could assist the printing process as an additional curing source. In general, with the addition of glass and carbon fibers, both the tensile strength and modulus of the printed composites were improved. Moreover, carbon fibers exhibited better performance-enhancing ability than glass fibers. For fiber reinforced B33 materials, the tensile strength was enhanced by 18.8% to 41.7 MPa with added glass fibers, while for fiber reinforced B50 materials, the tensile strength was improved by 91.7% to 30.6 MPa with added carbon fibers. Furthermore, added nano-SiO2 further increased the tensile strength and modulus of carbon fiber reinforced B50 by 10%, since SiO2 enhanced the bonding between printed layers.
Stereolithography
Stereolithography (SLA) uses a laser of specific intensity to irradiate the surface of a photocurable material, completing each layer from point to surface, layer by layer, until the final product is fabricated. SLA printed materials have the highest precision (approximately 50 μm) [20]. A high intensity ultraviolet (UV) light is subsequently applied to the part to complete the polymerization process.
The mechanical properties of FRCs with different fibers manufactured by SLA are listed in Table 4. Sano et al. [21] found that both the tensile strength and Young's modulus of glass fiber reinforced light-cured resin composites increased with the weight percentage of glass fibers. Strictly speaking, SLA printing of composites using chopped glass fibers was unsuccessful because the fibers could not be self-oriented and were thus only distributed in the top layers. Therefore, only powder-based (instead of chopped-fiber) glass fiber reinforced light-cured resin (LCR) composite specimens with different weight percentages (from 10 wt% to 50 wt%) were printed and mechanically tested in Ref. [21]. The highest achieved tensile strength was 22 MPa, which was 110% higher than that of the pure resin. Moreover, nano-reinforcement can be employed to enhance the mechanical performance of SLA printed composites. For instance, a 70% improvement in tensile strength was achieved by adding carbon nanotubes (CNTs) into neat stereolithography resins (SLRs) [22], a 20.6% improvement in tensile strength was achieved by employing nano-SiO2 (at the same printing resolution) [23], and the tensile strength was improved by a factor of two when graphene oxide (GO) was added and annealed at 100 degrees [24].
Mechanical properties of 3D printed composites with continuous fibers
In addition to discontinuous fiber reinforced composites, 3D printed continuous fiber reinforced composites (CFRCs) have been developed and advanced during the past decade [25]. Before 2016, the highest reported tensile strength and elastic modulus of reinforced filaments with only raw fibers were 464 MPa and 35.7 GPa, respectively [26]. With improvements in production technology, a sizing process has been applied to manufacture impregnated filaments, and high-temperature extruders have been introduced with the ability to completely melt the sizing agent and further improve the bonding strength [27]. Justo et al. [28] reported that the average tensile strength of printed composites increased to approximately 700 MPa with impregnated carbon fiber filaments, which was three times stronger than that of composites printed with raw-material filaments (without sizing agent).
The mechanical performance of continuous fiber reinforced composites printed by different AM technologies (FDM, SLA and LOM), as reported in published papers, is summarized and compared in the following sub-sections.
Composites printed by standard FDM printers
Markforged Inc. released their first commercial 3D printer for FRCs in 2014. Table 5 lists mechanical test data from Markforged for FRCs with unidirectionally oriented fibers. The tensile strength is almost four times larger than that of composites produced with discontinuous fibers, and flexural strength also shows significant improvement when using continuous fibers. The addition of continuous carbon fibers enhances the mechanical properties of printed composites to the level of aluminum alloy, while reducing the weight by half.
(Table 5. Mechanical properties of fiber reinforced composites produced by MarkTwo [29].)
Tensile properties of FDM printed CFRCs are summarized in Table 6. The data range from a tensile strength of 140.2 MPa with a corresponding modulus of 14.1 GPa for a carbon fiber reinforced Nylon material with 6% Vf to a strength of 750 MPa with a modulus of 40 GPa for an aramid fiber reinforced Nylon material with 50% Vf. In general, fiber reinforced PLA materials [30-32] show lower strengths and moduli than fiber reinforced Nylon materials [33,34]. Moreover, carbon fiber reinforced materials yielded the highest strength and modulus at the same fiber volume percentage. Furthermore, the tensile properties depend on the fiber volume fraction; for example, a carbon fiber reinforced Nylon material exhibited 750 MPa with 50% Vf fibers, while the tensile strength was only 150 MPa when the fiber fraction was 6.7% Vf [34]. Figures 3 and 4 show the tensile moduli and tensile strengths, respectively, of CFRCs manufactured by FDM processes in approximately 35 published papers. The results indicate very scattered distributions of both strength and modulus. This is due not only to deviations in the choice of materials and formulation (polymer, fiber type, fiber volume percentage, among others), but also to different processing parameters and printers. In addition, it should be noted that there are currently no specific standards for the design of specimens to evaluate the mechanical properties of 3D printed composites; thus, specimen geometry varies, which influences the test results as well.
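For continuous, aligned fibers, the simple rule of mixtures gives a quick upper-bound estimate against which the scattered reported values can be judged. A minimal sketch follows; the constituent strengths are typical handbook-style assumptions rather than values from the cited papers:

```python
def continuous_rom_strength(vf: float, sigma_f: float, sigma_m: float) -> float:
    """Rule-of-mixtures upper bound for an aligned continuous-fiber composite (MPa).

    sigma_c = sigma_f * Vf + sigma_m * (1 - Vf)
    """
    return sigma_f * vf + sigma_m * (1.0 - vf)

# Assumed typical strengths: carbon fiber ~3500 MPa, Nylon matrix ~70 MPa.
print(continuous_rom_strength(0.20, 3500, 70))  # 756.0 -> ~756 MPa upper bound at 20% Vf
# Reported FDM strengths at comparable Vf are usually well below this bound,
# consistent with the voids and imperfect fiber-matrix bonding discussed in this review.
```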
Materials printed by modified FDM printers
To prevent a large number of voids from forming during printing and to increase the interfacial shear strength between beads, compaction during 3D printing was proposed; this is referred to as modified FDM technology. Thermoplastic filaments and continuous carbon fibers are supplied separately to the 3D printer, while a compaction roller consolidates the fiber reinforced layer right after extrusion of the impregnated filaments. A few studies have comprehensively analyzed the effects of compaction on the mechanical properties of CFRCs [35,36], and the enhancement of mechanical properties due to the compaction roller has been confirmed.
The mechanical properties of continuous fiber reinforced composites fabricated by modified FDM technology are summarized in Table 7. Zhang et al. [35] found that the tensile and bending strengths of specimens were enhanced to 644.8 MPa and 401.2 MPa by employing compaction, whereas the tensile and bending strengths were 109.9 MPa and 163.1 MPa without applying any pressure during fabrication. Moreover, Ueda et al. [36] compared both the tensile and flexural properties of carbon fiber reinforced Nylon composites printed by modified FDM and conventional FDM. The average tensile strengths were approximately 1031 MPa and 777 MPa for the composites fabricated by modified FDM and conventional FDM, respectively. The results showed no difference in tensile modulus between the two methods, due to the same fiber percentage, while the tensile strength of the modified FDM specimen was approximately 33% higher than that of the conventional FDM specimen due to the lower void percentage.
Composites printed by other AM technologies
In addition to FDM, SLA [21,37] and LOM [38,39] technologies have also been reported to be capable of producing CFRCs with continuous fibers; however, only limited work could be found. The mechanical properties of continuous fiber reinforced composites fabricated by SLA and LOM technologies are summarized in Table 8. For SLA printed FRCs, the fibers can be either fiber filaments or woven fabrics. Sano et al. [21] reported that the tensile strength and Young's modulus of FRCs with continuous glass fiber woven fabrics increased significantly; the tensile strength was 80 MPa, 10.5 times higher than that of the pure resin. Moreover, Lu et al. [37] performed tensile tests on pure polymer and carbon fiber reinforced composites produced by both FDM and SLA technologies. The results showed that embedded carbon fibers increased the elastic modulus by 110.49% and 23.69% for FDM and SLA fabricated composites, respectively. SEM analysis revealed that the presence of pores at the fiber-matrix interfaces in both the SLA and FDM printed composites caused failure of the fiber-matrix bonding and reduced the tensile properties.
For LOM technology, Chang et al. [38] employed a novel method to produce carbon fiber reinforced PEEK composites using prepreg composite sheets with 59% fiber volume fraction. The tensile strength and modulus of the printed composite were 1513.8 MPa and 133.1 GPa, respectively. Such high tensile strength is fully comparable with the strength of materials manufactured by the manually layered autoclave method. Moreover, Parandoush et al. [39] used the same technology to manufacture carbon fiber reinforced PA6 composites, which had the highest tensile strength (668.3 MPa) among all reported AM manufactured PA6 composites. SEM analysis revealed superior interfacial bonding and a high volume fraction of continuous carbon fibers (55% Vf). As shown in Figure 4 (where all the compared composites use Nylon as the matrix material), carbon fiber reinforced composites display higher tensile strength than glass fiber and aramid fiber reinforced composites, especially at high fiber volume fraction (e.g. 40% Vf). Note that when the fiber volume fraction is over 40%, only glass fibers and aramid fibers have been successfully printed in composites.
No record shows that carbon fibers can be printed at such a high fiber volume fraction, because CFRCs tend to become brittle when the carbon fiber fraction is high. In addition, recent progress in LOM technology (oval Region five) has improved the tensile strength of AM fabricated composites up to 1500 MPa, which enables AM printed composites to compete with materials produced by the traditional autoclave consolidation process (oval Region six). Oval Region seven in Figure 4 depicts that AM manufactured continuous fiber reinforced composites can compete with those fabricated by compression molding. Although the mechanical properties of discontinuous fiber reinforced composites (shown in oval Region eight) are enhanced compared to pure polymers, they are inferior to those of composites manufactured by continuous-fiber AM technologies.
In general, although FDM and LOM can produce composites with relatively high strengths (500-1500 MPa), they still fall short of the tensile strength (more than 1500 MPa) and fiber volume fraction (up to 70%) of conventionally manufactured composites. This leaves exciting opportunities for various AM technologies to further enhance the mechanical properties of FRCs in the near future through material development and process enhancement.
Conclusions
Adding discontinuous and continuous fibers to pure polymer materials during the additive manufacturing process has become a promising trend in the production of composites. The addition of reinforcing fibers improves the mechanical properties of the original pure polymer materials. This paper briefly reviews the five major additive manufacturing technologies (FDM, SLA, SLS, DIW and LOM) used to fabricate fiber reinforced composites, with attention to the mechanical properties of the AM fabricated fiber reinforced composites.
For discontinuous fiber reinforced composites, the tensile strength and modulus of composites fabricated by FDM and modified FDM are normally better than those fabricated by SLS, DIW and SLA. The mechanical performance of fiber reinforced PLA, Nylon, ink, photocurable resin and powder is enhanced by added fibers when using the FDM, SLS, DIW and SLA technologies. However, for PEEK used in FDM, the mechanical properties decrease with increasing carbon fiber percentage. Moreover, some research groups have modified the FDM process by adding a consolidation step; the tensile strength of a carbon fiber reinforced Nylon material could then reach 250 MPa, two times larger than the tensile strength of standard-FDM-manufactured Nylon FRCs. Furthermore, processing temperature also affects the mechanical properties of FDM fabricated composites.
For FDM manufactured continuous carbon fiber reinforced composites, the tensile strength generally increases with fiber volume fraction. Moreover, carbon fiber reinforced Nylon composites show superior mechanical performance compared with other types of CFRCs (GF/Nylon, aramid/Nylon and CF/PLA). The tensile strength of CF/Nylon composites can exceed 300 MPa when the carbon fiber volume percentage is over 20%, which gives FDM manufactured CF/Nylon composites the opportunity to compete with composites fabricated by compression molding. Furthermore, some researchers have confirmed the enhancement of mechanical properties due to an additional compaction roller: the tensile strength of the modified FDM specimen was approximately 33% higher than that of the conventional FDM specimen due to lower void formation. In addition, the maximum reported tensile strength is approximately 1500 MPa using LOM technology, which is even better than autoclave consolidation.
Although some AM manufactured fiber reinforced composites reach the same level of mechanical performance as traditionally manufactured composites, voids and poor bonding are still observed. Therefore, the research on AM fabricated fiber reinforced composites is still in its infancy. The recommended areas to be investigated in the future include:
• novel matrix materials (such as thermosetting plastics and nanocomposites) to be developed for discontinuous fiber reinforced composites;
• possible approaches to increase continuous fiber volume fraction and reduce the void percentage, to further enhance the mechanical properties of AM fabricated composites.
RNA-on-X 1 and 2 in Drosophila melanogaster fulfill separate functions in dosage compensation
In Drosophila melanogaster, the male-specific lethal (MSL) complex plays a key role in dosage compensation by stimulating expression of male X-chromosome genes. It consists of MSL proteins and two long noncoding RNAs, roX1 and roX2, that are required for spreading of the complex on the chromosome and are redundant in the sense that loss of either does not affect male viability. However, despite rapid evolution, both roX species are present in diverse Drosophilidae species, raising doubts about their full functional redundancy. Thus, we have investigated the consequences of deleting roX1 and/or roX2 to probe their specific roles and redundancies in D. melanogaster. We have created a new mutant allele of roX2 and show that roX1 and roX2 have partly separable functions in dosage compensation. In larvae, roX1 is the most abundant variant and the only variant present in the MSL complex when the complex is transmitted (physically associated with the X-chromosome) in mitosis. Loss of roX1 results in reduced expression of the genes on the X-chromosome, while loss of roX2 leads to MSL-independent upregulation of genes with male-biased testis-specific transcription. In the roX1 roX2 mutant, gene expression is strongly reduced in a manner that is not related to proximity to high-affinity sites. Our results suggest that high tolerance of mis-expression of the X-chromosome has evolved. We propose that this may be a common property of sex-chromosomes, that dosage compensation is a stochastic process, and that its precision for each individual gene is regulated by the density of high-affinity sites in the locus.
Introduction
In eukaryotic genomes several long non-coding RNAs (lncRNAs) are associated with chromatin and involved in gene expression regulation, but the mechanisms involved are largely unknown. In both mammals and fruit flies, they are required to specifically identify and mark X-chromosomes for dosage compensation, a mechanism that helps maintain balanced expression of the genome. The evolution of sex-chromosomes, for example the X and Y chromosome pairs found in mammals and flies, leads to between-gender differences in gene dosage. Although some genes located on the X-chromosome are expressed in a sex-specific mode, equal expression of most of the genes in males and females is required [1,2]. Thus, gradual degeneration of the proto-Y chromosome causes an increasing requirement to equalize gene expression between a single X in males and two X-chromosomes in females. X-chromosome expression must also be balanced with expression of the two sets of autosomal chromosomes. Several fundamentally different mechanisms that solve the gene dosage problem and provide such balance have evolved [1-4]. In mammals, one of the pair of X-chromosomes in females is largely silenced through random X-chromosome inactivation, a mechanism that involves at least three lncRNAs [5,6]. One, the long noncoding Xist RNA, plays a key role in marking one of the X-chromosomes and recruiting Polycomb repressive complex 2, thereby mediating its inactivation by histone H3 lysine 27 methylation [7].
In fruit flies, the gene dosage problem has been solved in an apparently opposite way, as X-chromosomal gene expression is increased by approximately a factor of two in males [2,3]. This increase is mediated by a combination of general buffering effects that act on all monosomic regions [8-10] and the specific targeting and stimulation of the male X-chromosome by the male-specific lethal (MSL) complex. The MSL complex consists of at least five protein components (MSL1, MSL2, MSL3, MLE, and MOF) and two lncRNAs, roX1 and roX2 [3,11,12]. Although the mammalian and fly compensatory systems respectively inactivate and activate chromosomes in members of different sexes, both rely on lncRNA for correct targeting. Results of UV-mediated crosslinking analyses suggest that only one species of roX is present per MSL complex in Drosophila [13]. Furthermore, inclusion of a roX species is essential for maintaining correct targeting of the MSL complex to the X-chromosome [14]. Upregulation of the male X-chromosome is considered to be partly due to enrichment of histone 4 lysine 16 acetylation (H4K16ac), mediated by the acetyltransferase MOF. The increased expression of X-linked genes in male flies is generally accepted, but the mechanisms involved have not been elucidated. Proposed mechanisms, which are hotly debated [15-17], include increased transcriptional initiation [18,19], increased elongation [20,21] or an inverse dosage effect [22].
The roX1 and roX2 RNAs differ in sequence and size (3.7 kb versus 0.6 kb) but can still individually support assembly of a functional MSL complex. In an early study of roX1 and roX2, a short homologous stretch was detected [23], which subsequently led to the definition of conserved regions shared by the two RNAs named roX-boxes, located in their 3' ends [24-26]. Confirmatory genetic studies have shown that expression of six tandem repeats of a 72-bp stem loop region from roX2 is sufficient for mediation of the MSL complex's X-chromosome binding and initiation of H4-Lys16 acetylation in the absence of endogenous roX RNA [24].
The roX RNAs are not maternally deposited, and transcription of roX1 is initiated in both male and female embryos at the beginning of the blastoderm stage [27]. Females subsequently lose roX1 expression, and a few hours after roX1 is first detected roX2 appears, but only in males [28].
Despite differences in size, sequence and initial expression, the two roX RNAs are functionally redundant in the sense that mutations of either roX1 or roX2 alone do not affect male viability, and they both co-localize with the MSL complex along the male X-chromosome [23,27]. In contrast, double (roX1 roX2) mutations, which cause a systematic redistribution of the MSL complex, are lethal for most males [29-32]. It should be noted that in roX1 roX2 mutants the reduction in MSL complex abundance on the male X-chromosome is dramatic; more pronounced than the reductions observed in mle or mof mutants [14]. Nevertheless, some roX1 roX2 mutant males may survive, while mle, msl1, msl2, msl3 or mof loss-of-function mutations are completely male-lethal [29-31]. Whether other RNA species can fulfill the role of roX RNAs in these instances or the MSL complex can function without RNA species remains to be clarified. Furthermore, the degree of lethality in roX1 roX2 mutants is highly sensitive to several modifying factors, such as expression levels of MSL1 and MSL2 [33], expression of hairpin RNAs [34,35], presence and parental source of the Y-chromosome [31], and a functional siRNA pathway [36]. The observations that roX1 roX2 mutations are not completely lethal and that there are several modifying factors suggest an additional layer of redundancy in the role of lncRNAs in chromosome-specific targeting.
To further our understanding of the role of lncRNAs (particularly specific roles and redundancies of roX1 and roX2) in chromosome-specific regulation, we here provide a comprehensive expression analysis of roX1, roX2 and roX1 roX2 mutants to explore the redundancy as well as the differences between the two lncRNA species. We show that roX1 and roX2 have partly separable functions in dosage compensation. In larvae, roX1 is the most abundant variant and the only variant present in the MSL complex when the complex is transmitted (physically associated with the X-chromosome) in mitosis. Loss of roX1 results in reduced expression of the genes on the X-chromosome, while loss of roX2 leads to MSL-independent upregulation of genes with male-biased testis-specific transcription. In the roX1 roX2 mutant, gene expression is strongly reduced in a manner that is not related to proximity to high-affinity sites.
Expression of roX1 and roX2 is differentially regulated throughout the cell cycle
Initial evidence on localization of roX RNAs originates from immunostaining experiments on polytene chromosomes. Indeed, both roX1 and roX2 are expressed in salivary gland cells and co-localize on polytene chromosomes close to perfectly (Fig 1A). Overall, the intensities of roX1 and roX2 RNA in situ hybridization signals correlate closely, and the localization patterns along the X-chromosome are nearly identical, except at cytological band 10C, where the roX2 signal is notably stronger than the roX1 signal. As cytological band 10C is the location of the roX2 gene, this implies that roX2 is favored over roX1 in MSL complexes targeting the roX2 region. At the onset of dosage compensation in the early male embryo, expression of roX is differentially regulated [27,28]. A burst of roX1 transcription in the blastoderm stage is the initial step preceding assembly of the MSL complex. This occurs independently of roX2 expression, which does not begin until 2 h after the MSL complex is first detectable on the X-chromosome. In Schneider 2 cells, roX2 is expressed more strongly than roX1 and is detectable by FISH in 95% of them, while roX1 signals, although bright, are visible only in a small fraction of the cells [37]. We therefore asked whether roX1 and roX2 are expressed in different Schneider 2 cells. Simultaneous detection of both roX RNAs showed that the rare cells that express roX1 also express roX2 (Fig 1B). Therefore, in contrast to salivary glands and embryos, only a small fraction of S2 cells express both roX RNAs, and all those expressing roX1 also express roX2.
To investigate roX localization and targeting in cells undergoing mitosis we subjected neuroblasts of male larvae and 5-6 h embryos to RNA in situ hybridization analysis. While both roX1 and roX2 were clearly visualized in the "X-territory" in most interphase cells, only roX1 signals were detected on the distal part of the metaphase X-chromosome (Fig 1C and 1D and S1A Fig). We also observed targeting of MLE to the distal part of the mitotic chromosome (S1A Fig), and such targeting by MSL2 and MSL3 has been previously shown [38,39]. We conclude that expression and/or targeting of roX RNAs is differentially regulated depending on the cell type and cell cycle stage, and roX1 RNA is the dominant roX RNA bound to the X-chromosome as part of MSL complexes during mitosis.
Generation of new roX2 mutant alleles
The roX2 mutant allele Df(1)52, the most commonly used roX2 loss-of-function allele, carries a deletion spanning a gene-dense region, including roX2 [30]. Removal of this region is lethal, so it is compensated with a rescuing cosmid, frequently P{w+ 4Δ4.3}. Nevertheless, roX2 is not the only gene affected by the widely used combination Df(1)52 P{w+ 4Δ4.3}, and flies carrying it differ considerably in genetic background from roX1 and wild type flies. In a previous microarray analysis, potential background problems were solved by comparing roX1 roX2 mutant flies with roX2 flies as controls [40]. Here, to analyze differences in expression profiles of single (roX1 and roX2) mutants and double (roX1 roX2) roX mutants, we decided to create a deletion mutant of roX2 without affecting adjacent genes. Such a mutant would permit analysis of single and double mutants using a roX1+ roX2+ strain as a control and facilitate various other genetic analyses. To create the desired mutant allele, we used the CRISPR-Cas9 technique to induce two double-strand breaks simultaneously in the roX2 locus and recovered four roX2 deletion mutant strains (Fig 2A and S2 Fig). All deletions in these mutants span the longest exon of roX2, including two conserved roX-boxes. As expected, all four mutant strains were viable and fertile. Further analysis was performed with the roX2^9-4 allele, hereafter designated as the roX2 mutant. This deletion does not uncover the intergenic regions flanking roX2, and therefore it is less likely to affect the flanking genes nod and CG11650. The breakpoints are located almost precisely at the sites of the double-strand breaks, deleting the region from 7 bp upstream of the annotated transcription start site to 60 bp upstream of the annotated gene end. RNA in situ hybridization confirmed the absence of roX2 RNA in salivary glands (Fig 2B), while the roX1 signal intensity and binding pattern were apparently unchanged in the roX2 mutant. In larval brains of roX1 mutants, roX2 RNA was still observed in the X-territory of interphase cells; however, it was not detected on the metaphase X-chromosome (S1B Fig). We recombined the newly made roX2^9-4 allele with the roX1^ex6 mutant allele [30] to obtain roX1^ex6 roX2^9-4 double mutant flies, hereafter the roX1 roX2 mutant. As observed with other mutant alleles, removal of both roX RNAs resulted in high male-specific lethality beginning at the third-instar larval stage and continuing through pupal development, although a small number of adult males hatched.
Chromosome-specific effects in roX mutants
The next experiments were designed to investigate the specific roles (if any) of the roX RNA species in dosage compensation and to assess potential additional functions in regulation of gene expression. For this, we sequenced (using an Illumina platform) polyadenylated RNA from wildtype, roX1 mutant, roX2 mutant and roX1 roX2 mutant 1st instar male larvae. This developmental stage was chosen to minimize indirect effects of dosage compensation failure in the roX1 roX2 mutant, as roX1^ex6 roX2^9-4 1st instar larvae are healthier than those of later stages. The four genotypes compared are not isogenic; however, the outcrosses described in Material and methods ensure that the entire autosomal complement is heterozygous in all genotypes and half of it will have identical origin. Still, we cannot fully exclude that remaining differences in genetic background could be a contributing factor to the observed changes in expression for some genes. In wildtype larvae, roX1 RNA was approximately ten times more abundant than roX2 RNA (Fig 2C). Notably, we observed increases in abundance of both roX RNAs in response to absence of the other in the single mutants, but not establishment of wildtype total roX levels. More specifically, we recorded 89% reductions in roX RNA levels in the roX1 mutant, while removal of roX2 RNA (which normally constitutes only 7% of the total roX RNA complement) resulted in a 45% increase in roX1 RNA abundance on average. Therefore, the single mutants differ considerably in levels of roX RNA. Moreover, although viability and fitness are not affected in either of the single mutants, the efficiency of dosage compensation is significantly compromised in the roX1 mutant. The average log2 expression ratio of the X-chromosome in this mutant was -0.13, corresponding to an 8.6% reduction in average expression of X-chromosome genes relative to genes on the four major autosomes. In the roX2 mutant, the average expression ratio for X-chromosome genes was lower than that of autosomal genes, but the density distributions for X and autosomal expression ratios were very similar (Fig 3A and 3B and S3 Fig). A Mann-Whitney U-test confirmed that the two populations cannot be differentiated in terms of these expression parameters, so global X-chromosome transcription is not significantly affected in the roX2 mutant. In conclusion, the roX2 mutant shows no lack of compensation and has roX levels comparable to or even higher than wildtype. Thus, it is not clear whether the total amount of roX or the type of roX is responsible for the observed reduction in average expression of X-chromosome genes in the roX1 mutant. The results also imply that the observed increase in levels of roX1 RNA in the roX2 mutant (Fig 2C) does not lead to hyper-activation of the X-chromosome but is enough to maintain proper X-chromosome expression.
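For reference, the percentage figures quoted here follow directly from the log2 ratios; a minimal sketch of the conversion (values taken from the text) is:

```python
# Convert a mean log2 expression ratio (X vs. major autosomes) into a
# percent change; the input values are the ones quoted in the text.

def log2_ratio_to_percent(log2_ratio):
    return (2.0 ** log2_ratio - 1.0) * 100.0

print(round(log2_ratio_to_percent(-0.13), 1))  # -8.6 (roX1 mutant)
print(round(log2_ratio_to_percent(-0.56), 1))  # ~ -32 (roX1 roX2 mutant)
```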
We and others have previously shown that in the absence of roX RNAs, the MSL complex becomes less abundant on the X-chromosome and relocates to heterochromatic regions including the 4th chromosome [14,30,37,40]. In fact, the fourth chromosome is related to the X-chromosome, and evolutionary studies have shown that the 4th chromosome was ancestrally an X-chromosome that reverted to an autosome [41,42]. Importantly, upon analysis of the 4th chromosome we detected weak but significant downregulation of genes on the fourth chromosome as a specific consequence of roX2 deletion (Fig 3A), but not the previously reported downregulation of the fourth chromosome in roX1 roX2 mutant flies [43].
As expected, strong downregulation of X-linked genes occurred in the roX1 roX2 mutant (Fig 3C). However, it was more severe (a 33% reduction relative to wildtype levels) than previously reported in microarray studies [40], and following RNAi depletion of MSL proteins [9,43-45]. The distribution plot shows that the vast majority of genes were downregulated in the roX1 roX2 mutant and the entire distribution of X-chromosomal gene expression was shifted approximately -0.56 on the log2 scale relative to the expression of genes on the four major autosomal arms.
Dosage compensation of genes in roX mutants depends on their location
The expression ratios of X-linked genes varied widely, especially in the roX1 roX2 mutant (Fig 3C). It has been proposed that MSL complexes are assembled at the sites of roX RNA transcription, then spread to the neighboring chromatin in cis, as well as diffusely, gradually binding to more distant loci. In addition, our in situ hybridization results indicate enrichment of roX2 RNA at cytological region 10C. We therefore tested if dosage compensation has a distinct spatial pattern along the X-chromosome. We observed some clustering of genes related to sensitivity to roX1 or roX2 RNAs, but it appeared to be randomly distributed spatially, except for a gradual decrease in expression of genes in the proximal X-chromosome region in the roX1 mutant, and in the 10C region in the roX2 mutant (Fig 4A).
A number of studies have estimated that the MSL complex binds specifically to roughly 250 chromatin entry sites, high-affinity sites (HAS) or Pion-X sites. Since roX RNAs are important for the spreading of the MSL complex from these high-affinity sites, we asked whether the extent of genes' differential expression in roX mutants correlates with their distances from these sites. Dot plots of genes' expression ratios against their distances from HAS or Pion-X sites showed weak trends, but were difficult to interpret due to high variation (S4 Fig). Thus, for more informative visualization we grouped the genes into bins with increasing distance from HAS (Fig 4B). In the roX1 mutant, the average expression ratio was not significantly affected by the distance from HAS. This was also true for genes located within approximately 30 kb of HAS in the roX2 and roX1 roX2 mutants. However, more remote genes had higher average expression ratios in the roX2 and roX1 roX2 mutants, and thus are less suppressed in the double mutant and even upregulated in the roX2 mutant. On polytene chromosomes in the roX1 roX2 mutant we still observed MSL targeting on the X-chromosome, but only at HAS [14]. This might suggest that genes close to HAS would retain dosage compensation also in the absence of roX RNAs. On the contrary, our results show that genes within approximately 30 kb of HAS are strongly and equally affected, while genes more distal to HAS are less sensitive to the absence of roX and of bound MSL complex.
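A minimal sketch of the equal-count binning behind Fig 4B might look as follows; the column names and toy values are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical sketch: rank genes by distance to the nearest HAS, split
# into bins with equal numbers of genes, and average the expression ratio
# per bin. Values are invented for illustration.

genes = pd.DataFrame({
    "dist_to_HAS": [1_200, 8_000, 22_000, 45_000, 90_000, 310_000],
    "log2_ratio": [-0.61, -0.58, -0.57, -0.55, -0.40, -0.21],
})
genes["bin"] = pd.qcut(genes["dist_to_HAS"], q=3, labels=False)
print(genes.groupby("bin")["log2_ratio"].mean())
```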
The roX sensitivity of genes depends on the MSL complex binding strength
We next asked if the roX-dependent dosage compensation depends on the binding strength of the MSL complex, using publicly available chromatin immunoprecipitation data on MSL1, MOF and MSL3 [46] to correlate with our differential expression data (Fig 5A-5C and S5 Fig).
All X-chromosome genes were ranked in order of increasing MSL complex enrichment and divided into five bins with equal numbers of genes. Thus, bin 1 included unbound and weakly bound genes, while bin 5 included genes highly enriched in MSL proteins. We found that in the roX1 roX2 mutant, the weakly MSL complex-binding genes are still suppressed, but much less than strongly binding genes. Since the MSL complex is still enriched at HAS in the absence of roX, it is surprising that dosage compensation by roX RNA-free MSL complexes has low efficiency even for genes with the highest MSL enrichment. The genes highly enriched in MSL1 and MSL3 (bin 5) were slightly less down-regulated, but this trend was not seen with MOF enrichment bins (S5F Fig).
Since genes with low MSL complex-binding levels are less suppressed than others in the roX1 roX2 mutant, and upregulated in the roX2 mutant, we asked whether dosage compensation in the absence of roX depends on genes' expression level. For this, we divided the X-chromosome genes into 12 equally sized bins according to their expression levels. In accordance with observations regarding genes that weakly bind the MSL complex, we observed upregulation of weakly expressed genes in the roX2 mutant and a less pronounced reduction in their expression in the roX1 roX2 mutant (Fig 5D-5F).
High-affinity sites are defined as those that retain incomplete MSL complexes in msl3, mle or mof mutants [45,47-51], and it has been suggested that MSL complex-binding is directed by hierarchical affinities of target sites [49,50]. In the roX1 roX2 mutant we observed more pronounced reductions in MSL complex abundance on the male X-chromosome than those reported in msl3, mle or mof mutants, but the remaining MSL targets in the roX1 roX2 mutant were highly reminiscent of those described in msl3, mle and mof mutants [14,30]. We observed reduced expression of strongly MSL-binding genes in the roX1 roX2 mutant, which is intriguing as these genes are assumed to retain the MSL complex [14]. Thus, to test the suggestion, we explored correlations between the MSL binding bins and 263 high affinity sites defined by targeting in mle, mof or msl3 mutants, or following their depletion [45,51,52]. In parallel we analyzed the 208 peaks we previously identified in the absence of roX1 roX2 [14]. The previously defined 208 peaks in the roX1 roX2 mutant overlap 405 genes on the X-chromosome, 309 of which are among the 328 genes in bin 5 (Fig 5G). We conclude that the 208 MSL peaks defined in the roX1 roX2 mutant correspond more strongly with genes in the highest MSL binding class than the previously defined HAS do (Fig 5G). Intriguingly, expression of X chromosomal genes also correlates with MSL1 binding enrichment (Fig 5H), and thus with overlap with HAS. This suggests that the distribution of MRE motifs, and consequently MSL complex-binding, is governed by gene expression in a manner that promotes adequate dosage compensation in males.
roX sensitivity and replication timing
In higher eukaryotes replication timing is connected to the chromatin landscape and transcriptional control [53]. Generally, early replicating regions are associated with active transcription [54-56], whereas late replicating regions are associated with inactive regions and heterochromatin [57]. Genome-wide studies on cultured Drosophila cells have revealed dependency of male-specific early replication of the X-chromosome on the MSL complex [56,58]. We therefore asked whether X-chromosomal or genome-wide sensitivity to a specific roX mutant condition correlates with replication timing. Using available data on replication timing from analyses of S2 and DmBG3 (male) and Kc167 (female) cells [58] we classified the genes as early or late replicating. Based on our RNA-seq data we then calculated expression ratios for genes grouped by their chromosome location (autosomal or X-chromosomal) and their replication timing as determined in the three cell types. Conceivably, early and late X chromosomal replication domains (determined from analyses of S2 and DmBG3 male cell cultures) are respectively associated with genes bound and unbound by the MSL complex, and thus are affected in similar manners by roX mutations (Fig 6 and S6 Fig).

In female Kc167 cells the relation between sensitivity to roX and replication timing is generally similar to that observed in male cell cultures. However, in Kc167 cells the X-chromosome has a slightly different pattern of replication domains, which shifts the average expression ratio (Fig 6 and S6 Fig).
In particular, the distribution of distinctively upregulated X chromosomal genes in the roX2 mutant only corresponds with the distribution of late-replication regions in male cells. Notably, in larval neuroblasts and embryonic cells (Fig 1C and 1D), we only detected roX1 RNA (no roX2 RNA) on mitotic X-chromosomes, suggesting that roX1-containing MSL complexes mediate dosage compensation in the G1 phase, when replication timing is established [59]. It is tempting to speculate that selective transmission of roX1-containing MSL complexes through mitosis enables the cells to quickly and efficiently establish the correct chromatin state and hence maintain correct replication timing.
Testis-biased genes are derepressed in roX2 mutant
Transcription upregulation of the X-chromosome in the roX2 mutant is associated with genes classified as having low expression levels, late replication and weak MSL complex-binding. We asked if this observed upregulation is caused by mis-targeting of MSL complexes associated with excess of roX1, i.e., if the upregulated genes are enriched in MSL complexes due to increases in roX1 levels and/or loss of roX2. To test this possibility, we assessed relative enrichments of MSL1 and H4K16ac on the upregulated genes by ChIP-qPCR analyses. In the roX2 mutant, none of the eight genes we tested became targeted by MSL1 or enriched in H4K16ac at a level comparable to known MSL target genes (S7 Fig). In contrast, enrichment levels were similar to those detected on the autosomal control genes RpS3 and RpL32. We therefore conclude that stimulation of weakly expressed X chromosomal genes in the roX2 mutant is not mediated by induced targeting of the MSL complex.
Further analysis of upregulated genes in the roX2 mutant showed that they included not only X chromosomal genes but also late-replicating autosomal genes. This, together with the absence of MSL complex-enrichment on these genes, indicates that the upregulation is a roX2-specific effect and at least partly separable from MSL complex-mediated gene regulation. Intriguingly, we discovered that these upregulated genes in the roX2 mutant strain include high proportions of genes (both X-chromosomal and autosomal) with male-biased testis-specific transcription.
Discussion
The dosage compensation machinery involving roX1 and roX2 RNAs provides a valuable model system for studying the evolution of lncRNA-genome interactions, chromosome-specific targeting and gene redundancy. LncRNAs differ from protein coding genes and are often less conserved at the level of primary sequence, as expected given their lack of protein-coding restrictions. As with other lncRNAs, the rapid evolution, i.e., low conservation, of the primary sequences of roX genes has complicated comparative studies [24,60]. Despite their differences in length and primary sequence, roX1 and roX2 have also been considered functionally redundant in Drosophila melanogaster. However, remarkably considering their rapid evolution and apparent redundancy, orthologs for both roX1 and roX2 have been found in all of the 26 species within the Drosophila genus with available whole genome assemblies [60]. Models that explain evolutionarily stable redundancy have been proposed [61], suggesting that the presence of both roX1 and roX2 in these diverged species may be attributable to differences in targets, affinities and/or efficiency, or to additional functions.
On polytene chromosomes, binding patterns of roX1 and roX2 are more or less indistinguishable, except in region 10C where roX2 is almost exclusively present. In the roX2 mutant, genes located in the 10C bin are on average downregulated, but similar downregulation of genes in many other bins is observed, so the effect cannot be directly attributed to loss of roX2. In wildtype 1st instar larvae, levels of roX1 RNA are much higher than levels of roX2 RNA. Interestingly, in roX1 mutant larvae the absolute amount of roX2 RNA increases, but only to ~10% of wildtype levels of total roX RNA. This appears sufficient to avoid lethality, but still causes a significant decrease in X-chromosome expression. However, despite the huge difference in amounts, not only in number but even more considering the size of the two roX RNAs, the staining intensities of roX RNA on roX1 mutant and wildtype polytene chromosomes seem to be roughly equal. On mitotic chromosomes we only observed roX1 RNA in the MSL complexes bound to the distal X-chromosome, and this binding is not redundant. This indicates that just after cell division roX1 RNA will be the dominating variant in assembled MSL complexes. Taken together, our results suggest that roX2 RNA has higher affinity than roX1 RNA for inclusion in MSL complexes. Moreover, the two species are present in varying amounts and with different affinities.

It should be noted that some male roX1 roX2 mutants escape, so loss of roX is not completely male-lethal, unlike loss of mle, msl1, msl2, msl3 or mof [29-31,62]. The complete male lethality in these mutants is attributed to reductions in dosage compensation that have been measured in several studies and observed not only in msl mutants but also following RNAi-mediated depletion of MSL proteins [9,43-45]. Notably, the average reduction of X-chromosome expression, relative to wildtype levels, calculated in these cases has varied from ca. 20 to 30%; substantially less than the 35% reduction we observed in the roX1 roX2 mutant. Some of the reported differences may be due to use of different techniques and bioinformatics procedures (including use of different cut-offs for expression and developmental stages). However, the reasons why some males can survive the very dramatic imbalance observed in expression of a large portion of the genome are unclear. Furthermore, the reduction in expression of X-chromosome genes observed in the roX1 mutant is not accompanied by any reported phenotypic changes, indicating that D. melanogaster has a high intrinsic ability to cope with significant imbalances in X-chromosome expression. We speculate that in parallel with a compensation mechanism that addresses dosage imbalances the fly has evolved a high degree of tolerance to mis-expression of the X-chromosome.
The 4th chromosome in D. melanogaster (the Muller F-element) is related to the X-chromosome. Evolutionary studies have shown that sex chromosomes do not always represent terminal stages in evolution; in fact, the 4th chromosome was ancestrally an X-chromosome that reverted to an autosome [41,42]. Moreover, the fly shows high and unusual tolerance to dosage differences [63] and mis-expression [8,64-66] of the 4th chromosome (although much smaller than the tolerance to those of the X-chromosome). These observations suggest that tolerance of mis-expression is a common outcome in the evolution of sex-chromosomes, and this property has been retained with respect to the 4th chromosome, even after its reversion to an autosome. We propose that high tolerance of mis-expression in the absence of full functional dosage compensation may be selected for during evolution of sex-chromosomes. This is because gradual degeneration of the proto-Y chromosome will be accompanied by an increasing requirement to equalize gene expression between a single X (in males) and two X-chromosomes (in females), but changes in genomic location of highly sensitive genes will be favored during periods of incomplete (or shifting) dosage compensation. On the transcript level, responses to reductions in dosages of X-chromosome genes have been found to be similar to those of autosomal genes [67]. Thus, potential mechanisms for the higher tolerance are posttranscriptional compensatory mechanisms or selective alterations in gene composition (changes in genomic locations), similar to those proposed for the observed demasculinization of the Drosophila X-chromosome [68].
Prompted by the strong relationship between orchestration of the X- and 4th chromosomes by the MSL complex and the POF system [2,14,69-71], respectively, we also measured effects of roX suppression on chromosome 4 expression in roX mutants. We observed a weak but significant reduction of expression in the roX2 mutant, but the cause of this reduction remains elusive. In the roX2 mutant we also observed transcriptional upregulation of X-chromosome genes classified as having low expression levels, late replication and weak MSL complex-binding. The loss of roX2, resulting in MSL complexes only including roX1 RNA, might alter the spreading properties. We therefore hypothesized that the observed upregulation might be caused by mis-targeting of the MSL complex in the absence of roX2. However, our ChIP experiment revealed no enrichment of MSL complexes on these genes, and our results rather suggest that roX2 directly or indirectly restricts expression of these male-biased genes independently of its role in the MSL complex.
It is well known that roX RNAs are important for spreading of the MSL complex in regions between HAS [11,14]. It is therefore surprising that loss of roX causes a relatively even reduction in expression of X-chromosomal genes and that the decrease is not more dramatic with larger distances, as would be expected for reductions in spreading capacity. Indeed, observed reductions in expression were smaller for genes located far from HAS than for closer genes. A possible explanation is that expression of these genes is compensated by an MSL-independent mechanism. It has been previously shown that most genes on the X-chromosome are dosage-compensated [9,72,73], but a subset are not bound by the MSL complex and do not respond to its depletion [74]. Our results corroborate these findings, since loss of roX RNA in the roX1 roX2 mutant had little effect on the expression of genes classified as having weak MSL complex binding, clearly indicating that at least one other mechanism is involved. The results further show that high-affinity sites, as defined by MSL targets in the absence of roX1 and roX2, are highly correlated with genes with the highest MSL binding levels. Therefore, sites targeted in the absence of roX provide a more stringent definition of HAS, with stronger correlation to genes bound by high levels of MSL complex, than targets in the absence of mle, mof or msl3.
The increase in expression mediated by the MSL complex is considered a feed-forward mode of regulation, and appears to be more or less equal (ca. 35%) for all MSL-bound genes [9]. Evidently, highly expressed genes need a stronger increase in transcription than weakly expressed genes. Our results suggest that dosage compensation is a stochastic process that depends on HAS distribution and is correlated with expression levels. Evolutionary analysis has shown that newly formed X-chromosomes acquire HAS, putatively via rewiring of the MSL complex by transposable elements and fine-tuning of its regulatory potential [75,76]. Such a dynamic process may be required for constant adaptation of the system. Highly expressed genes tend to accumulate HAS in their introns and 3'UTRs, and thus bind relatively high amounts of MSL complex, thereby stimulating the required increase in expression. This also implies that gene organization on X-chromosomes is under more constraints than on autosomes.
This study presents, to our knowledge, the first high-throughput sequencing data and analysis of transcriptomes of roX1, roX2 and roX1 roX2 mutant flies. The results reveal that roX1 and roX2 fulfill separable functions in dosage compensation in D. melanogaster. The two RNA species differ in both transcription level and cell-cycle regulation.
In third instar larvae, roX1 is the more abundant variant and the variant that is included in MSL complexes transmitted physically associated with the X-chromosome in mitosis. Loss of roX1, but not loss of roX2, results in decreased expression of genes on the X-chromosome, albeit without apparent phenotypic consequences. Loss of both roX species leads to a dramatic reduction of X-chromosome expression, but not complete male lethality. Taken together, these findings suggest that high tolerance for mis-expression of X-chromosome genes has evolved. We speculate that it evolved in parallel with dosage compensation mechanisms and that it may be a common property of current and ancient sex-chromosomes.
The roX RNAs are important for spreading of the MSL complex from HAS, but the reduction of X-chromosome expression in the roX1 roX2 mutant is not affected by the need for spreading, i.e., by distance from HAS. In addition, the genes targeted by the MSL complex in the roX1 roX2 mutant also show strongly reduced expression. Our results suggest that the function of the MSL complex still present at HAS is compromised in the roX1 roX2 mutant and that the dosage of distant genes is compensated by an alternative, unknown, mechanism. We propose that dosage compensation is a stochastic process that depends on HAS distribution. Creation and fine-tuning of binding sites is a dynamic process that is required for constant adaptation of the system. Highly expressed genes will accumulate and be selected for strong HAS (and thus bind more MSL complex) since they require high levels of bound MSL complex for the required increases in expression.
Fly strains and roX2 mutant generation
Flies were cultivated and crossed at 25°C in vials containing potato mash-yeast-agar. The roX1^ex6 strain [77] was obtained from Victoria Meller (Wayne State University, Detroit). The new roX2 mutant alleles were generated by CRISPR/Cas9 genome editing using a previously outlined strategy [78]. Briefly, we constructed a transgenic fly strain expressing two gRNAs in the germline, designed to induce double-strand breaks 7 bp upstream of the roX2 transcription start site and 63 bp upstream of the annotated transcription termination site. Males with the transgenic gRNA construct were crossed with y^2 cho^2 v^1; attP40{nos-Cas9}/CyO females. The male progeny of this and the two subsequent crosses were crossed individually to C(1)DX, y^1 w^1 f^1 females. Strains with deletions spanning roX2 were identified by PCR-based screening followed by sequencing, using primers and gRNA oligos listed in S1 Table. Males carrying a roX2^9-4 deletion, with the final genotype y^1 cho^2 v^1 roX2^9-4, were crossed with y^1 w^1118 roX1^ex6 females to obtain the recombinant roX double mutant X-chromosome y^1 w^1118 roX1^ex6 v^1 roX2^9-4. This means that the crossover occurred between the cho and v genes.
RNA in situ hybridization
Previously described procedures were used in RNA fluorescent in situ hybridization (FISH) analyses, and preparation of both salivary gland squashes [79] and larval brain squashes [80], following protocol 1.9, method 3, for the latter. Schneider's line 2 cells were treated prior to hybridization as also previously described [37]. For embryo staining, y^1 w^1118 embryos were collected on apple juice-agar plates for 1 hour and incubated for 5-6 hours at 25°C. Squashes were prepared as follows: each embryo to be stained was manually dechorionated and transferred onto a cover slip. The vitelline membrane was pricked with a fine needle and a drop of 2% formaldehyde, 0.1% Triton X-100 in 1× PBS was added immediately. After 2 minutes, the solution was removed with a pipette and a drop of 50% acetic acid, 1% formaldehyde solution was added. After another 2 minutes of incubation, a polylysine slide was placed over the cover slip. To spread the cells, the cover slip was gently pressed and then flash-frozen in liquid nitrogen. After removal of the coverslip the slide was immersed in 99% ethanol and stored at -20°C prior to hybridization. Antisense RNA probes for roX1 (GH10432) and roX2 (GH18991) were synthesized using SP6 RNA Polymerase (Roche) and DIG or Biotin RNA Labelling Mix (Roche), respectively. Primary antibodies were sheep anti-digoxigenin (0.4 mg/mL; Roche) and mouse anti-biotin (1:500, Jackson ImmunoResearch). The secondary antibodies were donkey anti-mouse labelled with Alexa Fluor 488 and donkey anti-sheep labelled with Alexa 555 (Thermo Fisher Scientific).
Preparation of RNA library, sequencing and data treatment
To obtain 1st instar male larvae we collected 80-100 virgin females of the following genotypes: y^1 w^1118 (used as wild type), y^1 w^1118 roX1^ex6 (roX1 mutant), y^1 cho^2 v^1 roX2^9-4 (roX2 mutant), and y^1 w^1118 roX1^ex6 v^1 roX2^9-4/FM7i, P[w^+mC ActGFP]JMR3 (roX1 roX2 mutant). The females were crossed with 50-80 FM7i, P[w^+mC ActGFP]JMR3/Y males. Non-GFP 1st instar larvae were collected, 20 per sample. The collected larvae were flash-frozen in liquid nitrogen and stored at -80°C. Total RNA was extracted with 1 mL of Tri Reagent (Ambion) per sample, and libraries were prepared with a TruSeq RNA Sample Prep Kit v2 (Illumina) according to the manufacturer's instructions. In total, three wildtype, roX2 mutant and roX1 roX2 mutant biological replicates were prepared, plus four roX1 mutant replicates. The samples were sequenced using a HiSeq2500 instrument at SciLifeLab (Uppsala), and the resulting 125 bp paired-end reads were mapped to the Drosophila melanogaster genome version 6.09 using STAR v2.5.1b with default settings. Read counts were obtained with HTSeq version 0.6.1 using htseq-count with default settings. The samples used for the analysis had 29.3-56.2 M reads, with STAR mapping quality values of 22.9-52.1 and mean mapping coverage of 201-497. After removing genes with low read counts, means of the total expression of the four major autosome arms were centered to zero. Genes were annotated using the dmelanogaster_gene_ensembl dataset from BioMart, Dm release 6.17.
Differential expression analysis
Fold-differences in expression of genes among the investigated genotypes were calculated using the DESeq2 software package. Genes for which fewer than 20 reads were obtained as a sum over all samples were excluded from the analysis. Of the 1000 most variable genes, the 856 genes for which the adjusted p-value for at least 2-fold differential expression between the wildtype and each of the three roX mutants exceeded 0.01 were also excluded from the analysis. In addition, the white gene and its upstream neighbors (CG3588, CG14416, CG14417, CG14418 and CG14419) were excluded from the analysis due to background dissimilarities among strains in this genomic region. In total, 2356, 2659, 2571, 3164, 105, 10750 and 2042 genes on chromosomes 2L, 2R, 3L, 3R, 4, all autosomes except chromosome 4, and X, respectively, were included. For each of these genes, the average differential expression between replicates was log2-transformed and mean-centred by subtracting the mean log2 fold change in expression of genes on the major autosomes (2L, 2R, 3L, 3R) from the value for each individual gene (S2 Table). Thus, the observed differences are relative and based on the assumption that overall expression of the four major autosomal arms is constant under all relevant conditions.
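The mean-centring step can be sketched as follows; the DataFrame layout is an assumption for illustration, not the authors' actual code.

```python
import pandas as pd

# Sketch of the mean-centring described above: subtract the mean log2
# fold change of genes on the major autosomes (2L, 2R, 3L, 3R) from
# every gene's value. Numbers are invented.

df = pd.DataFrame({
    "chrom": ["2L", "2R", "3L", "3R", "X", "X"],
    "log2fc": [0.05, -0.02, 0.01, -0.04, -0.60, -0.52],
})
major = df["chrom"].isin(["2L", "2R", "3L", "3R"])
df["log2fc_centred"] = df["log2fc"] - df.loc[major, "log2fc"].mean()
print(df)
```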
Distance to High Affinity Sites (HAS) and Pioneer on the X sites (PionX)
The coordinates of PionX sites used in the analysis have been previously published [81], and the HAS coordinates on the X-chromosome were extracted from available data [45,51], compiled and kindly provided by Philip and Stenberg [74]. The HAS coordinates were converted from release 5 to release 6 of the Drosophila genome using the flybase.org online conversion tool. The distances to the closest PionX and HAS sites were calculated for each gene on the X-chromosome, then genes were ranked in order of increasing distance to these sites and split into 10 bins with equal numbers of genes (S2 Table).
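A minimal sketch of the distance calculation, with invented coordinates, could look like this:

```python
import bisect

# For each gene (here, its midpoint), find the distance to the nearest
# HAS from a sorted list of site positions. Coordinates are invented.

has_positions = sorted([1_050_000, 2_400_000, 3_900_000])

def distance_to_nearest(pos, sites):
    i = bisect.bisect_left(sites, pos)
    candidates = sites[max(0, i - 1):i + 1]
    return min(abs(pos - s) for s in candidates)

for gene_mid in (1_000_000, 3_000_000):
    print(gene_mid, distance_to_nearest(gene_mid, has_positions))
```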
MSL1, MSL3 or MOF binding bins
Binding values of MSL1, MSL3 and MOF in S2 cells were calculated and kindly provided by Philip and Stenberg [74] using the E-MEXP-1508 chromatin immunoprecipitation dataset [46] (S2 Table). Only X chromosomal genes with binding values for all three proteins were included in the analysis (1640 genes). Genes were ranked by increasing binding value and split into five equal bins. Genes located within MSL1 binding sites in the roX1 roX2 mutant were determined using previously obtained ChIP data [14]. The percentage overlap between genes and the previously defined top 1.5% of peaks was calculated using the annotate function of BEDTools. A gene was considered to be within an MSL1 binding peak if any of its transcripts had at least 1% overlap.
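The 1% overlap rule can be sketched as follows; intervals are invented for illustration, and BEDTools itself was used in the actual analysis.

```python
# A gene counts as "within a peak" if any of its transcripts shares at
# least 1% of its length with a peak. Intervals are (start, end) pairs.

def overlap_fraction(tx, peak):
    ov = max(0, min(tx[1], peak[1]) - max(tx[0], peak[0]))
    return ov / (tx[1] - tx[0])

transcripts = [(1000, 5000), (12000, 13000)]
peaks = [(4900, 6000)]
in_peak = any(overlap_fraction(t, p) >= 0.01
              for t in transcripts for p in peaks)
print(in_peak)  # True: first transcript overlaps 100/4000 = 2.5%
```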
Classification of genes into early or late replicating
Bed files with data on early and late replicating domains in S2, Kc167 and DmBG3 cell lines were kindly provided by David MacAlpine [58]. The coordinates were converted from release 5 to release 6 of the Drosophila genome.
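A minimal sketch of this early/late classification by interval lookup, with invented domain coordinates, might be:

```python
# Classify a gene as early or late replicating by testing whether its
# midpoint falls inside an early-replicating domain. Domains are invented.

early_domains = [(0, 500_000), (1_200_000, 2_000_000)]  # (start, end)

def replication_class(gene_mid, early):
    inside = any(s <= gene_mid < e for s, e in early)
    return "early" if inside else "late"

print(replication_class(300_000, early_domains))  # early
print(replication_class(900_000, early_domains))  # late
```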
Fig 1.
Fig 1. Targeting of roX1 and roX2 RNAs in indicated cell types. Results of RNA in situ hybridization with antisense probes against roX1 (yellow) and roX2 (green), with DAPI staining of DNA (blue). (A) roX1 and roX2 RNA target and colocalize on the X-chromosome in male 3rd instar larvae salivary glands. The genomic loci of roX1 (arrowhead) and roX2 (arrow) are indicated. (B) roX2 is the dominating roX species in S2 cells. The few cells with localized roX1 domains (arrowheads) also show roX2 targeting. (C, D) On metaphase chromosomes roX1 but not roX2 targets the distal part of the X-chromosome, in male 3rd instar larvae neuroblasts (C) and in male 6 hour embryos (D). Examples of interphase nuclei with colocalized roX1 and roX2 are indicated with arrowheads and the metaphase X-chromosome decorated by roX1 is indicated by asterisks. More than 12 brain preparations and >9 embryos were examined. roX1 was detected on the distal part of the X-chromosome in >70% of metaphase nuclei in both cases. https://doi.org/10.1371/journal.pgen.1007842.g001
Fig 3.
Fig 3. roX RNAs are required for proper transcription of genes on the 4th and X-chromosomes. (A) Boxplots representing distributions of expression ratios for individual chromosome arms and chromosomes 2 and 3 combined (light grey A). The grey dots represent means of the samples. (B) Density plots of expression ratios for the X chromosome (black) and chromosomes 2 and 3 combined (grey). The vertical bar indicates 0. https://doi.org/10.1371/journal.pgen.1007842.g003
Fig 4.
Fig 4. Genes far from high-affinity sites require both roX RNAs for proper expression. (A) Expression ratio distribution along the X-chromosome, divided into 50 bins of equal length. Genes were assigned to bins according to coordinates of their centers. The grey bars indicate bins containing the roX1 and roX2 genes. (B) Expression ratios of all X chromosomal genes plotted against distance to high-affinity sites, split into bins with equal numbers of genes. The error bars represent 95% confidence intervals. https://doi.org/10.1371/journal.pgen.1007842.g004
Fig 5.
Fig 5. Highly-expressed genes require high MSL complex levels for proper expression. (A-C) Average expression ratios of X chromosomal genes grouped in equal-sized bins based on their MSL1 binding strength (1 lowest to 5 highest); (A) roX1 - WT, (B) roX2 - WT, (C) roX1 roX2 - WT. (D-F) Average expression ratios plotted against binned expression values. The error bars indicate 95% confidence intervals; (D) roX1 - WT, (E) roX2 - WT, (F) roX1 roX2 - WT. (G) Numbers of genes located within 2, 5 and 10 kb of HAS (three grey bars) and genes overlapping with MSL1 peaks in the roX1 roX2 mutant (black bar), plotted for wild type MSL1-binding bins. The dashed line at the top represents the total number of genes in each MSL binding bin (n = 328). (H) Boxplots showing the distribution of expression in the five MSL1 binding bins. https://doi.org/10.1371/journal.pgen.1007842.g005
Fig 6.
Fig 6. Differential effects of roX mutations on early and late replicating genes. Average expression ratios of X chromosomal (X) and autosomal (A) genes in: (A) roX1 - WT, (B) roX2 - WT, (C) roX1 roX2 - WT. The genes are grouped by their replication time in S2 cultured cells, and the expression ratios are calculated from the RNA-seq analysis of first instar larvae. The error bars represent 95% confidence intervals. https://doi.org/10.1371/journal.pgen.1007842.g006
|
2018-12-13T14:06:42.479Z
|
2018-12-01T00:00:00.000
|
{
"year": 2018,
"sha1": "702e462975a98a167fb34112dcd7ff89917a27ae",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1007842&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "702e462975a98a167fb34112dcd7ff89917a27ae",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
18428172
|
pes2o/s2orc
|
v3-fos-license
|
The Effects of Temperature and Body Mass on Jump Performance of the Locust Locusta migratoria
Locusts jump by rapidly releasing energy from cuticular springs built into the hind femur that deform when the femur muscle contracts. This study is the first to examine the effect of temperature on jump energy at each life stage of any orthopteran. Ballistics and high-speed cinematography were used to quantify the energy, distance, and take-off angle of the jump at 15, 25, and 35°C in the locust Locusta migratoria. Allometric analysis across the five juvenile stages at 35°C reveals that jump distance (D; m) scales with body mass (M; g) according to the power equation D = 0.35M^(0.17±0.08) (95% CI), jump take-off angle (A; degrees) scales as A = 52.5M^(0.00±0.06), and jump energy (E; mJ per jump) scales as E = 1.91M^(1.14±0.09). Temperature has no significant effect on the exponent of these relationships, and only a modest effect on the elevation, with an overall Q10 of 1.08 for jump distance and 1.09 for jump energy. On average, adults jump 87% farther and with 74% more energy than predicted based on juvenile scaling data. The positive allometric scaling of jump distance and jump energy across the juvenile life stages is likely facilitated by the concomitant relative increase in the total length (L_f+t; mm) of the femur and tibia of the hind leg, L_f+t = 34.9M^(0.37±0.02). The weak temperature-dependence of jump performance can be traced to the maximum tension of the hind femur muscle and the energy storage capacity of the femur's cuticular springs. The disproportionately greater jump energy and jump distance of adults is associated with relatively longer (12%) legs and a relatively larger (11%) femur muscle cross-sectional area, which could allow more strain loading into the femur's cuticular springs. Augmented jump performance in volant adult locusts achieves the take-off velocity required to initiate flight.
Introduction
Jumping is a form of locomotion used by various groups of vertebrate and invertebrate animals [1]. Jumping has become highly specialised in the locust, and represents the primary means of locomotion for juveniles, while in the adult it provides the thrust necessary to initiate flight [2]. Both juvenile and adult locusts circumvent contractile limitations of muscle by using an energy storage system built into the hind limb, which allows jumps that are of both high force and high velocity [3]. The process of the locust jump begins with the contraction of the small flexor muscle contained within the hind femur, which pulls the tibia almost parallel to the femur. The flexor and large extensor muscles then co-contract, and with the tibia remaining fully flexed owing to a catch mechanism, stress energy is transferred to the femur's internal apodeme structures and the semi-lunar processes, where it is stored in the form of strain energy. The flexor then suddenly relaxes, which frees the tibia and causes the rapid release of energy from the deformed cuticular elements to produce a rapid extension of the tibia [4,5]. By storing and releasing energy in this manner, the effective power output of the jump is amplified by about 10 times above that generated by the muscle [3].
The energy of the locust jump can be described using ballistics, according to Eq. (1):
E = (M · g · D) / (2 · sin 2θ)
where E is the energy (mJ = g m^2 s^-2), M is body mass (g), g is acceleration due to gravity (9.81 m s^-2), D is jump distance (m), and θ is the jump take-off angle to the horizontal. It is evident from Eq. (1) that jump energy will be significantly affected by the ca. 50-fold increase in body mass that occurs throughout locust development, as well as any variation in jump distance or jump take-off angle that could occur through successive life stages. These variations can be investigated using an allometric analysis, which takes the general form of a power equation, y = aM^b, where y is the variable being investigated, a is the coefficient (elevation), and b is the scaling exponent (the slope of the log-transformed equation). When the principle of allometric cancellation is applied to this analysis [6], the null hypothesis is that jump energy scales throughout the locust life cycle in direct proportion to body mass (∝M^1), assuming that g, D and θ are constants (∝M^0). A.V. Hill also hypothesised that maximum jump distance should not vary among geometrically similar animals of different body mass [7]. A caveat here is the assumption that locusts maintain geometric similarity throughout the life cycle, since any increase in the hind leg's relative length or femur muscle cross-sectional area could increase jump distance [1,8]. If leg length or muscle cross-sectional area increases faster than geometric similarity (∝M^0.33 and ∝M^0.67, respectively) during locust development, then jump energy and jump distance should show "positive allometry", with exponents greater than 1 and 0, respectively. Positive allometry in fact appears in locusts of the genus Schistocerca [2,9,10]. Body temperature could also affect jump performance in the poikilothermic locust. The effect of temperature on performance is often described by an inverted U-shaped curve, whereby performance is optimal at some intermediate temperature range, but declines outside this range [11]. For example, jump distance in the adult house cricket Acheta domestica follows an inverted U-shaped relationship with temperatures between 0-45°C, with peak performance occurring around 26-31°C [12]. However, not all jump performance studies involving Orthoptera show the same degree of thermal sensitivity. For example, the adult two-striped grasshopper Melanoplus bivittatus has a similar jump distance at 20 and 35°C [13], and jump distance in the adult pygmy grasshopper Tetrix subulata increases by only 15% as temperature increases from 15-25°C [14]. The thermal sensitivity of jump energy can be evaluated using the Q10 calculation, which is the ratio of change with every 10°C increase in temperature. If jump energy is dependent on the rate at which tension is developed in the muscle, which is generally temperature-sensitive, then a Q10 of 2-2.5 would be expected [15]. However, if jump energy is dependent on the absolute maximum tension developed by the muscle, which is often relatively temperature-insensitive, then a Q10 of 1.0-1.2 would arise [1,15]. A low Q10 value for jump energy might also be expected because the locust jump is partly a mechanical process involving the rapid release of strain energy from the femur's cuticular springs [1,16,17]. While the studies referred to above appear divided over the relative thermal sensitivity of locust jump performance [12-14], it is also evident that size effects on temperature sensitivity are unknown.
This can be determined from the power equation, by testing for differences in exponent (b) and then elevation (a).
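As a worked example of Eq. (1), the sketch below computes jump energy in mJ from mass in grams, distance in metres and take-off angle in degrees; the input numbers are illustrative, chosen near the adult means reported later.

```python
import math

# Jump energy from ballistics, Eq. (1): E = M*g*D / (2*sin(2*theta)).
# With M in grams, D in metres and g in m s^-2, E comes out in mJ
# (1 g m^2 s^-2 = 1 mJ).

def jump_energy_mJ(mass_g, distance_m, angle_deg, g=9.81):
    return mass_g * g * distance_m / (2.0 * math.sin(math.radians(2 * angle_deg)))

print(round(jump_energy_mJ(1.0, 0.35, 45.0), 2))  # ~1.72 mJ
```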
The aim of this study is to use allometry to assess the effects of body mass and temperature on jump distance, jump take-off angle, and jump energy at all six life stages of the locust Locusta migratoria to test the hypotheses presented above. Although scaling of jump performance across ontogeny in Schistocerca, and temperature dependence of jump distance in adult A. domestica, M. bivittatus, and T. subulata have been determined independently, this is the first study to combine scaling and temperature dependence in all life stages of an orthopteran.
Materials and Methods
Gregarious-phase locusts Locusta migratoria (Linnaeus 1758) came from a breeding colony (35±2°C) at the University of Adelaide [18]. The founding stock was sourced from wild populations in inland eastern Australia, where annual daytime temperatures typically range 10-40°C (http://www.bom.gov.au/climate/dataservices/). Jump performance measurements and metathoracic (hind) leg lengths were taken from locusts at all six stages of the life cycle, including adults. Recently moulted locusts (<1-day-old) were not used because their soft cuticle, and potentially relatively small muscle mass, can compromise jump performance [19]. For volant adult locusts, it was necessary to bind the wings with adhesive tape (1-2% body mass) to prevent flight.
Each locust was placed in a controlled temperature room where it was given a minimum of 30 min to equilibrate to one of three experimental temperatures: 15, 25 or 35°C (verified with a mercury thermometer). Each individual was then encouraged to perform 3-5 "escape-type" jumps on a cotton sheet by startling the insect from behind, and when necessary, lightly prodding the tip of the abdomen. The horizontal, straight-line distance between the starting and landing points of each jump (taken midway along the body) was immediately measured to the nearest 0.5 cm using either a 30 or 100 cm ruler, as appropriate. In addition, the initial jump take-off was filmed at 240 frames s^-1 using a high-speed digital video camera (Xacti VPC-FH1, Sanyo Electric Co., Osaka, Japan), which was positioned lateral to the locust, and perpendicular to the predicted direction of the jump. Jumps that deviated more than ±15 degrees from perpendicular were excluded, in order to limit the difference in actual versus perceived take-off angle [20]. Take-off angle was measured to the nearest 0.1 degree by overlaying individual video frames from the start of the jump and using an angle tool in a computer graphics program (CorelDRAW 11, Corel Corp., Ottawa, ON, Canada). Locusts were weighed to 0.1 mg on an analytical balance (AE163, Mettler, Greifensee, Switzerland), and then body mass, jump distance and jump take-off angle were used to calculate the energy output for each jump according to Eq. (1), consistent with the methods of earlier studies [9,10,16]. The average energy of the 3-5 jumps performed by each locust was then calculated and used in all analyses.
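The frame-overlay angle measurement amounts to a two-point slope calculation; a minimal sketch (with invented pixel coordinates, y increasing upward) is:

```python
import math

# Take-off angle from the body midpoint in two early frames of the jump.
# Coordinates are invented pixel positions, not measured data.

def takeoff_angle_deg(p0, p1):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return math.degrees(math.atan2(dy, dx))

print(round(takeoff_angle_deg((100, 40), (130, 74)), 1))  # ~48.6 degrees
```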
At each experimental temperature, six locusts were used from each of the five juvenile life stages (N = 30 juveniles), plus six adult male and six adult female locusts (N = 12 adults). No locusts were re-used at different life stages or at different temperatures. Thus, in total, 126 locusts were used for jump performance measurements, consisting of 90 juveniles and 36 adults. The length of the metathoracic femur and tibia (hind leg) was measured to 0.1 mm with digital callipers in every locust used for jump performance measurements (N = 126), and in an additional 24 juvenile and 8 adult locusts (N = 32). Thus, in total, leg length measurements were taken from 158 locusts, consisting of 114 juveniles and 44 adults.
Mean values are presented with 95% confidence intervals (CI). All other data are expressed allometrically, by taking the log10 of the variable and the log10 of body mass, and then plotting ordinary least-squares linear regressions. The slopes and intercepts of the regressions were compared with ANCOVA, with body mass as the covariate, according to Zar [21], using GraphPad Prism 5 statistical software (GraphPad Software, La Jolla, CA, USA). When converted to the allometric power equation, the exponent (b) describes the change in the variable as body mass increases throughout ontogeny. The exponents are presented with 95% CI, and if they are statistically indistinguishable, the elevation (a) of the equation is used to describe the effect of temperature on the variable, from which Q10 was calculated as the quotient of the elevation values at different temperatures.
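A minimal sketch of this allometric workflow (Python/NumPy, synthetic data): fit log10(y) on log10(M) to obtain the elevation a and exponent b, then take Q10 from the quotient of elevations, rescaled to a 10°C interval when the two treatments are more than 10°C apart (one consistent reading of the method described above):

```python
import numpy as np

def allometric_fit(mass_g, y):
    """OLS fit of log10(y) = log10(a) + b*log10(M); returns (a, b)."""
    b, log_a = np.polyfit(np.log10(mass_g), np.log10(y), 1)
    return 10.0 ** log_a, b

def q10_from_elevations(a_cold, a_warm, t_cold, t_warm):
    """Q10 as the elevation quotient, rescaled to a 10 deg C interval."""
    return (a_warm / a_cold) ** (10.0 / (t_warm - t_cold))

# Synthetic example: five 'instars' whose jump energy scales as ~M^1.15.
mass = np.array([0.01, 0.05, 0.2, 0.5, 1.0])   # g
energy = 2.0 * mass ** 1.15                    # mJ
a, b = allometric_fit(mass, energy)
print(round(a, 2), round(b, 2))                          # -> 2.0, 1.15
print(round(q10_from_elevations(2.0, 2.38, 15, 35), 2))  # -> ~1.09
```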
Results
Mean values of body mass, jump distance, jump take-off angle, and jump energy at each experimental temperature are provided in Table 1. The scaling of jump energy across the five juvenile instars fits the allometric equations well, but the data from adults cluster above the line (Fig. 1). The exponents of the juvenile jump energy equations do not differ significantly between the three temperatures (ANCOVA, 15 and 25°C, F(1,56) = 0.07, P = 0.79; 15 and 35°C, F(1,56) = 0.37, P = 0.55; 25 and 35°C, F(1,56) = 0.08, P = 0.78; N = 30 at each temperature), and so it is possible to calculate a single exponent for all juvenile jump energy data (M^(1.15±0.05)). The positive allometric scaling of jump energy in juvenile locusts reflects the finding that juvenile jump distance increases as body mass increases, with a combined exponent across all three temperatures of M^(0.16±0.05), while jump take-off angle in juveniles does not vary with body mass, scaling with a combined exponent of M^(0.01±0.02) (Table 2).
An analysis of the elevations of the juvenile jump energy equations (excluding adults) shows that the intercepts for the 15 and 25°C temperature treatments are statistically indistinguishable (ANCOVA, F(1,57) = 0.83, P = 0.37), whereas the 35°C treatment has a higher elevation than both the 15 and 25°C treatments (35 and 15°C, F(1,57) = 12.98, P < 0.001; 35 and 25°C, F(1,57) = 5.96, P = 0.02). Nonetheless, the difference in elevation between temperatures is relatively modest, such that Q10 = 1.02 between 15 and 25°C, Q10 = 1.15 between 25 and 35°C, and Q10 = 1.09 between 15 and 35°C. The modest effect of temperature on jump energy in juvenile locusts reflects the finding that juvenile jump distance is also weakly dependent on temperature, with an overall Q10 of 1.08 (Table 2).
Consistent with juvenile locusts, the jump energy of adult locusts at 15°C is statistically indistinguishable from that observed at 25°C (ANOVA with Tukey's post hoc test, P > 0.05; N = 12 adults at each temperature), but at 35°C adult jump energy is significantly higher than at 15 and 25°C (P < 0.05). Nonetheless, the difference in adult jump energy between temperatures is once again relatively modest, with an overall Q10 of 1.12, which reflects the small effect of temperature on jump distance, where the overall Q10 is 1.08. However, at all temperatures adult jump energy is significantly greater relative to body mass than in juveniles, and for this reason adults are treated separately. At 15°C, adult jump energy is 76% greater than predicted from juveniles of the same body mass, and at 25 and 35°C adult jump energy is 55 and 90% greater, respectively (Fig. 1). This reflects the finding that adults jump farther relative to body mass than juvenile locusts: at 15°C, adult jump distance is 85% farther than predicted, and at 25 and 35°C it is 75 and 102% farther, respectively. The relative differences in jump distance and jump energy are not exactly equivalent to one another at each respective temperature because the overall mean jump take-off angle of adults is 44.4 ± 2.0 degrees, whereas in juveniles it is significantly higher, 50.6 ± 2.0 degrees (t-test, t(124) = 3.5, P < 0.001), and this is factored into the calculation of jump energy, Eq. (1).
Metathoracic femur length (L_f; mm) increases with body mass across the juvenile life stages (excluding adults) according to the power equation L_f = 18.1M^(0.37±0.02) (r² = 0.91; N = 114 juveniles), while tibia length (L_t) increases according to L_t = 16.7M^(0.37±0.02) (r² = 0.90), such that the combined length of the femur and tibia (L_f+t) scales as L_f+t = 34.9M^(0.37±0.02) (r² = 0.91). Based on these equations for juveniles, the length of the adult femur, tibia, and combined femur and tibia are all 12% longer than predicted (N = 44 adults).
Discussion
An important finding of this study is that jump energy (∝ M^1.15) in juvenile locusts increases disproportionately with body mass (Fig. 1). If energy were proportional to mass, then all instars would be expected to jump the same distance. However, jump distance scales positively with body mass (∝ M^0.16), with virtually the same margin above an exponent of 0.0 as jump energy shows above 1.0 (Table 2). Thus a 10 mg first instar jumps 16 cm and a 1 g fifth instar jumps 35 cm at 35°C. This finding is consistent with studies on Schistocerca locusts, where jump energy in juveniles scales with an exponent of M^1.11 [2], and maximum jump distance across juvenile and adult life stages scales as M^(0.20-0.22) [9,10]. The positive allometric scaling of jump distance and energy observed in the present study could arise partly because smaller instars have a higher frontal area-to-body mass ratio than larger instars, thus making them more susceptible to the effects of aerodynamic drag [22]. However, to some extent this effect is likely offset by the higher body density of smaller instars compared to larger instars [22], owing to the disproportionate increase in tracheal system volume (∝ M^1.30) that occurs throughout locust development [23]. Certainly the fact that Katz and Gosline [2] arrived at a similar exponent for jump energy (M^1.11), even though they circumvented the effects of drag by using a force plate to calculate kinetic energy, suggests that other factors must contribute strongly to the positive allometry of jump distance and energy. Importantly, our study shows that the length of the hind leg increases disproportionately with body mass (∝ M^0.37) throughout juvenile ontogeny. This is relevant because jump distance is proportional to the distance through which the force acts, which is related directly to limb length [1,8,10]. Thus, the relatively longer legs of older juveniles likely provide the means to propel these animals farther and with greater jump energy. Conversely, femur muscle cross-sectional area appears to maintain near-geometric proportionality throughout juvenile development, scaling as M^0.68, which we derived from the allometric cancellation of juvenile femur muscle volume (∝ M^1.05) [24] and juvenile femur length (∝ M^0.37). The proportional scaling of cross-sectional area implies that the femur muscle's capacity to deform the cuticular elements is unlikely to vary across juvenile ontogeny. Thus, unlike hind limb length, femur muscle cross-sectional area is less likely to contribute to the positive allometry of juvenile jump distance and energy.
Another significant finding of this study is that jump energy and jump distance are only weakly dependent on temperature, such that jump energy has an overall Q10 of 1.09 and jump distance an overall Q10 of 1.08 (Fig. 1; Table 2). This is consistent with the finding that temperature has no significant effect on average or maximum jump distance in the adult two-striped grasshopper Melanoplus bivittatus [13], and only a modest effect (Q10 of 1.15) on jump distance in the adult pygmy grasshopper Tetrix subulata [14]. More broadly, these findings are consistent with reports in other insect groups, particularly ants and cockroaches, that the minimum cost of transport (mJ g⁻¹ m⁻¹) is unaffected by temperature [25][26][27]. The modest Q10 values for locust jump distance and energy likely arise if temperature has a limited effect on both the maximum tension developed by the femur muscle during contraction [15,28] and the energy storage capacity of the femur's cuticular springs [1,13,16]. However, one should be cautious before extrapolating the current findings to temperatures outside the 15-35°C range tested. At higher temperatures, the capacity of the femur muscle to produce tension might decline due to insufficient Ca²⁺ release or insufficient time to generate tension before deactivation processes are initiated, as has been hypothesised for muscle in general [29]. At lower temperatures, the force-generating capacity of individual cross-bridges may start to decline [28], or the time taken to reach maximum tension might become so protracted that the locust initiates the jump early. These factors might explain why the adult house cricket Acheta domestica has a relatively comparable jump distance between 20-35°C but exhibits very poor jump performance between 0-10°C and at 45°C [12].
The third important finding of this study is that the jump energy and jump distance of adults are significantly greater than expected of juveniles of the same body mass (Fig. 1). Averaged over all temperatures, adults jump 87% farther and with 74% more energy than predicted from juvenile scaling. The slight mismatch between the relative difference in jump distance and jump energy occurs because the take-off angle in adults (44.4 degrees) is closer than that in juveniles (50.6 degrees) to the 45 degree optimum angle, which is the trajectory that maximises distance for a given amount of energy according to Eq. (1), or 43 degrees if aerodynamic drag is considered [19]. The reason the overall mean take-off angle of juvenile locusts is somewhat higher than optimal is unclear, although the protocol used to startle the insect from behind could elicit a less efficient escape jump (ratio of energy to distance). In any case, the greater jump distance and energy recorded from adult locusts in this study is consistent with adult Schistocerca americana locusts, where maximum jump distance is approximately 80% farther than predicted from juvenile scaling [9]. The higher jump energy and longer jump distance of adult locusts could be facilitated by the slightly longer (12%) relative length of the adult hind leg, which, as already discussed for juveniles, would allow for increased jump energy by lengthening the distance over which the force acts. In addition, recently published data show that adult femur muscle volume is 24% larger than that of similarly sized juveniles [24,30], and if this is combined with the knowledge that the adult femur is 12% longer, the calculated mean cross-sectional area of the femur muscle is 11% larger. As discussed earlier, this is important because a relatively larger muscle cross-section would allow more strain to be loaded into the femur's cuticular springs, thus increasing jump energy. The longer lifespan of the adult life stage might also allow more time for the cuticular springs to stiffen and augment energy potential [31,32], and it should also allow the cuticular springs to operate at their functional capacity for longer than is possible in juvenile life stages, where a decrease in jumping ability occurs around each moult [19]. Functionally, the greater jump energy of adults compared to juveniles relates to the difference in the way juvenile and adult locusts utilise the jump: flightless juvenile life stages jump as the primary mode of locomotion, whereas adult locusts jump to achieve the minimum take-off velocity of 2.5 m s⁻¹ required to initiate flight [2,33]. Kinetic energy (E) is related to mass (M) and velocity (v) according to E = ½Mv². With a mean adult mass of 1.06 g and jump energy of 3.75 mJ at 35°C (Table 1), the initial velocity is 2.66 m s⁻¹, which is above the minimum take-off requirement. Fifth instars, however, fall short, with a velocity of 1.98 m s⁻¹.
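The velocity arithmetic in that last step is easy to reproduce (Python; nothing is assumed here beyond E = ½Mv² and the unit conversions):

```python
import math

def takeoff_velocity(mass_g, energy_mJ):
    """v = sqrt(2E/M), converting grams and millijoules to SI units."""
    return math.sqrt(2.0 * (energy_mJ * 1e-3) / (mass_g * 1e-3))

print(round(takeoff_velocity(1.06, 3.75), 2))  # adult at 35 deg C -> 2.66 m/s
```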
In summary, our investigation into the effects of body mass and temperature on ballistic jump energy at each stage of the locust life cycle - from a 20 mg first instar to a 1 g adult - reveals three important findings: firstly, that jump energy does not scale in direct proportion to body mass across the five juvenile life stages, but instead exhibits positive allometry, scaling with an overall exponent of M^1.15; secondly, that jump energy is only weakly dependent on temperature (Q10 = 1.09) over the 15-35°C range examined; and thirdly, that the energy of the adult jump is disproportionately greater (74% more energy), relative to body mass, than the juvenile jump, which provides the adult with the initial take-off thrust required for flight initiation.
The effectiveness of guided inquiry based colloid system modules integrated experiments on science process skills and student learning outcomes
Chemistry learning in senior high school is currently dominated by classroom activities. Laboratory activities have not been carried out optimally, so students' science process skills and understanding have not been well developed. The purpose of this study was to reveal the effectiveness of using a guided inquiry-based colloid system module integrated with experiments on students' science process skills and learning outcomes. This research was quasi-experimental, with a Randomized Control Group Posttest Only design. Samples were obtained through a simple random sampling technique, consisting of an experimental class and a control class. The experimental class learned using the guided inquiry-based module integrated with experiments, while the control class learned conventionally. From the data analysis, the average value across the ten indicators of science process skills was 80.0 for the experimental class (very high category) and 71.7 for the control class (high category). The average value of student learning outcomes in the experimental class (80.9) was significantly higher than in the control class (60.6). From these findings, it can be concluded that the guided inquiry-based colloid system module integrated with experiments was effective in improving science process skills and student learning outcomes. It is suggested that chemistry teachers use this module as an alternative learning medium.
Chemistry learning is directed at understanding scientific processes, practicing scientific thinking skills, and instilling and developing scientific attitudes [4]. Regulation of the Education Minister No. 59 of 2014 states that chemistry learning places more emphasis on the application of science process skills. Science process skills are the development of intellectual, social, and physical skills that originate from fundamental abilities which, in principle, already exist in students [5]. A learning model that can be used in applying science process skills is the guided inquiry model. Learning using a guided inquiry model can make students actively involved during the learning process [6]. In addition, with the implementation of guided inquiry learning, students can develop the concepts they have learned beyond material that is merely recorded and memorized [7].
Guided inquiry learning consists of 5 stages, called orientation, exploration, concept formation, application, and closure [6]. By using a module that implements the guided inquiry model in learning activities, students can engage all their abilities to find concepts systematically, critically, logically, and analytically, so that they can formulate their own findings [8].
Based on research conducted by Bruck LB and Towns MH (2008) and Megadomani (2011), it was concluded that experimental activity, or the guided inquiry-based experiment, is one of the recommended methods in chemistry learning [9] [10]. In addition, several studies of integrated experiment activities concluded that learning with integrated experiment activities effectively improves student learning outcomes in the cognitive domain [11] [12] [13].
Based on interviews with teachers at SMAN 5 Padang, experiment activities were not integrated into the learning process; they were often carried out after the theory had been taught. In these activities, students work from the instructions contained in the materials given by the teacher without considering the reasons for each step. In addition, the learning materials have not been able to fully lead students to discover concepts on their own. The experiment activities thus amount to data collection that merely confirms the theory, so teaching material is needed that can guide students to find concepts themselves.
This study aims to reveal the effectiveness of a guided inquiry-based colloidal system module, integrated with experiments and science process skills and compiled by Lidia Fitri, on the learning outcomes of grade XI MIPA students at SMAN 5 Padang.
Research Method
Based on the problems and objectives stated above, this research was quasi-experimental. Quasi-experimental research is conducted when not all variables related to the sample can be controlled [14]. The design of this research was a Randomized Control Group Posttest Only Design, with two classes: an experimental class and a control class. The experimental class used the guided inquiry-based colloidal system module integrated with experiments and science process skills, while the control class used the grade XI chemistry book usually used at school. The population in this study was all grade XI students of SMAN 5 Padang in the 2017/2018 academic year. The sample was determined by a simple random sampling technique, in which all members of the population, regardless of strata, have an equal chance of being sampled. Based on this technique, class XI MIPA 1 was selected as the experimental class and class XI MIPA 2 as the control class.
This research was carried out in three stages: preparation, implementation, and completion. The preparation stage consisted of determining the location and schedule of the study, determining the population and sample, preparing the lesson plans for the colloidal system, writing the test questions and answer keys, analyzing the trial questions, and assembling the final test. In the implementation phase, the experimental class used the guided inquiry-based colloid system module integrated with experiments and science process skills, while the control class used the grade XI chemistry book. In the completion phase, the final test was administered and the data were analyzed. The research instrument was the final test of learning outcomes in the cognitive domain: multiple choice questions with 5 answer choices. The final test consisted of 25 questions selected from 40 trial questions that had been tested for validity, reliability, difficulty index, and discriminating power.
To test the proposed hypothesis, data analysis was carried out using a one-tailed hypothesis test [15]. Before testing the hypothesis, the data were first tested for normality and homogeneity.
Based on the normality and homogeneity tests of the final test results, the two sample classes were found to be normally distributed with homogeneous variance. The hypothesis was therefore tested with the t-test.
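The test sequence described above (Lilliefors normality test, F-test for homogeneity, one-tailed t-test) can be sketched in Python with SciPy and statsmodels; the score arrays below are hypothetical stand-ins for the class data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical posttest scores; replace with the real class data.
experiment = np.array([92, 88, 84, 80, 76, 72, 84, 88, 80, 76])
control = np.array([72, 64, 60, 56, 52, 68, 60, 64, 56, 48])

# Normality (Lilliefors test, alpha = 0.05): H0 = data are normal.
for name, x in (("experiment", experiment), ("control", control)):
    stat, p = lilliefors(x, dist="norm")
    print(name, "normal" if p > 0.05 else "not normal", round(p, 3))

# Homogeneity of variance (F test): larger variance over smaller variance.
s1, s2 = experiment.var(ddof=1), control.var(ddof=1)
F = max(s1, s2) / min(s1, s2)
p_F = 2 * (1 - stats.f.cdf(F, len(experiment) - 1, len(control) - 1))
print("F =", round(F, 2), "p =", round(p_F, 3))

# One-tailed independent-samples t-test: H1 = experiment > control.
t, p_two = stats.ttest_ind(experiment, control, equal_var=True)
print("t =", round(t, 2), "one-tailed p =", round(p_two / 2, 4))
```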
Data Description
Assessment of learning outcomes was done by giving both sample classes a final test consisting of 25 multiple choice questions. The final test data showed that the highest score in both the experimental class and the control class was 92, while the lowest score was 44 in the experimental class and 24 in the control class. The average scores of the two classes were 80.9 and 60.6, respectively.
Data analysis
To determine whether the difference in the averages of the two classes is significant, a test of the equality of two means was carried out, comprising the difference in the values of the two sample classes, a normality test, a homogeneity test, and a t-test. From the learning outcome values of both sample classes, the mean (x̄), standard deviation (S), and variance (S²) were calculated, giving the data in Table 1. Table 1 shows that the learning outcomes of the experimental class are higher than those of the control class. To determine whether the differences between the two sample classes are significant, a hypothesis test was conducted, preceded by normality and homogeneity tests.
Normality Test.
The data of learning outcomes from both classes were tested for normality using the Lilliefors test. The results of the analysis at the 0.05 significance level can be seen in Table 2, which shows that the value of L₀ is smaller than Lₜ. This shows that both sample classes are normally distributed.
Homogeneity Test.
To determine whether the two sample classes have homogeneous variance, a homogeneity test was done using the F test. The results of the homogeneity test at the 0.05 significance level can be seen in Table 3, which shows that the calculated F is smaller than the table F, so it can be concluded that the two classes have homogeneous variance.
t-Test.
Based on the normality and homogeneity tests, the two classes are normally distributed and have homogeneous variance. Therefore, the t-test was used to test the hypothesis; the results are summarized in Table 4, which shows t_count = 5.76 and t_table = 1.67, so t_count > t_table and H₀ is rejected. It can thus be concluded that there are significant differences in learning outcomes between the two sample classes on knowledge competencies. In addition to assessing learning outcomes in the cognitive domain, science process skills were also assessed, based on the following indicators: 1) SPS-1 planning experiments, 2) SPS-2 asking questions, 3) SPS-3 formulating hypotheses, 4) SPS-4 using tools and materials, 5) SPS-5 observing, 6) SPS-6 classifying, 7) SPS-7 interpreting, 8) SPS-8 predicting, 9) SPS-9 applying concepts, and 10) SPS-10 communicating. The results of the science process skills assessment in the experimental class and control class can be seen in Figure 1 and Figure 2.
Discussion
Based on the description and data analysis, there are differences in student learning outcomes between the two sample classes: the learning outcomes of the experimental class are higher than those of the control class. These differences are influenced by the teaching materials used during the learning process. Learning in both sample classes was carried out by applying the guided inquiry model, which is student-centered: students work in small groups with individual roles to ensure that all students are fully involved in the learning process [16]. There are 5 stages in guided inquiry learning, called orientation, exploration, concept formation, application, and closing [6]. The difference between the two sample classes lies in the teaching materials used. The experimental class used the guided inquiry-based module integrated with experiments and science process skills, which had been tested for validity and practicality by Lidia Fitri, S.Pd (2017), while the control class used the grade XI chemistry book commonly used in school.
Learning using guided inquiry modules can attract students' interest in learning. This is due to the learning stages contained in the guided inquiry-based module, namely orientation, exploration, concept formation, application, and closing. The orientation phase in the module lies in the delivery of indicators, learning objectives, and supporting material. The exploration phase lies in the presentation of models and activities; the models can take the form of color images and sub-microscopic representations. The concept formation stage lies in the critical questions, the application stage in the exercise section, and the closing stage in the concluding section.
The critical questions contained in the inquiry-based module are the most important part of inquiry learning. Critical questions lead students to think critically and analytically, helping them develop their own concepts and draw conclusions. In addition, key questions guide students to find concepts during the learning process [17]. This is in accordance with Hanson's view that critical thinking questions are the heart of guided inquiry learning, with students actively working to learn new content and develop process skills.
In this guided inquiry module, critical thinking questions are designed based on indicators of science process skills (SPS). The SPS indicators are observing, classifying, interpreting, predicting, communicating, asking questions, submitting hypotheses, designing experiments, using tools and materials, applying concepts, and conducting experiments [8] [18] [19] [20]. In addition, the critical thinking questions are interconnected and arranged from lower to higher levels, so that students can develop answers based on what they have found in previous information and what they already know from answers to earlier questions. The guided inquiry module that integrates experiments on the colloidal system material can increase students' interest in learning independently and finding concepts on their own [21]. Unlike the module, the book used in the control class did not contain the stages of guided inquiry learning, so students had difficulty in learning. The book also did not contain critical thinking questions that could guide students in building concepts. The questions in the printed book merely confirm the results of the experiment; they are not interconnected and do not guide students in finding concepts, so students had difficulty developing an understanding of the concepts.
The high learning outcomes of the experimental class students were significantly driven by the use of the guided inquiry-based module integrated with experiments and process skills. Using the module, students are guided to think critically by answering critical questions about the models presented. The high learning outcomes in the experimental class are also explained by the module guiding students to relate new knowledge to prior knowledge and to see the relevance and application of concepts, so that experimental class students gain more meaningful experiences and the concepts become more firmly embedded in their minds [22] [23].
With information strongly attached to students' memory, the acquisition of learning outcomes is also affected. Besides that, students learn to solve problems objectively, critically, openly, and collaboratively, which has a positive effect on scientific attitudes, skills, and student learning outcomes. This is supported by Salamah (2017), who found that the active involvement of students can increase science process skills, and that direct experience improves student memory so that knowledge is retained longer. In addition, research conducted by Bilgin and by Myers found that a guided inquiry learning model helps students understand concepts more easily and improves the effectiveness of interaction, builds teamwork, and raises interest through highly structured group collaboration [22]. Experiment activities that are integrated with learning activities help students better understand the concepts: in an integrated experiment activity, students learn from the facts of the data obtained in the lab, analyze the data, generalize, and draw conclusions from what they have found. New concepts formed through one's own experience last longer in students' memories [12]. Learning with the guided inquiry module integrated with experiments and science process skills can also increase students' science process skills, as seen in the assessment results: the experimental class obtained a higher score than the control class. High science process skill scores are in line with high learning outcomes in the cognitive domain. Nworgu and Otum (2013) concluded that guided inquiry models improve science process skills and provide opportunities for students to use various sources of information and ideas in understanding and solving problems [24]. A science process skills approach gives students the opportunity to observe, classify, interpret, and communicate the data they find during the learning process, which helps them understand the material so that it enters long-term memory. This is in accordance with Hesbon's research (2014), which found that a science process skills approach in learning can facilitate higher learning outcomes than other approaches [25].
The guided inquiry-based module integrated with experiments and science process skills provides guidance to students in finding concepts through models, key questions, exercises, and problem solving. This gives the experimental class several advantages over the control class: 1) experimental class students are more active than control class students, because they are required to solve problems themselves with the help of key questions; 2) their critical and analytical thinking skills are better than those of the control class; 3) they achieve higher academically than the control class; and 4) they have higher science process skills than the control class. The constraints experienced during the research were that students were poorly settled at the first meeting, so learning was slightly disrupted, and that timing at each learning stage was poor at the first meeting. For subsequent meetings, the teacher therefore settled the students and set a time limit for each learning stage so that the learning objectives could be achieved.
Conclusion
Based on the results of the research and data analysis, it can be concluded that the guided inquiry-based module integrated with experiments and science process skills is effective in improving student learning outcomes and science process skills on the colloidal system topic. This can be seen from the learning outcomes of the experimental class (using the module), which are higher, with an average of 80.9, than those of the control class (using the non-inquiry book commonly used in schools), with an average of 60.6. The average science process skill score of the experimental class is also higher, at 82, than that of the control class, at 75.
Supercongruences related to ${}_3F_2(1)$ involving harmonic numbers
We show various supercongruences for truncated series which involve central binomial coefficients and harmonic numbers. The corresponding infinite series are also evaluated.
Introduction
In 1997, Van Hamme [21] established the p-adic analogs of several Ramanujan-type series. For one of them, the series labeled (H.1), the congruence holds if $p \equiv_4 3$, where p is any prime greater than 3 (we use the notation $a \equiv_m b$ to mean a ≡ b (mod m)).
In this paper we investigate series, and the corresponding partial sums, whose terms involve central binomial coefficients and harmonic numbers. The main results are presented in Section 3 (evaluations of the infinite series) and Section 5 (congruences for the truncated series). The correspondence between the right-hand sides of the infinite series and of the finite sums is particularly striking: the classic Gamma function appears in the former and its p-adic analog in the latter.
Similar results of lower degree
Before dealing with the main issue, we take a look at similar sums already in the literature, where the central binomial coefficient is raised to a power less than 3. Assume that p is a prime greater than 3. For n ≥ 1 there is a closed-form identity; thus, setting n = p, we obtain (see [19, (1.10)] for the modulo p³ version) a congruence in which $q_p(a) = \frac{a^{p-1}-1}{p}$ is the Fermat quotient, and where we used Wolstenholme's theorem $\binom{2p}{p} \equiv_{p^3} 2$ together with further congruences (for the first one we can refer to [14, Theorem 5.1(a)]). Moreover, a related identity implies (see [19, (1.11)] for the modulo p version) a companion congruence, where we employed the congruence established in [20, Theorem 1.1], a congruence given in [14], and a congruence valid for 0 ≤ k ≤ n = (p − 1)/2 (note that p divides $\binom{2k}{k}$ for n < k < p).
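Both ingredients named above can be checked numerically; the short Python script below computes the Fermat quotient and verifies Wolstenholme's congruence $\binom{2p}{p} \equiv 2 \pmod{p^3}$ for small primes:

```python
from math import comb

def fermat_quotient(a, p):
    """Fermat quotient q_p(a) = (a^(p-1) - 1) / p, an integer when p does
    not divide a, by Fermat's little theorem."""
    return (pow(a, p - 1) - 1) // p

for p in [5, 7, 11, 13, 17, 19]:
    assert comb(2 * p, p) % p**3 == 2   # Wolstenholme's theorem
    print(p, fermat_quotient(2, p))
```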
Evaluations of the infinite series
The generalized hypergeometric function is defined as
$$
{}_{p}F_{q}\!\left(a_1,\dots,a_p;\, b_1,\dots,b_q;\, z\right) \;=\; \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}\,\frac{z^k}{k!},
$$
where $(x)_k = x(x+1)\cdots(x+k-1)$ and $(x)_0 = 1$ is the Pochhammer symbol, and the $a_i$, $b_j$ and z are complex numbers with none of the $b_j$ being negative integers or zero. We recall some well-known hypergeometric identities, among them ii) Whipple's theorem [1, p. 16]. In the next theorem we evaluate four specific series.
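The definition translates directly into an exact partial-sum evaluator over the rationals (Python). The closing check uses the terminating Chu-Vandermonde evaluation ${}_2F_1(-n, b; c; 1) = (c-b)_n/(c)_n$, a standard identity not stated in the text:

```python
from fractions import Fraction

def pochhammer(x, k):
    """Rising factorial (x)_k = x(x+1)...(x+k-1), with (x)_0 = 1."""
    out = Fraction(1)
    for j in range(k):
        out *= x + j
    return out

def hyper_partial(upper, lower, z, n_terms):
    """Partial sum of pFq(upper; lower; z) over the first n_terms terms."""
    total = Fraction(0)
    for k in range(n_terms):
        term = z ** k / pochhammer(Fraction(1), k)  # z^k / k!, since k! = (1)_k
        for a in upper:
            term *= pochhammer(a, k)
        for b in lower:
            term /= pochhammer(b, k)
        total += term
    return total

# Chu-Vandermonde: 2F1(-n, b; c; 1) = (c - b)_n / (c)_n (series terminates).
n, b, c = 3, Fraction(1, 2), Fraction(2)
lhs = hyper_partial([Fraction(-n), b], [c], Fraction(1), n + 1)
rhs = pochhammer(c - b, n) / pochhammer(c, n)
assert lhs == rhs  # both equal 35/64
```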
For (10), let a = b = 1/2 in (8); on the other hand, differentiate the right-hand sides of (8) and (9). By considering a suitable linear combination of the two resulting identities, the relevant special values yield (10) immediately. For (11), let a = c = 1/2 in (9); moreover, for c = 1/2, e = 1 in (9), a further identity is obtained whose right-hand side can be simplified. As before, by combining these results and using the special values, the conclusion (11) easily follows.
Congruences for the truncated series - Preliminary results
If n is an odd integer, then by replacing k with (n − k) it is easy to see that the sum is symmetric. The next lemma follows from [2, Theorem 1].
The next lemma establishes some identities involving the harmonic numbers that we will need later on.
Lemma 2. For any non-negative integer n, we have identities for sums of the form $\sum_{k=0}^{n}\binom{n}{k}(\cdot)$; moreover, further identities hold for any even integer. Proof. For n = 2m, let F(m, k) be the summand; then by the Wilf-Zeilberger method we find a recurrence for it. Let $S(m) = \sum_{k\ge 1} F(m,k)\,H_k^{(2)}$; then, by summation by parts (see [5] for a similar approach), the stated identity follows. The other identities can be obtained in a similar way.
Morita's p-adic Gamma function $\Gamma_p$ is defined as the continuous extension to the set of all p-adic integers $\mathbb{Z}_p$ of the sequence $n \mapsto (-1)^n \prod_{0<j<n,\; p\nmid j} j$, where p is an odd prime and n > 1 is an integer (see [13, Chapter 7] for a detailed introduction to $\Gamma_p$). We have $\Gamma_p(0) = 1$ and, for $x \in \mathbb{Z}_p$, the functional equation $\Gamma_p(x+1) = -x\,\Gamma_p(x)$ if $|x|_p = 1$ and $\Gamma_p(x+1) = -\Gamma_p(x)$ otherwise, where $|\cdot|_p$ denotes the p-adic norm. By [9, Theorem 14], a further congruence holds for all $a, b \in \mathbb{Z}_p$. Moreover, $\Gamma_p(x)\Gamma_p(1-x) = (-1)^{s_p(x)}$, where $s_p(x)$ is the integer in {1, 2, …, p} such that $s_p(x) \equiv_p x$. The above formula is the p-adic analog of the classic reflection formula for the classic Gamma function. The next lemma evaluates $\binom{2m}{m}$ if $p \equiv_4 1$, where m = ⌊p/4⌋; a companion evaluation holds if $p \equiv_4 3$. Proof. We start with (21). Since $p \equiv_4 3$, we have m = (p − 3)/4, and the claim follows by (19). As regards (20), we consider only the case $p \equiv_4 3$, since the other case can be handled similarly. Then $\binom{2m}{m}$ can be rewritten and evaluated by (18) and by [7, Lemma 2.4], where we also used (22).
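For positive integer arguments the defining sequence can be computed directly; the sketch below (Python) assumes the standard Morita definition quoted above and verifies the functional equation on positive integers:

```python
def gamma_p(n, p):
    """Morita's p-adic Gamma at a positive integer n (standard definition,
    assumed here since the displayed formula is garbled in the source):
    Gamma_p(n) = (-1)^n * product of j for 0 < j < n with p not dividing j."""
    out = 1
    for j in range(1, n):
        if j % p != 0:
            out *= j
    return (-1) ** n * out

# Functional equation check: Gamma_p(n+1) = -n*Gamma_p(n) if p does not
# divide n, and -Gamma_p(n) otherwise.
p = 7
for n in range(1, 30):
    factor = -n if n % p != 0 else -1
    assert gamma_p(n + 1, p) == factor * gamma_p(n, p)
print(gamma_p(1, p))  # Gamma_p(1) = -1
```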
Coda
In this final section we present a few more results with the same flavor.
Moreover, a closed form for this series has been proved in the literature. The following result yields a p-adic analog of the above series, with congruences equivalent to [11, (4)] and [11, (5)], respectively.
Knockdown of the HDAC1 Promotes the Directed Differentiation of Bone Mesenchymal Stem Cells into Cardiomyocytes
Failure of the directed differentiation of transplanted stem cells into cardiomyocytes is still a major challenge of cardiac regeneration therapy. Our recent study has demonstrated that the expression of histone deacetylase 1 (HDAC1) is decreased in bone mesenchymal stem cells (BMSCs) during their differentiation into cardiomyocytes. However, the potential roles of HDAC1 in cardiac cell differentiation of BMSCs, as well as the mechanisms involved, are still unclear. In the current study, the expression of HDAC1 in cultured rat BMSCs was knocked down by lentiviral vectors expressing HDAC1-RNAi. The directed differentiation of BMSCs into cardiomyocytes was evaluated by the expression levels of cardiomyocyte-related genes such as GATA-binding protein 4 (GATA-4), NK2 homeobox 5 (Nkx2.5), cardiac troponin T (CTnT), myosin heavy chain (MHC), and connexin-43. Compared with control BMSCs, the expression of these cardiomyocyte-related genes was significantly increased in the HDAC1-deficient stem cells. The results suggest that HDAC1 is involved in the cardiomyocyte differentiation of BMSCs, and that knockdown of HDAC1 may promote the directed differentiation of BMSCs into cardiomyocytes.
Introduction
Bone mesenchymal stem cells (BMSCs) are non-hematopoietic stem cells in bone marrow. BMSCs are ideal for cell-based regeneration therapy in the treatment of cardiovascular disease because these cells exhibit various advantageous traits, including ready accessibility, ease of amplification, low immunogenicity, and amenability to the introduction of exogenous genes and other genetic modifications [1], [2].
Despite the currently available approaches, failure of the directed differentiation of transplanted stem cells into cardiomyocytes remains a major challenge of cardiac regeneration therapy [3]. This highlights the importance and urgency of studying the novel mechanisms of this critical cellular process and exploring new therapeutic options to improve cardiac regeneration therapy.
Recently, an increasing number of studies have revealed that the epigenetic modification of histone acetylation may play an important role in the transdifferentiation processes of adult stem cells, including their directed cardiac cell differentiation [4], [5], [6], [7]. In a pilot study, we identified that the expression of histone deacetylase 1 (HDAC1) in BMSCs is significantly decreased during their differentiation into cardiomyocytes. However, the potential role of HDAC1 in the directed differentiation of transplanted stem cells into cardiomyocytes has remained unclear. In the current study, the expression of HDAC1 in cultured rat BMSCs was knocked down by lentiviral vectors expressing HDAC1-RNAi, and the directed differentiation of BMSCs into cardiomyocytes was evaluated.
Experimental animals
One-month-old male Sprague-Dawley (SD) rats, weighing 80 to 110 g, were purchased from the Guangdong Medical Laboratory Animal Center. All protocols were approved by the Institutional Animal Care and Use Committee at the Guangzhou Medical University and were consistent with the Guide for the Care and Use of Laboratory Animals (NIH publication 85-23, revised 1985).
The isolation, culture and identification of BMSCs
SD rat BMSCs were isolated and cultured using the whole bone marrow adherence method described previously [8]. Briefly, SD rats were sacrificed, and their femora and tibiae were rapidly stripped. A 5 ml syringe loaded with an appropriate quantity of complete BMSC culture medium (Dulbecco's modified Eagle's medium/Ham's F12 nutrient mixture (DMEM/F12) with 10% FBS, 100 U/ml penicillin, and 100 U/ml streptomycin) was inserted into the metaphysis of each bone, and was utilised to flush bone marrow cells with culture medium. The isolated BMSCs were centrifuged at 1000 rpm for 10 minutes, resuspended in complete medium, and cultured. The culture medium was changed every 2-3 days. After reaching 80% confluence, cultured adherent cells were trypsinised with a 0.25% trypsin solution and passaged. The percentages of cells that were positive for the BMSC surface markers of CD34, CD45, CD29, CD44, and CD90 (Santa Cruz) were analysed by flow cytometry. For all experiments, rat BMSCs from passages 3 to 5 (P3 to P5 cells) were used.
The construction and screening of the optimal HDAC1-RNAi lentiviral vector

Based on the sequence information of the HDAC1 gene, four short hairpin RNA (shRNA) sequences and one negative control (NC) sequence were designed. The shRNA expression sequences targeting HDAC1 were ligated into the lentiviral vector pGCSIL-GFP to obtain recombinant pGCSIL-GFP-shHDAC1 plasmids. After the plasmid sequences had been verified, the recombinant plasmids and the lentiviral packaging plasmids pHelper 1.0 and pHelper 2.0 were cotransfected into 293T cells using Lipofectamine 2000. Virus was collected from the culture supernatant, and the viral titre of this solution was determined by serial dilution.
The packaged pGCSIL-GFP-NC lentivirus was used to infect BMSCs at MOIs of 0, 1, 10, and 100. Fluorescence microscopy was utilised to detect the expression of fluorescent proteins after 12 h, 24 h, 48 h, and 72 h of infection. Flow cytometry (at a wavelength of 488 nm) was used to determine infection efficiency and thereby screen for the MOI value and infection time that produced maximal infection efficiency.
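For reference, the bookkeeping behind such an MOI series is plain titer arithmetic; the helper below (Python, illustrative numbers) computes the stock volume required for a target MOI:

```python
def virus_volume_ul(moi, n_cells, titer_tu_per_ml):
    """Volume of viral stock (in microlitres) needed to infect n_cells at a
    given MOI, where MOI = transducing units (TU) per cell."""
    tu_needed = moi * n_cells
    return tu_needed / titer_tu_per_ml * 1000.0

# e.g. 1e5 cells at MOI 100 with a 1e8 TU/ml stock -> 100 microlitres
print(virus_volume_ul(100, 1e5, 1e8))
```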
Five groups of packaged viruses were used to infect BMSCs, and the BMSCs were collected after 48 hours of infection. Uninfected BMSCs were utilised as a normal control group. To screen for the virus with the best efficiency at silencing HDAC1, real-time quantitative reverse transcription polymerase chain reaction (RT-qPCR) was used to determine the HDAC1 mRNA expression levels for each group, and Western blotting was used to determine the HDAC1 protein expression levels for each group.
RT-PCR
TRIzol was used to extract total RNA from cells. A TaKaRa transcription kit was then used for reverse transcription (RT); the samples were incubated in a PCR thermocycler at 37°C for 15 minutes and then at 85°C for 5 seconds. The RT-PCR amplifications were performed with the TaKaRa PrimeScript II 1st Strand cDNA Synthesis Kit (D6210A), using glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as an internal reference. The amplification included the following reaction stages: stage I (initial denaturation), incubation at 95°C for 30 sec; stage II (PCR amplification), 30 cycles of 95°C for 5 sec, 60°C for 3 sec, and 72°C for 30 sec; and stage III (melting curve analysis), incubation at 72°C for 10 min followed by incubation at 16°C for 10 min. The primers for the various genes were designed using the Primer 5.0 software package and synthesised by TaKaRa Biotechnology. The 2^(−ΔΔCt) method was used to calculate the relative multiple of the starting copy number in the template from each experimental group, and thereby indicate the relative differences in mRNA levels among the experimental groups.
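The relative quantification step is the standard Livak calculation; a minimal sketch in Python, with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt: dCt = Ct(target) - Ct(reference) within each sample;
    ddCt = dCt(sample) - dCt(calibrator); fold change = 2^-ddCt."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: HDAC1 vs GAPDH, shRNA-infected vs normal control.
print(relative_expression(26.5, 18.0, 21.5, 18.0))  # 0.03125, ~32-fold lower
```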
Western blotting
At least 1×10⁷ cells were collected for the extraction of nucleoproteins, which was performed as described in the DBI nucleoprotein extraction kit. Protein concentrations were determined with the bicinchoninic acid (BCA) assay. Samples of 16 μg of nucleoprotein were separated on a 10% sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) gel and subsequently electrotransferred onto polyvinylidene difluoride (PVDF) membranes for 90 min at 4°C and 290 mA. After the membranes were blocked in 5% non-fat dry milk for 90 min, they were incubated with a diluted anti-HDAC1 primary antibody (1:500 dilution, Abcam; in a solution containing 2% non-fat dry milk) at 4°C overnight. Subsequently, the membranes were washed with phosphate-buffered saline (PBS) or Tris-buffered saline with Tween 20 (TBST), using sufficient saline solution to cover the PVDF membrane for each wash. The first wash lasted 15 min, and each of the next three washes lasted 5 min. Following these washes, the membranes were incubated with 5 ml of 1:5000 diluted goat anti-rabbit secondary antibody (in a solution containing 2% non-fat dry milk) for 1 h at room temperature. An enhanced chemiluminescence (ECL) solution was prepared; the membranes were incubated in this ECL solution in a darkroom for 5 min, and images of the membranes were then digitally developed. The relative expression level of a target protein was quantified by dividing the greyscale value of the target protein band by the greyscale value of the band corresponding to GAPDH, the protein utilised as an internal reference.
Statistical analysis
Statistical analyses were performed using the SPSS 13.0 statistical software package, and the data are expressed as means ± standard deviation. Comparisons between two groups were performed using the independent-samples t-test, and comparisons among multiple groups were performed using one-way analysis of variance; P < 0.05 was regarded as statistically significant.
The morphological characteristics and identification of BMSCs in vitro
At the early stage of primary BMSC culture, the cells were round. After 24 h of culture, the BMSCs had become adherent and had elongated into fusiform or triangular shapes. After 3-4 d, the cells entered the logarithmic growth phase and exhibited clonal proliferation. The BMSC colonies fused after 9-10 d of culture; at this time, most of the cells exhibited the morphology of flattened long spindles, whereas a small proportion were triangle- or polygon-shaped. Cells became adherent relatively rapidly after passaging. Passaged cells demonstrated a fibroblast-like morphology and an increased volume relative to primary cells, and underwent rapid clonal proliferation, requiring only 3-4 d to reach 80% confluence; at this stage, the cells were either re-passaged or cryopreserved. As the number of passages increased, the cells became increasingly purified and morphologically uniform. Moreover, the BMSCs exhibited a polar arrangement that was swirling in nature or reminiscent of schools of fish (Figure 1). Passage-3 BMSCs were examined by flow cytometry. These analyses indicated that the BMSCs expressed low levels of the hematopoietic markers CD34 (0.64%) and CD45 (0.35%) but high levels of various stromal and mesenchymal cell surface markers, including CD29 (99.86%), CD44 (99.47%), and CD90 (98.97%) (Figure 2).
Sequencing verification of recombinant plasmids. The recombinant vectors of this experiment were sequenced by Shanghai GeneChem. The positive clones were designated pGCSIL-HDAC1-shRNA1 through pGCSIL-HDAC1-shRNA4, and the NC vector was designated pGCSIL-NC-shRNA. The sequencing results indicated that the shRNA expression template was successfully constructed based on the pGCSIL-GFP vector and that the sequences of interest in each vector were completely correct and identical to the designed target sequences (Figure 3).
The infection of target cells: preliminary experiments and screening for optimal MOI conditions. Expression of enhanced green fluorescent protein (EGFP) could be observed 12 h after BMSCs had been infected with the pGCSIL-GFP-NC lentivirus; EGFP expression peaked at 48 h after infection, although strong green fluorescence was sustained through 72 h after infection. In particular, compared with the other examined groups, the ENi.S + 5 μg/ml polybrene group exhibited a significantly higher number of fluorescent cells. Under optimal infection conditions, at an MOI between 1 and 10, an infection efficiency of less than 70% was observed. By contrast, at an MOI of 100, an infection efficiency of 90% or more was observed; moreover, the cells exhibited strong growth and no significant differences in morphology relative to the normal control group. Flow cytometry revealed an EGFP-positive rate of 93.65% among these infected cells (Figure 4).
HDAC1 mRNA expression determined by RT-PCR. RT-PCR results revealed that although HDAC1 mRNA could be detected in samples from all groups, significantly lower HDAC1 mRNA expression levels were detected in the groups infected with an interference virus than in either the normal control group or the NC group (P < 0.01, Figure 5, Figure 6). In particular, the normal control group exhibited an HDAC1 mRNA expression level that was 32, 11, 13, and 96 times higher than the levels in the LV-HDAC1-shRNA1, LV-HDAC1-shRNA2, LV-HDAC1-shRNA3, and LV-HDAC1-shRNA4 interference groups, respectively. No significant difference in HDAC1 mRNA expression was found between the normal control group and the NC group. Although all four vectors successfully inhibited HDAC1 expression, the inhibitory effects of LV-HDAC1-shRNA1 and LV-HDAC1-shRNA4 were more efficient than those of LV-HDAC1-shRNA2 and LV-HDAC1-shRNA3.
HDAC1 expression determined by Western blotting. The results of Western blotting indicated that although HDAC1 protein was expressed in samples from all groups, significantly lower HDAC1 protein levels were detected in all groups infected with an interference virus than in the normal control group or the NC group (P < 0.01, Figure 7). In particular, the normal control group exhibited an HDAC1 protein expression level that was 1.66, 1.53, 1.50, and 1.70 times higher than those in the LV-HDAC1-shRNA1, LV-HDAC1-shRNA2, LV-HDAC1-shRNA3, and LV-HDAC1-shRNA4 interference groups, respectively. Thus, the inhibitory efficiencies of these interference groups at the protein level were 40%, 35%, 32%, and 41%, respectively. Based on these results, the LV-HDAC1-shRNA4 vector was selected for use in the subsequent experiments.
RT-PCR detection of genes related to myocardial development and structure
The expression levels of genes related to myocardial development and structure were detected in BMSCs infected with LV-HDAC1-shRNA4. The genes detected in this study were Nkx2.5, GATA-4, MHC, connexin-43, and CTnT. As shown in Table 1, the expression of these 5 genes was significantly higher in BMSCs infected with the lentivirus than in cells from either the normal control group or the NC group (P < 0.05). No significant difference was found between the normal control group and the NC group with respect to the expression of the 5 detected genes.
Discussion
In our recent pilot study, we found that the expression level of HDAC1 is significantly decreased in BMSCs after coculture with cardiomyocytes (data not shown). Moreover, nascent cardiomyocytes exhibit low levels of HDAC1 expression. These initial results suggest that HDAC1 expression might be related to the directed differentiation of BMSCs into cardiomyocytes. We therefore hypothesized that HDAC1 might be a negative regulator of cardiac cell differentiation from BMSCs. To test this hypothesis, we constructed a shRNA eukaryotic expression vector that specifically silenced the HDAC1 gene by RNAi. We found that HDAC1-shRNA could significantly reduce HDAC1 expression in BMSCs, and that this reduction significantly increased the expression of cardiac-specific genes, such as Nkx2.5, GATA-4, MHC, connexin-43, and CTnT. Thus, specific inhibition of the HDAC1 gene in BMSCs increases the differentiation of BMSCs toward a cardiac cell phenotype.
Directed differentiation of stem cells is a complex process in which multiple genes and signalling pathways are involved [9]. Recently, epigenetic modification of histone acetylation has been identified as playing important roles in this stem cell process [4], [5], [6], [7]. Indeed, rat BMSCs treated with HDAC inhibitors such as suberoylanilide hydroxamic acid (SAHA) and trichostatin A (TSA) exhibit a significant increase in the expression of cardiomyocyte-specific genes, including GATA4, Nkx2.5, and Mef2c [5], [10]. However, both SAHA and TSA are non-specific inhibitors of HDACs that suppress multiple HDAC subtypes in addition to HDAC1. In addition, SAHA and TSA treatments promote not only the differentiation of BMSCs into cardiomyocyte-like cells but also their differentiation into endothelial cells [11], hepatocytes [12], chondrocytes [13], and adipocytes [14], among other cell types. Detailed information regarding the roles of individual HDAC subtypes in the differentiation of BMSCs into myocardial phenotypes has not yet been reported. In this study, we identified, via the shRNA approach, that HDAC1 is a critical regulator of the directed differentiation of BMSCs into myocardial phenotypes. The results of our study are consistent with studies in other stem cell lines. For example, Liu et al. [15] and Dovey et al. [16] reported that the in vitro induction of specific deficiencies in HDAC1 expression in P19CL6 cells and embryonic stem cells could promote the differentiation of stem cells into myocardial phenotypes.
The molecular mechanisms involved in the HDAC1-mediated effect on the directed differentiation of BMSCs into myocardial phenotypes are currently unclear. In embryonic stem cells, Hoxha et al. [17] reported that HDAC1 regulates the phenotypic differentiation of embryonic stem cells into cardiomyocytes by controlling the expression of sex-determining region Y-box 17 (SOX17) and bone morphogenetic protein 2 (BMP2). However, because of ethical concerns and other issues, embryonic stem cells and induced pluripotent stem cells might not be used for clinical stem cell transplantation in the near future. The downstream pathways involved in HDAC1-mediated phenotypic differentiation of BMSCs into myocardial phenotypes should be identified in future studies. Moreover, the roles of other HDAC subtypes in cardiac cell differentiation from BMSCs merit additional investigation.
In summary, we have identified that the specific inhibition of HDAC1 gene expression in BMSCs by RNAi causes an increase in the mRNA expression levels of genes related to myocardial development and structure. HDAC1 may play an important role in the directed differentiation of BMSCs into cardiomyocytes.
Dodecamer assembly of a metazoan AAA+ chaperone couples substrate extraction to refolding
Ring-forming AAA+ chaperones solubilize protein aggregates and protect organisms from proteostatic stress. In metazoans, the AAA+ chaperone Skd3 in the mitochondrial intermembrane space (IMS) is critical for human health and efficiently refolds aggregated proteins, but its underlying mechanism is poorly understood. Here, we show that Skd3 harbors both disaggregase and protein refolding activities enabled by distinct assembly states. High-resolution structures of Skd3 hexamers in distinct conformations capture ratchet-like motions that mediate substrate extraction. Unlike previously described disaggregases, Skd3 hexamers further assemble into dodecameric cages in which solubilized substrate proteins can attain near-native states. Skd3 mutants defective in dodecamer assembly retain disaggregase activity but are impaired in client refolding, linking the disaggregase and refolding activities to the hexameric and dodecameric states of Skd3, respectively. We suggest that Skd3 is a combined disaggregase and foldase, and this property is particularly suited to meet the complex proteostatic demands in the mitochondrial IMS.
INTRODUCTION
Protein homeostasis is essential for the survival and proper functioning of cells and requires the correct folding, localization, and quality control of most of the proteome (1). Misfolding and aggregation of proteins, especially under stress conditions, deprive cells of functional proteins and generate toxic species that lead to a variety of protein misfolding diseases (2). To overcome this problem, cells evolved a diverse set of molecular chaperones that participate in every aspect of protein folding and quality control. For example, heat shock protein 70/40 (Hsp70/40) act as central hubs that interact with numerous newly synthesized proteins in the cell (3). Hsp70s protect unfolded or partially unfolded proteins from aggregation and, via cochaperone [e.g., Hsp40 (4) and nucleotide exchange factors (5)]-regulated adenosine triphosphatase (ATPase) cycles, can release client proteins in a conformation conducive to folding (6,7). Chaperonins, such as the bacterial GroEL and eukaryotic TRiC/CCT complexes, form two stacked seven- to nine-membered rings that encapsulate unfolded polypeptides in a central cavity, which provide a protected environment to facilitate client folding (8-10). In addition, small Hsps act as "holdases" by binding non-native states of proteins at early stages of misfolding, thus preventing the accumulation of irreversible protein aggregates (11). Together, the diverse activities of the different chaperone systems provide a robust network that maintains homeostasis of the proteome.
A special class of molecular chaperones, represented by ClpB in bacteria and Hsp104/Hsp78 in yeast, can further extract proteins from their aggregates and rescue folding (12,13). These "disaggregases" are members of the ATPases associated with diverse cellular activities (AAA+) family, which assemble into hexameric rings via their nucleotide-binding domains (NBDs) and use ATPase cycles to power the generation of mechanical force. Each NBD extends a pore loop to contact the client protein via a conserved aromatic residue (14). Together, the pore loops form a constricted central pore in the NBD ring through which client proteins are threaded in an unfolded conformation, and this threading activity provides the fundamental mechanism by which ClpB/Hsp104 extract substrates from the aggregate (15-17). Extensive structural work on protein-threading AAA+ members showed that the pore rings form a spiral staircase surrounding the substrate protein, and that the NBD rings adopt varying degrees of spiral distortion linked to the nucleotide state of individual ATPase sites [reviewed in (18)]. These works support a sequential mechanism of substrate translocation in which individual NBD protomers progress around the ring and dissociate from the neighboring subunit at the bottom of the spiral upon adenosine triphosphate (ATP) hydrolysis and nucleotide dissociation, and reassociate with the protomer at the top position upon binding ATP; these coordinated movements propel stepwise substrate translocation (18-20). ClpB/Hsp104/Hsp78 belong to type II AAA+ ATPases that contain two NBD rings working in alternating cycles, with the second NBD (NBD2) serving as the main ATPase motor during substrate translocation. Last, the ability of ClpB/Hsp104/Hsp78 to solubilize and refold aggregated proteins relies on coordination with the Hsp40/70 chaperones (15,21), which recruit these AAA+ chaperones to protein aggregates, allosterically activate the NBD, and assist in the folding of client proteins after they are released from the AAA+ disaggregase (22,23).
AAA+-powered disaggregase systems are essential for protecting bacterial and yeast cells from extensive proteotoxic stress. Curiously, at the evolutionary transition to metazoans, ClpB/Hsp104/Hsp78 were lost (24), raising puzzling questions about whether and how higher eukaryotic organisms "repair" aggregated proteins. Notably, a partial homolog of ClpB/Hsp104, Skd3, is present in the intermembrane space (IMS) of mitochondria in multiple metazoan lineages (25-28). Skd3 knockout leads to the aggregation of a variety of mitochondrial proteins and impairs the innate immune response in cell culture studies (27,29). In vivo, Skd3 is ubiquitously expressed but has particularly high expression levels in adult brain tissue (30). A variety of mutations in Skd3 are linked to a severe mitochondrial disorder, 3-methylglutaconic aciduria (MGCA), an autosomal recessive disorder with clinical features that include cataracts, neurological deficits, intellectual disability, and severe congenital neutropenia that can develop into leukemia (30-35).
The mitochondrial IMS is a complex proteostatic environment where a variety of protein biogenesis events occur (36,37). Mitochondrial inner membrane proteins and matrix proteins need to be maintained in a largely unfolded, translocation-competent conformation while also protected from aggregation as they transit through the IMS. On the other hand, proteins or domains that reside in the IMS need to undergo folding, assembly, and maturation. The small volume of the IMS further renders the local concentration of proteins very high and thus prone to aggregation. However, little is known about protein folding in the mitochondrial IMS. With the exception of the Mia40/Erv system, which catalyzes oxidative protein folding, and the small translocases of the inner membrane (TIM) proteins, specialized for the import of TIM subunits and carriers, no homologs of Hsp40/70/60 have been identified in the mitochondrial IMS. Although it has been suggested that AAA + proteases in the mitochondrial IMS contribute to protein folding, this hypothesis is based mainly on observations with protease-deficient mutants. The discovery of Skd3 raises the possibility that it may serve as one of the missing general chaperones in the mitochondrial IMS.
Despite its importance in physiology and pathology, the precise roles and mechanisms of Skd3 are poorly understood. Molecularly, Skd3 is a unique fusion of an AAA+ ATPase domain, which shares homology with NBD2 of ClpB/Hsp104 (24), to an N-terminal ankyrin repeat domain (ARD), a repeat protein scaffold that participates in diverse protein-protein interactions (Fig. 1A). Newly synthesized Skd3 also contains an N-terminal mitochondrial targeting sequence (MTS) that directs its import into mitochondria, where it is sequentially processed by the MPP and PARL proteases (Fig. 1A) (26,27). Cupo and Shorter (27) showed that Skd3 is strongly activated upon PARL-mediated cleavage and that PARL Skd3 is a highly potent, stand-alone chaperone. In the absence of additional chaperones, PARL Skd3 reactivates aggregated luciferase with efficiencies higher than that of the Hsp104/70/40 system and even solubilizes amyloid fibrils formed by α-synuclein, whose aggregation is linked to Parkinson's disease. Multiple disease-linked mutations in the NBD and ARD of Skd3 impair the reactivation of luciferase aggregates, linking defects in the chaperone activity of Skd3 to pathology (27,38). Recent cryo-electron microscopy (cryo-EM) structures of an Skd3 hexamer revealed that the NBD of Skd3 forms a hexameric spiral in which multiple pore loop residues from each protomer grip the substrate polypeptide, suggesting that it uses a substrate translocation mechanism conserved among AAA+ disaggregases (38,39). However, the molecular mechanism underlying Skd3's effective chaperone activity is poorly understood. Most previous work described Skd3 as a disaggregase akin to ClpB and Hsp104, but how a client protein is efficiently refolded by Skd3 in the absence of additional chaperones is unexplained. Last, multiple recent studies reported unusual higher-order assembly of Skd3, showing that Skd3 hexamers can further form dodecamers via contacts between the ARDs (25,26,38,39). The roles of the unusual assembly states of Skd3 remain elusive.
In this work, we address these questions using a combination of structural and biochemical studies. We obtained two cryo-EM structures of substrate-bound PARL Skd3 hexamers in "open" and "closed" spiral conformations, which capture the disaggregase at different stages of substrate translocation. Akin to the observation from other laboratories, PARL Skd3 also forms a cylindrical cage-shaped dodecamer that harbors a large internal cavity. Biochemical studies show that solubilized luciferase is protected by PARL Skd3 until it reaches a conformation committed to folding, contrary to expectations for the threading mechanism that describes ClpB/Hsp104. Mutations in the ARD that disrupt the dodecamer interface specifically impair client refolding, suggesting that disaggregation and refolding can be attributed, respectively, to the hexamers and dodecamers of Skd3. Last, the ARD can directly mediate substrate interactions. On the basis of these data, we suggest a working model for how Skd3 couples disaggregation to client refolding to provide an effective repair mechanism for misfolded proteins.
RESULTS
To obtain chaperone-active PARL Skd3, we expressed it as a C-terminal fusion to His6-SUMO. After affinity purification of the fusion protein, cleavage using the SUMO protease Ulp1 generates PARL Skd3 containing a defined, native N terminus, which was further purified by size exclusion chromatography to >95% purity (fig. S1A).
To assess the oligomerization of PARL Skd3, we used mass photometry, which provides an accurate measurement of the mass of individual molecules and complexes in solution (40). As mass photometry operates at submicromolar protein concentrations, we used PARL Skd3 bearing a mutation in the Walker B (WB) motif (E455Q) to facilitate detection of Skd3 oligomers, as this mutation reduces ATP hydrolysis rates and thus enhances hexamer assembly. PARL Skd3_WB is primarily distributed among three species with measured molecular masses consistent with monomer, hexamer, and dodecamer (Fig. 1B). The population of the Skd3 hexamer relative to the monomer is increased by higher Skd3 concentration (Fig. 1B) and by conditions that reduce ATP hydrolysis, including the WB mutation, the slowly hydrolyzing ATP analog ATPγS, and the nonhydrolyzable ATP analog AMPPNP (fig. S1, B to E), confirming that PARL Skd3 hexamer assembly is strongly dependent on high ATP occupancy. On the other hand, dodecamers were observed under all conditions that support substantial hexamer formation, suggesting that they are formed by dimerization of hexamers (Fig. 1B and fig. S1, B to E).
Single-particle cryo-EM analysis confirmed the presence of higher oligomeric states in PARL Skd3 (Fig. 1C), as reported previously (25,38,39). Top views of the two-dimensional (2D) class averages of PARL Skd3_WB bound to ATPγS showed exclusively hexameric rings (Fig. 1C). In side views, particles containing two or three "rings" were observed, and the latter is consistent with the assembly of two hexamers sandwiching the ARD (Fig. 1C). In addition to hexameric rings, wild-type PARL Skd3 in ATPγS also formed heptameric rings that can further assemble into higher-order structures (fig. S1F), as was observed in other studies (38,39). Because the significance of the Skd3 heptamer is unclear and because the assembly of PARL Skd3_WB is more robust and homogeneous compared to PARL Skd3, we carried out the subsequent data collection and structural analyses with PARL Skd3_WB assembled in ATPγS. Particle classification (fig. S2) allowed the reconstruction of both the hexamer complex (Figs. 1D and 2), which we describe first, and the dodecamer complex (Figs. 1E and 3), which is described later.

PARL Skd3 hexamer structures reveal ratchet-like movements implicated in substrate translocation

PARL Skd3_WB hexamer was reconstructed to an overall resolution of 2.77 Å (Fig. 2A, fig. S3, and Table 1). Local resolution ranged from ~2.5 Å in the core of the NBD ring to 3 to 7 Å in the peripheral regions, the P1 protomer, and the ARDs, probably due to conformational flexibility in regions of low resolution (fig. S3A). Except for solvent-exposed loops (524 to 536 and 655 to 675), main-chain atoms could be traced throughout the NBDs starting at residue 318, and side-chain densities were resolved for most residues in the final sharpened map of the NBD hexamer (Fig. 2A and fig. S3G). This allowed us to build an atomic model of the hexamer (Fig. 2B) by rigid-body docking of an AlphaFold-predicted model (41) followed by iterative cycles of manual adjustment in COOT (42) and real-space refinement in Phenix (43). The P1 protomer contains regions with lower resolution than the remainder of the NBD hexamer (fig. S3A) and was therefore modeled on the basis of rigid-body docking of the model for the P4 protomer. The individual NBDs follow the canonical D2 domain structure of ClpB and Hsp104, with a large and a small subunit that cradle ATP. Compared to ClpB, Skd3 has a 14-amino acid insertion in the D10 helix that protrudes sideways into solvent, followed by an extended loop (residues 522 to 534) that connects D10 to D11 (fig. S4A). The structures of the individual Skd3 protomers superimpose with a root mean square deviation (RMSD) of <0.6 Å, except for a ~20° inward bend of the interdomain helix in the P6 protomer (fig. S4A). The NBD ring in most Skd3 hexamers adopts a right-handed open spiral conformation, with P1 to P6 designating the protomers from the lowest to the uppermost positions in the spiral. The pore rings of all six protomers are well resolved and contact an additional contiguous elongated density in the central pore (Figs. 1D and 2A), which was interpreted to arise from a trapped substrate polypeptide and modeled as a poly-alanine chain in the extended conformation. Each protomer contacts the substrate via multiple interactions (Fig. 2, B and C). The conserved Y430 and V431 in the pore loop form the main contacts that clamp the substrate.
Analogous to ClpB and Hsp104, Skd3 has a secondary pore loop with multiple residues (E416 and H418) positioned near the substrate polypeptide, although not as close as the primary pore loop (Fig. 2C, right). E416 and H418 are positioned such that they may form interprotomer contacts with the clockwise neighbor. Compared to the Skd3 hexamer structure from Cupo et al. (38), the Skd3 spiral in the current structure is more extended (Fig. 2B), allowing the pore loop contacts of the six protomers to descend along a total of 12 amino acids in the substrate polypeptide in a steep spiral staircase, each spanning two amino acids (5.5 Å) and completing a 60° turn (Fig. 2C, left).
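The staircase geometry lends itself to a quick sanity check. The short sketch below multiplies out the per-protomer step parameters quoted above; the per-step values come from the structure described in the text, while the totals are our own arithmetic, shown purely for illustration:

```python
# Spiral-staircase geometry of the Skd3 pore-loop contacts.
# Per-protomer values are taken from the structure described in the text;
# the derived totals are simple arithmetic for illustration.

N_PROTOMERS = 6
RESIDUES_PER_STEP = 2      # each pore loop advances by 2 amino acids
RISE_PER_STEP_A = 5.5      # angstroms, axial rise per protomer step
TWIST_PER_STEP_DEG = 60.0  # degrees of rotation per protomer step

total_residues = N_PROTOMERS * RESIDUES_PER_STEP      # 12 aa gripped
total_rise = N_PROTOMERS * RISE_PER_STEP_A            # 33 A axial span
total_twist = N_PROTOMERS * TWIST_PER_STEP_DEG        # 360 deg = one full turn

print(f"Residues engaged across the spiral: {total_residues} aa")
print(f"Axial span of the staircase:        {total_rise:.1f} A")
print(f"Cumulative twist:                   {total_twist:.0f} deg")
```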
The resolution of the NBD structure allowed us to assign the nucleotide states of the individual ATPase sites based on the density for the bound ATPγS and the position of key catalytic residues. The Skd3 ATPase site consists of the canonical Walker A motif (K387 and T388), WB motif (D454 and E455), sensor 1 (N496), sensor 2 (R620), the catalytic arginine finger R561, and a network of hydrophobic residues contacting the adenine moiety (Fig. 2D). The catalytic side chains are well resolved in protomers P2 to P5 and less well resolved in P1 and P6, suggesting higher flexibility in these seam protomers (fig. S4C). Protomers 2 to 6 show well-resolved density for ATPγS, which coordinates a Mg2+ via the nonbridging oxygens on the β- and γ-phosphates (Fig. 2D and fig. S4C). In addition, the arginine finger R561 from the clockwise neighboring protomer was positioned near the γ-phosphate in P2 to P6, a hallmark of the catalytically active conformation. In the P1 protomer, in contrast, weaker and partial density was observed for ATPγS, and the catalytic contact from R561 is missing (fig. S4C). Thus, protomers P2 to P6 are ATP-bound in the open spiral of Skd3, whereas P1 is likely in a mixture of post-hydrolysis and apo states.
The structure rationalizes many disease-linked mutations in Skd3 ( fig. S5). Residues R408, E435, Y617, R628, and E639 lie on the interface with the clockwise neighboring protomer; R417 and R475 interact with the counterclockwise neighbor. Mutation of these residues is therefore anticipated to disrupt ring assembly. E435 and R417 are also part of the pore 1 and pore 2 loops, respectively, and could help position the pore loops for substrate interactions. Cupo et al. (38) showed that R417 plays a more important role in Skd3 than in ClpB/Hsp104. M411, H460, C486, and Y567 form the folded core of the large subunit in each protomer, and I682, R650, A591, and G646 stabilize folding of the small subunit. Their mutations are therefore likely to disrupt the folding of the individual NBDs.
The lower resolution of the P1 protomer suggests conformational heterogeneity in the seam protomers. Consistent with this notion, additional EM density was observed at the P1-P6 junction at decreased thresholds. 3D variability analysis (3DVA) in CryoSPARC was performed to test this hypothesis, which revealed motions of the seam protomers in the first major variability component (movie S1). 3D classification based on the models generated from 3DVA allowed us to isolate subpopulations of particles in which the NBD ring forms a more closed spiral, which was refined to a resolution of 3.1 Å (movie S1); these movements are conserved among AAA+ ATPases and proposed to provide the force generation mechanism that drives substrate translocation. At lower thresholds, additional EM density corresponding to the N-terminal domains of Skd3 was observed. For P1 to P5, EM density was observed for the N-terminal ARDs and the interdomain linker (residues 310 to 336), with the strongest density in the C-terminal half of the linker closer to the NBD. This allowed an AlphaFold-predicted model of the Skd3 ARD and the linker region (minus residues 198 to 267, which comprise an insertion domain in the ARD) to be docked as a rigid body into the cryo-EM density (Fig. 2F), suggesting that the linker forms a relatively rigid helix that positions the ARDs and may mediate interdomain communication with the NBDs. Nevertheless, the resolution decreases significantly midway through the interdomain helix, likely reflecting flexibility in the hinge between this helix and the ARD or disorder in the N-terminal end of the linker. EM density for the ARD of P6 is weaker and cannot be fit, suggesting higher flexibility of the ARD and the interdomain linker for the protomer at the uppermost position of the spiral (Fig. 2F and fig. S4B). In summary, high-resolution structures of both the closed and open spiral conformations of the Skd3 hexamer reveal ratchet-like motions of the seam protomers that are implicated in driving substrate translocation, visualize the nucleotide state of an AAA+ ATPase in the open spiral conformation, and suggest the molecular basis of disease-linked Skd3 mutations.
Dodecameric PARL Skd3 forms a fenestrated cage
During cryo-EM data analysis in CryoSPARC, we found that the 3D classification and refinement algorithms were heavily biased toward optimizing the alignment of the NBD but did not effectively recover dodecamer particles or separate them from hexamers. In contrast, classification of the 2D images in RELION allowed in silico purification of the PARL Skd3 dodecamers, which was refined to an overall resolution of 6.5 Å (Fig. 1E, figs. S2 and S7, and Table 1). The local resolution varied from ~4 Å for the designated lower NBD ring, to ~8 Å in the ARDs, to 8 to 12 Å in the upper NBD ring (fig. S7A). Strong density could be observed for the C terminus of the interdomain helix connecting to the lower NBD ring (fig. S7A). Local resolution drops significantly midway through the interdomain helices in the lower hexamer, likely reflecting increased conformational flexibility in the linker as was observed in the hexamer structure. The lower local resolution of the upper hexamer further suggests that the dodecamer interface is dynamic, which would lead to variations in the relative orientation of the two hexamers and blurring of the upper hexamer density if the refinement optimizes the alignment of the lower NBD ring. To test this hypothesis, we carried out focused refinement of the upper and lower NBD rings. This resulted in improved resolutions, with the lower NBD ring refined to 3.8 Å and the upper NBD ring refined to 7.7 Å (Fig. 3A). The spiral conformation can be discerned in both NBD rings. The Skd3 dodecamer forms a fenestrated cylindrical cage, 130 Å in diameter and 180 Å in height, and is comparable in size to the GroEL tetradecamer and TRiC hexadecamer (fig. S7F). Unlike GroEL, which consists of two heptameric rings stacked back-to-back, the interior of the dodecamer forms a contiguous large cavity sufficient to fit a globular protein/complex of ~70 kDa, such as the model protein luciferase used in refolding studies (Fig. 3B). In addition, unlike GroEL and TRiC, in which the internal cavity is sealed from solvent upon lid closing, the cavity in Skd3 is fenestrated and open to surrounding solvent, with gaps between the neighboring ARDs and interdomain helices that allow diffusion of solvent and small molecules while blocking the entry or exit of most globular proteins.
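As a rough plausibility check on the ~70-kDa size limit, the following sketch estimates the diameter of a compact globular protein of that mass from a typical protein packing density (~1.35 g/cm³). The calculation is our own illustration, not taken from the paper:

```python
import math

# Back-of-the-envelope estimate: how large is a compact globular protein of
# ~70 kDa, the approximate size limit quoted for the Skd3 dodecamer cavity?

AVOGADRO = 6.022e23          # 1/mol
DENSITY = 1.35               # g/cm^3, typical packing density of folded proteins
MASS_KDA = 70.0              # client mass near the quoted cavity limit

mass_g = MASS_KDA * 1e3 / AVOGADRO          # grams per molecule
volume_cm3 = mass_g / DENSITY               # cm^3 per molecule
volume_A3 = volume_cm3 * 1e24               # 1 cm^3 = 1e24 cubic angstroms
radius_A = (3 * volume_A3 / (4 * math.pi)) ** (1 / 3)

print(f"Estimated diameter of a {MASS_KDA:.0f}-kDa sphere: {2 * radius_A:.0f} A")
# ~55 A: small enough to sit inside a fenestrated cage with 130 A x 180 A
# outer dimensions once the wall thickness of the rings is accounted for.
```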
Despite regions with low resolution, the reconstruction shows that the PARL Skd3 dodecamer is composed of head-to-head dimers of two opposing hexamers, with the ARDs mediating contacts at the interface between the two hexamers (Fig. 3C). The density for the ARDs is clearest in protomers P2 to P4, allowing an AlphaFold-predicted model for the ARD and interdomain helix region to be docked as a rigid body into these densities (Fig. 3C). The docking model suggests that the dodecamer interface is mediated in part by "side-to-side" interactions of the ARDs from the opposing hexamers via residues in the "turn loops" of the ankyrin repeat motifs (ARMs), including 145NN in ARM1, 177NRN in ARM2, and 277DD in ARM3 (Fig. 3C, inset). These residues are conserved or semi-conserved among Skd3 homologs (Fig. 3D and fig. S8A) but are distinct from the consensus sequence (GH) at the turn loops of most ARDs (44).
To test the role of these contacts in Skd3 dodecamer formation, we replaced the turn loop residues in ARM1 and ARM2 with the consensus sequence (named mutants L1GH and L2GH) and the turn loop Asp residues in ARM3 with GG (mutant L3GG). Using the reactivation of aggregated luciferase as an initial screen for the mutational effects on Skd3 activity, we found that mutation of the individual turn loops had either no effect or a modest effect on Skd3's chaperone activity (Fig. 3E and fig. S8B). However, combining the L1GH and L2GH mutations resulted in a three- to fourfold reduction in luciferase reactivation (Fig. 3E and fig. S8B, green). Although the yield of luciferase refolding varied depending on how the aggregate was generated, the mutational effects of turn loop residues were the same regardless of the nature of the aggregates (Fig. 3E and fig. S8B). Mass photometry measurements of the double mutant, Skd3_L1,L2GH, showed that its dodecamer formation was impaired (Fig. 3F versus Fig. 1B). In the presence of ATP, the ratio of dodecamer relative to hexamer was reduced from 0.91 ± 0.07 with Skd3_WB (Fig. 1B) to 0.41 ± 0.12 with Skd3_L1,L2GH_WB (Fig. 3F). In addition, as shown later (section "Dodecamer assembly-deficient Skd3 mutants uncouple disaggregation from client refolding"), steric blocks at the N terminus of PARL Skd3 impaired Skd3 dodecamer formation without affecting the hexamer. Together, these results support the role of the ARDs in mediating dodecamer formation and suggest that the conserved Asn residues in the ARD turn loops mediate hydrogen bonding interactions that form part of the dodecamer interaction interface.
Client protein undergoes protected folding on PARL Skd3
PARL Skd3 is an efficient chaperone that can generate enzymatically active luciferase from its aggregates in the absence of additional chaperones that promote folding [Figs. 3 and 4 and (27)]. This activity and the observation of the internal cavity in dodecameric PARL Skd3 (Fig. 3B) led us to ask whether PARL Skd3 provides a protected environment for folding in addition to acting as a disaggregase. We addressed this question by challenging the PARL Skd3-mediated luciferase refolding reaction with GroEL-D87K, which irreversibly binds unfolded polypeptides and early folding intermediates (45) and thus serves as a conformational probe for the folding state of the client protein released from the chaperone. In the continuous extraction model established for ClpB/Hsp104, client proteins such as luciferase are extracted from the aggregate by continuous threading through the pore ring and are released in an unfolded conformation, which can refold either spontaneously or with the assistance of other chaperones (Fig. 4A, lower pathway) (15,16). If PARL Skd3 acts solely via this mechanism, GroEL-D87K will be an effective inhibitor of luciferase refolding by sequestering the unfolded client protein released from the disaggregase (Fig. 4A, lower pathway), as was shown for ClpB (15,16). In contrast, if luciferase reaches a committed stage of folding on PARL Skd3 before it is released from the chaperone, the refolding reaction would be insensitive to the presence of GroEL-D87K (Fig. 4A, upper pathway).
To distinguish between these models, we measured the kinetics of reactivation of aggregated luciferase, which includes both the disaggregation and refolding reactions. PARL Skd3 mediates efficient reactivation of aggregated luciferase (Fig. 4A), as reported previously (27). Up to 1 μM GroEL-D87K, added either simultaneously with PARL Skd3 or after mixing PARL Skd3 with luciferase aggregates, affected the yield of the reaction by only 10 to 20% (Fig. 4B and fig. S9A). Regardless of how the luciferase aggregate was generated, all the luciferase reactivation reactions mediated by PARL Skd3 were impervious to GroEL-D87K ( Fig. 4B and fig. S9A). These results strongly suggest that unfolded luciferase molecules are strongly protected from external chaperones by PARL Skd3, and further, they reach a conformation committed to folding before release from this chaperone.
As controls, we tested the effectiveness of GroEL-D87K in reactions that allow luciferase to fold from the denatured state, by dilution of GdmHCl-denatured luciferase into aqueous solution to concentrations below 50 nM. Under these conditions, a fraction of denatured luciferase underwent spontaneous folding, which was abolished by 320 nM GroEL-D87K (Fig. 4C, black versus blue), supporting GroEL-D87K as an effective trap for unfolded proteins. Refolding of denatured luciferase was enhanced by PARL Skd3 (Fig. 4C, red versus black); however, when denatured luciferase was presented simultaneously to PARL Skd3 and GroEL-D87K, refolding was abolished (Fig. 4C, red versus navy), indicating that GroEL-D87K outcompetes PARL Skd3 in binding unfolded luciferase. In contrast, when denatured luciferase was preloaded on PARL Skd3 followed by addition of GroEL-D87K, Skd3-dependent luciferase refolding was strongly protected, especially at early times (Fig. 4D). This observation is similar to the protection of luciferase in the coupled disaggregation-refolding reaction (Fig. 4B), and together, they strongly suggest that folding of luciferase can occur on PARL Skd3 in a highly protected manner. As additional controls, Hsp104/40/70-mediated reactivation of aggregated luciferase was sensitive to GroEL-D87K (fig. S9, B and C), analogous to observations with ClpB/DnaKJ (15,16). Luciferase reactivation mediated by the potentiated mutant Hsp104_A503S, which bypasses the Hsp40/70 requirement (46), was also abolished by GroEL-D87K (fig. S9D). Together, these results show that Skd3-mediated luciferase reactivation uses a mechanism distinct from the continuous threading model established for ClpB/Hsp104, and strongly suggest that luciferase molecules released from PARL Skd3 are in a conformation committed to folding.
To provide independent evidence that luciferase folding can initiate when bound to PARL Skd3, we analyzed the protein complexes generated during the refolding reaction. The coupled luciferase disaggregation-refolding reaction was chilled at 4°C for 15 min after its initiation. Large aggregates were removed by centrifugation, and soluble proteins/complexes of different sizes were separated by gel filtration chromatography and analyzed immediately upon elution (Fig. 4E). Significant luciferase activity was detected both in high molecular weight fractions that correspond to Skd3 oligomers (fractions 17 to 20, or 8.5- to 10-ml elution volume) and in low molecular weight fractions that correspond to free luciferase monomer (fractions 28 to 34, or 14- to 17-ml elution volume) (Fig. 4E, blue). In contrast, luciferase activity was detected only in the late fractions for reactions with PARL Skd3_WB or without chaperone, at levels 200-fold below that of the reaction with PARL Skd3 (fig. S9, E and F). The cofractionation of luciferase activity with Skd3 oligomers strongly suggests that a fraction of luciferase acquires either the native state or a folding-competent conformation when bound to PARL Skd3. Western blot analysis showed that most luciferase protein eluted in the high molecular weight fractions associated with PARL Skd3 oligomers (Fig. 4, E and F, green, and fig. S9, G to I), whereas most enzymatically active luciferase was detected in the unbound fractions (Fig. 4, E and F, blue). This further suggests that luciferase molecules that have been solubilized but not yet folded remain stably bound to Skd3 oligomers and are more promptly released from Skd3 when they reach a folding-competent conformation.
Together, the results of this section show that PARL Skd3 acts via a mechanism distinct from the continuous extraction model that describes the ClpB/Hsp104 disaggregase systems. Instead, solubilized client proteins remain stably bound to and protected by PARL Skd3 and can undergo folding before their release from PARL Skd3.
Dodecamer assembly-deficient Skd3 mutants uncouple disaggregation from client refolding

We next asked whether formation of the Skd3 dodecamer is involved in the protected folding of clients on Skd3 by examining Skd3 mutants defective in dodecamer assembly. The reduced activity of Skd3_L1,L2GH in the coupled disaggregation-refolding reaction suggested a role of the dodecamer in the chaperone activity of Skd3 (Fig. 3). As the defect of this mutant is modest, we sought to further disrupt dodecamer assembly by introducing N-terminal extensions to PARL Skd3 that could act as steric blocks. In support of this notion, fusion of the SUMO moiety to PARL Skd3 abolished dodecamer formation (fig. S10A). MPP Skd3, which retains a 30-amino acid inhibitory sequence N-terminal to PARL Skd3 [Fig. 1A and (27)], showed heterogeneous assembly in mass photometry measurements (fig. S10B). However, MPP Skd3 displayed Skd3 concentration-dependent ATPase activation with rates within twofold of that of PARL Skd3 [fig. S10C and (27)], consistent with its ability to assemble ATPase-active hexamers. This suggests that the behavior of MPP Skd3 in mass photometry was not due to its inability to form hexamers but to poor biophysical properties. Physiologically, MPP Skd3 needs to associate with the mitochondrial inner membrane to gain access to the PARL protease and likely has a propensity to bind to hydrophobic surfaces, which may complicate surface-based mass photometry measurements. We therefore tested deletions of various regions of the inhibitory sequence to improve its stability in solution (MPPΔ1, MPPΔ2, and MPPΔ3; fig. S10D). All three mutants displayed substantially reduced activity in the coupled luciferase disaggregation-refolding reaction, as did MPP Skd3 [Fig. 5A, fig. S10E, and (27)]. MPPΔ1 Skd3 was prone to aggregation during purification, and MPPΔ2 Skd3 remained poorly behaved on mass photometry (fig. S10F); these mutants were therefore not pursued further. In contrast, MPPΔ3 Skd3, in which the C-terminal hydrophobic residues in the inhibitory sequence were removed, was well expressed and soluble. Mass photometry measurements showed that this mutant assembled efficiently into hexamers but was strongly impaired in dodecamer formation compared to PARL Skd3 (Fig. 5, B and C).
We tested whether the reduced chaperone activities of Skd3_L1,L2GH and MPPΔ3 Skd3, which are specifically defective in dodecamer assembly, were due to impaired disaggregase activity or to less efficient client refolding. To this end, we measured and compared the efficiencies by which wild-type and mutant Skd3 solubilize luciferase aggregates. Preformed luciferase aggregates were incubated with Skd3 and, at various times, the reaction was quenched using apyrase, and soluble and aggregated proteins were separated by centrifugation and visualized by Western blot analysis. These measurements showed that both mutants solubilized luciferase aggregates with efficiencies and kinetics comparable to wild-type PARL Skd3 (Fig. 5, D and E), indicating that the defect of these mutants in luciferase reactivation was not due to impaired disaggregase activity but rather can be attributed to reduced refolding efficiency.
If the Skd3 dodecamer mediates protected refolding of luciferase, a prediction is that the dodecamer assembly-deficient mutants no longer protect the refolding reaction from GroEL-D87K. This was indeed the case: Luciferase reactivation by Skd3_L1,L2GH and MPPΔ3 Skd3 decreased by over twofold in the presence of 320 nM GroEL-D87K and was reduced to background levels (1.5 and 1.4% refolded luciferase, respectively) in the presence of 3 μM GroEL-D87K (Fig. 5, F to H). In contrast, the reaction with wild-type PARL Skd3 was reduced only~10% by 320 nM GroEL-D87K and~50% by 3 μM GroEL-D87K (Figs. 4 and 5G). We also note that neither mutant completely abolished dodecamer formation (Figs. 3F and 5, B and C); therefore, the observations with these mutants provide a lower limit for the sensitivity of the Skd3 hexamer-mediated luciferase refolding reaction to GroEL-D87K. If client proteins fold within the PARL Skd3 dodecamer, another prediction is that proteins whose size exceeds the dimension of the dodecamer cavity cannot be efficiently refolded by this chaperone. To test this model, we examined the refolding of β-galactosidase (β-Gal), which has a monomeric molecular weight of 115 kDa and an elongated structure in the folded monomer, making it incompatible to fit in the internal cavity of the Skd3 dodecamer. PARL Skd3 reactivated preaggregated β-Gal by only 4% above background levels ( fig. S11A). Since this reaction requires Skd3 to both solubilize the aggregate and refold β-Gal, we further measured the refolding of urea-denatured β-Gal, which minimizes contributions from the aggregate solubilization step. Skd3 increased the refolding of ureadenatured β-Gal by only 15 to 20% above that of the spontaneous reaction ( fig. S11B). These observations contrast with those with Hsp104/40/70, which can efficiently refold aggregated β-Gal (47). While the repertoire of client proteins that can be refolded by Skd3 remains to be determined, these observations are consistent with the dimension of the dodecamer cavity, which likely imposes a size limit on the client proteins that can undergo protected refolding on Skd3.
Collectively, the results in this section show that disaggregase activity can be uncoupled from protected client refolding in dodecamer assembly-deficient Skd3 mutants. Thus, the Skd3 hexamer is sufficient for disaggregase activity, whereas efficient and protected client refolding requires assembly of the Skd3 dodecamer.
The ARD can mediate substrate interactions
In the molecular model for the Skd3 dodecamer, multiple conserved hydrophobic and aromatic residues on the concave face of the Skd3 ARD and on an insertion domain between ARM2 and ARM3 (residues 198 to 267 in isoform 1) together form a hydrophobic groove that lines the internal cavity (Fig. 6A). Mutation of hydrophobic residues that constitute this surface to Ala or Gly impaired luciferase reactivation by PARL Skd3 (Fig. 6A). In addition, three MGCA-linked mutations in Skd3 (Y272C, T268M, and A269T) are located at this hydrophobic surface (Fig. 6A); these mutations also severely disrupted PARL Skd3-mediated luciferase refolding (Fig. 6B). These observations support an important role of this hydrophobic groove in the chaperone function of Skd3.
These observations led us to ask whether the Skd3 ARD directly participates in substrate interactions. To probe the interaction of Skd3 with unfolded polypeptides, we used the model substrate casein, which exists predominantly in a disordered state. Equilibrium titrations based on the anisotropy of fluorescein isothiocyanate (FITC)-labeled casein showed that in the presence of ATPγS, PARL Skd3 and PARL Skd3_WB bound FITC-labeled casein with modest affinity, with equilibrium dissociation constants (Kd) of 3.4 and 3.6 μM, respectively (Fig. 6D). Unexpectedly, in the absence of ATP or ATP analogs, where PARL Skd3 is predominantly monomeric (fig. S1E), FITC-casein was bound with a Kd value of 0.86 μM (Fig. 6D). The smaller anisotropy change of FITC-casein at saturating concentrations of apo- versus ATPγS-bound PARL Skd3 was consistent with a smaller number of FITC probes in casein being immobilized by binding of the PARL Skd3 monomer than the oligomer. In the presence of ATP and the ATP regeneration system (ARS), which allows ATP binding and hydrolysis, the observed binding affinity and anisotropy change were in between the ATPγS and apo states (Fig. 6D). The isolated ARD also bound to FITC-casein with a Kd value of 3.7 μM (Fig. 6D), within fivefold of that of apo-Skd3. Thus, the ARD can directly participate in interaction with an unfolded polypeptide.
To independently assess the participation of the ARD in substrate interactions, we tested whether it can prevent the aggregation of the amyloid β (Aβ42) peptide, a proteolytic fragment of the amyloid precursor protein that is highly prone to aggregation. To compare the activity of the ARD with PARL Skd3 in the same oligomeric state, we carried out the reactions without added ATP and ARS such that PARL Skd3 would be predominantly monomeric. PARL Skd3 effectively reduced the extent and delayed the kinetics of amyloid formation by Aβ42 under these conditions (Fig. 6E). Substantial inhibition of Aβ42 was observed at 0.1 μM PARL Skd3, 1/50 of the concentration of Aβ42, suggesting high-affinity recognition (Fig. 6, E and G). The ARD also delayed Aβ42 aggregation, albeit about fivefold less efficiently compared to PARL Skd3 (Fig. 6, F and G). The ability of PARL Skd3 and the ARD to reduce Aβ42 fibril formation was corroborated by transmission EM (TEM) analysis (Fig. 6H). Mutants L1,L2GH and MPPΔ3 inhibited Aβ42 aggregation as effectively as PARL Skd3 (Fig. 6H and fig. S12), indicating that these mutations specifically disrupt the dodecamer interface (Fig. 5) but are unlikely to impair interaction with hydrophobic regions of substrate proteins. In contrast, mutant 246-249A was less effective in inhibiting Aβ42 aggregation (Fig. 6G and fig. S12), supporting its role as a putative substrate contact site on the ARD (Fig. 6A). Thus, PARL Skd3 can protect unstructured and hydrophobic polypeptides from forming irreversible aggregates under ATP-depleted conditions. Moreover, these results provide additional evidence that the Skd3 ARD participates in client interactions that shield hydrophobic regions of a protein from aggregation.
DISCUSSION
Skd3 is an AAA + chaperone that can efficiently reactivate aggregated model substrates and may serve as the missing general chaperone in the mitochondrial IMS, a crowded space with diverse and challenging proteostatic demands. However, the molecular mechanisms that give rise to the remarkably effective chaperone activity of Skd3 remain elusive, and its precise roles in mitochondria remain to be determined. In this work, the combination of structural and biochemical analysis strongly suggests that Skd3 is a multifunctional chaperone that not only acts as a disaggregase, as previously described (27), but also can further provide a protected environment for client folding via dodecamer assembly. Our results suggest a model for how a molecular chaperone couples the disaggregation of client proteins to their refolding, and thus provides an effective repair mechanism for misfolded, aggregated proteins.
The Skd3 NBD is highly homologous to NBD2 of ClpB and Hsp104. High-resolution cryo-EM structures of the PARL Skd3 hexamer here showed that the Skd3 NBD alternates between open and closed spiral conformations (Fig. 2). The closed spiral conformation, also reported by Cupo et al. (38), resembles the structures observed for many protein-remodeling AAA+ ATPases (18-20, 48), in which the pore loops of protomers P1 to P5 contact the bound substrate in a spiral staircase arrangement, whereas the P6 "seam" protomer is detached (Fig. 7, inset). The open spiral conformation of PARL Skd3 is less often observed but is structurally similar to the "extended" spiral observed for Hsp104-NBD2 bound to casein (19). In this conformation, the P6 protomer adopts the uppermost position in the spiral staircase and advances contact with the substrate polypeptide by two additional amino acids (Fig. 7, inset). The open spiral conformation is likely favored by high ATP occupancy and thus captured by the combination of the WB mutation and slowly hydrolyzing ATPγS in this study. Collectively, these structures are consistent with the conserved force-generation mechanism of protein-threading AAA+ proteins, in which the seam protomer transitions through the closed spiral conformation as it moves from the lowest to the uppermost position of the open spiral during each ATPase cycle. This movement allows the NBD ring to move "up" the substrate polypeptide, generating a force that propels substrate translocation (Fig. 7, inset). These structural observations, together with the requirement of pore loop and ATPase active-site residues for Skd3-mediated disaggregation (27), suggest that the Skd3 hexamer harbors most of the molecular features to act as a disaggregase by threading substrate proteins through the pore ring (Fig. 7, steps 1 and 2).
Nevertheless, multiple observations here demonstrate that disaggregation via continuous substrate threading, established for ClpB/ Hsp104, is insufficient to explain how PARL Skd3 efficiently reactivates aggregated luciferase. The coupled disaggregation-refolding reaction mediated by PARL Skd3 is strongly protected from GroEL-D87K, which traps unfolded polypeptides and early folding intermediates (Fig. 4), indicating that luciferase molecules released from PARL Skd3 are in the native or near-native conformation. This contrasts with the ClpB (15,16) and Hsp104 ( fig. S9, B to D) systems, which are sensitive to the GroEL-D87K trap, and excludes models in which the substrate protein is completely threaded through the Skd3 pore ring. Instead, this observation suggests that partial threading is sufficient to dislodge substrate proteins from the aggregate, as has been observed in some cases with ClpB (49) and Hsp104 (50). The resistance of the PARL Skd3-mediated refolding reaction to GroEL-D87K also excludes models in which substrates are folded via multiple cycles of binding and release by Skd3, which would expose unfolded proteins susceptible to GroEL-D87K. Last, gel filtration chromatography analysis of reaction intermediates showed that solubilized substrates remain stably associated with PARL Skd3 oligomers, and a fraction of substrates acquired the native or near-native conformation when bound to PARL Skd3 (Fig. 4E). Collectively, these data support a model in which aggregate solubilization is tightly coupled to the initiation of client refolding in a protected environment provided by PARL Skd3.
The results here further suggest that client refolding is facilitated by Skd3 dodecamer assembly. Skd3 mutants that disrupt the dodecamer interface retain disaggregase activity but specifically impair the refolding of luciferase (Fig. 5). This and the higher susceptibility of these mutants to GroEL-D87K show that while the hexamer is necessary and sufficient for disaggregation, protected client refolding depends on formation of the Skd3 dodecamer. A recent report also showed that Skd3 dodecamer assembly is correlated with increased luciferase reactivation (38). Structurally, the interior of the PARL Skd3 dodecamer provides a cavity sufficient to fit a folded or partially folded protein of ~70 kDa while also shielding the protein from aggregation and other off-pathway interactions (Fig. 3). The stable association of the Skd3 oligomer with solubilized luciferase, some of which attained a native or near-native conformation (Fig. 4E), is most consistent with newly extracted substrate protein being encapsulated in the internal cavity of the Skd3 dodecamer. The fenestrated nature of the PARL Skd3 dodecamer structure allows the facile exchange of water, small molecules, and cofactors, providing an environment in its internal cavity that is conducive to folding. On the other hand, the Skd3 dodecamer provides no obvious mechanism to initiate substrate translocation, as entry into the internal cavity through the pore ring would be in the opposite direction of force generation by protein-threading AAA+ ATPases. We therefore suggest that the role of the Skd3 dodecamer occurs after substrate extraction by the hexamer, and that dodecamer assembly transitions Skd3 from the translocation/disaggregation phase to the refolding phase of its chaperone cycle (Fig. 7, step 3).

(Fig. 7 legend) Step 1, an Skd3 hexamer recognizes aggregated proteins and initiates substrate extraction. Step 2, an Skd3-bound substrate protein is dislodged from the aggregate. Step 3, the Skd3 dodecamer assembles, and the substrate protein initiates folding in the internal cavity. Step 4, the substrate protein, either folded or in a folding-competent conformation, is released from Skd3 upon dodecamer disassembly. The inset shows the movement of the seam protomer (P6) from the lowest to the uppermost position as the NBD hexamer rearranges from the closed to the open spiral conformation, which generates a mechanical force that "pulls" the substrate polypeptide in the downward direction (red arrow). The question marks denote that the molecular mechanism underlying the transitions is still unclear.
How newly extracted substrates enter the internal cavity of the dodecamer is unclear. The simplest model would involve a slippage of the substrate during translocation, leading to its release from the pore ring. Such a scenario could be favored by several factors: (i) limited pulling force from a single Skd3 AAA + ring, which may be insufficient to propel the complete threading of a large substrate such as luciferase; (ii) initiation of client folding, which would impede continued threading [ClpB releases partially translocated substrates upon encountering a folded domain in the substrate (15,49)]; and (iii) interaction of parts of the substrate protein with a second Skd3 hexamer, which would oppose continued threading. Regardless of the precise mechanism, release of partially threaded substrates has been observed with ClpB and was proposed to support substrate reactivation, by avoiding unnecessary unfolding of parts of the protein and preventing additional unfolded polypeptide segments from interfering with refolding (15,49).
Unique to Skd3 is the fusion of its AAA+ ATPase moiety to the ARD, which remains the least understood component of this chaperone. Although the ARD is not structurally well resolved, our results here suggest multiple roles for this domain. First, the ARD mediates assembly of the Skd3 dodecamer. Second, the ARD could directly participate in substrate interactions (Fig. 6). Docking of the AlphaFold-predicted model in the dodecamer EM density further suggests that in the interior of the cavity, conserved hydrophobic and aromatic residues cluster on a concave surface in the helical domain of the Skd3 ARD and are shielded by the insertion domain (Fig. 6A). The deleterious effect of mutations at this surface on luciferase refolding and Aβ aggregation suppression by Skd3 (Fig. 6B and fig. S12) supports the role of this surface in chaperone-client interactions. It could be envisioned that the insertion domain acts as a flap that closes upon the hydrophobic cluster, and its opening would expose an extensive hydrophobic groove that mediates interaction with hydrophobic residues on unfolded and partially folded substrate proteins. These observations, together with consideration of the direction of substrate threading by the Skd3 hexamer, further place the ARDs as the initial sites of contact with the protein aggregate (Fig. 7, step 1).
We suggest the following working model for the chaperone cycle of PARL Skd3 (Fig. 7). Hexameric PARL Skd3 mediates the recognition of protein aggregates via its ARD and initiates ATP-driven threading of the substrate polypeptide through its pore ring (step 1). For client proteins such as luciferase, partial threading is sufficient to dislodge Skd3-bound substrates from the aggregate, freeing the ARDs to initiate dodecamer assembly (step 2). Solubilized but unfolded client proteins are captured in the internal cavity of the dodecamer, which provides an aqueous and confined environment in which substrate proteins can sample the folding trajectory while being protected from aggregation (step 3). It appears that client proteins such as luciferase can attain either the native or near-native conformation while associated with Skd3. As the dodecamer interface is dynamic, its disassembly provides a mechanism to release the substrate protein (step 4). Many aspects of this model remain to be tested and understood, including the molecular determinants that regulate Skd3 assembly, the degree of substrate threading required for solubilization, whether and how the Skd3 dodecamer senses and regulates client conformation, and whether additional functions are associated with the higher-order assembly state of Skd3.
The mitochondrial IMS is a crowded and complex environment with diverse and challenging proteostatic demands. Aside from a small number of specialized chaperones, no general chaperones other than Skd3 have been identified in this space. The multiple chaperone activities of Skd3 may be particularly suited to the diverse proteostatic needs in this environment. On the other hand, the dimension of the Skd3 dodecamer cavity imposes a size limit on IMS proteins that can be refolded by this chaperone. It remains to be understood which of the chaperone activities described here plays the dominant role in vivo, and how Skd3 cooperates with other chaperones and proteases to maintain proper protein folding and quality control in this space.
MATERIALS AND METHODS

Plasmids
Codon-optimized DNA fragments encoding PARL Skd3 and MPP Skd3 (Twist Biosciences) were cloned into the pET28-His6-SUMO plasmid behind the SUMO moiety using Gibson assembly. To increase the efficiency of SUMO protease cleavage, a GGS linker sequence was introduced at the N terminus of the MPP Skd3 coding sequence. Site-directed mutagenesis of PARL Skd3 and MPP Skd3 was carried out using the QuikChange mutagenesis protocol (Agilent). To express PARL Skd3_ARD, a stop codon was introduced after R327 using the QuikChange mutagenesis protocol.
Mass photometry
One hundred microliters of PARL Skd3 was centrifuged at 18,000g for 30 min at 4°C, exchanged to K100 buffer [50 mM tris-HCl (pH 8.0), 100 mM KCl, 10 mM MgCl2, and 1 mM DTT] containing 5 mM of the desired nucleotide over a PD SpinTrap G25 column (Cytiva), and diluted with the same buffer to submicromolar concentrations during measurements. Measurements were carried out on a Refeyn One MP mass photometer (Refeyn Ltd.) with a 60-s acquisition time. Movies were analyzed using DiscoverMP (Refeyn Ltd.), and the contrast-to-mass conversion was achieved by calibration using the molecular weight standards bovine serum albumin (66 and 132 kDa), Sgt2 (80 kDa), thyroglobulin (330 kDa), and apoferritin (440 kDa). The events recorded from two to three independent measurements were pooled and fitted to Gaussian distributions to extract the mean molecular mass and relative amount of each peak.
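As an illustration of the peak-fitting step, the sketch below fits a three-component Gaussian mixture to a synthetic mass histogram. The species means assume a protomer mass of roughly 70 kDa and are placeholders, not the calibrated values from the measurement; only the fitting procedure itself mirrors the analysis described above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Gaussian-fitting step on a synthetic mass histogram.
# Species means assume a ~70-kDa protomer (illustrative, not calibrated):
# monomer ~70 kDa, hexamer ~420 kDa, dodecamer ~840 kDa.

rng = np.random.default_rng(0)
events = np.concatenate([
    rng.normal(70, 8, 600),     # monomer-like events (kDa)
    rng.normal(420, 25, 800),   # hexamer-like events
    rng.normal(840, 40, 400),   # dodecamer-like events
])

counts, edges = np.histogram(events, bins=120, range=(0, 1100))
centers = 0.5 * (edges[:-1] + edges[1:])

def three_gauss(x, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    gauss = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return gauss(a1, m1, s1) + gauss(a2, m2, s2) + gauss(a3, m3, s3)

p0 = [100, 75, 10, 100, 430, 25, 50, 850, 40]  # rough initial guesses
popt, _ = curve_fit(three_gauss, centers, counts, p0=p0)

for label, (a, m, s) in zip(["monomer", "hexamer", "dodecamer"],
                            np.reshape(popt, (3, 3))):
    area = a * abs(s) * np.sqrt(2 * np.pi)   # peak area ~ relative amount
    print(f"{label:>9s}: mean = {m:6.1f} kDa, relative amount = {area:8.0f}")
```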
Biochemical assays

Coupled luciferase disaggregation and refolding
Luciferase aggregates were prepared using two protocols. First, 50 μM purified luciferase was denatured in freshly prepared 8 M urea in refolding buffer [25 mM K-Hepes (pH 8.0), 150 mM KOAc, 10 mM Mg(OAc)2, and 10 mM DTT] for 30 min at 30°C. Denatured luciferase was diluted 100-fold in refolding buffer and allowed to aggregate by incubation at room temperature for 5 min (U8L0.5). Second, 20 μM purified luciferase was denatured in 4 M urea in refolding buffer for 30 min at 30°C, diluted 100-fold in refolding buffer, and incubated at 30°C for 5 min before the addition of chaperones (U4L0.2).
Skd3 samples were centrifuged at 18,000g for 30 min at 4°C before all experiments. Coupled disaggregation-refolding reactions were initiated by mixing 50 nM U8L0.5 luciferase aggregates or 200 nM U4L0.2 luciferase aggregates with a solution containing 1 μM Skd3 or Hsp104/40/70 in refolding buffer supplemented with 5 mM ATP and ARS (consisting of 1 mM creatine phosphate and 0.25 μM creatine kinase). Reactions were incubated at 30°C. At indicated times, an aliquot was removed and diluted 20-fold in luciferase assay reagent (Promega), and chemiluminescence was measured in a SpectraMax iD5 (Molecular Devices) using an integration time of 1 s. The percentage of refolded luciferase was calculated using a control reaction containing 50 nM native luciferase in refolding buffer, 5 mM ATP, and ARS. For reactions including the GroEL trap, purified GroEL-D87K was added either during or 5 min after initiation of the refolding reaction.
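The normalization to the native-luciferase control is simple; a minimal sketch (with hypothetical luminescence readings) makes the calculation explicit:

```python
# Sketch of the normalization described above: luminescence from the refolding
# reaction is expressed as a percentage of a matched native-luciferase control.
# All readings below are hypothetical placeholders, not data from the paper.

native_control_rlu = 1.8e6           # 50 nM native luciferase control
timepoints_min = [0, 15, 30, 60, 90]
reaction_rlu = [2.1e3, 2.4e5, 6.1e5, 9.8e5, 1.1e6]

for t, rlu in zip(timepoints_min, reaction_rlu):
    pct_refolded = 100.0 * rlu / native_control_rlu
    print(f"t = {t:3d} min: {pct_refolded:5.1f}% refolded luciferase")
```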
To measure the refolding of denatured luciferase (spontaneous or Skd3-assisted), 10 μM purified luciferase was denatured with 5 M GdmHCl for 1 hour at 25°C and diluted 100-fold into refolding buffer at 25°C containing 5 mM ATP and ARS, with or without PARL Skd3 and GroEL-D87K. Chemiluminescence was measured at specified times, as described above.
Gel filtration chromatography
To evaluate the association of Skd3 with luciferase during the refolding reaction, reactions were carried out using U8L0.5 luciferase aggregates and, 15 min after initiation, were centrifuged (14,000 rpm, 10 min, 4°C). The supernatant was fractionated on Superdex 200 preequilibrated in refolding buffer containing 5 mM ATP. Fractions (0.5 ml) were collected and, after the void volume, immediately measured for luciferase activity. The individual fractions were further subjected to Western blot analysis using anti-His (luciferase) and anti-clpB (15743-1-AP, Proteintech) antibodies.
Luciferase disaggregation
The coupled disaggregation-refolding reaction was initiated as described above using U8L0.5 luciferase aggregates. At specified times, aliquots of the reaction were removed, mixed with apyrase (2 U/ml), and flash-frozen. Quenched reaction aliquots were centrifuged at 18,000g for 10 min at 4°C. Luciferase in the soluble and total samples was detected by Western blot using anti-His antibody. His6-tagged cpSRP43 (10 nM), a 38-kDa plant chaperone, was mixed with the reaction after apyrase addition and used as a loading control.

ATPase assays

Assays were carried out at 25°C in refolding buffer. Skd3 samples were centrifuged at 18,000g for 30 min at 4°C before the measurements. Reactions were initiated by mixing indicated concentrations of Skd3 in refolding buffer containing 500 μM ATP with trace γ-32P-ATP. Aliquots of the reaction were quenched in 0.75 M potassium phosphate (pH 3.3). γ-32P-ATP and 32Pi were separated by thin-layer chromatography and quantified by autoradiography. Observed ATPase rate constants (V/[E]) were plotted as a function of Skd3 concentration and fit to Eq. 1,

$$\frac{V}{[E]} = \frac{k_{\max}\,[\mathrm{Skd3}]^{n}}{K_{M}^{\,n} + [\mathrm{Skd3}]^{n}} \qquad (1)$$

in which k_max is the ATPase rate constant at saturating Skd3 concentrations, K_M is the effective concentration for the formation of the ATPase-active complex, and n is the Hill coefficient.
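A minimal sketch of the Eq. 1 analysis, using scipy's curve_fit on synthetic rate data (the concentrations and rates below are placeholders, not measurements from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit observed ATPase rate constants (V/[E]) versus Skd3 concentration to the
# Hill equation (Eq. 1). All data points here are synthetic placeholders.

def hill(skd3, k_max, K_M, n):
    return k_max * skd3**n / (K_M**n + skd3**n)

skd3_uM = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v_obs = np.array([0.02, 0.08, 0.30, 1.10, 1.70, 1.95, 2.05])  # min^-1, synthetic

popt, pcov = curve_fit(hill, skd3_uM, v_obs, p0=[2.0, 0.5, 2.0])
k_max, K_M, n = popt
print(f"k_max = {k_max:.2f} min^-1, K_M = {K_M:.2f} uM, Hill n = {n:.1f}")
```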
Aβ42 fibrillation
Freshly purified monomeric Aβ42 (5 μM) was diluted in assay buffer [20 mM NaPi (pH 8.0) and 200 μM EDTA] with or without Skd3 and its variants at specified concentrations. All samples were prepared in low-binding Eppendorf tubes (Axygen) on ice by careful pipetting to avoid air bubbles. Each sample was then pipetted into multiple wells (100 μl per well) of a 96-well half-area plate of black polystyrene with a clear bottom and polyethylene glycol (PEG) coating (Corning 3881). Plates were sealed with Microseal (Bio-Rad, MSB1001) and incubated at 37°C under quiescent conditions in a plate reader (SpectraMax iD5). Amyloid formation was measured on the basis of the fluorescence of 6 μM thioflavin T at 482 nm. Data were fit using the AmyloFit software (55) to extract the half-times of aggregation. For TEM imaging, a sample was collected when the aggregation reached the plateau value, diluted fivefold, and imaged by TEM. Five microliters of the reaction mixture was loaded on glow-discharged carbon-coated Cu grids (300 mesh, Ted Pella Inc.) for 1 min. Grids were washed twice in doubly deionized water and stained with 2% uranyl acetate for 1 min. Images from different areas of the grid were acquired using a 120-kV electron microscope (FEI Tecnai T12), and representative images are shown.
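AmyloFit performs global fitting of mechanistic aggregation models; as a minimal stand-in for extracting a half-time from a single normalized ThT trace, a generic logistic fit suffices. The sketch below uses simulated data and is not the AmyloFit procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, t_half, k, base, amp):
    """Generic logistic curve for one ThT trace; t_half is the aggregation half-time."""
    return base + amp / (1.0 + np.exp(-k * (t - t_half)))

# Hypothetical ThT time course (hours vs. arbitrary fluorescence units)
t = np.linspace(0, 24, 49)
f = 100 + 900 / (1 + np.exp(-0.8 * (t - 8.0))) + np.random.normal(0, 15, t.size)

p0 = [t[np.argmax(np.gradient(f))], 1.0, f.min(), f.ptp()]
popt, _ = curve_fit(sigmoid, t, f, p0=p0)
print(f"estimated half-time: {popt[0]:.2f} h")
```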
FITC-casein binding
Lyophilized FITC-casein (Sigma-Aldrich) was resuspended in water at 20 mg/ml and frozen at −80°C until use. Skd3 variants were exchanged into K100 buffer using G-25 columns (Cytiva). Buffers were supplemented with either no nucleotide, 1 mM ATPγS, or 5 mM ATP plus ARS. All measurements were carried out in the corresponding buffer at 25°C. Binding of casein to Skd3 was measured by changes in fluorescence anisotropy on a FluoroLog 3-22 (Jobin Yvon), using 200 nM FITC-casein and serial additions of Skd3 to the indicated concentrations. The samples were excited at 482 nm, and the fluorescence anisotropy was recorded at 520 nm. To obtain the equilibrium dissociation constant between casein and Skd3, the data were fit to Eq. 2

$$A_{\mathrm{obs}} = A_0 + \Delta A \cdot \frac{\left([\mathrm{casein}]+[\mathrm{Skd3}]+K_d\right) - \sqrt{\left([\mathrm{casein}]+[\mathrm{Skd3}]+K_d\right)^2 - 4\,[\mathrm{casein}][\mathrm{Skd3}]}}{2\,[\mathrm{casein}]} \quad (2)$$

in which A_obs is the observed anisotropy value, A_0 is the anisotropy value of FITC-casein alone, ΔA is the change in anisotropy at saturating Skd3 concentrations relative to A_0, and K_d is the equilibrium dissociation constant for the interaction between FITC-casein and Skd3.
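Eq. 2 is the standard quadratic (ligand-depletion) binding isotherm, which avoids the free-ligand approximation at 200 nM FITC-casein. A minimal Python sketch of the fit, with hypothetical anisotropy values, is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

CASEIN = 0.2  # uM, fixed FITC-casein concentration (200 nM)

def quadratic_binding(skd3, A0, dA, Kd):
    """Eq. 2: anisotropy vs. total [Skd3] without the free-ligand approximation."""
    s = CASEIN + skd3 + Kd
    bound_frac = (s - np.sqrt(s**2 - 4 * CASEIN * skd3)) / (2 * CASEIN)
    return A0 + dA * bound_frac

# Hypothetical titration: total [Skd3] in uM vs. measured anisotropy
skd3 = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
aniso = np.array([0.050, 0.072, 0.091, 0.118, 0.146, 0.166, 0.176, 0.180])

popt, _ = curve_fit(quadratic_binding, skd3, aniso, p0=[0.05, 0.13, 0.2])
print(f"A0 = {popt[0]:.3f}, dA = {popt[1]:.3f}, Kd = {popt[2]:.3f} uM")
```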
Data acquisition
For negative staining, 2.5 μl of the purified Skd3 (0.1 mg/ml) was applied to glow-discharged EM grids covered by a thin layer of continuous carbon film (Ted Pella Inc.) and stained with 0.75% (w/v) uranyl formate solution. EM grids were imaged on a Tecnai T12 microscope (Thermo Fisher Scientific) operated at 120 kV with a Gatan Rio camera. Images were recorded at a magnification of ×52,000, resulting in a 2.2-Å pixel size on the specimen. Defocus was set to −1.5 to −2 μm. For cryo-EM, 5 to 6 μl of purified PARL Skd3_WB (1.06 mg/ml) were applied to glow-discharged holey carbon grids (Quantifoil 300 mesh Au R1.2/1.3). Four microliters of PARL Skd3_WB (0.25 mg/ml) was applied onto holey carbon grids (Quantifoil 300 mesh Au R1.2/1.3) coated with PEG-amino-functionalized graphene oxide. The grids were blotted with Whatman no. 1 filter paper and plunge-frozen in liquid ethane using a Mark IV Vitrobot (Thermo Fisher Scientific) with blotting times of 3 to 6 s at room temperature and over 90% humidity.
Cryo-EM datasets were collected at the Stanford-SLAC Cryo-EM Center using EPU 2.9 on a Titan Krios G3i equipped with a Selectris energy filter and a Falcon 4 detector. From Quantifoil grids, 10,723 micrographs of no-tilt data and 2151 micrographs of tilt data were pooled to form dataset 1. From the PEG-amino grids (56), 10,854 micrographs were collected without tilt in dataset 2.

Cryo-EM data analysis

Micrograph CTF correction, particle picking and extraction, and three rounds of 2D classification were performed in CryoSPARC, resulting in 1.04 million particles from dataset 1 and 740,000 particles from dataset 2. For particles from each dataset, multiple rounds of ab initio reconstruction followed by heterogeneous refinement were performed in CryoSPARC. A total of 1.46 million remaining particles from both datasets were pooled and used for nonuniform (NU) refinement (fig. S2). The resolution of the high-resolution hexamer map was estimated using the gold-standard Fourier shell correlation (FSC) = 0.143 criterion (57) and was 3.1 and 2.76 Å, respectively, for the unmasked and corrected tight-masked FSC curves from CryoSPARC (fig. S3C and Table 1).
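The gold-standard FSC calculation behind these resolution estimates can be sketched directly from the two half maps. The Python outline below assumes a cubic, even-sized box and reports the resolution at the first crossing of the chosen threshold; it is a simplified stand-in for the CryoSPARC implementation, and the array names and voxel size are placeholders.

```python
import numpy as np

def fsc_resolution(half1, half2, voxel_size, threshold=0.143):
    """Gold-standard FSC between two half maps; resolution (A) at the threshold."""
    n = half1.shape[0]                      # assumes a cubic box
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    freq = np.fft.fftfreq(n)                # cycles per voxel
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = (np.sqrt(fx**2 + fy**2 + fz**2) * n).astype(int).ravel()
    num = np.bincount(shells, weights=np.real(f1 * np.conj(f2)).ravel())
    d1 = np.bincount(shells, weights=(np.abs(f1)**2).ravel())
    d2 = np.bincount(shells, weights=(np.abs(f2)**2).ravel())
    fsc = (num / np.sqrt(d1 * d2))[: n // 2]
    below = np.nonzero(fsc < threshold)[0]
    shell = below[0] if below.size else n // 2 - 1
    return voxel_size * n / shell           # resolution at the first crossing

# Usage with hypothetical half maps: res = fsc_resolution(half_a, half_b, voxel_size=0.84)
```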
The unmasked half maps from the NU refinement were used for postprocessing by DeepEMhancer (58) using the highRes deep learning model and the default training data provided by the DeepEMhancer authors. No further map sharpening was applied. The DeepEMhancer-generated map was used for model building in Coot and Phenix.
3DVA was carried out in CryoSPARC on downsampled particles from dataset 1 using five components. The first eigenvector showed large variability at the P1/P6 seam. Particles along this coordinate were displayed in 20 bins, and models generated from NU refinement of particles in the first three and last three bins were used as reference maps for heterogeneous refinement of all the purified particles from dataset 1. The particles in the extended and closed conformations generated in this procedure were used for NU refinement in CryoSPARC (fig. S2). The resolutions of both maps were estimated using the FSC = 0.143 criterion (57); resolutions for the unmasked FSC curves from CryoSPARC were 3.5 and 3. Å, respectively (Table 1).
Weak EM density was observed above the hexamer in the EM map generated by CryoSPARC, and 3D classification of a subset of these particles indicated that <20% are dodecamers. To better isolate dodecamer particles, the 1.78 million particles after 2D classification in CryoSPARC were imported into RELION (fig. S2). 3D classification with five classes was carried out to separate dodecamer and hexamer particles. A total of 521,000 dodecamer particles were selected for 3D refinement in RELION to generate the final dodecamer map (fig. S2). The resolution of the dodecamer was estimated to be 6.5 Å using unmasked FSC curves in RELION with FSC = 0.143 (fig. S7C). Masks were then generated for the lower or upper NBD ring and used for particle subtraction followed by 3D refinement (fig. S2). The resolutions of the maps from focused 3D refinement were 3.8 and 7.7 Å for the lower and upper NBD rings, respectively, as estimated using unmasked FSC curves with FSC = 0.143 in RELION.
Directional FSC curves for the hexamer and dodecamer maps were calculated using unmasked half maps as described in (59). Local resolution for each map was calculated in CryoSPARC and visualized in Chimera. Star and bild files from CryoSPARC particles were generated using UCSF pyem. Map-model FSC was calculated in Phenix with the DeepEMhancer-processed hexamer map using an FSC threshold of 0.5.
Model building
An initial model of Skd3 was obtained using AlphaFold (41), and the NBD (residues 318 to 696) was docked into the P4 protomer of the high-resolution sharpened map of the hexamer using "dock and rebuild" in Phenix (43). An initial model of the P4 protomer was built using Coot (42) and real-space refined with Phenix. This protomer was duplicated to fit the density of the remaining five protomers by rigid-body fitting, resulting in a hexameric model for the NBD ring that excludes two solvent-exposed loops (524 to 536 and 655 to 675). Iterative cycles of manual adjustments to this model were performed in Coot, followed by real-space refinements in Phenix. ATPγS was modeled into each protomer, including P1, which had only partial occupancy for the nucleotide. Chains A through F correspond to protomers P1 to P6, and chain P corresponds to the peptide substrate, which was modeled as a 14-residue poly-alanine chain.
Supplementary Materials
This PDF file includes:
Figs. S1 to S12
Legend for movie S1

Other Supplementary Material for this manuscript includes the following:
Movie S1

View/request a protocol for this paper from Bio-protocol.
Crack propagation modeling of strengthening reinforced concrete deep beams with CFRP plates
Fracture analysis of reinforced concrete deep beams strengthened with carbon fiber-reinforced polymer (CFRP) plates was carried out. The present research aimed to discover whether crack propagation in a strengthened deep beam follows linear elastic fracture mechanics (LEFM) theory or nonlinear fracture mechanics theory. To do so, a new energy release rate based on nonlinear fracture mechanics theory was formulated within the finite element method, and the discrete cohesive zone model (DCZM) was developed for deep beams. To validate and compare with the numerical models, three deep beams with rectangular cross-sections were tested. The code results based on the nonlinear fracture mechanics model were compared with the experimental results and with ABAQUS results based on LEFM. The predicted values of initial stiffness, yielding point, failure load, energy absorption, and compressive strain in the concrete obtained by the proposed model were very close to the experimental results, whereas the ABAQUS software results displayed greater differences. For instance, the predicted failure load for the shear-strengthened deep beam using the proposed model differed by only 6.3% from the experimental result, whereas the failure load predicted by ABAQUS based on LEFM differed by 25.1%.
Introduction
Reinforced concrete deep beams play an essential role in bridges, buildings, offshore structures, foundations, and military structures [1-3]. Pile caps in foundations, coupling beams in buildings, transfer beams, load-bearing walls, and bunker walls act as deep beams [4]. In a deep beam, the span-to-effective-depth ratio is less than or equal to 2.0 [5], and transverse plane sections that are plane before bending do not remain plane after bending [6]. In general, cracks in a concrete structure such as a deep beam start in the tension zone due to increasing stresses or the presence of initial cracks [7]. Therefore, these cracks must be studied correctly. Two approaches are available to study crack propagation in concrete structures [8]: linear elastic fracture mechanics (LEFM) and nonlinear fracture mechanics theories. LEFM theory was applied for the first time to analyze fracture mechanics of ships used in World War II [9]. LEFM considers the stress ahead of the crack to be infinite. The stress intensity factor (SIF) is applied in LEFM to calculate the stress condition near the tip of the crack; a fracture occurs if the SIF reaches the fracture toughness value. The SIF changes with the size of the crack, the load, the geometry of the structure, and the material properties. One of the characteristics of LEFM theory is stress singularity at the crack tip. Kaplan's research [10] showed that LEFM theory is not acceptable for studying crack propagation in normal-sized concrete structures, but it can be used to investigate crack propagation in massive structures such as concrete dams. Later, researchers discovered a zone in front of cracks in normal-sized concrete structures and called it the fracture process zone (FPZ). The behavior of this zone is nonlinear, and a great deal of energy is accumulated in this area; the presence of the FPZ explains the softening behavior of the stress-crack opening curve after the maximum load [11-14]. Modeling of this zone, which has been studied extensively in recent years, is important for concrete structural members such as beams, joints, and deep beams. The FPZ is not used in LEFM theory [15,16]. One of the most successful and accurate methods for modeling this zone is the discrete cohesive zone model (DCZM) developed in previous studies [17-19].
By using different kinds of strengthening, the crack propagation behavior of concrete changes. In recent years, carbon fiber reinforced polymer (CFRP) plates have been applied in concrete structures [20,21]. Advantages of this type of retrofitting are low weight, corrosion resistance, easy installation, and high tensile strength [22-24]. Nowadays, CFRP is of interest as a highly versatile strengthening material, available as sheets, plates, bars, and tubes [25]. The use of CFRP composites is now identified as a successful, suitable, and efficient technique to strengthen structures [26]. Because CFRP plates affect the initiation and propagation of cracks in concrete members, including deep beams, it is necessary to model, test, and study these members. Because of shear cracking, strengthening of deep beams in the shear region is significant; shear cracks propagate more in deep beams than flexural cracks do. However, it was expected in advance that flexural strengthening of a deep beam in the flexural zone (soffit strengthening of the deep beam) could not enhance its load-displacement behavior; to verify this, such strengthening was performed in this study. Indeed, the capacity of deep beams is affected by shear strengthening (on the side faces of the deep beam) rather than flexural strengthening. Many experimental studies on deep beams and CFRP shear-strengthened deep beams exist [27,28]. However, there is limited knowledge of crack propagation in concrete deep beams strengthened with CFRP plates in the flexural and shear regions. It is therefore vital to carry out experimental tests to better understand the crack propagation pattern of concrete deep beams strengthened with CFRP plates.
The present research aims to discover whether the crack propagation in a strengthened deep beam follows LEFM theory or nonlinear fracture mechanics theory. The objective of this paper is to develop a new crack propagation criterion based on nonlinear fracture mechanics theory for deep beams strengthened with CFRP plates. Therefore, the DCZM was developed for deep beams and crack propagation criteria were proposed. As a result, a new energy release rate based on nonlinear fracture mechanics theory was formulated using the finite element method. In this study, a numerical model was proposed to simulate the FPZ. To validate and compare with the numerical models, three deep beam specimens with rectangular cross-sections were tested. One of the deep beams was considered a control specimen. Another deep beam was strengthened in flexure with CFRP plates at the bottom, and the last one was strengthened in shear with CFRP plates on both sides of the shear span of the beam. The three beams were tested by a four-point bending test until failure. The experimental results were compared with the results obtained by a LEFM simulation and a simulation based on nonlinear fracture mechanics. For the LEFM simulation, ABAQUS software was applied; the nonlinear simulation was implemented in FEAPpv [29]. In this program, the solution algorithm is written by the operator, so each operator can describe a solution plan that meets specific needs. The system includes enough commands to be applied in mechanics, structural, heat transfer, fluid, and other areas governed by differential equations. Several numerical models have been implemented with FEAPpv [30-35].
Materials
To estimate shear crack propagation in concrete deep beams, a new formulation of the energy release rate based on the finite element method was introduced in this study. To compute the strain energy release, the virtual crack closure technique (VCCT), which is the most popular and powerful tool in the DCZM, was used [17]. A small part of a concrete deep beam in the shear span is shown in figure 1. A truss element acting as the interface is set between the interfacial node pair, nodes '1' and '2'. Initially, nodes '1' and '2' have the same coordinates; in figure 1, the gap between the nodes is exaggerated.
The stiffness of the interface element, selected as a truss element in the elastic zone (no crack propagation) and based on the VCCT, is given by equations (1) and (2), following [36], where E, G, B, Δ, and h are the Young's modulus of concrete, the shear modulus, the width of the deep beam, the mesh size, and the height of the deep beam, respectively, and k_x and k_y are the stiffnesses of the interface element in the x and y directions. The cracking stress proposed from experiments by Thomas et al [37] was adopted to model the initial shear crack, equation (3), where σ_start and f′_c are the cracking stress and the concrete compressive strength (MPa), respectively. If the principal stress at node '1' or '2' reached the value given by equation (3), the stiffnesses in equations (1) and (2) were set to zero and a crack was created. The normal stress of crack propagation from experiments by Walraven [38] was adopted to find the strain energy release rates in the deep beam.
Equation (4) gives this normal stress, where σ_N, ρ, K, and C are the normal stress, the longitudinal reinforcement ratio, a size factor, and a coefficient (approximately 0.12), respectively. The size factor K is given by equation (5), where d is the effective depth in mm. Equations (3) and (4) were used and formulated in the proposed approach to predict the crack propagation of RC deep beams; the new proposed approach was validated against the experimental results of three deep beams. It was assumed that the cross-section of the truss element is A = B × Δ. The strain energy release rate of Mode I, G_I, is expressed according to [18] in terms of F_N, x_1, and x_2, which are the nodal force and the displacements of nodes '1' and '2' in the x direction, respectively. The shear stress of the crack in the deep beam was adopted from Zhang et al [39], and the strain energy release rate for shear, G_II, was based on the study by Xie et al [36]. On the other hand, G_F is the critical fracture energy of the concrete deep beam, which according to [40] is

$$G_F = 0.073\, f_t \quad (11)$$

where f_t is the tensile strength of concrete. Therefore, when G_I + G_II < G_F, the stiffnesses take the values of equations (1) and (2); when G_I + G_II > G_F, the crack propagates, the interface element is removed, and the stiffnesses of equations (1) and (2) become zero. In addition, to find the failure stress, the ultimate shear stress of Thomas et al [34] was used. Equations (5)-(10) were implemented as a User-Element subroutine to model the interface element, and the crack propagation criterion was applied through the Material library. The flowchart of the present numerical model, the experimental program, and the ABAQUS model for the analysis of the control deep beam is shown in figure 2. It is worth mentioning that a correct estimate of the energy release rate is important for predicting nonlinear crack propagation in concrete structures such as deep beams [14]. Nonlinear fracture mechanics theory is distinct from the structural nonlinearity itself: it is now well known that, to accurately model the cracking behavior of structures such as deep beams, the fracture process zone (FPZ) must be properly modeled to account for the gradual energy dissipation during cracking. A numerical method was previously developed to model crack propagation based on such strain energy release rates in reinforced concrete deep beams flexural-strengthened with CFRP [41,42]. Furthermore, in this study, a computer code based on the work of Shahbazpanahi et al [43] was developed specifically to model the shear strengthening of reinforced concrete deep beams by modifying the crack propagation criteria.
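The propagation check described above reduces to a per-element comparison of G_I + G_II against G_F. Because equations (1), (2), and (5)-(10) are not reproduced in the text, the Python sketch below substitutes the common VCCT expressions for G_I and G_II as placeholders; only the G_F criterion of equation (11) is taken directly from this section, and the numerical values are illustrative.

```python
def crack_check(F_N, F_T, dx, dy, B, delta, f_t):
    """DCZM crack-propagation check for one interface (truss) element.

    Units: N, mm, MPa, so G is in N/mm. Assumes the common VCCT forms
    G_I = F_N*dx/(2*A) and G_II = F_T*dy/(2*A); the paper's exact
    equations (5)-(10) may differ, so these are stand-ins.
    """
    A = B * delta                    # truss element cross-section, A = B x delta
    G_I = F_N * dx / (2.0 * A)       # Mode I strain energy release rate
    G_II = F_T * dy / (2.0 * A)      # Mode II strain energy release rate
    G_F = 0.073 * f_t                # critical fracture energy, equation (11)
    return (G_I + G_II) > G_F        # True -> remove element, zero the stiffnesses

# Example: B = 150 mm, mesh size 10 mm, f_t = 2.8 MPa (all values hypothetical)
if crack_check(F_N=2.0e4, F_T=5.0e3, dx=0.05, dy=0.02, B=150.0, delta=10.0, f_t=2.8):
    print("crack propagates: interface element removed")
else:
    print("element remains elastic")
```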
A 2D plane-stress model in FEAPpv was used to study nonlinear fracture mechanics, where four-node isoparametric elements were applied to model the bulk concrete with isotropic behavior. Truss elements with elastic behavior were used to model the soffit CFRP. The side-face CFRP had linear elastic behavior and was simulated by four-node isoparametric elements. The longitudinal steel was modeled by truss elements with elastic-perfectly plastic behavior, and perfect bond between the longitudinal steel and the concrete was assumed. To model slip between the CFRP and the concrete, the constitutive model obtained by Nakaba et al [44] was used. To find the load-displacement curves of the deep beams in FEAPpv, the load was applied incrementally rather than through imposed displacements; therefore, a load-controlled condition was used.
Numerical model based on LEFM
To compare with the experimental results and the FEAPpv model based on nonlinear fracture mechanics, ABAQUS software was used to model crack propagation with conventional cohesive elements. ABAQUS can only model crack propagation based on linear elastic fracture mechanics and is not capable of simulating nonlinear fracture mechanics. The concrete parameters used for the deep beams based on LEFM in ABAQUS are given in table 1.
Three element types were applied to simulate the deep beams in ABAQUS: truss, shell, and solid. Truss elements with elastic-perfectly plastic behavior were applied for the steel bars because this element carries only axial forces, while shell elements with linear elastic behavior were used for modeling the CFRP plates. Solid elements with plastic behavior were used to model the concrete. As mentioned, the simplified flowchart for the analysis of the control deep beam in ABAQUS is shown in figure 2.
Experimental test of control and CFRP deep beams
Experimental tests were conducted under static monotonic load to validate the simulation results. Results from both models were compared with the experimental results of the three deep beams. Figure 3 illustrates the deep beam dimensions, which were 500 mm in depth, 150 mm in width, and 900 mm in length. The geometry, supports, and details of the deep beams are also shown in figure 3. The deep beams were reinforced longitudinally with two 16 mm diameter steel rebars at the bottom and two 12 mm diameter steel rebars at the top, with a 30 mm clear concrete cover. The deep beams were not reinforced transversely, to guarantee shear failure. Only two stirrups were used outside the shear zone to hold the longitudinal reinforcement in place, as shown in figure 3. The yield strength of the bars was 400 MPa based on the manufacturer's report.
Specimens
The average concrete compressive strength was 28 MPa (at 28 days) based on testing three cylinders with a diameter of 150 mm and height of 300 mm. Table 2 shows a mix design for the concrete used to cast the deep beams. The mechanical properties of the CFRP plates were found according to ASTM D7565 [43]. The properties of the CFRP plates and steel reinforcement are given in table 3. B-0, B-1, and B-2 were used to designate the control deep beam, the flexural strengthened and the shear strengthened deep beams, respectively. All the deep beams had the same steel reinforcement as shown in figure 3.
Three plywood formworks were built to cast the deep beams. Following casting, the beams were tested after 28 days. The deep beams were simply supported and tested under two equal concentrated loads. For the B-1 deep beam, the CFRP plate was externally bonded to the bottom of the beam. For the B-2 deep beam, the CFRP plates were bonded onto both sides of the deep beam in the shear span. The CFRP shear-strengthening plates were not fully wrapped; they stopped 100 mm from the top of the beam.
After demolding, all beams were cured and covered with wet burlap at 20 °C for 28 days, and then the CFRP plates were installed with resin. Before strengthening the specimens with CFRP plates, the concrete surface was cleaned to remove any surface grease; the surfaces were further cleaned by grinding and brushing. An epoxy adhesive was uniformly applied in a thin layer on the bonding surfaces, and the CFRP was placed over it and pressed firmly with plastic rollers. The external strengthening used single-layer CFRP plates with a thickness of 0.8 mm and an elastic modulus of 250 GPa. The ultimate tensile stress of the CFRP plates was 210 MPa.
Instrumentation
A hydraulic jack with 800 kN capacity was used to load the deep beams. The mid-span displacements (figure 4) were recorded by linear variable displacement transformers (LVDTs); one LVDT was set to monitor the displacement under the deep beam. Two strain gauges were attached directly under the CFRP plates and on the concrete surface below the load to observe the strain. The supports were placed at the bottom of the concrete beams. The applied loads were measured by a load cell, and all the measurements were automatically recorded using a data logger. A four-point bending test setup was used to test the deep beams. The test was conducted under a gradually increasing monotonic load (at a loading rate of 0.4 mm min−1) until beam failure was reached.
Results
This section presents the validation and comparison of crack propagation obtained by the proposed model based on the nonlinear fracture mechanics theory, the experimental results, and ABAQUS software results based on LEFM.
Crack propagation in the control beam (B-0)
The control beam was studied first to compare the proposed model with the experimental results. A comparison of the proposed model based on nonlinear fracture mechanics theory with the experimental results is shown in figure 6. Good agreement between the experimental results and the proposed model was found for the load-deflection curves at the mid-span, as shown in figure 6. The load-deflection curves indicated nonlinear behavior. The initial stiffness obtained by the nonlinear fracture mechanics model coincides with the stiffness of the beam in the elastic zone, as given by the experimental results. The initial stiffness obtained by the linear elastic fracture mechanics model in ABAQUS was overestimated, showing significantly higher stiffness at the mid-span. For the deep beams simulated based on LEFM in ABAQUS, displacements at the same load level were lower than the experimental results.
The results of the proposed model based on nonlinear fracture mechanics theory provided an accurate steel yield load point compared with the experimental results. Thus, the nonlinear fracture mechanics model can detect the yield stress in the steel with reasonable accuracy. The most obvious effect of the nonlinear fracture mechanics model on the deep beam response can be seen in the plastic zone, where the proposed model corresponds closely to the experimental results. Based on the nonlinear fracture mechanics model, the failure load was 350.7 kN, whereas the experimental value was 331.2 kN; the failure load based on the LEFM model in ABAQUS was 401.3 kN. Thus, the nonlinear fracture mechanics model predicts the failure load more accurately. Figure 7 displays the crack paths of the control beam, which were modeled using the nonlinear fracture mechanics model. Only half of the beam was modeled by considering symmetry. At mid-span, one flexural crack was predicted, 350 mm long and perpendicular to the axis of the control beam. Furthermore, as can be observed from figure 10, there is only one flexural crack under the load. The nonlinear fracture mechanics model predicted one flexural-shear crack in the shear span and two shear cracks near the support. Figure 8 illustrates the crack path in the experimental test; this crack pattern can be compared with the predicted result shown in figure 7. The nonlinear fracture mechanics model predicted three cracks within the shear span, matching the three cracks observed in the experimental test. (The shear span is the distance between a support and the nearest load point.) The nonlinear fracture mechanics model showed one large and one small flexural crack within the flexural zone, whereas only one crack was observed in the experimental results. (The flexural span is the distance between the point loads.) The agreement between the crack paths obtained by the nonlinear fracture mechanics model and the experimental test is sufficient to justify the validity of the model. The crack patterns obtained by the nonlinear fracture mechanics model and the experimental results propagated to the last quarter of the section height. Figure 9 shows the crack paths of the control beam simulated using ABAQUS based on LEFM. The shear crack near the support cannot be modeled by conventional ABAQUS software. Therefore, the nonlinear fracture mechanics model predicts crack propagation more objectively than the ABAQUS software. As shown in figures 7, 8, and 9, the control deep beam is modeled better by FEAPpv based on nonlinear fracture mechanics than by ABAQUS based on LEFM.

Crack propagation in the flexural-strengthened deep beam (B-1)

For the flexural-strengthened deep beam, the steel yield load obtained by ABAQUS had a greater error (31%) compared to the experimental result, which indicates that it is less capable of estimating the steel yield load. The failure load in the proposed model was predicted within a difference of 5.6% to 8% of the experimental results. The load-deflection curve of the nonlinear fracture mechanics model was similar to the experimental results, whereas the load-deflection curve from ABAQUS was higher than the experimental curve, with a greater difference (29.9%-39.4%) from the experimental results. The proposed model based on nonlinear fracture mechanics could therefore be used to analyze reinforced deep beams flexural-strengthened with CFRP plates.
Figure 11 shows the comparison of crack patterns between the proposed model, the experiment, and the ABAQUS software for the deep beam flexural-strengthened with CFRP plates.
The predicted cracking pattern of the proposed model was generally consistent with the experimental observations. In both cases, the flexural cracks were closely spaced near the beam mid-span, as shown in figures 11(a) and (b). Within the shear span, the model predicted two shear-flexural cracks, whereas only one crack was observed in the experiment (left shear span) in half of the beam. The model predicted one shear crack near the support, but no crack was observed in the test; the shear cracks could have been too small to be noticed in the experiment. As shown in figure 11(c), the ABAQUS software could not simulate this crack pattern because neither the accurate stiffness of the FPZ nor the crack propagation criterion is considered in the software. A flexural crack was observed by the nonlinear fracture mechanics model in the control beam near the mid-span at a load level of 280 kN. However, the crack in the beam strengthened in flexure occurred at a relatively higher load level (300 kN) than in the control beam. The observed flexural crack load for the flexural-strengthened deep beam was approximately 300 kN based on both the nonlinear fracture mechanics model and the experimental results. The deep beam strengthened in flexure showed an approximately 2% shorter flexural crack length compared to the control beam. As expected, the behavior of the flexural-strengthened deep beam with CFRP plates was only slightly changed compared to the control deep beam. Finally, the beam failed in shear with a large shear crack.
Crack propagation in the shear-strengthened deep beam (B-2)
The load-deflection curves obtained from the experimental test, the nonlinear fracture mechanics model, and the ABAQUS software for the deep beam shear-strengthened with CFRP plates are shown in figure 12. The results of the proposed model are close to the experimental ones, indicating that the nonlinear fracture mechanics model is validated by the test results. The yield point of the load-deflection curve in the nonlinear fracture mechanics model is similar to that in the experimental test (approximately a 3% difference). However, this point obtained in the LEFM simulation by ABAQUS was higher than the experimental result (approximately a 21% difference). The accuracy of the proposed model was also confirmed by the close values of the failure load obtained from the proposed model and the test (approximately a 2% difference at 4 mm deflection). Furthermore, the stiffness of the beam shear-strengthened with CFRP as analyzed by ABAQUS was overestimated. Figure 13 shows the crack paths obtained by the proposed model, the experimental results, and the cracks predicted in the FEA by ABAQUS. Good agreement was observed between the crack paths predicted by the proposed model and the experimental results, as shown in figures 13(a) and (b). Figures 13(a) and (b) demonstrate that only one flexural crack initiated and propagated towards the loading point. Only one shear-flexural crack was observed near the CFRP plate. The shear cracks stopped initiating and propagating in the shear span because of the shear strengthening with CFRP plates, as reported in other works [2,5,13].
The flexural crack began to appear at a load of approximately 240 kN. The flexural-shear cracks propagated from the mid-depth of the beam toward the point of the applied load. As the load increased, the flexural-shear crack propagated to the final failure.
A comparison of the shear cracks in the beam strengthened in shear [see figure 13(a)] and the control beam [see figure 8] shows that there are more shear cracks in the control beam. The use of CFRP plates affected and delayed the propagation of the shear crack. Figure 13(c) indicates the crack paths obtained by ABAQUS based on LEFM. Comparing figures 13(a), (b), and (c), the nonlinear fracture mechanics model for the shear-strengthened deep beam with CFRP is shown to be comparatively better than the LEFM model in ABAQUS.
A mesh sensitivity study of the nonlinear fracture mechanics model is presented in terms of load-deflection in figure 14. Mesh (a) is a coarse mesh with 260 elements and 440 interface elements; mesh (b) is a medium mesh with 780 elements and 1132 interface elements; and mesh (c) is a fine mesh with 1278 elements and 1896 interface elements. The predictions of load versus deflection at mid-span for the three meshes were close to each other; therefore, the mesh size does not significantly influence the overall load-deflection curves. Figure 15 shows the effect of shear strengthening with CFRP on crack propagation. Only one small crack formed under the CFRP plate, as shown in figure 15. CFRP plates delay and control cracking in the beam and confine the crack propagation in the concrete; this behavior has been reported by other researchers [20,45,46]. Shear strengthening with CFRP plates reduces the width of cracks in the concrete. Figure 15 shows that no crack was observed in the shear span because of the presence of the CFRP plates; the crack length was restrained as well.
In table 4, factors such as the accuracy of the predicted initial stiffness, yielding point (based on the same load), plastic zone, and failure point for each model are compared with the experimental results. The predicted values of initial stiffness, yielding point, and failure point obtained by nonlinear fracture mechanics were very close to the experimental results, whereas the ABAQUS results based on LEFM showed a greater discrepancy. For instance, the steel yielding point in the control deep beam was predicted by nonlinear fracture mechanics within a difference of 1.2%, whereas the steel yield load from ABAQUS based on LEFM displayed a higher difference (13%) for the control beam, which indicates that it was less capable of estimating the steel yield load. The failure loads based on nonlinear fracture mechanics and the experimental results for the shear-strengthened deep beam were approximately 485 and 498 kN, corresponding to mid-span displacements of 4.1 and 4.5 mm, respectively; the LEFM model gave 510 kN at a mid-span deflection of 3.7 mm. For the shear-strengthened deep beam (B-2), delamination of the CFRP plates was evident in both the nonlinear fracture mechanics model and the experimental results, whereas the LEFM model showed failure by rupture of the CFRP plates in the shear span.
In table 5, the energy absorption (the area under the load-deflection curve) is listed for each deep beam at the failure load. The calculated energy absorptions for the control deep beam were 987, 1029, and 1154 kN mm based on the experimental test, the proposed model, and the LEFM results, respectively. The energy absorption for the control deep beam obtained from the experimental results was slightly lower than that of the proposed model. Moreover, for the deep beam strengthened in shear, the energy absorption obtained by LEFM was much higher than those of the experimental results and the proposed model. The shear-strengthened deep beam showed an increase in energy absorption of up to 67.1% compared to the control deep beam. The experimental compressive strain for the shear-strengthened deep beam was greater than that of the control deep beam due to the shear strengthening; the compressive strain recorded for the shear-strengthened deep beam was approximately 0.00318. In both models and the experimental results, the control deep beam had the lowest concrete compressive strain of the three deep beams, as also reported in other works [2,5]. The shear-strengthened deep beam simulated by the proposed model showed an increase in the compressive strain of 1.2% compared to the experimental results, while this increase was 10.4% for the deep beam modeled by LEFM. Therefore, the nonlinear fracture mechanics model demonstrated better performance in estimating the concrete compressive strain than the LEFM model.
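The energy absorption values in table 5 are areas under the load-deflection curves, which can be computed by trapezoidal integration of the digitized curve. The sketch below uses an illustrative curve, not the measured data.

```python
import numpy as np

def energy_absorption(deflection_mm, load_kN):
    """Area under the load-deflection curve up to failure (kN*mm)."""
    return np.trapz(load_kN, deflection_mm)

# Hypothetical digitized load-deflection curve for a deep beam
deflection = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 4.5])        # mm
load = np.array([0.0, 90.0, 170.0, 290.0, 330.0, 331.2, 300.0])   # kN

print(f"energy absorption ~ {energy_absorption(deflection, load):.0f} kN*mm")
```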
Here, it is worth mentioning that the concept of the FPZ in nonlinear fracture mechanics theory is broadened to one of an interface process zone, which includes the nonlinear behavior of the fracture process zone [15]. Based on [1,6,14,22], there is no other numerical model to simulate nonlinear crack propagation in deep beams; therefore, nonlinear fracture mechanics models should be used to model crack propagation of deep beams to achieve greater accuracy. A new computational approach implementing the finite element method to study crack initiation and growth in deep beams was presented and validated. The novelty of the present study is that the strain energy release rates of cracks in deep beams can be calculated simultaneously as the FEA is performed, and nonlinear crack propagation can also be directly analyzed. To do so, three reinforced concrete deep beams strengthened with carbon fiber-reinforced polymer (CFRP) plates were analyzed as examples of the use of the new finite element implementation of the nonlinear crack propagation model. As a result, positive contributions were found for the finite element simulations, and the predicted values of initial stiffness, yielding point, failure load, energy absorption, and compressive strain in the concrete obtained by the proposed model were very close to the experimental results.
Conclusions
In the present study, two models for reinforced concrete deep beams strengthened with CFRP plates were discussed to predict the fracture behavior. The first model was based on nonlinear fracture mechanics in FEAPpv; for it, a new energy release rate was formulated, the DCZM was developed, and crack propagation criteria were proposed. The second model was based on linear elastic fracture mechanics in ABAQUS. Experimental testing of reinforced concrete deep beams strengthened with CFRP plates was carried out for comparison with the models. The proposed model can reasonably predict the experimental results in terms of stiffness, steel yielding load, plastic zone, and failure load, whereas the ABAQUS results for these quantities were in great contrast to the experimental results. Furthermore, the nonlinear fracture mechanics model in FEAPpv predicted the results more accurately than the linear elastic fracture mechanics model in ABAQUS. For example, in the control deep beam, the most obvious effect of the nonlinear fracture mechanics model on the deep beam response can be seen in the plastic zone, where the proposed model corresponds closely to the experimental results. Furthermore, the failure load of the shear-strengthened deep beam was predicted by the proposed model with a 6.3% difference from the experimental results, whereas the failure load predicted by ABAQUS based on LEFM showed a greater difference of 25.1%. The proposed model captured the crack patterns of the strengthened reinforced concrete deep beams with CFRP plates quite satisfactorily compared to the experimental observations. For the shear-strengthened deep beams, the energy absorption obtained by LEFM was much greater than those of the experimental results and the proposed model. Furthermore, the shear-strengthened deep beam simulated by the proposed model showed an increase in the compressive strain of 1.2% compared to the experimental results, while this increase was 10.4% for the deep beam modeled by LEFM. Therefore, the nonlinear fracture mechanics model is a reasonable choice for simulating the fracture mechanics of reinforced concrete deep beams strengthened with CFRP plates. The lack of internal stirrups in the desired shear failure region of the deep beams and the neglect of the behavior of concrete in the compression zone and crushing constitute the limitations of the proposed model.
Effect of Microwave Power and Gas Flow Rate on the Combustion Characteristics of the ADN-based Liquid Propellant
As a new type of energetic material, ammonium dinitramide (ADN) based liquid propellant has the advantages of being green and having low toxicity, good stability, and high safety performance. Traditional catalytic combustion methods require preheating of the catalytic bed, and the catalytic particles deactivate at high temperatures; microwave ignition methods can effectively solve these problems. To study the combustion characteristics of ADN-based liquid propellants during microwave ignition, the influence of microwave power and gas flow rate on the combustion process is analyzed using experimental methods. A high-speed camera was used to observe the enhancing effects of microwave power and gas flow on the plasma and flame. Combined with temperature measurement, the combustion process of ADN-based liquid propellants under the action of plasma was analyzed; the combustion process in the presence of microwaves was characterized by comparing parameters such as flame length, flame temperature, and radical intensity. The results show that, with increasing microwave power, the luminous burning area of the flame grows significantly; for each 250 W increase in microwave power, the flame jet length increases by nearly 20%. The increase in microwave power also leads to an increase in propellant combustion temperature; however, this increase gradually slows down. At a gas flow rate of 20 L/min, the ADN-based liquid propellant showed the best combustion performance, with a maximum jet length of 14.51 cm and an average jet length increase of approximately 85.9% compared to 14 L/min. An excessive gas flow rate hinders the development of the jet, and the high-velocity airflow has a cooling effect on the flame. The results provide a basis for the parameter design of microwave ignition and promote the application of ADN-based liquid propellants in the aerospace field.
Introduction
Launching a spacecraft is an economically demanding task that can lead to environmental pollution and even safety accidents. As the competition for outer space resources among the world's major spacefaring nations becomes more intense, the need to find a new type of safe [1,2], reliable, green, and economical propellant has become a key development goal for all countries. Ammonium dinitramide (ADN) based liquid propellant, a green energy-containing material that is safer, less polluting, and less costly than traditional hydrazine propellants, is gradually receiving attention from countries around the world [3].
The Swedish Space Corporation (SSC) first applied ADN-based liquid propellant on "Mango", the main satellite of the PRISMA dual-technology experimental mission, developed by ECAPS. Russia, Germany, Japan, and other countries have carried out corresponding research. The mechanism of microwave plasma ignition mainly comprises the "high-temperature effect", the "chemical effect", and the "jet effect" [20,21]. The use of microwave-assisted ignition technology in automotive internal combustion engines is a mature industry. Plasma has a significant combustion-enhancing effect on energetic materials: high-energy particles break carbon-carbon bonds, carbon-hydrogen bonds, and other chemical bonds in hydrocarbon compounds, so that large hydrocarbon molecules turn into small molecules with improved chemical activity. A large number of experiments have been carried out, with positive progress and results in this field. Ikeda et al. [22] tried a microwave plasma-assisted ignition system for a single-cylinder internal combustion engine, ignited a lean methane-air mixture, and found that microwave plasma-assisted ignition was stable even in the lean state. Hwang et al. [23] conducted microwave plasma-assisted ignition experiments using an acetylene-air mixture in a 1.4 L constant-volume combustion chamber to observe the effect of microwaves on flame development, and found that the flame speed increased by 20% with microwave plasma-assisted ignition and that microwave ignition had a better combustion index than conventional spark plug ignition.
In this paper, a microwave ignition system adapted to ADN-based thrusters is established to promote the combustion of ADN-based liquid propellants through microwave ignition of an air plasma, in order to study the influence of microwave power and gas flow rate on the combustion characteristics of the plasma jet and the ADN-based liquid propellant, and to explore the optimal ignition parameters and ignition scheme for ADN-based liquid propellants.
Materials
Ammonium dinitramide (ADN) is a high-energy-density material, generally a white or light-yellow solid at ambient temperature and pressure, consisting of an ammonium cation (NH4+) and a dinitramide anion (N(NO2)2−). Researchers have developed different series of ADN-based liquid propellant formulations and conducted performance tests by configuring ADN, water, and fuel into aqueous solutions in a certain ratio, taking advantage of ADN's solubility in water. Figure 1 shows the ADN-based liquid propellant used in these experiments.
Design of the Resonant Cavity
Microwave ignition devices use the frequency of microwaves to generate a fast-oscillating electric field, which creates a strong electric field in a specific device, thereby prompting gas discharge to form plasma. A BJ26-type waveguide (NEWSAIL, Zhengzhou, China) was selected for this experimental device; a metal waveguide can only propagate transverse electric (TE) and transverse magnetic (TM) waves. After calculation, the cut-off frequencies of the two wave types are 3.90 GHz and 1.79 GHz, and the microwave input frequency of this experiment is 2.45 GHz, so the main mode of waveguide transmission in this experiment is the TE10 mode. However, the BJ26-type waveguide needs high incident power to break down air, so the BJ26 waveguide was modified in this experiment. The internal cross-section of the standard BJ26 waveguide is a rectangle with a wide side of a = 86 mm and a narrow side of b = 43 mm. The transmission power of the rectangular waveguide is

$$P = \frac{ab}{4} \cdot \frac{E^2}{Z} \quad (1)$$

where E is the electric field strength and Z is the equivalent impedance of the waveguide. Since the TE10 mode wave impedance is related only to the broad side a and is independent of the narrow side b, the narrow side of the waveguide is generally reduced to 1/2 of its original length to enhance the electric field strength in the cavity while the transmission power remains unchanged. The transmission power before narrow-side reduction is

$$P_1 = \frac{ab}{4} \cdot \frac{E_1^2}{Z} \quad (2)$$

and the transmission power after narrow-side reduction is

$$P_2 = \frac{a(b/2)}{4} \cdot \frac{E_2^2}{Z} \quad (3)$$

When the transmission power remains unchanged, that is, P1 = P2, combining formulas (2) and (3) shows that reducing the narrow side to 1/2 of its original length increases the electric field intensity in the cavity by a factor of √2 ≈ 1.41.
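The quoted cut-off frequencies and the narrow-side field enhancement follow from the standard rectangular-waveguide relations. A short Python check is given below, using the nominal a = 86 mm and b = 43 mm dimensions; the small offset from the quoted 1.79 GHz presumably reflects the exact BJ26 inner dimensions.

```python
import numpy as np

C = 299_792_458.0     # speed of light, m/s
a, b = 0.086, 0.043   # BJ26 wide/narrow inner dimensions, m

def cutoff(m, n):
    """Cut-off frequency of the TE_mn / TM_mn mode of a rectangular waveguide (Hz)."""
    return (C / 2.0) * np.sqrt((m / a)**2 + (n / b)**2)

print(f"TE10 cutoff: {cutoff(1, 0)/1e9:.2f} GHz")  # ~1.74 GHz (paper quotes 1.79 GHz)
print(f"TM11 cutoff: {cutoff(1, 1)/1e9:.2f} GHz")  # ~3.90 GHz

# Halving the narrow side at constant transmitted power (Eqs. 2-3):
# P ~ b * E^2, so E2/E1 = sqrt(b1/b2) = sqrt(2)
print(f"field enhancement: {np.sqrt(2):.2f}x")
```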
There are two design methods for shortening the narrow side of the waveguide: (a) reducing both sides at the same time; (b) reducing only one side. The schematic diagram is shown in Figure 2. To determine the specific value L of the reduction length of the resonator, the simulation software Comsol Multiphysics (Version 5.6) was used to simulate the model. Under the input frequency of 2.45 GHz, the BJ26 rectangular waveguide has a guide wavelength of λg = 173.6 mm in the TE10 mode. According to Poynting's theorem, when the cavity length is 3λg/2, a higher electric field intensity can be obtained in the cavity, so the total length D of the device was set to 260 mm.
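Likewise, the TE10 guide wavelength at 2.45 GHz and the 3λg/2 cavity length can be checked from the same assumed dimensions; the sketch below reproduces the quoted values to within rounding.

```python
import numpy as np

C = 299_792_458.0
f = 2.45e9           # operating frequency, Hz
a = 0.086            # BJ26 wide side, m (nominal)
f_c = C / (2 * a)    # TE10 cut-off frequency

lam0 = C / f
lam_g = lam0 / np.sqrt(1.0 - (f_c / f)**2)           # TE10 guide wavelength
print(f"guide wavelength: {lam_g*1e3:.1f} mm")        # ~174 mm (paper quotes 173.6 mm)
print(f"3*lam_g/2 cavity: {1.5*lam_g*1e3:.0f} mm")    # ~261 mm; set to 260 mm in the design
```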
Figure 2. (a) Two-sided tapered waveguide; (b) one-sided tapered waveguide.

The mathematical model of the simulation is based on the electromagnetic field theory of the Maxwell equations, and the Helmholtz equation is derived by combining them with the constitutive relations. In a steady-state analysis, the Helmholtz equation is

$$\nabla^2 \mathbf{E} + k^2 \mathbf{E} = 0, \qquad k^2 = \omega^2 \mu \left( \varepsilon - \frac{j\sigma}{\omega} \right)$$

where ε is the permittivity of the dielectric, σ is the conductivity, μ is the permeability, and k is the wavenumber in the dielectric. As shown in Figure 3, when the microwave power is 100 W, the strongest electric field exists at 1/4 of the resonance wavelength. According to the electric field distribution in the cavity, the position of the opening for placing the quartz tube was set at 1/4 of the resonance wavelength away from the short-circuit end face. Two openings were designed in the upper and lower walls, at 1/4 of the resonance wavelength along the resonant cavity, for inserting the quartz tube and for the subsequent input of gas and ADN-based liquid propellant. The larger the aperture diameter, the more obvious the weakening of the electric field; if the size is too small, the machining difficulty increases sharply. Therefore, under the premise of satisfying the intake-air and propellant-flow inputs, the aperture should be reduced as much as possible to improve the utilization rate of the microwaves. The final selected aperture is 28 mm; the matching quartz tube has an inner diameter of 25 mm, an outer diameter of 28 mm, and a height of 15 cm.

Figure 4 is a schematic diagram of the microwave plasma ignition experiment system, which is composed of a microwave generation component, a microwave plasma ignition component, and an experimental data acquisition system. The function of the microwave-generating component is to generate microwaves and transport them into the resonant cavity. The microwave generation component includes a microwave power supply, a magnetron, a circulator, a detector, a cooling water tank (including a water pump), and a waveguide connected to the microwave ignition component.
Figure 4 is a schematic diagram of the microwave plasma ignition experiment system, which is composed of a microwave generation component, a microwave plasma ignition component, and an experimental data acquisition system. The function of the microwave-generating component is to generate microwaves and transport them into the resonant cavity. It includes a microwave power supply, a magnetron, a circulator, a detector, a cooling water tank (including a water pump), and a waveguide connected to the microwave ignition component. The microwave power supply and the magnetron together constitute the microwave source. The circulator and detector are used to control the direction of microwave transmission and to monitor the reflected microwave power. The water pump provides cooling water for the whole device, and the water in the load tank absorbs the reflected power.

The function of the microwave plasma ignition component is to couple the microwaves into the cavity and decompose the gas to achieve ignition and combustion. The component is centered on a resonant cavity and includes a supporting connection device, a 550-9 air compressor (Fengbao, Shanghai, China), and a BT300-2J peristaltic pump (Longer, Shanghai, China). The main body is a resonant cavity of gradually reduced height based on the BJ26 waveguide. It is opened at a distance of 1/4 resonance wavelength from the short-circuit end, and a quartz tube is placed through the opening. At the lower end of the resonant cavity, an aluminum connection device is assembled with the cavity to connect the peristaltic pump and the air compressor. Two connectors for air hoses are set on the side of the connection device; the gas working medium is fed from both ends at the same time during the test to ensure uniform gas input and to avoid interference from one-sided, uneven gas distribution. The bottom end of the connection device serves as the propellant inlet: the propellant supplied by the peristaltic pump is nebulized and pumped into the combustion reaction area in the cavity through the inserted nozzle. The inner conductor is set at a fixed position in the tube and is used to couple the microwave energy, generate a strong electric field, break down the gas working medium, and excite the microwave plasma. The gas working medium is fed into the resonant chamber through the device connected to the air compressor. The microwave energy is coupled directly into the designed resonant cavity through the BJ26 aluminum waveguide, ionizing the gas working medium in the quartz tube to generate an air plasma. The ADN-based liquid propellant is pumped by the peristaltic pump into the resonant chamber through the opening at the lower end of the connection device, via a joint with a diameter of 0.2 mm, and reacts with the microwave.

The function of the experimental data acquisition system is to collect and record the jet height, flame temperature, and combustion spectrum of the flame during the experiment. It consists mainly of a temperature sensor, a fiber optic spectrometer, and a high-speed camera. The temperature sensor is a K-type armored thermocouple with a diameter of 2 mm and a length of 15 cm. Its maximum temperature resistance is 1573 K, it can carry out long-term high-temperature measurements, and its measurement error is 2%. It is fixed 10 mm above the quartz tube by a clamping device. The Ocean USB 2000+ fiber optic spectrometer (Ocean Insight, Orlando, FL, USA) has a maximum measurement range of 200-900 nm, a resolution of 1.5 nm, and a detection integration time of 1 ms. To ensure that the measured signals all come from the target object, a collimating lens is added to the fiber optic probe. The high-speed camera is a Photron Nova S9 (Photron, San Diego, CA, USA). The maximum shooting speed of the camera is 193 frames per second (corresponding resolution: 256 × 128). In this experiment, at 1024 × 1024 (1 million) pixels, the acquisition frequency of the camera is set to 500 frames per second, which captures the instantaneous changes in the flame combustion process well. The high-speed camera is used to photograph the changes in the flame during combustion and to record the formation, development, and evolution of the flame in the stable combustion state.
Error and Uncertainty Analysis
In this experiment, through repeated experiments and measurements, the median and average values of the experimental data are taken to reduce random errors. Since the parameters in this experiment, such as the temperature and length of the flame, are measured directly, the measurement error is given directly by the accuracy of the measuring instrument. For a directly measured parameter x with instrument accuracy w_x, the relative uncertainty is expressed as

u_x = w_x / x.

The known diameter of the quartz tube is D = 28 mm with w_D = 1 mm, and the tube length is l = 15 cm with w_l = 5 mm; the relative uncertainties of the tube diameter and tube length are therefore u_D = 1/28 ≈ 3.6% and u_l = 0.5/15 ≈ 3.3%. The measurement accuracy of the thermocouple used to measure the temperature is ±2% at high temperature. Combined with the maximum measurement range of 1573 K in the experiment, the maximum absolute temperature error is 31.5 K; with the minimum jet temperature measured in the experiment, 567.9 K, the maximum relative uncertainty of the temperature is u_T = 31.5/567.9 ≈ 5.5%. The instrument used for the gas flow rate is a rotameter calibrated with air, with an accuracy of 4%, i.e., the relative uncertainty of the airflow is 4%. The peristaltic pump is calibrated with pure water, with an accuracy of 1%, i.e., the relative uncertainty of the propellant flow is 1%. Table 1 summarizes the uncertainties of the parameters.
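Since all three quantities are direct measurements, the figures above can be reproduced in a few lines. A sketch, with the values taken from the text:

```python
def rel_uncertainty(w, x):
    """Relative uncertainty of a directly measured quantity x with instrument accuracy w."""
    return w / x

print(f"tube diameter: {rel_uncertainty(1, 28):.1%}")       # ~3.6 %
print(f"tube length:   {rel_uncertainty(0.5, 15):.1%}")     # ~3.3 %
print(f"temperature:   {rel_uncertainty(31.5, 567.9):.1%}") # ~5.5 %, using the minimum jet temperature
```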
Effect of Microwave Power on the Combustion Characteristics of Plasma Torch
During the generation and maintenance of the plasma, energy is provided by the incoming microwaves, and the microwave power directly influences the gas discharge in the resonant cavity. To ensure that the microwave power was the single variable in the experiment, a flow meter was used to stabilize the gas flow rate at 20 L/min before switching on the microwave generator assembly. The effects of microwave power on air plasma generation and jet length were investigated experimentally at microwave powers of 500 W, 1000 W, 1500 W, 2000 W, and 2500 W. Figure 5 shows the plasma jet images during 1-5 s of burning under different microwave powers. When the microwave power is 1000 W, a bright light is produced in the resonant cavity as the gas discharge generates air plasma, but the resulting plasma jet is short: at low microwave power, the electric field strength allows only a limited amount of air to be ionized into plasma. As the microwave power was increased to 1500 W, the intensity of the gas discharge increased and the height of the plasma jet increased significantly, with the jet already extending out of the resonant cavity and appearing inside the quartz tube. As the microwave power is increased further, the plasma jet continues to lengthen. At 2000 W and 2500 W, the plasma jet grows more in area, while the growth in jet length is no longer as pronounced as before. The jet itself can be seen to fluctuate up and down at different times at the same power.

Figure 6 shows the median, mean and maximum values of the air plasma jet length at different microwave powers. The jet length is measured from the contact surface between the quartz tube and the boss. The median plasma jet lengths at the different microwave powers were 0 cm, 0.4 cm, 7.3 cm, 10.95 cm, and 12.87 cm, respectively. Above 1000 W, the difference between the average and median jet lengths at each power is within 1%. When the microwave power was increased from 1500 W to 2000 W, the median jet length increased by 50%, and when it was increased from 1500 W to 2500 W, the jet length increased by 76%, meaning that the growth of the air plasma jet length with microwave power is nonlinear and its rate gradually slows. The maximum jet length at 1500 W does not exceed the minimum at 2000 W, while the average at 2000 W already exceeds the minimum at 2500 W; this also indirectly confirms that the enhancement of jet length by microwave power gradually weakens. In general, when the gas flow rate is fixed, increasing the microwave power lengthens the plasma jet, but this gain weakens as the power increases. It can be inferred that once the power reaches a certain level, the gas is ionized to the greatest possible extent, so further power increases cannot raise the total amount of plasma, and the length and area of the jet no longer increase.
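The nonlinear gain in jet length can be checked directly from the quoted medians. A sketch using the median values from the text; the printed percentages reproduce the 50% and 76% figures:

```python
powers = [500, 1000, 1500, 2000, 2500]   # microwave power, W
medians = [0.0, 0.4, 7.3, 10.95, 12.87]  # median jet length, cm (from the text)

base = medians[2]                         # 1500 W benchmark
for p, m in zip(powers[3:], medians[3:]):
    print(f"{p} W: +{(m - base) / base:.0%} vs 1500 W")
# 2000 W: +50% vs 1500 W
# 2500 W: +76% vs 1500 W -> only +26 points for the second 500 W step
```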
In the combustion jet experiments on ADN-based liquid propellant under microwave power, guided by the previous microwave-ionization experiments on the gas working medium and by the minimum power required for propellant ignition, the propellant flow rate was kept at 20 mL/min and the gas flow rate at 20 L/min. The initial microwave power was raised from 500 W to 1500 W, and the combustion of the ADN-based liquid propellant was tested at five microwave power levels: 1500 W, 1750 W, 2000 W, 2250 W, and 2500 W. Figure 7 shows the combustion images of the ADN-based liquid propellant at different times under different microwave powers.
The input microwave power was normalized by the lowest microwave power that achieves ignition of the ADN-based liquid propellant, and the median jet length was normalized by the inner diameter of the quartz tube to obtain dimensionless parameters. Figure 8 shows the dimensionless flame jet length under the action of microwave power. Parameter fitting of the dimensionless data shows that all points are distributed on both sides of a straight line, i.e., the relationship is linear. The microwave power has an enhancing effect on the flame jet length, and the limit of this enhancement is related to the combustion medium itself. When only the gas working medium was fed, the plasma jet length generated by microwave ionization still increased with microwave energy, but the increase ratio declined from 2000 W to 2500 W. After adding the propellant, the flame jet length is also increased by the microwave: taking the 1500 W jet length as the benchmark, every 250 W increase in microwave power lengthens the flame jet by close to 20%, indicating that the enhancement of flame length by microwave energy shows no sign of attenuation. In general, with the gas working medium and propellant flows unchanged, the microwave input improves the flame length noticeably. In the experiments, the gas flow rate was kept constant by the flow meter, the microwave power was gradually increased, and the temperature of the stabilized plasma jet was measured as the microwave power increased.

Figure 9 shows the plasma temperature at different microwave powers during stable combustion, where the error bars represent the fluctuation range of the flame temperature during the stable combustion period. Since no microwave plasma is generated at 500 W, the corresponding value can be regarded as the room temperature during the experiment, 298.33 K. At 1000 W the plasma jet is short, and the collected data reflect the temperature of the hot air flowing upward in the tube after the jet is generated, or of the invisible outer edge of the flame, so the temperature fluctuation is relatively large and the reading relatively inaccurate. When the microwave power is 1500 W or above, the thermocouple can measure the plasma jet itself directly.
Taking the 1500 W temperature as the benchmark, the temperature rise between the successive cases of 1500 W, 2000 W, and 2500 W is approximately 45%. Overall, the plasma jet temperature increased with microwave power; at 2500 W it reached 1400.15 K. Figure 10 shows the heating curves at microwave powers of 1500 W and 2000 W. The increase in microwave power raises not only the stabilized temperature but also the rate of temperature rise, which is faster at 2000 W than at 1500 W. Under the continuous action of a 2000 W microwave, the propellant combustion temperature takes 5.75 s to reach 800 K, whereas at 1500 W it takes about 10 s, meaning that the increase in power also enhances the temperature rise rate.
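Assuming the ramp starts near room temperature, the average heating rates implied by these times can be estimated as follows. A rough sketch; the 298.33 K room temperature is taken from the Figure 9 discussion, and a linear ramp is an assumption:

```python
t_room = 298.33   # K, room temperature from the text
t_target = 800.0  # K

for power, t_rise in [(1500, 10.0), (2000, 5.75)]:
    rate = (t_target - t_room) / t_rise   # average heating rate over the ramp
    print(f"{power} W: average ramp ~{rate:.0f} K/s")
# 1500 W: ~50 K/s; 2000 W: ~87 K/s -> the higher power nearly doubles the effective heating rate
```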
Influence of Gas Flow on Air Plasma Jet
Before the ADN-based liquid propellant is sprayed into the resonant cavity, the microwave device generates a strong electric field in the cavity to ionize the gas and form an air plasma. When the gas flows in the cylindrical quartz tube, different gas flow rates produce different flow states.
The ratio of the inertial force to the viscous force of a fluid is defined as the Reynolds number, which is usually used to characterize the flow state of the fluid. It is a dimensionless number and can be used to judge the state of different gases and liquids. The Reynolds number is denoted Re, and its formula is as follows:

Re = ρvd/μ,

where ρ is the fluid density, v is the flow velocity, d is the characteristic length (here the inner diameter of the quartz tube), and μ is the dynamic viscosity.
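The 38 L/min critical flow rate quoted below follows from applying Re = 2300 to pipe flow in the 25 mm inner-diameter quartz tube. A sketch of the calculation; the kinematic viscosity of air, ν ≈ 1.4 × 10⁻⁵ m²/s, is an assumed property value not stated in the paper:

```python
import math

d = 0.025        # quartz tube inner diameter, m
nu = 1.4e-5      # kinematic viscosity of air, m^2/s (assumed)
re_crit = 2300   # critical Reynolds number for pipe flow

v_crit = re_crit * nu / d                 # critical mean velocity, m/s
q_crit = v_crit * math.pi * d**2 / 4      # critical volume flow, m^3/s
print(f"critical flow rate ~{q_crit * 60_000:.0f} L/min")  # ~38 L/min, matching the text
```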
In this paper, the gas flows into the quartz tube from the lower end, and the quartz tube can be regarded as a circular channel of constant diameter. For a straight pipe, the critical Reynolds number lies between 2000 and 2300: below this range the flow can be regarded as laminar, and above 2300 as turbulent. Taking Re = 2300 as the critical value, the corresponding critical air flow rate is 38 L/min. Since the flow rate of the gas working medium was kept below 33 L/min in the experiments, the gas can be considered to remain in a laminar state. In the study of the effect of gas flow rate on the plasma jet, the microwave power was kept at 1500 W, and the gas flow rates were set to 2 L/min, 8 L/min, 14 L/min, 20 L/min, 26 L/min, and 32 L/min. The influence of gas flow rate on plasma generation and development is shown in Figure 11. When the airflow rate is maintained at 2 L/min, breakdown of the gas working medium can be achieved under microwave action, but no plasma jet is generated; this shows that at very low flow rates the device can ionize the gas but cannot produce a significant plasma jet.

As shown in Figure 12, when the gas flow rate is 8 L/min or 14 L/min, the gas generates a relatively stable and continuous plasma jet under microwave action. The jet gradually weakens and becomes thinner from bottom to top, and its color gradually lightens; the shape and area of the jet differ little between moments. When the gas flow rate increased to 20 L/min, the length of the plasma jet was significantly shortened compared with 14 L/min, and the width of the jet was also reduced. With a further increase in flow rate, part of the gas is carried out of the reaction zone by the rising flow without being ionized, which reduces the area of the plasma jet and greatly degrades its continuity. At 26 L/min, some of the plasma jets were interrupted, the jet area was markedly reduced, and the jet tip became seriously blurred; at 32 L/min, plasma could still be generated, but the high gas velocity made the jet very weak and no longer stable and continuous. During the experiment, when the gas flow rate was too large, the sound of the airflow leaving the quartz tube could be clearly heard.

Figure 13 shows the air plasma jet length under different gas working medium flow rates. When the gas flow rate is 8 L/min or 14 L/min, the jet length fluctuates little, and the longest jet, the shortest jet, and the median jet length differ only slightly (the median difference is 6 mm). Above 14 L/min, the reduction of the jet length becomes more obvious: taking the 14 L/min jet length as the benchmark, the reductions in the following three groups are 21.6%, 40.4%, and 75.3%, respectively. The gap between the minimum and maximum jet lengths also widens beyond 14 L/min. In general, at 8 and 14 L/min the plasma jet length is relatively stable, whereas above 14 L/min it shortens continuously and all jet length statistics show a downward trend.
Figure 14 shows the dimensionless processing of the plasma jet length under different gas working medium flow rates. The abscissa is the Reynolds number, and the ordinate is the median air plasma jet length divided by the inner diameter of the quartz tube. Through parameter fitting after the dimensionless processing, the data are found to be distributed in exponential form.

To sum up, the gas flow rate is closely related to the generation of the plasma jet, and a flow rate that is either too large or too small has an adverse effect on the jet length. When the gas flow rate is too small (2 L/min), although the gas working medium is ionized relatively completely, it is difficult to generate a long jet. When the flow rate is large, the plasma jet is also affected: a large amount of gas is not ionized and exits the reaction zone with the upward flow, shortening the plasma jet to the point where continuous plasma generation is not possible. Subsequent attempts were made to continue increasing the gas flow; the experiments proved that a larger gas flow greatly impairs the generation and maintenance of plasma, and plasma maintenance could not be achieved at 38 L/min.
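Since Figure 14 plots the dimensionless jet length against Reynolds number, the tested flow rates can be converted with the same pipe-flow relation used above (same assumed kinematic viscosity):

```python
import math

d, nu = 0.025, 1.4e-5                 # tube inner diameter (m), kinematic viscosity of air (assumed)
area = math.pi * d**2 / 4

for q_lpm in [2, 8, 14, 20, 26, 32]:  # tested gas flow rates, L/min
    v = (q_lpm / 60_000) / area       # mean velocity, m/s
    print(f"{q_lpm:>2} L/min -> Re ~ {v * d / nu:,.0f}")
# roughly 120 to 1,940: all cases stay below the ~2300 laminar-turbulent threshold
```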
Influence of Gas Flow Rate on the Flame of ADN-Based Liquid Propellants

It can be seen from Figure 15 that the flame brightness and the height of the inner flame first rise and then fall with increasing gas flow rate; a high gas flow rate leads to decreased flame brightness and a reduced jet area, and the flame of the ADN-based liquid propellant burning at a high gas flow rate is pale in color. From the images at different moments (black-and-white images), it can be seen that when the gas flow rate is small, the combustion reaction is relatively mild, the brightness in the quartz tube is low, and the flame jet formed is short, although its length and area are relatively stable. At a gas flow rate of 20 L/min, the flame jet length increases markedly and the brightness in the quartz tube is extremely high, which means that the combustion reaction is more intense in this case. At 26 L/min, the flame jet begins to show discontinuity: the high-speed airflow blows part of the flame away from the main body of the jet, and the jet height decreases compared with 20 L/min. At 32 L/min, the brightness in the quartz tube decreases significantly compared with 20 L/min and the flame jet height also drops markedly, but the combustion reaction can still be maintained.

Figure 16 shows the statistical parameters of the propellant combustion jet at different gas flow rates. When the gas working medium flow rate is below 26 L/min, it has little effect on the minimum jet length; the minimum values of the first three groups of experiments are all around 6 cm. At 8 L/min the flow is low and the combustion jet is short but stable, with a maximum length of 7.02 cm. When the flow rate increases to 20 L/min, the combustion jet grows markedly, reaching a maximum of 14.51 cm, nearly double that at 14 L/min; the median jet length also increases significantly, by 73% compared with 14 L/min, indicating that flame combustion is best at 20 L/min. A further increase in the gas flow rate reduces the average jet length. At 26 L/min the maximum flame jet length increases, but the median decreases markedly along with the average and minimum values, the median falling by 21.68% compared with 20 L/min. At 32 L/min the maximum jet length also drops significantly, by 5 cm compared with 26 L/min, and the median decreases by 31.9% relative to the 26 L/min median, a total of 3.06 cm. At the higher flow rates, part of the flame is carried out of the reaction zone before combustion proceeds, resulting in a shortening of the flame length.
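The quoted percentage changes and the 3.06 cm absolute drop together pin down the median flame jet lengths, so the chain can be reconstructed as a consistency check. A back-of-envelope sketch; the derived medians are inferences from the quoted figures, not values reported directly:

```python
m26 = 3.06 / 0.319          # 31.9% of the 26 L/min median equals 3.06 cm
m32 = m26 - 3.06            # median at 32 L/min
m20 = m26 / (1 - 0.2168)    # the 26 L/min median is 21.68% below the 20 L/min median
m14 = m20 / 1.73            # the 20 L/min median is 73% above the 14 L/min median

for q, m in [(14, m14), (20, m20), (26, m26), (32, m32)]:
    print(f"{q} L/min: median ~{m:.1f} cm")
# ~7.1, ~12.2, ~9.6, ~6.5 cm -- consistent with the rise-then-fall trend peaking at 20 L/min
```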
Figure 17 shows the temperature rise curves of ADN-based liquid propellants at gas flow rates of 8 L/min, 20 L/min, and 26 L/min. Taking temperature as an indicator, the temperature change process is divided into four modes according to the way the gas flows: diffusion flow, vortex flow, full flow, and steady flow. When microwave discharge begins, air diffuses and flows in the resonator and the temperature rises rapidly to a peak T2. When the spray and the airflow mix, a vortex occurs, the ADN cannot be completely decomposed, and the temperature drops briefly to T3. With the increase of the airflow, the oxygen content in the resonator increases, the oxidation reaction of methanol occurs, and the flame temperature rises sharply. Finally, the propellant burns more completely as the gas flow stabilizes. In the case of 8 L/min, the temperature rise curve shows a stepped rise, with a flat period of nearly 1 s after the temperature climbs, after which the temperature continues to rise until it reaches its maximum.
The temperature rise curve for 20 L/min also has a stepped rise, but its duration is very short, confined to the first 2.5 s of the rise. Unlike the 8 L/min case, the 20 L/min curve has a period of decline: a brief drop in temperature followed by a rapid rebound to the peak temperature. In contrast, the step-like shape of the temperature rise curve is more pronounced and lasts longer in the 26 L/min case; this curve also shows a short temperature drop after reaching its extreme value, but the drop lasts significantly longer than at 20 L/min. The extreme values and brief drops in the heating curves may be due to decomposition of the ADN itself before the exothermic reaction under microwave action reaches its first extreme. In the case of 8 L/min there is no significant temperature drop but rather a flat period, which may be due to the lower gas flow rate: at the same input power, the ADN-based liquid propellant absorbs more power, so the combustion temperature does not drop significantly after reaching the extreme due to decomposition. Comparison of the temperature profiles shows that the temperature rises faster at 20 L/min and that the temperature steps during heating are shorter.
Conclusions
In this paper, through experiments on the actual performance of the resonator under different microwave powers and gas flow rates, the following conclusions are drawn:
(1) Microwave power plays the leading role in the ignition performance of the microwave plasma and the combustion of the propellant. When the gas flow rate is constant, increasing the microwave power effectively increases the length and temperature of the air plasma jet. During stable propellant combustion, stopping the microwave power input causes the combustion reaction to cease.
(2) After combustion of the ADN-based liquid propellant is achieved, both the flame jet length and the flame temperature first increase and then decrease with increasing gas flow rate, with the longest flame jet and the highest temperature at 20 L/min.
(3) Changes in microwave power and gas flow rate did not change the wavelength ranges corresponding to the free radicals in the spectrum, but they had a certain influence on the spectral intensity of the radicals.
Effects of root undercutting, fertilization and thinning on seedling growth and quality of oriental beech (Fagus orientalis Lipsky) seedlings
In this study, the effects of root undercutting, fertilization and thinning on the morphological characteristics of oriental beech seedlings grown in the Karadağ Forest Nursery were investigated. On 2+0 years old seedlings, two treatments applied in July were examined: root undercutting combined with thinning (A), and thinning plus 50 g of ammonium nitrate fertilizer per m² (B). Moreover, quality classes of the seedlings were determined for each treatment. According to the results of this study, significant differences were found between the morphological characteristics of the seedlings depending on the treatments. In 2+0 years old seedlings, statistically significant differences were determined in terms of seedling length, sturdiness quotient, number of side branches, shoot and root fresh weight, shoot and root dry weight, shoot dry weight/root dry weight ratio, root percentage and Dickson quality index, but not in terms of root collar diameter, fresh seedling weight and dry seedling weight. According to the Turkish Standards Institute's deciduous seedling standard (TS 5624/21.03.1988) prepared in March 1988, 48.8% of the seedlings in treatment A, 76.7% in treatment B and 78.9% in the control group were found to be Class I. According to the Dickson quality index, the quality index values of the 2+0 A, 2+0 B and control seedlings were determined as 1.05, 0.74 and 0.68, respectively. In determining the Dickson quality index, using the parameters that are important for assessing seedling quality is essential for obtaining more accurate results.
INTRODUCTION
Beech, represented by 10 species in the northern hemisphere, is one of the most important tree genera of deciduous forests. Two species occur naturally in Turkey: oriental beech (Fagus orientalis Lipsky) and European beech (Fagus sylvatica L.). Oriental beech is one of the most common and economically important native deciduous tree species of Turkey. Oriental beech forests in Turkey cover a total of 1 899 929 ha, of which 1 630 196 ha is forest of normal structure and 269 733 ha is forest of degraded structure (Anonymous 2015).
Oriental beech is native from the Balkans in the west, through Anatolia (Asia Minor), to the Caucasus, northern Iran and Crimea in the east. Wide hybridization zones between oriental and European beech are observed in the central-east and eastern Rhodope Mountains in Bulgaria and Greece. In Turkey, oriental beech is distributed in Thrace, south of the Marmara Sea and throughout the Black Sea Region. The species occurs both as pure stands and in mixed forests with conifers and other deciduous tree species. Its vertical distribution is between 200 and 2200 m above sea level (Atalay 1992, Denk et al. 2002, Gailing and Wuehlisch 2004, Kandemir and Kaya 2009, Papageorgiou et al. 2008).
In Turkey, oriental beech is the tree species with the second largest distribution among deciduous trees after Quercus spp. (Anonymous 2015), and plantation studies of this species are carried out on quite a large scale. Plantation activities are expensive, long-term investments. Selection of appropriate species and provenance, and growing seedlings of high physiological and morphological quality from these seeds, are the most important issues in plantation studies (Tosun et al. 1993). Especially in areas with extreme conditions, the production of high-quality, suitable seedlings is highly important for conducting plantation studies in the most economical way and for achieving the highest success (Üçler and Turna 2003, Yahyaoğlu and Genç 2007). A quality seedling shows a high survival percentage in plantations, grows very well in the first years, and offers these advantages at an acceptable cost (Tosun et al. 1993).
One of the important factors affecting plantation success is seedling quality, and it is possible to increase seedling quality with an appropriate seedbed density (Cengiz and Şahin 2002). Seedbed density has a direct effect on the diameter, height and physiological activity of the seedlings (Tolay 1987, Tonguç 2009). Sparse cultivation may entail economic losses (Saatçioğlu 1976), while overly dense cultivation may produce weak seedlings (Alım and Kavgacı 2017).
The characteristics used to determine the quality of forest tree seedlings are generally grouped into three categories: genetic, morphological and physiological (Duryea 1984, Genç 1992, Genç and Yahyaoğlu 2007). Morphological characteristics such as root collar diameter and seedling length give an idea of whether a seedling is suitable for planting (Deligöz 2012). In addition, a shoot/root ratio of less than 3 is generally desirable in a quality seedling in order to increase the survival percentage in the field (Grossnickle et al. 1988, Tetik 1995). It is stated that variables such as growth and survival percentage alone are not sufficient for assessing seedling quality, and that the morphological and physiological characteristics of seedlings should be evaluated together (Ritchie 1984; Thompson 1985). Furthermore, the morphological and physiological characteristics of seedlings affect field success after planting (Chavasse 1980, O'Reilly and Keane 2002).
Root growth potential, one of the physiological characteristics, is perhaps the most reliable indicator of field performance. A high root growth potential, allowing seedlings to start developing rapidly after planting, is important for obtaining quality seedlings (Ritchie 1985). The survival of newly planted seedlings, their ability to overcome planting shock and their capacity to develop roots depend on the white-tipped roots they form (Dirik 1990). In a study on Turkish red pine, water uptake was higher in seedlings with white-tipped roots than in those without (Dirik 1991). In addition, factors such as seedling age, root cutting, irrigation, fertilization, transplanting, shading, seedling density and nursery soil characteristics also affect seedling quality (Eyüboğlu 1979, Alım et al. 2008, Landis 2008, Tonguç and Aydın 2019). Root undercutting is one of the most important cultivation techniques enabling seedlings to attain certain quality characteristics. This practice restrains the height growth of seedlings and the elongation of the roots, encourages the formation of more compact fibrous roots, and reduces the shoot/root ratio by inducing dormancy and hardening (Landis 2008). Although there are several studies on seedling growing techniques in oriental beech (Tosun et al. 2002, Güney 2009, Atik 2013, Güney et al. 2016), the current study differs from previous ones in that root undercutting, fertilization and thinning are evaluated together.
The aim of this study is to determine the effects of root undercutting, fertilization and thinning on the morphological characteristics of oriental beech seedlings and to identify the best treatment for growing quality seedlings. In addition, based on the results obtained, it is aimed to reveal the quality classes of the seedlings and to see how these classes change with treatment. Different quality class criteria, including the TSI standard and the Dickson index, were applied in the study, and the comparison of these criteria constitutes another objective of the study.
Material
The study was carried out in the Karadağ Forest Nursery (1400 m) within the borders of the Trabzon Regional Directorate of Forestry. The nursery has a slope of 30%, and its soil texture varies between clay loam, sandy clay loam and sandy loam. The soil pH ranges between 3.9 and 4.8; the nursery soil contains no lime and is poor in phosphorus. According to data from the meteorological station closest to the nursery, temperatures range between 3.0 and 11.4 ℃, with an average of 7.0 ℃. The average annual rainfall is 342.7 mm, the number of rainy days is 118.3, and the number of frosty days is 97.5. As study material, 2+0 years old oriental beech (Fagus orientalis Lipsky) seedlings of Trabzon origin were used. The geographical location of the Karadağ Forest Nursery in Turkey is shown in Figure 1.
Method
Three different treatments were performed on 2+0 years old oriental beech seedlings. These applications are as follows.
A: In July, root undercutting (at 20 cm depth below soil level) and thinning (50% of the seedlings were removed from the seedbeds) were applied.
B: Thinning was applied; no fertilizer was given in the first year, and in the fifth month of the second year, 50 g of 33% ammonium nitrate fertilizer per m² was applied.
C (Control): The routine practice of the nursery, without root undercutting or thinning, was considered the control.
The experiment was designed as a randomized complete block design with three replications. The location and order of the replications of each treatment were randomly determined. Apart from root undercutting, thinning and fertilization, cultural practices (irrigation, weed control) were carried out according to the routine work program of the Karadağ Forest Nursery. Root undercutting was performed by passing the undercutting knife attached to the tractor under the seedbed, parallel to the soil surface. The morphological characteristics of 1+0 year-old oriental beech seedlings were determined. The lifting of 2+0 years old seedlings was carried out on 30 seedlings from each replication of each treatment. Root collar diameter (RCD), seedling length (SL), number of side branches (NSB), shoot fresh weight (SFW), root fresh weight (RFW), shoot dry weight (SDW), root dry weight (RDW), fresh seedling weight (FSW) and dry seedling weight (DSW) were measured. Using these measurements, shoot dry weight/root dry weight (SDW/RDW), root percentage (RP = RDW/DSW), sturdiness quotient (SQ = SL/RCD) and Dickson quality index (DQI = DSW / (SQ + SDW/RDW)) were determined for each seedling. The Dickson quality index is a formula used to determine the quality of forest tree seedlings; it was developed by Dickson et al. in 1960. Seedlings are considered high quality if their DQI values are close to or higher than 1 (Aslan 1986), and the index expresses the potential of seedlings for field performance (Mañas 2009). A ruler with a measurement accuracy of 0.1 cm was used for length measurements, an electronic caliper with an accuracy of 0.01 mm for diameter measurements, and a digital scale with an accuracy of 0.001 g for weight measurements.
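A minimal sketch of how SQ and DQI follow from the measured characters. The input values below are hypothetical, chosen only to illustrate the formulas; any DQI near or above 1 would indicate a high-quality seedling:

```python
def sturdiness_quotient(sl_cm: float, rcd_mm: float) -> float:
    """SQ = seedling length (cm) / root collar diameter (mm)."""
    return sl_cm / rcd_mm

def dickson_quality_index(dsw_g: float, sq: float, sdw_g: float, rdw_g: float) -> float:
    """DQI = total dry weight / (sturdiness quotient + shoot/root dry weight ratio)."""
    return dsw_g / (sq + sdw_g / rdw_g)

# hypothetical seedling: 30 cm tall, 6 mm RCD, 2.5 g shoot and 2.0 g root dry weight
sq = sturdiness_quotient(30.0, 6.0)
dqi = dickson_quality_index(2.5 + 2.0, sq, 2.5, 2.0)
print(f"SQ = {sq:.2f}, DQI = {dqi:.2f}")   # SQ = 5.00, DQI = 0.72
```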
Data were analyzed with the SPSS 23 package program. To determine the effects of root undercutting, thinning and fertilization on the basic morphological characteristics of oriental beech seedlings, analysis of variance (one-way ANOVA) and Duncan's test were performed for each morphological character. In addition, the seedlings from the thinning and root undercutting treatments were evaluated according to the Turkish Standards Institute's deciduous seedling standard (TS 5624/21.03.1988) prepared in March 1988 (Anonymous 1988) and their Dickson quality index values (Dickson et al. 1960).
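For readers without SPSS, the ANOVA step can be reproduced with SciPy's one-way ANOVA; Duncan's multiple range test has no standard SciPy implementation, so another post-hoc test would have to substitute for it. The arrays below are hypothetical placeholders for the per-treatment measurements:

```python
from scipy import stats

# hypothetical seedling-length samples (cm) for treatments A, B and the control
sl_a = [24.1, 25.3, 23.8, 26.0, 24.7]
sl_b = [28.4, 29.1, 27.6, 30.2, 28.8]
sl_c = [26.2, 25.9, 27.1, 26.8, 25.5]

f_stat, p_value = stats.f_oneway(sl_a, sl_b, sl_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant treatment effect, to be followed by post-hoc comparisons
```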
Morphological characteristics of seedlings
The averages of the morphological characteristics of 1+0 year-old oriental beech seedlings, and the means of the morphological characters measured after the root undercutting, fertilization and thinning treatments applied to 2+0 years old seedlings, are given in Table 1. As seen in Table 1, the treatments had statistically significant effects on SL, SQ, NSB, SFW, RFW, SDW, RDW, SDW/RDW, RP and DQI in 2+0 years old seedlings, but no statistically significant effect on RCD, FSW and DSW.
Although there was a partial increase in RCD in the thinned seedlings, the difference was not statistically significant. Seedling length had higher values in the seedlings subjected to thinning and root undercutting. According to the results of the study, the treatments caused a partial increase in the total fresh and dry weights of the seedlings compared with the control, but the difference was not statistically significant. However, when evaluated on the basis of shoot and root weights separately, the treatments had significant effects. Indeed, RFW and RDW were 6.43 and 1.89 g, respectively, in the control, while in treatment A they were 7.65 and 2.86 g. SFW and SDW were 6.00 and 2.42 g, respectively, in the control, versus 6.73 and 2.76 g in treatment B.
In general, it is stated that seedlings grown at low sowing density develop larger diameters and heavier roots and shoots in terms of dry weight, while seedling length and the shoot/root ratio are not always affected by seedling density (Duryea 1984). In different studies on Picea orientalis (L.) Link (Eyüboğlu 1988), Robinia pseudoacacia L. (Cengiz and Şahin 2002, Semerci et al. 2008), tree of heaven (Cengiz and Şahin 2002) and Elaeagnus angustifolia L. (Gülcü and Çelik Uysal 2010), it was found that low seedling density increases root collar diameter and seedling weight but does not affect seedling length. In some studies, it was reported that seedbed density affects only root growth potential among the morphological and physiological characteristics of Acer negundo L. seedlings (Deligöz 2012), that it has positive effects on seedling length, root collar diameter and seedling weight in Juglans nigra L. and Quercus rubra L. (Schultz and Thompson 1997), and that it has statistically significant effects on seedling length, root collar diameter, shoot dry weight and root dry weight in 1+0 year-old Crataegus monogyna Jacq. seedlings (Bayar and Deligöz 2016).
Root undercutting increased the fresh and dry root mass of 2+0 years old oriental beech seedlings, while fertilization increased the fresh and dry shoot mass. Consequently, the percentage of root dry weight increased and the SDW/RDW ratio decreased in the seedlings subjected to thinning and root undercutting. Johnsen et al. (1988) reported that a high root percentage is an important factor in the survival of seedlings with root systems rich in capillary roots.
In the B treatment, the SDW/RDW ratio increased. For a quality seedling to achieve a high survival percentage in the field, a shoot/root ratio of less than 3 is generally desirable (Grossnickle et al. 1988, Tetik 1995). In the present study, SDW/RDW values were below 3 for all treatments. In a study on the effect of different rates of biohumus on the first-year development of oriental beech seedlings, seedlings grown from seeds soaked for 12 and 24 hours in a 0.5 ml biohumus solution grew the most in terms of both seedling length and root collar diameter (Güney et al. 2010).
Although seedling length was lower in the A treatment than in the control, FSW was about the same as the control and DSW was higher. This can be explained by an increase in total seedling weight due to the fibrous roots induced by root undercutting (Table 1). Studies on oriental beech emphasize the necessity of root undercutting to overcome the thin rooting of oriental beech seedlings and to form appropriate root systems; however, it is also stated that seedlings whose roots have been undercut must remain in the seedbeds for one more year so that their root systems can recover (Şimşek 1994).
While the mean sturdiness quotient of 1+0 year-old seedlings was 4.32, in 2+0 years old seedlings it was 4.20 in the A treatment, 5.49 in the B treatment and 5.32 in the control. Owing to the positive effect of the B treatment on seedling length, the highest SQ value was obtained in this treatment. Seedling length and root collar diameter have a significant effect both on survival percentage and on seedling growth after planting; seedlings with high values of seedling length and root collar diameter are generally superior to others in survival percentage and field development (Rose et al. 1990, Yahyaoğlu and Genç 2007). In a study on Taurus cedar, seedling quality classes were found to have an important effect on the development of seedlings in the field (Eler et al. 1993).
In the A treatment, where SL was lower and RCD higher, the SQ value was lower (Table 1). In a study investigating the effects of vermicompost on the morphological characteristics of 1+0 year-old oriental beech seedlings at different planting densities, vermicompost had significant effects on development, and growth increased as planting density decreased; with the best-performing vermicompost, the sturdiness quotient of seedlings planted at 10 cm intervals averaged 3.08 (Atik 2013). While developing new roots, newly planted seedlings are not yet sufficiently photosynthetic and therefore draw the nutrients they need from their carbohydrate reserves. The positive effect of carbohydrate content is thus very important, especially for newly planted bare-rooted seedlings. It has been reported that low seedling density improves carbohydrate reserves, hence increases resistance, and that seedlings with larger nutrient reserves will develop better in the field (Duryea 1984, Lavender 1984, Wang 1998, Deligöz 2012).
Distribution of seedlings into quality classes
The distribution of the oriental beech seedlings subjected to root undercutting, fertilization and thinning, classified according to the Turkish Standards Institute standard for deciduous tree seedlings (TS 5624) (Anonymous 1988) and according to Dickson quality index values (Dickson et al. 1960), is given in Table 2. As can be seen in Figure 2, according to the TSI quality classification the percentage of Class I among 2+0 years old seedlings was highest in the control group, followed by the 2+0 B treatment, the 2+0 A treatment and the 1+0 seedlings. According to the Dickson quality index, the highest value among 2+0 years old seedlings occurred in the A treatment (1.05), followed by the B treatment (0.74), with the control treatment (0.68) in last place. For the 1+0 year-old seedlings the DQI value was 0.25. These data show that seedlings with a low quality rating according to the TSI standard can nevertheless rank highly according to the Dickson quality index. This can be explained by the fact that the TSI standard for deciduous tree seedlings (TS 5624) uses only seedling length and root collar diameter as criteria and specifies no age criterion. The Dickson quality index, by contrast, draws on more informative parameters, such as the sturdiness quotient, the shoot dry weight/root dry weight ratio and dry seedling weight in addition to seedling length and root collar diameter, and therefore yields more accurate results.
Similarly, in a study examining the effects of seedbed density on seedling morphology in Juniperus oxycedrus L. subsp. oxycedrus, it was found that although density made no difference to some morphological characters, the control group had the lowest DQI value (Alım and Kavgacı 2017).
In another study, investigating the effect of thinning and root undercutting on morphological characters in Anatolian black pine, the Dickson quality index ranged between 0.44 and 1.29; the different thinning and root undercutting applications affected seedling morphology, and the lowest Dickson quality index value was found in the control treatment (Çetinkaya and Deligöz 2012). In a study on Juniperus foetidissima Wild. seedlings, seedbed density had significant effects on root collar diameter, seedling length, shoot, root and seedling dry weight, shoot/root ratio, root percentage, sturdiness quotient and Dickson quality index (Özüberk and Deligöz 2016).
CONCLUSION
Success in plantation work is directly proportional to how well the seedlings used meet quality standards. In plantation work carried out in forests, technical problems usually take priority and the seedling comes second; yet degraded areas can only be transformed into productive forests by using quality seedlings, so the first task should be to develop a standardization of seedlings. This study aimed to grow seedlings capable of the best increment through different root undercutting, fertilization and thinning interventions and to obtain material that meets the quality criteria; in other words, it aimed to identify the most suitable root undercutting, fertilization and thinning applications for producing seedlings with high field success in plantations. Such seedlings can grow well by remaining actively alive in their first years, and these advantages also make them economically favourable.
According to the Turkish Standards Institute standard for deciduous tree seedlings, the proportion of Class I seedlings was highest in the control group, whereas the control group had the lowest value according to the Dickson quality index. The TSI classification, which uses few criteria, can therefore give misleading results. For this reason, when determining quality it is important in practice to use the criteria that really matter for the seedlings and, in particular, to classify according to criteria chosen to match the purpose of seedling cultivation.
In future research aimed at obtaining quality seedlings, the most suitable seedling density should be determined through thinning trials adapted to the tree species grown in each nursery and to the site conditions of the nursery. Thinning studies should also be supported with root undercutting in order to obtain seedlings with a larger root collar diameter and a healthy root system. Especially where ground cover is a problem and tall seedlings are required, nursery fertilization is important for overcoming it. Accordingly, root undercutting time, thinning time and density, and fertilizer type, quantity and timing should be studied according to nursery conditions, the species used and the production purpose, and the results should be validated with field trials.
CHILDREN, PROPAGANDA AND WAR, 1914-1918: AN EXPLORATION OF VISUAL ARCHIVES IN AN ENGLISH CITY
Since 2014 there have been programmes of commemorative events across Europe to mark the Great War, 1914-18, and early amongst these was an exhibition of photographs, Paris 14-18, la guerre au quotidien, at the Galerie des bibliothèques de la Ville de Paris. The photographs were all taken by Charles Lansiaux and record daily life in the city, from the recruitment and departure of French soldiers to victory celebrations in 1918. The exhibition importantly pointed to the fact that the Great War was the first conflict in which the experiences of civilians were extensively visually documented. Further, as publicity for the exhibition noted, "the recurring presence of groups of children in these photographs reveals the new place assigned to them at the dawn of the twentieth century". Taking a lead from this exhibition, this paper investigates the wartime experiences of children in one English city, namely Birmingham, and how they were visually captured. In particular we focus on documenting and analysing the connections between the representation of children at war, propaganda and the promotion of patriotism.
INTRODUCTION
In 2012, as part of the Cultural Olympiad, the Library of Birmingham, Birmingham Museums and Art Gallery and the University of Birmingham collaborated on the exhibition Children's Lives. The exhibition used the archive and museum collections to document and explore the experiences of children in the city from the eighteenth century to the present day. Part of the exhibition looked at children's lives during the First and Second World Wars. Since then we have both been active in working with a range of cultural partners in the city to develop a programme of activities to commemorate the city's involvement in the First World War. This in turn led to an exhibition in the Library of Birmingham, which focused on the day-to-day experiences of people on the home front in Birmingham, and a successful bid to become one of the Arts and Humanities Research Council's First World War Engagement Centres.1 Childhood was a major theme in both initiatives. In particular, we were interested in exploring a series of interconnected questions about how war may have impacted on children. How did the experience of schooling change as a consequence of society's "war culture"? What role was ascribed to children and youth in the context of national wartime mobilization? How did schools and youth movements support the war effort? What were the experiences of refugee children displaced into a new urban environment? To what extent was there a moral panic about girls falling into sexual delinquency and increased illegitimacy, and how far did the absence of male authority figures contribute to the rise of criminal children and delinquency both during the war and in the decade immediately afterwards? How did the local state and voluntary organizations respond to the increase in orphans and single parenthood?2 Parallel to these local developments, programmes of commemorative activities began in the UK and in other parts of Europe. Amongst these events was an exhibition of photographs, Paris 14-18, la guerre au quotidien, at the Galerie des bibliothèques de la Ville de Paris. The photographs were all taken by Charles Lansiaux and record daily life in the city from the recruitment and departure of French soldiers to victory celebrations in 1918.3
Walking through the exhibition and viewing the individual photographs was a useful reminder that not only was the First World War a conflict fought in front of the camera lens, it was also the first conflict in which the experiences of civilians were extensively visually documented. Further, as publicity for the exhibition noted, "the recurring presence of groups of children in these photographs reveals the new place assigned to them at the dawn of the twentieth century". Children are "caught" watching military parades and adopting military poses, Boy Scouts march along with soldiers, and children are photographed in family groups alongside their fathers in military uniform. Domestic images of children playing in parks, shopping and being entertained by street vendors are displayed next to photographs of child refugees and their families arriving from Belgium and children queuing in soup kitchens. A photograph of a hospitalized child injured in a bomb attack is followed by images of children looking at bomb damage on the street. Child newspaper sellers distribute news reports of events on the Western Front, while others sell patriotic medals or are photographed collecting donations for war orphans and refugees. Children, their heads uncovered, observe the "spectacle" of a military funeral cortege, while other children are photographed looking at artillery, sitting astride large cannons and parading on Armistice Day. Collectively, Lansiaux's photographs reveal the multiple contingencies which informed and controlled children's experiences of wartime conditions.4 As a result of working with exhibitions as both curators and researchers, and seeing the visual history of childhood in wartime Paris, we decided that instead of following the traditional route of seeking answers to our questions about children's experiences of war in the written records housed in Birmingham's archives, we would begin our search with the quintessential modern medium of the camera and documentary photographic practice. The photographic historian Darren Newbury has argued in his study of photography under the Apartheid regime in South Africa that "[photographs] are the starting point for inquiry rather than its end",5 and we were also interested in the possibility that taking a non-traditional approach might lead us in research directions which we might not otherwise have followed had we begun with the written archive. At the outset we were conscious of the problematic "morass of contradiction, confusion and ambiguity"6 around the status of the photograph as pure document (it is very evident that some of Lansiaux's images were, for example, staged "tableaux"), but we were also sympathetic to Graham Clarke's contention that historic photographs are "literally records of a history otherwise unavailable to us", and that they privilege us as they foreground events which we look at as if through a glass darkly. As documents, such images are windows into a world otherwise lost and, to that extent, are significantly and appropriately documentary photographs.7 The use of photographs in historical and social research has its own literature. Becker, for example, advised the researcher to focus on the "questions" a photograph "might be answering", which could be very different from what the photographer actually intended, and then to look for corroborating evidence.8
Banks advocated the use of three sets of questions: (i) what is the image of, what is its content? (ii) who took it or made it, when and why? and (iii) how do other people come to have it, how do they read it, what do they do with it?9 More recently, Tinkler has suggested a five-step approach to using photographs: identifying basic details (place and time), scrutinizing images for content, considering material evidence, doing contextual research, and reflecting on meaning.10 Scott, unlike Becker, Banks and Tinkler, did not distinguish between photographs and written sources and identified four lines of enquiry: authenticity, credibility, representativeness and meaning.11 This approach is mirrored by Jordanova, who does not isolate photography from other visual media and promotes treating visual materials with the same rigour as any other source, asking basic questions about provenance, audience, and the context of both production and reception.12 There is clearly significant overlap between these different methods of engaging critically with photographs as historical evidence. While our primary concern was with the documentary content of photographs and the questions that this content raised for our interest in children, it was important that we remained alert to the issues around authenticity, audience, production and reception.13
BIRMINGHAM: VISUALISING THE CITY AT WAR
Unlike Paris, but perhaps more in line with other urban photography collections, we found no single extensive collection of First World War photographs in the Birmingham archive. Instead, the many images of the war and its effects are dispersed across a large number of collections. We prioritized the search in collections which we knew from previous experience to be likely sources of images of children: the archives of families, charitable institutions and war-related propaganda. We also targeted certain local newspapers which we knew began to use images heavily in this period, predominantly the most "tabloid" end of the local press in the form of the Birmingham Illustrated Weekly Mercury and, to a lesser extent, the Birmingham Gazette. Taking a sample year initially, we discovered a wealth of war imagery reproduced in the Birmingham Illustrated Weekly Mercury in particular, and broadened the time frame to the whole of the war period. In doing so, we recognized that the nature of the newspaper coverage is weighted towards positive images intended to bolster support for the war effort, a point recognized by the wartime Lord Mayor of Birmingham, Sir David Brooks, when he wrote that "Tribute should also be paid to the patriotic attitude adopted by the local newspapers throughout the war", and drew attention to their role in "educating public opinion" in supporting military recruitment and financial campaigns in particular.14 The original photographs used by the newspapers are presumed lost. Not surprisingly, collectively their documenting of a city at war, and in particular of the experiences of children, mirrors that produced by Lansiaux. In what follows we look at ten photographs from the many identified in the Birmingham archive and document the issues and questions that they prompted, both individually and collectively.
Brooks' comments quoted above reflect the fact that the city in which the photographs were taken was fully mobilized for war. As one of England's major industrial centres, Birmingham's manufacturing base was turned over en masse to the war effort, and to the making of armaments and munitions in particular. Local factories increased their output and their workforce; the employees of one local factory, the Austin Works in Longbridge, for example, grew from 2,800 in 1914 to over 20,000 by 1918. The city's population grew exponentially as male and female workers, soldiers, medical personnel, the wounded and refugees poured in from other parts of Britain and beyond. However, the city was also home to a small but vocal opposition to the war led by members of the Religious Society of Friends (also known as Quakers) and the political left, including members of the Independent Labour Party, among others. This opposition was manifested by individual Conscientious Objectors who refused to take up arms and were duly prosecuted, and by collective public demonstrations such as the women's peace marches held in the city at various intervals.15 [Figure reproduced with the permission of the Library of Birmingham.]
Turning specifically to the photographs, our first image is from the end of the war (figure 1). It is an example of the genre of the school photograph that became increasingly commonplace at the end of the nineteenth century: the whole school/class group portrait, and the familiar architecture of Birmingham school design, in this case at Bordesley Green, can be clearly seen behind the group. Children, mainly of infant and junior age, have been organized in rows with the youngest sitting at the front. Two rows of older children are standing on benches that are hidden from view, and behind them are much older boys in their scout uniforms. All of the other children are wearing some form of costume rather than their ordinary clothes, including eleven boys wearing sailor suits, five girls dressed as nurses, three boys wearing "Pierrot" outfits, a popular costume associated with the Edwardian music hall and seaside entertainment, and one boy dressed as a wizard. At first sight there appears to be an absence of adults, but closer scrutiny reveals the head of a man almost in hiding behind the back row and the leg, shoulder and part of the hat of a figure outside the frame who is attached to the scouts, occupying the space usually taken by an adult in school photographs.
Looking at the photograph, the eye is immediately drawn to the centrality of the Union Jack flag in the composition which, despite being held by two boy scouts, is caught moving. A boy wearing a Union Jack is also waving a second flag. Behind him another boy wears a sort of Union Jack cravat, and a girl in the row behind him has a shawl or Union Jack dress. Several children are wearing medals, including one of the nurses, while others wear rosettes. At the front a small boy wears a dark hat and outfit with different symbols attached, and on his front is an image of King George V. Further along, and placed directly in line with the billowing Union Jack flag, is a small boy in a sailor suit who is holding another flag, but not the Union Jack: it is the Stars and Stripes of America.
Only a few of the children can be described as looking happy, and indeed many look totally bemused or even bored with what is going on. Given the length of time it would have taken to organize them [and their costumes?], this apparent lack of enthusiasm for the experience is not surprising. All of these children were born before the outbreak of war and therefore would all have experienced its impact on their lives in some way. Their understanding of this experience would naturally be filtered by age and circumstance. But what would they have taken and remembered from this "celebratory" moment of which they are a part, a moment saturated with the symbols of patriotism and nation? What would they have drawn upon to make sense of it? These are questions to which we kept returning as we explored through the images the impact of the culture of war on Birmingham children. All photographs generate questions, and because photographs are "randomly inclusive", carrying an excess of information, any individual photograph has more than one story to tell.16 This plurality of narratives can be expanded by juxtaposing one image with others, which potentially opens up new lines of enquiry in the archive. Here we take four separate images. The first is of the Parsons family of Halesowen,17 1915, showing Abednego Parsons, his wife Fanny, and their daughters Sarah and Norah (figure 2). The second is of a Birmingham food queue in 1917 (figure 3). The third is from an article in the Birmingham Illustrated Weekly Mercury in 1918 about photographs 'found' on the battlefields (figure 4). The last image is from 1920 and is of children on a Birmingham Co-operative Society float at the May Day parade under the banner of "War Made us Fatherless" (figure 5).
Abednego was a Private in the Royal Army Medical Corps and served in France. He survived and returned home at the end of the war. The family is shown in a conventional studio pose with the father standing behind the mother. Elizabeth Edwards has described such photographs as "little theatres of self": in the anonymity of the studio they project an ideal, both personal and collective, capturing a moment when the "mythic and the idealised self, in 'Sunday best' was performed" and making visible "the abstract norms, values and feelings that surround social life".18 It was also through such moments, she argues, that children learned the conventions and traditions of photography, "what it meant to have one's photograph taken".19 In many working-class households such photographs represented, in the first half of the 20th century, the only surviving document of family history.20 The image captures a likeness of a father and a husband that could be kept on the mantelpiece to be treasured by those faced with an impending moment of departure and with no knowledge of the moment of return.
Abednego went with the Royal Army Medical Corps to France. What did Sarah and Norah feel at being separated from their father and not knowing if or when he would return? What difficulties did Fanny face in her husband's absence? Keeping in touch through letters and parcels connected families with absent fathers. Holding a letter, as Santanu Das has argued, physically and emotionally connected the writer with the recipient,21 but gaps in correspondence and the uncertainty and anxiety associated with waiting to hear from loved ones inevitably, as Michael Roper has documented, had an emotional impact on family life and relationships.22 Materially, we know that the War Office introduced a system of remittances which allowed soldiers to have an element of their pay sent directly to their families, and wives and dependents became entitled to a Separation Allowance, but prior to these actions many families suffered significant financial hardship, with income significantly reduced by men leaving for war, unemployment and economic uncertainty. In the early months of the war, school log books from Birmingham provide valuable information on these hardships. For example, the Head Teacher of Dartmouth Street Boys School recorded at the end of August that there was "much distress in this district" as "about 40 fathers and 60 brothers of our boys have been called up" and the number of free breakfasts had increased from 30 to 70. The following week saw the figure for free breakfasts double to 150, a week later it was 200, and by the end of September the school was providing 300 free breakfasts.23 Such hardships were gradually reduced in Birmingham as family incomes increased through rising wages and earnings from overtime, as the war economy brought women into the munitions industry and other previously male-dominated roles. As Jerry White notes, "all families, especially those with children over the school-leaving age, could expect to generate a household income from many sources, with each often earning well and regularly".24 Nevertheless, even higher wages could not fully offset the problems of food shortages which the war brought. Securing an adequate supply of food was a problem throughout the war, but from 1917 it became a serious issue. Queues outside food shops, as in the photograph, became a regular sight, and Birmingham Schools' Medical Officer complained that children were being kept off school to wait in food queues.25 In this image the continuing 'novelty' of the camera on the street is readily apparent, with two children and two adults looking directly at the photographer. The adults, predominantly women, are pressed together waiting for the shop to open; they are not the poorest of Birmingham's population and generally appear relaxed. The one boy is dressed in a military-style overcoat and a cap with insignia. However, in the poorest parts of the city food shortages hit hard, as the Head Teacher of Dartmouth Street Boys noted in February 1917: "Attendance lowest for some time […] The weather [and] Food and fuel are scarce and there are many scholars ill".26 Thrift became a way of life in the city, and "Economy in Waste" leaflets were distributed through schools and other organisations.27 Old tin cans were collected and recycled, householders were asked to gather waste paper, and even offal from slaughterhouses was repurposed as food for pigs and poultry and as fertiliser.28
There was an increased emphasis on growing more food, particularly as 1917 saw a serious shortage of potatoes. The City's Parks Department made more allotment plots available and supplied seeds at reduced prices. By the end of the war about 1,800 acres of land in the city were covered by 24 allotments.29 In August 1917, the Government's Food Controller requested all local authorities to appoint food control committees, and in Birmingham a registration scheme for all outlets selling sugar and ration cards for sugar were introduced in September 1917. Butter, margarine, tea and bacon were also in short supply. A general rationing scheme was tried in Birmingham as an experiment, and from 12th December every house in the city received a ration card for tea, sugar, butter and margarine. A similar card was introduced to ration meat. The Birmingham trial later became the basis for the rationing scheme that was introduced nationwide in July 1918. Roper has documented how such hardships on the Home Front made "family relationships brittle".30 Towards the end of the war the Birmingham Illustrated Weekly Mercury newspaper ran a series of appeals asking for information about family photographs found abandoned on the battlefield. These are images which, for those left behind, visually represented the possibility of loss, and if not proof of loss, they certainly spoke of uncertainty. The contingency of death was particularly difficult for those who remained at home. As Mary MacLeod Moore wrote in the Sunday Times: "Killed" is final; "Wounded" means hope and possibilities; "Prisoner of war" implies a reunion in the glad time when peace comes again to a stricken world; but "Missing" is terrible. In that one word the soldier's friends see him swallowed up "behind a cloud through which pierces no ray of light".31 For a child to turn a page in a newspaper and suddenly find their own image and that of a missing father or sibling returning their gaze would have been deeply distressing, and if the photograph in the newspaper and that on the mantelpiece were all that remained, the missing body made it difficult to grieve properly.
For some women the loss of a husband, and of a father to her children, was too great a burden, and the children ended up in care. The father of the Cook children was killed in action in 1915 and the mother turned to alcohol, spending army pay on drink, and was drunk for a fortnight. Care of the children was transferred to Middlemore Homes and the family's story was reported in the press under the headline "Sad Story told at the Police Court".32 The case files of Middlemore Homes are full of stories of children being put at risk of neglect due to the war. It is also clear from these files that children were emotionally damaged by loss.33 Younger children would have had only the briefest of memories of fathers and siblings, but they were expected to engage in the cult of remembrance. As Catherine Rollet has so perceptively written: Death in wartime formed a conclusive reference point for their developing identity […] With time, direct memory of the father came to be merged with or submerged beneath the memory constructed by the family and by the nation. A significant obligation imposed itself on everyone: the living had a debt they owed the dead and the children had to play an active part in this duty of remembrance.34 The "fatherless" children in the 1920 May Day photograph perfectly project this active role in the duty of remembrance. The act of loss publicly defined these thirty or more children. Their fathers died for the flag, and the medals worn by the boys materially declare their acts of sacrifice. At the same time, the banner "WAR MADE US FATHERLESS" might also be read as implicitly carrying another message: that war, "just or unjust, is a war against the child". The children were part of the May Day procession, a political event organized by the local labour movement and attended by trade unions and a campaigning group for former soldiers, the National Union of Ex-Servicemen, among others, which concluded with a political rally.35 [Note 32: Library of Birmingham, MS 517/A/8/1/4, 23 September 1915. The Middlemore Homes were children's emigration homes for boys and girls, originally founded in 1872 to emigrate poor children, often forcibly removed from their parents, to Canada and later Australia. During the First World War the Homes agreed to retain those children of soldiers who were placed temporarily in the Homes in Birmingham, rather than send them abroad, in case the father returned following the end of the conflict.] We know that Abednego survived the war, but what was the impact of his return on his children? Roper has drawn attention to different interpretations amongst historians looking at the emotional impact of war on British soldiers, with one group arguing that the "encounter with violence undid the civilizing process" while another has pointed to a state of alienation and an "estrangement from home". Both elements, he argues, were evident amongst soldiers and defined their behaviour when they returned home. Certainly, many fathers returned with ill health, both physical and mental, with anger at what had happened to them, and with a level of dependency which further pressurized family relationships.36 Sometimes this pressure was too much, as the Middlemore case records again demonstrate. When their father returned injured from the war and was in a convalescent home with paralysis of the face, the Benbow children were placed in care because of maternal neglect due to drunkenness.37
Finally, an interesting by-product of soldiers "returning from the front", as reported by the School Medical Officer in 1916 and again at the end of the war, was the outbreak of a scabies epidemic amongst children.38 [Figure reproduced with the permission of the Library of Birmingham.]
A newspaper photograph of boys working for the war effort prompted a new line of inquiry: the role of delinquent children in the war (figure 6). The boys were from Norton, formerly known as Saltley Reformatory, which was essentially a prison for young offenders.39 Boys were admitted between the ages of about nine and 17, and their lives were strictly regulated. Activities were timetabled for each day between waking up at 6am and going to bed at 10pm. They were educated and trained in various trades, including shoemaking, gardening, tailoring and farming. The contribution to the war effort was substantial: as the School Superintendent detailed in 1916, the boys had produced in the workshops "thousands of articles of equipment have been either wholly or partly made up -namely, 5,000 dispatch riders' kit covers, 2,500 bandoliers, 2,500 buckle straps, 1,000 military mail bags, 3,400 ammunition pouches, 1,579 water bottle carriers, 925 flag-signaller cases".40 At this time, in addition to working on the farm or in the workshops, 20 other boys were working in munitions factories. [Notes: 39. Saltley Reformatory opened in the mid nineteenth century. It was renamed Norton Training School from 1908, and Norton Approved School from 1933. However, despite the changes in name, and some evolution in practice, its primary function, which was to incarcerate, train and "reform" young offenders, remained the same until the late twentieth century. 40. Library of Birmingham, L43.94, Birmingham Reformatory 63rd Annual Report, 1916, 10.]
The Annual Reports give a figure each year for the number of old boys wounded or killed in action, and a common thread through the reports is a discourse about masculinity and war; a discourse that connected the "true spirit of manhood […] the high [...] tone of courage and endurance" displayed by former boys on the Western Front with the values "fostered by the school" and still being "inculcated".41 A selection of letters from old boys, which reflect this discourse but also give details of life at the Front, of being stationed in India or of convalescing in hospital, were printed in the reports in every year except 1918, when there was a paper shortage. This practice of including the letters of "old boys" in annual reports was very common among educational and charitable institutions in Birmingham during the war, and a similar trend can be seen in the employees' magazines of the city's industrial firms and manufacturers. The letters offer further evidence of the importance of 'keeping in touch', as the following extracts from 1915 demonstrate: "Many thanks for forwarding the 'Schools Gazette'. They are very interesting and I read through during my spare time out of the trenches"; "[A letter] acts as an incentive, […] [it] is a strong connecting link with the boys of the old school, and through this medium we, who are in the trenches, can vividly and happily recall chums of bygone days".42 The old boys were responding to letters sent by both staff and inmates. One letter is particularly interesting regarding the boys' participation in the war effort: "Dear Davey [...] it is very nice to receive letters from friends, it cheers one up such a lot in the trenches [...] I was glad to hear the band is doing such good work in trying to recruit as much as possible. Always remember, although you are not out here fighting, you are all taking your part in this great War by trying to encourage others to enlist, and help to defend the country's honour."43 There is an allusion in the 1917 report to the increase in delinquency being experienced in cities44 and, as a consequence, to pressure being put on the reformatory schools "to provide accommodation for cases committed to their charge". At Norton nearly twice as many boys were admitted in 1916-1917 compared to the previous year; boys were passing through the school too quickly, the training was too short, and this was "adverse to the boy's welfare".45
A sense of unease is evident in the Superintendent's report of June 1918: "Even in these times of unrest it is marked how well the general conduct of the boys has been maintained". The reported highlight of 1917 was the formation of a Cadet Corps affiliated to the 6th Royal Warwickshire Regiment, and by 1918 it had become "the centre of the Home" and had, according to HMI Norris, a "marked effect on the bearing and smartness of the boys and no doubt accounts for the prevailing alertness of expression". The apparent popularity of the Corps with the boys is possibly a reflection of a longstanding attraction to military "involvement" amongst young males in Birmingham: one in every 11 males aged between 17 and 25 was a member of the Territorials in 1911, and a further one in 25 was a member of the Army Special Reserve.47 Throughout 1918-1919 the Corps participated "in every public event [...] including the visit to the City of [...] the King and Queen, Presentations of War Honours [...] and the raising of War Funds", but school-based work and industrial training were "handicapped" by staff shortages due to war service.
A further indication of the centrality of the Corps for the Home is the shift in the focus of the photographs included in the reports, from sporting teams to images of the Corps, including in 1921 photographs of a parade at the Arc de Triomphe and of the band playing at Verdun as part of the Corps "visiting battlefields and places connected with the Great War".48 The next two photographs are associated with the field of battle. Both come from a commemorative album for the city's "Tank Bank Week", which was held from 31st December 1917 to 5th January 1918 to raise money for the war effort. This week, held in Birmingham as the last year of the war opened, was an effort to raise additional funds for the war and to boost public morale. A tank was placed outside the Town Hall in the city's main civic square to draw in the crowds, playing to the public's desire to see this new technology of the war. A civic competitive spirit came into play, and Birmingham was keen to outdo other cities that had already held similar events, in particular to collect more money than Manchester and Liverpool, which had raised £4,450,020 and £2,060,512 respectively. A huge board was erected at the side of the Town Hall on which the daily totals were displayed with the message "Birmingham Must Win", and a variety of events and speeches were held in the square every day. The grand total in Birmingham, once postal deposits had been included in the count, was £6,703,439. The first image, clearly posed for maximum impact, is of a small boy dressed in a sailor's outfit returning the salute of an army officer while handing his contribution to another soldier half hidden inside the tank that gave the week its name (figure 7). The second image from the album, which appears to have been taken at an event during the week from the tank or a nearby platform, is of a large crowd assembled in the civic square, all looking at the photographer as if wishing to see into the future. Two Union Jack flags hang above their heads, and positioned at the front of the photograph are three Boy Scouts (figure 8).49 In this second photograph the central position of the scouts reflects the organization's mobilization in the war effort.50 [Note 50: See Kennedy, The Children's War, chapter 4, for a discussion of children in uniform.] Early in the war "Birmingham's Scout Army", as the Weekly Mercury put it, made itself useful in recruitment campaigns. Another photograph, in the Birmingham Gazette on 27th August 1914, showed a Scout holding up a placard emblazoned with the challenge "I am too young to enlist. You are not". Later, Scouts worked as bell-boys on trams, built huts for soldiers, worked for the Public Works Committee whitening street kerbs and lamp-posts in parks, and helped with air raid precautions. Rollet has argued that "a whole 'war culture' aimed at children" was created so that "they too could share in the national effort".51
It is certainly the case, as we can see from these and other photographs, that they were part of visual propaganda campaigns. But how were children encouraged to participate in the "national effort", and to what extent did they internalize the values of the "war culture"?
SCHOOLS AT WAR
Many of the opportunities for children and young people to participate in the war effort were organized through schools, and school log books enable the historian to glimpse how the war intruded on their lives, although the level of detail varies. At Clifton Road Boys School there is no evidence in the entries for 1914 and 1915 that the war was actually happening, and the log book records the 'normal' rhythm of school life. In 1916 there are four entries, including: "Spent £2 in providing & sending eight parcels to our 'Old Boys' now at the Front"; "Four more parcels sent to 'Old Boys' at Front"; and "Mr Whatmore leaves for military service". The following year the death in action of Whatmore is recorded. "Boys and Teachers" also sent gifts of "Fruits, Eggs, Cocoa, Jam, Cakes, Cigarettes, etc. etc., to the wounded soldiers in the Queen's Hospital, Birmingham". The entries for 1918 are dominated by references to male teachers being absent at Medical Board examinations at the city's Curzon Hall. The School closed early in May 1918 "on account of 'maroons'", and in September boys visited "the Exhibition of Trades of wounded & recovered soldiers held at Town Hall". Clifton Road, as with other local authority schools, was closed on the 11th and 12th of November due to the "cessation of hostilities".53 Such brief and intermittent references to the war contrast sharply with the entries for Dartmouth Street Boys School. The first reference to the war was on the 24th August 1914 and is cryptic, "School reopened this morning. All the staff present. War Conditions", with, presumably added later, "The Great War" written in the margin.54 This was followed, as already noted, by a series of references to local financial hardship and a sharp increase in the number of children needing free breakfasts. Between 1914 and 1919 the Head Teacher gave seven series of lectures to the boys, accompanied by the reading of "Patriotic Poems". Over seventy lectures were given, ranging over such topics as "Patriotism (Empire)", "Colonial Gifts to the Motherland", the South African Rebellion, the Great Battle of Ypres, German Atrocities, Gas Attacks in the West, the Military Camp at Salonika, Compulsory Service, "The Waste of War (Young Lives)", German Propaganda, Education after the War, Entry of America into the War, "The Great Re-Union of the Anglo-Saxon Race", the sinking of hospital ships, London air raids, the Russian Revolution, Allied Air Development, Capture of Bagdad, Final Defeat of Submarine War, "The Great Lessons -Self-Forgetfulness and Unity", "Abdication of Kaiser. Failure of 'Might is Right'" and "Hopes of Mankind in the League of Nations". Not surprisingly, pupils were praised by school inspectors in 1915 for their good knowledge of the history and geography of the war.55 Letters from "Old Boys at the Front" were read to the school and were "productive of much good work".56 The School was the site both of celebration and of mourning. Cheers were given by the boys at a special assembly on the announcement that Private Arthur Vickers, a former pupil, had been awarded the Victoria Cross, and a half day's holiday was given when he visited the school. Press reports and photographs of the event were collected and displayed.57 [Notes: 54. Library of Birmingham, S56/2/2, Dartmouth Street Boys School Log Book, 24 August 1914. 55. Library of Birmingham, S56/2/2, Dartmouth Street Boys School Log Book, 14-16 June 1915. 56. Library of Birmingham, S56/2/2, Dartmouth Street Boys School Log Book, 4 February 1916.]
In 1916 a short service was held for the whole school in memory of the naval "Heroes lost in the North Sea battles", and in 1917 the Head Teacher recorded:
[…] it has been my painful duty to announce to the school the death in action of a number of old scholars: where possible recent letters have been read, and photographs of the school's dead heroes have been placed in the corridor.58 The Head Teacher recorded his praise for the efforts of the boys in raising money for the war effort and listed the various funds: Hospitals, Belgian Relief, Serbian Relief, Soldiers Huts (YMCA & Church Army), Kitchener and Lord Robert Memorials, Jack Cornwall Memorial Hospital, Barnardo Homes Empire Day, Overseas Clubs' Xmas Gifts, Old Scholar's (Soldiers Blinded) Gifts, Injured Horses.59 Between August 1914 and June 1919 they collected over £50 for various war charities, a significant amount of fundraising which must have represented a considerable drain on most families' budgets. The end of the war was marked by the boys singing the National Anthem and giving "cheers for the flag". On Armistice Day 1919 a special assembly was organized that included "Hymns, Songs and suitable music" and an address by the Head Teacher "on the memory of the glorious Dead". Silent remembrance was observed, and "Special Lessons on the Celebration and on the League of Nations were given to all classes, ending with the singing of the National Anthem".
There was certainly "great enthusiasm" amongst the pupils of Clifton Road, but the Dartmouth Street Log Book reads almost as a record of memorialisation. Did children internalize the values associated with this culture? The school log books, like most education records, are silent on the question, as the voices of children are absent. However, a report found accidentally while looking for images in the Birmingham Illustrated Weekly Mercury newspaper gives some clues. On the 13th November 1915 the newspaper ran a children's competition on the theme "What can the Little Ones do in War Time?", and half a crown was offered for the best letter. First prize was awarded to Irene Harrison, age 13, from 145 Ladypool Road, Sparkbrook, one of six children of a widowed mother. Her winning entry was published on 20th November: "Denial is a great sacrifice, and it would bring a smile to many a soldier's face if he had a cigarette that was bought with our pennies that were saved each week instead of being squandered at the sweet shops. 'Tommy' would treasure a scarf, a pair of gloves, knitted pair of socks or a helmet; he would think more of them if bought and knitted with our small hands, for every soldier has not a sweetheart, wife, or mother; lots of them, given the title of 'The Lonely Soldier', never receive parcels from relations like their chums do when away from their home; the simple reason is because they have no friends or relations. Would not it be nice to feel that we have got a friend who is a big red-faced soldier?"60 A further selection of letters was published on 27th November, and like Irene the authors advocated duty and sacrifice. Stanley Eld, age 13, living at 69 Somerset Road, considered it "absolutely necessary that we should be well equipped to take the fallen's places either in business, professional, or commercial careers […] by just studying hard and keeping fit I am convinced we shall be doing our bit".
Mary Nicholls, age 10, of 147 Vicarage Road, Aston, advocated buying British and knitting for soldiers: "One of the first things I think little children should do is to make up their minds that they will not purchase anything made by Germans or Austrians, for by doing so they are helping the enemy. At the Vicarage-road School girls like myself are taking home wool to knit socks to send to our soldiers."
Harold Collins, age 13, of 48 Newport Road, Balsall Heath, suggested that "Some boys interested in wood-work could make picture frames and brackets which they could sell and give the proceeds to war funds", whilst Albert J. Harris, of 31 Eton Road, Sparkbrook, thought girls had a particular role; presumably influenced by some of the recruiting posters which appeared at the time, he thought that girls should "help the recruiting sergeants by trying to persuade the 'slackers' to enlist voluntarily while they have the chance". Several of the child writers focused on Belgian refugees. Leslie A. Harris, age 12, of 141 Ladypool Road, Sparkbrook, argued that "Most boys and girls could afford a halfpenny or a penny every week to help this excellent cause… When one thinks about it one realises that the Belgians deserve it, because if it had not been for their bravery and courage the Germans may have been pillaging England the same as they did in Belgium".61
CHILDREN DISPLACED BY WAR
The reference to Belgian refugees offers a link to our two final photographs (figures 9 and 10). Published on New Year's Day 1900, Ellen Key's Barnets århundrade (later published in English as The Century of the Child) presented the wellbeing and universal rights of the child as the defining mission of the twentieth century.62 Writing almost a century later, John Berger commented that "Ours is the century of enforced travel […] the century of disappearances. The century of people helplessly seeing others, who were close to them, disappear over the horizon".63 The two final photographs act as a bridge between these two writers, as both are of children displaced by war and both were taken for fundraising and promotional purposes. In neither image do the children look distressed, and yet both groups of children were refugees who had been uprooted from all they knew and displaced in history.
Brazier and Sandford devote a chapter of Birmingham and the Great War 1914-1919 to Belgian and Serbian exiles. The Belgian refugee story is better known because of its scale, its association with the beginning of the war and its use in British anti-German propaganda. Birmingham was one of the cities which in 1916 exhibited in its art gallery Louis Raemaker's cartoon depictions of German war atrocities in Belgium, and Raemaker had earlier visited the city in 1915.64 The Serbian refugee story is less well known. The caption under figure 9 tells us that the boys are from Serbia, that they are being cared for by J. Douglas Maynard and his wife Adeliza at Serbia House, Selly Oak, and that the home was controlled by "a committee of local ladies and gentlemen".
The two adults in the photograph are identified as Maynard and "Mr [Alan] Geale, who is acting as scoutmaster". The image was reproduced both in the Weekly Mercury and in the Bournville Works Magazine; the works in the magazine title refers to the Cadbury family's chocolate factory, located in the Birmingham suburb of Bournville.65 Brazier and Sandford devote only two pages to the Serbian refugees compared to ten pages on the Belgians in Birmingham. Nevertheless, they add details to what we can see in the photograph. Twenty-five boys aged between 10 and 14 arrived in Selly Oak in early 1916, and many of them "were still suffering from the effects of exposure and semi-starvation" following the retreat across the Albanian mountains. While at Serbia House their education was funded by the Serbian Relief Fund, one of the war charities supported by the pupils at Dartmouth Street Boys School. The older boys, "through the medium of scouting", were trained in leadership responsibilities and passed and enforced hostel rules. The hostel was administered by local Quakers: Elizabeth Cadbury chaired the committee and W.A. Albright was treasurer.66 In addition to this information, there is an account in Elizabeth Cadbury's family journal letter of the boys' arrival at the hostel on Saturday 20th May 1916 with their Serbian schoolmaster, when they were met by a band of Birmingham Boy Scouts and "marched up to the house with flags flying".67 The boys returned home in 1919.
The two photographs are linked both by place and by people. On 29 October 1920, 18 girls from Vienna arrived in Bournville, where they lived with local families for a year. Here we can see the girls with George and Elizabeth Cadbury, who contributed 15 shillings weekly for the maintenance of each child.68 The Quaker paper The Friend reported their arrival, describing them as "sweet-looking children of from 8 to 12 years of age [who] all appeared to be very happy as they trooped into the Infants' School, many of them carrying all their possessions in a small bundle on their backs".69 The article goes on to describe their efforts to learn English and the reaction of their hosts in Bournville, closing with the reminder to its readers that "tens of thousands of little children" in Austria "will probably suffer for life through the lack of food". The girls' return to Vienna on 2 September 1921 was similarly reported in The Friend; the article describes how the children and their foster parents clung to each other weeping, and contrasted the appearance of the "healthy, plump looking" children, "well clothed and well cared for", with the "thin emaciated and badly clothed" children who had arrived a year earlier.70 The difference between these two groups of children was that the girls were the "children of enemies". They had escaped from a city of suffering following the Allied blockade. There is a further link between the two images. They are both a product of the international humanitarian relief movement; a movement which brought to Birmingham (amongst other cities) an exhibition of children's art in the early 1920s, including artwork from both Serbia and Vienna. The exhibition was aimed at raising money for the child relief in Vienna carried out by the Save the Children Fund, recently founded by Eglantyne Jebb and her sister Dorothy Buxton, and was accompanied by a series of publications written by Francesca Wilson, a former relief worker in Serbia who was working with the Quaker Mission in Vienna.71 [Notes: 67. Library of Birmingham, MS 466/1/1/15/3/13. 68. Library of Birmingham, L66.53, Bournville Works Magazine, November 1921, 283. 69. The Friend, 19 November 1920, 743.]
n ian grosvenor and siân roberts Historia y Memoria de la Educación, 8 (2018): 307-345 er in Serbia who was working with the Quaker Mission in Vienna. 71The publications were illustrated with examples of the children's artwork, and in one of her pamphlets Wilson touched on the psychological dimension of the artwork and its reflection of the effect of war and its aftermath on the children. 72None of the children in either photograph look distressed and yet both groups of children had been displaced by war, uprooted from their home and separated from their history.Their stories physically and emotionally give real meaning to Jebb's observation that "Every war, just or unjust, is a war against the child".Jebb saw no distinction between the children of allies and enemies: "all children were innocent and deserved assistance". 73ilson later wrote a biography of Jebb. 74Finally, refugee stories are also powerful reminders that the "local", as Doreen Massey has emphasized is always a product in part of "global forces". 75
CONCLUSION
It is self-evident from the image-led account presented above that "the logic of total war disrupted the routines, formalities, and procedures that had previously provided the rhythm of educational life" in Birmingham schools. 76 Schools were sites of patriotism where children contributed to "war work" by collecting for charities and war savings and engaged in activities to honour and memorialize the sacrifice of adults. Nevertheless, schools still continued to function as sites of organised learning. Just as the rhythms of educational life in Birmingham were disrupted by the war, so too were those associated with family life and home; Birmingham families endured economic and emotional hardships. However, Catherine Rollet has pointed to the difficulties associated particularly with trying to uncover the emotional experiences of children with fathers at the front and cautioned that "in the case of children, historians need to be even more diffident". There is, she argues, "very little direct material, some drawings and school essays, letters, personal diaries written by the older children and then autobiographies - but these recreate experience of war after the event". 77 This warning could also be employed when talking of the experiences of refugee children.

77 Rollet, "The home and family life", 345.
Children were used, whether knowingly or not, as instruments of adult persuasion and propaganda. This is clearly demonstrated by the images we have identified, which were in circulation in local newspapers. Patriotic events employed children as key social actors and photographs were circulated which captured these moments. In these photographs we see the past, but in the majority of cases the faces of the children remain unknown. Nevertheless, the photographs reveal the social world of which they are a part and we can witness their connections to the wider social narratives of both community and nation. The photographs we located and used did open up lines of enquiry and enabled connections to be made with other images, but also with written archives. It may be the case that these lines of enquiry and record linkages would have occurred anyway if we had not chosen to place the visual at the centre of our investigations; even so, the photographic evidence of children at war has taken our gaze beyond the institutional sites of school and home that are so often assessed by historians of childhood. As Kim Rasmussen observed, "places for children" are only seldom the same as "children's places". 78 The photographs used in this paper, and many of the others found in the archive, point to the significance in children's lives of public spaces. These were spaces of performance and spectacle where children gathered to witness the mobilization and departure of men going to war. They were also spaces where the sounds and sights of childhood changed. They were spaces where children experienced both a "complex of representations" and the "circulation of representations", the effects of the one "always articulating into and re-working the other". 79 Men in uniform, some wounded or mutilated, were ever present on the street. The street was a place of flag days and other campaigns to raise funds. Street advertising covered the walls and temporary hoardings of the city but the content and language were different. Patriotic symbolism and legal notices visually testified that the city was at war. Maroons became a feature of the city soundscape, and children were able to see and touch the new mechanical weapons of destruction. Cinemas showed images of war and patriotism, and music halls interspersed comic turns with one-act plays about the war. If children did not see them, they would know of them through overheard adult conversations. The night, which before the war was turned into day through illumination, returned to semi-darkness because of fear of aerial attacks; attacks which brought the noise of war into public and private spaces. All these sensory experiences became strands in the fabric of children's identity in wartime Birmingham.
What was the legacy of experiences such as these? The social and psychological results of the "war to end all wars" were profound. The war not only affected how people thought about the future, but also their view of the child as part of that future. John Horne has pointed to the period after the "Grande Guerre" as being one of "cultural demobilisation", a turning away from the culture of war. 80 The Declaration of the Rights of the Child, drafted by Jebb and approved by the League of Nations in September 1923, was one element in this process. 81 It was universal, without any distinction on the basis of nationality, "race", or religion. Children assumed "unprecedented importance", with the 1920s being hailed as the Children's Decade. 82 The wellbeing of the body and the mind of the child became the focus of professional study. 83 The extent to which these concerns were linked to the emotional impact of the culture of war on children rather than the product of idealism and humanitarianism in the face of unprecedented loss of life and suffering remains unproven. Cabanes has recently cautioned about "overestimating the extent to which children's psychological wounds were actually taken into consideration in the wake of the Great War" and instead points
Figure 1: Victory celebrations at Bordesley Green School, Birmingham, 1918, MS 1645/14. Reproduced with the permission of the Library of Birmingham.
Figure 2: Abednego Parsons and family, 1915, MS 4616/1. Reproduced with the permission of the Library of Birmingham.
Figure 3: A food queue in Birmingham, 1917, MS 4616/1. Reproduced with the permission of the Library of Birmingham.
Figure 4: 'From the Battlefield', The Birmingham Illustrated Weekly Mercury, 7 October 1916, back page. Reproduced with the permission of the Library of Birmingham.
Figure 5: Children on a Birmingham Co-operative Society float at the May Day parade, 1920, MS 4616/1. Reproduced with the permission of the Library of Birmingham.
Figure 6: Children making munitions, The Birmingham Illustrated Weekly, 14 August 1915, p. 6. Reproduced with the permission of the Library of Birmingham.
"I received your most welcome letter this evening and you cannot realise how pleased I was when I opened it [...]"
Figure 7: Child saluting, Birmingham Tank Bank Album, 1918, LF 75.7/531718. Reproduced with the permission of the Library of Birmingham.
Figure 8: Crowd scene, Birmingham Tank Bank Album, 1918, LF 75.7/531718. Reproduced with the permission of the Library of Birmingham.
"Dispatched 22 more parcels containing scarf & cigarettes or socks and mittens […] to old boys (soldiers & sailors) who have left this school since I took charge in July 1910. Great enthusiasm among present pupils & teachers." 52
Figure 9: Serbian boys in Birmingham, Bournville Works Magazine, August 1916, p. 222. Reproduced with the permission of the Cadbury Archive, Mondelēz International.
Figure 10: Girls from Vienna with George and Elizabeth Cadbury, 1921, Library of Birmingham MS 466/8/53. Reproduced with the permission of the Cadbury Archive, Mondelēz International.
|
2018-12-05T04:46:40.614Z
|
2018-06-27T00:00:00.000
|
{
"year": 2018,
"sha1": "85b9afccb056985dd9ae3be4e1cdb13424346bdc",
"oa_license": "CCBYNC",
"oa_url": "http://revistas.uned.es/index.php/HMe/article/download/18960/18175",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "85b9afccb056985dd9ae3be4e1cdb13424346bdc",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"History"
]
}
|
14304147
|
pes2o/s2orc
|
v3-fos-license
|
Efficacy of a training intervention on the quality of practitioners' decision support for patients deciding about place of care at the end of life: A randomized control trial: Study protocol
Background Most people prefer home palliation but die in an institution. Some experience decisional conflict when weighing options regarding place of care. Clinicians can identify patients' decisional needs and provide decision support, yet generally lack skills and confidence in doing so. This study aims to determine whether the quality of clinicians' decision support can be improved with a brief, theory-based, skills-building intervention. Theory The Ottawa Decision Support Framework (ODSF) guides an evidence-based, practical approach to assist clinicians in providing high-quality decision support. The ODSF proposes that decisional needs [personal uncertainty, knowledge, values clarity, support, personal characteristics] strongly influence the quality of decisions patients make. Clinicians can improve decision quality by providing decision support to address decisional needs [clarify decisional needs, provide facts and probabilities, clarify values, support/guide deliberation, monitor/facilitate progress]. Methods/Design The efficacy of a brief education intervention will be assessed in a two-phase study. In phase one a focused needs assessment will be conducted with key informants. Phase two is a randomized control trial where clinicians will be randomly allocated to an intervention or control group. The intervention, informed by the needs assessment, knowledge transfer best practices and the ODSF, comprises an online tutorial; an interactive skills-building workshop; a decision support protocol; performance feedback; and educational outreach. Participants will be assessed: a) at baseline (quality of decision support); b) after the tutorial (knowledge); and c) four weeks after the other interventions (quality of decision support, intention to incorporate decision support into practice and perceived usefulness of intervention components). Between-group differences in the primary outcome (quality of decision support scores) will be analyzed using ANOVA. Discussion Few studies have investigated the efficacy of an evidence-based, theory-guided intervention aimed at assisting clinicians to strengthen their patient decision support skills. Expanding our understanding of how clinicians can best support palliative patients' decision-making will help to inform best practices in patient-centered palliative care. There is potential transferability of lessons learned to other care situations such as chronic condition management, advance directives and anticipatory care planning. Should the efficacy evaluation reveal clear improvements in the quality of decision support provided by clinicians who received the intervention, a larger scale implementation and effectiveness trial will be considered. Trial registration This study is registered as NCT00614003
Background
There are more choices for place of end-of-life cancer care due to shifts in care to the community, better understanding of the clinical course of cancers, equipment portability, pharmacology advances, and consumer expectations. Place of care has various meanings to patients and families and represents more than a particular geographic location [1]. Options can include hospice, private residence, nursing home, continuing complex care facility, homeless shelter, and/or hospital. For patients with good symptom control, instrumental support, and a predictable course of illness, decisions regarding place of care are often based on values and expectations [2][3][4][5].
Patients frequently experience decisional conflict (personal uncertainty about the best course of action) when considering place for end-of-life care. Personal preferences are weighed against practical considerations and concern for others [2,3,5,6], thus contributing to decisional conflict. Other modifiable factors, such as knowledge gaps, unrealistic expectations about outcomes, lack of clarity about what matters most, and feeling pressured to choose a particular option, exacerbate the decisional conflict [7]. Unresolved decisional conflict can lead to decisional delay or reversal, dissatisfaction, regret, and blaming the provider [8,9]. Failure to elicit the valued priorities of terminally ill patients and their families can result in missed opportunities, decreased quality of life, unwelcome interventions, and increased risk for complicated bereavement for survivors [3,[10][11][12].
It is generally known that decisional conflict can be reduced with decision support interventions such as decision aids and nurse coaching [7,13]. However, there have been few studies evaluating either of these interventions for decisions at the end of life. Although patient decision aids may be useful for some common discrete crossroads decisions with standardized options and outcomes, there may be more payoff in focusing on coaching interventions that can be applied broadly to care management at the end of life. While end-of-life decisions are highly values-sensitive, they also bear a strong resemblance to chronic condition management decisions, which focus on situation monitoring, priority setting, and implementation [14][15][16].
There is evidence that influencing practitioners' knowledge and attitudes about communication and decision support can strengthen subsequent decision support practices, thus matching care planning to patient preferences and avoiding the use of non-valued interventions [17][18][19][20]. Nevertheless, few studies have empirically examined the impact of a theoretically informed decision support training intervention on the quality of decision support knowledge and practices provided by practitioners [21], and none have been undertaken within the context of palliative care practice.
Patients and families want clinicians to listen to their views and preferences [18] and standard palliative practice calls for patient inclusion in care planning [22,23]. Most patients with advanced cancer want full information and the majority wish to participate actively in decision making [24]. Although seriously ill hospital patients want to discuss end-of-life issues, their preferred decision making role varies and is difficult to predict [25], highlighting the need for active and regular assessment of patients' decision making needs. However, clinicians may avoid raising uncomfortable topics [26][27][28] and often lack skills and confidence in helping patients in non-directive ways [29].
Systematic review findings confirm that training clinicians in patient-centered approaches is an effective strategy for increasing patients' understanding of the evidence and implications [30]. For instance, decision coaching by nurses has helped to foster an informed use of resources and avoid the over-use of interventions that patients don't value in urology care [17] and in gynecological care [31]. Practitioners, such as nurses and other professional care coordinators, through their trusted and frequent interactions with patients, are well positioned to elicit and explore decisional needs such as decisional conflict and related factors (e.g., knowledge, values clarity, and support) [32]. Practitioners can then coach patients with information, values clarification, support and links to resources. While a key challenge is the need to strengthen practitioners' decision support skills [29], training interventions have been shown to markedly improve the quality of practitioners' decision support skills for other clinical problems [33].
Accountability for quality patient outcomes and fiscal responsibility confirms the need to practically and pragmatically address patients' end-of-life decision making needs. Systematic and rigorous evaluation of an evidence-based intervention designed to improve the quality of practitioners' decision support could illuminate best practices for decision support and advance the fields of shared decision making, patient-practitioner communication, and palliative care, and ultimately improve the lives of those who are living with a terminal illness and their families.
Multifaceted interventions show promise in influencing professional behavior change [34][35][36][37]. An exploratory study to identify target variables, choose and refine interventions, and establish their theoretical basis prior to large-scale effectiveness trials is a sound research approach [38][39][40]. The paucity of empirical inquiry in this area to date warrants an exploratory study to describe the constant and variable components of a potential intervention and a feasible protocol for intervention delivery.
Guiding Theoretical Models
As the study aims to influence decision support behavior, an understanding of factors that can be partially modified, such as attitudes and perceptions of norms that drive actions, is required to inform intervention messages and enhance the potential for behavior change. The pragmatic and conceptual focus of the study also requires an empirically proven, clinically relevant decision support framework to guide intervention content. The Theory of Planned Behavior (TPB) [41,42] and the Ottawa Decision Support Framework (ODSF) [43] fit these criteria. The former predicts the likelihood of behavior change while the latter provides a three-step path to optimize quality decision support. The TPB [40,[44][45][46] and the ODSF [3,47,48] are relevant to nursing and have performed well in numerous health-related studies.
Briefly, the TPB proposes that the strength of the intention to change is the primary determinant of actual behavior change. This intention is determined by: (1) a person's attitude to the new behavior (strength of perceived advantages and disadvantages); (2) the extent of self-perceived social pressure to perform or not to perform the new behavior, and (3) degree of perceived control over being able to perform the behavior [49].
According to the ODSF, decision support interventions can be tailored to address modifiable determinants of decisional conflict (knowledge; outcome expectations; values clarity; support factors), resulting in better quality decisions that are informed and consistent with patients' values [50]. In this study, the ODSF will guide decision support skill acquisition interventions and measures of change.
Clinician-identified factors associated with practice change (utility, a strong evidence base, and flexibility to acknowledge the individuality of patients [51]) fit well with the ODSF. Historically, the ODSF has been well received by clinicians [3,33,48], has shown robustness in randomized control studies [7,33] and is predicated on a patient-centered approach which recognizes the unique context, circumstances and patient characteristics situated within the decision support encounter. The TPB and ODSF have guided the study design in the following ways: 1. The predictive potential of theory facilitates selection of training intervention components which offer the best probability of success [36,52].
2. The TPB offers a useful lens to examine attitudes, beliefs and perceived control for engaging in decision support practices and will inform the selection of key messages attached to intervention strategies.
3. Mapping the intervention components onto TPB variables provides a useful template to ensure multiple targets of behavior change are addressed and is consistent with knowledge dissemination best practices [35,53].
4. The ODSF describes a well-defined approach to quality decision support provision and provides a practical vehicle to structure components of a knowledge and skill building intervention.
5. Availability of a broad inventory of theoretically grounded and empirically validated, reliable, time tested tools operationalizing ODSF constructs provides a rigorous platform to inform study interventions and will strengthen the trustworthiness of study findings.
Aims of the study
The primary aim of this study is to evaluate the efficacy of a theory-driven, framework-based training intervention, as compared with a control condition (usual care approach), in enhancing the quality of practitioners' patient decision support skills regarding place of care at end of life. In addition, we plan to assess participants' intention to engage in patient decision support in their practice and to determine the acceptability of the intervention components. Specific objectives include:
1. To identify factors affecting the likelihood of practitioners integrating decision support principles into their practice
2. To determine the quality of decision support practitioners provide
3. To design and evaluate components of a decision support training intervention
We plan to test the following study hypotheses:
H1: Practitioners randomized to the multi-faceted, theory-driven training intervention will obtain significantly higher quality of decision support scores following the intervention.
H0: There will be no difference in group mean decision quality scores following the intervention.
Study Design and Methods
A two-phase, sequential, mixed-method design is planned. Phase 1 involves a focused needs assessment of key informants (clinicians, educators, administrators) who plan and provide palliative/oncology care. A purposive sample of about 12 key informants representing different levels of experience, responsibility, clinical focus and setting will be interviewed using a semi-structured interview based on the TPB [54].
Phase 2 is a randomized control trial of a brief educational intervention. Consenting/eligible participants will be randomly allocated to an intervention or control group. The intervention, informed by the needs assessment, comprises: an online tutorial; an interactive skills building workshop; a decision support protocol; performance feedback; and educational outreach.
Participants will be assessed: a) at baseline (quality of decision support); b) after the tutorial (knowledge); and c) 3-6 weeks after the other interventions (quality of decision support; intention to adopt decision support into clinical practice).
Sample Size
The estimated sample size for the Phase 2 study is based on a test for differences in mean scores of decision support quality and knowledge in the intervention versus the control group. An effect size of .70 requires n = 32/group, when alpha error = 0.05 and beta error = 0.20 [55]. This effect size is conservative in that a previous study [17] reported larger effect sizes which required only 18-20 per group.
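To make the calculation above concrete, the following is a minimal sketch (not part of the protocol itself) of the same power computation using Python's statsmodels package; the effect size, alpha and power values are those reported above, and the two-sample t-test framing is an assumption for a two-group comparison of means.

```python
# Sketch of the sample-size calculation reported above: two-sided,
# two-sample comparison of mean scores with Cohen's d = 0.70,
# alpha = 0.05 and power = 0.80 (i.e. beta = 0.20).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.70, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # ~33 under the t-distribution; the normal
                           # approximation yields the 32/group cited above
```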
Participants
Information flyers will be distributed and information sessions explaining the study will be held at study partner organizations. Interested potential participants will be asked to contact the study coordinator if they have further questions or would like to participate in the study. Participants are considered eligible for study inclusion when they meet the following criteria:
• are a member of a regulated health profession;
• care for palliative cancer patients and/or cancer patients with advanced disease;
• work at least 4 shifts per month in a clinical area where end-of-life care discussions are likely to be undertaken; and
• are proficient in written and spoken English.
Ethical considerations
The protocol has been approved by the Research Ethics Committees of the University of Ottawa, The Ottawa Hospital and SCO Health Services. Trial registration with the International Standard Randomized Controlled Registry has been obtained (Trial #NCT00614003).
Procedures Phase 1: Needs Assessment
Semi-structured interviews with a purposive sample of key informants (n ≅ 12, engaged in direct care, education and/or administration roles) will be used to elicit TPB-related factors affecting intentions to provide decision support (personal attitudes, norms, and perceived control). Questions about other barriers to providing decision support will also be asked. Results will inform intervention content and key messages.
Phase 2 Intervention
Consenting practitioners will be randomly allocated to one of two groups. A computer-generated randomization list for concealed allocation will be used. To avoid disparate sample sizes, permuted blocks will be used. Participants will be allocated by an external statistician, who has no connection to the study team, immediately after collection of baseline data.
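As an illustration only (the protocol does not specify the software, seed or block size), a permuted-block allocation list of the kind described above could be generated as in the sketch below; the block size of 4 and the seed are assumptions.

```python
# Illustrative permuted-block randomization list; not the trial's actual
# program. Each block contains equal numbers of each arm, keeping group
# sizes balanced as participants accrue.
import random

def permuted_block_list(n, block_size=4, arms=("intervention", "control"), seed=42):
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # fixed seed so the external statistician can reproduce the list
    allocations = []
    while len(allocations) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n]

print(permuted_block_list(64)[:8])  # e.g. 64 participants = 32 per group
```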
Baseline measures of both groups
The quality of decision support skills will be assessed using audio-taped interactions between participants and simulated patients. Simulation scenarios have been created and vetted by a panel of Palliative Advanced Practice Nurses. Simulated patient callers will receive a training and feedback session facilitated by an experienced trainer prior to placing the call to participants. Simulated patient callers will contact each participant and engage in a standardized scenario expressing difficulty related to a place of end-of-life care decision. Calls will be tape recorded and quality scored using the Decision Support Analysis Tool (DSAT) [56].
Intervention (experimental) group
Participants assigned to the intervention group will:
a. Complete an on-line decision support tutorial which introduces the 'When you need extra care decision aid', case studies, and quizzes related to place of care decision support.
b. Participate in a half-day skills building workshop. The workshop contains aggregated feedback on baseline simulated calls; opportunities to observe, compare and contrast a traditional patient education approach to a patient decision support approach; role play using the 'When you need extra care decision aid'; and peer scoring of the quality of their decision support provision using the DSAT. A determinants of place of care knowledge module will also be delivered. At the end of the workshop, participants will complete a questionnaire eliciting the perceived utility/usability of a) the workshop content and b) the decision support protocol.
c. Participate via telephone in an education outreach session to identify areas needing clarification, to share problem solving in using the decision support protocol, and to obtain further feedback.
d. Complete a questionnaire eliciting satisfaction with the content and process of the education program.
Post measures of both groups
a. After the auto-tutorial, knowledge tests will be administered.
b. 2-6 weeks following completion of the full intervention, a TPB-based survey eliciting behavioral intention to integrate decision support in practice and related attitudes, norms, and perceived control will be administered.
c. 2-6 weeks following completion of the full intervention, the quality of the participants' decision support skills will be assessed with a different simulated patient scenario using the same methodology as the baseline assessment.
Decision Support Tutorial
Developed by the Ottawa Health Decision Centre, Clinical Epidemiology Unit of the Ottawa Health Research Institute (OHRI), this on-line, self-learning resource has been used to train clinicians, graduate students and undergraduate nursing students; US and BC tele-health center nurses; UK urology nurses; and nurses in family practice units in Ontario. Three modules provide 1) an overview of decisional conflict with a three-step path to guide decision support; 2) case studies profiling decision support tools (including a decision support protocol) and processes, with embedded quizzes to assess comprehension and provide feedback; and 3) a final integrated knowledge test. The original tutorial will be adapted to decisions about the place of terminal care. The tutorial is hosted by the University of Ottawa and is password protected. Participants will be asked to complete the auto-tutorial one week prior to the workshop.
Decision Support Protocol
Participants will be introduced to a decision aid in the tutorial. The decision aid guides patients in advance planning of location of care and is entitled 'When you need extra care decision aid'. The decision aid is based on the Ottawa Decision Support Framework and the Ottawa Decision Support Practitioners Guide (OPDG). The four-page decision aid provides a structured approach to assess patients' decisional needs, provide tailored decision support to address needs and evaluate patients' progress in decision making. There are five elements: 1) general information about place of care options and palliative care; 2) self-report of functional and symptom status over the past week based on the Palliative Performance Scale [57] and the Edmonton Symptom Assessment Scale [58] respectively; 3) a self-ranking of which reasons for each option are considered most important; 4) an assessment of what else patients need to prepare for decision making; and 5) a summary of next steps.
Content is based on: a) systematic reviews of literature [4,59,60]; b) previous research on women's decision making needs regarding place of care at the end of life [3]; c) previous research on family members' decision making needs at the end of life [61]; and d) the primary investigator's clinical experience in palliative care. Participants will be provided with a hard copy of the protocol as well as access to the online version.
Skill building workshop
Within three weeks of the online tutorial, a half-day workshop will be conducted. Content will be based on the pre-intervention TPB needs assessment and also: a) practical applications of material learned in the tutorial; b) a video illustrating a clinical application of the decision aid; c) a video contrasting a traditional patient education approach and a decision support approach in a clinical scenario; d) role play using the decision support protocol; e) self and peer appraisal during role play; and f) discussion about barriers and facilitators to integrating decision support into clinical practice. Use of a facilitator who provides face-to-face communication and uses a range of enabling techniques has been shown to have some impact on changing clinical practice [62].
Specific workshop objectives are that participants will:
• understand concepts of decisional needs, decision support, and decision quality;
• learn to use decision support tools;
• evaluate decision support skills; and
• analyze barriers and facilitators to implementing decision support in practice.
Performance feedback
Results of the decision quality scored transcripts of baseline simulated calls will be presented and discussed at the workshop. Participants will also be provided with evaluation tools based on the Decision Support Analysis Tool (DSAT) to self-appraise their own and workshop peers' quality of decision support during the case studies and role play activities. As well, the facilitator will provide ongoing feedback from case studies and role plays during the workshop. The DSAT self-appraisal tool has been used to train nurses and medical residents in self-appraisal at the University of Ottawa, the Dartmouth Hitchcock Medical Center, and the US Health Dialogue call center.
Education outreach
Two weeks following the workshop, intervention group participants will be scheduled for a personal academic detailing session with the workshop facilitator. Based on social marketing approaches, educational outreach provides a focused opportunity to personalize learning and behavioral objectives, provide unbiased descriptions of research evidence and opinion leaders' positions, augment educational materials and reinforce positive behavior [63]. Academic detailing using brief, face-to-face interactions has shown promise for modifying physician and dentist practices [64,65], although one study reported initial resistance to the approach [66] and it had no effect as a single intervention [67].
The detailer will provide individualized information and resources, reinforce decision support behaviors, and help participants to identify opportunities for incorporating decision support behaviors into their practice. The one-to-one session will be scheduled at a mutually agreed time, will be conducted by telephone and should last about 15-30 minutes.
Outcome Measures
Primary Outcome Phase 2
Quality of Decision Support Skills will be measured with the DSAT, modelled on the ODSF and Ivey's Problem Solving Model [68]. The DSAT, with a total possible score of 12, assesses the quality of decision support and consists of subscales measuring decision support and communication in a practitioner/patient dyad. The tool demonstrated adequate inter-rater reliability for scoring on both decision support skills (75%, kappa = 0.58) and communication skills (76%, kappa = 0.68) when it was tested in physician/patient dyads (n = 34 dyads). Construct validity was demonstrated when the scores were correlated to measures of patient and physician satisfaction [56]. The DSAT also discriminates between trained and untrained nurses [33].
Secondary Outcomes Phase 2
Measures include:
• a knowledge test regarding decision support concepts;
• self assessment of decision protocol utility and helpfulness;
• behavioral intention to integrate decision support into clinical practice; and
• acceptability and utility of intervention components in the experimental group.
Analysis Plan
Primary Outcome Phase 2: Quality of nurses' decision support skills
1. Inter-rater reliability of DSAT scores: two raters, who are blind to group allocation, will independently score pre- and post-intervention simulated call audio tapes (a sketch of this computation follows this list).
2. Primary analysis will be undertaken using a repeated measures ANOVA (baseline; post measures). For missing cases an 'intention to treat' approach will be used, under the conservative assumption that no change would occur between pre and post testing. Additionally, the impact of missing cases on findings will be explored between groups to provide further direction for the analysis.
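As a concrete, hypothetical illustration of the two steps above, the sketch below computes Cohen's kappa for the double-scored tapes and fits a group-by-time model to the DSAT scores. The file and column names are assumptions, and a mixed model with a random intercept per participant is used here as one standard way to handle the repeated baseline/post measures, not as the study's exact procedure.

```python
# Hypothetical sketch of the Phase 2 primary analysis; file and column
# names (participant, group, time, dsat, rater1, rater2) are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import cohen_kappa_score

# 1. Inter-rater reliability of the independently double-scored audio tapes
ratings = pd.read_csv("dsat_double_scored.csv")
print(f"kappa = {cohen_kappa_score(ratings['rater1'], ratings['rater2']):.2f}")

# 2. Group x time analysis of DSAT quality scores (long format: one row per
# participant per time point). The group-by-time interaction term tests the
# intervention effect; the random intercept handles the repeated measures.
scores = pd.read_csv("dsat_scores_long.csv")
model = smf.mixedlm("dsat ~ C(group) * C(time)", data=scores,
                    groups=scores["participant"]).fit()
print(model.summary())
```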
Secondary Outcomes Phase 2
1. Descriptive measures (frequency; means; range) will describe participant characteristics and the acceptability and utility of training intervention components.
2. Descriptive measures (frequency; means; range) of the TPB-based survey eliciting behavioral intention to integrate decision support in practice and related attitudes, norms, and perceived control will be undertaken. Between-group differences in intention to integrate decision support practices will be analyzed using a t test (see the sketch after this list).
3. Data from qualitative open-ended questions will be analyzed using traditional content analysis techniques [69,70] with the TPB as an organizing framework. Thematic coding, followed by member checking to ensure trustworthiness of final themes, will be undertaken.
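For step 2 of the secondary analyses above, the between-group comparison of intention scores might look like the following sketch; the data file and its 'group' and 'intention' columns are hypothetical, not the study's actual variable names.

```python
# Hypothetical sketch of the between-group t test on intention scores.
import pandas as pd
from scipy.stats import ttest_ind

survey = pd.read_csv("tpb_survey.csv")  # one row per participant
t_stat, p_value = ttest_ind(
    survey.loc[survey["group"] == "intervention", "intention"],
    survey.loc[survey["group"] == "control", "intention"],
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```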
Discussion
This will be the first study to evaluate the impact of an educational intervention to improve the quality of decision support that practitioners provide to dying patients around place of care. This reproducible, portable intervention addresses a key policy mandate regarding choice for end-of-life care set out by health providers such as the Ontario Ministry of Health in Canada. This is a pragmatic trial with relatively inclusive entry criteria and we anticipate recruiting participants from across a spectrum of care sectors. As well, because we are bringing the intervention to participants in their home regions, we are able to include participants who may be unable to access centrally held education due to time and distance pressures in their clinical setting. These features will improve the generalizability of the findings.
Expanding our understanding of how practitioners can best support palliative patients' decision making will help to improve the quality of end-of-life care for patients and those who share their lives. If findings from this study show promise, a larger effectiveness trial assessing factors such as cost effectiveness, sustainability, and patient and system outcomes will be undertaken. Results will be disseminated via a brief summary prepared for policy makers, a communication flyer for participants, a technical report for the participating organizations, publication in refereed scientific journals, newsletters of palliative care and relevant clinician associations, presentations at scientific and clinical meetings, and clinical rounds. Findings will be available online through the websites of the Canadian Virtual Hospice, Canadian Institutes of Health Research (CIHR) Family Caregiving New Emerging Team, CIHR End-of-Life Care for Seniors New Emerging Team, and the CIHR/NCIC Strategic Training Program in Palliative Care Research.
Design Limitations
In a call center nurses project using a similar design, contamination was prevented using the following strategies: a) using a private room for simulated calls; and b) requesting that nurses not share or discuss decision support resources or approaches with others. Many of the skills are quite novel (e.g., values clarification) and it is unlikely that skills will improve without the workshop and subsequent practice.
Recruitment response may yield an over-representation of those more motivated to learn and adopt the intervention than the average adopter. However, this should not pose a threat to internal validity. Moreover, involving early adopters is considered a wise strategy in innovation diffusion [71].
Participants will not be blinded to the simulated call and will know their performance is being monitored. However, the use of simulated callers is recognized as a relatively reliable method for assessing professional performance, facilitates a standardized experience across participants, provides a clearer picture of decision support skills in general, is a more accurate measure of current practice compared to self-report or chart audit, and has been used widely [72][73][74][75][76][77].
The relative impact of each component of the intervention cannot be established with this design [40]. Feasibility constraints preclude a study design using a sequenced addition and evaluation of intervention components or assessing long-term sustainability; however, recent studies suggest that evidence-based education strategies may trigger long-term practice change [78,79].
|
2014-10-01T00:00:00.000Z
|
2008-04-30T00:00:00.000
|
{
"year": 2008,
"sha1": "694016c2c5be586408b76b5dbd7fff33d36f3694",
"oa_license": "CCBY",
"oa_url": "https://bmcpalliatcare.biomedcentral.com/track/pdf/10.1186/1472-684X-7-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "694016c2c5be586408b76b5dbd7fff33d36f3694",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|